→ “This is Sticking with Them:” Professor Explores Benefits of Model-Based Learning
Through model-based learning, students use diagrams to think about and reason with systems, and to understand how complex systems interact and change.
“Model-based learning” seems like a reframing of classic teaching practices, but it’s nonetheless a powerful reframing. Emphasizing the model, and encouraging students to test and iterate their models, is catchy. It’s also deliberately organizational: it requires students to organize and structure their thinking about a given system, often visually.
→ There are seemingly myriad terms to describe people who interact with models
There are seemingly myriad terms to describe people who interact with models. Just a few of the terms currently in use include researchers, data scientists, machine learning researchers, machine learning engineers, data engineers, infrastructure engineers, DataOps, DevOps, etc. Miner and Presser both agreed that the work itself existed before any of these terms were assigned to it. Presser defines data engineering as embodying the skills to obtain data, build data stores, manage data flows including ETL, and provide the data to data scientists for analysis. Presser also indicated that data engineers at large enterprise organizations have to be well versed in “cajoling” data from departments that may not, at first glance, provide it. Miner agreed and indicated that there is more thought leadership around the definition of data science than of data engineering, which contributes to the ambiguity within the market. — https://blog.dominodatalab.com/collaboration-data-science-data-engineering-true-false/
→ Van Horn and Perona open with a brilliant one-liner: the world is long-tailed
Van Horn and Perona open with a brilliant one-liner: the world is long-tailed. The diagram above shows analysis from Deep Learning Analytics, the #2 team placing in the iNaturalist 2018 competition. Part of that challenge was how many of the classes to be learned had few data points for training. That condition is much more “real world” than the famed ImageNet – with an average of ~500 instances per class – which helped make “deep learning” a popular phrase. The aforementioned sea change from Lange, Jonas, et al., addresses the problem of reducing data demands. I can make an educated guess that your enterprise ML use cases resemble iNaturalist more than ImageNet, and we need to find ways to produce effective models which don’t require enormous labeled data sets. — https://blog.dominodatalab.com/themes-and-conferences-per-pacoid-episode-2/
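The uniform-vs-long-tailed contrast above can be made concrete with a minimal sketch (not from the linked post): the numbers below are illustrative assumptions, with an ImageNet-style regime of ~500 images per class and a Zipf-like power law standing in for the iNaturalist-style tail.

```python
import numpy as np

# Illustrative assumption: 1,000 classes in both regimes.
n_classes = 1000

# ImageNet-style regime: roughly uniform, ~500 images per class.
uniform_counts = np.full(n_classes, 500)

# Long-tailed regime: class counts follow a Zipf-like power law,
# so a few head classes are huge and most tail classes are tiny.
ranks = np.arange(1, n_classes + 1)
tail_counts = np.round(50_000 / ranks).astype(int)

# Share of classes with fewer than 100 training examples in each regime.
uniform_rare = np.mean(uniform_counts < 100)
tail_rare = np.mean(tail_counts < 100)
print(f"uniform regime: {uniform_rare:.0%} of classes are data-poor")
print(f"long-tailed regime: {tail_rare:.0%} of classes are data-poor")
```

Under these assumed numbers, roughly half the long-tailed classes end up with under 100 examples while none of the uniform ones do, which is the gap that motivates reducing data demands per class.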
→ Interpretability is needed when auxiliary criteria are not met and questions about bias, trust, safety, ethics, and mismatched objectives arise
Interpretability is needed when auxiliary criteria are not met and questions about bias, trust, safety, ethics, and mismatched objectives arise. Kim and Doshi-Velez “argue that the need for interpretability stems from incompleteness in the problem formalization, creating a fundamental barrier to optimization and evaluation”, for example, “incompleteness that produces some kind of unquantified bias”. — https://blog.dominodatalab.com/make-machine-learning-interpretability-rigorous/
→ Learning how to live sustainably in an always-online society is mostly about learning where your limits are, and learning how much connection you can handle before it’s time to withdraw
Learning how to live sustainably in an always-online society is mostly about learning where your limits are, and learning how much connection you can handle before it’s time to withdraw. Knowing when to log off is the main skill to master — and this applies IRL, too, because while it’s easy to understand why you feel drained after random accounts brigade your Twitter mentions, it’s harder to recognize when the people around you become draining themselves. But more often it’s simpler than that: the fact that there’s a society-wide expectation to be constantly available means there’s no escape from the insistent pings and buzzes that accompany human connection, from friends to enemies to lovers and everything in between. And now we have more — and more persistent — friendships than ever, mediated by Facebook and Twitter and Instagram, which means that the alerts come more frequently than ever. The human brain has not evolved as quickly as its technology has; we are not built for this much connection, though we have, by and large, adapted. — https://www.theverge.com/2018/9/2/17805138/finding-silence-online-is-difficult-but-the-pursuit-is-worthwhile
Memorial University of Newfoundland
Helping changemakers change their worlds through systemic design and with innovation, leadership, and changemaking education.