If academia ceases to have an impact it loses its raison d’être. Impact is what differentiates meaningful academic work from mere busywork. It makes the difference between signal and noise.
Ultimately, the questions that concern us [are]: what role does research play in society, and how can we create a research system with impact at its core?
I like this project. Benedikt and Sascha say they’re taking a systemic approach to model the full complexity of academic impact:
academia struggles with creating/measuring/generating impact because it struggles to conceptualise and structurally anticipate it. We are missing a systemic perspective on impact that is grounded in the fact that different forms of meaningful academic work show very different forms of impact.
The work is intended to be semi-open. The authors ask anyone who reads each chapter, released incrementally on Google Docs, to contribute comments; they will then work to incorporate these insights back into the final output.
Abhijit Banerjee and Esther Duflo of M.I.T. and Michael Kremer of Harvard have devoted more than 20 years of economic research to developing new ways to study — and help — the world’s poor. On Monday, their experimental approach to alleviating poverty won them the 2019 Nobel Memorial Prize in Economic Sciences. Dr. Duflo, 46, is the youngest economics laureate ever and the second woman to receive the prize in its half-century history.
Amazing news. Esther Duflo has been a research hero of mine since Cal Newport profiled her as a story of purpose-finding.
In this Wired article, Adam Savage provides a pragmatic description of how he breaks down complex projects using lists.
In my mind, a list is how I describe and understand the mass of a project, its overall size and the weight that it displaces in the world, but the checkbox can also describe the project’s momentum. And momentum is key to finishing anything.
Momentum isn’t just physical, though. It’s mental, and for me it’s also emotional. I gain so much energy from staring at a bunch of colored-in checkboxes on the left side of a list, that I’ve been known to add things I’ve already done to a list, just to have more checkboxes that are dark than are empty. That sense of forward progress keeps me enthusiastically plugging away at rudimentary, monotonous tasks as well as huge projects that seem like they might never end.
I love the physics metaphor here. There are plenty of other insights to be gained by thinking about how work follows physical principles. For instance, projects also have inertia, friction, and surface area:
To return to momentum, though, Adam makes an excellent point: breaking down the work helps keep momentum going even when you put the work down.
That may be the greatest attribute of checkboxes and list making, in fact, because there are going to be easy projects and hard projects. With every project, there are going to be easy days and hard days. Every day, there are going to be problems that seem to solve themselves and problems that kick your ass down the stairs and take your lunch money. Progressing as a maker means always pushing yourself through those momentum-killers. A well-made list can be the wedge you need to get the ball rolling, and checkboxes are the footholds that give you the traction you need to keep pushing that ball, and to build momentum toward the finish.
Another point in the article that’s worth emphasizing:
[I]n a project with any amount of complexity, the early stages won’t look at all like the later stages, and [the manager] wanted to take the pressure off any members of the group who may have thought that quality was the goal in the early stages.
I’ve heard this discussed in the context of critique, or “10% feedback”. When sharing work with others, it’s important to disclose the stage the work is at. Typos should be caught in a project that’s basically ready to publish; they shouldn’t even be discussed when a work is still being conceptualized. The focus in the early stages should be on the concepts themselves, and how they fit within the broader context.
Last thing. This is excellent:
There is a famous Haitian proverb about overcoming obstacles: Beyond mountains, more mountains.
For serious system mapping work, spending [significant] time studying, thinking about, and mapping your system helps ensure you are addressing root causes rather than instituting quick fixes. In the long term, the time and resources you invest in Systems Practice will pay dividends.
But what if you’re not quite sold on the Systems Practice methodology yet? What if you haven’t encountered systems thinking before and just want to dip your toes in? Or what if you’re an expert or an educator with only a few hours to introduce Systems Practice to a fresh new group of systems thinkers?
I have been in the latter situation, and it’s a challenge. In my experience, people who are wholly new to systems thinking can take a lot of time to acclimate to the mindset. But! If, as a teacher, you can’t illustrate the benefits quickly, it’s easy for newcomers to disengage.
So, I’m glad this exists. This is a wonderful new resource from Kumu’s Alex Vipond that helps walk you through systems and Kumu’s tools at the same time.
Land that became too toxic for people to farm and live on after the 2011 meltdown at the Fukushima Dai-ichi Nuclear Power Station will soon be dotted with windmills and solar panels.
The Fukushima disaster unfolded as an incredible story of systemic response to new scales of tragedy. Take, for instance, the Skilled Veterans Corps: a group of elderly volunteers who helped with cleanup, knowing that the damaging radiation would have less impact on their lives than it would on younger volunteers.
Now Fukushima’s next chapter is evolving as an example of systemic creative destruction, as new opportunities are unlocked by the collapse of the region’s previous energy strategy.
“In the five years that we’ve had to assess the effect [the Gigafactory has] had on the workforce, on the community, I think there have been these ramifications that we talk about in the episode that nobody was really prepared for,” Damon said in an interview with The Verge. “Like, we knew there was going to be an issue with housing, which other cities are experiencing, too. But that’s become super critical.”
Side-effects of growth are not a new problem, but the massive initiatives we’re seeing recently might spark new varieties of old issues.
Through model-based learning, students use diagrams as a way to think about and reason with systems—and to think about how complex systems interact and change.
“Model-based learning” seems like a reframing of classic teaching practices, but it’s nonetheless a powerful reframing. Emphasizing the model—and encouraging students to test and iterate their models—is catchy. It’s also deliberately organizational: it requires students to organize and structure their thinking about a given system, often visually.
There is a significant gap in research about Canadian data collection activities on a granular scale. This lack of knowledge regarding data collection practices within Canada hinders the ability of policymakers, civil society organizations, and the private sector to respond appropriately to the challenges and harness unrealized benefits.
So true. This looks like an interesting series from the great team at Brookfield.
Something strange is happening with text messages in the US right now. Overnight, a multitude of people received text messages that appear to have originally been sent on or around Valentine’s Day 2019. These people never received the text messages in the first place; the people who sent the messages had no idea that they had never been received, and they did nothing to attempt to resend them overnight.
It is incredible to think that this could happen at a scale large enough to make headlines now, yet go unnoticed on Valentine’s Day, when the messages were originally sent.
That’s one of the problems with our ever-more-complex technologies: we’re acclimating to the bugs. It gets easier and easier to dismiss weird tech events as glitches and move on without worrying. Unreliability is, itself, unreliable.
But there can be major consequences to seemingly innocent bugs:
… one person said they received a message from an ex-boyfriend who had died; another received messages from a best friend who is now dead. “It was a punch in the gut. Honestly I thought I was dreaming and for a second I thought she was still here,” said one person, who goes by KuribHoe on Twitter, who received the message from their best friend who had died. “The last few months haven’t been easy and just when I thought I was getting some type of closure this just ripped open a new hole.”
Herein, then, lies the tyranny of classification: The borders we draw for ourselves create a prison of thought and collaboration, inhibiting movement, connectivity, and learning.
Dominic Hofstetter outlines the many benefits of categorization, too. We need both specialization and generalization—categories and loose files. The key is developing processes, protocols, and ways of working that elevate the benefits of both.
The ever-refreshing Paul Jarvis shares some uncommon thoughts on productivity in Jocelyn K. Glei’s Hurry Slowly podcast.
In particular, Paul and Jocelyn discuss the importance of resilience. Citing research and his own experience, Paul points out that resilience is a more important factor in success than many others.
Obviously, though, enabling resilience is not as easy as simply pointing out how important it is. As they discuss, resilience isn’t something innate—which means that it can only be developed through experience. And this is where things get tricky: who gets to have resilience-building experiences?
In my research on innovation skills, I found that resilience is one of three key domains that are not treated as important outcomes by our public education systems. This means that resilience training isn’t provided as a public good. Only if you’re lucky (or privileged) will you have the chance to build up your resilience muscle.
Incredible achievement, but it makes me wonder—what are the 0.2% of humans doing differently?
These stories of AI achievement are sure to proliferate in the coming years. By focusing on those people who are still able to think around machine learning strategies, we might learn something about how humans and machines can best complement each other.
From the Future Today Institute’s recent release.
The bias-on-capture issue is a particularly nuanced problem. How can we know if the capta is corrupt? Scrutiny of the capture source, perhaps?