Every part of this trial sounds made up. They should just air it in lieu of a Good Fight episode. Elizabeth Lopatto’s writeup is worth worshipping.
Spiro then coined the worst acronym I’ve heard in years, and I edit stories about aerospace so I know from bad acronyms. It is: JDART, for joking, deleted, apologized-for, responsive tweets.
But there’s at least one abstract takeaway that’s interesting to me:
At this point, Wood tried to enter an email exchange into evidence, resulting in a great deal of confusion on Judge Wilson’s part about how email reply chains work. (You read from the bottom.)
At this point, the “pedo guy” Twitter thread was entered into evidence, and the befuddled court had to be told that the reply chains work the other way on Twitter — the first tweet is at the top, and the last tweet is at the bottom.
Yet another example of the ways in which the world’s accelerating faster than many institutions can keep up.
Anyone, anywhere can propose an idea. YouTube creators will help spread the word, and the best proposals could be put into motion with the help of businesses, policymakers, and celebrities supporting the initiative.
The initiative will culminate in a summit in Bergen, Norway next October to share the solutions that came out of the effort. Countdown will work with a panel of experts and scientists to vet proposals, and the strongest will be turned into TED talks. The talks will be filmed at the summit in Norway, in front of “a hand-picked audience capable of turning those ideas into action,” according to a press release.
An interesting partnership, and yet another example of “crowdsolving”: trying to find solutions to wicked problems via the mobilizing power of the Internet.
In this research article, the authors point out that the cycles of translation from English to the language of the context and back again can be costly and inconvenient. Still, they identify three benefits to investing in translation and multi-lingual research spaces.
First, the authors argue that disseminating the results of research in local languages not only makes your research accessible to stakeholders, but it also helps stakeholders value all research more. They write:
Translations are expensive and time-consuming, so a large part of our work stays in English unavailable to the local stakeholders, who may have participated in the research process. This is an issue not only because it reduces their possibilities to learn from the systematized outcomes of the processes in which they participate, but because it reduces their perception of the value of research. When stakeholders feel that researchers write exclusively for other foreign researchers, their readiness to support and fund research may decrease.
The second benefit:
Second, academics who don’t read English may find it difficult to continue building on knowledge published only in that language.
This takeaway is obvious. So many publications are translated to English, but the reverse is rare.
Third, and by no means least, naming complex issues or ideas only in English impoverishes other languages. When we forsake finding a word for a particular concept or idea in a given language, we impoverish that language.
This is quite insightful. Language is intrinsic to organizational learning. If the concepts advanced in our research are never introduced to the local language, then it may be impossible for that learning to take root.
The authors recognize a fascinating tension in this work. They demonstrated the possibility of multi-language research spaces via a virtual research commons for their project.
What we have learned from working with different languages and acknowledging them during the full research cycle, including the dissemination stage, is that they are time-consuming, costly and even a bit messy and uncomfortable. For example, in the case of the virtual space above, some participants complained that having to find their own language among texts written in other languages begs an extra effort from them and slows them down. However, the alternative is renouncing inclusion and plurality, which is at odds with the challenge faced by academia to address complex societal problems.
There is a cost to complexity, but solution spaces need to be more complex than the problems they’re resolving.1
Chinaʼs 2017 decision to turn away Americaʼs trash has left the recycling industry reeling as it figures out what to do with all the packaging online shoppers leave behind.
Recycling is a funny thing. For me, it’s almost a guilt-free act. “Sure, I’m using all of these boxes, but they’re recycled, so who cares?” But increasingly recycling and the trash bin seem like equivalent destinations. It’s even imaginable that recycling is worse, because recycled objects might travel farther before being dumped into a landfill anyway.
“Itʼs very difficult for American material recovery facilities to satisfy that standard because Americans put plastic bags and chewing gum and bowling balls and dirty diapers and everything else you can imagine into the recycling containers,” Biderman says. The strict rules also apply to plastic and other recyclables, but cardboard and mixed paper have seen the sharpest drops in prices.
I’m tempted to blame people: “It’s too bad we can’t be more considerate. Have you ever looked in the recycling bins in public receptacles?” Et cetera. But really, we should be designing systems that make this easy—or incentivize good behaviours somehow. Either way, the current situation is insufficient:
There has also been a noticeable shift in the source of the cardboard, says Coupland: itʼs coming from peoplesʼ homes instead of brick-and-mortar businesses. Thatʼs bad news, since retailers are less likely to generate cardboard thatʼs too filthy to be recycled. Consumersʼ cardboard boxes are often mixed with other, dirty recyclables like ketchup bottles or soda cans that spill their contents over the cardboard. On average, about 25 to 30 percent of the materials picked up by a recycling truck are too contaminated to go anywhere but a landfill or incinerator, Coupland says.
Today former Secretary of State John Kerry and former California governor Arnold Schwarzenegger declared war on climate change. The two led an all-star cast of lawmakers and celebrities to launch an initiative called World War Zero, which aims to get individuals, businesses, and governments to drastically slash greenhouse gas emissions. The initiative, for now, boasts a lot of glitzy names without many details on how it will achieve its goal. Its bipartisan founding members — which include Bill and Hillary Clinton, Richard Branson, Jimmy Fallon, Cindy McCain, and Al Sharpton, and more than 70 other notable names — plan to hold 10 million “climate conversations” in 2020, The New York Times reported over the weekend.
Seems like an incredible effort. And it’s an excellent angle. “War”—when declared by major public figures—certainly catches the public attention.
Kerry compared the urgency of climate change to the challenges facing America during World War II. “When America was attacked in World War II we set aside our differences, united and mobilized to face down our common enemy,” Kerry said in a statement. “We are launching World War Zero to bring that spirit of unity, common purpose, and urgency back to the world today to fight the great threat of our time.”
Existing options for algorithmic evaluation of the similarity of documents depend on shallow measures: does this word seem important? What words is it used with? How frequent are they? Which is why this is cool—in this paper, the authors compare the language in a given document with broader knowledge of words and their synonyms:
In this paper, the Frequency Google Tri-gram Measure is proposed to assess similarity between documents based on the frequencies of terms in the compared documents as well as the Google n-gram corpus as an additional semantic similarity source.
And it works!
The experimental results demonstrate that the proposed measure improves significantly the quality of document clustering, based on statistical tests. We further demonstrate that clustering results combining bag-of-words and semantic similarity are superior to those obtained with either approach independently
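The shallow baseline the authors improve upon can be sketched in a few lines. This is not the paper's Frequency Google Tri-gram Measure; it's a toy illustration in which a hand-made synonym table stands in for the semantic relatedness that the real measure derives from the Google n-gram corpus:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical synonym table; the actual measure learns relatedness
# from corpus statistics rather than a fixed lookup.
SYNONYMS = {"automobile": "car", "fast": "quick"}

def normalize(tokens):
    # Map each token to a canonical representative before counting,
    # so "car" and "automobile" land in the same vector dimension.
    return Counter(SYNONYMS.get(t, t) for t in tokens)

doc1 = "the quick car passed".split()
doc2 = "the fast automobile passed".split()

bow_only = cosine(Counter(doc1), Counter(doc2))   # 0.5: only "the", "passed" overlap
semantic = cosine(normalize(doc1), normalize(doc2))  # 1.0 after synonym folding
print(bow_only, semantic)
```

The gap between the two scores is the paper's point: pure bag-of-words misses documents that say the same thing in different words, which is why folding in an external semantic source improves clustering.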
One issue identified on an unnamed carrierʼs implementation could allow any app on your phone to download your RCS configuration file, for example, giving the app your username and password and allowing it to access all your voice calls and text messages. In another case, the six-digit code a carrier uses to verify a userʼs identity was vulnerable to being guessed through brute force by a third-party. These problems were found after researchers analyzed a sample of SIM cards from several different carriers.
RCS is supposed to be a big deal. It’s fascinating how these system-wide policies can be messed up in microsystem implementations.
This report is intentionally broad and robust. We have included a list of adjacent uncertainties, a detailed analysis of 315 tech trends, a collection of weak signals for 2020, and more than four dozen scenarios describing plausible near futures.
Impressive work. I particularly like the CIPHER heuristic they use in analyzing signals: contradictions, inflections, practices, hacks, extremes, rarities.
Medical crowdsourcing offers hope to patients who suffer from complex health conditions that are difficult to diagnose. Such crowdsourcing platforms empower patients to harness the “wisdom of the crowd” by providing access to a vast pool of diverse medical knowledge.
An interesting application of crowdsourcing. What’s the incentive for healthcare providers to participate, though? I’m not sure doctors can bill for participation in Figure 1. I think the main reason they engage at all is curiosity, and that would likely degrade if, as the authors of the linked study discuss, there was a lot of “noise” from uninteresting posts by patients who aren’t medically literate.
Today, we’re excited to formally launch the final version of OPSI’s AI primer: Hello, World: Artificial Intelligence and its Use in the Public Sector
Another interesting output from the OPSI. It seems usefully pragmatic:
The AI primer is broken up into four chapters that seek to achieve three key aims: (1) Background and technical explainer; (2) overview of the public sector landscape; (3) implications and guidance for governments.
We find that high-, medium-, and low-engagement-state gamers respond differently to motivations, such as feelings of effectance and need for challenge. In the second stage, we use the results from the first stage to develop a matching algorithm that learns (infers) the gamer’s current engagement state “on the fly” and exploits that learning to match the gamer to a round to maximize game-play. Our algorithm increases gamer game-play volume and frequency by 4%–8% conservatively, leading to economically significant revenue gains for the company.
As ever with this kind of mechanism, are we sure we want this to exist..? The potential is no doubt powerful. Imagine interactive TV shows that modulate what they’re presenting based on readings of the viewer… Hrm.
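The two-stage approach the quote describes could be caricatured in a few lines. To be clear, the paper's actual model is an econometric estimation; the thresholds, state labels, and difficulty table below are all invented for illustration:

```python
from statistics import mean

def engagement_state(recent_sessions: list[float]) -> str:
    # Infer the gamer's current state from average recent play time
    # in minutes (hypothetical cutoffs).
    avg = mean(recent_sessions)
    if avg >= 30:
        return "high"
    if avg >= 10:
        return "medium"
    return "low"

# Match each inferred state to a round difficulty, reflecting the
# finding that states respond differently to challenge.
ROUND_FOR_STATE = {"high": "hard", "medium": "moderate", "low": "easy"}

def match_round(recent_sessions: list[float]) -> str:
    return ROUND_FOR_STATE[engagement_state(recent_sessions)]

print(match_round([45, 38, 50]))  # "hard"
print(match_round([5, 3, 8]))     # "easy"
```

Even this crude version shows why the mechanism is unsettling: the system is continuously reading the player and steering them toward whatever keeps them playing.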
The halo effect is essentially how positive—but irrelevant—traits influence our perception of what the thing with the halo actually says or does. These authors explored how charities manifest the halo effect on their websites, and find evidence for four varieties of halo effect.
this study employs charity websites as a multi-attribute donation channel consisting of three attributes of information content quality (mission information, financial information, and donation information) and four attributes of system quality (navigability, download speed, visual aesthetics, and security). Based on the proposed framework, this study proposes four types of halos that are relevant to charity website evaluation — collective halo (attribute-to-attribute), aesthetics halo (attribute-to-dimension), reciprocal-quality halo (dimension-to-dimension), and quality halo (dimension-to-dimension)
One of the core issues of the talk is innovation doubt—the “if it ain’t broke, don’t fix it” mentality. To paraphrase Piret:
[…] why are we doing innovation at all? Maybe sometimes things are working fine, why do we think about innovation at all? We start off with four questions:
- Do you want to do things better?
- Do you have goals and purposes to fulfill?
- Do you want to address the needs of your stakeholders?
- Do you want to prepare for the risks and uncertainties that the future holds?

If you answered “yes” to at least one of those questions, then your job is to do innovation—your job is to be a changemaker.
Also, the talk includes a neat model for different varieties of innovation, image courtesy of this post by Adrian M. Senn over on Medium:
In the next three years, as many as 120 million workers in the world’s 12 largest economies may need to be retrained or reskilled as a result of Artificial Intelligence (AI) and intelligent automation.
cf. Lee Se-Dol.
This is according to the latest IBM Institute for Business Value (IBV) study, titled The Enterprise Guide to Closing the Skills Gap.
Seems like an interesting guide. This metric surprised me:
In 2014, it took three days on average to close a capability gap through training in the enterprise. In 2018, it took 36 days.
I didn’t know this measure existed, but I can see the utility. As knowledge work grows ever more specialized, this time-to-capability can only grow.
In two senses, the work of innovation for public value and social impact is changing in Australia and around the world. What we expect public innovation to do and what we need it to achieve, and how that work should be done, are both changing. And they are changing together while they are changing each other.
It’s true. It’s hard to keep up with the discipline of changemaking, but it’s even harder to keep up with the change that needs to be made. Therefore Martin Stewart-Weeks calls for optimism:
Despite some of the uncomfortable and unsettled conditions, there is real energy in the search for more effective ways to solve the big problems we face in common — managing our complex cities, rewiring large and complex health and social care systems, tackling climate change, searching for better ways to integrate the human and technology capabilities of the digital age and making our communities healthy and resilient.
The speed, intensity and sheer connectedness of these and many other complex, public challenges are giving rise to new methods and tools that can help to tackle them with purpose and skill.
The South Korean Go champion Lee Se-dol has retired from professional play, telling Yonhap news agency that his decision was motivated by the ascendancy of AI. “With the debut of AI in Go games, Iʼve realized that Iʼm not at the top even if I become the number one through frantic efforts,” Lee told Yonhap. “Even if I become the number one, there is an entity that cannot be defeated.”
Wow. Perhaps the first real example of “AI took my job”?
The Twttr prototype app gave me another feedback form today. It’s been my habit to complain, at every opportunity, about the trends page you have to engage with whenever you go to the Search tab. I feel a little bad for the designers and developers, because the beta is really all about how conversations on Twitter look and feel. Still, this feedback form was no different. Here’s what I wrote in the “Dislike” section: I wish I could control the trends page.
It is the absolute worst part of my Twitter experience. It just feels… unhealthy. Like going through a grocery store magazine aisle. Sure, some of the headings are instructive or inspiring, but many are gross, irrelevant, or completely malignant gossip.
The experience is also invasive. Because trends are forced upon you when you intend to search for something specific, and because they’re algorithmically tuned to be as attention-grabbing as possible, it’s easy to be distracted and forget why you even entered the search pane. I never explicitly consent to learning about celebrity gossip or US politics when I use Twitter. If I tap on some of those topics, it’s not because I want to. It’s because it’s malicious clickbait. In turn, it’s corrupt to design an experience that drags the user through it repeatedly.
Sure, this content is viral. But shouldn’t we be inoculating against viruses, not encouraging them to spread?
An incredible story out of New York today, as reported by The Verge:
A flooded subway entrance stopped Brooklyn commuters in their tracks yesterday. For four hours on Wednesday, the staircase leading down to Broadway Station in Williamsburg was blocked off and completely submerged. The sight was even stranger since it hadnʼt rained in New York City that day.
The Transit Authority was testing adaptations they’d installed in case of real flooding. Still, I’m sure that the social/informational impact was felt, too.
Also, the MTA’s sarcastic explanation is gold. From Twitter:
We’re pivoting to submarines. ^JLP
I have a foreboding of an America in my children’s or grandchildren’s time—when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness…
Carl Sagan, as quoted by @Andromeda321 in this interesting Reddit thread on the regretful trends of the 2010s.
The thread discusses the growth of anti-intellectualism and conspiracy theories. I’m reminded of this timeless Medium post about how hating Ross in Friends became a meme in and of itself, reinforcing the persecution of science in the ’90s. From David Hopkins:
I want to discuss a popular TV show my wife and I have been binge-watching on Netflix. It’s the story of a family man, a man of science, a genius who fell in with the wrong crowd. He slowly descends into madness and desperation, led by his own egotism. With one mishap after another, he becomes a monster. I’m talking, of course, about Friends and its tragic hero, Ross Geller.
If you remember the 1990s and early 2000s, and you lived near a television set, then you remember Friends. Friends was the Thursday night primetime, “must-see-TV” event that featured the most likable ensemble ever assembled by a casting agent: all young, all middle class, all white, all straight, all attractive (but approachable), all morally and politically bland, and all equipped with easily digestible personas. Joey is the goofball. Chandler is the sarcastic one. Monica is obsessive-compulsive. Phoebe is the hippie. Rachel, hell, I don’t know, Rachel likes to shop. Then there was Ross. Ross was the intellectual and the romantic.
Eventually, the Friends audience — roughly 52.5 million people — turned on Ross. But the characters of the show were pitted against him from the beginning (consider episode 1, when Joey says of Ross: “This guy says hello, I wanna kill myself.”) In fact, any time Ross would say anything — about his interests, his studies, his ideas — whenever he was mid-sentence, one of his “friends” was sure to groan and say how boring Ross was, how stupid it is to be smart, and that nobody cares. Cue the laughter of the live studio audience. This gag went on, pretty much every episode, for 10 seasons. Can you blame Ross for going crazy?
People in the Reddit thread point out that these seemingly recent trends have been taking root for a long time. While this is true, it’s also true that (just like seemingly everything else) these phenomena have been moving much faster and growing much larger in recent years. Which leads to a curious tangent: how do accelerated scales of change play on our biases? Does the interaction between these biases and our accelerated experiences change our perception of the world?
The over- and misuse of AI is one of my biggest tech pet peeves. It truly is evil to tack the AI term onto the description of most products. It also damages the long-term potential of AI by corrupting what it means—especially for the everyday people who aren’t involved or invested in building these tools, but who will use them (or be used by them).
Arvind Narayanan on Twitter:
Much of what’s being sold as “AI” today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this happening, how we can recognize flawed AI claims, and push back. Here are my annotated slides: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
Key point #1: AI is an umbrella term for a set of loosely related technologies. Some of those technologies have made genuine, remarkable, and widely-publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.
Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.
Key point #3: transparent, manual scoring rules for risk prediction can be a good thing! Traffic violators get points on their licenses and those who accumulate too many points are deemed too risky to drive. In contrast, using “AI” to suspend people’s licenses would be dystopian.
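Narayanan’s third point is easy to make concrete. A transparent scoring rule is just arithmetic anyone can audit; the point values and threshold below are invented for illustration, as real schedules vary by jurisdiction:

```python
# Hypothetical point schedule; real values differ by jurisdiction.
POINTS = {"speeding": 3, "running_red_light": 4, "reckless_driving": 6}
SUSPENSION_THRESHOLD = 12

def license_points(violations: list[str]) -> int:
    # Transparent: anyone can recompute exactly how the total was reached.
    return sum(POINTS[v] for v in violations)

def suspended(violations: list[str]) -> bool:
    return license_points(violations) >= SUSPENSION_THRESHOLD

print(suspended(["speeding", "speeding"]))                  # False: 6 points
print(suspended(["reckless_driving", "reckless_driving"]))  # True: 12 points
```

The contrast with an opaque model is the whole argument: a driver can see and contest every line of this rule, whereas a black-box “AI” risk score offers nothing to appeal against.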