Fulcra

Made by Ryan J.A. Murphy using Quartz, © 2022


Tags: highlights, tech, AI

Link: https://twitter.com/random_walker/status/1196870349574623232

# How to recognize AI snake oil

The over- and misuse of "AI" is one of my biggest tech pet peeves. It truly is evil to tack the term onto the description of most products. It also damages the long-term potential of AI by corrupting what the term means, especially for the everyday people who aren't involved or invested in building these tools but who will use them (or be used by them).

Arvind Narayanan on Twitter:

Much of what’s being sold as “AI” today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and push back. Here are my annotated slides: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf

Key point #1: AI is an umbrella term for a set of loosely related technologies. Some of those technologies have made genuine, remarkable, and widely-publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.

Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.

Key point #3: Transparent, manual scoring rules for risk prediction can be a good thing! Traffic violators get points on their licenses, and those who accumulate too many points are deemed too risky to drive. In contrast, using “AI” to suspend people’s licenses would be dystopian.

(Slide: Harms of AI for predicting social outcomes)
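A transparent scoring rule like the traffic-points system can be written down in a few lines, which is exactly what makes it auditable and contestable. Here's a minimal sketch; the violation names, point values, and suspension threshold are all hypothetical, not taken from any real jurisdiction:

```python
# Hypothetical point values per violation type (illustrative only).
POINTS = {
    "speeding": 3,
    "running_red_light": 4,
    "illegal_parking": 1,
}

# Hypothetical cutoff: accumulate this many points and you're deemed too risky.
SUSPENSION_THRESHOLD = 10

def license_points(violations):
    """Total points for a list of violation names."""
    return sum(POINTS[v] for v in violations)

def is_suspended(violations):
    """True once accumulated points reach the threshold."""
    return license_points(violations) >= SUSPENSION_THRESHOLD

print(license_points(["speeding", "running_red_light"]))  # 7
print(is_suspended(["speeding"] * 4))                     # True: 12 points
```

The contrast with an opaque "AI" risk score is the whole point: here, anyone can see exactly why a license was suspended and argue that a point value or the threshold is wrong.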

Check out the whole thread.