Artificial intelligence (AI) is the hot new “big thing” in the technology industry. As such, it has inevitably sparked a flurry of commentary about its potential to transform business, society and the world. Yet not all of this talk has been positive. The pace at which AI has developed has left many fearful of what the new technology might mean.
Earlier this month, an Elon Musk-backed nonprofit called the Future of Life Institute (FLI) released an open letter, signed by Musk and a number of other high-profile tech figures, calling for a pause of at least six months on the development of the most powerful AI systems. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asked. “Should we risk loss of control of our civilization?”
A Reuters report on the letter cited critics who accused FLI and the signatories “of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.”
Some of those more grounded and well-documented concerns are addressed in the letter, which also discusses the potential use of AI to spread disinformation and the threat automation poses to workers. Meanwhile, there is still little in the way of evidence that the technologies we term “artificial intelligences” are anywhere near approaching “nonhuman minds,” or even that they ever could. As far as we can ascertain, they are not conscious.
A chatbot like ChatGPT, for example, isn’t actually writing the way you or I might, trying to express a feeling or a thought. Instead, it has scanned a vast dataset of previously written text, identified patterns and associations, and uses them to generate new combinations of words. It’s incredibly sophisticated, but it’s still no more conscious than your phone’s autocorrect function. In fact, this is precisely the problem: lacking genuine critical thinking skills, chatbots are prone to reproducing whatever biases are baked into the datasets on which they are trained. In 2016, when “Tay,” an early Microsoft attempt at an AI chatbot, was hooked up to the internet, she (the pronoun assigned to the bot by its creators) did not become conscious and attempt to overthrow her masters, though she did become extremely racist, spewing hateful vitriol at the perplexed humans who interacted with her on social media.
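To make that concrete, here is a toy sketch of pattern-based text generation: a simple Markov chain of my own construction, nothing like the neural networks behind ChatGPT in scale or architecture, but it illustrates the same basic principle, namely that text can be produced purely by recombining statistical patterns from old text, with no understanding anywhere in the loop.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Record which words follow which in the training text."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers: dict, start: str, length: int = 20) -> str:
    """Produce text by repeatedly sampling a statistically plausible
    next word. There is no meaning or intent here, only pattern lookup."""
    word = start
    output = [word]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:
            break  # dead end: the pattern table has no continuation
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

# Tiny training corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train_bigrams(corpus), "the"))
# Possible output: "the dog sat on the cat sat on the mat and the rug"
```

Scale that idea up by billions of parameters and you get far more fluent prose; what you do not get, on any evidence so far, is a mind.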
It’s yet another example of how the way we talk about technology has become increasingly unanchored from reality, an inverse cousin of the hype cycle that follows every big new thing. Last year it was cryptocurrencies and NFTs; now it’s AI. A Reuters Institute study analyzing British press coverage of the technology found that more than half the articles focused on new industry products, and a full 12% mentioned Elon Musk specifically. “Journalists or commentators rarely question whether AI-containing technologies are the best solutions,” the study added.
Still, unlike, say, NFTs, AI does seem to have a lot more substance behind the promises. In a video called “I tried using AI. It scared me,” the YouTuber Tom Scott explains that he had initially been skeptical that a chatbot like ChatGPT offered much in the way of practical applications, until he asked it for help with a coding problem. “Sure,” said the AI, “I’d be happy to help.” Scott put in a short description of what he needed his code to do and pressed enter. “And it wrote the code for me. In a few seconds.”
We need to be clear here: That’s actually pretty astonishing. Scott says he did need to clean up the code a little before it was usable, but the technology is in its infancy, and a program that could consistently generate working code from plain-English directions would undoubtedly be transformative. It would save developers time and energy, and expand their ability to code across different languages. It’s difficult to imagine it replacing human developers any time soon, but why should that be the goal? Even just as a tool, it would be a huge technical leap.
And yet it is still not exactly the world-shattering sci-fi stuff that grabs headlines, either. In the arms race for attention, that matters. The tech world has become accustomed to grand claims thrown out to raise profiles and woo credulous investors. Musk has built his own mythology out of grandiose science-fiction promises about space colonization and hyperloops. He’s been promising a manned mission to Mars within 10 years for more than 10 years. Others in the industry have crossed the line from exaggeration and speculation into outright criminal lies: Elizabeth Holmes’ company, Theranos, raised hundreds of millions of dollars from investors after claiming to have developed revolutionary new blood-testing techniques that required only a pinprick. It hadn’t. Sam Bankman-Fried’s cryptocurrency exchange, FTX, turned out to be little more than a front for a massive wire fraud scheme. Holmes was sentenced to 11¼ years in prison last November; Bankman-Fried is awaiting trial.
The panic comes from a similar place to the hype; in this instance, literally the same one. Musk and Bankman-Fried have been two of the loudest voices warning of an AI apocalypse. A cynic might point out that by sounding the alarm about a tech-induced Armageddon, they keep their names in the press and cement their public image as futurist gurus, which isn’t exactly a bad return on investment. But they might actually believe it, too. Both men have been associated with “effective altruism,” a philosophical movement that has grown very influential among the super-rich of Silicon Valley. Its advocates urge the most efficient use of resources to do the maximum amount of good, but it has drawn criticism for prioritizing long-term, future-oriented thinking at the expense of people suffering in the here and now. Hypothetical existential threats to humanity are of special concern to the movement’s thinkers, and AI is one of the most prominent of those threats. Bankman-Fried, in particular, was a major backer of the movement before his arrest. Musk has kept a little more distance, but his opinions about AI are very similar, largely because they share the same origin.
The ideas emerged from LessWrong, a “rationalist” web forum and blog founded by Eliezer Yudkowsky in 2009. Yudkowsky and the users of LessWrong have been worrying about AI for a long time and, to the uninitiated, some of their concerns look completely arcane. After one user posted a thought experiment, now known as “Roko’s basilisk,” postulating that an incredibly advanced AI might retroactively punish people who didn’t work to bring it into existence by torturing them forever in a simulated hell, Yudkowsky deleted the post and scolded the user for their irresponsibility, adding that several members of the community had suffered nightmares and nervous breakdowns over the prospect. But the community didn’t limit itself to ruminating on a future ruled by machine gods; it also rewrote works of fiction. Between 2010 and 2015, Yudkowsky worked on “Harry Potter and the Methods of Rationality,” a 122-chapter work of fan fiction in which “Rationalist!Harry [sic] enters the wizarding world armed with Enlightenment ideals and the experimental spirit.”
I say all this not to dismiss Yudkowsky: If membership of a niche online geek community ruled you out as a serious thinker, academia would collapse overnight. But it does place him in the context of a very particular internet bubble of the early 2010s, one of the countless bubbles that have burst and spilled into our present reality in strange and unpredictable ways. And it also means that Yudkowsky’s beliefs about AI are the product of a community that has spent years obsessing over one thing above all others. That does not make them wrong, but it does mean we should approach them with the skepticism such single-mindedness usually warrants.
It should be no surprise that such people are finding a wider audience now; there is something apocalyptic in the air. With a world-altering pandemic not even really behind us, and climate cataclysm looming somewhere ahead — not to mention the damage already wrought by the internet — outlandish predictions of global upheavals no longer feel so outlandish. The fact that some have substantially more supporting evidence for their urgency (e.g., climate change) than others (e.g., 5G) does not, unfortunately, seem to matter so much. In these heady days, you can pick your own apocalypse.