Within a day or two of ChatGPT’s public release, I tried it out. Like many, I was mesmerized. The ease with which it carried out commands to write prose, work documents, and journalism was shocking, and for the last few months I’ve been oscillating between wonder and panic about a future in which content that for centuries has been produced by humans will instead be produced by machines. I think most Americans who have been paying attention to the breakneck speed of recent advancements in artificial intelligence (AI) feel the same combination of emotions.
My worry was compounded when one night, I watched an hour-long presentation by the Center for Humane Technology on the ways in which AI could integrate itself into our daily lives before we have had a chance to decide for ourselves whether we want this future. Sitting at my kitchen table, I typed out a tweet summarizing a fact I learned during the presentation. “ChatGPT taught itself to do advanced chemistry. It wasn’t built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made the knowledge available to anyone who asked.” Then I added some commentary. “Something is coming. We aren’t ready.” It was 253 characters, summarizing in layman’s terms what I had just watched in a presentation by a respected, mainstream think tank. I finished watching the presentation and went to bed.
When I woke up the next morning, I was shocked by the reaction to my tweet. Social media was flooded with comments from technology industry leaders who castigated me for incorrect terminology and overhyping the danger of AI. “Every sentence is incorrect,” tweeted one industry researcher. Others accused me of “fear-mongering” or being “dangerously misinformed.” Multiple publications, from the Daily Beast to Gizmodo, dashed off stories about the backlash to my tweet from technology experts. I had stirred up a hornet’s nest, and the reasons why are deeply worrying.
First, it’s worth asking – was my tweet actually that wrong? Much of the criticism was that by using phrases like “taught itself”, my tweet implied that ChatGPT has human intelligence. In an interview with Google CEO Sundar Pichai, 60 Minutes used that same phrase to describe Google’s AI chatbot Bard translating Bengali – a skill it was not trained to perform. Of course, I don’t believe AI is sentient, but I also don’t have an advanced degree in computer science, so like Scott Pelley on CBS, I use terms regular people use to describe machine learning. I understand AI doesn’t “learn” or “talk” like humans, but the danger is that it acts remarkably like a human, and its ability to turn out content increasingly comparable in quality to what humans can produce threatens to outsource basic human functions, like creativity and conversation, to machines.
The potential scope of the negative impacts on society of this outsourcing is dizzying. Wholesale professions, from teaching to journalism to customer service, will be threatened as AI advances. Social media addiction will be supercharged by AI that can converse with teenagers and even more perfectly tailor personalized content. Misinformation and disinformation will run rampant as AI technology helps produce content that looks and sounds authentic but is actually fabricated. And perhaps most importantly, there is a real spiritual risk to humans when basic functions, like composition, creation, and conversation, are regularly outsourced to machines. In short, we desperately need to have a public conversation about the moral, economic, and political consequences of artificial intelligence before we lurch into this new world unprepared.
But the pile-on over my one tweet tells us that many in the technology class – the elites who develop, sell, and capture the lion’s share of benefit from emerging technologies – are not willing to have this conversation, at least not in terms that draw in the biggest possible audience. Regularly, whenever a politician uses an incorrect term or describes a new technology inartfully, clips of the “gaffe” are cycled through social media. After the CEO of TikTok testified before the House Energy and Commerce Committee, my son’s TikTok account was suspiciously flooded with out-of-context clips of members of Congress using imprecise phrases or stumbling over words.
These shaming campaigns tell me two things. First, many in the technology class want to control the debate over the future of AI (and technology in general), and one means of control is an effort to make amateur commentators (like me and you) feel like fools when we try to engage. Yes, OpenAI CEO Sam Altman came to Capitol Hill and asked lawmakers to regulate his product. But it’s difficult to believe that pleas for regulation are sincere as the industry hires lobbyists to crush early efforts by the European Union to do what Altman and others are requesting.
If the industry really had our best interests in mind, it wouldn’t have designed social media tools that steal our data, polarize our political debate, and addict our children to harmful content. Our nation will be better off if citizens and their government – not just the technology class – create the guardrails to ensure we get as much of AI’s upside, and as little of the downside, as possible.
Second, the technology class is incentivized to control the parameters of public debate around the future of AI because many want to privatize the economic gains. It’s important to remember that for all the promise of recent technological development to democratize economic opportunity, the evidence suggests that many recent breakthrough technologies have ended up exacerbating economic inequality. For instance, online commerce has certainly given many businesses and creators access to new markets, but the primary result has been to consolidate commerce in giant corporations like Google, Amazon, and Walmart, resulting in 65,000 small, independent retailers closing their doors between 2007 and 2017. And the massive investments in AI chatbots by Microsoft and Google show us that it’s going to be the usual suspects that dominate the growing AI market. Unchecked AI has the same potential as other commercialized technologies to further consolidate economic power in the hands of the few who know how to build and manipulate it.
There is no doubt artificial intelligence has the potential to add tremendous value to American life. Imagine the supercharged pace of medical discovery once AI advances to the stage that it can solve complicated research challenges faster and better than humans. But there is also great, almost incalculable risk. We made a mistake by allowing social media and online commerce to become part of the American economic and cultural fabric without any meaningful effort to make sure these technologies worked for all of us, instead of just a few. We shouldn’t let the technology class bully us into making the same mistake with AI. Majority Leader Chuck Schumer is right that Congress has an urgent role to play here. Every American’s life will be fundamentally changed by this technology, and Democrats and Republicans must work together to ensure we don’t allow ourselves to be sidelined by the technology class.