86 Comments
Vivien Blackford:

Chris Murphy is honest, patriotic, brilliant, tough, and clever enough to lead our troubled country well. Murphy for president in 2028! He wouldn’t be in it for himself…

Ranita Shows:

Ditto

Elaine Vogelstein:

Yes! And we must protect the American soul, the American spirit! We must learn who we are, first individually: that we are not just a material entity, that we have been gifted with a mind, a spirit, some may call it a soul. This spirit part of us is vast. We must not be the slaves of pure materiality, but realize that our spirit is great, it is big, it can hold everything! And this spirit in each of us is called upon to hold one another! I do not fear AI. I fear the bad actors who will misuse AI.

Gail T:

So, so, so true!!!!!

Anne Gunn:

The first time I heard "our goal is to maximize return for our investors" (Economist Milton Friedman, I believe), I almost threw up. I believe Professor Friedman was given the Nobel Prize in Economics for the work that produced this statement. Once his idea was embraced, America changed -- and not for the good or benefit of its workers.

Companies were bought and sold like baseball trading cards; people lost their jobs -- and those with enough savvy and the right connections began to make larger piles of money. Those who still had jobs were moved from pension plans to 401(k)s. Some of those workers benefitted from comfortable retirements. Those in the work force, as well as those entering it in the next ten years, will find fewer jobs while they drag along their astronomical student debt -- a post for another day.

Shame on America for allowing this boondoggle to be perpetrated on the American people. No wonder we're afraid to tax those who have benefitted the most from this travesty. Money is power and our politicians run for cover for fear of not being reelected.

All incomes should be taxed -- period.

WinstonSmithLondonOceania:

Absolutely. Even worse, the same fans of Friedman (I call them "Friedmanites") are also fans of Ayn Rand, who wasn't even an economist. They're all about laissez-faire, zero-sum, "winner take all" capitalism. Just about everyone on the Forbes top 100 list. And they're all crooks. That's stolen wealth, partially through wage theft, and mostly through playing the Wall St. shell game -- now you see it, now you don't.

I say it's past time for a UBI! Tax the billionaire oligarchs!

Patricia Jaeger:

I agree with your thesis, and I'd like to add: which US industry developed without the need for regulations? How many industries polluted (and continue to pollute) our air and water? (Musk's data center is causing a lot of pollution and harm.) We learned early on, when automobiles were invented and manufactured, that we needed a lot of regulations for safety and environmental reasons. Look at the food manufacturing industry and the pharmaceutical industry. Look at what happened with Boeing's airplanes when it insisted on more self-regulation. Unfortunately, it's imperative that the government, at all levels, regulate human greed.

WinstonSmithLondonOceania:

The C-Suite class wants to put the fox in charge of the henhouse. Boeing was great as long as engineers were in charge, but as soon as they put bean counters in charge, crash, literally.

realsaramerica:

Chris - another issue that I'm struggling to get politicians to listen to is this: several of my copyrighted novels have been used, without my knowledge or permission or that of my publisher, to train these LLMs. How is it permissible for technology firms to wholesale rip off the product of years of research and hard work without compensation -- all for their own profit? Discovery in Facebook lawsuit filings shows that they are fully cognizant that what they're doing isn't really fair use. I've tried talking about it with my state senator and he was pretty flippant. I hope you'll take the issue more seriously. https://authorsguild.org/news/meta-libgen-ai-training-book-heist-what-authors-need-to-know/

WinstonSmithLondonOceania:

Copyright infringement is a huge issue with AI. Are you a member of the writers guild? You might want to join a group of like-minded writers and other creators to fight this scourge.

realsaramerica:

Yes, I'm a member of the Authors Guild - doing what I can to help fight the wholesale theft of intellectual property by amoral techbro billionaires.

CR Burnett:

Thank you for acknowledging that this is potentially a serious problem. If governmental oversight does not place guardrails on AI development, those who control it will continue to abuse the power and influence they believe is the only way of developing this technology. Please continue to speak out about abuse of power in all industries related to AGI.

Squid:

Thank you for stepping up! Regulation is necessary. There is no need to pause or hinder regulatory efforts on AI; in fact, we should do everything we can to strictly regulate it first! Who’s to say that AI responses won’t be exploited by autocratic regimes to dismantle democratic institutions around the world? Look at how much damage websites like Twitter, Google, and Facebook have done. The algorithms push false narratives, and there are no regulations letting the user stop or tweak them.

Steven Panicci:

Thank you, Chris Murphy, for being ahead of the curve on this important and serious tech issue. It doesn’t take a PhD to connect the dots when it comes to unregulated CORPORATE power, goals, and profits at the expense of society and culture. Start with Citizens United, unregulated banking practices, throw in the social media tech giants, a psycho POTUS, and cult followers, and you’ve got an authoritarian regime ripe for non-accountability. The wealth gap in this country makes it difficult to do the right thing for all of us due to money and profits. I can’t imagine the horror of this AI thing added to the mix. TECH must be regulated.

Peter Miller:

Yes. But please spell out what regulations look like. I’m all for preventing the dystopian future you describe. I don’t trust the AI companies to regulate themselves. What should we do?

Random Anon:

I work in this industry.

The main thing that China has over us is, quite ironically, some degree of transparency in what is produced; at least enough that researchers can probe and build upon the foundational models released by independent actors in the country (the training datasets and more particular aspects of methodology are usually not released).

For example, Alibaba has released an "open weights" lineup of language models known as Qwen (the most recent of which is the Qwen3 series), and these models are popular not just in domestic research coming from China, but also in Western research (due to permissive licensing and the ability to independently finetune or train on top of these open weight models for particular use cases).

Very little of this openness exists on the Western side; Meta had its own lineup released in a similar fashion (Llama), the most recent of which (Llama 4) was widely considered to be a flop.

Researchers from all around the world are essentially forced to build upon something Chinese in order for their work to be most reflective of what approaches actually work at frontier scale. DeepSeek is especially notable for being an open weights outlier in the same fashion, as they have released by far the largest and most capable open weight models for both research and enterprise use.

The only regulations I can think of that would meaningfully boost competition - not just in enterprise adoption, but also in fundamental (public-facing) research of the technology - would be those that enforce, at minimum, some degree of this kind of transparency (of publicly released weights). Otherwise, the ecosystem is built on what some Chinese company has learned about its own models thanks to other actors (including those in the West) - things it wouldn't have learned on its own. Consequently, they can absorb the useful published findings from researchers into their deployments faster than actors in the West can.

Random Anon:

For the record, I am not saying "mandate open weights" (this is not realistic), but defining some degree of standards when it comes to transparency beyond immediate safety precautions seems both fairly diligent and realistic.

Also, if explicitly factoring in safety concerns: having a stronger understanding of how to make this kind of technology safe primarily stems from having *more eyes* being able to probe and investigate it, not less.

Cybersecurity has broadly learned this kind of lesson already; that is, "security through obscurity is not an effective way to make a system safe". I have strong reason to believe that this principle extends to AI systems.

David J. Brown Ph.D. (cantab.):

Just one point here - about "safety" and "open weights."

Geoff Hinton has been giving some recent interviews in which he's been talking about his personal apprehensions about what he thinks can (and will) happen with AI if we're not careful about its development.

One of the things he seems to be very concerned about is the idea of a general open release of the weights for a commercially trained AI (LLM).

I believe that his point is that once anyone has those, they can trivially implement an LLM (an AI) with that level of 'knowledge' (capacity).

His metaphor is nuclear weapons: What keeps that somewhat under control in the modern world is that while it's broadly known how to make a nuclear bomb nowadays, not just *anyone* can get the Plutonium to make the trigger, nor the highly enriched Uranium into which it is shot to make the very big "boom!"

Geoff likens the weights for an LLM to these fissionable materials.

So this *specifically* is *one* of the many questions that come up in this arena of safety and regulation, before we just run wild and the genie is out of the bottle, as it were :)

On the sharp end of some of this are the folks at Anthropic: Dario and Daniela Amodei, who formed Anthropic, I believe, specifically because of some of these concerns.

Again: Sorry to be a bit less equipped with citations in the instant, I'll see what I can do about that.

And I'll try to find a pointer to the interview in which I heard Geoff Hinton's recently talking about all this and his concerns.

David J. Brown Ph.D. (cantab.):

Thanks for these good comments, and for starting a bit of discussion about some of the things that are involved here, and that we need to be thinking about.

As you point out (or at least allude to), *none* of the US companies are doing anything *like* letting their source code out, nor - more importantly - are they releasing the "weights" which establish what a trained AI substrate is able to do.

Both are still "secret sauce" (i.e. valuable intellectual property) to these companies, and the "weights" are especially so, as the cost of training these models is stupendous.

To some degree, this is why DeepSeek had to do this on their own; they have now also decided to open-source what they've done, making it possible for others to implement these mechanisms for themselves.

Something quite important in this arena is this matter of the "weights" resulting from the training of one of these LLMs. Training is a vastly compute- and data-intensive, expensive thing to do, and thus far only companies with enormous capital - and access to the right vast data - have done that (over here).

At this point, it seems to have become apparent that the focus on "model training" is now winding down, and the focus for new ICs (chips) that implement AI systems is on "edge" devices that perform various recognition functions based upon these already-trained embedded AI models.

But without the weights resulting from a properly trained model, the recognition function in these edge devices doesn't work as well as may be needed.

So, up until this recent disruption by DeepSeek, all of the trained models at US companies have *not* been available to anyone but the major companies that developed them. They do let you *use* their LLMs as web-deployed applications (such as ChatGPT, etc.), but this really only helps them further to gather data as vast numbers of people feed queries ("prompts") to these systems.

There's rather a lot more to say about all of this, but as I am not personally a front-lines engineer working on any of these systems, we will have to look to those people who *are* to drill down further on it all and help us better understand the poignant questions and technical frontiers du jour.

I will stop here for the moment therefore.

Laura Twing:

New to AI, let’s hear some of the regulations ideas and how they would protect objective reality.

Steven Panicci:

How about just protecting jobs?

WinstonSmithLondonOceania:

A UBI would go a long way to that end.

BG Pete Chiefari:

Possibly one of the finest articles I've ever seen written on AI and the AI dilemma we face. My instincts are telling me that he is absolutely right! Using the Chinese as the bogeyman to justify a flaccid approach to AI regulation is an absurd and stupid way to do business!

Cristy Stockinger:

It’s all our sci-fi movies coming true. It scared me in the movies, I don’t want it in real life.

WinstonSmithLondonOceania:

At least the dystopian ones.

Suzanne Cooper:

I agree 100%. Yet the current administration hypocritically believes in winning at all costs - and I mean ALL costs! Please contact your elected officials and let them know that unchecked AI may well be the end of society as we know it. Don’t we have enough problems without adding AI into the mix? Think about it. Thank you.

WinstonSmithLondonOceania:

All costs to us that is, not to them. No cost to them. Just flying palaces from Qatar and under the table profits from $KingMAGAcoin and $QueenMAGAcoin.

AI will definitely be the end of society as we know it - that's the plan. It's the "philosophy" of one "Mencius Moldbug", AKA Curtis Yarvin, who is inexplicably worshiped by tech titans like MuskRat, Marc Andreessen, Peter Thiel, etc. His central premise is that democracy is incompatible with freedom. Try wrapping your brain around that one.

Sandy Sears:

I agree, Chris!!

Linda Querry:

“Fake video and audio, without accountability or legal liability, could obliterate any notion of objective truth. The social isolation crisis that already exists, especially for American teens, could be set on fire by AI chatbots and friendship programs (watch Mark Zuckerberg’s recent interview to witness how excited the industry is to replace human friends with robot friends). The substitution of essential human functions - like composition and creativity and conversation - by machines will likely lead to incalculable spiritual atrophy. And that’s just the tip of the iceberg.”

This is really terrifying. Congress did not regulate social media, and we have seen the negative effects this lack of control has had. Not being proactive, looking at the possible consequences of AI, and regulating it now, rather than after the damage is done, will have horrific consequences. This country needs to grow up and stop acting like a child, merely reacting to problems.

Marilyn Jones:

And I understand that the people pushing unregulated development of AI are already getting what they want. The wrongly named Big Beautiful Bill up for consideration for passage in the Senate has been flagged as having a provision that postpones any Federal regulation of AI until 2035. That's 10 years of unfettered and unregulated development of AI. By that time it will be far too late to try to rein in a technology that will have far surpassed our understanding of it. In our rush to "beat" China and make profit from this technology, we are committing suicide.
