Mr. Murphy’s central warning is accurate: the scramble to dominate AI risks compromising safety, control, and democratic integrity. However, the solution is not to decelerate innovation, but to deploy it with strategic discipline: embedding robust guardrails, international norms, and proactive governance.
IMO, what’s still missing from the current policy discourse, and where we can truly differentiate as a society, is a deliberate investment in education. To sustainably “win” the AI future, we must elevate our educational system beyond technical proficiency. This means:
Bolstering critical thinking curricula at all levels, ensuring that students are equipped to question, interpret, and ethically evaluate both AI outputs and the frameworks behind them.
Reinvigorating the humanities (philosophy, performing arts, education, history, ethics, the social sciences) as foundational disciplines for future leaders, technologists, and citizens. These fields provide the context, ethical grounding, and analytical rigor necessary for responsible AI stewardship.
Bridging STEM with liberal arts, building interdisciplinary talent pipelines that can anticipate societal impacts, not just technical breakthroughs.
If we want to lead responsibly in the AI era, we must invest as heavily in cultivating ethical, critical, and creative minds as we do in technology itself. This is not just a risk mitigation strategy; it’s a long-term competitive advantage. The true “race” should not be for faster AI, but for a society resilient enough to govern and guide it wisely.
Our educational system is the linchpin. Without it, we risk ceding both leadership and values in the age of AI.
And I understand that the people pushing unregulated development of AI are already getting what they want. The wrongly named Big Beautiful Bill, up for passage in the Senate, has been flagged as containing a provision that would bar state regulation of AI until 2035. That's 10 years of unfettered and unregulated development of AI. By that time it will be far too late to try to rein in a technology that will have far surpassed our understanding of it. In our rush to "beat" China and profit from this technology, we are committing suicide.
Pay school teachers what they deserve and what will attract people to the profession, make classes smaller, and you have new human employment.
Double the number of med schools and nursing schools, and fund them so that tuition is not a barrier. We are terribly short of primary care physicians and cannot even fill our needs with foreign physicians.
We can create job openings while improving our society and well being. Throttling the lure of the jobs AGI may eliminate could be an opportunity. (Ask AGI to figure this out for us.)
Chris Murphy is honest, patriotic, brilliant, tough, and clever enough to lead our troubled country well. Murphy for president in 2028! He wouldn’t be in it for himself…
Ditto
Or Secretary of State!
Or VP, Chief of Staff, National Security Adviser, etc. Grateful for Chris Murphy’s courage and expertise as a Senator committed to a government by, for, and of All The People!
Yes! And we must protect the American soul, the American spirit! We must learn who we are, first individually: that we are not just a material entity, that we have been gifted with a mind, a spirit, what some may call a soul. This spirit part of us is vast. We must not be the slaves of pure materiality, but realize that our spirit is great, it is big, it can hold everything! And this spirit in each of us is called upon to hold one another! I do not fear AI. I fear the bad actors who will misuse AI.
So, so, so true!!!!!
The first time I heard "our goal is to maximize return for our investors" (Economist Milton Friedman, I believe), I almost threw up. I believe Professor Friedman was given the Nobel Prize in Economics for the work that produced this statement. Once his idea was embraced, America changed -- and not for the good or benefit of its workers.
Companies were bought and sold like baseball trading cards; people lost their jobs -- and those with enough savvy and the right connections began to make larger piles of money. Those who still had jobs were moved from pension plans to 401(k)s. Some of those workers benefitted from comfortable retirements. Those in the workforce, as well as those entering it in the next ten years, will find fewer jobs while they drag along their astronomical student debt -- a post for another day.
Shame on America for allowing this boondoggle to be perpetrated on the American people. No wonder we're afraid to tax those who have benefitted the most from this travesty. Money is power and our politicians run for cover for fear of not being reelected.
All incomes should be taxed -- period.
So there is more to Friedman's quote that businesses neglected. Direct from his essay in the NYT in 1970: "there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.” I think Friedman would agree that it is in a business's best interest to think about the ecosystem, but they don't have to. It is up to the government to create laws to manage the evils business could do. It gets back to what Murphy proposes in his essay here, which is spot on. I think the Powell Memo, a response to the protests of the '60s, is the overlooked document that laid the thought-leadership framework that got us here.
Thanks, Mary. I appreciate knowing more about Friedman's thesis. What he could not account for was greed, corporate and consumer alike, with politicians looking out for themselves! Greed makes it imperative that everyone, no exceptions, pay taxes in line with their assets and earnings.
Absolutely!!! I was fascinated by Friedman and felt he was misunderstood, so I read a lot about him at one point. And you're 100% correct: he didn't account for greed. He was a bit naive and altruistic in some of his views about how the world should work (as most Libertarians can be). As an example, he didn't like seatbelt laws because he felt the market wouldn't buy dangerous cars, and cars with seatbelts would naturally dominate. That assumes that consumers know better (we know they don't). But he did like car emission laws, because emissions from a car in CA could impact a human anywhere on the globe. That's bad. And businesses should be forced to reduce emissions to save the planet, but it's up to the government to set those regulations in a way that economically impacts companies that don't comply. So back to profits.
Absolutely. Even worse, the same fans of Friedman (I call them "Friedmanites") are also fans of Ayn Rand, who wasn't even an economist. They're all about laissez-faire, zero-sum, "winner"-take-all capitalism. Just about everyone on the Forbes top 100 list. And they're all crooks. That's stolen wealth, partially through wage theft, and mostly through playing the Wall St. shell game: now you see it, now you don't.
I say it's past time for a UBI! Tax the billionaire oligarchs!
I agree with your thesis, and I'd like to add: which US industry developed without the need for regulations? How many industries have polluted (and continue to pollute) our air and water? (Musk's data center is causing a lot of pollution and harm.) We learned early on, when automobiles were invented and manufactured, that we needed a lot of regulations for safety and environmental reasons. Look at the food manufacturing industry and the pharmaceutical industry. Look at what happened with Boeing's airplanes when it insisted on more self-regulation. Unfortunately, it's imperative that the government, at all levels, regulate human greed.
The C-Suite class wants to put the fox in charge of the henhouse. Boeing was great as long as engineers were in charge, but as soon as they put bean counters in charge, crash, literally.
Healthcare has the same issue.
I agree with your comment. It is imperative we have regulations. And yes we pay a price for them in the products we buy and consume. I'm willing to pay the extra cost and I'm probably not alone.
Chris - another issue that I'm struggling to get politicians to listen to is this: several of my copyrighted novels have been used, without my knowledge or permission or the knowledge or permission of my publisher, to train these LLMs. How is it permissible for technology firms to wholesale rip off the product of years of research and hard work without compensation — all for their own profit? Discovery filings in the Facebook lawsuit show that they are fully cognizant of the fact that what they're doing isn't really fair use. I've tried talking about it with my state senator and he was pretty flippant. I hope you'll take the issue more seriously. https://authorsguild.org/news/meta-libgen-ai-training-book-heist-what-authors-need-to-know/
Copyright infringement is a huge issue with AI. Are you a member of the Authors Guild? You might want to join a group of like-minded writers and other creators to fight this scourge.
Yes, I'm a member of the Authors Guild - doing what I can to help fight the wholesale theft of intellectual property by amoral techbro billionaires.
Sad that this is happening to you. Thanks for adding a link re this issue. Working for better, more fair days.
Thank you for stepping up! Regulation is necessary. There is no need to pause or hinder regulatory efforts with AI; in fact, we should do everything we can to strictly regulate it first! Who’s to say that AI systems won’t be used by autocratic regimes to dismantle democratic institutions around the world? Look at how much damage platforms like Twitter, Google, and Facebook have done. The algorithms push false narratives, and there are no regulations that let the user stop or tweak them.
Thank you for acknowledging that this is potentially a serious problem. If governmental oversight does not place guardrails on AI development, then those who control it will continue to abuse their power and influence, which they believe is the only way of developing this technology. Please continue to speak out about abuse of power in all industries related to AGI.
Possibly one of the finest articles I've ever seen written on AI and the AI dilemma we face. My instincts are telling me that he is absolutely right! Using the Chinese as the bogeyman to justify a flaccid approach to AI regulation is an absurd and stupid way to do business!
Thank you, Chris Murphy, for being ahead of the curve on this important and serious tech issue. It doesn’t take a PhD to connect the dots when it comes to unregulated CORPORATE power, goals, and profits at the expense of society and culture. Start with Citizens United and unregulated banking practices, throw in the social media tech giants, a psycho POTUS, and cult followers, and you’ve got an authoritarian regime ripe for non-accountability. The wealth gap in this country makes it difficult to do the right thing for all of us due to money and profits. I can’t imagine the horror of this AI thing added to the mix. TECH must be regulated.
Yes. But please spell out what regulations look like. I’m all for preventing the dystopian future you describe. I don’t trust the AI companies to regulate themselves. What should we do?
I work in this industry.
The main thing that China has over us is, quite ironically, some degree of transparency in what is produced; at least enough that researchers can probe and build upon the foundational models released by independent actors in the country (the training datasets and more particular aspects of methodology are usually not released).
For example, Alibaba has released an "open weights" lineup of language models known as Qwen (the most recent of which is the Qwen3 series), and these models are popular not just in domestic research coming out of China, but also in Western research (due to permissive licensing and the ability to independently fine-tune or train on top of these open-weight models for particular use cases).
Very little of this exists in the open on the Western side; Meta had their own lineup released in a similar fashion (Llama), the most recent of which (Llama 4) was widely considered to be a flop.
Researchers from all around the world are essentially forced to build upon something Chinese in order for their work to be most reflective of what approaches actually work at frontier scale. DeepSeek is especially notable as an open-weights outlier in the same fashion, as they have released by far the largest and most capable open-weight models for both research and enterprise use.
The only regulations I can think of that would meaningfully boost competition - not just in enterprise adoption, but also in fundamental (public-facing) research of the technology - would be those that enforce, at minimum, some degree of this kind of transparency (of publicly released weights). Otherwise, the ecosystem is built off of what some Chinese company has learned about their own models thanks to other actors (including those in the West) - things they wouldn't have learned themselves. Consequently, they can absorb the useful published findings from researchers into their deployments faster than actors in the West can.
For the record, I am not saying "mandate open weights" (this is not realistic), but defining some degree of standards when it comes to transparency beyond immediate safety precautions seems both fairly diligent and realistic.
Also, if explicitly factoring in safety concerns: having a stronger understanding of how to make this kind of technology safe primarily stems from having *more eyes* able to probe and investigate it, not fewer.
Cybersecurity has broadly learned this kind of lesson already; that is, "security through obscurity is not an effective way to make a system safe". I have strong reason to believe that this principle extends to AI systems.
Just one point here - about "safety" and "open weights."
Geoff Hinton has been giving some recent interviews in which he's been talking about his personal apprehensions about what he thinks can (and will) happen with AI if we're not careful about its development.
One of the things he seems to be very concerned about is the idea of a general open release of the weights for a commercially trained AI (LLM).
I believe his point is that once anyone has those, they can trivially implement an LLM (an AI) with that level of 'knowledge' (capacity).
His metaphor is nuclear weapons: what keeps those somewhat under control in the modern world is that, while it's broadly known how to make a nuclear bomb nowadays, not just *anyone* can get the plutonium or the highly enriched uranium needed to make the very big "boom!"
Geoff likens the weights for an LLM to these fissionable materials.
So this *specifically* is *one* of the many questions that come up in this arena of safety and regulation, before we just run wild and the genie is out of the bottle, as it were :)
On the sharp end of some of this are the folks at Anthropic: Dario and Daniela Amodei, who formed Anthropic, I believe, specifically because of some of these concerns.
Again: sorry to be a bit less equipped with citations at the moment; I'll see what I can do about that.
And I'll try to find a pointer to the interview in which I recently heard Geoff Hinton talking about all this and his concerns.
Thanks for these good comments, and for starting a bit of discussion about some of the things that are involved here, and that we need to be thinking about.
As you point out (or at least allude to), *none* of the US companies are doing anything *like* letting their source code out, nor - more importantly - are they releasing the "weights" which establish what a trained AI substrate is able to do.
Both are still "secret sauce" (i.e. valuable intellectual property) to these companies, and the "weights" are especially so, as the cost of training these models is stupendous.
To some degree, this is why DeepSeek had to do this on their own; they have now also decided to open-source what they've done there, making it possible for others to implement these mechanisms for themselves.
Something quite important in this arena is this matter of the "weights" resulting from the training of one of these LLMs. Training is a vastly compute- and data-intensive, expensive thing to do, and thus far only companies with enormous capital - and access to the right vast data - have done it (over here).
At this point, it seems to have become apparent that the focus on "model training" is now winding down, and the focus for new ICs (chips) that implement AI systems is on "edge" devices that perform various recognition functions based upon already-trained embedded AI models.
But without the weights resulting from a properly trained model, the recognition function in these edge devices doesn't work as well as may be needed.
So, up until this recent DeepSeek disruption, the trained models at the US companies have *not* been available to anyone but the major companies that developed them. They do let you *use* their LLMs as web-deployed applications (ChatGPT, etc.), but this mainly helps them gather still more data, as vast numbers of people feed queries ("prompts") to these systems.
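Just to make the "weights are the secret sauce" point concrete, here's a deliberately tiny, made-up sketch (nothing remotely like a real LLM, every name here is invented for illustration): training produces nothing but numbers, and whoever holds those numbers can run the model without redoing the expensive training or having the data.

```python
# Toy illustration only: the "weights" of a model are just numbers,
# but they encode everything the costly training step produced.

def train_tiny_model(data):
    """Stand-in for the expensive step: learn a threshold classifier.
    Here 'training' is trivial averaging; real LLM training costs millions."""
    positives = [x for x, label in data if label == 1]
    negatives = [x for x, label in data if label == 0]
    avg_pos = sum(positives) / len(positives)
    avg_neg = sum(negatives) / len(negatives)
    # This single number plays the role of the model's "weights".
    return (avg_pos + avg_neg) / 2

def run_model(threshold, x):
    """Inference is cheap: anyone holding the weights can run the model."""
    return 1 if x >= threshold else 0

# The lab with the data and compute does the expensive part once...
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
weights = train_tiny_model(data)

# ...and if the weights are published ("open weights"), anyone can
# reproduce the model's behavior, no training run, no data, required.
print(run_model(weights, 0.95))  # -> 1 (classified as positive)
print(run_model(weights, 0.05))  # -> 0 (classified as negative)
```

That asymmetry, training is stupendously expensive while running the finished weights is cheap, is exactly why the weights are guarded as intellectual property, and why an open-weights release like DeepSeek's changes the landscape.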
There's rather a lot more to say about all of this, but as I am not personally a front-lines engineer working on any of these systems, we will have to look to those people who *are* to drill down further on it all, and help us better understand the pressing questions and technical frontiers du jour.
I will stop here for the moment therefore.
New to AI, let’s hear some of the regulations ideas and how they would protect objective reality.
How about just protecting jobs.
A UBI would go a long way to that end.
UBI will be an essential element in the future for a society that values human life. MAGA doesn't, and the Tech Bros? They are ready for space. I say put all of them on a Musk-owned rocket... heh heh.
Definitely! A "Starship" 😂
Jobs evolve. We need an economy that recognizes the inherent instability of “jobs”. That values tasks differently.
Let’s put AI to good work by replacing CEOs: program AI to improve customer/patient experience and workers’ productivity, increase workers' wages and benefits, and eliminate the cost of management's outrageous salaries, benefits, and exit packages set at hiring.
Mr. Murphy’s central warning is accurate: the scramble to dominate AI risks compromising safety, control, and democratic integrity. However, the solution is not to decelerate innovation, but to deploy it with strategic discipline- embedding robust guardrails, international norms, and proactive governance.
IMO, what’s still missing from the current policy discourse- and where we can truly differentiate as a society- is a deliberate investment in education. To sustainably “win” the AI future, we must elevate our educational system beyond technical proficiency. This means:
Bolstering critical thinking curricula at all levels, ensuring that students are equipped to question, interpret, and ethically evaluate both AI outputs and the frameworks behind them.
Reinvigorating the humanities- philosophy, performing arts, education, history, ethics, social sciences- as foundational disciplines for future leaders, technologists, and citizens. These fields provide the context, ethical grounding, and analytical rigor necessary for responsible AI stewardship.
Bridging STEM with liberal arts, building interdisciplinary talent pipelines that can anticipate societal impacts, not just technical breakthroughs.
If we want to lead responsibly in the AI era, we must invest as heavily in cultivating ethical, critical, and creative minds as we do in technology itself. This is not just a risk mitigation strategy- it’s a long-term competitive advantage. The true “race” should not be for faster AI, but for a society resilient enough to govern and guide it wisely.
Our educational system is the linchpin. Without it, we risk ceding both leadership and values in the age of AI.
And I understand that the people pushing unregulated development of AI are already getting what they want. The wrongly named Big Beautiful Bill up for consideration in the Senate has been flagged as containing a provision that bars state regulation of AI for the next decade, until 2035. That's 10 years of unfettered and unregulated development of AI. By that time it will be far too late to try to rein in a technology that will have far surpassed our understanding of it. In our rush to "beat" China and profit from this technology, we are committing suicide.
Pay school teachers what they deserve and what will attract people to the profession, make classes smaller, and you have new human employment.
Double the number of med schools and nursing schools, and fund them so tuition is not a barrier. We are terribly short of primary care physicians and cannot even fill our needs with foreign-trained physicians.
We can create job openings while improving our society and well being. Throttling the lure of the jobs AGI may eliminate could be an opportunity. (Ask AGI to figure this out for us.)
It’s all our sci-fi movies coming true. It scared me in the movies; I don’t want it in real life.
At least the dystopian ones.