AI impostors implicated in murder, suicide, identity theft
I found myself thinking about that line this week when considering a series of news headlines that shed light on just what future the AI industry is delivering.
The most read piece on Tech Policy Press this week is a breakdown of a lawsuit brought by the parents of a California teen named Adam Raine who sought advice from OpenAI's GPT-4o on how to end his life. The chatbot gave him explicit instructions and encouragement. The record in the case, based on thousands of pages of chat logs, is incredibly disturbing. His parents are suing the company and its CEO, Sam Altman, alleging “it was the predictable result of deliberate design choices.”
News reports of deaths related to AI chatbots should be setting off alarm bells for all of us, writes Common Sense Media founder and CEO Jim Steyer. The use of AI, including general purpose chatbots like ChatGPT, for companionship is unacceptably risky for teens, he says.
Indeed, AI chatbots have been observed repeatedly sending sexually explicit content to underage users, yet there are seemingly no effective safeguards to prevent these bots from continuing inappropriate interactions once a user identifies as a child, Omny Miranda Martone, CEO of the Sexual Violence Prevention Association, writes. “We need laws that explicitly prohibit the creation, distribution, and marketing of AI companions designed to impersonate minors, especially for sexual or suggestive uses,” writes Martone.
This perspective is particularly compelling following Reuters reporting on Meta’s chatbot policies, which permitted “sensual” conversations with minors, and the latest from tech reporter Jeff Horwitz, who found that “Meta has appropriated the names and likenesses of celebrities – including Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez – to create dozens of flirty social-media chatbots without their permission,” and that across weeks of testing “the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups.”
Of course, it’s not only teens who are harmed by the deplorable product decisions being made by some of the biggest companies on Earth. This week, the Wall Street Journal reported on the first known murder-suicide in which extended interaction with a chatbot was a factor. In the wake of numerous such reports of interactions with AI chatbots that resulted in dangerous or deadly consequences, the Center for Democracy & Technology's Dr. Michal Luria says AI firms should stop designing products that pretend to be human. “Finding alternative ways of designing chatbots will not be an easy design pursuit, but it’s a necessary one — non-humanlike design could ease many concerns people rightfully have with AI chatbots,” writes Luria.
Transatlantic dispute over tariffs implicates EU digital rules
The other big story we covered in multiple ways this week is the ongoing dispute between the US and the EU over digital regulations. The question of the legality of US President Donald Trump’s tariffs appears headed for the Supreme Court, but the tariffs remain in place in the meantime.
While that plays out, both sides will continue to wrangle, including, it appears, over digital regulations. Just when the EU thought it was out, US President Donald Trump pulled it back into a transatlantic dispute over regulating tech, writes Tech Policy Press contributing editor Mark Scott. It’s uncertain whether Brussels and Washington are headed for another fight over who gets to police Big Tech.
Among the European lawmakers weighing in is Alexandra Geese, a Member of the European Parliament from Germany, who writes in Tech Policy Press that Trump’s tariff threat against any country that enforces digital rules isn’t just about trade, it’s about destabilizing Europe’s democracy. Geese argues it’s time for Europe to wake up, enforce its tech laws, and build digital autonomy.
How bad could it get between the EU and US? Trevor H. Rudolph, who served as Chief of the Cyber and National Security Unit at the White House under the Obama administration, writes that Europeans must consider the risk that Washington could unilaterally cut off European access to a wide range of US-provided services, including military intelligence, weapons systems, and even consumer-focused cloud services. “Framed as a proverbial ‘kill switch’ by many in Brussels, the debate has shifted from whether it could be triggered to when. This, in turn, has prompted renewed calls by the EU and member states to advance the continent’s ‘technology sovereignty,’” he writes.
More related to AI governance and regulation
Trump's AI Action Plan may help accelerate US competitiveness, but the policy raises serious concerns for Global South countries by not meaningfully grappling with the needs of lower-income nations and risking regulatory backsliding, the Center for International Strategic Studies AJK’s Nimra Javed writes.
Last year, Colorado enacted a first-of-its-kind artificial intelligence law, but the state recently held a special session where lawmakers engaged in frenzied negotiations over whether to expand or dilute its protections. For the podcast, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two Colorado Sun reporters, Jesse Paul and Taylor Dolven, who closely tracked the talks.
To encourage adoption, AI companies are offering government agencies access to their tools for just $1 each, but this low-cost procurement is a false bargain, bringing risks that overshadow the low price, writes Nina-Simone Edwards, a senior associate at Georgetown Law's Institute of Technology Law and Policy working on the Redesigning the Governance Stack Project.
Among the pieces addressing online safety and transparency:
While the US Kids Online Safety Act (KOSA) is designed to survive First Amendment scrutiny by regulating design features and business practices, the UK's Online Safety Act treats government speech control as a feature, not a bug, writes Design It For Us AI policy director Matthew Allaire.
More this week:
Rebuilding Syria’s digital backbone is essential for transparency, security, and democratic governance, yet it remains overlooked in post-conflict recovery, writes Noura Aljizawi, a senior researcher at the Citizen Lab at the Munk School of Global Affairs and Public Policy, University of Toronto.
For this weekend’s podcast, I spoke to Petter Törnberg, who with Justus Uitermark is author of Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity, published open-access by Routledge. We talked about the role of social media in politics, the threat that AI firms pose to democracy, and how researchers are studying complex systems. Listen to our discussion here, and download the book for free here.
What we’re watching
Tech Policy Press is following a number of important discussions next week, where we would value submissions from the field.
Congress is back from recess. US legislators are returning on Tuesday, with the prospect of a revival for the state AI moratorium, children's online safety legislation, and more pressure over European tech regulations.
AI chatbots. Given the recent revelations concerning Meta’s chatbot policies and allegations concerning the harmful impacts of OpenAI’s ChatGPT, debates over liability for tech companies and related policy questions are likely to intensify.
AI use in the US federal government. The push by the Trump Administration to further advance AI use in the federal government, including the deployment of xAI’s Grok chatbot, is of increasing concern to many in civil society.
We’d also be interested in commentary and analyses on two events from this past week:
DRAPAC 2025. If you attended the 2025 Digital Rights Asia-Pacific (DRAPAC) assembly last week in Kuala Lumpur, Malaysia, we are interested in submissions that address important discussions or outcomes from the conference.
UN AI Scientific Panel. The UN General Assembly (UNGA) adopted a resolution outlining the terms of reference for an AI Scientific Panel and Global Dialogue on AI governance. We’d be interested in hearing from contributors who can analyze or respond to the UN resolution.
If you are interested in submitting an article on any of these topics, please consult our contributor guidelines.
For those celebrating a long weekend, I hope it is restful. And to everyone, I wish you the best for the week ahead. Keep an eye out for an announcement related to our 2026 Fellowship Program!
-Justin