Good morning!
This week, RightsCon, which bills itself as "the world’s leading summit on human rights in the digital age," descends on Taipei. To learn more about the dynamics in the civil society community working on digital rights and tech policy matters in Taiwan, I spoke to Article 19's Liu I-Chen, Judicial Reform Foundation's Grace Huang, and Taiwan Association for Human Rights' Kuan-Ju Chou.
Expect a report from Taipei next week. I'm hoping to hear from folks in the field about the impact of cuts at USAID and the State Department and how that is affecting internet freedom, digital rights, and pro-democracy work. If you're attending, look me up.
In the meantime, enjoy these highlights from Tech Policy Press:
European regulation
- Serving as host of the Tech Policy Press podcast for the first time, Associate Editor Ramsha Jahangir moderates a discussion featuring GNI’s Hillary Ross, the DSA Observatory’s Magdalena Jozwiak, and EFF’s Svea Windwehr about the first systemic risk reports and independent units under the Digital Services Act.
- Ramsha is also tracking key public statements from US and EU officials, tech industry representatives, and other stakeholders highlighting the clash over the enforcement of Europe's tech laws.
- Do the EU's new AI Act guidelines provide legal clarity on the definition of an AI system and on prohibitions? Kris Shrishak breaks it down.
- The UK is forcing Apple to weaken encryption. Such demands have no precedent in democratic countries and would have far-reaching consequences not only for UK residents but for Internet users worldwide, write TechFreedom’s Berin Szóka and Santana Boulton.
AI: From Paris to PAIRS
- Next in the series, how can we oppose AI governance when it fails to help those who stand to lose the most from the technology’s deployment? Blair Attard-Frost, Ana Brandusescu, David Gray Widder, and Christelle Tessono point to lessons learned from Canada’s failure to pass national AI legislation.
- Also as part of the series from participants in PAIRS, Jonathan van Geuns writes that the crisis in AI governance isn't just about flawed AI. It's about corporate capture of governance processes. (We'll have more from PAIRS next week.)
- The notion that a human rights-centered approach hinders progress is a dangerous fallacy. For too long, “innovation” has been used as an excuse to sideline trust, safety, and user rights, writes shirin anlen.
- Wondering where today's subject line came from? It's from this piece, which is a must-read. Some AI enthusiasts are fantasizing about the potential future suffering of chatbots. But David McNeill and Emily Tucker say there are a lot of very good reasons for rejecting the claim that contemporary AI research is on its way toward creating genuinely intelligent, much less conscious, machines. (This is one of my personal favorites this week.)
Online safety
- Writing with Amnesty Tech's Pat de Brún, Rohingya survivor Maung Sawyeddollah says Meta's recent policy shifts could lead to more violence or genocide. In January, Sawyeddollah filed a complaint with the SEC over Meta’s failure to heed warnings about Facebook’s role in fueling violence in Myanmar.
- Last week, the US Senate unanimously passed the TAKE IT DOWN Act. The legislation has broad support, but some advocacy groups say certain provisions in the legislation, which targets nonconsensual intimate image abuse, could risk undermining fundamental rights, writes Kaylee Williams.
- The support for the legislation is driven by the growing scale of the problem, write Encode's Sunny Gandhi and Adam Billen. Congress can restore victims' power, close legal loopholes, and ensure a safer online future, they say.
- A South Carolina state legislator who spearheaded the passage of Gavin’s Law joined other experts in testifying before the US Senate Judiciary Committee on Wednesday about legislative action to protect children online. Edward Simon Cruz and Jeremy Fredricks report on the hearing. (Read the transcript of the hearing here.)
- Tech Policy Press fellow Dia Kayyali provides a guide to terms and issues at the intersection of automation, AI, machine learning, and content moderation. I recommend checking this out; you'll want to bookmark it!
- Disabled people already face barriers online, and Meta just made it worse, writes Tech Policy Press fellow Ariana Aboulafia. Fact-checking is gone, ableist hate is allowed, and misinformation will spread unchecked. It doesn’t have to be this way.
- Platform manipulation and foreign interference strategies to influence elections are becoming ever more sophisticated, yet social media companies appear to be throwing in the towel, writes Milan Wiertz. Governments must demand more, he says.
DOGE watch
- Emily Tavoulareas helped found the US Digital Service at the Department of Veterans Affairs. She says the US DOGE team understands tech isn’t just about websites and apps—it’s the foundation of everything the government does. And now, they hold the reins. But to what end?
That’s all for this week. More to come, from Taipei and beyond!
Justin