Cody Venzke, ACLU
Min Aung, GNI
Neil Chilson, Abundance Institute
Miranda Bogen, CDT AI Governance Lab
Becca Branum, Center for Democracy & Technology

AI plays an increasing role in our daily lives — powering everything from self-driving cars to search results — but it also carries risks, including perpetuating discrimination, inappropriately disclosing private information, and misunderstanding context.
The Biden Administration sought to address AI safety risks through both formal means (including a major Executive Order) and informal ones (like voluntary commitments from AI companies). The Trump Administration has pursued its own course, railing against what it has termed “woke AI” and issuing a “Woke AI” Executive Order that conditions government procurement on LLMs prioritizing “historical accuracy” and “objectivity” without hidden “partisan or ideological judgments.”
This session will examine the First Amendment and free expression implications of governments involving themselves in determining what data AI systems can be trained on and what those systems can say. In what ways do governments pressure AI developers and deployers toward particular design choices, evaluation methods, or output filters? And what are the implications for open source models?