Tuesday, October 28th
This session will situate the current state of the case law on jawboning within the Trump Administration’s efforts to shape the speech environment online, at universities, and in other contexts. It will also explore new ways to reduce inappropriate government pressure on speech intermediaries, as well as the tension between the government’s legitimate authority to execute the law and advocate for its positions and impermissible pressure on private speakers.
Section 230 has been maligned as a get-out-of-jail-free card for “Big Tech.” But that framing obscures a larger and more important reality: without Section 230, platforms would be unlikely to risk liability by hosting content with even a whiff of controversy, limiting speech and debate to the safest of topics.
This session will delve into the ways in which Section 230 benefits internet users, both by directly protecting them from liability and by facilitating products and conversations that would not exist at all without that shield.
AI plays an increasing role in our daily lives — powering everything from self-driving cars to search results — but it also carries risks, including perpetuating discrimination, inappropriately disclosing private information, and misunderstanding context.
The Biden Administration sought to address AI safety risks through both formal means (including a major Executive Order) and informal ones (like voluntary commitments from AI companies). The Trump Administration has pursued its own course, railing against what it has termed “woke AI” and issuing its “Woke AI” Executive Order, which restricts government procurement to LLMs that prioritize “historical accuracy” and “objectivity” and avoid hidden “partisan or ideological judgments.”
This session will examine the First Amendment and free expression implications when governments involve themselves in determining what data AI systems can be trained on and what they can say. How do governments pressure AI developers and deployers to adopt particular design choices, evaluation methods, or output filters? What are the implications for open source models?
Wednesday, October 29th
Almost as soon as President Trump took office, the newly minted chairs of the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) began separate campaigns designed to influence the editorial judgment of the entities within their jurisdiction.
This session will delve into the First Amendment’s limits on FCC and FTC authority to control or encourage particular editorial policies. It will discuss potential impacts on journalism and online content moderation processes, and it will look closely at the benefits and challenges of transparency as a normative matter.
This session will spotlight experts who have directly experienced government pressure to change the content of their speech to avoid punishment, or who work on behalf of those who have. Speakers come from journalism, student activism, community moderation, and research backgrounds.
The U.S. and foreign governments are reportedly using automated content detection tools to monitor online expression and take action against speakers with whom they disagree. In the United States, non-citizens — including permanent residents — are being whisked to distant detention facilities and denied immigration benefits based on their viewpoints. To make matters worse, the tools used to identify these speakers’ disfavored content are often inaccurate.
This session will show how governments’ use of social media monitoring to crush dissent and discourage advocacy is eroding free expression in the U.S. and around the world. It will explore the tools governments use to engage in social media monitoring and the online speech and privacy problems that flaws in these tools exacerbate.