
AI and Technologies of Freedom in the Age of “Weaponized” Government

 

Adam Thierer, R Street

A heated debate is unfolding over what role artificial intelligence (AI) technologies and systems might play in “weaponizing” the federal government’s power against particular people, parties, institutions, or ideas. The House Select Subcommittee on the Weaponization of the Federal Government held a hearing on Feb. 6 that featured sharp bickering between Republican and Democratic lawmakers with very different perspectives on how government might abuse algorithmic capabilities and digital platforms.

This is a politically charged topic, with worries on both sides of the aisle about how White House officials have leaned (or might someday lean) on large technology companies to favor or disfavor certain speech or content. In each case, however, lawmakers claim it is their party or ideas that will be victimized by such weaponization while accusing the other side of inflated or bogus concerns. At this week’s hearing, for example, some Republicans claimed the Biden administration used threats to intimidate tech companies with the goal of censoring certain types of content during the COVID-19 lockdowns. Meanwhile, some Democrats accused former President Donald J. Trump of posing a bigger danger to the future of freedom if he is reelected and uses new digital technologies to intimidate opponents once back in office.

Greg Lukianoff, President and CEO of the Foundation for Individual Rights and Expression, delivered powerful testimony that cut through all this partisanship to highlight how conflicting fears about AI could lead to much broader attacks on digital technologies and free speech. “[T]he most chilling threat that the government poses in the context of emerging AI is regulatory overreach that limits its potential as a tool for contributing to human knowledge,” Lukianoff observed. “AI offers even greater liberating potential, empowered by First Amendment principles, including freedom to code, academic freedom, and freedom of inquiry,” he noted. “We are on the threshold of a revolution in the creation and discovery of knowledge.”

Lukianoff is correct. AI has the potential to become what late communications theorist Ithiel de Sola Pool called a “technology of freedom” in his prescient 1983 book on the future promise of robust computing and electronic speech. In terms of broadening access to information and expanding human freedom more generally, he argued that “[t]he easy access, low cost, and distributed intelligence of modern means of communication are a prime reason for hope.”

Pool’s Technologies of Freedom set forth several “Guidelines for Freedom” regarding electronic speech, which remain relevant 40 years later. The first four guidelines were:

1. The First Amendment applies fully to all media.

2. Anyone may publish at will.

3. Enforcement must be after the fact, not by prior restraint.

4. Regulation is a last recourse. In a free society, the burden of proof is for the least possible regulation of communication.

This is basically the same pro-freedom policy framework Lukianoff proposed for AI at this week’s House hearing. However, if we get policy wrong, it could have dangerous ramifications for online speech. “A regulatory panic could result in a small number of Americans deciding for everyone else what speech, ideas, and even questions are permitted in the name of ‘safety’ or ‘alignment,’” Lukianoff argued.

What he is alluding to is how the government might seek to control computational systems in various ways in an effort to make them safer and more aligned with human values. In the abstract, everyone would agree that safety and value alignment are important goals. But, as I noted in an R Street Institute report last year (https://www.rstreet.org/research/flexible-pro-innovation-governance-strategies-for-artificial-intelligence/), the devil is very much in the details when it comes to what different people and groups mean by “alignment,” or even which values or activities they seek to promote—or perhaps curtail. More importantly, exactly how is that alignment accomplished through regulation? This is where weaponization concerns could arise.

Nonetheless, these so-called alignment issues pervade various AI policy documents issued by the Biden administration. For example, in October 2022, the White House released an “AI Bill of Rights” that was heavily steeped in fear-based narratives about algorithmic technologies. The document claimed algorithmic systems are “unsafe, ineffective, or biased” and “deeply harmful,” and that they “threaten the rights of the American public.” Throughout, it also emphasized theoretical dangers associated with AI over the potential benefits and opportunities of algorithmic capabilities.

A year later, the Biden administration followed up on this gloomy framework by issuing a wide-ranging, 100-plus-page executive order on “Safe, Secure, and Trustworthy Artificial Intelligence,” which stretches executive authority over digital technology well beyond statutory limits and raises the danger of AI overregulation. In this way, the order runs strongly counter to Pool’s guidelines for technological freedom by threatening to preemptively treat algorithmic innovators as guilty until proven innocent.

Congress must ensure that AI remains a technology of freedom. To achieve that goal, we must avoid fear-based approaches to policy like the one the Biden administration is advancing. As Lukianoff put it:

Yes, we may have some fears about the proliferation of AI. But what those of us who care about civil liberties fear more is a government monopoly on advanced AI. Or, more likely, regulatory capture and a government-empowered oligopoly that privileges a handful of existing players. The end result of pushing too hard on AI regulation will be the concentration of AI influence in an even smaller number of hands. Far from reining in the government’s misuse of AI to censor, we will have created the framework not only to censor but also to dominate and distort the production of knowledge itself.

Because algorithmic and advanced computational technologies have important ramifications for America’s national competitiveness and geopolitical security, overly restrictive regulation could backfire in another way. As Lukianoff observed, “[T]he potential end result of America tying the hands of the greatest programmers in the world would be to lose our advantage to our most determined foreign adversaries.” Indeed, as I’ve repeatedly warned in R Street research, U.S. policymakers must not lose sight of this important danger as China and other nations look to counter America’s early lead in digital technology and AI capabilities.

“[W]ith decentralized development and use of AI, we have a better chance of defeating our staunchest rivals,” Lukianoff concluded, because “[i]t’s what gives us our best chance for understanding the world without being blinded by our current orthodoxies, superstitions, or darkest fears.”

This is precisely the sort of principled policy vision America needs to adopt if AI is to become the next great technology of freedom.

 


Adam Thierer is a senior fellow for the R Street Technology & Innovation team.