Preserving Trust and Freedom in the Age of AI

As AI rapidly advances, an emerging challenge threatens to weaken the foundations of societal institutions: How can we maintain trust and accountability online when AI systems become indistinguishable from real people? A new proposal for “personhood credentials” (PHCs) aims to address this challenge while preserving privacy and civil liberties.

The Problem: AI-Powered Deception at Scale. For years, malicious actors have exploited the anonymity of the internet to deceive and manipulate. From fake social media profiles to automated spam and fraud, they have long hidden behind false identities online. But AI threatens to supercharge these deceptive practices.

As AI systems become more sophisticated, they will soon be able to generate human-like text, images, audio, and video that are virtually indistinguishable from content created by real people. This blurring of reality, coupled with decreasing costs and increasing accessibility of AI technologies, is enabling more scalable AI-powered deception by malicious actors. In the future, AI systems will also be able to operate accounts and interact online in ways that mimic human behavior. This creates the potential for malicious actors to deploy massive armies of AI personae, overwhelming authentic human activity.

Why This Matters. Widespread AI-powered deception has profound implications. Our institutions rely on the social trust that individuals are engaging in authentic conversations and transactions. Anything that undermines that trust weakens the foundations of communication, commerce, and government interaction, eroding the shared understanding that enables societies to function.

The recent debate over photos from Kamala Harris’ presidential campaign, in which individuals dismissed real images of a crowd supporting Harris as AI-generated, illustrates how difficult it has become to distinguish authentic images from AI-generated ones. Similar challenges emerge when AI systems pushing hidden agendas flood comment sections on news stories, product reviews, and governments’ proposed rules and notices.

The Personhood Credential Solution. I recently contributed to a paper with over 20 prominent AI researchers, legal experts, and tech industry leaders from institutions including OpenAI, the Partnership on AI, Microsoft, the University of Oxford, a16z crypto, and the Massachusetts Institute of Technology. Our work introduces the concept of PHCs, an innovative, privacy-preserving tool to verify human identity online without revealing personal information.

PHCs would be designed as an optional, privacy-centric tool for online identity verification. They would allow individuals to prove that a real person is behind an account without disclosing personal details. Various trusted entities, such as government agencies, could issue PHCs. Enrollment would involve a one-time verification of an individual’s personhood, ensuring that each person holds at most one credential per issuer. Once enrolled, users could authenticate their humanity to websites and online platforms without compromising their privacy or revealing their identity. For instance, a state could issue a PHC to each holder of a tax identification number, allowing them to create pseudonymous accounts that are verified as human but not linked to their real-world identity. This system aims to balance the need for human verification in digital spaces with robust personal privacy protection.
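Under the hood, such unlinkable issuance can be built from well-studied cryptographic primitives. As a concrete illustration, here is a toy Python sketch of a blind signature, one building block from the anonymous-credentials literature. Everything in it is illustrative (tiny keys, textbook RSA, a made-up token name) and is not the paper’s actual protocol: the point is that an issuer can sign a user’s credential without ever seeing it, so later uses of the credential cannot be traced back to enrollment.

```python
import hashlib
import secrets
from math import gcd

# Toy issuer keypair: tiny, publicly known primes for illustration only.
# Real deployments would use 2048-bit RSA or modern credential schemes.
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # issuer's private signing exponent (Python 3.8+)

def fdh(msg: bytes) -> int:
    # Toy "full-domain hash" of a message to an integer mod n.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- User: create a credential token and blind it before enrollment ---
token = b"pseudonymous-personhood-token"  # hypothetical secret the user keeps
m = fdh(token)
while True:
    r = secrets.randbelow(n - 2) + 2  # random blinding factor
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n  # the issuer only ever sees this value

# --- Issuer: verify personhood once (out of band), then sign blindly ---
blind_sig = pow(blinded, d, n)  # issuer signs without learning m or the token

# --- User: unblind to recover an ordinary signature on the token ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- Any service: check the signature against the issuer's public key ---
assert pow(sig, e, n) == fdh(token)  # valid, yet unlinkable to enrollment
print("token verified as issued to a real, enrolled person")
```

The key property is unlinkability: the issuer can confirm it signed something for a verified person but cannot match a later presentation of the token back to that enrollment. Real PHC designs would layer further machinery on top, such as one-credential-per-person enrollment, revocation, and per-service pseudonyms, and modern schemes would favor zero-knowledge proofs over textbook RSA.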

This approach offers several key benefits:

  • Mitigate fake accounts and bot activity: Only verified humans can obtain valid credentials, enhancing platform integrity.
  • Preserve user privacy: Enable anonymity for users who prefer not to disclose their full identity online.
  • Enable effective “per-person” rate limiting: Restrict activities like commenting or account creation on a per-individual basis, making large-scale manipulation much harder (see the sketch after this list).
  • Maintain optionality: PHCs are not a mandatory digital ID system, but rather an optional tool for both users and platforms to foster higher-trust online interactions when desired.
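To illustrate the rate-limiting point, here is a minimal sketch, assuming a service that has already verified a user’s PHC and derived a service-specific pseudonym; the quota, window, and function names are hypothetical, not part of the paper’s proposal.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 3600  # assumed policy: one-hour quota window
MAX_ACTIONS = 20       # assumed policy: 20 actions per person per hour

# Maps a service-specific pseudonym (derived from a verified PHC and
# unlinkable to real-world identity) to recent action timestamps.
_recent_actions: defaultdict[str, list[float]] = defaultdict(list)

def allow_action(pseudonym: str) -> bool:
    """Permit an action only if this verified person is under quota."""
    now = time.time()
    window = [t for t in _recent_actions[pseudonym] if now - t < WINDOW_SECONDS]
    if len(window) >= MAX_ACTIONS:
        _recent_actions[pseudonym] = window
        return False  # over quota: the person, not just one account, is limited
    window.append(now)
    _recent_actions[pseudonym] = window
    return True
```

Because the quota attaches to the pseudonym behind every account a person controls, spinning up a thousand bot accounts no longer buys a thousand quotas, which is what makes the limit “per-person” rather than per-account or per-IP.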

PHCs also improve upon and complement conventional methods in several ways:

  • Robustness: Unlike CAPTCHAs or behavioral filters, which advanced AI might bypass, PHCs rest on cryptographic verification that AI cannot forge, making them far more resilient to impersonation.
  • Accessibility: Paid subscriptions and credit card verification can exclude legitimate users, whereas PHCs can be designed to avoid such barriers.
  • Privacy: PHCs can verify human control without requiring personal information disclosure, unlike methods involving ID checks or video calls.
  • Scarcity: Unlike easily obtainable phone numbers or email addresses, PHCs are designed to be more difficult to acquire in bulk, limiting large-scale bot operations.
  • Adaptability: PHCs can work alongside AI content detection methods, providing an additional layer of verification when AI-generated content is suspected.

The Way Forward. As with any new system, companies and standards bodies would need to work out important details to implement PHCs. Crucial questions around governance, security, privacy, and equitable access also require thoughtful debate and consideration.

But the core concept—creating a privacy-preserving way to distinguish real humans from AI online—is one that policymakers and technology leaders should take seriously. As AI capabilities rapidly advance, we need proactive solutions to preserve digital trust. 
