Charting a Bipartisan Course: The Senate’s Roadmap for AI Policy

The Bipartisan Senate AI Working Group, comprising Senate Majority Leader Chuck Schumer (D-NY) and Sens. Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM), has released its long-anticipated Roadmap for Artificial Intelligence Policy, based on nine forums held throughout 2023 involving 150 experts from across the AI ecosystem. The roadmap covers a wide range of topics, including the importance of strategic investments to maintain US leadership in AI innovation while mitigating potential risks and negative societal impacts.

[Image: Midjourney illustration of a roadmap with the word “AI” in the middle]

The roadmap is worth reading in its entirety, but a few takeaways are worth highlighting:

A Bipartisan Approach to Promoting AI Innovation

The roadmap strikes a balance between promoting AI innovation and acknowledging emerging risks. Instead of endorsing research pauses, expansive new regulatory schemes, or complex licensing systems, the overarching guidance is to have relevant congressional committees develop legislation to “support AI advancements and minimize potential risks from AI.”

The roadmap appears to reflect a genuine bipartisan compromise. By incorporating diverse perspectives and fostering collaboration across party lines, the guidance aims to create a more comprehensive, effective, and durable framework for AI governance.

Building a Robust Testing and Evaluation Infrastructure for AI Safety

The roadmap acknowledges the importance of proactively mitigating potential risks associated with AI systems. It takes a measured approach to AI regulation by supporting AI safety systems and risk-based frameworks. It seeks to establish a solid foundation for AI safety by calling for increased funding for AI efforts at the National Institute of Standards and Technology (NIST), establishing AI testing and evaluation infrastructure, and committing additional funding to the US AI Safety Institute. The endorsement of voluntary commercial AI auditing standards is a step in the right direction, as it encourages industry collaboration in ensuring AI safety.

Bolstering Human Capital

The working group emphasizes the need for additional worker re-skilling, along with long-overdue improvements to the US immigration system for highly skilled STEM workers. Attracting skilled international talent is essential, but prioritizing the cultivation of domestic STEM talent is equally important. Just as relying on foreign manufacturing creates supply-chain and national-security risks, so too does relying on foreign production of STEM talent without bolstering our domestic supply.

Recent national assessments show the most significant drop in math performance among 4th and 8th graders since these assessments began in 1990, underscoring the fragility and severe limitations of our nation’s STEM talent supply chain. To address this, our nation’s leaders must invest in STEM education at all levels, from primary through postsecondary, to build a strong foundation that allows young people to participate in the STEM workforce.

Congress should also enact the Economic Innovation Group’s proposed Heartland Visa to create a flexible pathway for skilled immigrants to settle in communities confronting population or economic stagnation. This would serve as an economic catalyst for human capital and entrepreneurial activity beyond coastal technology hubs.

Protecting Children in the Age of AI

As AI technologies rapidly advance, there are already alarming instances of AI being used to generate child sexual abuse material and of teenagers creating deepfakes of their peers. The roadmap rightly calls for urgent reforms to safeguard children.

Equally commendable is the AI working group’s emphasis on studying the impact of AI on young people’s mental health. The empathy that makes AI systems so powerful as tutors or medical assistants could have unintended consequences if it leads young people to rely on these systems for friendship and socialization at the expense of real-life, face-to-face human interaction. Platforms like Character.AI attract 3.5 million daily users who spend an average of two hours a day interacting with AI-powered companions. If AI becomes a substitute for human connection, our nation’s loneliness epidemic will only worsen.

We do not yet understand how these technologies will affect young people, making it crucial for the government, companies, and academia to proactively examine their potential effects on children’s well-being, sense of self, and social connections.

Leveraging Grand Challenges

Finally, it’s worth highlighting the roadmap’s endorsement of “AI Grand Challenge” programs, modeled after successful programs run by the US Defense Advanced Research Projects Agency, the Department of Energy, the National Science Foundation, and XPRIZE. These programs incentivize and accelerate innovative breakthroughs on difficult challenges by attracting a diverse range of innovators with varied perspectives and solutions.

Although this measured strategy may not satisfy those calling for immediate, sweeping AI regulations, it creates an environment where AI innovation can flourish while still ensuring safety and accountability. It sends most decision-making back to the appropriate congressional committees, acknowledging the need for sector-specific expertise. By striking a balance between fostering AI development and implementing necessary safeguards, the roadmap lays the groundwork for a more nuanced, adaptable, and responsible approach to AI governance—one that can evolve alongside the rapid advancements in AI technology. Ultimately, this foundation will prove crucial in navigating the complex landscape of AI and ensuring that its benefits are harnessed while mitigating potential risks.
