Transparency—Like Charity—Begins at Home? 

As the debate about regulating artificial intelligence applications heats up, much is being made of the need for transparency.

For the most part, AI algorithms “do their thing” in a “black box” that renders the basis for their decisions opaque even to their developers. Transparency—and its bedfellows “explainability”, “interpretability” and “understandability”—features prominently in the development of standards for the ethical and acceptable use of AI applications. The words “transparent” and “transparency” appear 36 times in the 42-page National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework—guidelines that have become the basis of industry self-governance amongst AI developers and that underpin the draft AI RMF Generative AI Profile, prepared pursuant to President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.

The NIST Risk Management Framework defines trustworthy AI applications as those having the following characteristics:

“valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” (emphasis added)

Furthermore:

“While all characteristics are socio-technical system attributes, accountability and transparency also relate to the processes and activities internal to an AI system and its external setting. Neglecting these characteristics can increase the probability and magnitude of negative consequences.”

These characteristics interact with one another, as the framework illustrates.

Transparency is separate and distinct from explainability and interpretability. The latter two describe the ability of humans to understand and explain the results of decisions made by algorithms in the “black box”. Transparency means that all relevant characteristics of the AI—including features such as the data it was trained on and the standards and tests used to assess its performance against those characteristics—will be made available to stakeholders, so that both the AI itself and the systems and processes going into its development and operation can be audited independently of the firms developing and operating it.

Meeting these high standards is no trivial matter for AI developers and deployers—particularly those offering generative tools such as the large language models that have grabbed attention since the launch of ChatGPT in November 2022. While what is made transparent will depend to some extent on the context and use case for each application, the draft NIST Generative AI Profile contains some 467 action areas to be considered, documented, addressed and made transparent to auditors and stakeholders if an application is to conform to its standards. The EU AI Act imposes even more onerous obligations on developers and deployers of high-risk AI systems destined for use in its territory, which must be met simply to obtain permission to place those systems on the market.

In short, transparency will require developers and deployers to practically lay themselves bare if they wish to meet the legally imposed (EU) or voluntary industry (US) standards required of “safe, secure and trustworthy” AI systems.

With such stringent obligations imposed on developers and deployers, the question arises of what standards of transparency are to be required of the governments and civil society entities that demand these high standards in the first place (and that may also be engaged in auditing the operators bound by their rules). For if the applications they oversee are to be trusted by end users, their own processes and actions must also be transparent.

It will not be possible to engender the hoped-for trust if standards of transparency follow those recently observed in the Christchurch Call, the multi-stakeholder entity established in 2019 to reduce the quantity of terrorist and violent extremist content online. The Call’s advisory network (CCAN), led by the Secretariat—New Zealand and French government officials—“indirectly threatened” the authors of an audit report (the Internet Governance Project (IGP) at Georgia Tech, a leading international authority on internet matters) should their report on the Call’s performance be released. Apparently, these findings “would be highly embarrassing for all involved” because they called out government actions inconsistent with the Call’s objectives.

According to the IGP, when the finished reports were sent to stakeholder governments “there was significant pushback”, with threats that “if the reports were published, some countries would refuse to engage with CCAN in the future, while interactions with others would become ‘more senior, more formal and more strained.’” The IGP subsequently withdrew from participating in the Christchurch Call.

The lesson here is that if governments and civil society entities require AI providers and deployers to be ethical and transparent in all their activities, then they too must hold themselves to the same high standards in their own monitoring and regulation of AI. If users cannot trust the gatekeepers, they can hardly trust the gated.

Transparency—like charity—begins at home.
