AI Regulation, the Meeting on Capitol Hill. “Society Grants Our License to Operate”, Says Christina Montgomery

Christina Montgomery spoke to the US Congress about AI regulation. Generated with MidJourney

Legislators on Capitol Hill asked AI experts and industry leaders to weigh in on the issue of AI regulation. While the US is great at producing disruptive technologies, it is famously behind in regulating them. With even more powerful AIs in sight, Senators are looking to understand the technology well enough to craft AI regulation that makes sense. Christina Montgomery testified on the thorny issue; she and Sam Altman were sworn in as industry representatives. The hearing raised long-standing ethical issues.

Christina Montgomery is IBM’s Vice President and Chief Privacy and Trust Officer. She oversees compliance and sits on IBM’s ethics board. She is also a member of the AI Commission of the United States Chamber of Commerce and of the National AI Advisory Committee.

AI Regulation Needs To Define Degrees of Risk, Says Christina Montgomery

Montgomery’s proposal resembled the EU model, widely seen as balanced and a welcome framework for the Wild West of the AI industry. The European Union has already drafted an Artificial Intelligence Act. The document proposes that the EU develop rules proportional to an AI’s risk, from unacceptable-risk applications down to high-, limited-, and minimal-risk ones. Instead of a blanket approach, use cases would be judged individually.

“IBM urges Congress to adopt a precision regulation approach to AI. This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. (...) The strongest regulation should be applied to use cases with the greatest risks to people and society. There must be clear guidance on AI uses or categories of AI-supported activity that are inherently high risk,” says Montgomery.

The European Union AI Act defines four degrees of risk: unacceptable, high, limited, and minimal.

Unacceptable-Risk Applications: Defined in Europe, Discussed in the US

In the EU, unacceptable risks are those that threaten people’s safety, privacy, and fundamental rights.

The issue of manipulating the public has come up repeatedly, including during the Capitol Hill hearing, as AIs have great potential for creating deepfakes and swaying opinion. Disinformation is another recurring problem, as AI hallucinates with great confidence. In a famous case, a chatbot falsely accused a completely innocent professor of sexual harassment and even cited a non-existent Washington Post article as its source.

Issues like this one can be insignificant when you’re generating cat photos. But they can undermine democracy altogether when the same tool is deployed in an election or used as a research tool by journalists. Proper legislation would differentiate between those use cases.

On the Issue of Transparency and Explainability of AIs

Montgomery’s recurring solution was to demand transparency and accountability from developers.

  • Consumers should know when they are interacting with an AI; this would make those who use AI to manipulate opinions or scam others liable.
  • Companies should demonstrate impact and check for bias. Developers would need to account for issues like the black-box problem and be able to show the mechanics behind an AI’s decision-making.

“Businesses also play a critical role in ensuring the responsible deployment of AI. Companies active in developing or using AI must have strong internal governance, including, among other things, designating a lead AI ethics official responsible for an organization’s trustworthy AI strategy. [They should] set up an ethics board, or a similar function as a centralized clearinghouse for resources to help guide implementation of that strategy”, Montgomery explains.

In the regulatory environment, “internal governance” is code for “don’t regulate us; we’ll deal with the issue internally.” With tech in the US, that approach has previously proved more of a miss than a hit.

Sure, Transparency, but Don’t Check Us on It

Montgomery advocated multiple times for transparency: “disclosure of the model and how it performs and making sure that there’s continuous governance over these models”. On the other hand, she had a back-and-forth with Senator Lindsey Graham on the issue of creating an agency to oversee AI development.

Christina Montgomery did not see the need for a government agency staffed with independent experts to regulate the industry. In her opinion, internal company governance would be enough to enforce ethics in the development of AIs.

“I just don’t understand how you could say that you don’t need an agency to deal with the most transformative technology maybe ever,” said Graham eventually.

On the other side of the panel, both Sam Altman of OpenAI and Gary Marcus, an AI ethicist and professor at New York University, agreed there is a dire need for a government agency to oversee operations.

In a later interview with Bloomberg, Gary Marcus said: “We all agreed over there, almost everybody, except the IBM executive, that we need some kind of national agency governing AI, and probably some global agency doing that.”

In the end, transparency itself, as proposed by Christina Montgomery, depends on an independent audit.