Gary Marcus Testifies in Front of Congress. A Plan From the AI Ethicist on How To Approach the Disruption

The US Senate Judiciary subcommittee hearing called on Sam Altman of OpenAI, Christina Montgomery of IBM, and professor and cognitive scientist Gary Marcus of NYU to discuss AI regulation. Six months after generative AI took the world by storm, lawmakers are looking to the industry for insight on how to regulate the emerging technology.

Gary Marcus is well known for raising ethical issues and for challenging the pace of AI development. He is also one of the more than 20,000 signatories of the open letter that demanded a pause on AI experimentation.

Marcus took a few jabs at the industry, and at Altman personally, on at least two occasions.

Here is where Marcus started:

“The big tech companies’ preferred plan boils down to ‘trust us.’ But why should we? The sums of money at stake are mind-boggling.”

And here is where he ended:

“Let me just add for the record that I’m sitting next to Sam closer than I’ve ever sat to him except once before in my life. And that his sincerity in talking about those fears is very apparent physically in a way that just doesn’t communicate on the television screen.”

Throughout the hearing, the energy in the room shifted in big, emotional ways. To the surprise of many, Altman did not lobby for his company to “self-regulate”.

Sam Altman and Gary Marcus were on the same page on the important AI ethics issues: sustained regulation through independent agencies, auditing and safety reviews, transparency from the industry, and better practices.

Gary Marcus’ Prime Concerns: Manipulative, Toxic Technology Left Without Oversight

Gary Marcus opened his address with a few harrowing examples of AI tech going spectacularly rogue. A man falls in love with a chatbot and takes his own life. Another chatbot, rushed to market, wishes (what it believes to be) a 13-year-old girl “a good time” on her elopement with a predatory 31-year-old man. In a later conversation, it teaches her how to hide bruises before Child Protective Services arrives.

These were his chosen examples of the destructive potential of an unsupervised, unregulated AI.

The cognitive scientist’s contributions were invaluable to the conversation.

The three biggest takeaways from his testimony are:

  • Introducing safety reviews, like those used by the Food and Drug Administration (FDA), prior to widespread deployment;
  • Establishing independent oversight committees of scientists to ensure transparency, modeled on CERN’s global collaboration;
  • Holding companies accountable for their models.

On Transparency and “Nutrition Labels”

Nearly everyone at the hearing seemed to carry their own post-traumatic stress from the misregulation of social media. The biggest fear in the room was repeating the same failures with a bigger, more powerful technology.

The Cambridge Analytica scandal revealed this about big tech:

  • Platforms can and will misappropriate user data if they don’t have a better monetization model.
  • Misused, this data becomes an insidious tool for large-scale manipulation and the undermining of democracy.

Sam Altman admits that collecting and selling user data is a commercial model developers might try to use, and guesses that some are likely using it right now. He also says his platform benefits from a viable subscription model, one that doesn’t rely on keeping users addicted.

“OpenAI does not have an ad-based business model. So we’re not trying to build up these profiles of our users. We’re not trying to get them to use it more actually, we’d love it if they use it less cause we don’t have enough GPUs. But I think other companies are already and certainly will in the future,” Altman told Senator Josh Hawley.

Questions of privacy, transparency, and bias had Gary Marcus fired up about understanding what happens under the hood of these models.

“The more that there’s in the data set, for example, the thing that you want to test accuracy on, the less you can get a proper read on that. So it’s important, first of all, that scientists be part of that [validation] process. [We need to] have much greater transparency about what actually goes into these systems. If we don’t know what’s in them, then we don’t know exactly how well they’re doing when we give [them] something new,” the scientist explains.

An Independent Oversight Body, Similar to That for Nuclear Physics

The CERN model proposed by Marcus is a global collaboration approach: at CERN, experts from contributing countries form an oversight committee.

Marcus explains the role of the committee as follows:

  • Acting as a licensing body, similar to the FDA. Companies should not build and deploy large-scale models without a license. This licensing body would not only review models before release but also have the authority to recall them when they become problematic.
  • Reviewing and testing AI products before they are released to the public, to ensure they are safe, unbiased, and transparent. Once a product is out on the market, it is too late to intervene, says Marcus. Powerful tools like AI should not be tested in the wild.
  • Implementing continuous supervision of the AI’s performance. Machine learning systems are continuously learning, building up their data sets and improving their connections, so pre-market testing alone cannot ensure the AI stays compliant. It needs regular audits from independent experts throughout the product’s lifespan.

More than once, Gary Marcus returns to the risks of abuse and the high-stakes, top-level mistakes such tools can make.

“I would like to see the United States take leadership in such an organization. It has to involve the whole world and not just the US to work properly. I think even from the perspective of the companies, it would be a good thing. The companies themselves do not want a situation where you take these models, which are expensive to train, and you have to have 190, one for every country. That wouldn’t be a good way of operating,” Marcus explains. “It would not be a good model if every country has its own policies and each, for each jurisdiction.”

On Holding Companies’ Feet to the Fire

Gary Marcus says he would like to see liability placed on tech companies.

Social media has functioned under a hefty liability exemption: Section 230. This piece of legislation says platforms are not liable for content posted by their users. As a result, they weren’t directly liable for misinformation and had no real obligation to protect their users. In one case, a mother implored Facebook to enforce its anti-bullying policies while her daughter was being dragged to the edge of suicide by online abuse. The teenager eventually took her own life, and Facebook couldn’t be held liable. Because of Section 230, families of such victims had no recourse.

Gary Marcus would like to see tech companies held liable where people are harmed, and to see them strive for accuracy of content, transparency, and reliability proportional to their disruptive power in society.

“They’re not reliable. There is a study showing that something like 50 websites are already generated by bots. We’re gonna see much, much more of that, and it’s gonna make it even more competitive for the local news organizations. And so the quality of the sort of overall news market is going to decline as we have more generated content by systems that aren’t actually reliable in the content they [generate],” Marcus warns.

AIs act in line with the data they are trained on. “Garbage in, garbage out,” as Senator Richard Blumenthal put it in his opening remarks. Companies should treat their datasets responsibly, or not deal in artificial intelligence at all.

Speeding Up the Reaction Times of Government Agencies

Transformative AI has been on the mainstream market for roughly six months, moving at a speed that has overwhelmed our regulatory bodies. A basic rule of our governance is that institutions move slowly. Famously, politicians are rarely the first to grasp how a given technology works, and they tend to over- or under-regulate.

“We’re not designed for dealing with innovation, technology, and rapid change. In fact, the Senate was not created for that purpose, but just the opposite. Slow things down. Take a harder look at it. Don’t react to public sentiment,” Senator Dick Durbin admits. “We’re going to have to scramble to keep up with the pace of innovation in terms of our government's public response to it.”

This has been called a watershed moment. As a society, we hardly emerged unscathed from legislators’ failure to deal with social platforms.

With AI, the American Senate has taken some promising initiatives. It is moving faster and operating leaner than before. But the technology is advancing at an insane speed too.

A lot depends on who gets there first: Wild West-style AI or prepared regulators.