Sam Altman Urges Congress To Respond With AI Legislation. “If This Technology Goes Wrong, It Can Go Quite Wrong”

Sam Altman spoke about AI legislation in front of Congress. Generated by MidJourney

The US Congress invited AI industry leaders Sam Altman (OpenAI), Christina Montgomery (IBM), and Professor Gary Marcus (NYU) to discuss AI’s immense potential and risks. In a rare consensus, lawmakers and tech representatives agreed that companies should be forced to tread responsibly around AI development.

Sam Altman, in particular, agreed that AI can be dangerous if misused and called for legislation covering both his company and its competitors. In the words of Senator Dick Durbin of Illinois, “What I’m hearing today is stop me before I innovate again!”

The three admitted under oath that AI’s technical issues need to be fixed before the technology takes on a more high-stakes role in society. With more than a few leading AI products in the works at the company he runs, Sam Altman agreed to every proposal in the room. He said yes to data privacy, industry and company accountability, copyright protections for artists, and transparency.

Congress Did Not Give Sam Altman a Dressing-Down

Sam Altman is one of several industry leaders to have testified before Congress: Mark Zuckerberg appeared in 2018 and Jeff Bezos in 2020. But while Zuckerberg and Bezos had a tug-of-war with lawmakers, Altman openly accepted that his industry desperately needs oversight.

This is a new move in the world of tech. Most large tech companies have done their best to stay outside the reach of the law on important issues like data protection, content accountability, and privacy regulation. Mark Zuckerberg famously lobbied against privacy regulations during the Cambridge Analytica scandal and also voiced opposition to data protection frameworks like the GDPR. Jeff Bezos’s Amazon is famous for its all-out war on labor unionization, online sales regulations, sales taxes, and antitrust and competition scrutiny.

With OpenAI’s products at the peak of their popularity, the industry expected the same pushback on regulation from the company’s CEO.

Altman said this in front of Congress:

“My worst fears are that we, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. (...) It’s a big part of why I’m here today and why we’ve been here in the past and we’ve been able to spend some time with you. I think if this technology goes wrong, it can go quite wrong. We want to be vocal about that. We want to work with the government to prevent that from happening.”

Here are just a few of the issues Sam Altman eagerly agreed on: copyright, privacy, oversight and audits, and licensing. As a supplementary solution, he advised Congress to consider limiting the computing power available to developers.

Copyright has been a hot topic in the AI industry:

(1) AI companies need, and will continue to need, creators’ work to train their models. Dall-E and MidJourney are not in their final form.

(2) Even while they are feeding their work to the tech industry, creators risk being displaced by AIs.

Companies need large amounts of data for training but can’t reasonably buy that much data at market value. They will also likely keep needing fresh data; otherwise their models, as they exist today, might stagnate. Unless they find better ways to train their models, developers are stuck stealing the works of artists.

Senator Marsha Blackburn of Tennessee raised the issue of “owning the virtual YOU”. What happens to artists’ consent, and how is their work being used?

“I think it’s important to let people control their virtual you, their information in these settings,” the Tennessee senator said. “You’re training [Jukebox] on these copyrighted songs, these MIDI files, these sound technologies. So as you do this, who owns the rights to that AI-generated material? And using your technology, could I remake a song, insert content from my favorite artist, and then own the creative right to that song?”

This question is broader than just the music industry and pertains to all copyright holders: visual artists, writers, movie creators, songwriters, and singers.

Altman responded: “We think that creators deserve control over how their creations are used and what happens beyond the point of them releasing it into the world. Second, I think that we need to figure out new ways with this new technology that creators can win, succeed, have a vibrant life. And I’m optimistic that this will present it.”

Altman says OpenAI is in discussions with a copyright office to find real solutions for artists, including an economic model to pay them for their work.

He agrees creators should control how their work is used (a voice, too, is subject to copyright). Artists can therefore say they don’t want their voices used to train models.

Privacy Issues. Where Social Media Failed Spectacularly, AI Can’t Afford To

Throughout the hearings, social media served as the cautionary tale. Facebook, Twitter, and TikTok were held up as examples of ethical and regulatory failure.

Here are the main concerns tied to privacy:

  • Training systems on user data. It’s worth mentioning that OpenAI does not currently train on user data, and Altman says the company doesn’t plan to;
  • Profiling users, as in the Cambridge Analytica scandal, and crafting personalized messages (advertising, campaigning, manipulation) for them. Gary Marcus warns this doesn’t even have to happen at the developers’ initiative: a poorly regulated, rushed-to-market AI can easily glitch its way into the problem.

Altman wanted to clearly differentiate OpenAI’s business model from ad-based ones. He explained that he plans to maintain a subscription model, under which the company has no incentive to use or monetize user data. Under oath, Altman said ChatGPT deletes user data in accordance with legal requirements.

Their monetization scheme is not built on selling private data. “We’re not trying to build up these profiles of our users,” he explains. “We’re not trying to get them to use it more. Actually, we’d love it if they use it less cause we don’t have enough GPUs.” He does surmise that some AI companies are already building such profiles and will probably keep taking this approach in the future.

On Transparency. We Need To Know More About How the Model Works

Altman advocated for critical transparency “to understand the political ramifications, the bias ramifications, and so forth.” He admitted: “We need transparency about the data. We need to know more about how the models work. We need to have scientists that have access to them.”

This is a bigger issue about how machine learning works under the hood. You can’t expect a lot of transparency from a system that isn’t capable of thoroughly explaining itself. There’s a blind spot (the black box) and not even developers can make sense of it.

The industry’s promise of transparency is therefore thorny. An obligation of complete transparency is one nobody can comply with, and nobody could reasonably build use cases that require complete transparency while the black box remains an issue.

This is precisely the point. In areas where we worry about bias, we shouldn’t use AI without a human in the loop. These are the high-risk and unacceptable-risk categories as the EU has cataloged them, and they seem to be carrying over into the US conversation.

Should Companies Get a License To Produce an AI Tool?

A license for developers would ensure companies start building only under certain restrictions.

“If you make a ladder and the ladder doesn’t work, you can sue the people that made the ladder. But there’s some standard out there to make a ladder,” Sen. Lindsey Graham pressed.

Sam Altman did agree that there should be an agency that grants tech companies permission to work with AI, or restricts them from doing so. “We’d be enthusiastic about that,” he said.

Senator Lindsey Graham proposed an agency with the power to grant an AI developer a license to operate, but also to revoke it if the AI develops biases.

Here is what this would entail:

  • oversight from a commission of independent scientists;
  • an audit before the products are released to the public;
  • continuous oversight throughout the life of the product.

AI systems are products that change throughout their lifespan. They need to be checked regularly to make sure they don’t develop biases.

“I think that [independent experts should] hear the results of our test, of our model before we release it. Here’s where it has weaknesses, here’s where it has strengths. Independent audits for that are very important. These models are getting more accurate over time,” Altman agreed. “This technology is in its early stages. It definitely still makes mistakes. We find that users are pretty sophisticated and understand where the mistakes are. I worry that as the models get better and better the users can have sort of less and less of their own discriminating thought process around it.”

On Accountability. ChatGPT Doesn’t Need or Want To Be Subject to Section 230

Senators brought up Section 230 on multiple occasions. Section 230 doesn’t apply to AI platforms; it would be hard to stretch it to cover them, and doing so would probably be pointless.

This piece of legislation grew out of a congressional effort to extend free-speech protections to platform providers. The effort started from a question like this: if someone posts an illegal ad on a billboard, is the person who placed it liable, or the billboard’s owner?

In 1996, Congress decided that everyone is responsible for their own speech, but nobody can be held liable for what other people said. While this makes sense at an excruciating Thanksgiving dinner with weird relatives, it has created a world of trouble in regulating social media platforms. For example, while Facebook promises protection from bullying on the platform, it can’t be put on trial when it fails to deliver. Facebook is not liable for false information spreading through the platform, and there is no incentive for it to develop fact-checking.

This section has very little to do with platforms like ChatGPT, Dall-E, or MidJourney. As Altman explains, these are tools used by individuals, not communities. They don’t explicitly need to be exempted from liability for what their users say.

Altman in fact said more than once that developers like OpenAI should not have complete immunity for harm caused by their products.

“I do think for a very new technology, we need a new framework. Certainly, companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well, and also people that will build on top of it between them and the end consumer. How we want to come up with a liability framework is a super important question,” Altman said.

A Solution in Limiting Computing Power

The strength of an AI system is largely a matter of scale: the larger the model, the more capable it becomes. An AI trained on a small dataset can only deliver approximate results to prompts. For example, a small image generator won’t render a legible road sign or the text on a t-shirt. With scale, the model gets better and better until it is finally capable of putting the actual text into the image.

The more processing power that goes into a system, the more capable it becomes. This, as industry leaders explain, can become an efficient leash on AI.

“The easiest way to do it, I’m not sure if it’s the best, but the easiest would be to talk about the amount of processing that goes into such a model. So we could define a threshold of computing power (...) that says above this amount of compute you are in this regime.”

This ties into the European approach, where AI models are cataloged according to their risk implications, from minimal to unacceptable. Beyond an acceptable level of risk, legislators can simply yank on the computing-power leash.
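To make the compute-threshold idea concrete, here is a minimal sketch of how such a rule could be expressed. The threshold numbers, tier names, and the `regulatory_tier` helper are hypothetical, invented purely for illustration; neither the hearing nor the EU proposal fixes these specific values.

```python
# Hypothetical sketch of a compute-threshold rule.
# The FLOP thresholds and tier names below are invented for illustration;
# no regulation discussed at the hearing defines these numbers.

TIERS = [
    (1e22, "minimal oversight"),               # small models: light-touch rules
    (1e24, "license plus pre-release audit"),  # mid-size models: stricter regime
]
TOP_TIER = "continuous oversight, possibly restricted"

def regulatory_tier(training_flops: float) -> str:
    """Map a model's training-compute budget to a (hypothetical) regulatory tier."""
    for ceiling, tier in TIERS:
        if training_flops < ceiling:
            return tier
    return TOP_TIER

if __name__ == "__main__":
    for flops in (5e21, 3e23, 2e25):
        print(f"{flops:.0e} training FLOPs -> {regulatory_tier(flops)}")
```

The appeal, as Altman’s quote suggests, is that training compute is easy to measure, which makes it a convenient, if crude, proxy for capability.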

Altman gives a few examples of risks that would cross into the unacceptable threshold: a model that can persuade, manipulate, or influence a person’s behavior or beliefs, or a model that could help create novel biological agents (bioweapons).

Why This Meeting Is Important. The US, the Country With the Most To Regulate

The US is the main AI regulator simply because it is home to most of what there is to regulate.

China’s biggest concerns are keeping its firewall up and enforcing censorship. It has its own red AI iterations, but they serve more as a case study in what happens when an AI is too stressed about censorship.

The EU has come up with on-point regulations quite fast, but it has little of its own to regulate.

America’s Silicon Valley is the powerhouse of AI innovation. This puts pressure on Congress to create effective regulations for industry leaders like OpenAI and Alphabet. The goal is to strike a balance between protecting Americans from breakneck development speeds and continuing to shelter innovation.