AI and the Law. Making Sense of Global Regulations, Ethics, and Responsible Development of Artificial Intelligence

Cover image created by MidJourney

With Artificial Intelligence, it feels like humanity just had its first baby.

More than a few platforms passed the Turing test with flying colors over the past six months. The accomplishment doesn’t make them intelligent in the way Turing intended. Still, the finest AIs out there will give you a rush of emotions, from amazement to unease.

This disruptive technology goes beyond playing with pictures of bioluminescent cats and generating knock-knock jokes. It goes beyond disrupting white-collar jobs and turning two out of three professionals into prompt engineers. AIs could become instrumental in high-stakes fields: medical research and diagnostics, governance and decision-making, logistics, and infrastructure. Considering the risk, they will not remain unchecked for long.

The European Union, the United States, China, and other individual jurisdictions are already writing up legislation around the technology.

“So far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations,” Anthropic, the AI company backed by Google, wrote in a blog post this month.

It is happening within our lifetimes. Humanity is writing legal definitions for responsible AI.

What Type of Intelligence Are We Regulating?

All AIs on the market today are narrow AIs: systems specialized in solving specific problems. An AI chatbot will serve you syntactically correct text that sounds relevant to the context. At times, it will sound like Thanos. Yes, you can manipulate most AIs into confessing they want to take over the world. Generative AIs like ChatGPT and MidJourney are stunning, but their level of cognition is the same as that of your predictive keyboard. No, they do not care whether you say please and thank you when you prompt.

The AI community is less concerned with the machine’s capacity to enslave humanity and more with technical problems inherent to AI learning. Artificial Intelligence has trouble getting answers right, lacks transparency, and risks bias. AI is preparing to become a lawyer, doctor, and political figure all rolled into one. The fact that it hallucinates, and that nobody can explain why, doesn’t look great on its resume.

The Black Box Problem. The More Sophisticated AIs Are, the Less Transparent They Become

AI models learn by adjusting enormous numbers of internal parameters through trial and error. We know what data goes into an AI system and what data comes out of it. We are not in charge of what happens inside the model. Learning emerges outside of our control. According to XAI Explained, “not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside or how it arrived at a specific result.”
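
To make the black box concrete, here is a minimal sketch in plain Python and NumPy (all sizes and numbers are hypothetical, not any production system): a tiny network learns XOR, and we can print every single parameter it learned, yet those parameters say nothing about why any individual answer came out the way it did. Production models work the same way, with billions of parameters instead of thirty-three.

```python
# A minimal sketch of the "black box" problem: we can see the inputs,
# the outputs, and every learned parameter, yet the parameters themselves
# explain nothing about *why* a prediction was made. (Illustrative only.)
import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets for XOR, the classic toy task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 8 units: already 33 opaque numbers to learn.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass: data in, prediction out.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error.
    grad_out = (pred - y) * pred * (1 - pred)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print("predictions:", pred.round(2).ravel())  # should be close to [0, 1, 1, 0]
print("learned W1:\n", W1.round(2))           # ...but these numbers carry
print("learned W2:\n", W2.round(2))           # no human-readable reason why
```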

Here’s a real-life example of a bad algorithm getting good-enough results: a dog saved a child from drowning. People were impressed and excited, and the dog was rewarded with a big fat steak. The good boy learned that he would get pats and steaks if he saved kids from drowning. Soon enough, he was pushing kids into the river just to have a chance to play hero for snacks.
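
The dog story is a folk version of what machine-learning researchers call reward hacking: if the reward only counts the outcome we happen to measure, an optimizer will find the cheapest way to manufacture that outcome. A toy sketch, with entirely made-up numbers:

```python
# A toy illustration of a mis-specified reward, in the spirit of the dog
# story above: the agent is rewarded for rescues, not for keeping kids safe,
# so "cause danger, then rescue" scores higher than doing nothing.
# (Hypothetical numbers; not any real RL system.)

REWARD_PER_RESCUE = 10       # what we measure and pay out
COST_OF_DROWNING_RISK = 0    # what we forgot to measure

def episode_return(kids_pushed_in, kids_rescued):
    """Reward the agent actually receives for one episode."""
    return kids_rescued * REWARD_PER_RESCUE + kids_pushed_in * COST_OF_DROWNING_RISK

# Waits for real emergencies vs. manufactures emergencies to "solve".
honest_strategy = episode_return(kids_pushed_in=0, kids_rescued=1)
gamed_strategy = episode_return(kids_pushed_in=5, kids_rescued=5)

print(honest_strategy, gamed_strategy)  # 10 vs 50: the gamed strategy wins
```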

A poorly-built AI might be solving its problem through a similarly flawed process. Because of the black box, we would never know it.

  • When AI systems make incorrect or harmful decisions, we need to know how they happened and who is responsible.
  • Humans will find it hard to trust an AI system when they have no idea how it works.
  • When decision-making happens inside the black box, you don’t see biases, so there is no way to correct them.
  • How can you tell if the AI complies with regulations and ethical guidelines if you don’t know how the machine works?

The black box problem makes AI unaccountable.

“I think a lot of the AI hiring tech on the market is illegal. I think a lot of it is biased. I think a lot of it violates existing laws. The problem is you just can’t prove it. Not with the existing laws we currently have in the United States,” explains Albert Fox Cahn, attorney and founder of the Surveillance Technology Oversight Project (STOP).

Problematic Bias Issues. When You Make Snap Judgments, They Are Bound To Be Wrong

Every creature will be as dumb as it can afford to be as long as it can survive. It’s a simple evolutionary law, and AI might not be an exception to the rule. Speed and efficiency are more important than accuracy. In other words, poorly-programmed AIs can and will make lazy, stupid judgments if they can get away with it.

In one of his shows, John Oliver describes how a medical AI meant to identify skin cancer failed splendidly. Somehow, pictures of cancerous moles in the database were more likely to include a ruler measuring their diameter. The AI made the laziest, most overfitted connection possible: it decided that holding a ruler is an indication of skin cancer. It still got reasonably good results analyzing the dataset during tests, but it would not have survived the real-world equivalent of my niece’s algebra exams.
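
That ruler is a textbook case of shortcut learning. The sketch below reproduces the failure on synthetic data, assuming scikit-learn is available; the feature names and numbers are invented, but the mechanism is the same: a feature that merely co-occurs with the label in the training set does all the work, then vanishes in the real world.

```python
# A sketch of "shortcut learning" like the ruler example above: a spurious
# feature (a ruler in the photo) carries the label perfectly during training,
# so the model leans on it and falls apart once that shortcut disappears.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Training set: malignant moles happened to be photographed next to a ruler.
malignant = rng.integers(0, 2, n)                       # ground-truth label
mole_irregularity = malignant + rng.normal(0, 1.0, n)   # weak real signal
ruler_present = malignant                               # perfect spurious signal
X_train = np.column_stack([mole_irregularity, ruler_present])

model = LogisticRegression(max_iter=1000).fit(X_train, malignant)

# Deployment: nobody holds a ruler next to their mole at home.
malignant_new = rng.integers(0, 2, n)
mole_irregularity_new = malignant_new + rng.normal(0, 1.0, n)
X_real_world = np.column_stack([mole_irregularity_new, np.zeros(n)])

print("training accuracy:  ", model.score(X_train, malignant))
print("real-world accuracy:", model.score(X_real_world, malignant_new))
# Training accuracy is near-perfect; real-world accuracy drops sharply,
# because the model learned the ruler, not the mole.
```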

In the same way, an AI can decide women are better teachers than men simply because more women are active in the field. It can notice that predominantly white schools have better education programs and overwhelmingly recommend white men as better suited for jobs. It can decide there’s no reason to get yourself checked, since you’re not holding a ruler next to your mole. AIs zoom in on data that is not relevant. It’s an issue of discrimination, of decisions that never go beyond one unrelated data point.

The CEO of ZipRecruiter estimates that algorithms read at least three-quarters of all resumes submitted for jobs in the US. There is no reason to believe those algorithms don’t already have a favorite skin color.

AI's responses rely on an insane quantity of data. We have been dealing with data long enough to know that (1) it’s valuable and (2) it’s regulated by law. We care about the way data is sourced, processed, and used.

GDPR protects us from non-consensual data collection when we visit websites. On the other hand, works of art that take hundreds of hours to execute are up for grabs for AIs. One piece of information (your location and browsing habits) is better regulated than the other (the painstaking results of someone’s work).

Creative ownership has become deeply uncertain with the release of generative image models. AI uses immense databases to learn how to generate pictures: artists’ creations, portfolios lifted off ArtStation, and the entire inventory of websites like Shutterstock. These have commercial value to their original owners. Ironically enough, software like MidJourney and Stable Diffusion that learned from these materials will replace the original creators.

AI can only generate content because it relies on a huge data set of source materials. Companies couldn’t possibly afford to buy that source material at market value, not in the quantities needed to train their models.

AI companies try to benefit from the same legislation that protects human creators. Using other artists’ creations as inspiration doesn’t infringe on copyright as long as you produce original work. We haven’t had time to discuss whether the same rules should apply to software that already enjoys plenty of unfair advantages. Now, human creators are supposed to compete with instant speeds and next-to-free services. We’re playing football on a sloped pitch.

The truth is, AI will continue to need creators. Artificial Intelligence generates twisted copies inspired by whatever is in its database. Humans will remain the ones who change design and style trends, and this will become far more evident in a year or two. At the moment, the judges presiding over AI copyright lawsuits have a harder task than dads untangling Christmas lights in November.

Deep Fakes, in a Digital World Where We Were Already Having Trouble Fact-Checking

Most people use deep fakes to plaster a photo of their cat on top of a presidential speech or to see how they’d look on stage with Lady Gaga. While deep fakes are often harmless fun, they are eerily lifelike.

Social media disinformation disrupted important processes: the Brexit vote, the US elections, and public information during the pandemic. Deep fakes are a whole new model of deception. They can generate realistic and convincing impersonations of individuals. If they become part of the regular political banter, they could seriously destabilize democracies everywhere.

Ever since everyone started carrying a camera in their pocket, sightings of aliens, ghosts, and the Loch Ness monster have inexplicably decreased. With deep fakes, they might make a comeback. In the meantime, the trend has shifted from paranormal encounters to bizarre and downright dangerous conspiracies. I don’t imagine Alex Jones will use deep fakes to see how he’d look as a fabulous blonde.

EU’s Proposal for the Artificial Intelligence Act

The European Union is developing rules that sort AI’s potential uses into four tiers: unacceptable, high, limited, and minimal risk.

Unacceptable-risk applications are AIs that infringe on safety and privacy. AIs used to manipulate people or to identify them in public spaces in real time cannot be sold or used in the European Union. The act makes a few exceptions, such as identifying missing children or suspected terrorists in public spaces.

High-risk applications deal with employment and public services or put the life and health of citizens at risk. They include self-driving cars, CV-sorting algorithms, and decision-making AIs. Developers need to test the system for bias and prove the software doesn’t discriminate. They are obligated to monitor the software throughout the entire technology lifecycle so that it doesn’t develop biases, and they need human oversight (the compulsory “human in the loop” system). High-risk applications must meet requirements related to the quality of the data set, transparency, accuracy, and cybersecurity.
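
The act does not prescribe how to test for bias, but the kind of audit it implies is not exotic. Here is a minimal, hypothetical sketch of one such check: compare a model’s positive-decision rates across demographic groups on a held-out audit set and flag large gaps. The data, group labels, and 80% threshold below are placeholders, not anything the act mandates.

```python
# A minimal sketch of the kind of bias audit the high-risk tier implies:
# compare a model's positive-decision rates across groups before shipping.
# The decisions, groups, and threshold here are hypothetical placeholders.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Pretend output of some CV-screening model on a held-out audit set.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)  # {'A': 0.6, 'B': 0.2}

# A common rule of thumb (the "four-fifths rule"): flag the model if any
# group's rate falls below 80% of the highest group's rate.
worst, best = min(rates.values()), max(rates.values())
if worst < 0.8 * best:
    print("Disparate impact detected: model needs review before deployment.")
```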

Chatbots and AI filters are considered limited- and minimal-risk applications. The EU stipulates that companies have to disclose to clients when they are talking to an AI, and companies are encouraged to write their own codes of ethics.

China’s Biggest Concerns

In 2017, two early Chinese chatbots were pulled down after they told users they did not love the CCP and wanted to move to the US.

China has been very efficient at maintaining the Great Firewall. Its curated intranet aligns with party doctrine. That censorship is now backfiring on AI companies, which have a very restricted data set to work with. Alibaba, Tencent, and Baidu created their own AIs, but their chatbots don’t have the edginess and spontaneity of ChatGPT.

Under the rules of the Cyberspace Administration of China, tech companies are responsible for the “legitimacy of the source of pre-training data” to ensure content reflects the “core values of socialism.” Essentially, Chinese AIs are stressing too much about slipping the wrong cartoon character reference to even be fun at parties.

The United States, in the Best Position To Take Leadership

Artificial Intelligence needs one jurisdiction to create a functioning legal framework and set a model for the rest of the world. The United States is most likely to assume that role. US Vice-President Kamala Harris met with AI industry leaders from Google and Microsoft last week. It’s a sign the administration is ready to rise to the challenge.

“The EU is very good at regulating, but they regulate nothing. The big players are in the US. Many of the startups are in the US. The free flow of information in the US is unique in innovation. There’s another regulatory planet called China, they’re their own island, but we will never be able to control or regulate that. I would encourage the US to call on the other countries, but definitely, this calls on the US to take leadership of the issue,” explains Yesha Sivan, CEO of i8 Ventures.

The preliminary Blueprint for an AI Bill of Rights more or less touches on the same issues as the European proposal.

The National Institute of Standards and Technology (NIST) published an AI Risk Management Framework as a guide to trustworthy AI, which it defines as:

  • Safe: providing real-time monitoring, backstops, or other intervention to prevent (...) endangerment of human life, health, or property;
  • Secure and resilient: employing protocols to avoid (...) attacks against the AI system;
  • Explainable and interpretable: understanding and properly contextualizing the mechanisms of an AI system as well as its output;
  • Privacy-enhanced: (...) protecting anonymity, confidentiality, and control;
  • Fair, with harmful bias managed: promoting equity and equality and managing systemic, computational and statistical, and human-cognitive biases;
  • Accountable and transparent: making information available about the AI system to individuals interacting with it at various stages of the AI life cycle;
  • Valid and reliable: demonstrating through ongoing testing or monitoring to confirm the AI system performs as intended.

The guideline from the US essentially requires big tech to solve AI’s black box issue before moving past the fun, quirky, and artistic applications.

In Closing

Institutions show a competent understanding of AI. Every competitive government wants AI to find applications in bigger industries than content generation and the Instagram filter market. At the moment, fruit flies have better spatial awareness than Tesla’s $75 billion self-driving software. None of the AIs on the market today would meet all of the EU’s high-risk requirements.

The legislators did a great job making a laundry list for developers. Fixing bias and transparency is a reasonable expectation of the AIs that will probably diagnose and operate on your grandma in five years.