The Need for AI Policy and Regulation

Will Richmond-Coggan, Data Protection Partner at Freeths (Photo Courtesy of Will Richmond-Coggan)

As Artificial Intelligence (AI) continues to make its way into the tech industry, with companies such as IBM using it to replace employees and other sectors considering doing the same, it raises the question of whether we should be worried. Will Richmond-Coggan, a data protection partner at the law firm Freeths, breaks down the potential regulation and policy issues that come with AI.

Visit Freeths.co.uk/people/Will-Richmond-Coggan for additional information about Will Richmond-Coggan
Follow Will on Twitter at: Twitter.com/Tech_Litig8or
Follow Will on LinkedIn at: Linkedin.com/in/WillRichmondCoggan


AI Regulation and Policy Transcription

Announcer: Mason Vera Paine.

Mason: AI has become a major part of the tech industry, with companies such as IBM using it to replace their employees and other sectors considering doing the same thing. To date, no government has taken action on this technology, which raises the question: should we be concerned? Data protection law partner Will Richmond-Coggan joins me to break down the regulations and policy issues with AI. Thanks for joining me, Will.

00:27 Will Richmond-Coggan: Oh, you’re welcome.

00:28 Mason: So AI, it’s such a big thing right now, such an explosion. But do you think AI is going to stick around?

00:37 Will Richmond-Coggan: Definitely. We’ve seen a paradigm shift in the way AI has penetrated both popular consciousness and the commercial world. The really big change this time is that you haven’t got AI that people are developing as an experiment to see what can be done; you’ve got AI with actual practical applications, being deployed in real-world environments to solve real-world problems.

1:08 Mason: Now, you said that companies were experimenting with it. Personally, one of the weirdest things I saw with this experimentation was McDonald’s. McDonald’s has its own AI server, and you talk to it for a minute and then maybe a real person will come on and help you. Are there any other industries where it’s strange to see AI?

1:30 Will Richmond-Coggan: Yeah, I think it’s certainly a bit of a risk at the moment for companies whose main model is public service or customer service to use AI, because it’s still a little unreliable, and you definitely have the risk that the AI is going to say something unexpected or produce an output which damages your brand. That’s one thing in an environment where people are tolerant of mistakes, but it’s another where customers expect to be handled with a certain degree of courtesy, say, or to get correct answers the first time to the problems they’re having. They probably don’t want to feel like they’re part of an experiment, and they certainly don’t want to be insulted or offended. I think that’s probably one area that’s surprising. The other is at the other end of the spectrum, where you’ve got AI being used for really complex, difficult decision-making. For example, in the UK, there was an AI tool deployed by a local authority, a local government body, to determine whether people receiving state benefits were potentially fraudulent, that is, whether they were dishonestly claiming those benefits. The AI wrongly identified large numbers of people as being high risk for fraud, and that resulted in them being wrongly accused, with all sorts of unfortunate consequences for some of the most vulnerable people in society.

3:09 Will Richmond-Coggan: Again, it was surprising to see that being rushed so quickly into an area that could have such a profound adverse impact.

3:16 Mason: I actually find that to be really strange, and I’m glad you brought that up. This technology may not be new, but it definitely is unreliable. I believe there’s even a lawsuit out there right now where a radio host is suing the creators of ChatGPT for misinformation and slander. That’s one of the biggest things about AI: you don’t know if the information is correct, and there are some people who take it at face value. Why put it out there?

3:47 Will Richmond-Coggan: I think there’s something to be said for that. Equally, the people developing these tools would say that putting them out into the public is an important stage in the process of testing them amongst a much wider audience. But it does rely on people understanding what it is they’re using. And ChatGPT is a really good example of something that sounds very credible. It can give very authoritative answers, but often those will be incorrect; they look right, and at face value people might take them seriously. That’s really problematic, because it’s a tool designed to produce convincing-looking answers. It’s not necessarily a tool designed to be accurate. And your radio host isn’t the only example of people who’ve been caught out by that. In fact, in the UK, we are aware of one lawyer who used ChatGPT to write arguments for a case. When he went in front of the judge, the judge discovered that the cases ChatGPT had identified were not real cases. They had been invented; they looked convincing, but they didn’t stand up to even the most simple scrutiny.

5:01 Mason: With that being said, do you think regulation needs to start now, or should we wait until AI is almost perfected?

5:11 Will Richmond-Coggan: There’s definitely a variety of views on that. My own view is that regulation should have been thought about some time ago. There are some piecemeal bits of regulation; again, it depends on where you are in the world. For example, I know that in some US states there are bits of regulation that deal with biometric processing, which is one of the things AI can be used for and can be very efficient at, but which also carries all sorts of risks around misuse. The European Union is looking to put together the first comprehensive set of AI legislation, but they’re finding it very difficult. Part of the reason is, as you say, that we don’t really know the final shape of what AI will look like and all of the different ways in which it might be developed and used. Any legislation developed now runs the risk of only providing an incomplete solution or protection against the ways the technology can develop. That said, technology always develops faster than law, so you’ve got to make a start, because otherwise the law is just going to fall further and further behind.

6:26 Will Richmond-Coggan: In the end, we might be in a position where humans just can’t keep pace at all, and we end up having to ask AI to write the laws for us, as well as being the thing we’re trying to police.

6:35 Mason: I’m not sure I would trust them; I think they would be a little bit biased. When it comes to the US, I always feel as if the United States is a bit behind on technology regulations or laws, because we’re slow to pass them. We have to go through all these checks and balances, and everything takes forever. Do you think any country right now is at the forefront of all these technology advancements?

7:03 Will Richmond-Coggan: In terms of the technology itself, I think there are a few different places where real strides are being made. One of those is the US, largely because there’s a lot of investment available there. For example, ChatGPT itself has really taken off since Microsoft made their investment in OpenAI. You see the big players, Facebook and Apple particularly, also getting really interested in AI. Google was a little more hesitant, but they’re also now putting serious money in. You can see how quickly they start to make strides when those really massive investments are being made. In other places where legislation is perhaps a bit further advanced, for example in Europe, a lot of that legislation is actually being used to ban particular types of AI. That might be safer, but it’s going to stifle innovation in those areas, because developers will not want to investigate certain avenues if they think that something they spend a lot of time and money developing is going to be banned before they can bring it to market and deploy it. Then conversely, you’ve got other parts of the world which have a much lighter-touch regulatory regime, but where the state is much more heavily involved in promoting the development of the technology.

8:27 Will Richmond-Coggan: The obvious place is China, where AI has been in use by the state for years in all sorts of ways that other countries might regard as problematic. But the reality is they’ve got an extremely large population, and using AI in the field that way has effectively given them a real-world experimentation bed with hundreds of millions of test subjects. That in itself has allowed their AI to come on in leaps and bounds, but, as I say, with some difficult ethical questions, which is why I think other countries wouldn’t necessarily want to replicate that model.

9:07 Mason: I know you have a lot of experience with data breaches, cyber incidents, and things along those lines. When it comes to AI, where do you think the biggest problem with this technology is going to be in the foreseeable future?

9:23 Will Richmond-Coggan: I think there are a couple of big risks. One of them is around AI’s ability to produce very convincing replicas of human behavior, either speech or even face mapping, where you have someone appearing to be someone else through using the technology to map their facial features onto a different person. That presents an opportunity for hackers who are trying to carry out more and more sophisticated spear-phishing attacks, or who want to impersonate a senior executive in a business, either to discredit the business or to be able to authorize transactions or something of that sort. That’s definitely one risk area. But also, I think it’s not so much the AI itself as the fact that you often need to gather very, very large data sets in order to train the AI model. So, for example, where you’re trying to train a model to diagnose a health problem, underpinning that training are millions and millions of health records belonging to individuals, and all of those health records are in one place. They need to be accessible to the people developing the tools, and that presents a risk in itself: they can also be exposed to the wider world through an error or through someone being tricked into opening up access.

10:50 Will Richmond-Coggan: And that type of very sensitive information could suddenly find itself in the hands of people who are going to demand a ransom or else release it. You can only imagine if, for example, it was people who had an HIV diagnosis, or something else they didn’t necessarily want to disclose, or some mental health condition they weren’t ready to be open about. All of those records might be in the hands of people willing to extort them to keep that information secret. So I think that’s another real risk area.

11:24 Mason: It’s so funny. I didn’t even think of the security risks behind it, because I was thinking of the surface things, like AI music. There’s a whole genre now on YouTube where they create AI music in a whole different person’s voice, and it’s wild. Kanye West is singing some of the weirdest songs out there, and you have Taylor Swift singing songs she definitely shouldn’t be singing. But it’s not Taylor Swift, and it’s not Kanye West. And then you have deepfakes and all this other stuff. But the security risk, my God, I didn’t even think of that. You’ll eventually have medical records attached to this stuff, and you just need one person to click on a phishing email and the whole thing is compromised. I think that’s wild.

12:12 Will Richmond-Coggan: But I think you’re right as well. One of the cases that people are going to be watching quite carefully, from the point of view of that issue around the use of other people’s intellectual property, is a claim being brought by Getty Images at the moment. You may know that they hold the licenses to lots of the imagery that’s used online. They’re suing because they say that these AI tools, which scrape online images in order to generate fresh artworks or things that look like other art styles or other creative content, have been taking and using images that are licensed to Getty without making any payment for that use. Getty is suing not only on their own behalf, but also on behalf of all the artists they represent, to try and recover some of the value being derived by these AI tools for the creatives who actually put that content onto the internet in the first place. So if that’s successful, then I’m sure you’ll see the recording artists queuing up to make the same claims.

13:17 Mason: When it comes to regulation, and when it comes to the laws and policies for AI, should we be looking at these smaller battlegrounds rather than the larger picture?

13:29 Will Richmond-Coggan: I think we can’t lose sight of the smaller battlegrounds, because that’s where people are being impacted today. It’s all very well to have very high-level regulatory reform aimed at two or three years down the line, but in the meantime people are coming up against problems with AI right now, and the existing legislative regime is not robust enough; it wasn’t really designed with these sorts of problems in mind, so it can’t give people a really good solution to them. I think we are going to have to see people moving quite quickly to plug particular gaps and address particular problems as they arise, but without losing sight of the big picture, which is that we do need an overall understanding of the ethical landscape in which AI is going to be developed, and of some of these really big questions about what data can be collected and how it can be used to train tools. Can we use AI to detect emotion? And if we can do it for the purposes of, let’s say, someone playing an immersive video game in the metaverse, should governments also be able to use it to detect whether people are disposed kindly towards them or opposed to them, in order to round up possible dissenters before they actually make trouble?

14:53 Will Richmond-Coggan: There are obviously dividing lines that have to be drawn between the uses that can be allowed and those that can’t. No one is bringing a lawsuit about those questions today, but it’s vitally important that we get the answers right, and that we get them right before something really terrible happens on a national or global scale.

15:15 Mason: Yeah, I think getting ahead of it is the best course of action. But I’m just curious: how long do you really think it’s going to take for any government around the world to actually impose something? How many years?

15:27 Will Richmond-Coggan: I think we might see some legislation in the next 12 months, probably in Europe, but maybe elsewhere. That said, the European Parliament, who are currently debating AI legislation, have got themselves into a bit of difficulty, because the coalition of different parties who were all pulling in the same direction to try and get it through have fallen out over the handling of biometric data. At the moment, that’s all stalled, so I might be being massively ambitious, and it might be much longer. At the same time, we are definitely seeing a bit of a wake-up call this year at different levels. There’s been recent discussion among the G7 countries about the importance of legislating around AI, and they’ve agreed a common framework that they’re all going to work towards, to try and put in place protections which are consistent across different countries. The UK had been talking about just allowing AI to develop along commercial lines and thinking about regulation a bit later; they’ve changed their mind and decided they actually need to get on and legislate. I think that’s because everyone has had a bit of a wake-up call with things like ChatGPT.

16:47 Will Richmond-Coggan: The AI is here. It’s not some futuristic potential problem that everyone needs to think about in their spare time; it’s right here and now. If there are going to be regulatory protections, those need to be in place before the technology landscape changes even further.

17:05 Mason: Absolutely. Well, thank you so much for joining me. I really appreciate you being here. And for those listening, where can people find more information about you?

17:16 Will Richmond-Coggan: Well, I work with a law firm called Freeths in the UK, and you can find my profile on their website. I’m on LinkedIn as Will Richmond-Coggan, and I’m also on Twitter at @Tech_Litig8or, that’s Tech, underscore, L-I-T-I-G-8-O-R. It may be that you can put those details in your show notes or something. But yeah, I’m always happy for people to contact me. I love talking about this stuff, and to talk through any problems people are having, if they’d find that helpful.

17:48 Announcer: This has been The Mason Vera Paine Show. Thanks for listening.


Like Mason on Facebook at: Facebook.com/MasonVeraPaine and follow on Twitter at: Twitter.com/MasonVeraPaine. Interested in being a guest on the show, or wish to send pitches? Contact us at: Contact@Masonverapaine.com

