AI Regulation. Sort of

At our recent roundtable event, many participants told us they are already using AI. The key question: to what extent will that use become regulated?

International approach

First came the Bletchley Declaration in November 2023, in which the UK, US, EU, Australia, China and others declared that all “actors” in the AI space need to identify and address the risks of AI.

Then there is the AI resolution adopted by the UN in March 2024, which aims to “promote safe, secure and trustworthy AI systems” that uphold human rights and sustainable development for all. The US proposed this resolution and 120 countries backed it.

So, everyone agrees we need to take a serious approach to using and regulating AI. Right?

What does this look like on the ground?

Well, your mileage may vary depending on where in the world you live or trade.

UK. The UK government is taking a collaborative approach. It published its “pro-innovation” policy paper, which proposes that the UK consult further and engage with industry bodies and regulators to establish a framework for monitoring the use of AI. This is a typically British approach: let everyone work out how far they will police themselves, then step in to legislate if they can’t. The outcome of that consultation is here.

US. In some ways, the US has taken a similar approach to the UK. There is the White House Blueprint for an AI Bill of Rights from October 2022, which is non-binding and provides guidance on minimising harm from AI implementations. There is also the AI Risk Management Framework from the National Institute of Standards and Technology, published in August 2022, which sets out voluntary, non-binding guidelines and recommendations for businesses deploying AI systems. Of course, there is also the California Privacy Rights Act, with its latest changes effective from January 2023, which may be relevant to AI. But this is a patchwork approach to regulation without an overarching federal law.

JAPAN. Japan has its Social Principles of Human-Centric AI from 2019, which it is using as the basis for its AI regulatory policy, but it currently imposes no general constraints on the use of AI.

CHINA. In March 2024, the Cyberspace Administration of China adopted a regulatory notice indicating that it intends to enforce rules requiring AI-generated content to be clearly labelled. Its regulation does not appear to extend much beyond this at this stage.

EU. The EU has decided to take a prescriptive approach with its draft AI Act. The Act contains a (hotly debated) definition of AI. It introduces a risk-based approach, prohibiting some AI practices and designating others as high-risk. It also provides guidance on governance, supervisory authorities and the creation of codes of conduct. The EU Commission first proposed the AI Act in April 2021, and the European Parliament formally adopted it in March 2024. The final hurdle before it becomes EU law is adoption by the Council of Ministers, which is expected soon. The Act will then enter into force in stages, with some measures taking effect before others. More on this in my separate post here.

What next?

Well, don’t change your deployment of AI just yet. Regulation is coming but, as with everything, technology has got there first. One camp wants industry to develop safeguards by itself. That strategy is not without its flaws: the “godfather of AI”, Geoffrey Hinton, famously quit Google saying he regrets his work. And Elon Musk warned the UK prime minister of humanoid robots that could chase you anywhere, raising fears of Terminator-style machines.

Perhaps it is not surprising that the EU is seeking to regulate. Not everyone is a fan of this approach. Don’t hate the player, hate the game, as they say. Regulators will regulate, and the EU definitely falls into this category! The key issue to watch is whether the AI Act will become a global quasi-standard for AI regulation, much as the GDPR has become for data protection.

If you need advice, contact me f.jennings@teacherstern.com or +44 (0) 20 7611 2338.

