Apparently, 35% of businesses are already using AI, and the global AI market is estimated to reach $306 billion by the end of this year, growing at a compound annual growth rate of 28%. As we all know, there are lies, damned lies, and statistics. But all the surveys and reports agree that the adoption of AI is growing at a rapid pace. If you’re not using it yet, you will be soon.
But with the adoption of new technology and tools comes risk. How can you identify and manage the risks of adopting AI? Here are a few pointers to help:
Beware AI washing
Everyone wants the advantages of AI. 61% of CIOs say their investments are driven by the fear of missing out. In your rush to jump on the AI bandwagon, make sure you don’t buy AI-washed tools: existing tools which have simply been re-branded as AI-capable. We see this with each innovation, whether it relates to GDPR, climate change and green energy, or, now, AI. Do these tools really feature AI, and will they produce worthwhile output? It’s important you don’t buy into AI without a clear analysis of the benefits you expect in relation to your investment.
Beware sharing your IP
Be careful you don’t upload your confidential or proprietary information into an AI tool that will retain it, train on it and reuse it for other customers. That’s what Samsung discovered last year when its employees were entering Samsung’s proprietary code into ChatGPT, so it disabled employee access. Microsoft and Amazon have taken similar steps with other AI tools this year. Don’t forget, GPT-4o can reason across audio, vision and text in real time, so make sure you lock down your information while you can.
Even if you’re not sharing your IP with an AI tool, be aware that others might. That’s why George R.R. Martin, John Grisham and other authors took umbrage at the “flagrant” infringement of their copyright and brought legal action against OpenAI, the creator of ChatGPT. We await the outcome.
Beware of how the AI is acting
We all remember Microsoft’s early foray into chatbots. In 2016, it shut down its chatbot, Tay, only 16 hours after launch due to the inflammatory and offensive tweets it was posting through its Twitter account. Well, it can still happen in 2024: earlier this year, DPD had to disable part of its online support chatbot after it swore at a customer! Check that there are appropriate standards and safeguards in place.
Beware adjustments which benefit the provider, not the customer
There is a concern that the customer experience is sometimes downgraded in favour of the provider’s profits. Accusations include supplying “addictive content” to users on social media even when this is detrimental to their health, or placing sponsored links or adverts where a user would expect to find organic, unpaid-for results. With AI, this might take the form of locking users into a proprietary system without interoperability, or favouring content from particular key partners without disclosing this, thus skewing the results. Are you sure you’re getting the best outcome for you?
Beware AI fog
A few years ago, AI output was minimal. Now, the input to an AI tool might itself have been generated by the same tool or another one. This can lead to AI bias or a type of AI fog, where it, and consequently you, can no longer see the issues clearly. If you’re making key decisions for the business, you need to know that the output is trustworthy.
Lessons from cloud adoption
As with everything, the key is to use AI with your eyes open. If you’re buying standard, generic AI, you might not be able to adjust the terms on which you’re buying it, but you should at least identify where the risks are and shop around if you’re not happy. We saw this with cloud: at first, it was public cloud on non-negotiable standard terms. That’s still there, of course, but now there are managed service providers who will tailor the cloud to your needs, and you may be able to negotiate terms to match. I anticipate the same will happen with AI.
What to do?
- Don’t just believe the blurb or the salesperson; review the specification.
- Ensure the AI tool has specified standards with appropriate checks and balances.
- Check what the terms say the AI tool will do with your input. Make sure you have a “locked box” AI tool.
- Restrict access to and use of your information to those who need it, are trustworthy, and are within your control.
- Ensure the contract contains all relevant promises in the legal terms, specification or SLA.
- Confirm the legal terms work and that they don’t exclude all liability.
If you need advice, contact me at +44 20 3824 9748 or fjennings@hcrlaw.com.
