Like it or not, generative AI is very likely to become a huge part of your business over the next few years. The advantages are clear to see: scores of processes can be automated, streamlined or enhanced across pretty much all of a company’s operations. Over the past few months much has been written about the groundbreaking future potential of AI. More recently, however, the debate has moved on to just how far and fast we should let AI go. Indeed, we now seem to hear apocalyptic warnings about AI on a daily basis. For business leaders this naturally raises a lot of questions. AI is an undoubtedly powerful and rapidly developing field, so how can it be used both effectively and ethically?
The first thing to do is to put everything in perspective. Much of the discussion around AI is speculation and hype. Currently, although very impressive, ChatGPT and other gen AI apps are a long way from being able to do even a small percentage of what humans are capable of. They are also far from flawless. The risk of Skynet being created tomorrow is negligible. This means that when we speak about the average business using gen AI ethically, we are not talking about the big, world-changing risks; we are talking about the small, complex decisions businesses will regularly take that, if mishandled, could have undesirable consequences. These decisions can soon stack up to have big implications for a business and society at large.
As AI is developing at such a pace, businesses simply can’t rely on regulation to fully guide them. The law cannot keep up. We saw earlier in the year that the EU’s AI Act had to be hastily redrafted because legislators were completely blindsided by the launch of ChatGPT. This pace of development also means that creating your own ethical framework needs to happen now, even if you do not currently have plans to use gen AI. The longer you delay, the more difficult it will be to create an ethical decision-making culture within your organisation.
Getting started
Data ethics is not a checklist of dos and don’ts. It’s the creation of guardrails and principles that underpin an ethical culture, one that equips decision makers with the knowledge and expertise to make the right judgement calls when presented with challenging moral issues.
A company’s approach to ESG plays an outsized role in determining whether it will have the tools to use data ethically. There’s a very simple reason for this: a diverse team is able to leverage all of its experiences and perspectives to anticipate how your use of data will impact different groups. One of the clearest risks of using generative AI is that it will be biased against a particular group of people. This is an issue that has already caught out many companies in how they use data and design algorithms.
Accountability and transparency
The next step is to look at the structures and policies that will enable ethical decision making to happen in practice. Accountability is a key aspect of this. You need someone who is ultimately responsible for holding your organisation to its self-stated ethical standards.
There is some debate as to who is best placed for this task. For some companies that may be the Chief Data Officer, but this carries a potential conflict of interest (they would, in effect, be marking their own homework). Others choose the Chief Compliance Officer; however, ethics goes beyond legal compliance. Personally, I think the Chief Executive will often be the most logical fit, especially for smaller companies. Whichever individual oversees your ethical policy, it’s essential that they are empowered both to make critical decisions and to hold colleagues to account should they fail in their ethical responsibilities.
Aligned with accountability are transparency and trust. Your team and your customers or clients need to know how and why you do and do not use AI for particular purposes. Communicating your values and decision making in clear and understandable language is key.
Your actual ethics
Putting pen to paper to outline your ethics is the relatively easy part of this endeavour. It should be in harmony with your company values and framed in a way that supports your organisation rather than impedes it. Think of it from the perspective of ‘what you should do’ rather than ‘what you shouldn’t do’. There are resources online that can help support you on this journey.
For example, we have collaborated with Pinsent Masons and a host of data academics and experts to create a free ethics guide that provides a lot of practical advice.
Data education
It is impossible to comprehend the ramifications of generative AI if you do not have a basic understanding of how it works. This knowledge needs to be shared throughout an organisation for a few basic reasons. One, nearly every member of your team will end up using AI or the outputs of data science to undertake day-to-day tasks. Two, keeping this expertise siloed in your data team creates bottlenecks and runs the risk of that team ‘marking their own homework’ with little oversight. And finally, innovation can come from any part of your organisation. Team members will be better able to responsibly apply generative AI in new and creative ways if they have been upskilled on data.
It’s also important to note that training is not a one-and-done exercise. Knowledge can easily be lost or become obsolete. Running annual, or ideally biannual, training sessions for your team will help to ensure your culture is maintained.
What I have discussed here is just the tip of the iceberg. However, we have covered the most critical lesson: using AI responsibly means encoding data ethics into your company's DNA. This is not achieved just by writing an ethical policy; it requires a commitment to the right education, structures and skills.
Alistair Dent is Chief Strategy Officer at Profusion.