Things are moving fast. ChatGPT was only released to the public in November 2022, and since then, it’s been a barrage of AI news, successes, and fears. Sam Altman is raising $500B for his Stargate programme, while DeepSeek signals China’s leap towards AI domination – apparently.
These are easy headlines. But the elephant in the room is being ignored. We aren’t seeing the breakthroughs in adoption and productivity that were expected. Why? Because businesses need safe, secure AI and they aren’t getting it.
When you hear the terms AI safety, security, responsibility, and trust, a simpler pair will do: AI reliability and control. What the vast majority of business users need is reliable technology or, failing that, certainty. If we aren’t going to see safer, more secure AI released, and we still want adoption to increase, businesses need clear rules that tell them what they will be held accountable for and how they should mitigate risks.
This is where governance, clear rules, standards, independent assurance, and regulations have a place. Business hasn’t lost the plot when it comes to AI adoption. It’s excited, but it’s not going to destroy itself in the quest for AI. Businesses seeking productivity, growth, or efficiency gains through AI see the potential. But ultimately, they will only scale adoption if the technology is reliable and their leaders know they are unlikely to be fired or face regulatory fines.
Adoption is not slow because some invisible authority is threatening companies, or because there is a shortage of compute or power stations. It is because companies do not trust the tech and do not believe it is reliable enough. AI adoption is a little like electricity. Everyone wants it, but simply building more and more power plants isn’t enough. Users want to be able to switch their lights on without risking their house burning down or electrocuting the cat.
Learning from the past
Rules, standards, independent assurance, and regulations, used properly, have accelerated the adoption of technology. The same is needed now for AI. There is plenty of ‘what’ companies should be doing, but no ‘how’. Perhaps it is seen as too difficult, but some clarity and certainty are needed urgently to enable widespread, responsible AI adoption in the enterprise economy.
Take electricity, for example. When it was first introduced, there was no standard way to consume it safely. There were differing voltage levels, safety concerns, and no standardised plugs or sockets. Early users risked electrocution, and widespread adoption was slow because there was no trust in the infrastructure. Once safety regulations and standardised systems were introduced, electricity became ubiquitous, reliable, and trusted.
Similarly, when the internet came along, TCP/IP protocols allowed for interoperability between networks. Without common communication standards, the internet would have remained a fragmented mess of incompatible systems. The introduction of these protocols accelerated the adoption of the internet, making it a fundamental part of business and daily life today.
Regulation provides certainty
AI needs a similar approach. The absence of rules doesn’t drive innovation; it fosters uncertainty and stifles adoption. Companies will not risk large-scale implementation of AI if they are unsure about its reliability and their liability. Imagine trying to sell a self-driving car without any regulations governing its safety – would anyone trust their life to it?
Regulation should not be seen as an obstacle but as an enabler, an essential part of the growth agenda. Well-implemented AI governance will provide a framework for companies to adopt AI with confidence, knowing there are clear standards that ensure safety, fairness, and accountability.
For example, Europe’s AI Act may not be perfect, but it is at least attempting to classify AI applications based on their risk levels and define clear compliance requirements. While some may see this as restrictive, it is also an opportunity for businesses to navigate AI adoption with clarity. Instead of waiting for regulators to impose unwieldy constraints, industry leaders should take the lead in defining practical, adaptable standards that promote trust and adoption.
Industry must lead
Industry leaders, policymakers, and researchers must work together to create AI standards that promote trust, safety, and ultimately, adoption.
Business leaders should demand better AI reliability. Investors should prioritise AI startups that integrate safety and robustness into their models. Governments should create AI regulations that are pragmatic, adaptive, and innovation-friendly.
If this is done correctly, AI adoption will explode, productivity will surge, and businesses will unlock unprecedented value. If done incorrectly, AI will remain a powerful yet underutilised technology, plagued by distrust and fear.
We have to stop thinking this is the Wild West and nothing can be done. We need specifics, clarity, rules, and yes… regulation.
David Sully is the CEO & cofounder of Advai.