With any new technology, there are concerns about the risks associated with its development and deployment.

Leading figures in the AI field, including Elon Musk, signed an open letter calling for a halt to AI development.

While this may have been well-intentioned, it is impractical. Pausing AI development achieves little by itself; what we actually need are effective methods for fact-checking, source citation, and preventing the spread of misinformation. These are all essential mechanisms if we wish to build public confidence in AI.

OpenAI’s opaque development process shrouds AI in mystery: no one outside the company knows what is happening behind closed doors, and that secrecy is alarming many commentators in the field.

Many signatories argue that AI’s ‘human-competitive intelligence’ presents significant risks to humanity, which is why the open letter calls for a lengthy six-month pause to create a safe regulatory environment for AI development.

Yet, this halt will only lead to an unfair AI marketplace, allowing Big Tech companies to continue to amass users’ personal data to train new models, whilst slowing the progress of innovative start-ups.

So, what can we do to avoid stunting AI’s development, whilst keeping it safe and building up public confidence?

How a pause will create an imbalance in AI

From the outset, the likes of Google and Microsoft have had a substantial financial advantage and access to world-leading data infrastructure. They can afford to spend hundreds of thousands of dollars per day on data center expenses to pull ahead in AI, creating an inherent competitive imbalance.

Adding to this asymmetry, Large Language Models (LLMs) benefit from greater access to training data and user input. Hyperscalers with unlimited access to user data from other branches of the business and high server usage therefore have a distinct advantage over start-ups.

In particular, the likes of Microsoft already possess stockpiles of training data, and as the user base for early generative AI applications expands, the volume of data these organizations can access will only grow.

Pausing AI development only deepens these obstacles and further entrenches the “Big Tech advantage”. It creates a dynamic where Big Tech firms can continue amassing the data they need to train models, putting start-ups on the back foot in the race for AI innovation.

Transparency over a six-month moratorium

Realistically, there’s no advantage to pausing AI development. The AI community can still safely develop technology and build up trust in AI without a pause, so long as we focus on transparency and explainability.

Transparency means making the development and deployment of AI more visible and understandable to the public. This includes AI leaders publishing information about how their algorithms are developed and how data is collected and used.

To accomplish this, businesses could create a standardized way of documenting how their algorithms are built, such as whitepapers, or publish a public-facing website where users can access information about data sources and collection. Providing this information gives people peace of mind about the data and processes that inform and shape these models.
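As a rough sketch of what such a public-facing disclosure could look like in practice, the Python snippet below defines an illustrative, machine-readable "datasheet" describing a model and its data sources, serialized to JSON so it could be served on a transparency page. The class and field names (ModelDatasheet, DataSource, and so on) are hypothetical assumptions for illustration, not an established standard.

```python
# A minimal sketch (not a standard) of a machine-readable transparency record
# that a public-facing website could serve. All names here are illustrative.
import json
from dataclasses import dataclass, asdict, field
from typing import List


@dataclass
class DataSource:
    name: str               # e.g. "Public web crawl"
    collection_method: str  # how the data was gathered
    licence: str            # usage terms for the data
    date_range: str         # period the data covers


@dataclass
class ModelDatasheet:
    model_name: str
    version: str
    intended_use: str
    training_process: str                # prose summary of how the model was built
    data_sources: List[DataSource] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the datasheet to JSON for publication on a public site."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    sheet = ModelDatasheet(
        model_name="example-lm",          # hypothetical model name
        version="0.1",
        intended_use="General-purpose text assistance",
        training_process="Pre-trained on the listed corpora, then fine-tuned on curated instructions.",
        data_sources=[
            DataSource(
                name="Public web crawl",
                collection_method="Automated crawl of publicly accessible pages",
                licence="Mixed; filtered for permissive terms",
                date_range="2020 to 2023",
            )
        ],
    )
    print(sheet.to_json())
```

A simple, structured record like this is easier for the public (and regulators) to inspect than scattered blog posts, and it can be versioned alongside each model release.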

Greater transparency draws back the curtain, allowing people to see the foundations of AI models, easing their concerns, and helping them understand why language models produce the responses they do.

This, in turn, promotes confidence and acceptance of AI technologies in the workplace, as well as keeping the AI community accountable for safe development.

How we improve AI’s explainability

End-users are often concerned about the factual inaccuracy of AI responses. We need to make the process of developing factual validation for language models’ responses an open one.

The primary way to increase explainability is to trace facts and responses, so users can assess the journey the model has taken to arrive at its output.

Organizations can produce better and more reliable results by providing source citations alongside generated output. Much as university students provide footnotes in their essays, AI-generated responses should cite the origin of the information, the quality of the data, and how that data was processed.
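As an illustration of what footnote-style citations might look like in code, the Python sketch below attaches numbered sources to a generated answer and renders them beneath it. The Citation and CitedAnswer structures and the render_with_footnotes helper are hypothetical; in a real system they would be populated by a retrieval or attribution step rather than filled in by hand, and the URL shown is a placeholder.

```python
# A minimal sketch of footnote-style citations attached to a generated answer.
# The structures and helper below are illustrative assumptions, not any
# particular vendor's API.
from dataclasses import dataclass
from typing import List


@dataclass
class Citation:
    source: str        # where the supporting text came from (URL or document ID)
    excerpt: str       # the passage the claim is based on
    quality_note: str  # how the source was vetted or processed


@dataclass
class CitedAnswer:
    text: str                  # the generated response
    citations: List[Citation]  # supporting sources, in footnote order


def render_with_footnotes(answer: CitedAnswer) -> str:
    """Render the answer with numbered footnotes, like an essay bibliography."""
    notes = [
        f"[{i}] {c.source}: \"{c.excerpt}\" ({c.quality_note})"
        for i, c in enumerate(answer.citations, start=1)
    ]
    return answer.text + "\n\n" + "\n".join(notes)


if __name__ == "__main__":
    answer = CitedAnswer(
        text="The report recommends publishing training-data disclosures. [1]",
        citations=[
            Citation(
                source="https://example.org/report",  # placeholder source
                excerpt="disclosures should accompany each model release",
                quality_note="quoted verbatim from the retrieved document",
            )
        ],
    )
    print(render_with_footnotes(answer))
```

Exposing the citation data as structured objects, rather than prose the model writes about itself, is what lets users check each claim against its source.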

Some LLMs already explain how they arrived at their answers, but given the possibility of hallucinations, these explanations are not sufficient on their own.

Beyond the quest to stop models from generating imaginary sources, we should focus on the process: helping models show users why they have reached the conclusions they have.

AI models and their developers are starting to be held accountable for factual accuracy and errors. By helping users understand generative AI’s inner workings, we can alleviate concerns about its opaque decision-making and offer reasoning behind its responses, promoting safe AI.

By improving explainability and transparency, the AI community can organically build public confidence in AI technology and keep it safe, rather than resorting to a pause that risks letting Big Tech firms push out start-ups.
