As with everything, you can either see the positives or focus on the negatives. With electric vehicles (EVs), you can worry about range anxiety, the higher cost of EVs versus gas vehicles, and the mining of rare earths used in EV electronics. Or you can be happy that EVs contribute to lowering greenhouse gas emissions, thrilled with never having to visit a gas station, and wowed by amazing off-the-line acceleration.

While I was watching the Olympics, a large enterprise software company ran ads warning about the dangers of having your data stolen by your AI partner. Recently, I had a chance to attend a four-hour briefing from another large enterprise software company about its AI software stack. The upshot of the meeting was that the company will indemnify you if you use its stack and get sued. What are the other terms and conditions? I didn’t get to read the contract to find out. Presumably, you’d have to use its models, its datasets, and, of course, its governance management software to avoid getting into legal jeopardy.

Are these real problems or imaginary concerns? So far, they’re largely theoretical from what I’ve seen. Still, instilling FUD (fear, uncertainty, and doubt) is a time-tested marketing strategy, especially around a new technology or a significant market transition. What I haven’t seen is AI infrastructure suppliers like Nvidia warning about the dangers of stolen data or worrying about indemnification. Instead, we’re presented with a bright future where AI makes things more efficient and where new technologies like digital twins enable comprehensive virtual modeling of the real world.

So, what does the future of AI actually look like? No one really knows for sure. Goldman Sachs recently published a report questioning whether the roughly $1 trillion in projected AI capital expenditure will return commensurate economic benefits. Still, even Goldman’s analysts think that the future benefits of AI will outweigh the costs.

What are business and IT leaders to do?

Ignoring AI risks falling behind competitors and a general public that now expects quarterly improvements in ease of use and richer features. Most companies are studying how to implement AI within their enterprises, and many are actively running proof-of-concept projects to test out new use cases.

In a January 2024 survey, Gartner reported that over two-thirds of organizations are using generative AI (GenAI) across multiple business units, a 19 percent increase from a September 2023 survey. In addition, 40 percent of respondents reported using GenAI in three or more business units, with 20 percent reporting that they have a GenAI application in production.

What is the best way to test out AI? Many enterprise software companies offer AI-powered features built on the customer data they already manage on behalf of customers. Others, like Nutanix, offer turnkey “GPT-in-a-Box” solutions that let enterprise customers try out a GPT model customized with their own enterprise data. For the slightly more advanced and adventurous, Nvidia offers NIM, part of Nvidia AI Enterprise, which packages AI models, tools, and APIs in ready-to-deploy Kubernetes containers.
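To make that container approach concrete, here is a minimal sketch of what querying such a deployment might look like, assuming a NIM-style microservice is already running locally and exposes an OpenAI-compatible chat endpoint. The URL, port, and model name below are illustrative placeholders, not prescriptions from any vendor, and your environment will differ.

```python
import requests

# Hypothetical local endpoint: NIM-style microservices typically expose an
# OpenAI-compatible HTTP API once the container is running. The host, port,
# and model identifier here are placeholders for illustration only.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama3-8b-instruct"  # example model name; substitute whatever you deploy

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize last quarter's support-ticket trends."}
    ],
    "max_tokens": 256,
}

# Send the chat request and print the model's reply.
response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The appeal of this pattern is that the same few lines work whether the container runs on a laptop, in a data center, or in a cloud Kubernetes cluster; only the endpoint changes.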

Whatever approach you decide on, the common requirement is a large supply of enterprise-specific data to train your models. The general rule of thumb in AI is “mo’ data, mo’ better models.” So, whether you’re currently implementing AI-powered products or just investigating the possibilities, retain all of your customer interaction data, support data, product data, and anything else that might be useful in some future AI product you haven’t yet thought of. Then, at least, you’ll be ready for the day when you want to implement AI.

That’s what’s called a “no regrets” move.