Would You Trust An AI To Look After Your Business Finances?

This article was originally posted on Forbes.com


Artificial intelligence is on the rise. I write and read extensively on AI, and one gets the impression that it is an all-or-nothing proposition: either everything improves and we live in a utopia, or we face an imminent apocalypse.

Thankfully, things are not so black and white. Automation, long seen as a killer of jobs, may displace fewer people than experts once believed. Advanced algorithms promise to deliver us from routine, tedious tasks that are part and parcel of running a business, without running the business for us. Further, AI is far more accurate than the human mind for certain tasks — the perfect detail-oriented worker.

Yet, efficacy aside, this speculation raises an interesting question: Can you trust an AI to manage your organization’s finances? If you choose to use an AI, will there even be space for you to continue running the financial side of your business?

The answer, believe it or not, is yes to both questions. But as with everything, there are caveats.

AI Is Far More Limited Than You Think

When it comes to AI, overestimating current capabilities is the most common sin. The widespread perception of AI as a world-ending, malevolent superintelligence isn’t accurate; this is only one category of AI (artificial general intelligence, or AGI), and a rather fanciful one at that.

That’s because developing an AGI is incredibly difficult and time-consuming, and some experts believe it may not even be possible. Currently, AIs evolve primarily through machine learning, though machine testing would be a better phrase. If you want to train a program to distinguish an image of a bee from the numeral three, its algorithms have to test, fail and repeat millions of times before they get it right. That’s how Google’s AlphaGo program beat human players at Go, a notoriously abstract game: by playing millions of games on its own and learning from its failures. It is trial and error on a massive scale.
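To make that test-fail-repeat loop concrete, here is a minimal, hypothetical sketch in Python: a single artificial "neuron" guesses, checks its error, and nudges its internal weights, thousands of times over, until its guesses are mostly right. The data and labels are invented for illustration; real image classifiers and game-playing systems work at a vastly larger scale.

```python
import random

# Toy illustration of trial-and-error learning (not AlphaGo itself):
# guess, check the error, nudge the weights, repeat many times.

# Hypothetical training data: (features, label), with crude made-up
# feature values standing in for "bee" (1) versus "three" (0).
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), 0), ((0.2, 0.8), 0)]

w = [random.random(), random.random()]  # start with random "knobs"
bias, lr = 0.0, 0.1                     # learning rate controls each nudge

for _ in range(10_000):                 # test, fail, adjust -- over and over
    x, label = random.choice(data)
    guess = 1 if x[0] * w[0] + x[1] * w[1] + bias > 0 else 0
    error = label - guess               # 0 when right, +/-1 when wrong
    w[0] += lr * error * x[0]           # push each weight toward a better guess
    w[1] += lr * error * x[1]
    bias += lr * error

print(w, bias)  # after enough trials, the weights separate the two classes
```

Nothing in this loop "understands" bees or threes; it simply fails its way toward weights that happen to work, which is the point of the comparison.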

While AGI may come to be, it’s hard to see an AI stumbling into superintelligence anytime soon. Instead, the rote nature of machine learning makes for very talented specialist bots that excel in narrow niches. Need to trade stocks at light speed? Need to comb through huge databases of documents for legal discovery? There’s a specialist AI for each.

An AI can’t really run your whole business for you, at least not in the near future. Until machine learning becomes more refined, bots can’t deal with the unpredictability of real life. AI can carry out lower-level functions, like keeping your books, tracking your expenses and profits, generating fancy reports and even suggesting courses of action. “Revenue is up this year,” your AI may say, “but due to tariffs on solar panels, it may be down in the next fiscal year. Consider diversifying into other clean energy sources, like residential wind turbines or biogas digesters.”
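As a rough illustration of what those lower-level functions look like in practice, here is a hypothetical Python sketch that totals revenue by category, prints a simple report and flags a crude rule-based suggestion. The categories, figures and threshold are all invented; a real accounting product does far more, and far more carefully.

```python
from collections import defaultdict

# Hypothetical ledger entries: (category, month, amount). Invented figures
# standing in for the kind of data a bookkeeping tool tracks for you.
ledger = [
    ("solar-panels", "2018-01", 42_000), ("solar-panels", "2018-02", 39_500),
    ("wind-turbines", "2018-01", 8_000), ("wind-turbines", "2018-02", 9_200),
    ("office", "2018-01", 3_100), ("office", "2018-02", 3_050),
]

# Lower-level function 1: keep the books -- total revenue per category.
totals = defaultdict(float)
for category, _, amount in ledger:
    totals[category] += amount

# Lower-level function 2: generate a (not especially fancy) report.
for category, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:>14}: ${total:,.0f}")

# Lower-level function 3: suggest a course of action with a crude rule.
# A real tool would use far richer signals; this threshold is made up.
solar_share = totals["solar-panels"] / sum(totals.values())
if solar_share > 0.75:
    print("Most revenue depends on solar panels; consider diversifying.")
```

Deciding whether to actually diversify, and into what, is exactly the judgment call the software leaves to you.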

There are already software tools that achieve this exact purpose, like Intuit’s QuickBooks product line. But QuickBooks can’t run your business for you. It can’t decide to explore a new avenue of business, forge relationships with suppliers and customers, or lead your team.

The greatest flaw of AI is that it cannot adapt as well as humans can. Most AI systems cannot carry over their experience from one set of circumstances to another, which means an AI trained on one narrow set of conditions cannot function under another, even a closely related one. An AI trained to track small business finances can’t track corporate spending; unlike humans, it can’t extrapolate transferable skills to fit a different situation.

Still, AI’s Thought Processes Are Unclear

Another lingering criticism of AI is its lack of empathy. When dealing with humans, we can assume that a person is operating on the same emotional wavelength as we are, which allows us to empathize with each other.

Not so with AI, which is widely seen as a black box. No one knows what it thinks or feels, and humans cannot understand its processes, even if we understand the sort of brute-force approach it takes to arrive at its answers. One expert compares AI neural networks, patterned after the structure of a human brain and containing multiple layers of inputs and nodes, to fiddling with millions of knobs.
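To get a feel for how many "knobs" that can mean, here is a small back-of-the-envelope calculation in Python for a fully connected network. The layer sizes are invented for illustration, yet even this modest configuration has millions of adjustable parameters, none of which is individually meaningful to a human reader.

```python
# Count the "knobs" (weights and biases) in a fully connected network.
# Layer sizes are hypothetical: inputs, three hidden layers, outputs.
layer_sizes = [1_000, 2_048, 2_048, 1_024, 10]

total_knobs = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total_knobs += n_in * n_out + n_out  # weights between layers, plus biases

print(f"{total_knobs:,} adjustable parameters")  # roughly 8.4 million
```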

A financial reporting AI could tweak billions of different parameters and connections to analyze variables like politics, competitors, margins, etc. The problem is that it can’t explain the thought process behind an answer. If human analysts suggest that you buy stock in manufacturers of electric vehicle battery packs, they can justify their conclusion by analyzing market demand, benchmark indexes and raw material prices.

Yet AI won’t remain a black box for much longer, as organizations are trying to change the nature of machine learning. Some, like Bonsai, are moving away from the trial-and-error testing that characterizes deep learning. Earlier this year, an image analysis AI successfully justified its answers: in one picture, it explained that the water was calm because there were no waves and you could see the sun’s reflection, while the water in another picture was not calm because it showed frothy, foamy waves.

But humans are also terrible at justifying themselves. Just look at an economic recession, or a therapist trying to dig out the reasons behind a divorce. In each case, there are plenty of underlying, unseen factors that caused the outcome. In truth, a person’s (and a society’s) decision-making process can be as opaque as any AI black box. With AI, at least we can improve transparency through new strategies and upgrades.

So, if you’re worried about an AI making you obsolete or draining your business accounts when your head is turned, don’t be. If anything, succumbing to paranoia will lead you to miss out on AI’s many benefits. And in this world of rapid innovation, clinging to yesterday’s technology is a death knell for competitiveness. Just ask Kodak.