Why We’ll Never Be Able To Use Siri Like Dwayne Johnson

This article was originally posted on Forbes.com


There are a couple of things we may never be able to do like Dwayne Johnson. Rocking a black turtleneck, for instance. Or using Siri to order a Lyft in under 60 seconds.

You might check your phone in the middle of a conversation. There’s a physical person sitting across from you, but you get that itch to link up with the digital world. The result: You unintentionally ignore every word they’re saying.

So how can you stop this from happening? There has to be a way (other than just leaving your phone in your pocket), right?

As it turns out, there isn’t. We weren’t programmed to accommodate Siri.

The Rise Of Siri

The divide between the digital and physical worlds is becoming more evident. Apple first introduced Siri in 2011 with the iPhone 4S, advertising it as a personal assistant right in the palm of your hand. The greatest attraction was the interaction between the user and artificial intelligence.

Siri wasn’t intended to be just a digital assistant that obeys users’ orders. Siri was a step forward in the world of artificial intelligence (AI): a technology that could understand and respond to human needs the way other humans do. It wasn’t another search engine; it was a “do” engine.

Too tired to open your Lyft app? Just ask, “Hey Siri, can you get me a ride home?” and she’ll do the rest. Maybe ask her about the meaning of life on the way home. She’ll give you some thoughtful answers.
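Under the hood, a “do” engine has to map a spoken request to an action rather than to a list of links. Here is a minimal sketch of that intent-matching step; the intent names, phrase patterns and handlers are all invented for illustration, and Siri’s actual pipeline uses trained speech and language models, not regular expressions.

```python
import re

# Hypothetical intent registry: pattern -> action name.
# A real assistant uses trained language models, not regexes.
INTENTS = [
    (re.compile(r"\b(get|order|book) me a (ride|lyft|car)\b", re.I), "request_ride"),
    (re.compile(r"\bmeaning of life\b", re.I), "small_talk"),
]

def parse_intent(utterance: str) -> str:
    """Map a transcribed utterance to the first matching intent."""
    for pattern, action in INTENTS:
        if pattern.search(utterance):
            return action
    return "fallback"  # "Sorry, I didn't get that."

def handle(action: str) -> str:
    # A "do" engine dispatches to an action instead of returning search results.
    handlers = {
        "request_ride": lambda: "Booking a ride home via your ride-share app...",
        "small_talk": lambda: "42, obviously.",
        "fallback": lambda: "Here's what I found on the web instead.",
    }
    return handlers[action]()

print(handle(parse_intent("Hey Siri, can you get me a ride home?")))
```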

Artificial intelligence is continuously progressing, blurring the lines between the physical and digital worlds. But you’ll soon run into a problem: we can’t give our full attention to both worlds at the same time. There’s a reason for that.

Humans Vs. AI

When it comes to concentration, your visual and mental capacities work together: your brain interprets what your eyes take in. But do you actually see everything that’s in your field of vision? Do you perceive the world exactly as it is?

The simple answer is no. In a chaotic, restless world, where our senses are bombarded by countless stimuli each second, the brain has evolved defenses and workarounds: it filters out what it deems white noise and, based on past experience, generates predictions about what to expect. Your brain decides how much data it can process without overloading and ignores whatever doesn’t fit within that capacity.

In one experiment, researchers had subjects shift their gaze between two flashing squares on a computer screen while hooked up to an fMRI machine that gauged brain activity. Scientists expected to find a delay in the fMRI readings, given that our eyes move more quickly than our minds do. Instead, they found a seamless process: the brain compensated for its slower speed by predicting the position of the next flashing square. There was no delay between eye movements and brain activity, which shows the power of our predictive minds.
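A toy model makes the finding easier to picture. In the sketch below, a “reactive” observer reports whatever it finished processing an assumed 100 ms ago and so lags the stimulus, while a “predictive” observer extrapolates the learned rhythm forward and lands exactly on it. All timings and the alternation rule are invented for illustration, not the study’s actual data.

```python
# Two squares alternate on a fixed rhythm; the observer has a processing delay.
SQUARES = ["left", "right"]
FLASH_MS = 500      # square alternates every 500 ms (assumed)
DELAY_MS = 100      # assumed neural processing delay

def stimulus(t_ms: int) -> str:
    """Which square is actually flashing at time t."""
    return SQUARES[(t_ms // FLASH_MS) % 2]

def reactive(t_ms: int) -> str:
    """Reports what it finished processing DELAY_MS ago, so it lags."""
    return stimulus(max(0, t_ms - DELAY_MS))

def predictive(t_ms: int) -> str:
    """Processes old input but extrapolates the learned rhythm forward,
    cancelling the delay exactly."""
    return stimulus(max(0, t_ms - DELAY_MS) + DELAY_MS)

for t in range(450, 650, 50):
    print(t, "actual:", stimulus(t), "reactive:", reactive(t), "predictive:", predictive(t))
```

Right after each switch (for example at t = 500 ms), the reactive observer still reports the old square while the predictive one is already on the new one, which is the “no delay” signature the researchers saw.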

But our brains aren’t perfect. Say you send a quick text while at a red light. You put your phone down once the light changes to green. Your brain may still be processing the text instead of the road that’s ahead of you, which creates blind spots and slows reaction time.

This is also known as inattentional blindness.

Multitasking: A Myth?

Inattentional blindness shows why true multitasking is impossible. In reality, the brain can only concentrate on one thing at a time; anyone who claims to be good at multitasking isn’t. Our brains just switch between tasks very quickly, giving the illusion of multitasking.

This is actually a good thing. Our brains developed this way so that we could retain as much information as possible. When you’re multitasking, your brain ends up ignoring some of the information in your surroundings so it can try to filter what’s most important.
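A toy cost model shows why rapid switching is not the same as doing two things at once. The numbers below, a fixed amount of work per step plus a penalty every time attention changes task, are invented for illustration; real switch costs vary widely by task.

```python
def total_time(schedule, step_ms=50, switch_ms=200):
    """Sum per-step work plus a penalty whenever the task changes.
    step_ms and switch_ms are illustrative numbers, not measurements."""
    time, last = 0, None
    for task in schedule:
        if task != last:
            time += switch_ms  # refocusing isn't free
        time += step_ms
        last = task
    return time

steps = 10
sequential  = ["email"] * steps + ["report"] * steps   # finish one, then the other
interleaved = ["email", "report"] * steps              # "multitasking"

print("sequential :", total_time(sequential), "ms")    # 2 switches
print("interleaved:", total_time(interleaved), "ms")   # 20 switches
```

With these made-up numbers, interleaving the same twenty steps more than triples the total time, purely because of the extra switches.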

But too much stimulation can leave us unable to focus on anything at all. As technology weaves itself into more of our everyday lives, the opportunities for inattentional blindness multiply. Although many of us may view our physical and digital lives as one, we are always attending to one at the expense of the other.

The Future Of Human-AI Interactions

Programs like Siri and Google Assistant highlight the limitations of the human-AI interface. For example, Siri can read that text aloud for you since your hands are occupied with the delicate timing of flipping a pancake. But while you’re focused on the stove, you’ve completely missed what the text even said. Siri can note down that you owe Emily $12 for lunch while you’re crossing the street. But that’s only after you’ve realized that you were supposed to take a left instead of crossing.

Though Siri is often used to streamline things for frantic, stressed-out humans, she is held back by the limited abilities of our brains, specifically our inability to multitask. This means that as AI capabilities grow by leaps and bounds, AI could displace more and more human workers. For instance, an AI air traffic controller could efficiently juggle hundreds of flights in the air at any given time, assign runways to arrivals and departures, minimize the cost of operations (one algorithm versus an entire team of controllers) and reduce inefficiencies (like time wasted on the runway).
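To see why this kind of scheduling is such a natural fit for an algorithm, here is a deliberately simplified sketch that assigns each flight to the earliest-free runway. Real air traffic control must also respect separation minima, wake turbulence, gate conflicts and weather; the service time and flight data below are invented.

```python
import heapq

def assign_runways(flights, num_runways=2):
    """Greedy earliest-available-runway assignment.
    flights: list of (requested_time_min, flight_id) tuples.
    A real scheduler handles far more constraints; this ignores them all."""
    SERVICE_MIN = 2  # assumed runway occupancy per movement
    free_at = [(0, r) for r in range(num_runways)]  # (time runway frees up, runway id)
    heapq.heapify(free_at)
    plan = []
    for requested, flight in sorted(flights):
        ready, runway = heapq.heappop(free_at)       # earliest-free runway
        start = max(ready, requested)
        plan.append((flight, runway, start))
        heapq.heappush(free_at, (start + SERVICE_MIN, runway))
    return plan

demo = [(0, "UA12"), (1, "DL7"), (1, "AA301"), (3, "WN88")]
for flight, runway, t in assign_runways(demo):
    print(f"{flight}: runway {runway} at t={t} min")
```

The greedy rule here optimizes nothing fancier than idle time; the point is only that one algorithm tracks every runway and every flight simultaneously, with none of the attention bottlenecks a human controller has.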

To avoid mass layoffs (and the ensuing economic instability), AIs must work as partners, not replacements. This is already happening. Even as investment firms use AIs to trade thousands of stocks each second, humans are still dominant; while algorithms recognize patterns and mine data, humans set the parameters, draw conclusions and make the final decisions. AI may burn through reams of data and excel at multitasking, but it cannot pull off higher-level creative work as well as humans can.
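That division of labor can be sketched in a few lines: the algorithm scans every signal at machine speed, but the thresholds are human-set parameters and the final buy-or-pass call stays with a person. All names, scores and limits below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    ticker: str
    score: float  # pattern-match strength from some upstream model

# Human-set parameters: the algorithm never chooses these itself.
MIN_SCORE = 0.8
MAX_POSITION_USD = 50_000

def screen(signals):
    """Machine step: scan everything, flag what clears the human-set bar."""
    return [s for s in signals if s.score >= MIN_SCORE]

def human_review(flagged):
    """Human step: the final decision is queued for a person, not executed."""
    for s in flagged:
        print(f"Review {s.ticker} (score {s.score:.2f}), position cap ${MAX_POSITION_USD:,}")

human_review(screen([Signal("ACME", 0.91), Signal("XYZ", 0.42)]))
```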

So while our brains can’t quite keep up with AI in processing, we are unlikely to go the way of the dodo. At least until someone perfects artificial creativity.