AI versus Hackers

Bruce Draper bought a new car recently. The car has all the latest technology, but those bells and whistles bring benefits — and, more worryingly, some risks.

“It has all kinds of AI going on in there: lane assist, sign recognition, and all the rest,” Draper says, before adding: “You could imagine all that sort of thing being hacked — the AI being attacked.”

It’s a growing fear for many — could the often-mysterious AI algorithms, which are used to manage everything from driverless cars to critical infrastructure, healthcare, and more, be broken, fooled or manipulated?

What if a driverless car could be fooled into driving through stop signs, or an AI-powered medical scanner tricked into making the wrong diagnosis? What if an automated security system was manipulated to let the wrong person in, or maybe not even recognize there was ever a person there at all?

As we increasingly rely on automated systems to make decisions with huge potential consequences, we need to be sure that AI systems can't be fooled into making bad or even dangerous decisions. City-wide gridlock or interrupted essential services are just some of the more visible problems that could result from the failure of AI-powered systems. Other, harder-to-spot AI system failures could create even more problems.

During the past few years, we've placed more and more trust in the decisions made by AI, even if we can't understand how those decisions are reached. And now the concern is that the AI technology we're increasingly relying on could become the target of all-but-invisible attacks with very visible real-world consequences. And while these attacks are rare right now, experts expect many more to take place as AI becomes more common.

February 24, 2023

Written by Danny Palmer

Read the entire article on ZDNet
