Preventing AI From Divulging Its Own Secrets

A masking defense could stop neural networks from revealing their inner workings to adversaries

One of the sneakiest ways to spill the secrets of a computer system involves studying its pattern of power usage while it performs operations. That’s why researchers have begun developing ways to shield the power signatures of AI systems from prying eyes.

Among the AI systems most vulnerable to such attacks are machine-learning algorithms that help smart home devices or smart cars automatically recognize different types of images and sounds, such as spoken words or music. Such algorithms consist of neural networks designed to run on specialized computer chips embedded directly within smart devices, instead of inside a cloud computing server located in a data center miles away.

This physical proximity enables such neural networks to perform computations with minimal delay, but it also makes it easier for hackers to reverse-engineer the chip’s inner workings using a method known as differential power analysis.

“This is more of a threat for edge devices or Internet of Things devices, because an adversary can have physical access to them,” says Aydin Aysu, an assistant professor of electrical and computer engineering at North Carolina State University in Raleigh. “With physical access, you can then measure the power or you can look at the electromagnetic radiation.”

The North Carolina State University researchers have demonstrated what they describe as the first countermeasure for protecting neural networks against such differential-power-analysis attacks. They describe their methods in a preprint paper to be presented at the 2020 IEEE International Symposium on Hardware Oriented Security and Trust in San Jose, Calif., in early December.

Differential-power-analysis attacks have already proven effective against a wide variety of targets such as the cryptographic algorithms that safeguard digital information and the smart chips found in ATM cards or credit cards. But Aysu and his colleagues see neural networks as equally likely targets with possibly even more lucrative payoffs for hackers or commercial competitors at a time when companies are embedding AI systems in seemingly everything.

In their latest research, they focused on binarized neural networks, which have become popular as lean, simplified versions of neural networks capable of doing computations with fewer computing resources.
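
To make the arithmetic concrete, here is a minimal Python sketch of what a single neuron in such a network computes; it illustrates the general idea rather than the researchers' implementation, and the input and weight values are hypothetical:

```python
import numpy as np

def binarize(x):
    """Constrain values to +1/-1 by taking the sign (zero maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binarized_neuron(activations, weights):
    """Pre-activation of one neuron in a binarized layer.

    With activations and weights restricted to +1/-1, every multiply is
    just a sign-agreement check, which embedded hardware can implement
    with cheap XNOR and popcount operations instead of floating point.
    """
    return int(np.sum(binarize(activations) * binarize(weights)))

# Hypothetical example: eight inputs against eight secret weights.
acts = np.array([0.3, -1.2, 0.7, 0.1, -0.5, 0.9, -0.4, 0.2])
secret_weights = np.array([1.0, -0.8, -0.2, 0.6, 0.4, -1.1, 0.3, 0.5])
print(binarized_neuron(acts, secret_weights))
```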

The researchers started out by showing how an adversary can use power-consumption measurements to reveal the secret weight values that help determine a neural network’s computations. By repeatedly having the neural network run specific computational tasks with known input data, an adversary can eventually figure out the power patterns associated with the secret weight values. For example, this method revealed the secret weights of an unprotected binarized neural network by running just 200 sets of power-consumption measurements.
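
The toy sketch below illustrates the principle behind such an attack rather than the specific attack in the paper. It assumes a simplified leakage model in which the measured power draw tracks an input-times-weight intermediate value plus noise; correlating the measured traces against predictions made under each candidate value of a weight bit then reveals which candidate is the real one:

```python
import numpy as np

rng = np.random.default_rng(0)

# One secret weight bit inside the device that the attacker wants to recover.
secret_bit = 1
n_traces = 200                           # measurement budget cited in the article
inputs = rng.integers(0, 2, n_traces)    # known inputs chosen by the attacker

# Assumed leakage model: power draw tracks the intermediate value
# (input XNOR weight bit) plus Gaussian measurement noise.
intermediate = 1 - (inputs ^ secret_bit)
traces = intermediate + rng.normal(0.0, 0.5, n_traces)

def correlation_for_guess(guess):
    """Correlation between the measured traces and the intermediate values
    predicted under one candidate value of the secret weight bit."""
    predicted = 1 - (inputs ^ guess)
    return np.corrcoef(predicted, traces)[0, 1]

# The guess whose predictions best match the measured power is the secret bit.
scores = {guess: correlation_for_guess(guess) for guess in (0, 1)}
recovered = max(scores, key=scores.get)
print(scores, "-> recovered weight bit:", recovered)
```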

Next, Aysu and his colleagues developed a countermeasure to defend the neural network against such an attack. They adapted a technique known as masking by splitting intermediate computations into two randomized shares that are different each time the neural network runs the same intermediate computation. Those randomized shares get processed independently within the neural network and only recombine at the final step before producing a result.
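
The masking scheme in the paper is tailored to the operations of binarized networks; the generic sketch below only illustrates the share-splitting idea, using simple arithmetic masking modulo 256 on a hypothetical intermediate value:

```python
import numpy as np

rng = np.random.default_rng(1)

MODULUS = 256  # arithmetic masking over bytes, chosen here only for illustration

def mask(value):
    """Split a value into two random shares that recombine to it (mod 256)."""
    share1 = int(rng.integers(0, MODULUS))
    share2 = (value - share1) % MODULUS
    return share1, share2

def unmask(share1, share2):
    """Recombine the shares, done only at the final step of the computation."""
    return (share1 + share2) % MODULUS

# Hypothetical intermediate value produced inside a neuron's computation.
intermediate = 42

# Each run splits the same intermediate into a fresh random pair of shares,
# so the power signature of processing either share on its own carries no
# stable information about the underlying value.
for _ in range(3):
    s1, s2 = mask(intermediate)
    # ...the two shares would be processed along independent paths here...
    print(s1, s2, "->", unmask(s1, s2))
```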

That masking defense effectively prevents an adversary from using a single intermediate computation to analyze different power-consumption patterns. A binarized neural network protected by masking required the hypothetical adversary to perform 100,000 sets of power-consumption measurements instead of just 200.

“The defense is a concept that we borrowed from work on cryptography research and we augmented for securing neural networks,” Aysu says. “We use secure multiparty computation and randomize all intermediate computations to mitigate the attack.”

Such a defense is important because an adversary could steal a company’s intellectual property by figuring out the secret weight values of a neural network that forms the foundation of a particular machine-learning algorithm. Knowledge of a neural network’s inner workings could also enable adversaries to more easily launch adversarial machine-learning attacks that can confuse the neural network.
