
How to Steal an AI Model Without Actually Hacking Anything

Artificial intelligence models can be surprisingly stealable—provided you somehow manage to sniff out the model’s electromagnetic signature. While repeatedly emphasizing that they do not, in fact, want to help people attack neural networks, researchers at North Carolina State University described such a technique in a new paper. All they needed was an electromagnetic probe, several pre-trained, open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method entails analyzing the electromagnetic radiation a TPU chip emits while it is actively running.

“It’s quite expensive to build and train a neural network,” said study lead author and NC State Ph.D. student Ashley Kurian in a call with Gizmodo. “It’s an intellectual property that a company owns, and it takes a significant amount of time and computing resources. For example, ChatGPT—it’s made of billions of parameters, which is kind of the secret. When someone steals it, ChatGPT is theirs. You know, they don’t have to pay for it, and they could also sell it.”

Theft is already a high-profile concern in the AI world. Usually, though, it’s the other way around: AI developers train their models on copyrighted works without permission from the humans who created them. That pattern has sparked lawsuits and even inspired tools to help artists fight back by “poisoning” art generators.

“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI processing behavior,” explained Kurian in a statement, calling it “the easy part.” But to decipher the model’s hyperparameters—its architecture and defining details—they had to compare the electromagnetic field data to data captured while other AI models ran on the same kind of chip.
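The paper does not publish attack code, but the comparison step Kurian describes can be pictured as a kind of template matching: score the captured trace against reference traces recorded from known models running on the same kind of chip, and see which one it most resembles. The Python sketch below is purely illustrative of that idea; the trace files, model names, and the plain correlation score are assumptions made for the example, not the researchers’ actual pipeline.

```python
import numpy as np

# Hypothetical reference traces: EM recordings captured while known,
# open-source models ran on the same kind of Edge TPU.
REFERENCE_TRACES = {
    "mobilenet_v2": "traces/mobilenet_v2.npy",
    "inception_v3": "traces/inception_v3.npy",
    "resnet_50": "traces/resnet_50.npy",
}

def normalize(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so traces of different amplitude compare fairly."""
    return (trace - trace.mean()) / (trace.std() + 1e-12)

def similarity(captured: np.ndarray, reference: np.ndarray) -> float:
    """Pearson correlation between two traces, truncated to equal length (higher = more alike)."""
    n = min(len(captured), len(reference))
    return float(np.corrcoef(normalize(captured[:n]), normalize(reference[:n]))[0, 1])

def best_match(captured: np.ndarray) -> tuple[str, float]:
    """Return the reference model whose EM 'signature' most resembles the captured trace."""
    scores = {
        name: similarity(captured, np.load(path))
        for name, path in REFERENCE_TRACES.items()
    }
    name = max(scores, key=scores.get)
    return name, scores[name]

if __name__ == "__main__":
    # Trace sniffed from the target chip (hypothetical file).
    captured = np.load("traces/unknown_model.npy")
    name, score = best_match(captured)
    print(f"Closest known signature: {name} (correlation {score:.3f})")
```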

In doing so, they “were able to determine the architecture and specific characteristics—known as layer details—we would need to make a copy of the AI model,” explained Kurian, who added that they could do so with “99.91% accuracy.” To pull this off, the researchers had physical access to the chip both for probing and running other models. They also worked directly with Google to help the company determine the extent to which its chips were attackable.
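Once those layer details are recovered, assembling an equivalent network is the mechanical part; an attacker would still need to obtain or retrain the weights. The sketch below is a rough illustration only, assuming a made-up dictionary format and layer values that do not come from the paper, to show how a recovered architecture description could be turned into a runnable Keras skeleton.

```python
import tensorflow as tf

# Hypothetical output of the side-channel analysis: one entry per layer,
# with the kind of "layer details" the researchers say they recover.
recovered_layers = [
    {"type": "conv2d", "filters": 32, "kernel": 3, "stride": 2, "activation": "relu"},
    {"type": "conv2d", "filters": 64, "kernel": 3, "stride": 1, "activation": "relu"},
    {"type": "pool",   "size": 2},
    {"type": "dense",  "units": 10, "activation": "softmax"},
]

def rebuild(layer_specs, input_shape=(224, 224, 3)) -> tf.keras.Model:
    """Instantiate a Keras model skeleton from the recovered layer descriptions."""
    layers = [tf.keras.Input(shape=input_shape)]
    for spec in layer_specs:
        if spec["type"] == "conv2d":
            layers.append(tf.keras.layers.Conv2D(
                spec["filters"], spec["kernel"], strides=spec["stride"],
                padding="same", activation=spec["activation"]))
        elif spec["type"] == "pool":
            layers.append(tf.keras.layers.MaxPooling2D(spec["size"]))
        elif spec["type"] == "dense":
            layers.append(tf.keras.layers.Flatten())
            layers.append(tf.keras.layers.Dense(spec["units"], activation=spec["activation"]))
    return tf.keras.Sequential(layers)

copy = rebuild(recovered_layers)
copy.summary()  # same architecture as the target; the weights still have to be obtained
```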

Kurian speculated that capturing models running on smartphones, for example, would also be possible — but their super-compact design would inherently make it trickier to monitor the electromagnetic signals.

“Side channel attacks on edge devices are nothing new,” Mehmet Sencan, a security researcher at AI standards nonprofit Atlas Computing, told Gizmodo. But this particular technique “of extracting entire model architecture hyperparameters is significant.” Because AI hardware “performs inference in plaintext,” Sencan explained, “anyone deploying their models on edge or in any server that is not physically secured would have to assume their architectures can be extracted through extensive probing.”
