What using security to regulate AI chips could look like

Researchers from OpenAI, the University of Cambridge, Harvard University, and the University of Toronto have offered “exploratory” ideas on how to regulate AI chips and hardware, and how security policies could prevent the abuse of advanced artificial intelligence.

The recommendations provide ways to measure and verify the development and use of advanced artificial intelligence systems and the chips that power them. Policy enforcement recommendations include limiting system performance and implementing security features that can remotely disable rogue chips.

“Training highly capable AI systems currently requires the accumulation and orchestration of thousands of AI chips,” the researchers wrote. “[I]f these systems are potentially dangerous, limiting the accumulated computing power could serve to limit the production of potentially dangerous AI systems.”

Governments have largely focused on software for AI policy, and the paper is a companion piece covering the hardware side of the debate, says Nathan Brookwood, principal analyst at Insight 64.

However, he warns, the industry will not welcome any security features that affect the performance of AI. Making AI safe through hardware “is a noble aspiration, but I don’t see anyone making it happen. The genie is out of the bottle, and good luck putting it back in,” he says.

Limiting connections between clusters

One of the researchers’ proposals is a cap on the computational processing capacity available to AI models. The idea is to put security measures in place that can identify abuse of artificial intelligence systems, then cut off or limit use of the chips.

Specifically, they suggest a targeted approach that limits the bandwidth between memory and chip clusters. The simpler alternative, cutting off access to the chips entirely, is not ideal because it would affect overall AI performance, the researchers wrote.

The document does not suggest ways to implement such security barriers or how abuse of AI systems could be detected.

“Determining the optimal bandwidth limit for external communication is an area that deserves further research,” the researchers wrote.

Large-scale AI systems require enormous network bandwidth, and AI supercomputers such as Microsoft’s Eagle and Nvidia’s Eos rank among the 10 fastest supercomputers in the world. There are ways to limit network performance for devices that support the P4 programming language, which can analyze network traffic and reconfigure routers and switches.
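To make the idea concrete, here is a minimal sketch in Python of a token-bucket bandwidth cap, the basic kind of policy a programmable switch could enforce on an inter-cluster link. It is purely illustrative: the class, rates, and packet sizes are hypothetical and are not drawn from the paper.

    # Illustrative token-bucket limiter: caps the bytes per second a link may
    # carry, the same kind of policy a programmable switch could apply in hardware.
    import time

    class TokenBucket:
        """Allow traffic up to rate_bytes_per_sec, with bursts up to burst_bytes."""

        def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last_refill = time.monotonic()

        def allow(self, packet_bytes: int) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, never exceeding capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False  # over the cap: the packet would be dropped or queued

    if __name__ == "__main__":
        # Cap a hypothetical inter-cluster link at 1 GB/s with a 10 MB burst allowance.
        link_limit = TokenBucket(rate_bytes_per_sec=1e9, burst_bytes=10e6)
        print(link_limit.allow(packet_bytes=8_000_000))  # True: within the burst budget
        print(link_limit.allow(packet_bytes=8_000_000))  # False: budget exhausted for now

A real deployment would express the same logic in the switch’s data plane (for example in P4) rather than in host software, so the cap could not simply be bypassed by the workload.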

But good luck asking chipmakers to implement AI security mechanisms that could slow down chips and networks, Brookwood says.

“Arm, Intel and AMD are all busy building the fastest, meanest chips they can build to be competitive. I don’t know how anyone can slow down,” he says.

Remote possibilities come with some risks

The researchers also suggested disabling chips remotely, a capability Intel has built into its latest server chips. On Demand is a subscription service that lets Intel customers turn on-chip features, such as AI extensions, on and off, much like heated seats in a Tesla.

The researchers also suggested an attestation scheme in which chips allow only authorized parties to access AI systems, using cryptographically signed digital certificates. Firmware could specify which users and applications are authorized, and those rules could be changed with updates.

While the researchers did not provide technical recommendations on how to do this, the idea is similar to how confidential computing secures applications on chips by attesting authorized users. Intel and AMD offer confidential computing on their chips, but it is still early days for the emerging technology.
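As a rough illustration of what certificate-based authorization could look like in software, here is a minimal Python sketch using the third-party cryptography package: an issuer signs a statement naming an authorized user, and a verifier accepts only statements whose signature checks out. The function names and statement format are hypothetical, and a real hardware attestation flow would be considerably more involved.

    # Hypothetical sketch of certificate-style authorization: an issuer signs a
    # statement naming an authorized user, and a verifier accepts only statements
    # carrying a valid signature. Requires the `cryptography` package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def issue_credential(issuer_key: Ed25519PrivateKey, user_id: str) -> tuple[bytes, bytes]:
        """Sign a statement authorizing user_id; return (statement, signature)."""
        statement = f"authorized-user:{user_id}".encode()
        return statement, issuer_key.sign(statement)

    def is_authorized(issuer_public_key, statement: bytes, signature: bytes) -> bool:
        """Accept the statement only if the issuer's signature verifies."""
        try:
            issuer_public_key.verify(signature, statement)
            return True
        except InvalidSignature:
            return False

    if __name__ == "__main__":
        issuer = Ed25519PrivateKey.generate()  # stands in for the attestation authority
        statement, signature = issue_credential(issuer, "lab-42")
        print(is_authorized(issuer.public_key(), statement, signature))  # True
        print(is_authorized(issuer.public_key(), b"authorized-user:someone-else", signature))  # False

In the researchers’ proposal, the verifying side would live in chip firmware, with the list of authorized users and applications updatable over time.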

There are also risks to enforcing policies remotely. “Remote enforcement mechanisms have significant disadvantages and can only be justified if the expected harm from the AI is extremely high,” the researchers wrote.

Brookwood agreed.

“Even if you could, there will be bad guys going after it. Putting artificial constraints on the good guys will be ineffective,” he said.


