


[Press Release] Making AI safer by introducing random noise that mimics brain neurons ――Injecting random noise into the hidden layers of deep neural networks reduces their vulnerability to some adversarial examples――


A research team consisting of Professor Kenichi OHKI and Jumpei UKITA (a graduate student at the time of the research) of the Department of Physiology, Division of Functional Biology, Graduate School of Medicine, the University of Tokyo, revealed that some of the vulnerabilities of deep neural networks can be mitigated by injecting random noise that mimics the variability of neurons in the brain.

Currently, artificial intelligence (AI) is evolving at an accelerating pace, and much of it is built on deep neural networks. Deep neural networks are known to be tricked by maliciously crafted inputs, known as adversarial attacks, into producing outputs that clearly differ from those of humans. For example, the image recognition AI in a self-driving car must correctly recognize a "stop" road sign as "stop" to bring the car to a halt. However, the AI may fail to correctly recognize a "stop" sign altered by an adversarial attack, even though it is clearly a "stop" sign to a human observer. As a result, the car may fail to stop, leading to a traffic accident. Vulnerability to adversarial attacks is thus one of the major challenges in deploying AI in society.
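As a rough illustration of how such an attack works (this is a textbook technique, not the specific attacks studied in the paper), the classic fast gradient sign method (FGSM) nudges each input dimension in the direction that increases the classifier's loss. A minimal sketch with NumPy, using a toy logistic-regression "classifier" with made-up weights:

```python
import numpy as np

# Toy stand-in for an image classifier: logistic regression with
# random fixed weights on a flattened 8x8 "image" (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps=0.1):
    """Fast Gradient Sign Method: perturb x to increase the loss.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y_true) * w, so each
    component of x is shifted by +/- eps along that gradient's sign.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=64)       # a "clean" input
x_adv = fgsm(x, y_true=1.0)   # a visually similar adversarial input
print(predict(x), predict(x_adv))  # the attack lowers the class-1 score
```

Even though each pixel moves by at most `eps`, the perturbation is aligned with the model's loss gradient, so the score for the true class drops sharply.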

Incorporating properties of animal brains, including the human brain, into AI may help overcome such vulnerabilities. The research team discovered that certain types of vulnerability can be reduced by introducing noise that mimics the randomness of neurons in the brain into deep neural networks. This method may also open the way to AI that more closely resembles the behavior of humans and other animals.
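A minimal sketch of the general idea (the paper's actual architectures, training, and noise parameters may differ): Gaussian noise is added to the hidden-layer activations of a small feed-forward network, mimicking the trial-to-trial variability of cortical neurons, so that each forward pass differs slightly from the deterministic one:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny two-layer network with random weights (illustrative only).
W1 = rng.normal(scale=0.5, size=(16, 8))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(3, 16))   # hidden -> logits

def forward(x, noise_std=0.0):
    """Forward pass; noise_std > 0 injects Gaussian noise into the
    hidden layer (feature space), mimicking neural variability."""
    h = np.maximum(0.0, W1 @ x)                        # ReLU hidden layer
    h = h + rng.normal(scale=noise_std, size=h.shape)  # feature-space noise
    return W2 @ h                                      # class logits

x = rng.normal(size=8)
clean = forward(x)                                  # deterministic output
noisy = [forward(x, noise_std=0.1) for _ in range(100)]
# Averaging many noisy passes recovers an output close to the clean one,
# while any single pass is slightly randomized, which can blunt
# perturbations tuned to the deterministic network.
print(clean, np.mean(noisy, axis=0))
```

The intuition is that an adversarial perturbation is finely tuned to one fixed set of activations; randomizing the feature space at inference makes that tuning less reliable.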

This research was supported by the Institute for AI and Beyond of the University of Tokyo, the Japan Agency for Medical Research and Development (AMED) "Project to Elucidate the Entire Functional Brain Network through Innovative Technology", the Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, and JST CREST.
The results were published online in Neural Networks on 16 September 2023.

Professor Ohki is the Project Leader for the Basic Research Project entitled “Development of next generation AI by modeling brain information” at the Institute for AI and Beyond.

・The full press release (in Japanese only) is available as a PDF.

・Published paper (Title: Adversarial attacks and defenses using feature-space stochasticity)
DOI 10.1016/j.neunet.2023.08.022

・Article in English at UTokyo Focus
