Sound Identification System from Auditory Cortex by Using fMRI and Deep Learning: Study on Experimental Design for Capturing Brain Images

  • Conference paper
Advances in Artificial Intelligence, Software and Systems Engineering (AHFE 2020)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1213)


Abstract

This study aims to establish a technique for estimating sounds with deep learning from fMRI brain images captured while humans listen to sounds. Because the sounds humans actually hear are complex, containing mixed frequencies, we work toward a system that estimates complex sounds. We previously developed a system that identifies a single sound from brain images captured while a person hears that sound, using a convolutional neural network (CNN), but it does not handle complex sounds. We therefore focus on complex sounds and aim to develop a system that identifies them from brain images captured while a person hears them. Since identification results generally depend on the brain images used for identification, this report compares the block design and the event-related design, two fMRI experimental designs for capturing brain images, to understand how the stability of brain activity affects the images. The identification rates for two types of complex sounds were almost the same under both designs, and the stability of brain activity had no visible effect on the rates, so we decided to use the event-related design.
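
The abstract does not describe the network architecture, so the following is only a minimal sketch of the kind of CNN classifier it refers to: a small 3D convolutional network, written here in PyTorch, that maps an fMRI volume of the auditory cortex to one of a few sound classes. The input size (32x32x32 voxels), layer widths, and number of classes are illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' code): a small 3D CNN that classifies
# fMRI volumes of the auditory cortex into sound categories.
# Input shape, layer sizes, and number of classes are assumptions.
import torch
import torch.nn as nn

class SoundDecoderCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # single-channel fMRI volume
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64),   # infers its input size from the flattened features
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 4 hypothetical 32x32x32 auditory-cortex volumes,
# classified into 2 sound categories (e.g., two types of complex sound).
model = SoundDecoderCNN(n_classes=2)
logits = model(torch.randn(4, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```

In practice such a model would be trained on preprocessed brain images, one volume per stimulus presentation and labeled with the sound that was heard; the paper's question is how the acquisition design, block versus event-related, affects the resulting identification rates.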




Author information


Corresponding author

Correspondence to Jun Shinke.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Shinke, J., Shibata, K., Satoh, H. (2021). Sound Identification System from Auditory Cortex by Using fMRI and Deep Learning: Study on Experimental Design for Capturing Brain Images. In: Ahram, T. (eds) Advances in Artificial Intelligence, Software and Systems Engineering. AHFE 2020. Advances in Intelligent Systems and Computing, vol 1213. Springer, Cham. https://doi.org/10.1007/978-3-030-51328-3_6
