Kyushu University Researcher Information
Kazuo Ueda   Data updated: 2024.04.12

Associate Professor / Faculty of Design, Department of Acoustic Design, Perceptual Psychology


Main Research Themes
Perception of degraded speech
Keywords: checkerboard speech, interrupted speech, locally time-reversed speech, noise-vocoded speech, mosaic speech
2007.04.
Research on the irrelevant sound effect
Keywords: irrelevant sound effect, serial recall task, speech intelligibility
2009.10.
Perceptual restoration of interrupted speech
Keywords: interrupted speech, perceptual restoration
2015.09.
Multilingual comparison of the perception of locally time-reversed speech
Keywords: intelligibility, time reversal, temporal window
2012.12~2017.05.
Factor analysis of continuous speech passed through critical-band filters, and speech perception studied with noise-vocoded speech
Keywords: multiple languages, frequency bands, power fluctuations, noise-vocoded speech
2007.04.
Speech perception in noise and learning effects
Keywords: auditory psychology, noise robustness, signal-to-noise ratio, identification accuracy, second-language learning, non-native speakers
1999.04.
Short-term memory for speech and nonspeech sounds
Keywords: auditory psychology, short-term memory, pitch, serial recall, interference effects
1992.04.
Ongoing Research Projects
Brain rhythms and semantic comprehension in multilingual speech perception
2019.04~2024.03, Principal investigator: Kazuo Ueda, Kyushu University, Kyushu University (Japan)
There is a view that the neural pathways processing auditory information comprise two parallel systems: one operating with a short time window of about 40-50 ms and one operating with a long time window of about 100-200 ms. How these two systems are combined to support the understanding of speech meaning, however, remains unresolved. This project hypothesizes that the brain makes inferences by matching coarse information processed in the long time window against information processed in the short time window, interpreting the input so that the whole becomes consistent. The hypothesis is tested with stimuli in which temporal resolution (rhythm) is systematically manipulated in each of the four frequency bands commonly required for speech across languages. Clarifying the brain mechanisms of speech perception in this way should contribute to efficient compression methods for speech information, the development of cochlear implants, more efficient methods of language learning, and the understanding of speech perception disorders caused by brain abnormalities.
Irrelevant speech effects with locally time-reversed speech
2016.01~2019.06, Principal investigator: Kazuo Ueda, Kyushu University, Kyushu University (Japan)
The irrelevant speech effect was investigated with locally time-reversed speech, employing both native German and native Japanese participants. The results of the investigation have been published in the Journal of the Acoustical Society of America [Ueda, K., Nakajima, Y., Kattner, F., and Ellermeier, W. (2019). "Irrelevant speech effects with locally time-reversed speech: Native vs non-native language," J. Acoust. Soc. Am., 145(6), 3686-3694].
Perceptual restoration of interrupted speech
2015.09, Principal investigator: Kazuo Ueda, Kyushu University, Kyushu University, Japan.
Perception of locally time-reversed speech
2012.10~2017.05, Principal investigators: Kazuo Ueda and Wolfgang Ellermeier, Kyushu University and Technische Universität Darmstadt
Perception of locally time-reversed speech was investigated. A paper has been published (Ueda et al., 2017).
Irrelevant sound effects generated by degraded speech
2009.10~2019.06, Principal investigator: Wolfgang Ellermeier, Technische Universität Darmstadt
A series of experiments is planned to use results from Ueda & Nakajima (2008) in order to produce speech that is degraded in systematic ways. Specifically, noise-vocoded speech shall be produced that lacks the spectral fine structure of the original recording, and that permits the number of frequency channels used in the synthesis to be varied systematically. Furthermore, conventional critical-band based synthesis methods shall be compared with ones that partition the audible frequency range into 'meaningful' units as determined by Ueda, Nakajima & Araki (2009). Finally, input signals other than noise may be used for the vocoder. The stimuli thus generated may serve as interfering backgrounds in 'irrelevant speech' paradigms as studied by Ellermeier & Zimmer (1997) or Zimmer, Ghani & Ellermeier (2008). The results may elucidate what makes sounds 'speech-like' and which acoustical properties produce the greatest degree of memory impairment in the irrelevant sound paradigm. Simultaneously, they serve to validate the concept of speech-based auditory universals proposed by Ueda et al. (2009). Several oral presentations have been given (e.g., Ueda, Nakajima, Doumoto, Ellermeier, and Kattner, 2010; 2011; Ellermeier, Kattner, Ueda, Nakajima, and Doumoto, 2012). The first output of our collaboration in a refereed journal was published in the Journal of the Acoustical Society of America (Ellermeier, Kattner, Ueda, Doumoto, and Nakajima, 2015). The second output was published in the same journal (Ueda, Nakajima, Kattner, and Ellermeier, 2019).
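As a rough illustration of the noise-vocoding procedure described above, the sketch below band-pass filters speech into a few frequency channels, extracts each channel's amplitude envelope, and uses it to modulate band-limited noise. It is only a minimal sketch under assumed parameters (Butterworth filters, Hilbert envelopes, and the example band edges are placeholders, not the exact settings used in these studies); the four-band example follows the approximate boundaries reported in Ueda and Nakajima (2017), about 50, 540, 1700, 3300, and 7000 Hz.

```python
# Minimal noise-vocoder sketch (assumed parameters; not the exact
# procedure used in the studies described above).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, band_edges_hz):
    """Replace the fine structure in each band with envelope-modulated noise."""
    rng = np.random.default_rng(0)
    output = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                 # band-limited speech
        envelope = np.abs(hilbert(band))                # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
        output += envelope * carrier                    # envelope-modulated noise
    return output

# Example: a 4-band vocoder with the approximate factor-analysis-based
# boundaries reported in Ueda & Nakajima (2017): ~50, 540, 1700, 3300, 7000 Hz.
fs = 16000
speech = np.random.randn(fs)          # stand-in for a recorded sentence
vocoded = noise_vocode(speech, fs, [50, 540, 1700, 3300, 7000])
```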
Research Achievements
Selected Books
1. 上田和夫,他 14 名, 感覚知覚の心理学, 朝倉書店, 第 9 章,聴覚, pp. 128-144., 2023.04, [URL].
2. 上田和夫,他 9 名, 知覚と感覚の心理学,ライブラリ 心理学を学ぶ 2, サイエンス社, 第6章「聴覚:基礎編,6.2聴覚の仕組み,6.3聴覚における周波数分析」pp. 138-160,第6章「復習問題5-13,参考図書」pp. 166-167,第7章「聴覚:応用編,7.2聴覚コミュニケーション」pp. 175-193,「コラム7.2聴覚研究者の集まり,復習問題3-10,参考図書」pp. 197-199(全47頁),「聴覚デモンストレーション」ウェッブ・ページの作成, 2022.12, [URL].
3. 上田和夫, 知覚・認知心理学, 遠見書房, 公認心理師の基礎と実践 第7巻, 箱田裕司(編), 第4章 聴知覚, pp. 48--67., 2020.03.
4. 生物音響学会編, 上田 和夫, 生き物と音の事典, 朝倉書店, 「フィルター」1 頁,「臨界帯域」 1 頁,「言語の理解に必要な音声情報」 2 頁, 2019.11.
5. 日本音響学会編, 上田 和夫, 音響キーワードブック, コロナ社, 「無関連音効果」の項, pp. 410-411, 2016.03.
6. 中島 祥好, 佐々木隆之, 上田 和夫, REMIJN GERARD BASTIAAN, 聴覚の文法, コロナ社, 本文 159 ページ, CD-ROM 付き, 2014.03.
7. 中島祥好,上田和夫, 大学生の勉強マニュアル:フクロウ大学へようこそ, ナカニシヤ出版, 2006.03.
8. 日本音響学会編,上田和夫を含む 51 名, 音響用語辞典, コロナ社,東京, 音楽心理学,音楽理論に関する用語の執筆を担当., 2003.07.
9. Kの会編,上田和夫,他 13 名, 心理学の方法, ナカニシヤ出版, 京都, 執筆箇所(単著):第10章, 音声と記憶:聴覚心理学における実験例, pp. 141-158, 2002.12.
10. 大串健吾監訳, 上田和夫,他 8 名, 音楽の認知心理学, 誠信書房, 東京, 翻訳担当箇所:1 章, 音楽における情動と意味, 10 章, 熟練した音楽家による時間比率の知覚,生成,模倣. Rita Aiello ed. (1994). Musical Perceptions (Oxford, New York), 1998.03.
11. 平原達也 監訳, 上田和夫, 赤木正人 訳, ナチュラル・コンピュテーション2, パーソナルメディア社, 東京, 翻訳担当箇所:”6 壊れたのかバウンドしたのか?音で事象を判断することの心理物理学,” ”7 旋律の知覚”, "Sound interpretation," in Natural Computation, edited by W. Richards (MIT Press, 1988), 1994.07.
12. 大串健吾監訳, 上田和夫,他 3 名, 聴覚心理学概論, 誠信書房, 東京, 翻訳担当箇所:3 章§6-12, pp. 120-148, 5 章, pp. 171-209, 8 章, pp. 277-310, 用語集 pp. 363-367., An Introduction to the Psychology of Hearing, by B. C. J. Moore (Academic Press, 1989), 3rd ed., 1994.04.
Selected Original Papers
1. Kazuo Ueda, Masashi Hashimoto, Hiroshige Takeichi, and Kohei Wakamiya, Interrupted mosaic speech revisited: Gain and loss in intelligibility by stretching, The Journal of the Acoustical Society of America, 10.1121/10.0025132, 155, 3, 1767-1779, 2024.03, [URL], Our previous investigation on the effect of stretching spectrotemporally degraded and temporally interrupted speech stimuli showed remarkable intelligibility gains [Ueda, Takeichi, and Wakamiya (2022). J. Acoust. Soc. Am. 152(2), 970–980]. In this previous study, however, gap durations and temporal resolution were confounded. In the current investigation, we therefore observed the intelligibility of so-called mosaic speech while dissociating the effects of interruption and temporal resolution. The intelligibility of mosaic speech (20 frequency bands and 20 ms segment duration) declined from 95% to 78% and 33% by interrupting it with 20 and 80 ms gaps. Intelligibility improved, however, to 92% and 54% (14% and 21% gains for 20 and 80 ms gaps, respectively) by stretching mosaic segments to fill silent gaps (n = 21). By contrast, the intelligibility was impoverished to a minimum of 9% (7% loss) when stretching stimuli interrupted with 160 ms gaps. Explanations based on auditory grouping, modulation unmasking, or phonemic restoration may account for the intelligibility improvement by stretching, but not for the loss. The probability summation model accounted for “U”-shaped intelligibility curves and the gain and loss of intelligibility, suggesting that perceptual unit length and speech rate may affect the intelligibility of spectrotemporally degraded speech stimuli..
2. Kazuo Ueda, Linh Le Dieu Doan, and Hiroshige Takeichi, Checkerboard and interrupted speech: Intelligibility contrasts related to factor-analysis-based frequency bands, The Journal of the Acoustical Society of America, 10.1121/10.0021165, 154, 4, 2010-2020, 2023.10, [URL], It has been shown that the intelligibility of checkerboard speech stimuli, in which speech signals were periodically interrupted in time and frequency, drastically varied according to the combination of the number of frequency bands (2–20) and segment duration (20–320 ms). However, the effects of the number of frequency bands between 4 and 20 and the frequency division parameters on intelligibility have been largely unknown. Here, we show that speech intelligibility was lowest in four-band checkerboard speech stimuli, except for the 320-ms segment duration. Then, temporally interrupted speech stimuli and eight-band checkerboard speech stimuli came in this order (N = 19 and 20). At the same time, U-shaped intelligibility curves were observed for four-band and possibly eight-band checkerboard speech stimuli. Furthermore, different parameters of frequency division resulted in small but significant intelligibility differences at the 160- and 320-ms segment duration in four-band checkerboard speech stimuli. These results suggest that factor-analysis-based four frequency bands, representing groups of critical bands correlating with each other in speech power fluctuations, work as speech cue channels essential for speech perception. Moreover, a probability summation model for perceptual units, consisting of a sub-unit process and a supra-unit process that receives outputs of the speech cue channels, may account for the U-shaped intelligibility curves..
3. Kazuo Ueda, Hiroshige Takeichi, and Kohei Wakamiya, Auditory grouping is necessary to understand interrupted mosaic speech stimuli, The Journal of the Acoustical Society of America, 10.1121/10.0013425, 152, 2, 970-980, 2022.08, [URL], The intelligibility of interrupted speech stimuli has been known to be almost perfect when segment duration is shorter than 80 ms, which means that the interrupted segments are perceptually organized into a coherent stream under this condition. However, why listeners can successfully group the interrupted segments into a coherent stream has been largely unknown. Here we show that the intelligibility for mosaic speech, in which original speech was segmented in frequency and time, and noise-vocoded with the average power in each unit, was largely reduced by periodical interruption. At the same time, the intelligibility could be recovered by promoting auditory grouping of the interrupted segments with stretching the segments up to 40 ms and reducing the gaps, provided that the number of frequency bands was enough (>= 4) and the original segment duration was equal to or less than 40 ms. The interruption was devastating for mosaic speech stimuli, very likely because the deprivation of periodicity and temporal fine structure with mosaicking prevented successful auditory grouping for the interrupted segments..
4. Hikaru Eguchi, Kazuo Ueda, Gerard B. Remijn, Yoshitaka Nakajima, and Hiroshige Takeichi, The common limitations in auditory temporal processing for Mandarin Chinese and Japanese, Scientific Reports, 10.1038/s41598-022-06925-x, 12, 1, 3002-3002, Article number: 3002, 2022.02, [URL], The present investigation focused on how temporal degradation affected intelligibility in two types of languages, i.e., a tonal language (Mandarin Chinese) and a non-tonal language (Japanese). The temporal resolution of common daily-life sentences spoken by native speakers was systematically degraded with mosaicking (mosaicising), in which the power of original speech in each of regularly spaced time-frequency unit was averaged and temporal fine structure was removed. The results showed very similar patterns of variations in intelligibility for these two languages over a wide range of temporal resolution, implying that temporal degradation crucially affected speech cues other than tonal cues in degraded speech without temporal fine structure. Specifically, the intelligibility of both languages maintained a ceiling up to about the 40-ms segment duration, then the performance gradually declined with increasing segment duration, and reached a floor at about the 150-ms segment duration or longer. The same limitations for the ceiling performance up to 40 ms appeared for the other method of degradation, i.e., local time-reversal, implying that a common temporal processing mechanism was related to the limitations. The general tendency fitted to a dual time-window model of speech processing, in which a short (~ 20–30 ms) and a long (~ 200 ms) time-window run in parallel..
5. Ueda, K., Kawakami, R., and Takeichi, H., Checkerboard speech vs interrupted speech: Effects of spectrotemporal segmentation on intelligibility, JASA Express Letters, 10.1121/10.0005600, 1, 7, 1-7, 075204, 2021.07, [URL], The intelligibility of interrupted speech (interrupted over time) and checkerboard speech (interrupted over time-by-frequency), both of which retained a half of the original speech, was examined. The intelligibility of interrupted speech stimuli decreased as segment duration increased. 20-band checkerboard speech stimuli brought nearly 100% intelligibility irrespective of segment duration, whereas, with 2 and 4 frequency bands, a trough of 35%-40% appeared at the 160-ms segment duration. Mosaic speech stimuli (power was averaged over a time-frequency unit) yielded generally poor intelligibility (less than 10%), except for the stimuli with the finest resolution (20 frequency bands and 20-ms segment duration).
6. Kazuo Ueda and Ikuo Matsuo, Intelligibility of chimeric locally time-reversed speech: Relative contribution of four frequency bands, JASA Express Letters, 10.1121/10.0005439, 1, 6, 1-6, 065201, 2021.06, [URL], Intelligibility of 4-band speech stimuli was investigated (n = 18), such that only one of the frequency bands was preserved, whereas other bands were locally time-reversed (segment duration: 75-300 ms), or vice versa. Intelligibility was best retained (82% at 75 ms) when the second lowest band (540-1700 Hz) was preserved. When the same band was degraded, the largest drop (10% at 300 ms) occurred. The lowest and second highest bands contributed similarly less strongly to intelligibility. The highest frequency band contributed least. A close connection between the second lowest frequency band and sonority was suggested.
7. Kazuo UEDA and Valter CIOCCA, Phonemic restoration of interrupted locally time-reversed speech: Effects of segment duration and noise levels, Attention, Perception, & Psychophysics, 10.3758/s13414-021-02292-3, 83, 5, 1928-1934, Published online on 14 April 2021. Published in the completed journal issue on 19 June 2021., 2021.06, [URL], Intelligibility of temporally degraded speech was investigated with locally time-reversed speech (LTR) and its interrupted version (ILTR). Control stimuli comprising interrupted speech (I) were also included. Speech stimuli consisted of 200 Japanese meaningful sentences. In interrupted stimuli, speech segments were alternated with either silent gaps or pink noise bursts. The noise bursts had a level of -10, 0 or +10 dB relative to the speech level. Segment duration varied from 20 to 160 ms for ILTR sentences, but was fixed at 160 ms for I sentences. At segment durations between 40 and 80 ms, severe reductions in intelligibility were observed for ILTR sentences, compared with LTR sentences. A substantial improvement in intelligibility (30-33%) was observed when 40-ms silent gaps in ILTR were replaced with 0- and +10-dB noise. Noise with a level of -10 dB had no effect on the intelligibility. These findings show that the combined effects of interruptions and temporal reversal of speech segments on intelligibility are greater than the sum of each individual effect. The results also support the idea that illusory continuity induced by high-level noise bursts improves the intelligibility of ILTR and I sentences.
8. Kazuo Ueda, Yoshitaka Nakajima, Florian Kattner, and Wolfgang Ellermeier, Irrelevant speech effects with locally time-reversed speech: Native vs non-native language, The Journal of the Acoustical Society of America, 10.1121/1.5112774, 145, 6, 3686-3694, 2019.06, [URL], Irrelevant speech is known to interfere with short-term memory of visually presented items. Here, this irrelevant speech effect was studied with a factorial combination of 3 variables: the participants' native language, the language the irrelevant speech was derived from, and the playback direction of the irrelevant speech. We used locally time-reversed speech as well to disentangle the contributions of local and global integrity. German and Japanese speech was presented to German (n = 79) and Japanese (n = 81) participants while they were performing a serial-recall task. In both groups, any kind of irrelevant speech impaired recall accuracy as compared to a pink-noise control condition. When the participants' native language was presented, normal speech and locally time-reversed speech with short segment duration, preserving intelligibility, was the most disruptive. Locally time-reversed speech with longer segment durations and normal or locally time-reversed speech played entirely backward, both lacking intelligibility, was less disruptive. When unfamiliar, incomprehensible signal was presented as irrelevant speech, no significant difference was found between locally time-reversed speech and its globally inverted version, suggesting that the effect of global inversion depends on the familiarity of the language..
9. Kazuo Ueda, Tomoya Araki, Yoshitaka Nakajima, Frequency specificity of amplitude envelope patterns in noise-vocoded speech, Hearing Research, 10.1016/j.heares.2018.06.005, 367, 169-181, 2018.08, [URL], We examined the frequency specificity of amplitude envelope patterns in 4 frequency bands, which universally appeared through factor analyses applied to power fluctuations of critical-band filtered speech sounds in 8 different languages/dialects [Ueda and Nakajima (2017). Sci. Rep., 7 (42468)]. A series of 3 perceptual experiments with noise-vocoded speech of Japanese sentences was conducted. Nearly perfect (92–94%) mora recognition was achieved, without any extensive training, in a control condition in which 4-band noise-vocoded speech was employed (Experiments 1–3). Blending amplitude envelope patterns of the frequency bands, which resulted in reducing the number of amplitude envelope patterns while keeping the average spectral levels unchanged, revealed a clear deteriorating effect on intelligibility (Experiment 1). Exchanging amplitude envelope patterns brought generally detrimental effects on intelligibility, especially when involving the 2 lowest bands (≲1850 Hz; Experiment 2). Exchanging spectral levels averaged in time had a small but significant deteriorating effect on intelligibility in a few conditions (Experiment 3). Frequency specificity in low-frequency-band envelope patterns thus turned out to be conspicuous in speech perception..
10. Yoshitaka Nakajima, Mizuki Matsuda, Kazuo Ueda, and Gerard B. Remijn, Temporal Resolution Needed for Auditory Communication: Measurement with Mosaic Speech, Frontiers in Human Neuroscience, 10.3389/fnhum.2018.00149, 12, 149, 2018.04, [URL], Temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners' intelligibility scores of Japanese speech with a systematically degraded temporal resolution, so-called "mosaic speech": speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible with a temporal resolution of 40 ms or finer. Intelligibility dropped for a temporal resolution of 80 ms, but was still around 50%-correct level. The data are in line with previous results on speech signals separated into short temporal segments.
11. Kazuo UEDA, Yoshitaka NAKAJIMA, Wolfgang ELLERMEIER, Florian KATTNER, Intelligibility of locally time-reversed speech: A multilingual comparison, Scientific Reports, 10.1038/s41598-017-01831-z, 7, doi:10.1038/s41598-017-01831-z, 2017.05, [URL], A set of experiments was performed to make a cross-language comparison of intelligibility of locally time-reversed speech, employing a total of 117 native listeners of English, German, Japanese, and Mandarin Chinese. The experiments enabled to examine whether the languages of three types of timing---stress-, syllable-, and mora-timed languages---exhibit different trends in intelligibility, depending on the duration of the segments that were temporally reversed. The results showed a strikingly similar trend across languages, especially when the time axis of segment duration was normalised with respect to the deviation of a talker's speech rate from the average in each language.
This similarity is somewhat surprising given the systematic differences in vocalic proportions characterising the languages studied which had been shown in previous research and were largely replicated with the present speech material. These findings suggest that a universal temporal window shorter than 20-40 ms plays a crucial role in perceiving locally time-reversed speech by working as a buffer in which temporal reorganisation can take place with regard to lexical and semantic processing.
12. Yoshitaka NAKAJIMA, Kazuo UEDA, Shota FUJIMARU, Hirotoshi MOTOMURA, Yuki OHSAKA, English phonology and an acoustic language universal, Scientific Reports, 10.1038/srep46049, 7, 46049, 1-6, doi: 10.1038/srep46049, 2017.04, [URL], Acoustic analyses of eight different languages/dialects had revealed a language universal: Three spectral factors consistently appeared in analyses of power fluctuations of spoken sentences divided by critical-band filters into narrow frequency bands. Examining linguistic implications of these factors seems important to understand how speech sounds carry linguistic information. Here we show the three general categories of the English phonemes, i.e., vowels, sonorant consonants, and obstruents, to be discriminable in the Cartesian space constructed by these factors: A factor related to frequency components above 3,300 Hz was associated only with obstruents (e.g., /k/ or /z/), and another factor related to frequency components around 1,100 Hz only with vowels (e.g., /a/ or /i/) and sonorant consonants (e.g., /w/, /r/, or /m/). The latter factor highly correlated with the hypothetical concept of sonority or aperture in phonology. These factors turned out to connect the linguistic and acoustic aspects of speech sounds systematically..
13. Kazuo UEDA, Yoshitaka NAKAJIMA, An acoustic key to eight languages/dialects: Factor analyses of critical-band-filtered speech, Scientific Reports, doi: 10.1038/srep42468, 7, 42468, 1-4, doi: 10.1038/srep42468, 2017.02, [URL], The peripheral auditory system functions like a frequency analyser, often modelled as a bank of non-overlapping band-pass filters called critical bands; 20 bands are necessary for simulating frequency resolution of the ear within an ordinary frequency range of speech (up to 7,000 Hz). A far smaller number of filters seemed sufficient, however, to re-synthesise intelligible speech sentences with power fluctuations of the speech signals passing through them; nevertheless, the number and frequency ranges of the frequency bands for efficient speech communication are yet unknown. We derived four common frequency bands---covering approximately 50--540, 540--1,700, 1,700--3,300, and above 3,300 Hz---from factor analyses of spectral fluctuations in eight different spoken languages/dialects. The analyses robustly led to three factors common to all languages investigated---the low & mid-high factor related to the two separate frequency ranges of 50--540 and 1,700--3,300 Hz, the mid-low factor the range of 540--1,700 Hz, and the high factor the range above 3,300 Hz---in these different languages/dialects, suggesting a language universal..
14. Takuya KISHIDA, Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard Remijn, Three Factors Are Critical in Order to Synthesize Intelligible Noise-Vocoded Japanese Speech, Front. Psychol., 26 April 2016, http://dx.doi.org/10.3389/fpsyg.2016.00517, 7, 517, 1-9, 2016.04, [URL].
15. Wolfgang Ellermeier, Florian Kattner, Kazuo UEDA, Kana Doumoto, Yoshitaka NAKAJIMA, Memory disruption by irrelevant noise-vocoded speech: Effects of native language and the number of frequency bands, the Journal of the Acoustical Society of America, http://dx.doi.org/10.1121/1.4928954, 138, 3, 1561-1569, 2015.09, [URL], To investigate the mechanisms by which unattended speech impairs short-term memory performance, speech samples were systematically degraded by means of a noise vocoder. For experiment 1, recordings of German and Japanese sentences were passed through a filter bank dividing the spectrum between 50 and 7000 Hz into 20 critical-band channels or combinations of those, yielding 20, 4, 2, or just 1 channel(s) of noise-vocoded speech. Listening tests conducted with native speakers of both languages showed a monotonic decrease in speech intelligibility as the number of frequency channels was reduced. For experiment 2, 40 native German and 40 native Japanese participants were exposed to speech processed in the same manner while trying to memorize visually presented sequences of digits in the correct order. Half of each sample received the German, the other half received the Japanese speech samples. The results show large irrelevant-speech effects increasing in magnitude with the number of frequency channels. The effects are slightly larger when subjects are exposed to their own native language. The results are neither predicted very well by the speech transmission index, nor by psychoacoustical fluctuation strength, most likely, since both metrics fail to disentangle amplitude and frequency modulations in the signals.
(C) 2015 Acoustical Society of America.
16. Kazuo Ueda, Reiko Akahane-Yamada, Ryo Komaki, and Takahiro Adachi, Identification of English /r/ and /l/ in noise: the effects of baseline performance, Acoustical Science and Technology, 28 (4) 251-259, 2007.07.
17. Ueda, K., Short-term auditory memory interference: the Deutsch demonstration revisited, Acoustical Science and Technology, vol. 25, no. 6, 457-467, 2004.11.
18. Ueda, K., and Akagi, M., Sharpness and amplitude envelopes of broadband noise, Journal of the Acoustical Society of America, vol. 87, no. 2, 814-819, 1990.02.
19. Ueda, K., and Ohgushi, K., Perceptual components of pitch: Spatial representation using a multidimensional scaling technique, Journal of the Acoustical Society of America, vol. 82, no. 4, 1193-1200, 1987.10.
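Several of the papers above (e.g., items 1, 3, 4, and 10) use "mosaic speech," in which the power of the original signal within each regularly spaced time-frequency unit is averaged and the temporal fine structure is removed. The following is a minimal sketch of that manipulation under assumed parameters (the filter bank, band edges, and unit size are placeholders rather than the exact procedure used in those studies).

```python
# Minimal sketch of "mosaic speech": the power of the original speech in
# each regularly spaced time-frequency unit is averaged, and temporal
# fine structure is replaced by noise (assumed parameters throughout).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mosaicize(speech, fs, band_edges_hz, segment_ms):
    rng = np.random.default_rng(0)
    seg_len = int(round(fs * segment_ms / 1000.0))
    output = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        for start in range(0, len(speech), seg_len):
            seg = slice(start, min(start + seg_len, len(speech)))
            rms = np.sqrt(np.mean(band[seg] ** 2))        # average power in the unit
            unit = carrier[seg]
            unit_rms = np.sqrt(np.mean(unit ** 2)) + 1e-12
            output[seg] += unit * (rms / unit_rms)        # flat-envelope noise at that power
    return output

fs = 16000
speech = np.random.randn(2 * fs)                # stand-in for a recorded sentence
mosaic = mosaicize(speech, fs, [50, 540, 1700, 3300, 7000], segment_ms=40)
```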
Selected Reviews, Commentaries, Book Reviews, and Reports
1. Alexandra J. Wolf, Kazuo Ueda, and Yoji Hirano, Recent Updates of Eye-movement Abnormalities in Patients with Schizophrenia: A Scoping Review, Psychiatry and Clinical Neurosciences, 10.1111/pcn.13188, 2020.12, [URL], Aim: Although eye-tracking technology expands beyond capturing eye data just for the sole purpose of ensuring participants maintain their gaze at the presented fixation cross, gaze technology remains of less importance in clinical research. Recently, impairments in visual information encoding processes indexed by novel gaze metrics have been frequently reported in patients with schizophrenia. This work undertakes a scoping review of research on saccadic dysfunctions and exploratory eye movement deficits among patients with schizophrenia. It gathers promising pieces of evidence of eye movement abnormalities in attention-demanding tasks on the schizophrenia spectrum that have mounted in recent years and their outcomes as potential biological markers. Methods: The protocol was drafted based on PRISMA for scoping review guidelines. Electronic databases were systematically searched to identify articles published between 2010 and 2020 that examined visual processing in patients with schizophrenia and reported eye movement characteristics as potential biomarkers for this mental illness. Results: The use of modern eye-tracking instrumentation has been reported by numerous neuroscientific studies to successfully and non-invasively improve the detection of visual information processing impairments among the screened population at risk of and identified with schizophrenia. Conclusions: Eye-tracking technology has the potential to contribute to the process of early intervention and more apparent separation of the diagnostic entities, being put together by the syndrome-based approach to the diagnosis of schizophrenia. However, context-processing paradigms should be conducted and reported in equally accessible publications to build comprehensive models..
2. Nakajima, Y., Ueda, K., Remijn, G. B., Yamashita, Y., and Kishida, T., How sonority appears in speech analyses, Acoustical Science and Technology, 39(3), 179-181, 10.1250/ast.39.179; Japanese translation: 中島祥好,上田和夫,G. B. レメイン,山下友子,岸田拓也 "音声分析に現れる鳴音性" 日本音響学会誌 74 巻 2 号 93-96, 2018.05.
3. 上田和夫,鮫島俊哉, 卒業研究評価に見られるバイアス, 芸術工学研究:九州大学大学院芸術工学研究院紀要, Vol. 16, (2012), pp. 33-38, (refereed), 2012.03, The present article focuses on a persistent bias observed in evaluating graduation theses and oral presentations. Inspections of score distributions marked by chairs and deputy-chairs revealed that the same person exhibited opposite directions of bias, depending on the role given to the person: chairs generally scored higher than deputy-chairs, with a smaller variance. This tendency has been confirmed with observations from the two most recent years of practice. Thus, it is fair to employ both chairs and deputy-chairs in the evaluation, despite the extra effort required of staff members. Moreover, this built-in antagonistic system also activated scientific discussions at oral presentations, which actually gave a favorable impression to some attending students.
4. 上田和夫,鮫島俊哉, 卒業研究評価法の比較, 芸術工学研究, Vol. 13, 57-61(査読有り)., 2010.12.
5. 中島祥好,上田和夫, たのしい資質開発:フクロウ大学の試み, 芸術工学研究:九州大学大学院芸術工学研究院紀要, 6 巻 91-99, 2006.12.
6. 上田和夫,中島祥好, 聴覚情報処理のフロンティア研究と情報通信技術への応用 [I] --聴覚体制化と聴覚情景分析--, 電子情報通信学会誌, 89 巻 9 号 842-847, 2006.09.
Selected Conference Presentations
1. 棟近光太郎,上田和夫,竹市博臣,蓮尾絵美,Gerard B. Remijn, 市松音声の市松雑音マスキング:変調指数スペクトルによる分析, 日本音響学会第 151 回(2024 年春季)研究発表会, 2024.03.
2. Kazuo Ueda, Linh Le Dieu Doan, and Hiroshige Takeichi, Checkerboard and interrupted speech: Critical intelligibility differences observed in factor-analysis-based checkerboard speech stimuli, Acoustics 2023 Sydney, the 185th Meeting of the Acoustical Society of America, 2023.12.
3. Kazuo Ueda, Masashi Hashimoto, Hiroshige Takeichi, and Kohei Wakamiya, Interrupted mosaic speech revisited: Gain and loss of stretching on intelligibility, Acoustics 2023 Sydney, the 185th Meeting of the Acoustical Society of America, 2023.12.
4. Jun Hasegawa, Kazuo Ueda, Hiroshige Takeichi, Gerard B. Remijn, and Emi Hasuo, Selective Listening in Checkerboard and Interrupted Speech Stimuli with Two Talkers, Acoustics 2023 Sydney, the 185th Meeting of the Acoustical Society of America, 2023.12.
5. Kazuo Ueda, Masashi Hashimoto, Hiroshige Takeichi, and Kohei Wakamiya, Interrupted mosaic speech revisited: A curious biphasic effect of stretching on intelligibility, Fechner Day 2023: The 39th Annual Meeting for the International Society for Psychophysics, 2023.09.
6. Koutaro Munechika, Kazuo Ueda, Hiroshige Takeichi, and Gerard B. Remijn, Phonemic restoration and energetic masking with checkerboard speech stimuli: Effects of noise-filling on intelligibility, Fechner Day 2023: The 39th Annual Meeting for the International Society for Psychophysics, 2023.09.
7. Kazuo Ueda, Hiroshige Takeichi, and Kohei Wakamiya, Auditory grouping is necessary to understand interrupted mosaic speech stimuli, Acoustical Society of America, P&P Virtual Journal Club, 2022.09, [URL], The intelligibility of interrupted speech stimuli has been known to be almost perfect when segment duration is shorter than 80 ms, which means that the interrupted segments are perceptually organized into a coherent stream under this condition. However, why listeners can successfully group the interrupted segments into a coherent stream has been largely unknown. Here, we show that the intelligibility for mosaic speech in which original speech was segmented in frequency and time and noise-vocoded with the average power in each unit was largely reduced by periodical interruption. At the same time, the intelligibility could be recovered by promoting auditory grouping of the interrupted segments by stretching the segments up to 40 ms and reducing the gaps, provided that the number of frequency bands was enough (>= 4) and the original segment duration was equal to or less than 40 ms. The interruption was devastating for mosaic speech stimuli, very likely because the deprivation of periodicity and temporal fine structure with mosaicking prevented successful auditory grouping for the interrupted segments..
8. Kazuo Ueda, Riina Kawakami, Hiroshige Takeichi, Checkerboard Speech: A New Experimental Paradigm for Investigating Speech Perception, Fechner Day 2021: The 57th Annual Meeting of the International Society for Psychophysics, 2021.10, [URL].
9. Wolf, A., Ueda, K., and Hirano, Y., Eye movement abnormalities among patients with schizophrenia, Fechner Day 2021: The 57th Annual Meeting of the International Society for Psychophysics, 2021.10, [URL].
10. Zhang, Y., Nakajima, Y., Ueda, K., and Remijn, G. B., Acoustic correlates of English consonant-vowel-consonant (CVC) words obtained with multivariate analysis, Fechner Day 2021: The 57th Annual Meeting of the International Society for Psychophysics, 2021.10, [URL].
11. Nakajima, Y., Onaka, T., Oyama, A., Ueda, K., and Remijn, G. B., Temporal and frequency resolution needed for auditory communication: Comparison between young and senior listeners utilizing mosaic speech, Fechner Day 2021: The 57th Annual Meeting of the International Society for Psychophysics, 2021.10, [URL].
12. 棟近光太郎,上田和夫,松尾行雄,竹市博臣, Gerard B. Remijn, 局部時間反転キメラ音声の了解度に影響を及ぼす周波数帯:実験および誤答分析, 日本音響学会秋季研究発表会, 2021.09.
13. 川上 里以菜,上田 和夫,竹市 博臣, 市松音声の知覚:聴覚体制化における断続音声との違い, 日本音響学会秋季研究発表会, 2021.09.
14. Kazuo UEDA, Riina KAWAKAMI, Hiroshige TAKEICHI, Checkerboard speech, The 52nd Perceptual Frontier Seminar: Non-Invasive Exploration of the Brain with Visual, Tactile, and Auditory Stimuli, 2021.05, "Checkerboard speech" is a kind of degraded speech discarding 50% of original speech, to study spectrotemporal characteristics of speech perception. Here we show that 20-band checkerboard speech maintained nearly 100% intelligibility irrespective of segment duration in the range from 20 to 320 ms, whereas 2- and 4-band checkerboard speech showed a trough of 35% to 40% intelligibility between the segment durations of 80 and 160 ms (n = 2 and n = 20), and that mosaicked checkerboard speech stimuli showed less than 10% intelligibility except for the stimuli with the finest resolution (20 frequency bands and 20-ms segment duration). The results suggest close connections with the modulation power spectrums of the stimuli, a spectrotemporal interaction in speech perception, and perceptual cue integration based on temporal fine structure..
15. Kazuo UEDA, Riina KAWAKAMI, and Hiroshige TAKEICHI, Combined Effects of Temporal and Spectral Segmentation on Intelligibility of Degraded Speech, 日本音響学会聴覚研究会, 2021.05, [URL].
16. Kazuo Ueda, Valter Ciocca, Gerard B. Remijn, and Yoshitaka Nakajima, Perceptual restoration of interrupted locally time-reversed speech: Effects of noise levels and segment duration, Fechner Day 2019: the 35th Annual Meeting of the International Society for Psychophysics, 2019.10.
17. Kazuo UEDA, Yoshitaka NAKAJIMA, Shunsuke TAMURA, Wolfgang Ellermeier, Florian Kattner, Stephan Daebler, The effect of segment duration on the intelligibility of locally time-reversed speech: A multilingual comparison, The 31st Annual Meeting of the International Society for Psychophysics, Fechner Day 2015, 2015.08.
18. Kazuo Ueda, Yoshitaka Nakajima, and Yu'ichi Satsukawa, Effects of frequency-band elimination on syllable identification of Japanese noise-vocoded speech: Analysis of confusion matrices, Fechner Day '10: the 26th Annual Meeting of the International Society for Psychophysics, 2010.10.
19. Kazuo Ueda, Tomoya Araki, and Yoshitaka Nakajima, The effect of amplitude envelope coherence across frequency bands on the quality of noise-vocoded speech, EURONOISE 2009, 2009.10.
20. Kazuo Ueda and Yoshitaka Nakajima, A consistent clustering of power fluctuations in British English, French, German, and Japanese, ASJ meeting of Hearing Research, 2008.12.
21. 上田和夫,中島祥好, 臨界帯域フィルターを通した音声の因子分析:日英仏独語に共通する因子構造, 日本音響学会秋季研究発表会, 2008.09.
22. Kazuo Ueda, Yoshitaka Nakajima, A critical-band-filtered analysis of Japanese speech sentences, The 24th Annual Meeting of the International Society for Psychophysics, 2008.08.
23. Kazuo Ueda, Yoshitaka Nakajima, Factor analyses of critical-band-filtered speech of British English and Japanese, Acoustics'08 Paris, 2008.07.
24. Kazuo Ueda and Yoshitaka Nakajima, Principal component analyses of critical-band-filtered speech, The 2nd International Symposium on Design of Artificial Environments, 2007.11.
25. 上田和夫, 中島祥好, 臨界帯域フィルターを通した音声の主成分分析:イギリス英語の例, 日本音響学会秋季研究発表会, 2007.09.
26. Ueda, K., Yamada, R. A., Komaki, R., and Adachi, T., English /r/ and /l/ identification by native and non-native listeners in noise: applying screening text, signal-to-noise ratio variation, and training, 日本音響学会聴覚研究会, 2005.12.
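The factor analyses of critical-band-filtered speech referred to in several presentations above (items 20-25) and in the papers by Ueda and Nakajima can be sketched roughly as follows: speech is passed through a bank of band-pass filters, the frame-wise power in each band is computed, and the resulting band-power fluctuations are factor-analyzed. This is a minimal illustration under assumed parameters (the linearly spaced placeholder band edges, frame length, and factor-analysis settings are not those of the published analyses).

```python
# Minimal sketch: factor analysis of power fluctuations in
# critical-band-filtered speech (assumed parameters throughout).
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FactorAnalysis

def band_power_series(speech, fs, band_edges_hz, frame_ms=10):
    """Frame-wise power (in dB) of the signal in each band."""
    frame_len = int(round(fs * frame_ms / 1000.0))
    n_frames = len(speech) // frame_len
    powers = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)[: n_frames * frame_len]
        frames = band.reshape(n_frames, frame_len)
        powers.append(10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12))
    return np.column_stack(powers)               # shape: (n_frames, n_bands)

fs = 16000
speech = np.random.randn(10 * fs)                # stand-in for recorded sentences
edges = np.linspace(50, 7000, 21)                # placeholder for ~20 critical bands
X = band_power_series(speech, fs, edges)
fa = FactorAnalysis(n_components=3).fit(X)       # three factors, as reported in the cited analyses
print(fa.components_.shape)                      # (3, 20): loading of each band on each factor
```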
Patent Applications and Registrations
Patent applications: 1
Patents registered: 5
Academic Society Activities
Society Memberships
The Society for Bioacoustics
The Japanese Psychonomic Society
The International Society for Psychophysics
The Acoustical Society of America
The Japanese Psychological Association
The Acoustical Society of Japan
The Japanese Society for Music Perception and Cognition
Offices Held in Academic Societies
2023.03~2025.02, 一般社団法人 日本音響学会, 代議員.
2022.09~2023.02, 一般社団法人 日本音響学会, 第 45 回日本音響学会功績賞選定委員会選定委員.
2022.09~2023.02, 一般社団法人 日本音響学会, 第 26 回日本音響学会論文賞選定委員会選定委員.
2022.09~2025.09, The International Society for Psychophysics, 理事.
2021.09~2022.02, 一般社団法人 日本音響学会, 第 45 回日本音響学会功績賞選定委員会選定委員.
2021.09~2022.02, 一般社団法人 日本音響学会, 第 26 回日本音響学会論文賞選定委員会選定委員.
2019.10~2022.09, The International Society for Psychophysics, 会長.
2018.10~2019.09, The International Society for Psychophysics, 副会長.
2017.06~2017.11, 日本音楽知覚認知学会, 論文賞選考委員会委員.
2016.09~2018.08, 社団法人 日本音響学会, 音声コミュニケーション調査研究委員会 委員.
2015.04~2017.03, 社団法人 日本音響学会, 生物音響調査研究委員会 幹事.
2014.05~2017.12, 一般社団法人 生物音響学会/The Society for Bioacoustics, 理事.
2011.04~2013.03, 社団法人 日本音響学会九州支部, 評議員.
2010.10~2011.02, 日本音響学会, 第 51 回佐藤論文賞選定委員会選定委員.
2009.05~2011.04, 日本音響学会聴覚研究委員会, 副委員長.
2008.05~2009.04, 電気関係学会九州支部連合会, 会計監査.
2007.04~2009.03, 社団法人 日本音響学会九州支部, 会計監査.
2005.04~2005.05, 社団法人 日本音響学会, 日本音響学会独創研究奨励賞 板倉記念選定委員会委員.
2003.05~2011.04, 日本音楽知覚認知学会, 常任編集委員.
2003.05~2019.05, 日本音楽知覚認知学会, 理事.
2003.05~2004.05, 電気関係学会九州支部連合会.
2002.04~2004.03, 社団法人 日本音響学会九州支部.
1996.05~1999.04, 日本音響学会聴覚研究委員会, 幹事.
Roles at Academic Conferences, Meetings, and Symposia
2023.03.06~2023.03.08, 日本音響学会春季研究発表会, 座長.
2022.12.17~2022.12.18, ASJ Auditory Research Meeting (日本音響学会聴覚研究会,国際セッションにおける発表 10 件を含む), 現地世話人.
2022.09.14~2022.09.16, 日本音響学会秋季研究発表会, 座長.
2021.05.10~2021.05.10, The 52nd Perceptual Frontier Seminar: Non-Invasive Exploration of the Brain with Visual, Tactile, and Auditory Stimuli, Organizer.
2020.11.11~2020.11.11, The 51st Perceptual Frontier Seminar: Analysis of Visual Images and Speech, Organizer.
2020.03.14~2020.04.14, Yoshitaka Memorial Poster-Symposium (Virtual), Organizing Committee Chair, Editor.
2019.12.14~2019.12.15, ASJ Auditory Research Meeting (日本音響学会聴覚研究会,国際セッションにおける発表 5 件を含む), 現地世話人.
2018.08.06~2018.08.06, Joint Seminar of RIKEN Seminar on New Horizons in Sensory Physiology and the 41st Perceptual Frontier Seminar: Dynamic Processes in Perception, Organizer.
2017.10.22~2017.10.26, Fechner Day 2017: the 33rd Annual Meeting of the International Society for Psychophysics, Organizing Committee (Editorial Chief).
2017.10.22~2017.10.26, Fechner Day 2017: the 33rd Annual Meeting of the International Society for Psychophysics, Session Chair.
2017.03.15~2017.03.17, 日本音響学会春季研究発表会, 座長(Chairmanship).
2016.12.17~2016.12.18, The Auditory Research Meeting, Organizer, 現地世話人.
2016.07.24~2016.07.29, The 31st International Congress of Psychology, プログラム委員.
2016.07.24~2016.07.29, The 31st International Congress of Psychology, 司会(Moderator).
2016.07.24~2016.07.29, The 31st International Congress of Psychology, 座長(Chairmanship).
2016.05.14~2016.05.15, 日本音楽知覚認知学会平成 28 年度春季研究発表会, 研究選奨選考委員会委員.
2015.12.12~2015.12.13, The 2nd Annual Meeting of the Society for Bioacoustics, Chair of the Organizing Committee.
2015.12.12~2015.12.13, The 2nd Annual Meeting of the Society for Bioacoustics, Editor of the Proceedings.
2015.12.12~2015.12.13, The 2nd Annual Meeting of the Society for Bioacoustics, Chair of the Organizing Committee.
2015.03.06~2015.03.08, The 48th Colloquium on Perception & International Five-Sense Symposium, 第 48 回知覚コロキウム世話人会代表.
2015.03.06~2015.03.08, 第 48 回知覚コロキウム, 座長(Chairmanship).
2014.12.20~2014.12.21, The Auditory Research Meeting, Organizer, 現地世話人.
2013.12.14~2013.12.15, The 18th Auditory Research Forum, 座長(Chairmanship).
2013.11.09~2013.11.10, 日本音楽知覚認知学会 平成 25 年度秋季研究発表会, 研究選奨選考委員会委員長.
2012.12.15~2012.12.16, The Auditory Research Meeting, Organizer, 現地世話人.
2012.05~2012.11.04, 日本基礎心理学会第 31 回大会, 準備委員会委員.
2012.03.13~2012.03.15, 日本音響学会春季研究発表会, 座長(Chairmanship).
2011.09.20~2011.09.22, 日本音響学会秋季研究発表会, 座長(Chairmanship).
2011.07.11~2011.07.14, The 4th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music (APSCOM 4), 座長(Chairmanship).
2011.05.13~2011.05.14, 日本音響学会聴覚研究会, 座長(Chairmanship).
2011.03.09~2011.03.11, 日本音響学会春季研究発表会, 座長(Chairmanship).
2011.01.28~2011.01.28, 応用知覚研究懇話会:「知覚と脳」シリーズ第 5 回および日本基礎心理学会フォーラム「音楽の人間科学」, 指定討論者.
2010.12.10~2010.12.11, Colloquium of Applied Perceptual Research: The 3rd Session of the "Perception and the Brain" Unit (Auditory Research Meetings), Organizer.
2010.12.04~2010.12.05, The 15th Auditory Research Forum, 座長(Chairmanship).
2010.09.14~2010.09.16, 日本音響学会秋季研究発表会, 座長(Chairmanship).
2010.07.17~2010.07.18, 日本音響学会聴覚研究会, 座長(Chairmanship).
2010.05.14~2010.05.15, 日本音響学会聴覚研究会, 座長(Chairmanship).
2010.03.08~2010.03.10, 日本音響学会春季研究発表会, 座長(Chairmanship).
2010.03.04~2010.03.05, 日本音響学会聴覚研究会, 座長(Chairmanship).
2010.02.06~2010.02.07, 日本音響学会聴覚研究会, 座長(Chairmanship).
2010.01.18~2010.01.18, 日本音響学会聴覚研究会, 座長(Chairmanship).
2009.12.04~2009.12.05, 日本音響学会聴覚研究会, 座長(Chairmanship).
2009.10.09~2009.10.10, 日本音響学会聴覚研究会, 座長(Chairmanship).
2009.09.15~2009.09.17, 日本音響学会秋季研究発表会, 座長(Chairmanship).
2009.06.25~2009.06.26, 日本音響学会聴覚研究会, 座長(Chairmanship).
2009.05.29~2009.05.30, 日本音響学会聴覚研究会, 座長(Chairmanship).
2008.12.12~2008.12.13, The Joint Meeting of the Technical Committee of Psychological and Physiological Acoustics, the Acoustical Society of Japan, and The Auditory Ergonomics Group, the Japanese Ergonomics Society, Organizer, 現地世話人.
2008.12.06~2008.12.07, The 13th Auditory Research Forum Japan, 座長(Chairmanship).
2008.09.10~2008.09.12, 日本音響学会秋季研究発表会, 座長(Chairmanship).
2008.08~2008.08, The 10th International Conference on Music Perception and Cognition, Program committee member.
2008.03~2008.03, 平成 19 年度 若手研究者研究成果発表会, 座長(Chairmanship).
2007.10.20~2007.10.23, The 23rd Annual Meeting of the International Society for Psychophysics, Fechner Day 2007, 座長(Chairmanship).
2006.12.16~2006.12.17, The Joint Meeting of the 13th Workshop of the Perceptual Psychology Unit and the Technical Committee of Psychological and Physiological Acoustics, the Acoustical Society of Japan, Organizer, 現地世話人.
2006.11.28~2006.12.02, Fourth joint meeting of Acoust. Soc. Am. and Acoust. Soc. Jpn, 座長(Chairmanship).
2006.03~2006.03, 日本音響学会春季研究発表会, 座長(Chairmanship).
2005.12.03~2005.12.04, The 10th Auditory Research Forum Japan, 座長(Chairmanship).
2005.05~2005.05, 日本音響学会聴覚研究会, 座長(Chairmanship).
2005.03~2005.03, 日本音響学会春季研究発表会, 座長(Chairmanship).
2004.12.04~2004.12.05, 日本音響学会聴覚研究会, 座長(Chairmanship).
2004.12~2004.12, Joint Meeting of the Technical Committees of Psychological and Physiological Acoustics and Musical Acoustics, Acoustical Society of Japan, オーガナイザ.
2004.11.20~2004.11.21, 聴覚研究フォーラム, 座長(Chairmanship).
2004.09~2004.09, 日本音響学会秋季研究発表会, 座長(Chairmanship).
2003.12~2003.12, 日本音響学会聴覚研究会, 座長(Chairmanship).
2003.09~2003.09, 日本音響学会秋季研究発表会, 座長(Chairmanship).
2003.03~2003.03, 日本音響学会春季研究発表会, 座長(Chairmanship).
2002.12~2002.12, 日本音響学会聴覚研究会, 座長(Chairmanship).
Editorial Activities for Journals and Books
2022.11, Brain and Language, 国際, Reviewer.
2023.02, Heliyon, 国際, Reviewer.
2023.03, Journal of Experimental Psychology: Human Perception and Performance, 国際, Reviewer.
2021.10, Auditory Perception & Cognition, 国際, 編集委員.
2021.12, Auditory Perception & Cognition, 国際, 査読者.
2019.11~2020.12, Frontiers in Psychology, Cognition: Research topic, "Consumer's Behavior beyond Self-Report", 国際, 編集委員.
2013.06~2017.06, 日本音響学会誌, 国内, 社団法人日本音響学会編集委員会会誌部会幹事.
2011.06~2013.06, 日本音響学会誌, 国内, 編集委員.
2003.11~2008.03, 音楽知覚認知研究, 国内, 編集委員.
1996.04, 音楽知覚認知研究, 国内, 査読委員.
1995.04, Acoustical Science and Technology, 国内, 査読委員.
1995.04, 日本音響学会誌, 国内, 査読委員.
Review of Academic Papers
年度 外国語雑誌査読論文数 日本語雑誌査読論文数 国際会議録査読論文数 国内会議録査読論文数 合計
2023年度    
2022年度 10    30    40 
2021年度      
2020年度      
2019年度     11 
2018年度      
2017年度 24  28 
2016年度    
2015年度      
2014年度  
2013年度  
2012年度    
2011年度      
2010年度    
2009年度    
2008年度  
2007年度   190    195 
2006年度    
2005年度      
2004年度    
2003年度    
Other Research Activities
Overseas Visits and Education/Research Experience Abroad
School of Audiology and Speech Sciences, Faculty of Medicine, The University of British Columbia, Canada, 2020.02~2020.03.
Institut fuer Psychologie, Technische Universitaet Darmstadt, Germany, 2018.12~2018.12.
School of Audiology and Speech Sciences, Faculty of Medicine, The University of British Columbia, Visual Performance & Oculomotor Mobility Lab, Department of Ophthalmology & Visual Sciences, The University of British Columbia, Canada, 2018.11~2018.11.
Department of East Asian Languages & Literature, Colgate University, Department of Psychology, Colgate University, United States of America, 2015.08~2015.08.
School of Audiology and Speech Sciences, Faculty of Medicine, The University of British Columbia , Canada, 2015.06~2015.10.
Institut fuer Psychologie, Technische Universitaet Darmstadt, UMR-S 1114 INSERM, Neuropsychologie Cognitive et Physiopathologie de la Schizophrénie, Faculte de Medecine, Université de Strasbourg, Germany, France, 2013.10~2013.10.
University of Toronto, Canada, 2012.10~2012.10.
Baycrest Center, Canada, 2008.07~2008.07.
Faculteit der Sociale Wetenschappen, Instituut Psychologie, Cognitieve Psychologie, Universiteit Leiden, The Netherlands, 2008.07~2008.07.
Institute of Psychoacoustics, Université de Bordeaux 2 (ボルドー第 2 大学音響心理学研究所), France, 1992.04~1993.03.
Hosting of Foreign Researchers
2021.09~2022.03, one month or longer, Kyushu University, Poland, Japan Society for the Promotion of Science.
2019.09~2021.08, one month or longer, Kyushu University, Poland, Japan Society for the Promotion of Science.
Awards
Twenty-five year awards, the Acoustical Society of America, 2013.06.
Awaya Kiyoshi Award for the Encouragement of Research (粟屋潔学術奨励賞), the Acoustical Society of Japan, 1988.03.
Research Funding
Grants-in-Aid for Scientific Research (MEXT / JSPS)
2019年度~2023年度, 基盤研究(A), 代表, 多言語音声知覚における脳内リズムと意味理解.
2017年度~2019年度, 挑戦的研究(萌芽), 代表, 音声知覚のトップダウンとボトムアップ:脳はいかに欠けた情報を埋め合わせるか.
2017年度~2019年度, 挑戦的研究(開拓), 分担, 音響学的音韻論の試み:英語音声におけるスペクトルの時間変化と音節形成.
2014年度~2017年度, 挑戦的萌芽研究, 分担, 音楽的緊張感の計算モデル:和音の生起順序に対する脳応答の研究.
2013年度~2017年度, 基盤研究(A), 分担, 公共空間における音響放送の改善:知覚的相互作用を考慮した音デザイン(代表:中島祥好).
2011年度~2013年度, 基盤研究(C), 分担, ユニバーサル放送の実用化を目指した放送音声の音響デザイン.
2011年度~2013年度, 挑戦的萌芽研究, 分担, 実時間コミュニケーションのリズム解析(代表:中島祥好).
2008年度~2010年度, 萌芽研究, 分担, 聴覚の時計と視覚の時計の相互交渉場面を探る(代表:中島祥好).
2008年度~2012年度, 基盤研究(B), 代表, 音声の耐雑音性を生みだす聴覚特性の研究.
2007年度~2011年度, 基盤研究(S), 分担, 言語情報伝達における連続性と分節性: 知覚心理学,言語学,音声科学の融合(代表:中島祥好).
2004年度~2006年度, 萌芽的研究, 分担, 音楽療法の真実を問う(代表:中島祥好).
2002年度~2006年度, 基盤研究(S), 分担, 聴覚の文法:言語と非言語とを包括する体制化の研究(代表:中島祥好).
1997年度~1998年度, 奨励研究(A), 代表, 自然な合成音声と調波複合音による聴覚短期記憶の干渉効果の研究.
1996年度~1996年度, 奨励研究(A), 代表, 聴覚短期記憶における干渉効果と音の高さ:音声と非音声.
1994年度~1994年度, 奨励研究(A), 代表, 聴覚短期記憶における干渉効果の研究.
JSPS Programs Other than Grants-in-Aid for Scientific Research
2019年度~2021年度, 平成31年度(2019年度)第2回採用分 外国人特別研究員(一般), 代表, 統合失調症患者および気分障害患者における視覚的注意障害の研究.
Competitive Funding (Including Commissioned Research)
2021年度~2021年度, カワイサウンド技術・音楽振興財団,研究助成【サウンド技術振興部門】, 代表, 音声理解における脳内時間処理:時間劣化音声を用いた統合失調症患者と健常者との比較.
2018年度~2018年度, 研究成果展開事業 大学発新産業創出プログラム 社会還元加速プログラム(SCORE), 分担, 音声明瞭化技術の事業化検証のための音声強調条件決定システムの開発.
2011年度~2012年度, カワイサウンド技術・音楽振興財団 研究助成金, 分担, 乳幼児期の喃語における音声生成発達の過程:日本語圏・英語圏の比較.
2003年度~2007年度, 研究拠点形成費補助金(21世紀COE) (文部科学省), 分担, 感覚特性に基づく人工環境デザイン研究拠点.
Joint Research and Commissioned Research (Excluding Competitive Funding)
2020.04~2025.03, 代表, 多言語音声知覚における脳内リズムと意味理解に関する情報学研究.
2020.04~2021.05, 代表, 音声加工技術の共同研究.
2006.11~2009.03, 分担, 子音強調システム.
2003.08~2004.03, 代表, 単語了解度に対するアクセント型適切性の効果に関する研究.
2002.02~2002.03, 代表, 音声言語学習行動と音環境の関係に関する調査研究.
Donations Received
2021年度, 一般財団法人 カワイサウンド技術・音楽振興財団, 2021 年度サウンド技術振興部門研究助成金.
2000年度, 財団法人実吉奨学会, 財団法人実吉奨学会研究助成金,1224.
Internal University Funding and Grants
2015年度~2015年度, 平成27年度芸術工学研究院海外派遣制度, 代表, The University of British Columbia, the School of Audiology and Speech Sciences への派遣.
2012年度~2013年度, 九州大学教育研究プログラム・研究拠点形成プロジェクト, 分担, 文理融合型の知覚・認知研究拠点.
2012年度~2012年度, 九州大学教育研究プログラム・研究拠点形成プロジェクト, 代表, 人は音声の何を聴いているのか.
2011年度~2011年度, 大型外部資金獲得のためのプロジェクト研究, 代表, 音声コミュニケーションの動作基盤に関する多次元的研究.
2006年度~2007年度, 九州大教育研究プログラム・研究拠点形成プロジェクト, 代表, 高齢者に配慮した公共音響設備の研究.
1997年度~1997年度, 京都府立大学学術振興会「永井特別奨励金(研究奨励)」, 代表, 自然な合成音声と調波複合音による聴覚短期記憶の干渉効果の研究.
1996年度~1996年度, 京都府立大学学術振興会「永井特別研究奨励金」, 代表, 聴覚短期記憶の干渉効果.
