2018.07～2019.03, Principal investigator: Yoshitaka Nakajima, Kyushu University Graduate School, Japan Science and Technology Agency (JST).
2018.05～2018.10, Principal investigator: Yoshitaka Nakajima, Kyushu University Graduate School, Kyushu University.
2018.06～2018.06, Principal investigator: Yoshitaka Nakajima, Kyushu University, Kyushu University Hospital ARO Next-Generation Medical Center.
2012.04～2014.03, Principal investigator: Yoshitaka Nakajima, Kyushu University, Kyushu University (Japan).
2003.04～2008.03, Principal investigator: Yutaka Tochihara, Kyushu University, Kyushu University.
||Simon GRONDIN, Emi HASUO, Tsuyoshi KURODA, Yoshitaka NAKAJIMA, "Auditory Time Perception", Chapter 21 in Rolf BADER (Ed.), Springer Handbook of Systematic Musicology, Springer, 423-452, 2018.04.
||Yoshitaka NAKAJIMA, Takayuki SASAKI, Kazuo UEDA, G. B. REMIJN, 聴覚の文法 (Auditory Grammar), Corona Publishing, 2014.03, Constructed a theory in which the four sound elements "onset," "termination," "filling," and "silence" form auditory streams according to simple grammatical rules..
||Yoshitaka NAKAJIMA, Kazuo UEDA, 大学生の勉強マニュアル：フクロウ大学へようこそ (A Study Manual for University Students: Welcome to Owl University), Nakanishiya Shuppan, 2006.02.
||Yoshitaka NAKAJIMA, Mizuki MATSUDA, Kazuo UEDA, Gerard B. REMIJN, Temporal resolution needed for auditory communication: Measurement with mosaic speech, Frontiers in Human Neuroscience, 10.3389/fnhum.2018.00149, 12, 149, 1-8, 2018.06, Temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners' intelligibility scores of Japanese speech with a systematically degraded temporal resolution, so-called "mosaic speech": speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible with a temporal resolution of 40 ms or finer. Intelligibility dropped for a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results showing that speech signals separated into short temporal segments of <100 ms can be remarkably robust in terms of linguistic-content perception against drastic manipulations in each segment, such as partial signal omission or temporal reversal. The human perceptual system thus can extract meaning from unexpectedly rough temporal information in speech. The process resembles that of the visual system stringing together static movie frames of 40 ms into vivid motion..
||Kazuo UEDA, Yoshitaka NAKAJIMA, An acoustic key to eight languages/dialects: Factor analyses of critical-band-filtered speech, SCIENTIFIC REPORTS, 10.1038/srep42468, 7, 42468, 1-4, 2017.02, The peripheral auditory system functions like a frequency analyser, often modelled as a bank of non-overlapping band-pass filters called critical bands; 20 bands are necessary for simulating the frequency resolution of the ear within an ordinary frequency range of speech (up to 7,000 Hz). A far smaller number of filters seemed sufficient, however, to re-synthesise intelligible speech sentences with power fluctuations of the speech signals passing through them; nevertheless, the number and frequency ranges of the frequency bands for efficient speech communication are yet unknown. We derived four common frequency bands, covering approximately 50–540, 540–1,700, 1,700–3,300, and above 3,300 Hz, from factor analyses of spectral fluctuations in eight different spoken languages/dialects. The analyses robustly led to three factors common to all languages investigated (the low & mid-high factor related to the two separate frequency ranges of 50–540 and 1,700–3,300 Hz, the mid-low factor to the range of 540–1,700 Hz, and the high factor to the range above 3,300 Hz), suggesting a language universal..
||Yoshitaka NAKAJIMA, Kazuo UEDA, Shota FUJIMARU, Hirotoshi MOTOMURA, Yuki OHSAKA, English phonology and an acoustic language universal, SCIENTIFIC REPORTS, 10.1038/srep46049, 7, 46049, 1-6, 2017.04, Acoustic analyses of eight different languages/dialects had revealed a language universal: Three spectral factors consistently appeared in analyses of power fluctuations of spoken sentences divided by critical-band filters into narrow frequency bands. Examining linguistic implications of these factors seems important to understand how speech sounds carry linguistic information. Here we show the three general categories of the English phonemes, i.e., vowels, sonorant consonants, and obstruents, to be discriminable in the Cartesian space constructed by these factors: A factor related to frequency components above 3,300 Hz was associated only with obstruents (e.g., /k/ or /z/), and another factor related to frequency components around 1,100 Hz only with vowels (e.g., /a/ or /i/) and sonorant consonants (e.g., /w/, /r/, or /m/). The latter factor highly correlated with the hypothetical concept of sonority or aperture in phonology. These factors turned out to connect the linguistic and acoustic aspects of speech sounds systematically..
||Takuya KISHIDA, Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard B. REMIJN, Three factors are critical in order to synthesize intelligible noise-vocoded Japanese speech, Frontiers in Psychology, 10, 3389, 2016.04, The method of factor analysis was modified to obtain factors suitable for resynthesizing speech sounds as 20-critical-band noise-vocoded speech. If the number of factors is 3 or more, elementary linguistic information is preserved..
||Wolfgang Ellermeier, Florian Kattner, Kazuo UEDA, Kana Doumoto, Yoshitaka NAKAJIMA, Memory disruption by irrelevant noise-vocoded speech: Effects of native language and the number of frequency bands, Journal of the Acoustical Society of America, 138, 1561-1569, 2015.08, To investigate the mechanisms by which unattended speech impairs short-term memory performance, speech samples were systematically degraded by means of a noise vocoder. For experiment 1, recordings of German and Japanese sentences were passed through a filter bank dividing the spectrum between 50 and 7000 Hz into 20 critical-band channels or combinations of those, yielding 20, 4, 2, or just 1 channel(s) of noise-vocoded speech. Listening tests conducted with native speakers of both languages showed a monotonic decrease in speech intelligibility as the number of frequency channels was reduced. For experiment 2, 40 native German and 40 native Japanese participants were exposed to speech processed in the same manner while trying to memorize visually presented sequences of digits in the correct order. Half of each sample received the German, the other half the Japanese speech samples. The results show large irrelevant-speech effects increasing in magnitude with the number of frequency channels. The effects are slightly larger when subjects are exposed to their own native language. The results are predicted well neither by the speech transmission index nor by psychoacoustical fluctuation strength, most likely because both metrics fail to disentangle amplitude and frequency modulations in the signals..
||Emi HASUO, Yoshitaka NAKAJIMA, Michiko WAKASUGI, Takuya FUJIOKA, Effects of sound marker durations on the perception of inter-onset time intervals: a study with instrumental sounds, 基礎心理学研究（The Japanese Journal of Psychonomic Science）, 34, 1, 2-16, 2015.12, Previous studies have shown that the time interval marked by the onsets of two successive pure tone bursts is perceived to be longer when the second sound marker is lengthened. The present study examined whether this phenomenon appeared in a more natural setting in which the time interval was marked by instrumental sounds with complex temporal and spectral structures. Real piano sounds and synthesized sounds that simulated either just the temporal structure of the piano sound or both its harmonic and temporal structures were used as sound markers. Lengthening the second marker increased the perceived duration of the interval, as in previous studies, but only in limited cases, and this did not occur in an experiment in which only the synthesized piano sounds were used. Thus, the effect of sound durations was weakened with the new series of sounds. Characteristics of piano sounds that were not captured in the synthesized sounds seem to have played an important role in duration perception..
||Yoshitaka NAKAJIMA, Emi HASUO, Miki YAMASHITA, Yuki HARAGUCHI, Overestimation of the second time interval replaces time-shrinking when the difference between two adjacent time intervals increases, Frontiers in Human Neuroscience, 10.3389/fnhum.2014.00281, 8, 281, 1-12, 2014.05, When the onsets of three successive sound bursts mark two adjacent time intervals, the second time interval can be underestimated when it is physically longer than the first time interval by up to 100 ms. This illusion, time-shrinking, is very stable when the first time interval is 200 ms or shorter (Nakajima et al., 2004, Perception, 33). Time-shrinking had been considered a kind of perceptual assimilation to make the first and the second time interval more similar to each other. Here we investigated whether the underestimation of the second time interval was replaced by an overestimation if the physical difference between the neighboring time intervals was too large for the assimilation to take place; this was a typical situation in which a perceptual contrast could be expected. Three experiments to measure the overestimation/underestimation of the second time interval by the method of adjustment were conducted. The first time interval was varied from 40 to 280 ms, and such overestimations indeed took place when the first time interval was 80–280 ms. The overestimations were robust when the second time interval was longer than the first time interval by 240 ms or more, and the magnitude of the overestimation was larger than 100 ms in some conditions. Thus, a perceptual contrast to replace time-shrinking was established. An additional experiment indicated that this contrast did not affect the perception of the first time interval substantially: The contrast in the present conditions seemed unilateral..
||Emi HASUO, Yoshitaka NAKAJIMA, Erika TOMIMATSU, Simon GRONDIN, Kazuo UEDA, The occurrence of the filled duration illusion: A comparison of the method of adjustment with the method of magnitude estimation, Acta Psychologica, 147, 111-121, 2014.01.
||Yuko YAMASHITA, Yoshitaka NAKAJIMA, Kazuo UEDA, Yohko SHIMADA, David HIRSH, Takeharu SENO, Benjamin A. SMITH, Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants, Frontiers in Psychology, 10.3389/fpsyg.2013.00057, 4, 57, 1-10, 2013.02, The purpose of this study was to explore developmental changes, in terms of spectral fluctuations and temporal periodicity, in Japanese- and English-learning infants. Three age groups (15, 20, and 24 months) were selected, because infants diversify phonetic inventories with age. Natural speech of the infants was recorded. We utilized a critical-band-filter bank, which simulated the frequency resolution in adults' auditory periphery. First, the correlations between the power fluctuations of the critical-band outputs represented by factor analysis were observed in order to see how the critical bands should be connected to each other, if a listener is to differentiate sounds in infants' speech. In the following analysis, we analyzed the temporal fluctuations of factor scores by calculating autocorrelations. The present analysis identified, at 24 months of age in both linguistic environments, three factors as had been observed in adult speech. These three factors were shifted to a higher frequency range, corresponding to the smaller vocal tract size of the infants. The results suggest that the vocal tract structures of the infants had developed to an adult-like configuration by 24 months of age in both language environments. The amount of utterances with a periodic nature of shorter duration increased with age in both environments. This trend was clearer in the Japanese environment..
||Yoshitaka NAKAJIMA, Hiroshige TAKEICHI, Human processing of short temporal intervals as revealed by an ERP waveform analysis, Frontiers in Integrative Neuroscience, 10.3389/fnint.2011.00074, 2011.12, To clarify the time course over which the human brain processes information about durations up to ∼300 ms, we reanalyzed the data that were previously reported by Mitsudo et al. (2009) using a multivariate analysis method. Event-related potentials were recorded from 19 scalp electrodes on 11 (nine original and two additional) participants while they judged whether two neighboring empty time intervals – called t1 and t2 and marked by three tone bursts – had equal durations. There was also a control condition in which the participants were presented with the same temporal patterns but without a judgment task. In the present reanalysis, we sought to visualize how the temporal patterns were represented in the brain over time. A correlation matrix across channels was calculated for each temporal pattern. Geometric separations between the correlation matrices were calculated, and subjected to multidimensional scaling. We performed such analyses for a moving 100-ms time window after the t1 presentations. In the windows centered at <100 ms after the t2 presentation, the analyses revealed the local maxima of categorical separation between temporal patterns of perceptually equal durations versus perceptually unequal durations, both in the judgment condition and in the control condition. Such categorization of the temporal patterns was prominent only in narrow temporal regions. The analysis indicated that the participants determined whether the two neighboring time intervals were of equal duration mostly within 100 ms after the presentation of the temporal patterns. A very fast brain activity was related to the perception of elementary temporal patterns without explicit judgments. This is consistent with the findings of Mitsudo et al. and is in line with the processing time hypothesis proposed by Nakajima et al. (2004). The correlation matrix analysis turned out to be an effective tool to grasp the overall responses of the brain to temporal patterns..
||Tsuyoshi Kuroda, Yoshitaka NAKAJIMA, Shuntarou EGUCHI, Illusory continuity without sufficient sound energy to fill a temporal gap: Examples of crossing glide tones, Journal of Experimental Psychology : Human Perception and Performance, 10.1037/a0026629, 38, 1254-1267, 2012.10, The gap transfer illusion is an auditory illusion where a temporal gap inserted in a longer glide tone is perceived as if it were in a crossing shorter glide tone. Psychophysical and phenomenological experiments were conducted to examine the effects of sound-pressure-level (SPL) differences between crossing glides on the occurrence of the gap transfer illusion. We found that the subjective continuity-discontinuity of the crossing glides changed as a function of the relative level of the shorter glide to the level of the longer glide. When the relative level was approximately between −9 and +2 dB, listeners perceived the longer glide as continuous and the shorter glide as discontinuous, that is, the gap transfer illusion took place. The glides were perceived veridically below this range, that is, gap transfer did not take place, whereas above this range the longer glide and the shorter glide were both perceived as continuous. The fact that the longer glide could be perceived as continuous even when the crossing shorter glide was 9 dB weaker indicates that the longer glide's subjective continuity cannot be explained within the conventional framework of auditory organization, which assumes reallocation of sound energy from the shorter to the longer glide. The implicated mechanisms are discussed in terms of the temporal configuration of onsets and terminations and the time-frequency distribution of sound energy..
||Emi HASUO, Yoshitaka NAKAJIMA, Satoshi OSAWA, Hiroyuki FUJISHIMA, Effects of temporal shapes of sound markers on the perception of inter-onset time intervals, Attention, Perception, & Psychophysics, 10.3758/s13414-011-0236-1, 74, 430-445, 2012.03, This study investigated how the temporal characteristics, particularly durations, of sounds affect the perceived duration of very short interonset time intervals (120–360 ms), which is important for rhythm perception in speech and music. In four experiments, the subjective duration of single time intervals marked by two sounds was measured utilizing the method of adjustment, while the markers' durations, amplitude difference (which accompanied the duration change), and sound energy distribution in time were varied. Lengthening the duration of the second marker in the range of 20–100 ms increased the subjective duration of the time interval in a stable manner. Lengthening the first marker tended to increase the subjective duration, but unstably; an opposite effect sometimes appeared for the shortest time interval of 120 ms. The effects of varying the amplitude and the sound energy distribution in time of either marker were very small in the present experimental conditions, thus proving the effects of marker durations per se..
||Takayuki Sasaki, Yoshitaka Nakajima, Gert ten Hoopen, Edwin van Buuringen, Bob Massier, Taku Kojo, Tsuyoshi Kuroda, and Kazuo Ueda, Time-stretching: Illusory lengthening of filled auditory durations, Attention, Perception, & Psychophysics, 1404-1421, 2010.05.
||Tsuyoshi Kuroda, Yoshitaka Nakajima, Shimpei Tsunashima, and Tatsuro Yasutake, Effects of spectra and sound pressure levels on the occurrence of the gap transfer illusion, Perception, 2009.04.
||Takako Mitsudo, Yoshitaka Nakajima, Gerard B. Remijn, Hiroshige Takeichi, Yoshinobu Goto, and Shozo Tobimatsu, Electrophysiological evidence of auditory temporal perception related to the assimilation between two neighboring time intervals, NeuroQuantology, 2009.03.
||Y. Nakajima, G. ten Hoopen, T. Sasaki, K. Yamamoto, M. Kadota, M. Simons, D. Suetomi, Time-shrinking: the process of unilateral temporal assimilation, Perception, vol 33, 1061-1079, 2004.12.
||Yoshitaka Nakajima, Demonstrations of the gap transfer illusion, Acoustical Science and Technology, 2006.06.
||Y. Nakajima, T. Sasaki, K. Kanafuka, A. Miyamoto, G. Remijn, G. ten Hoopen, Illusory recouplings of onsets and terminations of glide tone components, Perception & Psychophysics, vol. 62, 1413-1425, 2000.07.
||Yoshitaka Nakajima, Gert ten Hoopen, Rene van der Wilk, A new illusion of time perception, Music Perception, Vol. 8, 431-448, 1991.06.
Major reviews, commentaries, explanatory articles, book reviews, reports, etc.
||Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard B. REMIJN, Yuko YAMASHITA, Takuya KISHIDA, How sonority appears in speech analyses, Acoustical Science and Technology, 2018.05, Sonority is a subjective or linguistic property of speech sounds closely related to syllable formation. We showed that it was highly correlated with one of the 3 or 4 factors extracted to describe spectral changes of speech, and this factor was closely related to a frequency range around 540–1,700 Hz. The factor scores of this factor were high in vowels, lower in sonorant consonants, and even lower in obstruents in British English. Another factor, related to a range above 3,300 Hz, was found to be negatively correlated with sonority; the factor scores of this factor were high in obstruents, in which high-frequency components were conspicuous..
||Yoshitaka NAKAJIMA, 聴覚認知の心理学 (The psychology of auditory cognition), Clinical Neuroscience, 2014.02.
||Yoshitaka NAKAJIMA, 聴覚におけるリズム知覚 (Rhythm perception in audition), 月刊言語 (Gekkan Gengo), 2009.06.
||Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard Remijn, Yuko Yamashita, Takuya KISHIDA, Phonology and psychophysics: Is sonority real?, 33rd Annual Meeting of the International Society for Psychophysics, 2017.10.
||Yoshitaka NAKAJIMA, Perceptual interactions between adjacent time intervals marked by sound bursts, 5th Joint Meeting: Acoustical Society of America and Acoustical Society of Japan, 2016.11, Perceptual interactions take place between adjacent time intervals up to ~600 ms even in simple contexts. Let us suppose that two adjacent time intervals, T1 and T2 in this order, are marked by sound bursts. Their durations are perceptually assimilated in a bilateral manner if the difference between them is up to ~50 ms. When T1 ≤ 200 ms and T1 ≤ T2 < T1 + 100 ms, T2 is underestimated systematically, and the underestimation is roughly a function of T2 − T1. Except when T1 ≈ T2, this is assimilation of T2 to T1, partially in a unilateral manner. This systematic underestimation, time-shrinking, disappears when T1 > 300 ms. When T2 = 100 or 200 ms and T1 = T2 + 100 or T2 + 200 ms, T1 is perceptually contrasted against T2: T1 is overestimated. When 80 ms ≤ T1 ≤ 280 ms and T2 ≥ T1 + 300 ms, T2 is contrasted against T1: in this case, T2 is overestimated. Assimilation and contrast are more conspicuous in T2 than in T1. For three adjacent time intervals, T1, T2, and T3, the perception of T3 can be affected by both T1 and T2, and the perception of T2 by T1..
||Yoshitaka NAKAJIMA, 音楽聴取における知覚体制化 (Perceptual organization in music listening), 第18回日本ヒト脳機能マッピング学会 (18th Annual Meeting of the Japan Human Brain Mapping Society), 2016.03, This talk considers the characteristic workings of the auditory system when humans listen to music. Focusing on "organization," the auditory function that forms perceptual units called "sounds" and that groups and relates sounds to one another, three phenomena characterizing music perception are discussed: (1) the formation and destruction of metrical structure, (2) the ambiguity of auditory streams, and (3) the formation and destruction of tonality. Because musicians of the last few centuries have used these characteristics ever more systematically and radically, it may sometimes appear as if there were perceptual mechanisms peculiar to music..
||利島保, 佐藤隆夫, 長谷川寿一, 長田久雄, 箱田裕司, 安藤清志, 中島祥好, 原田悦子, 松井三枝, The role of reference standards for quality assurance of undergraduate psychology education: Reflecting on the published report of the Science Council of Japan's subcommittee on reference standards, 日本心理学会第78回大会 (78th Annual Convention of the Japanese Psychological Association), 2014.09.
||Yoshitaka NAKAJIMA, Mizuki MATSUDA, Gerard B. REMIJN, Kazuo UEDA, Temporal resolution needed to hear out Japanese morae in mosaic speech, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), 2014.05.
||Yoshitaka NAKAJIMA, 音声信号の騒音、残響に対する耐性を増す：聴覚の特性を考慮したアルゴリズム (Increasing the robustness of speech signals against noise and reverberation: An algorithm that takes auditory characteristics into account), 九州大学新技術説明会 (Kyushu University New Technology Briefing), 2013.10.
||Yoshitaka NAKAJIMA, Hiroshige TAKEICHI, Takako MITSUDO, Shozo TOBIMATSU, Perceptual processing of pairs of acoustically marked time intervals: Correspondence between psychophysical and electrophysiological data, Fechner Day 2013, The 29th Annual Meeting of the International Society for Psychophysics, 2013.10, Event-related potentials (ERPs) elicited by pairs of subsequent time intervals marked by sound bursts were recorded in our previous study [1], and the data were reanalyzed utilizing a new multivariate method. Subsequent time intervals t1 and t2 are often perceived as equal in duration when t2 is shorter than 300 ms and up to 50 ms shorter or up to 80 ms longer than t1; the subjective equality holds even if the physical difference is larger than the just noticeable difference obtained for t1 and t2 separated in time. This phenomenon is called auditory temporal assimilation. ERPs were registered in two types of sessions: J sessions, where the participants judged whether the two intervals were subjectively equal or not, and NJ sessions, where no judgments were required. Slow negative components occurred in brain activities in the J sessions, more conspicuous when inequality between t1 and t2 was perceived, in agreement with our earlier study. An experiment in which t2 was fixed at 200 ms was chosen for the present analysis. For a moving 100-ms time window, a correlation matrix across the 19 electrodes was calculated for each temporal pattern, and the correlation matrix distance (CMD = Euclidean distance between the respective correlation matrices) between each two patterns was evaluated. The patterns for which subjective equality dominated were classified as equal (E) patterns, those for which subjective inequality dominated as unequal (UE) patterns. There were four E patterns and three UE patterns, but no patterns to be classified otherwise. A measure of separation of E vs. UE patterns in terms of brain activities was calculated as the sum of squared CMDs between E and UE patterns, and expressed as relative separation (proportionally to the total squared CMD). The relative separation was a function of time, represented by the temporal midpoint of the moving window. The relative separation in the J sessions showed a peak around 70 ms after t2, similarly to our earlier findings [2]. A process related to E-UE judgment is thus likely to take place within 100 ms after t2. Peaks within 100 ms after t2 were observed also in the NJ sessions, suggesting that implicit judgments, although not required, may have occurred in a very early stage. The perceptual separation between the E and the UE patterns can thus be related to dynamic aspects of brain activities, critical factors of which we are trying to identify and locate.
||Yoshitaka Nakajima, Emi Hasuo, Yuki Haraguchi, and Miki Yamashita, Effects of a preceding time interval on the perception of an inter-onset time interval: Is unilateral assimilation in auditory modality replaced by contrast when the difference between the neighboring time intervals increases?, 4th conference of the Asia-Pacific Society for the Cognitive Sciences of Music (APSCOM4), Beijing, China, 2011.07, Background: It had been established that, when the onsets of three successive sound bursts mark two neighboring time intervals, the second time interval can be underestimated when it is longer than the first time interval by up to 100 ms. This underestimation is very stable when the first time interval is 200 ms or shorter. Little knowledge had been accumulated, however, about whether any underestimation or overestimation would take place when the second time interval was lengthened systematically; such combinations of neighboring time intervals appear frequently in music. We examined whether, when, and to what degree the second time interval could be overestimated when it was long enough to be contrasted perceptually with the first time interval. We were especially interested in whether the underestimation of the second time interval, which should take place when the second time interval was longer than the first time interval by up to 100 ms, would give way smoothly to overestimation when the second time interval was lengthened step by step. We conducted three similar experiments employing 5-6 participants in each. The durations of the two neighboring time intervals were varied systematically, and the points of subjective equality of the second time interval were measured. The measured values were compared with the corresponding values in a control condition, in which only the second time intervals were presented with two successive tone bursts. The durations of the first and the second time interval were 40-280 ms and 40-1000 ms, respectively. The underestimation of the second time interval appeared clearly when this interval was longer than the first time interval by 80 ms, but gave way to overestimation rather smoothly as the difference between the neighboring time intervals increased when the first time interval was 80-280 ms. The overestimation stayed positive, typically around 50 ms, when the difference between the neighboring time intervals was between 240 and 800 ms. This range covers considerable cases in which a shorter and a longer time interval neighbor each other in this order in music. It was revealed that the well-established underestimation of the second time interval gave way to overestimation, which can be viewed as contrast between the neighboring time intervals, when the second time interval was longer than the first time interval by 240 ms or more. Considering the combinations of the duration values, overestimation of this kind can take place frequently in music. Keywords: time perception, rhythm, contrast, overestimation. Topic areas: Rhythm & Time..
||Nakajima, Y., Shimada, Y., Motomura, H., Ueda, K., and Seno, T., Factor analyses of critical-band-filtered infant babbling, The 161st Meeting of the Acoustical Society of America, 2011.05.
||Yoshitaka NAKAJIMA, 時間知覚の時間構造：聴覚における空虚時間知覚 (The temporal structure of time perception: Perception of empty time intervals in audition), 第26回日本生体磁気学会 (26th Conference of the Japan Biomagnetism and Bioelectromagnetics Society), 2011.06, My colleagues and I conducted systematic experiments on the perception of time intervals marked by the onsets of very short sound markers, and found that the subjective duration of a time interval in relative judgment is approximately in proportion to its physical duration plus a constant of about 80 ms. After that, we discovered that a time interval up to 300 ms is underestimated stably when preceded immediately by a time interval that is shorter by up to 100 ms. A time interval is also assimilated perceptually to a neighboring time interval which is shorter or longer by up to 50 ms. We are trying to understand how these phenomena can be related with each other, and to find their …
||Yoshitaka Nakajima, Emi Hasuo, Miki Yamashita, Hiroki Nozaki, and Yoko Fujishima, Effect of sound marker duration on the occurrence of time-shrinking, The Rhythm Perception and Production Workshop 2009, 2009.07.
||Yoshitaka Nakajima, Illusions related to the temporal continuity and discontinuity of sounds, The 24th Annual Meeting of the International Society for Psychophysics, 2008.07.
||Yoshitaka NAKAJIMA, 聴覚体制化と聴覚の文法 (Auditory organization and auditory grammar), 情報科学研究科セミナー (Graduate School of Information Science Seminar), 2007.02.
||Yoshitaka Nakajima, Auditory grammar: The event construction model and spoken language, 4th Joint Meeting of the Acoustical Society of America and the Acoustical Society of Japan, 2006.11.
||Yoshitaka Nakajima, Kazuo Ueda, Auditory events in language and music, 4th Joint Meeting of the Acoustical Society of America and the Acoustical Society of Japan, 2006.11.
||Yoshitaka NAKAJIMA, 調性の話 (A talk on tonality), 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), 2004.10.
||Yoshitaka Nakajima, He Wang, Kazuo Ueda, Takayuki Sasaki, Perceptual organization of onsets and offsets in speech, 10th Rhythm Perception and Performance Workshop, 2005.07.
2014.08～2019.06, The Asia-Pacific Society for the Cognitive Sciences of Music, Board member.
2015.06～2019.06, 日本音楽知覚認知学会 (Japanese Society for Music Perception and Cognition), President.
2014.08～2015.07, The Asia-Pacific Society for the Cognitive Sciences of Music, Vice president.
2014.01～2014.08, The Asia-Pacific Society for the Cognitive Sciences of Music, President.
2012.07～2013.12, The Asia-Pacific Society for the Cognitive Sciences of Music, Vice president.
2010.01～2012.07, The Asia-Pacific Society for the Cognitive Sciences of Music, President.
2009.06～2013.05, 日本音楽知覚認知学会 (Japanese Society for Music Perception and Cognition), Vice president.
2000.05～2009.06, 日本音楽知覚認知学会 (Japanese Society for Music Perception and Cognition), Board member.
2005.05～2013.05, 日本音響学会 (Acoustical Society of Japan), Councilor.
2017.08.25～2017.08.27, 第6回アジア太平洋音楽認知科学協会大会 (APSCOM 6), Vice program chair, organizing committee member, representative of the host domestic society.
2017.10.22～2017.10.26, 第33回国際精神物理学会年次大会 (Fechner Day 2017), Organizing committee chair, program committee member, session chair.
2016.05.14～2016.05.15, 日本音楽知覚認知学会平成28年度春季研究発表会 (Japanese Society for Music Perception and Cognition, 2016 Spring Meeting), Local organizer.
2016.07.24～2016.07.29, The 31st International Congress of Psychology, Moderator.
2014.03.07～2014.03.07, 国際五感シンポジウム (International Symposium on the Five Senses), Session chair.
2014.12.20～2014.12.21, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2014.08.04～2014.08.08, 13th ICMPC-5th APSCOM Joint Conference, Session chair.
2013.06.02～2013.06.07, 21st International Congress on Acoustics, Session chair.
2012.12.15, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2012.07.27～2012.07.27, 12th ICMPC-8th ESCOM Joint Conference, Session chair.
2011.07.10～2011.07.14, 4th conference of the Asia-Pacific Society for the Cognitive Sciences of Music (APSCOM4), Session chair.
2012.11.04～2012.11.04, 日本基礎心理学会第31回大会 (31st Annual Meeting of the Japanese Psychonomic Society), Moderator.
2010.12, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2010.05, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2008.12.12～2008.12.13, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2008.08.25～2008.08.29, 国際音楽知覚認知学会 (International Conference on Music Perception and Cognition), Session chair.
2008.07.29～2008.08.01, 国際心理物理学会 (International Society for Psychophysics), Session chair.
2007.10, 国際心理物理学会 (International Society for Psychophysics), Session chair.
2006.12, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2006.11, 4th Joint Meeting of the Acoustical Society of America and the Acoustical Society of Japan, Session chair.
2005.01, 日本音響学会聴覚研究会 (Auditory Research Meeting, Acoustical Society of Japan), Session chair.
2015.03.06～2015.03.08, 第48回知覚コロキウム 国際五感シンポジウム (48th Perception Colloquium / International Symposium on the Five Senses), Preparatory committee member, symposium session chair.
2014.08.04～2014.08.08, 第13回国際音楽知覚認知会議 第５回アジア太平洋音楽認知科学会大会 (13th ICMPC / 5th APSCOM), President of the co-hosting society.
2012.11～2012.11, 日本基礎心理学会第３１回大会 (31st Annual Meeting of the Japanese Psychonomic Society), Preparatory committee member.
2012.07～2012.07, 第12回国際音楽知覚認知学会議 (12th ICMPC), Advisor (society board member).
2011.07～2011.07, 第13回リズム産出・知覚研究会 (13th Rhythm Perception and Production Workshop), Scientific committee member.
2011.07.11～2011.07.15, 第４回アジア太平洋音楽認知科学会大会 (APSCOM4), Advisor (society president).
2008.08～2008.08, 第10回国際音楽知覚認知学会議 (10th ICMPC), Vice chair of the organizing committee, program committee member.
2018.12, Music & Science, International, Editorial board member.
2007.03～2018.03, 音楽知覚認知研究 (Journal of Music Perception and Cognition), Domestic, Editorial board member.
2005.09～2009.08, Canadian Journal of Experimental Psychology, International, Editorial board member.
2003.09, Music Perception, International, Editorial board member.
Radboud University Nijmegen, Leiden University, Ghent University, Netherlands, Belgium, 2019.05～2019.05.
Dalian University of Foreign Languages, Dalian University of Technology, Liaoning Normal University, China, 2019.03～2019.03.
National University of Ireland, Galway, Ireland, 2019.03～2019.03.
Peking University, China, 2018.09～2018.09.
Peking University, China, 2017.03～2017.03.
National University of Ireland, Galway, Ireland, 2016.01～2016.01.
National University of Ireland, Galway, Ireland, 2015.06～2015.06.
University of British Columbia, Canada, 2015.03～2015.03.
University of Toronto, Canada, 2014.09～2014.09.
Technische Universität Darmstadt, University of Strasbourg, Germany, France, 2013.10～2013.10.
University of Toronto, Université Laval, Canada, 2012.10～2012.10.
Capital Normal University, China, 2010.11～2010.11.
Université Laval, Canada, 2010.09～2010.09.
2018.09～2018.09, Less than 2 weeks, University of Toronto, Canada, Internal university funding.
2018.09～2018.09, Less than 2 weeks, National University of Ireland Galway, Ireland, Internal university funding.
2017.10～2017.11, Less than 2 weeks, Peking University, China.
2017.10～2017.11, 2 weeks to 1 month, National University of Ireland Galway, Ireland, Internal university funding.
2016.12～2016.12, Less than 2 weeks, Peking University, China.
2015.11～2015.11, Less than 2 weeks, ISCA Japan, Ireland, Foreign government, foreign research institution, or international organization.
2015.11～2015.12, 2 weeks to 1 month, National University of Ireland Galway, Ireland, Foreign government, foreign research institution, or international organization.
2016.08～2016.10, 1 month or more, National University of Ireland Galway, Ireland, Internal university funding.
2014.12～2014.12, Less than 2 weeks, University of Toronto, Canada, Foreign government, foreign research institution, or international organization.
2014.07～2014.07, Less than 2 weeks, University of Toronto, Canada, Foreign government, foreign research institution, or international organization.
2014.04～2014.04, Less than 2 weeks, University of Sydney, Australia, Foreign government, foreign research institution, or international organization.
2014.12～2015.01, 2 weeks to 1 month, National University of Ireland Galway, Ireland, Internal university funding.
2014.11～2015.01, 1 month or more, National University of Ireland Galway, Ireland, Internal university funding.
2013.04～2013.04, Less than 2 weeks, University of Sydney, Australia, Internal university funding.
2013.12～2013.12, Less than 2 weeks, University of Toronto, Canada, Foreign government, foreign research institution, or international organization.
2013.11～2013.12, 1 month or more, National University of Ireland Galway, Ireland, Internal university funding.
2013.08～2013.08, Less than 2 weeks, Capital Normal University, China, Internal university funding.
2012.12～2012.12, Less than 2 weeks, Capital Normal University, China, Internal university funding.
2012.12～2012.12, 2 weeks to 1 month, National University of Ireland Galway, Ireland, Internal university funding.
2011.11～2011.12, 1 month or more, Applied Perception Research Centre, United Kingdom, Internal university funding.
2010.11～2010.12, 1 month or more, Applied Perception Research Centre, United Kingdom, Internal university funding.
2008.12～2008.12, Less than 2 weeks, Czech Republic, Private sector or foundation.
2010.04～2010.05, 1 month or more, Université Laval, Canada, Foreign government, foreign research institution, or international organization.
2006.10～2007.12, 1 month or more, Japan Society for the Promotion of Science, Spain, Japan Society for the Promotion of Science.
2003.11～2008.03, 1 month or more, Kyushu University, United Kingdom, Japan Society for the Promotion of Science.
2003.04～2005.09, 1 month or more, Kyushu University, Netherlands, Japan Society for the Promotion of Science.
International Educator of the Year 2007, International Biographical Centre, 2007.01.
Top 100 Scientists 2007, International Biographical Centre, 2007.01.
Listed in Who's Who in Science and Engineering, 2008-2009, Marquis Who's Who, 2007.01.
Listed in Who's Who in Science and Engineering, 2006-2007, Marquis Who's Who, 2006.01.
FY2016～FY2019, Challenging Research (Exploratory), Co-Investigator, Top-down and bottom-up processes in speech perception: How the brain fills in missing information.
FY2016～FY2019, Grant-in-Aid for Scientific Research (B), Co-Investigator, Development and validation of an immersive English-speech learning system: A study based on gaze direction and speech elements.
FY2017～FY2019, Challenging Research (Pioneering), Principal Investigator, An attempt at acoustic phonology: Temporal spectral changes and syllable formation in English speech.
FY2015～FY2017, Challenging Exploratory Research, Co-Investigator, Building and analyzing a multimodal corpus from English video archives: A fusion of linguistics and engineering.
FY2014～FY2016, Challenging Exploratory Research, Principal Investigator, A computational model of musical tension: A study of brain responses to the order of chords.
FY2013～FY2017, Grant-in-Aid for Scientific Research (A), Principal Investigator, Improving audio announcements in public spaces: Sound design taking perceptual interactions into account.
FY2011～FY2013, Challenging Exploratory Research, Principal Investigator, Rhythm analysis of real-time communication.
FY2010～FY2012, Grant-in-Aid for Scientific Research (B), Co-Investigator, A study of conscious and unconscious temporal cognition of information from different sensory modalities.
FY2008～FY2012, Grant-in-Aid for Scientific Research (B), Co-Investigator, A study of the auditory characteristics that make speech robust against noise.
FY2008～FY2010, Exploratory Research, Principal Investigator, Exploring situations in which the auditory clock and the visual clock interact.
FY2007～FY2011, Grant-in-Aid for Scientific Research (S), Principal Investigator, Continuity and segmentation in the transmission of linguistic information: A fusion of perceptual psychology, linguistics, and speech science.
FY2007～FY2009, Grant-in-Aid for Scientific Research (B), Co-Investigator, Clocks in the brain: The neural basis of time perception.
FY2006～FY2007, Grant-in-Aid for Scientific Research (B), Co-Investigator, "Listening architecture": Research toward new spatial design methods based on soundscapes.
FY2004～FY2006, Exploratory Research, Principal Investigator, Questioning the truth about music therapy.
FY2002～FY2006, Grant-in-Aid for Scientific Research (S), Principal Investigator, Auditory grammar: A study of organization encompassing speech and non-speech sounds.
FY2017～FY2017, Kyushu University Center for Advanced Medical Innovation, ARO Translational Research, Principal Investigator, Hearing tests using mosaicized speech.
FY2018～FY2018, SCORE, Principal Investigator, Development of a system for determining speech-enhancement conditions to verify the commercialization of speech clarification technology.
FY2015～FY2016, Kawai Foundation for Sound Technology and Music, Co-Investigator, Acoustic analysis of public speaking: Toward strengthening Japanese speakers' ability to communicate internationally.
FY2011～FY2012, Kawai Foundation for Sound Technology and Music, Principal Investigator, The development of speech production in infant babbling: A comparison between Japanese- and English-language environments.
FY2010～FY2010, Tateisi Science and Technology Foundation, Research Grant (A), Co-Investigator, Basic research on the harmonization of human and machine rhythms.
FY2008～FY2008, Yamaha Music Support Program, Principal Investigator, The brain weaving time and rhythm: A psychophysiological study of time and rhythm perception using MEG measurement.
2017.07～2018.03, Principal Investigator, Indexing the intelligibility of speech in noise.
2016.08～2017.03, Principal Investigator, Quantitative evaluation techniques for noise mixed into car-radio sound.
2015.07～2017.02, Principal Investigator, Creation of a speech-signal transmission method based on extracting the linguistic information contained in speech.
2015.06～2016.03, Principal Investigator, Quantitative evaluation techniques for noise mixed into car-radio sound.
2014.06～2015.03, Principal Investigator, Quantitative evaluation techniques for noise mixed into car-radio sound.
2008.06～2009.03, Principal Investigator, Development of a speech clarification device: Prototyping and characteristic evaluation.
FY2018～FY2018, Kyushu University Gap Fund (Phase II), Principal Investigator, Speech clarification technology.
FY2017～FY2017, Progress 100, Principal Investigator, World-class researcher invitation program.
FY2012～FY2013, Kyushu University P&P (Type A), Principal Investigator, An interdisciplinary hub for perception and cognition research bridging the humanities and sciences.
FY2006～FY2007, Kyushu University Education and Research Program, Research Hub Formation Project, Co-Investigator, Research on public audio equipment considerate of elderly listeners.