Kyushu University Academic Staff Educational and Research Activities Database
List of Presentations
Yoshitaka NAKAJIMA    Last modified date: 2019.06.06

Professor / perceptual psychology / Department of Human Science / Faculty of Design


Presentations
1. Yoshitaka Nakajima, English phonemes in a space extracted from spectral changes of speech in time, 5th Institute of Mathematical Statistics Asia Pacific Rim Meeting, 2018.06.
2. Goshi TAKESHITA, Yoshitaka NAKAJIMA, Time-shrinking as observed in contexts close to music, 6th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2017.08.
3. Yoshitaka NAKAJIMA, Mizuki MATSUDA, Kazuo UEDA, Gerard REMIJN, Temporal resolution needed for auditory communication: Measurement with mosaic speech, 33rd Annual Meeting of the International Society for Psychophysics, 2017.10.
4. Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard Remijn, Yuko Yamashita, Takuya KISHIDA, Phonology and psychophysics: Is sonority real?, 33rd Annual Meeting of the International Society for Psychophysics, 2017.10.
5. Chinami ONISHI, Yoshitaka NAKAJIMA, Emi HASUO, The perception of a dotted rhythm embedded in a two-four-time framework, 14th International Conference for Music Perception and Cognition, 2016.07.
6. Kunito IIDA, Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard REMIJN, Yukihiro SERIZAWA, Effects of the duration and the frequency of temporal gaps on the subjective distortedness of music fragments, 14th International Conference for Music Perception and Cognition, 2016.07.
7. Yoshitaka NAKAJIMA, Mizuki MATSUDA, Erika TOMIMATSU, Emi HASUO, Perceptual contrast between two short adjacent time intervals marked by clicks, The 31st International Congress of Psychology, 2016.07.
8. Yoshitaka NAKAJIMA, Principles of music perception, The 31st International Congress of Psychology, 2016.07.
9. Yoshitaka NAKAJIMA, Perceptual interactions between adjacent time intervals marked by sound bursts, 5th Joint Meeting: Acoustical Society of America and Acoustical Society of Japan, 2016.11, Perceptual interactions take place between adjacent time intervals up to ~600 ms even in simple contexts. Let us suppose that two adjacent time intervals, T1 and T2 in this order, are marked by sound bursts. Their durations are perceptually assimilated in a bilateral manner if the difference between them is up to ~50 ms. When T1 ≤ 200 ms and T1 ≤ T2 < T1 + 100 ms, T2 is underestimated systematically, and the underestimation is roughly a function of T2 − T1. Except when T1 ≈ T2, this is assimilation of T2 to T1, partially in a unilateral manner. This systematic underestimation, time-shrinking, disappears when T1 > 300 ms. When T2 = 100 or 200 ms and T1 = T2 + 100 or T2 + 200 ms, T1 is perceptually contrasted against T2: T1 is overestimated. When 80 ≤ T1 ≤ 280 ms and T2 ≥ T1 + 300 ms, T2 is contrasted against T1: in this case, T2 is overestimated. Assimilation and contrast are more conspicuous in T2 than in T1. For three adjacent time intervals, T1, T2, and T3, the perception of T3 can be affected by both T1 and T2, and the perception of T2 by T1.
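The conditions summarized in entry 9 can be scanned more easily in code. Below is a minimal Python sketch, not the authors' material: it encodes the quoted thresholds literally, and the function name, the order in which the conditions are checked, and the output labels are our own assumptions.

```python
# Minimal sketch of the assimilation/contrast conditions quoted in entry 9.
# The conditions overlap slightly in the abstract; the more specific ones are
# checked first here, which is our own choice.

def classify_interval_pair(t1_ms: float, t2_ms: float) -> str:
    """Classify a T1-T2 pair (both in ms) by the perceptual effect described above."""
    diff = t2_ms - t1_ms
    if t1_ms <= 200 and 0 <= diff < 100:               # T1 <= T2 < T1 + 100 ms
        return "time-shrinking: T2 underestimated"
    if t2_ms in (100, 200) and t1_ms in (t2_ms + 100, t2_ms + 200):
        return "contrast: T1 overestimated"
    if 80 <= t1_ms <= 280 and t2_ms >= t1_ms + 300:    # T2 >= T1 + 300 ms
        return "contrast: T2 overestimated"
    if abs(diff) <= 50:                                 # difference up to ~50 ms
        return "bilateral assimilation"
    return "no systematic distortion described"


print(classify_interval_pair(120, 180))   # -> time-shrinking: T2 underestimated
print(classify_interval_pair(200, 520))   # -> contrast: T2 overestimated
```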
10. Miharu FUYUNO, Yuko YAMASHITA, Yoshitaka NAKAJIMA, Multimodal Corpora of English Public Speaking by Asian Learners: Analyses on Speech Rates and Pauses, 6th International Conference on Corpus Linguistics, 2014.05.
11. Miharu FUYUNO, Yuko YAMASHITA, Yoshikiyo KAWASE, Yoshitaka NAKAJIMA, Analyzing Speech Pauses and Facial Movement Patterns in Multimodal Public-Speaking Data of EFL Learners, The International LCSAW Symposium 2014, 2014.05.
12. Yoshitaka NAKAJIMA, Takayuki SASAKI, Kazuo UEDA, Gerard B. REMIJN, Auditory Grammar in Music, The 13th International Conference on Music Perception and Cognition and the 5th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2014.08.
13. Satoshi MORIMOTO, Gerard B. REMIJN, Yoshitaka NAKAJIMA, Computational Model-Based Analysis of Context Effects on Chord Processing, The 13th International Conference on Music Perception and Cognition and the 5th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2014.08.
14. Zhimin BAO, Yuko YAMASHITA, Kazuo UEDA, Yoshitaka NAKAJIMA, The Acquisition of Speech Rhythm in Chinese-, English-, and Japanese-Learning Infants, The 13th International Conference on Music Perception and Cognition and the 5th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2014.08.
15. Emi HASUO, Kazuo UEDA, Takuya KISHIDA, Haruna FUJIHIRA, Satoshi MORIMOTO, Gerard B. REMIJN, Kimio SHIRAISHI, Shozo TOBIMATSU, Yoshitaka NAKAJIMA, Duration Perception of Filled and Empty Intervals: A Study with Magnitude Estimation and Electroencephalography, The 13th International Conference on Music Perception and Cognition and the 5th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2014.08.
16. Takuya KISHIDA, Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard B. REMIJN, Perceptual Roles of Power Fluctuation Factors in Speech, The 13th International Conference on Music Perception and Cognition and the 5th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2014.08.
17. Gerard B. REMIJN, Yushiro TSUBAKI, Kazuo UEDA, Yoshitaka NAKAJIMA, Auditory Reorganization of Gliding Tones in Different Frequency Ranges, The 13th International Conference on Music Perception and Cognition and the 5th Conference of the Asia-Pacific Society for the Cognitive Sciences of Music, 2014.08.
18. Yuko YAMASHITA, Miharu FUYUNO, Yoshitaka NAKAJIMA, Influence of speech rate and pauses on the efficiency of English public speaking of Japanese EFL learners, Auditory Research Meeting of the Acoustical Society of Japan, 2014.12.
19. Takuya KISHIDA, Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard B. REMIJN, Perceptual roles of power-fluctuation factors in speech perception: A new method of factor analysis, Auditory Research Meeting of the Acoustical Society of Japan, 2014.12.
20. Shinya ISAJI, Kazuo UEDA, Yoshitaka NAKAJIMA, Gerard B. REMIJN, Speech intelligibility mapped onto a time-frequency resolution plane, Auditory Research Meeting of the Acoustical Society of Japan, 2014.12.
21. Yoshitaka NAKAJIMA, Mizuki MATSUDA, Gerard B. REMIJN, Kazuo UEDA, Temporal resolution needed to hear out Japanese morae in mosaic speech, Auditory Research Meeting of the Acoustical Society of Japan, 2014.05.
22. Yoshitaka NAKAJIMA, Kazuo UEDA, Shota FUJIMARU, Yuki Ohsaka, Sonority in British English, 21st International Congress on Acoustics, 165th Meeting of the Acoustical Society of America, 52nd Meeting of the Canadian Acoustical Association, 2013.06.
23. Emi HASUO, Yoshitaka NAKAJIMA, Takuya KISHIDA, Erika TOMIMATSU, Kazuo UEDA, Simon Grondin, The filled duration illusion with the method of adjustment when filled vs. empty comparison intervals are used, Fechner Day 2013, The 29th Annual Meeting of the International Society for Psychophysics, 2013.10.
24. Yoshitaka NAKAJIMA, Hiroshige TAKEICHI, Takako MITSUDO, Shozo TOBIMATSU, Perceptual processing of pairs of acoustically marked time intervals: Correspondence between psychophysical and electrophysiological data, Fechner Day 2013, The 29th Annual Meeting of the International Society for Psychophysics, 2013.10, Event-related potentials (ERPs) elicited by pairs of subsequent time intervals marked by sound bursts were recorded in our previous study [1], and the data were reanalyzed utilizing a new multivariate method. Subsequent time intervals t1 and t2 are often perceived as equal in duration when t2 is shorter than 300 ms and up to 50 ms shorter or up to 80 ms longer than t1; the subjective equality holds even if the physical difference is larger than the just noticeable difference obtained for t1 and t2 separated in time. This phenomenon is called auditory temporal assimilation. ERPs were registered in two types of sessions: J sessions, in which the participants judged whether the two intervals were subjectively equal or not, and NJ sessions, in which no judgments were required. Slow negative components occurred in brain activities in the J sessions, more conspicuously when inequality between t1 and t2 was perceived, in agreement with our earlier study. An experiment in which t2 was fixed at 200 ms was chosen for the present analysis. For a moving 100-ms time window, a correlation matrix across the 19 electrodes was calculated for each temporal pattern, and the correlation matrix distance (CMD, the Euclidean distance between the respective correlation matrices) between each pair of patterns was evaluated. The patterns for which subjective equality dominated were classified as equal (E) patterns, and those for which subjective inequality dominated as unequal (UE) patterns. There were four E patterns and three UE patterns, and no patterns to be classified otherwise. A measure of separation of E vs. UE patterns in terms of brain activities was calculated as the sum of squared CMDs between E and UE patterns, and expressed as relative separation (in proportion to the total squared CMD). The relative separation was a function of time, represented by the temporal midpoint of the moving window. The relative separation in the J sessions showed a peak around 70 ms after t2, similarly to our earlier findings [2]. A process related to E-UE judgment is thus likely to take place within 100 ms after t2. Peaks within 100 ms after t2 were observed also in the NJ sessions, suggesting that implicit judgments, although not required, may have occurred at a very early stage. The perceptual separation between the E and the UE patterns can thus be related to dynamic aspects of brain activities, critical factors of which we are trying to identify and locate.
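The moving-window measure described in entry 24 can be sketched as follows. This is our own illustrative reconstruction, not the authors' code: it assumes ERP epochs arranged as (patterns × electrodes × samples), "E"/"UE" labels per temporal pattern, and a window length given in samples (100 ms at the recording's sampling rate); all names are hypothetical.

```python
# Sketch of the correlation-matrix-distance (CMD) analysis in entry 24.
import numpy as np

def relative_separation(epochs, labels, win_len):
    """Per window position: sum of squared CMDs between E and UE patterns,
    divided by the total squared CMD over all pattern pairs."""
    n_pat, _, n_samp = epochs.shape
    e_idx = [i for i, lab in enumerate(labels) if lab == "E"]
    ue_idx = [i for i, lab in enumerate(labels) if lab == "UE"]
    curve = []
    for start in range(n_samp - win_len + 1):
        # correlation matrix across electrodes for each pattern in this window
        cms = [np.corrcoef(epochs[p, :, start:start + win_len]) for p in range(n_pat)]
        sq_cmd = lambda i, j: np.sum((cms[i] - cms[j]) ** 2)   # squared Euclidean CMD
        between = sum(sq_cmd(i, j) for i in e_idx for j in ue_idx)
        total = sum(sq_cmd(i, j) for i in range(n_pat) for j in range(i + 1, n_pat))
        curve.append(between / total if total > 0 else np.nan)
    return np.array(curve)   # one value per window; plotted against the window midpoint
```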
25. Tsuyoshi KURODA, Simon GRONDIN, Yoshitaka NAKAJIMA, Kazuo UEDA, French and English rhythms are perceptually discriminable with only intensity changes in low frequency regions of speech, Fechner Day 2012, The 28th Annual Meeting of the International Society for Psychophysics, 2012.10, The purpose of this study was to determine which frequency band would contribute to discrimination between the speech rhythms of French and English. Each trial consisted of two noises with different intensity changes. Each intensity change simulated one derived from a frequency band of recorded sentences of French or English; the band had a center frequency of 350, 1000, 2150, or 4800 Hz. Participants evaluated the rhythm dissimilarity of the two noises on an 8-point scale. Two noises were evaluated as more dissimilar when the two sentences whose intensity changes they simulated were in different languages than when they were in the same language. Moreover, this tendency was reduced for the 4800-Hz band compared with the other bands. This indicates that French and English rhythms are discriminable from intensity changes in low frequency bands, even without any cues to pitch or phonemes.
26. Kazuo UEDA, Yoshitaka NAKAJIMA, Perceptual roles of different frequency bands in Japanese syllable identification, Fechner Day 2012, The 28th Annual Meeting of the International Society for Psychophysics, 2012.10, Ueda et al. [(2010). Fechner Day 2010, Padua.] indicated that speech information could be essentially transmitted by the power fluctuations in four frequency bands. We aimed at clarifying the roles of these frequency bands in Japanese speech perception through V/CV syllable identification. We first performed factor analyses of power fluctuations of critical-band-filtered speech, and obtained four frequency bands as in the previous research. The speech was a set of V/CV patterns uttered by a male and a female speaker. The speech patterns were converted into noise-vocoded speech so that only the power fluctuation in each frequency band was preserved. There were also patterns in which one of the frequency bands was eliminated, resulting in a spectral gap. Eliminating the lowest band (50-570 Hz) crucially deteriorated perceptual differentiation between voiced and unvoiced consonants. Eliminating the second lowest band (570-1850 Hz) interfered with vowel identification, turning almost all vowels into /i/. The roles of the other frequency bands were not obvious, but their temporal relationships with the lowest band were suggested to play a role.
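The noise-vocoding manipulation described in entry 26 (and, for a single band, the noise stimuli of entry 25) can be approximated as below. This is a generic sketch under our own assumptions, not the authors' stimulus-generation code: the filter order, the RMS-smoothing window, and the two upper band edges are illustrative, only 50-570 Hz and 570-1850 Hz being quoted in the abstract.

```python
# Rough noise-vocoder sketch: each band of the input is replaced by band-limited
# noise carrying only that band's power fluctuation; dropping a band leaves a
# spectral gap. Assumed parameters, not the original implementation.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, band_edges, drop_band=None, win_ms=20):
    win = int(fs * win_ms / 1000)
    kernel = np.ones(win) / win
    out = np.zeros(len(speech))
    for i, (lo, hi) in enumerate(band_edges):
        if i == drop_band:                                    # eliminate this band
            continue
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.sqrt(np.convolve(band ** 2, kernel, mode="same"))   # power fluctuation
        carrier = sosfiltfilt(sos, np.random.randn(len(speech)))     # band-limited noise
        out += env * carrier / (np.sqrt(np.mean(carrier ** 2)) + 1e-12)
    return out

bands = [(50, 570), (570, 1850), (1850, 4000), (4000, 7000)]  # upper edges assumed
fs = 22050
speech = np.random.randn(fs)                                  # stand-in for a recorded V/CV pattern
gap_stimulus = noise_vocode(speech, fs, bands, drop_band=0)   # removes the 50-570 Hz band
```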
27. Yoshitaka NAKAJIMA, Kazuo UEDA, Shota FUJIMARU, Hirotoshi MOTOMURA, Yuki Ohsaka, Acoustic correlate of phonological sonority in British English, Fechner Day 2012, The 28th Annual Meeting of the International Society for Psychophysics, 2012.10, Sonority or aperture as proposed in the linguistic literature can be considered a kind of subjective measure specific to speech perception. Vowels have high sonorities, corresponding to the linguistic fact that they can be nuclei of syllables, while fricatives and stops have low sonorities. In order to understand how sonority is perceived, we attempted to find an acoustic dimension on which we could construct a psychophysical scale of sonority. We applied the same multivariate analysis method as in Ueda et al. (2010, Fechner Day, Padua) to spoken sentences in British English collected in a commercial database, in which phonemes were segmented and labeled. The speech signals went through a bank of critical-band filters, and the output power fluctuations were subjected to factor analysis. The same three factors as in our previous study appeared. The analyzed phonemes were classified into three categories, i.e., vowels, sonorant consonants, and obstruents. These categories were well represented in the Cartesian space whose coordinates were the factor scores of the above factors. One of the factors, located around 1000 Hz, was highly correlated with sonority or aperture.
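Entries 26 and 27 both rest on a factor analysis of the power fluctuations of critical-band-filtered speech. The following is a rough stand-in for that kind of pipeline, not the original one: a handful of broad bands stand in for the full critical-band filter bank, and the band edges beyond those quoted above, the frame length, the dB conversion, and the use of scikit-learn's FactorAnalysis are our assumptions; the random signal is only a placeholder for real labeled speech.

```python
# Sketch: band-wise short-term power series -> factor analysis across bands.
# Factor scores per frame could then be grouped by phoneme label (vowels,
# sonorant consonants, obstruents) as in entry 27. Assumed parameters throughout.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FactorAnalysis

def band_power_series(signal, fs, band_edges, frame_ms=10):
    frame = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame
    cols = []
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        x = sosfiltfilt(sos, signal)[:n_frames * frame].reshape(n_frames, frame)
        cols.append(10 * np.log10(np.mean(x ** 2, axis=1) + 1e-12))   # dB power per frame
    return np.column_stack(cols)                                      # (n_frames, n_bands)

bands = [(50, 570), (570, 1850), (1850, 4000), (4000, 7000)]  # upper edges assumed
fs = 22050
speech = np.random.randn(fs * 2)                 # placeholder for a labeled recording
X = band_power_series(speech, fs, bands)
fa = FactorAnalysis(n_components=3)              # three factors, as in entry 27
scores = fa.fit_transform(X)                     # factor scores per 10-ms frame
```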
28. Emi HASUO, Yoshitaka NAKAJIMA, Erika TOMIMATSU, Simon GRONDIN, Kazuo UEDA, Perceiving filled vs. empty time intervals: A comparison of adjustment and magnitude estimation methods, Fechner Day 2012, The 28th Annual Meeting of the International Society for Psychophysics, 2012.10, A time interval between the onset and the offset of a continuous sound (a filled interval) is often perceived to be longer than a time interval between two successive brief sounds (an empty interval) of the same physical duration. The present study examined the occurrence of this phenomenon, sometimes called the filled duration illusion, for time intervals of 40-520 ms with the method of adjustment and the method of magnitude estimation. When the method of adjustment was used, the filled duration illusion appeared clearly for a few participants, while it did not appear for the majority of participants. With magnitude estimation, the filled duration illusion was more likely to occur. The amounts of the illusion did not correlate between the two methods, suggesting that, even for the same participant, the perception of empty and filled intervals can be influenced by the experimental method.
29. Kazuo UEDA, Yoshitaka NAKAJIMA, Comparison of Factors Extracted from Power Fluctuations in Critical-Band-Filtered Homophonic Choral Music, The 12th International Conference on Music Perception and Cognition, 2012.07.
30. Takako MITSUDO, Yoshitaka NAKAJIMA, Gerard Remijn, Hiroshige TAKEICHI, Yoshinobu GOTO, Shozo Tobimatsu, Electrophysiological Substrates of Auditory Temporal Assimilation Between Two Neighboring Time Intervals, The 12th International Conference on Music Perception and Cognition, 2012.07.
31. Yoshitaka NAKAJIMA, Hiroshige TAKEICHI, Saki KIDERA, Kazuo UEDA, Multivariate analyses of speech signals in singing and non-singing voices, The 12th International Conference on Music Perception and Cognition, 2012.07.
32. Hiroshige TAKEICHI, Takako MITSUDO, Yoshitaka NAKAJIMA, Shozo Tobimatsu, Electrophysiological correlates of subjective equality and inequality between neighboring time intervals, The 12th International Conference on Music Perception and Cognition, 2012.07.
33. Temporal structures of temporal perception: How time intervals marked by very short sounds are perceived.
34. Auditory illusions related to temporal continuity and discontinuity.
35. Auditory grammar and auditory organization.
36. Musical tonality.
37. Perceptual organization of onsets and offsets in speech.