Kyushu University Academic Staff Educational and Research Activities Database
List of Papers
Kazuo UEDA Last modified date: 2024.04.12

Associate Professor / Perceptual Psychology / Department of Acoustic Design / Faculty of Design


Papers
1. Kazuo Ueda, Masashi Hashimoto, Hiroshige Takeichi, and Kohei Wakamiya, Interrupted mosaic speech revisited: Gain and loss in intelligibility by stretching, The Journal of the Acoustical Society of America, 10.1121/10.0025132, 155, 3, 1767-1779, 2024.03, [URL], Our previous investigation on the effect of stretching spectrotemporally degraded and temporally interrupted speech stimuli showed remarkable intelligibility gains [Ueda, Takeichi, and Wakamiya (2022). J. Acoust. Soc. Am. 152(2), 970–980]. In this previous study, however, gap durations and temporal resolution were confounded. In the current investigation, we therefore observed the intelligibility of so-called mosaic speech while dissociating the effects of interruption and temporal resolution. The intelligibility of mosaic speech (20 frequency bands and 20 ms segment duration) declined from 95% to 78% and 33% by interrupting it with 20 and 80 ms gaps. Intelligibility improved, however, to 92% and 54% (14% and 21% gains for 20 and 80 ms gaps, respectively) by stretching mosaic segments to fill silent gaps (n = 21). By contrast, the intelligibility was impoverished to a minimum of 9% (7% loss) when stretching stimuli interrupted with 160 ms gaps. Explanations based on auditory grouping, modulation unmasking, or phonemic restoration may account for the intelligibility improvement by stretching, but not for the loss. The probability summation model accounted for “U”-shaped intelligibility curves and the gain and loss of intelligibility, suggesting that perceptual unit length and speech rate may affect the intelligibility of spectrotemporally degraded speech stimuli.
2. Gerard B. Remijn, Masaki Teramachi, and Kazuo Ueda, Auditory Ensemble Perception (Summary Statistics) for Music Scale Tones by Listeners with and without Absolute Pitch, Auditory Perception & Cognition, 10.1080/25742442.2024.2310460, 2024.01, [URL].
3. Kazuo Ueda, Linh Le Dieu Doan, and Hiroshige Takeichi, Checkerboard and interrupted speech: Intelligibility contrasts related to factor-analysis-based frequency bands, The Journal of the Acoustical Society of America, 10.1121/10.0021165, 154, 4, 2010-2020, 2023.10, [URL], It has been shown that the intelligibility of checkerboard speech stimuli, in which speech signals were periodically interrupted in time and frequency, drastically varied according to the combination of the number of frequency bands (2–20) and segment duration (20–320 ms). However, the effects of the number of frequency bands between 4 and 20 and the frequency division parameters on intelligibility have been largely unknown. Here, we show that speech intelligibility was lowest in four-band checkerboard speech stimuli, except for the 320-ms segment duration. Temporally interrupted speech stimuli and eight-band checkerboard speech stimuli followed in this order (N = 19 and 20). At the same time, U-shaped intelligibility curves were observed for four-band and possibly eight-band checkerboard speech stimuli. Furthermore, different parameters of frequency division resulted in small but significant intelligibility differences at the 160- and 320-ms segment duration in four-band checkerboard speech stimuli. These results suggest that factor-analysis-based four frequency bands, representing groups of critical bands correlating with each other in speech power fluctuations, work as speech cue channels essential for speech perception. Moreover, a probability summation model for perceptual units, consisting of a sub-unit process and a supra-unit process that receives outputs of the speech cue channels, may account for the U-shaped intelligibility curves. (A simplified probability-summation sketch is given after this list.)
4. Kazuo Ueda, Hiroshige Takeichi, and Kohei Wakamiya, Auditory grouping is necessary to understand interrupted mosaic speech stimuli, The Journal of the Acoustical Society of America, 10.1121/10.0013425, 152, 2, 970-980, 2022.08, [URL], The intelligibility of interrupted speech stimuli has been known to be almost perfect when segment duration is shorter than 80 ms, which means that the interrupted segments are perceptually organized into a coherent stream under this condition. However, why listeners can successfully group the interrupted segments into a coherent stream has been largely unknown. Here we show that the intelligibility for mosaic speech, in which original speech was segmented in frequency and time, and noise-vocoded with the average power in each unit, was largely reduced by periodical interruption. At the same time, the intelligibility could be recovered by promoting auditory grouping of the interrupted segments by stretching the segments up to 40 ms and reducing the gaps, provided that the number of frequency bands was sufficient (>= 4) and the original segment duration was equal to or less than 40 ms. The interruption was devastating for mosaic speech stimuli, very likely because the deprivation of periodicity and temporal fine structure with mosaicking prevented successful auditory grouping for the interrupted segments. (A simplified mosaicking sketch is given after this list.)
5. Hikaru Eguchi, Kazuo Ueda, Gerard B. Remijn, Yoshitaka Nakajima, and Hiroshige Takeichi, The common limitations in auditory temporal processing for Mandarin Chinese and Japanese, Scientific Reports, 10.1038/s41598-022-06925-x, 12, 1, 3002-3002, Article number: 3002, 2022.02, [URL], The present investigation focused on how temporal degradation affected intelligibility in two types of languages, i.e., a tonal language (Mandarin Chinese) and a non-tonal language (Japanese). The temporal resolution of common daily-life sentences spoken by native speakers was systematically degraded with mosaicking (mosaicising), in which the power of original speech in each of the regularly spaced time-frequency units was averaged and temporal fine structure was removed. The results showed very similar patterns of variations in intelligibility for these two languages over a wide range of temporal resolution, implying that temporal degradation crucially affected speech cues other than tonal cues in degraded speech without temporal fine structure. Specifically, the intelligibility of both languages maintained a ceiling up to about the 40-ms segment duration, then the performance gradually declined with increasing segment duration, and reached a floor at about the 150-ms segment duration or longer. The same limitations for the ceiling performance up to 40 ms appeared for the other method of degradation, i.e., local time-reversal, implying that a common temporal processing mechanism was related to the limitations. The general tendency was consistent with a dual time-window model of speech processing, in which a short (~ 20–30 ms) and a long (~ 200 ms) time-window run in parallel.
6. Kazuo Ueda and Ikuo Matsuo, Erratum: Intelligibility of chimeric locally time-reversed speech: Relative contribution of four frequency bands [JASA Express Lett. 1(6), 065201 (2021)], JASA Express Letters, 10.1121/10.0006007, 1, 9, 095201-095201, 2021.09, [URL].
7. Ueda, K., Kawakami, R., and Takeichi, H., Erratum: Checkerboard speech vs interrupted speech: Effects of spectrotemporal segmentation on intelligibility [JASA Express Lett. 1(7), 075204 (2021)], JASA Express Letters, 10.1121/10.0005990, 1, 8, 1-1, 085204, 2021.08, [URL].
8. Ueda, K., Kawakami, R., and Takeichi, H., Checkerboard speech vs interrupted speech: Effects of spectrotemporal segmentation on intelligibility, JASA Express Letters, 10.1121/10.0005600, 1, 7, 1-7, 075204, 2021.07, [URL], The intelligibility of interrupted speech (interrupted over time) and checkerboard speech (interrupted over time-by-frequency), both of which retained a half of the original speech, was examined. The intelligibility of interrupted speech stimuli decreased as segment duration increased. Twenty-band checkerboard speech stimuli brought nearly 100% intelligibility irrespective of segment duration, whereas, with 2 and 4 frequency bands, a trough of 35%-40% appeared at the 160-ms segment duration. Mosaic speech stimuli (power was averaged over a time-frequency unit) yielded generally poor intelligibility.
9. Kazuo Ueda and Ikuo Matsuo, Intelligibility of chimeric locally time-reversed speech: Relative contribution of four frequency bands, JASA Express Letters, 10.1121/10.0005439, 1, 6, 1-6, 065201, 2021.06, [URL], Intelligibility of 4-band speech stimuli was investigated (n = 18), such that only one of the frequency bands was preserved, whereas other bands were locally time-reversed (segment duration: 75-300 ms), or vice versa. Intelligibility was best retained (82% at 75 ms) when the second lowest band (540-1700 Hz) was preserved. When the same band was degraded, the largest drop (10% at 300 ms) occurred. The lowest and second highest bands contributed similarly less strongly to intelligibility. The highest frequency band contributed least. A close connection between the second lowest frequency band and sonority was suggested.
10. Kazuo UEDA and Valter CIOCCA, Phonemic restoration of interrupted locally time-reversed speech: Effects of segment duration and noise levels, Attention, Perception, & Psychophysics, 10.3758/s13414-021-02292-3, 83, 5, 1928-1934, Published online on 14 April 2021. Published in the completed journal issue on 19 June 2021., 2021.06, [URL], Intelligibility of temporally degraded speech was investigated with locally time-reversed speech (LTR) and its interrupted version (ILTR). Control stimuli comprising interrupted speech (I) were also included. Speech stimuli consisted of 200 Japanese meaningful sentences. In interrupted stimuli, speech segments were alternated with either silent gaps or pink noise bursts. The noise bursts had a level of -10, 0 or +10 dB relative to the speech level. Segment duration varied from 20 to 160 ms for ILTR sentences, but was fixed at 160 ms for I sentences. At segment durations between 40 and 80 ms, severe reductions in intelligibility were observed for ILTR sentences, compared with LTR sentences. A substantial improvement in intelligibility (30-33%) was observed when 40-ms silent gaps in ILTR were replaced with 0- and +10-dB noise. Noise with a level of -10 dB had no effect on the intelligibility. These findings show that the combined effects of interruptions and temporal reversal of speech segments on intelligibility are greater than the sum of each individual effect. The results also support the idea that illusory continuity induced by high-level noise bursts improves the intelligibility of ILTR and I sentences.
11. Natalia Postnova, Yoshitaka Nakajima, Kazuo Ueda, Gerard B. Remijn, Perceived Congruency in Audiovisual Stimuli Consisting of Gabor Patches and AM- and FM-tones, Multisensory Research, 10.1163/22134808-bja10041, 34, 5, 455-475, 2020.10.
12. Yixin Zhang, Yoshitaka Nakajima, Kazuo Ueda, Takuya Kishida, and Gerard B. Remijn, Comparison of Multivariate Analysis Methods as Applied to English Speech, Applied Sciences, 10.3390/app10207076, 10, 7076, 1-10, 2020.10, [URL].
13. Santi, Yoshitaka Nakajima, Kazuo Ueda, and Gerard B. Remijn, Intelligibility of English Mosaic Speech: Comparison between Native and Non-Native Speakers of English, Applied Sciences, 10.3390/app10196920, 10, 6920, 1-13, 2020.10.
14. Ikuo Matsuo, Kazuo Ueda, and Yoshitaka Nakajima, Intelligibility of chimeric locally time-reversed speech, The Journal of the Acoustical Society of America Express Letters, 10.1121/10.0001414, 147, 6, EL523-EL528, 2020.06, [URL], The intelligibility of chimeric locally time-reversed speech was investigated. Both (1) the boundary frequency between the temporally degraded band and the non-degraded band and (2) the segment duration were varied. Japanese mora accuracy decreased if the width of the degraded band or the segment duration increased. Nevertheless, the chimeric stimuli were more intelligible than the locally time-reversed controls. The results imply that the auditory system can use both temporally degraded speech information and undamaged speech information over different frequency regions in the processing of the speech signal, if the amplitude envelope in the frequency range of 840–1600 Hz was preserved.
15. Kazuo Ueda, Yoshitaka Nakajima, Florian Kattner, and Wolfgang Ellermeier, Irrelevant speech effects with locally time-reversed speech: Native vs non-native language, The Journal of the Acoustical Society of America, 10.1121/1.5112774, 145, 6, 3686-3694, 2019.06, [URL], Irrelevant speech is known to interfere with short-term memory of visually presented items. Here, this irrelevant speech effect was studied with a factorial combination of 3 variables: the participants' native language, the language the irrelevant speech was derived from, and the playback direction of the irrelevant speech. We used locally time-reversed speech as well to disentangle the contributions of local and global integrity. German and Japanese speech was presented to German (n = 79) and Japanese (n = 81) participants while they were performing a serial-recall task. In both groups, any kind of irrelevant speech impaired recall accuracy as compared to a pink-noise control condition. When the participants' native language was presented, normal speech and locally time-reversed speech with short segment duration, preserving intelligibility, were the most disruptive. Locally time-reversed speech with longer segment durations and normal or locally time-reversed speech played entirely backward, both lacking intelligibility, were less disruptive. When an unfamiliar, incomprehensible signal was presented as irrelevant speech, no significant difference was found between locally time-reversed speech and its globally inverted version, suggesting that the effect of global inversion depends on the familiarity of the language.
16. Kazuo Ueda, Tomoya Araki, Yoshitaka Nakajima, Frequency specificity of amplitude envelope patterns in noise-vocoded speech, Hearing Research, 10.1016/j.heares.2018.06.005, 367, 169-181, 2018.08, [URL], We examined the frequency specificity of amplitude envelope patterns in 4 frequency bands, which universally appeared through factor analyses applied to power fluctuations of critical-band filtered speech sounds in 8 different languages/dialects [Ueda and Nakajima (2017). Sci. Rep., 7 (42468)]. A series of 3 perceptual experiments with noise-vocoded speech of Japanese sentences was conducted. Nearly perfect (92–94%) mora recognition was achieved, without any extensive training, in a control condition in which 4-band noise-vocoded speech was employed (Experiments 1–3). Blending amplitude envelope patterns of the frequency bands, which resulted in reducing the number of amplitude envelope patterns while keeping the average spectral levels unchanged, revealed a clear deteriorating effect on intelligibility (Experiment 1). Exchanging amplitude envelope patterns brought generally detrimental effects on intelligibility, especially when involving the 2 lowest bands (≲1850 Hz; Experiment 2). Exchanging spectral levels averaged in time had a small but significant deteriorating effect on intelligibility in a few conditions (Experiment 3). Frequency specificity in low-frequency-band envelope patterns thus turned out to be conspicuous in speech perception.
17. Yoshitaka Nakajima, Mizuki Matsuda, Kazuo Ueda, and Gerard B. Remijn, Temporal Resolution Needed for Auditory Communication: Measurement with Mosaic Speech, Frontiers in Human Neuroscience, 10.3389/fnhum.2018.00149, 12, 149, 2018.04, [URL], Temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners' intelligibility scores of Japanese speech with a systematically degraded temporal resolution, so-called "mosaic speech": speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible with a temporal resolution of 40 ms or finer. Intelligibility dropped for a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results on speech signals separated into short temporal segments.
18. Kazuo UEDA, Yoshitaka NAKAJIMA, Wolfgang ELLERMEIER, Florian KATTNER, Intelligibility of locally time-reversed speech: A multilingual comparison, Scientific Reports, 10.1038/s41598-017-01831-z, 7, 2017.05, [URL], A set of experiments was performed to make a cross-language comparison of intelligibility of locally time-reversed speech, employing a total of 117 native listeners of English, German, Japanese, and Mandarin Chinese. The experiments made it possible to examine whether languages of three timing types (stress-, syllable-, and mora-timed) exhibit different trends in intelligibility, depending on the duration of the segments that were temporally reversed. The results showed a strikingly similar trend across languages, especially when the time axis of segment duration was normalised with respect to the deviation of a talker's speech rate from the average in each language. This similarity is somewhat surprising given the systematic differences in vocalic proportions characterising the languages studied, which had been shown in previous research and were largely replicated with the present speech material. These findings suggest that a universal temporal window shorter than 20-40 ms plays a crucial role in perceiving locally time-reversed speech by working as a buffer in which temporal reorganisation can take place with regard to lexical and semantic processing. (A simplified local time-reversal sketch is given after this list.)
19. Yoshitaka NAKAJIMA, Kazuo UEDA, Shota FUJIMARU, Hirotoshi MOTOMURA, Yuki OHSAKA, English phonology and an acoustic language universal, Scientific Reports, 10.1038/srep46049, 7, 46049, 1-6, 2017.04, [URL], Acoustic analyses of eight different languages/dialects had revealed a language universal: Three spectral factors consistently appeared in analyses of power fluctuations of spoken sentences divided by critical-band filters into narrow frequency bands. Examining linguistic implications of these factors seems important to understand how speech sounds carry linguistic information. Here we show the three general categories of the English phonemes, i.e., vowels, sonorant consonants, and obstruents, to be discriminable in the Cartesian space constructed by these factors: A factor related to frequency components above 3,300 Hz was associated only with obstruents (e.g., /k/ or /z/), and another factor related to frequency components around 1,100 Hz only with vowels (e.g., /a/ or /i/) and sonorant consonants (e.g., /w/, /r/, or /m/). The latter factor highly correlated with the hypothetical concept of sonority or aperture in phonology. These factors turned out to connect the linguistic and acoustic aspects of speech sounds systematically.
20. Kazuo UEDA, Yoshitaka NAKAJIMA, An acoustic key to eight languages/dialects: Factor analyses of critical-band-filtered speech, Scientific Reports, 10.1038/srep42468, 7, 42468, 1-4, 2017.02, [URL], The peripheral auditory system functions like a frequency analyser, often modelled as a bank of non-overlapping band-pass filters called critical bands; 20 bands are necessary for simulating frequency resolution of the ear within an ordinary frequency range of speech (up to 7,000 Hz). A far smaller number of filters seemed sufficient, however, to re-synthesise intelligible speech sentences with power fluctuations of the speech signals passing through them; nevertheless, the number and frequency ranges of the frequency bands for efficient speech communication are yet unknown. We derived four common frequency bands, covering approximately 50-540, 540-1,700, 1,700-3,300, and above 3,300 Hz, from factor analyses of spectral fluctuations in eight different spoken languages/dialects. The analyses robustly led to three factors common to all languages investigated (the low & mid-high factor, related to the two separate frequency ranges of 50-540 and 1,700-3,300 Hz; the mid-low factor, related to the range of 540-1,700 Hz; and the high factor, related to the range above 3,300 Hz) in these different languages/dialects, suggesting a language universal. (A simplified factor-analysis sketch is given after this list.)
21. Takuya KISHIDA, Yoshitaka NAKAJIMA, Kazuo UEDA, Gerard Remijn, Three Factors Are Critical in Order to Synthesize Intelligible Noise-Vocoded Japanese Speech, Frontiers in Psychology, 10.3389/fpsyg.2016.00517, 7, 517, 1-9, 2016.04, [URL].
22. Wolfgang Ellermeier, Florian Kattner, Kazuo UEDA, Kana Doumoto, Yoshitaka NAKAJIMA, Memory disruption by irrelevant noise-vocoded speech: Effects of native language and the number of frequency bands, the Journal of the Acoustical Society of America, http://dx.doi.org/10.1121/1.4928954, 138, 3, 1561-1569, 2015.09, [URL], To investigate the mechanisms by which unattended speech impairs short-term memory performance, speech samples were systematically degraded by means of a noise vocoder. For experiment 1, recordings of German and Japanese sentences were passed through a filter bank dividing the spectrum between 50 and 7000 Hz into 20 critical-band channels or combinations of those, yielding 20, 4, 2, or just 1 channel(s) of noise-vocoded speech. Listening tests conducted with native speakers of both languages showed a monotonic decrease in speech intelligibility as the number of frequency channels was reduced. For experiment 2, 40 native German and 40 native Japanese participants were exposed to speech processed in the same manner while trying to memorize visually presented sequences of digits in the correct order. Half of each sample received the German, the other half received the Japanese speech samples. The results show large irrelevant-speech effects increasing in magnitude with the number of frequency channels. The effects are slightly larger when subjects are exposed to their own native language. The results are neither predicted very well by the speech transmission index, nor by psychoacoustical fluctuation strength, most likely, since both metrics fail to disentangle amplitude and frequency modulations in the signals.
23. Yoshitaka NAKAJIMA, Takayuki SASAKI, Kazuo UEDA, Gerard B. REMIJN, Auditory Grammar, Acoustics Australia, 42, 2, 97-101, 2014.08.
24. Emi HASUO, Yoshitaka NAKAJIMA, Erika TOMIMATSU, Simon GRONDIN, Kazuo UEDA, The occurrence of the filled duration illusion: A comparison of the method of adjustment with the method of magnitude estimation, Acta Psychologica, 147, 111-121, (Accepted 4 October 2013; Available online 5 November 2013), 2014.02, A time interval between the onset and the offset of a continuous sound (filled interval) is often perceived to be longer than a time interval between two successive brief sounds (empty interval) of the same physical duration. The present study examined whether and how this phenomenon, sometimes called the filled duration illusion (FDI), occurs for short time intervals (40–520 ms). The investigation was conducted with the method of adjustment (Experiment 1) and the method of magnitude estimation (Experiment 2). When the method of adjustment was used, the FDI did not appear for the majority of the participants, but it appeared clearly for some participants. In the latter case, the amount of the FDI increased as the interval duration lengthened. The FDI was more likely to occur with magnitude estimation than with the method of adjustment. The participants who showed clear FDI with one method did not necessarily show such clear FDI with the other method.
25. Yuko Yamashita, Yoshitaka Nakajima, Kazuo Ueda, Yohko Shimada, David Hirsh, Takeharu Seno and Benjamin Alexander Smith, Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants, Frontiers in Language Sciences, 10.3389/fpsyg.2013.00057, 4, 57, 2013.02, The purpose of this study was to explore developmental changes, in terms of spectral fluctuations and temporal periodicity, with Japanese- and English-learning infants. Three age groups (15, 20, and 24 months) were selected, because infants diversify phonetic inventories with age. Natural speech of the infants was recorded. We utilized a critical-band-filter bank, which simulated the frequency resolution in adults’ auditory periphery. First, the correlations between the power fluctuations of the critical-band outputs represented by factor analysis were observed in order to see how the critical bands should be connected to each other, if a listener is to differentiate sounds in infants’ speech. In the following analysis, we analyzed the temporal fluctuations of factor scores by calculating autocorrelations. The present analysis identified three factors as had been observed in adult speech at 24 months of age in both linguistic environments. These three factors were shifted to a higher frequency range corresponding to the smaller vocal tract size of the infants. The results suggest that the vocal tract structures of the infants had developed to an adult-like configuration by 24 months of age in both language environments. The amount of utterances with periodic nature of shorter time increased with age in both environments. This trend was clearer in the Japanese environment.
26. Emi Hasuo, Yoshitaka Nakajima, and Kazuo Ueda, Does filled duration illusion occur for very short time intervals?, Acoustical Science and Technology, 32, 2, 82-85, 2011.03.
27. Takayuki Sasaki, Yoshitaka Nakajima, Gert ten Hoopen, Edwin van Buuringen, Bob Massier, Taku Kojo, Tsuyoshi Kuroda, and Kazuo Ueda, Time-stretching: Illusory lengthening of filled auditory durations, Attention, Perception, & Psychophysics, 72, 1404-1421, 2010.07.
28. Kazuo Ueda, Reiko Akahane-Yamada, Ryo Komaki, and Takahiro Adachi, Identification of English /r/ and /l/ in noise: the effects of baseline performance, Acoustical Science and Technology, 28 (4) 251-259, 2007.07.
29. Takahiro Adachi, Reiko Akahane-Yamada, and Kazuo Ueda, Intelligibility of English phonemes in noise for native and non-native listeners, Acoustical Science and Technology, vol. 27, no. 5, 285-289, 2006.09.
30. Kazuo Ueda, Yoshitaka Nakajima, and Reiko Akahane-Yamada, An artificial environment is often a noisy environment: Auditory scene analysis and speech perception in noise, Journal of Physiological Anthropology and Applied Human Science, 24, 1, 129-133, 2005.02.
31. Nakajima, Y., Sasaki, T., Remijn, G. B., and Ueda, K., Perceptual organization of onsets and offsets of sounds, Journal of Physiological Anthropology and Applied Human Science, 23, 6, 345-349, 2004.12.
32. Ueda, K., Short-term auditory memory interference: the Deutsch demonstration revisited, Acoustical Science and Technology, vol. 25, no. 6, 457-467, 2004.11.
33. Ueda, K., Akahane-Yamada, R., and Komaki, R., Identification of English /r/ and /l/ in white noise by native and non-native listeners, Acoustical Science and Technology, 23, 6, 336-338, 2002.11.
34. Semal, C., Demany, L., Ueda, K., and Halle, P., Speech versus nonspeech in pitch memory, Journal of the Acoustical Society of America, 100, 2, 1132-1140, 1996.08.
35. Ueda, K., and Akagi, M., Sharpness and amplitude envelopes of broadband noise, Journal of the Acoustical Society of America, vol. 87, no. 2, 814-819, 1990.02.
36. Ueda, K., and Hirahara, T., Frequency response of headphones measured in free field and diffuse field by loudness comparison, Journal of the Acoustical Society of Japan (E), vol. 12, no. 3, 131-138, 1991.05.
37. Ueda, K., and Ohtsuki, M., The effect of sound pressure level difference on filled duration extension, Journal of the Acoustical Society of Japan (E), vol. 17, no. 3, 159-161, 1996.05.
38. Ueda, K., and Ohgushi, K., Perceptual components of pitch: Spatial representation using a multidimensional scaling technique, Journal of the Acoustical Society of America, vol. 82, no. 4, 1193-1200, 1987.10.
39. Should we assume a hierarchical structure for adjectives describing timbre?.
40. Spatial representations of two components of pitch using multidimensional scaling technique.
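
Method sketches

The simplified Python sketches below illustrate some of the signal manipulations and models referred to in the papers above. They are illustrative only: function names, band edges, segment durations, and other parameters are assumptions chosen for demonstration, not the published implementations.

Mosaic speech (cf. papers 4, 5, 13, and 17). Speech is divided into frequency bands and fixed-duration segments, the power in each time-frequency cell is averaged, and each cell is refilled with band-limited noise carrying that average power, which removes periodicity and temporal fine structure. A minimal sketch, assuming numpy and scipy:

```python
# Minimal, illustrative mosaicking sketch (not the published processing chain).
# The band edges, segment duration, filter order, and the function name
# "mosaicize" are example choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt


def mosaicize(signal, fs, band_edges_hz, seg_ms=40.0, order=4):
    """Replace each time-frequency cell with band-limited noise carrying the
    average power of the original signal in that cell."""
    seg_len = max(1, int(round(fs * seg_ms / 1000.0)))
    out = np.zeros(len(signal))
    rng = np.random.default_rng(0)
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                              # band-passed signal
        noise = sosfiltfilt(sos, rng.standard_normal(len(signal)))  # noise carrier
        for start in range(0, len(signal), seg_len):
            sl = slice(start, min(start + seg_len, len(signal)))
            rms_sig = np.sqrt(np.mean(band[sl] ** 2))
            rms_noise = np.sqrt(np.mean(noise[sl] ** 2))
            scale = rms_sig / rms_noise if rms_noise > 0 else 0.0
            out[sl] += noise[sl] * scale                             # cell-wise power match
    return out


if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    demo = np.sin(2 * np.pi * 440 * t) * (t < 0.5)   # stand-in for a speech signal
    edges = [50, 540, 1700, 3300, 7000]              # four bands, cf. paper 20
    mosaic = mosaicize(demo, fs, edges, seg_ms=40.0)
```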
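Locally time-reversed and interrupted speech (cf. papers 9, 10, 14, 15, and 18). The waveform is cut into fixed-duration segments; each segment is reversed in time (local time-reversal), and/or alternate segments are silenced (interruption). A minimal sketch, assuming numpy; the segment duration and the alternating-silence scheme are example choices rather than the published stimulus parameters:

```python
# Minimal, illustrative sketch of locally time-reversed (LTR) and interrupted speech.
import numpy as np


def locally_time_reverse(signal, fs, seg_ms):
    """Reverse the waveform within consecutive fixed-duration segments."""
    seg_len = max(1, int(round(fs * seg_ms / 1000.0)))
    out = signal.copy()
    for start in range(0, len(signal), seg_len):
        out[start:start + seg_len] = signal[start:start + seg_len][::-1]
    return out


def interrupt(signal, fs, seg_ms):
    """Silence every other fixed-duration segment."""
    seg_len = max(1, int(round(fs * seg_ms / 1000.0)))
    out = signal.copy()
    for i, start in enumerate(range(0, len(signal), seg_len)):
        if i % 2 == 1:
            out[start:start + seg_len] = 0.0
    return out


if __name__ == "__main__":
    fs = 16000
    x = np.random.default_rng(1).standard_normal(fs)   # stand-in for a speech signal
    ltr = locally_time_reverse(x, fs, seg_ms=80.0)
    iltr = interrupt(ltr, fs, seg_ms=80.0)              # interrupted LTR speech
```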
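Factor analysis of critical-band power fluctuations (cf. papers 16, 19, 20, and 25). Power fluctuations of critical-band-filtered speech are submitted to a factor analysis, and bands whose power co-varies group into a small number of factors. The sketch below shows only the general shape of such an analysis on simulated band-power data, assuming scikit-learn; the studies above computed the band-power matrix from filtered recordings of spoken sentences and used their own extraction and rotation settings:

```python
# Minimal, illustrative factor analysis of band-power fluctuations.
# rotation="varimax" requires a reasonably recent scikit-learn release.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_frames, n_bands, n_factors = 2000, 20, 3

# Simulated band-power time series (frames x critical bands) driven by a few
# shared sources, as a stand-in for power fluctuations of filtered speech.
sources = rng.standard_normal((n_frames, n_factors))
loadings = rng.standard_normal((n_factors, n_bands))
band_power = sources @ loadings + 0.5 * rng.standard_normal((n_frames, n_bands))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
fa.fit(band_power)
# fa.components_ has shape (n_factors, n_bands): how strongly each critical band
# loads on each factor, i.e., which bands fluctuate together.
print(fa.components_.shape)
```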
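Probability summation (cf. papers 1 and 3). If speech cues are carried by several independent perceptual units, recognition can be modelled as succeeding whenever at least one unit succeeds. The snippet below gives only this textbook rule; the published two-stage (sub-unit / supra-unit) model and its parameters are not reproduced here:

```python
# Textbook probability-summation rule only; parameters are illustrative.
def probability_summation(p_single, k):
    """Probability that at least one of k independent units succeeds,
    each with probability p_single."""
    return 1.0 - (1.0 - p_single) ** k


if __name__ == "__main__":
    for k in (1, 2, 4, 8):
        print(k, round(probability_summation(0.3, k), 3))
```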