|Gerard Remijn||Last modified date：2021.03.04|
Associate Professor / Human Science Course, Human Science International Course, Research Center for Applied Perceptual Science / Department of Human Science / Faculty of Design
|1.||Paulus, Y.T., Remijn, G.B., Usability of various dwell times for eye-gaze-based object selection with eye tracking, (2021). Displays, 67, 101997, doi.org/10.1016/j.displa.2021.101997, 2021.03.|
|2.||Postnova, N., Nakajima, Y., Ueda, K., Remijn, G.B., Perceived congruency in audiovisual stimuli consisting of Gabor patches and AM and FM tones, (2020). Multisensory Research, 1-21, doi.org/10.1163/22134808-bja10041, 2020.11.|
|3.||Zhang, Y., Nakajima, Y., Ueda, K., Remijn, G.B., Comparison of multivariate analysis methods as applied to English speech, (2020). Applied Sciences, 10(20), 1 - 12., 2020.10.|
|4.||Santi, Nakajima, Y., Ueda, K., Remijn, G.B., Intelligibility of English mosaic speech: Comparison between native and non-native speakers of English, (2020). Applied Sciences, 10(19), 1 - 13., 2020.10.|
|5.||Sato, H., Morimoto, Y., Remijn, G.B., Seno, T., Differences in Three Vection Indices (Latency, Duration, and Magnitude) Induced by "Camera-Moving" and "Object-Moving" in a Virtual Computer Graphics World, Despite Similarity in the Retinal Images, (2020). i-Perception 11(5), 2020.10.|
|6.||Paulus, Y.T., Remijn, G.B., What kind of grid formations and password formats are useful for password authentication with eye-gaze-based input?, (2019). Journal of Ergonomics, 9, 1 (249), 1-11., 2019.09.|
|7.||João Paulo Cabral, Gerard Bastiaan Remijn, Auditory icons: Design and physical characteristics, Applied Ergonomics, 10.1016/j.apergo.2019.02.008, 78, 224-239, 2019.07, Auditory icons are short sound messages that convey information about an object, event or situation. Originally, auditory icons have been used in computer interfaces, but are nowadays found in many other fields. In this review article, an overview is given of the main theoretical ideas behind the use and design of auditory icons. We identified the most common fields in which auditory icons have been used, and analyzed their acoustic characteristics. The review shows that few studies have provided a precise description of the physical characteristics of the sounds in auditory icons, e.g., their intensity level, duration, and frequency range. To improve the validity and replicability of research on auditory icons, and their universal design, precise descriptions of acoustic characteristics should thus be provided.|
|8.||Victoria Korshunova, Gerard Bastiaan Remijn, Synes Elischka, Catarina Mendonca, The Impact of Sound Systems on the Perception of Cinematic Content in Immersive Audiovisual Productions, 12th Asia Pacific Workshop on Mixed and Augmented Reality, APMAR 2019 Proceedings of the 2019 12th Asia Pacific Workshop on Mixed and Augmented Reality, APMAR 2019, 10.1109/APMAR.2019.8709163, 2019.05, With fast technological developments, traditional perceptual environments disappear and new ones emerge. These changes make the human senses adapt to new ways of perceptual understanding, for example, regarding the perceptual integration of sound and vision. Proceeding from the fact that hearing cooperates with visual attention processes, the aim of this study is to investigate the effect of different sound design conditions on the perception of cinematic content in immersive audiovisual reproductions. Here we introduce the results of a visual selective attention task (counting objects) performed by participants watching a 270-degree immersive audiovisual display, on which a movie ("Ego Cure") was shown. Four sound conditions were used, which employed an increasing number of loudspeakers, i.e., mono, stereo, 5.1 and 7.1.4. Eye tracking was used to record the participants' eye gaze during the task. The eye tracking data showed that an increased number of speakers and a wider spatial audio distribution diffused the participants' attention from the task-related part of the display to non-task-related directions. The number of participants looking at the task-irrelevant display in the 7.1.4 condition was significantly higher than in the mono audio condition. This implies that additional spatial cues in the auditory modality automatically influence human visual attention (involuntary eye movements) and human analysis of visual information. Sound engineers should consider this when mixing educational or any other information-oriented productions.|
|9.||Remijn, G.B., Fuyuno, M., Blanco Cortes, L., Ono, A., English as a medium of instruction at a Japanese university: Preferences and opinions of domestic and international students, (2019). Bulletin of KIKAN Education, 5, 75-85., 2019.03.|
|10.||Yesaya Tommy Paulus, Herlina, Khairu Zeta Leni, Chihiro Hiramatsu, Gerard B. Remijn, A Preliminary Experiment on Grid Densities for Visual Password Formats, 9th International Conference on Awareness Science and Technology, iCAST 2018 2018 9th International Conference on Awareness Science and Technology, iCAST 2018, 10.1109/ICAwST.2018.8517236, 122-127, 2018.10, Visual passwords are passwords made by selecting a sequence of objects on a screen, such as symbols, pictures, or patterns, either by manual input or eye-gaze-based input. Visual passwords can be useful alternatives to alphanumeric passwords, particularly for authentication on devices in semi-private or public spaces (e.g., on ATMs, laptops, smartphones, or car dashboards). The grid is an essential factor in the use of a visual password, because it can act as a guide for the position of an object and its identification. In this study, we obtained user judgments of 16 different grid densities for three visual password formats. The grid densities ranged from 2×2 to 7×7 cells (columns × rows). The participants were asked to judge how easy to use and how safe they thought the grid densities would be, if they were to use them for password authentication with eye tracking in a public setting. The results showed that for each visual password format some grid densities were thought to be relatively difficult to use (e.g., a 7×7 grid) or potentially unsafe (e.g., a 2×2 grid). Following this, the password registration time was measured for 16 grid densities (from 3×3 to 6×6 cells). The participants were asked to memorize and register a visual password (short or long) using actual eye tracking. The preliminary results show that password registration time increased when the number of grid cells increased and that the password format might influence registration as well.|
|11.||Gerard B. Remijn, Tatsuya Yoshizawa, Hiroaki Yano, Streaming, Bouncing, and Rotation: The Polka Dance Stimulus, i-Perception, 10.1177/2041669518777259, 9, 4, 2018.07, When the objects in a typical stream-bounce stimulus are made to rotate on a circular trajectory, not two but four percepts can be observed: streaming, bouncing, clockwise rotation, and counterclockwise rotation, often with spontaneous reversals between them. When streaming or bouncing is perceived, the objects seem to move on individual, opposite trajectories. When rotation is perceived, however, the objects seem to move in unison on the same circular trajectory, as if constituting the edges of a virtual pane that pivots around its axis. We called this stimulus the Polka Dance stimulus. Experiments showed that with some viewing experience, the viewer can “hold” the rotation percepts. Yet even when doing so, a short sound at the objects’ point of coincidence can induce a bouncing percept. Besides this fast percept switching from rotation to bouncing, an external stimulus might also induce slower rotation direction switches, from clockwise to counterclockwise, or vice versa.|
|12.||Yoshitaka Nakajima, Mizuki Matsuda, Kazuo Ueda, Gerard B. Remijn, Temporal resolution needed for auditory communication: Measurement with mosaic speech, Frontiers in Human Neuroscience, 10.3389/fnhum.2018.00149, 12, 2018.04, Temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners’ intelligibility scores of Japanese speech with a systematically degraded temporal resolution, so-called “mosaic speech”: speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible with a temporal resolution of 40 ms or finer. Intelligibility dropped for a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results showing that speech signals separated into short temporal segments of <100 ms can be remarkably robust in terms of linguistic-content perception against drastic manipulations in each segment, such as partial signal omission or temporal reversal. The human perceptual system thus can extract meaning from unexpectedly rough temporal information in speech. The process resembles that of the visual system stringing together static movie frames of ~40 ms into vivid motion.|
|13.||Yoshitaka Nakajima, Kazuo Ueda, Gerard B. Remijn, Yuko Yamashita, Takuya Kishida, How sonority appears in speech analyses, Acoustical Science and Technology, 10.1250/ast.39.179, 39, 3, 179-181, 2018.01.|
|14.||Yuko Yoshimura, Mitsuru Kikuchi, Norio Hayashi, Hirotoshi Hiraishi, Chiaki Hasegawa, Tetsuya Takahashi, Manabu Oi, Gerard B. Remijn, Takashi Ikeda, Daisuke N. Saito, Hirokazu Kumazaki, Yoshio Minabe, Altered human voice processing in the frontal cortex and a developmental language delay in 3- to 5-year-old children with autism spectrum disorder, Scientific Reports, 10.1038/s41598-017-17058-x, 7, 1, 2017.12, The inferior frontal and superior temporal areas in the left hemisphere are crucial for human language processing. In the present study, we investigated the magnetic mismatch field (MMF) evoked by voice stimuli in 3- to 5-year-old typically developing (TD) children and children with autism spectrum disorder (ASD) using child-customized magnetoencephalography (MEG). The children with ASD exhibited significantly decreased activation in the left superior temporal gyrus compared with the TD children for the MMF amplitude. If we classified the children with ASD according to the presence of a speech onset delay (ASD-SOD and ASD-NoSOD, respectively) and compared them with the TD children, both ASD groups exhibited decreased activation in the left superior temporal gyrus compared with the TD children. In contrast, the ASD-SOD group exhibited increased activity in the left frontal cortex (i.e., pars orbitalis) compared with the other groups. For all children with ASD, there was a significant negative correlation between the MMF amplitude in the left pars orbitalis and language performance. This investigation is the first to show a significant difference in two distinct MMF regions in ASD-SOD children compared with TD children.|
|15.||Fathrul Azarshah Abdul Aziz, Hilman Fauzi, Mohd Ibrahim Shapiai, Aznida Firzah Abdul Aziz, Gerard Remijn, Zool Hilmi Ismail, EEG BSI-HHT in ischaemic stroke with multifocal infarction, 2017 IEEE Region 10 Conference, TENCON 2017 TENCON 2017 - 2017 IEEE Region 10 Conference, 10.1109/TENCON.2017.8228123, 1651-1656, 2017.12, Electroencephalography (EEG) monitoring is known to be technically feasible and possibly clinically relevant for assessing patients with acute ischaemic hemispheric stroke. EEG is a very useful tool for understanding the neurological dysfunction of stroke and can likely improve treatment and rehabilitation. The traditional method for diagnosing with EEG is mainly based on the conventional Fast Fourier Transform (FFT), which is known to be limited to linear and stationary signal processing. Previously, a technique known as the Brain Symmetry Index (BSI) was proposed to determine an index value for the asymmetry of blood flow in the right and left brain hemispheres. The estimated index for stroke patients and healthy persons ranges between zero and one. Moreover, the existing standard BSI limits the frequency band to 25 Hz to eliminate EMG artifacts, even though higher frequency bands can also carry useful information. This highlights the limitation of FFT in providing the coefficients to calculate the BSI index. In this study, we employed the Hilbert-Huang Transform (HHT) to extract features to use as coefficients in calculating the BSI index for stroke patients with multifocal infarction and healthy individuals. The proposed BSI-HHT spectral analysis was compared with several existing BSI techniques. In order to validate the performance of BSI-HHT, we conducted two experiments, 1) 1-25 Hz and 2) 1-64 Hz, for both patients and healthy individuals. The proposed technique offers a constant BSI index in both cases and provides better resolution in determining the index as compared to the existing BSI techniques.|
|16.||Yesaya Tommy Paulus, Chihiro Hiramatsu, Yvonne Kam Hwei Syn, Gerard B. Remijn, Measurement of viewing distances and angles for eye tracking under different lighting conditions, 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology, ICACOMIT 2017 Proceedings of the 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology, ICACOMIT 2017, 10.1109/ICACOMIT.2017.8253386, 54-58, 2017.07, Eye tracking can be used for eye-gaze-based authentication in public settings, such as for registration on personal computers and automated teller machines. In this study, we conducted a series of measurements with low-cost eye-tracking devices to assess the feasibility of their use in such settings. We investigated the devices' minimum and maximum viewing distance limits as well as viewing angle range. Both these parameters were tested under three different lighting conditions. The eye-tracking devices used in the measurements were among the most cost-effective devices commercially available today (Tobii EyeX).|
|17.||Yesaya Tommy Paulus, Gerard B. Remijn, Yvonne Kam Hwei Syn, Chihiro Hiramatsu, The use of glasses during registration into a low-cost eye tracking device under different lighting conditions, 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology, ICACOMIT 2017 Proceedings of the 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology, ICACOMIT 2017, 10.1109/ICACOMIT.2017.8253387, 59-64, 2017.07, It is known that the use of glasses can hamper the quality and speed of user registration into eye-tracking devices. Studies have been done in which the performances of various eye-tracking devices were compared, typically under ideal viewing angles, with the user sitting behind a display under a fixed lighting condition. Here we investigated the influence of the use of glasses on the quality and time of user registration into a low-cost eye-tracking device under various lighting conditions. Furthermore, we compared the use of glasses on registration within the same group of participants. Participants with prescription glasses were asked to register into the eye-tracking device both with and without their glasses, if possible, and users without prescription glasses or with contact lenses were also asked to register without glasses or with replica, nonprescription glasses. The present results indeed showed that the use of glasses negatively influenced registration quality and time, though here significantly only when registration was performed under artificial lighting. Under natural lighting, the difference in registration quality and speed did not reach significance, but bordered on it. A follow-up measurement confirmed these results, and suggested that calibration with glasses can improve when participants register under 'ideal' viewing angles for their particular viewing position.|
|19.||Gerard B. Remijn, Mitsuru Kikuchi, Yuko Yoshimura, Kiyomi Shitamichi, Sanae Ueno, Tsunehisa Tsubokawa, Haruyuki Kojima, Haruhiro Higashida, Yoshio Minabe, A near-infrared spectroscopy study on cortical hemodynamic responses to normal and whispered speech in 3- to 7-year-old children, Journal of Speech, Language, and Hearing Research, 10.1044/2016_JSLHR-H-15-0435, 60, 2, 465-470, 2017.02, Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker’s reason for whispering. Method: Near-infrared spectroscopy (NIRS) was used to assess changes in cortical oxygenated hemoglobin from 16 typically developing children. Results: A profound difference in oxygenated hemoglobin levels between the speech modes was found over left ventral sensorimotor cortex. In particular, over areas that represent speech articulatory body parts and motion, such as the larynx, lips, and jaw, oxygenated hemoglobin was higher for whisper than for normal speech. The weaker stimulus, in terms of sound energy, thus induced the more profound hemodynamic response. This, moreover, occurred over areas involved in speech articulation, even though the children did not overtly articulate speech during measurements. Conclusion: Because whisper is a special form of communication not often used in daily life, we suggest that the hemodynamic response difference over left ventral sensorimotor cortex resulted from inner (covert) practice or imagination of the different articulatory actions necessary to produce whisper as opposed to normal speech.|
|20.||Fujihira, H., Shiraishi, K., Remijn, G.B., Elderly listeners with low intelligibility scores under reverberation show degraded subcortical representation of reverberant speech, (2017). Neuroscience Letters, 637, 102-107, 2017.01.|
|21.||Hilman Fauzi, Mohd Ibrahim Shapiai, Rubiyah Yusof, Gerard B. Remijn, Noor Akhmad Setiawan, Zuwairie Ibrahim, The design of spatial selection using CUR decomposition to improve common spatial pattern for multi-trial EEG classification, 17th International Conference on Asia Simulation, AsiaSim 2017 Modeling, Design and Simulation of Systems - 17th Asia Simulation Conference, AsiaSim 2017, Proceedings, 10.1007/978-981-10-6463-0_37, 428-442, 2017.01, The most important factor in EEG signal processing is the determination of relevant features that encode the meaning of the signal. Relevant features for EEG can be obtained using a spatial filter. The Common Spatial Pattern (CSP) is known to produce discriminative features when processing EEG signals. However, as a spatial filter, CSP is sensitive to noise and channel-dependent: channels containing only noise are also considered active channels. In this paper, the design of a filter for spatial selection is proposed using CUR decomposition to select important channels or time segments of EEG trials in order to improve CSP performance. CUR decomposition can also be used as a noise rejection technique, because CUR can factorize the given EEG signals. In other words, CUR decomposition rejects the non-active channels, which typically contain noise, before spatially filtering the EEG signals. Once the EEG signal is decomposed based on the importance of the channels, time segmentation, and EEG factorization, the decomposed signal can be used as input to the CSP. In general, three approaches were proposed in this framework: (1) channel selection, i.e., C selection; (2) time segment selection, R; and (3) signal factorization, U. Furthermore, the performance accuracy between the original CSP and CSP in which the input was spatially filtered by the proposed framework was validated using dataset IVa of BCI competition III. The test results show that CSP with spatial selection using C selection and U factorization offers 12% and 9% improvements over the original CSP, respectively. Hence, the proposed method can be used as a spatial filter to improve CSP performance.|
|22.||Yuko Yoshimura, Mitsuru Kikuchi, Hirotoshi Hiraishi, Chiaki Hasegawa, Tetsuya Takahashi, Gerard B. Remijn, Manabu Oi, Toshio Munesue, Haruhiro Higashida, Yoshio Minabe, Haruyuki Kojima, Atypical development of the central auditory system in young children with Autism spectrum disorder, Autism Research, 10.1002/aur.1604, 9, 11, 1216-1226, 2016.11, The P1m component of the auditory evoked magnetic field is the earliest cortical response associated with language acquisition. However, the growth curve of the P1m component is unknown in both typically developing (TD) and atypically developing children. The aim of this study is to clarify the developmental pattern of this component when evoked by binaural human voice stimulation using child-customized magnetoencephalography. A total of 35 young TD children (32–121 months of age) and 35 children with autism spectrum disorder (ASD) (38–111 months of age) participated in this study. This is the first report to demonstrate an inverted U-shaped growth curve for the P1m dipole intensity in the left hemisphere in TD children. In addition, our results revealed a more diversified age-related distribution of auditory brain responses in 3- to 9-year-old children with ASD. These results demonstrate the diversified growth curve of the P1m component in ASD during young childhood, which is a crucial period for first language acquisition. Autism Res 2016, 9: 1216–1226.|
|23.||Satoshi Morimoto, Gerard B. Remijn, Yoshitaka Nakajima, Computational-model-based analysis of context effects on harmonic expectancy, PloS one, 10.1371/journal.pone.0151374, 11, 3, 2016.03, Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.|
|24.||Yuko Yoshimura, Mitsuru Kikuchi, Hirotoshi Hiraishi, Chiaki Hasegawa, Tetsuya Takahashi, Gerard B. Remijn, Manabu Oi, Toshio Munesue, Haruhiro Higashida, Yoshio Minabe, Synchrony of auditory brain responses predicts behavioral ability to keep still in children with autism spectrum disorder: Auditory-evoked response in children with autism spectrum disorder, NeuroImage: Clinical, 10.1016/j.nicl.2016.07.009, 12, 300-305, 2016.01, The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, based on a previous report of an asynchrony of P1m in children with ADHD, to clarify whether the P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD, we investigated the relationship between voice-evoked P1m right-left hemispheric synchronization and hyperactivity in children with ASD. In addition to synchronization, we investigated the right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observational Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degrees of hemispheric synchronization and the ability to keep still during 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral brain auditory processing system is associated with ADHD-like symptoms in children with ASD.|
|25.||Takuya Kishida, Yoshitaka Nakajima, Kazuo Ueda, Gerard B. Remijn, Three factors are critical in order to synthesize intelligible noise-vocoded Japanese speech, Frontiers in Psychology, 10.3389/fpsyg.2016.00517, 7, APR, 2016.01, Factor analysis (principal component analysis followed by varimax rotation) had shown that 3 common factors appear across 20 critical-band power fluctuations derived from spoken sentences of eight different languages [Ueda et al. (2010). Fechner Day 2010, Padua]. The present study investigated the contributions of such power-fluctuation factors to speech intelligibility. The method of factor analysis was modified to obtain factors suitable for resynthesizing speech sounds as 20-critical-band noise-vocoded speech. The resynthesized speech sounds were used for an intelligibility test. The modification of factor analysis ensured that the resynthesized speech sounds were not accompanied by a steady background noise caused by the data reduction procedure. Spoken sentences of British English, Japanese, and Mandarin Chinese were subjected to this modified analysis. Confirming the earlier analysis, indeed 3-4 factors were common to these languages. The number of power-fluctuation factors needed to make noise-vocoded speech intelligible was then examined. Critical-band power fluctuations of the Japanese spoken sentences were resynthesized from the obtained factors, resulting in noise-vocoded-speech stimuli, and the intelligibility of these speech stimuli was tested by 12 native Japanese speakers. Japanese mora (syllable-like phonological unit) identification performances were measured when the number of factors was 1-9. Statistically significant improvement in intelligibility was observed when the number of factors was increased stepwise up to 6. The 12 listeners identified 92.1% of the morae correctly on average in the 6-factor condition. The intelligibility improved sharply when the number of factors changed from 2 to 3. 
In this step, the cumulative contribution ratio of factors improved only by 10.6%, from 37.3 to 47.9%, but the average mora identification leaped from 6.9 to 69.2%. The results indicated that, if the number of factors is 3 or more, elementary linguistic information is preserved in such noise-vocoded speech.|
|26.||Yuko Yoshimura, Mitsuru Kikuchi, Sanae Ueno, Kiyomi Shitamichi, Gerard B. Remijn, Hirotoshi Hiraishi, Chiaki Hasegawa, Naoki Furutani, Manabu Oi, Toshio Munesue, Tsunehisa Tsubokawa, Haruhiro Higashida, Yoshio Minabe, A longitudinal study of auditory evoked field and language development in young children, NeuroImage, 10.1016/j.neuroimage.2014.07.034, 101, 440-447, 2014.11, The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months old at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural brain response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. 
This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children.|
|29.||Gerard B. Remijn, Mitsuru Kikuchi, Kiyomi Shitamichi, Sanae Ueno, Yuko Yoshimura, Kikuko Nagao, Tsunehisa Tsubokawa, Haruyuki Kojima, Haruhiro Higashida, Yoshio Minabe, Somatosensory evoked field in response to visuotactile stimulation in 3- to 4-year-old children, Frontiers in Human Neuroscience, 10.3389/fnhum.2014.00170, 8, MAR, 2014.03, A child-customized magnetoencephalography system was used to investigate somatosensory evoked field (SEF) in 3- to 4-year-old children. Three stimulus conditions were used in which the children received tactile-only stimulation to their left index finger or visuotactile stimulation. In the two visuotactile conditions, the children received tactile stimulation to their finger while they watched a video of tactile stimulation applied either to someone else's finger (the finger-touch condition) or to someone else's toe (the toe-touch condition). The latencies and source strengths of equivalent current dipoles (ECDs) over contralateral (right) somatosensory cortex were analyzed. In the preschoolers who provided valid ECDs, the stimulus conditions induced an early-latency ECD occurring between 60 and 68 ms, mainly with an anterior direction. We further identified a middle-latency ECD between 97 and 104 ms, which predominantly had a posterior direction. Finally, initial evidence was found for a late-latency ECD at about 139-151 ms, again more often with an anterior direction. Differences were found in the source strengths of the middle-latency ECDs among the stimulus conditions. For the paired comparisons that could be formed, ECD source strength was more pronounced in the finger-touch condition than in the tactile-only and the toe-touch conditions. Although more research is necessary to expand the data set, this suggests that visual information modulated preschool SEF. 
The finding that ECD source strength was higher when seen and felt touch occurred to the same body part, as compared to a different body part, might further indicate that connectivity between visual and tactile information is indexed in preschool somatosensory cortical activity, already in a somatotopic way..|
|30.||Gerard B. Remijn, Emi Hasuo, Haruna Fujihira, Satoshi Morimoto, An introduction to the measurement of auditory event-related potentials (ERPs), Acoustical Science and Technology, 10.1250/ast.35.229, 35, 5, 229-242, 2014.01, In 1939, Pauline Davis reported the first study on event-related potentials (ERPs) performed on awake humans. ERPs are time-locked brain potentials that occur in response to cognitive, motor or perceptual events. The events used by Davis were sounds, and in the decades that followed her landmark study, ERP research significantly contributed to the knowledge of auditory perception and neurophysiology we have today. ERPs are very well suited to study neural responses to sound stimuli, since the researcher can monitor the brain's registration of sound edges and spectral changes in sound on a millisecond-by-millisecond basis. In this overview we will introduce basic concepts of auditory ERP research. The overview includes descriptions of typical ERP components, experimental paradigms, sound stimuli, research methodology, and ways to analyze data.|
|31.||Yoshitaka Nakajima, Takayuki Sasaki, Kazuo Ueda, Gerard Bastiaan Remijn, Auditory grammar, Acoustics Australia, 42, 2, 97-101, 2014.01, Auditory streams are considered basic units of auditory percepts, and an auditory stream is a concatenation of auditory events and silences. In our recent book, we proposed a theoretical framework in which auditory units equal to or smaller than auditory events, i.e., auditory subevents, are integrated linearly to form auditory streams. A simple grammar, Auditory Grammar, was introduced to rule out nonsense chains of subevents, e.g., a silence immediately followed by an offset (a termination): since a silence represents a state without a sound, placing an offset, i.e., the end of a sound, immediately after it should be prohibited as ungrammatical. By assuming a few gestalt principles, including the proximity principle, and this grammar, we are able to interpret or reinterpret some auditory phenomena from a unified viewpoint, such as the gap transfer illusion, the split-off phenomenon, the auditory continuity effect, and perceptual extraction of a melody in a very reverberant room.|
|32.||Yuko Yoshimura, Mitsuru Kikuchi, Sanae Ueno, Eiichi Okumura, Hirotoshi Hiraishi, Chiaki Hasegawa, Gerard B. Remijn, Kiyomi Shitamichi, Toshio Munesue, Tsunehisa Tsubokawa, Haruhiro Higashida, Yoshio Minabe, The brain's response to the human voice depends on the incidence of autistic traits in the general population, PloS one, 10.1371/journal.pone.0080126, 8, 11, 2013.11, Optimal brain sensitivity to the fundamental frequency (F0) contour changes in the human voice is important for understanding a speaker's intonation, and consequently, the speaker's attitude. However, whether sensitivity in the brain's response to a human voice F0 contour change varies with an interaction between an individual's traits (i.e., autistic traits) and a human voice element (i.e., presence or absence of communicative action such as calling) has not been investigated. In the present study, we investigated the neural processes involved in the perception of F0 contour changes in the Japanese monosyllables "ne" and "nu." "Ne" is an interjection that means "hi" or "hey" in English; pronunciation of "ne" with a high falling F0 contour is used when the speaker wants to attract a listener's attention (i.e., social intonation). Meanwhile, the Japanese concrete noun "nu" has no communicative meaning. We applied an adaptive spatial filtering method to the neuromagnetic time course recorded by whole-head magnetoencephalography (MEG) and estimated the spatiotemporal frequency dynamics of event-related cerebral oscillatory changes in the beta band during the oddball paradigm. During the perception of the F0 contour change when "ne" was presented, there was event-related de-synchronization (ERD) in the right temporal lobe. In contrast, during the perception of the F0 contour change when "nu" was presented, ERD occurred in the left temporal lobe and in the bilateral occipital lobes. ERD that occurred during the social stimulus "ne" in the right hemisphere was significantly correlated with a greater number of autistic traits measured according to the Autism Spectrum Quotient (AQ), suggesting that the differences in human voice processing are associated with higher autistic traits, even in non-clinical subjects.|
|33.||Yuko Yoshimura, Mitsuru Kikuchi, Kiyomi Shitamichi, Sanae Ueno, Toshio Munesue, Yasuki Ono, Tsunehisa Tsubokawa, Yasuhiro Haruta, Manabu Oi, Yo Niida, Gerard B. Remijn, Tsutomu Takahashi, Michio Suzuki, Haruhiro Higashida, Yoshio Minabe, Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: A child-customised magnetoencephalography (MEG) study, Molecular Autism, 10.1186/2040-2392-4-38, 4, 1, 2013.10, Background: Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, the simultaneous quantification of the bilateral auditory-evoked response during binaural hearing is difficult using conventional adult-sized MEG systems. Recently, a child-customised MEG device has facilitated the acquisition of bi-hemispheric recordings, even in young children. Using the child-customised MEG device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the auditory evoked magnetic field (AEF) in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere is correlated with language performance in autism spectrum disorder (ASD) and TD children. Methods: We investigated the P50m that is evoked by voice stimuli (/ne/) bilaterally in 33 young children (3 to 7 years old) with ASD and in 30 young children who were typically developing (TD). The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning subjects. Results: The results showed that the children with ASD exhibited significantly less leftward lateralisation in their P50m intensity compared with the TD children. Furthermore, the results of a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age. The children with ASD did not show any correlation between P50m latency and language-related performance; instead, increasing chronological age was a significant predictor of shorter P50m latency in the right hemisphere. Conclusions: Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development.|
|34.||Gerard B. Remijn, Mitsuru Kikuchi, Yuko Yoshimura, Sanae Ueno, Kiyomi Shitamichi, Yoshio Minabe, Cortical hemodynamic response patterns to normal and whispered speech, 21st International Congress on Acoustics, ICA 2013 - 165th Meeting of the Acoustical Society of America Proceedings of Meetings on Acoustics, 10.1121/1.4799766, 19, 2013.06, Whispered speech is often used in direct person-to-person communication as a means of ensuring confidentiality. Compared with normally-vocalized speech, whispered speech is predominantly unvoiced, i.e., produced without vocal fold vibration, and has no clear fundamental frequency. By using near-infrared spectroscopy (NIRS), we assessed cortical hemodynamic response patterns to normally-vocalized and whispered speech in adult listeners (n=13). Stimuli consisted of 20-s strings of Japanese word associations spoken by a female voice. Average oxygenated hemoglobin values (oxy-Hb) were obtained over two regions of interest (ROIs). Oxy-Hb values during the perception of normally-vocalized speech were highest over the left temporal ROI, but not significantly different from values measured over other ROIs. Oxy-Hb values during whispered speech were highest over the right temporal ROI and significantly higher (p<0.05) than those obtained over the left temporal ROI. No significant differences, however, were found in oxy-Hb comparisons between normally-vocalized and whispered speech, although the right temporal ROI comparison bordered on significance, with whisper inducing the higher value. Together, the results seem to suggest that whispered speech is a potent catalyst of cortical hemodynamic activity, especially over the right temporal cortex, in spite of its relatively modest sound level as compared to normal speech.|
|35.||Mitsuru Kikuchi, Kiyomi Shitamichi, Yuko Yoshimura, Sanae Ueno, Hirotoshi Hiraishi, Tetsu Hirosawa, Toshio Munesue, Hideo Nakatani, Tsunehisa Tsubokawa, Yasuhiro Haruta, Manabu Oi, Yo Niida, Gerard Bastiaan Remijn, Tsutomu Takahashi, Michio Suzuki, Haruhiro Higashida, Yoshio Minabe, Altered brain connectivity in 3- to 7-year-old children with autism spectrum disorder, NeuroImage: Clinical, 10.1016/j.nicl.2013.03.003, 2, 1, 394-401, 2013.04, Autism spectrum disorder (ASD) is often described as a disorder of aberrant neural connectivity and/or aberrant hemispheric lateralization. Although it is important to study the pathophysiology of the developing ASD cortex, the physiological connectivity of the brain in young children with ASD under conscious conditions has not yet been described. Magnetoencephalography (MEG) is a noninvasive brain imaging technique that is practical for use in young children. MEG produces a reference-free signal and is, therefore, an ideal tool for computing the coherence between two distant cortical rhythms. Using a custom child-sized MEG, we recently reported that 5- to 7-year-old children with ASD (n = 26) have inherently different neural pathways than typically developing (TD) children that contribute to their relatively preserved performance of visual tasks. In this study, we performed non-invasive measurements of the brain activity of 70 young children (3-7 years old, of which 18 were aged 3-4 years), a sample consisting of 35 ASD children and 35 TD children. Physiological connectivity and the laterality of physiological connectivity were assessed using intrahemispheric coherence for 9 frequency bands. As a result, significant rightward connectivity between the parietotemporal areas, via gamma band oscillations, was found in the ASD group. As we obtained the non-invasive measurements using a custom child-sized MEG, this is the first study to demonstrate a rightward-lateralized neurophysiological network in conscious young children (including children aged 3-4 years) with ASD.|
|36.||Mitsuru Kikuchi, Yuko Yoshimura, Kiyomi Shitamichi, Sanae Ueno, Tetsu Hirosawa, Toshio Munesue, Yasuki Ono, Tsunehisa Tsubokawa, Yasuhiro Haruta, Manabu Oi, Yo Niida, Gerard B. Remijn, Tsutomu Takahashi, Michio Suzuki, Haruhiro Higashida, Yoshio Minabe, A custom magnetoencephalography device reveals brain connectivity and high reading/decoding ability in children with autism, Scientific reports, 10.1038/srep01139, 3, 2013.02, A subset of individuals with autism spectrum disorder (ASD) performs more proficiently on certain visual tasks than may be predicted by their general cognitive performance. However, in younger children with ASD (aged 5 to 7), preserved ability in these tasks and the neurophysiological correlates of their ability are not well documented. In the present study, we used a custom child-sized magnetoencephalography system and demonstrated that preserved ability in the visual reasoning task was associated with rightward lateralisation of the neurophysiological connectivity between the parietal and temporal regions in children with ASD. In addition, we demonstrated that higher reading/decoding ability was also associated with the same lateralisation in children with ASD. These neurophysiological correlates of visual tasks are considerably different from those that are observed in typically developing children. These findings indicate that children with ASD have inherently different neural pathways that contribute to their relatively preserved ability in visual tasks.|
|37.||Mitsuru Kikuchi, Yuko Yoshimura, Kiyomi Shitamichi, Sanae Ueno, Hirotoshi Hiraishi, Toshio Munesue, Tetsu Hirosawa, Yasuki Ono, Tsunehisa Tsubokawa, Yoshihiro Inoue, Manabu Oi, Yo Niida, Gerard B. Remijn, Tsutomu Takahashi, Michio Suzuki, Haruhiro Higashida, Yoshio Minabe, Anterior Prefrontal Hemodynamic Connectivity in Conscious 3- to 7-Year-Old Children with Typical Development and Autism Spectrum Disorder, PloS one, 10.1371/journal.pone.0056087, 8, 2, 2013.02, Socio-communicative impairments are salient features of autism spectrum disorder (ASD) from a young age. The anterior prefrontal cortex (aPFC), or Brodmann area 10, is a key processing area for social function, and atypical development of this area is thought to play a role in the social deficits in ASD. It is important to understand these brain functions in developing children with ASD. However, these brain functions have not yet been well described under conscious conditions in young children with ASD. In the present study, we focused on the brain hemodynamic functional connectivity between the right and the left aPFC in children with ASD and typically developing (TD) children and investigated whether there was a correlation between this connectivity and social ability. Brain hemodynamic fluctuations were measured non-invasively by near-infrared spectroscopy (NIRS) in 3- to 7-year-old children with ASD (n = 15) and gender- and age-matched TD children (n = 15). The functional connectivity between the right and the left aPFC was assessed by measuring the coherence for low-frequency spontaneous fluctuations (0.01-0.10 Hz) during a narrated picture-card show. Coherence analysis demonstrated that children with ASD had a significantly higher inter-hemispheric connectivity with 0.02-Hz fluctuations, whereas a power analysis did not demonstrate significant differences between the two groups in terms of low frequency fluctuations (0.01-0.10 Hz). This aberrant higher connectivity in children with ASD was positively correlated with the severity of social deficit, as scored with the Autism Diagnostic Observation Schedule. This is the first study to demonstrate aberrant brain functional connectivity between the right and the left aPFC under conscious conditions in young children with ASD.|
|38.||Gerard Bastiaan Remijn, Theme-based research and education on sound and hearing, Acoustical Science and Technology, 10.1250/ast.33.398, 33, 6, 2012.11.|
|39.||Tatsuya Yoshizawa, Gerard Bastiaan Remijn, Takumi Kitamura, Detection of incomplete, self-relevant auditory information presented to the unattended ear, Acoustical Science and Technology, 10.1250/ast.33.147, 33, 3, 147-153, 2012.06, Dichotic listening studies have shown that information relevant to listeners, such as their own name, can be recognized even when presented to the unattended ear. Here, we used a dichotic listening paradigm to explore whether Japanese listeners could identify their name in the unattended ear even when sensory information was incomplete. The results showed that Japanese listeners with family names of 3, 4, or 5 morae - a speech unit equivalent to a syllable in English - recognized their name in about 20-60% of the trials even when the first or the last mora of the name was omitted. The data further showed a name-final effect under the 4- and 5-morae conditions: name recognition significantly decreased when the last mora of the listener's name was omitted as compared with the omission of the first mora. A possible explanation for these results is that self-relevant information, even when incomplete, automatically draws attention to the supposedly unattended ear and that the listener's recognition of the information is more robust when its end part is presented.|
|40.||Sanae Ueno, Eiichi Okumura, Gerard Bastiaan Remijn, Yuko Yoshimura, Mitsuru Kikuchi, Kiyomi Shitamichi, Kikuko Nagao, Masayuki Mochiduki, Yasuhiro Haruta, Norio Hayashi, Toshio Munesue, Tsunehisa Tsubokawa, Manabu Oi, Hideo Nakatani, Haruhiro Higashida, Yoshio Minabe, Spatiotemporal frequency characteristics of cerebral oscillations during the perception of fundamental frequency contour changes in one-syllable intonation, Neuroscience Letters, 10.1016/j.neulet.2012.03.031, 515, 2, 141-146, 2012.05, Accurate perception of fundamental frequency (F0) contour changes in the human voice is important for understanding a speaker's intonation, and consequently also his/her attitude. In this study, we investigated the neural processes involved in the perception of F0 contour changes in the Japanese one-syllable interjection "ne" in 21 native-Japanese listeners. A passive oddball paradigm was applied in which "ne" with a high falling F0 contour, used when urging a reaction from the listener, was randomly presented as a rare deviant among a frequent "ne" syllable with a flat F0 contour (i.e., meaningless intonation). We applied an adaptive spatial filtering method to the neuromagnetic time course recorded by whole-head magnetoencephalography (MEG) and estimated the spatiotemporal frequency dynamics of event-related cerebral oscillatory changes in the oddball paradigm. Our results demonstrated a significant elevation of beta band event-related desynchronization (ERD) in the right temporal and frontal areas, in time windows from 100 to 300 ms and from 300 to 500 ms after the onset of deviant stimuli (high falling F0 contour). This is the first study to reveal detailed spatiotemporal frequency characteristics of cerebral oscillations during the perception of intonational (not lexical) F0 contour changes in the human voice. The results further confirmed that the right hemisphere is associated with perception of intonational F0 contour information in the human voice, especially in early time windows.|
|41.||Yuko Yoshimura, Mitsuru Kikuchi, Kiyomi Shitamichi, Sanae Ueno, Gerard B. Remijn, Yasuhiro Haruta, Manabu Oi, Toshio Munesue, Tsunehisa Tsubokawa, Haruhiro Higashida, Yoshio Minabe, Language performance and auditory evoked fields in 2- to 5-year-old children, European Journal of Neuroscience, 10.1111/j.1460-9568.2012.07998.x, 35, 3-4, 644-650, 2012.02, Language development progresses at a dramatic rate in preschool children. As rapid temporal processing of speech signals is important in daily colloquial environments, we performed magnetoencephalography (MEG) to investigate the linkage between speech-evoked responses during rapid-rate stimulus presentation (interstimulus interval < 1 s) and language performance in 2- to 5-year-old children (n=59). Our results indicated that syllables with this short stimulus interval evoked detectable P50m, but not N100m, in most participants, indicating a marked influence of longer neuronal refractory period for stimulation. The results of equivalent dipole estimation showed that the intensity of the P50m component in the left hemisphere was positively correlated with language performance (conceptual inference ability). The observed positive correlations were suggested to reflect the maturation of synaptic organisation or axonal maturation and myelination underlying the acquisition of linguistic abilities. The present study is among the first to use MEG to study brain maturation pertaining to language abilities in preschool children.|
|42.||Hiroshige Takeichi, Takako Mitsudo, Yoshitaka Nakajima, Gerard B. Remijn, Yoshinobu Goto, Shozo Tobimatsu, A neural decoding approach to auditory temporal assimilation, Neural Computing and Applications, 10.1007/s00521-010-0399-z, 20, 7, 965-973, 2011.10, By constructing Gaussian Naïve Bayes Classifiers, we have re-analyzed data from an earlier event-related potential (ERP) study of an illusion in time perception known as auditory temporal assimilation. In auditory temporal assimilation, two neighboring physically unequal time intervals marked by three successive tone bursts are illusorily perceived as equal if the two time intervals satisfy a certain relationship. The classifiers could discriminate whether or not the subject was engaged in the task, which was judgment of the subjective equality between the two intervals, at an accuracy of >79%, and from principal component scores of individual average ERP waveforms, we were able to predict their subjective judgments to each stimulus at an accuracy of >70%. Chernoff information, unlike accuracy or Kullback-Leibler (KL) distance, suggested brain activation associated with auditory temporal assimilation at an early pre-decision stage. This may provide us with a simple and useful neural decoding scheme in analyzing information processing of temporal patterns in the brain.|
|43.||Mitsuru Kikuchi, Kiyomi Shitamichi, Yuko Yoshimura, Sanae Ueno, Gerard B. Remijn, Tetsu Hirosawa, Toshio Munesue, Tsunehisa Tsubokawa, Yasuhiro Haruta, Manabu Oi, Haruhiro Higashida, Yoshio Minabe, Lateralized theta wave connectivity and language performance in 2- to 5-year-old children, Journal of Neuroscience, 10.1523/JNEUROSCI.2785-11.2011, 31, 42, 14984-14988, 2011.10, Recent neuroimaging studies support the view that a left-lateralized brain network is crucial for language development in children. However, no previous studies have demonstrated a clear link between lateralized brain functional network and language performance in preschool children. Magnetoencephalography (MEG) is a noninvasive brain imaging technique and is a practical neuroimaging method for use in young children. MEG produces a reference-free signal, and is therefore an ideal tool to compute coherence between two distant cortical rhythms. In the present study, using a custom child-sized MEG system, we investigated brain networks while 78 right-handed preschool human children (32-64 months; 96% were 3-4 years old) listened to stories with moving images. The results indicated that left dominance of parietotemporal coherence in theta band activity (6-8 Hz) was specifically correlated with higher performance of language-related tasks, whereas this laterality was not correlated with nonverbal cognitive performance, chronological age, or head circumference. Power analyses did not reveal any specific frequencies that contributed to higher language performance. Our results suggest that it is not the left dominance in theta oscillation per se, but the left-dominant phase-locked connectivity via theta oscillation that contributes to the development of language ability in young children.|
|44.||Erika Tomimatsu, Hiroyuki Ito, Shoji Sunaga, Gerard Bastiaan Remijn, Halt and recovery of illusory motion perception from peripherally viewed static images, Attention, Perception, and Psychophysics, 10.3758/s13414-011-0131-9, 73, 6, 1823-1832, 2011.08, We quantitatively investigated the halt and recovery of illusory motion perception in static images. With steady fixation, participants viewed images causing four different motion illusions. The results showed that the time courses of the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion (i.e., "Rotating Snakes") were very similar, while the Ouchi and Enigma illusions showed quite a different trend. When participants viewed images causing the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion, they typically experienced disappearance of the illusory motion within several seconds. After a variable interstimulus interval (ISI), the images were presented again in the same retinal position. The magnitude of the illusory motion from the second image presentation increased as the ISI became longer. This suggests that the same adaptation process either directly causes or attenuates both the Fraser-Wilcox illusion and the modified Fraser-Wilcox illusion.|
|45.||Remijn, G.B., Ueda, K., Toyooka, T., Nakajima, Y., Perception of English plural /s/ and /z/ in young Japanese adults, Geijutsu Kogaku, 15, 65-70, 2011.07.|
|46.||Kikuko Nagao, Mitsuru Kikuchi, Gerard Bastiaan Remijn, Yoshio Minabe, Shoichi Koizumi, Haruhiro Higashida, Toshio Munesue, Correlations between development of cognitive/behavioral skills and spontaneous MEG for 3-4-year-old healthy children, Journal of Brain Science, 36, 1, 18-31, 2011.07, Objective: This study examined the correlations between the development of cognitive/behavioral skills and spontaneous magnetoencephalogram (MEG) in 3-4-year-old healthy children. Although MEG is non-invasive and relatively easy to apply to infants, no previous study has related the cognitive/behavioral development of preschool children to MEG data. Methods: The cognitive skills were evaluated by the Japanese adaptation of the Kaufman Assessment Battery for Children (K-ABC). The behavioral skills were assessed by the Pervasive Developmental Disorders Autism Society Japan Rating Scale (PARS). Spontaneous brain activity was measured from 52 children (23 male subjects and 29 female subjects) in an eye-closed condition. Results: The power spectral densities were calculated from the MEG data. We found frequency-band correlations between the power spectral densities and some cognitive/behavioral scores for the eye-closed condition. In female subjects, there was a significant negative relationship between cognitive skill scores and the theta power spectral density of the frontal/temporal area. In male subjects, there was a significant negative relationship between the maladaptive behavior score and the beta power spectral density of the frontal/central area. Conclusions: These results demonstrate interesting differences in the cognitive/behavioral development between 3-4-year-old males and females. We are continuing further research, especially focused on maladaptive behaviors, including Pervasive Developmental Disorders (PDD) symptoms, and related gender differences.|
|47.||Gerard B. Remijn, Mitsuru Kikuchi, Yuko Yoshimura, Kiyomi Shitamichi, Sanae Ueno, Kikuko Nagao, Toshio Munesue, Haruyuki Kojima, Yoshio Minabe, Hemodynamic responses to visual stimuli in cortex of adults and 3- to 4-year-old children, Brain Research, 10.1016/j.brainres.2011.01.090, 1383, 242-251, 2011.04, In this study we used near-infrared spectroscopy (NIRS) to measure relative changes in cortical hemodynamics from 19 adults and 19 preschool children (aged 3-4 years), while they watched epochs of static and motion pictures extracted from TV programs. The spatio-temporal characteristics of oxygenated and deoxygenated hemoglobin volumes (oxy- and deoxy-Hb) of both subject groups were described and compared where appropriate for five regions of interest (ROIs). These were striate, left and right middle temporal, and left and right temporo-parietal areas. Over these areas, deoxy-Hb volumes did not differ between both groups. Preschool data showed significant increases in oxy-Hb over striate, middle temporal and temporo-parietal areas in response to visual motion stimuli. Static stimuli caused a significant oxy-Hb increase over striate and left middle temporal areas. Surprisingly, changes in adult oxy-Hb were not profound and did not show a significant oxy-Hb increase in striate and middle temporal areas in response to the motion stimuli, warranting further research. In spite of oxy-Hb volume differences, oxy-Hb recovery to baseline followed a similar pattern in both groups in response to both static and motion stimuli. Together, the results suggest that near-infrared spectroscopy is a viable method to investigate cortical development of preschool children by monitoring their hemodynamic response patterns.|
|48.||Mitsuru Kikuchi, Kiyomi Shitamichi, Sanae Ueno, Yuko Yoshimura, Gerard B. Remijn, Kikuko Nagao, Toshio Munesue, Koichi Iiyama, Tsunehisa Tsubokawa, Yasuhiro Haruta, Yoshihiro Inoue, Katsumi Watanabe, Takanori Hashimoto, Haruhiro Higashida, Yoshio Minabe, Neurovascular coupling in the human somatosensory cortex: A single trial study, NeuroReport, 10.1097/WNR.0b013e3283406615, 21, 17, 1106-1110, 2010.12, Oscillations in the higher frequency range are closely related to regional brain hemodynamic changes. Here we investigated this neurovascular coupling in humans in response to electrical stimulation of the right median nerve. In a single-trial study, we simultaneously recorded hemodynamic fluctuations in the somatosensory cortex by near infrared spectroscopy and brain neuronal oscillations by whole-head magnetoencephalography (MEG). The results from six volunteers showed that neural fluctuations at β- or γ-band power were correlated with hemodynamic fluctuation during stimulus conditions. These correlations were prominent with a time delay of 5-7 s. This study provides new direct evidence that hemodynamic onset lags specific neural oscillations on the order of seconds under awake conditions in humans, using noninvasive methods.|
|49.||Gerard B. Remijn, Haruyuki Kojima, Active versus passive listening to auditory streaming stimuli: A near-infrared spectroscopy study, Journal of Biomedical Optics, 10.1117/1.3431104, 15, 3, 2010.05, We use near-infrared spectroscopy (NIRS) to assess listeners' cortical responses to a 10-s series of pure tones separated in frequency. Listeners are instructed to either judge the rhythm of these "streaming" stimuli (active-response listening) or to listen to the stimuli passively. Experiment 1 shows that active-response listening causes increases in oxygenated hemoglobin (oxy-Hb) in response to all stimuli, generally over the (pre)motor cortices. The oxy-Hb increases are significantly larger over the right hemisphere than over the left for the final 5 s of the stimulus. Hemodynamic levels do not vary with changes in the frequency separation between the tones and corresponding changes in perceived rhythm ("gallop," "streaming," or "ambiguous"). Experiment 2 shows that hemodynamic levels are strongly influenced by listening mode. For the majority of time windows, active-response listening causes significantly larger oxy-Hb increases than passive listening, significantly over the left hemisphere during the stimulus and over both hemispheres after the stimulus. This difference cannot be attributed to physical motor activity and preparation related to button pressing after stimulus end, because this is required in both listening modes.|
|50.||Hiroshige Takeichi, Takako Mitsudo, Yoshitaka Nakajima, Gerard B. Remijn, Yoshinobu Goto, Shozo Tobimatsu, Auditory temporal assimilation: A discriminant analysis of electrophysiological evidence, 16th International Conference on Neural Information Processing, ICONIP 2009 Neural Information Processing - 16th International Conference, ICONIP 2009, Proceedings, 10.1007/978-3-642-10684-2_33, 299-308, 2009.12, A portion of the data from an event-related potential (ERP) experiment on auditory temporal assimilation [2, 3] was reanalyzed by constructing Gaussian Naïve Bayes Classifiers. In auditory temporal assimilation, two neighboring physically-unequal time intervals marked by three successive tone bursts are illusorily perceived to have the same duration if the two time intervals satisfy a certain relationship. The classifiers could discriminate the subject's task, which was judgment of the equivalence between the two intervals, at an accuracy of 86-96%, as well as their subjective judgments to the physically equivalent stimulus at an accuracy of 82-86%, from individual ERP average waveforms. Chernoff information provided more consistent interpretations than classification errors regarding the selection of the component most strongly associated with the perceptual judgment. This may provide us with a simple but somewhat robust neurodecoding scheme.|
|51.||Mitsuru Kikuchi, Toshio Munesue, Koichi Iiyama, Gerard B. Remijn, Yoshio Minabe, Bambi plan: Project for the early detection of autism spectrum disorder using a NIRS/MEG integrated device, Journal of Brain Science, 35, 1, 35-39, 2009.12, Accurately predicting the development of Autism Spectrum Disorder (ASD) in early childhood would have major implications for maximizing treatment efficacy. Magnetoencephalography (MEG) and near-infrared spectroscopy (NIRS) are non-invasive, instant and low-constraint measurement techniques that allow us to study brain functioning even in young children without any sedation. Using NIRS, we investigated brain hemodynamic responses in frontal areas during letter-cued verbal fluency tests in 21 normal controls and 24 patients with ASD. The results suggest that these hemodynamic parameters may be sensitive physiological markers of ASD. In addition, we conducted a pilot study in one patient with ASD using MEG. The results suggest that the brain responses elicited by visual biological-motion stimuli could be sensitive physiological markers of ASD. Finally, we briefly introduce our new project (Bambi plan) with a NIRS/MEG integrated device. This ongoing project pursues engineering development for an integrated MEG/NIRS device and aims to evaluate the performance of this novel device in the early diagnosis of ASD.
|52.||Mitsuru Kikuchi, Akira Hanaoka, Tomokazu Kidani, Gerard Bastiaan Remijn, Yoshio Minabe, Toshio Munesue, Yoshifumi Koshino, Heart rate variability in drug-naïve patients with panic disorder and major depressive disorder, Progress in Neuro-Psychopharmacology and Biological Psychiatry, 10.1016/j.pnpbp.2009.08.002, 33, 8, 1474-1478, 2009.11, Power spectral analysis of electrocardiogram (ECG) R-R intervals is useful for the detection of autonomic dysfunction in various clinical disorders. Although both panic disorder (PD) and major depressive disorder (MDD) are known to affect the cardiac autonomic nervous system, MDD and PD have not previously been examined among drug-naïve (i.e., no history of treatment) patients in a single study. The purpose of this study was to compare cardiac autonomic functions of drug-naïve patients with MDD and PD with those of healthy controls. Subjects were 17 drug-naïve PD patients, 15 drug-naïve MDD patients and 15 normal controls. ECGs were recorded under both supine resting and supine deep-breathing conditions (10-12 breaths/min; 0.17-0.20 Hz). We measured the low-frequency power (LF; 0.05-0.15 Hz), which may reflect baroreflex function, the high-frequency power (HF; 0.15-0.40 Hz), which reflects cardiac parasympathetic activity, as well as the LF/HF ratio. As expected, deep breathing induced an increase in HF power and a decrease in the LF/HF ratio in healthy controls. Compared to these controls, however, the MDD group showed a weaker response to regular deep breathing in LF power and in the LF/HF ratio. PD patients showed results intermediate between those of normal controls and MDD patients. The results indicate diminished cardiac autonomic reactivity to deep breathing in drug-naïve MDD patients.|
|53.||Takako Mitsudo, Yoshitaka Nakajima, Gerard B. Remijn, Hiroshige Takeichi, Yoshinobu Goto, Shozo Tobimatsu, Electrophysiological evidence of auditory temporal perception related to the assimilation between two neighboring time intervals, NeuroQuantology, 10.14704/nq.2009.7.1.213, 7, 1, 114-127, 2009.01, We conducted two event-related potential (ERP) experiments that examined the mechanisms of auditory temporal assimilation. Stimulus patterns consisted of two neighboring time intervals marked by three successive tone bursts (20 ms, 1000 Hz). Six stimulus patterns were used in which the first time interval (T1) varied from 100 to 280 ms, while the second time interval (T2) was fixed at 200 ms. Two other stimulus patterns consisting of different T1/T2 combinations were employed. Participants judged whether T1 and T2 had the same duration or not by pressing a button. ERPs were recorded from 11 electrodes over the scalp. Behavioral data showed symmetrical assimilation; the participants judged the two neighboring time intervals as equal when the difference between the time intervals (T1-T2) was -40 to +40 ms. Electrophysiological data showed that two ERP components (P300 and CNV) emerged related to the temporal judgment. The P300 appeared in the parietal area at 400 ms after the 2nd tone burst, and its amplitude decreased as a function of T1. The CNV component appeared in the frontal area during T2 presentation, and its amplitude increased as a function of T1. In Experiment 2, 11 stimulus patterns were presented. In seven stimulus patterns, T1 varied from 80 to 320 ms, and T2 was fixed at 200 ms. ERPs were recorded from 19 electrodes over the scalp. In this experiment, behavioral data showed asymmetrical assimilation; participants judged the two neighboring time intervals as equal when T1-T2 was -80 to +40 ms. Consistent with the results of Experiment 1, electrophysiological data showed the P300 and the CNV during T2.
In addition, a slow negative component (SNCt) appeared in the right prefrontal area after the 3rd tone burst, and continued up to about 400 ms after the stimuli. The magnitude of this component was smaller when temporal assimilation occurred. These three ERP signatures seem to correlate with the process of temporal assimilation; (a) the P300 augmentation, which could be related to the participants' attention to the 1st interval and reflect the monitoring of the passage of time, (b) the CNV in the frontal area, which might have accompanied the process of memorizing the lengths of the time intervals, and (c) the SNCt in the right prefrontal area, which showed a reduction when temporal assimilation occurred. Our results showed spatiotemporal characteristics of the cortical processing of short time intervals and may assist the neurophysiological understanding of illusions in time and time perception in general..|
|54.||Gerard B. Remijn, Elvira Pérez, Yoshitaka Nakajima, Hiroyuki Ito, Frequency modulation facilitates (modal) auditory restoration of a gap, Hearing Research, 10.1016/j.heares.2008.06.007, 243, 1-2, 113-120, 2008.09, In this study we further investigated processes of auditory restoration (AR) in recently described stimulus types: the so-called gap-transfer stimulus, the shared-gap stimulus and the pseudo-continuous stimulus. The stimuli typically consist of two crossing sounds of unequal duration. In the shared-gap and pseudo-continuous stimuli, the two crossing sounds share a gap (<45 ms) at their crossing point. In the gap-transfer stimulus, only the long sound contains a gap (100 ms), whereas the short sound is physically continuous. Earlier research has shown that in these stimuli the long sound is subject to AR, in spite of the gap it contains, whereas the gap is perceived in the short sound. Experiment 1 of the present study showed that AR of the stimuli's long sound was facilitated when its slope increased from 0 to 1 oct/s. Experiment 2 showed that the effect of slope on AR of the long sound also occurred when the slope relationship between the long and short sound was fixed. Implications for a tentative sound edge-binding explanation of AR as well as alternative explanations for the effect of slope on AR are discussed..|
|55.||Gerard B. Remijn, Yoshitaka Nakajima, Shunsuke Tanaka, Perceptual completion of a sound with a short silent gap, Perception, 10.1068/p5574, 36, 6, 898-917, 2007.08, Listeners reported the perceptual completion of a sound in stimuli consisting of two crossing frequency glides of unequal duration that shared a short silent gap (40 ms or less) at their crossing point. Even though both glides shared the gap, it was consistently perceived only in the shorter glide, whereas the longer glide could be perceptually completed. Studies on perceptual completion in the auditory domain reported so far have shown that completion of a sound with a gap occurs only if the gap is filled with energy from another sound. In the stimuli used here, however, no such substitute energy was present in the gap, showing evidence for perceptual completion of a sound without local stimulation ('modal' completion). Perceptual completion of the long glide occurred under both monaural and dichotic presentation of the gap-sharing glides; it therefore involves central stages of auditory processing. The inclusion of the gap in the short glide, rather than in both the long and the short glide, is explained in terms of auditory event and stream formation..|
|56.||Kyoko Kanafuka, Yoshitaka Nakajima, Gerard B. Remijn, Takayuki Sasaki, Shunsuke Tanaka, Subjectively divided tone components in the gap transfer illusion, Perception and Psychophysics, 10.3758/BF03193768, 69, 5, 641-653, 2007.07, When a long glide with a short temporal gap in its middle crosses with a continuous short glide at the temporal midpoint of both glides, the gap is perceived in the short glide instead of in the long glide. In the present article, we tested possible explanations for this "gap transfer illusion" by obtaining points of subjective equality of the pitches and durations of the two short tones that are subjectively divided by the gap. The results of two experiments showed that neither an explanation in terms of envelope patterns nor explanations in terms of combination tones or acoustic beats could account for the perception of the short tones in the gap transfer illusion. Rather, the results were compatible with the idea that the illusory tones were formed by the perceptual integration of onsets and offsets of acoustically different sounds. Implications for the perceptual construction of auditory events are discussed..|
|57.||Gerard B. Remijn, Hiroyuki Ito, Perceptual completion in a dynamic scene: An investigation with an ambiguous motion paradigm, Vision Research, 10.1016/j.visres.2007.03.017, 47, 14, 1869-1879, 2007.06, In this study we employed the streaming-bouncing stimulus to investigate aspects of dynamic occlusion, e.g., of objects that temporarily move under occlusion while covertly being tracked. Two occluders, both either luminance-defined or invisible (virtual), were placed on the trajectories of the moving objects in the streaming-bouncing stimulus. We found that the bouncing percept was dominant when the objects moved under luminance-defined occluders but not when they moved under virtual occluders. Perceived motion direction thus varied with occluder visibility. The results seem to suggest that perceptual completion of a moving object interferes with constant motion processing of the same object.
|58.||Gert ten Hoopen, Takayuki Sasaki, Yoshitaka Nakajima, Ger Remijn, Bob Massier, Koenraad S. Rhebergen, Willem Holleman, Time-shrinking and categorical temporal ratio perception: Evidence for a 1:1 temporal category, Music Perception, 10.1525/mp.2006.24.1.1, 24, 1, 1-22, 2006.09, In a previous study, we presented psychophysical evidence that time-shrinking (TS), an illusion of time perception in which empty durations preceded by shorter ones can be conspicuously underestimated, gives rise to categorical perception on the temporal dimension (Sasaki, Nakajima, & ten Hoopen, 1998). In the present study, we first survey studies of categorical rhythm perception and then describe four experiments that provide further evidence that TS causes categorical perception on the temporal dimension. In the first experiment, participants judged the similarity between pairs of /t1/t2/ patterns (slashes denote short sound markers delimiting the empty time intervals t1 and t2). A cluster analysis and a scaling analysis showed that patterns liable to TS piled up in a 1:1 category. The second and third experiments are improved replications in which the sum of t1 and t2 in the /t1/t2/ patterns is kept constant at 320 ms. The results showed that the 12 patterns /115/205/, /120/200/, ..., /165/155/, /170/150/ formed a 1:1 category. The fourth experiment utilizes a cross-modality matching procedure to establish the subjective temporal ratio of the /t1/t2/ patterns, and a 1:1 category was established containing the 11 patterns /120/200/, /125/195/, ..., /165/155/, /170/150/. On the basis of these converging results, we estimate a domain of perceived 1:1 ratios as a function of total pattern duration (t1 + t2) between 160 and 480 ms. We discuss the implications of this study for rhythm perception and production.
|59.||Gerard Bastiaan Remijn, Yoshitaka Nakajima, The perceptual integration of auditory stimulus edges: An illusory short tone in stimulus patterns consisting of two partly overlapping glides, Journal of Experimental Psychology: Human Perception and Performance, 10.1037/0096-1523.31.1.183, 31, 1, 183-192, 2005.02, Two partly overlapping frequency glides can be perceived as consisting of a long pitch trajectory accompanied by a short tone in the temporal middle. It was found that the appearance of this middle tone could not be related to peripheral processes concerned with spectral splatter or combination tones that could have emerged during the overlap of the glides. Furthermore, it was found that the middle tone was perceived even when components of the 2 glides were separated by more than an equivalent rectangular bandwidth at any time during the overlap. The appearance of the middle tone indicates that auditory events can result from the perceptual integration of component parts, that is, stimulus edges, of acoustically different sounds.
|60.||Gerard B. Remijn, Hiroyuki Ito, Yoshitaka Nakajima, Audiovisual integration: An investigation of the 'streaming-bouncing' phenomenon, Journal of physiological anthropology and applied human science, 10.2114/jpa.23.243, 23, 6, 243-247, 2004.11, Temporal aspects of the perceptual integration of audiovisual information were investigated by utilizing the visual 'streaming-bouncing' phenomenon. When two identical visual objects move towards each other, coincide, and then move away from each other, the objects can either be seen as streaming past one another or bouncing off each other. Although the streaming percept is dominant, the bouncing percept can be induced by presenting an auditory stimulus during the visual coincidence of the moving objects. Here we show that the bounce-inducing effect of the auditory stimulus is strongest when its onset and offset occur in temporal proximity of the onset and offset of the period of visual coincidence of the moving objects. When the duration of the auditory stimulus exceeded this period, visual bouncing disappeared. Implications for a temporal window of audiovisual integration and the design of effective audiovisual warning signals are discussed.
|61.||Yoshitaka Nakajima, Takayuki Sasaki, Gerard B. Remijn, Kazuo Ueda, Perceptual organization of onsets and offsets of sounds, Journal of physiological anthropology and applied human science, 10.2114/jpa.23.345, 23, 6, 345-349, 2004.11, Several illusory phenomena in auditory perception are accounted for by using the event construction model presented by Nakajima et al. (2000) in order to explain the gap transfer illusion. This model assumes that onsets and offsets of sounds are detected perceptually as if they were independent auditory elements. They are connected to one another according to the proximity principle to constitute auditory events. This model seems to contribute to a general cross-modal theory of perception where the idea of edge integration plays an important role. Potential directions in which we can connect the present paradigm with speech perception are indicated and possibilities to improve artificial auditory environments are suggested..|
|62.||Goodacre, J., Remijn, G.B., A method for teaching scientific report writing in English for nonnative speakers, (2004). Geijutsu kougaku kenkyuu, 2, 37-40, 2004.04.|
|63.||Remijn, G.B., Nakajima, Y., ten Hoopen, G. , Continuity perception in stimulus patterns consisting of two partly overlapping frequency glides, Journal of Music Perception and Cognition, 7, 77-92, 2001.06.|
|64.||Yoshitaka Nakajima, Takayuki Sasaki, Kyoko Kanafuka, Atsuko Miyamoto, Ger Remijn, Gert Ten Hoopen, Illusory recouplings of onsets and terminations of glide tone components, Perception and Psychophysics, 10.3758/BF03212143, 62, 7, 1413-1425, 2000.10, We present a new auditory illusion, the gap transfer illusion, supported by phenomenological and psychophysical data. In a typical situation, an ascending frequency glide of 2,500 msec with a temporal gap of 100 msec in the middle and a continuously descending frequency glide of 500 msec cross each other at their central positions. These glides move at the same constant speed in logarithmic frequency in opposite directions. The temporal gap in the long glide is perceived as if it were in the short glide. The same kind of subjective transfer of a temporal gap can take place also when the stimulus pattern is reversed in time. This phenomenon suggests that onsets and terminations of glide components behave as if they were independent perceptual elements. We also find that when two long frequency glides are presented successively with a short temporal overlap, a long glide tone covering the whole duration of the pattern and a short tone around the temporal middle can be perceived. To account for these results, we propose an event construction model, in which perceptual onsets and terminations are coupled to construct auditory events and the proximity principle connects these elements..|
|65.||Ger Remijn, Gert van der Meulen, Gert ten Hoopen, Yoshitaka Nakajima, Yorimoto Komori, Takayuki Sasaki, On the robustness of time-shrinking, Journal of the Acoustical Society of Japan (E) (English translation of Nippon Onkyo Gakkaishi), 10.1250/ast.20.365, 20, 5, 365-373, 1999.01, Time-shrinking is by now a well-established perceptual phenomenon. When two empty time intervals, marked by short sounds, are presented contiguously, the first interval can perceptually shrink the second one. This is almost always the case when the first interval is shorter than the second one, unless the difference becomes greater than approximately 80 ms. The phenomenon is rather compelling, so it can be called an illusion of time perception. Our purpose in the present study is to show by means of three experiments how robust this illusion is. The first experiment showed that time-shrinking also operates when the last time interval is preceded by more than one interval (up to five at least). Moreover, the number of preceding intervals had no effect upon the amount of shrinking. The second and third experiments studied the effect of sound-marker frequency on time-shrinking. It was found that the illusory phenomenon clearly appeared even when the sound-marker frequencies differed by more than two octaves. However, the amount of shrinking was largest when the frequencies were equal.|
|66.||Ger B. Remijn, Yoichi Sugita, Mismatch between anticipated and actually presented sound stimuli in humans, Neuroscience Letters, 10.1016/0304-3940(95)12241-9, 202, 3, 169-172, 1996.01, In a tone sequence of continuously ascending frequency or ascending intensity, a tone was occasionally repeated. The repeated tone elicited an anterior negative wave, the latency of which was comparable to the mismatch negativity. However, the amplitude of the negative wave was larger over the left hemisphere than over the right hemisphere. The negative wave might reflect a discrepancy between an expected tone and an actually presented tone..|