Kyushu University Academic Staff Educational and Research Activities Database
List of Papers
Kenji Ono Last modified date: 2022.05.13

Professor / Graduate School and Faculty of Information Science and Electrical Engineering, Department of Informatics / Section of Applied Data Science / Research Institute for Information Technology


Papers
1. Hagita, Katsumi, Murashima, Takahiro, Ogino, Masao, Omiya, Manabu, Ono, Kenji, Deguchi, Tetsuo, Jinnai, Hiroshi and Kawakatsu, Toshihiro, Efficient compressed database of equilibrated configurations of ring-linear polymer blends for MD simulations, Scientific Data, https://doi.org/10.1038/s41597-022-01138-3, 9, 40, 2022.02.
2. Tomohiro Kawanabe, Kazuma Hatta, Kenji Ono, ChOWDER - A New Approach for Viewing 3D Web GIS on Ultra-High-Resolution Scalable Display, 2020 IEEE International Conference on Cluster Computing, CLUSTER 2020, 412-413, 9229604, 2020.09, ChOWDER is an open-source, web-based scalable display system that consists of multiple display devices on which web browsers operate cooperatively to construct a single large pixel space. Newly introduced functionality for displaying 3D geographic information systems allows us to show large 3D geographic information on an ultra-high-resolution tiled display system. This paper describes the implementation, use cases, and related work of this functionality.
3. Xin Liang, Hanqi Guo, Sheng Di, Franck Cappello, Mukund Raj, Chunhui Liu, Kenji Ono, Zizhong Chen, Tom Peterka, Toward Feature-Preserving 2D and 3D Vector Field Compression, 13th IEEE Pacific Visualization Symposium, PacificVis 2020 - Proceedings, 10.1109/PacificVis48177.2020.6431, 81-90, 2020.06, The objective of this work is to develop error-bounded lossy compression methods that preserve topological features in 2D and 3D vector fields. Specifically, we explore the preservation of critical points in piecewise linear vector fields. We define the preservation of critical points as, without any false positive, false negative, or false type change in the decompressed data, (1) keeping each critical point in its original cell and (2) retaining the type of each critical point (e.g., saddle and attracting node). The key to our method is to adapt a vertex-wise error bound for each grid point and to compress the input data together with the error bound field using a modified lossy compressor. Our compression algorithm can also be embarrassingly parallelized for large data handling and in situ processing. We benchmark our method by comparing it with existing lossy compressors in terms of false positive/negative/type rates, compression ratio, and various vector field visualizations with several scientific applications.
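Entry 3 defines feature preservation in terms of keeping each critical point's cell and its type. As a small illustration (a sketch, not the authors' code), the type of a critical point in a 2D linear vector field follows from the eigenvalues of the local Jacobian:

```python
import numpy as np

def classify_critical_point(J):
    """Classify the critical point of the 2D linear vector field v(x) = J @ x
    from the eigenvalues of its Jacobian J (degenerate cases ignored)."""
    ev = np.linalg.eigvals(J)
    if np.allclose(ev.imag, 0):          # two real eigenvalues
        re = np.sort(ev.real)
        if re[0] * re[1] < 0:
            return "saddle"
        return "attracting node" if re[1] < 0 else "repelling node"
    re = ev.real[0]                      # complex pair: equal real parts
    if np.isclose(re, 0.0):
        return "center"
    return "attracting focus" if re < 0 else "repelling focus"

# Eigenvalues +1 and -1: a saddle
print(classify_critical_point(np.array([[1.0, 0.0], [0.0, -1.0]])))
# Eigenvalues -1 +/- i: an attracting focus
print(classify_critical_point(np.array([[-1.0, -1.0], [1.0, -1.0]])))
```

A "false type change" in the paper's sense would occur if lossy decompression perturbed the Jacobian enough to move an eigenvalue across one of these classification boundaries.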
4. Issei Koga, Kenji Ono, Effective pre-processing of genetic programming for solving symbolic regression in equation extraction, Information Search, Integration, and Personalization - 12th International Workshop, ISIP 2018, Revised Selected Papers, 10.1007/978-3-030-30284-9_6, 89-103, 2019.01, Estimating the form of an equation that explains data is very useful for understanding various physical, chemical, social, and biological phenomena. One effective approach for finding the form of an equation is to solve the symbolic regression problem using genetic programming (GP). However, this approach requires a long computation time because of the explosion in the number of combinations of candidate functions that are used as elements to construct equations. In the present paper, a novel method to effectively eliminate unnecessary functions from an initial set of functions using a deep neural network was proposed to reduce the number of computations of GP. Moreover, a method was proposed to improve the accuracy of the classification using eigenvalues when classifying whether functions are required for symbolic regression. Experimental results showed that the proposed method can successfully classify functions for over 90% of the data created in the present study.
5. Tomohiro Kawanabe, Jorji Nonaka, Daisuke Sakurai, Kazuma Hatta, Shuhei Okayama, Kenji Ono, Showing Ultra-High-Resolution Images in VDA-Based Scalable Displays, Cooperative Design, Visualization, and Engineering - 16th International Conference, CDVE 2019, Proceedings, 10.1007/978-3-030-30949-7_13, 116-122, 2019.10, For web-browser-based scalable display systems, we recently presented the Virtual Display Area (VDA) [1] concept, which unifies different display resolutions and tilings by abstracting the physical pixel spaces into a single software display. Web browsers, however, are generally not designed for handling large images, even though ultra-high-resolution images are emerging, especially in the HPC and big data communities. We thus present an approach to handling ultra-high-resolution images in web-based scalable display systems while keeping the principle of the VDA, achieving both efficiency of operation and simplicity of software design. We show the advantage of our approach by comparing its performance to that of SAGE2, the de facto standard web-based scalable display system.
6. Voxel-based simulation of nasal airflow during a sniff.
7. Issei Koga, Kenji Ono, Effective Pre-processing of Genetic Programming for Solving Symbolic Regression in Equation Extraction, Communications in Computer and Information Science, 1040, 89-103, 2019.08, Estimating the form of an equation that explains data is very useful for understanding various physical, chemical, social, and biological phenomena. One effective approach for finding the form of an equation is to solve the symbolic regression problem using genetic programming (GP). However, this approach requires a long computation time because of the explosion in the number of combinations of candidate functions that are used as elements to construct equations. In the present paper, a novel method to effectively eliminate unnecessary functions from an initial set of functions using a deep neural network was proposed to reduce the number of computations of GP. Moreover, a method was proposed to improve the accuracy of the classification using eigenvalues when classifying whether functions are required for symbolic regression. Experimental results showed that the proposed method can successfully classify functions for over 90% of the data created in the present study.
8. Shinya Kimura, Yusuke Kimura, Toshihiro Sera, Kenji Ono, Gaku Tanaka, Voxel-based simulation of nasal airflow during a sniff, Transactions of Japanese Society for Medical and Biological Engineering, 10.11239/jsmbe.56.37, 56, 2, 37-43, 2018.01, To establish a new simplified approach to quantifying the impact of surgical intervention on nasal airflow, we used voxel-based computational fluid dynamics simulations to analyze nasal airflow under unsteady flow conditions mimicking a sniff, which involves brief inhalation accompanied by rapid acceleration. The time-transient distribution of the flow rate in the coronal cross-section was investigated to validate the results of this voxel method against those of a conventional boundary-fitted method. Despite the simple approach using coarse voxel grids, the voxel method accurately reproduced rapid changes in flow distribution during a sniff. We also found that correctly modeling rapid changes in the characteristic flow structure in the nasal cavity (including a jet posterior to the nasal valve and a recirculating flow in the upper anterior region of the cavity) is important for reproducing the unsteady flow distribution during a sniff. Thus, voxel-based simulations can be used to assess the dynamics of unsteady nasal airflow.
9. Kenji Ono, Jorji Nonaka, Hiroyuki Yoshikawa, Takeshi Nanri, Yoshiyuki Morie, Tomohiro Kawanabe, Fumiyoshi Shoji, Design of a Flexible In Situ Framework with a Temporal Buffer for Data Processing and Visualization of Time-Varying Datasets, High Performance Computing - ISC High Performance 2018 International Workshops, Revised Selected Papers, 10.1007/978-3-030-02465-9_17, 243-257, 2018.01, This paper presents an in situ framework focused on time-varying simulations that uses a novel temporal buffer for storing simulation results sampled at user-defined intervals. The framework has been designed to provide flexible data processing and visualization capabilities in modern HPC operational environments composed of powerful front-end systems, for pre- and post-processing purposes, along with traditional back-end HPC systems. The temporal buffer is implemented using the functionality provided by the Open Address Space (OpAS) library, which enables asynchronous one-sided communication from outside processes to any exposed memory region on the simulator side. This buffer can store time-varying simulation results, which can then be processed via in situ approaches with different proximities. We present a prototype of our framework and the code integration process with a target simulation code. The proposed in situ framework uses separate files, in the form of Python scripts, to describe the initialization and execution codes. The framework also enables runtime modification of these Python scripts, giving users greater flexibility not only for data processing, such as visualization and analysis, but also for simulation steering.
10. Jorji Nonaka, Kenji Ono, Naohisa Sakamoto, Kengo Hayashi, Motohiko Matsuda, Fumiyoshi Shoji, Kentaro Oku, Masahiro Fujita, Kazuma Hatta, A Large Data Visualization Framework for SPARC64 fx HPC Systems - Case Study: K Computer Operational Environment, 8th IEEE Symposium on Large Data Analysis and Visualization, LDAV 2018, 10.1109/LDAV.2018.8739214, 108-109, 2018.10, Leading-edge supercomputer systems have been designed to achieve the highest computational performance possible for running a wide variety of large-scale simulations, and pre- and post-processing are usually not considered a main design feature. Although supercomputer systems may have a peculiar CPU architecture, the auxiliary computational systems tend to use commodity-based hardware and software in the form of servers and clusters. In the case of the K computer operational environment at RIKEN R-CCS, the supercomputer itself is based on the SPARC64 fx CPU architecture, while the pre- and post-processing servers are based on the traditional x86 CPU architecture. In this poster we present a large data visualization environment developed for this peculiar HPC operational environment, along with some of the efforts made to meet large data visualization needs. It is publicly known that the next-generation leading-edge Japanese supercomputer will abandon this CPU architecture in favor of another, but we expect that some of the knowledge obtained in this development will also be useful for the forthcoming supercomputer system.
11. Shinya Kimura, Takashi Sakamoto, Toshihiro Sera, Hideo Yokota, Kenji Ono, Denis J. Doorly, Robert C. Schroter, Gaku Tanaka, Voxel-based modeling of airflow in the human nasal cavity, Computer Methods in Biomechanics and Biomedical Engineering, 10.1080/10255842.2018.1555584, 22, 3, 331-339, 2019.02, This paper describes the simulation of airflow in human nasal airways using voxel-based modeling characterized by robust, automatic, and objective grid generation. Computed tomography scans of a healthy adult nose are used to reconstruct 3D virtual models of the nasal airways. Voxel-based simulations of restful inspiratory flow are then performed using various mesh sizes to determine the level of granularity required to adequately resolve the airflow. For meshes with close voxel spacings, the model successfully reconstructs the nasal structure and predicts the overall pressure drop through the nasal cavity.
12. Kenji Ono, Jorji Nonaka, Hiroyuki Yoshikawa, Takeshi Nanri, Yoshiyuki Morie, Tomohiro Kawanabe, Fumiyoshi Shoji, Design of a Flexible In Situ Framework with a Temporal Buffer for Data Processing and Visualization of Time-Varying Datasets, Lecture Notes in Computer Science, 11203, 243-257, 2019.01.
13. Kenji Ono, Takanori Uchida, High-Performance Parallel Simulation of Airflow for Complex Terrain Surface, Modelling and Simulation in Engineering, 10.1155/2019/5231839, 2019, 2019.02, It is important to develop a reliable and high-throughput simulation method for predicting airflows in the installation planning phase of windmill power plants. This study proposes a two-stage mesh generation approach to reduce the meshing cost and introduces a hybrid parallelization scheme for atmospheric fluid simulations. The meshing approach splits mesh generation into two stages: in the first stage, the meshing parameters that uniquely determine the mesh distribution are extracted, and in the second stage, a mesh system is generated in parallel via an in situ approach using the parameters obtained in the initialization phase of the simulation. The proposed two-stage approach is flexible since an arbitrary number of processes can be selected at run time. An efficient OpenMP-MPI hybrid parallelization scheme using a middleware that provides a framework of parallel codes based on the domain decomposition method is also developed. The preliminary results of the meshing and computing performance show excellent scalability in the strong scaling test.
14. Analyses and Visualization of Characteristics of Car Body Types by Using Convolutional Neural Network.
15. Kazunori Mikami, Kenji Ono, Jorji Nonaka, Performance evaluation and visualization of scientific applications using PMlib, Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018, 10.1109/CANDARW.2018.00053, 243-249, 2018.12, The computational performance of scientific applications on HPC systems is often much lower than users expect based on the system's maximum performance specifications. To understand the basis for this performance gap, a multi-perspective evaluation is important. For instance, from the user perspective, correlating the theoretical computation coded in a source program with the actual computation workload produced by the compilers is valuable. From the system perspective, evaluating the characteristics of microarchitecture elements such as the processor core and memory is of significance. An open-source library called PMlib was developed to address these types of synthetic evaluations. PMlib provides an avenue for reporting the arithmetic/application workload explicitly coded in the source program, as well as the actually executed system workload. It also provides detailed utilization reports for processor-specific hardware, including categorized SIMD instruction statistics, the layered cache hit/miss rate, and the effective memory bandwidth, which are captured via hardware performance counters (HWPC). Using PMlib, users can conduct a synthetic analysis of application performance and obtain useful feedback for further optimized execution of applications.
16. Eduardo C. Inacio, Jorji Nonaka, Kenji Ono, Mario A.R. Dantas, Fumiyoshi Shoji, Characterizing I/O and Storage Activity on the K Computer for Post-Processing Purposes, 2018 IEEE Symposium on Computers and Communications, ISCC 2018, 10.1109/ISCC.2018.8538488, 2018-June, 730-735, 2018.11, An increasing volume of data is produced by computational science applications executing on flagship-class supercomputers, such as the K computer. Most of these huge datasets later pass through post-processing for visualization and analysis in order to derive meaningful information. Particular characteristics of the computing environment, the application, and the dataset itself can make efficiently exploiting the performance capabilities of the large-scale storage systems supporting these supercomputers a challenging task. This paper presents a characterization of the I/O and storage activity of jobs executed on the K computer, focusing on post-processing purposes, based on nine months of recorded production operation. Results demonstrate the intensive data demand of K computer applications, in terms of the volume of file I/O carried out during job execution, the amount of data staged in and staged out, and the number of files produced per job. These aspects shed light on challenges and opportunities for specialized data management libraries for post-hoc data visualization and analysis.
17. Tomohiro Kawanabe, Jorji Nonaka, Kenji Ono, ChOWDER: Dynamic contents sharing through remote tiled display system, VINCI 2018 - 11th International Symposium on Visual Information Communication and Interaction, 10.1145/3231622.3232504, 108-109, 2018.08, Due to the continuous increase in the scale of numerical simulations, research on visualization has shifted to in-situ/in-transit approaches. The interactivity of large-scale visualization has also become increasingly important. In order to observe large-scale visualization data in detail, high-resolution displays, such as those with 8K or 16K resolutions, give an opportunity to inspire new discovery. With the commoditization of high-resolution displays, tiled display walls (TDWs) have facilitated collaborative research, where a large screen size is required for sharing content among multiple sites. In this paper, we propose a remote collaboration method that utilizes a TDW driver (ChOWDER), which enables content sharing among multiple sites even with different display configurations, and a visualization application (HIVE) for dynamic content sharing of interactive visualization results.
18. Jorji Nonaka, Kenji Ono, Masahiro Fujita, 234Compositor: A flexible parallel image compositing framework for massively parallel visualization environments, Future Generation Computer Systems, 10.1016/j.future.2017.02.011, 82, 647-655, 2018.05, Leading-edge HPC systems have already been generating a vast amount of time-varying complex data sets, and future-generation HPC systems are expected to produce much greater amounts of such data, making their visualization and analysis a much more challenging task. In such a scenario, the in-situ visualization approach, where the same HPC system is used for both numerical simulation and visualization, is expected to become more of a necessity than an option. On massively parallel environments, the sort-last approach, which requires final image compositing, has become the de facto standard for parallel rendering. In this work, we present 234Compositor, a scalable and flexible parallel image compositing framework for massively parallel rendering applications. It is composed of a single-stage power-of-two conversion mechanism based on 234 scheduling of 3-2 and 2-1 eliminations, and a final image gathering mechanism based on data padding and MPI rank reordering that enables use of the MPI_Gather collective operation. In addition, hybrid MPI/OpenMP parallelism can be applied to take advantage of the multi-node, multi-core architecture of modern HPC systems. We confirmed the scalability of the proposed approach by evaluating a Binary-Swap implementation of 234Compositor on the K computer, a Japanese leading-edge supercomputer installed at RIKEN AICS. We also evaluated an integration with HIVE (Heterogeneously Integrated Visual-analytic Environment) in order to verify real-world usage. From the encouraging scalability results, we expect that this approach will also be useful on next-generation HPC systems, which may demand a higher level of parallelism.
19. Mikio Iizuka, Kenji Ono, Influence of the phase accuracy of the coarse solver calculation on the convergence of the parareal method iteration for hyperbolic PDEs, Computing and Visualization in Science, 2018.05.
20. Jorji Nonaka, Eduardo C. Inacio, Kenji Ono, Mario A.R. Dantas, Yasuhiro Kawashima, Tomohiro Kawanabe, Fumiyoshi Shoji, Data I/O management approach for the post-hoc visualization of big simulation data results, International Journal of Modeling, Simulation, and Scientific Computing, 10.1142/S1793962318400068, 2018.04, Leading-edge supercomputers, such as the K computer, have generated a vast amount of simulation results, and most of these datasets are stored on the file system for post-hoc analysis such as visualization. In this work, we first investigated the data generation trends of the K computer by analyzing operational log files. We verified a tendency to generate large numbers of distributed files as simulation outputs; in most cases, the number of files is proportional to the number of utilized computational nodes, with each computational node producing one or more files. Considering that the computational cost of visualization tasks is usually much smaller than that of large-scale numerical simulations, a flexible data input/output (I/O) management mechanism becomes highly useful for post-hoc visualization and analysis. We focused on the xDMlib data management library and its flexible data I/O mechanism to enable flexible loading of big computational climate simulation results. In the proposed approach, a pre-processing step is executed on the target distributed files to generate the lightweight metadata necessary for elaborating the data assignment mapping used in the subsequent data loading process. We evaluated the proposed approach using a 32-node visualization cluster and the K computer. Besides the inevitable performance penalty of longer data loading times when using a smaller number of processes, the approach avoids any data replication via copy, conversion, or extraction. In addition, users can freely select any number of nodes, regardless of the number of distributed files, for post-hoc visualization and analysis purposes.
22. Tomohiro Kawanabe, Jorji Nonaka, Kazuma Hatta, Kenji Ono, ChOWDER: An adaptive tiled display wall driver for dynamic remote collaboration, Cooperative Design, Visualization, and Engineering - 15th International Conference, CDVE 2018, Proceedings, 10.1007/978-3-030-00560-3_2, 11-15, 2018.01, Herein, we propose a web-based tiled display wall (TDW) system that is capable of supporting collaborative activities among multiple remote sites. Known as the Cooperative Workspace Driver (ChOWDER), this system introduces the virtual display area (VDA) concept as a method for handling display environments with different physical resolutions and aspect ratios. This concept, one of ChOWDER's key features, allows ad hoc participation among multiple sites to facilitate remote collaboration and cooperative work.
23. Fan Hong, Chongke Bi, Hanqi Guo, Kenji Ono, Xiaoru Yuan, Compression-based integral curve data reuse framework for flow visualization, Journal of Visualization, 10.1007/s12650-017-0428-4, 20, 4, 859-874, 2017.11, [URL], Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation and source-destination queries, leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves to greatly reduce their retrieval cost, especially in resource-limited environments. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve these objectives. Results show that our data reuse framework achieves accelerations of tens of times in resource-limited environments compared to on-the-fly particle tracing, while keeping the information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.
25. Comparison and Visualization of Affective Information on Two Liquors' Reviews on the Web Using Data Mining.
26. Performance Evaluation of Parareal Method for Large-Scale Spatio-Temporal Computation.
27. Kenji Ono, Daisuke Sakurai, Hamish Carr, Jorji Nonaka, Tomohiro Kawanabe, Flexible Fiber Surface: A Reeb-Free Approach, Topology-Based Methods in Visualization 2017 (TopoInVis 2017), 2017.02, [URL], The fiber surface generalizes the popular isosurface to multi-fields, visualizing the pre-images as surfaces. As with isosurfaces, however, fiber surface components may suffer from visual occlusion. The flexible isosurface avoids occlusion of components by tracking them topologically in the contour tree, at some cost to user comprehension. For the fiber surface, this requires computing the Reeb space, which poses further issues in comprehension. However, the flexible isosurface can also be defined as a set of user interactions, and we extend this notion to provide the flexible fiber surface without pre-computing the global topology. Our on-demand tracking of surfaces is Reeb-free, as it requires the explicit computation of neither the Reeb graph nor the Reeb space. We study our geometrical approach, taking into account how the semantics of the flexible isosurface generalize to Reeb-free multi-field analysis.
28. Ryo Takenoshita, Toshinobu Harada, Kenji Ono, Visualization of Affective Layered Structures of Liquors' Reviews on the Web using Rough Set Theory, Transactions of Japan Society of Kansei Engineering, 10.5057/jjske.TJSKE-D-16-00057, 16, 1, 19-28, 2017.02, In product planning, affective information is increasingly utilized as lifestyles diversify. In this research, we developed a system that visualizes the affective layered structures of liquor reviews using rough set theory (hereafter, the visualization system). Specifically, the development consists of the following steps: (1) the visualization system extracts affective words from the reviews on mail-order sites; (2) we create a decision table in rough set theory using the affective words; (3) we visualize the affective layered structure using the decision rules computed from the decision table. We conducted an evaluation experiment of the visualization system with 20 subjects: ten subjects created summaries of liquor reviews using the visualization system, and the remaining ten created summaries without it. The usefulness of the visualization system was verified from the contents of the summaries and the time required to create them.
29. Kenji Ono, Takashi Shimizu, Naohisa Sakamoto, Jorji Nonaka, Koji Koyamada, Web-based Visualization System for Large-Scale Volume Datasets, The 35th JSST Annual Conference International Conference on Simulation Technology, 2016.10.
30. Progress of Visualization Technology and its Contribution to Computational Science.
31. Seigo Imamura, Kenji Ono, Mitsuo Yokokawa, Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix, International Journal of Computational Fluid Dynamics, 10.1080/10618562.2016.1234046, 30, 6, 395-401, 2016.07, Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, as a matrix-matrix product. We implemented several iterative methods and compared their performance. The maximum performance on SPARC64 VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence behaviors of the linear systems, we introduced a control method that eliminates the calculation of already converged vectors.
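The multiple-vector idea in entry 31 can be sketched with a toy Jacobi solver (an illustration under assumptions, not the paper's implementation): all right-hand sides sharing the coefficient matrix are updated together as a matrix-matrix product, and columns whose residual has converged are dropped from further updates:

```python
import numpy as np

def jacobi_multi_rhs(A, B, tol=1e-10, max_iter=10000):
    """Jacobi iteration for A X = B with several right-hand sides.

    All right-hand-side vectors are updated together as a matrix-matrix
    product; columns whose residual falls below `tol` are excluded from
    further updates (the convergence-control idea described in entry 31).
    """
    D = np.diag(A)
    R = A - np.diag(D)                       # off-diagonal part of A
    X = np.zeros_like(B, dtype=float)
    active = np.ones(B.shape[1], dtype=bool) # columns still iterating
    for _ in range(max_iter):
        idx = np.flatnonzero(active)
        if idx.size == 0:
            break
        # update only the still-unconverged columns
        X[:, idx] = (B[:, idx] - R @ X[:, idx]) / D[:, None]
        res = np.linalg.norm(A @ X[:, idx] - B[:, idx], axis=0)
        active[idx[res < tol]] = False
    return X

# Diagonally dominant test system with three right-hand sides
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 2.0, 0.0], [2.0, 1.0, 5.0]])
X = jacobi_multi_rhs(A, B)
print(np.allclose(A @ X, B))
```

Grouping the vectors into one dense block turns memory-bound matrix-vector products into a more cache-friendly matrix-matrix product, which is the performance effect the paper evaluates.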
32. Development of Large-Scale Parallel Visualization System HIVE.
33. HPC Enhanced Product Design.
34. Effectiveness of Two-dimensional Medical Image Interpolation in the Transverse Plane in the Voxel Analysis of Nasal Air Flow and Temperature.
35. Jorji Nonaka, Masahiro Fujita, Kenji Ono, Multi-Step Image Composition Approach for Sort-Last Massively Parallel Rendering, JASSE, 10.15748/jasse.2.108, 2, 1, 108-125, 2015.05, Large-scale numerical simulations on modern leading-edge supercomputers have been continuously generating a tremendous amount of data. In-situ visualization is widely recognized as the most rational way to analyze and mine such large data sets through sort-last parallel visualization. However, the sort-last method requires communication-intensive final image composition and can suffer from scalability problems on massively parallel rendering and compositing environments. In this paper, we present the multi-step image composition approach, which achieves scalability by minimizing undesirable performance degradation in such massively parallel rendering environments. We verified the effectiveness of the proposed approach on the K computer, installed at RIKEN AICS, and achieved a speedup of 1.8× to 7.8× using 32,768 composition nodes and different image sizes. We foresee great potential for this method to meet the even larger image composition demands brought about by the rapid increase in the number of processing elements in modern HPC systems.
36. Performance improvement of iterative solver using bit-representation of sparse matrices(Selected Researches in CFD28).
37. Grid Generation of Hierarchical Cartesian Data Structure for Large-Scale Thermal Flow Simulation over 10G Elements and Its Applications.
38. Voxel Simulation of Nasal Air Flow and Temperature Based on the Medical Images.
39. 2G15 Voxel-based simulation of heat transfer between nasal airflow and mucosal membrane.
40. Kenji Ono, Yasuhiro Kawashima, Tomohiro Kawanabe, Data centric framework for large-scale high-performance parallel computation, Procedia Computer Science, 10.1016/j.procs.2014.05.218, 29, 2336-2350, 2014.01, Supercomputer architectures are being upgraded with different levels of parallelism to improve computing performance. This makes it difficult for scientists to develop high-performance code in a short time. From the viewpoint of productivity and software life cycle, a concise yet effective infrastructure is required to achieve parallel processing. In this paper, we propose a usable building-block framework for building parallel applications on large-scale Cartesian data structures. The proposed framework is designed such that each process in a simulation cycle can easily access the generated data files through usable functions. This framework enables us to describe parallel applications with fewer lines of source code and hence contributes to the productivity of the software. Further, the framework was designed with performance in mind, and the flow simulator developed on top of it demonstrated excellent weak-scaling performance on the K computer..
41. Shota Ishikawa, Haiyuan Wu, Chongke Bi, Qian Chen, Hirokazu Taki, Kenji Ono, Fluid data compression and ROI detection using run length method, Procedia Computer Science, 10.1016/j.procs.2014.08.228, 35, C, 1284-1291, 2014.01, It is difficult to visualize large-scale time-varying data directly, even with supercomputers. Data compression and ROI (Region of Interest) detection are often used to improve the efficiency of visualizing numerical data. It is well known that Run Length encoding is a good technique for compressing data in which the same sequence appears repeatedly, such as an image with little change or a set of smooth fluid data. Another advantage of Run Length encoding is that it can be applied to every dimension of the data separately; therefore, the Run Length method can easily be implemented as a parallel processing algorithm. We proposed two different Run Length based methods. When using the Run Length method to compress a data set, its size may increase after compression if the data does not contain many repeated parts, so we apply compression only when the data can be compressed effectively. By checking the compression ratio, we can also detect ROIs. The effectiveness and efficiency of the proposed methods are demonstrated through comparison with several existing compression methods using different sets of fluid data..
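The core of the approach described above, per-dimension run-length encoding plus ROI detection from the compression ratio, can be sketched in a few lines. The helper names and the ratio threshold are illustrative assumptions, not the paper's actual implementation.

```python
def rle_encode(seq):
    """Run-length encode a 1-D sequence as [value, count] pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    """Invert rle_encode, expanding each [value, count] pair."""
    return [v for v, n in pairs for _ in range(n)]

def is_roi(seq, threshold=0.5):
    """Flag a region of interest when RLE barely compresses the data.

    Each pair stores two numbers, so encoded_size / raw_size approaches
    (or exceeds) 1 where values change often, which marks potentially
    interesting flow features; smooth regions compress well and are
    skipped. The 0.5 threshold is an arbitrary choice for this sketch.
    """
    ratio = 2 * len(rle_encode(seq)) / max(len(seq), 1)
    return ratio > threshold
```

Because the encoding runs independently along each dimension, the same routine can be applied row by row (or slab by slab) in parallel, which is the parallelization property the abstract highlights.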
42. Parallelization of a CFD system based on hierarchical Cartesian grids.
43. 3E14 Assessment of nasal surgery using voxel-based simulation.
44. Satoshi Ito, Kazuya Goto, Kenji Ono, Automatically optimized core mapping to subdomains of domain decomposition method on multicore parallel environments, Computers and Fluids, 10.1016/j.compfluid.2012.04.024, 80, 1, 88-93, 2013.01, In a hierarchical parallel environment with multicore processors, the mapping of subdomains to CPUs/cores was optimized considering both the communication speed of the different communication paths and the communication pattern of a parallel application based on the domain decomposition method. We evaluated the proposed method on a massively parallel Intel Xeon PC cluster and confirmed in several benchmark tests that it could reduce communication time and achieve higher parallel performance than without the mapping..
45. Hank Childs, Hans Christian Hege, Mark Hereld, Kenji Ono, David Rogers, Deborah Silver, UltraVis 2012 - 2012 workshop on ultrascale visualization, Proceedings - 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, SCC 2012, 10.1109/SC.Companion.2012.371, 2012.12.
46. F302 Technology that joins product design and simulation.
47. Development of technology for very-large-scale voxel generation and its interface for simulators.
48. Advanced Visualization Technology on Large-Scale Numerical Simulation.
49. Selected Researches in CFD25.
50. 7H16 Voxel-based thermo-fluid analysis of air conditioning function in the human nasal cavity.
51. 402 Examination of parallel computing of a Lagrangian-Eulerian coupling method based on level sets.
52. Y. Obikane, T. Nemoto, K. Ogura, M. Iwata, Kenji Ono, Using the V-Sphere Code for the passive scalar in the wake of a bluff body, World Academy of Science, Engineering and Technology, 77, 859-861, 2011.05, The objective of this research was to find the diffusion properties of vehicles on the road by using the V-Sphere Code. The diffusion coefficient and the height of the wake were estimated with the LES option and the third-order MUSCL scheme. We evaluated the code with the changes in the moments of Reynolds stress along the mean streamline. The results show that at the leading part of a bluff body LES has some advantages over RANS, since the changes in the strain rates are larger there. We estimated that the diffusion coefficient with the computed (non-dimensional) Reynolds stress was about 0.96 times the mean velocity..
53. Thermo-fluid analysis of airflow in the human nasal cavity.
54. 9D-11 An Emphatic Rendering Method for Understanding Biomechanisms.
55. 9D-14 Voxel-based simulation of air flow and temperature in the nasal cavity.
56. Current State of CFD Technology and Trend in the Future.
57. Kohei Okita, Kenji Ono, Shu Takagi, Yoichiro Matsumoto, Development of high intensity focused ultrasound simulator for large-scale computing, International Journal for Numerical Methods in Fluids, 10.1002/fld.2470, 65, 1-3, 43-66, 2011.01, High intensity focused ultrasound (HIFU) has been developed as a noninvasive therapeutic option. HIFU simulations are required to support the development of the HIFU device as well as the realization of noninvasive treatments. In this study, an HIFU simulator is developed that uses voxel data constructed from computed tomography scan data on the living human body and signed distance function (SDF) data to represent the object. The HIFU simulator solves the conservation equations of mass and momentum for mixtures with the equation of state for each medium. The numerical method is the finite-difference time-domain method. A high-order finite-difference method based on Lagrange interpolation is implemented to reduce numerical phase error. This approach reproduces wave propagation to an nth order of accuracy. Representation of the sound source by volume fraction, which is obtained from the SDF using a smoothed Heaviside function, provides around 1.66th order of accuracy in the spherical wave problem. As a realistic application, transcranial HIFU therapy for a brain tumor is modeled, where tissue inhomogeneity causes not only displacement of the focal point but also diffusion of the focused ultrasound. Even in such cases, focus control using phase delays, which are pre-computed based on the time-reversal procedure, enables correct focal point targeting as well as improved ultrasound focusing..
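The volume-fraction representation of the sound source mentioned above maps a signed distance value to an occupancy value through a smoothed Heaviside function. A common sine-smoothed form is sketched below; the paper does not give its exact formula here, so the functional form and the transition half-width eps are assumptions for illustration.

```python
import math

def volume_fraction(phi, eps):
    """Smoothed Heaviside of a signed distance value phi.

    Assumed sine-smoothed form (not necessarily the paper's): the
    transition spans a band of width 2*eps around the interface
    phi = 0, giving 0 deep inside one medium, 1 deep inside the other,
    and a smooth ramp in between.
    """
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)
```

Evaluating this at each cell center of the voxel grid yields the fractional occupancy used to represent the transducer surface without a body-fitted mesh.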
58. Automatically optimized core mapping for subdomains of domain decomposition methods on multicore parallel environment.
59. An Eulerian mesh-based scheme using adaptive mesh refinement for fluid-structure interaction.
60. Kohei Okita, Kenji Ono, Shu Takagi, Yoichiro Matsumoto, Numerical simulation of the tissue ablation in high-intensity focused ultrasound therapy with array transducer, International Journal for Numerical Methods in Fluids, 10.1002/fld.2467, 64, 10-12, 1395-1411, 2010.11, The development of high-intensity focused ultrasound (HIFU) therapy for deeply situated cancer has been desired. One problem is focal point displacement due to the inhomogeneity of the human body. The objectives are the realization of appropriate phase control of an array transducer and support for preoperative planning of HIFU therapy by the computational prediction of ablation regions. To these ends, in this study we have developed an HIFU simulator that employs a voxel phantom constructed from CT/MRI data of a living human body. To reproduce the pressure propagation through an inhomogeneous medium, the mass and momentum equations for a mixture with the equation of state of the medium are solved. The ablation of tissue is modeled as a phase transition by using the phase field model. Then, the heat equation with viscous dissipation as a heat source and the Allen-Cahn equation with a free energy model are solved to predict the development of the ablation region. The basic equations are discretized by the finite difference method. HIFU therapy with an array transducer for liver cancer is reproduced numerically. Although the results without phase control show displacement and diffusion of the focal point due to the inhomogeneity of the human body, a clear focal point is obtained by using the array transducer with an appropriate phase delay obtained from pre-computation. The HIFU simulator predicts that the ablation region will develop close to the target, owing to the phase control of the array transducer..
61. Are Supercomputers Really Useful for Engineering?.
62. Simulation of Incompressible Viscous Flow on Cartesian Grid for Arbitrary Geometries Composed of Non-Watertight Polygon Elements(Fluids Engineering).
63. Robust voxelization algorithm with memory-efficient data structure.
64. APPLICATION OF A PARTITIONED COUPLING ALGORITHM USING LEVEL SET FUNCTION TO FOLDED AIRBAG DEPLOYMENT.
65. Voxel Simulation of Airflow in Human Nasal Cavity.
66. Kei Akasaka, Kenji Ono, Simulation of incompressible viscous flow on cartesian grid for arbitrary geometries composed of non-watertight polygon elements, Nihon Kikai Gakkai Ronbunshu, B Hen/Transactions of the Japan Society of Mechanical Engineers, Part B, 76, 764, 536-545, 2010.04, A useful computational method was proposed for incompressible viscous flow simulation around arbitrary geometries on a Cartesian grid system. This method has the remarkable feature of allowing flow simulation around geometries composed of non-watertight and incomplete polygon elements without any repair. The proposed method can drastically reduce manpower in the mesh-generation process because repair of defective polygon elements is eliminated. In this method, the governing equations are discretized using an extrapolated velocity that satisfies the no-slip condition on the wall surface, taking into account the distance between the polygons and the cell centers of the Cartesian grid. Moreover, this approach approximates the shape more accurately than the voxel method. In this paper, four different cases were calculated to validate the proposed method. First, the flow around an inclined plate thinner than the mesh size was calculated to show that the method can simulate flow around non-watertight geometry; in this case, the accuracy of the shape approximation was also compared between the proposed method and the voxel method. Second and third, flows around a circular cylinder (Re = 40, 100) were calculated to confirm the accuracy of the solutions for steady and unsteady flows. Finally, an internal flow in a curved duct was calculated to compare the solutions with other researchers' results, including experiments. Consequently, it was found that the proposed method could simulate flow around non-watertight geometry with reasonably good accuracy compared with the literature..
67. 0415 Voxel simulation of nasal airflow.
68. Kohei Okita, Kazuyasu Sugiyama, Kenji Ono, Shu Takagi, Yoichiro Matsumoto, Numerical study on high intensity focused ultrasound therapy using array transducer, Physics Procedia, 10.1016/j.phpro.2010.01.042, 3, 1, 315-322, 2010.01, The development of HIFU therapy for deeply situated cancers, such as liver cancer and brain cancer, has been desired. One problem is the displacement of the focal point due to the reflection and refraction of ultrasound. In the present study, HIFU therapy for brain cancer through a skull with a bowl-shaped array transducer is performed numerically. Our approach is to solve the mass and momentum equations for a mixture with the equation of state of the media, through which the nonlinearity is mainly taken into account. The three-dimensional controllability of the focal point by the array transducer with a phase delay is examined. As a result of propagating the ultrasound through the skull with the phase delay, we obtain a clear focal point whose peak pressure is higher than that without the phase delay. Therefore, the array transducer with an appropriate phase delay enables the focal point to be assigned to the target adequately, even when the ultrasound propagates through an inhomogeneous medium such as a human skull..
69. Gaku Hashimoto, Kenji Ono, Interface Treatment under No-Slip Conditions Using Level-Set Virtual Particles for Fluid-Structure Interaction, Theoretical and Applied Mechanics Japan, 10.11345/nctam.58.325, 58, 0, 325-342, 2010.01, We show that a fluid-structure coupling method based on a fixed Eulerian mesh using the level set function is applicable to fluid-structure interaction (FSI) problems involving incompressible viscous fluids and thin elastic structures. The coupling method was originally proposed for large-deformation FSI analyses of high-speed compressible inviscid flows and thin structures such as airbags. We introduce a novel interface-treatment technique that uses virtual particles with level sets and structural normal velocities to enforce the kinematical condition at the fluid-structure interface on a fluid fixed mesh. The virtual particles also have structural tangent velocities so as to impose no-slip conditions at the interface. Application of the method to finite-deformation FSI problems, and comparison of the results with those obtained by the conventional moving ALE mesh-based scheme show the adequacy of the method. It is confirmed that the appearance of the flow and geometry of the interface are similar to those for the ALE scheme..
70. 503 Development of parallel FEM application using SPHERE.
71. 515 Airbag deployment simulation including the effect of outside air.
72. Visualization for Extreme Large Data Set Toward a Next-Generation Super Computer.
73. S0203-1-2 Numerical simulation of nasal airflow.
74. J0102-3-3 Simulation of the high intensity focused ultrasound therapy including the heat coagulation of tissue.
75. Development of an application middleware for the next-generation supercomputer.
76. S202 Progress of volume-based CFD system.
77. Kenjiro Shimano, Takahiro Kumano, Michitoshi Takagi, Kenji Ono, Nariaki Horinouchi, Seiji Tarumi, Wind Tunnel Testing of JSAE Standard Low-aerodynamic-drag Vehicle Body Using 1/5 Scale Model, JSAE Review, 30, 1, 51-60, 2009.01.
78. Interface treatment with no-slip condition using level set virtual particles for fluid-structure interaction.
79. Jorji Nonaka, Kenji Ono, Hideo Miyachi, Performance Evaluation of Large-Scale Parallel Image Compositing on a T2K Open Supercomputer, Information and Media Technologies, 10.11185/imt.4.780, 4, 4, 780-788, 2009, This paper presents a performance evaluation of large-scale parallel image compositing on a T2K Open Supercomputer. Traditional image compositing algorithms were not primarily designed for exploiting the combined message passing and the shared address space parallelism provided by systems such as T2K Open Supercomputer. In this study, we investigate the Binary-Swap image compositing method because of its promising potential for scalability. We propose some improvements to the Binary-Swap method aiming to fully exploit the hybrid programming model. We obtained encouraging results from the performance evaluation conducted on Todai Combined Cluster, a T2K Open Supercomputer at the University of Tokyo. The proposed improvements have also shown a high potential to tackle the large-scale image compositing problem on leading-edge HPC systems where an ever increasing number of processing cores is involved..
80. Jorji Nonaka, Kenji Ono, Hideo Miyachi, Performance Evaluation of Large-Scale Parallel Image Compositing on a T2K Open Supercomputer, IPSJ Online Transactions, 10.2197/ipsjtrans.2.140, 2, 140-148, 2009, This paper presents a performance evaluation of large-scale parallel image compositing on a T2K Open Supercomputer. Traditional image compositing algorithms were not primarily designed for exploiting the combined message passing and the shared address space parallelism provided by systems such as T2K Open Supercomputer. In this study, we investigate the Binary-Swap image compositing method because of its promising potential for scalability. We propose some improvements to the Binary-Swap method aiming to fully exploit the hybrid programming model. We obtained encouraging results from the performance evaluation conducted on Todai Combined Cluster, a T2K Open Supercomputer at the University of Tokyo. The proposed improvements have also shown a high potential to tackle the large-scale image compositing problem on leading-edge HPC systems where an ever increasing number of processing cores is involved..
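The Binary-Swap scheme investigated in these two papers halves each process's image portion per exchange round, so with p = 2^k processes every process finishes owning a fully composited 1/p of the image. A single-process simulation of the communication pattern is sketched below; compositing is replaced by a simple associative sum, and the hybrid MPI-plus-threads improvements proposed in the papers are omitted.

```python
import numpy as np

def binary_swap(images):
    """Single-process simulation of Binary-Swap compositing.

    Compositing is modeled by an associative sum instead of real
    alpha blending, to keep the sketch minimal. With p = 2**k inputs,
    rank i ends up holding the fully composited i-th piece, so the
    concatenation of all pieces reconstructs the composited image.
    """
    p = len(images)
    assert p > 0 and p & (p - 1) == 0, "needs a power-of-two process count"
    parts = [np.asarray(im, dtype=float) for im in images]
    stride = p // 2
    while stride >= 1:
        new = [None] * p
        for i in range(p):
            j = i ^ stride                        # exchange partner this round
            half = 0 if (i & stride) == 0 else 1  # which half this rank keeps
            mine = np.array_split(parts[i], 2)[half]
            theirs = np.array_split(parts[j], 2)[half]
            new[i] = mine + theirs                # composite the received half
        parts = new
        stride //= 2
    return np.concatenate(parts)
```

Each round exchanges half of the remaining portion, so the per-process data volume shrinks geometrically while all processes stay busy, which is why the method scales better than direct-send gathering to a single node.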
81. Kenjiro Shimano, Takahiro Kumano, Michitoshi Takagi, Kenji Ono, Nariaki Horinouchi, Seiji Tarumi, Wind Tunnel Testing of JSAE Standard Low-aerodynamic-drag Vehicle Body Using 1/5 Scale Model, Review of Automotive Engineering, 10.11351/jsaereview.30.51, 30, 1, 51-60, 2009, As computational fluid dynamics plays a crucial role in vehicle body design, the CFD committee, one of JSAE's technical committees, embarked on a benchmark project in which a number of CFD codes would be tested on the same vehicle body. A wind tunnel test of the 1/5 scale model was conducted to provide reference data for the benchmark test. In this paper, details of the measurements are presented. On the body surface, smooth flow patterns were visualised by tufting. It was also found that total pressure loss in the wake of the model displayed a unique pattern similar to a mushroom cloud..
82. Seamless Environment of Fluid Simulation with Volume Data.
83. 842 Airbag Deployment Simulation using Level Set Function.
84. F1-3 Environment of software development and visualization for life science.
85. F1-7 Numerical Simulation for the Less Invasive Therapy using High Intensity Focused Ultrasound.
86. F1-9 Development of a middleware for life science simulation.
87. Three Dimensional Flow around a Cylindrical Object on pFTT Mesh with Adaptive Refinement.
88. Investigation of Incompressible Flow Calculation for Thin Objects on Cartesian Grid.
89. ROI Detection and Flow Visualization Based on Gaze Information.
90. Coupled analysis of high-speed flow and large-deformable shell using level set function.
91. OUT OF CORE IMPLEMENTATION OF LINE DRAWINGS FOR LARGE SCALE VOLUME DATA.
92. Development of Advanced Simulation System for Thermal Fluid/Structure Interaction Problem and Its Automotive Application.
93. NUMERICAL SIMULATION FOR THE ASSISTANCE OF HIGH INTENSITY FOCUSED ULTRASOUND THERAPY.
94. Level set particle interface treatment for large deformation FSI analyses.
95. G1305 Investigation into Interface Treatment in Fluid-Structure Coupling Method with Level Set Function(2).
96. G1305 Investigation into Interface Treatment in Fluid-Structure Coupling Method with Level Set Function(1).
97. Identification of ROI in 3D Flow Field Using Eye Tracking Data and Automatic Streamline Placement.
98. AMR simulation with pFTT data structure.
99. Development of Object-oriented Parallel Class Library and Performance Evaluation of Benchmark Code.
100. 4101 Development of the application for the design of medical ultrasound transducers.
101. 1106 Fluid-Structure Coupled Analysis using Partitioned Solution Method with Level Set Function.
102. 2125 A Study of AMR simulation with pFTT Data Structures.
103. 2218 An Implementation of Boundary Condition of Incompressible Flow Solver on Voxel Method.
104. 3638 Simulation of Temperature Rise during High Intensity Focused Ultrasound Therapy.
105. Progressive Visualization System and Its Utilization.
106. SPHERE.
107. 125 Numerical Analysis for Aerodynamic Sound using Limited Compressible Formulation.
108. SPHERE: A Framework for Development of Physical Simulation.
109. 606 SPHERE.
110. 629 A Study of Simulation Using Octree with Different Data Structures.
111. F04-(2) Construction of Problem Solving Environment assisted by Visual Data Mining Technology.
112. A Novel Platform for Next-Generation Visualization System.
113. Gaze-Directed Flow Visualization System.
114. Design and Development of Multi-Platform Visualization API.
115. A Novel Platform for Next-Generation Visualization System.
116. SPHERE.
117. Potential of CFD System Based on Volume Data(Computer Aided Design and Production System Based on Voxel or Volume Data Structure).
118. Simulation of ultrasound propagation in body.
119. Performance evaluation of interactive volume rendering with a hardware compositor.
120. High Performance Visualization on PC Cluster with Image Merge Unit.
121. Development of Interactive Visualization System for Unsteady Large-Scale Data Set.
122. Experiences that Got Stuck in the Deep of CFD.
123. Kenji Ono, Role of Fluid Dynamics Technology in Automobile Design, Review of Automotive Engineering, 25, 2, 129-134, 2004.04.
124. Development of Interactive Visualization System for Unsteady Large-Scale Data Set.
125. Gabriel G. Rosa, Eric B. Lum, Kwan Liu Ma, Kenji Ono, An interactive volume visualization system for transient flow analysis, Proceedings of the 2003 Eurographics/IEEE TVCG Workshop on Volume Graphics, VG '03, 10.1145/827051.827072, 45, 2003.12, This paper describes the design and performance of an interactive visualization system developed specifically for improved understanding of time-varying volume data from thermal flow simulations for vehicle cabin and ventilation design. The system uses compression to allow for better memory utilization and faster data transfer, hardware-accelerated rendering to enable interactive exploration, and an intuitive user interface to support comparative visualization. In particular, the interactive exploration capability offered by the system raises scientists to a new level of insight and comprehension. Compared to a previous visualization solution, the system helps scientists identify and correct design problems more quickly..
126. Development of Interactive Volume Rendering System for Product Design.
127. A Robust Fluid Simulation System to support Product Design.
128. Research Trend of Numerical Prediction of Aerodynamic Sound in Automobile Design.
129. An Implementation of Boundary Condition to Poisson Equation for Complex Geometries on Voxel Method.
130. Product Design using Cartesian Mesh Method(CFD toward Practical Applications).
131. Robust approach of CFD for automotive applications.
132. Development of Predicting Technology for Prevention of Snow Intrusion into Air Cleaner.
133. Numerical Analysis of Automotive Wind Noise.
134. Kenji Ono, Ryutaro Himeno, Tatsuya Fukushima, Prediction of wind noise radiated from passenger cars and its evaluation based on auralization, Journal of Wind Engineering and Industrial Aerodynamics, 81, 403-419, 1999.05, This paper describes the prediction of wind noise radiated from automobiles and its reduction. The Lighthill acoustic analogy was employed to estimate the wind noise in the far field. To compute accurate pressure fluctuations, which act as the noise sources in the analogy, an overlapped grid system is used to calculate the flow fields in detail with a finite-difference method. This approach was applied to predict the wind noise radiated from a door mirror and a front pillar. Measured data are compared with the computed pressure fluctuations on the side-window surface; the predicted pressure fluctuations agree well with the experimental results. The shapes of the front pillar and the door mirror were then modified based on the computed results so as to reduce the wind noise, and the effects of the modification were confirmed by additional experiments..
135. Kenji Ono, K. Fujitani, H. Fujita, Applications of CFD using voxel modeling to vehicle development, Proceedings of the 1999 3rd ASME/JSME Joint Fluids Engineering Conference, FEDSM'99, San Francisco, California, USA, 18-23 July 1999 (CD-ROM), 1, 1999, The purpose of this study is to construct a practical flow simulation system and to investigate its feasibility for the actual vehicle development process. To obtain solutions quickly, a well-established Cartesian solver is employed with a volume mesh (voxel) system, which has the great advantages of short modeling time and robustness for complex geometry. In addition, the incorporated hierarchical nested mesh structure yields high-resolution results without increasing the computing time. Computed results were demonstrated for underhood flows, flows passing through the front grilles, and an air-conditioner duct flow. All computed results showed good agreement with the measured flow fields, with a sufficiently short turnaround time. Consequently, the present approach to simulating the flow around complex shapes is very efficient and powerful, and has enough accuracy to support the vehicle development process..
136. Kenji Ono, Ryutaro Himeno, Sanae Sato, Naoshi Kikuchi, Visualization of a Flow around a Circular Cylinder with Spiral Grooves Using CFD, Album of visualization, 13, 17-18, 1996.12.
137. Synthesis of Aerodynamic Noise Emitted from a Door Mirror and Visualization of Flows around it Using CFD.
138. 3-D Unsteady Incompressible Flow Analysis Using Multi-level Cartesian Grid System.
139. A CFD STUDY OF CHANGES IN FLOW CAUSED BY SMALL MODIFICATION IN A-PILLAR SHAPE OF A PASSENGER CAR.
140. Synthesis of Aerodynamic Noise Emitted from a Door Mirror and Visualization of Flows around it Using CFD.
141. Ryutaro Himeno, Tatsuya Fukushima, Kenji Ono, Computation of aerodynamic noise emitted from a door mirror of a passenger car and its synthesis, American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FED, 242, 167-173, 1996, The aerodynamic noise caused by an outside door mirror of a passenger car was calculated and synthesized. Flows around the car, including the mirror, were computed using a finite-difference method and an overlapping grid technique. The sound pressure of the aerodynamic noise in the far field was computed with Curle's formula, using the unsteady pressure distribution on the mirror obtained numerically from the incompressible Navier-Stokes equations. The sound is then synthesized for engineers to evaluate. We investigated the relation between the noise sources and the flow field and found that the strong noise sources are distributed along separation lines. This means it is possible to reduce the noise by avoiding separation or fixing the separation lines. With this in mind, we modified the mirror shape and succeeded in reducing the aerodynamic noise..
142. Flow around Two Square Cylinders in Staggered Arrangements.
143. Investigation of Automotive Radiator Using 3-Dimensional Analysis.