Kyushu University Researcher Information
List of Presentations
Ryoma Bise (備瀬 竜馬)   Data last updated: 2023.12.04

Professor / Faculty of Information Science and Electrical Engineering, Department of Advanced Information Technology, Special Course on Practical Data Science


Conference Presentations
1. Takumi Okuo, Kazuya Nishimura, Hiroaki Ito, Kazuhiro Terada, Akihiko Yoshizawa, Ryoma Bise, Estimation of the positive tumor ratio by Learning from Label Proportions, Meeting on Image Recognition and Understanding (MIRU2023), 2023.07.
2. Shinnosuke Matsuo, Daiki Suehiro, Seiichi Uchida, Hiroaki Ito, Kazuhiro Terada, Akihiko Yoshizawa, Ryoma Bise, Learning from partial label proportions for whole slide images (WSIs), Meeting on Image Recognition and Understanding (MIRU2023), 2023.07.
3. Takanori Asanomi, Shinnosuke Matsuo, Daiki Suehiro, Ryoma Bise, Bag-level data augmentation for learning from class proportions, Meeting on Image Recognition and Understanding (MIRU2023), 2023.07.
4. 伊東隼人, 小田昌宏, 申忱, 王成, 三浦幹太, 佐藤淳哉, 大竹義人, 備瀬竜馬, 古川亮, 本谷秀堅, 増谷佳孝, 森健策, Report on MICCAI 2022, IEICE Technical Report, Technical Committee on Medical Imaging (MI), 2023.03.
5. 志久開人, 白井洸充, 石原健, 備瀬竜馬, Tracking of time-series 3D neurons in nematodes using non-rigid registration, IEICE Technical Report, Technical Committee on Medical Imaging (MI), 2023.03.
6. 松尾信之介, 末廣大貴, 内田誠一, 備瀬竜馬, Learning from partial label proportions, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), March 2023, 2023.03.
7. 西村和也, 刀谷在美, 中馬新一郎, 備瀬竜馬, Cell mitosis detection using partial annotations, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), March 2023, 2023.03.
8. 原田翔太, 備瀬竜馬, 田中聖人, 内田誠, An ulcerative colitis classification model that exploits image-acquisition-order information, IPSJ SIG on Computer Vision and Image Media (CVIM), 2023.01.
9. 松尾信之介, 備瀬竜馬, 内田誠一, 末廣大貴, Learning from class proportions via a pseudo-labeling method based on online prediction theory, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), December 2022, 2022.12.
10. 浅海標徳, 西村和也, 備瀬竜馬, Multi-object tracking by temporal information aggregation with feature warping, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), December 2022, 2022.12.
11. 藤井和磨, 末廣大貴, 備瀬竜馬, Cell detection using partially supervised data, IEICE Technical Report, PRMU, 2022.12.
12. Takanori Asanomi, Kazuya Nishimura, Heon Song, Junya Hayashida, Hiroyuki Sekiguchi, Takayuki Yagi, Imari Sato, Ryoma Bise, Deep Non-Rigid Registration for Noisy-and-Corrupted Images, Meeting on Image Recognition and Understanding (MIRU2022), 2022.07.
13. Takanori Asanomi, Junya Hayashida, Kazuya Nishimura, Ryoma Bise, Multi-object tracking that accounts for global temporal information via self-attention, Meeting on Image Recognition and Understanding (MIRU2022), 2022.07.
14. Kazuya Nishimura, Ryoma Bise, Cell image segmentation by single-instance pasting with multiple types of weak supervision, Meeting on Image Recognition and Understanding (MIRU2022), 2022.07.
15. Yuki Shigeyasu, Shota Harada, Kengo Araki, Akihiko Yoshizawa, Kazuhiro Terada, Yuki Teramoto, Ryoma Bise, A pseudo-label selection method based on the spatial distribution of tumor regions for pathological image segmentation, Meeting on Image Recognition and Understanding (MIRU2022), 2022.07.
16. Shota Harada, Ryoma Bise, Kengo Araki, Akihiko Yoshizawa, Kazuhiro Terada, Mariyo Kurata-Rokutan, Naoki Nakajima, Hiroyuki Abe, Tetsuo Ushiku, Seiichi Uchida, Semi-Supervised Domain Adaptation for Class-Imbalanced Dataset, Meeting on Image Recognition and Understanding (MIRU2022), 2022.07.
17. 重安勇輝, 原田翔太, 荒木健吾, 吉澤明彦, 寺田和弘, 寺本祐記, 備瀬竜馬, Semi-supervised learning based on the spatial distribution of tumor regions in pathological images, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), 2022.05.
18. 備瀬竜馬, Development of photoacoustic 3D imaging technology and its medical applications, FY2021 Spring (192nd) Meeting of the Japan Society of Medical Imaging and Information Sciences, 2022.02.
19. 備瀬竜馬, Machine learning with efficient labeling for cell image analysis, Joint Forum on Medical Imaging, 2022.01.
20. 杉本龍彦, 寺田和弘, 吉澤明彦, 備瀬竜馬, Classification of cancer cells using simple annotations, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), December 2021, 2021.12.
21. 荒木健吾, 倉田麻理代, 寺田和弘, 吉澤明彦, 備瀬竜馬, Segmentation of cervical cancer pathology images, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), December 2021, 2021.12.
22. Takanori ASANOMI, Kazuya NISHIMURA, Heon SONG, Junya HAYASHIDA, Hiroyuki SEKIGUCHI, Takayuki YAGI, Imari SATO, and Ryoma BISE, Unsupervised non-rigid alignment for multiple noisy images, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), August 2021, 2021.08.
23. Hyeonwoo Cho, Kazuya Nishimura, Kazuhide Watanabe, Ryoma Bise, Domain Extension in Cell Detection by Pseudo-Cell-Position Heatmap, Meeting on Image Recognition and Understanding (MIRU2021), 2021.07.
24. Shota Harada, Ryoma Bise, Hideaki Hayashi, Kiyohito Tanaka, Seiichi Uchida, Disentangled Representation Learning with Temporal Continuity for Ulcerative Colitis Classification, Meeting on Image Recognition and Understanding (MIRU2021), 2021.07.
25. Kazuya Nishimura, Hyeonwoo Cho, Ryoma Bise, Cell Detection in Time-Lapse Images via Tracking, Meeting on Image Recognition and Understanding (MIRU2021), 2021.07.
26. Kazuma Fujii, Daiki Suehiro, Kazuya Nishimura, Ryoma Bise, Cell Detection for Imperfect Annotation Problem by using Top-Ranking for Pseudo-Labeling, Meeting on Image Recognition and Understanding (MIRU2021), 2021.07.
27. 山根健寛, 備瀬竜馬, Extraction of three-dimensional vascular structures using deep learning, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2021.09.
28. 重安勇輝, 備瀬竜馬, Automatic extraction of tumor regions in pathological images, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2021.09.
29. 林田純弥, 西村和也, 備瀬竜馬, Cell tracking considering the consistency of global spatio-temporal context, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), October 2020, 2020.10.
30. 原田翔太, 早志英朗, 備瀬竜馬, 河村卓二, 碕山直邦, 田中聖人, 内田誠一, Acquiring disentangled feature representations for Mayo classification of endoscopic images, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), October 2020, 2020.10.
31. 門田健明, 安部健太郎, 備瀬竜馬, 河村卓二, 碕山直邦, 田中聖人, 内田誠一, Severity classification of ulcerative colitis based on simple relative annotations, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), October 2020, 2020.10.
32. Ryoma Bise, Cell Tracking and Segmentation for Cell Image Analysis, JSPS Establishing International Research Network of Mathematical Oncology, 2020.10.
33. 藤井和磨, 西村和也, 林田純弥, 備瀬竜馬, 3D multi-cell detection using deep learning, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2020.09.
34. 浅海標徳, 備瀬竜馬, Improving the accuracy of video interpolation by multi-task learning, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2020.09.
35. H. Cho, K. Nishimura, R. Bise, Cell detection for various cell shapes, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2020.09.
36. 杉本龍彦, 備瀬竜馬, A simplified annotation method for pathological images using PU-Learning, Meeting on Image Recognition and Understanding (MIRU2020), 2020.08.
37. J. Hayashida, K. Nishimura, and R. Bise, MPM: Joint Representation of Motion and Position Map for Cell Tracking, Meeting on Image Recognition and Understanding (MIRU2020), 2020.08.
38. S. Harada, R. Bise, H. Hayashi, K. Tanaka, and S. Uchida, Self-Constrained Clustering with Prior Knowledge of Endoscopic Image Sequence, Meeting on Image Recognition and Understanding (MIRU2020), 2020.08.
39. 西村和也, 林田純弥, C. Wang, D.F.E. Ker, 備瀬竜馬, Cell tracking based on weakly supervised learning, Meeting on Image Recognition and Understanding (MIRU2020), 2020.08.
40. 備瀬竜馬, Application of deep learning to pathological diagnosis, 109th Annual Meeting of the Japanese Society of Pathology, 2020.07.
42. 西村和也, 林田純弥, Ker Elmer, Wang Chenyang, 備瀬竜馬, Cell tracking based on weakly supervised learning, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), May 2020, 2020.05.
43. 備瀬竜馬, Toward annotation-free automatic cell image analysis with weakly supervised learning, 30th Annual Meeting of the Japan Cytometry Society, 2020.05.
44. 備瀬竜馬, Toward annotation-free automatic cell image analysis with weakly supervised learning, 30th Annual Meeting of the Japan Cytometry Society, Symposium 3 [The dawn of next-generation cell recognition and tracking systems], 2020.05.
45. Dan Wang, Xu Zhang, Kazuya Nishimura, Rocky Tuan, Ryoma Bise, Dai Fei Elmer Ker, Label-Free Cell Detection in Phase Contrast Images Using Artificial Neural Networks, Orthopaedic Research Society (ORS) Annual Meeting ORS2020, 2020.03.
46. 安部健太郎, Yan Zheng, 早志英朗, 備瀬竜馬, 河村卓二, 碕山直邦, 田中聖人, 内田誠一, Severity prediction for colonoscopy images by learning to rank, 2020 IEICE General Conference, 2020.03.
47. 西村和也, Dai Fei Elmer Ker, 備瀬竜馬, A quantification method using simple annotations for cell-society diversity analysis, 3rd Young Researchers' Workshop of the Cell Diverse project, 2020.02.
48. 林田純弥, 備瀬竜馬, An automatic cell tracking method for cell-society analysis, 5th Open Symposium of the Cell Diverse project, 2020.01.
49. 西村和也, Dai Fei Elmer Ker, 備瀬竜馬, A tracking method using simple annotations for cell-society diversity analysis, 5th Open Symposium of the Cell Diverse project, 2020.01.
50. 德永宏樹 (Kyushu Univ.), 寺本祐記, 吉澤明彦 (Kyoto University Hospital), 備瀬竜馬 (Kyushu Univ./NII), A learning method exploiting cancer-type proportions for cancer-type segmentation of pathological images, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), December 2019, 2019.12.
51. Junya Hayashida, Ryoma Bise, Cell Tracking with CNN for Cell Detection and Association, The 15th Joint Workshop on Machine Perception and Robotics (MPR2019), 2019.11.
52. Nishimura Kazuya, Dai Fei Elmer Ker, Ryoma Bise, Weakly supervised Cell Segmentation, The 15th Joint Workshop on Machine Perception and Robotics (MPR2019), 2019.11.
53. Ryo Kikkawa, Ryoma Bise, Weakly Supervised Body Hair Detection in Photoacoustic Image, The 15th Joint Workshop on Machine Perception and Robotics (MPR2019), 2019.11.
54. Kentaro Abe, Hideaki Hayashi, Ryoma Bise, Takuji Kawamura, Naokuni Sakiyama, Kiyohito Tanaka, Seiichi Uchida, Clustering of Colonoscopic Image with Multi-Task Learning, The 15th Joint Workshop on Machine Perception and Robotics (MPR2019), 2019.11.
55. J. Hayashida, R. Bise, Cell Tracking with Deep Learning for Cell Detection and Motion Estimation in Low-Frame-Rate, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI2019), 2019.10.
56. K. Nishimura, E.D. Ker, R. Bise, Weakly Supervised Cell Segmentation in Dense by Propagating from Detection Map, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI2019), 2019.10.
57. Ryoma Bise, Kentaro Abe, Hideaki Hayashi, Kiyohito Tanaka, Seiichi Uchida, Efficient Soft-Constrained Clustering for Group-Based Labeling, 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019, 2019.10, [URL], We propose a soft-constrained clustering method for group-based labeling of medical images. Since the idea of group-based labeling is to attach the label to a group of samples at once, we need to have groups (i.e., clusters) with high purity. The proposed method is formulated to achieve high purity even for difficult clustering tasks such as medical image clustering, where image samples of the same class are often very distant in their feature space. In fact, those images degrade the performance of conventional constrained clustering methods. Experiments with an endoscopy image dataset demonstrated that our method outperformed various state-of-the-art methods..
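A minimal Python sketch of the group-based-labeling intuition summarized in the entry above: clusters are formed under soft pairwise constraints so that samples meant to share a label end up in the same high-purity cluster. The penalized k-means below only illustrates soft constraints and is not the paper's formulation; all data and parameters are hypothetical.

    import numpy as np

    def soft_constrained_kmeans(X, k, must_link, penalty=2.0, n_iter=30, seed=0):
        """Toy k-means whose assignment step adds a soft cost for violating
        must-link pairs (a stand-in for soft clustering constraints)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            for i, x in enumerate(X):
                costs = np.sum((centers - x) ** 2, axis=1)
                for a, b in must_link:
                    partner = b if a == i else a if b == i else None
                    if partner is not None:
                        costs += penalty * (np.arange(k) != labels[partner])
                labels[i] = int(np.argmin(costs))
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return labels

    # Two well-separated groups plus a few must-link hints (hypothetical data).
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
    print(soft_constrained_kmeans(X, k=2, must_link=[(0, 1), (30, 31)]))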
58. Nishimura Kazuya, Dai Fei Elmer Ker, Ryoma Bise, Deep learning for cell segmentation with less annotation, Resonance Bio International Symposium, 2019.10.
59. Junya Hayashida, Ryoma Bise, Cell Tracking by estimating cell motions for high-throughput screening, Resonance Bio International Symposium, 2019.10.
60. 林田純弥, 西村和也, 備瀬竜馬, Cell tracking by a CNN that jointly learns cell positions and cell associations, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), October 2019, 2019.10.
61. 備瀬竜馬, 安部健太郎, 早志英朗 (Kyushu Univ.), 田中聖人 (Kyoto Second Red Cross Hospital), 内田誠一 (Kyushu Univ.), Simplifying the labeling of endoscopic images by soft-constrained clustering, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), September 2019, 2019.09.
62. 吉川亮, 備瀬竜馬, Body hair region recognition in photoacoustic images using Positive-Unlabeled learning with automatic positive sampling, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), September 2019, 2019.09.
63. 安部健太郎, 早志英朗, 備瀬竜馬 (Kyushu Univ.), 河村卓二, 碕山直邦, 田中聖人 (Kyoto Second Red Cross Hospital), 内田誠一 (Kyushu Univ.), Classification of anatomical site and findings in colonoscopy images by multi-task learning, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), September 2019, 2019.09.
64. 西村和也, 林田純弥, 備瀬竜馬, Cell mitosis recognition with a time-series 3D CNN regression model, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), September 2019, 2019.09.
65. 荒木健吾, 徳永宏樹, 備瀬竜馬, 内田誠一, Classification of cervical cancer by deep learning, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2019.09.
66. 杉本龍彦, 徳永宏樹, Xiaotong Ji, 備瀬竜馬, Detection of positive cells in pathological images, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2019.09.
67. 川原祐樹, 備瀬竜馬, 木村暁, 内田誠一, Tracking of intracellular centrosomes with the stable marriage algorithm, Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2019.09.
68. S. Harada, H. Hayashi, R. Bise, K. Tanaka, Q. Meng, and S. Uchida, Endoscopic Image Clustering with Temporal Ordering Information Based on Dynamic Programming, 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019, 2019.07.
69. D. Harada, R. Bise, H. Tokunaga, W. Ohyama, S. Oka, T. Fujimori, and S. Uchida, Scribbles for Metric Learning in Histological Image Segmentation, 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019, 2019.07.
70. 西村和也 (Kyushu Univ.), Elmer Dai Fei Ker (The Chinese University of Hong Kong), 備瀬竜馬 (Kyushu Univ.), Cell region recognition in microscopy images using weakly supervised learning, Meeting on Image Recognition and Understanding (MIRU2019), 2019.07.
71. 林田純弥, 備瀬竜馬, Cell tracking in low-frame-rate image sequences by estimating cell behavior, Meeting on Image Recognition and Understanding (MIRU2019), 2019.07.
72. Shota Harada, Hideaki Hayashi (Kyushu Univ.), Ryoma Bise (Kyushu Univ./NII), Qier Meng (NII), Kiyohito Tanaka (Kyoto Second Red Cross Hospital), Seiichi Uchida (Kyushu Univ./NII), Endoscopic Image Clustering Based on Temporal Ordering Information, Meeting on Image Recognition and Understanding (MIRU2019), 2019.07.
73. H. Tokunaga, Y. Teramoto, A. Yoshizawa, R. Bise, Adaptive Weighting Multi-Field-of-View CNN for Semantic Segmentation in Pathology, IEEE Conference on Computer Vision and Pattern Recognition, 2019.06.
74. 西村和也, Dai Fei Elmer Ker, 備瀬竜馬, Reducing the effort of preparing training data for cell region recognition through Deep Neural Network analysis, 4th Open Symposium of the Cell Diverse project, 2019.06.
75. 林田純弥, 備瀬竜馬, Cell tracking by estimating cell movement direction with a Deep Neural Network, 4th Open Symposium of the Cell Diverse project, 2019.06.
76. 西村和也, Ker Dai Fei Elmer, 備瀬竜馬, Cell region recognition for multiple cell types using weakly supervised learning, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), 2019.05.
77. 林田純弥, 備瀬竜馬, Cell tracking in low-frame-rate image sequences using estimated cell trajectories, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), 2019.05.
78. Ryo Kikkawa, Hiroyuki Sekiguchi, Itaru Tsuge, Susumu Saito, Ryoma Bise, Semi-supervised learning with structured knowledge for body hair detection in photoacoustic image, 16th IEEE International Symposium on Biomedical Imaging, ISBI 2019, 2019.04, [URL], Photoacoustic (PA) imaging is a promising new imaging technology for non-invasively visualizing blood vessels inside biological tissues. In addition to blood vessels, body hairs are also visualized in PA imaging, and the body hair signals degrade the visibility of blood vessels. For learning a body hair classifier, the amount of real training and test data is limited, because PA imaging is a new modality. To address this problem, we propose a novel semi-supervised learning (SSL) method for extracting body hairs. The method effectively learns the discriminative model from small labeled training data and small unlabeled test data by introducing prior knowledge, of the orientation similarity among adjacent body hairs, into SSL. Experimental results using real PA data demonstrate that the proposed approach is effective for extracting body hairs as compared with several baseline methods..
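For illustration only, a small Python sketch of the semi-supervised setting described in the abstract above: a handful of labeled candidates plus many unlabeled ones, with an orientation feature loosely standing in for the paper's prior that adjacent body hairs share similar orientations. It uses generic graph-based label spreading from scikit-learn, not the authors' method; all data are synthetic.

    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)

    # Synthetic candidate regions: (x, y, local orientation). Hair-like candidates
    # share a similar orientation; other candidates do not.
    hair = np.column_stack([rng.uniform(0, 10, 80), rng.uniform(0, 10, 80),
                            rng.normal(0.8, 0.05, 80)])
    other = np.column_stack([rng.uniform(0, 10, 80), rng.uniform(0, 10, 80),
                             rng.uniform(-3.0, 3.0, 80)])
    X = np.vstack([hair, other])
    y_true = np.array([1] * 80 + [0] * 80)

    # Only a few labels are available; -1 marks unlabeled samples.
    y = np.full(len(X), -1)
    labeled = rng.choice(len(X), 10, replace=False)
    y[labeled] = y_true[labeled]

    # Graph-based propagation spreads the few labels to similar neighbours.
    model = LabelSpreading(kernel="rbf", gamma=2.0).fit(X, y)
    print("predicted labels for the first five candidates:", model.transduction_[:5])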
79. 林田純弥, 備瀬竜馬, Cell tracking in low-frame-rate image sequences by estimating cell behavior, Resonance Bio "Buy Me!", 2019.04.
80. 原田翔太, 早志英朗, 備瀬竜馬, 田中聖人, Qier Meng, 内田誠一, Endoscopic image sequence clustering using dynamic programming, Workshop on Biomedical Imaging and Medical Artificial Intelligence, 2019.03.
81. Yuki Teramoto, Akihiko Yoshizawa, Ryoma Bise, Hiroki Tokunaga, Naoki Nakajima, and Hironori Haga, Novel approaches for assessment of PD-L1 immunohistochemistry in lung adenocarcinoma through deep learning algorithms, USCAP2019, 2019.03, Background:
The programmed death receptor 1 (PD-1) and its ligand PD-L1 pathway has shown promising clinical success as an immunotherapy target in cancer treatment, especially in non-small cell lung cancer. Although PD-L1 immunohistochemistry (IHC) is used for the decision on treatment with PD-L1 checkpoint inhibitors, evaluation of PD-L1 IHC may be difficult. Macrophages often exhibit membranous staining and may be misinterpreted as cancer cells. Also, unspecific cytoplasmic staining may occur, and weak partial membranous staining of tumor cells counts as positive but may be difficult to detect. The mentioned difficulties are especially troublesome when a very limited proportion of tumor cells such as 1% is sufficient for a positive test, as is the case for some markers, and may lead to significant interobserver variation. The purpose of our study was to develop a deep convolutional neural network system for automated semi-quantitative assessment of PD-L1 IHC.


Design:
We adopted an ensemble-based two-step strategy using the whole slide images of a tissue microarray comprising 343 surgical specimens. Three trained pathologists independently scored the PD-L1 staining proportion into two grades (< 50% or ≥ 50%), and discrepant cases were resolved in discussion with all observers. In the first step, we applied the U-Net architecture to detect all nuclei of tumoral and non-tumoral cells in each core. To create the ground-truth labels, we annotated bounding boxes for over 4000 nuclei. In the second step, our CNN model classifies the detected cells as either PD-L1 positives (Class 0), negatives (Class 1) or non-tumoral cells (Class 3) (e.g., macrophages, necrotic cells). To train the CNN, pathologists annotated regions so that each region only includes a single class of cells, and the small patch image (64x64) around the detected cells is labeled based on the annotated regions. In the inference step, the trained CNN infers the class of the detected cells. Based on the inference, the percentage of PD-L1 positive cells of each core was obtained by dividing the number of PD-L1 expressing cancer cells by the total number of cancer cells, which is the union of the regions represented by Classes 0 and 1.


Results:
Our method reproduces the expert analysis, achieving an accuracy of 0.96 for automatic diagnosis in the case of tumor proportion score (TPS) ≥ 50%.


Conclusion:
These results clearly open the door to highly-accurate and fully-automatic analysis of PD-L1 IHC..
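The scoring step described in the Design section reduces to simple counting once per-cell classes have been predicted. A short illustrative Python sketch (hypothetical class codes and predictions, not the study's code):

    from collections import Counter

    # Hypothetical per-cell predictions: 0 = PD-L1 positive tumor cell,
    # 1 = PD-L1 negative tumor cell, 2 = non-tumor cell (macrophage, necrosis, ...).
    predicted = [0, 1, 1, 0, 2, 1, 0, 2, 1, 0]

    counts = Counter(predicted)
    tumor_cells = counts[0] + counts[1]            # all tumor cells in the core
    tps = 100.0 * counts[0] / tumor_cells          # percentage of PD-L1 positive tumor cells
    print(f"TPS = {tps:.1f}%", "(>= 50%)" if tps >= 50 else "(< 50%)")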
82. 西村和也, 備瀬竜馬, A cell region recognition method for microscopy images using weakly supervised learning, 2nd Young Researchers' Workshop of the Cell Diverse project, 2019.02.
83. 林田純弥, 備瀬竜馬, Cell tracking by estimating cell positions and behavior, 2nd Young Researchers' Workshop of the Cell Diverse project, 2019.02.
84. Matsumoto Y, Gu L, Bise R, Asao Y, Sekiguchi H, Yoshikawa A, Ishii T, Takada M, Kataoka M, Sakurai T, Yagi T, Sato I, Togashi K, Shiina T, and Toi M., Machine learning-based structural analysis and oxygen saturation measurement of tumor-associated vessels in breast cancer using a photoacoustic tomography system, Breast Cancer Symposium, 2018.12.
86. K.Kajiya, R.Bise, C.Seidel, I. Sato, T. Yamashita, and M. Detmar, Cleaning of a human skin and its application for the three-dimensional visualization of the vasculature, Journal of Investigative Dermatology, 2018.11.
87. 原田大輔, 備瀬竜馬, 岡 早苗, Timothy Francis Day, 藤森俊彦, 内田誠一, Mouse embryo region segmentation using graph cuts and a CNN, IEICE Technical Report, 2018.11.
88. 林田純弥, 備瀬竜馬, Cell tracking using deep learning, Proceedings of the Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2018.09.
89. 西村和也, 備瀬竜馬, Cell segmentation in microscopy images, Proceedings of the Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2018.09.
90. Qier Meng, Kiyohito Tanaka, Shin’ichi Satoh, Masaru Kitsuregawa, Yusuke Kurose, Tatsuya Harada, Hideaki Hayashi, Ryoma Bise, Seiichi Uchida, Masahiro Oda, Kensaku Mori, Anatomical location classification of gastroscopic images using DenseNet trained from Cyclical Learning Rate, Meeting on Image Recognition and Understanding (MIRU2018), 2018.08.
91. 徳永宏樹, 寺本祐記, 吉澤明彦, 備瀬竜馬, An Adaptively Weighting Multi-scale FCN for pathological image segmentation, Meeting on Image Recognition and Understanding (MIRU2018), 2018.08.
92. ソン ホン, リ ジンホ, 備瀬竜馬, 内田誠一, Using intermediate layers of a network for object tracking, Meeting on Image Recognition and Understanding (MIRU2018), 2018.08.
93. 吉川亮, 関口博之, 津下到, 齊藤晋, 備瀬竜馬, Body hair region recognition in photoacoustic images using semi-supervised learning, Meeting on Image Recognition and Understanding (MIRU2018), 2018.08.
94. 徳永宏樹, 寺本祐記, 吉澤明彦, 備瀬竜馬, An Adaptively Weighting Multi-scale FCN for pathological image segmentation, IPSJ SIG on Computer Vision and Image Media (CVIM), 2018.05.
95. 吉川亮, 関口博之, 津下到, 齊藤晋, 備瀬竜馬, Body hair region recognition in photoacoustic images using semi-supervised learning, IPSJ SIG on Computer Vision and Image Media (CVIM), 2018.05.
96. Qiuyu Chen, Ryoma Bise, Lin Gu, Yinqiang Zheng, Imari Sato, Jenq Neng Hwang, Sadakazu Aiso, Nobuaki Imanishi, Virtual Blood Vessels in Complex Background Using Stereo X-Ray Images, 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017, 2018.01, We propose a fully automatic system to reconstruct and visualize 3D blood vessels in Augmented Reality (AR) system from stereo X-ray images with bones and body fat. Currently, typical 3D imaging technologies are expensive and carrying the risk of irradiation exposure. To reduce the potential harm, we only need to take two X-ray images before visualizing the vessels. Our system can effectively reconstruct and visualize vessels in following steps. We first conduct initial segmentation using Markov Random Field and then refine segmentation in an entropy based post-process. We parse the segmented vessels by extracting their centerlines and generating trees. We propose a coarse-to-fine scheme for stereo matching, including initial matching using affine transform and dense matching using Hungarian algorithm guided by Gaussian regression. Finally, we render and visualize the reconstructed model in a HoloLens based AR system, which can essentially change the way of visualizing medical data. We have evaluated its performance by using synthetic and real stereo X-ray images, and achieved satisfactory quantitative and qualitative results..
97. A. Kondow, K. Ohnuma, S. Nonaka, Y. Kamei, R. Bise, Y. Sato, T.J. Kobayashi and K. Hashimoto, 3D tracking of Nodal signal activation in a single cell of zebrafish embryo, 小型魚類研究会, 2017.12.
98. Mihoko Shimano, Hiroki Okawa, Yuta Asano, Ryoma Bise, Ko Nishino, Imari Sato, Wetness and color from a single multispectral image, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017.11, Visual recognition of wet surfaces and their degrees of wetness is important for many computer vision applications. It can inform slippery spots on a road to autonomous vehicles, muddy areas of a trail to humanoid robots, and the freshness of groceries to us. In the past, monochromatic appearance change, the fact that surfaces darken when wet, has been modeled to recognize wet surfaces. In this paper, we show that color change, particularly in its spectral behavior, carries rich information about a wet surface. We derive an analytical spectral appearance model of wet surfaces that expresses the characteristic spectral sharpening due to multiple scattering and absorption in the surface. We derive a novel method for estimating key parameters of this spectral appearance model, which enables the recovery of the original surface color and the degree of wetness from a single observation. Applied to a multispectral image, the method estimates the spatial map of wetness together with the dry spectral distribution of the surface. To our knowledge, this work is the first to model and leverage the spectral characteristics of wet surfaces to revert its appearance. We conduct comprehensive experimental validation with a number of wet real surfaces. The results demonstrate the accuracy of our model and the effectiveness of our method for surface wetness and color estimation..
99. Ryoma Bise, Cell Tracking for Cell Image Analysis, The Joint Workshop on Machine Perception and Robotics, 2017.10.
100. Lin Gu, Yinqiang Zheng, Ryoma Bise, Imari Sato, Nobuaki Imanishi, Sadakazu Aiso, Semi-supervised learning for biomedical image segmentation via forest oriented super pixels(voxels), 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017, 2017.09, In this paper, we focus on semi-supervised learning for biomedical image segmentation, so as to take advantage of huge unlabelled data. We observe that there usually exist some homogeneous connected areas of low confidence in biomedical images, which tend to confuse the classifier trained with limited labelled samples. To cope with this difficulty, we propose to construct forest oriented super pixels(voxels) to augment the standard random forest classifier, in which super pixels(voxels) are built upon the forest based code. Compared to the state-of-the-art, our proposed method shows superior segmentation performance on challenging 2D/3D biomedical images. The full implementation (based on Matlab) is available at https://github.com/lingucv/ssl_superpixels..
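A rough Python sketch of the general "random forest plus superpixel support" idea in the abstract above. It substitutes standard SLIC superpixels and plain color features for the paper's forest-oriented superpixels built from forest codes, so it illustrates only the aggregation step; all data are synthetic.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    image = rng.random((64, 64, 3))                      # toy image
    labels = np.full((64, 64), -1)                       # sparse labels: -1 = unlabeled
    labels[:5, :5], labels[-5:, -5:] = 1, 0              # a few foreground / background pixels

    # Pixel-wise random forest trained on the few labeled pixels.
    X = image.reshape(-1, 3)
    y = labels.ravel()
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[y >= 0], y[y >= 0])
    prob = clf.predict_proba(X)[:, 1].reshape(64, 64)

    # Superpixels give spatial support: each superpixel takes its mean probability,
    # smoothing the low-confidence homogeneous areas mentioned in the abstract.
    sp = slic(image, n_segments=100, compactness=10)
    refined = np.zeros_like(prob)
    for s in np.unique(sp):
        refined[sp == s] = prob[sp == s].mean()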
101. Mihoko Shimano, Ryoma Bise, Yinqiang Zheng, Imari Sato, Separation of transmitted light and scattering components in transmitted microscopy, 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017, 2017.09, In transmitted light microscopy, a specimen tends to be observed as unclear. This is caused by a phenomenon that an image sensor captures the sum of these scattered light rays traveled from different paths due to scattering. To cope with this problem, we propose a novel computational photography approach for separating directly transmitted light from the scattering light in a transmitted light microscope by using high-frequency lighting. We first investigated light paths and clarified what types of light overlap in transmitted light microscopy. The scattered light can be simply represented and removed by using the difference in observations between focused and unfocused conditions, where the high-frequency illumination becomes homogeneous. Our method makes a novel spatial multiple-spectral absorption analysis possible, which requires absorption coefficients to be measured in each spectrum at each position. Experiments on real biological tissues demonstrated the effectiveness of our method..
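An illustrative toy model (Python) of the separation idea in the abstract above: under high-frequency illumination, an observation with the pattern in focus and one with the pattern defocused (so the illumination becomes roughly homogeneous) differ only in the directly transmitted component. The forward model below, including the 50% duty cycle, is an assumption made for illustration, not the paper's derivation.

    import numpy as np

    rng = np.random.default_rng(0)
    direct = rng.uniform(0.4, 0.9, (8, 8))     # unknown directly transmitted light
    scatter = rng.uniform(0.1, 0.3, (8, 8))    # unknown scattered light

    focused = direct + scatter                 # lit pixels under the sharp pattern
    unfocused = 0.5 * direct + scatter         # pattern blurred out (50% duty cycle)

    # The difference of the two observations isolates the direct component.
    direct_est = 2.0 * (focused - unfocused)
    print("recovered direct component:", np.allclose(direct_est, direct))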
102. 徳永宏樹, 備瀬竜馬, A study on automatic identification of cancer cells in lung cancer pathology specimen images, 70th (FY2017) Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2017.09.
103. 吉川亮, 備瀬竜馬, Recognition and removal of body hair in photoacoustic images using machine learning, 70th (FY2017) Joint Conference of Electrical, Electronics and Information Engineers in Kyushu, 2017.09.
104. 備瀬竜馬, Yinqiang Zheng, 佐藤いまり, Image-quality improvement of photoacoustic vascular images by vessel/noise/missing-region separation and registration via low-rank optimization, 20th Meeting on Image Recognition and Understanding (MIRU2017), 2017.08.
105. 加治屋健太朗, 備瀬竜馬, Catharina Seidel, 佐藤いまり, 山下豊信, Michael Detmar, Development of a human skin clearing technique and three-dimensional visualization of capillaries, 42nd Annual Meeting of the Japanese Cosmetic Science Society, 2017.06.
106. Ryoma Bise, Yinqiang Zheng, and Imari Sato, Image-quality improvement of photoacoustic vascular images using noise separation and registration via low-rank optimization, 90th Annual Meeting of the Japan Society of Ultrasonics in Medicine, 2017.05.
107. R Bise, Y Sato, Cell tracking for cell image analysis, Biomedical Imaging and Sensing Conference, 2017.04.
108. Ryoma Bise, Yoichi Sato, Cell tracking for cell image analysis, 3rd Biomedical Imaging and Sensing Conference, 2017, Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which is capable of obtaining fine-grain cell behavior metrics. In order to address difficulties under dense culture conditions, where cell detection cannot be done reliably since cell often touch with blurry intercellular boundaries, we proposed two methods which are global data association and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying the method to the biological researches..
109. Ryoma Bise, Imari Sato, Kentaro Kajiya, Toyonobu Yamashita, 3D Structure Modeling of Dense Capillaries by Multi-objects Tracking, 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016, 2016.12, A newly developed imaging technique called light-sheet laser microscopy imaging can visualize the detailed 3D structures of capillaries. Capillaries form complicated network structures in the obtained data, and this makes it difficult to model vessel structures by existing methods that implicitly assume simple tree structures for blood vessels. To cope with such dense capillaries with network structures, we propose to track the flow of blood vessels along a base-axis using a multiple-object tracking framework. We first track multiple blood vessels in cross-sectional images along a single axis to make the trajectories of blood vessels, and then connect these blood vessels to reveal their entire structures. This framework is efficient to track densely distributed vessels since it uses only a single cross-sectional plane. The network structure is then generated in the post-processing by connecting blood vessels on the basis of orientations of the trajectories. The results of experiments using a challenging real data-set demonstrate the efficacy of the proposed method, which are capable of modeling dense capillaries..
110. K. Kajiya, R. Bise, C. Seidel, I. Sato, T. Yamashita, and M. Detmar, Cleaning of a human skin and its application for the three-dimensional visualization of the vasculature, Journal of Investigative Dermatology, 136, 9, S254, 2016, 2016.09.
111. Ryoma Bise, Yinqiang Zheng, Imari Sato, Masakazu Toi, Vascular registration in photoacoustic imaging by low-rank alignment via foreground, background and complement decomposition, 2016.01, Photoacoustic (PA) imaging has been gaining attention as a new imaging modality that can non-invasively visualize blood vessels inside biological tissues. In the process of imaging large body parts through multi-scan fusion, alignment turns out to be an important issue, since body motion degrades image quality. In this paper, we carefully examine the characteristics of PA images and propose a novel registration method that achieves better alignment while effectively decomposing the shot volumes into low-rank foreground (blood vessels), dense background (noise), and sparse complement (corruption) components on the basis of the PA characteristics. The results of experiments using a challenging real data-set demonstrate the efficacy of the proposed method, which significantly improved image quality, and had the best alignment accuracy among the state-of-the-art methods tested.
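A generic Python sketch of the low-rank-plus-sparse intuition used in the abstract above: shot volumes imaging the same vasculature are stacked as columns, the shared structure is captured by a low-rank term, and shot-specific corruption falls into a sparse term. This simple alternating truncated-SVD / soft-thresholding loop is a stand-in, not the paper's foreground/background/complement decomposition or its alignment step; all data and thresholds are hypothetical.

    import numpy as np

    def lowrank_sparse_split(D, rank=1, lam=0.5, n_iter=50):
        """Alternate a truncated SVD (low-rank part L) with soft-thresholding of
        the residual (sparse part S), so that D is approximately L + S."""
        S = np.zeros_like(D)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            R = D - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
        return L, S

    # Six repeated "shots" of the same vessel pattern plus sparse corruption.
    rng = np.random.default_rng(0)
    vessels = rng.random((100, 1))
    D = vessels @ np.ones((1, 6))
    D[rng.integers(0, 100, 10), rng.integers(0, 6, 10)] += 3.0
    L, S = lowrank_sparse_split(D)
    print("corrupted entries captured by S:", int((S > 1.0).sum()))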
112. Ryoma Bise, Yoshitaka Maeda, Mee Hae Kim, Masahiro Kino-Oka, Cell tracking under high confluency conditions by candidate cell region detection-based association approach, 10th IASTED International Conference on Biomedical Engineering, BioMed 2013, 2013, Automated tracking of cell population is an important element of research and discovery in the biology field. In this paper, we propose a method that tracks cells under highly confluent conditions by using the candidate cell region detection-based association approach. Unlike conventional segmentation-based association tracking methods, the proposed method uses the tracking results from the previous frame to segment the cell regions at the current frame. First, candidate cell regions are detected, and while there may be many false positives, there are very few false negatives. Next, optimized detection results are selected from the candidate regions and associated with the tracking results of the previous frame by resolving a linear programming problem. We quantitatively evaluated the proposed method using a variety of sequences. Results showed that our method has a better tracking performance than conventional segmentation-based association methods..
113. Ryoma Bise, Takeo Kanade, Zhaozheng Yin, Seung Il Huh, Automatic cell tracking applied to analysis of cell migration in wound healing assay, 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS 2011, 2011.12, The wound healing assay in vitro is widely used for research and discovery in biology and medicine. This assay allows for observing the healing process in vitro in which the cells on the edges of the artificial wound migrate toward the wound area. The influence of different culture conditions can be measured by observing the change in the size of the wound area. For further investigation, more detailed measurements of the cell behaviors are required. In this paper, we present an application of automatic cell tracking in phase-contrast microscopy images to wound healing assay. The cell behaviors under three different culture conditions have been analyzed. Our cell tracking system can track individual cells during the healing process and provide detailed spatio-temporal measurements of cell behaviors. The application demonstrates the effectiveness of automatic cell tracking for quantitative and detailed analysis of the cell behaviors in wound healing assay in vitro..
114. Seungil Huh, Sungeun Eom, Ryoma Bise, Zhaozheng Yin, Takeo Kanade, Mitosis detection for stem cell tracking in phase-contrast microscopy images, 2011 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI'11, 2011.04, Automated visual-tracking systems of stem cell populations in vitro allow for high-throughput analysis of time-lapse phase-contrast microscopy. In these systems, detection of mitosis, or cell division, is critical to tracking performance as mitosis causes branching of the trajectory of a mother cell into the two trajectories of its daughter cells. Recently, one mitosis detection algorithm showed its success in detecting the time and location that two daughter cells first clearly appear as a result of mitosis. This detection result can therefore helps trajectories to correctly bifurcate and the relations between mother and daughter cells to be revealed. In this paper, we demonstrate that the functionality of this recent mitosis detection algorithm significantly improves state-of-the-art cell tracking systems through extensive experiments on 48 C2C12 myoblastic stem cell populations under four different conditions..
115. Ryoma Bise, Zhaozheng Yin, Takeo Kanade, Reliable cell tracking by global data association, 2011 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI'11, 2011.04, Automated cell tracking in populations is important for research and discovery in biology and medicine. In this paper, we propose a cell tracking method based on global spatio-temporal data association which considers hypotheses of initialization, termination, translation, division and false positive in an integrated formulation. Firstly, reliable tracklets (i.e., short trajectories) are generated by linking detection responses based on frame-by-frame association. Next, these tracklets are globally associated over time to obtain final cell trajectories and lineage trees. During global association, tracklets form tree structures where a mother cell divides into two daughter cells. We formulate the global association for tree structures as a maximum-a-posteriori (MAP) problem and solve it by linear programming. This approach is quantitatively evaluated on sequences with thousands of cells captured over several days..
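For illustration, a small Python sketch of the tracklet-linking step described above, simplified to one-to-one "translation" hypotheses solved as a bipartite assignment with the Hungarian algorithm (scipy). The paper's MAP formulation additionally handles division, initialization, termination and false positives via linear programming, which this sketch does not attempt; all coordinates and the gating threshold are hypothetical.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # End positions of earlier tracklets and start positions of later tracklets.
    ends = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])
    starts = np.array([[11.0, 13.0], [69.0, 14.0], [41.0, 40.0]])

    # Cost = Euclidean distance; implausibly long links are gated out.
    cost = np.linalg.norm(ends[:, None, :] - starts[None, :, :], axis=2)
    cost[cost > 5.0] = 1e6

    rows, cols = linear_sum_assignment(cost)
    for r, c in zip(rows, cols):
        if cost[r, c] < 1e6:
            print(f"tracklet {r} continues as tracklet {c}")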
116. Takeo Kanade, Zhaozheng Yin, Ryoma Bise, Seungil Huh, Sungeun Eom, Michael F. Sandbothe, Mei Chen, Cell image analysis: Algorithms, system and applications, 2011 IEEE Workshop on Applications of Computer Vision, WACV 2011, 2011, We present several algorithms for cell image analysis including microscopy image restoration, cell event detection and cell tracking in a large population. The algorithms are integrated into an automated system capable of quantifying cell proliferation metrics in vitro in real-time. This offers unique opportunities for biological applications such as efficient cell behavior discovery in response to different cell culturing conditions and adaptive experiment control. We quantitatively evaluated our system's performance on 16 microscopy image sequences with satisfactory accuracy for biologists' need. We have also developed a public website compatible to the system's local user interface, thereby allowing biologists to conveniently check their experiment progress online. The website will serve as a community resource that allows other research groups to upload their cell images for analysis and comparison.
117. Zhaozheng Yin, Ryoma Bise, Mei Chen, Takeo Kanade, Cell segmentation in microscopy imagery using a bag of local Bayesian classifiers, 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2010, 2010.04, Cell segmentation in microscopy imagery is essential for many bioimage applications such as cell tracking. To segment cells from the background accurately, we present a pixel classification approach that is independent of cell type or imaging modality. We train a set of Bayesian classifiers from clustered local training image patches. Each Bayesian classifier is an expert to make decision in its specific domain. The decision from the mixture of experts determines how likely a new pixel is a cell pixel. We demonstrate the effectiveness of this approach on four cell types with diverse morphologies under different microscopy imaging modalities..
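A condensed Python sketch of the "mixture of local Bayesian experts" idea in the abstract above: training patches are clustered, one simple Bayesian classifier is fit per cluster, and a new patch is scored by its cluster's expert. Gaussian naive Bayes and k-means stand in for the paper's actual patch features and clustering; all data are synthetic.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X = rng.random((600, 25))                      # flattened local image patches
    y = (X.mean(axis=1) > 0.5).astype(int)         # toy labels: 1 = cell, 0 = background

    # 1) cluster the patches so that each expert specializes on one appearance mode
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    experts = {c: GaussianNB().fit(X[km.labels_ == c], y[km.labels_ == c])
               for c in range(4)}

    # 2) route each new patch to the expert of its nearest cluster
    X_test = rng.random((5, 25))
    for c, x in zip(km.predict(X_test), X_test):
        e = experts[c]
        proba = e.predict_proba(x[None])[0]
        p_cell = proba[list(e.classes_).index(1)] if 1 in e.classes_ else 0.0
        print(f"cluster {c}: P(cell) = {p_cell:.2f}")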
118. Sungeun Eom, Ryoma Bise, Takeo Kanade, Detection of hematopoietic stem cells in microscopy images using a bank of ring filters, 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2010, 2010.04, We present a method for robustly detecting hematopoietic stem cells (HSCs) in phase contrast microscopy images. HSCs appear to be easy to detect since they typically appear as round objects. However, when HSCs are touching and overlapping, showing the variations in shape and appearance, standard pattern detection methods, such as Hough transform and correlation, do not perform well. The proposed method exploits the output pattern of a ring filter bank applied to the input image, which consists of a series of matched filters with multiple-radius ring-shaped templates. By modeling the profile of each filter response as a quadratic surface, we explore the variations of peak curvatures and peak values of the filter responses when the ring radius varies. The method is validated on thousands of phase contrast microscopy images with different acquisition settings, achieving 96.5% precision and 94.4% recall..
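A brief Python sketch of the ring-filter-bank idea from the abstract above: matched filters with ring templates of several radii are applied, and a strong peak across position and radius suggests a round cell. The template construction and radii here are illustrative choices, not the paper's exact filters or its response-surface modeling.

    import numpy as np
    from scipy.signal import fftconvolve

    def ring_template(radius, width=1.5, size=31):
        """Normalized annulus of the given radius, used as a matched filter."""
        ax = np.arange(size) - size // 2
        r = np.hypot(*np.meshgrid(ax, ax))
        ring = (np.abs(r - radius) < width).astype(float)
        return ring / ring.sum()

    # Toy image containing one bright ring-like cell of radius about 6 pixels.
    yy, xx = np.mgrid[:64, :64]
    img = (np.abs(np.hypot(yy - 30.0, xx - 30.0) - 6.0) < 1.5).astype(float)

    # Filter bank over several radii; the global peak gives radius and position.
    radii = (4, 5, 6, 7, 8)
    responses = np.stack([fftconvolve(img, ring_template(r), mode="same") for r in radii])
    k, py, px = np.unravel_index(np.argmax(responses), responses.shape)
    print(f"detected radius ~{radii[k]} at ({py}, {px})")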
119. Ryoma Bise, N. Takahashi, T. Nishi, On the design method of cellular neural networks for associative memories based on generalized eigenvalue problem, 7th IEEE International Workshop on Cellular Neural Networks and their Applications, CNNA 2002, 2002.07, This paper presents a design technique which is used to realize associative memories via cellular neural networks. The proposed method can store every prototype vector as a memory vector and maximize the areas of basin of attraction of memory vectors in a certain sense. The network parameters are obtained by solving optimization problems known as generalized eigenvalue problems. Simulation results prove that our method is better than the existing ones..
