Kyushu University Academic Staff Educational and Research Activities Database
List of Reports
Kohei Hatano    Last modified date: 2024.04.10

Professor / Department of Informatics / Faculty of Information Science and Electrical Engineering


Reports
1. Analysis of a decision tree boosting algorithm
We analyze a boosting algorithm for multi-class functions. Boosting is a technique for finding a hypothesis with high accuracy by combining many weak hypotheses, each of which is only moderately accurate. We focus on the algorithm TOPDOWN proposed by Takimoto and Maruoka. TOPDOWN constructs a K-branching decision tree whose inner nodes are labeled by weak hypotheses h: X → {1, …, K} for an unknown target function f: X → {1, …, N} (N ≥ 2). Takimoto and Maruoka showed that if the number of inner nodes of the output decision tree is larger than (1/ε)^(K/γ), the training error of the tree is less than ε, where γ is the lower bound on the advantage of the weak hypotheses. We show that this sufficient number of inner nodes is in fact 2(1/ε)^((ln K)/γ).
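For readability, the two sufficient tree sizes stated in the abstract can be set side by side (a plain LaTeX restatement of the formulas above; ε is the target training error, γ the advantage lower bound, and K the branching factor):

    \[
      \left(\tfrac{1}{\varepsilon}\right)^{K/\gamma}
      \ \text{(Takimoto--Maruoka)}
      \qquad \text{vs.} \qquad
      2\left(\tfrac{1}{\varepsilon}\right)^{(\ln K)/\gamma}
      \ \text{(this work)}
    \]

In other words, the dependence on K in the exponent improves from K to ln K.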
2. Simplified analysis of a decision tree boosting algorithm with multi - way branching
We give a simpler analysis of a decision tree learning algorithm proposed by Mansour and McAllester. They extended a binary decision tree learning algorithm to a multi-way branching one and showed that the error of the multi-way branching tree produced by the extended algorithm is bounded by an upper bound on the error of the binary tree of the same size produced by the original algorithm. We show that their result holds under a simpler and weaker condition.
3. Kohei Hatano, Osamu Watanabe, Learning of r-of-k Threshold Functions Using Boosting, IEICE Technical Report, COMP (Computation), Vol.104, No.147, pp.27-32, 2004.06, In this work, we consider further improvements of boosting techniques when the target function is an r-of-k threshold function. An r-of-k threshold function is a Boolean function over k variables that outputs +1 when at least r of the k variables are positive, and -1 otherwise. Given m examples of an r-of-k function and literals as weak hypotheses, many boosting algorithms (e.g., AdaBoost [6]) construct a consistent hypothesis in O(k^2 log m) iterations. We show that modified versions of AdaBoost (confidence-rated AdaBoost [11] or InfoBoost [1]) construct a consistent hypothesis in O(kr log m) iterations.
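For concreteness (an illustrative definition only, not code from the report; the function and argument names are hypothetical), an r-of-k threshold function over ±1-valued variables can be sketched in Python as:

    def r_of_k_threshold(x, relevant, r):
        # x: list of +1/-1 variable values; relevant: indices of the k relevant variables.
        # Returns +1 iff at least r of the k relevant variables are positive.
        positives = sum(1 for i in relevant if x[i] == +1)
        return +1 if positives >= r else -1

    # A 2-of-3 threshold function over variables 0, 2, 4 of a 5-variable input.
    print(r_of_k_threshold([+1, -1, +1, -1, -1], relevant=[0, 2, 4], r=2))  # prints 1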
4. Kohei Hatano, Smooth Boosting with An Information-theoretic Criterion (theme: special feature on the symbol grounding problem, and general topics), JSAI SIG-FPAI (Special Interest Group on Fundamental Problems of Artificial Intelligence), Vol.61, pp.107-112, 2005.11.
5. Hideo Bannai, Kohei Hatano, Shunsuke Inenaga, Practical Algorithms for Pattern Based Linear Regression (theme: special feature on the symbol grounding problem, and general topics), JSAI SIG-FPAI, Vol.61, pp.69-74, 2005.11.
6. Kohei Hatano, A More Efficient Boosting-by-Filtering Technique (New Developments in Computation Theory and Algorithms), RIMS Kôkyûroku, Vol.1489, pp.57-63, 2006.05.
7. A Context Tree Weighting Algorithm with an Adaptive Window
The Context Tree Weighting method (CTW) [1] is one of the most effective algorithms in the area of universal data compression. However, when CTW is applied to a non-stationary source, its compression performance degrades due to the harmful effect of past data. The Finite Window Context Tree Weighting method (FWCTW) [4] avoids this problem by introducing a finite window and discarding stale data, but it requires the window size to be fixed in advance, which is difficult to choose appropriately. In this paper, we propose a method that detects changes of the source and adapts the window size automatically.
8. A Context Tree Weighting Algorithm with an Adaptive Window.
9. Kosuke Ishibashi, Kohei Hatano, Masayuki Takeda, Online Learning of Approximate Maximum $p$-Norm Margin Classifiers with Bias (Foundations of Theoretical Computer Science: For New Computational View), RIMS Kôkyûroku, Vol.1599, pp.154-161, 2008.05.
10. Autonomous Learning of Ball Passing Skills Using Thinning-out.
11. Musical data classification and similarity measures on strings.
12. Learning Shogi Evaluation Functions using Kernel Methods
Recently, automatic optimization of parameters by applying machine learning methods has become a mainstream approach for developing good evaluation functions in shogi. However, the features used in the evaluation functions are prepared by the developer, depending heavily on his/her knowledge and intuition. In this paper, we propose a method for learning evaluation functions from game records of professional players, using kernel methods and Support Vector Machines (SVMs). By using kernels, higher-dimensional features such as n-ary relations between simple features can be considered implicitly, and various complex features can be used without preparing them by hand. We apply our method to castle positions, which require consideration of the relative positions of pieces, and show that the evaluation functions learned using kernels give better results. We also show that, even without explicit knowledge such as standard opening moves, we obtain human-like evaluation functions, especially in the opening.
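As a side note (not taken from the report), the way a polynomial kernel implicitly generates interaction features can be checked numerically; the sketch below uses NumPy, a degree-2 kernel, and made-up toy vectors, and makes no claim about the actual features or kernel degree used in the paper:

    import itertools
    import math
    import numpy as np

    def poly2_kernel(x, z, c=1.0):
        # Degree-2 polynomial kernel (x.z + c)^2.
        return (np.dot(x, z) + c) ** 2

    def poly2_features(x, c=1.0):
        # Explicit feature map whose inner product equals the kernel above:
        # constant, linear, squared, and pairwise-interaction terms.
        feats = [c]
        feats += [math.sqrt(2.0 * c) * xi for xi in x]
        feats += [xi * xi for xi in x]
        feats += [math.sqrt(2.0) * x[i] * x[j]
                  for i, j in itertools.combinations(range(len(x)), 2)]
        return np.array(feats)

    x = np.array([1.0, 0.0, 2.0])   # toy feature vectors (made up)
    z = np.array([0.5, 1.0, 1.0])
    assert np.isclose(poly2_kernel(x, z),
                      np.dot(poly2_features(x), poly2_features(z)))

The kernel value equals an inner product in a feature space that already contains all pairwise products of the simple features, which is the sense in which complex features need not be prepared by hand.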
13. Learning Evaluation Functions for Shogi Using SVM-based Bipartite Ranking Learning
Recently, automatic optimization of parameters by applying machine learning methods has become a mainstream approach for developing good evaluation functions in shogi. However, the features used in the evaluation functions are prepared by the developer, depending heavily on his/her knowledge and intuition. To date, many complex features, such as relationships between multiple pieces, have been designed. In this paper, we propose an approach using polynomial kernels and Support Vector Machines (SVMs), in which only very simple features are prepared explicitly. Polynomial kernels allow us to consider high-dimensional, n-ary relations of monomial features. We further regard the problem of evaluation function learning as a bipartite ranking problem over the positions after legal moves, and propose a method which uses SVMs (ranking SVM). We show the effectiveness of our algorithm through computational experiments.
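A minimal sketch of the standard reduction behind this kind of bipartite ranking (pairwise difference vectors fed to a linear SVM) is given below; it uses scikit-learn's LinearSVC and random toy data, and does not reproduce the report's features or its exact ranking SVM formulation:

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X_better = rng.normal(loc=0.5, size=(30, 8))    # toy positions ranked higher
    X_worse = rng.normal(loc=-0.5, size=(30, 8))    # toy positions ranked lower

    # Pairwise reduction: learn w such that w.(x_better - x_worse) > 0.
    diffs = np.array([xb - xw for xb in X_better for xw in X_worse])
    X_pairs = np.vstack([diffs, -diffs])
    y_pairs = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])

    svm = LinearSVC(C=1.0, fit_intercept=False).fit(X_pairs, y_pairs)
    score = lambda pos: float(svm.coef_[0] @ pos)   # linear evaluation function
    print(score(X_better[0]) > score(X_worse[0]))

Note the blow-up from p + n positions to p * n pairs, which is what makes large-scale ranking SVM training expensive.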
14. Kazuaki Kashihara, Kohei Hatano, Hideo Bannai, Masayuki Takeda, F-037 Sparse Substring Pattern Set Discovery using Linear Programming Boosting, Proceedings of the Forum on Information Technology (FIT), Vol.9, No.2, pp.449-455, 2010.08, In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. We then solve this problem using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.
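For reference, a minimal sketch of the 1-norm soft margin linear program that such LPBoost-based methods optimize, written directly for SciPy's linprog; the hypothesis matrix H here is random stand-in data (in the report, columns would correspond to substring patterns found by the discovery algorithm, and the LP would be solved by column generation rather than all at once):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, J, D = 20, 5, 0.1                        # m examples, J hypotheses, trade-off D (D >= 1/m)
    H = rng.choice([-1.0, 1.0], size=(m, J))    # made-up hypothesis predictions
    y = rng.choice([-1.0, 1.0], size=m)         # made-up labels

    # Maximize rho - D*sum(xi)  s.t.  y_i * (H w)_i >= rho - xi_i,  sum(w) = 1,  w, xi >= 0.
    # Variable order: [w (J entries), rho, xi (m entries)].
    c = np.concatenate([np.zeros(J), [-1.0], D * np.ones(m)])
    A_ub = np.hstack([-(y[:, None] * H), np.ones((m, 1)), -np.eye(m)])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(J), [0.0], np.zeros(m)]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * J + [(None, None)] + [(0, None)] * m

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(np.round(res.x[:J], 3))               # weights over hypotheses; tends to be sparse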
15. Shota Yasutake, Kohei Hatano, Eiji Takimoto, Masayuki Takeda, F-036 Online Rank Aggregation, Proceedings of the Forum on Information Technology (FIT), Vol.9, No.2, pp.441-448, 2010.08, We consider an online learning framework where the task is to predict a permutation which represents a ranking of n fixed objects. At each trial, the learner incurs a loss defined as the Kendall tau distance between the predicted permutation and the true permutation given by the adversary. This setting is quite natural in many situations such as information retrieval and recommendation tasks. We propose algorithms for this problem and prove relative loss bounds with regret depending only on O(n^2). Further, we prove a matching lower bound on the regret, which shows that our algorithms are almost optimal.
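For concreteness (an illustrative helper, not from the report), the Kendall tau distance used as the loss simply counts the object pairs on which two rankings disagree:

    from itertools import combinations

    def kendall_tau_distance(sigma, pi):
        # sigma[i] and pi[i] are the ranks of object i under the two permutations;
        # the distance is the number of pairs ordered differently.
        n = len(sigma)
        return sum(1 for i, j in combinations(range(n), 2)
                   if (sigma[i] - sigma[j]) * (pi[i] - pi[j]) < 0)

    print(kendall_tau_distance([0, 1, 2, 3], [3, 1, 0, 2]))  # 4 discordant pairs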
16. Polyphonic Music Classification Based on Similarity
In this paper we address the problem of classifying polyphonic music. In particular, we use a classification approach based on dissimilarity [1]. We convert polyphonic music into vector sequences so that dissimilarity functions designed for strings can be applied to music. In experiments with several combinations of vector-sequence representations and dissimilarity functions, classification accuracy was good in every case.
17. Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Efficient AUC Maximization Based on an Approximation of Ranking SVMs, IEICE Technical Report, IBISML (Information-Based Induction Sciences and Machine Learning), Vol.112, No.279, pp.243-249, 2012.10.
18. Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Efficient AUC Maximization Based on an Approximation of Ranking SVMs (the 15th Workshop on Information-Based Induction Sciences), IEICE Technical Report, IBISML, Vol.112, No.279, pp.243-249, 2012.10, The formulation of Ranking SVMs is popular for maximizing AUC scores. More precisely, the formulation is given as a hard/soft margin optimization over pn pairs of p positive and n negative instances. Directly solving the problem is impractical, since we have to deal with a sample of size pn, which is quadratically larger than the original sample size p+n. In this paper, we propose (approximate) reduction methods from the hard/soft margin optimization over pn pairs to variants of hard/soft margin optimization over p+n instances. The classifiers produced by our methods are guaranteed to have a certain amount of margin over the pn pairs.
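As background (an illustrative helper, not from the report), the AUC being maximized is the fraction of positive-negative pairs that a scoring function orders correctly, which is why the naive formulation involves pn pairs:

    import numpy as np

    def pairwise_auc(scores_pos, scores_neg):
        # Fraction of (positive, negative) pairs ranked correctly, ties counted as half.
        diff = np.subtract.outer(scores_pos, scores_neg)   # p x n matrix of score gaps
        return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

    print(pairwise_auc(np.array([0.9, 0.7, 0.4]), np.array([0.8, 0.3])))  # 4 of 6 pairs correct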
19. Passive-Aggressive Algorithm with Bias
We consider online binary classification problems using linear classifiers. Among online learning algorithms for linear classifiers, the Passive-Aggressive algorithm is very popular. To learn linear classifiers with a bias (constant term), the Passive-Aggressive algorithm, like many typical online learning algorithms, extends the instance space by adding an extra dimension corresponding to the bias, whose value is given as a parameter. In general, however, it is not trivial to optimize this extra parameter so as to minimize the cumulative loss. In this paper, we propose a version of the Passive-Aggressive algorithm that can handle the bias without an extra parameter to tune.
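For context, a minimal sketch of the standard Passive-Aggressive update with the usual extra-dimension treatment of the bias, i.e., the baseline the report improves upon rather than the proposed parameter-free version; the constant R appended to each instance is exactly the parameter whose tuning the report avoids (NumPy assumed, data made up):

    import numpy as np

    def pa_update(w, x, y, R=1.0):
        # One basic Passive-Aggressive step on the augmented instance (x, R).
        x_aug = np.append(x, R)                      # extra dimension for the bias
        loss = max(0.0, 1.0 - y * np.dot(w, x_aug))  # hinge loss
        tau = loss / np.dot(x_aug, x_aug)            # PA step size
        return w + tau * y * x_aug

    w = np.zeros(3 + 1)                              # weight vector plus bias slot
    for x, y in [(np.array([1.0, 0.0, 2.0]), +1),
                 (np.array([0.5, 1.0, 0.0]), -1)]:
        w = pa_update(w, x, y)
    print(w)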
20. Learning Evaluation Functions for Shogi Using SVM-Based Bipartite Ranking Learning
Recently, automatic optimization of parameters by applying machine learning has become a mainstream approach in designing evaluation functions for shogi. However, the features used are often prepared based on the developer's playing strength and intuition, and many complex features, such as relations between multiple pieces, have been devised so far. In this work, only basic and simple features describing a position are prepared explicitly, and we propose a method for learning evaluation functions using polynomial kernels and Support Vector Machines (SVMs). By using polynomial kernels, all n-ary relations between features expressible as monomials can be exploited as higher-order features. Furthermore, we regard the problem of learning an evaluation function as a bipartite ranking problem over the positions reachable by legal moves, and propose a method that learns it with an SVM (the ranking SVM method). We show the effectiveness of the ranking SVM method through game-playing experiments, the agreement rate with professional game records, and observations of piece formations.
21. Issei Matsumoto, Kohei Hatano, Eiji Takimoto, Online Prediction with Bradley-Terry Models and Logistic Models (Information-Based Induction Sciences and Machine Learning), IEICE Technical Report, IBISML, Vol.113, No.476, pp.55-62, 2014.02, We consider an online density estimation problem under the Bradley-Terry model, which determines the probability of the order between any pair in a set of n teams. An annoying issue is that the loss function is not convex. A standard way to avoid the non-convexity is to change variables so that the new loss function with respect to the new variables is convex. But then the radius of the new domain might be huge or unknown in general, for which standard algorithms such as OGD and ONS have suboptimal regret bounds. We propose two algorithms with regret O(ln T). Our first algorithm achieves the best regret bound so far and can be applied to online logistic regression models as well. As a result, we solve the open problem posed by McMahan and Streeter. Our second algorithm has a weaker regret bound but performs better in practice.
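For context, two standard facts about the Bradley-Terry model that the abstract relies on (not the report's algorithms): the model sets P(i beats j) = θ_i / (θ_i + θ_j), and the change of variables θ_i = exp(β_i) turns the log-loss into the convex logistic loss; the team strengths below are made up:

    import numpy as np

    def bt_prob(theta, i, j):
        # Bradley-Terry probability that team i beats team j.
        return theta[i] / (theta[i] + theta[j])

    def logistic_logloss(beta, i, j):
        # The same log-loss after the change of variables theta = exp(beta):
        # -ln P(i beats j) = ln(1 + exp(-(beta_i - beta_j))), which is convex in beta.
        return np.log1p(np.exp(-(beta[i] - beta[j])))

    theta = np.array([2.0, 1.0, 0.5])
    beta = np.log(theta)
    print(-np.log(bt_prob(theta, 0, 1)), logistic_logloss(beta, 0, 1))  # equal values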
22. Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto, Koji Tsuda, Online Matrix Prediction with Log-Determinant Regularizer (Information-Based Induction Sciences and Machine Learning), IEICE Technical Report, IBISML, Vol.113, No.476, pp.63-70, 2014.02, We consider an online prediction problem over symmetric positive semi-definite matrices with convex loss functions, where the Frobenius norm of the predictions is bounded. FTRL is a popular method for such online prediction tasks; it makes predictions by minimizing the cumulative loss plus a regularizer. Three regularizers are popular for matrices: the Frobenius norm, the quantum relative entropy, and the log-determinant. Regret bounds have been proven for the Frobenius norm and quantum relative entropy regularizers, but not for the log-determinant regularizer. We propose an FTRL-based algorithm with the log-determinant regularizer and prove a regret bound for it. We also propose an application of the online matrix prediction problem to online metric learning.
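For reference, the generic FTRL prediction rule referred to in the abstract can be written as follows (a generic template with learning rate η, decision set of positive semi-definite matrices, and loss functions ℓ_s; the report's exact constraints and constants are not reproduced here):

    \[
      X_{t+1} \;=\; \operatorname*{argmin}_{X \in \mathcal{K}}
        \;\sum_{s=1}^{t} \ell_s(X) \;+\; \frac{1}{\eta}\, R(X),
      \qquad
      R(X) \;=\; -\ln \det(X) \quad \text{(log-determinant regularizer)}.
    \]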
23. Issei Matsumoto, Kohei Hatano, Eiji Takimoto, Online Density Estimation of Bradley-Terry Models (Information-Based Induction Sciences and Machine Learning), IEICE Technical Report, Vol.115, No.112, pp.173-180, 2015.06.
24. Development of low-cost, labor-saving and workload-reducing technologies: quantification of farmers' work techniques and development of data mining methods. Chapter 1: Design, development, and evaluation of the overall "Nosho Navi" (農匠ナビ) system. Section 1: Design and demonstration of the overall "Nosho Navi" system, and research and development of basic technologies for continuous measurement, visualization, and data mining of farm work.
25. Bandit Algorithm For k-Sets Based On Projection And Decomposition.
26. Daiki Suehiro, Kengo Kuwahara, Kohei Hatano, Eiji Takimoto, Time Series Classification Based on Random Shapelets, NIPS Time Series Workshop 2016, 2016.12.
27. Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto, Collaborative ranking based on relative preference (Information-Based Induction Sciences and Machine Learning), IEICE Technical Report, Vol.115, No.511, pp.1-8, 2016.03.
28. Learning theory and algorithms for shapelets and other local features.
29. Ryohei Nagaura, Kohei Hatano, Eiji Takimoto, Combinatorial bandit prediction with relaxation-based approximation algorithm, IPSJ SIG Technical Report (AL: Algorithms), Vol.2018-AL-167, No.4, pp.1-4, 2018.03.
30. Proposal and Implementation of a Dual-Tablet User Interface for a Learning Support System for Elderly People.
31. Development and Experimentation of a UI Prototype for a Learning Support System for Elderly People.
32. Development of a System for Simultaneous Measurement and Analysis of EEG and Eye Gaze during Self-Study.
33. Development of a System for Estimating "Learning Stumbles" from Simultaneous Measurement of Physiological Responses during Learning.
34. [URL].
35. Report on The 21st Annual Conference on Learning Theory (COLT 2008).
36. Frontier of Theoretical Computer Science.