Kyushu University Academic Staff Educational and Research Activities Database
Yoshinari Takeishi (Last modified date: 2024.04.25)





E-Mail
Homepage
https://kyushu-u.elsevierpure.com/en/persons/yoshinari-takeishi
Researcher Profiling Tool: Kyushu University Pure
https://www.me.inf.kyushu-u.ac.jp/~takeishi/index.html
Academic Degree
Dr. Eng.
Country of degree conferring institution (Overseas)
No
Field of Specialization
Machine Learning, Information Theory, Coding Theory
ORCID (Open Researcher and Contributor ID)
0000-0002-8727-7445
Total period of education and research career in foreign countries
00 years 00 months
Research
Research Interests
  • I study information theory, machine learning, and their applications.
    Keywords: Coding theory, Information theory, Machine learning, Deep learning, Cyber security, Magnetic Resonance Imaging
    Since 2011.04.
Academic Activities
Papers
1. Yoshinari Takeishi, Masazumi Iida, Jun'ichi Takeuchi, Approximate spectral decomposition of Fisher information matrix for simple ReLU networks, Neural Networks, 10.1016/j.neunet.2023.05.017, 164, 691-706, 2023.07.
2. Yoshinari Takeishi, Jun'ichi Takeuchi, An Improved Analysis of Least Squares Superposition Codes with Bernoulli Dictionary, Japanese Journal of Statistics and Data Science, 2, 591-613, 2019.09.
3. Yoshinari Takeishi, Jun'ichi Takeuchi, An improved upper bound on block error probability of least squares superposition codes with unbiased Bernoulli dictionary, Proceedings of 2016 IEEE International Symposium on Information Theory, 10.1109/ISIT.2016.7541483, 1168-1172, 2016.08, For the additive white Gaussian noise channel with average power constraint, it is shown that sparse superposition codes, proposed by Barron and Joseph in 2010, achieve the capacity. We study upper bounds on their block error probability under least squares decoding when the dictionary from which codewords are formed is drawn from an unbiased Bernoulli distribution. We improve the upper bounds shown by Takeishi et al. in 2014, in a fairly simplified form.
4. Yoshinari Takeishi, Masanori Kawakita, Jun'ichi Takeuchi, Least Squares Superposition Codes With Bernoulli Dictionary Are Still Reliable at Rates up to Capacity, IEEE Transactions on Information Theory, 10.1109/TIT.2014.2312728, 60, 5, 2737-2750, 2014.05, For the additive white Gaussian noise channel with average power constraint, sparse superposition codes with least squares decoding were proposed by Barron and Joseph in 2010. The codewords are designed using a dictionary, each entry of which is drawn from a Gaussian distribution. The error probability is shown to be exponentially small for all rates up to the capacity. This paper proves that when each entry of the dictionary is drawn from a Bernoulli distribution, the error probability is also exponentially small for all rates up to the capacity. The proof is via a central limit theorem-type inequality, which we show for this analysis.
5. Yoshinari Takeishi, Masanori Kawakita, Jun'ichi Takeuchi, Least squares superposition codes with Bernoulli dictionary are still reliable at rates up to capacity, Proceedings of 2013 IEEE International Symposium on Information Theory, 10.1109/ISIT.2013.6620456, 1396-1400, 2013.07, For the additive white Gaussian noise channel with average power constraint, sparse superposition codes with least squares decoding were proposed by Barron and Joseph in 2010. The codewords are designed using a dictionary drawn from a Gaussian distribution. The error probability is shown to be exponentially small in the code length for all rates up to the capacity. This paper proves that when the dictionary is drawn from a Bernoulli distribution, the error probability is also exponentially small for all rates up to the capacity. © 2013 IEEE. (A toy encoding and decoding sketch of this scheme appears after this list.)
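
Papers 3-5 analyze sparse superposition codes with least squares decoding when the dictionary is Bernoulli rather than Gaussian. As a rough illustration only, the minimal Python sketch below encodes a message with an unbiased (±1) Bernoulli dictionary, transmits it over an AWGN channel, and decodes by exhaustive least squares. All sizes, the power normalization, and every variable name are illustrative assumptions, not values from the papers; the exhaustive decoder shown here is exponential in the number of sections, and the papers bound its error probability rather than give an efficient implementation.

```python
import itertools
import numpy as np

# Toy sketch of sparse superposition coding over an AWGN channel with an
# unbiased Bernoulli (+/-1) dictionary, in the spirit of Barron and Joseph
# (2010). All sizes below are illustrative, not taken from the papers.
rng = np.random.default_rng(0)

L, B, n = 3, 4, 32          # L sections of B columns each; block length n
N = L * B                   # total dictionary width
P, sigma2 = 1.0, 0.1        # average power constraint and noise variance

# Unbiased Bernoulli dictionary: each entry is +1 or -1 with probability 1/2.
X = rng.choice([-1.0, 1.0], size=(n, N))

# Message: one column chosen per section; coefficient sqrt(P/L) on each
# chosen column, so each codeword entry has mean-square value P.
msg = [1, 0, 3]             # chosen column index within each section
beta = np.zeros(N)
for sec, j in enumerate(msg):
    beta[sec * B + j] = np.sqrt(P / L)

# AWGN channel: y = X beta + Gaussian noise.
y = X @ beta + rng.normal(0.0, np.sqrt(sigma2), size=n)

# Least squares decoding by exhaustive search over all B**L valid supports
# (one column per section), keeping the candidate with the smallest
# squared residual. Feasible only for toy sizes like these.
best, best_err = None, np.inf
for cand in itertools.product(range(B), repeat=L):
    b = np.zeros(N)
    for sec, j in enumerate(cand):
        b[sec * B + j] = np.sqrt(P / L)
    err = np.sum((y - X @ b) ** 2)
    if err < best_err:
        best, best_err = list(cand), err

print("sent:", msg, "decoded:", best)
```

At these toy sizes and this noise level the decoder almost always recovers the sent message; the papers' contribution is proving that the block error probability of this least squares decoder stays exponentially small for all rates up to capacity even with the Bernoulli dictionary.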
Membership in Academic Society
  • IEEE