Researchers Database

Makoto Murakami

    Department of Information Sciences and Arts, Associate Professor
    Research Institute of Industrial Technology, Researcher
    Course of Information Sciences and Arts, Associate Professor
    Center for Computational Mechanics Research, Researcher
Last Updated: 2024/04/23

Researcher Information

Degree

  • Doctor (Information and Computer Science), Waseda University

Research funding number

  • 80329119

Research Interests

  • Computer Vision; Speech Processing; Pattern Recognition; Human-Computer Interaction; Multimodal Interface; Augmented Reality; Human-Agent Interaction; Human Interface; Human Face Recognition; Image Processing

Research Areas

  • Informatics / Entertainment and game informatics
  • Informatics / Human interfaces and interactions
  • Informatics / Intelligent informatics

Academic & Professional Experience

  • 2009/04 - Present  Toyo University, Faculty of Information Sciences and Arts, Associate Professor
  • 2007/04 - 2009/03  Toyo University, Faculty of Engineering, Associate Professor
  • 2005/04 - 2007/03  Toyo University, Faculty of Engineering, Assistant Professor
  • 2002/04 - 2005/03  Toyo University, Faculty of Engineering, Lecturer
  • 2000/04 - 2002/03  Waseda University, School of Science and Engineering, Research Associate

Association Memberships

  • ACM; IEEE; The Japanese Society for Artificial Intelligence; Information Processing Society of Japan; The Institute of Electronics, Information and Communication Engineers

Published Papers

Conference Activities & Talks

MISC

Research Grants & Projects

  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research
    Date (from-to) : 2022/04 - 2025/03
    Author : MURAKAMI Makoto
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research
    Date (from-to) : 2019/04 - 2022/03
    Author : Murakami Makoto
     
    We consider the process by which people create various human motions in their minds, and the process by which they recognize such motions, to be complex and non-linear. We modeled these processes with two kinds of deep neural networks, generative adversarial networks and variational autoencoders, trained the proposed models on a human motion dataset captured with an optical motion capture system, and confirmed that the trained models can generate a variety of natural human motions (a schematic sketch of such a model appears after this list).
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (C)
    Date (from-to) : 2008 - 2011
    Author : YUBUNE Eiichi; MURAKAMI Makoto
     
    To help Japanese learners of English acquire natural, native-like speech, we attempted to develop a computer program that recognizes and evaluates how close a learner's phonetic realization is to the teacher's speech. We used a DP matching method to build an evaluation program for the coalescent assimilation and linking of English sounds embedded in sample sentences (a schematic DP-matching sketch appears after this list). Comparing the program's evaluations with human evaluations showed considerably high validity and correlation, suggesting that our automatic recognition and evaluation system will be useful. We also investigated which feedback modality best helps learners notice the aspects of pronunciation that matter for producing better English rhythm, testing three modalities with experimental subjects: visual, auditory, and linguistic. The auditory feedback tended to be recognized best of the three.
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A)
    Date (from-to) : 2002 - 2004
    Author : SHIRAI Katsuhiko; KOBAYASHI Tetsunori; YONEYAMA Masahide; YAMAZAKI Yoshio; OHIRA Shigeki; MURAKAMI Makoto
     
    To clarify how emotion appears in speech, we analyzed rakugo comic-story speech data, one of the most natural and emotional kinds of speech; the emotion-related variance in the speech signal appears mainly at the end of utterances. We then focused on laughing voice as an emotional expression with a physiological function, and found that f0 frequency and phoneme timing are the fundamental features for perceiving a voice as laughing. To generate motion with emotion from language instructions, we constructed an emotion representation model in which the relation between emotional words and motion is described as a binary tree (a schematic sketch appears after this list). We then implemented a virtual actor system consisting of an emotion representation component, which generates the target motion from language instructions using this model, and an emotion learning component, which updates the model when an unknown word is given; in an evaluation experiment the system generated appropriate motion with emotion. Finally, to clarify the relation between video and sound signals and the emotion we perceive from them, we analyzed audiovisual data with emotion. Speakers represent the level of emotion by changing their voice rather than their facial expression, while listeners recognize the kind of emotion from the speaker's facial expression and perceive its level from the speaker's voice.
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B)
    Date (from-to) : 2001 - 2003
    Author : YASUYOSHI Itsuki; MCFARLAND Curtis; SAKURAI Toshiko; MURAKAMI Makoto
     
    Since broadband Internet connections have become widely available, multimedia educational software is expected to be of great importance in the immediate future, and the PDAs and mobile phones owned by many young people are considered to be as effective as PCs for near-future language learning. We made a comparative study of PCs, PDAs, and mobile phones in terms of usability, effectiveness, and interactivity in language learning; PDAs were judged more useful than mobile phones in the 2001-02 experiment. Students studied a multimedia English program in the three environments (eight students each for PC, PDA, and mobile phone) and afterwards took exams. PCs turned out to be very dependable, but PDAs showed great potential for language learning because of their portability and unique functions. Our research also examined how to develop learning materials using the Internet. We made three different movies for a collaborative experiment in creating educational software on the web. The traditional way of making educational software is for programmers to design, produce, and edit learning materials in one place together with authors and editors, but in this research each team member could work from a different site on the web and still collaborate simultaneously on the production of the software; commuting time was saved because the participants did not have to meet physically in one place. To decide whether the text used for educational software is appropriate for students, we used text-analyzing programs and an English text database, with the level of a text determined by the number of vocabulary words it contains (a schematic sketch of such vocabulary-coverage analysis appears after this list). We added another thousand texts (700 megabytes) to the existing English Text Database to extract more sample sentences for textual analysis. We also concluded that college students need a vocabulary of at least 4,000 words to understand an average English newspaper and its equivalents, although Japanese high-school graduates learn only about two thousand words at school. Finally, we collected and analyzed TOEFL and various other English tests to make a prototype of computer-based testing (CBT); this program was designed to evaluate students' English abilities automatically, pointing out their strengths and weaknesses based on grammar and usage data stored in the databases.
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research on Priority Areas (A)
    Date (from-to) : 2001 - 2001
    Author : YASUYOSHI Itsuki; MURAKAMI Makoto; SAKURAI Toshiko; MCFARLAND Curtis
     
    Collaboration in producing educational software on the Internet. Since broadband access was expected to become common rapidly, we studied how to carry out collaborative production of educational software over the Internet, jointly with professors and researchers of NTT Learning and the Waseda University Global Information and Telecommunication Center. We also obtained data from subjects who studied the produced materials in three different learning environments (PC, PDA, FOMA), to be used in future research. Research topics: (1) Editing, revision, and discussion for content production were carried out on the web, where we tried editing text, still images, design, and so on. We ran a collaboration experiment producing the same teaching material from multiple sites on the web and compared it with the conventional development process, item by item, for travel, waiting time, editing, and meetings. Travel and waiting time turned out to be far larger than the editing time and meetings actually needed for production, which gave us one guideline for future educational software production. (2) For content, we produced multimedia teaching materials that can be studied on the web. The theme was Scotch whisky made in Scotland, organized into three chapters, each consisting of 4-7 HTML pages and a 4-6 minute video (Part I: Edinburgh, the capital of Scotland; Part II: the history of Scotch whisky; Part III: the whisky production process). Each page consists of one or two paragraphs, and viewing all of the material takes about 15 minutes; however, because display conditions differ across learning devices, the content differs for each device. (3) We conducted experiments on evaluation methods for English teaching materials and on web-based testing and evaluation.
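
The following is a minimal, hypothetical sketch related to the 2019-2022 motion-modeling grant above: a variational autoencoder over fixed-length motion-capture windows. The layer sizes, window length, and joint dimensionality are illustrative assumptions, not the published architecture (Python/PyTorch).

    # Minimal sketch of a VAE over flattened motion-capture windows (PyTorch).
    # All dimensions below are illustrative assumptions, not the published model.
    import torch
    import torch.nn as nn

    class MotionVAE(nn.Module):
        def __init__(self, frame_dim=63, window=30, latent_dim=16, hidden=256):
            super().__init__()
            in_dim = frame_dim * window                      # flattened T x J window
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.to_mu = nn.Linear(hidden, latent_dim)       # posterior mean
            self.to_logvar = nn.Linear(hidden, latent_dim)   # posterior log-variance
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.decoder(z), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # reconstruction error plus KL divergence to the standard normal prior
        recon_err = ((recon - x) ** 2).sum(dim=1).mean()
        kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)).mean()
        return recon_err + kl

    # After training, new motions can be generated by decoding draws from the prior:
    # model = MotionVAE(); window = model.decoder(torch.randn(1, 16))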
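
The following is a minimal sketch related to the 2008-2011 pronunciation-evaluation grant above: DP matching (dynamic time warping) of a learner's frame-level features against a teacher's. The feature choice and normalization are assumptions; the project's actual program is not reproduced here.

    # Minimal sketch of DP matching between learner and teacher feature sequences.
    # Frame-level features (e.g. MFCCs) are assumed; lower cost = closer realization.
    import numpy as np

    def dp_matching_cost(learner, teacher):
        """learner: (N, d) array, teacher: (M, d) array of frame features."""
        n, m = len(learner), len(teacher)
        dist = np.linalg.norm(learner[:, None, :] - teacher[None, :, :], axis=2)
        acc = np.full((n, m), np.inf)
        acc[0, 0] = dist[0, 0]
        for i in range(n):
            for j in range(m):
                if i == 0 and j == 0:
                    continue
                prev = min(
                    acc[i - 1, j] if i > 0 else np.inf,                 # vertical step
                    acc[i, j - 1] if j > 0 else np.inf,                 # horizontal step
                    acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,   # diagonal step
                )
                acc[i, j] = dist[i, j] + prev
        return acc[-1, -1] / (n + m)   # path-length-normalized alignment cost

    # Example with random features; real use would extract features from speech.
    score = dp_matching_cost(np.random.rand(120, 13), np.random.rand(110, 13))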
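
The following is a schematic sketch related to the 2002-2004 emotion-representation grant above: a binary tree whose leaves pair emotional words with motion parameters, with unknown words learned by splitting the most similar leaf. The node layout, motion parameters, and similarity function are hypothetical placeholders, not the published model.

    # Schematic sketch of a binary-tree emotion representation: leaves pair an
    # emotional word with motion parameters; an unknown word splits the most
    # similar leaf. `similarity` is a placeholder supplied by the caller.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        word: Optional[str] = None        # set on leaves only
        motion: Optional[dict] = None     # e.g. {"speed": 1.2, "amplitude": 0.8}
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def leaf_words(node):
        if node.left is None and node.right is None:
            return [node.word]
        return leaf_words(node.left) + leaf_words(node.right)

    def find_leaf(node, word, similarity):
        """Descend toward the branch whose leaves are most similar to `word`."""
        if node.left is None and node.right is None:
            return node
        left_best = max(similarity(word, w) for w in leaf_words(node.left))
        right_best = max(similarity(word, w) for w in leaf_words(node.right))
        return find_leaf(node.left if left_best >= right_best else node.right,
                         word, similarity)

    def learn_word(root, new_word, motion, similarity):
        """Split the most similar leaf into an internal node holding both words."""
        leaf = find_leaf(root, new_word, similarity)
        leaf.left = Node(word=leaf.word, motion=leaf.motion)
        leaf.right = Node(word=new_word, motion=motion)
        leaf.word, leaf.motion = None, None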
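
The following is a minimal sketch related to the 2001-2003 educational-software grant above: leveling a text by the share of its word tokens covered by a baseline word list (for example, a 4,000-headword list). The file names and baseline size are illustrative assumptions, not the project's actual analyzer.

    # Minimal sketch of leveling a text by vocabulary coverage against a baseline
    # word list; the word-list file name below is a hypothetical placeholder.
    import re

    def vocabulary_coverage(text, baseline_words):
        """Fraction of word tokens in `text` found in `baseline_words`."""
        tokens = re.findall(r"[a-z']+", text.lower())
        if not tokens:
            return 0.0
        known = sum(1 for t in tokens if t in baseline_words)
        return known / len(tokens)

    # Hypothetical usage with a list of the 4,000 most frequent headwords,
    # one word per line:
    # baseline = set(open("most_frequent_4000.txt").read().split())
    # print(vocabulary_coverage(open("article.txt").read(), baseline))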