April 16, 2024, 4:51 a.m. | Kota Tanabe, Masahiro Suzuki, Hiroki Sakaji, Itsuki Noda

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.09260v1 Announce Type: new
Abstract: We construct an instruction dataset for large language models (LLMs) in the Japanese finance domain. Domain adaptation of language models, including LLMs, is receiving more attention as language models become more popular. This study demonstrates the effectiveness of domain adaptation through instruction tuning. To achieve this, we propose a Japanese instruction-tuning dataset called JaFIn, the Japanese Financial Instruction Dataset. JaFIn is manually constructed from multiple data sources, including Japanese government websites, …
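The excerpt does not publish JaFIn's record schema. As an illustration only, here is a minimal sketch of how a generic instruction-tuning record might be rendered into a training prompt, assuming the common `instruction` / `input` / `output` field layout (these field names and the sample record are hypothetical, not taken from JaFIn):

```python
# Hedged sketch: JaFIn's actual schema is not given in this excerpt.
# We assume a typical instruction-tuning record with "instruction",
# "input", and "output" fields (hypothetical field names).

def format_prompt(record: dict) -> str:
    """Render one instruction-tuning record into a training prompt string."""
    parts = [f"### Instruction:\n{record['instruction']}"]
    if record.get("input"):  # include the input section only when non-empty
        parts.append(f"### Input:\n{record['input']}")
    parts.append(f"### Response:\n{record['output']}")
    return "\n\n".join(parts)

# Hypothetical finance-domain record, for illustration only.
sample = {
    "instruction": "Explain what NISA is.",
    "input": "",
    "output": "NISA is a Japanese tax-exempt investment account scheme.",
}
print(format_prompt(sample))
```

In practice such records would be serialized as JSONL and fed to a supervised fine-tuning loop; the paper's actual construction and tuning procedure may differ.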
