Jan. 31, 2024, 4:46 p.m. | Dyah A. M. G. Wisnu, Epri Pratiwi, Stefano Rini, Ryandhimas E. Zezario, Hsin-Min Wang, Yu Tsao

cs.LG updates on arXiv.org

This paper introduces HAAQI-Net, a non-intrusive deep learning model for
music quality assessment tailored to hearing aid users. In contrast to
traditional methods like the Hearing Aid Audio Quality Index (HAAQI), HAAQI-Net
uses a Bidirectional Long Short-Term Memory (BLSTM) network with an attention
mechanism. It takes the music sample to be assessed and a hearing loss pattern
as input and produces a predicted HAAQI score. For acoustic feature
extraction, the model employs the pre-trained Bidirectional Encoder
representation from Audio Transformers (BEATs). Comparing predicted scores with ground …
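The abstract describes the overall pipeline: pre-trained acoustic features are fed to a BLSTM with attention, combined with the listener's hearing-loss pattern, and regressed to a HAAQI score. The sketch below is a hypothetical PyTorch illustration of that architecture, not the authors' code; the feature dimension, the 8-point audiogram encoding of the hearing-loss pattern, the simple attention pooling, and all layer sizes are assumptions made for illustration.

```python
# Hypothetical sketch of a HAAQI-Net-style predictor (not the authors' implementation).
# Assumptions: acoustic features (e.g., from BEATs) arrive as (batch, time, 768)
# tensors, the hearing-loss pattern is an 8-point audiogram vector, and attention
# is a learned frame-weighted pooling over BLSTM outputs.
import torch
import torch.nn as nn


class HAAQINetSketch(nn.Module):
    def __init__(self, feat_dim=768, hl_dim=8, hidden=128):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # per-frame attention scores
        self.head = nn.Sequential(                 # regression head
            nn.Linear(2 * hidden + hl_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),                          # HAAQI scores lie in [0, 1]
        )

    def forward(self, feats, hl_pattern):
        # feats:      (batch, time, feat_dim) acoustic features
        # hl_pattern: (batch, hl_dim) hearing-loss (audiogram) vector
        h, _ = self.blstm(feats)                   # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)     # (batch, time, 1)
        pooled = (w * h).sum(dim=1)                # attention-weighted pooling
        return self.head(torch.cat([pooled, hl_pattern], dim=-1))


# Usage with random tensors standing in for real features and audiograms.
model = HAAQINetSketch()
feats = torch.randn(4, 200, 768)
hl = torch.rand(4, 8)
print(model(feats, hl).shape)  # torch.Size([4, 1]) predicted HAAQI scores
```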

