Benchmarking Transformers-based models on French Spoken Language Understanding tasks. (arXiv:2207.09152v1 [cs.CL])
cs.CL updates on arXiv.org
In the last five years, the rise of self-attentional Transformer-based
architectures has led to state-of-the-art performance on many natural language
tasks. Although these approaches are increasingly popular, they require large
amounts of data and computational resources. There is still a substantial need
for benchmarking methodologies on under-resourced languages and in data-scarce
application conditions. Most pre-trained language models have been studied
extensively on English, and only a few have been evaluated on French. In this
paper, we propose …