Phantom Embeddings: Using Embedding Space for Model Regularization in Deep Neural Networks. (arXiv:2304.07262v1 [cs.CV])
cs.CV updates on arXiv.org
The strength of machine learning models stems from their ability to learn
complex function approximations from data; however, this strength also makes
training deep neural networks challenging. Notably, complex models tend to
memorize the training data, which results in poor generalization performance
on test data. Regularization techniques such as L1, L2, and dropout have been
proposed to reduce overfitting; however, they introduce additional
hyperparameter-tuning complexity. These methods also fall short when the
inter-class similarity is …
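The standard regularizers the abstract mentions can be sketched concretely. Below is a minimal NumPy illustration of L1 and L2 penalties and inverted dropout; the toy shapes, variable names, and hyperparameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer: batch of 5 inputs, 4 features, 3 outputs.
W = rng.normal(size=(4, 3))
x = rng.normal(size=(5, 4))

# L2 regularization adds a penalty proportional to the squared weights
# to the training loss; lambda_l2 is the extra hyperparameter to tune.
lambda_l2 = 1e-2
l2_penalty = lambda_l2 * np.sum(W ** 2)

# L1 regularization instead penalizes the absolute weight values,
# encouraging sparse weights; lambda_l1 is another hyperparameter.
lambda_l1 = 1e-2
l1_penalty = lambda_l1 * np.sum(np.abs(W))

# Inverted dropout randomly zeroes activations during training and
# rescales the survivors so the expected activation is unchanged.
p_drop = 0.5
h = x @ W
mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
h_train = h * mask  # training-time activations
h_eval = h          # dropout is disabled at test time
```

Each technique adds at least one hyperparameter (`lambda_l1`, `lambda_l2`, `p_drop`), which is the tuning overhead the abstract refers to.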