Feb. 13, 2024, 5:45 a.m. | Valentino Maiorca Luca Moschella Antonio Norelli Marco Fumero Francesco Locatello Emanuele Rodolà

cs.LG updates on arXiv.org

While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible. Towards a better understanding of this phenomenon, our work shows how representations learned from these neural modules can be translated between different pre-trained networks via simpler transformations than previously thought. An advantage of this approach is the ability to estimate these transformations using standard, well-understood algebraic procedures that have closed-form solutions. Our method directly estimates …
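The abstract emphasizes that the translation between latent spaces can be estimated with standard algebraic procedures that admit closed-form solutions. As an illustrative sketch only (the excerpt does not specify the paper's exact procedure), one such closed-form routine is orthogonal Procrustes alignment between paired embeddings from two pre-trained encoders; the function name, toy dimensions, and synthetic data below are assumptions for demonstration.

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    """Closed-form estimate of an orthogonal map R such that X @ R ≈ Y.

    X, Y: (n_samples, dim) arrays of paired latent representations
    from two different pre-trained encoders (hypothetical inputs).
    """
    # Classical solution: SVD of the cross-covariance matrix X^T Y,
    # then R = U V^T.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy usage: translate model A's embeddings into model B's frame.
rng = np.random.default_rng(0)
Z_a = rng.normal(size=(1000, 256))            # paired embeddings from model A
R_true = np.linalg.qr(rng.normal(size=(256, 256)))[0]
Z_b = Z_a @ R_true                            # synthetic "model B" embeddings
R_hat = orthogonal_procrustes(Z_a, Z_b)
print(np.allclose(Z_a @ R_hat, Z_b, atol=1e-6))  # recovers the map on toy data
```

In practice the paired samples would come from encoding the same (or semantically corresponding) inputs with both networks, and one may center or rescale the embeddings before solving; those preprocessing choices are not specified in the excerpt.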
