Oct. 7, 2022, 1:11 a.m. | Jianyi Zhang, Yiran Chen, Jianshu Chen

cs.LG updates on arXiv.org

Developing neural architectures that are capable of logical reasoning has
become increasingly important for a wide range of applications (e.g., natural
language processing). Towards this grand objective, we first propose a symbolic
reasoning architecture that chains join operators to express a class of
tree-structured first-order logical expressions, named FOET, which is
particularly useful for modeling natural languages. To endow it with
differentiable learning capability, we closely examine various neural
operators for approximating the symbolic join-chains. Interestingly, we find
that the widely used multi-head self-attention module in transformer can be
understood as a …
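For reference, below is a minimal NumPy sketch of standard multi-head self-attention, the transformer module the abstract refers to. It shows only the generic attention computation, not the paper's join-chain operator or its probabilistic-predicate interpretation; all shapes, weight names, and the toy usage at the end are illustrative assumptions.

```python
# Minimal sketch of standard multi-head self-attention (generic transformer
# formulation; not the paper's join-chain network). Names and shapes are
# illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """X: (seq_len, d_model); Wq, Wk, Wv, Wo: (d_model, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // num_heads

    # Project inputs to queries/keys/values and split into heads.
    def project(W):
        return (X @ W).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Q, K, V = project(Wq), project(Wk), project(Wv)        # (heads, seq, d_head)

    # Scaled dot-product attention within each head.
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)    # (heads, seq, seq)
    attn = softmax(scores, axis=-1)
    heads = attn @ V                                        # (heads, seq, d_head)

    # Concatenate heads and apply the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

# Toy usage with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
d_model, seq_len, num_heads = 16, 5, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads)
print(out.shape)  # (5, 16)
```

Each attention head produces a weighted aggregation over token positions; the paper's observation concerns how such an aggregation can approximate a symbolic join operator.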

Tags: arxiv, attention head, join, multi-head attention, network, reasoning, transformer
