Feb. 22, 2024, 1 p.m. | code_your_own_AI

code_your_own_AI www.youtube.com

New insights from Google DeepMind and Stanford University on the limitations of current LLMs (Gemini Pro, GPT-4 Turbo) in causal reasoning and logic.

Unfortunately, the human reasoning process and all its limitations are encoded in our LLMs, given the multitude of human conversations and reasoning traces across online platforms (including the logical richness of social media - smile).

NO AGI in sight: just rule hallucinations on top of factual hallucinations, and only linear, sequential understanding. Our LLMs really learn from …
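As a toy illustration (not from the video, and the causal chain is hypothetical), this is the kind of multi-step causal inference such evaluations probe: given only direct cause-effect links, a correct reasoner must derive all indirect effects transitively, rather than reading them off in a single linear pass.

```python
# Toy sketch: multi-hop causal inference over a directed cause->effect graph.
# A symbolic reasoner recovers the full set of downstream effects exactly;
# the benchmarks discussed report that LLMs often fail such compositions.

def transitive_effects(causes, start):
    """Return every event reachable from `start` via cause->effect edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for effect in causes.get(node, []):
            if effect not in seen:
                seen.add(effect)
                stack.append(effect)
    return seen

# Hypothetical causal chain: rain -> wet road -> skidding -> accident
causes = {
    "rain": ["wet road"],
    "wet road": ["skidding"],
    "skidding": ["accident"],
}

print(sorted(transitive_effects(causes, "rain")))
# -> ['accident', 'skidding', 'wet road']
```

The point of the sketch is the gap it highlights: the answer requires composing three separate rules, which is exactly the kind of multi-hop, non-linear step where the cited work finds current models hallucinating rules or stopping after one hop.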

