Feb. 22, 2024, 1 p.m. | code_your_own_AI

code_your_own_AI www.youtube.com

New insights from Google DeepMind and Stanford University on the limitations of current LLMs (Gemini Pro, GPT-4 Turbo) regarding causal reasoning and logic.

Unfortunately, the human reasoning process and all its limitations are encoded in our LLMs, given the multitude of human conversations and reasoning processes across online platforms (including the logical richness of social media - smile).

No AGI in sight: just rule hallucinations compounded with factual hallucinations, and only linear, sequential understanding. Our LLMs really learn from …

