Dec. 14, 2023, 9:30 a.m. | code_your_own_AI

Source: code_your_own_AI (www.youtube.com)

Gemini-PRO is now (starting today) available on Vertex-AI, so we put it through a causal reasoning test against the open-source MoE model Mixtral 8x7B. Can a Mixture-of-Experts system built from eight 7B-scale experts match the performance of the latest LLM from Google? The new GEMINI Ultra is not expected before next year.
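As a minimal sketch of how such a test could be run (not shown in the video description itself): the snippet below sends a single prompt to Gemini Pro through the Vertex AI Python SDK as it shipped at launch. The GCP project ID, region, and the causal-reasoning prompt are illustrative placeholders, not the author's actual setup.

```python
# Minimal sketch: querying Gemini Pro on Vertex AI (SDK as of Dec 2023).
# Requires: pip install google-cloud-aiplatform
import vertexai
from vertexai.preview.generative_models import GenerativeModel

# Placeholder project/region; substitute your own GCP settings.
vertexai.init(project="my-gcp-project", location="us-central1")

# Illustrative causal-reasoning prompt; the author's standard test
# prompt is not reproduced here.
PROMPT = (
    "If the sprinkler is on, the grass is wet. The grass is wet. "
    "Does it follow that the sprinkler is on? Explain your causal reasoning."
)

model = GenerativeModel("gemini-pro")
response = model.generate_content(PROMPT)
print(response.text)
```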

A simple (and non-scientific) test of Mixtral 8x7B against GEMINI-PRO on my standard causal reasoning prompt, with which I have already tested 7 other LLMs for you to …
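For the open-source side of the comparison, a rough sketch of sending the same kind of prompt to Mixtral 8x7B via Hugging Face transformers. The instruct checkpoint mistralai/Mixtral-8x7B-Instruct-v0.1 is the published model ID; the prompt is again an illustrative stand-in, and running the full model in fp16 needs substantial GPU memory.

```python
# Minimal sketch: running the same prompt against Mixtral 8x7B locally.
# Requires transformers >= 4.36; the fp16 weights need ~90 GB of GPU
# memory, so a quantized variant or hosted endpoint may be preferable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mixtral-8x7B-Instruct-v0.1"
PROMPT = (
    "If the sprinkler is on, the grass is wet. The grass is wet. "
    "Does it follow that the sprinkler is on? Explain your causal reasoning."
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Mixtral-Instruct expects the [INST] chat format; apply_chat_template
# builds it from a plain message list.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": PROMPT}], return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```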
