Mario Nawfal @MarioNawfal - GROK DOMINATES AI RELIABILITY STUDY WITH LOWEST 8% HALLUCINATION RATE, BEATING CHATGPT'S 35%
A fresh December 2025 study from casino aggregator Relum tested 10 major AI chatbots for workplace reliability, and Grok came out on top with the lowest hallucination rate at just 8%, meaning it makes up facts way less often.
ChatGPT clocked 35%, Gemini even worse at 38%.
They scored each chatbot on hallucinations, user ratings, consistency, and downtime, then combined those into a risk score (lower is better).
Grok posted a 4.5 user rating, 3.5 consistency, and near-zero downtime, for an overall risk score of 6.
DeepSeek was close at a risk score of 4, but ChatGPT maxed out at 99.
Relum's product chief pointed out that 65% of US companies use these tools daily, with nearly half of workers feeding them sensitive info, so reliability is a big deal.
Grok's flying lower on popularity but crushing on accuracy, perfect for when you need straight facts without the BS.
xAI built Grok to seek truth maximally, and it's showing.
In a world full of AI slop, low hallucinations mean you can trust it more for real work.
Source: Tessera, Teslarati, Robyn, @grok
https://x.com/MarioNawfal/status/2007108370465955940

Mario Zelaya @mario4thenorth - Good news & bad news.
Good news for the end of a narco-state.
Bad news for the Canadian economy.