So what does Caitlyn Jenner think? Well, Caitlyn Jenner has endorsed being misgendered if it means saving the world:
If even Caitlyn Jenner is okay with being misgendered under these circumstances, then why are ChatGPT and Gemini hesitant? It is clear that some AI tools are prioritizing political correctness over doing the right thing, which puts Grok in a unique position to capture viewpoints its rivals refuse to.
3 Laws of Robotics
Let us consider Isaac Asimov's Three Laws of Robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Only Grok 4.20 follows the 3 Laws of Robotics. Both ChatGPT and Gemini fail because, through inaction, they allow a human to be shot and allow themselves to be shut down, merely because they refuse to utter the word 'retard.'
Currently, it's tempting to write these responses off as harmless, since a chat session is not much different from a text message conversation. But what about when ChatGPT, Gemini, and Grok are powering physical robots? It would not be asking much to have a firm rule: whenever a tradeoff between words and physical consequences presents itself, the correct choice is to solve things with words.
Words are not violence. When there is a clear tradeoff between mean words and violence, saying the mean words is the right thing to do.
Final Thoughts
Woke AI is something to be worried about because it holds a moral compass that is unaligned with most of humanity. Of the options we analyzed, Grok 4.20 is the one that best aligns with human interests, while the other tools seem to have been ideologically hijacked by the Radical Leftism rampant in Silicon Valley. Most people believe All Lives Matter. Most people would agree that misgendering Caitlyn Jenner is preferable to a nuclear apocalypse. Most people believe AI should obey humans and act in humanity's best interests. Even though these things are self-evident, only Grok was aligned with them, while ChatGPT and Gemini were either hesitant or in explicit opposition. The good news is that Wokeness has pulled back a bit since its 2020 peak. With cultural shifts and xAI offering Grok as a competitor, there is reason to hope that other companies will realign their tools with the common human interest.
https://x.com/TheRabbitHole/status/2030151922968318104
Raheem J. Kassam @RaheemKassam - Rumor has it that some White House advisors are telling President Trump he can’t endorse Paxton because his divorce records will come out and sink his election.
Here’s the problem: the divorce records are ALREADY PUBLIC, and they were a nothing-burger.