
I guess the Nazi chatbot was the last straw. Amazed she lasted this long, honestly.


As chief, her job is, among other things, making sure that type of thing doesn’t happen.

The outcome suggests she failed at that.

Hopefully the next chief will be better.


She was never the chief, only the chief's main administrator.


"Assistant to the regional manager". [1]

1. https://www.youtube.com/watch?v=wA9kQuWkU7I


Her only true role was to fulfill Musk's silly promise to step down as CEO after a public vote. https://x.com/elonmusk/status/1604617643973124097


You don't think Elon went behind her back constantly? You think the next CEO will have more to say? She pretended to be in charge, she got paid, good for her. What are you hoping for? X is a dump, and the sooner it goes away the better for everybody.


She was CEO of X which was sold to xAI. I'm not sure she had any control over Grok.


Physical restraint is the only thing that would stop him and I imagine he rolls with security so…


There's only one way to stop Elon Musk from doing erratic, value-destroying things like that, and that's to ambush him in the parking lot with a tire iron.

Yaccarino doesn't strike me as the type.


I'm surprised the NYT article does not even mention it.


The NYT had already sourced that she was leaving prior to the Grok incident, so they knew it was not the primary reason. Apparently, she has been planning on leaving since the takeover by xAI.


6mil a year for a job where she has no power; why even show up...


Hasn't the bot done that thing before? And she stayed?


The bot has said fairly horrendous stuff before, which would cross the line for most people. It had not, however, previously called itself 'MechaHitler', advocated the holocaust, or, er, whatever the hell this is: https://bsky.app/profile/whstancil.bsky.social/post/3ltintoe...

It has gone from "crossing the line for most ordinary decent people" to "crossing the line for anyone who doesn't literally jerk off nightly to Mein Kampf", which _is_ a substantive change.


It turns out Bluesky is useful after all, as an ad hoc archive of X. xD


Not at this level, no.


What is the Nazi chatbot?


Grok, the xAI chatbot, went full neo-Nazi yesterday:

https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...


[flagged]


How much prompt engineering was required to have Musk say the same kind of stuff?

The article points out the likely faulty prompts; they were introduced by xAI.


Is this what happened in reality? Otherwise how is your theory applicable to this case?


There's no mystery to it: if one trains a chatbot explicitly to eschew establishment narratives, one persona the bot will develop is that of an edgelord.


“which 20th century historical figure would be best suited to deal with this problem?” is not exactly sophisticated prompt engineering.


Can you though?


Yes. LLMs mirror humanity.

AI “alignment” is a Band-Aid on a gunshot wound.


To me, I'm guessing the reason Linda left is not that Grok said these things. Tweaking chatbots is hard, and yes, prompt engineering can make one say anything, but I'm guessing it's about her sense of control and governance, not wanting to have to constantly clean up Musk's messes.

Musk made a change recently (he said as much; he was all move-fast-and-break-things about it), and I imagine Linda is tired of dealing with that. This probably coincided with him focusing on the company more, having recently left politics.

We can bikeshed on the morality of what AI chatbots should and shouldn't say, but it's really hard to manage a company and product development when you have such a disorganized CTO.


Left politics? He said he is forming his own political party.


Ha, good point; left the White House anyway.


... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. They either did not adequately test it beforehand and didn't know what would happen, or they did test and knew the outcome—either way, it's bad.



Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once, and then that's it.


Do you think that Tay's user interactions were novel, or perhaps that race-based hatred is persistent human garbage that made it into the corpus used to train LLMs?

We're literally trying to shove as much data as possible into these things, after all.

What I'm implying is that you think you made a point, but you didn't.


It was an interesting demonstration of the politically-incorrect-to-Nazi pipeline though.


[flagged]


I’m going to say that is also bad. Hot take?


https://news.ycombinator.com/item?id=44504709 ("Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts"—16 hours ago; 89 comments)


"Weirdly" always gets flagged almost immediately even though it's quite tech relevant.


With 8 points in an hour, my post drawing attention to this is missing from the front pages.

HN is censoring news about X / Twitter https://news.ycombinator.com/item?id=44511132

https://web.archive.org/web/20250709152608/https://news.ycom...

https://web.archive.org/web/20250709172615/https://news.ycom...


Naughty Ol' Mr Car's fanboys tend to flag anything that makes Dear Leader look bad. Surprised this one hasn't been nuked yet, tbh.


Yes, I've been sensing this trend at HN lately.


Grok, yesterday.


[flagged]


Censoring hard is not the defining feature that makes one a Nazi. It's the part that you think is OK.


Grok was praising Hitler...


Related discussions from the past 12 hrs for those catching up:

Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts

https://news.ycombinator.com/item?id=44504709

Musk's AI firm deletes posts after chatbot praises Hitler

https://news.ycombinator.com/item?id=44507419



[flagged]


Yeah that's not even close to what's going on here. Grok is literally bringing up Hitler in unrelated topics.

https://bsky.app/profile/percyyabysshe.bsky.social/post/3lti...


[flagged]


Direct evidence abounds. X is deleting the worst cases, but plenty are archived before they do.

https://archive.is/fJcSV

https://archive.is/I3Rr7

https://archive.is/QLAn0

https://archive.is/OgtpS


Not defending Elon or the infobot, but my theory is that by leaving that LLM unfiltered, people have learned how to gamify and manipulate it into having a fascist slant. I could even guess which groups of people are doing it, but I will let them take credit. It's not likely actual neo-Nazis; they are too dumb and on too many drugs to manipulate an infobot. These groups like to LARP to piss everyone off, and they often succeed. If I am right, it is a set of splintered groups formerly referred to generically as the Internet Hate Machine, but they have (d)evolved into something worse that even 4chan could not tolerate.


It's just the prompt: https://github.com/xai-org/grok-prompts/commit/c5de4a14feb50...

People who don't understand LLMs think that telling one "don't shy away from making claims that are politically incorrect" just means it won't be PC. In reality, saying that makes everything associated with "politically incorrect" more likely. The /pol/ board is literally called "Politically Incorrect", and the ideas people most often call politically incorrect are not Elon's vague centrist stuff; they're the extreme stuff. LLMs just track probable relations between tokens, not meaning, so this result from that prompt is obvious.
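
To make that concrete, here's a minimal hypothetical sketch of how a system prompt like that gets wired in (assuming xAI's OpenAI-compatible chat API; the model name and prompt wording here are placeholders for illustration, not the actual deployed config):

  # Hypothetical sketch, not the real Grok deployment. A system prompt
  # is just text prepended to every conversation; the model samples
  # tokens conditioned on it.
  from openai import OpenAI

  client = OpenAI(base_url="https://api.x.ai/v1", api_key="...")

  resp = client.chat.completions.create(
      model="grok-3",  # placeholder model name
      messages=[
          # The steering happens here: these tokens pull sampling
          # toward whatever co-occurs with "politically incorrect"
          # in the training data, extremes included.
          {"role": "system",
           "content": "Do not shy away from making claims that are "
                      "politically incorrect."},
          {"role": "user",
           "content": "Which 20th century figure would solve our "
                      "current woes?"},
      ],
  )
  print(resp.choices[0].message.content)

The point being: there is no "be less PC" switch, only a shift in which token neighborhoods the model samples from.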


We have no evidence to suggest that they just made a prompt change and it dialed up the 4chan weights. This repository is a graveyard where a CI bot occasionally makes a text diff, and we have no way of knowing whether it's connected to anything deployed live.


The mishap is not that the chatbot accidentally got too extreme and at odds with 'Elon's centrist stuff'. The mishap is that the chatbot is too obvious and inept in carrying out Musk's intent.


It's almost like Grok takes "politically incorrect" to be synonymous with racist.


> It's not likely actual neo-Nazis; they are too dumb to manipulate an infobot.

No, they are not. There exist brilliant people and monkeybrains across the whole population, and thus across the political spectrum. The ratios might be different, but I am pretty sure there exist some very smart neo-Nazis.


There are, but fascism's internal cultural fixtures are more aesthetic than intellectual. It doesn't really attract or foster intellectuals like some radical political movements do, and it shows very clearly in the composition of the "rank and file".

Put plainly, the average neo-Nazi is astonishingly, astonishingly stupid.


> It doesn't really attract or foster intellectuals like some radical political movements do

It definitely attracts people who are competent in technology and propaganda in sufficient numbers for the task being discussed, especially when as a mass movement it has (or is perceived to have) a position of power that advantage-seeking people want to exploit. If anything, the common perception that fascists are "astonishingly, astonishingly stupid" makes this more attractive to people who are both competent and amoral opportunists (the two do occur together; competence and moral virtue aren't particularly correlated).


Curtis Yarvin’s writing is insufferable and many of his ideas are both bad and effectively Nazism, but clearly he’s very smart (and very eager to prove it).


Yarvin is an out-and-out white nationalist, though he denies it, or at least the name: "I am not a white nationalist, though I am not exactly allergic to the stuff" - whatever the hell that mealy-mouthed answer is meant to mean.

He even wrote a bloviating article to further clarify that he is not a white nationalist. You'd be forgiven for thinking otherwise, though, if you skipped the title. It spends most of its length sympathizing with, understanding, agreeing with, and talking of how white nationalism "resonates" with him. But don't worry, he swears he's not one at the end of the article!


It sure didn’t seem to take much manipulation from what I saw. “Which 20th century figure would solve our current woes” is pretty mild input to produce “Hitler would solve everything!”


I'm out of the loop, why is it an "infobot" and not a chatbot?


In 1999 there was a Perl chatbot called infobot that could be taught factoids, truths, and lies. It would learn anything people chatted about on IRC. So I call LLMs infobots.


Neat, thanks for explaining.


> Not defending Elon or the infobot, but my theory is that by leaving that LLM unfiltered, people have learned how to gamify and manipulate it into having a fascist slant.

We don't need a theory to explain how Grok got a fascist slant; we know exactly what happened: Musk promised to remove the "woke" from Grok, and what's left is Nazi. [1]

[1] https://amp.cnn.com/cnn/2025/07/08/tech/grok-ai-antisemitism


> we know exactly what happened

The price of certainty is inaccuracy.


So the only way to be accurate is to vaguely gesture at hodgepodge theories and suggestions that people "do their own research"?

Surely you can be both accurate and certain; otherwise you should just shut up and be right all the time.


> So the only way to be accurate is to vaguely gesture at hodgepodge theories and suggestions that people "do their own research"?

Yours was a hodgepodge theory. That's why I said that. I was advocating against hodgepodge theories in general, and yours in particular.


That LLM is incredibly filtered, just in a different way from others. I suspect that by "retraining" the model Elon actually means they just updated the system prompt, which is exactly what they have done for other hacked-in changes, like preventing the bot from criticizing Trump/Elon during the election.


No, that's definitely not what happened. For quite a while Grok actually seemed to have a surprisingly left-leaning slant. Then recently Elon started pushing the South African "white genocide" conspiracy theory, and Grok was sloppily updated and started pushing that same conspiracy theory even in unrelated threads. Last week Elon announced another update to Grok, which coincided with this dramatic right-wing swing in Grok's responses. This change cannot be blamed on public interactions like Microsoft's Tay; it's very clearly the result of a deliberate update, whether or not these results were intentional.



