One summer I got a ridiculous number of flat tires in my 300,000-mile Jeep.
One of those happened in a heavy rainstorm. The ground was soft, and I don't know why exactly, but I couldn't get the jack to lift the Jeep high enough to get the tire off.
I was on a country highway near my home, with no cell service and maybe one car every ten minutes. I tried jacking it up in a few spots, even with the Jeep halfway into the lane, but I was nervous: the rain limited visibility for other drivers.
A man pulled up behind me in a Subaru. He wasn't local; he had come from Tennessee to paint a local scenic spot. He not only lent me his jack, but he got out in the pouring rain and helped until it was done. We both had raincoats, at least.
He said that just the day before, he'd had a flat of his own, and someone had stopped and helped him through a problem he couldn't get around either.
I don't know if that's the nicest thing a stranger has ever done for me, but I sure appreciated it. Stopping to help may be a small kindness, but it can feel like a miracle to the recipient.
Parker's Automotive is a pretty cool YouTube channel: the guy goes around in his truck and mostly does work for free. Seems like every day he's finding someone stopped in a lane.
He's been able to parlay it into an awesome YouTube career, and the number of people who stop on the street to sing his praises is unreal. Sometimes 15+ in one video, it's nuts. He's a hero to the people. Love to see it.
> It's mostly clickbait/outrage for the sake of headlines & clicks.
That's just how populist messaging works, even before the internet. You say outrageous stuff on the radio and then people listen - just ask Adolf Hitler.
We know for sure it works, particularly when the medium is new and people haven't built up a strong sense of discernment.
Thanks. At least in this particular case, the question was pretty clear, so I skipped the description. Also, I wish HN would automatically expand links when the whole post has nothing else.
I'm a vegan, and it's insane how many bots, paid for by the meat industry, promote really weird anti-vegan ideas on social media.
This stuff spreads into real life. I run into folks IRL who repeat the same lines the bots do.
What online bots are amazing at is amplification. They take an idea that already exists and blast the opposition with comments promoting their misinformation. That lends the idea some credence, so when grandma Googles it there's discourse on it, or Fox can use online quotes to say "Hey, people are talking!"
A lot of the weird shit Trump talks about is bot-promoted misinformation. Like, A LOT.
There have been whole subreddits that are just bots and paid PR folks promoting weird stuff or trying to "disprove" things like solar panels or vegan diets.
With online bot stuff it isn't about quality, it's about repetition until the ideas land with someone. It's very cheap to blast people with negativity, and eventually it lands.
So, it totally works when used correctly. I think to most people that's pretty obvious.
The fact that countries pour a good amount of money and resources into these state-sanctioned bot farms proves they work.
Of course they do. And yes, there is proof for AI chatbots now (see the link in the other post), but over the last 10 years (since Bob Mercer's purchase of Cambridge Analytica) the tools were sock-puppet networks and basic auto-reply bots. They were microtargeted to individual psychology, though. So yes, they work.
We now have multiple networks discovered in multiple countries, e.g. Cambridge Analytica, Team Jorge in Israel, and the Internet Research Agency in Russia. And those are just the ones we know about. Why would multiple countries double down on an idea that doesn't work?
Every right-wing movement in Europe that had any contact with Bannon through his "The Movement" "data analytics" training program has all the outer appearances of running a large bot program, now using LLMs. In Portugal, the origins of the bot network were traced to Angola; in Brazil, to Israel.
Yes, while citing an LLM in the same way is probably not as useful.
"I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a Reddit comment, you derail into a new argument about the strength of sources.
The LLM skips a step, and gets you right to the "unusable source" argument.
I agree. Saying "I googled this and someone holds this opinion" is pretty useless, whether that someone is an LLM or a random poster on the internet.
Still, I'd argue that someone actually doing the legwork, even via a search engine and a reasonable evaluation of a few sources, is often a quite valuable contribution. Sometimes even when it's done to discredit someone else.
OTOH, I participate in a wonderful Discord server community, primarily Italians and Brazilians, with other nationalities sprinkled in.
We rely heavily on connected translation apps, and it feels really great. It would be such a massive PITA to copy every message somewhere outside the app, translate it, and paste it back.
Discussions usually follow the sun, and when someone who doesn't speak, say, Portuguese wants to join in, they usually use English (sometimes German or Dutch) and just jump in.
We know it's not perfect but it works. Without the embedded translation? It absolutely wouldn't.
I also used a Telegram channel with a similar setup pretty heavily, and it was even better, with transparent auto-translation.
Reddit would be even worse if the translations were better; as it is, you don't waste much time because the bad translation hits you right in the face. Never translate something without asking about it first.
When I search for something in my native tongue, it is almost always because I want the perspective of people living in my country who have experience with X. Now the results are riddled with Reddit posts from all over the world with crappy translations instead.
I think we should distinguish between two features here, which get very different reactions:
1. An automatic translation feature.
2. Being able to submit an "original language" version of a post in case the translation is bad/unavailable, or someone can read the original for more nuance.
The only problem I see with #2 is malicious usage, where the author deliberately sows confusion/outrage or evades moderation by presenting fundamentally different messages in each language.
I think the audience that would be interested in this is vanishingly small, there exist relatively few conversations online that would be meaningfully improved by this.
I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply requiring posts in a specific language. For example, I'd expect programmers who don't speak any English to have, on average, a far lower skill level than those who know at least basic English.
That's Twitter currently, in a way. I've seen and had short conversations in which each person speaks their own language and trusts the other to use the built-in translation feature.
What LLMs generate is an amalgamation of the human content they were trained on. I get that you want what actual humans think, but that's also basically a weighted amalgamation. Real, actual insight is incredibly rare, and I doubt you see much of it on HN (sorry guys; I'll live with the downvotes).
Why do you suppose we come to HN if not for actual insight? There are other sites much better for getting an endless stream of weighted amalgamations of human content.
I've heard many people say that, and it's been my experience as well.