This suffers from a common pitfall of LLMs: context taint. You can see it is obviously today's front page with slight "future" variation, so the result ends up being very formulaic.
Judging by the reply posted by the OP, the OP probably keeps a pretty humorous tone while chatting with the AI. It's not just the prompt; it's the context too.
The problem is not that it fails to be cheeky, but that "it's funny" is a depressing answer in a context where there was a live question of whether this was a sincere attempt at prediction.
When I see "yeah, but it's funny" it feels like a retrofitted repair job: patching up a first-pass impression that accepted it at face value, in order to preserve a sense of psychological endorsement of the creative product.
Honestly, it feels like what I, or many of my colleagues, would do if given the assignment: take the current front page, or a summary of the top tropes and recurring topics, revise them forward one or two steps of technical progress, and call it a day. It isn't an assignment to predict the future; it's an assignment to predict HN, which is a narrower thing.
Right, because you would read the teacher and realize they don't actually want you to complete the assignment to the letter. So you would answer a request for prediction with jokes.
But it otherwise wouldn't be fun at all. Anthropic didn't exist ten years ago, and yet today an announcement by them would land on the front page. Would it be fun if this hypothetical front page showed an announcement from a future startup that hasn't been founded yet? Of course not.
Thank you. It was a great one-shot and I didn't end up doing any updates. Thrilled to see the work it inspired: Thomas Morford (CSE @ UC Merced, thomasm6m6) did the amazing article/thread generation (in under 100 lines of Python!): https://sw.vtom.net/hn35/news.html ; and Andrej Karpathy (ex-OpenAI, now Eureka Labs, karpathy) did an interesting analysis of the prescience quality of threads and commenters, inspired by a reply that linked the front page from ten years ago for comparison: https://karpathy.bearblog.dev/auto-grade-hn/
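For anyone curious what a sub-100-line generator roughly looks like, here is a minimal sketch. To be clear, this is not the code behind either linked page: it assumes an OpenAI-style chat API, and the model name, prompt wording, and output handling are all my own guesses.

    # Minimal sketch of one-shot "future HN" generation.
    # Assumptions (not the actual code behind the linked pages): an
    # OpenAI-style chat API, and that a single self-contained HTML
    # page is an acceptable output format.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Generate a plausible Hacker News front page dated ten years "
        "from today, as a single self-contained HTML page in the "
        "classic HN layout: 30 ranked stories with domains, points, "
        "usernames, comment counts, and relative timestamps."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model works
        messages=[{"role": "user", "content": PROMPT}],
    )

    with open("news.html", "w") as f:
        f.write(resp.choices[0].message.content)

The whole trick really is that small: one prompt, one completion, one file. Everything interesting lives in the model.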
This was wonderful. 3000 points? I mean, fuck. Among the biggest posts of all time, and definitely of Show HN. The funny thing for me is that of all the work I've done in the last 10 years, probably 100 Show HNs, all different, this was by far the biggest. Some could have been months of work and drew no interest. And this thing, which dropped into my mind and took maybe 30 minutes, demolished them all. It's hilarious that it even beat out legitimate AI posts and contaminated search results with future stories.
One of the funniest things for me was hearing how people tabbed away from the page, only to come back and momentarily think it was the actual HN front page. Hahahahaha! :)
All I can say is, I love you all. Watching it stay at the top for 24 hours... at first it felt like it wasn't something I made. But it was. Cool.
Not necessarily a bad thing, but it's usually some part of the plot, and usually with things going awry or with an emphasis on the scammy side of blockchain.
Examples: Shameless season 11, The Simpsons S31E13, Superstore season 5, The Good Wife S3E13, Grey's Anatomy S14E8, The Big Bang Theory S11E9, Billions season 5, some later seasons of Mr. Robot, etc.
I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
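To make that concrete with a toy example of my own (a word-bigram model standing in, very loosely, for the real thing): the only thing the model below can ever do is sample the distribution it absorbed from its training data. Its learned counts are its bias, and every output is "hallucinated" from them; there is no separate mode where it does something else.

    # Toy illustration: a bigram "language model" whose only
    # capability is sampling the transition counts it learned.
    # The counts ARE its bias; every output is a sample from them.
    import random
    from collections import defaultdict

    corpus = "the model samples what the data made likely".split()

    # "Training": count word-to-word transitions.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    # "Inference": generate a continuation by sampling the bias.
    word, output = "the", ["the"]
    for _ in range(6):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)

    print(" ".join(output))  # e.g. "the model samples what the data made"

Scale that up by many orders of magnitude and add attention, and the mechanism is still the same kind of thing: sampling from a learned distribution, whether the result happens to be true or not.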