Hacker News | Lewton's comments

> That all of earth is round, comes not from direct observation (except for the very few people who have been in space).

or anyone who's been in a plane?



Wow, that is a weird and interesting plot. Thanks for that; I'll take it as a recommendation.


What kind of planes are you flying?

I mean, I just checked, and it seems that under ideal conditions one could see the curvature of the earth from 10.5 km up, but the curvature was not really convincing to me the last time I took a long flight. Was your experience different?
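For what it's worth, here are my rough numbers (assuming a spherical earth of radius R ≈ 6371 km and ignoring atmospheric refraction):

    dip of the horizon:   θ = arccos(R / (R + h)) = arccos(6371 / 6381.5) ≈ 3.3°
    distance to horizon:  d ≈ √(2Rh) = √(2 · 6371 km · 10.5 km) ≈ 366 km

So the effect at 10.5 km is real but subtle, which matches my impression from the window seat.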


10.5 km (34,400 ft) is an ordinary cruising altitude for a commercial airliner.

The curvature of the earth can be quite apparent on a clear day at that altitude when flying over water. Or the midwestern US.


Well thanks, then this is what I will recommend the next time I engage with flat earthers.


I'm hard stuck on fish probability 59% fml


> I heard majority of the users are techies asking coding questions.

Citation needed? I can't sit on a bus without spotting some young person using ChatGPT


I’m not arguing against your basic point, as I completely agree that arriving on time is my own responsibility, but when I do what you suggest, I forget my house keys lol


If you can, you might want to install a door lock that uses a number code instead of a key.


> since 100% of people already agree that you should never be afraid to ask basic but obviously important questions.

You don’t have a great mental model of how most people think


Breaking down muscle is good; not letting the muscle recover is bad

Same kind of thing


A week ago, I was looking for the name of a semi-obscure late-'90s computer game. Google could not help me with the description I gave.

I described it to ChatGPT, and we had a back-and-forth where I explained why the games it suggested weren’t right. It eventually found the correct game (Nocturne) and was able to explain that half of the things I remembered from the game were just plain wrong, which is why Google couldn’t find it.

ChatGPT helping me with my hallucinations, go figure


I've had a conversation almost exactly like this too, about an obscure DOS-based 3D fighting game. And some pessimistic instinct tells me I should worry, like this capability will be optimized away in future versions. It's as if that precise pang of satisfaction is tied to a jaded feeling that I can't trust it to last if it ultimately depends on infrastructure and incentives that will lead OpenAI toward eating the world like Google did.

I'm getting ahead of myself, to be sure. Google absolutely deserves to be stomped, so for now I guess we just ride out this wave.

Edit: I see elsewhere that others are converging on this idea and expressing it more clearly, namely that we may be in a honeymoon period.


> And some pessimistic instinct tells me I should worry, like this capability will be optimized away in future versions.

A big difference here is that open-source models are available, effective, and checkpointed at this point in time!

Worst-case scenario, we stop getting new open-source models and are forced to query newer, suboptimal models only for recent information.


Let me guess: One Must Fall 2097 or FX Fighter?


I loved One Must Fall, but it's not that; it was a lot more similar to FX Fighter, but older and grainier. I don't think I ever actually found out what it was.


One Must Fall was AMAZING! It had so much character and such satisfying feedback in it!

(Or at least that's how I remember it)


No, it was beyond amazing. Both the pilots and robots had a distinct character and a history behind them, and some combinations just seemed a better fit than others. Like I can't imagine the kickboxer Milano piloting anything else besides a Jaguar or maybe a Shadow. I also liked how after every match, you got a news report with highlights of your match, with "screenshots" showing key moments - not sure how they managed to do that in an old DOS game - nothing short of genius IMO. I can't think of any other game which has this feature.

The manual included with the game was also a very interesting and fun read - full of humor and quips about the game devs - even including a humorous piece on the main developer's baby daughter. Reading the manual made the game and the game devs feel so much more alive. I haven't played OMF in decades, yet I still remember most of their names - Rob Elam, Ryan Elam, the genius music composer Kenny Chou[1] and not to forget the baby, Bethany Kay Elam. To this day I wonder what she's up to, and whether she's gotten over her habit of slobbering over the keyboard...

For me, OMF2097 was the definitive fighting game of all time. It's a pity that almost no one ever thinks of it; it's always either Tekken, Street Fighter, or MK. All great games, mind you, but they're nothing like OMF.

[1] Kenny Chou uploaded a remake of the OMF theme song a while ago, in case you weren't aware: https://www.youtube.com/watch?v=UvlVaQl7kEk


This is great; I never thought to use ChatGPT for this. I had been trying to remember a game from extremely early memories, one I was only able to play for a single weekend at an uncle's house in my youth when my parents were out of town.

It bugged me on and off for the better part of a decade, and I couldn't figure it out by describing it to Google - the only things I knew for sure were a few graphical UI screens seared into my brain; the rest was far too generic to really narrow things down.

In the end, ChatGPT got it correct on the first try from a minimal description, confirmed by watching a YouTube playthrough and the memories coming back immediately.

Master of Orion, if anyone was curious :)


I must now point out how, paralleling this article, MoO 1 is known for its elegant simplicity compared to most games in the 4X genre it created/popularized.


Master of Orion was fantastic!

MoO 2 is considered the best, III was a bit odd, and the new one apparently isn't bad; I'm going to try it out.

Thanks for bringing that up.


I dumped thousands of hours into MoO 2. Great game. I haven't touched it in years.


I have tried the same, and had ChatGPT utterly fail.

> me: what's the relatively recent movie with a pig on the movie poster

> gpt: The movie you're referring to is likely "Pig," a 2021 American drama film. It stars Nicolas Cage as a truffle hunter who lives alone in the Oregon wilderness. He embarks on a quest to find his beloved foraging pig after she is kidnapped. The film was directed by Michael Sarnoski.

> me: no, that movie does not have a pig on the poster

https://en.wikipedia.org/wiki/Pig_(2021_film)#/media/File:Pi...

> gpt: My mistake, let's correct that. Given your description, another film that fits the criteria of having a pig on its movie poster and is relatively recent is "The Menu" (2022). This film features a darkly comedic and thrilling story set against the backdrop of an exclusive restaurant on a remote island, where a young couple experiences a very unusual dining experience. The pig on the poster is part of the film's intriguing promotional material, hinting at the film's critique of culinary culture and excess.

https://en.wikipedia.org/wiki/The_Menu_(2022_film)#/media/Fi...

> me: No, that movie does not have a pig on the poster

> gpt: It seems I misunderstood your initial request. After a more detailed search, I couldn't find a recent movie specifically featuring a pig prominently on its movie poster beyond the titles we've discussed. Could there be any other details about the movie you're thinking of that might help narrow it down? For instance, any specific elements of the plot, actors involved, or even the genre might provide more clues to identify the correct film.

The movie I was looking for was "The Hunt": https://en.wikipedia.org/wiki/The_Hunt_%282020_film%29#/medi...


Inb4 we all tell you you're using it wrong. There are certainly better ways to do this, but this is really the major thing that drives me nuts. No matter how many times I tell some LLM I want it to ask clarifying questions to provide better answers, it just won't. You end up doing exactly this, guessing what information might trigger the right recall.


I've found that there are often different trigger phrases I can use to get an LLM to "change its mind".

For Llama, I can just say "Are you sure?" and it will change its tune (unless it's quite "certain" about the results).

Qwen is more insistent, but will change course if I say "I looked it up and it says you're wrong".


If we formalize what might trigger the recall, we could keep a mental model of it - maybe call them bangs, something like !wikipedia that would only index data sourced from training on Wikipedia... you might be on to something.


> No matter how many times I tell some LLM I want it to ask clarifying questions to provide better answers, it just won't

You're telling me there's a modern frontier model that refused to ask clarifying questions after you told it to?


I can't tell if you're being sarcastic, but yes: when it asks questions, they are trivial disambiguations of language, and never the information-seeking kind you might expect when doing more than a superficial investigation


If you can't tell if I'm being sarcastic about an LLM asking clarifying questions, you are possibly not great at using LLMs.

Prompting isn't necessarily the career some people wanted or were sold, but it's not a bad idea to practice a bit and build a sense of what a clear and effective prompt looks like.

-

To be clear, I get that telling people it's a "you" problem every time an issue with LLMs comes up isn't helpful... but sometimes the disconnect between someone's claimed experience and what little most people can actually agree LLMs are capable of is so great that it must be PEBKAC.

I just tried the original checkpoint of GPT 3.5 Turbo and it was able to handle drilling up and down in specificity as appropriate with the prompt: "I need you to help me remember a movie. Ask clarifying questions as we go along."
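In case it helps anyone reproduce this, here's a rough sketch of that loop using the OpenAI Python SDK (the model name and the messages are just illustrative):

    # Minimal sketch: a back-and-forth "help me remember a movie" loop.
    # Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system",
         "content": "I need you to help me remember a movie. "
                    "Ask clarifying questions as we go along."},
        {"role": "user",
         "content": "It's a relatively recent movie with a pig on the poster."},
    ]

    while True:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print(answer)
        # Feed the model's answer back into the history, then let the user respond.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": input("> ")})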


I think it depends a lot on what LLM and tooling you have access to.

I had great results at work prototyping agents for specific tasks that answer in a specific style, including asking appropriate clarifying questions.

But at home with just free/local options? I've nowhere near the same settings to play with and only had very mixed results. I couldn't get most models to follow simple instructions at all.


Nobody has more PEBKAC problems than me. It might be that my experience is colored by older models. I can give it another shot. But I do use LLMs quite a lot and have gotten decent and surprising functionality out of them in the past. I was just consistently vexed by this one thing when doing information-seeking, search-like activity.


The recent web search integration actually made ChatGPT much worse at this specific kind of search (I want a piece of media, but I only remember characteristics of it)

If you explicitly tell it not to search, it's less likely to anchor itself in trying to search the web and going down rabbit holes.

It's also more likely to pause at times and ask the right questions to nudge you.


Agreed, it has been much more valuable to me for it to rely on its training than its web searches.


Ha ha, you should see what Google returns when you ask it for the recent movie with a pig on the movie poster.

(Hint: it is a film from 2021 starring Nicolas Cage.)


I've done the same thing with movies quite a few times now. I'd mention a scene or two from the movie, approximately what decade it came out in, and some of its themes, and after a couple of back-and-forths it was able to return the name that I had forgotten.


Wow! A million thanks goes to you! I never thought of using ChatGPT to find an old book I had 30 years ago! I just found it! Epic! I use ChatGPT but never tried this before! Thank you, thank you thank you!


> Where they can't afford the medication, or no medication seems to work, or they have an addiction problem and there just isn't any way for them to get into a treatment program

I would consider two of these absolutely monstrous reasons to assist someone in killing themselves. A broken society opting to kill its citizens because it can’t be arsed to fix itself is next-level evil


> A broken society

I don't think this is necessarily broken by any reasonable standard. If no medication exists, that's not a "broken society". Nor is a lack of money to give bespoke treatment to everyone, no matter how expensive it is. If that's broken, then no society in history has ever not been broken, which is just a poor definition.


Yeah, I wonder how people who say this stuff feel about the homeless. How do they feel about a tax increase to fix a "broken society"?


We feel fine. The ones against taxes never cared and pay the least proportion.


This us-vs-them talk is honestly pointless. You don't speak for anyone but yourself.


- It’s a fact of life that people die.

- Some of those deaths are suicides.

- It is pure fantasy to imagine we can have a society with 0 suicides (when accounting for scale, complexity, and size).

The choice then becomes: help, or punish.

It is absolutely monstrous to punish, and helping does not mean killing every Tom, Dick and Harry with a case of the Monday blues.


You're getting downvoted, but honestly, I agree with you.

That said, I don't have any actual insight into the process; I was just extemporizing. So if it sounds fucked up, that's because _I_ fucked up and didn't think long enough about circumstances where someone would be approved for MAID instead of offered other solutions. It's entirely possible, and indeed likely, that if you can't afford medication or treatment, there are options in place to get that to you instead of approving your MAID appointment.

Please don't judge Canada too harshly just because I misspoke. =)


It’s all good; I didn’t take your comment as an indication of how it actually functions in Canada. I was just reacting to the implications of your examples


> If you sign out and view a user you can see their posts

Only for prolific Twitter users, and only a random mishmash of 2+ year old tweets. Usually it just asks you to sign up.


It's a Substack blog; there's no trickery going on. It just means she used https://jessicawildfire.substack.com/ until now

