Hacker News | ericol's comments

This applies to many different things, depending on the pair of languages you are using.

In Spanish the closest approximation would be "ni mal ni bien" ("neither bad nor good"), but I understand the Chinese expression leans strongly toward "not being wrong".

Not so long ago (I'm 50+, Spanish native speaker, and I've spoken English for the past 30 years almost daily) I learnt about "accountability".

Now, before I get a barrage of WTFs: the situation is that in Spanish we only have "responsabilidad", and that covers both responsibility and accountability, with a strong lean toward responsibility.

So basically we recognise what it is to be responsible for something, but the notion of being accountable is seriously diluted.

The implications of this are enormous, and this particular thought exercise I'll leave to people who spend more time thinking about these things than I do.


> Ok, so does anyone remember 'Watson'? It was the chatgpt before chatgpt. they built it in house

I do. I remember going to a talk once where they wanted to get people on board with using it. It was 90 minutes of hot air. They "showed" how Watson worked and how to implement things, and I think every single person in the room knew they were full of it. Bear in mind, we were all engineers, and there were no questions at the end.

Comparing Watson to LLMs is like comparing a rock to an AIM-9 Sidewinder.


Watson was nothing like ChatGPT. The first iteration was a system specifically built to play Jeopardy. It did some neat stuff with NLP and information retrieval, but it was all still last-generation AI/ML technology. It then evolved into a brand that IBM used to sell its consulting services. The product itself was a massive failure because it had no real applications and was too weak as a general-purpose chatbot.

I had no idea about what Watson was initially meant to solve.

I do remember they tried to sell it - at least in the meeting I went to - as a general-purpose chatbot.

I did briefly try to understand how to use it, but the documentation was horrendous (as in, "totally devoid of any technical information").


Watson was intended to solve fuzzy optimization problems.

Unfortunately, the way it solved fuzzy was 'engineer the problem to fit Watson, then engineer the output to be usable.'

Which required every project to be a huge custom implementation lift. Similar to early Palantir.


> Watson was intended to solve fuzzy optimization problems.

> Unfortunately, the way it solved fuzzy was 'engineer the problem to fit Watson, then engineer the output to be usable.'

I'm going to review my understanding of fuzzy optimization, because that last line doesn't fit my picture of it.


The reason LLMs are viable for use cases that Watson wasn't is their natural language and universal parsing strengths.

In the Watson era, all the front- and back-ends had to be custom engineered per use case. Read: huge IBM services implementation projects that the company bungled more often than not.

Which is where the Palantir comparison is apt (and differs). Palantir understood their product was the product, and implementation was a necessary evil, to be engineered away ASAP.

To IBM, implementation revenue was the only reason to have a product.


> Read, huge IBM services implementation projects that the company bungled more often than not

Well this is _not_ what they wanted to sell in that talk.

But the implementation shown was über vanilla, and once I got home I found the documentation was close to nonexistent (or, at least, not even trying to be what the docs for such a technology should be).


If anyone is curious to see what Watson actually was, you can find it here (it was nowhere near a generalized large language model -- it was mostly built to win at Jeopardy): https://www.cs.cornell.edu/courses/cs4740/2011sp/papers/AIMa...

That's not a fucking triangle.

(It's Friday night, people, it's a joke; I have no idea what the article is talking about, I just looked at the picture)


I had to reason with my brain before it would accept it as a triangle. It has 3 sides and 3 corners so...


It's one of those things where it's technically correct but the headline is misleading. When you say "a triangle" without any qualification as the headline does, people are going to interpret that as a good old fashioned triangle. Using the term without clarification that you mean spherical geometry is kind of underhanded writing, imo.


The title attribute of the article is <title>A hyperbolic triangle with three cusps</title>


It's a mild form of clickbait.


I think it's just a normal ages-old pattern for writing headlines that pique people's curiosity. It's super common in popular math in particular, because math is always about generalizing. There's a fine line between that and actual clickbait meant to actively mislead.


The sides are semicircular arcs, but yeah, it is not a Euclidean triangle.
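
For anyone puzzled by the "three cusps" in the title: assuming the article means an ideal hyperbolic triangle (all three vertices at infinity, which is the standard reading), the angle-defect form of Gauss-Bonnet explains why it still behaves like a triangle. In a hyperbolic plane of curvature $K=-1$:

```latex
\mathrm{Area} = \pi - (\alpha + \beta + \gamma)
\quad\Longrightarrow\quad
\mathrm{Area}_{\text{ideal}} = \pi - (0 + 0 + 0) = \pi
```

So even with all three angles equal to zero, the figure has three sides, three (ideal) vertices, and a finite area of exactly $\pi$.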


Even though the doc _might_ be AI generated, that repo is Addy Osmani's.

Of Addy Osmani fame.

I seriously doubt he went to Gemini and told it "Give me a list of 30 identifiable issues when agentic coding, and tips to solve them".


So we come with a system prompt?


> read something spit out by the equivalent of a lexical bingo machine because you were too lazy to write it yourself.

Ha! That's a very clever, spot-on insult. Most LLMs would probably be seriously offended by this, were they rational beings.

> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake.

OK, you are pushing it, buddy. My Mandarin is not that good; as a matter of fact, I can handle no Mandarin at all. Or French, for that matter. But I'm certain a decent LLM can do that without my having to reach out to another person, who might not be available or have enough time to deal with my shenanigans.

I agree that there is way too much AI slop being created and made public, but there are also plenty of cases where the use is fair and improves whatever the person is doing.

Yes, AI is being abused. No, I don't agree we should all go scorched earth against even the fair use cases.


As a side note, I hate posts that go on and on, taking three pages to get to the point.

You know what I'm doing? I'm using AI to cut to the chase and extract the (for me) relevant info.


> It's that Microsoft pretends they are required

I think that's what "artificial limitations" means: Microsoft pretending they are required when they are not.


Oh, so that's what the box was about.

I use both Mac and Windows (work / leisure), and on both machines I had a weird dialog appear with no text at all.

I can confirm the dark-pattern switch (as in dark gray / light gray status).


"...in API"

That's a VERY relevant clarification. This DOESN'T apply to web or app users.

Basically, if you want a 1M context window you have to specifically pay for it.


Wow. I've been entertaining this idea for some time now (emphasis on "entertaining"). Seeing that somebody has already actually made this makes me very happy.

Will definitely give it a go.


That's fantastic to hear, thank you! It's always validating to know that others have been thinking along the same lines.

I'd love to hear your thoughts once you've had a chance to try it out. All feedback is welcome!

