Reminds me of a quote from a few years back: "We are entering an era where we use AI to write blog posts from a few keywords for people who use AI to summarize a blog post into a few keywords".
Gemini Pro, neither as-is nor in Deep Research mode, even got the number of pieces or the relevant squares right. I didn't expect it to actually solve it. But I would have expected it to get the basics right and maybe hint that this is too difficult. Or pull up some solutions PDF, or some Python code to brute-force search ... but just straight up giving a totally wrong answer is like ... 2024 called, it wants its language model back.
Instead, Pro Simple just gave a wrong solution, and Deep Research wrote a whole lecture about it, starting with "The Geometric and Cognitive Dynamics of Polyomino Systems: An Exhaustive Analysis of Ubongo Puzzle 151" ... that's just bullshit bingo. My prompt was a photo of the puzzle and "solve ubongo puzzle 151"; in my opinion you can't even argue that this lecture was to be expected given my very clear and simple task description.
My mental model for language models is: an overconfident, eloquent assistant who talks a lot of bullshit but has some interesting ideas every now and then. For simple tasks it's simply a summary of what I could google myself, but asking an LLM saves some time. In that sense it's Google 2.0 (or 3.0 if you will).
Deep research, from my experience, will always add lectures.
I'm trying to create a comprehensive list of English standup specials. Seems like a good fit! I've tried numerous times to prompt it "provide a comprehensive list of English standup specials released between 2000 and 2005. The output needs to be a csv of verified specials with the author, release date and special name. I do not want any other lecture or anything else. Providing anything except the csv is considered a failure". Then it creates its own plan, and I clarify further to make explicitly sure I don't want lectures...
It goes on to hallucinate a bunch of specials and provide a lecture on "2000 the era of X on standup comedy" (for each year)
I've tried this in 2.5 and 3, with numerous time ranges and prompts. Same result. It gets the famous specials right (usually), hallucinates some info on less famous ones (or makes them up completely) and misses anything more obscure.
I tried asking for a list of the most common Game Boy Color games not compatible with the original DMG Game Boy. ChatGPT would over and over list DMG-compatible games instead. I asked it to cross-reference lists of DMG games to remove them, and it "reasoned" for a long time before it showed what sources it used for cross-referencing, and then gave me the same list again.
It also insisted on including "Shantae" in the list, which is expensive specifically because it is uncommon. I eventually forbade it from including the game in the list, and that actually worked, but it would continue mentioning it outside the list.
Overselling is only the tip of the iceberg. The real problem is that a lot of managers base their decision to introduce language models into business processes on cutting-edge Pro-edition demos, while what actually gets used in production is, of course, some cheap Nano/Flash/Mini version.
There is something fucky about tokenizing images that just isn't as clean as tokenizing text. It's clear that the problem isn't the model being too dumb, but rather that the model is not able to actually "see" the image presented. It feels like a lower-performance model looks at the image and then writes a text description of it for the "solver" model to work with.
To put it another way, the models can solve very high-level text-based problems while struggling to solve even low-level image problems, even if underneath both problems use a similar or even identical solving framework. If you have a choice between showing a model a graph or feeding it a list of (x,y) coordinates, go with the coordinates every time.
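A minimal sketch of that last point, assuming a generic chat-style text API (the send_to_llm call below is a hypothetical placeholder, not a real library function): instead of rendering a chart and attaching the image, serialize the raw points into the prompt as plain text.

```python
# Sketch: prefer feeding coordinates as text over a rendered chart image.
# send_to_llm() is a hypothetical stand-in for whatever chat API you use.

def points_as_text(points):
    """Turn (x, y) pairs into a compact, unambiguous CSV-style text block."""
    lines = ["x,y"]
    lines += [f"{x},{y}" for x, y in points]
    return "\n".join(lines)

data = [(0, 1.0), (1, 2.1), (2, 3.9), (3, 8.2), (4, 16.1)]

prompt = (
    "Here is a series of (x, y) measurements as CSV:\n"
    + points_as_text(data)
    + "\nDoes the series look linear or exponential? Explain briefly."
)

# send_to_llm(prompt)  # placeholder; call your model's text endpoint here
print(prompt)
```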
Google pays Mozilla, Mozilla has more money, Mozilla spends more money (especially on compensation for a bloated C-level), Mozilla needs more money, Google threatens to pay less, Mozilla will lube up and bend over.
If they keep their shit together and ride the surveillance wave, then I don't see what should keep them from growing 15-fold by 2050. Their technology is not at all dependent on AGI, which is why I don't understand why they are always thrown in together with OpenAI et al.
Not sure about the meaning of your asterisk, but the Nokian Tyres corporation is not related to Nokia the telecoms co, other than being founded in the same town.
Nokia did manufacture rubber boots though, before they spun off the footwear division in 1990 and went all in on electronics.
This changed in 1988 with the formation of an LLC; in 1995 they went public, and in 2003 the shares still held by the parent company were sold off to Bridgestone.
>I had no idea you've never heard of it. Thanks for keeping us informed.
I see.
In that case, you'll appreciate the fact that the Three Musketeers chocolate bar bears no relationship to Alexandre Dumas, the author of the famed book series featuring D'Artagnan and the three musketeers.
You might also be interested to learn that Zenit launch vehicles are not made by the organization that produces Zenit optics and cameras.
Most crucially, the Lucky grocery store chain in California turns out to be completely different from the Korean Lucky chemical products and electronics conglomerate (known as "Lucky-Goldstar" after merging its chemical and electronics wings, and, currently, as "LG").