I have been using ChatGPT a ton over the last few months and paying for the subscription. I used it for coding, news, stock analysis, daily problems, and whatever else I could think of. I decided to give Gemini a go when version 3 came out to great reviews. Gemini handles every single one of my use cases much better and consistently gives better answers. This is especially true for situations where searching the web for current information is important; it makes sense that Google would be better there. OCR is also phenomenal: ChatGPT can't read my bad handwriting, but Gemini does so easily. The only downsides are in the polish department: there are more app bugs, and I usually have to leave the happen or the session terminates. There are bugs with uploading photos. My biggest complaint is that all links get inserted into Google Search and I then have to manipulate them when they should go directly to the chosen website; this has to be some kind of internal org KPI nonsense. Overall, my conclusion is that ChatGPT has lost and won't catch up because of Gemini's search integration strength.
I consistently have exactly the opposite experience. ChatGPT seems extremely willing to do a huge number of searches, think about them, and then kick off more searches after that thinking, think about it, etc., etc. whereas it seems like Gemini is extremely reluctant to do more than a couple of searches. ChatGPT also is willing to open up PDFs, screenshot them, OCR them and use that as input, whereas Gemini just ignores them.
I will say that it is wild, if not somewhat problematic that two users have such disparate views of seemingly the same product. I say that, but then I remember my own experience from just a few days ago. I don't pay for Gemini, but I do have a paid ChatGPT sub. I tested both for the same product with seemingly the same prompt, and subbed ChatGPT subjectively beat Gemini in terms of scope, options, and links to current decent deals.
It seems (only seems, because I have not gotten around to testing it in any systematic way) that variables like context and what the model knows about you may actually influence the quality (or lack thereof) of the response.
> I will say that it is wild, if not somewhat problematic that two users have such disparate views of seemingly the same product.
This happens all the time on HN. Before opening this thread, I was expecting that the top comment would be 100% positive about the product or its competitor, and one of the top replies would be exactly the opposite, and sure enough...
I don't know why it is. It's honestly a bit disappointing that the most upvoted comments often have the least nuance.
How much nuance can one person's experience have? If the top two most visible things are detailed, contrary experiences of the same product, that seems a pretty good outcome?
Also, why introduce nuance for the sake of nuance? For every single use case, Gemini (and Claude) has performed better. I can't give ChatGPT even the slightest credit when it doesn't deserve any.
ChatGPT is not one model! Unless you manually specify a particular model, your question can be routed to different models depending on what the router guesses would be most appropriate for your question.
Yes, but then what does the grandparent mean by “unless you specify a specific model”? Do they mean “if you select auto, it automatically decides between instant and thinking”?
Because neither product has any consistency in its results, no predictable behaviour. One day it performs well; another day it hallucinates non-existent facts and libraries. These are stochastic machines.
I see that hyperbole is the point, but surely what these machines do is literally predict? The entire prompt-engineering endeavour is to get them to predict better and more precisely. Of course, these are not perfect solutions; they are stochastic after all, just not unpredictably so.
Prompt engineering is voodoo. There's no sure way to determine how well these models will respond to a question. Of course, giving additional information may be helpful, but even that is not guaranteed.
Also every model update changes how you have to prompt them to get the answers you want. Setting up pre-prompts can help, but with each new version, you have to figure out through trial and error how to get it to respond to your type of queries.
I can't wait to see how badly my finally sort-of-working ChatGPT 5.1 pre-prompts will work with 5.2.
It definitely isn’t voodoo, it’s more like forecasting weather. Some forecasts are easier to make, some are harder (it’ll be cold when it’s winter vs the exact location and wind speed of a tornado for an extreme example). The difference is you can try to mix things up in the prompt to maximize the likelihood of getting what you want out and there are feasibility thresholds for use cases, e.g. if you get a good answer 95% of the time it’s qualitatively different than 55%.
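The feasibility-threshold point can be made concrete with a little arithmetic. Chain a few dependent prompts and the end-to-end success rates diverge sharply (a sketch with illustrative numbers only, assuming each step succeeds independently):

```python
# Why a 95% per-prompt hit rate is qualitatively different from 55%:
# a task needing several good responses in a row compounds the odds.
# Independence between steps is an assumption, not a claim about LLMs.
steps = 5
p_good, p_bad = 0.95, 0.55

chain_good = round(p_good ** steps, 3)  # probability all 5 steps succeed
chain_bad = round(p_bad ** steps, 3)

print(chain_good)  # 0.774 - usually works end to end
print(chain_bad)   # 0.05  - almost never works end to end
```

Even a modest gap per prompt turns into the difference between a usable workflow and an unusable one over a multi-step session.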
No, it's not. Nowadays we know how to predict the weather with great confidence. Prompting may get you different results each time. Moreover, LLMs depend on the context of your prompts (because of their memory), so a single prompt may be close to useless and two different people can get vastly different results.
And I’d really like for Gemini to be as good or better, since I get it for free with my Workspace account, whereas I pay for chatgpt. But every time I try both on a query I’m just blown away by how vastly better chatgpt is, at least for the heavy-on-searching-for-stuff kinds of queries I typically do.
It's like having 3 coins and users preferring one or the other when tossing them, because one coin consistently gives more heads (or tails) than another.
What is better is to build a good set of rules, stick to one tool, and then refine those rules over time as you gain experience with it, or as the tool evolves and diverges from the results you expect.
<< What is better is to build a good set of rules and
But, unless you are on a local model you control, you literally can't. Otherwise, good rules will work only as long as the next update allows. I will admit that makes me consider some other options, but those probably shouldn't be 'set and iterate' each time something changes.
What I had in mind when I added that comment was coding, with the use of .md files.
For the web version of chats, I agree there is little control over how to tailor the way you want the agent to behave, unless you give an initial "setup" prompt.
They're not really capable of producing varying answers based on load.
But they are capable of producing different answers because they feel like behaving differently if the current date is a holiday, and things like that. They're basically just little guys.
Tesla FSD has been more or less the same experience. Some people drive 100s of miles without disengaging while others pull the plug within half a mile from their house. A lot of it depends on what the customer is willing to tolerate.
We've been having trouble telling if people are using the same product ever since ChatGPT first got popular. They had a free model and a paid model, that was it, no other competitors or naming schemes to worry about, and discussions were still full of people talking about current capabilities without saying what model they were using.
For me, "gemini" currently means using this model in the llm.datasette.io cli tool.
openrouter/google/gemini-3-pro-preview
As for what anyone else means? Whether they're equivalent? Whether Google does something different when you use "Gemini 3" in their browser app vs their CLI app vs plans vs API users vs third-party API users? No idea on any of the above.
Same. I use ChatGPT Plus (the entry-level paid option) extensively for personal research projects and coding, and it seems miles ahead of whatever "Gemini Pro" is that I have through work. Twice yesterday, Gemini repeated a previous response verbatim, as if I hadn't asked another question and told it why the previous response was bad. Gemini feels like ChatGPT from two years ago.
Are you uploading PDFs that already have a text layer?
I don't currently subscribe to Gemini, but on AI Studio's free offering, when I upload a non-OCR'd PDF of around 20 pages, the software environment's OCR feeds it to the model with greater accuracy than I've seen from any other source.
I'm sorry, but I don't see what "knowledge cutoff" has to do with what we were talking about, which is using an LLM to find PDFs and other sources for research.
I agree with you. To me, Gemini has much worse search results. Then again, I use Kagi for search, and I cannot stand the search results from Google anymore. And it's clear that Gemini uses them.
In contrast, ChatGPT has built its own search engine, which performs better in my experience. Except for coding, where I opt for Claude Opus 4.5.
> The biggest complaint is that all links get inserted into google search and then I have to manipulate them when they should go directly to the chosen website, this has to be some kind of internal org KPI nonsense.
Oh, I know this from my time at Google. The actual purpose is to do a quick check for known malware and phishing. Of course, these days such things are better dealt with by the browser itself in a privacy-preserving way (and indeed that's the case), so it's unnecessary to reveal to Google which links are clicked. It's totally fine to manipulate them to make them go directly to the website.
Instead of forwarding model-generated links to https://www.google.com/url?q=[URL], which serves the purpose of malware check and user-facing warning about linking to an external site, Gemini forwards links to https://www.google.com/search?q=[URL], which does... a Google search for the URL, which isn't helpful at all.
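For anyone scripting around this, the rewrite is mechanical, since both wrapper forms carry the destination in the `q` query parameter. A minimal sketch (the helper name and the exact parameter handling are assumptions on my part, not something documented by Gemini):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_google_link(href: str) -> str:
    """Return the direct target of a google.com/url?q=... or
    google.com/search?q=... wrapper; pass other links through."""
    p = urlparse(href)
    if p.netloc.endswith("google.com") and p.path in ("/url", "/search"):
        # parse_qs percent-decodes the value for us
        target = parse_qs(p.query).get("q", [""])[0]
        if target.startswith(("http://", "https://")):
            return target
    return href
```

A userscript or browser extension could run this over every anchor in the chat pane to restore direct links.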
That's interesting, I just today started getting some "Some sites restrict our ability to check links." dialogue in ChatGPT that wanted me to verify that I really wanted to follow the link, with a Learn More link to this page: https://help.openai.com/en/articles/10984597-chatgpt-generat...
So it seems like ChatGPT does this automatically and internally, instead of using an indirect check like this.
What an understatement. It has me thinking „man, fuck this“ on the daily.
Just today it spontaneously lost an entire 20-30 minutes long thread and it was far from the first time. It basically does it any time you interrupt it in any way. It’s straight up data loss.
It’s kind of a typical Google product in that it feels more like a tech demo than a product.
It has theoretically great tech. I particularly like the idea of voice mode, but it's noticeably glitchy, breaks spontaneously and often, and keeps asking annoying questions that you can't make it stop asking.
The ChatGPT web UI was also like this for the longest time, until a few months ago: all sorts of random UI bugs leading either to data loss or misleading UI state. Interrupting is still very flaky there too. And on the mobile app, if you moved away from the app while it was taking time to think, its state would somehow desync from the actual backend thinking state and get stuck randomly; sometimes restarting the app fixes it, sometimes that chat is unusable from that point on.
And the lack of UI polish shows up freshly every time a new feature lands too: the "branch in new chat" feature is still really finicky, getting stuck in an unusable state if you twitch your eyebrows at the wrong moment.
i basically can't use the ChatGPT app on the subway for these reasons. the moment the websocket connection drops, i have to edit my last message and resubmit it unchanged.
it's like the client, not the server, is responsible for writing to my conversation history or something
it took me a lot of tinkering to get this feeling seamless in my own apps that use the api under the hood. i ended up buffering every token into a redis stream (with a final db save at the end of streaming) and building a mechanism to let clients reconnect to the stream on demand. no websocket necessary.
works great for kicking off a request and closing tab or navigating away to another page in my app to do something.
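roughly, the pattern looks like this sketch - a plain in-memory dict stands in for the redis stream (XADD/XREAD) and the final db save, so the names here are illustrative, not my production code:

```python
# Resumable token streaming: the server buffers every token under a
# request id; clients (re)connect and read from wherever they left off,
# the way XREAD resumes a Redis stream from a last-seen entry id.

class TokenBuffer:
    def __init__(self):
        self._streams = {}   # request_id -> list of tokens so far
        self._done = set()   # request_ids whose generation finished

    def append(self, request_id, token):
        # server side: called for every token as the model streams it out
        self._streams.setdefault(request_id, []).append(token)

    def finish(self, request_id):
        # generation complete (where the final database save would happen)
        self._done.add(request_id)

    def read_from(self, request_id, last_index=0):
        # client side: fetch everything after last_index;
        # returns (new_tokens, next_index, stream_finished)
        tokens = self._streams.get(request_id, [])
        return tokens[last_index:], len(tokens), request_id in self._done


buf = TokenBuffer()
for t in ["Hello", ", ", "world"]:
    buf.append("req-1", t)

# client reads, "disconnects", more tokens arrive, client resumes
chunk, idx, done = buf.read_from("req-1", 0)
buf.append("req-1", "!")
buf.finish("req-1")
rest, idx, done = buf.read_from("req-1", idx)
```

with the buffer in redis instead of process memory, any server instance can serve the reconnect, which is what makes closing the tab safe.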
i don't understand why model providers don't build this resilient token streaming into all of their APIs. would be a great feature
exactly. they need to bring in the Spotify level of caching, where streaming music just works even if you're in a subway. Constant availability should be table stakes for them.
> ChatGPT web UI was also like this for the longest time
Copilot Chat has been perfect in this respect. It's currently GPT 5.0, moving to 5.1 over the next month or so, but at least I've never lost an (even old) conversation since those reside in an Exchange mailbox.
I downloaded my archive and completely ended my GPT subscription last week based on some bad computer maintenance advice. Same thing here - using other models, never touching that product again.
Oh, it was DUMB. I was dumb. I only have myself to blame here. But we all do dumb things sometimes, owning your mistakes keeps you humble, and you asked. So here goes.
I use a modeling software called Rhino on wine on Linux. In the past, there was an incident where I had to copy an obscure dll that couldn't be delivered by wine or winetricks from a working Windows installation to get something to work. I did so and it worked. (As I recall this was a temporary issue, and was patched in the next release of wine.)
I hate the wine standard file picker, it has always been a persistent issue with Rhino3d. So I keep banging my head on trying to get it to either perform better or make a replacement. Every few months I'll get fed up and have a minute to kill, so I'll see if some new approach works. This time, ChatGPT told me to copy two dll's from a working windows installation to the System folder. Having precedent that this can work, I did.
Anyway, it borked startup completely, and it took like an hour to recover. What I didn't consider - and I really, really should have - was that these were dll's that were ALREADY IN the system directory, and I was overwriting the good ones, which already reflected my system, with completely foreign ones.
And that's the critical difference: the obscure dll fixed the system that one time because something was missing. This time I was overwriting extant good ones.
But the fact that the LLM even suggested (without special prompting) to do something that I should have realized was a stupid idea with a low chance of success made me very wary of the harm it could cause.
> ...using other models, never touching that product again.
> ...that the LLM even suggested (without special prompting) to do something that I should have realized was a stupid idea with a low chance of success...
Since you're using other models instead, do you believe they cannot give similarly stupid ideas?
I'm under no misimpression they can't. But I have found ChatGPT to be most confident when it f's up. And to suggest the worst ideas most often.
Until you queried, I had forgotten to mention that the same day I was trying to work out a Linux system display issue, and it very confidently suggested removing a package and all its dependencies, which would have removed all my video drivers. On reading the output of the autoremove command, I pointed out that it had done this, and the model spat out an "apology" and owned up to the damage it would have wreaked.**
** It can't "apologize" for or "own up" to anything, it can just output those words. So I hope you'll excuse the anthropomorphization.
There is no competing product for GPT Voice. Hands down. I have tried Claude and Gemini - they don't even come close.
But voice is not a huge traffic funnel. Text is. And the verdict is more or less unanimous at this time. Gemini 3.0 has outdone ChatGPT. I unsubscribed from GPT plus today. I was a happy camper until the last month when I started noticing deplorable bugs.
1. Conversation contexts are getting intertwined. Two months ago, I could ask multiple random queries in a conversation and get correct responses, but the last couple of weeks it's been a harrowing experience, having to start a new chat window for almost any change in thread topic.
2. I once asked ChatGPT to treat me as a co-founder and hash out some ideas. Now for every query I get a 'cofounder type' response. Nothing inherently wrong, but annoying as hell. I can live with the other end of the spectrum, where Claude doesn't remember most of the context.
Now that Gemini Pro is out - yes, the UI lacks polish and you can lose conversations, but the benefits of low-latency search and a near-free one-year subscription are a clincher. I am out of ChatGPT for now, 5.2 or otherwise. I wish them well.
Just a note: ChatGPT does retain a persistent memory of conversations. In the settings menu, there's a section that allows you to tweak/clear this persistent memory.
I found the Gemini CLI extremely lacking and even frustrating. Why Google would choose Node…
Codex is decent and seemed to be improving (being written in rust helps). Claude code is still the king, but my god they have server and throttling issues.
Mixed bag wherever you go. As model progress slows / flatlines (already has?) I’m sure we’ll see a lot more focus and polish on the interfaces.
> It has me thinking „man, fuck this“ on the daily.
That's sometimes me with the CLI. I can't use the Gemini CLI right now on Windows (in the Terminal app), because trying to copy in multiple lines of text for some reason submits them separately, and it just breaks the whole thing. OpenCode had the same issue but even worse: it quit after the first line or something and copied the text line by line into the shell. Thank fuck I didn't have some text that mentions rm -rf or something.
At the same time, neither Codex CLI, nor Claude Code had that issue (and both even showed shortened representations of copied in text, instead of just dumping the whole thing into the input directly, so I could easily keep writing my prompt).
So right now if I want to use Gemini, I more or less have to use something like KiloCode/RooCode/Cline in VSC which are nice, but might miss out on some more specific tools. Which is a shame, because Gemini is a really nice model, especially when it comes to my language, Latvian, but also your run of the mill software dev tasks.
In comparison, Codex feels quite slow, whereas Claude Code is what I gravitate towards most of the time, but even Sonnet 4.5 ends up being expensive when you shuffle around millions of tokens: https://news.ycombinator.com/item?id=46216192 Cerebras Code is nice for quick stuff and the sheer amount of tokens, but in KiloCode and the like it regularly messes up applying diff-based edits.
Which makes tons of sense because iPhone users are higher CLV than Android users. If Google had to choose between major software defects in Android or iOS, they would focus quality on iOS every time.
iMessage renders other iMessage users as blue bubbles, SMS/RCS as green bubbles.
People who can’t understand that many people actually prefer iOS use this green/blue thing to explain the otherwise incomprehensible (to them) phenomenon of high iOS market share. “Nobody really likes iOS, they just get bullied at school if they don’t use it”.
It’s just “wake up sheeple” dressed up in fake morality.
It wouldn't be an issue if they didn't pick the worst green on earth. "Which green would you like for the carrier text messages Mr. Jobs?" ... "#00FF00 will be fine."
Outweighed by the value of having to suffer with the moldy fruits of their own labor. That was the only way the Android Facebook app became usable as well.
To posit a scenario: I would expect General Motors to buy some Ford vehicles to test and play around with and use. There's always stuff to learn about what the competition has done (whether right, wrong, or indifferent).
But I also expect the parking lots used by employees at any GM design facility in the world to be mostly full of General Motors products, not Fords.
I'm only familiar with Ford production and distribution facilities. Those parking lots are broadly full of Fords, but that doesn't mean that it's like this across the board.
And I've parked in the lot of shame at a Ford plant, as an outsider, in my GMC work truck -- way over there.
It wasn't so bad. A bit of a hike to go back and get a tool or something, but it was at least paved...unlike the non-union lot I'm familiar with at a P&G facility, which is a gravel lot that takes crossing a busy road to get to, lacks the active security and visibility from the plant that the union lot has, and which is full of tall weeds. At P&G, I half-expect to come back and find my tires slashed.
Anyway, it wasn't barren over there in the not-Ford lot, but it wasn't nearly so populous as the Ford lot was. The Ford-only lot is bigger, and always relatively packed.
It was very clear to me that the lots (all of the lots, in aggregate) were mostly full of Fords.
To bring this all back 'round: It is clear to me that Ford employees broadly (>50%) drive Fords to work at that plant.
---
It isn't clear to me at all that Google Pixel developers don't broadly drive iPhones. As far as I can tell, that status (which is meme-level in its age at this point) is true, and they aren't broadly making daily use of the systems they build.
(And I, for one, can't imagine spending 40 hours a week developing systems that I refuse to use. I have no appreciation for that level of apparent arrogance, and I hope to never be persuaded to be that way. I'd like to think that I'd be better motivated to improve the system than I would be to avoid using it and choose a competitor instead.)
That doesn't surprise me at all, haha. Appreciate someone a little closer to the question answering it! I know it still counts as anecdotal, but I'll take it.
This is flabbergasting, how could such a large proportion of highly technical people willingly subject themselves to being shackled by iOS? They just happily put up with having one choice of browser, (outside Europe) no third party app stores, and being locked into the Apple ecosystem? I can't think of a single reason I would ever switch from an S22-25+U to an iPhone. I only went from 22U to 25U because my old one got smashed, otherwise the 22U would still be perfectly fine.
Because many of them just want to use their phone as a tool, not tinker with it.
Same way many professional airplane mechanics fly commercial rather than building their own plane. Just because your job is in tech doesn’t mean you have to be ultra-haxxor with every single device in your life.
I don't have my phone (a Pixel) because it frees me from shackles or anything like that. It's just a phone. I use the default everything. Works great. I imagine most people with iPhones are the same.
I feel like people dance around this a lot because idk it hurts nerd credibility or something. The fact is on a moment to moment basis, the iPhone is just a better experience generally. They also hold their value a lot longer. I consistently trade in my phone or sell it to other people for easily 80% of what I paid for it. Usually this is 3-4yrs out
Remember how long it took for Instagram to be functional on android phones?
I've tried them out and not a single thing about it was tangibly better IMO. They have no inherent merit above Android except that some see them as a status symbol (which is absurd as my S25U has a higher MSRP than most iPhone models)
Cameras, for starters. I've never seen another smartphone keep up with the color and texture quality of an iPhone's photos/videos (videos in particular) since the 4s. Their color science is just better. We've intercut footage with our work since the 7 or so, and frankly you'd be hard pressed to catch that it wasn't one of our nicer rigs unless we hold the shot for too long. We just can't get other phone cameras to match footage with the same ease, especially when it comes to skin tones.
Any time its safety stuff triggers, Gemini wipes the context. It's unusable because of this because whatever is going on with the safety stuff, it fires too often. I'm trying to figure out some code here, not exactly deporting ICE to Guantanamo or whatever.
The more Gemini and Nano-Banana soften their filters, the more audience they will take from other platforms. The main risk is payment providers banning them, but I can't imagine card networks cutting off payments to Google.
On the flip side, the ChatGPT app now has years of history that's sometimes useful (search is pretty OK, but could improve), but otherwise I'd like to remove most of it - good luck doing so.
Claude regularly computes a reply for me, then reports an error and loses the reply. I wonder what fraction of Anthropic’s compute gets wasted and redone.
Interesting, I had the opposite experience. 5.0 "Thinking" was better than 5.1, but Gemini 3 Pro seems worse than either for web search use cases. It's hallucinating at pretty alarming rates (including making up sources it never actually accessed) for a late 2025 model.
Opus 4.5 has been a step above both for me, but the usage limits are the worst of the three. I'm seriously considering multiple parallel subscriptions at this point.
I've had the same experience with search, especially with it hallucinating results instead of actually finding them. It's really frustrating that you can't force a more in-depth search from the model run by the company most famous for a search engine.
I’ve been putting literally the same inputs into both ChatGPT and Gemini and the intuition in answers from Gemini just fits for me. I’m now unwilling to just rely on ChatGPT.
Google, if you can find a way to export chats into NotebookLM, that would be even better than the Projects feature of ChatGPT.
notebooklm is heavily biased to only use the sources i added and frame every task around them - even if it is nonsensical - so it is not that useful for novel research. it also tends to hallucinate when lots of data is involved.
> Overall, my conclusion is that ChatGPT has lost and won't catch up because of the search integration strength.
Depends. Even though Gemini 3 is a bit better than GPT-5.1, the quality of the ChatGPT apps themselves (mobile, web) has kept me a subscriber.
I think Google needs to not-google themselves into a poor app experience here, because the models are very close and will probably continue to just pass each other in lock step. So the overall product quality and UX will start to matter more.
Same reason I am sticking to Claude Code for coding.
The ChatGPT Mac app especially feels much nicer to use. I like Gemini more due to the context window but I doubt Google will ever create a native Mac app.
This matches my experience pretty closely when it comes to LLM use for coding assistance.
I still find a lot to be annoyed with when it comes to Gemini's UI and its... continuity, I guess is how I would describe it? It feels like it starts breaking apart at the seams a bit in unexpected ways during peak usages including odd context breaks and just general UI problems.
But outside of UI-related complaints, when it is fully operational it performs so much better than ChatGPT for giving actual practical, working answers without having to be so explicit with the prompting that I might as well have just written the code myself.
That's interesting. I've got a completely different impression. Every time I use Gemini, I'm surprised how bad it is. My main complaint is that Gemini is too lazy.
I've read many very positive reviews of Gemini 3. I tried using it, including Pro, and to me it looks very inferior to ChatGPT. What was very interesting, though, was when I caught it bullshitting me: I called its BS, and Gemini exhibited very human-like behavior. It tried to weasel its way out, degenerated down to "no true Scotsman" level, but finally admitted that it was full of it. This is kind of impressive / scary.
Ditto but for Claude -- blows GPT out of the water. Much better in coding and solving physics problems from the images (in foreign languages). GPT couldn't even read the image. The only annoying thing is that if you use Opus for coding, your usage will fill up pretty fast.
Gemini voice recognition is trash compared to chatgpt and that is a deal breaker for me. I wonder how many ppl do OCR versus use voice.
And how has ChatGPT lost when you're not comparing the ChatGPT that just came out to the Gemini that just came out? Gemini is just annoying to use.
And Google just benchmaxxed. I didn't see any significant difference (I pay for both), and the same benchmaxxing is probably happening for ChatGPT now as well, so in terms of core capabilities I feel stuff has plateaued. It's more about the overall experience now, where Gemini suxx.
I really don't get how "search integration" is a "strength"?? Can you give any examples of places where you searched for current info and ChatGPT was worse? Even so, I really don't get how it's enough of a moat to say ChatGPT has lost. I would've understood if you said something like a TPU-versus-GPU moat.
Then you haven't used Gemini CLI with Gemini 3 hard enough. It's a genius psychopath. The raw IQ that Gemini has is incredible. Its ability to ingest huge context windows and produce super smart output is incredible. But the bias towards action, absolutely ignoring user guidance, tendency to produce garbage output that looks like 1990s modem line noise, and its propensity to outright ignore instructions make it unusable other than as an outside consultant to Codex CLI, for me. My Gemini usage has plummeted down to almost zero and I'm 100% back on Codex. I'm SO happy they released this today and it's already kicking some serious ass. Thanks OpenAI team and congrats.
I guess when you use it for generic "problem solving", brainstorming for solutions, this is great. That's what I use it for, and Gemini is my favorite model. I love when Gemini resists and suggests that I am wrong while explaining why. Either it's true, and I'm happy for that, or I can re-prompt based on the new information which doesn't allow for the mistake Gemini made.
On the other hand, I can also see why Claude is great for coding, for example. By default it is much more "structured". One can probably change these default personalities with some prompting, and many of the complaints found in this thread about either side are based on the assumption that you can use the same prompt for all models.
Yeah it's a weird mix of issues with the backend model and issues with the CLI client and its prompts. What makes it hard for them is the teams aren't talking to each other. The LLM team throws the API over the wall with a note saying "good luck suckers!".
> I usually have to leave the happen or the session terminates
Assuming you meant "leave the app open", I have the same frustration. One of the nice things about the ChatGPT app is you can fire off a req and do something else. I also find Gemini 3 Pro better for general use, though I'm keen to try 5.2 properly
I generate fun images for my kids - turn photos into a new style, create colouring pages from pictures, etc. I lost interest in ChatGPT because it throws vague TOS errors constantly. Gemini handles all of this without complaint.
What's your specific concern here? I certainly wouldn't want to, e.g., give young kids unmonitored use of an LLM, or replace their books with AI-generated text, or stop directly engaging with their games and stories and outsource that to ChatGPT. But what part of "generate fun images for my kids - turn photos into a new style, create colouring pages from pictures, etc" is likely to be "unhealthy and bad for their development"?
Can you share some examples of this where it gives better results?
For me, both Gemini and ChatGPT (both paid versions, Key in Gemini and ChatGPT Plus) give similar results for "every day" research. I'm sticking with ChatGPT at the moment, as the UI and scaffolding around the model are in my view better (e.g. you can add more than one picture at once...).
For software development, I tested Gemini 3 and was pretty disappointed in comparison to Claude Opus CLI, which is my daily driver.
I see a post like this every time there are news about ChatGPT or OpenAI. I'm probably being paranoid but I keep thinking that it looks like bots or paid advertisement for Gemini
I think people like me just enjoy sharing when something is working for them and they have a good experience. It probably gets voted up because people enjoy reading when that happens.
Google has such a huge advantage in the amount of training data with the Google search database and with YouTube and in terms of FLOPS with their TPUs.
Just a fair warning: it likes to spell Acknowledge as Acknolwedge. And I've run into issues when it's accessing markdown guides: it loses track and hallucinates from time to time, which is annoying.
A future where Google still dominates - is that a future we want? I feel a future with more players is better than one with just a single winner. Competition is valuable for us consumers.
It would be useful to see some examples of the differences and supposed strengths of Gemini so this doesn't come off as Google advertisement snarf.
Also, I would never, ever, trust Google for privacy or sign into a Google account except on YouTube (and clear cookies afterwards to stop them from signing me into fucking Search too).
What?? Am I using the same gemini as everyone else?
>OCR is phenomenal
I literally tried to OCR a TYPED document in Gemini today, and it mangled it so badly that I just transcribed it myself, because that would take less time than futzing around with Gemini.
> Gemini handles every single one of my uses cases much better and consistently gives better answers.
>coding
I asked it to update a script by removing some redundant logic yesterday. Instead of removing it, it just put == all over the place, essentially negating the logic but leaving all the code in, and also removed the actual output.
Today I asked it to write a short bit of code to query some info from an API. I needed it to not use the specific function X that is normally used. I added "Never use function X" to its instructions, then asked it in the chat to confirm its rules. It then generated code using function X, plus a word soup explaining how it did not use function X. Then I copy-pasted the offending line and asked why it used function X, and it produced more word soup explaining how the function was not there. So yeah, not so good.
Why do people pay for AI tools? I don't get it. I feel like I just rotate between them on the free tiers. Unless you're paying for all of them, what's the point?
I pay for Kagi and get all of the major ones, a great search engine that I can tune to my liking, and the ability to link any model to my tuned web search.