Another one I've noticed is using "I've" as a contraction in e.g. "I've a meeting to attend". Seems totally reasonable but for some reason native speakers just don't use it that way.
Wait, what? Englishman in my 50s here and I use phrases like that all the time — “I’ll be missing standup cos I’ve a GP appointment”, “leaving at lunchtime as I’ve a train to catch”, “gotta dash, I’ve chores to do”. No one’s ever said I sound German!
Could also be French speakers. They would say "J'utilise le format .avif depuis quelques années." I think the "depuis" throws off the French speakers when they translate that literally as "since some years" instead of "for some years".
Another common tell: I wake up in the morning in the US/Pacific time zone, and see the European writers on HN using "I have ran" instead of "I have run".
As someone who used to work there, Google will never get product releases right in general because of how bureaucratic and heavyweight their launch processes are.
They force the developing team into a huge number of meetings and email threads, which the team must steer itself, just to check off a ridiculously long list of "must haves" that are usually well outside its domain expertise.
The result is that any non-critical or internally contentious features get cut ruthlessly in order to make the launch date (so that the team can make sure it happens before their next performance review).
It's too hard to get the "approving" teams to work with the actual developers to iron these issues out ahead of time, so they just don't.
Spot on. I would suggest a slightly different framing where the antagonist isn't really the "approving" teams but "leaders" who all want a seat at the table and exercise their authority lest their authority muscles atrophy. Since they're not part of the development, unless they object to something, do they really have any impact or exercise any leadership?
I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in the "leadership" roles than in the individual contributor roles. So many "leaders", but none with the awareness, or the care, to notice the farcical volumes such announcements speak.
Involving everyone who shows up to meetings is a great way to move forward and/or trim down attendees. Managers who enjoy having their brains picked, or being handed homework assignments, are always welcome.
That's presuming a healthy culture. In an unhealthy culture, some people will feel pressure to uphold some comment that someone "senior" made offhand in a meeting several months ago, even if that leader is no longer attending project meetings. The people who report to this leader may receive blowback if the "decision" their leader made is not upheld, whether or not the leader recalls their months-old decision correctly, in the case that they recall it at all. I have found it frustratingly more common than I would like that people, including leaders, retroactively adjust their past decisions so that they can claim "I told you so" and "you should have done what I said".
In response to your comment: yes, I would largely be in favor of moving forward only with whatever is said in the relevant meetings, with the given attendees of those meetings. That assumes a reasonably healthy culture where the meetings are scheduled in good faith, at reasonable times for all relevant stakeholders.
Yep, that, and (I also used to work there) the motivations of the implementing teams end up very detached from customer focus and product excellence, because the bureaucratic incentives and procedures reward other things.
There's a lot of "shipping the org chart" -- competing internal products, turf wars over who gets to own things, who gets the glory, rather than what's fundamentally best for the customer. E.g. Play Music -> YouTube Music transition and the disaster of that.
Hah, that exact transition was my last project there before I decided I had had enough!
The GPM team was hugely passionate about music and curating a good experience for users, but YT leadership just wanted us to "reuse existing video architecture" to the Nth degree when we merged into the YT org.
After literally years of negotiations you got... what YTM is. Many of the original GPM team members left before the transition was fully underway because they saw the writing on the wall and wanted no part of it. I really wish I had done the same.
That is so sad to hear. I absolutely loved Google Play Music – especially features like saving e.g. an online Universal Music release to my "archive" and then for myself being able to actually RENAME TRACKS with e.g. wrong metadata.
That and being able to mix my own uploaded tracks with online music releases into a curated collection almost made it a viable contender to my local iTunes collection.
And then... they just removed it forever. Bastards.
Yep, YTM is/was so clearly the inferior product it's laughable. Even as a Google employee with a discount on these things (I can't remember exactly what it was), I switched to Spotify when they dropped GPM.
I worked on a team that wrote software for Chromecast based devices. The YTM app didn't even support Chromecast, our own product, and their responses on bug tickets from Googlers reporting this as a problem were pretty arrogant. It was very disheartening to watch. Complete organizational dysfunction.
I think YTM has substantially improved since then, but it still has terrible recommendations, and it still bizarrely blurs between video and music content.
Google went from a company run by engineers to one run by empire-building product managers so fast, it all happened in a matter of 2-3 years.
As someone who just GA'd an Azure service - things aren't all that different in Azure. Not sure how AWS does service launches but it would be interesting to contrast with GCP and Azure.
Why would you name an option "disable_ai" with a default value of false instead of calling it "enable_ai" with a default value of true?
Are there some mechanical semantics I'm missing here that make this beneficial?
Negative booleans (ie that remove or suppress something when true) are generally a source of confusion and bugs and should be avoided like the plague in my experience.
Technically, "enable_ai" set to false doesn't imply that all AI features are really turned off. Without context, it might imply that some basic AI features exist and "enable_ai" just enables further features. "disable_ai" is unambiguous.
Enable/disable are the only two dichotomies in the whole of all possible states regarding this AI feature, so I'll have to bite: What's your "Technically," referring to here?
First of all, enable/disable is a dichotomy, and is not a set of two dichotomies.
Second, imagine an editor that has AI running in the background, scanning your files. "Enable_AI" could just mean enabling the visibility of the feature so you can actually use the results. It would sound far more suspicious to have background AI tasks running, even for training purposes, while "Disable_AI" is set to true than while "Enable_AI" is set to false.
In other words, Enable_AI COULD have the connotation (to some) of just enabling the visibility of the feature, whereas Disable_AI gives more of a sense of shutting it off.
Imagine for example you're in a court of law. Which one sounds more damning?
====
Prosecutor: You still have AI tasks running in the background, but Enable_AI is set to false?
Defendant: But Enable_AI just means enabling the use of the output!
====
Prosecutor: You still have AI tasks running in the background, but Disable_AI is TRUE?
> Enable_AI COULD have the connotation (to some) of just enabling the visibility of the feature, whereas Disable_AI gives more of a sense of shutting it off.
Personally, I don't feel much difference between the two. I doubt that an average reasonable person would either.
Well, I do feel a distinct connotational difference, but then again, I could be the only one I suppose. And if the average person doesn't care, then why argue about it at all? And how many average people will be using Zed anyway?
My pet peeve is the CGO_ENABLED compiler option in Go. It's set to 0 or 1 to enable/disable it (I can never remember which maps to which).
If it was just CGO=true or CGO=false I think so much confusion could have been avoided.
I think similar thinking applies here. It's convoluted to disable something by setting ai_disable=true, because I read it as setting "false" to true instead of just setting a boolean.
> It's set to 0 or 1 to enable/disable (can never remember which maps to which)
That's crazy. Boolean logic is the most fundamental notion of computer science; I can still remember learning it in my very first course in my very first year.
This follows a convention that was well established and felt pretty ancient when I learned about environment variables in the nineties (i.e. 30 years ago). Variables that are flags enabling/disabling something use 1 to enable, and 0 to disable. I'd not be surprised if this has been pretty much standard behavior since the seventies.
I always thought that an unset boolean env var should define the default behavior for a production environment and any of these set with a value of length>0 will flip it (AUTH_DISABLED, MOCK_ENABLED, etc.). I thought env vars are always considered optional by convention.
I don't doubt any of that but why stick to such old conventions when there are explicit and immediately clear options?
I don't think me writing an if condition

    if boolean != true

instead of

    if boolean == false

should pass code review. I don't think my pet peeve is necessarily different from that. I understand there's a historical convention, but I don't think there's any real reason for having to stick to it.
Hell, some of the other compiler options are flags with no 0 or 1, why could this not have been --static or any flag? I'm genuinely curious.
Moreover, 0 here maps to false, but in program exit codes it maps to success, which in my mind maps to true. That discrepancy suggests 0/1 is not the right mental model for booleans.
IMHO flags like this should always reflect the planned steady state. If you plan for a feature to be on in general, then the flag should be a disabling flag that eventually goes away.
It's good and predictable for all boolean settings to be false by default (when not set). And the default behavior is the one that is compatible with older versions; changing the defaults is a breaking change that should usually be avoided.
Yes, this is especially useful if the boolean settings are stored in a bit field, in my opinion. In the case of Zed, according to the article, it uses JSON, but if a program uses a bit field instead then it would make sense.
How long did it take for you to get used to the autocomplete?
I completely hated it when I tried it out. It breaks my flow. Those weird pauses are so painful. Feels like someone grabbing the steering wheel while I am driving.
I heavily use the agent mode but I don't understand the appeal of the autocomplete feature but maybe I am missing something.
You spent 20 years without autocomplete? I started using Eclipse's autocomplete & shortcut macros in 2002.
It is fascinating to me how much of the enthusiasm around AI seems to be the result of people not knowing about/using the deterministic tools that already exist.
AI autocomplete can generate the entire `match` tree for a very complex Rust enum, and also generate all of the associated tests. It then takes me to the next places in the code where changes need to be made. It's wildly good at knowing exactly what I want to do next.
2002 macros and autocomplete were not that and cannot possibly be compared in the same light.
Most people only care about the end result, and vibe coding got the author there much faster and with less effort.
Your comment reads like a carriage driver bemoaning car drivers because they didn't have to feed, groom, harness, and command a team of horses, yet still arrived successfully at their destination.
"Why should I be impressed that you turned a wheel and pushed a pedal?"
Long names are good for short expressions, but they obfuscate complex ones because the identifiers visually crowd out the operators.
This can be especially difficult if the author is trying to map 1:1 to a complex algorithm in a white paper that uses domain-standard mathematical notation.
The alternative is to break the "full formula" into simpler expression chunks, but then naming those partial expression results descriptively can be even more challenging.
To those who want to do this: make sure to swap out all cells in a battery at once to be safe, ideally using new cells that are all from the same manufacturer and same production run.
Mixing old cells with new can lead to a runaway thermal event in the worst case (i.e. unstoppable cancer-causing fire).