Hacker News | KronisLV's comments

They should publish the token limits, not just talk about conversations or what average users can expect: https://support.claude.com/en/articles/11145838-using-claude...

For comparison’s sake, this is clear: https://support.cerebras.net/articles/9996007307-cerebras-co...

And while the Cerebras service is pretty okay, their website otherwise kinda sucks - and yet you can find clear info!


During the announcement of the tariffs and the subsequent period, pretty much everything I had invested dropped across the board: https://blog.kronis.dev/user/pages/blog/my-investments-in-20...

(I was doing an experiment of putting 1k into a bunch of stocks each through Revolut instead of my usual bank funds and seeing how they do after a year)

Yet it recovered afterwards. I’m certain that some transfer of wealth took place there, with at least some people panic selling.


> During the announcement of the tariffs and the subsequent period, pretty much everything I had invested dropped across the board

Noise. The drop was short-lived; the news was used as an excuse to take some profits and realign portfolios, and a lot of other announcements wrt tariffs looked a lot like market manipulation too.

Then the market figured that foreign competition is being stomped in the mud and the officially sanctioned inflation is the new and endless excuse for higher prices and profits without actually increasing production in a monopolized and cartelized economic environment.


Automated code checks. Either custom linter rules (like ESLint) or prebuild scripts to enforce whatever architectural or style rules you want: basically all of the stuff that you'd normally flag in code review that could be codified into an automatic check but hasn't been, because developers either didn't find it worth their time, or didn't have enough time or skill to do it. Use the AI to write as many of these as needed, just like:

  node prebuild/prebuild.cjs
which will then run all the other checks you've defined like:

  prebuild/ensure-router-routes-reference-views-not-regular-components.cjs
  prebuild/ensure-custom-components-used-instead-of-plain-html.cjs
  prebuild/ensure-branded-colors-used-instead-of-tailwind-ones.cjs
  prebuild/ensure-eslint-disable-rules-have-explanations.cjs
  prebuild/ensure-no-unused-translation-strings-present.cjs
  prebuild/ensure-pinia-stores-use-setup-store-format.cjs
  prebuild/ensure-resources-only-called-in-pinia-stores.cjs
  prebuild/ensure-api-client-only-imported-in-resource-files.cjs
  prebuild/ensure-component-import-name-matches-filename.cjs
  prebuild/disallow-deep-component-nesting.cjs
  prebuild/disallow-long-source-files.cjs
  prebuild/disallow-todo-comments-without-jira-issue.cjs
  ...
and so on. You might accumulate tens of these over the years of working on a project, and you can write them for most things that you'd conceivably want in "good code". The examples above are closer to a Vue codebase, but the same principles apply to most other types of projects out there. Many of those would already be served by something like ESLint (you probably want the recommended preset for whatever ruleset exists for the stack you work with), while some you'll definitely want to write yourself. And that is useful regardless of whether you use AI or not, so that by the time code is seen by the person doing the review, hopefully all of those checks already pass.

If "good code" is far too nebulous of a term to codify like that, then you have a way different and frankly more complex problem on your hands. If there is stuff that the AI constantly gets wrong, you can use CLAUDE.md as suggested elsewhere or even better - add prebuild script rules specifically for it.

Also, a tech stack with typing helps a bunch, making wrong code harder to even compile/deploy. With TypeScript, for example, you get npm run type-check (tsc), and that's frankly lovely to have before you even start thinking about test coverage. Of course, you should still have tests that check the functionality of what you've made, as usual.


Really like the idea of a font with extremely broad glyph support; sadly, it looks really blurry at any custom size, e.g. if I tried to use this font in my IDE but wanted to make it smaller so I can fit more text on the screen.

For that particular use case (honestly more for aesthetics than glyph support), I also found the TTF version of Terminus to be pleasant: https://files.ax86.net/terminus-ttf/ though JetBrains Mono is good enough for me to not venture far from the defaults, and for me Liberation Mono / Cousine was maybe the peak of readability at somewhat small sizes out of any font out there.

Wonder if the Potrace approach of Terminus TTF version would work for Unifont. I imagine that Unifont is a pretty good default when doing shipping labels and for such utilitarian use cases.


> In message log, the agent often boasts about the number of tests added, or that code coverage (ugh) is over some arbitrary percentage. We end up with an absolute moloch of unmaintainable code in the name of quality. But hey, the number is going up.

Oh hey, just like real developers!


I really like having a good old RESTful API (well, maybe kinda faking the name, because I don't usually need HATEOAS)!

Except I find most front end stacks lead to endless configuration (e.g. Vue with Pinia, router, translation, Tailwind, maybe PrimeVue, and a bunch of logic for handling sessions and redirects and toast messages and whatnot), and I feel the pull to just go and use Django or Laravel or Ruby on Rails, mostly with server side templates. I much prefer that simplicity, even if it feels a bit icky to couple your front end and back end like that.


> 154GB vs 23GB can trivially make the difference of whether the game can be installed on a nice NVMe drive.

I think War Thunder did it the best:

  * Minimal client 23 GB
  * Full client 64 GB
  * Ultra HQ ground models 113 GB
  * Ultra HQ aircraft 92 GB
  * Full Ultra HQ 131 GB
For example, I will never need anything more than the full client, whereas if I want to play on a laptop, I won't really need more than the minimal client (limited textures and no interiors for planes).

The fact that this isn't commonplace in every engine and game out there is crazy. There's no reason why the same approach couldn't also work for DLCs and such, and no reason why this couldn't be made easy in every game engine out there (e.g. LOD level 0 goes into the HQ content bundle, the lower ones go into the main package). Same for custom packages for HDDs and the like.
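As a rough sketch of what this could look like on the launcher side: a manifest of optional bundles with dependencies, where the client resolves only the bundles the player opts into. The manifest format is invented for illustration, and the incremental sizes only approximate the War Thunder numbers above:

```javascript
// Hypothetical bundle manifest: each optional bundle declares what it
// builds on top of, so downloads stay incremental.
const bundles = [
  { id: 'minimal',     sizeGb: 23, requires: [] },
  { id: 'full',        sizeGb: 41, requires: ['minimal'] }, // 64 GB total
  { id: 'hq-ground',   sizeGb: 49, requires: ['full'] },    // Ultra HQ ground models
  { id: 'hq-aircraft', sizeGb: 28, requires: ['full'] },    // Ultra HQ aircraft
];

// Resolve a selection into the full ordered download set, dependencies first.
function resolveDownloadSet(selected) {
  const byId = new Map(bundles.map(b => [b.id, b]));
  const result = [];
  const visit = (id) => {
    if (result.includes(id)) return;
    for (const dep of byId.get(id).requires) visit(dep);
    result.push(id);
  };
  for (const id of selected) visit(id);
  return result;
}

// Total download size for a resolved set of bundle IDs.
function totalSizeGb(ids) {
  const byId = new Map(bundles.map(b => [b.id, b]));
  return ids.reduce((sum, id) => sum + byId.get(id).sizeGb, 0);
}
```

An engine could generate such a manifest automatically at build time, e.g. by routing high-LOD assets into the HQ bundles and everything else into the base package.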


> So, assuming the domain of infrastructure-as-code is mostly known now which is a fair statement -- which is a better choice, Go or Rust, and why? Remember, this is objective fact, not art, so no personal preferences are allowed.

I think it’s possible to engage with questions like these head on and try to find an answer.

The problem is that if you want the answer to be close to accurate, you might need a lot of input data about the situation (including who'd be working on and maintaining the software, what their skills and weaknesses are, alongside the business concerns that impact the timeline, the scale you're working at, and a thousand other things), and the output of concrete suggestions might be a flowchart so big it'd make people question their sanity.

It’s not impossible, just impractical with a high likelihood of being wrong due to bad or insufficient data or interpretation.

But to humor the question: as an example, if you have a small to mid size team with run of the mill devs that have some traditional OOP experience and have a small to mid infrastructure size and complexity, but also have relatively strict deadlines, limited budget and only average requirements in regards to long term maintainability and correctness (nobody will die if the software doesn’t work correctly every single time), then Go will be closer to an optimal choice.

I know that because I built an environment management solution in Go, trying to do that in Rust in the same set of circumstances wouldn’t have been successful, objectively speaking. I just straight up wouldn’t have iterated fast enough to ship. Of course, I can only give such a concrete answer for that very specific set of example circumstances after the fact. But even initially those factors pushed me towards Go.

If you pull any number of levers in a different direction (higher correctness requirements, higher performance requirements, different team composition), then all of those can influence the outcome towards Rust. Obviously every detail about what a specific system must do also influences that.


> It’s not impossible, just impractical with a high likelihood of being wrong due to bad or insufficient data or interpretation.

If it's impractical to know, why is using personal preference and intuition a "huge red flag"?

That's the core idea being disagreed with, not the idea that you could theoretically with enough resources get an objective answer.


It might be because depending on one's sensitivity to various factors and how much work they put into discovering the domain, things might feel okay, and yet be the completely wrong choice.

For example, consider how, to many people, MongoDB felt like a really good option during its hype cycle, before it became clear that there are workloads out there where you will get burnt badly if you pick anything other than a traditional RDBMS with ACID.

Similarly, there are cases where people cargo cult really hard or just become opinionated over time - someone who has worked primarily in Java for 20 years will probably pick that for a wide variety of projects, though this preference might make them blind to the fact that others aren't as good with it on a given team and that they might not iterate fast enough to ship, when compared with, let's say Django or Ruby on Rails or even Laravel.

Feelings can be dangerous and informed choices will generally be better, though I guess with the way we use language, those two kinda blend together. If those feelings are based on good enough data and experience, then they might be pretty valuable too: someone who has been writing code for 20 years will probably be more accurate than someone who has been programming for 2 years. Yet if someone has 10x2 years of experience (doing the same thing, not learning, not exploring), then it's a toss up, worse yet if people think that still means seniority.

I kinda get why someone might react to the word "feels" in a seemingly deterministic development context, but my own reaction wouldn't be so strong, and with certain people, I'd trust their feelings. At the same time, I've seen plenty of people who write what they believe to be good code that is a bit of a mess in my eyes.


I’ve linked this before, but I feel like this might resonate with you: https://www.stilldrinking.org/programming-sucks

Yeah a bridge has a plan that it’s built and verified against. It’s the picture book waterfall implementation. The software industry has moved away from this approach because software is not like bridges.

> It’s the picture book waterfall implementation.

One of my better experiences with software development was actually with something waterfall-adjacent. The people I was developing software for produced a 50 page spec ahead of any code being written.

That let me get a complete picture of the business domain. It let me point out parts of the spec that were just wrong with regards to the domain model, as well as things that could be simplified. Implementation became way more straightforward, and I still opted for a more iterative approach than just one deliverable at the end. About 75% of the spec got built and 25% was found to be unnecessary. It was a massive success: on time and with fewer bugs than your typical 2 week "we don't know the big picture" slop that's easy to get into with indecisive clients.

Obviously it wasn't "proper" waterfall and it also didn't try to do a bunch of "agile" Scrum ceremonies but borrowed whatever I found useful. Getting a complete spec of the business needs and domain and desired functionality (especially one without prescriptive bullshit like pixel perfect wireframes and API docs written by people that won't write the API) was really good.


If you can't get a complete spec, it's better to start with something small that you can get detailed info on, and then iterate upon that. It will involve refactoring, but that's better than badly designing the whole thing from the get-go.

I enjoyed that but honestly it kind of doesn't really resonate. Because it's like "This stuff is really complicated and nobody knows how anything works etc and that's why everything is shit".

I'm talking about simple stuff that people just can't do right. Not complex stuff. Like imagine some perfect little example code on the React docs or whatever, good code. Exemplary code. Trivial code that does a simple little thing. Now imagine some idiot wrote code to do exactly the same thing but made it 8 times longer and incredibly convoluted for absolutely no reason, and that's basically what most "developers" do. Everyone's a bunch of stupid amateurs who can't do simple stuff right, that's my problem. It's not understandable, it's not justifiable, it's not trading off quality for speed. It's stupidity, ignorance and laziness.

That's why we have coding interviews that are basically "write fizzbuzz while we watch" and when I solve their trivial task easily everyone acts like I'm Jesus because most of my peers can't fucking code. Like literally I have colleagues with years of experience who are barely at a first year CS level. They don't know the basics of the language they've been working with for years. They're amateurs.


Then it’s quite possible that you’re working in an environment that naturally leads to people like that getting hired. If that’s something you see repeatedly, then the environment isn’t a good fit for you and you aren’t a good fit for it. So you’d be better served by finding a place where the standards are as high as you want, from the very first moment in the hiring process.

For example, Oxide Computers has a really interesting approach https://oxide.computer/careers

Obviously that’s easier said than done but there are quite a few orgs out there like that. If everyone around you doesn’t care about something or can’t do it, it’s probably a systemic problem with the environment.


> I consider it a deshittified Ubuntu.

This is more or less what I have used Linux Mint for (their Cinnamon desktop version is pretty okay, also used the XFCE one).

Nice to have more options!


They are a hardware vendor, so they're much more aggressive about pushing new hardware support.
