Hacker News | petterroea's comments

Questions surrounding this have plagued me for years, and this is basically where I'm at right now:

* I am trying to write more because writing is a good skill to practice, and it's fun to discuss with colleagues and hold opinions that resonate with people. Or not. I still think most use of Cloudflare is naive and unnecessary cargo culting that just adds infrastructure complexity, but last time I complained it got a reasonable amount of pushback :D

* But being a public person has downsides. The more public you are, the less of an expectation of privacy you have, and the less you are allowed to make mistakes.

I grew up as a somewhat infamous person in my local community due to sticking out; it wasn't unusual that people already knew of me when I met them for the first time. As a result I had to accept that there was no such thing for me as simply going somewhere: the chance was high that someone who knew who I was (even if I didn't know them!) would spot me.

I have lived long enough to see many people mess up being a famous public person on the internet. Often they never even wanted to be famous, it just happened and then they had to deal with the consequences. It could happen to anyone who happens to be at the right place at the right time. For hackers and similar people, it seems some just find a calling and that calling makes them well known as a side-effect.

If you do anything that could be considered novel, you risk becoming well known. If you have a public persona and people like it, you will get followers. And if that happens, your public activity becomes the bane of your existence. You will be picked apart, analyzed, and possibly targeted by people who disagree with you. People will expect you to have opinions on things and drag you into conflicts. And what you say _matters_ - you have to think about everything you say because one misstep and entire communities will mobilize against you. Many people have gotten hate for saying something controversial on a topic they had little knowledge about. This is normal in a private setting, we discuss politics we aren't experts on with friends all the time. But if you are a public person, you lose many avenues to do this.

I am Norwegian, and the lack of tech literacy in government and the general public is frankly depressing. This isn't necessarily because the general public is stupid. Bob Kåre (49) has better things to do with his life than learn about tech-politics. Norway needs more technical people to be politically active. But doing so seems downright stupid, considering the reflections above. It is practically a sacrifice.

I think the reward has to be pretty large for this to be worth considering. It is a lot better, and easier, to just stick to yourself and your circle.


I have run (read: helped with infrastructure on) a small production service using PSQL for 6 years, with up to hundreds of users per day. PSQL has been the problem exactly once, and that was because we ran out of disk space. Proper monitoring (duh) and a little VACUUM would have solved it.

Later I ran a v2 of that service on k8s. The architecture also changed a lot, hosting many smaller servers sharing the same psql server (not really microservice-related - think more "collective of smaller services run by different people"). I have hit the max_connections limit a few times, but that's about it.

This is something I do in my free time, so an SLA isn't an issue, meaning I've had the ability to learn the ropes of running PSQL without many bad consequences. I'm really happy I have had this opportunity.

My conclusion is that running PSQL is totally fine if you just set up proper monitoring. If you are an engineer who works with infrastructure, even if just because nobody else can/wants to, hosting PSQL is probably fine for you. Just RTFM.
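The one outage above came down to a full disk, so "proper monitoring" can start very small. A minimal sketch of such a check (the path, threshold, and function name are my own illustrative choices - a real setup would point at the Postgres data directory and feed this into an actual alerting system):

```python
import shutil

def disk_nearly_full(path: str = "/", threshold: float = 0.8) -> bool:
    """Return True if the volume holding `path` is more than `threshold` full.

    In a real deployment, `path` would be the Postgres data directory
    (commonly /var/lib/postgresql) and a True result would page someone
    long before VACUUM becomes an emergency.
    """
    usage = shutil.disk_usage(path)
    return usage.used / usage.total > threshold
```

Even a cron job that runs something like this and sends an email would have prevented the incident described above.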


psql (lowercase) is the name of the textual SQL client for PostgreSQL. For a general abbreviation, "Pg" is used instead.

Good catch, thx

But it’s 1500 pages long!

Good point. I sure didn't read it myself :D

I generally read the parts I think I need, based on what I read elsewhere like Stackoverflow and blog posts. Usually the real docs are better than some random person's SO comment. I feel that's sufficient?


Contrary to what the article claims, the market being open and sloppy isn't enshittification. Jacking up prices and removing features users were using in the name of extracting profit is. But what I strongly agree with the author about is the uncertainty of Valve's fate after GabeN. Any company is able to enshittify; we are just one change of owners away. It's almost like potential energy vs kinetic energy - a company like Valve has saved up a _lot_ of enshittification potential, waiting for the "right" condition to be realized.

I'd love to believe Steam will keep being the market leader because they haven't really enshittified yet. I'd love to believe that consumers just aren't interested in Epic because Tim Sweeney and Epic Games are so unable to read the room and so blinded by acting like a public company. But considering their biggest game is Fortnite, they are practically selling to kids, who lack any sort of market opinion in that regard. Regardless, consumers don't really buy with their wallet unless there are immediate, solvable problems in front of them.

Regarding the metaverse, I believe anyone who has been on VRChat instinctively understands why the metaverse was doomed to fail from the get-go. I wrote some notes about my experiences, which I released while doing winter-cleaning of my notes recently: https://petterroea.com/blogs/2025/living-a-second-life-in-vr.... There simply isn't a market for what Meta are trying to sell.


This happens every time with Microsoft:

* Announces a feature implemented in a horrible way

* Everyone gets mad

* Microsoft cancels the feature or removes the parts people complain about

I'm sick and tired of listening to it.

Someone in here once wrote that Microsoft must be doing it on purpose - it is similar to the negotiation tactic where you overcharge on purpose so your opponent feels they won something when you settle for less. Surely this is Microsoft's strategy to get us to accept them pushing this garbage on us.

Remember, the AI bubble winner won't be the one that pushes the first polished AI tool. It will be the company that fills most of the market, such that they already have market share by the time they are able to do something useful with it.


Another problem the article doesn't mention is how much of a hassle it is to deal with permissions. It depends on the GraphQL library you are using, sure, but my general experience with GraphQL is that the effort needed to secure a GraphQL API increases a lot the more granular the permissions you need.

Then again, if you find yourself needing per-field permission checks, you probably want a separate admin API or something instead.
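To make the "effort grows with granularity" point concrete, here is a minimal sketch in plain Python (no GraphQL library - the `Forbidden` exception, the role names, and the context shape are all invented for illustration) of a per-field guard wrapped around a resolver:

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when the caller lacks the role a field requires (illustrative)."""
    pass

def requires_role(role):
    """Wrap a resolver so it checks the caller's roles before running.

    Every sensitive field needs its own guard like this, which is why
    per-field permissions multiply the effort of securing the API.
    """
    def decorate(resolver):
        @wraps(resolver)
        def guarded(obj, ctx, **kwargs):
            if role not in ctx.get("roles", set()):
                raise Forbidden(f"field requires role {role!r}")
            return resolver(obj, ctx, **kwargs)
        return guarded
    return decorate

@requires_role("admin")
def resolve_email(user, ctx):
    # Only callers whose context carries the "admin" role get this field.
    return user["email"]
```

Each guarded field is one more decorator to write, test, and keep in sync with the schema - which is exactly the scaling problem described above.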


Surely the only people who actually look at receipts are accountants. They won't care if you used Tailwind or plain HTML.

I'm personally happy with a non-HTML receipt email, and I think most customers agree.


Users are also happy with clunkiness if it gives them other things they value. Ask any Windows user. Also, video game modding.

My general experience is that clunky software is what made people tech literate, and now that everything has safety barriers and protects the user from everything, tech literacy has fallen.


Windows is also just convenience. Most use it because it comes with the computer.

Or because they don't know (or care) that they have a choice. Same with browsers. Most users will click the 'internet button' to get online.

The main reason is the better alternative costs twice as much.

Wow. I didn’t know Linux cost that much! /s

Cost isn't only money. In the case of Linux, it is the time to learn to use it (which is a sunk cost on Windows: you've already paid it). Then you need to download and install it - again, Windows comes by default, so that's a sunk cost.

> In the case of linux it is time to learn to use it

How much time do you need to take to learn "click on the swirly orange thing for Internet"? It looks just the same as it does in every other OS.

> (which is a sunk cost on windows: already paid it)

This is actually something I'm coping with at the moment, because I have to learn how to use Windows and it's the most backwards thing ever to use.


If somebody else admins your system, sure. However, if not, there is a lot to learn. At least every distribution I've used needs manual updates from time to time (though admittedly most people would replace the computer before I've seen anything hard happen).

Why would it need someone else to "admin" it?

Who currently "admins" your Windows system?


Windows user here. It goes vastly further than that. I've been using Windows since version 3.0. I'm used to it to the point where it's second nature. Linux is foreign and difficult to comprehend, not least because it explicitly avoids being anything like Windows or accommodating habits people acquired from Windows. I don't like the direction Windows is going any more than anyone, and I'm avoiding Windows 11 for the time being, but as long as Linux people continue to believe that the only reason Windows users don't switch is because they don't know Linux exists, Linux will not be able to attract Windows users even as Windows goes full capitalist enshittification.

You know the Firefox icon in Windows?

It's the exact same in Linux. Click on it, get Internet.

You do everything in a browser anyway.


I don't remember the last time I've clicked on a Firefox icon. I've pinned it to the taskbar and I press Win+1 to use it, which is 100 times faster. I've been doing that for >10 years now.

This is acceptable. We now understand that privacy-focused solutions are not appropriate for individuals with average technological literacy, and we cannot depend on companies to self-regulate.

At present, the emphasis is on the potential of large language models (LLMs) and the related ethical considerations. However, I would prefer to address the necessity for governments or commissions to assume responsibility for their citizens concerning "social" media, as this presents a significantly greater risk than any emerging technology.


> Users are also happy with clunkiness if it gives them other things they value. Ask any Windows user.

In that case, the alternatives are also clunky. I use Windows, MacOS and Linux regularly, and all of them have serious UX problems.


Basically everyone I know in engineering shares this resentment in some way, and the AI industry has itself to blame.

People are fed up and burned out from being forced to try useless AI tools by non-technical leaders who do not understand how LLMs work nor how they suck, and now resent anything related to AI. But for AI companies there is a perverse incentive to push AI on people until it finally works, because the winner of the AI arms race won't be the company that waits until it has a perfect, polished product.

I have myself had "fun" trying to discuss LLMs with non-technical people, and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation. They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when Claude stops generating miracles - they lack the experience with code to understand they are hitting maintainability issues. Combine that with how every "wow!" LLM example is actually just the LLM regurgitating a very common thing to write tutorials about, and people tend to over-estimate its abilities.

I use Claude multiple times a week because even though LLM-generated code is trash, I am open to trying new tools, but my general experience is that Claude is unable to do anything well that I can't have my non-technical partner do. It has given me a sort of superiority complex where I immediately disregard the opinion of any developer who thinks it's a wonder-tool, because clearly they don't have high standards for the work they were already doing.

I think most developers with any skill to their name agree. Looking at how Microsoft developers are handling the forced AI, they do seem desperate: https://news.ycombinator.com/item?id=44050152 even though they respond with the most "cope" answers I've ever read when confronted about how poorly it is going.


> and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation.

There are quite a few things they can do reasonably well - but they are mostly useful for experienced programmers/architects as a time saver. Working with a LLM for that often reminds me of when I had many young, inexperienced Indians to work with - the LLM comes up with the same nonsense, lies and excuses, but unlike the inexperienced humans I can insult it guilt-free, which also sometimes gets it back on track.

> They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues.

For having a LLM operate on a complete code base there currently seems to be a hard limit of something like 10k-15k LOC, even with the models with the largest context windows - after that, if you want to continue using a LLM, you'll have to make it work only on a specific subsection of the project, and manually provide the required context.

Now the "getting to 10k LOC" _can_ be sped up significantly by using a LLM. Ideally you refactor the stupid parts along the way already - which can be made a bit easier by building in sensible steps (which again requires experience). From my experiments, once you've finished that initial step you'll then spend roughly 4-5 times the amount of time you just spent with the LLM to make the code base actually maintainable. For my test projects, I roughly spent one day building it up and the rest of the week getting it maintainable. Fully manual would've taken me 2-3 weeks, so it saved time - but only because I do have experience with what I'm doing.
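The 10k-15k LOC ceiling described above is the commenter's observation, not a vendor figure, but you can do the same back-of-envelope arithmetic yourself. A rough sketch (the window size and the ~4-characters-per-token rule of thumb are illustrative assumptions, and real tokenizers vary):

```python
def fits_context(source: str,
                 context_tokens: int = 100_000,
                 chars_per_token: float = 4.0) -> bool:
    """Rough estimate of whether a code base fits in a model's context window.

    Both `context_tokens` and `chars_per_token` are assumptions for the sake
    of the sketch; swap in the numbers for whatever model you actually use.
    """
    estimated_tokens = len(source) / chars_per_token
    return estimated_tokens <= context_tokens

# Example: 12k lines at ~40 chars/line is ~480k chars, i.e. ~120k estimated
# tokens - already over the assumed 100k-token window, before you even add
# the conversation itself, tool output, and the model's own responses.
```

This is why, past a certain size, you end up hand-feeding the LLM one subsection of the project at a time along with the context it needs.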


I think there's a lot of reason to what you are saying. The 4-5x amount of time to make the codebase readable resonates.

If I really wanted to go 100% LLM as a challenge, I think I'd compartmentalize a lot and maybe rely on OpenAPI and other API description languages to reduce the complexity of what the LLM has to deal with when working on its current "compartment" (i.e. the frontend or backend). Claude.md also helps a lot.

I do believe in some time saving, but at the same time, almost every line of code I write usually requires some deliberate thought, and if the LLM makes that thought, I often have to correct it. If I use English to explain exactly what I want it is sometimes OK, but then that is basically the same effort. At least that's my empirical experience.


> almost every line of code I write usually requires some deliberate thought

That's probably the worst case for trying to use a LLM for coding.

A lot of the code it'll produce will be incorrect on the first try - so to avoid sitting through iterations of absolute garbage you want the LLM to be able to compile the code. I typically provide a makefile which compiles the code, and then runs a linter with a strict ruleset and warnings set to error, and allow it to run make without prompting - so the first version I get to see compiles, and doesn't cause lint to have a stroke.

Then I typically make it write tests, and include the tests in the build process - for "hey, add tests to this codebase" the LLM is performing no worse than your average cheap code monkey.

Both with the linter and with the tests you'll still need to check what it's doing, though - just like the cheap code monkey it may disable lint on specific lines of code with comments like "the linter is wrong", or may create stub tests - or even disable tests, and then claim the tests were always failing, and it wasn't due to the new code it wrote.
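That "check what it's doing" step can be partly mechanized. A small sketch that flags lint-suppression comments an LLM may have slipped into a patch (the marker list is my own illustrative selection, and it assumes unified-diff input):

```python
# Markers that silence a linter or type checker on a single line.
# Illustrative subset - extend for whatever tools your build actually runs.
SUPPRESSION_MARKERS = ("# noqa", "# type: ignore",
                       "// eslint-disable", "// nolint")

def added_suppressions(unified_diff: str) -> list[str]:
    """Return added lines in a unified diff that carry a suppression marker.

    A non-empty result means the patch quietly disabled a check somewhere,
    which deserves the same scrutiny as the "the linter is wrong" comment
    from the cheap code monkey.
    """
    hits = []
    for line in unified_diff.splitlines():
        # Added lines start with "+"; "+++" is the file header, not content.
        if line.startswith("+") and not line.startswith("+++"):
            body = line[1:]
            if any(marker in body for marker in SUPPRESSION_MARKERS):
                hits.append(body.strip())
    return hits
```

Running something like this over each LLM-produced patch before review won't catch stubbed-out tests, but it does catch the "disable lint on this line" trick automatically.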


For an understanding of what Valve are doing, here is a 1 hour talk by Gabe Newell (CEO):

Gabe Newell: On Productivity, Economics, Political Institutions, and the Future of Corporations https://www.youtube.com/watch?v=Td_PGkfIdIQ

TL;DR:

* The most skilled workers are the most undervalued

* Make products to serve the customer

* Management is a skill, not a career path

* The only people they consider themselves to be unable to compete with are their customers, so enabling the customer to produce better content in their ecosystem is the most efficient way of producing things.


It would be naive to assume they couldn't access the data from a technical perspective. I think anyone in here would think so. The problem is regular customers who aren't technical and don't have much choice but to trust claims by the seller - these are the real victims here.

