hahah what are you talking about, there's no such thing as long term!

Surely you mean managers, right? Most developers I interact with would love to do things the right way, but there's just no time, we have to chase this week's priority!


Jesus, I just had flashbacks to my last job. The non-technical founder was always telling me I was being pessimistic (there were no technical founders). It's just not that simple, Karen!


Is there a way to ban specific users in your GitHub project?

(I prefer GitLab, I'm sure if it had projects that are as popular it would be similarly inundated.)


IIRC, if one of the maintainers of a project blocks a user, that prevents them from participating in issues and PRs.

For bigger projects with many maintainers that can also lead to problems if people use the block function as liberally as on Twitter.
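
If memory serves, GitHub also exposes blocking through its REST API, so larger projects could script or audit it at the org level instead of relying on each maintainer's personal block list. A rough Python sketch (the org name, username, and token are placeholders I made up, so check the current API docs before relying on it):

    # Sketch: block a user at the organization level via GitHub's REST API.
    # ORG, USERNAME, and TOKEN are placeholders, not real values.
    import requests

    ORG, USERNAME, TOKEN = "example-org", "example-user", "<personal access token>"

    resp = requests.put(
        f"https://api.github.com/orgs/{ORG}/blocks/{USERNAME}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )
    print(resp.status_code)  # 204 means the block took effect

An org-level block is shared by all maintainers, which is also exactly where the "people blocking as liberally as on Twitter" problem would bite.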


Welcome to the future! Do you feel disrupted yet?


I think it's glass actually? Or something like glass.


I can't believe all these qualia questions have not evolved in centuries (or at least, the common discourse around them hasn't). We all have similar rods and cones in our eyes. We have common kinds of color blindness. What other reasonable conclusion is there but that my red is your red? All the machinery is similar enough.

I suppose it's because people associate so much of who they are to the subjectivity of their experience. If I'm not the only one to see and taste the world as I do, am I even special? (The answer is no, and that there are more important things in life than being special.)


I see that you're clearly not alone in your position, but still, this is such a strange take to me. Do people seriously not see, nay, instinctively understand the ontological difference between using code someone no longer understands and deploying code no one ever understood?

I'm not saying the code should be up to any specific standards, just that someone should know what's going on.


I don't actually see the difference. If someone writes the code and understands it but then becomes unavailable, what's the practical difference with no one having understood it?


Someone at some point had a working mental model of the code and a reputation to protect and decided that it was good enough to merge. Someone vetted and hired that person. There's a level of familiarity and history that leads others to extend trust.


The way I see it is that LLMs and humans are not inherently different. They are simply on different segments of a very complex spectrum of sensory input and physical output. Over time this position on the spectrum changes, for both LLMs and humans too.

With this in mind, it's all a matter of what your metrics for "trust" are. If you are placing trust in a human employee because they were hired, does that mean the trust comes from the hiring process? What if the LLM went through that too?

About familiarity and history: we are at the point where many people will start working at a new place where the strangers are the humans; you will actually have more familiarity and history with LLM tools than with the actual humans, so how do you take that into consideration?

Obviously this is a massive simplification and reduction of the problem, but I'm still not convinced humans get a green checkmark of quality and trust just because they are humans and were hired by a company.


> Someone at some point had a working mental model of the code and a reputation to protect

This isn’t always true in absolute terms. In many places and projects it’s about doing a good enough job to be able to ship whatever the boss said and getting paid. That might not even involve understanding everything properly either.

Plenty of people view software development as just a stepping stone to management as well.

After reading enough code, it becomes apparent that the quality of the code AI generates will often be similar to, or better than, what human developers produce, even if the details and design are sometimes demented.


You could never have the same amount of trust in LLM-generated code as in a human developer, even if you wrote a large amount of tests for it. There will always be one more thing that you didn't think to test. But the many reports of massive security holes in AI coding tools and their products show that nobody even bothers with testing. Neither the vendors nor the users.


One of the implementations underwent analysis.


Surely they both go through that before being merged? If not, then I think the issue is somewhere other than where it's being suggested.


They're trying to build a moat by leaving out features that rely on other services. I wonder how that will work out for them.


> trying to build a moat by leaving out features that rely on other services

Except for Gmail?


Much easier to get people to use an extension of an existing email account than switch emails.


Wow that Brazilian institute... surely they knew?


Like the spinning silhouette of a ballerina, you can make it spin both ways.

You can see a sun with a house, and you can see a butt with an object penetrating it.

But admittedly, it's pretty hard to unsee the butt once you think about it.

Surely some people knew right away the moment the logo came out of the designer's office.


This one boggles my mind. But it's a real logo, so there were at least a few people with decision power who didn't know and let it happen.

