Hacker News | orhmeh09's comments

> But really the ultimate goal for any good website engineer should be to offload as much logic and processing to the browser, not rewrite everything in JS just because you can

Why? This makes for a horrible user experience. Things like TicketMaster, and in recent years GitHub, slow my machine to a crawl sometimes. I much prefer mostly static content. This is a well-made website: https://www.compuserve.com/


Which isn't JavaScript's failure per se. I wouldn't want to use a Google Maps-like thing with a full page reload each time I scroll, zoom, or check the details of a place.

The issue is "plain" websites adding dynamic stuff for bad reasons.


I can't imagine a scenario where I would want to reimplement rm just for this.


[flagged]


https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Instead of being rude to a fellow human making an inoffensive remark, you could’ve spent your words being kind and describing the scenario you claim exists. For all you know, maybe they did ask ChatGPT and were unconvinced by the answer.

As a side note, I don’t even understand how your swipe would make sense. If anything, needing ChatGPT is what demonstrates a lack of imagination (having the latter you don’t need the former).


What makes you think I need ChatGPT? I just wondered whether ChatGPT was as stupid; obviously I do know why that would be useful.


I think CRAN for R is very good, partly aided by an aggressive pruning policy for broken packages.


I agree. Here are some things that I (a science researcher and professor) like about R and CRAN:

1. There are a lot of build checks for problems involving mismatches between documentation and code, failed test suites, etc. These tests are run on the present R release, the last release, and the development version. And the tests are run on a routine basis. So, you can visit the CRAN site and tell at a glance whether the package has problems.

2. There is a convention in the community that code ought to be well-documented and well-tested. These tend not to be afterthoughts.

3. If the author of package x makes changes, then all CRAN packages that depend on x are tested (via their test suites) for new problems. This (again because of the convention of having good tests) prevents lots of ripple-effect problems.

4. Many CRAN packages come with so-called vignettes, which are essays that tend to supply a lot of useful information that does not quite fit into manpages for the functions in the package.

5. Many CRAN packages are paired with journal/textbook publications, which explain the methodologies, applications, limitations, etc in great detail.

6. CRAN has no problem rejecting packages, or removing packages that have problems that have gone unaddressed.

7. R resolves dependencies for the user and, since packages are pre-built for various machine/os types, installing packages is usually a quick operation.

PS. Julia is also very good on package management and testing. However, it lacks a central repository like CRAN and does not seem to have as strong a culture of pairing code with user-level documentation.


It is not fast. Mamba and micromamba are still much faster than conda, yet lack basic features that conda provides. Everyone has been dropping conda like a hot plate since the licensing changes in 2024.


Yes, with StopTheMadness https://underpassapp.com/StopTheMadness/

It works on Safari, Chrome, and Firefox, but you must buy it.


It looks like it only works on Apple platforms, though.


Because it's better at the task than Python is.


That's just the problem! It is better at the task. Until it isn't, and "isn't" comes much too soon.


I've seen this sentiment a lot here. "Once shell is >n lines, port to python". My experience has been different. Maybe half of the scripts I write are better off in python while the other half are exponentially longer in python than bash.

For example, anything to do with json can be done in 1 line of readable jq, while it could be 1, 5, or 20 lines in python depending on the problem.

I'd just like to put that out there because half of the time, the >n metric does not work for me at all. My shell scripts range from ~5-150 lines, while the Python ones are 100+.
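To make the JSON point concrete, here's a hypothetical comparison (the payload and field names are made up, and it assumes jq is installed):

```shell
# Hypothetical JSON payload; both commands print the same field.
json='{"name": "demo", "tags": ["a", "b"]}'

# jq: one short filter
echo "$json" | jq -r '.name'

# Python: the stdlib equivalent, inline
echo "$json" | python3 -c 'import json, sys; print(json.load(sys.stdin)["name"])'
```

Both print `demo`; the difference only grows once you start filtering or reshaping nested structures.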


Same. It’s mostly because, with a shell script, I’ll add some comments for tricky bits if needed, and maybe a `-h` flag, but that’s about it. In Python, though, I use the language’s features to make it as readable and safe as possible: types, docstrings, argparse, etc. My thinking is that if I’m going to take the time to use a “proper” language, then I should make it bulletproof; otherwise I’d just stick with shell.

My personal decision matrix for when to switch to Python usually involves the relative comfort of the rest of my team in both languages, the likelihood that future maintenance or development of the script will be necessary, and whether I’m dealing with inputs that may change (e.g. API responses, since sadly versioning isn’t always guaranteed).


> For example, anything to do with json can be done in 1 line of readable jq, while it could be 1, 5, or 20 lines in python depending on the problem.

I don't agree that there exists such a thing as "readable jq" to start with. It's very arcane and difficult to follow unless you live and breathe the thing (which I don't). Furthermore, jq may or may not be present on the system, whereas the json package is always there in Python. Finally, I don't think having more lines is bad. The question is, what do you get for the extra lines? Python might have 5 lines where bash has 1, but those 5 lines will be far easier to read and understand in the future. That's a very worthwhile trade-off in my opinion.


> It's very arcane and difficult to follow unless you live and breathe the thing

I used to think this before I actually read how it worked. If you know shell, jq is extremely easy to pick up. It acts the exact same way, but pipes JSON entities instead of bytes ("text") like shell does.

Like the Unix philosophy, every filter does exactly one thing very well. Like shell, you write it incrementally, one filter at a time.
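As a hypothetical sketch of that "pipe JSON instead of bytes" idea (data and field names are made up; assumes jq is installed):

```shell
# A jq pipeline built one filter at a time, like a shell pipeline.
echo '[{"name": "a", "n": 1}, {"name": "b", "n": 2}]' |
  jq '.[]                # stream each object out of the array
      | select(.n > 1)   # keep only the objects that match
      | .name'           # project out a single field
```

This prints `"b"`: each filter consumes the JSON values produced by the one before it, exactly the way a shell stage consumes the previous stage's stdout.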

Genuinely, I do not blame you for thinking it's complex. I have never seen a concise, correct explanation of how jq works that builds an intuitive understanding. I have a near-complete one, and it's on my todo list to eventually publish it.

Anyway, I don't mean to say more lines is always worse, but that it is worse about half the time. Python is certainly more readable, but I'd rather spend 60 seconds making a long pipeline than 10 minutes making it work in python.

Want to count lines in a file? wc -l. Compress a directory? tar -zcf. Send that compressed file somewhere? Pipe it to ssh. Each of those is an ordeal in python and it's around 10 keystrokes in shell.
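Spelled out, with made-up file and host names:

```shell
# Sketch of the one-liners above; paths and host are illustrative.
mkdir -p mydir && printf 'one\ntwo\nthree\n' > mydir/file.txt

wc -l < mydir/file.txt        # count lines (3 here)
tar -zcf backup.tar.gz mydir  # compress the directory

# Ship the archive over ssh without touching the local disk
# (the host is hypothetical, so this line stays commented out):
# tar -zcf - mydir | ssh user@host 'cat > backup.tar.gz'
```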


The only thing bash is better at than Python is very short scripts, like 10ish lines. Everything else it sucks at, due to the horrible syntax and various bash footguns.


Just use restic. It handles these things.


You can get by with R and reticulate.


The Python ecosystem for AI is far ahead of R's, sadly (or not :-) )


Reticulate is a bridge that lets you use Python from R so that you don't have to suffer Python the language.


They might log access in some circumstances, according to their privacy policy (https://proton.me/legal/privacy):

> 2.5 IP logging: By default, we do not keep permanent IP logs in relation with your Account. However, IP logs may be kept temporarily to combat abuse and fraud, and your IP address may be retained permanently if you are engaged in activities that breach our Terms of Service (e.g. spamming, DDoS attacks against our infrastructure, brute force attacks). The legal basis of this processing is our legitimate interest to protect our service against non-compliant or fraudulent activities. If you enable authentication logging for your Account or voluntarily participate in Proton's advanced security program, the record of your login IP addresses is kept for as long as the feature is enabled. This feature is off by default, and all the records are deleted upon deactivation of the feature. The legal basis of this processing is consent, and you are free to opt in or opt out of that processing at any time in the security panel of your Account. The authentication logs feature records login attempts to your Account and does not track product-specific activity, such as VPN activity.

See also section 3, "Network traffic that may go through third-parties."


Not to be confused with exa: https://github.com/ogham/exa


I hate name collisions and this sort of thing only reinforces my ire. It doesn't help that I'm already team anti-AI, but it would annoy me regardless of the tech. Why don't people even bother to look and be original? (I feel like I'm going "against the ideals of the site" when I get angry like this, but come on, people, it's a simple Google search. If you can't be arsed to do that, why should we even give you money? That would be my FIRST question as an investor, but I'm just an idiot, not a world-famous inventor of a non-released LISP and, checks list, uh, Yahoo Storefront.)

Still though, come on man, why people why. I remember when we had "domainsquatting" but I guess AI doesn't give a fuck about people's copyrights/trademarks anyway.

(Sorry to vent as a reply, but it was nice to see SOMEONE mention it at least, and had to give a hard agree on pointing it out).


(ugh, while my point stands I guess technically it's a dead project, so I got egg on my face, gloat everyone gloat at the pathetic clown :P)

