Hacker News | bullen's comments

What we need is configurable ack packet counts.

Then we can make TCP become UDP.

And then we've solved everything.

Both Linux and Windows have this config, but it's buggy, so we're back to TCP and UDP.


I’m curious about this. Can you share more details or some links that discuss what you’re describing?

TcpAckFrequency on Windows and net.ipv4.tcp_delack_min on Linux, but they don't do exactly what we need:

We need to be able to set it to -1 to tell the OS that no acks should be used at all, effectively turning that socket into UDP but with a TCP API!

That way we only have one protocol for ALL internet traffic.

I wrote an RFC about this a decade ago; it was ignored.
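For reference, the closest existing per-socket knobs on Linux look like the sketch below (assuming a connected TCP socket fd); it also shows why they aren't enough, since they only reduce delayed acks rather than removing acks altogether:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch only: these are the closest existing per-socket knobs on Linux.
       Neither disables TCP acknowledgements entirely, which is the missing
       "-1" setting described above. */
    static void tune_tcp_socket(int fd)
    {
        int one = 1;

        /* Disable Nagle so small writes go out immediately. */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

        /* Ask for immediate acks instead of delayed ones (Linux-specific;
           the kernel may quietly fall back to delayed acks later). */
        setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    }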


My old 6600 from 2016 is still running fine. I replaced the SSD (an Intel 400GB swapped for an X25-E 64GB that will last 20 years minimum), the RAM (Micron to Samsung from AliExpress before the price hike... got 8 sticks of 16GB at $40 a pop as spares), and even the old trusty monitor (both Eizo 5:4 matte VA; mercury tube to LED, and with f.lux/redshift the blue light is OK).

But with a 3050 upgrade from the 1050 and later the 1030 (best GPU for eternity if you discount VR) that I had in it, it's good for another decade. If a game comes out that does not run on it I won't play it... simple as that... 150W is enough. So far only PUBG stutters; what a joke of bloat and poor engineering that game has become...

Win 10 improved NOTHING over 7. Win 11 improves NOTHING over 10.

YMMV, but my recommendation is still: do not buy new x86 hardware; do not use new OSes/languages.

Build something good with what you have right now.

Make it so good it's still in use after 100 years.


> Win 10 improved NOTHING over 7

Windows 7 doesn't have compressed memory (ZRAM). Doesn't support TRIM for NVMe SSDs. Doesn't have WSL. Doesn't have ISO mounting built in. Doesn't have HDR, variable refresh rate, etc...


The better statement is 'Win 10 improved nothing directly user-facing over Win 7'. Sure, there are several technical improvements under the hood, but those are completely detached from what the user actually sees and experiences, and there's no real reason we couldn't have the Windows 10 technical improvements with a Windows 7 UI, other than Microsoft being the abusive parent that it is.


I'd still disagree, but approval of UI changes is far more subjective. The start menu in 10 is a lot more customizable than Windows 7's, which I think is a good thing. Task View (virtual desktops) was added in 10. Task Manager is so much better; that one is probably objective.


> The start menu in 10 is a lot more customizable than Windows 7's, which I think is a good thing.

I installed Open-Shell on day 1 when I got Windows 8 and continued with it on 10, since the new start menu did not convince me, so I can't really vouch for that. I don't see a need for tiles and such in my start menu.

> Task View (virtual desktops) was added in 10.

Never used it in Windows. On my Mac I use it to put individual apps in full-screen, so they're easy to switch to with 3-finger swipe. Then again, I have three screens, so the demand for more desktop space is close to zero on what would be my Windows machine.

> Task Manager is so much better; that one is probably objective.

Technically a Windows 8 addition, but I'll give you that one. I'd take the old task manager back if I could get the old photo viewer back, though. I can manage with the old task manager. I couldn't manage with the Win10 Photos app, and had to install IrfanView to get a usable picture viewer (at least before I went to Linux).


> I don't see a need for tiles and such in my start menu.

Tiles are gone in Windows 11.

But this is exactly my point. Some people were so happy with how Windows XP worked, but things are so much better now. It's repeating again, with Windows 7 as the new XP.


Things are better, but it's a case of two steps forward, one step back. We got a new task manager that was actually good, and lost the photo viewer that was good. We got good taskbar search, right in the start menu, and then lost it again. We got DX12, but also got more telemetry than ever. We got an actually decent Windows Update (it even grabs drivers for you and is pretty good at getting it right!), and we lost the ability to disable it (without really getting in there). We apparently lost tiles again, even though some people might still want them, and we also lost the ability to left-align our start menu, until the noise got so loud that even Microsoft couldn't ignore it.

Things may be better, but saying that Windows has gotten better, without a comma and a but, or an asterisk, is disingenuous. Much better is a matter of opinion, and one I don't share. Where things have gotten much better is Linux.


Are those really improvements, though?

RAM maybe wears quicker if compressed?

NVMe will break long before a good old SATA drive.

WSL... lol

ISOs you can mount with Daemon Tools for free...

Displays are good enough at 60Hz 5:4 matte.


WSL is an excellent Micro-Soft technology.


> NVMe will break long before a good old SATA drive.

What gave you that idea?

> RAM maybe wears quicker if compressed?

Is this serious? The rest of your post seems serious, but that's such a silly idea.


This person has said that 1 Gb/s Ethernet is as high as networking will go because of power constraints (yes, obviously 2.5 Gb/s is common and a 10 Gb/s networking card is $25).

They have said that DDR3 RAM causes mouse stuttering and that a 2011 Atom is the best CPU that will ever be made. Unfortunately, I think they are serious.


> What gave you that idea?

If you run Windows 7 on it then it sure will!


I have Fedora Xfce running beautifully on a 2011 i5 Mac mini. Replacing the hard disk with a modern SSD was all it took to get it running at acceptable speeds, where interacting with Xfce is roughly instantaneous.


> Win 10 improved NOTHING over 7. Win 11 improves NOTHING over 10.

You had me up to this point. The problem is that there are actually quite a few improvements under the hood over those upgrade paths, but they are unfortunately hidden under all of the bullshit. I was an early adopter of Windows 11 specifically because of its efficiency-core support over Windows 10 when I upgraded my CPU.


You need to look at the cost of the improvements, and that cost overshadows all the progress.

I'm going Linux with TWM (a desktop with a design look from the '70s) on ARM because M$ is clearly not thinking about the long term.

We need a stable platform to build quality software.

And that's saying a lot, seeing how Linux deprecates libc versions after a very short time and the legacy joystick API is not being compiled into modern kernels anymore.

Stability is way more important than bells and whistles.


You can use IceWM instead of TWM; it's almost as lightweight and much more stable.

Also, if anything, CTWM (with the welcome screen disabled) can be as good as TWM but with better features (sticky menus and the like).


I don't like the look of IceWM but I will check out CTWM!

Edit: TWM is less cluttered, and fewer features is actually more here...


Yep, I like TWM's design too, but both CTWM and IceWM can be configured equally well.

On CTWM, you can import the TWM config directly, modulo a slight error on a single line (if any). Once you set proper TTF fonts for it (by default they might look huge on smaller screens), you are mostly done. Set the sticky/persistent menus (on a laptop/netbook they're a godsend) and you are done.

Maybe you would also like to disable the startup screen (the -W flag for the 'ctwm' binary, or the corresponding option from the man page to put in ~/.ctwmrc or ~/.twmrc).

For sticky menus you just need to put

     StayUpMenus
in ~/.ctwmrc.

Just copy the content from your former ~/.twmrc file and add that line on top.
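For illustration, a minimal ~/.ctwmrc fragment along those lines could look like this (the XLFD font names are just examples; pick sizes that suit your screen):

    # Sticky/persistent menus (the line mentioned above)
    StayUpMenus

    # Example fonts so menus and titles don't render huge on small screens
    MenuFont  "-adobe-helvetica-medium-r-normal--*-120-*-*-*-*-*-*-*"
    TitleFont "-adobe-helvetica-bold-r-normal--*-120-*-*-*-*-*-*-*"

    # ...followed by the content of your old ~/.twmrc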


Personally I'm staying with OpenGL (ES) 3 for eternity.

VAOs were the last feature I was missing before.

Also, the other cores will do useful gameplay work, so one CPU core for the GPU is OK.

4 CPU cores are also enough for eternity. 1GB of shared RAM/VRAM too.

Let's build something good on top of the hardware/OSes/APIs/languages we have now: 3588/Linux/OpenGL/C+Java specifically!

Hardware has permanently peaked in many ways; only soft internal protocols can now evolve. I write mine inside TCP/HTTP.


> Also, the other cores will do useful gameplay work, so one CPU core for the GPU is OK.

In the before times, upgrading the CPU meant everything ran faster. Who didn't like that? Today, we need code that scales across CPU cores indefinitely for that to remain true. 16-thread CPUs have been around for a long time; I'd like my software to make the most of them.

When we have 480+Hz monitors, we will probably need more than 1 CPU core for GPU rendering to make the most of them.

Uh oh https://www.amazon.com/ASUS-Swift-Gaming-Monitor-PG27AQDP/dp...


I'm 60Hz for life.

Maybe 120Hz if they come in 4:3/5:4 with a matte low-res panel.

But that's enough for VR, which needs 2x because of the two eyes.

So progress ends there.

16 cores can't share memory well.

Also, 15W is the peak because more is hard to passively cool in a small space. So 120Hz x 2 eyes at ~1080p is the limit of what we can do anyway... with $1/kWh!

The limits are physical.


This is my desktop in 2025: http://ett.host.radiomesh.org/film/HL2%20on%203588%20uConsol... (controls are impossible at first on the 3588 uConsole; this was me playing without the usual bindings just to prove the performance, so don't judge the gameplay).

The 3588 (10W) plays HL2 at 300 FPS and streams it at 60 FPS to Twitch.

Turns out 2025 was the year of the ARM linux desktop after all!

TWM + emacs + irssi + mpv(ytdl)


Yes, DDR3 has the lowest CAS latency and lasts A LOT longer.

Just like SSDs from 2010 have 100,000 writes per bit instead of below 10,000.

CPUs might even follow the same durability pattern, but that remains to be seen.

Keep your old machines alive and backed up!


> Yes, DDR3 has the lowest CAS latency and lasts A LOT longer.

DDR5 is more reliable. Where are you getting this info that DDR3 lasts longer?

DDR5 runs at lower voltages, uses modern processes, and has on-die ECC.

This is already showing up in reduced failure rates for DDR5 fleets: https://ieeexplore.ieee.org/document/11068349

The other comment already covered why comparing CAS latency is misleading. CAS latency is measured in clock cycles. Multiply by the length of a clock cycle to get the CAS delay.
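A quick worked example with common retail speed grades (illustrative numbers only):

    DDR3-1600 CL9:  bus clock 800 MHz  -> cycle 1.25 ns -> CAS delay = 9  x 1.25 ns = 11.25 ns
    DDR5-6000 CL30: bus clock 3000 MHz -> cycle 0.33 ns -> CAS delay = 30 x 0.33 ns = ~10 ns

So a CL number more than three times larger translates into roughly the same absolute delay.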


It has on-die ECC _because_ it is so unreliable. ECC is there to fix its terribleness from the factory.


So? If the net result is more reliable memory, it doesn't matter.

Many things in electrical engineering use ECC on top of less reliable processes to produce a net result that is more reliable on the whole. Everything from hard drives to wireless communication. It's normal.


ECC doesn't completely fix it; it only masks problems for the most common use patterns. Rowhammer is a huge problem.


Just like increasing the structure size "only" decreases the likelihood of bit flips. Correcting physical unreliability with more logic may feel flimsy, but in the end, probabilities are probabilities.


CAS latency is specified in cycles and clock rates are increasing, so despite the number getting bigger there's actually been a small improvement in latency with each generation.


Not for small amounts of data.

Bandwidth increases, but if you only need a few bytes, DDR3 is faster.

Also, slower speeds mean less heat and a longer life.

You can feel the speed advantage by just moving the mouse on a DDR3 PC...


While it has nothing to do with how responsive your mouse feels, as that is measured in milliseconds while CAS latency is measured in nanoseconds, there has indeed been a small regression with DDR5 memory compared to the 3 previous generations. The best DDR2-4 configurations could fetch 1 word in about 6-7 ns while the best DDR5 configurations take about 9-10 ns.

https://en.wikipedia.org/wiki/CAS_latency#Memory_timing_exam...


RAM latency doesn't affect mouse response in any perceptible way. The fastest gaming mice I know of run at 8000Hz, so that's 125000ns between samples, much bigger than any CAS latency. And most mice run substantially slower.

Maybe your old PC used lower-latency GUI software, e.g. uncomposited Xorg instead of Wayland.


I only felt it on Windows; maybe that is due to the special USB mouse drivers Microsoft made? Still, motion-to-photon latency is really lower on my DDR3 PCs; it would be cool to know why.


You are conflating two things that have nothing to do with each other. Computers have had mice since the 80s.

> Still, motion-to-photon latency is really lower on my DDR3 PCs; it would be cool to know why.

No, it isn't; your computer is doing tons of stuff, and the cursor on Windows is a hardware feature of the graphics card.

Should I even ask why you think memory bandwidth is the cause of mouse latency?


Dan Luu actually measured the latency of older computers (terminal and input latency) and compared it to modern computers. It shows older computers (and I mean previous-century old) have lower input latency. This is much more interesting than 'feelings', especially when discussing with other people.


> 100,000 writes per bit

per cell*

Also, that SSD example is wildly untrue, especially with the context of available capacity at the time. You CAN get modern SSDs with mind-boggling write endurance per cell AND multitudes more cells, resulting in vastly more durable media than what was available pre-2015. The one caveat to modern stuff being better than older stuff is Optane (the enterprise stuff like the 905P or P5800X, not that memory-and-SSD-combo shitshow that Intel was shoveling out the consumer door). We still haven't reached parity with the 3D XPoint stuff, and it's a damn shame Intel hurt itself in its confusion and cancelled that, because boy would they and Micron be printing money hand over fist right now if they were still making them. Still, point being: not everything is a TLC/QLC 0.3 DWPD disposable drive like what has become standard in the consumer space. If you want write endurance, capacity, and/or performance, you have more and better options today than ever before (Optane/3D XPoint excepted).

Regarding CPUs, they still follow that durability pattern if you unfuck what Intel and AMD are doing with boosting behavior and limit them to perform with the margins that they used to "back in the day". This is more of a problem on the consumer side (Core/Ryzen) than the enterprise side (Epyc/Xeon). It's also part of why the OC market is dying (save for maybe the XOC market that is having fun with LN2): those CPUs (especially consumer ones) come from the factory with much less margin for pushing things, because they're already close to their limit without exceedingly robust cooling.

I have no idea what the relative durability of RAM is, tbh; it's been pretty bulletproof in my experience over the years, or at least bulletproof enough for my use cases that I haven't really noticed a difference. The notable exception is what I see in GPUs, but that is largely heat-death related and often a result of poor QA by the AIB that made it (e.g., thermal pads not making contact with the GDDR modules).


Maybe, but in my experience a good old <100GB SSD from 2010-14 will completely demolish any >100GB one from 2014+ in longevity.

Some say they have the opposite experience; mine is ONLY with Intel drives, maybe that is why.

The X25-E is the diamond peak of SSDs, probably forever, since the machines to make 45nm SLC are gone.


What if you overprovision the newer SSD to a point where it can run the entirety of the drive in pseudo-SLC ("caching") mode? (You'd need to store no more than 25% of the nominal capacity, since QLC has four bits per cell.) That should have fairly good endurance, though still a lot less than Optane/XPoint persistent memory.


Which tells me your experience is incredibly limited.

Intel was very good, and when they partnered with Micron, they made objectively the best SSDs ever made (the 3D XPoint Optanes). I lament that they sold their storage business unit, though of all the potential buyers, SK Hynix was probably the best-case scenario (they have since rebranded it into Solidigm).

The Intel X25-E was a great drive, but it is not great by modern standards, and in any write-focused workload it is an objectively, provably bad drive by any standard these days. Let's compare it to a Samsung 9100 Pro 8TB, which is a premium consumer drive and a quasi mid-level enterprise drive (depends on the use case; it's lacking a lot of important enterprise features such as PLP) that's still a far cry from the cream of the crop but has an MSRP comparable to the X25-E's at launch:

X25-E 64GB vs 9100 Pro 8TB:

MSRP: ~$900 ($14/GB) vs ~$900 ($0.11/GB)

Random Read (IOPS): 35.0k vs 2,200k

Random Write (IOPS): 3.3k vs 2,600k

Sustained/Seq Read (MBps): 250 vs 14,800

Sustained/Seq Write (MBps): 170 vs 13,400

Endurance: >=2PB writes vs >= 4.8 PB writes

In other words, it loses very badly in every metric, including performance and endurance per dollar (in fact, it loses so badly on performance that it still isn't close even if we assume the X25-E is only $50), and we're not even into the high end of what's possible with SSDs/NAND flash today. Hell, the X25-E can't even compare to a Crucial MX500 SATA SSD except on endurance, which it only barely beats (2PB for the X25-E vs 1.4PB for the 4TB MX500). The X25-E's incredibly limited capacity (64GB max) also makes it a non-starter for many people no matter how good the performance might be (and it isn't).

Yes, per cell the X25-E is far more durable than an MX500 or 9100 Pro, yielding a Drive Writes Per Day endurance of about 17 DWPD, which is very good. An Intel P4800X, however (almost a 10-year-old drive itself), had 60 DWPD, or more than 3x the endurance when normalized for capacity, while also blowing it - and nearly every other SSD ever made until very recently - out of the water on the performance front as well. And let's not forget, not only can you supplement per-cell endurance with more cells (aka more capacity), but the X25-E's maximum capacity of 64GB makes it a non-starter for the vast majority of use cases right out of the gate, even if you try to stack them in an array.
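To make the normalization explicit (assuming the usual 5-year warranty window that DWPD ratings are quoted against):

    DWPD = rated write endurance / (capacity x warranty days)
    X25-E 64GB:   2 PB   / (64 GB x 1825 days) ~= 17 DWPD
    9100 Pro 8TB: 4.8 PB / (8 TB  x 1825 days) ~= 0.3 DWPD
    P4800X:       rated at 60 DWPD over the same kind of window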

For truly high end drives, look at what the Intel P5800X, Micron 9650 MAX, or Solidigm D7-5810 are capable of for example.

Oh, and btw, a lot of those high-end drives have SLC as their transition flash layer, sometimes in capacities greater than the X25-E was ever available in. So the assertion that they don't make SLC isn't true either; we just got better at designing these devices so that we aren't paying over $10/GB anymore.

So no. By todays standards the X25-E is not "the diamond peak". It's the bottom of the barrel and in most cases, non-viable.


My experience is 10 drives from 2009-2012 that still work and 10 drives from 2014 that have failed.


Yes, we've already established your experience is incredibly limited and not indicative of the state of the market. Stop buying bad drives and blaming the industry for your uninformed purchasing decisions.

Hell, as you admitted that your experience is limited to Intel, I'd wager at least one of those drives that failed was probably a 660p, no? Intel was not immune from making trash either, even if they did also make some good stuff (which, for their top-tier stuff, was technically mostly Micron's doing).

I've deployed countless thousands of solid-state drives - hell, well over a thousand all-flash arrays - that in aggregate probably now exceed an exabyte of raw capacity. This is my job. I've deployed individual systems with more SSDs than you've owned in total, from the sound of it. And part of why it's hard to kill those old drives is that they are literal orders of magnitude slower, meaning it takes literal orders of magnitude more time to write the same amount of data. That doesn't make them good drives; it makes them near-worthless even when they work, especially considering the capacity limitations that come with them.

I'm not claiming bad drives don't exist; they most certainly do, and I would consider over 50% of what's available in the consumer market to fit that bill. But I also have vastly higher standards than most, because if I fuck something up, the cost to fix it is often astronomical. Modern SSDs aren't inherently bad; they can be, but not necessarily so. Just like they aren't inherently phenomenal; they can be, but not necessarily so. But good ones do exist, at a variety of price points and use cases.

TL;DR Making uninformed purchasing decisions often leads to bad outcomes.


CAS latency doesn't matter so much as the total random-access latency in ns and the raw clock speed of the individual RAM cells. If you are accessing the same cell repeatedly, RAM hasn't gotten faster in years (since around DDR2, IIRC).


Old machines use a lot more power (older process nodes), and DDR5 has an equivalent of ECC, while previously you had to specifically get ECC RAM, and it wouldn't work on cheaper Intel hardware (the bulk of old hardware is going to be Intel).


The on-chip ECC in DDR5 is there to account for lower reliability of the chips themselves at the higher speeds. It does NOT replace dedicated ECC chips which cover a whole lot more.


Seems this generates some sort of shim that calls the Source 2 dynamic lib.


Ok, so all bits have to be rotated, even when powered on, so they don't lose their state?

Edit: found this below: "Powering the SSD on isn't enough. You need to read every bit occasionally in order to recharge the cell."

Hm, so does the firmware have a "read bits to refresh them" logic?


Kind of. It's "read and write back" logic, and also "relocate from a flaky block to a less flaky block" logic, and a whole bunch of other things.

NAND flash is freakishly unreliable, and it's up to the controller to keep this fact concealed from the rest of the system.


I concur; in my experience ALL my 24/7 drives from 2009-2013 still work today and ALL my 2014+ are dead; they started dying after 5 years, and the last one died 9 years in. Around 10 drives in each group. All the older drives are below 100GB (SLC); all the newer ones are above 200GB (MLC). I reverted back to older drives for all my machines in 2021 after scoring 30x unused X25-Es on eBay.

The only MLC I use today is Samsung's best industrial drives, and they work, sort of... but no promises. And SanDisk SD cards, which, if you buy the cheapest ones, last a surprising amount of time. A 32GB one lasted 11-12 years for me. Now I mostly install 500GB-1TB ones (recently = they've only been running for 2-3 years) after installing some 200-400GB ones that still work after 7 years.


> in my experience ALL my 24/7 drives from 2009-2013 still work today and ALL my 2014+ are dead,

As a counter anecdote, I have a lot of SSDs from the late 2010s that are still going strong, but I lost some early SSD drives to mysterious and unexpected failures (not near the wear-out level).


Interesting, what kind were they? Mine were all Intel.


Yes, but that tradeoff comes with a hidden cost: complexity!

I'd much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB.

The spread functions that move bits around to even out the writes, or the caches, will also fail.

The best compromise is of course to use both kinds for different purposes: SLC for a small main OS drive (that will inevitably have logs and other writes) and MLC for slowly changing large data like a user database or files.

The problem is that now you cannot choose, because the factories/machines that make SLC are all gone.


> The problem is that now you cannot choose, because the factories/machines that make SLC are all gone.

You can still get pure SLC flash in smaller sizes, or use TLC/QLC in SLC mode.

> I'd much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB.

It's more like 1TB of SLC vs. 3TB of TLC or 4TB of QLC. All three take the same die area, but the SLC will last a few orders of magnitude longer.


SLC is still produced, but the issue is that there are no SLC products (that I'm aware of) for the consumer market.


My problem is: I have more than 64GB of data


The key takeaway is that you will rebuild the drivers less often:

1) The stack is mature now; we know what features can exist.

2) For me it's about having the same stack as on a 3588 SBC, so I don't need to download many GB of Android software just to build/run the game.

The distance to getting an open-source driver stack will probably be shorter because of these 2 things, meaning OpenVR/SteamVR being closed is less of a long-term issue.


I'm confused. Why would you develop a game on an SBC (that's not powerful enough to do VR)? Why are you not just cross-compiling?

It's possible that you can have a full open-source stack some day on these goggles... but I don't think that's something that's obviously going to happen. SteamVR sounds like their version of Google Play Services.


The 3588 can do VR, just not Unity/Unreal VR. That is a problem with bloated engines, not the 3588.

All mainstream headsets get open-source drivers eventually: https://github.com/collabora/libsurvive


Yeah, but is foveated streaming and whatnot going to be open source, or are we going to have to wait a decade for some grad student to reimplement a half-broken version?


Probably, but eye traction is never going to be the focus of indie engines, especially if they run on the 3588.

Also, about cross-compiling: that is meaningless, as you need hardware to test on, and then you should be able to compile on the device you are using to test. At least, that is what I want: make devices that cannot compile illegal.


*tracking

