jy14898's comments | Hacker News

I never want to unknowingly use an app that's driven this way.

However, I'm happy it's happening because you don't need an LLM to use the protocol.


you might like https://omrelli.ug/g9/ which is a similar concept but for graphics

I do like g9! It was a strong inspiration for bidicalc actually!

The post stated that duplication was believed to improve loading times on computers with HDDs rather than SSDs.


Which is true. It’s an old technique going back to CD game consoles, to avoid seeks.


Is it really possible to control file locations on an HDD via the Windows NTFS API?


No, not at all. But by putting every asset a level (for example) needs in the same file, you can pretty much guarantee you can read it all sequentially without additional seeks.

That does force you to duplicate some assets a lot. It's also more important the slower your seeks are. This technique is perfect for disc media, since it has a fixed physical size (so wasting space on it is irrelevant) and slow seeks.
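
Roughly, the packing might look like this (a minimal sketch in TypeScript/Node; the file names, the JSON table of contents, and the footer layout are all invented for illustration, not any engine's actual format):

    // Build one pack file per level: every asset the level needs is
    // concatenated so loading it is one sequential read. Shared assets
    // are duplicated into each pack -- trading disk space for fewer seeks.
    import { readFileSync, writeFileSync } from "node:fs";

    interface TocEntry { name: string; offset: number; size: number }

    function buildPack(packPath: string, assetPaths: string[]): void {
      const toc: TocEntry[] = [];
      const blobs: Buffer[] = [];
      let offset = 0;
      for (const path of assetPaths) {
        const data = readFileSync(path);
        toc.push({ name: path, offset, size: data.length });
        blobs.push(data);
        offset += data.length;
      }
      // Keep assets at the front for sequential reads; append the table
      // of contents as JSON, then a fixed-size footer pointing at it.
      const tocBuf = Buffer.from(JSON.stringify(toc));
      const footer = Buffer.alloc(8);
      footer.writeBigUInt64LE(BigInt(offset));
      writeFileSync(packPath, Buffer.concat([...blobs, tocBuf, footer]));
    }

    // The shared texture is written into both packs -- the duplication
    // is the point.
    buildPack("level1.pak", ["shared/rock.tex", "level1/terrain.mesh"]);
    buildPack("level2.pak", ["shared/rock.tex", "level2/terrain.mesh"]);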


> by putting every asset a level (for example) needs in the same file, you can pretty much guarantee you can read it all sequentially

I'd love to see it analysed. Specifically, the average number of non-sequential jumps vs the overall size of the level. I'm sure you could avoid jumps within megabytes. But if someone ever got close to filling up the disk in the past, the chances of contiguous gigabytes are much lower. This paper effectively says that with long files, gaps are almost guaranteed https://dfrws.org/wp-content/uploads/2021/01/2021_APAC_paper... so at that point, you may be better off preallocating the individual files and eating the cost of switching between them.


From that paper, table 4, large files had an average # of fragments around 100, but a median of 4 fragments. A handful of fragments for a 1 GB level file is probably a lot less seeking than reading 1 GB of data out of a 20 GB aggregated asset database.

But it also depends on how the assets are organized: you can probably group the level-specific assets into a sequential section, and maybe shared assets could be somewhat grouped so that related assets are sequential.


Sure. I’ve seen people who do packaging for games measure various techniques on hard disks typical of the time, maybe a decade ago. It was definitely worth it then to duplicate some assets to avoid seeks.

Nowadays? No. Even those with hard disks will have lots more RAM and thus disk cache. And you are even guaranteed SSDs on consoles. I think in general no one tries this technique anymore.


> But if someone ever got closer to filling up the disk in the past, the chances of contiguous gigabytes are much lower.

By default, Windows automatically defragments filesystems weekly if necessary. It can be configured in the "Defragment and Optimize Drives" dialog.


Not 'full' defragmentation. Microsoft's labs did a study and found that beyond 64 MB slabs of contiguous file data you don't gain much, so they don't bother getting gigabytes fully defragmented.

https://web.archive.org/web/20100529025623/http://blogs.tech...

old article on the process


> But if someone ever got closer to filling up the disk in the past, the chances of contiguous gigabytes are much lower

Someone installing a 150 GB game surely has 150 GB+ of free space, and there would be a lot of contiguous free space.


It's an optimistic optimization so it doesn't really matter if the large blobs get broken up. The idea is that it's still better than 100k small files.


Not really. But when you write a large file at once (like with an installer), you'll tend to get a good amount of sequential allocation (unless your free space is highly fragmented). If you load that large file sequentially, you benefit from drive read ahead and OS read ahead --- when the file is fragmented, the OS will issue speculative reads for the next fragment automatically and hide some of the latency.

If you break it up into smaller files, those are likely to be allocated all over the disk; plus you'll have delays on reading because Windows Defender makes opening files slow. If you have a single large file that contains all resources, even if that file is mostly sequential, there will be sections that you don't need, and the read-ahead cache may work against you, as it will tend to read things you don't need.


Key word is "believed". It doesn't sound like they actually benchmarked.


There is nothing to believe. Random 4K reads on an HDD are slow.


I assume asset reads nowadays are much heavier than 4 kB though, especially if assets meant to be loaded together are bundled in one file. So games now should be spending less time seeking relative to their total read size. Combined with HDD caches and parallel reads, this practice of duplicating over 100 GB across bundles is most likely cargo cult by now.

Which makes me think: have there been any advances in disk scheduling in the last decade?


Who cares? I've installed every graphically intensive game on SSDs since the original OCZ Vertex was released.


Their concern was that one person in a squad loading from an HDD could slow down level loading for the whole squad, even the players using an SSD, so they used a very normal, time-tested optimisation technique to prevent that.


Their technique makes it so that a normal person with a ~base SSD of 512 GB can't reasonably install the game. Heck of a job, Brownie.


Nonsense. I play it on a 512GB SSD and it’s fine.


It's hard for me to use a laptop with Win11 and just one game (BG3) installed on a 512 GB SSD.

Stopped at 74 but managed to par all before that somehow. Didn't really do any problem solving or deep thinking about it, just clicking what felt right.


I don't think you can apply the Unix philosophy to a (GUI) web browser; you don't use it compositionally.


In fact, the web browser may be the best example of a program antithetical to the Unix philosophy. It is a single program that does rendering, password management, video decoding, dev tools, notifications, extension systems, etc. Adding some new AI component is rather on-brand for browsers (whether a good decision or not).


Not literally. But in spirit.

I don't want my web browser to be a mediocre PDF reader. I want my good and perfected PDF reader to be a PDF reader. I don't want my web browser to be a web development IDE. I want a specialised (version of a) browser with all the developer tools, and one that lacks all these features is lighter, safer and simpler for browsing. I don't want an FTP client in my web browser (I don't want one anywhere lol). Firefox was extracted from Mozilla back in the day exactly because Mozilla had become a browser that was bloated and crammed full of features that were unpolished or just subpar. Firefox saved Mozilla and fought back by being lean, fast, and terribly focused on doing one thing and doing that well.

I want a browser that's good and forever improving in letting me browse the web and run and use web-apps.


> you don't use it compositionally.

I would if I could!


Nothing really stops you from doing curl | awk, but you probably aren't.


curl is not a replacement for a web browser. No JavaScript, no DOM.


But that's basically the promise, that the damn thing _can_ use arbitrary things compositionally.


by this logic sockets are also non-Unix


I mean, that's not exactly wrong...


Open the page in two windows, with one that has note mode enabled


Thanks, I totally missed that the site is intended to be opened in multiple windows/tabs! That lede is buried in the code.

Wasn't familiar with BroadcastChannel, which allows communication between different ... windows and tabs ... of the same origin

https://developer.mozilla.org/en-US/docs/Web/API/BroadcastCh...
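
For reference, the API is tiny. A minimal sketch (the channel name and message shape here are invented, not what this site actually uses):

    // Every open tab/window of the same origin that joins the channel
    // sees messages posted by the others -- no server round-trip.
    const channel = new BroadcastChannel("notes");

    // A tab with note mode enabled might broadcast edits like this:
    function publishNote(text: string): void {
      channel.postMessage({ kind: "note", text });
    }

    // Any other tab on the same origin receives them here:
    channel.onmessage = (event: MessageEvent) => {
      if (event.data?.kind === "note") {
        console.log("note from another tab:", event.data.text);
      }
    };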


ramburgers are quite healthy, they've been shown to improve memory


I'm interpreting your message as you asking me to share my API keys


You are absolutely right!


Fair enough, but this isn't that case. They have Y (it's in the first photo) and tested a previous version of the model. The changes are predictable (rotating letters and slight scaling) so I don't think it's unreasonable to be confident and not waste plastic.


What do you mean by "CSS is basically pre-tokenized input"? Can you give an example of what you have trouble with?

