Hacker News | m-schuetz's comments

The high memory usage is due to an optimization: responsiveness, robustness and performance were improved by making each tab an independent process. And that's good. Nobody needs 80 tabs; that's what bookmarks are for.

"that's what bookmarks are for"

And if you are lucky, the content will still be there the next time.


"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.

I wish we had something like JavaScript's "import {vector, string, unordered_map} from std;". One separate using-declaration per item is a bit cumbersome.
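
That is, today's version of "import just these three names" looks like this, which works but is exactly the per-item verbosity I'd like to avoid:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // One using-declaration per name: no namespace pollution, but verbose.
    using std::string;
    using std::unordered_map;
    using std::vector;

    vector<string> names;
    unordered_map<string, int> counts;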


Standard library modules: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p24...

I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.
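
A minimal sketch of what that looks like with the standard library module, assuming a C++23 toolchain with std module support (e.g. a recent MSVC):

    import std;   // the whole standard library, no headers to remember

    int main() {
        std::vector<int> v(5);
        std::ranges::iota(v, 0);               // fills v with 0..4 -- no idea which header, don't need to know
        for (int x : v) std::print("{} ", x);
    }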


Last time I tried, modules were a spottily supported mess. I'll give them another try once they have ironclad support in CMake, GCC, Clang and Visual Studio.

Terrible for atomic functions, where you are often only interested in atomically writing something, not in the return value they also provide.

I'm not a fan of nodiscard because it's applied way too freely, even when the return value is not relevant. E.g. WebGPU/WGSL initially made atomics nodiscard simply because they return a value, but half the algorithms that use atomics only do so for the atomic write, without needing the return value. Due to nodiscard, you had to make a useless assignment to an unused variable.
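
A hedged C++ analogy of the WGSL situation (atomicAdd here is an illustrative wrapper, not a real API): if the atomic op itself is marked nodiscard, every fire-and-forget increment needs a dummy assignment just to silence the diagnostic.

    #include <atomic>
    #include <cstdint>

    [[nodiscard]] std::uint32_t atomicAdd(std::atomic<std::uint32_t>& a, std::uint32_t v) {
        return a.fetch_add(v);
    }

    void countHit(std::atomic<std::uint32_t>& counter) {
        // atomicAdd(counter, 1);  // diagnostic: nodiscard return value discarded
        [[maybe_unused]] auto old = atomicAdd(counter, 1);  // boilerplate just to discard it
    }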

Disagree. This is the default in Objective-C & Swift, and it’s great. You have to explicitly annotate when discards are allowed. It’s probably my all time favorite compiler warning in terms of accidental-bugs-caught-early, because asking the deeper question about why a function returns useless noise invites deep introspection.

Only when it comes to graphics/art. When it comes to LLMs for code, many people do some amazing mental gymnastics to make it seem like the two are totally different, and one is good while the other is bad.

People were against steam engines, tractors, CGI, self-checkouts, and now generative AI. After some initial outrage, it will be tightly integrated into society. Like how LLMs are already widely used to assist in coding.

Or not. Unlike all of the above, AI directly conflicts with the concept of intellectual property, which is backed by a much larger and more influential field.

Intellectual property is a purely institutional abstraction agreed upon by arbitrary social contract written to satisfy the standard of "To promote the Progress of Science and useful Arts".

AI exists by calculations without invoking law or social agreement.

Which will endure?

IP depends on belief and enforcement.

AI depends on matter and energy.


Most open-source licenses require attribution, so AI for code generation violates licenses the same way AI for image generation does. If one is illegal or unethical, then the other is too.

On the contrary, I would say this is the main thing Vulkan got wrong and the main reason why the API is so bad. Desktop and mobile are way too different for a uniform rendering API. They should be two different flavours with a common denominator. OpenGL and OpenGL ES were much better in that regard.


It is unreasonable to expect to run the same graphics code on desktop GPUs and mobile ones: mobile applications have to render something less expensive that doesn't exceed the limited capabilities of a low-power device with slow memory.

The different, separate engine variants for mobile and desktop users, on the other hand, can be based on the same graphics API; they'll just use different features from it in addition to having different algorithms and architecture.


> they'll just use different features from it in addition to having different algorithms and architecture.

...so you'll have different code paths for desktop and mobile anyway. The same could be achieved with a Vulkan vs. VulkanES split, which would overlap for maybe 50-70% of the core API but differ significantly in the rest (like resource binding).


But they don't actually differ; see the "no graphics API" blog post we're all commenting on :) The primary difference between mobile & desktop is performance, not feature set (ignoring for a minute the problem of outdated drivers).

And beyond that if you look at historical trends, mobile is and always has been just "desktop from 5-7 years ago". An API split that makes sense now will stop making sense rather quickly.


Different features/architecture is precisely the issue with mobile, be it due to hardware constraints or due to a lack of driver support. Render passes were only bolted into Vulkan because of mobile tiler GPUs; they never made any sense for desktop GPUs and only made Vulkan worse for desktop graphics development.

And this is the reason why mobile and desktop should be separate graphics APIs. Mobile is holding desktop back not just feature-wise; it also fucks up the API.
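
For comparison, this is roughly what the desktop-friendly alternative looks like with dynamic rendering (a hedged sketch assuming Vulkan 1.3 / VK_KHR_dynamic_rendering; layout transitions and error handling omitted):

    #include <vulkan/vulkan.h>

    void recordColorPass(VkCommandBuffer cmd, VkImageView colorView, VkExtent2D extent) {
        VkRenderingAttachmentInfo color{VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO};
        color.imageView   = colorView;
        color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;
        color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;

        VkRenderingInfo info{VK_STRUCTURE_TYPE_RENDERING_INFO};
        info.renderArea           = {{0, 0}, extent};
        info.layerCount           = 1;
        info.colorAttachmentCount = 1;
        info.pColorAttachments    = &color;

        vkCmdBeginRendering(cmd, &info);   // no VkRenderPass, no VkFramebuffer objects
        // ... draw calls ...
        vkCmdEndRendering(cmd);
    }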


Problem is that NVIDIA literally makes the only sane graphics/compute APIs. And part of that is making the API accessible, not needlessly overengineered. Either the other vendors step up their game, or they'll continue to lose.


> Problem is that NVIDIA literally makes the only sane graphics/compute APIs.

Hot take: Metal is more sane than CUDA.


I'm having a hard time taking an API seriously that uses atomic types rather than atomic functions. But at least it seems to be better than Vulkan/OpenGL/DirectX.
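
What I mean, roughly (a sketch; the CUDA variant is only in the comment since it needs nvcc):

    #include <atomic>

    // CUDA-style: atomicity lives in the *function*, and any plain integer in
    // global memory can be the target:
    //     __device__ unsigned int counter;
    //     atomicAdd(&counter, 1u);
    //
    // Metal/C++/WGSL-style: atomicity lives in the *type*, so the buffer layout
    // itself has to be declared atomic up front:
    std::atomic<unsigned int> counter{0};

    void countHit() {
        counter.fetch_add(1, std::memory_order_relaxed);
    }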


Man, how I wish WebGPU hadn't gone all-in on the legacy Vulkan API model and instead found a leaner approach to do the same thing. Even Vulkan stopped doing pointless boilerplate like bindings and pipelines. Ditching vertex attrib bindings and going for programmable vertex fetching would have been nice.

WebGPU could have also introduced CUDA's simple launch model to graphics APIs. Instead of all that insane binding boilerplate, just provide the bindings as launch args to the draw call, e.g. draw(numTriangles, args), with args being something like {uniformBuffer, positions, uvs, samplers}, depending on whatever the shaders expect.


> Man, how I wish WebGPU hadn't gone all-in on the legacy Vulkan API model

WebGPU doesn't talk to the GPU directly. It requires Vulkan/D3D/Metal underneath to actually implement itself.

> Even Vulkan stopped doing pointless boilerplate like bindings and pipelines.

Vulkan did no such thing. As of today (Vulkan 1.4), they added VK_KHR_dynamic_rendering to core and introduced the VK_EXT_shader_object extension, which are not required to be supported and must be queried for before use. The former gets rid of render pass objects and framebuffer objects in favor of vkCmdBeginRendering(), and WebGPU already abstracts those two away so you don't see or deal with them. The latter gets rid of monolithic pipeline objects.

Many mobile GPUs still do not support VK_KHR_dynamic_rendering or VK_EXT_shader_object. Even my very own Samsung Galaxy S24 Ultra[1] doesn't support shaderObject.

Vulkan did not get rid of pipeline objects; they added extensions for modern desktop GPUs that don't need them. Even modern mobile GPUs still need them, and WebGPU isn't going to fragment its API to wall off mobile users.

[1] https://vulkan.gpuinfo.org/displayreport.php?id=44583
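
For completeness, a rough sketch of what "must be queried for" looks like (error handling omitted; on pre-1.3 headers the KHR-suffixed dynamic rendering struct would be needed instead):

    #include <vulkan/vulkan.h>

    bool supportsModernPath(VkPhysicalDevice physicalDevice) {
        VkPhysicalDeviceShaderObjectFeaturesEXT shaderObject{
            VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_OBJECT_FEATURES_EXT};
        VkPhysicalDeviceDynamicRenderingFeatures dynamicRendering{
            VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DYNAMIC_RENDERING_FEATURES};
        dynamicRendering.pNext = &shaderObject;

        VkPhysicalDeviceFeatures2 features2{VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2};
        features2.pNext = &dynamicRendering;
        vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);

        // e.g. the S24 Ultra mentioned above reports shaderObject == VK_FALSE here
        return dynamicRendering.dynamicRendering == VK_TRUE &&
               shaderObject.shaderObject == VK_TRUE;
    }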


> WebGPU doesn't talk to the GPU directly. It requires Vulkan/D3D/Metal underneath to actually implement itself.

So does WebGL, and it's doing perfectly fine without pipelines. They were never necessary. Since WebGL can do without pipelines, WebGPU can too. Backends can implement them via pipelines, or they can go the modern route and ignore them.

They are an artificial problem that Vulkan created and WebGPU mistakenly adopted, and which are now being phased out. Some devices may refuse to implement pipeline-free drivers, which is okay. I will happily ignore them. Let's move on into the 21st century without that design mistake, and let legacy devices and companies that refuse to adapt die with dignity. But let's not let them hold back everyone else.


My biggest issues with WebGPU are yet another shading language, and that after 15 years, browser developers still don't care one second about debugging tools.

It's either pixel debugging, or replicating everything in native code to get access to proper tooling.


Ironically, WebGPU was way more powerful about 5 years ago, before WGSL was made mandatory. Back then you could just use any SPIR-V with all sorts of extensions, including stuff like 64-bit types and atomics.

Then WGSL came and crippled WebGPU.


My understanding is that pipelines in Vulkan still matter if you target certain GPUs though.


At some point, we need to let legacy hardware go. Also, WebGL did just fine without pipelines, despite being mapped to Vulkan and DirectX code under the hood. Meaning WebGPU could have worked just fine without pipelines as well. The backends can then map to whatever they want, using modern code paths for modern GPUs.


Quoting things I've only heard about, because I don't do enough development in this area, but I recall reading that it impacted performance on pretty much every mobile chip (discounting Apple's, because there you go through a completely different API, and they got to design the hardware together with the API).

Among other things, that covers everything running on non-Apple, non-NVIDIA ARM devices, including freshly bought ones.


After going through a bunch of docs and making sure I had the right reference:

The "legacy" part of Vulkan that everyone on desktop is itching to drop (including popular tutorials) is renderpasses... which remain critical for performance on tiled GPUs where utilization of subpasses means major performance differences (also, major mobile GPUs have considerable differences in command submission which impact that as well)


Also pipelines and bindings. BDA (buffer device address), shader objects and dynamic rendering are just way better than legacy Vulkan without these features.


> Also, WebGL did just fine without pipelines, despite being mapped to Vulkan and DirectX code under the hood.

...at the cost of creating PSOs at random times, which is an expensive operation :/


No longer an issue with dynamic rendering and shader objects. And it never was an issue with OpenGL. Static pipelines are an artificial problem that Vulkan imposed for no good reason, and which they walked back in recent years.


That's not at all what dynamic rendering is for. Dynamic rendering avoids creating render pass objects and does nothing to solve problems with PSOs. We should be glad for the demise of render pass objects; they were truly a failed experiment and weren't even particularly effective at their original goal.

Trying to say pipelines weren't a problem with OpenGL is monumental levels of revisionism. Vulkan (and D3D12, and Metal) didn't invent them for no reason. OpenGL and DirectX drivers spent a substantial amount of effort hiding PSO compilation stutter, because they still had to compile shader bytecode to ISA all the same. They were often not successful, and developers had very limited tools to work around the stutter problems.

Older games would often issue dummy draw calls to an off-screen render target to force the driver to compile the shader during a loading screen instead of in the middle of your frame. The problem was always hard; you could just ignore it in the older APIs. Pipelines exist to make this explicit.

The mistake Vulkan made was putting too much state in the pipeline, as much of that state is dynamic in modern hardware now. As long as we need to compile shader bytecode to ISA, we need some kind of state object to represent the compiled code, and APIs to control when that compilation happens.
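
To make "explicit" concrete, a hedged Vulkan sketch (device, cacheData and pipelineInfos are assumed to exist; error handling omitted): build every PSO against a pipeline cache during the loading screen, then persist the cache so the next launch skips most of the ISA compilation.

    #include <cstdint>
    #include <vector>
    #include <vulkan/vulkan.h>

    void buildPipelinesUpFront(VkDevice device,
                               const std::vector<char>& cacheData,  // blob from a previous run, may be empty
                               const std::vector<VkGraphicsPipelineCreateInfo>& pipelineInfos,
                               std::vector<VkPipeline>& pipelines,
                               VkPipelineCache& cache) {
        VkPipelineCacheCreateInfo cacheInfo{VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO};
        cacheInfo.initialDataSize = cacheData.size();
        cacheInfo.pInitialData    = cacheData.data();
        vkCreatePipelineCache(device, &cacheInfo, nullptr, &cache);

        pipelines.resize(pipelineInfos.size());
        vkCreateGraphicsPipelines(device, cache,
                                  static_cast<uint32_t>(pipelineInfos.size()),
                                  pipelineInfos.data(), nullptr, pipelines.data());
        // Later: vkGetPipelineCacheData(...) and write the blob to disk.
    }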


Going entirely back to the granular GL-style state soup would have significant 'usability problems'. It's too easy to accidentally leak incorrect state from a previous draw call.

IMHO a small number of immutable state objects is the best middle ground (similar to D3D11 or Metal, but reshuffled as described in Seb's post).


Not using static pipelines does not imply having to use a global state machine like OpenGL's. You could also make an API that uses a struct for the rasterizer config and passes it as an argument to a multi-draw call. I would have actually preferred that over all the individual setters in Vulkan's dynamic rendering approach.
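
Something along these lines; a purely hypothetical sketch, none of these types exist in Vulkan or any other API:

    // Hypothetical: rasterizer state as a plain value struct handed to the draw
    // call, so nothing leaks from the previous draw and nothing is precompiled.
    enum class CullMode  { None, Back, Front };
    enum class BlendMode { Opaque, Alpha };

    struct RasterState {
        CullMode  cullMode   = CullMode::Back;
        BlendMode blend      = BlendMode::Opaque;
        bool      depthTest  = true;
        bool      depthWrite = true;
    };

    // Usage (cmd, mesh and material are placeholders):
    // cmd.drawIndexed(mesh, material,
    //                 RasterState{.cullMode = CullMode::None,
    //                             .blend    = BlendMode::Alpha});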

