I'm super excited to give this one a spin. It seems like a neat idea: Triton, but simpler and with automatic autotuning. My head is spinning with options right now. I love how everyone was hyping up CUDA this and CUDA that a couple of years ago, and now CUDA is all but irrelevant. There are now so many different and opinionated takes on how you should write high-performance accelerator cluster code. I love it.
It's also kind of ironic that right now in 2025, we have all this diversity in tooling, but at the same time, the ML architecture space has collapsed entirely and everyone is just using transformers.
> CUDA that a couple of years ago, and now CUDA is all but irrelevant
What? CUDA won't be irrelevant for years, even if all the competitors figure out the holy grail; the ecosystem doesn't suddenly migrate overnight. People learning CUDA today will continue to find jobs and opportunities across the sector for the near future without any worries.
> but at the same time, the ML architecture space has collapsed entirely and everyone is just using transformers.
That's also not true. The ML space is still growing, and there's plenty happening outside of transformers, but it requires you to actually look and pay attention, not just browse the HN and r/localllama frontpages.
Overall, these don't seem like sentiments coming from someone inside the ML space, but rather from an onlooker's perspective.
> There's now so many different and opinionated takes on how you should write high performant accelerator cluster code. I love it.
There are literally only two: SIMT (i.e. the same as it always was) and tiles (i.e. Triton). That's it. Helion is just Triton with more auto-tuning (Triton already has auto-tuning).
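To make the distinction concrete, here's a toy sketch in plain Python (no GPU involved, function names made up for illustration) of the two mental models applied to the same vector add. In SIMT you write the program one *thread* runs, indexed by its thread id; in the tile model you write the program one *block* runs, operating on a whole tile at a time, and the compiler decides how tile elements map onto threads.

```python
def simt_vector_add(a, b, n):
    # SIMT / CUDA-style: the body below is what ONE thread does;
    # each "thread" owns a single element, selected by its id.
    out = [0] * n
    for tid in range(n):                  # hardware runs these in lockstep
        out[tid] = a[tid] + b[tid]        # per-element kernel body
    return out

def tile_vector_add(a, b, n, block_size):
    # Tile / Triton-style: the body below is what ONE program instance
    # does; it owns a whole tile, and the mapping of tile elements to
    # threads is left to the compiler.
    out = [0] * n
    for pid in range(0, n, block_size):   # one iteration = one program instance
        tile_a = a[pid:pid + block_size]  # load a whole tile at once
        tile_b = b[pid:pid + block_size]
        out[pid:pid + block_size] = [x + y for x, y in zip(tile_a, tile_b)]
    return out
```

Same result either way; the difference is purely which unit of work you reason about when writing the kernel.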