Not quite. The announcement states:

“M5 delivers over 4x the peak GPU compute performance for AI”

In this situation, at least, it’s just referring to AI compute power.



Looks like you might be replying out of context. The parent comment asked why their Mac doesn't feel thousands of times faster than earlier models, a question that stems from misreading the marketing claims.

However, the marketing did not claim an across-the-board performance increase over the M4, and a careful reading of the claims would not suggest anything that large. Instead, they cite gains on specific benchmarks relevant to common modern workloads such as inference. The figure closest to general-purpose computing is the multicore CPU improvement, which the marketing puts at 15% over the M4.

As for the large leap in GPU-driven AI performance, it comes from the addition of a "Neural Accelerator" in each GPU core, an M5-specific change similar to what was introduced in the A19 SoC.


Their "peak GPU compute performance for AI" is quite different from your unqualified "performance". I don't know what figures they're quoting, but something stupid like supporting 4-bit floats while the predecessor only supported down to 16-bit floats could easily deliver "over 4x peak GPU compute performance for AI" (measured in FLOPS) without actually making the hardware significantly faster.

Did they claim 4x peak GPU compute going from the M3 to M4? Or M2 to M3? Can you link to these claims? Are you sure they weren't boasting about other metrics being improved by some multiplier? Not every metric is the same, and different metrics don't necessarily stack with each other.


Much of this is probably down to optimized transformer kernels.



