The biggest issue isn't even sequential speed but latency. In the cloud, all persistent storage is networked and has significantly more latency than direct-attached disks. That's a physical (speed-of-light) limit: you can't pay your way out of it or throw more CPU at it. It has a huge impact on certain workloads like relational databases, where commits block on small synchronous writes.
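To put a rough number on that for a given volume, here's a minimal Python sketch (the path and iteration count are placeholders, not anything specific to a particular cloud) that times small fsync'd writes, which is roughly what a database commit path does:

    import os, time

    PATH = "/mnt/datadisk/latency_probe.bin"   # placeholder: point at the volume under test
    N = 1000                                   # number of 4 KiB synchronous writes
    buf = os.urandom(4096)

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        t0 = time.perf_counter()
        for _ in range(N):
            os.write(fd, buf)
            os.fsync(fd)        # force the round trip to stable storage, like a DB commit
        elapsed = time.perf_counter() - t0
    finally:
        os.close(fd)
        os.unlink(PATH)

    print(f"avg fsync'd 4 KiB write: {elapsed / N * 1e6:.0f} µs")

The gap between direct-attached NVMe and a network-backed volume typically shows up here as an order of magnitude or more, and no amount of extra CPU changes it.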
I ran into this directly trying to use Azure's SMB-as-a-service offering (Azure Files) for a file-based DB. It currently runs on an on-prem network share, but moving it to an Azure VM backed by that service killed performance. SMB is chatty as it is, and the latency of tons of small-file IO was horrendous.
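If anyone wants to see the effect on their own setup, a quick sketch like this (directory paths are made up; point them at a local disk and an Azure Files mount) makes the per-file cost visible:

    import time
    from pathlib import Path

    def avg_read_us(directory: str, limit: int = 500) -> float:
        """Average microseconds to open, read, and close each small file in a directory."""
        files = [p for p in Path(directory).iterdir() if p.is_file()][:limit]
        t0 = time.perf_counter()
        for p in files:
            p.read_bytes()      # each open/read/close is at least one network round trip on SMB
        return (time.perf_counter() - t0) / max(len(files), 1) * 1e6

    print(f"local disk : {avg_read_us('/var/lib/mydb'):.0f} µs/file")        # placeholder path
    print(f"Azure Files: {avg_read_us('/mnt/azurefiles/mydb'):.0f} µs/file")  # placeholder mount

Bandwidth numbers can look fine on both; it's the per-operation round trips that add up.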
Interestingly, a self-managed file-share VM deployed in the same proximity placement group has acceptable latency.
Yep. This is why my 12-year-old Dell R620s with Ceph on NVMe over InfiniBand outperform the newest RDS and Aurora instances: the disk latency is measured in microseconds. Locally attached storage is of course even faster.