Saving $400/month covers about 3-5 hours of engineering time per month. In a year, call it 30-50 hours. Did this project take more than 30-50 person-hours?
(The obvious argument that it might pay off more in the future depends on the startup surviving long enough for that future to arrive.)
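The hours figure above is easy to re-derive. A minimal sketch, assuming a fully-loaded engineering cost somewhere in the $80-130/hour range (my assumption, not a number from the thread):

```python
# Back-of-envelope: how many engineer-hours does $400/month of savings buy?
# The hourly rates are assumptions; plug in your own fully-loaded cost.
monthly_savings = 400

for hourly_rate in (80, 100, 130):
    hours_per_month = monthly_savings / hourly_rate
    hours_per_year = hours_per_month * 12
    print(f"${hourly_rate}/hr -> {hours_per_month:.1f} h/month, "
          f"{hours_per_year:.0f} h/year")
```

At those rates you land on roughly 3-5 hours/month and 37-60 hours/year, consistent with the "30-50 hours" ballpark if you lean toward the higher rates.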
I feel like this is left out of the story too often - people tend to compare the most optimistic self-hosted setup, usually just one or two servers at best, to a less-than-ideal cloud installation.
My parent company (healthcare) uses all on-prem solutions, has 3 data centers, and employs 10 sysadmins just for the data centers. You still need DevOps too.
I don't know how much it would cost to migrate their infra to AWS, but ~$1.3M (salary) in annual spend buys you a ton of reserved compute on AWS.
$1.3M is 6000 CPU cores and 10 TiB of RAM running 24/7, with 100 TB of storage.
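You can sanity-check that claim by working out the per-core price it implies (a rough sketch; it ignores the RAM and storage portions of the spend, so the real per-core figure would be somewhat lower):

```python
# Implied per-core pricing if $1.3M/year buys 6000 cores of reserved compute.
# This ignores RAM and storage, so it's an upper bound on the per-core price.
annual_spend = 1_300_000
cores = 6000

per_core_year = annual_spend / cores           # dollars per core per year
per_core_hour = per_core_year / (365 * 24)     # dollars per core per hour

print(f"${per_core_year:.0f}/core-year, ${per_core_hour:.4f}/core-hour")
```

That works out to roughly $217 per core-year, or about 2.5 cents per core-hour, which is in the right ballpark for heavily-committed reserved vCPU pricing.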
I know for a fact that due to redundancy they have nowhere near that, AND they have to pay for Avamar, VMware, etc. (~$500k).
There's no way it's cheaper than AWS, not even close.
So sure, someone's self-hosted phpBB forum doesn't need to be on AWS, but I challenge someone to run a 99.99%-uptime infra significantly cheaper than the cloud.
I didn't try to hit that because that's harder to call, especially at a small startup. If "the guy doing all this" happens to be more comfortable with k8s than with the AWS stack, you can come out ahead by going with a nominally more complicated k8s stack that doesn't force you to spend dozens of hours learning new things, and instead lets you use what you already know. For a small startup those training costs are proportionally huge compared to a larger, more established going concern that's already making money. Startups should generally go with whatever their engineers already know unless there is a damned good reason not to. (And "I just wanted to learn it" is not a good reason for the startup.)
But monetarily, even for a startup, a $400/month savings is not something you should be pouring the equivalent of $5000 (or more; just picking a reasonable concrete number to anchor the point) into. Rather than optimizing that particular cost, you really need to put your time into something, anything, that will promote revenue growth sooner and faster.
If your parent company's sysadmins invest heavily in automation, each sysadmin could be managing thousands of servers.
Also, 6000 CPU "cores" on the cloud is more like 3000 physical cores, since a cloud vCPU is usually a hyperthread. You can get that in just 20-50 servers, which is in the range of something that could be taken care of as a part-time job.
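The server count follows directly from typical per-box core counts. A quick sketch, assuming dual-socket machines in the 64-128 physical-core range (my assumption for "typical" hardware):

```python
# 6000 cloud vCPUs ~= 3000 physical cores (one vCPU per hyperthread,
# as the comment above assumes). How many servers is that?
import math

physical_cores = 6000 // 2  # 3000

for cores_per_server in (64, 96, 128):  # plausible dual-socket configs
    servers = math.ceil(physical_cores / cores_per_server)
    print(f"{cores_per_server} cores/server -> {servers} servers")
```

That yields 24-47 servers across those configurations, matching the "20-50 servers" range.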
Exactly this. I know people don't like to use this term (because it comes from traditional IT), but this is effectively what's known as TCO (total cost of ownership). The whole "bare metal" versus well-known-hyperscalers debate often misses this with a hand-wavy "just get better DevOps people and it's cheaper".