
The problem is sizing and consistency. When you're small, it's not cost effective to overprovision 2-3 big servers (for HA).

And when you need to move fast (or things break), you can't wait a day for a dedicated server to come up, or worse, have your provider run out of capacity (or have to pick a differently specced server).

IME, having to go multi-cloud/multi-provider is a way worse problem to have.



Most industries are not bursty. Overprovisioning is not expensive for most businesses. You can handle 30,000+ updates a second on a $15 VPS.

A multi-node system tends to be less reliable and to have more failure points than a single-box system. Failures rarely happen in isolation.

You can do zero-downtime deployment with a single machine if you need to.


> A multi-node system tends to be less reliable and to have more failure points than a single-box system. Failures rarely happen in isolation.

Just as a lot of problems exist between keyboard and chair, a lot of problems exist between service A and service B.

The zero-downtime deployment for my PHP site consisted of symlinking from one directory to another.
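
For anyone curious how that works: rename(2) is atomic, so you point a temporary symlink at the new release directory and rename it over the live one; requests then hit either the old code or the new code, never a half-deployed mix. A rough sketch of the idea in Python (the paths and function name are made up, not my actual setup):

    import os

    def activate_release(release_dir, current_link="/var/www/current"):
        # build a fresh symlink next to the live one
        tmp_link = current_link + ".tmp"
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(release_dir, tmp_link)
        # rename it over the live symlink; rename(2) is atomic, so the
        # web server's docroot never points at a half-swapped release
        os.replace(tmp_link, current_link)

    activate_release("/var/www/releases/2024-06-01")

Rolling back is the same operation pointed at the previous release directory.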


Nice!

Honestly, we need to stop promoting prematurely making everything a network request as a good idea.


> we need to stop promoting prematurely making everything a network request as a good idea

But how are all these "distributed systems engineers" going to get their resume points and jobs?


There are a number of providers that provision dedicated servers via API in minutes these days. Given that a dedicated server starts at around $90/month, it probably does make sense for a lot of people.


In my testing, a $20 dedicated server from OVH can outperform $144 VPSs from Linode on PassMark.



