Ask yourself the question: is your service really so important that it needs 99.999% uptime? I get the impression that people are so fixated on this uptime concept that being down for a few hours is treated as the worst thing in the world, to the point that they would rather hand over control of their own system to a third party than accept some downtime.
The fact that Cloudflare can literally read every bit of communication (as it sits between the client and your server) is already bad enough. And yet we accept that more easily than a bit of downtime. We shall not ask about the price of that service ;)
To me it's just the whole "everybody on the cloud" issue again: most people do not need the resources that cloud companies like AWS provide (or the bill), and yet they get totally tied down to that one service.
Not once you start pushing into the TBs range of monthly data... That's when you get the dreaded phone call from a CF rep, because the bill that is coming is no joke.
It's free as long as you really are small and not worth milking. The moment you can afford to run your own mini DC at your office, you start to enter "well, hello there" territory for CF.
> The moment you can afford to run your own mini DC at your office, you start to enter "well, hello there" territory for CF.
As someone who has run (and is still running) a DC with all the electrical/UPS, cooling, piping, HVAC+D stuff to deal with: it can be a lot of time and overhead on its own.
Especially if you don't have a number of folks in-house to deal with all that 'non-IT' equipment (I'm a bit strange in that I have an interest in both IT and HVAC-y stuff).
> There are many self-hosted alternatives to protect against botnets.
What would some good examples of those be? I think something like Anubis is mostly aimed at bot scraping; I'm not sure how you'd mitigate a DDoS attack well with self-hosted infra if you don't have a lot of resources.
On that note, what would be a good self-hosted WAF? I recall using mod_security with Apache and the OWASP ruleset; apparently the Nginx version was a bit slower (e.g. https://www.litespeedtech.com/benchmarks/modsecurity-apache-... ). There was also the Coraza project, but I haven't heard much about it: https://coraza.io/ Or maybe the people who say that running a WAF isn't strictly necessary also have a point (depending on the particular attack surface).
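For reference, the mod_security + OWASP CRS setup I remember boils down to a few lines of Apache config; this is a sketch from memory, and the include paths are just typical Debian-style locations, so adjust for your distro and CRS version:

    # Load ModSecurity 2.x and enable the rule engine
    LoadModule security2_module modules/mod_security2.so
    SecRuleEngine On
    SecRequestBodyAccess On

    # Pull in the OWASP Core Rule Set (paths vary per distro/CRS install)
    Include /etc/modsecurity/crs/crs-setup.conf
    Include /etc/modsecurity/crs/rules/*.conf

    # Example of a small local rule layered on top of the CRS
    SecRule REQUEST_HEADERS:User-Agent "@contains sqlmap" \
        "id:1000001,phase:1,deny,status:403,msg:'Blocked scanner UA'"

Coraza speaks the same SecRule language and can load the CRS too, so most of that carries over if you go that route instead.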
There is haproxy-protection, which I believe is the basis of Kiwiflare. Clients making new connections have to solve a proof-of-work challenge that takes about 3 seconds of compute time.
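The general shape of such a hash-based proof-of-work check is simple. Here's a rough sketch in Go; the difficulty, nonce format and function names are made up for illustration and this is not how haproxy-protection itself is implemented:

    // Sketch: server issues a random challenge, client must find a counter such
    // that sha256(challenge || counter) has `difficulty` leading zero bits.
    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "math/bits"
        "strconv"
    )

    // Illustrative value; tune so solving takes a few seconds on a client.
    const difficulty = 20

    func newChallenge() string {
        buf := make([]byte, 16)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        return hex.EncodeToString(buf)
    }

    func leadingZeroBits(sum [32]byte) int {
        n := 0
        for _, b := range sum {
            if b == 0 {
                n += 8
                continue
            }
            n += bits.LeadingZeros8(b)
            break
        }
        return n
    }

    // verify is the cheap check the server runs on a submitted solution.
    func verify(challenge string, counter uint64) bool {
        sum := sha256.Sum256([]byte(challenge + strconv.FormatUint(counter, 10)))
        return leadingZeroBits(sum) >= difficulty
    }

    // solve is what the client-side script effectively does: brute force.
    func solve(challenge string) uint64 {
        for c := uint64(0); ; c++ {
            if verify(challenge, c) {
                return c
            }
        }
    }

    func main() {
        ch := newChallenge()
        c := solve(ch)
        fmt.Println("challenge:", ch, "counter:", c, "ok:", verify(ch, c))
    }

The point is the asymmetry: the server does one hash to verify, while the client has to grind through roughly 2^difficulty of them, so a legitimate visitor pays a few seconds once but opening connections at botnet scale gets expensive.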
Well, if you self-host a DDoS protection service, that would be VERY expensive. You would need to rent rack space along with very fast internet connections at multiple data centers to host it.
If you're buying transit, you'll have a hard time getting away with less than a 10% commit, i.e. you'll have to pay for 10 Gbps of transit to get a 100 Gbps port, which will typically run into 4 digits USD / month. You'll need a few hundred Gbps of network and scrubbing capacity to handle the common amplification DDoS attacks launched by script kiddies from a 10 Gbps uplink server at a host that allows spoofing, and probably on the order of 50+ Tbps to handle Aisuru.
If you're just renting servers instead, you have a few options that are effectively closer to a 1% commit, but you'd better have a plan B for when your upstreams drop you because the incoming attack traffic starts disrupting their other customers - see Neoprotect having to shut down their service last month.