This is one key draw to Big Cloud and especially PaaS and managed SQL for me (and dev teams I advise).
Not having an ops background I am nervous about:
* database backup+restore
* applying security patches on time (at OS and runtime levels)
* other security issues like making sure access to prod machines is restricted correctly, access is logged, ports are locked down, abnormal access patterns are detected
* making sure DoS and similar protections are not my responsibility
It feels like picking a popular cloud provider gives a lot of cover for these things - sometimes technically, and otherwise at least politically...
Applying security patches on time is not much of a problem. Patches you need to apply ASAP are rare, you never put the DB engine on public access anyway, and for a patched RCE the exploit is usually not publicly disclosed and PoC code is not available on the day the patch ships.
Most of the time you are fine if you follow major version updates as they come, do regression testing, and put them on prod in your own planned time.
Most problems come from not updating at all and running 2 or 3 year old versions: that is what automated scanners look for, and after that much time it is much more likely that someone has written exploit code and shared it.
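To make that concrete, here's a minimal sketch of the kind of check that catches the "years out of date" case, assuming Postgres and the psycopg2 driver - the version threshold and connection string are just placeholders:

```python
# Sketch: warn when the Postgres server major version is older than what we
# currently consider acceptable. Assumes psycopg2 and a DSN in the environment;
# MIN_ACCEPTABLE_MAJOR is an arbitrary example cut-off, not a recommendation.
import os
import psycopg2

MIN_ACCEPTABLE_MAJOR = 14  # hypothetical threshold for this example

def check_server_version(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # server_version_num is e.g. 150004 for Postgres 15.4
            cur.execute("SHOW server_version_num")
            version_num = int(cur.fetchone()[0])
    major = version_num // 10000
    if major < MIN_ACCEPTABLE_MAJOR:
        print(f"WARNING: Postgres major version {major} is below {MIN_ACCEPTABLE_MAJOR}; plan an upgrade")
    else:
        print(f"OK: Postgres major version {major}")

if __name__ == "__main__":
    check_server_version(os.environ["DATABASE_URL"])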
There must be SaaS services offering managed databases on different providers - you buy the servers, they put the software on and host backups for you. Anyone got any tips?
to be fair, AWS' database restore support is generally only a small part of the picture - the only option available is to spin an entirely new DB cluster up from the backup, so if your data recovery strategy isn't "roll back all data to before the incident", you have to build out all your own functionality for merging the backup and live data...
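For reference, a point-in-time restore through boto3 looks roughly like the sketch below (cluster names, timestamp and instance class are made up, Aurora Postgres is assumed) - the API only ever creates a new cluster, it never rolls back the live one:

```python
# Sketch: point-in-time restore of an Aurora cluster with boto3.
# All identifiers and the timestamp are hypothetical; the point is that the
# API creates a brand-new cluster rather than restoring the live one in place.
from datetime import datetime, timezone
import boto3

rds = boto3.client("rds")

rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="orders-restored-2024-05-01",  # new cluster to create
    SourceDBClusterIdentifier="orders-prod",           # cluster the backup came from
    RestoreToTime=datetime(2024, 5, 1, 3, 0, tzinfo=timezone.utc),
)

# The restored cluster still needs at least one instance before you can connect.
rds.create_db_instance(
    DBInstanceIdentifier="orders-restored-2024-05-01-instance-1",
    DBClusterIdentifier="orders-restored-2024-05-01",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
)
```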
Yeah, and that default strategy tends to become very, very painful the first time you encounter non-trivial database corruption.
For example, one of my employers routinely tested DB restore by wiping an entire table in stage, and then having the on call restore from backup. This is trivial because you know it happened recently, you have low traffic in this instance, and you can cleanly copy over the missing table.
But the last actual production DB incident they had was a subtle data corruption bug that went unnoticed for several weeks - at which point restoring meant a painful merge of 10s of thousands of records, involving several related tables.
For sure. It's more about having a pipeline for pulling data from multiple sources - rather than spinning up a whole new DB cluster, you usually want to pull the data into new tables in your existing DB, so that you can run queries across old & new data simultaneously.
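That pipeline can be fairly small - a sketch of the idea, assuming Postgres and psycopg2, with connection strings and table names as placeholders:

```python
# Sketch: stream one table out of the restored cluster into a side table in the
# live database, so old and new rows can be compared with ordinary SQL.
import io
import psycopg2

RESTORED_DSN = "postgresql://app@restored-cluster/orders"  # hypothetical
LIVE_DSN = "postgresql://app@live-cluster/orders"          # hypothetical

buf = io.StringIO()

# Pull the table from the restored cluster into memory (fine for small tables;
# stream to a temp file for anything big).
with psycopg2.connect(RESTORED_DSN) as src, src.cursor() as cur:
    cur.copy_expert("COPY (SELECT * FROM orders) TO STDOUT", buf)
buf.seek(0)

# Load it into the live database under a different name, next to the live table.
with psycopg2.connect(LIVE_DSN) as dst, dst.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS orders_restored (LIKE orders)")
    cur.copy_expert("COPY orders_restored FROM STDIN", buf)

    # From here the merge is ordinary SQL, e.g. find rows that diverged:
    cur.execute("""
        SELECT r.id
        FROM orders_restored r
        JOIN orders o USING (id)
        WHERE ROW(o.*) IS DISTINCT FROM ROW(r.*)
    """)
    print(f"{cur.rowcount} rows differ between live and restored data")
```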
Exactly this. For a small team that's focused on feature development and customer retention, I tend to gladly outsource this stuff and sleep easy at night. It's not even a cost or performance issue for me. It's about what part of my actual business I'd be neglecting if I started focusing on this stuff instead. It's a tradeoff.
I can attest to that. At Cloud 66 a lot of customers tell us that while the PaaS experience on Hetzner is great, they benefit from our managed DBs the most.