Hacker News | mercnz's comments

just before this outage i was exploring bunnycdn, as the idea of cloudflare taking over dns still irks me slightly. there are competitors, but there's a certain amount of scale that cloudflare offers which i think helps performance in general. that said, in the past i found cloudflare performance terrible when i was doing lots of testing. they are predominantly a pull-based system, not push, so if content isn't already cached the cache-miss performance can be pretty blah. i think their general backhaul paths have improved, but at least from new zealand they used to do worse than hitting a los angeles proxy that then hit the origin. (google was in a similar position before, where both 8.8.8.8 and www.google.co.nz/.com were faster via los angeles than via normal paths - i think google was doing asia-parenting, so 8.8.8.8 cache misses went somewhere super far away.) now that we have http/3 etc., i think that level of performance is simpler to achieve, and ddos/bot protection is the real differentiator - and cloudflare's bot protection does seem to work reasonably well in general.


i think that is data rather than code, which is where it falls short; in a way you need stringent, more safeguarded code. it's like if everyone sends you 64kb posts because that's all your proxy layer lets in, and someone once checked that sending 128kb gave an error before reaching your app - then the proxy layer changes, someone sends 128kb, and your app crashes because it had an assert that the body was under 64kb. actually tracking issues with erroneous data, overflows and so on isn't so much code testing as fuzz testing, brute-force testing etc., which i think people should do. but that means we need strong test networks, and those test networks may need to be more internet-like to reflect real issues, so the whole testing infrastructure itself becomes difficult to get right - they have their own tunneling system etc., so they could potentially segregate some of their servers and build a test system with better error diagnosis.

but to my mind, if they had better error propagation back that really identified what was happening and where, that would be a lot better in general - and sure, start doing that on a test network. this is something i've been thinking about in general: i made a simple rpc system for sending real-time rust tracing logs back from multiple end servers (it lets you use the normal tracing framework with a thin rpc layer), but that's mostly for granular debugging. i've never quite understood why systems like systemd-journald aren't more network-centric when they're already big, complex kitchen-sink approaches - apparently there's dbus support, but to my mind something in between the debug level and warning/info is missing.

even if it's only sampling something like 1/20 of log info, that's valuable: if things like large files getting close to limits are increasing, we can see it as things run, and see whether it's localised or common, which would help build more resilient systems. something may already exist along these lines, but i didn't come across anything in a reasonably passive way - i mean, there are debugging tools like dtrace etc. that have been around for ages.
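the 64kb example above can be sketched roughly like this (a minimal sketch - the limit and handler names are hypothetical; the point is that the app should return an error rather than assert on an assumption the proxy layer enforces):

```rust
// hypothetical app-side body handling: the proxy layer is assumed to
// enforce a 64 KiB limit, but the app should not bake that assumption
// into an assert.
const MAX_BODY: usize = 64 * 1024;

// rejecting oversized bodies with an error keeps the failure visible
// and recoverable; an assert!(body.len() <= MAX_BODY) here would crash
// the worker the day the proxy's limit changes.
fn handle_post(body: &[u8]) -> Result<usize, String> {
    if body.len() > MAX_BODY {
        return Err(format!(
            "body too large: {} bytes (limit {})",
            body.len(),
            MAX_BODY
        ));
    }
    Ok(body.len())
}

fn main() {
    // a 1 KiB body passes, a 128 KiB body is rejected cleanly
    assert!(handle_post(&vec![0u8; 1024]).is_ok());
    assert!(handle_post(&vec![0u8; 128 * 1024]).is_err());
}
```

the same shape applies to any limit inherited from an upstream layer: treat it as data that can change, not as an invariant.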


did you try at 60hz? i've found a lot of monitors don't like 720x400 at 70Hz, which is what the bios often boots to on older computers. i'm not sure if they're running 640x480 at a high refresh rate too.


Yes, I only tested the 60Hz vertical refresh options.

