The C++ server I mentioned had an "experiment config" used to roll out changes incrementally (user-visible features, backend code path changes for data store migrations, etc.), and it picked up config changes without a restart. Each request needed to grab the current experiment config once and hold onto it for a consistent view. This server reserved ~16 cores and had pretty high request rates, so Arc&lt;Config&gt; would indeed hit the sort of problem yencabulator is describing. And I imagine it'd get pretty bad if each server crossed NUMA node boundaries (though I'd recommend avoiding that regardless, if you can).
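For concreteness, here's roughly what that per-request pattern looks like in Rust; a minimal sketch with a made-up `Config` and `ConfigHolder` (the real server was C++ with its own machinery):

```rust
use std::sync::{Arc, RwLock};

// Hypothetical config type standing in for the experiment config.
struct Config {
    new_backend_enabled: bool,
}

// A background reloader swaps in a new Arc<Config>; each request
// clones the current Arc once and holds it for a consistent view.
struct ConfigHolder {
    current: RwLock<Arc<Config>>,
}

impl ConfigHolder {
    fn snapshot(&self) -> Arc<Config> {
        // One atomic increment here (and one decrement when the request's
        // clone is dropped) -- this is the refcount traffic that bounces a
        // cache line across cores at high request rates.
        Arc::clone(&self.current.read().unwrap())
    }

    fn reload(&self, new: Config) {
        *self.current.write().unwrap() = Arc::new(new);
    }
}

fn handle_request(holder: &ConfigHolder) -> bool {
    let cfg = holder.snapshot(); // grab once, hold for the whole request
    // ... many reads of `cfg` during the request, all consistent ...
    cfg.new_backend_enabled
}

fn main() {
    let holder = ConfigHolder {
        current: RwLock::new(Arc::new(Config { new_backend_enabled: false })),
    };
    assert!(!handle_request(&holder));
    holder.reload(Config { new_backend_enabled: true });
    // In-flight requests keep their old snapshot; new ones see the update.
    assert!(handle_request(&holder));
}
```

An epoch/rseq-based reader avoids the shared refcount entirely on the hot path, which is where this sketch and the real thing diverge.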
In this case, the Linux rseq-based epoch GC worked perfectly. It is technically a form of garbage collection, but it's not all-or-nothing like Java-style or Boehm GC; it's just a library you use for a few heavily read, infrequently updated data structures.
btw, Arc<Config> doesn't really seem relevant to the discussion of scoped concurrency. Scoped concurrency can often replace or reduce the need for Arc<RequestState> but not Arc<Config>.
"So many times" = one increment plus one decrement per request, maybe 100k ops/sec, bouncing a cache line across 16 cores. That's suboptimal but not world-ending.