Given your username, I assume I'm addressing Martin Uecker, and I want to commend you for putting the work in on this problem over a long period. While a draft TS is closer to actually resolving a 20+-year-old problem, it's not there yet, and this draft is, if you'll pardon me, in my opinion more than three months' work from publishable.
In contrast, Aria went from "We should see whether strict rules are workable" to the current set of Rust nightly features relatively quickly.
If I want to attempt pointer stunts in C, N3005 is probably worth reading first, but it isn't actionable from what I can tell. Contrast Aria's changes: I can go try that (in nightly Rust) and see that it works, or why I can't / mustn't do what I wanted under Strict Provenance, and who I ought to talk to about that.
I wish you luck for C29 (?), but that's a distant future, and we're contrasting C++ here, so as with #embed you might take this obvious must-have for C29 and find that somehow WG21 are still so far behind that they can't do the same for C++ 29.
You realize that this experiment is basically our proposal translated to Rust? Also I wonder why it is an issue that N3005 is still a draft? Rust also does not have an ISO standard.
In what sense do you believe it's "basically our proposal translated to Rust" ?
N3005 explains four provenance variants, but settles specifically on PNVI-ae-udi for the future of C. Aria's experiment, by contrast, focuses on roughly what you'd call PNVI-plain, reasoning that Rust largely does not need the extra headache of exposure.
N3005 doesn't provide an implementation, so Aria can't have merely "translated" that. It doesn't provide an API design (so not that). Which leaves basically the fact that you've extensively discussed the semantics of these variants.
Let's try specifics: take a common feature of Rust's design, ptr::map_addr. Where is equivalent functionality described for C in N3005? In Rust it's perfectly natural to write a closure here; in C perhaps you'd spell this very differently, but N3005 doesn't seem to offer such a thing.
It's an issue that N3005 is still a draft because it shows how long this wound has festered in C by comparison, and that it will still be there. If this were merely a theoretical problem I wouldn't be bothered, but the whole reason DR260 was raised is that it's not a theoretical problem.
The "expose" mechanism as described in the document you linked to corresponds exactly to PNVI-ae-udi (including the "expose" terminology, which was taken from N3005 or an earlier draft). Aria's document is very vague (is there a precise version?), but the explanation using angelic nondeterminism ("guessing the right provenance") corresponds to PNVI-ae-udi. "Strict provenance" does not need the "ae-udi" part, but the doc seems to acknowledge that this is not enough; and if you ignore the uintptr_t conversions which make use of this in C, you also have strict provenance.

N3005 doesn't provide an implementation, but there are, of course, implementations. First, because it captures what most C compilers do (although there are still some differences for some). Then there is a precise mathematical model in N3005, and there is Cerberus, in which you can run and analyze code.

You are right that we do not have convenience tools such as ptr::map_addr. Maybe someone should build a header for C that provides similar convenience macros. But the hard part is getting all the bugs in the compiler backends fixed. Last time I checked, Rust was also still affected by the same bugs in LLVM as C (but those may be fixed by now).
I should also give a bit of background, because I find it funny that a surface-level API for Rust is portrayed as "solving" in three months the problem C struggled with for years. (And by that I do not want to downplay Aria's impressive work; it is a very nice and clean interface.)

The fundamental problem is that compiler writers built optimizations without a clear mathematical understanding of the rules. One could argue that this is the fault of the C standard for not being a mathematically precise specification, but that was also never really the goal of ISO C. The main issue is that compilers implemented rules that were not consistent even between different optimizer passes. WG14's main mission is to standardize existing practice; DR260's (failed) attempt to clarify some of this came out of that. I agree that this should have been done better, but the important thing to understand is that WG14, as a consensus-based standardization committee, is not really equipped to do this kind of work. The basic assumption is that compilers implement sound models, which can then later be standardized, only harmonizing differences between different compilers. But here we had to clean up the mess compiler vendors created, which is not at all what WG14 was meant for.

To be able to start fixing compilers, one had to find a common and consistent formulation for provenance. For C this also meant considering what all existing code needs and what optimizations different compilers implement, then deciding which version to pick, which specific behavior is a useful optimization, what is a compiler bug, and which optimizations can be sacrificed to arrive at a consistent formulation. Only with this can one decide what is an optimizer bug and what is not. To the extent these bugs are still there, they affect Rust too; to the extent they are fixed, Rust benefits. (Vice versa, C benefits from Rust's push towards safety, which also puts pressure on compilers to fix such bugs.)
But in any case, the API on top is not at all the most difficult part and Aria's document is not precise enough to even differentiate between subtle points of provenance.
I think you're correct that it's a big problem that the serious compilers don't actually have coherent internal behaviour, but I think that's a distinct problem which has been masked by the provenance problem in C and thus C++. It meant that real bugs in their software could be waved away as "works as intended" by pointing to DR260, rather than anybody needing to fix the compiler.
Once the GCC work is closer to finished, I expect we'll see the same there.
I disagree that you need to solve the C problem first to solve the compiler problem, and I think it was misguided to start there. You seem to have focused on the fact that exposure is even an option in Aria's implementation, but let me quote: "The goal of the Strict Provenance experiment is to determine whether it is possible to use Rust without expose_addr and from_exposed_addr". Setting PNVI-ae-udi as "the" provenance rule is your end goal for C, but there's a reason it's called the "Strict Provenance" experiment in Rust: the goal is something like what you call PNVI-plain.
APIs like map_addr are key to that goal; Rust has them and N3005 does not.
So like I said, rather than just being a "translation" I think the most that can be said is you've got the same problem albeit in a very different context, and your solutions are related in the way we'd expect when competent people attack similar problems.
We have no "end goal". PNVI-ae-udi is intended to capture the semantics of most existing C code. I mention terminology such as "exposure" etc. just because it makes it obvious that this work builds on C's provenance model (the terminology did not exist before, as far as I know). PNVI-plain is a stricter subset: you can already use it in C, and a lot of C code would work just fine with it. Note that some compiler people and formal-semantics people were pushing for PVI.
If you think that map_addr etc. is an important API, then I agree that it is an innovation on top of what we did. But I personally do not quite see the importance of this API. Yes, it allows some things within the scope of strict provenance which in C would currently require PNVI-ae-udi. We envisioned future extensions that prevent exposure of pointers for certain operations, but somehow this seems rather academic at this point in time. If you are not using hardware such as CHERI, this does not matter. On the other hand, PNVI-ae-udi makes most existing C code follow a precise provenance model, which I think is a huge step forward.