At least when we implemented this in the first version of Oculus Link, the way it worked is that the frame was warped (AADT [1]) into a deformed texture before compression and regenerated back to rectilinear after decompression, as a cheap and simple way to emulate fixed foveated rendering. So it's not some kind of adaptive bitrate that spends fewer bits outside the fovea region; it achieves a similar result by giving the periphery fewer pixels in the image that gets compressed. Adaptive bitrate would work too (and maybe even better), but encoders (especially HW-accelerated ones) don't support that.
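For intuition, here's a toy version of that distort/undistort round trip. To be clear, this is a sketch of the general idea only - the warp curve and every number below are made up for illustration, not the actual AADT mapping from [1]:

    import numpy as np

    def warp(u, center=0.5, strength=2.0):
        # Monotonic map from uniform coords in [0, 1] to source coords in
        # [0, 1], densest around `center` (the assumed fovea). Using one 1D
        # curve per axis is what makes the distortion "axis-aligned".
        f = lambda t: t * (1.0 + strength * t * t)
        lo, hi = f(-center), f(1.0 - center)
        return (f(u - center) - lo) / (hi - lo)

    def axis_indices(n_out, n_in, center=0.5, strength=2.0):
        # Nearest-neighbor source indices for one axis of the small texture.
        src = warp(np.linspace(0.0, 1.0, n_out), center, strength)
        return np.clip((src * (n_in - 1)).round().astype(int), 0, n_in - 1)

    def distort(frame, out_h, out_w):
        # Squeeze the full-res frame into a smaller texture: ~1:1 sampling
        # in the middle, progressively fewer pixels toward the edges.
        return frame[np.ix_(axis_indices(out_h, frame.shape[0]),
                            axis_indices(out_w, frame.shape[1]))]

    def undistort(tex, out_h, out_w):
        # Stretch the decoded texture back to full res by numerically
        # inverting the monotonic warp (dense lookup table + interp).
        u = np.linspace(0.0, 1.0, 4096)
        v = warp(u)
        def inv(n_out, n_in):
            src = np.interp(np.linspace(0.0, 1.0, n_out), v, u)  # warp^-1
            return np.clip((src * (n_in - 1)).round().astype(int), 0, n_in - 1)
        return tex[np.ix_(inv(out_h, tex.shape[0]), inv(out_w, tex.shape[1]))]

    frame = np.random.randint(0, 256, (1600, 1440, 3), dtype=np.uint8)
    small = distort(frame, 1200, 1080)    # ~44% fewer pixels hit the encoder
    # ...video encode -> USB transport -> decode happens here on real HW...
    restored = undistort(small, 1600, 1440)  # sharp center, soft periphery

Because the warp is a separate 1D curve per axis, it's a cheap row/column resample rather than an arbitrary 2D remap, which matters when you're doing it per eye per frame.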
Foveated streaming is presumably the next iteration of this, where eye tracking gives you better information about where to apply the distortion. I'm genuinely curious how they make this work well, though - eye tracking is generally high latency, but the eye moves very, very quickly. (Maybe HW and SW have improved, but they allude to this problem, so I'm curious whether their argument about using it at a low update frequency really improves meaningfully on more static techniques.)
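If it's the same kind of separable warp, the eye-tracked version could in principle just re-center the dense region on the latest gaze sample and flatten the warp enough to cover however far the eye might travel during the tracking latency. A hypothetical sketch on top of the helpers above - the latency/saccade/FOV numbers and the padding heuristic are my assumptions, not anything Meta has described:

    def gaze_distort(frame, out_h, out_w, gaze_xy,
                     latency_s=0.05, saccade_dps=500.0, fov_deg=100.0):
        # Same separable warp, re-centered on the latest gaze sample
        # (gaze_xy in normalized [0,1] frame coords). The warp is flattened
        # in proportion to how far the eye could travel during tracking +
        # transport latency: 0.05 s * 500 deg/s = 25 deg, a quarter of a
        # ~100 deg FOV - which is why the latency question decides whether
        # this beats a fixed warp at all.
        slack = latency_s * saccade_dps / fov_deg   # worst-case gaze error
        s = 2.0 / (1.0 + 4.0 * slack)               # laggier tracker -> gentler warp
        ys = axis_indices(out_h, frame.shape[0], center=gaze_xy[1], strength=s)
        xs = axis_indices(out_w, frame.shape[1], center=gaze_xy[0], strength=s)
        # The decoder needs the same (center, strength) per frame to invert.
        return frame[np.ix_(ys, xs)]

    small = gaze_distort(frame, 1200, 1080, gaze_xy=(0.3, 0.6))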
Although your eye moves very quickly, your brain has a delay before it processes the completely new view you just switched to. It's very hard to flick your eyes left and right and read something quickly changing on both sides.
[1] https://developers.meta.com/horizon/blog/how-does-oculus-lin...