
The whole "foveated streaming" sounds absolutely fascinating. If they can actually pull off doing it accurately in real time, that would be incredible. I can't even imagine the technical work behind the scenes to make it all work.

I'd really like to know what the experience is like of using it, both for games and something like video.



There's an awesome shader on shadertoy that illustrates just how extreme the fovea focus is: https://www.shadertoy.com/view/4dsXzM

Linus the shrill/yappy poodle and his channel are less than worthless IMO.


When you full screen this, it's crazy how tiny the area that spins is. For me it's like an inch or inch and a half on a 32 inch 4k display at a normal seated position.

(If I move my head closer, it gets larger; if I move further away, it gets smaller.)


That's crazy. I feel dumb for initially thinking it was somehow doing eye tracking to achieve this, despite having no such hardware installed.

I would be curious to see a similar thing that includes flashing. Anecdotally, my peripheral vision seems to be highly sensitive to flashing/strobing even if it is evidently poor at seeing fine detail. Makes me think compression in the time domain (e.g. reducing frame rate) will be less effective. But I wonder if the flashing would "wake up" the peripheral vision to changes it can't normally detect.

Not sure what the random jab at Linus is about.


It’s normal to be "more sensitive" to brightness differences in the peripheral areas compared to the fovea. The fovea has more color receptors (cones); the periphery has comparatively more monochromatic receptors (rods), which respond mainly to brightness. The overall receptor density in the fovea is also much higher.


It is doing eye tracking for the foveated rendering - it has 2 cameras inside the visor for it.


They're referring to the shadertoy linked above. The illusion simulates foveated rendering on your device without eye tracking.


That's quite harsh, and definitely not accurate.


Imagine if we could hook this into game rendering as well. Have super high resolution models, textures, shadows, etc near where the player is looking, and use lower LoDs elsewhere.

It could really push the boundaries of detail and efficiency, if we could somehow do it real-time for something that complex. (Streaming video sounds a lot easier)


Foveated rendering is already a thing. But since it needs to be coded for in the game, it's not really being used in PC games. Games designed for PlayStation with the PS VR2 in mind do use foveated rendering, since they know their games are being played with hardware that provides eye tracking.


That's foveated rendering. Foveated streaming, which is newly presented here, is a more general approach which can apply to any video signal, be it from a game, movie or desktop environment.

They are complementary things. Foveated rendering means your GPU has to do less work, which means higher frame rates for the same resolution/quality settings. Foveated streaming is more about just being able to get video data across from the rendering device to the headset. You need both things to get great results, as either rendering or video transport could be a bottleneck.


Game rendering is what they're talking about here. John Carmack has talked about this a bunch if you'd like to seed a Google search.


Not quite: you can use it for game rendering, but with a WiFi adapter you more importantly want to use it for the video signal, and only transfer high-res in the area you're looking at. A 4K game (2048*2048 per eye, two screens) is about 25 Gbit/s uncompressed at 100 fps, which would stress even Wi-Fi 7. With foveated streaming you can probably get that down to 8 Gbit/s easily.
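To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The bit depth (30 bits/pixel, i.e. 10-bit RGB), the 512x512 full-resolution inset, and the quarter-resolution periphery are assumptions for illustration, not anything Valve has published:

    def gbits(bits_per_second):
        return bits_per_second / 1e9

    W = H = 2048      # per-eye resolution
    EYES = 2
    FPS = 100
    BPP = 30          # bits per pixel, assuming 10-bit RGB

    raw = W * H * EYES * FPS * BPP
    print(f"uncompressed full-res: {gbits(raw):.1f} Gbit/s")       # ~25.2 Gbit/s

    FOVEA = 512                        # hypothetical full-res inset, per eye
    periphery = (W // 2) * (H // 2)    # rest of the FOV at quarter resolution
    foveated = (FOVEA * FOVEA + periphery) * EYES * FPS * BPP
    print(f"uncompressed foveated: {gbits(foveated):.1f} Gbit/s")  # ~7.9 Gbit/s

The exact savings depend on how big the high-res inset is and how aggressively the periphery is downscaled, and all of this is before the video codec applies its own compression on top.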


Not just stress Wi-Fi 7: even if the theoretical limit is 23 Gbps, you're not getting anywhere close to that sending to just one device.


Valve is applying it to the streamed view from the computer to reduce the bandwidth requirements; it's not actually doing foveated rendering in the game itself, because not all games support it.

Foveated streaming is just a bandwidth hack and doesn't reduce the graphic requirements on the host computer the same way foveated rendering does.


As a lover of ray/path tracing I'm obligated to point out: rasterisation gets its efficiency by amortising the cost of per-triangle setup over many pixels. This more or less forces you to do fixed-resolution rendering; it's very efficient at this, which is why even today with hardware RT, rasterisation remains the fastest and most power-efficient way to do visibility processing (under certain conditions). However, this efficiency starts to drop off as soon as you want to do things like stencil reflections, and especially shadow maps, to say nothing of global illumination.

While there are some recent-ish extensions to do variable-rate shading in rasterisation[0], this isn't variable-rate visibility determination (well, you can do stochastic rasterisation[1], but it's not implemented in hardware), and with ray tracing you can do as fine-grained a distribution of rays as you like.

TL;DR: for foveated rendering, ray tracing is the efficiency king, not rasterisation (a rough ray-budget sketch follows the links below). But don't worry, ray tracing will eventually replace all rasterisation anyway :)

[0] https://developer.nvidia.com/vrworks/graphics/variableratesh...

[1] https://research.nvidia.com/sites/default/files/pubs/2010-06...
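To make the "as fine-grained a distribution of rays as you like" point concrete, here is a rough ray-budget sketch in Python. The field of view, the acuity falloff constant, and the assumption that required sample density scales with the square of relative acuity are all made up for illustration, not taken from any vision model or real renderer:

    import math

    GRID = 256        # coarse grid standing in for a 2048x2048 eye buffer
    FOV_DEG = 100     # assumed field of view
    K = 0.5           # assumed acuity falloff per degree of eccentricity

    def rel_density(ecc_deg):
        """Relative sample density needed at a given eccentricity."""
        return 1.0 / (1.0 + K * ecc_deg) ** 2   # density ~ (relative acuity)^2

    deg_per_cell = FOV_DEG / GRID
    total = 0.0
    for y in range(GRID):
        for x in range(GRID):
            ecc = math.hypot(x - GRID / 2, y - GRID / 2) * deg_per_cell
            total += rel_density(ecc)

    uniform = GRID * GRID
    print(f"foveated ray budget: {100 * total / uniform:.1f}% of a uniform grid")

The specific percentage doesn't matter; the point is that a ray tracer can spend its sample budget wherever the density function says to, while a rasteriser's primary visibility is tied to the render-target resolution.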


I think you could do foveated rendering efficiently with rasterization if you "simply" render twice at two different resolutions: a low resolution render over the entire FOV, and a higher resolution render in the fovea region. You would have overlap, but overall it should be fewer pixels rendered.
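As a rough sanity check of that two-pass idea, counting pixels with some assumed numbers (2048x2048 per eye, a half-resolution-per-axis periphery pass, and a 512x512 fovea pass; none of these come from a shipping implementation):

    W = H = 2048
    full_res = W * H                   # one full-resolution pass
    periphery = (W // 2) * (H // 2)    # low-res pass covering the whole FOV
    fovea = 512 * 512                  # high-res pass covering only the fovea
    two_pass = periphery + fovea

    print(f"single full-res pass: {full_res:,} px")
    print(f"two-pass foveated:    {two_pass:,} px "
          f"({100 * two_pass / full_res:.0f}% of full res)")

In practice you also pay for the overlap region twice plus a second set of draw calls and per-triangle setup, so the real saving is smaller than the raw pixel count suggests, but it is still substantial.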


I believe the standard way is to reduce the sampling density outside the area you're looking at, see https://docs.vulkan.org/samples/latest/samples/extensions/fr... . Optimally you could attach multiple buffers with different resolutions covering different parts of clip space, saving VRAM bandwidth. Sadly this is not supported currently to my knowledge, so you have to write to a single giant buffer with lower sample resolution outside the detail area, and then just downsample it for the coarse layer.
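For illustration, here is roughly what that density input looks like conceptually: a tiny grid covering the render target, with full density near the gaze point and much lower density in the periphery. This is only a sketch of the idea in Python, not actual Vulkan code or the exact image format VK_EXT_fragment_density_map expects:

    import math

    TILES = 16              # the density image is tiny compared to the frame
    GAZE = (9, 6)           # hypothetical gaze position, in tile coordinates
    MIN_DENSITY = 1.0 / 16  # floor for the far periphery

    density_map = []
    for ty in range(TILES):
        row = []
        for tx in range(TILES):
            dist = math.hypot(tx - GAZE[0], ty - GAZE[1])
            # full shading density near the gaze, falling off with distance
            row.append(max(MIN_DENSITY, 1.0 - dist / TILES))
        density_map.append(row)

    for row in density_map:
        print(" ".join(f"{v:.2f}" for v in row))

In the low-valued tiles the GPU generates fewer, larger fragments, which is where the shading and bandwidth savings come from.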


Foveated streaming should be much easier to implement than foveated rendering. Just encode two streams, a low res one and a high res one, and move the high res one around.
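A minimal sketch of what the client-side compositing could look like, assuming two decoded streams as numpy arrays, a 512x512 high-res inset, and gaze coordinates in full-frame pixels. All of those specifics are made up for illustration; this is not how Valve describes their implementation:

    import numpy as np

    FULL = (2048, 2048, 3)   # output / high-res source resolution
    LOW = (512, 512, 3)      # low-res stream covering the whole FOV
    INSET = 512              # size of the high-res crop that follows the gaze

    def composite(low_frame, hi_crop, gaze_xy):
        """Upscale the low-res frame to full size, then paste the
        high-res crop centred on the gaze point."""
        scale = FULL[0] // LOW[0]   # nearest-neighbour 4x upscale
        out = np.repeat(np.repeat(low_frame, scale, axis=0), scale, axis=1)

        # clamp the inset so it stays inside the frame
        x = int(np.clip(gaze_xy[0] - INSET // 2, 0, FULL[1] - INSET))
        y = int(np.clip(gaze_xy[1] - INSET // 2, 0, FULL[0] - INSET))
        out[y:y + INSET, x:x + INSET] = hi_crop
        return out

    # toy frames: grey periphery, white fovea crop
    low = np.full(LOW, 64, dtype=np.uint8)
    crop = np.full((INSET, INSET, 3), 255, dtype=np.uint8)
    frame = composite(low, crop, gaze_xy=(1200, 900))
    print(frame.shape)   # (2048, 2048, 3)

A real implementation would at least blend the inset's edges and predict gaze to hide stream latency, but the core trick really is that simple.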


There is a LTT video: https://www.youtube.com/watch?v=dU3ru09HTng

Linus says he cannot tell it is actually foveated streaming.


I don't put much faith in Linus. I'll keep my eyes peeled to see what others say. It's certainly possible, though; Valve has the chops to pull it off.


Norm from Tested said the same in his video.

https://youtu.be/b7q2CS8HDHU


The Verge reports similarly - they couldn't tell it was foveated streaming. Seems like Valve really cracked the code with this one.


I don't think a lot of people realize how big of a deal this is. You used to have to choose between wireless and slow or wired and fast. Now you can have both wireless and fast. It's insane.


Yep, that basically guarantees this as a purchase for me. It's essentially a Quest 3 with some improvements, an open non-Meta OS, and the various WiFi and streaming app issues fixed to make it nearly as good as a wired headset.


I haven't bought a VR headset since the Oculus Rift CV1, but this might do it for me


That is, if you are lucky enough to have wired as an option at all; especially on Linux, this has been shaky. But with Steam continuing to push into Linux and VR, I have no doubt this will change quickly.


That's not what he said. What he said was that even rapidly moving his eyes around, he could not spot the lower-resolution part.


If you are going to be pedantic then at least do it right. Because that's also not what he said. He said that no matter how fast he moved his eyes he wasn't able to catch it.


How is that meaningfully different than not being able to tell that it's foveated?



Also mentions 1-2ms latency on a modern GPU


I'm super curious how they will implement it: whether it's a general API in SteamVR that headsets like the Bigscreen Beyond could use, or something more tailored towards the Frame. I hope it's the former, as to me it sounds like all you need is eye input and the two streams; the rest could be done by SteamVR.


If you use a Quest Pro with Steam Link and a WiFi 6E access point, that should accurately represent the experience of using it.

It's close to imperceptible in normal usage.



