Greed. There's no technical basis. I'm pretty sure they considered it; even though L2s don't pay much rent to the L1, Stripe wanted absolute control (which goes against decentralization) and all the fees in their own pocket. Just hoping this doesn't become another trend.
In the API, all GPT‑5 models can accept a maximum of 272,000 input tokens and emit a maximum of 128,000 reasoning & output tokens, for a total context length of 400,000 tokens.
So it's 272k for input and 400k in total, counting reasoning & output tokens.
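The arithmetic above (272,000 input + 128,000 reasoning/output = 400,000 total) can be sketched as a simple budget check. This is a minimal illustration, not a real API client; the limit constants come from the figures quoted above, and the function name is made up for the example.

```python
# Published GPT-5 API limits as quoted above (assumed accurate here).
MAX_INPUT_TOKENS = 272_000
MAX_OUTPUT_TOKENS = 128_000   # reasoning + output tokens combined
TOTAL_CONTEXT = MAX_INPUT_TOKENS + MAX_OUTPUT_TOKENS  # 400,000

def fits_in_context(input_tokens: int, requested_output_tokens: int) -> bool:
    """Check a request against both per-side caps and the total context length."""
    return (
        input_tokens <= MAX_INPUT_TOKENS
        and requested_output_tokens <= MAX_OUTPUT_TOKENS
        and input_tokens + requested_output_tokens <= TOTAL_CONTEXT
    )

print(fits_in_context(270_000, 100_000))  # True: both sides under their caps
print(fits_in_context(273_000, 50_000))   # False: input alone exceeds 272k
```

Note that the total cap is redundant given the two per-side caps here, but keeping it explicit matches how the limits are usually documented.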
Only at first glance. It can easily render things that would be very hard to implement in an FPS engine.
What AI can dream up in milliseconds could take hundreds of human hours to encode using traditional tech (meshes, shaders, ray tracing, animation, logic scripts, etc.), and it still wouldn't look as natural and smooth as AI renderings — I refer to the latest developments in video synthesis like Google's Veo 3. Imagine it as a game engine running in real time.
Why do you think it is so hard, even for technical people here, to make the inductive leap on this one? Is it that close to magic? The AI is rendering pillars and also handling collision detection on them. As in, no one went in there, selected a bunch of pillars, and marked them as a barrier. That means in the long run, I'll be able to take some video or pictures of the real world and have it become a game level.
Because that's been a thing for years already - and it works way better than this research does.
Unreal Engine 5 has been demoing these features for a while now - I heard about it in early 2020, iirc - but techniques like Gaussian splatting predate it.
I have no experience with either of these, but I believe MegaScans and RealityCapture are two examples doing this. And the last Nanite demo touched on it, too.
I'm sorry, what's a thing? Unreal Engine 5 does those things with machine learning? Imagine someone shows me Claude generating a full React app, and I say "well you see, React apps have always been a thing". The thing we're talking about is AI, nothing else; that there is no "other thing" doing it is the whole point of the AI hype.
What they meant is that 3D scanning real places, and translating them into 3D worlds with collision already exists, and provides much, much better results than the AI videos here. Additionally, it does not need what is likely hours of random footage wandering in the space, just a few minutes of scans.
There are people who file collective mandamus lawsuits, so you might consider joining one of those groups.
However, from what I've seen, it could be a waste of money, as there isn't convincing evidence that it accelerates AP. These lawsuits usually take many months, and AP often resolves "by itself" before the lawsuit reaches a resolution.
There's a difference in quality, though. IntelliSense was never meant to be more than autocomplete or suggestions (function names, variable names, etc.), i.e. it covers the typing-out and memorizing of API calls and function signatures. LLMs in the context of programming are tools that aim to replace the thinking part. Big difference.
I don't need to remember all functions and their signatures for APIs I rarely use - it's fine if a tool like IntelliSense (or an ol' LSP, really) acts as a handy cheat sheet for those. Having a machine auto-implement entire programs, or significant parts of them, is on another level entirely.
https://crackernews.github.io/