Hacker News | xpl's comments

A similar project of mine (generated back when GPT-4 came out, complete with comment threads and articles):

https://crackernews.github.io/


> What are some examples?

"Weaponizing Wikipedia Against Israel" https://aish.com/weaponizing-wikipedia-against-israel/


Using that site as evidence is about as credible as linking to a Ben Shapiro vlog.


Why not use actual Ethereum as a base layer? If you want speed, build (or use) an L2 on top of it.

I can hardly see any value in "yet another private blockchain" — just use a database, duh.


Greed. There's no technical basis. I am pretty sure they considered it and even though L2s do not pay much rent to L1, Stripe wanted absolute control (which goes against decentralization) and all the fees in their pocket. Just hoping this doesn't become another trend.


Ethereum is expensive and there are scaling limits; see all the posts Coinbase puts out about their Base chain.


Those scaling limits are temporary, though. PeerDAS in the next hard fork should further increase rollup scalability.

Most of these L1s will likely end up becoming L2s in the near future, especially if they can rake in revenue via sequencers.


Probably 50 years ago some skeptics were saying, "Why a database? Just use a pen and a notepad, duh."


Exactly my thought. I have next to zero interest in yet another L1.


They say:

> In the API, all GPT‑5 models can accept a maximum of 272,000 input tokens and emit a maximum of 128,000 reasoning & output tokens, for a total context length of 400,000 tokens.

So it's only 272k for input, and 400k in total once you count reasoning & output tokens.
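For reference, the arithmetic is 272,000 input + 128,000 reasoning/output = 400,000 total context. If you want a rough pre-flight check that a prompt fits the input budget, something like the sketch below works (I'm assuming tiktoken's o200k_base encoding is a close enough approximation of GPT-5's actual tokenizer; that part is an assumption):

    # Rough check that a prompt fits the documented 272k input budget.
    # Assumption: o200k_base approximates GPT-5's tokenizer closely enough.
    import tiktoken

    MAX_INPUT_TOKENS = 272_000    # per the quoted docs
    MAX_OUTPUT_TOKENS = 128_000   # reasoning + output
    TOTAL_CONTEXT = MAX_INPUT_TOKENS + MAX_OUTPUT_TOKENS  # 400,000

    enc = tiktoken.get_encoding("o200k_base")

    def fits_input_budget(prompt: str) -> bool:
        n = len(enc.encode(prompt))
        print(f"{n} tokens of {MAX_INPUT_TOKENS} allowed")
        return n <= MAX_INPUT_TOKENS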


Only at first glance. It can easily render things that would be very hard to implement in an FPS engine.

What AI can dream up in milliseconds could take hundreds of human hours to encode using traditional tech (meshes, shaders, ray tracing, animation, logic scripts, etc.), and it still wouldn't look as natural and smooth as AI renderings — I refer to the latest developments in video synthesis like Google's Veo 3. Imagine it as a game engine running in real time.


Why do you think it's so hard, even for technical people here, to make the inductive leap on this one? Is it that close to magic? The AI is rendering pillars and also handling collision detection on them. As in, no one went in there, selected a bunch of pillars, and marked them as barriers. That means in the long run I'll be able to take some video or pictures of the real world and have it become a game level.


Because that's been a thing for years already - and it works way better than this research does.

Unreal Engine 5 has been demoing these features for a while now (I first heard about it in early 2020, IIRC), but techniques like Gaussian splatting predate it.

I have no experience with either of these, but I believe MegaScans and RealityCapture are two examples of tools doing this. The last Nanite demo touched on it, too.
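(For anyone wondering what Gaussian splatting actually is: a scene is stored as millions of 3D Gaussians, each with a mean, covariance, color, and opacity, and rendering projects each covariance into screen space. Below is a toy numpy sketch of that projection step, following the usual Sigma' = J W Sigma W^T J^T formula from the splatting literature; the function and variable names are mine, not any library's API.)

    # Toy sketch: project one 3D Gaussian splat's covariance into 2D screen space.
    # Sigma' = J @ W @ Sigma @ W.T @ J.T, where W is the world-to-camera rotation
    # and J is the Jacobian of the perspective projection at the splat center.
    import numpy as np

    def project_covariance(sigma3d, cam_rot, center_cam, fx, fy):
        """sigma3d: 3x3 world-space covariance; cam_rot: 3x3 rotation;
        center_cam: splat center in camera coords; fx, fy: focal lengths (px)."""
        tx, ty, tz = center_cam
        J = np.array([
            [fx / tz, 0.0,     -fx * tx / tz**2],
            [0.0,     fy / tz, -fy * ty / tz**2],
        ])
        W = cam_rot
        return J @ W @ sigma3d @ W.T @ J.T   # 2x2 screen-space covariance

    # Example: an axis-aligned splat five units in front of an identity camera.
    print(project_covariance(np.diag([0.04, 0.01, 0.02]),
                             np.eye(3), np.array([0.0, 0.0, 5.0]), 800.0, 800.0))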


I'm sorry, what's a thing? Unreal Engine 5 does those things with machine learning? Imagine someone shows me Claude generating a full React app, and I say, "Well, you see, React apps have always been a thing." The thing we're talking about is AI, nothing else. There is no other thing; that's the whole point of the AI hype.


What they meant is that 3D-scanning real places and translating them into 3D worlds with collision already exists, and it provides much, much better results than the AI videos here. Additionally, it doesn't need what is likely hours of random footage wandering around the space, just a few minutes of scans.


This thing looks like a complete joke next to the Unreal Engine.


Check out this Russian Telegram channel on AP (administrative processing): https://t.me/usadminvisaprocessing/1

There are people who file collective mandamus lawsuits, so you might consider joining one of those groups.

However, from what I've seen, it could be a waste of money, as there isn't convincing evidence that it accelerates AP. These lawsuits usually take many months, and AP often resolves "by itself" before the lawsuit reaches a resolution.



One interesting benefit is that OpenAI would be able to detect bots using their APIs to generate content.


We just recently had COVID, which was likely bioengineered (as the lab leak is now "officially" considered a plausible explanation).


People said the same thing about IntelliSense a long time ago.


There's a difference in quality, though. IntelliSense was never meant to be more than autocomplete or suggestions (function names, variable names, etc.), i.e. the part where you type things out and memorize API calls and function signatures. LLMs in the context of programming are tools that aim to replace the thinking part. Big difference.

I don't need to remember all the functions and their signatures for APIs I rarely use - it's fine if a tool like IntelliSense (or an ol' LSP, really) acts as a handy cheat sheet for those. Having a machine auto-implement entire programs, or significant parts of them, is on another level entirely.

