Hacker News | corndoge's comments

From hosting a peertube instance solely for my own stuff for several years, I've come to appreciate just how difficult self hosting a streaming video platform is. As you say, bandwidth and storage requirements are significant; another less obvious one is transcoding. When a user uploads an HD video file, it needs to be transcoded into lower resolutions if you want there to be a hope of people streaming it. While Peertube itself is perfectly happy running on 2-4 vcpu cores on a cheap cloud vm, if you use those cores to handle transcode jobs it can take huge amounts of time (like 20+ hours) to transcode even medium length 1080p videos. So you really need either a lot of CPU that sits mostly idle, or hardware acceleration, both of which are expensive when purchased from cloud providers. Or you can use remote transcoding to offload transcode jobs onto your home gaming pc or whatever, which works well, but can be complicated and a bit touchy to set up properly, and now you have a point of failure dependent on your home network...
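To make the transcode burden concrete, here's a minimal Python sketch of building the ffmpeg commands for a downscaling ladder like the one described. The resolutions and bitrates here are illustrative assumptions, not PeerTube's actual defaults:

```python
# Build ffmpeg argv lists for a simple transcode ladder.
# Resolutions/bitrates are illustrative, not PeerTube's real config.
LADDER = [
    (1080, "5000k"),
    (720, "2800k"),
    (480, "1400k"),
]

def ffmpeg_command(src: str, height: int, bitrate: str) -> list[str]:
    """Return an ffmpeg argv that downscales `src` to `height` lines."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",   # keep aspect ratio, force even width
        "-c:v", "libx264", "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        f"{src.rsplit('.', 1)[0]}_{height}p.mp4",
    ]

# One job per rendition; each of these can run for hours on 2-4 vcpus.
commands = [ffmpeg_command("upload.mp4", h, b) for h, b in LADDER]
```

Each entry in `commands` is an independent CPU-bound job, which is why offloading them (remote runners, a home PC) is so attractive.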

And then, people watching videos are used to the YouTube experience with its world class CDN infra enabling subsecond first frame latencies even for 4k videos. They go on Peertube and first frame takes like 5 seconds for a 1080p video...realistically, with today's attention spans most of them are going to bounce before it ever plays.


Since you seem like you have practical knowledge here, I hope you don't mind me asking:

Would it change the equation, meaningfully, if you didn't offer any transcoding on the server and required users to run any transcoding they needed on their own hardware? I'm thinking of a wasm implementation of ffmpeg on the instance website, rather than requiring users to use a separate application, for instance.

Would you think a general user couldn't handle the workload (mobile processing, battery, etc), or would that be fairly reasonable for a modern device and only onerous in the high traffic server environment?


> Would it change the equation, meaningfully, if you didn't offer any transcoding on the server and required users to run any transcoding they needed on their own hardware?

I think the user experience would be quite poor, enough that nobody would use the instance. As an example, a 4k video will be transcoded at least twice, to 1080p and 720p, and depending on server config often several more times. Each transcode job takes a long time, even with substantial hwaccel on a desktop.

Very high bitrate video is quite common now since most phones, action cameras etc are capable of 4k30 and often 4k60.

> Do you think a general user couldn't handle the workload (mobile processing, battery, etc), or would that be fairly reasonable for a modern device and only onerous in the high traffic server environment?

If I had to guess, I would expect it to be a poor experience. Say I take a 5 minute video; that's probably around 3-5gb. I upload it, then need to wait - in the foreground - for this video to be transcoded and then uploaded to object storage 3 times on a phone chip. People won't do it.
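That size estimate checks out with simple arithmetic; a back-of-the-envelope in Python, assuming roughly 100 Mbps for 4k60 phone footage (actual bitrates vary by device and codec):

```python
# Rough size of a 5-minute 4k60 phone recording at ~100 Mbps,
# an assumed ballpark bitrate for modern phone footage.
bitrate_mbps = 100
seconds = 5 * 60
size_gb = bitrate_mbps * seconds / 8 / 1000  # Mbit -> MB -> GB
print(round(size_gb, 2))  # ~3.75, squarely in the 3-5gb range
```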

I do like the idea of offloading transcode to users. I wonder if it might be suited for something like https://rendernetwork.com/ where users contribute idle compute to a transcode pool in exchange for upload & storage rights, and still get fire-and-forget uploads?


Right on. Thanks for the consideration!

I really appreciate you walking through that; it's an eye-opener! It seems like you not only deal with a considerable amount of five-minute-or-greater videos, but much higher quality than I was expecting, too.

I also like the idea of user-transcoding because, honestly, I think it's better for everyone? I would love if every place I uploaded video or audio content offered an option to "include lower-quality variants" or something. Broadly, it's my product; I should have the final say on (and take responsibility for) the end result. And for high-quality stuff, the people who make it tend to have systems equipped to do that better anyway. So they could probably get faster transcoding times by using their own systems rather than letting the server do it. Seems like a win-win, even outside of the obvious benefits of "make a whole lot of computers do only the work they each need done, instead of making a few computers do the work that everyone needs done". With the only slight downside of the "average user" having some extra options that they don't understand which cause them to use it wrong and then everyone hates your product. Yay, app development.


I think offering client side transcode as an option, with server side transcode available for those who don't want to do it client side, is compelling. I would probably do it, as I have a powerful home system that can transcode much faster than my cloud host (I do use the remote transcoding feature in Peertube though).

That's very much not what transcoding is for. You don't want transcoding so a client can render the video in a comfortable resolution. You need transcoding to save bandwidth. If you want the client to do transcoding, you must send them the full raw video file. Either end of the connection may not have enough free bandwidth for that. The client may not be able to transcode depending on size and format.

You can, of course, do this anyway. PeerTube allows you to completely disable transcoding. But again, that means you're streaming the full resolution. Your client may not like this.

If realtime performance is your concern I think PeerTube allows you to pre-transcode to disk. If there is a transcoded copy matching the client request, the server streams that direct with no extra transcode.

To answer your question: shifting transcode onto the client won't improve performance and will greatly increase bandwidth requirements in exchange for less compute on the server. You almost certainly do not want this.
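The bandwidth tradeoff is easy to quantify. A toy calculation in Python, with illustrative bitrates (a raw 4k upload at ~100 Mbps vs a typical ~5 Mbps 1080p rendition):

```python
# Streaming the raw file so clients can downscale it themselves,
# vs serving a pre-transcoded 1080p rendition. Bitrates are
# illustrative assumptions, not measured values.
raw_mbps = 100        # raw 4k file straight off a phone
rendition_mbps = 5    # common ballpark for a 1080p stream
ratio = raw_mbps / rendition_mbps
print(f"{ratio:.0f}x the egress per viewer")
```

Roughly an order of magnitude more egress per viewer, which is exactly the cost transcoding exists to avoid.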


Yep. As OP said: I meant the user could transcode the various versions on their machine and then upload each to the server. Sorry about the wording; I can see that it's vague.

I think GP meant making the user perform transcoding at upload time

The funny thing is that YouTube has now enshittified to the point where people routinely DO wait well over 5 seconds to watch the video they actually wanted to watch while interstitials and other commercials are jammed in. Even with adblock enabled, the latest YouTube code won't unlock the first frame of the actual video till some period of ad time has passed so you just sit there looking at a black screen. This on its own definitely isn't enough to get people to leave the platform, but it's still notable how much worse the experience has gotten compared to a few years ago.

On what setup? All YouTube videos load and start playing instantly for me. Every time I've experienced otherwise in the last couple years, it's been my first indication that e.g. AWS is exploding that day

I wonder if it depends what country you are in. I only notice it occasionally when the video won't play in FreeTube or PipePipe (which always has the pause at the start since the last few months) and I'm forced to open an incognito browser tab to watch, and then I realize just how many ads other people are being subjected to before they can even watch the video.

I bet you're using Chrome. Open a video in Chrome and the video is immediately playable, load the same video on the same machine in Firefox and you can expect to wait 5+ seconds for the video to be playable.

I suspect that non-Chrome browsers are being intentionally hobbled


I did a pretty low rigor test and just pulled up one of those 4K videos with the swirling ink in Firefox on my M1 Mac and it seemed to load just as fast. The only difference was that it didn't autoplay the video (because I'm logged out I think) but I clicked it as the page loaded and it played instantly.

I don't doubt at all that Google hobbles their sites on Firefox but at least on my machine they aren't doing a great job of it


You likely pay for YouTube premium if you aren’t noticing ads

I do pay for premium but my impression of the parent was that this was independent of ads. The test I did in the other comment didn't trigger an ad for some reason even though I was logged out, which may be why it loaded so fast.

Ah. The parent mentioned several frustrations that I am not familiar with (presumably since I also pay for premium and don’t block the ads), but my impression was that the delay was caused by the code refusing to play the video until the time slot for the ad had completed even if the ad failed to load (as would happen when blocking the ad http request)

FreeBSD + Waterfox, or Firefox for that matter. YouTube really likes to strangle those who are not in their domain.

If I set my user agent to something like Linux/Ubuntu, it loads just fine. If I set my user agent to some unheard-of Linux distro, it lags the same as with FreeBSD.


I shove 1080p mp4s onto a very cheap server and I get 2 seconds of load time there versus somewhere between 1 and 2 seconds on youtube. And looking at network requests, the first chunk of the file loads in well under a second so I'd expect something with the metadata preloaded could start playing at that point. So if peertube takes 5 seconds, I really wonder why.

Is it inconvenient to transcode before/during upload?


If you scale an instance you need to use object storage (s3/b2/etc). Fetch from object storage can occasionally have latency spikes.

5 seconds is somewhat of an exaggeration; I clicked through 10 or so videos on my instance to check and it's 2-3 seconds most of the time.


We can exclude rare enough outliers.

I've experienced B2 throwing a wrench into the dream of low latency, but some object stores are very fast. And more importantly you only need the first couple megabytes of each video to be on fast storage.
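One way to act on that would be a small local cache holding just the head of each file, routing Range requests past it to the object store. A hypothetical sketch in Python; the threshold and function names are made up for illustration:

```python
# Hypothetical routing rule: keep the first couple MB of each video
# on fast local disk, send everything else to object storage.
# Purely illustrative; not a feature of any existing software.
HEAD_BYTES = 2 * 1024 * 1024  # first 2 MB stays local

def route_range(start_byte: int) -> str:
    """Decide where a Range request starting at `start_byte` is served from."""
    return "local-cache" if start_byte < HEAD_BYTES else "object-storage"
```

The player only needs the head of the file (metadata plus the first segments) to show a first frame, so only that slice has to be on low-latency storage.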


I'm using B2, so maybe that's it. I have the instance configured to serve video directly from B2 rather than proxying it. Peertube has no facilities that I am aware of to keep the first few MB of each video local to the server.

What value do you get in transcoding your own stuff? I have plex transcoding disabled on all local network devices that stream it and run into minimal issues (codecs on TV devices, mostly).

By "my own stuff" I mean that I use my instance to upload videos I would otherwise upload to youtube - videos I made that I intend to share with people. The usual reasons for transcoding apply.

idk man it feels pretty useful to me

It is useful. I use AI daily.

The issue is that it is even more useful to bad actors. Our society has been based on a certain level of trust.

I remember a world where photographic evidence was good enough to convict beyond a reasonable doubt. We can’t even trust video any longer. Or even that the voice on the other end of the phone is a family member


No doubt you were doing a myriad of other things that were worthwhile to you at the time.


Can you expound a bit on the problem domains? I am curious


Other states do as well.


"We compile state-level rent late-fee rules from official statutes and housing authority publications with AI-powered consistency checks."

Needs a higher-powered AI, I'd say.


Given that OP said it was "built in Replit"[1], I'm tempted to believe AI misgenerated the underlying calculation code.

[1] Replit bills itself as "an AI-powered platform for building professional web apps and websites."


I always thought Replit was supposed to be a pastebin site with built-in sandboxed code execution, so people could demo Python snippets and what-not. What happened?


Vibe-coding webapps raise more money


see: https://news.ycombinator.com/item?id=45486035

tl;dr: they pivoted from offering services adjacent to "learn to code" (among other things) to vibecoding


Just to follow up on this, 1 day later:

- It's still wrong

- The website now has a "get premium for $6 first 100 customers only!" banner

Vibe coded trash


Let me preface this by saying I use passkeys with KeepassXC.

According to WebAuthn, this is not true. Such passkeys are considered "synced passkeys" which are distinct from "device bound" passkeys, which are supposed to be stored in an HSM. WebAuthn allows for an RP to "require" (scare quotes) that the passkey be device bound. Furthermore, the RP can "require" that a specific key store be used. Microsoft enterprise for example requires use of Microsoft Authenticator.

You might ask, how is this enforced? For example, can't KeepassXC simply report that it is a hardware device, or that it is Microsoft Authenticator?

The answer is, there are no mechanisms to enforce this. Yes, KeepassXC can do this. So while you are actually correct that it's possible, the protocol itself pretends that it isn't, which is just one of the many issues with passkeys.
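For reference, the RP-side knobs being discussed live in the options an RP hands to navigator.credentials.create(). A sketch of that JSON as a Python dict; the field names follow the WebAuthn spec, while the rp/challenge/user values are placeholders:

```python
# The RP-side WebAuthn knobs, sketched as the JSON passed to
# navigator.credentials.create(). Field names are from the WebAuthn
# spec; concrete values below are made up for illustration.
creation_options = {
    "rp": {"id": "example.com", "name": "Example"},
    "challenge": "<base64url random bytes>",
    "user": {"id": "<user id>", "name": "alice", "displayName": "Alice"},
    "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # -7 = ES256
    "authenticatorSelection": {
        # A hint only: "platform" or "cross-platform"
        "authenticatorAttachment": "cross-platform",
    },
    # "direct" requests an attestation statement identifying the
    # authenticator model. Unless the server actually verifies the
    # attestation chain against known roots, a software authenticator
    # can report whatever it likes.
    "attestation": "direct",
}
```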


Hmm, I thought there was some form of attestation involved? Is it really as simple as spoofing the device ID? Do you have any more info/links on the spoofing?


Yes, PKC authentication is good, but the way passkeys have been implemented is not great. Way too much trust built into the protocol; way too much power granted to relying parties; much harder for users to form a correct mental model.


> Frameworks are generally more expensive than Macs, sometimes 50% - 100% more expensive for a similar laptop.

Do you have an example? An 8tb m4 macbook pro runs over 7 grand; the comparable hx370 framework 13 is barely over 3 grand. I bought both within the last couple months and found the macs to be significantly more expensive in the segment I was looking at.


You can buy an M4 Air for $799 on sale frequently.[0] Meanwhile, a similar spec'ed Framework with a slower AMD CPU/GPU is $1,517.00.[1] So the repairability angle just doesn't seem worth it. If the Air breaks, just buy a new one.

Keep in mind that the M4 Air has a better display, significantly faster CPU, faster GPU, significantly longer battery life, better speakers, a much better trackpad, and a thinner profile, and is fanless.

[0]https://www.macrumors.com/2025/08/27/200-off-every-m4-macboo...

[1]https://frame.work/products/laptop13-diy-amd-ai300/configura...


It is mostly valid for the 16GB/256GB-SSD config and when you need performance in bursts. Consider sustained performance, more RAM, more storage, OS options etc and the value proposition changes.

I have maintained for years that the base model M-series Air is the best computer for normal people if they plan to keep it for years.


It likely still has better sustained performance. If you need more, then just go up to MBP.


Yes, it's this. I also own an M4 mbp and an AMD framework 13. With both on maximum screen brightness, side by side, doing similar workloads, battery life isn't that much better on the M4. I think the difference maker is that the mac constantly decreases screen brightness when possible, turns the backlight completely off when there isn't any activity, heavily leverages power efficient scheduling and efficiency cores, no doubt turns off power to all peripherals whenever possible, and so on. And of course lid-closed suspend on a mac lasts indefinitely. Arch does none of these things and even on cohesive distros like Fedora there's only so much you can do in user land. Linux is designed for compatibility across a huge breadth of devices; darwin only has to support Mac hardware and can extract every ounce of power efficiency from deep hardware integration.


IIRC the low power states of M series chips generally dip down further than most x86 CPUs do, and both the SoC and OS are designed for racing to idle and coalescing tasks to reduce wakeups. On the MBPs specifically, the screen can also drop down to 1 Hz so the GPU isn’t wasting cycles redrawing static content.

The result is that in more typical usage, where the machine isn’t under constant load, battery life is much better. When it’s sitting there idle displaying a web page it’s barely consuming any power at all, where most competing laptops are pulling at least 2-3x as much power between the CPU not being able to scale down that far and constantly getting woken to perform poorly scheduled tasks.


When I switched off Android >5 years ago, even then, it was as simple as turning on the hotspot and connecting to it. It was no more cumbersome than any other wifi network. This was with a Pixel device and Linux laptop, and I am sure it works on Windows too.


You have obviously not compared it to how fast a Mac connects to an iPhone. There is no need to turn on the hotspot; you can leave that on on the iPhone. Just open your MacBook and it quickly connects to your iPhone if it does not find a standard wifi network.

I am very familiar with the Android hotspot feature. I used it for years. It works OK. But it is not as fast as the Mac/iPhone combo. Not even close. I am speaking from extensive experience.


It's the same now. Turn on the hotspot, connect to it on the PC. After that one step it's in your saved networks and you're good to go.

The only difference is Apple will do this automatically for you. If you open up your Mac and don't have network, you get a little pop up that says "use iPhone's connection?" and will turn on the hotspot and connect automatically. Nice, but hardly any different or time saving really.

