Something I haven't seen mentioned is that Python is very commonly taught at universities. I learned it in the 2010s at my school, whereas I never got exposed to Perl. The languages you learn in school definitely stick with you, and I wonder whether that's a non-zero factor in this.
I'm not a fan of Rust, but I don't think that is the only takeaway. All systems make assumptions about their input, and if an assumption is violated, it has to be caught somewhere. It seems like it was caught too deep in the system.
Maybe the validation code should've handled the larger size, but the DB query also produced something invalid, which should never have happened in the first place.
> It seems like it was caught too deep in the system.
Agreed, that's also my takeaway.
I don't see the problem as being "lazy programmers shouldn't have called .unwrap()". That's reductive. This is a complex system, and complex-system failures aren't monocausal.
The function in question could have returned a smarter error rather than panicking, but what then? An invariant was violated, and maybe this system, at this layer, isn't equipped to take any reasonable action in response to that invariant violation and dying _is_ the correct thing to do.
But maybe it could take smarter action. Maybe it could be restarted into a known good state. Maybe this service could be supervised by another system that would have propagated its failure back to the source of the problem, alerting operators that a file was being generated in a way that violated consumer invariants. Basically, I'm describing a more Erlang-style model of failure.
Regardless, a system like this should be able to tolerate (or at least correctly propagate) a panic in response to an invariant violation.
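For what it's worth, here's a minimal sketch of that supervision idea in Rust, assuming the consumer can be wrapped as a restartable in-process worker (process_batch and the batch loop are hypothetical, not the actual system): the panic is contained at a boundary, surfaced with enough context to point back at the producer, and the worker carries on or restarts instead of taking the whole process down.

```rust
use std::panic::{self, AssertUnwindSafe};
use std::{thread, time::Duration};

// Hypothetical worker: processes one batch of input and panics if an
// invariant (say, an unexpectedly large field) is violated.
fn process_batch(batch_id: u64) -> Result<(), String> {
    if batch_id % 5 == 0 {
        panic!("invariant violated in batch {batch_id}");
    }
    Ok(())
}

fn main() {
    // Supervisor loop: catch the panic at this boundary instead of letting it
    // kill the process, surface it loudly, and keep processing other batches.
    for batch_id in 1..=10 {
        let outcome = panic::catch_unwind(AssertUnwindSafe(|| process_batch(batch_id)));
        match outcome {
            Ok(Ok(())) => println!("batch {batch_id} ok"),
            Ok(Err(e)) => eprintln!("batch {batch_id} failed cleanly: {e}"),
            Err(_) => {
                // In a real system this is where you'd alert operators and
                // propagate the failure back toward the source of the bad data.
                eprintln!("batch {batch_id} panicked; restarting worker");
                thread::sleep(Duration::from_millis(100)); // back off before retrying
            }
        }
    }
}
```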
The takeaway here isn't about Rust itself. It's that the Rust marketing crew's claim, which we constantly read on HN and elsewhere, that the Result type magically saves you from making mistakes is not a good message to send.
They would also tell you that .unwrap() has no place in production code, and should receive as much scrutiny as an `unsafe` block in code review :)
The point of Option is that the crash path is more verbose and explicit than the crash-free path. It takes more code to check for NULL in C or nil in Go; in Rust, it takes more code to not check for an Err.
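To make that concrete, here's a minimal sketch (parse_port is a made-up example, not anything from the system being discussed): handling the failure case is the path of least resistance, and crashing is an explicit, greppable opt-in.

```rust
// Hypothetical example: the "happy to crash" path is the one you have to spell out.
fn parse_port(raw: &str) -> Option<u16> {
    raw.trim().parse().ok()
}

fn main() {
    let raw = "8080";

    // Handling the failure is the default shape of the code: the compiler
    // won't let you use the Option as a u16 until you've dealt with None.
    match parse_port(raw) {
        Some(port) => println!("listening on {port}"),
        None => eprintln!("invalid port {raw:?}, using a default instead"),
    }

    // Crashing requires writing .unwrap() explicitly -- this is the line a
    // reviewer can flag, much like an `unsafe` block.
    let port = parse_port(raw).unwrap();
    println!("unwrapped port: {port}");
}
```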
1. They don’t. There is presumably some hypothetical world where they would tell you if you start asking questions, but nobody buying into the sales pitch ever asks questions.
2. You’re getting confused by technology again. This isn’t about technology.
Sometimes it's hard: for many kinds of projects, I don't think anyone would use them if they were not open source (or at least source-available). Just like I wouldn't use a proprietary password manager, and I wouldn't use WhatsApp if I had a choice. Rather I use Signal because it's open source.
How do you get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I keep wondering about this; I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration than anything, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
But it can be profited from by not-so-big corps, so I'm still working for free.
Also, I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as them asking for a permissive licence because they did not want my copyleft licence. They never offered to pay for anything, though.
Corporations extract a ton of value from projects like ffmpeg. They can either pay an employee to fix the issues or set up some sort of contract with members of the community to fix bugs or make feature enhancements.
Nearly everyone here probably knows someone who has done free labor and "worked for exposure", and most people acknowledge that this is a scam, and we don't have a huge issue condemning the people running the scam. I've known people who have done free art commissions because of this stuff, and this "exposure" never translated to money.
Are the people who got scammed into "working for exposure" required to work for those people?
No, of course not, no one held a gun to their head, but it's still kind of crappy. The influencers that are "paying in exposure" are taking advantage of power dynamics and giving vague false promises of success in order to avoid paying for shit that they really should be paying for.
I've grown a bit disillusioned with contributing on GitHub.
I've said this on here before, but a few months ago I wrote a simple patch for LMAX Disruptor, which was merged in. I like Disruptor, it's a very neat library, and at first I thought it was super cool to have my code merged.
But after a few minutes, I started thinking: I just donated my time to help a for-profit company make more money. LMAX isn't a charity, they're a trading company, and I donated my time to improve their software. They wouldn't have merged my code if they didn't think it had some amount of value, and if they think it has value then they should pay me.
I'm not very upset over this particular example since my change was extremely simple and didn't take much time at all to implement (just adding annotations to interfaces), so I didn't donate a lot of labor in the end, but it still made me think that maybe I shouldn't be contributing to every open source project I use.
I understand the feeling. There is a huge asymmetry between individual contributors and huge profitable companies.
But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents many multiple developer-years that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.
> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.
Neither framing quite captures what's happening. You're not in an exchange with LMAX but maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library, so why feel bad about this?
Now count how many libraries you use in your day-to-day paid work that are open source and that you didn't have to pay anything for. If you want to think selfishly about how awful it is to contribute to that body of work, maybe also purge them all from your codebase and contact companies that sell them?
The main issue is the collaboration aspect of LibreOffice. With funding, though, I imagine LibreOffice could be upgraded to support this. If countries are already trying to migrate away from US tech, they could invest in it.
The world has become very expensive and everything is way more competitive than it was in the past.
To me, it feels like there is little room to make mistakes. If you get derailed, it's hard to get back on track. That, I think, is the primary reason people are taking fewer risks (or being less deviant).
Having worked at Meta, I wish they had done this when I was there. There were way too many people not agreeing on anything and having wildly different visions for the same thing. As an IC below L6, it became practically impossible to know what to do in the org I was in. I had to leave.
They could do like the Manhattan Project: have different teams competing on similar products. Apparently Meta is willing to throw money away; it could be better than handing the talent to their competitors.
I've always thought there is way more room for this: small teams competing on the same problem, then comparing results and deploying the best implementation.
It's surprising to me that Macs aren't a more popular target for games. They're extremely capable machines, and they're console-like in that there isn't much variation in hardware, as opposed to traditional PC gaming. I would think it's easier to develop a game for a MacBook than for a Windows machine where you never know what hardware setup the user will have.
The main roadblock for porting games to the Mac has never been the hardware, but Apple themselves. Their entire attitude is that they can do whatever they please with their platforms, and expect the developers to adjust to the changes, no matter how breaking. It's a constant support treadmill, fixing the stuff that Apple broke in your previously perfectly functioning product after every update. If said fixing is even possible, like when Apple removed support for 32-bit binaries altogether, rendering 3/4 of macOS Steam libraries non-functional. This works for apps, but it's completely antithetical to the way game development processes on any other platform are structured. You finish a project, release it, do a patch cycle, and move on.
And that’s not even talking about porting the game to either Metal or an absolutely ancient OpenGL version that could be removed with any upcoming OS version. A significant effort just to address a tiny market.
I still don't get this. Apple is a trillion-dollar company. How much would it cost to pay a couple of engineers to maintain an up-to-date OpenGL implementation on top of Metal? Their current implementation sits at 4.1; it wouldn't cost them much to provide 4.6. Even Microsoft collaborated with Mesa to build a translation layer on top of DX12; Apple could do the same.
> If said fixing is even possible, like when Apple removed support for 32-bit binaries altogether, rendering 3/4 of macOS Steam libraries non-functional.
IIRC developers literally got 15 years of warning about that one.
Apple's mistake was allowing 32-bit stuff on Intel in the first place -- if they had delayed the migration ~6 months and passed on the Core Duo for Core 2 Duo, it would've negated the need to ever allow 32-bit code on x86.
IIRC that didn't convince many developers to revisit their software. I still have hard drives full of Pro Tools projects that open on Mojave but error on Catalina. Not to mention all the Steam games that launch fine on Windows/Linux but error on macOS...
Yes, game developers can't revisit old games because they throw out the dev environments when they're done, or their middleware can't get updated, etc.
But it's not possible to keep maintaining 32-bit forever. That's twice the code and it can't support a bunch of important security features, modern ABIs, etc. It would be better to run old programs in a VM of an old OS with no network access.
> But it's not possible to keep maintaining 32-bit forever.
Apple had the money to support it, we both know that. They just didn't respect their Mac owners enough, Apple saw more value in making them dogfood iOS changes since that's where all the iOS devs are held captive. Security was never a realistic excuse considering how much real zombie code still exists in macOS.
Speaking personally, I just wanted Apple to wait for WoW64 support to hit upstream. Their careless interruption of my Mac experience is why I ditched the ecosystem as a whole. If Apple cannot invest in making it a premium experience, I'll take my money elsewhere.
Especially because Apple has a functional design which means there is nearly no redundancy; there's only one expert in any given field and that expert doesn't want to be stuck with old broken stuff. Nor does anyone want software updates to be twice as big as they otherwise would be, etc.
> Security was never a realistic excuse considering how much real zombie code still exists in macOS.
Code doesn't have security problems if nobody uses it. But nothing that's left behind is as bad as, say, QuickTime was.
NB: some old parts were replaced over time as the people maintaining them retired. In my experience, all of these people were named Jim.
> there's only one expert in any given field and that expert doesn't want to be stuck with old broken stuff.
Oh, my apologies to their expert. I had no idea that my workload was making their job harder, how inconsiderate of me. Anyone could make the mistake of assuming that the Mac supported these workloads when they use their Mac to run 32-bit plugins and games.
Another big, non-technical reason is most games make most of their money around their release date. Therefore there is no financial benefit to updating the game to keep it working. Especially not on macOS where market share is small.
The company in general never really seemed that interested in games, and that came right from Steve Jobs. John Carmack made a Facebook post[1] several years ago with some interesting insider insights about his advocacy of gaming to Steve Jobs and the lukewarm response he received. Games just never really seemed to be a priority at Apple.
It's impossible to care about video games if you live in SV because the weather is too nice. You can feel the desire to do any indoor activity just fade away when you move there. This is somehow true even though there's absolutely nothing to do outside except take walks (or "go hiking" as locals call it) and go to that Egyptian museum run by a cult.
Somehow Atari, EA and PlayStation are here despite this. I don't know how they did it.
Meanwhile, Nintendo is successful because they're in Seattle where it's dark and rains all the time.
As far as I’ve seen, Apple is to blame here as they usually make it harder to target their platform and don’t really try to cooperate with the rest of the industry.
As a game developer, I have to literally purchase Apple hardware to test rather than being able to conveniently download a VM
I run Linux and test my Windows releases on a VM. It works great.
Sure, I'm not doing performance benchmarking and it's just smoke tests and basic user stories, but that's all that 98% of indie developers do for cross platform support.
Apple has been intensely stupid as a platform to launch on, though I did do it eventually. I didn't like Apple before and now I like it even less.
I develop a game that easily runs on much weaker hardware and runs fine in a VM; I would say most simple 3D and 2D games would work fine in a VM on modern hardware.
However, these days it's possible to pass hardware through to your VM, so I would be able to pass a second GPU through to macOS... if Apple would let me run it as a guest.
On Linux, KVM provides passthrough for GPUs and other hardware, so the VM "steals" the passed-through resources from the host and provides near-native performance.
I'm not a subject matter expert, but I do find it a little odd to read the second half of that. I'd expect, beyond development/debugging, there's certainly a phase of testing that requires hardware that matches your target system?
Like, I get if you develop for consoles, you probably use some kind of emulation on your development workstation, which is probably running Windows. Especially for consoles like XBOX One or newer, and PS4 or newer, which are essentially PCs. And then builds get passed off to a team that has the hardware.
Is anyone developing games for Windows on Apple hardware? Do they run Parallels and call it a day? How is the gaming performance? If the answers to those 3 questions are "yes, yes, great", then Apple supports PC game development better than they support Apple game development?
> Like, I get if you develop for consoles, you probably use some kind of emulation on your development workstation
I don’t think anybody does this. I haven’t heard about official emulators for any of the mainstream consoles. Emulation would be prohibitively slow.
Developers usually test on dedicated devkits which are a version of the target console (often with slightly better specs as dev builds need more memory and run more slowly). This is annoying, slow and difficult, but at least you can get these dev kits, usually for a decent price, and there’s a point to trying to ship on those platforms. Meanwhile, nobody plays games on macs, and Apple is making zero effort to bring in the developers or the gamers. It’s a no-chicken-and-no-egg situation, really.
Basically you are correct: macOS has to be treated like a console in that way, except you get all the downsides of that development workflow with none of the upsides. The consoles provide excellent debugging and other tools for targeting their platforms; I can't say the same for macOS.
For testing, I can do a large amount of testing in a VM for my game. Maybe not 100%, and not full user testing, but nothing beats running on the native hardware and doing alpha/beta testing with real users.
Also, since I can pass through hardware to my VM I can get quite good performance by passing through a physical GPU for example. This is possible and quite straightforward to do on a Linux host. I'm not sure if it's possible using Parallels.
Whether or not you're using ESXi, or want to, is an entirely different question. But "you're not able to" is simply incorrect. I virtualize several build agents and have for years with no issues.
macOS 26 is the last major version to support Intel, so once macOS 28 is the latest this will probably become impossible (macOS 26 should still be able to run Xcode 27, though the removal of the platform may break the usual pattern of supporting the previous year's OS).
I am being facetious. You'll have a PC for gamedev because that's your biggest platform unless you are primarily switch or PS5, in which case you'll have a devkit as well as a PC. But the cost of an Apple device is insignificant compared to the cost of developing the software for it.
So it really comes down to the market size and _where they are_. The games I play are either on my PS5, or on my Mac, never both. For any specific game, they are on one or the other. Ghost of Tsushima is on the PS5. Factorio is on my Mac. If I were an indie game developer, I'd likely be developing the kind of game that has a good market on the Mac.
I think I misunderstood your point as “developing a game on Mac sucks”, vs “developing for Mac without a Mac sucks” which I absolutely can’t disagree with
I was very surprised, and pleasantly so, that Cyberpunk 2077 can maintain 60 FPS (14", M4 Pro, 24 GB RAM) with only occasional dips. Not at full resolution (actually around Full HD), but at least without "frame generation". With frame generation on, it can output 90-100 FPS depending on the environment, but VSync is disabled so dips become much more noticeable.
It even has a "for this Mac" preset, which is good enough that you don't need to tinker with settings to have a decent experience.
The game pauses, almost as if "frozen", when it's not visible on screen, which helps with battery (it can sit in the background without any noticeable impact on battery or temperature). Overall a much better experience than I expected.
I play a lot of World of Warcraft on my M3 MacBook Pro, which has a native macOS build. It's a CPU-bottlenecked game, with most users recommending the AMD X3D CPUs to achieve decent framerates in high-end content. I'm able to run said content at high (7/10) graphics settings at 120 fps with no audible fan noise for hours at a time on battery. It's been night and day compared to previous Windows machines.
Multiple solid reasons have been mentioned, from ones created by Apple to ones enforced in software by Apple. One that hasn't been mentioned is the lack of market share. The macOS market is just tiny and very limited, and it's not growing. PC gaming isn't blowing up either, but the number of players is simply higher.
Ports to macOS have not done well from what I've heard. By contrast, ports on PC do really well and have encouraged studios like Sony and Square Enix to invest more in PC ports, even long after the console versions have sold well. There just aren't a lot of reasons to add the tech debt and complexity of supporting the Mac as well.
Even big publishers like Blizzard, who had been Mac devs for a long time, axed the dedicated Mac team and client and moved to a unified client. This has drawbacks, like Mac-specific issues; if those aren't critical, they get put in the pile with the rest of the bugs.
I wonder how that might look once you factor in Apple TV devices. They're pretty weak devices now, but future ones could come with M-class CPUs. That's a huge source of potential revenue for Apple.
The current Apple TV is, in many respects, unbelievably bad, and it has nothing to do with the CPU.
Open up the YouTube app and try to navigate the UI. It’s okay but not really up to the Apple standard. Now try to enter text in the search bar. A nearby iPhone will helpfully offer to let you use it like a keyboard. You get a text field, and you can type, and keystrokes are slowly and not entirely reliably propagated to the TV, but text does not stay in sync. And after a few seconds, in the middle of typing, the TV will decide you’re done typing and move focus to a search result, and the phone won’t notice, and it gets completely desynchronized.
The YouTube app has never been good and never felt like a native app -- it's a wrapper around web tech.
More importantly for games, though, is the awful storage architecture around the TV boxes. Games have to slice themselves up into 2GB storage chunks, which can be purged from the system whenever the game isn't actively running. The game has to be aware of missing chunks and download them on-demand.
It makes open-world games nearly impossible, and it makes anything with significant storage requirements effectively impossible. As much as Apple likes to push the iOS port of Death Stranding, that game cannot run on tvOS as currently architected for that reason.
There's a cost/value calculation that just doesn't work well... I have a Ryzen 9 / RTX 3070 PC ($2k over time) and my M4 Mini ($450) holds its own for most normal user stuff, sprinting ahead for specific tasks (video codecs), but the 6-year-old dedicated GPU on the PC annihilates the Mini at pushing pixels... You can spec an Apple machine that does better for gaming, but man, are you gonna pay for it, and it still won't keep up with current PC GPUs.
Now... something like Minecraft or Subnautica? The M4 is fine, especially if you're not pushing 4K 240 Hz.
Apple has been pushing the gaming experience for years (since the iPhone 4S?), but it never REALLY seems to land, and when someone has a great gaming experience in a modern AAA game, they always seem to be using a $4500 Studio or similar.
I wrote a post (rant)[1] about my experience of releasing a game on macOS as an indie dev. tl;dr: Apple goes a long way to make the process as painful as possible, with tons of paper cuts.
Apple is not the only platform where you effectively pay to have it signed. At some point people need to let this go and accept that the wider industry has started to go this way.
Metal is a very recent API compared to DirectX and OpenGL. Also, there are very, very few people on Mac, and even fewer who also play video games. There are almost no libraries and tooling built around Metal and the Mac SDKs, and a very small audience, so it doesn't make financial sense.
You have to release major titles for Windows and consoles, because there are tons of customers using them.
So a Mac port, even if simple, is additional cost. There you have the classic chicken-and-egg problem: the cost doesn't seem to be justified by the number of potential sales, so major studios ignore the platform, and as long as they do, gamers ignore the platform.
I've seen it suggested that Apple could solve this standoff by funding the ports, and maybe they have done this a few times, but Apple doesn't seem to care much about it.
Up to some years ago, it was common for gamers to assemble their own PC, something that you can't do with a Mac. Not sure if this is still common among gamers though.
The advent of silicon interposer technology means modular memory and separate CPU/GPU packages will soon be obsolete, IMO.
The communication bandwidth you can achieve by putting the CPU, GPU, and memory together at the factory is much higher than with these components kept separate.
It's kind of a myth, though: the Mac has many flagship games and everything in between.
If you identify as a "gamer" and are in those communities, then you'll see people talking about things you can't natively play,
but if you step outside those niches, you already have everything.
And with microtransactions, Apple ecosystem users are the whales. Again, not something that people who identify as "gamers" want to admit being okay with, but those people are not the revenue of game production.
So I would say it's a missed opportunity for developers who are operating on antiquated calculations of macOS deployment.
macOS is supported by one title (DOTA 2). Windows supports all 10, Linux (the free OS, just so we're clear) runs 7 of the games and has native ports of 5 of them. If you want to go argue to them about missed revenue opportunities then be my guest, but something tells me that DOTA 2 isn't being bankrolled by Mac owners.
If you have any hard figures that demonstrate "antiquated calculations" then now is the time to fetch them for us. I'm somewhat skeptical.
Kind of? It does support higher refresh rates, but the emphasis on "Retina" resolutions imposes a soft limit, because monitors that dense rarely support much more than 60 Hz due to the sheer bandwidth requirements.
The MacBook Pro has had a 120 Hz screen for nearly half a decade. And of course, external displays can support whatever resolution/refresh rate, regardless of the OS driving them.
I think it depends on how easy it is for a dev to deploy to Apple platforms. The M1 was great at running Call of Duty in a Windows emulator; the iPhone can run the newest Resident Evil. Apple needs to do more to convince developers to deploy to the Mac.
It seems like it's just poor management. I understand they are not product lines, but a university has bills to pay. They have to pay salaries and benefits and maintain those buildings, labs, libraries, etc. The money to do that has to come from somewhere, and in hard times, the fields with the least chance of generating revenue to keep the university afloat will take the hit. It seems like the university put itself in hard times, though, by taking on a large amount of debt: https://chicagomaroon.com/43960/news/get-up-to-date-on-the-u.... It seems less malicious and more like risk-taking gone wrong.
It's not that different in the corporate world. Lots of companies make bad bets that then lead to layoffs, but not always in the orgs that actually were part of the bad bet. I've seen many startups take on too much risk, then have to perform layoffs in orgs like marketing, recruiting, sales, HR, etc. even if those orgs weren't responsible for the issues that the company is facing.
There isn't really a good solution here. A precedent for banning speech isn't a good one, but COVID was a real problem and misinformation did hurt people.
The issue is that there is no mechanism for punishing people who spread dangerous misinformation. It's strange that it doesn't exist, though, because you're allowed to sue for libel and slander. We know that kind of speech is harmful, because people will believe lies about a person, damaging their reputation. It's not clear why that can't be generalized to things we have high confidence are true and where lying is actively harmful.
No speech was banned. Google didn't prevent anyone from speaking. They simply withheld their distribution. No one can seem to get this right. Private corporations owe you almost nothing and certainly not free distribution.
The article mentions that Google felt pressured by the government to take the content down, implying that they wouldn't have if it weren't for the government. I wasn't accusing Google of anything, but rather the government.
Maybe it's not banning, but it doesn't feel right. Google shouldn't have been forced to do that, and really what should've happened is that the people who spread genuinely harmful disinformation, like injecting bleach, the ivermectin stuff, or the anti-vax stuff, should've faced legal punishment.
It's interesting you say that, because the government is saying Tylenol causes autism in infants when the mother takes it. The original report even says more verification is required and its results are inconclusive.
I wouldn't be surprised if some lawsuit is incoming from the company that manufactures it.
We have mechanisms for combatting the government through lawsuits. If the government came out with lies that actively harm people, I hope lawsuits come through or you know... people organize and vote for people who represent their interests.
Virtually all of the supposed misinformation turned out not to be that at all. Period, the end. All the 'experts' were wrong; all those we banned off platforms (the actual experts) were right.
Your last quote hits the nail on the head. I've found that good libraries are written by people actually solving a problem and then open sourcing the result. PyTorch, NumPy, Eigen, ROS, OpenSSL, etc. all came from people trying to solve specific problems.
I think Rust will get those libraries, but it takes time. Rust is still young compared to languages with a large number of useful libraries. The Boost project in C++ started in the 90s, for example. It just takes time.
Boost is basically why I hate C++, but yes, your point is entirely correct.
I write Rust every day for a niche application that SHOULD, on paper, be completely trivial given the advertised crates. But I constantly run into amateur-hour math/correctness mistakes, because nobody is using this stuff in daily user-serving production except, apparently, us. Some of this stuff should just be inexcusable.
One time I filed a bug report, and the maintainer's response was "well, I'm not familiar with X", where X was precisely what the library is advertised to do.
And yet they are fine rewriting the API every couple months.
This is early stage. The next wave of open source from companies giving back their work is going to be excellent.