
> The obvious example is the change to print from a statement to a function. It makes the language a little cleaner, but it also breaks existing code for little practical benefit.

To be clear: I literally do not remember a single example of this breaking anything after running 2to3. There was some practical benefit (such as being able to use print in callbacks), and I don't think "it breaks existing code" carries much weight given how thoroughly automated the fix was.
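To illustrate the callback point with a toy sketch (my own, not code from any real project):

    # Python 3: print is an ordinary function bound to a name, so it can be
    # passed around like any other callable, e.g. as a default callback.
    def fetch(url, on_progress=print):          # on_progress is a made-up parameter
        on_progress("fetching " + url)

    fetch("https://example.com")                                 # reports via print
    fetch("https://example.com", on_progress=lambda msg: None)   # or stays silent

    # On Python 2 the same pattern needed `from __future__ import print_function`
    # (or a wrapper function), because print was a statement, not a name.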

I do get the impression that a lot of the complaints are from people who did not do any upgrades themselves, or, if they did, didn't use the automated tools. It's an irrelevant critique, and a quintessential example of bikeshedding: the only reason you're bringing up `print` is because you understand the change, not because it's actually important in any way.
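For reference, the automated path was just the stdlib 2to3 tool; these are the two invocations I remember using (see the 2to3 docs for the full option list):

    2to3 mymodule.py       # preview the proposed fixes as a diff
    2to3 -w mymodule.py    # rewrite the file in place (a .bak backup is kept by default)

The print-statement rewrite was exactly the kind of purely mechanical change it handled for you.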

> Maintainers should think carefully about whether their change induces lots of downstream work for users. Users will be mad if they perceive that maintainers didn’t take that into account.

Sure, but users in this case are blatantly wrong. You can read the discussions on each of the breaking changes; they're public in the PEPs. The dev team was obviously very concerned with causing downstream work for users, and made every effort, very successfully, to avoid it.

If your impression is that maintainers didn't take into account downstream work for users, and your example is print, which frankly did not induce downstream work for users, you're the problem. You're being pretty disrespectful to people who put a lot of work into providing you a free interpreter.





I think we essentially agree. My comments about maintainers weren’t referring to the Python language maintainers. The print change certainly shouldn’t have blocked anyone.

More interesting is how long it took core libraries to transition. That was my primary blocker. My guess is that there were fairly substantial changes to the CPython API that slowed that transition.

Other changes to strings could actually be dangerous if you were doing byte-level manipulations. Maybe tools could help catch those situations, but even if they did, fixing them took some thought, not just find/replace. The change was a net benefit, but it’s easy to see why people might be frustrated or delay the transition.
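To make the string/bytes point concrete, here's the sort of thing I mean (a contrived sketch, not pulled from a real codebase):

    data = b"\x89PNG\r\n"

    # Python 2: data[0] is the one-byte str "\x89"; you'd call ord() to get 137.
    # Python 3: indexing a bytes object gives you an int directly.
    first = data[0]              # "\x89" on Python 2, 137 on Python 3

    # Membership tests changed too -- valid syntax on both versions, different
    # behaviour, so 2to3 couldn't rewrite it for you:
    #   "PNG" in data[:4]        # True on Python 2, TypeError on Python 3
    assert b"PNG" in data[:4]    # the spelling that works on Python 3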


Well, I'll point out one thing:

Your definition of "core libraries" is likely a lot broader than mine. I'm old, and I remember back in the day when Perl developers started learning the hard way that CPAN isn't the Perl standard library.

JavaScript's culture has embraced pulling in libraries for every single little thing, which has resulted in stuff like the left-pad debacle, but that very public failing is just the tip of the iceberg for the problems that occur when you pull in a lot of bleeding-edge libraries. The biggest problems, IMO, are with security. These problems are less common in Python's culture, but still fairly common.

I've come onto a number of projects to help them clean up codebases where development had become slow due to poor code quality, and the #1 problem I see is too many libraries. Libraries don't reduce complexity, they offload it onto the library maintainers, and if those library maintainers don't do a good job, it's worse than writing the code yourself. And it's not necessarily library maintainers' fault they don't do a good job: if they stop getting paid to maintain the library, or never were paid to maintain it in the first place, why should they do a good job of maintaining it?

The Python 2 to 3 transition wasn't harder for most core libraries than it was for the rest of us: if anything, it was easier for them, because a core library doesn't have as many dependencies to wait on.

There are exceptions, I'm sure, but I'll tell you that Django, Pillow, Requests, BeautifulSoup, and pretty much every other library I use regularly supported both Python 2 AND 3 before I even found out that Python 3 was going to have significant breaking changes. On the flip side, many libraries I had to upgrade had been straight-up abandoned and never transitioned from 2 to 3 (a disproportionate number of these were OAuth libraries, for some reason). I take some pride in the fact that most of the libraries that had problems with the upgrade were ones that had been imported when I wasn't at the company, or ones that I had fought against importing because I was worried about whether they would be maintained. It's shocking how many of these libraries were fixable not with an upgrade, but by removing the dependency and writing fewer than 100 lines of my own code, including tests.

I'd hope the lesson we take away from this isn't "don't let Python make any breaking changes", but instead "don't import libraries off PyPI just to avoid writing 25 lines of your own code".


The core libraries to me include all the numerical and other major scientific computing libraries. I’m guessing those were laggards due to things like that string/bytes change and, probably, changes to the CPython API.

Did you ever look into why the transition took so long for OAuth libraries? Did you consider just rewriting one yourself?


Ah, I'm not that familiar with the numerical/scientific computing space beyond numpy--I will say the numpy transition was pretty quick, though.

I did take the approach of writing my own OAuth using `requests`, which worked well, but I don't think I ever wrote it in a general enough way to make it a library.

Part of the problem is that OAuth isn't really a standard[1]. There are well-maintained libraries for Facebook and Google OAuth, but that's basically it--everyone else's OAuth nominally follows the standard, but the standard is so vague that the implementations aren't actually compatible with each other. You end up hacking enough stuff around the library that it's easier to just write the thing yourself.
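For a sense of how small "write the thing yourself" ends up being, here's a minimal sketch of the token-exchange step with `requests` (the field names follow the OAuth 2.0 spec; the endpoint URL and credentials are placeholders, and each provider tweaks exactly these details, which is the incompatibility problem above):

    import requests

    def exchange_code_for_token(token_url, code, client_id, client_secret, redirect_uri):
        # Standard OAuth 2.0 authorization-code exchange.
        resp = requests.post(
            token_url,
            data={
                "grant_type": "authorization_code",
                "code": code,
                "client_id": client_id,
                "client_secret": client_secret,
                "redirect_uri": redirect_uri,
            },
            timeout=10,
        )
        resp.raise_for_status()
        # Most providers return JSON; a few return a urlencoded body -- another
        # spot where the "standard" diverges in practice.
        return resp.json()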

The problem with the Google and Facebook OAuth libraries is that there were a bunch of them--I don't think any one of them ever became popular enough to be "the standard". When Python 3 came out, a bunch of new Google and Facebook OAuth libraries popped up. I did actually port one Facebook OAuth library to Python 3 and maintain it briefly, but the client dropped support for Facebook logins because too few users used it, and Facebook kept changing its data-usage requirements. When the client stopped needing the library, I stopped maintaining it. It was public on GitHub, but as far as I know I was the only user, and when I eventually deleted the repo, nobody complained.

I don't say anything unless asked, but if asked I always recommend against OAuth unless you're using it internally: why give your sign-up data to Google or Facebook? That's some of your most valuable data.

[1] https://thenewstack.io/oauth-2-0-a-standard-in-name-only/



