It really is impressive. I wish publishers like EA and anti-cheat developers weren't so reluctant to support it. I hope Steam devices and SteamOS gain enough traction to force their hands.
Ironically, the only way I would ever consider robust anti-cheat is if the game installed a separate bootable Linux which didn't have the encryption keys for my main partition.
Does Windows make it easy to tell when an installer wants to install kernel anti-cheat? It used to pop up the generic binary "This application wants to change files on your computer" prompt, which could mean installing into the protected "Program Files" directory or modifying anything else.
Wine doesn't emulate the NT kernel, just the NT and Win32 userspace APIs. For example, Wine provides a `kernel32.dll` that maps API calls onto the appropriate Linux ones. Anything kernel-level operates "below" Wine.
I read the person I was responding to as saying they avoided games with rootkits for moral, not technical, reasons. So I assumed they were on Windows, and AFAIK Windows just offers the binary "changes" permission, which covers anything from installing into the slightly protected Program Files directory to installing a rootkit. In other words, can they even detect that they are about to install a rootkit?
Hate to be the one saying this, but that rootkit is there to prevent you from molesting/cheating other players. So it's not a one-sided issue. It is unfortunate that the developers found no other way of fighting cheats, sure.
And by all means, if a game community is so toxic that it has to be policed by extreme measures, it is perhaps indeed better to avoid playing such a game altogether.
agreed. if a condition of attending an event was that I had to wear a chastity belt that prevented guests from raping each other, I'd think twice about attending altogether.
I don’t care if my shooter game “community” is toxic. I don’t have any interaction with the “community” other than to try and virtually kill its members. I wish my chess app had anti-cheat, there are just some people who will cheat and any game is more fun if they can be prevented from playing.
Oh man, Python 2 to 3 was such a massive shift. It took almost half a decade, if not more, and yet it mainly changed superficial syntax stuff. They should have allowed ABIs to break and gotten these internal things done, and probably come up with a new, tighter API for integrating with other lower-level languages, so that going forward Python internals could be changed more freely without breaking everything.
The text encoding stuff wasn't a small change considering what it could break, at least. And remember we're sometimes talking about software that would cost a lot of money to migrate or upgrade. I still maintain some 2.x python code-bases that will be very expensive to migrate and the customer is not willing to invest that money.
Although your general sentiment is something I agree with (if it's going to be painful, do it and get it over with), I don't believe anybody knew or could've guessed what the reaction of the ecosystem would be.
Your last point about being able to change internals more freely is also great in theory but very difficult (if not impossible) to achieve in practice.
I don't know. Having maintained some small projects that were free and open source, I saw the hostility and entitlement that can come from that position. And those projects were a speck of dust next to something like Python. So I think the core team is doing the best they can. It was always going to be damned if you do, damned if you don't.
> I still maintain some 2.x python code-bases that will be very expensive to migrate and the customer is not willing to invest that money.
Slight tangent: if Claude can decimate IBM stock price by migrating off Cobol for cheap, surely we can do Python 2 to 3 now, too?
About the internals: we sort of missed an opportunity there, but back then they also didn't quite know what they were doing (or at least we have better ideas of what's useful today). And making the step from 2 to 3 even bigger might have been a bad idea?
I wasn't aware that migrating projects off Cobol has become cheap and it would only take a Claude subscription.
In my experience, the problem has always been maintaining the business logic and any integrations with third-party software, which may itself be running legacy code-bases or have been abandoned. It can get quite complicated, from what I've seen. Now, of course, if you're talking about well-maintained code-bases with 100% (or close to 100%) test coverage, including the integrations, along with the ability to preserve the user experience and/or user interface, then yes, it becomes a relatively easy process of "just write the code". But, in my experience, this has never been the case.
For the 2.x code-bases I maintain, the customers simply don't want to pay for any of it. They might choose to at a later time, but so far it has been more cost-effective for them to pay me to maintain that legacy code than to pay to have it migrated. Other customers have different needs and thus budget differently.
I'll refrain from judging if 2 to 3 was a missed opportunity or not. I believe the core team does actually know what they're doing and that any decision would've been criticized.
I skip news like that. It's an AI business hyping one of their tools in a major AI hype-cycle. Shares can go up and down based on sentiment. My point still stands.
To me, there's a big difference between saying that migration projects can now be assisted with some AI tooling and saying that it is cheap and to just get Claude to do it.
Maybe I am out of touch but the former is realistic and the latter is just magical hand-waving.
Share pricing operates on illusions. Just selling a plausible claim can influence the price. Whether they will deliver in the end doesn't matter at that moment.
Risk my money based on a bunch of wallstreetbets idiots yoloing their money using a random number generator and seeing the word AI on twitter posts, sure lol. I’ll let you play in that cesspool.
> I believe the core team does actually know what they're doing and that any decision would've been criticized.
I agree with the latter. About the former: they probably made good decisions given the information available at the time. I mean that nowadays they know more than they did in the past.
Absolutely. I had a 2 -> 3 code base I'd mostly given up on, and Claude was amazing. It even re-wrote some libraries I used that didn't have py3 versions; it decided to just write the parts of the libraries I needed.
It does much better with good tests. In my case the output was a statically generated website, so I could just say 'make the same website, given these inputs'.
I cannot believe people are still acting like Python 2->3 was a huge fuck-up and an enormous missed opportunity, when in reality Python is by most measures the most popular language and became so AFTER that switch.
Since the switch we have seen enormous companies built from scratch on it. There is no reason for anyone to be complaining about it being too hard to upgrade in 2026.
Living through it... Python 3 made a lot of changes for the better but 3.0 in particular included a bunch of unforced errors that made it too hard for people to upgrade in one go.
It wasn't until much later (I would say 3.4 or 3.5?) that we had good tooling to allow for migrating from Python 2 to Python 3 gradually, which is what most tools needed to do.
The final thing that made Python upgrading easy was making a bunch of changes (along with stuff like six) so that you could write code that would run identically in Python 2 and Python 3. That lets you do refactors over time, little cleanups, and not have the huge "move to Python 3" commit.
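That straddling style looked roughly like this (a minimal sketch; `six` itself provides these shims, and the names below are illustrative stand-ins for them):

```python
# Code written to run identically on Python 2 and Python 3, using
# __future__ imports (no-ops on 3) plus a six-style compatibility shim.
from __future__ import absolute_import, division, print_function, unicode_literals

import sys

PY2 = sys.version_info[0] == 2
# six calls this six.text_type; `unicode` only exists on Python 2.
text_type = unicode if PY2 else str  # noqa: F821

def halves(n):
    # __future__.division gives true division on both versions.
    return n / 2

print(halves(5))                       # 2.5 on both 2.7 and 3.x
print(isinstance("hello", text_type))  # True; unicode_literals makes this text on 2
```

Once code looked like this, dropping Python 2 support later was a series of small deletions rather than one big commit.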
> Python is by most measures the most popular language and became so AFTER that switch
The switch had nothing to do with Python's rise in popularity, though; it was because of NumPy and later PyTorch being adopted by data scientists and then for machine learning tasks, which themselves became very popular. Python's popularity rose alongside those.
> There is no reason for anyone to be complaining about it being too hard to upgrade in 2026
The "complaints" are about unnecessary and pointless breakage, that was very difficult for many codebases to upgrade for years. That by now most of these codebases have either been abandoned, upgraded, or stuck with Python 2 until the end of time doesn't mean these pains didn't happen, nor that the language's developers inflicting them on their users was a good idea just because some largely unrelated external factors made the language popular several years later.
> that was very difficult for many codebases to upgrade for years.
In case people have forgotten: python 3.3 through 3.5 (and 3.6 I think) each had to reintroduce something that was removed to make the upgrade easier. Jumping from 2.7 to 3.3 (or higher depending on what you needed) was the recommended route because of this, it was less work than going to 3.0, 3.1, or 3.2
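One concrete example: PEP 414 brought the u'' string prefix back in Python 3.3, after 3.0 through 3.2 had rejected it as a syntax error, purely so straddling 2/3 code didn't have to strip the prefix everywhere:

```python
# Valid in 2.x and 3.3+, but a SyntaxError on Python 3.0-3.2:
s = u"caf\u00e9"
b = b"raw bytes"  # bytes literals work on 2.6+ and all of 3.x

print(s == "café")  # True on Python 3: u'' is identical to ''
print(type(b))      # <class 'bytes'>
```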
> The switch had nothing to do with Python's rise in popularity though
You don't realise it, but you have actually made my point. People are acting like it was a suicidal, harmful move that fucked over Python and... it didn't matter, because Python went on to become insanely popular. Particularly hilarious is that some people have tried to argue that Perl somehow did it better.
In the end, your Python 2 project was either worth porting and you did it, or it wasn't worth it and you didn't. Python has gone on to be an undeniable success, and it is nearly unthinkable that anyone would use Python 2 for anything. It is absolutely fucking insane for people to still be whining about this in this day and age.
It took a long time for Python 3 to add the necessary backwards-compatibility features to allow people to switch over. Once they did, it was fine, but it was a massive fuck-up until then. The migration took far longer than it should have done.
It's widely regarded as a disaster for good reason, one that forced some corrections in Python to fix it. Just because it's fine now does not mean it was always fine.
I am saying that 2-to-3 has not affected Python becoming the most popular programming language whatsoever. So I guess in a way you're right to say the two things are "unrelated" but you're not exactly disagreeing with me - it'd be like if I said "water is a liquid" and you said "nuh-uh, it's wet"
The Python devs didn’t want to make huge changes because they were worried Python 3 would end up taking forever like Perl 6. Instead they went to the other extreme and broke everyone’s code for trivial reasons and minimal benefit, which meant no-one wanted to upgrade.
Even the main driver for Python 3, the bytes-Unicode split, has unfortunately turned out to be sub-optimal. Python essentially bet on UTF-32 (with space-saving optimisations), while everyone else has chosen UTF-8.
> Python essentially bet on UTF-32 (with space-saving optimisations)
How so? Python 3 strings are Unicode, and all the encoding/decoding functions default to UTF-8. In practice this means all the Python I write is UTF-8-compatible Unicode and I don't ever have to think about it.
UTF-32 allows for constant-time character access, which means that mystr[i] isn't O(n). Most other languages can only provide constant-time access to code units.
UTF-32 allows for constant time access to code points. Neither UTF-8 nor UTF-16 can do the same (there are 2 to the power of 20 valid code points, though not all are in use).
While most characters might be encodable as a single code point, Python does not normalize strings, so there is no guarantee that even relatively normal characters are actually stored as single code points.
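Both points are easy to see from the REPL; a quick illustration (the strings are arbitrary examples):

```python
import unicodedata

s = "café"                           # 'é' stored as one precomposed code point
t = unicodedata.normalize("NFD", s)  # decomposed: 'e' plus a combining accent

print(len(s), len(t))        # 4 5: the "same" text, different code point counts
print(s[3])                  # 'é', via constant-time code point indexing
print(t[3], hex(ord(t[4])))  # 'e' 0x301: the accent is a separate code point
```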
Internally, Python holds a string as an array of uint32. A UTF-8 representation is created on demand from it (and cached). So pansa2 is basically correct [1].
IMO, while this may not be optimal, it's far better than the more arcane choices made by other systems. For example, due to reasons only Microsoft can understand, Windows is stuck with UTF-16.
[1] Actually it's more intelligent than that. For example, Python automatically uses uint8 instead of uint32 for ASCII strings.
There is no caching of a "utf-8 representation". You may check for example:
>>> x = '日本語'*100000000
>>> import time
>>> t = time.time(); y = x.encode(); time.time() - t # takes nontrivial time
>>> t = time.time(); y = x.encode(); time.time() - t # not cached; not any faster
Generally, the only reason this would happen implicitly is for I/O; actual operations on the string operate directly on the internal representation.
Python uses either 8, 16 or 32 bits per character according to the maximum code point found in the string; uint8 is thus used for all strings representable in Latin-1, not just "ASCII". (It does have other optimizations for ASCII strings.)
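The per-string width is observable with `sys.getsizeof` (exact byte counts vary by CPython version, so only the relative growth matters here):

```python
import sys

ascii_s  = "a" * 100   # 1 byte per character
latin1_s = "é" * 100   # still 1 byte per character (Latin-1 range)
bmp_s    = "日" * 100   # 2 bytes per character
astral_s = "😀" * 100  # 4 bytes per character (outside the BMP)

# Same length, increasing memory footprint as the widest code point grows:
for s in (ascii_s, latin1_s, bmp_s, astral_s):
    print(len(s), sys.getsizeof(s))
```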
The reason for Windows being stuck with UTF-16 is quite easy to understand: backwards compatibility. Those APIs were introduced before there were supplementary Unicode planes, such that "UTF-16" could be equated with UCS-2; then the surrogate-pair logic was bolted on top of that. Basically the same thing happened in Java.
> There is no caching of a "utf-8 representation".
No, there certainly is. This is documented in the official API documentation:
UTF-8 representation is created on demand and cached in the Unicode object.
https://docs.python.org/3/c-api/unicode.html#unicode-objects
In particular, Python's Unicode object (PyUnicodeObject) contains a field named utf8. This field is populated when PyUnicode_AsUTF8AndSize() is first called and reused thereafter. You can check the exact code I'm talking about here:
The C API may provide for it, but I'm not seeing a way to access that from Python. This sort of thing is provided for people writing C extensions who need to interface to other C code.
(And the code search seems to be broken; it can't find me the definition of `unicode_fill_utf8` although I'm sure it's obvious enough.)
> all the encoding/decoding functions default to utf-8
Languages that use UTF-8 natively don't need those functions at all. And the ones in Python aren't trivial - see, for example, `surrogateescape`.
As the sibling comment says, the only benefit of all this encoding/decoding is that it allows strings to support constant-time indexing of code points, which isn't something that's commonly needed.
> Python essentially bet on UTF-32 (with space-saving optimisations), while everyone else has chosen UTF-8.
It did nothing of the sort. UTF-8 is the default source file encoding and has been the target for many APIs. It likely would have been the default for all I/O stuff if we lived in a world where Windows had functioning Unicode in the terminal the whole time and didn't base all its internal APIs on UTF-16.
I assume you're referring to the internal representation of strings. Describing it as "UTF-32 with space-saving optimizations" is missing the point, and also a contradiction in terms. Yes, it is a system that uses the same number of bytes per character within a given string (and chooses that width according to the string contents). This makes random access possible. Doing anything else would have broken historical expectations about string slicing. There are good arguments that one shouldn't write code like that anyway, but it's hard to identify anything "sub-optimal" about the result except that strings like "I'm learning 日本語" use more memory than they might be able to get away with. (But there are other strings, like "ℍℯℓ℗", that can use a 2-byte width while the UTF-8 encoding would add 3 bytes per character.)
It's wrong. Python3 eliminated mountains of annoying bugs that happened all over the code base because of mixing of unicode strings and byte strings. Python2 was an absolute mess.
I had a Nokia N9, or was it N7, that ran the predecessor OS to Sailfish. It was so good back then; the UX left Android and iOS in the dust. Both ended up adopting a lot of patterns from it later. Loved that phone.
You have a service that talks to Postgres, probably other services that talk to that service, and a client that talks to these services through some gateway. When a user clicks a button, distributed tracing allows you to see the whole request, right from the button click to the API gateway, through every service it touches, down to the DB, and all the way back. So you can see exactly the path it took and where it slowed down or failed. DBs are usually treated as black boxes in such systems: we just see that the DB call took 500ms, but we don't know what happened inside. This lets distributed traces also get visibility into the internals of the DB, so you can tell the exact joins, index scans, etc. that took place by looking at your distributed trace. I don't know what level of visibility they've built, but that's the general idea.
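As a toy illustration of what a tracer records (this is not OpenTelemetry or any real tracing library, just a sketch of the span/parent structure; all names are made up):

```python
import contextlib
import time
import uuid

spans = []  # a real tracer would export these to a collector

@contextlib.contextmanager
def span(name, trace_id, parent=None):
    # Each span gets its own id and records its duration and parent span.
    span_id = uuid.uuid4().hex[:8]
    start = time.perf_counter()
    try:
        yield span_id
    finally:
        spans.append({"trace": trace_id, "id": span_id, "parent": parent,
                      "name": name, "ms": (time.perf_counter() - start) * 1000})

trace_id = uuid.uuid4().hex[:8]
with span("api-gateway", trace_id) as gw:
    with span("user-service", trace_id, parent=gw) as svc:
        with span("postgres: SELECT + JOIN", trace_id, parent=svc):
            time.sleep(0.01)  # stand-in for the slow DB work

# Inner spans finish first; walking the parent links reconstructs the tree,
# so the slow leaf span points straight at the culprit.
for s in spans:
    print(s["name"], "%.1f ms" % s["ms"], "parent:", s["parent"])
```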
I'm sure you are aware but sharing anyway: Django 6.0 shipped an API called Django Tasks for background jobs, so all Django code can implement portable, backend-agnostic background jobs and swap backends on the fly, but there are zero actual backends out there right now that one can use in production. If you could add built-in Django Tasks support to Chancy or create a `django-chancy` package that bridged it, I think you'd see a lot of early adoption by Django projects.
Thanks. My two cents would be to not lock technical features behind a paywall. Lock "enterprise" features like encryption, FIPS, compliance reports, etc which make sense for a big corp. This would be far more palatable for someone small like my one-two person teams to adopt it and pay if we ever become big enough to care about enterprisey features.
A combination of LISTEN/NOTIFY for instantaneous reactivity (so you only need infrequent polling as a fallback), and FOR UPDATE...SKIP LOCKED making it efficient and safe for parallel workers to grab tasks without coordination. It's actually covered in the article, near the bottom.
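The claim query behind that pattern looks roughly like this (a sketch; the `jobs` table, its columns, and the `jobs_ready` channel are hypothetical names, and `run_worker` assumes a psycopg-style connection):

```python
# Each worker atomically claims one queued job; SKIP LOCKED makes concurrent
# workers pass over rows another transaction already holds locked.
CLAIM_SQL = """
    UPDATE jobs
       SET status = 'running'
     WHERE id = (SELECT id
                   FROM jobs
                  WHERE status = 'queued'
                  ORDER BY id
                  LIMIT 1
                    FOR UPDATE SKIP LOCKED)
 RETURNING id, payload;
"""

def process(row):
    print("processing job", row[0])  # placeholder for real work

def run_worker(conn):
    with conn.cursor() as cur:
        # A NOTIFY on this channel wakes the worker instantly; slow
        # periodic polling is only needed as a safety net.
        cur.execute("LISTEN jobs_ready;")
        while True:
            cur.execute(CLAIM_SQL)
            row = cur.fetchone()
            if row is None:
                break  # queue drained; go back to waiting on NOTIFY
            process(row)
            conn.commit()
```

Because the subquery locks the row it selects, two workers running `CLAIM_SQL` at the same time can never grab the same job, with no application-level coordination.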
I'm just coming back to web/API development in Python after 7-8 years working on distributed systems in Go. I just built a Django+Celery MVP with what I knew from 2017, but I see a lot of "hate" towards Celery online these days. What issues have you run into with Celery? Has it gotten less reliable? Harder to work with?
Celery + RabbitMQ is hard to beat in the Python ecosystem for scaling. But the vast, vast majority of projects don't need anywhere near that kind of scale and instead just want basic features out of the box: unique tasks, rate limiting, asyncio, future scheduling that doesn't cause massive problems (Celery schedules those in-memory on workers), etc. These things are incredibly annoying to implement on top of Celery.
We don't hate Celery at all. It's just a bit harder to get it to do certain things and requires a bit more coding and understanding of celery than what we want to invest time and effort in.
Again, no hate towards Celery. It's not bad. We just want to see if there are better options out there.
But if you are too small for celery, it seems a hard sell to buy a premium message queue?
My top problem with my Celery setup has always been visibility. AI and I spent an afternoon setting up a Prometheus/Grafana server and wiring Celery into it. It has been a game changer. When things go crazy in prod, I can usually narrow it down to a specific task for a specific user. It has taken my troubleshooting from days to minutes. The actual queue-and-execute part has always been easy / worked well.
Most Django projects just need a basic way to execute timed and background tasks. Celery requires separate containers or nodes, which complicates things unnecessarily. Django 6.0 luckily has a tasks framework -- which is backported to earlier Django versions as well -- that can use the database. https://docs.djangoproject.com/en/6.0/topics/tasks/
Django 6's tasks framework is nice, but so far it is only an API; it does not include an actual worker implementation. There is a django-tasks package which provides a basic implementation, but it is not prod-ready. I tried it and it is very unreliable. Hopefully the community will come out with backends for it that plug into Celery, Oban, RQ, etc.
Could you say a bit more about "it is very unreliable"? I'm considering using django-tasks with an rq backend [1] and would like to hear about your experiences. Did you find it dropping tasks, difficult to operate, etc.?
I like Celery, but I started to try other things when I had projects doing work in languages in addition to Python. Also, I prefer that the code works without having to think about queues as much as possible. In my case that was Argo Workflows (not to be confused with Argo CD).