Everybody on the planet knows that the modern cellphone is really just a portable vacuum connected directly to people's wallets, and Steam doesn't want a piece of that action?
They must be raking in the dough if they are ignoring that opportunity.
You can't make money from gacha; that's the difference.
I can put real money into Genshin Impact but it's just a sink.
In Steam you can actually make money with skin gambling or just by simply day trading. I built my first gaming PC back in 2017 from selling CS cases alone (~$800 back then).
I just bought low and sold high and I already had some older cases that were going up in price.
Once I hit my target, in my case at least +$700 in my Steam account, I bought some very popular CS knives, IIRC 3 of them, each going for +$250.
Then I moved those knives to a 3rd party gambling/trading site (thanks to the Steam API provided by Valve).
And then I sold the knives there for real money and cashed out with a simple bank transfer.
A few days later I bought my first gaming PC.
Nothing has changed since, and you can still turn your Steam wallet funds into real cash via 3rd party sites.
It's a whole ecosystem: Valve selling keys (making $1B in 2023 alone [0]), and the numbers have only gone up since [1].
On top of that, they take a cut from every single transaction, so even people just selling cases to each other makes Valve a fuckton of money.
And then there are the 3rd party sites running on the Steam API where people can bet these cases, keys, and skins on CS matches. It's an insane system running freely without oversight from anyone.
I think it is a reasonable option to not compete in markets largely controlled by existing players.
Especially when they really are rolling in money from selling desktop games. They take their 20%, and the courts most likely won't stop them, as they do not control the platforms.
Yeah, that is strange, especially since we now have the technology to play full-blown PC versions via abstraction layers on Android phones. A Steam store with existing games on mobile is now possible.
The PS5 ecosystem lets you buy games from your phone and your console will autodownload it for you.
Maybe the Steam store is just too big of a legacy clusterfuck to make mobile friendly. It never worked all that well even on desktop, especially on macOS. It's probably the most sluggish web app I've ever used.
You can buy Steam games and download them to your PC from your phone, too. Only problem is the website isn't designed specifically for mobile. Most pages work well but use a lot of data to pull media.
I'm looking at the Steam store via the mobile app on my Android device. If I load up my Linux desktop and the Steam app there, it seems fine between the two.
Honestly I've been a Steam user for... idk how long, 15 years?
I can't say I've ever had a major issue with it, maybe I've been lucky.
Edit: Seems to be the chat app that causes this for the author in question.
A user's attempt to send a sticker reveals Valve's misguided implementation of the Y combinator, where Steam App and Steam Chat exist only to question each other's purpose.
The sole defender of this system, a user who bought a game from Steam on their mobile device, completely unaware of any Steam app, maintains that the portable wallet vacuum works as intended.
It's too late. Negative convergence of the Dyson vacuum's warranty appears inversely proportional to Gaben's proximity to retirement. The Y-combinator is reaching criticality.
The Steam Chat app opens.
Sent from my Steam™ (Claude x Gemini "Dew It Right" 2025 Black Edition).
Kafka is a write-ahead log, not a queue per se. It handles transactions to disk, not across the network.
RabbitMQ is neat out of the box. But I went with ZeroMQ at the time.
ZeroMQ is cool, but in the current year I'd only use it to learn from its excellent documentation. Coming from Python, it taught me about Berkeley sockets and the process of building cross-language messaging patterns. After a few projects, it's like realizing I didn't need ZeroMQ to begin with: I could make my own! If ZeroMQ's Hintjens were still with us, I'd still be using it.
It's like a documented, incremental process of designing a messaging queue to fit your problem domain, plus a thin wrapper easing some of the lower-level socket nastiness. At least that's my experience using it over the years. Me talking about it won't do it enough justice.
NATS does the lower-level socket wrapper part very nicely, and it's a bit more modern too. Go was designed as a slightly nicer take on C, so it makes sense that NATS is high-performance and sturdy. It's similar to ZeroMQ there.
I'm not sure if either persists to disk out of the box. Either way, both of these are going to be simpler and faster than Kafka.
The DB people are probably trying too hard to cater to the queues. Ideally I'd have normalized the data and modeled the relations so that transactions don't lock up the whole table. Then I started questioning why I needed a queue at all, when databases (sans SQLite, which is fast enough as is) are built for pooling client access.
Kafka supports pipelining into a relational database, but this is the part where you have to be experienced not to footgun yourself, and I'm not at that level. Using it as a queue, i.e. short-circuiting the relational-database pipeline, is non-standard for Kafka, and I suspect that's where a lot of the Kafka hate comes from. I could understand if the distributed-transactions part is hell, but at that point, why did you skip the database? Trying to get that free lunch, I assume.
I have an alternative. Try inserting everything into a SQLite file. Running into concurrency issues? Use a second SQLite file. Two computers? Send it over the network. More issues? Since it's SQL, just switch to a real database that will pool the clients. Or switch to five of them. SQL is sorta cool that way. I assume that avoids reimplementing half of the JVM to sync across computers, where Oracle shows up to sell you their database halfway into building your galactic-scale software or whatever.
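A minimal stdlib-only sketch of step one of that escalation path (file name and schema invented for illustration); WAL mode is the usual first knob for SQLite concurrency before you graduate to a server database:

```python
import sqlite3, tempfile, os

# Hypothetical sketch: one SQLite file with WAL mode enabled, so readers
# don't block the single writer. The same SQL moves unchanged to a
# server database that pools clients for you if contention grows.
path = os.path.join(tempfile.mkdtemp(), "events.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")  # concurrent readers + one writer
writer.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
writer.execute("INSERT INTO events (payload) VALUES (?)", ("hello",))
writer.commit()

# A second connection (e.g. another process) can read while writes continue.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT payload FROM events").fetchall()
print(rows)  # [('hello',)]
```

The point is that nothing in this sketch is SQLite-specific except the PRAGMA line.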
I already spent time proofreading it, but I forgot that I can't edit it after so much time. Unless a merciful admin lets me roll the 12-sided dice to replace the above cringe with the piece below, which I carved out of soap.
---
For any messaging system, I always check whether maintained libraries exist for my programming languages. Bindings in many languages are consistently available across Kafka, ZeroMQ, and NATS.
Kafka is a write-ahead log, not a queue per se. It handles transactions to disk. The networking is a simple broadcast, not a shared queue. You also can't (canonically, at least) pop/insert/delete records. It's append-only. It can do basic seeking, like replaying from the start.
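To illustrate those semantics (a toy sketch, nothing like Kafka's actual implementation): append-only records, offsets, and replay-from-start look roughly like this:

```python
# Toy illustration: an append-only log where consumers track their own
# offsets and can replay from 0. No pop/insert/delete, just append + seek.
class Log:
    def __init__(self):
        self._records = []  # append-only

    def append(self, record):
        self._records.append(record)
        return len(self._records) - 1  # offset of the new record

    def read(self, offset):
        # basic seeking: replay everything from a given offset onward
        return self._records[offset:]

log = Log()
log.append("a")
log.append("b")
print(log.read(0))  # ['a', 'b']  (replay from the start)
```

Everything else Kafka adds (partitions, brokers, retention) sits on top of this shape.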
ZeroMQ is a good choice for learning from its excellent documentation, and for programmers interested in C. It's probably a good lead-in to Beej's networking guide. ZeroMQ is the odd one out as it has no central broker ("Zero" for zero broker); you copy your favorite broker.py pattern from the ZeroMQ guide.
Dropping anchor to throw in POSIX sockets, BSD kqueue, Linux epoll, the newer io_uring, and libuv for boring cross-platform asynchronous I/O.
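For the Python-inclined, the stdlib `selectors` module already papers over epoll vs. kqueue behind one readiness API; a tiny sketch using a socketpair (no real library names assumed beyond the stdlib):

```python
import selectors, socket

# `DefaultSelector` picks epoll on Linux, kqueue on BSD/macOS, etc.,
# which is the "boring cross-platform" readiness API in stdlib Python.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                 # make the registered socket readable
for key, _mask in sel.select(timeout=1):
    data = key.fileobj.recv(1024)  # guaranteed not to block here
    print(data)  # b'ping'

sel.unregister(b)
a.close(); b.close()
```

io_uring and libuv play the same role at a lower level, with completion- rather than readiness-based designs.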
Many of us have used both sides and settled on one area to start.
Kafka et al. are amazing. Also almost always overkill in the first X months or years.
It’s not too much of a stretch to model your queue first in something like Postgres, which oddly offers things a little beyond a traditional RDBMS, and when the model implementation in the domain reveals itself… it can shine a nice light in the direction of a Kafka, etc.
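A hedged sketch of that queue-in-a-database modeling (SQLite standing in for Postgres so it runs anywhere; table and column names are invented; in real Postgres you'd reach for `SELECT ... FOR UPDATE SKIP LOCKED` instead of this optimistic rowcount check):

```python
import sqlite3

# Workers claim a job with a conditional UPDATE so two workers can't
# grab the same row: if someone else claimed it first, rowcount is 0.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, body TEXT)")
db.execute("INSERT INTO jobs (status, body) VALUES ('ready', 'send-email')")
db.commit()

def claim_one(conn):
    row = conn.execute(
        "SELECT id, body FROM jobs WHERE status = 'ready' LIMIT 1").fetchone()
    if row is None:
        return None
    cur = conn.execute(
        "UPDATE jobs SET status = 'claimed' WHERE id = ? AND status = 'ready'",
        (row[0],))
    conn.commit()
    return row[1] if cur.rowcount == 1 else None  # lost the race -> retry

print(claim_one(db))  # 'send-email'
print(claim_one(db))  # None (already claimed)
```

When this table starts needing fan-out to many consumers with independent offsets, that's the light pointing toward a Kafka.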
Sir, your reply is more coherent than mine. I'll give you props for that.
Still, I disagree that Kafka is always overkill.
When god opened a datagram socket on your computer, you needed to have been capturing that data X months ago, but you weren't paying attention. You need to build warbot.py and put it into production before you have the chance to deal with cold storage. Kafka is my go-to if you can stand it up before you run out of disk space.
I frequently append JSON lines to a "data.json" file in Python. Add a socket server, compression, and typed bindings for 20+ languages. Boom. Kafka. Don't oversell it. Need to delete a row? Congratulations, you selected the wrong tool. It's appending JSON lines to a file. Kafka is a write-ahead log, not a queue.
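For the record, the joke version really is this small (file name and record shape made up):

```python
import json, tempfile, os

# Literal version of the joke above: an append-only JSON-lines "topic".
# No deletes, no updates; consumers just scan from the top.
path = os.path.join(tempfile.mkdtemp(), "data.json")

def produce(record):
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def consume():
    with open(path) as f:
        return [json.loads(line) for line in f]

produce({"event": "click", "n": 1})
produce({"event": "view", "n": 2})
print(consume())  # [{'event': 'click', 'n': 1}, {'event': 'view', 'n': 2}]
```

Everything Kafka adds on top (sockets, compression, replication, 20+ language bindings) is real work, but the data model is this.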
To your point about Postgres, I've found it has fantastic JSONB support and awesome developers who have been very influential in my life and whom I admire. Postgres is my preferred cold storage, which I connect to Kafka. It feels like swimming upstream because RDBMSes are traditionally for normalized data, not denormalized JSON lines that make XML look hip again.
If you have a choice in DB, Postgres' JSONB has helped me avoid unnecessary normalization steps. It's good to have options.
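A small sketch of skipping that normalization step, using SQLite's JSON1 functions as a portable stand-in for JSONB (in Postgres you'd write `data->>'user'` on a `jsonb` column and could index it; the table and keys here are invented):

```python
import sqlite3

# Store whole JSON documents in one column and query into them with
# json_extract, instead of normalizing "user" out into its own table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, data TEXT)")
db.execute("INSERT INTO docs (data) VALUES (?)",
           ('{"user": "alice", "score": 42}',))

row = db.execute(
    "SELECT json_extract(data, '$.user') FROM docs").fetchone()
print(row[0])  # alice
```

You can always normalize later, once the queries tell you which fields actually matter.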
ZeroMQ would call this the Titanic pattern and mic drop because the guide has a section on it. That's why I like ZeroMQ.
Edit: Apologies for typos/brevity. I have an ancient phone that only works with 20% of the web and phone apps. There are no apps or LLMs to help this dyslexic soul.
Hell, even today in 2025, I installed my Wacom tablet drivers (USB drawing tablet) and the installer says "You must restart your system... Note: Shut down is not the same as Restart." Like, what does that even mean? It's a classic Microsoft move, I'd say.
Well, Windows computers have a prominent button that evolved from the ATX shutdown button that actually used to cut power completely, but the button has long since been hijacked to do all sorts of fancy sleep gymnastics; there is no other "shutdown" button the user can operate.
So I can definitely see why a driver installer is very nervous about people "shutting down" their devices to reload the system. Why a full restart would be required in the 3rd millennium to load a driver for a USB input device is, of course, another Microsoft philosophical moment.
C# has aged better, but I feel like Java 8 is approaching ANSI C levels of solid tooling. If only Swing weren't so ugly. They should poach Raymond Chen to make Java 8 Remastered; I like his blog posts. There's probably a DOS joke in there. Also, they should just reuse the JavaFX namespace so I don't have to change my code, and I want the lawyers to laugh too.
I don't understand how everyone hates Kafka. I use it as a typed write-ahead JSON log with library support for most languages. Yes, the systems I've built with this were overengineered, but they worked and were reliable. I just bought a larger disk instead of fighting whatever remains of the great battle of the ZooKeeper. I had assumed its integration support with standard RDBMSes must be a byproduct of being Java, purely an accident.
Thanks for sharing; I was just looking for what happened to Sun. I like the second-hand quote comparing IBM and HP to "garbage trucks colliding", plus the inclusion of blog posts with links to the court filings.
Is it fair to say ZFS made the most sense on Solaris using Solaris Containers on SPARC?
ZFS was developed in Solaris, and at the time we were mostly selling SPARC systems. That changed rapidly and the biggest commercial push was in the form of the ZFS Storage Appliance that our team (known as Fishworks) built at Sun. Those systems were based on AMD servers that Sun was making at the time such as Thumper [1]. Also in 2016, Ubuntu leaned in to use of ZFS for containers [2]. There was nothing that specific about Solaris that made sense for ZFS, and even less of a connection to the SPARC architecture.
> There was nothing that specific about Solaris that made sense for ZFS, and even less of a connection to the SPARC architecture.
Although it does not change the answer to the original question, I have long been under the impression that part of the design of ZFS had been influenced by the Niagara processor. The heavily threaded ZIO pipeline had been so forward thinking that it is difficult to imagine anyone devising it unless they were thinking of the future that the Niagara processor represented.
Am I correct to think that or did knowledge of the upcoming Niagara processor not shape design decisions at all?
By the way, why did Thumper use an AMD Opteron over the UltraSPARC T1 (Niagara)? That decision seems contrary to the idea of putting all of the wood behind one arrow.
Niagara did not shape design decisions at all -- remember that Niagara was really only doing on a single socket what we had already done on large SMP machines (e.g., Starfire/Starcat). What did shape design decisions -- or at least informed thinking -- was a belief that all main memory would be non-volatile within the lifespan of ZFS. (Still possible, of course!) I don't know that there are any true artifacts of that within ZFS, but I would say that it affected thinking much more than Niagara.
As for Thumper using Opteron over Niagara: that was due to many reasons, both technological (Niagara was interesting but not world-beating) and organizational (Thumper was a result of the acquisition of Kealia, which was independently developing on AMD).
I don’t recall that being the case. Bonwick had been thinking about ZFS for at least a couple of years. Matt Ahrens joined Sun (with me) in 2001. The Afara acquisition didn’t close until 2002. Niagara certainly was tantalizing but it wasn’t a primary design consideration. As I recall, AMD was head and shoulders above everything else in terms of IO capacity. Sun was never very good (during my tenure there) at coordination or holistic strategy.
Yeah, I think if it hadn’t been for the combination of Oracle and the CDDL, Red Hat would have been more interested in it for Linux. As it was, they basically went with XFS and volume management. Fedora did eventually go with Btrfs, but I don't know if there are any plans for a copy-on-write FS in RHEL at any point.
It’s not like Red Hat had/has no influence over what makes it into mainline. But the options for copy on write were either relatively immature or had license issues in their view.
Their view is that if it is out of tree, they will not support it. This supersedes any discussion of license. Even out-of-tree GPL drivers are not supported by Red Hat.
We had those things at work as fileservers, so no containers or anything fancy.
Sun salespeople tried to sell us on the idea that "ZFS filesystems are very cheap, you can create many of them, you don't need quotas" (which ZFS didn't have at the time), so we tried it out. It was abysmally slow, even with just one filesystem. We scrapped the whole idea, just put Linux on them, and suddenly fileserver performance doubled, which is not something we were used to with the older Solaris/SPARC/UFS or VxFS systems.
We never tried another generation of those, and soon after Sun was bought by Oracle anyways.
I had a combination uh-oh/wow! moment back in those days when the hacked-up NFS server I built on a Dell with Linux and XFS absolutely torched the Solaris and UFS system we'd been using for development. Yeah, it wasn't apples to apples. Yes, maybe ZFS would have helped. But XFS was proven at SGI, and it was obvious the business would save thousands overnight by moving to Linux on Dell instead of sticking with Sun E450s. That was the death knell for my time as a Solaris sysadmin, to be honest.
I've been critical of Wikipedia for years. Can't they come up with a more coherent argument, like: it put paid encyclopedias out of business, so the donations it gets for running ads should be taxed like a business?
Being competitive is not illegal and non-profits are not forbidden from creating better products. I don't understand what about that argument would be coherent.
How do I tell the difference between a profit-maximizing non-profit that claims to contribute to the public, on the strength of what I contributed to Wikipedia 10 years ago, and thus owes no taxes as a charity in 2025, versus one that's actually helping? I would hate to be a non-profit surrounded by beggars; it isn't sustainable.
There are a whole lot of propaganda-spreading non-profits with agendas which contribute much less to the public interest compared to Wikipedia. This should be the starting point of this discussion because it's apparent that in this case, the "right" propaganda isn't called out as propaganda.
Wikipedia could be a lot better, but it could be a lot worse too; the current pressure tactic can only make matters worse. The key point is that if Wikipedia cannot be tax-free, nobody should be. Just remove that as a legal possibility, because it can be used to suppress some views and promote others simply by arbitrary definitions of "propaganda".
Can the administration be more direct about it? Like, Wikipedia shouldn't be able to threaten to obstruct taking back the data, then place a banner ad covering 90% of the screen, none of which I see a cut of, just for me to access my own thankless contributions. I'd rather just pay than be subjected to domestic ads, but maybe it's leaking overseas; that would be scary.
Your problem is with nonprofits generally, not Wikipedia specifically. Nonprofit status in the United States has nothing to do with competitiveness and everything to do with an organization’s purpose and operations.
It...defeats the entire point of nonprofit status? Are you talking about legal status or mission? Special legal status is afforded to nonprofits because (ideally) they are pursuing something that is in the public's best interest, providing some kind of service that the government does not. Should the Red Cross pay taxes because they pushed private blood banks and first aid companies out of business? Taxes (ideally) are used to pursue the public good, so if an organization is using its labor and/or products to further the public good, then why should it be double taxed?
I'm not saying that's how it always works, but that's how it's designed to work. So if the system is being abused, then I think your issue might be with the system, not with Wikipedia in particular. They're just an example of the abuse of the system (which I don't think they are).
You are correct to question the premise. I don't care about Wikipedia specifically; I am asking about some systemic decisions from decades ago that most people take for granted these days. The Red Cross doesn't need defending because the AMA backs it up, due to the overlap of tax deductions and medical licensing/lobbying. Yet in this political thread, Wikipedia moderators argue that their curation, not the contributions, is the value, and that it's not political, similar to Stack Overflow moderators hollowing out their site.
This brings me to the handling of actual experts, like medical doctors. How could the Wikimedia Foundation justify removing a licensed doctor like James Heilman ("Doc James") from its Board, a decision even Jimmy Wales supported? Yes, he was reinstated shortly after, but the initial act and the lack of a clear public explanation for the original removal feel arbitrary and undermine trust. Here was someone putting their professional credibility on the line to provide accurate public information, and they were treated dismissively. This exact type of caustic politics causes the split between public health advice and the medical profession. Frankly, I'd rather see funding go directly to the experts if this is how they treat people.
Imagine if Wikipedia were frozen as a "2017 Doc James Edition" snapshot. Would that be so bad if it could slash operating costs by 99%? I could live with a slightly outdated encyclopedia for a few years if it meant escaping the constant, resource-draining burden of moderation and the endless cycle of needing more free contributions, then being overloaded, then needing more funding and moderators. The donations could then go to archive.org, which I hope Wikipedia donates to, since I mostly use Wikipedia for its links to archive.org snapshots; I like to see the source, not what people "curate" for me.
The constant fundraising solicitations are grating when contrasted with how they treated Doc James, or when users are lectured about their "privilege" for accessing supposedly free information. It seems Wikipedia doesn't inherently need my contributions or money; they had a highly dedicated expert, pushed him out (temporarily, but damagingly), and kept asking for donations. Claiming that donating today supports the same original mission feels increasingly misleading. Doc James must have the patience of a saint. It infuriates me that this kind of undervaluation happens while they get to deduct taxes "for the public good," and I get yelled at by the moonlighting Wikipedia/Discord mods for even questioning it. It's no surprise he's focused on teaching now.
Anyways, thanks for not attacking me for a seemingly dumb question. I don't have the answer to my question either.
> Can't they come up with a more coherent argument, like: it put paid encyclopedias out of business, so the donations it gets for running ads should be taxed like a business?
... Hold on, are you saying that charities shouldn't be allowed to do things that businesses do? Close the food banks, to protect the poor innocent supermarkets! Ban the FSF and OSI, for competing with Microsoft!
Clay works well. The Rosicrucian museum in San Jose has a cuneiform tax receipt stating that a goat was eaten by wolves, and therefore no tax is due on it.
This is similar, from 1634 BC, but I don't know what it says:
They say M-Discs are analogous to carving data into stone. I have my doubts, though. I searched the web and found posts claiming they are just regular Blu-ray discs now.
When Windows forces me to sign in to install it, I can't help but feel it's subsidizing this entire design silo. In the next episode: let's make everyone (including dyslexic people) jump through even more hoops to install Windows, to subsidize the creation of a font that, even if it did help dyslexic people, I would not be able to use, since it came at the expense of everyone else. YMMV.