Sky Team is great, I agree. For a few more 2p co-ops to try out, I can recommend Sail, Burgle Bros (give it a few playthroughs to get a feel), and Regicide. All are available on BGA if you want to try them and I've loved playing them.
Agree 100%. This hobby jumped the shark probably 5-10 years ago.
Thanks to crowdfunding, deluxe editions of games are announced all the time for $400–500.
Games ship with "6 expansions in box" which sounds great and like a ton of replayable content, until you realize that they're poorly playtested, lack balance, and add a confounding (and sometimes contradictory) number of rules.
As you noted, games come with a ridiculous number of minis and trinkets and baubles that drive the price of new games well past $100 in many cases.
As the industry has gotten larger, many publishers are turning toward bankable IP rather than innovative concepts. Or they're releasing a bajillion reskins of the same game (looking at you, TtR, Azul, Pandemic, 7 Wonders, etc.). This is not unique to board games by any stretch, but it's a sign of an inflection point.
I'm not saying there aren't good games being released. I'm saying they're harder to find and getting drowned out by the shameless cash grabs and lazy IP-based games.
Go find some of the classics by Rosenberg, Knizia, Feld, Luciani, and others. You'll get a lot more bang for your buck.
> Games ship with "6 expansions in box" which sounds great [...] until you realize that they're poorly playtested, lack balance, and add a confounding (and sometimes contradictory) number of rules.
Hot take: I have never played an expansion that I liked more than the base game.
I won't argue that. There are a handful that I think improve the experience (some of the early Carcassonne ones, for example) but they are by far the exception rather than the rule.
The more accurate version is that only Chinese companies (plus Facebook, briefly) really open-source their frontier models. The rest are non-frontier: either older or specialized for something.
It's all openwashing. All of the ones you listed have at some point expressed how important and valuable open weights and locally usable models are, and every single one has since increasingly pushed closed, proprietary, or cloud-only options.
I'm annoyed at myself, because I praised (and put my hopes in) Chinese AI when they were opening up as Llama was closing, but Qwen looks to be running the same playbook as Llama/Meta, Gemma/Google, and gpt-oss/OpenAI.
It’s typical to complain about AI slop that hits the front page here, but it’s worth noting that a lot of the (presumably) human-written content is slop in its own right.
This piece was some self-indulgent rambling that didn’t really have any connective threads.
They're not perfect, but the local-model game is progressing so quickly that they're impossible to ignore. I've only played around with the new qwen 3.6 models for a few minutes (it's damn impressive), but this weekend's project is to really put them through their paces.
If I can get the performance I'm seeing out of free models on a 6-year-old M1 MacBook Pro, it's a sign of things to come.
Frontier models will have their place for 1) extensive integrations and tooling and 2) massive context windows. But I could see a very real local-first near future where a good portion of compute and inference is run locally and only goes to a frontier model as needed.
I've had really good results from qwen3-coder-next. I'm hoping we get a qwen3.6-coder soon, since Claude seems to be less and less available on the Pro plan.
You can enable the LM Studio server and use any OpenAI-compatible harness to drive the models running inside it.
OpenCode, pi, even Claude and Codex...
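A minimal sketch of that flow, assuming the LM Studio server is running on its default port (1234); the model identifier is a placeholder — use whatever your instance reports from `/v1/models`. This is a command/config fragment, so it only works against a live local server:

```shell
# Start the server from LM Studio's UI (Developer tab) or via its CLI:
lms server start

# Any OpenAI-compatible client can then point at http://localhost:1234/v1.
# Quick smoke test with curl (model name is a placeholder):
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen/qwen3-coder",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

Harnesses like OpenCode then just need that base URL configured as a custom OpenAI-compatible provider.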
I’ve been missing agentic capabilities from almost all local LLM apps. It’s like they’re all stuck in 2023.
That’s why I started using OpenCode for this. It works pretty well, the web UI comes pretty close to a general chat app. You can use folders to organize your sessions like projects (which annoyingly Gemini still doesn’t have) with files and extra instructions.
OpenCode is one solution, but there are also several alternatives.
For example pi-dev, but even Codex is open source, and it should work with any locally hosted model, e.g. via the OpenAI-compatible API provided by llama-server.
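As a sketch of the llama-server route (the model path is a placeholder, and I haven't verified the exact Codex config keys, so treat that part as an assumption):

```shell
# Serve a local GGUF model with an OpenAI-compatible API at
# http://localhost:8080/v1 (model file path is a placeholder)
llama-server -m ./models/local-model.gguf --port 8080

# Codex can then be pointed at it via a custom provider entry in
# ~/.codex/config.toml, something along the lines of:
#   [model_providers.llama]
#   name = "llama-server"
#   base_url = "http://localhost:8080/v1"
#   wire_api = "chat"
# and selecting that provider when launching Codex.
```

The same base URL works for any other OpenAI-compatible harness, which is what makes llama-server a convenient common denominator here.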
I have not used pi-dev yet, but the recent presentation of pi-dev by its developer (covered in other HN threads) has convinced me that he is among the people who can distinguish good from bad, which unfortunately cannot be said of many people creating AI applications.
So I intend to switch to pi-dev as a coding assistant for my locally hosted models, though I do not yet have results demonstrating that this is the right choice, beyond its lead developer being more trustworthy than the others.
I too am interested in Pi and Codex, but haven’t seen any full-featured web UIs for them yet. Would be happy to know if there are some!
One thing I’m considering (depending on how happy I am with OpenCode after trying to remove some questionable functionality it has) would be to make Pi (or Codex) speak the OpenCode protocol so that its web UI can be used with it.
Same with Reddit. A decade ago it felt like they were down more than they were up. And it didn't slow down their growth trajectory. Instead, as soon as it was back there would be a thousand shitposts about "How did you all survive the outage? Did you <gasp> work?"
Yeah. I think that's an interesting use case. Especially if I can kick it off or schedule it when I'm not actively working. Inference speed (especially with tool calling involved) won't be great on my machines, but if I schedule nightly usability tests of dev sites while I sleep, that could be really cool.
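The nightly-schedule idea sketched above could be as simple as a cron entry; `run_usability_tests.sh` and the paths here are hypothetical placeholders for whatever script kicks off the agent against the dev site:

```shell
# crontab entry: run the local-model usability tests at 2am every night
# and append the agent's output to a log for review in the morning.
# (run_usability_tests.sh is a placeholder for your own runner script.)
0 2 * * * $HOME/bin/run_usability_tests.sh >> $HOME/logs/usability.log 2>&1
```

Since nothing is latency-sensitive overnight, slow local inference with tool calling matters much less in this setup.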
You’re right about inference speed being a concern. I was assuming it’s a small model but even then, one of the browser automation frameworks is going to be faster.
The readme is one sentence long. The website doesn't really have any additional info. The link to the App Store seems to be for a "Tech Jobs in EU" app from the same developer. Supposedly, it's more private, lower latency, lower cost, and works offline... but how?
The pulse dashboard seems to imply that there are nodes around the world... is this some distributed inference engine that lives on Apple silicon?
I started the company in Sweden a little over a year ago, didn't get the bank account until April (or May) last year, published our first app on the App Store on August 21st, and ended up publishing 20+ apps.
This is the AI inference engine powering them all.