Hacker News isn't a great place to discuss papers generally.
Having a productive discussion around a paper requires at least reading and understanding the abstract, and the most successful content on HN (sadly) is content where people can jump in with an opinion purely from reading the headline.
Anyone know of any forums that are good for discussing papers?
This is true across all research subject areas (I'm not especially tuned into LLM research, but I am into cryptography, which also happens to be a field that gets a lot of play on HN). I think it's just a function of how many people conversant in the field are available to talk about it at any one time.
There are/were isolated communities on Discord around fast.ai, MLC, and MLOps that discuss papers in more depth, but it's hard to organize a community without a commercial or academic incentive.
I wonder if this release was accelerated by the fact that GPT-5.5 was already available in their Codex-specific API, which they had explicitly told people they were allowed to use for other purposes: https://simonwillison.net/2026/Apr/23/gpt-5-5/#the-openclaw-...
I've been calling that the "streaming experts" trick. The key idea is to take advantage of Mixture-of-Experts models, where only a subset of the weights is used for each round of calculations, and load those weights from SSD into RAM for each round.
As I understand it, if DeepSeek v4 Pro is 1.6T total parameters with 49B active, you'd need just the 49B in memory, so ~100GB at 16-bit or ~50GB quantized to 8-bit.
v4 Flash is 284B, 13B active so might even fit in <32GB.
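As a back-of-envelope check of those numbers (a sketch, assuming only the active parameters need to be resident and ignoring KV cache and activation overhead):

```python
def active_weight_gb(active_params_b: float, bits: int) -> float:
    """Approximate memory needed for the active parameters alone.

    active_params_b: active parameter count in billions
    bits: quantization width in bits per weight
    """
    bytes_total = active_params_b * 1e9 * bits / 8
    return bytes_total / 1e9  # decimal GB

# DeepSeek v4 Pro: 49B active
print(active_weight_gb(49, 16))  # 98.0 GB at 16-bit
print(active_weight_gb(49, 8))   # 49.0 GB at 8-bit

# v4 Flash: 13B active
print(active_weight_gb(13, 8))   # 13.0 GB at 8-bit
```

At 4-bit the Flash figure halves again, which is where the "<32GB" estimate comes from once you leave headroom for the rest of the runtime.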
The "active" count is not very meaningful except as a broad measure of sparsity, since the experts in MoE models are chosen per layer. Once you're streaming experts from disk, nothing inherently requires having all 49B parameters in memory at once. Of course, the less memory you devote to caching, the higher the performance overhead of fetching from disk.
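The caching trade-off can be sketched with a toy LRU cache for expert weights. This is a hypothetical illustration: `load_expert` stands in for whatever actually reads an expert's weights off the SSD, and only the experts the router selects for each layer are ever fetched.

```python
from collections import OrderedDict

class ExpertCache:
    """Toy LRU cache for MoE expert weights streamed from disk."""

    def __init__(self, capacity: int, load_expert):
        self.capacity = capacity        # max experts held in RAM
        self.load_expert = load_expert  # (layer, expert_id) -> weights
        self.cache = OrderedDict()      # (layer, expert_id) -> weights

    def get(self, layer: int, expert_id: int):
        key = (layer, expert_id)
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as recently used
            return self.cache[key]
        weights = self.load_expert(layer, expert_id)  # slow SSD fetch
        self.cache[key] = weights
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return weights
```

A bigger `capacity` means more hits and fewer SSD reads per token; shrink it and every layer's router choice is increasingly likely to hit disk.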
Streaming weights from RAM to GPU for prefill makes sense thanks to batching, and PCIe 5.0 x16 is fast enough to make it worthwhile.
Streaming weights from RAM to GPU for decode makes no sense at all, because batching requires multiple parallel streams.
Streaming weights from SSD _never_ makes sense, because the bandwidth gap between SSD and RAM is too large. If a model doesn't fit in RAM, you won't get useful speeds streaming it from SSD either.
I don't mean to be a jerk, but a 2-bit quant, reducing experts from 10 to 4, no telling whether the test ran long enough for the SSD to thermally throttle, and still only getting 5.5 tokens/s does not sound useful to me.
But you aren't really trying out the model: you quantized it beyond what people generally consider acceptable, and reduced the number of experts, which these models are not designed for.
Even worse, the GitHub repo advertises:
> Pure C/Metal inference engine that runs Qwen3.5-397B-A17B (a 397 billion parameter Mixture-of-Experts model) on a MacBook Pro with 48GB RAM at 4.4+ tokens/second with production-quality output including tool calling.
It doesn't have to be a 2-bit quant - see the update at the bottom of my post:
> Update: Dan's latest version upgrades to 4-bit quantization of the experts (209GB on disk, 4.36 tokens/second) after finding that the 2-bit version broke tool calling while 4-bit handles that well.
That was also just the first version of this pattern that I encountered, it's since seen a bunch of additional activity from other developers in other projects.
On Apple Silicon Macs the RAM is shared, so while it's maybe not up to raw GPU VRAM speeds, it still manages over 450GB/s real-world on the M4 Pro/Max series, available wherever it's needed.
They all have a limitation from the SSD, but Apple's SSDs can do over 17GB/s on the high-end models; the more typical ones are around 8GB/s.
Yeah, I'm mostly talking about the SSD bottleneck being too slow. There's no way Apple gets 17GB/s sustained: SSDs thermally throttle really fast, and there's random access involved when fetching the next expert.
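The SSD ceiling is easy to bound. As a sketch, assuming the worst case where every active weight comes off the SSD for every token (zero cache hits), decode speed can't exceed bandwidth divided by active bytes per token:

```python
def max_tokens_per_sec(active_params_b: float, bits: int,
                       ssd_gbps: float) -> float:
    """Upper bound on decode speed if every active weight is
    streamed from SSD for every token (no cache hits at all)."""
    bytes_per_token = active_params_b * 1e9 * bits / 8
    return ssd_gbps * 1e9 / bytes_per_token

# 13B active at 4-bit over an 8 GB/s SSD:
print(max_tokens_per_sec(13, 4, 8))  # ~1.2 tokens/second
```

Real setups do better than this bound only to the extent that RAM caching absorbs repeated expert hits, which is why the cache size and the router's hit pattern matter so much.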
This is just a random thought, but have you tried doing an 'agentic' pelican?
As in, have the model consider its generated SVG and gradually refine it, using its knowledge of the relative positions and proportions of the shapes generated, and have it spin for a while - hopefully the end result will be better than just one-shotting it.
Or maybe going one step further: most modern models have tool use and image recognition capabilities, so what if you had it generate an SVG (or parts/layers of it, at the model's discretion), feed it back to itself via image recognition, and then improve on the result?
I think it'd be interesting to see, because for a lot of models their one-shot coding capability is not necessarily correlated with their in-harness ability, and the latter is what really matters.
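The loop being proposed can be sketched abstractly. This is hypothetical scaffolding, not any real API: `generate` stands in for the LLM call, `render` for an SVG rasterizer, and `critique` for the model looking at its own output.

```python
def refine_svg(generate, render, critique, rounds: int = 4) -> str:
    """Agentic refinement loop (sketch).

    generate(feedback) -> svg string   (LLM call; None feedback = first draft)
    render(svg)        -> image        (e.g. an SVG rasterizer)
    critique(image)    -> feedback string, or None when satisfied
    """
    svg = generate(None)  # initial one-shot attempt
    for _ in range(rounds):
        image = render(svg)
        feedback = critique(image)
        if feedback is None:  # the model is happy with its own drawing
            break
        svg = generate(feedback)  # regenerate with the critique
    return svg
```

The interesting experimental question is whether `critique` is sharp enough to notice things like intersecting wheels, or whether it just rubber-stamps its own output.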
I tried that for the GPT-5 launch - a self-improving loop that renders the SVG, looks at it and tries again - and the results were surprisingly disappointing.
I should try it again with the more recent models.
Being a bicycle geometry nerd I always look at the bicycle first.
Let me tell you how much the Pro one sucks... It looks like a failed Pedersen[1]. The rear wheel intersects with the bottom bracket, so it wouldn't even roll. Or rather, this bike couldn't exist.
The Flash one looks surprisingly correct, with some wild fork offset and the slackest of seat tubes. It's got some lowrider[2] aspirations with the small wheels, but with longer, Rivendellish[3] chainstays. The seat post is at a different angle than the seat tube, so good luck lowering that.
This is an excellent comment. Thanks for this - I've only ever thought about whether the frame is the right shape, I never thought about how different illustrations might map to different bicycle categories.
I wonder which model will try some more common spoke lacing patterns. Right now there seems to be a preference for radial lacing, which is not super common (but simple to draw). The Flash and Pro ones use 16-spoke rims, which actually exist[1] but are not super common.
The Pro model fails badly at the spokes. Heck, the spokes sit on the outside of the drive side of the rim and tire. Have a nice ride on the spokes (instead of the tire) welded to the side of your rim.
Both bikes have the drive side on the left, which is very very uncommon. That can't exist in the training data.
I think the pelican on a bike is known widely enough that it ceases to be useful as a benchmark. There's even a pelican briefly appearing in the promo video for GPT-5, if I'm not mistaken: https://openai.com/gpt-5/. So the companies are apparently aware of it.
I feel like if I attempted this, the bike frame would look fine and everything else would be completely unrecognizable. After all, a basic bike frame is just straight lines arranged in a fairly simple shape. It's really surprising that models find it so difficult, but they can make a pelican with panache.
why do you find it surprising? these models have no actual understanding of anything, never mind the physical properties and capabilities of a bicycle.
My question is, as a human, how well would you or I do under the same conditions? Which is to say, I could do a much better job in inkscape with Google images to back me up, but if I was blindly shitting vectors into an XML file that I can't render to see the results of, I'm not even going to get the triangles for the frame to line up, so this pelican is very impressive!
Yeah, the bike frame is the thing I always look at first - it's still reasonably rare for a model to draw that correctly, although Qwen 3.6 and Gemini Pro 3.1 do that well now.
Yeah. I've always loosely correlated pelican quality with big model smell but I'm not picking that up here. I thought this was supposed to be spud? Weird indeed.
Can someone explain how we arrived at the pelican test? Was there some actual theory behind why it's difficult to produce? Or did someone just think it up, discover it was consistently difficult, and now we just all know it's a good test?
I set it up as a joke, to make fun of all the other benchmarks. It ended up being a surprisingly good measure of the quality of the model for other tasks (up to a certain point at least), though I've never seen a convincing argument as to why.
What it has going for it is human interpretability.
Anyone can look and decide if it’s a good picture or not. But the numeric benchmarks don’t tell you much if you aren’t already familiar with that benchmark and how it’s constructed.
how can you say "it ended up being a surprisingly good measure of the quality of the model for other tasks" and also "It should not be treated as a serious benchmark" in the same comment?
if it is indeed a good measure of the quality of the model (hint: it's not) then, logically, it should be taken seriously.
this is, sadly, a great example of the kind of doublethink the "AI" hypesters (yes - whether you like it or not simon - that is what you are now) are all too capable of.
I genuinely don't see how those two statements conflict with each other.
Despite not being a serious benchmark (how could it be serious? It's a pelican riding a bicycle!) it still turned out to have some value. You can see that just by scrolling through the archives and watching it improve as the models improved.
If your definition of doublethink is "holding two conflicting ideas in your head at once" then I would say doublethink is a necessary skill for navigating the weird AI era we find ourselves inhabiting.
"some value" is not the same as "a surprisingly good measure of the quality of the model for other tasks".
doublethink does not mean holding two conflicting ideas in your head at once. it means holding two logically inconsistent positions/beliefs at the same time.
It all began with a Microsoft researcher showing a unicorn drawn in TikZ by GPT-4. It was an example of something so outrageous that there was no way it existed in the training data. And that was back when models were not multimodal.
Nowadays I think it's pretty silly, because there's surely SVG drawing training data and some effort from the researchers put onto this task. It's not a showcase of emergent properties.
It's interesting to see some semblance of spatial reasoning emerge from systems based on textual tokens. Could be seen as a potential proxy for other desirable traits.
It's meta-interesting that few if any models actually seem to be training on it. Same with other stereotypical challenges like the car-wash question, which is still sometimes failed by high-end models.
If I ran an AI lab, I'd take it as a personal affront if my model emitted a malformed pelican or advised walking to a car wash. Heads would roll.
You've never seen pelicans riding bicycles either so maybe these are just representations of those specific subgroups of pelicans which are capable of riding them. Normal pelicans would not feel the need to ride bikes since they can fly, these special pelicans mostly seem to lack the equipment needed to do that which might be part of the reason they evolved to ride two-wheeled pedal-propelled vehicles.
Hmm. Any idea why it's so much worse than the other ones you have posted lately? Even the open weight local models were much better, like the Qwen one you posted yesterday.
I mean, yeah. "Person who spends time publishing content online is doing it for self promotion" doesn't seem particularly notable to me. 24 years of self promotion and counting!
Dude it comes across, maybe only to me, as a bit shameless. Or maybe it's just that there are so many people lapping it up like you're doing a public service that I find tedious. I wish hackernews had a block feature but alas it doesn't. Maybe I'll vibecode a browser extension.
Not the same at all. For that to happen you would have to explicitly visit their channel (forgive incorrect terminology, I don't use youtube). If someone kept posting on hackernews asking you to subscribe I hope you wouldn't appreciate it. swillison is spamming a communal public feed with self promotional comments about vibe coding, quite obviously because they, like the rest of us, are panicking about not having a career in a few years.
The more time I spend actually working with these tools the less I fear for my future career.
Building software remains really hard. Most people are not going to be able to produce production quality software systems, no matter how good the AI tooling gets.
Conversely, if the models ever make it to the point where they can replace ~all developers we will presumably have achieved AGI or even ASI and all other jobs will also be eliminated more or less simultaneously. So at least we'll all be in good company (and there probably won't be much point to marketing yourself in that case).
Forums traditionally included signature blocks at the end of messages. If someone linked his youtube channel there would that be objectionable? Assuming the preceding message was on point of course.
Posts on HN are analogous to videos on youtube. A channel is analogous to an HN user profile.
What is your setup for drawing the pelican? Do you ask the model to check the generated image, find issues, and iterate on it, which would demonstrate the model's real abilities?
I for one delight in bicycles where neither wheel can turn!
It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produce such implausibly bad geometry.
Also mildly interesting, and generally consistent with my experience with LLMs, that it produced the same obvious geometry issue both times.
> It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produce such implausibly bad geometry.
I feel like the main problem for the models is that they can't actually look at the visual output produced by their SVG and iterate. I'm almost willing to bet that if they could, they'd absolutely nail it at this point.
Imagine designing an SVG yourself without being able to ever look outside the XML editor!
> Imagine designing an SVG yourself without being able to ever look outside the XML editor!
I honestly think I could do much better on the bicycle without looking at the output (with some assistance for SVG syntax which I definitely don't know), just as someone who rides them and generally knows what the parts are.
It's silly and a joke and a surprisingly good benchmark and don't take it seriously but don't take not taking it seriously seriously and if it's too good we use another prompt and there's obvious ways to better it and it's not worth doing because it's not serious and if you say anything at all about the thread it's off-topic so you're doing exactly what you're complaining about and it's a personal attack from the fun police.
Only coherent move at this point: hit the minus button immediately. There's never anything about the model in the thread other than simon's post.
See if you can spot what's interesting and unique about this one. I've been trying to put more than just a pelican in there, partly as a nod to people who are getting bored of them.
At some point, OpenAI is going to cheat and hardcode a pelican on a bicycle into the model. 3D modelling has Suzanne and the teapot; LLMs will have the pelican.
Fear of AI companies "slurping up data" being used as a rationale for not sharing anything is one of the most underrated harms of the whole current AI mess.
AI is an entirely different situation because the effort required to copy has dropped by multiple orders of magnitude. You used to be able to build in the open without worrying about copycats because the vast majority of people didn’t want to spend the effort. Now (with AI), even someone with the slightest, most fleeting whim can copy your work.
It’s great that you’re open to being adapted. There’s nothing wrong with that. But if you’re not open to having your ideas outright taken, then it’s not safe to build in the open any longer.
It has been known (especially in gamedev circles) that ideas are not worth much. I don't like AI slop, but what's the harm of taking someone's demo and making it better? Then someone else can do the same, and tweak some other mechanic.
Why is that a bad thing? Person 1 built a thing, and then someone came along and made it better? It's a game, so better is subjective, but should ideas only ever come from Person 1, while everyone else just gazes upon them with slack jawed awe, unable to contribute?
I completely agree. Honestly I wish we could go back to before AI. I don't like where it's taking us at all. Changing how we write code is just the beginning. Next we'll be replacing humans altogether. I've already had an interview with a soulless "AI recruiter" bot. We can't go back now of course, but one can dream.
(I have a strong case of software brain as he describes it myself.)