You can find uncensored modifications of the Qwen models on Hugging Face, but I have not yet tried such questions on them to see what they might answer.
For some such questions, even the uncensored models might be unable to answer, because I assume that any document about "winnie the pooh" would have been purged from the training set before training.
Given that DeepSeek, GLM, Kimi etc. have all released large open-weight models, I am personally grateful that Qwen fills the mid/small-sized model gap, even if they keep their largest models to themselves. The only other major player in the mid/small-sized space at this point is pretty much Gemma.
I'm totally fine with that, frankly. I'm blessed with 128GB of Unified Memory to run local models, but that's still tiny in comparison to the larger frontier models. I'd much rather get a full array of small- and medium-sized models, and building useful things within the limits of smaller models is more interesting to me anyway.
General claims about a model are rarely useful; only very specific claims are, i.e. those that state the exact parameter counts and quantization methods of the models being compared.
If you perform the inference locally, there is a huge space of trade-offs between inference speed and quality of results.
Most open-weights models are available in a variety of sizes, so you can choose anything from very small models with a little more than 1B parameters to very big models with over 750B parameters.
For a given model, you can choose to evaluate it in its native number format, which is normally BF16, or in a great variety of smaller quantized formats, in order to fit the model in less memory or simply to reduce the time spent accessing that memory.
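To make this concrete, here is a back-of-the-envelope sketch (weights only; KV cache and runtime overhead are ignored, and the bits-per-weight figures for the quantized formats are approximate averages):

    # Rough weight-memory estimate: parameter count x bits per weight.
    FORMATS = {
        "BF16": 16.0,    # native format of most open-weights models
        "Q8_0": 8.5,     # ~8.5 bits/weight including the scale factors
        "Q4_K_M": 4.8,   # popular llama.cpp quantization, ~4.8 bits/weight
    }

    def weight_gib(params_billion: float, bits_per_weight: float) -> float:
        """GiB needed just for the weights."""
        return params_billion * 1e9 * bits_per_weight / 8 / 2**30

    for name, bits in FORMATS.items():
        print(f"35B model in {name}: {weight_gib(35, bits):.0f} GiB")
    # BF16 ~65 GiB, Q8_0 ~35 GiB, Q4_K_M ~20 GiB; whichever of these
    # fits in your GPU (or unified) memory decides what you can run.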
Therefore, if you choose big models without quantization, you may obtain results very close to those of SOTA proprietary models.
If you choose models small enough and quantized heavily enough to fit in the memory of a consumer GPU, it is normal to get results much worse than from a SOTA model running on datacenter hardware.
Choosing to run models that do not fit in GPU memory reduces the inference speed a lot, and choosing models that do not even fit in CPU memory reduces it even more.
Nevertheless, slow inference that produces better results may reduce the overall time for completing a project, so it takes a fair amount of experimentation to determine an appropriate compromise.
When you use your own hardware, you do not have to worry about token costs or subscription limits, which may change the optimal strategy for using a coding assistant. Moreover, in many cases it may be worthwhile to run multiple open-weights models on the same task, in order to choose the best solution.
For example, in one comparison of older open-weights models with Mythos, given appropriate prompts all the bugs that Mythos could find could also be found by the older models; the difference was that Mythos found all the bugs on its own, while with the free models you had to run several of them in order to find all the bugs, because each model had different strengths and weaknesses.
(In other HN threads there have been some bogus claims that Mythos was somehow much smarter, but that does not appear to be true. The other company has published the precise prompts used for finding the bugs, and it would not have been too difficult to generate them automatically with a harness. Anthropic has also admitted that the bugs found by Mythos were not found with a prompt like "find the bugs", but by running Mythos many times on each file with increasingly specific prompts, until the final run requested only a confirmation of the bug, not a search for it. So the difference between SOTA models like Mythos and the open-weights models does exist, but it is far smaller than Anthropic claims.)
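If you want to try the "run several models on the same task" approach, a minimal sketch looks like this (each endpoint is an OpenAI-compatible server such as llama-server hosting a different open-weights model; the URLs, model names and file name here are placeholders):

    from openai import OpenAI

    ENDPOINTS = {
        "model-a": "http://localhost:8080/v1",
        "model-b": "http://localhost:8081/v1",
    }

    def ask_all(prompt: str) -> dict[str, str]:
        """Send the same prompt to every local model and collect the answers."""
        answers = {}
        for name, base_url in ENDPOINTS.items():
            # Local llama-server instances do not check the API key by default.
            client = OpenAI(base_url=base_url, api_key="unused")
            resp = client.chat.completions.create(
                model=name,  # llama-server serves whatever model it was started with
                messages=[{"role": "user", "content": prompt}],
            )
            answers[name] = resp.choices[0].message.content
        return answers

    findings = ask_all("Review this function for bugs:\n" + open("target.c").read())
    # Take the union of the findings (by hand, or with yet another model
    # acting as a judge): different models tend to catch different bugs.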
> Anthropic has also admitted that the bugs found by Mythos were not found with a prompt like "find the bugs", but by running Mythos many times on each file with increasingly specific prompts, until the final run requested only a confirmation of the bug, not a search for it.
That is not what their write-up describes. The actual process was:
- provide a container with running software and its source code
- prompt Mythos to prioritize source files based on the likelihood they contain vulnerabilities
- use this prioritization to prompt parallel agents to look for and verify vulnerabilities, focusing on but not limited to a single seed file
- as a final validation step, have another instance evaluate the validity and interestingness of the resulting bug reports
This amounts to at most three invocations of the model for each file: once for prioritization, once for the main vulnerability run, and once for the final check. The prompts only became more specific as a result of information the model itself produced, not of any external process injecting additional information.
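For what it's worth, that flow is easy to reproduce as a skeleton against any OpenAI-compatible endpoint (the prompts below are illustrative paraphrases of the published steps, not the vendor's actual harness, and the file list is a placeholder):

    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="local",  # placeholder model id
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def prioritize(files: list[str]) -> list[str]:
        ranked = ask("Rank these source files by likelihood of containing "
                     "vulnerabilities, most likely first, one per line:\n"
                     + "\n".join(files))
        return ranked.splitlines()

    def hunt(seed_file: str) -> str:
        return ask(f"Look for and verify vulnerabilities, focusing on but not "
                   f"limited to {seed_file}:\n" + open(seed_file).read())

    def validate(report: str) -> str:
        return ask("Evaluate the validity and interestingness of this bug "
                   "report:\n" + report)

    files = ["parser.c", "net.c", "auth.c"]    # placeholder file list
    ranked = prioritize(files)                 # invocation 1: prioritization
    with ThreadPoolExecutor() as pool:         # invocation 2: parallel agents
        reports = list(pool.map(hunt, ranked))
    verdicts = [validate(r) for r in reports]  # invocation 3: final check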
Their previous model family, Qwen3.5, was available in many sizes, from very small ones intended for smartphones to medium sizes like 27B and big sizes like 122B and 397B.
This model is the first from their newer Qwen3.6 family to be provided with open weights.
Judging from its medium size, Qwen/Qwen3.6-35B-A3B is intended as a superior replacement for Qwen/Qwen3.5-27B.
It remains to be seen whether they will also publish replacements for the bigger 122B and 397B models in the future.
The older Qwen3.5 models can also be found in uncensored modifications. It likewise remains to be seen whether it will be easy to uncensor Qwen3.6, because for some recent models, like Kimi-K2.5, the methods used to remove censoring from older LLMs no longer worked.
Not sure why you're being downvoted; I guess it's because of how your reply is worded. Anyway, Qwen3.6 35B-A3B should have intelligence on par with a 10.25B-parameter dense model, so yes, Qwen3.5 27B is still going to outperform it in terms of quality of output, especially for long-horizon tasks.
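(The 10.25B figure presumably comes from the usual geometric-mean rule of thumb for MoE models, effective size ≈ sqrt(total × active) parameters:

    \sqrt{35\,\mathrm{B} \times 3\,\mathrm{B}} = \sqrt{105}\,\mathrm{B} \approx 10.25\,\mathrm{B}

That is only a heuristic, not a measured equivalence.)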
You should. The 3.5 MoE was worse than the 3.5 dense model, so expecting the 3.6 MoE to be superior to the 3.5 dense is questionable; one could only argue that a 3.6 dense model (not yet released) would be superior to the 3.5 dense.
Ok but you made a claim about the new model by stating a fact about the old model. It's easy to see how you appeared to be talking about different things. As for the claim, Qwen do indeed say that their new 3.6 MoE model is on a par with the old 3.5 dense model:
> Despite its efficiency, Qwen3.6-35B-A3B delivers outstanding agentic coding performance, surpassing its predecessor Qwen3.5-35B-A3B by a wide margin and rivaling much larger dense models such as Qwen3.5-27B.
OpenCode is one solution, but there are also several alternatives.
For example pi-dev; even Codex is open source, and it should work with any locally hosted model, e.g. through the OpenAI-compatible API provided by llama-server.
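For illustration, pointing any OpenAI-client-based tool at llama-server is just a base-URL change. A minimal sketch (the model file name is an example):

    # After starting the server with e.g.:
    #   llama-server -m qwen3.6-35b-a3b-q4_k_m.gguf --port 8080
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1",
                    api_key="unused")  # no key check by default on a local server
    resp = client.chat.completions.create(
        model="local",  # llama-server serves whatever model it was started with
        messages=[{"role": "user", "content": "Write a binary search in C."}],
    )
    print(resp.choices[0].message.content)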
I have not used pi-dev until now, but the recent presentation of pi-dev by its developer (reported in other HN threads) has convinced me that he is among the people who can distinguish good from bad, which unfortunately cannot be said about many of the people creating AI applications.
So I intend to switch to pi-dev as the coding assistant for my locally hosted models, but I do not yet have results demonstrating that this is the right choice, beyond its lead developer being more trustworthy than the others.
I too am interested in Pi and Codex, but haven’t seen any full-featured web UIs for them yet. Would be happy to know if there are some!
One thing I’m considering (depending on how happy I am with OpenCode after trying to remove some questionable functionality it has) would be to make Pi (or Codex) speak the OpenCode protocol so that its web UI can be used with it.
These 2 species of eagles are the biggest eagles in Europe.
Eagles are the biggest birds of prey that are active hunters.
A few species of vultures are much bigger than eagles, up to twice as heavy.
In Europe there was formerly a rather widespread bird of prey intermediate in size and appearance between eagles and true vultures, the so-called bearded vulture.
Unfortunately, the bearded vulture and the true vultures have been exterminated in many parts of Europe through the use of poisoned animal carcasses.
For the bearded vulture, the Romans used a much better name, "ossifraga", which means "bone-breaker" (and from which the word "osprey" comes, due to a confusion about which bird the name referred to). The bearded vulture was called "bone-breaker" because it feeds almost exclusively on bones, after breaking them by dropping them onto rocks from a great height.
Before they disappeared, the golden eagles and the bearded vultures provided some of the most spectacular sights in the high mountains, thanks to their exquisite flying prowess.
Laser projectors already include green lasers made on the same principle, but built from separate semiconductor lasers and non-linear crystals.
With this technology, which integrates the non-linear crystal with the semiconductor laser, it may become possible to make cheaper laser projectors, either with an integrated green laser or perhaps even an integrated triple laser covering all 3 primaries. The difference in cost will not be great, however, because the green laser is only a rather small fraction of the total cost (though it may be more expensive than the red and blue lasers together).
They have developed a method of growing, on semiconductor wafers, a kind of crystal with non-linear optical properties (Ta2O5, tantalum pentoxide, the same material as in tantalum electrolytic capacitors).
With a non-linear crystal there are many ways of transforming the color of a laser, e.g. by generating harmonics, by non-linear mixing of light from lasers of different colors, or by using the laser to pump a parametric oscillator that produces a color different from that of the laser.
You may be able to produce almost any color, though not with a single device, and the energy efficiency of producing the various colors can differ a lot.
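In terms of photon energies (frequencies), the three processes mentioned above obey simple conservation relations:

    \text{SHG (harmonics):}\quad \omega + \omega \to 2\omega
    \text{SFG (mixing):}\quad \omega_1 + \omega_2 \to \omega_3
    \text{OPO:}\quad \omega_{pump} \to \omega_{signal} + \omega_{idler}

so each device produces specific new frequencies tied exactly to its inputs, not an arbitrary color.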
This is similar to how green laser pointers work. Because, unlike for red or for blue, there are no good green semiconductor lasers, green laser pointers contain an infrared laser whose output is converted into green light by a non-linear crystal.
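Concretely, in a typical pointer an 808 nm pump diode drives a Nd:YVO4 (or Nd:YAG) crystal lasing at 1064 nm, and a KTP crystal then doubles that frequency, halving the wavelength:

    \lambda_{green} = \frac{1064\ \text{nm}}{2} = 532\ \text{nm}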
Because this reply has been hidden in a sub-thread started by a flagged message, and I believe that this information should be more widely known, I am reposting my now-hidden message, which replied to a very weird claim by someone else that the golden eagle has no cultural importance:
I do not understand what you are referring to with "is hardly known and of little importance culturally".
Your statement is completely unrelated to the parent article. Contrary to what you say, the golden eagle is by far the best-known species of eagle and the one with the greatest cultural importance. In a large part of Eurasia, for at least 5 millennia the golden eagle has been the most culturally important species of bird.
The golden eagle (Aquila chrysaetos) is the species of eagle that became the state symbol of the late Roman Republic and then of the Roman Empire.
Inspired by the Romans, during the last couple of millennia many other states have included the golden eagle in their heraldic symbols and several of them are still using it today.
Even much earlier than the Romans, among all Indo-European peoples the golden eagle had a special importance, being the bird used as a messenger by the God of the Sky, later known as Zeus in Greece and as Jupiter among the Romans. Hittite texts from 3500 years ago already contain many references to the golden eagle.
The golden eagle is also the species that has been most valued as a trained hunting bird in Central Asia.
The use of the "bald" eagle by the USA was also inspired by the Roman golden eagle, but the original species was replaced with a native American one. The golden eagles have survived in small pockets spread over a very large area from Western Europe to the USA, so they were not representative of the USA alone.
While the sea eagles, to which the American "bald" eagle also belongs, are bigger than the golden eagle (whose preferred habitat is the high mountains), the golden eagle is stronger for its size and able to hunt bigger prey in proportion to its size. Only some jungle eagles, like the harpy eagle, are definitely stronger and able to carry heavier prey.