Medium doesn't "hurt your credibility" nearly as much as revealing that one's arsenal of litmus tests suffers from such a paucity of real knowledge that one bases it on web design, but Medium does have a lot of annoying popups. A lot of people prefer Substack, and its paid-subscriber system works well.
(realistically speaking, experts tend to know less about the blog hosting ecosystem the more they know about their domain)
it's just a "tell" that you don't mind the poor reader experience and being associated with the rest of the low-quality slop on medium. many of us here have simply given up clicking on anything medium-related
It is an LLM fine-tuned using a new type of dataset and RL reward. It's good at reasoning, but I would not recommend replacing Llama with it for general tasks.
Approaches like best-of-n sampling and majority voting are definitely feasible. But I don't recommend trying CoT prompting tricks, as they might interfere with the model's internalized reasoning patterns.
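For anyone unfamiliar with the idea, here is a minimal sketch of majority voting, assuming you sample the model n times at temperature > 0 and have already extracted a final answer string from each completion (the sampling and answer-extraction steps are outside this sketch):

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most common final answer across n sampled completions."""
    counts = Counter(a.strip() for a in answers)
    best, _ = counts.most_common(1)[0]
    return best

# e.g. five samples from the same prompt, three of which agree:
samples = ["42", "41", "42", "42", "17"]
print(majority_vote(samples))  # -> 42
```

Best-of-n is the same pattern, except you rank the n completions with a scorer (e.g. a reward model) instead of voting, and return the top-ranked one.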
The model can already answer some tricky questions that other models (including GPT-4o) have failed to address, achieving a +5.56 improvement on the GPQA-Diamond dataset. Unfortunately, it has not yet managed to reproduce inference-time scaling. I will continue to explore different approaches!
not sure i understand the results. it's based on qwen 32b which is 49.49, and your best model is 53.54. the results haven't shown that your approach adds significant value yet.
The 49.49 result for Qwen2.5-32B is with CoT prompting; only the Steiner models are evaluated without CoT prompting.
More importantly, I highly recommend trying these out firsthand (not only Steiner, but all reasoning models). You'll find that these reasoning models can solve many problems that other models of the same parameter size cannot handle. Existing benchmarks may not reflect this well, as I mentioned in the article:
"... automated evaluation benchmarks, which are primarily composed of multiple-choice questions and may not fully reflect the capabilities of reasoning models. During the training phase, reasoning models are encouraged to engage in open-ended exploration of problems, whereas multiple-choice questions operate under the premise that "the correct answer must be among the options." This makes it evident that verifying options one by one is a more efficient approach. In fact, existing large language models have, consciously or unconsciously, mastered this technique, regardless of whether special prompts are used. Ultimately, it is this misalignment between automated evaluation and genuine reasoning requirements that makes me believe it is essential to open-source the model for real human evaluation and feedback."
I haven't personally used an Ollama Modelfile, but I think it should be relatively easy to convert from GGUF?
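If it helps, a minimal Modelfile for a local GGUF file looks roughly like this (the filename and sampling parameter below are hypothetical placeholders, not the author's recommended settings, and the model's actual chat template would still need to be set via a TEMPLATE directive):

```
# Hypothetical filename; point FROM at the converted GGUF weights.
FROM ./steiner-32b.Q4_K_M.gguf

# Placeholder sampling default, not an official recommendation.
PARAMETER temperature 0.7
```

Then `ollama create steiner -f Modelfile` should register it locally.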