Hacker News | nextos's comments

Also Claude owes its popularity mostly to the excellent model running behind the scenes.

The tooling can be hacky and of questionable quality, yet with such a model things can still work out pretty well.

The moat is their training and fine-tuning for common programming languages.


>> Also Claude owes its popularity mostly to the excellent model running behind the scenes.

It's a bit of both. Claude Code was the tool that made Anthropic's developer mindshare explode. Yes, the models are good, but before CC they were mostly just available via multiplexers like Cursor and Copilot, via the relatively expensive API.


Even small brand new apartments tend to have their own sauna, which is quite impressive.

It should be better for the reasons explained in the article. Pure functions require no context to understand. If they are typed, it's even simpler. LLMs perform badly on code that has lots of state and complex semantics. Those are hard to track.

In fact, synthesis of pure Haskell powered by SAT/SMT (e.g. Hoogle, Djinn, and MagicHaskeller) was already of some utility prior to the advent of LLMs. Furthermore, pure functions are also easy to test given that type signatures can be used for property-based test generation.

I think once all these components (LLMs, SAT/SMT, and lightweight formal methods) get combined, some interesting ways to build new software with a human-in-the-loop might emerge, yielding higher quality artifacts and/or enhancing productivity.
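As a sketch of the property-based testing idea mentioned above, here it is in Python rather than Haskell; the `check` helper and generator are made-up names standing in for what QuickCheck derives from a type signature, not any real library API:

```python
import random

# Pure function property under test: reversing a list twice is the identity.
def prop_reverse_involutive(xs):
    return list(reversed(list(reversed(xs)))) == xs

# Minimal property checker: draw random inputs from a generator and
# return the first counterexample found, or None if all trials pass.
def check(prop, gen, trials=500, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x
    return None

# This generator plays the role a type signature like [Int] -> Bool plays
# in QuickCheck: it tells the checker what shape of input to produce.
def gen_int_list(rng):
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
```

Because the function is pure, no setup or mocking is needed; the property plus a generator is the whole test.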


Wouldn't a fair counterargument be that LLMs have been trained on way less functional code, though?

Like, they are trained on a LOT of JS code -> good at JS. Way less functional code -> worse performance?


You can write functional-style code in many languages, as I have in JS and occasionally Python to great benefit.

For sure. I write functional style code in C# but it is not the same thing as writing OCaml or F#.

That's a very fair point. There are some publications showing lower performance for languages with less training data. I imagine it also applies to different paradigms. Most training code will be imperative and of lower quality.

I think LLMs are great at compression and information retrieval, but poor at reasoning. They seem to work well with popular languages like Python because they have been trained with a massive amount of real code. As demonstrated by several publications, on niche languages their performance is quite variable.

I used to find it better to shortcut the AI by asking it to write Python to do a task. Claude 4.6 seems to do this without prompting.

Edit: working on a lot of legacy code that needs boring refactoring, which Claude is great at.


You have a point but current LLM architectures in particular are very fragile to data poisoning [1,2].

[1] https://www.anthropic.com/research/small-samples-poison

[2] https://arxiv.org/abs/2510.07192


Yes, there are quite a few anti-AI projects. https://old.reddit.com/r/badphilosophy/wiki/index

No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand training on their own output as they pollute the Internet.

This is very impressive; top labs doing research often don't have experimental designs this elaborate. Was the TCR and BCR-seq you conducted helpful for designing cell therapies and neoantigen vaccines, and for monitoring progress?

Given that you carry the HLA-B*27:05 allele, you might have been blessed by being predisposed to a better response. But probably you want to keep an eye on future autoimmunity issues. Talking from experience...


Thanks for the warning, I hope that it wasn't a personal experience for you.

Thanks for the compliment about the elaborate design. I think that when you make something for one or a few patients it is easier to be more elaborate, even with the same knowledge and equipment.

Maybe the TCR and BCR-seq was most helpful for mRNA design and effectiveness monitoring, but hopefully someone else on my team will answer that better.


The TCR sequencing has been helpful for downselecting TCRs for a TCR-based cell therapy, and for monitoring response to various immune therapies (including the vaccines).

Interesting, thanks for your replies.

You should consider publishing a patient case report somewhere, as I believe there are lots of valuable conclusions to be extracted from your work.




> the reason it has been limited to those cases is drug development, today, is constrained by commercialization.

That's a good observation, but I think it's an incomplete picture. Another important constraint is often regulatory inertia and historical baggage.

The UK pioneered small classical and adaptive trials using Bayesian methods, and there were some promising results. A lot of modern Bayesian methodology was, in fact, developed at the MRC BSU Cambridge with this goal in mind; the probabilistic programming language BUGS (1989) is one example.

Given that most drugs fail, the industry is highly incentivized to use Bayesian methods to fail faster. These models allow for more rapid dose-finding and the ability to distinguish promising leads using interim data, which is vital given the massive cost of any trial, especially late-stage failures.

But for Bayesian methods to make a dent, they'd need to be applied to a large number of trials, and change doesn't happen overnight. Lots of big pharma players, e.g. GSK, are becoming interested in moving to Bayesian methods in order to leverage prior information and work better within small-data regimes.
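As an illustrative sketch of the "fail faster on interim data" idea (not any specific trial design; the target rate and patient counts below are made-up numbers), conjugate Beta-Binomial updating gives the posterior probability that a response rate beats a historical benchmark:

```python
import random

def posterior_prob_exceeds(successes, n, p0, a=1.0, b=1.0,
                           draws=100_000, seed=0):
    """Monte Carlo estimate of P(response rate > p0 | data)
    under a Beta(a, b) prior with a Binomial likelihood."""
    rng = random.Random(seed)
    post_a = a + successes          # conjugate update: add successes
    post_b = b + (n - successes)    # ...and failures
    hits = sum(rng.betavariate(post_a, post_b) > p0
               for _ in range(draws))
    return hits / draws

# Interim look after 20 patients, hoping to beat a 30% historical rate:
strong = posterior_prob_exceeds(12, 20, 0.30)  # promising arm, prob near 1
weak = posterior_prob_exceeds(2, 20, 0.30)     # likely futile arm, prob near 0
```

A design might, for instance, drop an arm for futility once this probability falls below a pre-specified cutoff, rather than waiting for the trial to complete.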


You don't need an app if you don't want one.

On the CLI, oathtool (from the oath-toolkit package) lets you calculate a TOTP.

But it's arguably a bit less secure if you generate the codes on the same machine you log in from, since compromising that one machine exposes both factors.
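For the curious, the underlying math (RFC 4226 HOTP, RFC 6238 TOTP) fits in a few lines of Python using only the standard library; this is a sketch for understanding, not a replacement for an audited tool, and note that real authenticator secrets are usually handed out base32-encoded:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30,
         digits: int = 6) -> str:
    """RFC 6238: HOTP keyed on the current 30-second time window."""
    if timestamp is None:
        timestamp = int(time.time())
    return hotp(secret, timestamp // step, digits)
```

The RFC 4226 test vectors check out: with the ASCII secret 12345678901234567890, counter 0 yields 755224.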

