Hacker News: rockinghigh's comments

Their revenue was $57.4 billion last year. In Q4 alone: cloud revenue $6.7 billion, cloud infrastructure $3.0 billion, cloud applications $3.7 billion, Fusion Cloud ERP $1.0 billion, NetSuite cloud ERP $1.0 billion.

It's the number of attempts at answering the question.


He founded the team that worked on fastText, Llama, and other similarly impactful projects.


He founded FAIR and the team in Paris that ultimately worked on the early Llama versions.


FAIR was founded in 2015 and Llama's first release was in 2023. Musk co-founded OpenAI in 2015, but no reasonable person credits him with ChatGPT's 2022 release.


It can also be used to simplify existing code bases.


It's a lot simpler. These models are not optimized for ambiguous riddles.


There's nothing ambiguous about this question[1][2]. The tool simply gives different responses at random.

And why should a "superintelligent" tool need to be optimized for riddles to begin with? Do humans need to be trained on specific riddles to answer them correctly?

[1]: https://news.ycombinator.com/item?id=47054076

[2]: https://news.ycombinator.com/item?id=47037125


How is this riddle relevant to a coding model?


It's not a coding model. Go to https://chat.z.ai/ and you'll see it is presented as a generalist.


They do. Pretty much all agentic models call linting, compiling and testing tools as part of their flow.


It's called problem decomposition, and agentic coding systems do some of this by themselves now: generate a plan, break the task into subgoals, implement the first subgoal, test whether it works, continue.
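The loop described above can be sketched in a few lines. This is a minimal illustration, not any real agent framework's API; the helper names (`generate_plan`, `implement`, `run_tests`) are hypothetical placeholders, with stubs standing in for what would be LLM calls and real test runs.

```python
def generate_plan(task):
    # Stub for the planning step: split a task description into subgoals.
    # A real agent would call an LLM here.
    return [step.strip() for step in task.split(";")]

def implement(subgoal):
    # Stub for the code-writing step.
    return f"code for: {subgoal}"

def run_tests(artifact):
    # Stub for the verification step (linting, compiling, unit tests).
    return artifact.startswith("code for:")

def decompose_and_solve(task):
    # Plan, then implement and verify one subgoal at a time.
    plan = generate_plan(task)
    results = []
    for subgoal in plan:
        artifact = implement(subgoal)
        if not run_tests(artifact):
            # A real agent would retry or revise the plan here.
            raise RuntimeError(f"subgoal failed: {subgoal}")
        results.append(artifact)
    return results
```

The point is the shape of the loop: each subgoal is implemented and checked before the next one starts, so failures surface early instead of at the end.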


That's nice if it works, but why not look at the plan yourself before you let the AI have its go at it? Especially for more complex work where fiddly details can be highly relevant. AI is no good at dealing with fiddly details.


That's what you can do. Tell the AI to write a plan to a Markdown file, review and edit it yourself, and then tell another AI to execute the plan. If the plan is too long, split it into steps.


This has been a well-integrated feature in Cursor for six months.

As a rule of thumb, almost every solution you come up with after thirty seconds of thought for an online discussion has been considered by people doing the same thing for a living.


That's exactly what Claude does. It makes a comprehensive plan broken into phases.


There’s nothing stopping you from reviewing the plan or even changing it yourself. In the setup I use the plan is just a markdown file that’s broken apart and used as the prompt.


A language model, in computer science, is a model that assigns a probability to a sentence, or to the next word given the preceding words. This definition predates LLMs.
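That classic definition is easy to demonstrate without any neural network at all. Here is a toy bigram model, a sketch of the pre-LLM sense of "language model": a sentence's probability is the product of each word's conditional probability given the previous word. The three-sentence corpus is made up for the example.

```python
from collections import defaultdict

corpus = ["the cat sat", "the cat ran", "the dog sat"]

# Count bigrams: previous word -> next word.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = ["<s>"] + sentence.split()  # <s> marks sentence start
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def p_next(prev, nxt):
    # P(next word | previous word), estimated from bigram counts.
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

def p_sentence(sentence):
    # Chain rule: multiply the conditional probability of each word.
    words = ["<s>"] + sentence.split()
    prob = 1.0
    for prev, nxt in zip(words, words[1:]):
        prob *= p_next(prev, nxt)
    return prob
```

For "the cat sat" this gives P(the|&lt;s&gt;) · P(cat|the) · P(sat|cat) = 1 · 2/3 · 1/2 = 1/3. Nothing here says how the probabilities must be computed; counts, n-grams, or a transformer all fit the same definition.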


A 'language model' only has meaning insofar as it tells you this thing 'predicts natural language sequences'. It does not tell you how these sequences are being predicted, or anything about what's going on inside, so all the extra meaning OP is trying to attach by calling them Language Models is, well... misplaced. That's the point I was trying to make.

