Hacker News | aiven's comments

The system is built in a way that you need to run in order to stay still. You can't just "work less," because you will not have stability that way. New slaves using new shiny tools will work more and more and outcompete you in whatever domain you work in. So you will be forced to work more, and learn more, just to keep your current lifestyle.

Without basic morality and government regulation, capitalism would exploit humans like cattle. 996 should be outlawed everywhere.


Do you feel that you are at least as productive as a "regular" developer working a normal amount of hours? In terms of the quality and quantity of features you have shipped.


In the first ten years? Maybe. After that, it might become mainstream.


- If an LLM knows about your content, people don't really need to visit your site
- LLM crawlers can be pretty aggressive and eat up a lot of traffic
- Google will not suggest its misleading "summaries" of your site from its search
- Some people just hate LLMs that much ¯\_(ツ)_/¯


But LLMs still can't reason... in a reasonable sense. No matter how you look at it, it is still a statistical model that guesses the next word; it doesn't think/reason per se.


It does not guess the next word; the sampler chooses subword tokens. Your explanation can't even explain why it generates coherent words.
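To make the distinction concrete, here is a minimal sketch of what "the sampler chooses subword tokens" means. The vocabulary and logits below are made up for illustration (real models emit a logit per entry in a ~100k-token vocabulary); the temperature-sampling logic itself is the standard technique.

```python
# Toy illustration: the model outputs logits over SUBWORD tokens,
# and a separate sampler turns those scores into one chosen token.
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Temperature sampling: softmax over logits, then draw one token id."""
    if temperature <= 0:  # treat temperature 0 as greedy decoding
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(logits) - 1

# Hypothetical subword vocabulary: note the fragments "straw" and "berry" --
# coherent whole words emerge from chaining such fragments, not word lookup.
vocab = ["straw", "berry", "the", "un", "likely"]
logits = [2.5, 0.3, 1.1, -0.7, 0.0]
print(vocab[sample_token(logits, temperature=0.0)])  # greedy -> "straw"
```

With a higher temperature the same logits yield different tokens on different draws, which is why the model's output is sampled rather than "guessed."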


It is insane to think this in 2025 unless you define "reasoning" as "the thing I can do that LLMs cannot"


Reasoning is the act of figuring out how to solve a problem for which you have no previous training set. If an AI can reason, and you give it a standard task of "write me a Python file that does x and y," it should be able to complete that task without ever being trained on Python code. Or English in general.

The way it would solve that problem would look more like some combination of Hebbian learning and MuZero, where it starts to explore the space around it in terms of interactions, information gathering, information parsing, and forming associations, until it eventually understands that your task involves writing bytes to a file in a certain structure that, when executed, produces certain output, along with the rules around that structure that make it give that output.

And it will be able to do this through running as a model on your computer, or a robot that can type on a keyboard, all from the same code.

LLMs appear to "reason" because most people don't actually reason; a lot of people, even in technical fields, operate on a principle of information lookup. I.e., they look at the things they have been taught to do, figure out which known problem fits closest, and repeat the steps with a few modifications along the way. LLMs pretty much do the same thing. If you operate like this, then sure, LLMs "reason." But there is a reason why LLMs are barely useful in actual technical work: under the hood, to make them do things autonomously, you basically have to write wrapper code/prompts that often take as long to write and fine-tune as the actual code itself.


It is insane to think this in 2025 unless you define "reasoning" as some mechanical information lookup. This thinking (ironically) degrades the meaning of reasoning and of intelligence in general.


Ironically, they are not happy because they are not smart enough to find happiness ¯\_(ツ)_/¯


OpenAPI also has examples and auth. But, like Postman, it is tied to your service, not some third party.
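For reference, both features mentioned are built into the OpenAPI spec: `examples` on a response's media type and `securitySchemes` under `components`. A hedged sketch, using a hypothetical `/pets` endpoint (the path, titles, and example values are made up for illustration):

```yaml
openapi: "3.0.3"
info:
  title: Pet API            # made-up service name
  version: "1.0.0"
paths:
  /pets:
    get:
      security:
        - bearerAuth: []    # requires the scheme defined below
      responses:
        "200":
          description: A list of pets
          content:
            application/json:
              examples:     # OpenAPI's built-in example support
                twoPets:
                  value: [{"name": "Rex"}, {"name": "Mia"}]
components:
  securitySchemes:
    bearerAuth:             # OpenAPI's built-in auth description
      type: http
      scheme: bearer
```

Tools like Swagger UI render the examples and the auth requirement directly from such a spec.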


> Don’t complain about mediocre work when you’re producing mediocre work yourself.

If only the tasteful minority complains, it will soon shrink into a tasteful micro-minority. Complaining is still useful feedback (half the time).


* assuming that we actually need all these technological advancements


my custom simple tool is a stopwatch on a phone ¯\_(ツ)_/¯

