Hacker News | uriegas's comments

The idea of offshoring computing is good. However, the cloud developed as a centralized computing platform instead of a distributed one. This has created power dynamics that harm customers. The same happened with social media, and has happened to other industries. I think it would be better for customers if there were many small cloud providers and they could easily switch between them. But even migrating from one cloud provider to another is a huge endeavor these days.

If you actually mean offshore as in located in a different country, or especially a different continent, then that is a terrible idea for latency for many forms of computation. There are acceptable use cases, e.g. when round trips are infrequent and average latency is already high, as with batch workloads or some LLM workloads, but even then closer compute is almost always a better experience.

I think AI labs are realizing that they no longer have any competitive advantage other than being the incumbents. Plus hardware improvements might render their models irrelevant for most tasks.

I don't really think it is turning into guesswork. A lot of people wrote bad code before by pasting things from the internet they didn't understand. I think some people are using LLMs the same way, but that does not mean programming has changed. I do think, though, that code quality is being neglected these days.

Programming has changed. Agentic coding, where I go back and forth with the AI to generate a spec along with tooling and exit criteria, and then the AI goes off for hours (possibly helped by a harness/tooling like Ralph Wiggum), and then I do the same thing for a different spec/feature/bug fix and the AI goes off and does that. Repeat until out of tokens. That was not how programming went before.

We can quibble over how much of that is or is not "programming", but on a post about Claude Code, what's relevant is that this is how things are today. How much code review is done after the AI agent stops churning matters for the question of code quality out the other end, but on the question at hand, "has programming changed", it either has, or what I'm doing is no longer programming. The semantics are less interesting to me; the point is, when I sit down at my computer to make code happen so I can deliver software to customers, the very nature of what I do has changed.
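The "spec, then loop the agent until out of tokens" workflow above can be sketched as a plain shell loop. This is only a minimal illustration: `agent_cli` here is a hypothetical stand-in (stubbed so the script runs) for whatever non-interactive agent command you actually use, and the pass count stands in for a token or exit-criteria budget.

```shell
#!/bin/sh
# Stub standing in for a real agent invocation, e.g. a CLI that reads
# spec.md and attempts one pass over the task. Replace with your tool.
agent_cli() {
  echo "pass $1: attempted spec"
}

MAX_PASSES=3   # stand-in for "repeat until out of tokens"
for i in $(seq 1 "$MAX_PASSES"); do
  agent_cli "$i"
done
```

The design point is that the human effort moves into writing the spec and the exit criteria; the loop itself is trivially dumb, which is exactly the Ralph Wiggum joke.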


Long ago we abstracted programming into logical languages that allow us to think at a higher level. IMO LLMs are another abstraction, but a bad one, as they are stochastic and we can't guarantee output quality (e.g. security, performance). The dream has always been to tell the computer what to do in a simple language, and the challenge has always been discovering that we didn't even know what we wanted the computer to do. LLMs might help with the former but not the latter. In the end, human intelligence cannot be outsourced.

The guesswork lies in the "how to poke the black box in the right way", not in the code itself.

What does an AI degree provide? Is it really different from majoring in CS? AI has been a standard part of CS undergrad programs AFAIK. At least in high school, AI curricula seem to be just learning to use LLMs and understanding a few concepts. That does bring value at a societal level, but I am not sure it makes sense as an undergrad degree (if that is all they are teaching).


I do agree with his identification of the problem: sometimes agents fail because of the tools around them and not because of the model's reasoning. However, for the failing tests, I think he is not distinguishing between tests that fail due to a harness failure and those that fail due to a reasoning failure. It would be nice if someone analyzed that in the data set.


I don't think so. Have you read 'The Bonobo and the Atheist'? Humans are not the only ones using tools, and in reality there isn't much difference between humans and animals. The conclusion I take from the book is that the only difference is religion. Although I have a feeling that humans do have a more developed intellect (problem solving), this was not explored in the book.


There are some research projects out there that use LLMs for health diagnostics. Here's one: https://cs.stanford.edu/people/jure/pubs/med-pmlr23.pdf

They usually require more data; it is not a great idea to diagnose anything with so little information. But in general I am optimistic about the use of LLMs in health.


I agree. Most of the time people think STEM is harder, but it is not. Yes, it is harder to understand some concepts, but in the social sciences we don't even know what the correct concepts are. The social sciences have not progressed nearly as much over the last few centuries as STEM has.


I'm not sure you're correct. In fact, there has been a revolution in some areas of social science in the last two decades due to the availability of online behavioural data.


Yeah, there is also the work of primatologists, which challenges some of our beliefs about what we consider uniquely human (like politics). See Frans de Waal.

Still, I believe there hasn't been as much progress compared with STEM. But it is just a belief at the end of the day. There might be a study about this out there.


I believe iMessage is only used in the USA. In Latin America almost everyone uses WhatsApp.


Agree. Texas is pretty bad. In most places you cannot exist without a car. No wonder McAllen is the most obese city in the US.


Hence the last sentence of my post:

> Advocate for safe biking infrastructure in your area.

We built dangerous highways. We can build bikeways as well.



