Hacker News | new | past | comments | ask | show | jobs | submit | chis's comments

Yeah my take is the exact opposite. It's such a page turner that the book has become one of my default recommendations for people looking to get back into reading. Of course you have to be a certain type of nerd to appreciate it.

Just to make one obvious critique: your costs per token are probably about 1000x higher than the ones they provide.

I'm pretty sympathetic to Anthropic/OpenAI just because they are scaling a pretty new technology by 10x every year. It is too bad Google isn't trying to compete on coding models though, I feel like they'd do way better on the infra and stability side.


I've owned this GPU for 5 years already, it's fine

Hackernews needs to nominate an elite crew of individuals who can tell when an article is AI slop and flag it.

We were able to make waterproof and windproof garments long before Gore-Tex. The crucial thing is that Gore-Tex is water-vapor permeable, so it sheds excess heat far better without the wearer needing to take off a layer.

Traditional materials still have a place though. Material science has not beaten down feathers or wool yet, for the most part.


> Gore-Tex is water vapor permeable. So it has a way better ability to shed excess heat without needing to take off a layer.

It's also a way to shed water: wearing waterproof, non-breathable layers is often worse than wearing none, because the moisture your body releases gets trapped and soaks you from the inside as surely and rapidly as the rain. (Maybe it's a bit warmer.)


> The crucial thing is that Gore-Tex is water vapor permeable.

While dry, or intermittently wetted (so it can still shed water). Numerous independent tests show that it doesn't breathe at all once the surface is fully wet. Also, Gore-Tex is no longer best-in-class amongst rain-shedding breathable fabrics; it simply has name recognition.

To be fair, few things do breathe once their surface wets... but wool's surface is so convoluted by the twisty, hydrophobic threads that it rarely gets fully wet on the surface.


My hope would be that this eventually pushes pip to adopt a similar feature set and performance improvements. It's always a better story when the built-in tool is adequate instead of having to pick something. And yes, uv is written in Rust, but it's pretty clear that Python could provide something within 2-5x the speed.


The problem is funding.

There seems to be a pervasive belief that the Python tooling and interpreter suck and are slow because the maintainers don’t care, or aren’t capable.

The actual problem is that there isn’t enough money to develop all of these systems properly.

Google says that Astral had 15 team members. Of course, it’s so hard to make these projections. But it wouldn’t shock me if uv and ruff are each individually multi-million dollar pieces of software.

If you’d like to invest a million dollars to improve pip, or work for free for 3 years to do it yourself, I’m not sure if anyone would object.


pip isn't exactly a "built-in" tool, beyond the Python distribution having a stub module that downloads pip for you.


`ensurepip` does not "download pip for you". It bootstraps pip from a wheel included in a standard library sub-folder (running pip's own code from within that wheel, using Python's built-in `zipimport` functionality).

That bootstrapping process just installs the wheel's contents, no Internet connection required. (Pip does, of course, download pip for you when you run its self-upgrade — since the standard library wheel will usually be out of date).
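One way to see the offline behaviour for yourself (a small sketch using `ensurepip`'s public API, nothing here touches the network):

```python
import ensurepip

# ensurepip.version() reports the version of the pip wheel that ships
# inside the standard library -- reading it requires no Internet access.
print(ensurepip.version())
```

Running `python -m ensurepip` then installs that bundled wheel; only pip's own later self-upgrade goes online.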


E2E encryption lets Meta turn down government subpoenas because they can say they truly don't have access to the unencrypted data.

I can't say I really mind this change by Meta that much overall though. Anyone who's serious about privacy probably knew better than to pick "Instagram chat" as their secure channel. And on the other hand having the chats available helps protect minors.


These AI written articles carry all the features and appearance of a well reasoned, logical article. But if you actually pause to think through what they're saying the conclusions make no sense.

In this case, no, it's not the case that Go can't add a "try" keyword because its errors are unstructured and contain arbitrary strings; that's how Python works already. Go hasn't added try because they want to force errors to be handled explicitly and locally.
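(For comparison, a minimal Python sketch of "unstructured string errors plus try" — the function and message here are purely illustrative:)

```python
# Python's try/except works fine even though the only error payload
# is an arbitrary, unstructured message string.
def read_config(path):
    # Illustrative failure; real code would raise on an actual I/O error.
    raise RuntimeError(f"could not open {path}: permission denied")

try:
    read_config("/etc/app.conf")
except RuntimeError as exc:
    # The handler sees only the message -- no typed error set involved.
    print(f"recovering from: {exc}")
```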


It is simpler than that. Go hasn't added "try" because, much like generics for a long time, nobody has figured out how to do it sensibly yet. Every proposal, of which there have been many, has had gaping holes. Some of the proposals have gotten as far as being implemented in a trying-it-out capacity, but even those fell down to scrutiny once people started trying to use them in the real world.

Once someone figures it out, they will come. The Go team has expressed wanting it.


> nobody has figured out how to do it sensibly yet.

In general or specifically in Go?


The `try` syntax mentioned in the article doesn't actually make things less explicit in terms of error handling. Zig has `try` and the error handling is still very much explicit. Rust has `?`, same story.


I just read the article and I didn't come away with that rationale. Now, this isn't to say that I agree with the author. I don't see why Go would *have* to add typed error sets to have a try keyword.

Yes, mimicking Zig's error handling mechanics in Go is very much impossible at this point, but I don't see why we can't have a flavor of said mechanics.


What led you to believe this is an AI written article?


It's quite clear that these companies do make money on each marginal token. They've said this directly and analysts agree [1]. It's less clear that the margins are high enough to pay off the up-front cost of training each model.

[1] https://epochai.substack.com/p/can-ai-companies-become-profi...


It’s not clear at all because model training upfront costs and how you depreciate them are big unknowns, even for deprecated models. See my last comment for a bit more detail.


They are obviously losing money on training. I think they are selling inference for less than what it costs to serve these tokens.

That really matters. If they are making a margin on inference they could conceivably break even no matter how expensive training is, provided they sign up enough paying customers.

If they lose money on every paying customer, then building great products that customers want to pay for will just make their financial situation worse.


"We lose money on each unit sold, but we make it up in volume"


By now, model lifetime inference compute is >10x model training compute, for mainstream models. Further amortized by things like base model reuse.


Those are not marginal costs.


> They've said this directly and analysts agree [1]

chasing down a few sources in that article leads to articles like this at the root of claims[1], which is entirely based on information "according to a person with knowledge of the company’s financials", which doesn't exactly fill me with confidence.

[1] https://www.theinformation.com/articles/openai-getting-effic...


"according to a person with knowledge of the company’s financials" is how professional journalists tell you that someone who they judge to be credible has leaked information to them.

I wrote a guide to deciphering that kind of language a couple of years ago: https://simonwillison.net/2023/Nov/22/deciphering-clues/


Unfortunately, tech journalists' judgement of source credibility doesn't have a very good track record.


But there are companies which are only serving open-weight models via APIs (i.e. they are not doing any training), so they must be profitable? Here's one list of providers from OpenRouter serving Llama 3.3 70B: https://openrouter.ai/meta-llama/llama-3.3-70b-instruct/prov...


It's also true that their inference costs are being heavily subsidized. For example, if you calculate Oracle's debt into OpenAI's revenue, they would be incredibly far underwater on inference.


Sure, but if they stop training new models, the current models will be useless in a few years as our knowledge base evolves. They need to continually train new models to have a useful product.


Football is so unique in that the way it’s presented makes it almost impossible to understand what’s going on. There are a million rules, which even die-hard fans don’t understand. And the broadcast doesn’t even make an attempt to explain or even show the offensive or defensive formations and plays being chosen.

It feels like what we’re shown on tv is a very narrow slice of what’s going on. We see the ball moving down the field but have no idea what the coach or quarterback is doing. Somehow it’s still an incredible watch though.


The plays belong to the individual teams, which is, I heard, why they don't broadcast full field views.

No idea if it's true or not


There are some recent experiments with consumer-facing full-field views (Prime Vision All-22). Those views were held closely for a long time, though.


What would the average software engineer pay for an AI coding subscription, as compared to not having any at all? Running a survey on that question would give some interesting results.


I may be a bit of an anomaly since I don't really do personal projects outside of work, but if I'm spending my own money then $0. If the company is buying it for me, whatever they are willing to pay but anything more than a couple hundred/month I'd rather they just pay me more instead or hire extra people.


I would pay at least $300/month just for hobby projects. The tools are absolutely amazing at things I am the worst at: getting a good overview of a new field/library/docs, writing boilerplate and first working examples, dealing with dependencies and configurations, etc. I would pay that even if they never improve and never help to write any actual business logic or algorithms.

Simple queries like: "Find a good compression library that meets the following requirements: ..." and then "write a working example that takes this data, compresses it and writes it to output buffer" are worth multiple hours I would otherwise need to spend on it.
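(As a sketch of the kind of answer the second query might produce — assuming zlib from the standard library happened to be the library chosen; the function name is illustrative:)

```python
import zlib

def compress_to_buffer(data: bytes, level: int = 6) -> bytes:
    """Compress data with zlib and return the compressed output buffer."""
    compressor = zlib.compressobj(level)
    out = compressor.compress(data)
    out += compressor.flush()  # flush any remaining compressed bytes
    return out

payload = b"hello " * 1000
buf = compress_to_buffer(payload)
assert zlib.decompress(buf) == payload  # round-trips losslessly
print(f"{len(payload)} -> {len(buf)} bytes")
```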

If I wanted to ship commercial software again I would pay much more.


I pay $20 for ChatGPT, I ask it to criticize my code and ideas. Sometimes it's useful, sometimes it says bullshit.

For a few months I used Gemini Pro, there was a period when it was better than OpenAI's model but they did something and now it's worse even though it answers faster so I cancelled my Google One subscription.

I tried Claude Code over a few weekends. It definitely can do tiny projects quickly, but I work in an industry where I need to understand every line of code and basically own my projects, so it's not useful at all. Also, doing anything remotely complex involves so many twists that I find the net benefit negative. And a normal side-effect of doing something yourself is learning; here I feel like my skills devolve.

I also occasionally use Cerebras for quick queries, it's ultrafast.

I also do a lot of ML so use Vast.ai, Simplepod, Runpod and others - sometimes I rent GPUs for a weekend, sometimes for a couple of months, I'm very happy with the results.


I pay $20/m for cursor. It allowed me to revamp my home lab in a weekend.


> It allowed me to revamp my home lab in a weekend.

So, what did you learn from that project??


I previously hand baked everything.

I learned how Ansible, Terraform, and Docker approach DevOps and infra.

I would not be able to hand-cook anything with these tools, but understanding the syntax was a non-goal (nor is it what interests me).

