Hacker News | pkage's comments

ML researcher perspective: Conda is... dog slow, even for relatively simple tasks (clone and run a project). The recommendation nowadays is to use Mamba (iirc), but in my experience (a few years old now) it's been unstable and unable to find dependency solutions that worked on my system / our cluster.

I've settled on just using Poetry for most things, and then using pip in a venv to install from the Poetry-generated pyproject.toml, either in a Dockerfile or directly on a cluster. That's worked fairly well so far, even with torch/CUDA (and the mess of CUDA versioning), and from macOS to Linux.
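A minimal sketch of the Dockerfile variant (the base image, paths, and src/ layout here are hypothetical; the point is that pip can install a Poetry-generated pyproject.toml directly, as long as it declares a build backend):

```dockerfile
# Hypothetical layout: pyproject.toml generated by Poetry, sources under src/.
FROM python:3.11-slim
WORKDIR /app
COPY pyproject.toml README.md ./
COPY src/ src/
# Plain pip in a venv; Poetry itself isn't needed inside the image.
RUN python -m venv /opt/venv \
 && /opt/venv/bin/pip install --upgrade pip \
 && /opt/venv/bin/pip install .
ENV PATH="/opt/venv/bin:$PATH"
```

On a cluster, the same `python -m venv` + `pip install .` pair works without the container.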

I think uv/rye is a good next step; Poetry can be a bit slow at times as well.


It IS slow, no argument there, but I've never found the speed of a package management tool to be that important.

Maybe it's different for other ecosystems such as Node etc., but when I'm doing research in ML I configure my project mostly just once and do the bulk of the setup up front (installing CUDA, PyTorch, etc.); later it's mostly just activating the environment and occasionally adding some util packages via pip.

What makes conda better than native venv+pip is its extensive libraries/channels and its ability to solve/build complicated dependencies effortlessly, especially when you have to run your project on both Windows and Linux.
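For instance, a single environment file (package names and pins here are illustrative, not from any real project) lets conda solve the platform-specific heavy lifting on both OSes while pip handles the rest:

```yaml
# environment.yml -- illustrative example
name: research
channels:
  - conda-forge
  - pytorch
dependencies:
  - python=3.11
  - pytorch
  - pytorch-cuda=12.1   # conda resolves the CUDA toolchain per platform
  - scipy
  - pip
  - pip:
      - einops          # pip-only extras still fit in the same file
```

The same `conda env create -f environment.yml` then works on either platform.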

This is not to say speeding up isn't needed, of course!


> What makes conda better than native venv+pip is its extensive libraries/channels and its ability to solve/build complicated dependencies effortlessly, especially when you have to run your project on both Windows and Linux.

For me, most stuff is installed via pip anyway. The only things I'm pulling via conda are BLAS, SciPy, torch, and all that stuff that's a PITA to install.


If you are working on a large collaborative project, switching between branches can mean needing to rebuild your container images. It's not something I do every day, but it happens enough that the difference between 1 minute (doesn't disrupt flow/train of thought) and 10 minutes (disrupts flow) means something.


The mamba solver comes with conda nowadays. It's not slow any more.
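Concretely, libmamba has been conda's default solver since conda 23.10; on older installs it can be enabled explicitly, either via `conda config --set solver libmamba` or with a config fragment:

```yaml
# .condarc fragment: opt in to the libmamba solver on older conda versions
solver: libmamba
```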


Not only is it slow, it has so many idempotency issues that it's barely usable.


Claiming that emissions surged 50% due exclusively to AI (as in the headline) is unsupported by the Google report, which is the article's sole source.

Again, looking at the graph of emissions at Google on pg 31, it's clear that the increase is linear after a dip in 2019-2020, and the report identifies supply chain issues as a major source of emissions in addition to datacenter electricity use; notably, it does not specifically call out AI training/inference costs as a reason for the increase. The report does identify AI as a challenge going forward; however, that's not the same as saying it's exclusively responsible for the last 5 years of emissions.


and indeed, this is the approach that config-centric languages like Nickel[0] take.

[0]: https://nickel-lang.org/


The priority field in Nickel seems a lot like CSS weighting, though more explicit; I suspect it will cause headaches at scale.
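For context, a small sketch of the mechanism (field names are made up; merging is Nickel's `&` operator, and the higher-priority definition wins, much like a more specific CSS selector):

```nickel
let base = { timeout | default = 30, retries = 3 } in
let override = { timeout | priority 10 = 60 } in
# `timeout` from `override` wins its priority fight against the default;
# `retries` passes through untouched.
base & override
```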


Additionally, the only sea covered in its entirety is the Mediterranean. Generally, constellations don't do captures over open ocean, as researchers/customers tend to be much more interested in events on land; this makes it difficult to do long-term analyses of marine events because the data simply isn't captured.

Source: work in the industry


True, but coastlines are well covered. Assuming the pollution comes from the coast it should be fairly easy to determine what the hotspots are (see Po river on the map in the article).


Looks like it's trying to use Runestone[0] (a textbook authoring tool) to get the number of online students, but the server URL is misconfigured to point at localhost (hence the port scan trigger).

[0] https://runestoneserverascholer.readthedocs.io/en/latest/ind...


I think the problem comes down to fingerprinting. For example, resources used to be cached in the browser regardless of site exactly as you described, but then ad companies figured out you can track a user's browsing history by timing the cache accesses to see if a user had loaded a bundle from another site. To remove that vector, browsers split up caches per domain.
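A toy model of that mitigation (names are hypothetical; real browsers key on the top-level site rather than just the domain, and the details vary by engine), showing why splitting the cache kills the cross-site timing signal:

```python
# Toy model of a partitioned browser cache: the key is
# (top_level_site, resource_url), so site A's cached copy of a shared
# CDN asset is invisible to site B.
cache = {}

def fetch(top_level_site, url):
    key = (top_level_site, url)
    if key in cache:
        return cache[key], "hit"       # fast path an attacker could time
    cache[key] = f"contents of {url}"  # simulate a slow network load
    return cache[key], "miss"

# Same URL requested from two different sites: each gets its own cold load.
_, first = fetch("a.example", "https://cdn.example/lib.js")
_, second = fetch("b.example", "https://cdn.example/lib.js")
print(first, second)  # miss miss: no cross-site timing signal remains
```

With a single shared cache (key = url alone), the second fetch would be a fast "hit", and that speed difference is exactly what the tracking technique measured.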


That's a vector for literally all assets, so adding a random delay to every layer up to the last Paint() is the only real counter.

It can be mitigated by adding a property to scripts to allow/default/deny shared cross-domain resource caching; the ecosystem could still benefit from hot jQuery/etc. caches while keeping an opt-in, backwards-compatible default.

That, combined with the already-established HTML <script> integrity attribute, and you're gravy.
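For reference, an integrity value is just a base64-encoded digest of the file's bytes with the hash algorithm as a prefix; a small sketch of computing one (the helper name is made up):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value like 'sha384-...'."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# The result goes in:
#   <script src="..." integrity="sha384-..." crossorigin="anonymous">
print(sri_hash(b"console.log('hi');"))
```

The browser refuses to execute the script if the fetched bytes don't hash to the declared value, which is what makes shared caching of third-party assets safer.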


What does this have to do with high speed rail?


High speed rail can shorten the commute between Los Angeles's privately funded aerospace ecosystem and Las Vegas's publicly funded one, supporting commercialization that can benefit CA, NV, and the USA.


> And then imagine that you can buy — hopefully you can buy — an apartment in the $400,000 range, or a nice house for $600,000. Those figures are a little bit up in the air, but it will definitely be more affordable.

These... are still relatively unaffordable housing prices.


> relatively

I'm not sure that means what you think it means.

Relative to the cost of housing in every nearby city, that's very affordable.


Relative to the Bay Area, those prices are downright cheap.


Who is downvoting this? It's factual.

Look up housing prices in San Francisco or SV. Tear-downs sell for more!


I would love a $2.5 million discount on a house.


I live in one of the best cities in the US. Like it's definitely a City with a capital C, dig?

$600k gets you something great. Is there a hidden assumption of having 8 kids here, lol?


Where?


I am not so sure it is, considering these are new builds. I see 800 sqft old homes in Fairview being sold for $500k.


I don't see any houses for sale near that price in Fairview on Redfin.


https://www.zillow.com/homedetails/1412-Adams-St-Fairfield-C...

Not sure why the position is so contentious. My only point is that the pricing for the new planned homes in this development seems to be pretty good compared to the rest of the area. I don't know on a per-sqft basis, but it looks like old turds like this house are selling for $400-500k while newer, larger lots are in the million dollar range. A well-planned community home for $600k sounds pretty nice.


There are only a couple of transactions this year at that kind of price. None of them have interior photos, so I'd assume they're run down. But yeah, it's a lot cheaper than I expected, given that houses on the peninsula are 3-4x this price.


Totally agree. I suspect I got the downvotes because my poorly explained post made it look like I was arguing the opposite. Most of the homes in the area are million dollar homes; if these new builds start at $600k, that sounds like a really good deal for someone.



What is the relationship between Rye and uv? Aren't they now both under the Astral.sh umbrella?


Astral have said they eventually want uv to replace Rye, but they're going to build on what Rye has already done. Rye currently uses uv under the hood for the venv basics and the dependency resolver.

Rye is kind of being used as the test bed for what uv will eventually be.


There’s more information in the uv announcement:

https://astral.sh/blog/uv

