Hacker News | sndean's favorites

Yeah, this is why I'm having a hard time taking many programmers seriously on this one.

As a general class, programmers and technologists have been putting people out of work via automation for as long as the profession has existed. We justified it in many ways, but generally as "if I can replace you with a small shell script, your job shouldn't exist anyway and you can do something more productive instead". These same programmers would look over the shoulders of "business process" folks and see how they did their jobs - "stealing" the workflows and processes so they could be automated.

Now that programmers' jobs are on the firing block, all of a sudden automation is bad. It's hard to sort through genuine vs. self-serving concern here.

It's more or less a case of what comes around goes around to me so far.

I don't think LLMs are great or problem-free - or even that scraping the Internet for a training data set is moral. I just find the reaction incredibly hypocritical.

Learn to prompt, I guess?


> Companies that monetize user data in exchange for “free” services that abuse your privacy aren’t affected by this [the app store tax], as they don’t process payments through the App Store. However, privacy-first companies that monetize through subscriptions are disproportionately hit by this fee, putting a major barrier toward the adoption of privacy-first business models.

Huh. I’ve never seen it framed this way and it might be the most compelling argument I’ve heard to date. It’s not simply a debate about whether a company should be allowed to be vertically integrated in isolation, but whether that vertical integration allows them to exert unfair distorting pressure on the free markets we are trying to protect.


I feel like people in the comments are misunderstanding the findings in the article. It’s not that people save time with AI and then turn that time to novel tasks; it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI: verification of outputs, prompt crafting, cheat detection, debugging, whatever.

This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.


A friend recently put me on to Intervals (https://intervals.icu) for Garmin / Strava related data nerdery and I’ve enjoyed it very much. As a rower it was nice that you can construct reports that give rowing related metrics rather than just the usual cycle / run stuff.

From the conclusions:

> I find that AI substantially boosts materials discovery, leading to an increase in patent filing and a rise in downstream product innovation. However, the technology is effective only when paired with sufficiently skilled scientists.

I can see the point here. Today I was exploring the possibility of a new algorithm. I asked Claude to generate a part that is well known (but for which there are not a lot of examples on the internet) and it hallucinated a function. Despite being wrong, it was sufficiently close to the solution that I could "rehallucinate" it myself and turn it into a creative solution. Of course, the hallucination would have been useless if I were not already an expert in the field.


Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about "meritocracy" and the salutary effects of hard work.

Poor kids aren't visiting the carnival. They're the ones working it.

--

Original comment from notacoward here: https://news.ycombinator.com/item?id=15659076


About 10 years ago, I had the pleasure of spending a few nights in a Bedouin camp. It's difficult to describe how different the world looks from the desert. Not just the immediate surroundings but how the desert shapes their entire world view. It feels like a completely different timeline. Like a parallel world looking through a spyglass to the modern world passing on the horizon.

One night, a sand storm kicked up and started rumbling the tents. We were in temporary tents for guests and not traditional Bedouin tents. I remember looking up at the tent poles, watching them shake. Then in an instant, the tent just disappeared, going from the blackness of the tent to the most intense canvas of stars in the night sky. The wind had plucked the entire tent and threw it across camp.

The next day, I had tea with one of the Bedouin guides. Her eyes were bright blue, like the Fremen in Dune. When she found out I grew up in Colorado, she said it had been a childhood dream of hers to visit. I thought at first it was because of the mountains, but then she pulled out a picture she kept with her. It was a large reservoir in the foothills outside of Denver. She dreamed of one day being able to swim in so much fresh water.

Much of Western Civilization is culturally rooted in the mythology of the desert. Its impact is often forgotten but runs surprisingly deep.


Yup, minority here, and I wouldn't feel comfortable living anywhere except the US, Canada, the UK, and maybe ANZ. The Anglo-world in general is far more welcoming to people of color than anywhere else. I'm sure I could be comfortable wherever I am in the world, but it would be extremely difficult for me to thrive and see myself in a position of leadership (either in the private sector or the government) anywhere outside of those countries.

There's a famous quote (perhaps apocryphal) attributed to Reagan: "You can go to live in France, but you cannot become a Frenchman. You can go to live in Germany or Turkey or Japan, but you cannot become a German, a Turk, or a Japanese. But anyone, from any corner of the Earth, can come to live in America and become an American."

As a brown man, I feel this viscerally; I am not only deeply proud to be American, but I feel accepted as an American by my non-immigrant peers.

I think the nuanced way of putting it isn't so much that "America is not racist", it's that every country in the world deals with racism as a problem, but America is perhaps among the least racist.


Ok, echoing my top level comment... An alternative framing that I've come to find more helpful is to take your life expectancy and cut it by 2/3. For example, if you're 20 years old and your life expectancy is 80 (i.e. 60 more years), pretend that you only have 20 more, so you'll only live until you're 40. It's nice because it naturally adjusts as you get older: you'll have smaller windows to work with.

This approach strikes a nice balance. It gives you enough time to be able to really do something and change directions if you want. But not so much time that you can really waste any. It forces you to ask the hard questions about whether your day to day is truly connecting with your dreams, and whether you're on a path to get there.

Of course, Seneca didn't have life expectancy tables to work with. But I think he would have approved. :)
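The rule of thumb above is simple arithmetic; a small sketch (the function name is mine, and the default expectancy is just the example's number, not a table lookup):

```python
def compressed_deadline(age, life_expectancy=80):
    """Seneca-style deadline: pretend only a third of your
    remaining years are left (i.e., cut them by 2/3)."""
    remaining = life_expectancy - age
    return age + remaining / 3

# The 20-year-old from the example "lives until" 40,
# and the window naturally shrinks with age:
print(compressed_deadline(20))  # 40.0
print(compressed_deadline(50))  # 60.0
```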


I'll go that one step further and cast my phone into a shallow grave, having salted the earth it's buried in so that nothing may grow there again.


> FTA: "it was not feasible to determine personal exposures to UV radiation and temperature"

Sadly, no consumer devices currently on the market can track these two numbers. The Microsoft Band used to have both of the needed sensors, but AFAIK no one else has tried to widely release a consumer product with a UV exposure sensor on it.

FWIW that sensor was a major hassle, took up a lot of space, and finding a plastic cover for it that didn't also block UV was super hard.


> Lots of hand-waving elevator pitch, but I just never see anything explaining how the fuck it's actually useful.

No idea about DEVONagent, but the trial of DEVONthink I've done suggests it would make sense for someone with a few TBs of PDFs (e.g., me). The OCR plus search function makes it like a better Finder. I just can't justify paying $200 for that.


Compression is a component of general intelligence. A few years ago I was very sceptical of machine learning ever leading to general intelligence. I've since changed my mind. There are a lot of parallels to this work and the concept of "embeddings" in machine learning.

Intelligence requires the ability to generalize. A prerequisite for generalization is the ability to take something high-dimensional and reduce it to a lower-dimensional representation to allow comparison and grouping of concepts.

We're doing this all the time. Take a pen for example: we're able to combine information from sight, touch, and sound. Through some mechanism, our brains reduce the multi-sensory information and create a consistent representation that is able to invoke past memories and knowledge about pens.

Our brains encode the embeddings in a very different way to deep learning neural networks, but the commonality is that both are able to compress data into a _useful_ representation. Note that as a result of this, the quality of the compression is important. Some forms of compression might be very efficient but they also tangle concepts together, resulting in loss of composability. The ideal compression (from an intelligence point of view) is both information efficient and maximally composable.
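The pen example can be made concrete with a toy random projection: high-dimensional "sensory" vectors get compressed into a low-dimensional space where two views of the same object stay close and an unrelated concept does not. Everything here (dimensions, noise level, names) is illustrative, not any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding": a fixed random linear map from 1000-D sensory
# input down to a 32-D representation.
W = rng.normal(size=(32, 1000)) / np.sqrt(1000)

pen_sight = rng.normal(size=1000)
pen_touch = pen_sight + 0.1 * rng.normal(size=1000)  # noisy view of the same pen
cup = rng.normal(size=1000)                          # unrelated concept

def embed(x):
    return W @ x

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two pen views remain far more similar in the compressed
# space than the pen and the cup, so grouping is still possible.
print(cos(embed(pen_sight), embed(pen_touch)))
print(cos(embed(pen_sight), embed(cup)))
```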


This gets me too. In high school I called it "the power of a blank text file." Something about knowing that this was how every great software product began life was very empowering and intoxicating to me.

When you open vim and look at the blinking cursor, you stand on the precipice of greatness in the shoes of every programmer who has ever lived, all of you united across space and time by that peculiar feeling you get when you peer into the infinite possibilities of your empty text file.


It's important to note that although chemical space is quite large, most of this space is not easy to synthesize and is also not chemically feasible, stable or desirable. Another interesting "small" subset of chemical space is ZINC [0], a database of about a billion commercially offered compounds, meaning that manufacturers at a minimum think they can easily make them (and in practice fulfilment is quite high when random compounds are ordered, e.g. 95% in this paper where they did molecular docking simulations on the entirety of the database to find new melatonin receptor modulators [1]). Concerning exploration of chemical space, one area that might be of interest here is the quite effective, smooth(ish) movement through structure-property space using VAEs.[2]

[0] https://zinc.docking.org/

[1] "Virtual discovery of melatonin receptor ligands to modulate circadian rhythms", https://www.nature.com/articles/s41586-020-2027-0.pdf

[2] "Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules", https://arxiv.org/pdf/1610.02415.pdf


In case you're looking for a resource to learn more about Haskell, I would highly recommend http://dev.stephendiehl.com/hask/. I started recently to learn the language and tooling and found this guide randomly on Twitter, and it's by far the best codex of knowledge on the language I've seen so far. No bullshit, straight to the point, everything in one page (so easy to Ctrl-F around).

I really like Aggarwal's "Neural Networks and Deep Learning". I recommend it highly. (At least this summer Springer was giving away a PDF - not sure if that's still true.)

To me, Goodfellow et al. spent the first hundred and fifty pages on stuff which is important, but covered better elsewhere (e.g., probability theory, numerical methods) and didn't belong in their book at all. Simultaneously, I didn't get that much out of the "core" chapters on RNNs, CNNs, etc, relative to what I got out of other books. I think the book is somewhat overrated, frankly, but YMMV!


What is the equivalent book for Deep Learning?

It definitely does not solve all the issues listed in the article, but for me Zotero https://www.zotero.org has been absolutely brilliant as a "digital content organizer".

While it is "marketed" for scientists, it is a very general tool, that allows me to archive, tag, organize and search any webpage (with the option of taking a snapshot) or digital item.

I started using it for research articles, but it quickly expanded as a general bookmark organizer, then to books and even some podcast and movies archival. I use specific folders as reading/watching/listening queues.

Best of all it is FLOSS software, which is an absolute requirement for me to future-proof my use, and has an API that can be used to interact with external software. I use it for example as a document "backend" for my emacs org-mode journal (via the zotxt-emacs package).

There is an online sync service offered at a very reasonable cost (including a free tier). This is one of the very rare online services I'm paying money for. My understanding is that the sync server is open-source, but not production ready for self-hosting yet. The devs are supposedly working on it.

I strongly recommend you at least take a look.


$100K isn't what it used to be. In addition to inflation^1, the death of pensions means that salary and other liquid compensation is now a much larger percentage of total compensation.

For example, in our state, teachers and government employees receive a full pension after 20 years of service. That pension pays out 1/2 of some average of their previous few years' salary. Even if you retire making a relatively modest $60K/yr, that pension is still worth north of $50K/yr over those 20 years.

Salaries that don't include pensions aren't nearly as generous as they sound. $100K/yr without a pension is comparable to $40K/yr with a pension.

--

1. Including absolutely incredible housing price inflation in major cities like Seattle, which typical measures of inflation don't properly capture.
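The "$50K/yr" equivalence above can be roughly reproduced with undiscounted back-of-the-envelope math. The 35-year retirement length and the zero discount rate are my assumptions, not the comment's:

```python
def pension_value_per_working_year(final_salary, replacement=0.5,
                                   retirement_years=35, working_years=20):
    """Spread the total (undiscounted) pension payout over the
    working years that earned it. All parameters are illustrative."""
    annual_payout = final_salary * replacement      # e.g. 1/2 of final salary
    total_payout = annual_payout * retirement_years
    return total_payout / working_years

# $60K final salary, half-pay pension earned over 20 years of service:
print(pension_value_per_working_year(60_000))  # 52500.0
```

Under these assumptions the pension adds roughly $52.5K per working year, which is in the ballpark of the comment's "north of $50K/yr"; a real valuation would discount future payouts and account for cost-of-living adjustments.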


"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway" - A. Tanenbaum of MINIX (amongst other things) fame

“I do get a sense sometimes now among certain young people, and this is accelerated by social media, there is this sense sometimes of: ‘The way of me making change is to be as judgmental as possible about other people, and that’s enough.’”

“Like, if I tweet or hashtag about how you didn’t do something right or used the wrong verb, then I can sit back and feel pretty good about myself, cause, ‘Man, you see how woke I was, I called you out.’”

“That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far. That’s easy to do.”

- Barack Obama

https://www.nytimes.com/2019/10/31/us/politics/obama-woke-ca...

https://www.youtube.com/watch?v=qaHLd8de6nM


Most of the time, you don't need all that, since Python has zipapps. You define deps, you zip it, you ship it to any machine with the same OS and the same Python version. It embeds everything and just runs.

We even now have a nice tool to automate the bundling for you:

https://pypi.org/project/shiv/

Of course, you still have to figure out how to get Python installed on the final machine; that's the price you pay for an interpreted language.

We don't yet have a story for shipping a beautiful exe/dmg/deb/rpm that embeds the zipapp and libpython in an easy way.
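For the dependency-free case, the stdlib `zipapp` module alone can pack a directory containing a `__main__.py` into a runnable `.pyz`; a minimal self-contained sketch:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "app"
    src.mkdir()
    # __main__.py is the entry point a zipapp executes by default
    (src / "__main__.py").write_text("print('hello from a zipapp')")

    pyz = pathlib.Path(d) / "app.pyz"
    zipapp.create_archive(src, pyz)

    # run it like any script: python app.pyz
    result = subprocess.run([sys.executable, str(pyz)],
                            capture_output=True, text=True)
    print(result.stdout.strip())  # hello from a zipapp
```

What shiv adds on top of this is bundling your third-party dependencies into the archive (roughly `shiv -c your_console_script -o app.pyz your_package`, per its docs), so the target machine only needs a matching Python, not a matching environment.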


You know the ends of shoelaces, those little plastic bits? Roughly an $800,000,000/year industry.

The cardboard box industry in the USA is bigger than the NFL, NHL, NBA, and the other major sports leagues combined.


I love J, APL and the array languages in general. I am always looking for exercises to practice my J skills. I found this article by Tyler Limkemann called "Modeling the COVID-19 Outbreak with J"[1].

I also follow tangentstorm and took some of this J-talks gui code[2], and mashed it up with Tyler's as a J learning exercise. I just put this up on my github. [3]

I highly recommend tangentstorm's YouTube videos too![4]

The concise J code, and the way it makes me approach a problem brings me joy! Given all of my coding projects in J fit on one page, I can revisit them later, and get right back in without worrying about having too many comments or documents to understand my code. I like refining the code over time for fun and learning even after it works, or somebody much smarter than myself shows me a different way to look at it.

[1] https://datakinds.github.io/2020/03/15/modeling-the-coronavi...

[2] https://github.com/tangentstorm/j-talks

[3] https://github.com/rpherman/JLang/tree/master/COVID-19

[4] https://www.youtube.com/user/tangentstorm


Sure, but I'm sure some prosecutor took a look at it and decided it wasn't worthwhile. People get killed in accidents all the time, and frequently nobody gets charged with anything. A person being killed in a car accident while jaywalking at night because the driver wasn't paying attention particularly well is a story as old as cars, and usually doesn't result in jail time. Turns out, people are legally allowed to drive while being really shitty drivers, and you mostly can't prosecute people criminally just for being bad at driving. If anyone was going to face criminal charges it would be the driver, and I suppose the relevant prosecutor didn't like the odds.

I hope Julia will become more popular in bioinformatics. Personally, I have high hopes for BioJulia[1][2][3] and the amazing AI framework FluxML[4][5] + Turing.jl[6][7]. Apart from the speed, they offer some interesting concepts too - I recommend checking them out.

[1] https://biojulia.net/

[2] https://github.com/BioJulia

[3] https://github.com/BioJulia

[4] https://fluxml.ai/

[5] https://github.com/FluxML/

[6] https://turing.ml/dev/

[7] https://github.com/TuringLang


As a former bioinformatician (if that’s a word), I’m not sure there’s much value in this. There’s high dispersion in the performance requirements of bioinformatics tools. The processes that need to be fast (alignment, BLAST, tree creation, etc.) are already super fucking optimized (though, unfortunately, still slow). The things that don’t need to be fast can use whatever you want (I used Haskell and Racket for my own tools at the time). Python is... not the greatest. The major value add is the multitude of scientific libraries. If you’re gonna throw that all away, why not just use something better? Things like Julia, OCaml, Haskell, etc. I personally think Julia is pretty dope and is what I would use today for bioinformatics research. Or maybe, if I was feeling a little subversive, K/Q or J. Q’s time-series database kdb+ could probably be used for sequences, maybe even to great effect. And the performance would be off the charts!

It seems like the purpose of this is Python without the performance penalty, which doesn’t make much sense to me. I’ve found Haskell absolutely perfect for bioinformatics as most operations you are doing are functional data transformations. Moreover, it’s pretty damn fast if you need it to be.

I’ve been out of the field a long time though (Roche-454 was still the main workhorse at the time). But let me tell you, bioinformatics is/was a fucking shit-show. The tools and ecosystem are/were like Linux in the mid 90s: fucking terrible. And another language is just gonna make it worse.


Warren Buffett, the CEO of Berkshire Hathaway (a big player in the insurance and reinsurance business), discussed some of these scary tail risks in his 2015 letter to shareholders:

http://www.berkshirehathaway.com/letters/2015ltr.pdf

>There is, however, one clear, present and enduring danger to Berkshire against which Charlie and I are powerless. That threat to Berkshire is also the major threat our citizenry faces: a “successful” (as defined by the aggressor) cyber, biological, nuclear or chemical attack on the United States. That is a risk Berkshire shares with all of American business.

>The probability of such mass destruction in any given year is likely very small. It’s been more than 70 years since I delivered a Washington Post newspaper headlining the fact that the United States had dropped the first atomic bomb. Subsequently, we’ve had a few close calls but avoided catastrophic destruction. We can thank our government – and luck! – for this result.

>Nevertheless, what’s a small probability in a short period approaches certainty in the longer run. (If there is only one chance in thirty of an event occurring in a given year, the likelihood of it occurring at least once in a century is 96.6%.) The added bad news is that there will forever be people and organizations and perhaps even nations that would like to inflict maximum damage on our country. Their means of doing so have increased exponentially during my lifetime. “Innovation” has its dark side.

>There is no way for American corporations or their investors to shed this risk. If an event occurs in the U.S. that leads to mass devastation, the value of all equity investments will almost certainly be decimated.

>No one knows what “the day after” will look like. I think, however, that Einstein’s 1949 appraisal remains apt: “I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”
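The 96.6% figure in the quote is easy to verify: the chance of at least one occurrence of a 1-in-30-per-year event across 100 independent years is 1 − (29/30)^100.

```python
# probability of at least one "1 chance in thirty per year" event
# occurring over a century of independent years
p_century = 1 - (29 / 30) ** 100
print(f"{p_century:.1%}")  # 96.6%
```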

