arjie's comments | Hacker News

What I do is have a quick command that spins up a worktree on a repo, lays out my Ghostty splits the way I like them, and names the tmux session after the worktree. I then tell Claude Code about the tmux session when it needs to look. It's pretty good at natively handling the tmux interactions.
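For anyone curious, here's a minimal sketch of what the worktree-plus-tmux half of such a command could look like (the naming scheme and wiring are just my own guesses, and the Ghostty-split part is left out):

    #!/usr/bin/env python3
    # Hypothetical sketch: create a git worktree and a tmux session named after it.
    import pathlib
    import subprocess
    import sys

    branch = sys.argv[1]  # e.g. "fix-login"
    repo = pathlib.Path.cwd()
    worktree = repo.parent / f"{repo.name}-{branch}"

    # New worktree on a new branch of the same name
    subprocess.run(["git", "worktree", "add", "-b", branch, str(worktree)],
                   check=True)

    # Detached tmux session named after the worktree, rooted inside it
    subprocess.run(["tmux", "new-session", "-d", "-s", branch, "-c", str(worktree)],
                   check=True)
    print(f"tell Claude Code the session is named {branch!r}")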

Ideally Ghostty would offer primitives to launch splits but c’est la vie. Apple automation it is.


I actually really enjoy getting this sequence of emails but I use Gmail’s auto categorization so it just goes in the “Updates” folder and gets auto-forwarded to my claw-like so it’s not super interrupty. I prefer to have the full trace on my side rather than on the provider side because their site might go down and so on.

I can see why people get annoyed. It’s just the alternative that I really dislike.

This way I can do all analysis on my own side or search for status on my side. I prefer to own the data and have it pushed in a timely manner.


If I'm being honest, I've never related to that notion of remuneration and credit being the primary reason to write something. I don't claim to be some great writer or anything, but I do have a blog I write on quite often (though I'm traveling in my wife's native Taiwan now and haven't updated it in a while). For me, I write because it feels good to do so. Sometimes there's group utility in it, too: I'll edit a Google Maps listing to be correct even though "a faceless corporation is going to hoover up my work and profit off it without paying me for my work," and I might pick up a Lime bike someone's dropped on the sidewalk even though "a faceless corporation is externalizing the work of organizing the proper storage of their property on public land without paying the workers," and so on.

I just think it's nice to contribute to the human commons, and it's fine if some subset of my fellow organisms uses it in whatever way. Realistically, the fact that Brewster Kahle is paid whatever few hundred thousand he's paid for managing a non-profit that only exists because it aggregates other people's work isn't a problem for me. Or that Larry Page and Sergey Brin became ultra-rich by providing a search interface into other people's work. Or that Sam Altman and Dario Amodei did the same through a different interface.

This particular notion doesn't seem to be a post-AI trend, either. It seems to have started before the big GPTs came out, when people began doing a lot of this accounting-for-contribution stuff. One day it'll be interesting to read why it started happening, because I don't recall it from the past. Perhaps I just wasn't super plugged in to the communities that were complaining about Red Hat, Inc.

It's not that I don't understand the feeling: if I sold my Subaru to a guy who immediately managed to sell it to another guy for a million times the money, I'd feel cheated. I get that. But if I contributed only a little, like I did so that Google would have a site to list for certain keywords and could show ads next to it in their search results, I just find it so hard to be like "That's my money you're using. Pay me!"


You do it as a hobby, that's fine. Some people do it for a living. And while they aren't owed a living doing that specific thing, it is going to be a big problem for them if they can't make money at it anymore.

I'm sure plenty of people feel the same way about software. They make software as a hobby and don't care about remuneration or credit. Meanwhile I write software for my day job and losing the ability to make money from it would be devastating.


Ah, I see. It’s just straightforward protectionism like dockworkers opposing automation and so on. That I do comprehend, in fact.

I write software too and I may no longer be able to just do it in the old way. Pretty scary world but also exciting. I can’t imagine trying to restrict LLM software writers on that basis but I can comprehend it as simply self-interest.

Fair enough.


It’s about the amount of time available.

Do you make money writing software? I bet you either try to restrict LLM usage or assign your rights to an employer who does. Putting code in the public domain is pretty rare, and extremely rare for paid work.

I allow them to train on my work, as described here for example: https://code.claude.com/docs/en/data-usage

And I do paste code into CC. I’m not super concerned that they’ll see it.

That’s fine by me. It doesn’t require putting code in the public domain which is something else entirely.

I make money off hosted software so in some sense there is writing involved at one end. But I’m not paid by output tokens.


If your code isn't in the public domain, then anything you haven't explicitly allowed them to train on is restricted for them. They've been ignoring that for anything they can actually get their hands on, but it's there.

> Some people do it for a living.

I was going to write, "not for long," which might be true for some. But then I realized there will always be a difference between LLM output and human writing. We don't read blogs because of their facts, we read them because of how the facts are presented and how the author's personality comes through on the page.

EDIT: That said, LLMs are great at faking it, and a lot of amateur writing will be difficult to distinguish from LLM output. So I'm disagreeing with myself a bit.

But we are talking about "slurping up" IP and regurgitating it, right? OK. So if I slurp up Mickey Mouse and output Mickey Mouse, that's an offense. But what if I slurp up a billion images and output some chimera? That's what the LLMs do. And that's what humans do too.


The data is open, so we don't have to do the visual reasoning off an imperfect graph. The SF Chronicle has done the pretty rare (but, I think, good) journalistic practice of specifying the source of the data: https://data.sfgov.org/Public-Safety/Police-Department-Incid...

First, to match the graph, make sure you pick 'Larceny - From Vehicle' only (there are some other categories one might argue matter) and that you count each incident only once (many rows reference the same incident); a sketch of that step is below. That lets us recreate the original graph.
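In pandas terms that step is something like this (the column names are my guess at the portal's export schema, so check them against the actual download before trusting them):

    import pandas as pd

    # Assumed CSV export and column names from the DataSF portal
    df = pd.read_csv("sfpd_incident_reports.csv")
    df = df[df["Incident Subcategory"] == "Larceny - From Vehicle"]
    df = df.drop_duplicates(subset=["Incident ID"])  # many rows per incident

    # Monthly counts, which is what the original bar graph shows
    monthly = (
        pd.to_datetime(df["Incident Date"])
        .dt.to_period("M")
        .value_counts()
        .sort_index()
    )
    print(monthly)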

When looking at data like this I like to check for seasonal effects just to see, and they don't look significant here (but you can see the Mar 2020 drop against the next year quite easily, which I like): https://wiki.roshangeorge.dev/w/images/2/2e/SFPD_Vehicle_Bre...

I also tried overlaying various line charts but that's useless for visually identifying the break.

One thing I thought would be fun is to run a changepoint algorithm blindly https://wiki.roshangeorge.dev/w/File:SFPD_Vehicle_Break-Ins_...

I like PELT because it appeals to my sensibilities (you don't say ahead of time how many changepoints you want to find; you set an energy/cost parameter and let it roll), and it finds that one changepoint. You can have some fun with the other algorithms, changing the number of changepoints, or changing the PELT cost function. And then you can have even more fun by excluding 2020, excluding Mar 2020 onwards, or replacing it with estimates from the previous years (quite suspect considering what we're trying to do, but hey, we're having fun; a bunch of algorithms all flag Nov 2023 as some moment of truth).
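If you want to play along, the ruptures library makes the PELT run a few lines (this continues the `monthly` series from the earlier sketch; the penalty value below is arbitrary, so tune it and watch changepoints appear and disappear):

    import ruptures as rpt

    # Monthly incident counts as a plain float array
    signal = monthly.to_numpy().astype(float)

    # PELT: no preset number of changepoints, just a penalty for adding one
    algo = rpt.Pelt(model="rbf").fit(signal)
    breakpoints = algo.predict(pen=5)  # end index of each segment

    # Map segment boundaries back to months
    print([monthly.index[i - 1] for i in breakpoints])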

Anyway, anyone curious should download the data. It's pretty straightforward to use and if I goofed up with off-by-one or whatever, you can go see for yourself.


Your analysis also supports a covid trough, not a covid peak, and certainly no covid effect. I agree with other commenters suggesting that Flock cameras are not the full story or even most of it, but I absolutely disagree with the GP that car break-ins are some identifiable covid phenomenon or that the decrease is merely a post-covid return to normalcy.

Hopefully there was nothing wrong with posting a news article with a graph instead of doing the data analysis myself.


I was avoiding getting into the specifics because, rather than tea-leaf-reading a picture, one can simply look at the numbers themselves, and they cannot support anything but the conclusion that the one-year period immediately following the lockdowns was much lower than the surrounding years.

And I think it was great you shared the news article! For many people, analyses one does oneself are less believable. I prefer doing it myself to convince myself, but I wouldn't expect it to convince others. Here I did it because I wanted to know what the fact is, and I always have trouble picking changepoints off a bar graph without all the ticks marked.

I put it at this level because it feels supplemental to your link, not because it's a debunking of your comment or whatever, though perhaps https://news.ycombinator.com/item?id=47690707 is the best place to do it.


It's just cultural. If there's a cultural expectation of the ring/honk, it's not rude. In India, for example, people honk as a form of active flocking behaviour, though foreigners will interpret it as everyone saying "get out of my way"; in some European countries I have seen people use the bell (much less noisy than the typical Indian street) with the same meaning. In Hawaii, if you ever honk at someone, you're going to have a fight on your hands. In San Francisco, if you honk at someone on Bush Street it means you're trying to help the traffic light change (it's a team effort), but anywhere else you get anything from a gun drawn, to a brake check, to a wave in apology for missing the light by being on the phone.

Overall, cultural expectations are everything here so it's best to just "when in Rome, do as Romans do".


Can you explain to me what it means to try to get the traffic light to change on Bush Street? I tried searching for it but couldn't find anything.

It was a not-particularly-amusing joke that people honk because doing so helps the light change. It doesn’t, of course, but I used to work at a building at the intersection of Bush and Sansome (I think), the Standard Oil Building, and every day at 5 PM the honking would put Bombay to shame.

The only real solution is to put Anubis in front. For me, just using Cloudflare in front suffices, but that only cuts it down to a few thousand requests per hour by default. My homeserver can handle that quite well on its own.

I actually like our current technology. Pretty useful and nice to have. Lots of good features that I find routinely useful.

I used retatrutide for weight loss and went from 199.3 lbs to just under 175 lbs. I kept daily notes through the process. Here's a quick AI one-paragraph summary if you're curious: https://pastebin.com/XACNYKvs

Overall I'm quite pleased with the effects and many of the properties of this treatment that people dislike are actually properties I was looking for. Essentially, for pharmacological interventions I want impermanent effects with a clear dose-response relationship and ideally minimal or no adaptation.

So the fact that people gain weight when they go off it and lose weight again when they go back on was good: it's fairly easily undoable. The fact that the more you take, the more you lose was also good to know, though for the majority of the time I took less than any tested dose (and the effects were quite strong even at those levels).

I did experience quite a bit of adaptation, so I needed to up the dose until I was in the tested range by the end. I've been off it for a month now and have been pretty much flat, but we've been traveling since I stopped, so a lot has changed (no more lifting, lots more eating, lots more walking).

Rough cost for the retatrutide is $1.25/mg.


When people talk about peptides, they typically mean either (1) GLP compounds like Reta that have successful phase 3 trial data and will be approved, or (2) stuff like BPC-157, which has no evidence or plausible mechanism to work.

Was cost the reason you went for it over tirzepatide? I feel like retatrutide is still way too early to mess with; it's giving me real Vioxx vibes of messing with too much at once.

No, I had access to free tirzepatide. I chose retatrutide because early results seemed promising and safe and since I was going to run a short-term self-trial I wanted the most effective peptide.

Hey, fair enough! To each their own.

> Rough cost for the retatrutide is $1.25/mg.

Even with free healthcare that seems like a foolish place to save money when very widely used alternatives exist in the regulated market.


Ozempic in the US just had its price reduced to $499 for a pen with a few 2mg doses...

>when very widely used alternatives exist in the regulated market.

Which ones? Semaglutide costs orders of magnitude more.


op:

> had access to free tirzepatide


Please be careful here, as retatrutide is not actually commercially available in most of the world outside of research labs, and is not FDA-approved to buy or use in the US (where most HN readers are).

I fear this kind of post will encourage gullible people to go chasing reta on the grey market, where they might as well be getting a placebo, as there's no mechanism for your typical person to verify that they're getting what they think they're getting.

And given that it's an injectable, bacterial growth or a variety of other toxins that can remain in poorly manufactured pharmaceuticals can do a great deal of harm.


> there's no mechanism for your typical person to verify that they're getting what they think they're getting.

Think what would happen if there was one.

In the spirit of selling shovels in a gold rush, the first company to commercialize a tricorder is going to make bank off all the fitness fads and off-label/DIY medicine markets.


Absolute nostalgia fever. About a month ago, I dug up an old desktop in the corner, took the drives out, and gave away the machine. It felt like putting a racehorse out to pasture: i7-4790k, 1080 Ti. It was my dream machine when I got it. It dual-booted (as we did back in the old days, before Proton) into Ubuntu, then elementary OS, then Arch. By the time I gave it away it wasn't worth the power cost.

And that brought to mind my older dream machine, an 8800 GT from generations past, before which we made do with a VIA UniChrome that worked well enough on the OpenChrome driver that I could edit open-source games (FreeSpace only needed a few constants changed) to get them to render (some of the image was smeared and so on, but I could play!).


I'm still rocking a Z97, i7-4790k and a 980Ti :) I'm still waiting until I need an upgrade. DDR3 is still performing well enough for the games I run.

Same. Still play StarCraft2 on a 4790k and AMD R9 Fury X.

I was running a 970ti for the longest time; it was only when I wanted to get into some VR gaming that it was time for an upgrade.

I used my 1080 Ti for about eight years. The successor GPU is in some ways far faster (raytracing, AI features, etc.), but in others really quite stagnant considering the huge stretch of time that passed between them. Roughly ten years for 2-3x performance, at higher nominal and real price points, shows how slow silicon advances have been compared to the 90s and 2000s; the same period from 2000 to 2010 would've seen 1000x performance, if not more.

The difference between a 1080 Ti and a more expensive RTX 50 card is that the RTX can render ideally triple the frames in synthetic benchmarks, double the frames in some rasterizing games (most games won't see gains that high), and do a few relatively tame raytracing tricks at performance that is still not really good. At the same throughput it consumes maybe half the power or a bit less. The difference between a GeForce 2 and, e.g., a Radeon HD 4000-series card is several planes of existence.

My 1080ti is still working away in my kid's PC. If you connect a 1080p monitor, it will still hit 60fps in almost everything.

The only thing that holds this card back now is a handful of titles that will not run unless ray-tracing support is present on the card; Indiana Jones and the Great Circle springs to mind.

I am very likely going to get a decade of use out of it across three different builds, one of the best technology investments I've ever made.


It really is an impressive bit of hardware. I finally pulled it out of my last system a year ago, but it was definitely holding its own up until that point.

Well, the 5090 is significantly faster than a 1080 Ti: 92B transistors vs 12B. That's the ten-year difference you mention. Ten years before the 1080 Ti we had the 8800 Ultra with 600M transistors. So yeah, you're a bit right. But stacked transistors might become reality in the future and enable transistor counts to climb again.

A 5090 is more than twice as expensive as a 1080 Ti in real MSRP terms, and way more than that in actual street prices, since the 1080 Ti was available for some time below MSRP while the 5090 realistically never was and usually goes for 50-100% above MSRP. So I don't think these can be compared. It's basically a similar story with the 5080: significantly more expensive in real terms (and about 2x in nominal terms).

The 5070 Ti would be in the same spot.

If you compare these, the RTX 50 card has a somewhat higher TDP (which it will usually not reach due to clock limits), a roughly 100mm² smaller die with around 4x the transistors, and about 3x the compute (since much more of the chip is disabled compared to the 1080 Ti's chip). It has 5 GB more memory (11 GB -> 16 GB) and a lot more bandwidth.


It's interesting, the consumer high you get from buying things. I remember being in a Microsoft Store like 12 years ago and wanting this Surface laptop, holding it in my hands, but I couldn't afford it. Now I have a Surface Book 3 and it's still cool, but it's not the same experience as when it was the flagship and new.

Still, there are a lot of laptops I'd like to try when they get cheaper. As far as GPUs go, I like the Nvidia Founders Edition designs; it was a while before I got a 3080 Ti FE, which I ended up having to sell at a loss when I didn't have a job. That was sad. I have a 4070 Founders Edition now, which does struggle on certain games at 1440p, but I'm going to use it to run local LLMs.


I also have that exact setup sitting around, but I'm just using my Ryzen laptop now.

My current machine is an i5-3570k with a 1070Ti...

The old CPU is actually more of an issue. I couldn't run Civ 7 because the game (probably the DRM) uses some instructions that aren't implemented on that CPU. Other than that I bet it would run just fine.

I was just about to upgrade before hardware prices went through the roof. Now I'm just holding on until some semblance of sanity returns, hoping every day that the bubble pops and loads of gently loved hardware starts appearing on the secondary market. Also, the way Nvidia has been skimping on memory for all but the most outrageously expensive chips has grated on me. I was really hoping they would buck the trend with the 5xxx generation, but nope, and with RAM prices the way they are I have little hope for the 6xxx generation. My current card is close to a decade old and has 8GB of VRAM. I'm not upgrading to a card with 8GB of VRAM, or even 12GB. That 8GB was crucial in future-proofing the original card; none of its 4GB contemporaries are of much use today.


My TrueNAS SCALE server is still happily running on an i7-3770.

Hey, I could have used that i7-4790k!

I've been running the worst gaming setup I can get away with, which at the moment is a 3080 10GB, using random DDR3 RAM, a budget WD 512GB SSD, and an i5 on the same socket as the i7-4790k that doesn't even support hyperthreading and can't run more than 4 tasks in parallel.

It's absolutely laughable at this point, but I'm unironically looking for a deal on that CPU lmao, it would be a huge upgrade.


Woah, this is absolutely sick! 10-years-ago me would have been surprised that something so small can encode all the world knowledge necessary to make this plausible. That they'd make this openly available is a dream.

I don't think it can continue the no-interaction line too far. This may be enough for movie-making purposes though.
