I wonder if this is due to abliteration actually "damaging" the model, or just an artifact of the model never having been properly trained on "forbidden" topics (as it's enough for them to recognize them, and there's no point in dedicating neurons to something that will never be exercised anyway).
Modern abliteration is quite good at not damaging the model on ordinary topics. But yes, on many of the weirdest "forbidden" topics (excluding the mild stuff like ordinary erotica) there's not going to be any real training of any sort, and it's basically hallucinations running wild. You even see this claim repeated explicitly on every model release's "safety card": 'no, this model does not have the sort of fiddly tacit know-how it would need to actually advise anyone nefarious on this dangerous stuff'.
How are you suffering equal heat stress from being submerged in moderately warm water and breathing very hot air? I could imagine quite different effects on airways and skin, for example. "Exactly the same effect" seems like the unexpected outcome here.
> intense heat exposure is a lot more agonising than 30 minutes of exercise for less benefit
Having to do absolutely nothing other than not leaving is quite different from pushing through a physical activity that can also easily be causing all kinds of discomfort.
It's all about raising your core temperature. Water transfers heat to the body much more efficiently than air, so water at 104F ends up raising your body core temperature as much as a dry sauna at 170F does. I did some experimentation on this: I have access to a dry sauna at my gym, and I track my HR and exertion levels. I did the same with the hot tub at home, making sure the water temperature doesn't go below 104F and I'm fully submerged to the neck, with 30-minute sessions in both cases. The graphs look pretty much identical, with the same HR uptrends. So as far as cardio effects and heat shock proteins go, I do believe they are the same. I'm not sure if there could be any benefit to breathing dry hot air for the lungs, but so far most benefits from sauna come from raising core temperature.
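The "water transfers heat much more efficiently" claim can be sanity-checked with a back-of-envelope Newton's-law-of-cooling estimate. All the numbers below (skin area, skin temperature, convection coefficients) are rough textbook-order-of-magnitude assumptions, not measurements:

```python
# Rough comparison of convective heat flux into the body:
# 104F (40C) hot-tub water vs. 170F (77C) dry-sauna air.
# Newton's law of cooling: q = h * A * dT. Ballpark values only.

BODY_AREA_M2 = 1.8   # typical adult skin surface area
SKIN_TEMP_C = 35.0   # approximate skin surface temperature

# Convective heat-transfer coefficients (order-of-magnitude textbook values)
H_WATER = 300.0      # W/(m^2*K), still water
H_AIR = 8.0          # W/(m^2*K), still air

def heat_flux_watts(h: float, ambient_c: float) -> float:
    """Convective heat flow into the body, ignoring radiation and sweating."""
    return h * BODY_AREA_M2 * (ambient_c - SKIN_TEMP_C)

tub = heat_flux_watts(H_WATER, 40.0)   # 104F water, only 5 K above skin
sauna = heat_flux_watts(H_AIR, 77.0)   # 170F air, 42 K above skin

print(f"hot tub:   ~{tub:.0f} W into the body")
print(f"dry sauna: ~{sauna:.0f} W into the body (convection only)")
```

Even with very crude coefficients, water only a few degrees above skin temperature delivers heat on the same order as (or more than) much hotter air, which is the point being made; sauna radiation and evaporative cooling shift the exact numbers but not the conclusion.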
Too lazy to find it, but Dr Rhonda Patrick (a longtime advocate for saunas for their health benefits) reported that hot tubs can provide the same results as saunas -- and they are much more pleasant to use.
Not to beat my own dead horse but at the heat stress needed to cause an adaptation there’s nothing pleasant about the experience. If it’s not causing nausea and palpitations, it’s not hot enough.
> If it’s not causing nausea and palpitations, it’s not hot enough.
This is just so wrong. I use a 110C sauna pretty much daily, and I've done very hot onsens before, and I've never gotten nauseous. The closest I've come is feeling lightheaded, but that's only when I combine it with ice baths. If you're feeling nauseous, you probably have a poor diet or an electrolyte imbalance.
Let me guess: when it comes to exercise, you think you have to experience pain or almost pass out to get optimal adaptations? I guarantee that pushing your body to that level is highly counterproductive.
> How are you suffering equal heat stress from being submerged in moderately warm water
By the rules of this universe, you can't survive being submerged in 40C water for a prolonged period of time (even 37C would kill you), because humans produce heat, and if you can't dispose of it, you'll overheat and be dead soon enough.
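A minimal heat-balance sketch of that claim, using rough assumed values for resting metabolic rate and body heat capacity: in water at body temperature there is no gradient to shed heat across, so metabolic heat simply accumulates and raises core temperature.

```python
# Why prolonged submersion in body-temperature water is dangerous:
# with zero temperature gradient, resting metabolic heat has nowhere
# to go, so it accumulates in the body. Rough assumed values.

METABOLIC_HEAT_W = 100.0   # resting heat production, ~100 W
BODY_MASS_KG = 70.0        # typical adult
SPECIFIC_HEAT = 3500.0     # J/(kg*K), approximate for human tissue

def core_temp_rise_per_hour() -> float:
    """K/hour of core-temperature rise if no heat can be shed."""
    return METABOLIC_HEAT_W * 3600 / (BODY_MASS_KG * SPECIFIC_HEAT)

rise = core_temp_rise_per_hour()   # roughly 1.5 K per hour
hours_to_danger = 3.0 / rise       # ~3 K above 37C is hyperthermia territory
print(f"~{rise:.1f} K/hour of core temperature rise")
print(f"dangerous after roughly {hours_to_danger:.0f} hours")
```

At roughly 1.5 K of core-temperature rise per hour, a couple of hours fully submerged with no way to dump heat already puts you in hyperthermia territory, consistent with the comment's point.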
So while I definitely think it's possible that hot baths and sauna have similar effects, I don't think this can be shown by simple thermodynamics and would require medical studies. Some sibling comments have already mentioned some.
To be clear, my objection was only to the supposed explanation/assertion of resulting core temperature being all that matters, not to the possibility that that's true.
Have you tried submerging yourself in moderately hot water, I wonder? And have you spent some time pondering the difference in heat transfer between convection and conduction?
> Have you tried submerging yourself in moderately hot water, I wonder?
Yes.
> have you spent some time pondering the difference in heat transfer between convection and conduction?
Also yes, but not in this context. I don't think basic thermodynamics (alone) is the right lens through which to analyze the health benefits of either. Without empirical studies, I feel like there can easily be plausible-sounding thermodynamic arguments for completely opposite outcomes.
There's a world of difference between being able to carve out 30 actually uninterrupted minutes (and realistically more; most people don't have a sauna in their home, so they'd need to spend some time getting there and back) and being able to zone out and stare at a screen for 30 minutes in bed or on public transit.
Is this an actual stat? Or do you mean “have access to” instead of actually “at their home”, i.e. a private sauna they can use at any time, 24/7? Because from my lived experience I doubt the latter.
Essentially all residential buildings in Finland have saunas. Freestanding houses have private ones, apartments have communal ones but you can book a private time slot.
Not having an hour of uninterrupted leisure time per day, never mind per week (most Finns don’t go to sauna every day) still sounds pretty unfathomable, except maybe in some specific circumstances like being a fresh single parent or similar. In any case, in Finland people go to sauna together with even fairly young kids (like 3+ years old), with breaks as needed of course, and even most adults don’t usually spend thirty continuous minutes in an 80°C sauna.
Virtually everyone everywhere can find a free 30 minutes. And turn their devices off. Those who think they cannot would do well to get to a state where they can do this, at least 6, preferably 7, days a week.
Skipping screen time between waking up and getting up might well solve this problem for a significant fraction of the first-world population. My 2c.
I think these analogies are largely correct, but TFA is about something subtly different:
LLMs don't make it impossible to do anything yourself, but they make it economically impractical to do so. In other words, you'll have to largely provide both your own funding and your own motivation for your education, unless we can somehow restructure society quickly enough to substitute both.
With assembly, we arguably got lucky: It turns out that high-level programming languages still require all the rigorous thinking necessary to structure a programmer's mind in ways that transfer to many adjacent tasks.
It's of course possible that the same is true for using LLMs, but at least personally, something feels substantially different about them. They exercise my "people management" muscle much more than my "puzzle solving" one, and wherever we're going, we'll probably still need some puzzle solvers too.
Outdated compared to what? In your counterfactual, VC funded agents don't exist anymore, no?
Your argument, if I understand it correctly, is that they might somehow go away entirely when VC funding dries up, when more realistically they'll probably at most become twice as expensive or regress half a year in performance.
> Outdated compared to what? In your counterfactual, VC funded agents don't exist anymore, no?
Outdated compared to reality / humans: their knowledge cutoff falls a year further behind every year they don't get updates. Humans continuously expand their knowledge, and the models need to keep up with that.
> if Bob can do things with agents, he can do things.
This point is directly addressed in the paper: Bob will ultimately not be able to do the things Alice can, with or without agents, because he didn't build the necessary internal deep structure and understanding of the problem space.
And if Alice later on ends up being a better scientist (using agents!) than Bob will ever be, would you not say there was something lost to the world?
Learning needs a hill to climb, and somebody to actually climb it. Bob only learned how to press an elevator button.
I'm happy to herd idiots all my life if they come out of it smarter than they went in. The real tragedy with current LLM agents is that they're effectively stateless, and so all the effort of "educating" them feels wasted.
Once continuous learning is solved, I predict the problem addressed by TFA will become orders of magnitude bigger: What's the motivation for anyone to teach a person if an LLM can learn it much faster, will work for you forever, and won't take any sick days or consider changing careers?
At that point, I think it'll be time to admit to ourselves that capitalism is over.
The only reason we somewhat made it work is the interdependence between labor and capital. Once that's broken, the wheels will start falling off.
> for someone who doesn't yet have that intuition, the grunt work is the work
Very well said. I think people are about to realize how incredibly fortunate and exceptional it is to actually get paid (and in our industry, very well) through a significant fraction of one's career while still "just" doing grunt work that arguably benefits the person doing it at least as much as the employer.
A stable paid demand for "first-year grad student level work" or the equivalent for a given industry is probably not the only possible way to maintain a steady supply of experts (there's always the option of immense amounts of student debt or public funding, after all), but it sure seems like a load-bearing one in so many industries and professions.
At the very least, such work being directly paid has the immense advantage of making it much easier to spot artificially created (often without any bad intentions!) bullshit tasks that don't exercise actually relevant skillsets, or exercise the wrong ones.
That doesn't seem like a contradiction to the idea that "Windows subsystem" is (at least after WSL 1 and especially 2) a description for a functionality (i.e. running binaries targeting a different OS's interfaces), not an implementation.
No, as I explained, that's not what the actual subsystem architecture did. The binaries very much targeted Windows and did not target any other OSes. They weren't (say) ELF files targeting Linux, they were PE files targeting Windows, and you had to compile them from source with special flags to target those subsystems on Windows. You could not run those binaries on other OSes. The compatibility was at the source level, not at the binary level.
Basically the view I had twenty years ago vs the view I have now. After being a UI-extender explorer for some years, I became a system-as-delivered person. I'm now at a healthy (for me) mix. I have a bunch of icons in my menu bar and an app to keep that tidy.
I agree. My menu widgets aren't the primary cause when my computer feels slow. It's almost always a ton of browser tabs, because I collect stuff to investigate later and procrastinate closing them.
However, I also see the point of the commenter that a lot of people who have a bunch of shit in the menu bar might not be computer people who understand what they are or how they got there. In those cases, people exploring things they don't know how to remove might accumulate a lot of other crap that causes a slow system.
Empty advice like "you should want what I want, because here is how it works for me" benefits from pushback.
Another common one: responding to a commenter's device or OS problem by suggesting a platform switch, despite the massive number of unrelated tradeoffs such a decision would involve.
And of course, the pedantic "well, it always works for me" or "really, that should work" chime-in non-advice to just not have the problem in the first place. It is tautologically effective, but ...
The advice was to question what is truly needed. I may be a bit on the extreme end, as I never stop asking this question and seeing what life is like without various things.
This doesn’t seem like horrible advice to someone who is running into UI breaking problems. This also isn’t a new notch issue. I remember this being a common topic of discussion going back to the 12” MBP 20+ years ago. People with a lot of menubar icons would have them collide with the dropdown menus. I ran into this issue on some apps, even with a 17” display at the time.
I started to treat these limitations as a positive thing. One could call that Stockholm syndrome or worse, but I found having some of these limits changed how I think about problems. I no longer default to solving problems through addition, and instead first look if a problem can be solved through subtraction. This has been one of the most positive mental shifts in my life and has paid dividends in both my personal and professional life.
Of course, the obvious answer for solving the problem through addition is one of the apps that let you place the menubar overflow into an expandable area or dropdown (like HiddenBar); I think items can also be added to Control Center now. However, I figured someone with that many items up there would already know about those utilities and maybe doesn’t want them for some reason. Those utilities also mask the problem for those who haven’t taken the time or energy to look at their setup critically and push back on their own assumptions about what they really need.
One might say that type of user is less likely on HN than in the general public, but I have seen it at all skill levels and backgrounds. For the more technical user, they hear about something, it sounds cool, they install it thinking it might be useful someday. It never actually makes it into their workflow, but during their evaluation they remember that it sounded cool and keep it around to use “someday”. I used to be this person. I had all the popular menubar apps, geek tool displaying stuff on my desktop, PathFinder replaced Finder, I was all-in.
People can and will do what they want. I’m just pushing back on the idea of what they want, the same way you’re pushing back on what I think you mischaracterized as empty advice.