> That would be so out of place in midwest USA, where cynicism is rampant
Well yeah, the Midwest USA is full of drastically under-employed individuals holding advanced degrees who still can't find any decent work, and yet still have to pay $2,000/month rental costs, while also paying back $100k to $200k+ student loans for all that "more learning" they did.
I think much of HN still thinks of "college grads" as entering a market similar to the one they had back in 2002 - 2010. But it's 2025, and the market is far, far less forgiving on the low end than it used to be.
> Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!
AMD kind of has: the "Ryzen AI Max+ 395" is (within a 5% margin or so) pretty close to the M4 Pro, on both performance and energy use. (It's in the 'Framework Desktop', for example, but not in their laptop lineup yet.)
AMD/Intel hasn't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra, without exploding the energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.
M4 Pro was a massive step back in perf/watt over M3 Pro. To my knowledge, there aren't any M4 die shots around which has led to speculation that yields on M4 Max were predicted to be really bad, so they made the M4 Pro into a binned M4 Max, but that comes with tradeoffs like much worse leakage current.
That said Hardware Canucks did a review of the 395 in a mobile form factor (Asus ROG Flow F13) with TDP at 70w (lower than the max 120w TDP you see in desktop reviews). This lower-than-max TDP also gets you closer to the perf/watt sweet spot.
The M4 Pro scores slightly higher in Cinebench R24 despite being 10P+4E vs a full 16P cores on the 395, all while using something like 30% less power. The M4 Pro scores nearly 35% higher in the single-core R24 benchmark too. 395 GPU performance is comparable to the M4 Pro in productivity software. More specifically, they trade blows based on which is more optimized in a particular app, but AMD GPUs have way more optimizations in general, and gaming should be much better with x86 + an AMD GPU vs Rosetta 2 + GPU translation layers + Wine/Crossover.
The M4 Pro gets around 50% better battery life for tasks like web browsing when accounting for battery size differences, and more than double the battery life per watt-hour when doing something simple like playing a video. Battery life under full load is a bit better for the 395, but doing the math, this definitely involves the 395 throttling significantly down from its 70w TDP.
I've got an AMD Ryzen 9 365 processor on my new laptop and I really like it. Huge autonomy and good performance when needed, it's comparable to the M3 version (not the Max).
I was recently trying to buy a laptop and was looking at that chip, but like you said, it's not available in anything except the Framework Desktop and a weird tablet that's 2.5x as expensive as a MacBook. It's competitive on paper, but still completely infeasible at the moment.
Also, you don't realize until you try them out that other issues make running models on the AMD chip ridiculously slow compared to running the same models on an M4. Some of that's software. But a lot is how the chip/memory/neural etc are organized.
Right now, AMD is not even in the ballpark.
In fact, the real kick in the 'nads was my fully kitted M4 laptop outperforming the AMD. I just gave up.
I'll keep checking in with AMD and Intel every generation though. It's gotta change at some point.
you can find that processor in the 14" HP Zbook Ultra G1A (which is also Ubuntu certified). There is also the Asus Z13, though I'm not certain it's working well with Linux
This is not even a remotely accurate characterization of the relative performance of the Ryzen AI Max+ 395 and the Apple M4. I have both an expensive implementation of the former and the $499 version of the latter, and my M4 Mac mini beats the Ryzen by 80% or more in many single-threaded workloads, like browser benchmarks.
This is also why I went into the Philosophy major - knowing how to learn and how to understand is incredibly valuable.
Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".
Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".
It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."
How people see it is based on the probability of any given philosophy major producing good working software, not on whether you in particular can produce good working software.
Maybe because philosophy focuses on weird questions (to be or not to be) and weird personas. If it were advertised as a more grounded thing, the views would be different.
The way you are perceived by others depends on your behaviour. If you want to be perceived differently, adjust your behaviour; don't demand that others change. They won't.
Both Roo and Continue support local models (via LM Studio). For Continue, you add a fake account (type in literally anything) and then click 'edit' -- it will take you to the settings JSON, and you can type in LM Studio as your source.
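For reference, the settings JSON entry ends up looking roughly like this (a sketch based on Continue's config format; the model name is illustrative, and the `apiBase` assumes LM Studio's local server on its default port 1234):

```json
{
  "models": [
    {
      "title": "LM Studio (local)",
      "provider": "lmstudio",
      "model": "qwen2.5-coder-7b-instruct",
      "apiBase": "http://localhost:1234/v1"
    }
  ]
}
```

You'll need the model already downloaded and the LM Studio server running before Continue can talk to it.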
The main problem I'm seeing is that a lot of the tooling doesn't work as well "agentically" with local models. (Most of these tools say something like 'works best with Claude, tested with Claude, good luck with any local models'.) The local models via LM Studio already work really well for pure chat, but trip up semi-regularly on basic things, like writing files or running commands -- stuff that, say, GitHub Copilot has mostly already polished.
But those are basically just bugs in tooling that will likely get fixed. The local-only setup is behind the current commercial market -- but not much behind.
I strongly agree with the commenter above, if the commercial models and tooling slow down at any point, the free/open models and tooling will absolutely catch up -- I'd guess within 9 months or so.
> people don't just wake up and decide one day to be irrationally evil with no reason, if you believe that then you are a fool
The problem with this is that people sometimes really do, objectively, wake up and decide to be irrationally evil. It's not every day, and it's not every single person -- but it does happen routinely.
If you haven’t experienced this wrath yourself, I envy you. But for millions of people, this is their actual, 100% honest truthful lived reality. You can’t rationalize people out of their hate, because most people have no rational basis for their hate.
(see pretty much all racism, sexism, transphobia, etc)
Do they see it as evil though? They wake up, decide to do what they perceive as good but things are so twisted that their version of good doesn't agree with mine or yours. Some people are evil, see themselves as bad, and continue down that path, absolutely. But that level of malevolence is rare. Far more common is for people to believe that what they're doing is in service of the greater good of their community.
Humans are not rational animals, they are rationalizing animals.
So in this regard, they probably do, deep down, see it as evil, but will try to reason a way (often hypocritically) to make it appear good. The most common ways this drives bigotry are 1) dehumanizing the subject of hate ("Group X is evil, so they had it coming!") or 2) reinforcing a superiority over the subject of hate ("I worked hard and deserve this. Group X did not but wants the same thing").
Your answer depends on how effective you think propaganda and authority are at shaping the mind to contradict itself. The Stanford prison experiment seems to reinforce the notion that a "good" person can justify any evil to themself with surprisingly little nudging.
They just shut down a lot of the gaming studios they used to own (Tango, Arkane), and cancelled most of their major upcoming projects (Everwild, Perfect Dark, the new MMO, etc.).
And -- while there are rare examples otherwise (like id Software) -- many of their studios haven't made a well-received game in over a decade.
It's not necessarily a great thing for the industry, but Microsoft leaving gaming entirely would not be that terrible a loss. If anything, it would probably have the least impact now of any time since 2001.
Microsoft bought the Call of Duty studios for a lot of money and likes to say they are succeeding at video games by pointing to Call of Duty revenue in a vacuum, without counting the cost to acquire Activision.
They certainly aren't going to stop releasing Call of Duty games.
> In fact, I don't think I ever understood how Neo could control the machines in the real world.
In fairness to the Wachowskis, they do literally explain this in the movie, in literal dialog, in the third Matrix film.
---
Neo: "Tell me how I stopped four Sentinels by thinking it"
Oracle: "The power of the One extends beyond this world. It reaches from here (i.e., the digital Matrix) all the way back to where it came from (i.e., the real world)."
Neo: "Where?"
Oracle: "The Source. That's what you felt when you touched those Sentinels"
The sentinels are networked (in the real world) and Neo has god-like access (superuser). Superuser works inside the matrix, but it also works on anything connected to the Matrix or networked to the matrix (like the Sentinels are, like most of the machines are).
---
Most people just tune out the dialogue about philosophy in these films, and then complain that nothing was explained. (when like, most of it was explained, folks just got bored and stopped listening)
That's fine in Matrix 1, because Matrix 1 works as a film even if you ignore the philosophy dialogue. Matrix 2, 3, and 4 are pretty good too, but they only work if you are also paying attention to all the philosophy dialogue.
> I wish academia were more rigorous about not taking sides. I feel like they’ve been injecting their own ideology unnecessarily for a while.
I don't think this is true. I think academia is still pretty good (not great, but pretty good) at not taking sides. I don't see a ton of overly-ideological academics (if anything, Academia skews unnaturally conservative, simply out of fear).
I think the 'problem' is that science is moving forward and settling debates that previously weren't settled, and therefore lived in the realm of "political opinion". But because people don't like the mostly-objective scientific results, and because it's being debated politically, they're simply accusing academia of 'taking sides', because that's how they picked their political views.
Examples exist with things like Global Warming, or Human Rights, or LGBTQ+, or Vaccines, or so on.
Global Warming is an obvious one -- while there's always more to learn, and we can't perfectly predict everything -- the base science has been 100% settled for decades now -- the earth is warming, human pollution is the primary cause, we know what we're doing to cause it, and our refusal to stop causing it is changing environments. We can argue about how much, or how long, or how badly, or the accuracy of models and projections and so on -- but there is no factual argument that Global Warming isn't happening, or that humanity isn't directly contributing to it.
But people simply don't like that answer, despite it being true, so they debate it anyway -- and then flip around and accuse academia of 'picking a side' when Academia didn't pick or choose anything.
If any academic could, in good faith, prove "yep, air pollution isn't causing global warming, in fact, Earth's orbit is falling into the Sun at a rapid rate, we can prove this with astronomical/radiological measurements" or whatever, they'd be rushing to demonstrate this. They aren't, because they can't -- it's simply not true.
I'm not saying it never happens, or that no one abuses academia to push political views (it does sometimes happen). But most academics aren't "picking" their stance on these issues, science and research is uncovering facts or evidence on a continuous basis, and academics usually just stick to what's been discovered or observed (and -- for the parts still unknown or missing -- make theories or educated guesses that are supported by science and do not contradict the objective discoveries. If new results or evidence arrives, they revoke or adjust their theory to fit with the new discovery).
Some humans simply refuse to accept the things that have been discovered or learned through science. It requires a person to be willing to change their mind based on something they have learned, and a lot of people simply hate doing that ever (so much so, that their answer to hating objective results of science, is to simply defund all the science)
There is a reason why meta-studies that look at findings from different countries often have to account for cultural bias. Different objective truths can be found depending on how one looks, including findings that give contradicting conclusions.
The existence of Global Warming, or Human Rights, or LGBTQ+, or Vaccines is not the topic that academia (or people in general) debates. Outside of some very fringe places, there is practically no discussion of it. The thing people debate is the strategies. Different ideologies will have different answers as to which strategies work, are effective, and what costs they are allowed to have (and to whom). A common topic that the green movement discusses, and which often comes into global warming research, is fair wages for minorities and the impact of global warming on that demographic. People and politicians will discuss that, and not the factual argument that Global Warming exists.
There is a similar argument over human rights as defined in Europe vs human rights as defined in the US. They both acknowledge human rights, but with widely different interpretations. At a conceptual level, the idea of protected classes as a classification has very little to do with human rights as a theory, but everything to do with different strategies that involve human rights. Removing demographics that are currently included, or adding new ones, are hot topics of debate.
What people debate most heatedly and get most emotional about is not when people disagree on the facts, but rather when both sides of an argument agree on the facts but disagree on the best strategy going forward. That is when science becomes politics.
> people whose income or status in society depends on their production of ideas, without needing to test if the ideas actually work.
In general I'd agree -- but academic researchers (and the projects previously funded by the National Science Foundation) are not this. By definition, they are testing "if the ideas actually work", that testing is what the funding is paying for.
These folks aren't really "elite" -- not in perception, nor in class or wages. (they usually make less than the average programmer or CS graduate).
The definition of elite people seem to be using doesn't depend on wages. It's not a synonym for well paid.
There's no requirement to test if your ideas work if you're an academic. The funding pays for papers to get published, and that's all that's checked. To publish a paper you might need to at least pretend to test your ideas, maybe, but only in some of the better fields and journals. Nobody will check if you actually did test them though, so just making data up is a perfectly viable strategy. Worst case, you might get caught ten years from now if some random independent chooses to investigate and manages to go viral on social media. In very extreme cases you might get fired, but that won't stop you immediately getting another job doing exactly the same thing at another university (c.f. Brian Wansink).
But those are in the best, most scientific fields. In others you can just write papers all day that say all kinds of things, never test or even predict anything, and still have a successful career. That's how you get papers like the famous "feminist glaciology" paper [1], or thousands of COVID papers that present model outputs but never validate if the results matched reality, or thousands of string theory papers that don't make any predictions.
None of these problems will stop academics from being cited by journalists in high profile news outlets as if their ideas are already validated, nor being consulted by powerful politicians who then transcribe their policy demands into law, nor having their claims be automatically taken as gospel on forums like HN. If their ideas do become law and then cause a disaster, nothing will happen to them and they won't even suffer loss of reputation. For example, Einstein was a big supporter of Soviet-style planned economies, as were many academics of his era. That was a disastrous idea, and his supporting it - as late as 1949 no less - should have harmed his reputation for being a genius. But nobody remembers this today.
In these respects, they are the very epitome of what it means to be elite.
> you can just write papers all day that say all kinds of things, never test or even predict anything, and still have a successful career. That's how you get (snip) thousands of COVID papers that present model outputs but never validate if the results matched reality...
Disliking some papers is not really a fair or relevant metric by which to judge academia.
This would be like saying, "programmers are elite people, because they never have to test their work. Just look at the thousands of broken, shoddy-programmed repositories on GitHub full of junk code that's been abandoned! Half of it doesn't build, and isn't even testable. Why are we paying for hackathons just to get this garbage. Why do we hire interns and new CS grads, if they won't be seasoned veterans on day one".
A paper can be seen as kind of like the "pull request" of the academic world -- not every PR gets merged in, and a PR isn't always a waste just because it didn't get merged. Some number of bad papers does not mean "science" is "elite".
---
> That's how you get papers like the famous "feminist glaciology" paper [1]
That's probably a bad example, because the feminist glaciology paper isn't even bad or wrong. (Did you actually read it? Or did you only read the "feminist" and "glacier" and then get outraged?).
A quick scan of the actual paper, and Carey's argument seems to be that geologists and historians have been neglecting or ignoring how glaciers impact women (both the women in science doing these studies, and the women whose lives have been impacted by glaciers, or melt flow, flooding, disaster recovery, etc).
That's...like objectively true? And very well sourced. This isn't some crackpot theory, this is 100% a "validated idea" whose "results match reality".
Carey is an Environmental Historian and Professor of History. His papers are research and recordings of history; they won't be "testable" (under your rubric) because he's reporting on things that have already happened. And this title represents one (1) single paper that focuses on the impact to women, out of the 25(+?) he's published on Glaciers, Climate Change, and Societal impacts over the past two decades.
If you want to remove Sociology and History from Science, so be it. (That would be Wrong, but I won't argue the point.) But it should probably be expected that if you fund an Environmental Historian, you're going to get a lot of research on Geological History and its societal impacts -- that's kind of the whole point of History as a discipline.
Your argument is that elitism is good, not that academics don't fit the criteria. Getting paid by the government to write entirely subjective, untestable ideas about the interaction between glaciers and feminist ideology is a perfect fit for the definition. Pull requests, on the other hand, are not. Their correctness is often tested automatically!