Hacker News | Lerc's comments

From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based upon what it says instead of whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.

My prima facie view of Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered to be a deliberate untruth. I also recognise that the claims people make about him go in all directions, and that I am not in a position to evaluate most of those claims. About the only truly agreed-upon aspect is how persuasive he is.

I can definitely see the possibility of people feeling like they have been lied to if they experienced a degree of persuasion that they are unaccustomed to. If you agree to something that you feel you wouldn't otherwise have agreed to, I can see concluding that you have been lied to rather than accepting that you have been intellectually beaten.

In all such cases where an issue is contentious, you should ask yourself what information would significantly change your view. If nothing could change it, then it's a matter beyond reason.

I think you will agree that there is no smoking gun in this article, and that it is just a laying out of the allegations. Evaluating allegations is tricky because I think it becomes a character judgement of those making the claims.

I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.

I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.

While I do not have sources to hand (so I will not assert this as true but just claim it is my memory) I recall Sam Altman himself saying that he himself did not think he should have control over our future, and the board was supposed to protect against that, but since the 'blip' it was evident that another mechanism is required. I also recall hearing an interview where Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.

I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange". Why use a term such as 'conveyed', which might imply there was no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel the information had been conveyed while at the same time Altman has no experience of it to recall. Nevertheless it leaves an impression of Altman being evasive. If the text had said "Altman told Mira Murati" then no such ambiguity would exist.

"Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board" Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim that he took advice from people he trusted. I assume the board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes, so any claim of a 'shadow board' with power is nonsense, and if it is a condemnable offence, is the same not true of the aligned group of board members who removed him?

Josh Kushner apparently made a veiled threat to Murati. The claim that "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but without any other indication in the article, it would have been more surprising if he had known of the call. I also didn't know of the call, because I am not those two people.

The claim of sexual abuse says, via Karen Hao, "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that, without some discussion of the scientific opinion on previously unremembered events being recalled during flashbacks, seems journalistically irresponsible.


I think sometimes you have to look at the patterns rather than at a single claim. If a large number of completely unrelated people tell you about very similar experiences they had with Altman, you can take that as a good indicator of his general character.

And if this tendency to misunderstand or be misunderstood always results in Altman gaining more power, then even if we give him the benefit of the doubt and say he doesn't do it on purpose, it's still a big problem given the responsibility he has.

The article also mentions several moments where Altman apparently outright lied, as opposed to being "very persuasive". If you believe those sources then I don't think it's also possible to think he's sincere. I cannot open the article again to get the exact quotes, but the ones I remember were:

- one time he claimed he didn't send a message while people were literally showing him the message he sent, with the confirmation of another OpenAI employee

- another time he accused people of organising a coup, saying someone from the board had informed him, and after that board member was called into the meeting, Altman claimed he never said those words and never accused anyone

These cases can't be put down to persuasion, to Altman changing their view, or to someone misremembering; they either happened or they didn't.


Paul made a statement today: https://x.com/paulg/status/2041363640499200353?s=20

It clarifies he did not fire Sam

I overall agree with your takeaway, but this is not a criticism of the article itself.


I have experience in dealing with Sam Altman-like behavior. I hope to explain how such tactics unfold.

> I can see people concluding that they have been lied to rather than accept that they had been intellectually beaten.

There are two angles to this: from an individual perspective and from a collective one.

One's interaction with such a manipulator isn't a single shot. There is no single event at which they are "beaten". First, one gets persuaded; you might argue that there is nothing wrong with skillful persuasion. At some point they realize that reality is not in line with their expectations. They bring this up with the manipulator and ask for a change, this time in more concrete terms. The manipulator agrees to the change, negotiates compromises, and the relationship continues. After some time the manipulated party realizes that things are still not going in the direction they desire. This time they ask for more concrete terms, without accepting any compromises. The manipulator accepts, yet continues to act against the terms. The manipulated party is now angry and directly confronts the manipulator. The manipulator apologizes, says that none of it was intentional, and asks for another chance. By that point, however, the manipulator has run out of "politically correct" "persuasion tactics", and tells blatant lies to make the other party behave.

From a collective perspective, even those "politically correct" "persuasion tactics" are eventually discovered to be lies, because what the manipulator told different parties stands in direct opposition, i.e., the statements cannot all be true.

> Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.

I understand how her behavior may raise a flag for the unsuspecting, but it was exactly the right call. Manipulators prey on the benefit of the doubt. If Toner had brought Altman's behavior to the attention of the others, no doubt Altman would have manipulated them successfully.

It's unfortunate that many people are unaware of these tactics and assume the best of intentions, when such assumptions fuel the manipulation that they would be better off avoiding.


It’s wild to write something like “I have experience with Sam Altman-like behavior” and expect us to come along for a 5+ paragraph ride that actually has no Sam Altman connection at all except the one you imagine is true.

Talk to your therapist about your problems. Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.


I'm sorry that it wasn't clear. I didn't mean to imply that I was going to connect it to Sam Altman. I specifically wanted to address why it wasn't the case that people were "intellectually beaten" by Sam Altman.

> except the one you imagine is true

I'm not sure what you mean. I described an example of manipulation that I witnessed. I later learned that these are common tactics employed by con artists, scammers, etc.

> Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.

I don't need first-hand experience with someone to understand that they are a manipulator. I am comfortable forming my opinion based on reports.


Paul Graham's latest public statement on the issue:

https://x.com/paulg/status/2041363640499200353


> My prima facie view on Altman has been that he presents as sincere.

That is how pathological liars present.


> what information would significantly change your views

Quite simple: show me any single action taken by Sam Altman which cannot be construed as an attempt to get him more power/money/influence. You can't find it.

The difference between what he claims to believe and what he actually does is a textbook example of sociopathy.


I cannot find a single action by anyone that cannot be construed as an attempt to get them power/money/influence. I can believe that a person's intentions are good, but I can't make everyone in the world do that, and that is what you are asking.

"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him"

To play your game, he got married, had a child, and joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.

You could still construe those actions as evil if you choose to see them as evil.

I'm not going to claim that Sam Altman is not a sociopath; I lack the information and the knowledge of psychology to make that determination. On the other hand, I have not detected that information or knowledge in anyone who has claimed he is a sociopath, either.

It seems odd that people take offense at the notion that arbitrary people might not reach a conclusion that requires specialised expert knowledge and a decent amount of irrefutable evidence.


> I cannot find a single action of anyone that cannot be construed as an attempt to get them power/money/influence

Try the other way around, via negativa. We definitely can find plenty of examples of people stepping out of positions of power, deciding not to do something because of moral conflict, etc. Is there any case of such action from Sam?

Fuck, anyone with any semblance of moral fortitude would refuse to take money from the Saudis. But he had no problem doing it.

> joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.

No, this is selection bias. What he did was put himself in a position where he could have his fingers in any and every possible pie, and then when one of those things turned out to be something believed valuable by people with money, he manoeuvred himself into the driver's seat.


When people are described as sociopathic it’s not about any particular lie, but the relationship that the person has with the truth, which is that they will lie when it suits them and tell the truth when it suits them and they don’t seem to distinguish morally between them. And more than that, they treat people the same way, and will use them while it suits them and then dispose of them when they are inconvenient.

Humans are older than money, so evidently we don't need it to survive, but there is more to existence than mere survival. I agree that people's basic needs need to be taken care of, but I don't think that is something that needs to happen because of automation. It needs to happen because it is simply the right thing to do. I would go as far as saying it shouldn't just be basic needs. Society should be aiming to provide the entire hierarchy of needs for everyone.

I think having employment delivers some of the higher needs to a subset of people, but it is a privileged few. A huge number work just to provide the basic needs. Advocating using the advances in automation to raise everybody up is what we need. Instead we seem to be maintaining a system that gives a few what they want while the rest of us are too busy with the survival part to influence that change.


Money is a tool (maybe not the best) to make an economy with division of labor work. It's not required and probably also doesn't work in societies where everybody knows everybody else and can make sure that the right things are done and nobody slacks off.

> Society should be aiming to provide the entire hierarchy of needs for everyone.

I don’t know. Society should provide the framework within which people can achieve their needs (and wants), but not the needs and wants themselves directly.

Otherwise you put an artificial cap on human growth and cause inefficient allocation of resources.


>Otherwise you put an artificial cap on human growth and inefficient allocation of resources.

That is not how the hierarchy of needs works https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

Removing the cap on growth is pretty much baked in.


I know 2GB isn't very heavy in OS terms these days, but it's still enough to hold nearly 350 uncompressed 1080p 24-bit images.

There's rather a lot of information in a single uncompressed 1080p image. I can't help but wonder what it all gets used for.
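The "nearly 350 images" figure above checks out with a bit of back-of-envelope arithmetic (all numbers taken from the comment itself, assuming 2 GB means 2 GiB):

```python
# How many uncompressed 1080p 24-bit frames fit in 2 GiB?
frame_bytes = 1920 * 1080 * 3   # 24-bit colour = 3 bytes per pixel
budget = 2 * 1024**3            # 2 GiB in bytes

frames = budget // frame_bytes
print(frames)                   # 345
```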


A lot of it is optimizing applications for higher-memory devices. RAM is completely worthless if it's not used, so ideally you should be running your software with close to maximum RAM usage for your device. Of course, the software developer doesn't necessarily know what device you will be using, or how much other software will be running, so they aim for averages.

For example, Java applications will claim much more memory than they need for the heap. Most of that memory will be unused, but it's necessary to have a faster running application. If you've ever run a Java app at consistently 90% heap usage, you know it grinds to an absolute halt with constant collection.

The same is true for caching techniques. Reading from storage is slow, so it often makes sense to put stuff in RAM even if you're not using it very often.
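The RAM-for-speed trade described above can be sketched with a small bounded cache. `slow_read` and its 0.1 s delay are invented for illustration, not taken from any real application:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)          # bounded cache: at most 128 entries held in RAM
def slow_read(key: str) -> str:
    time.sleep(0.1)              # stand-in for a slow disk or network read
    return f"data for {key}"

slow_read("config")              # first call pays the storage cost
slow_read("config")              # repeat call is served from RAM
```

The `maxsize` bound is the developer's guess at a reasonable memory budget, which is exactly the "aim for averages" problem mentioned above.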


I also believe that this memory usage might be decreased significantly, but I don't know how much (and how much is worth it). Some RAM usage might be useful, such as caching or for things related with graphics. Some is a cumulative bloat in applications caused by not caring much or duplication of used libraries.

But I remember that in 2016, Fedora GNOME consumed about 1.6GB of RAM on my PC with 2GB of RAM. Considering that a decade later the standard Ubuntu GNOME consumes only 400MB more, and that my new laptop has 16GB of RAM (the system may use more RAM when more RAM is installed), I think the increase is not that bad for a decade. I thought it would be much worse.


But why that much? The first computer I bought had 192MB of RAM and I ran a 1600x1200 desktop with 24-bit color. When Windows 2000 came out, all of the transparency effects ran great. Office worked fine, Visual Studio too, and 1024x768 gaming (I know that's quite a step down from 1080p).

What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?


> and I ran a 1600x1200 desktop with 24-bit color

> What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?

It’s not a factor of ten, but a 4K monitor has about four times as many pixels. Cached font bitmaps scale with that, photos take more memory, etc.
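A quick sanity check of the pixel-count comparison (assuming 4K UHD for the modern monitor, since the comment doesn't specify):

```python
# Pixel counts: the parent's old desktop vs a 4K UHD monitor.
old = 1600 * 1200      # 1,920,000 pixels
new = 3840 * 2160      # 8,294,400 pixels
print(new / old)       # 4.32 -- "about four times as many pixels"
```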

> When Windows 2000 came out

In those times, when part of a window became uncovered, the OS would ask the application to redraw that part. Nowadays, the OS knows what’s there because it keeps the pixels around, so it can bitblit the pixels in.

Again, not a factor of ten, but it contributes.

The number of background processes likely also increased, and chances are you used to run fewer applications at the same time. Your handful of terminals may be a bit fuller now than it was back then.

Neither of those really explains why you need gigabytes of RAM nowadays, though, but then they didn't explain why Windows 2000 needed whatever it needed in its time, either.

The main real reason is “because we can afford to”.


Partly because we have more layers of abstraction. Just an extreme example, when you open a tiny < 1KB HTML file on any modern browser the tab memory consumption will still be on the order of tens, if not hundreds of megabytes. This is because the browser has to load / initialize all its huge runtime environment (JS / DOM / CSS, graphics, etc) even though that tiny HTML file might use a tiny fraction of the browser features.

Partly because increased RAM usage can sometimes improve execution speed / smoothness or security (caching, browser tab isolation).

Partly because developers have less pressure to optimize software performance, so they optimize other things, such as development time.

Here is an article about bloat: https://waspdev.com/articles/2025-11-04/some-software-bloat-...


Two programmers sat at a table, one a youngster and the other an older guy with a large beard. The old guy was asked: "You. Yeah, you. Why the heck did you need 64K of RAM?" The old man replied: "To land on the moon!" Then the youngster was asked: "And you, why oh why did you need 4 gigs?" The youngster replied: "To run MS Word!"

Higher res icons probably add a couple hundred megs alone

Well, if you have an uncompressed 512x512 icon it is an even megabyte, so that makes the calculations fairly easy.

But raw imagery is one of the few cases where you can legitimately require large amounts of RAM, because of the squaring nature of area. You only need that raw state in a limited number of situations where you are manipulating the data, though. If you are dealing with images without descending to pixels then there's pretty much no reason to keep it all floating around in that form. You generally don't have more than a hundred icons onscreen, and once you start fetching data from the slowest RAM in your machine you get pretty decent speed gains from decompressing on the fly rather than trying to move the uncompressed form around.
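The "even megabyte" works out exactly if you assume 4 bytes per pixel (RGBA), which the comment doesn't state but which makes the numbers land:

```python
# One uncompressed 512x512 RGBA icon, and the "couple hundred megs" estimate.
icon_bytes = 512 * 512 * 4          # 4 bytes/pixel (RGBA)
print(icon_bytes == 2**20)          # True: exactly 1 MiB
print(icon_bytes * 200 // 2**20)    # 200 icons = 200 MiB uncompressed
```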


I remember running Xubuntu (XFCE) and Lubuntu (LXDE, before LXQt) on a laptop with 4 GB of RAM and it was a pretty pleasant experience! My guess is that the desktop environment is the culprit for most modern distros!

Gnome 50 and its auxiliary services on my machine use maybe 400MB.

The culprit is browsers, mostly.


Well, to start, you likely have two screen-sized buffers for the current and next frame. The primary code portion is drivers, since the modern expectation is that you can plug in pretty much anything and have it work automatically.

How often do you plug in a new device without a flurry of disk activity occurring?

That is dependent upon the quality of the AI. The argument is not about the quality of the components but the method used.

It's trivial to say using an inadequate tool will have an inadequate result.

It's only an interesting claim to make if you are saying that there is no obtainable quality of the tool that can produce an adequate result. (In this argument, the adequate result in question is a developer with an understanding of what they produce.)


The problem I see with this argument is that the ship sailed on understanding what you are doing years ago. It seems like it is abstraction layers all the way down.

If an AI is capable of producing an elegant solution with fewer levels of abstraction it could be possible that we end up drifting towards having a better understanding of what's going on.


This seems premature.

This requires an assumption of actions that might be performed if a condition in the future is met.

That is not a solid basis for a lawsuit.


This is the natural conclusion of what was really claimed about model collapse, and indeed natural evolution. Making an imperfect copy while invoking a selection mechanism is evolution.

Some of the claims about models training on their own data, in their enthusiasm to frame it as a failure, went further and suggested that it magnified biases. I had my doubts about those conclusions. If it were true, it would be a much greater breakthrough, because the ability to magnify a property implies a way to measure a weak version of that property. That would mean they had found a way to provide a training signal to avoid bias. It would be great if that's what they did, but I suspect there would have been more news about it.

Perhaps this paper will put to rest the notion that AI output is useless as training data. It has only ever been the case that it was useless as an indiscriminate source of data.


Interesting, I had interpreted their comment to be asking if they were trained to carry out a no-quarter order.

I like the idea of tree curation. People view the branch of their interest. Anyone can submit anything to any point but are unlikely to be noticed if they submit closer to the trunk. Curated lists submit their lists to curators closer to the trunk.

The furthest branches have the least volume (need filters to stop bulk submission to all levels, but still allow some multi submission). It allows curators to contribute in a small field. They then submit their preferred items to the next level up. If that curator likes it they send it further. A leaf level curator can bypass any curator above but with the same risk of being ignored if the higher level node receives too much volume.

You could even run fully AI branches, whose picks would only make it all the way up by convincing a human curator somewhere above them of their quality. If they don't do a good job they would just be ignored. People can listen to them directly if they are so inclined.


>It's sure baffling how Anthropic has kept Claude Code's plan mode so linear and inflexible

It's difficult to know what the appropriate process for a model would be without widespread deployment. I can see how they have to strike a fine balance between keeping up with what the feedback shows would be best and changing the way the user interacts with the system. Often it's easy to tell what would be better once something is deployed, but if people are productively using the currently deployed system you always have to weigh the advantage of a new method against the cost of people having to adapt. It is rare to make something universally better, and making things worse for users is bad.

