Hacker News | t8sr's comment history

I have never said and rarely thought this before, but I really hope the person who came up with / approved this idea got fired for it. It’s rare that you see something so unbelievably stupid and destructive of the shared pool of trust, which Apple spent 30 years building, only for one self-interested PM to blow a chunk of it up for no gain.

If the person who came up with this reads this site, I hope they see this comment and think about how screwed the industry would be if everyone acted the way they did.


I think the person who came up with this shouldn't be fired, the person who _approved_ it should be reprimanded.

There's some intersection point between who "owns" the wallet and who is coming up with ways to generate marketing revenue.

Whoever lives at that intersection point is the real shot caller here aren't they?

Imo you don't fire people for generating bad ideas; that just creates a culture of not thinking outside the box. But the person who is filtering those ideas is the critical linchpin.


Why not fire them both?

> Imo you don't fire people for generating bad ideas,

If an idea is that bad, at the very least they should be transferred into a role that doesn't involve coming up with good ideas, since obviously that is outside of their skill set. And what's the argument for not firing the chain of people who approved it? Their job was to stop bad ideas and they catastrophically failed.


> at the very least they should be transferred into a role that doesn't involve coming up with good ideas, since obviously that is outside of their skill set.

Proposing one bad idea is not unusual for people whose job is idea-driven. When ideas are the primary currency of your occupation, you'll necessarily generate some losers. But in a company of Apple's size, that's why you rely on colleagues and - critically - a more robust approval process to move from idea to deliverable.

I hate your idea of firing (from org. or role) the idea person based on one bad idea. I don't hate the idea of firing (from org. or role) the leaders accountable for getting this idea into the world.


Job security seems to be held in higher esteem than freedom from prison.

Social norms exist outside of criminal law, and a single extremely poor decision is reason enough for people to lose their freedom.

Why shouldn’t it be possible for people to lose their jobs?


> Why shouldn’t it be possible for people to lose their jobs?

This is a strawman argument that seems made in bad faith, but I'll bite anyway: I am not saying that no single bad idea or mistake should result in the loss of a job. I am saying that most of the time such a response would be an extreme reaction, especially when directed at the lower-level source of the ideas vs. the more senior accountable parties who are paid to know better.

Magnitude matters, as does accountability. Creating a world of extremes where one mistake or poor idea leads to termination is a pretty quick way to a toxic and non-productive work environment. Enact accountability where it sits, not across the entire chain.


I think you and I are saying the same thing honestly.

The parent seems to be of the mind that it's never a viable option for someone to lose their job over something, which I find an extreme position in itself.

I'm not sure how this context is lost, as precisely this point is what I'm getting at. I'm not jumping to extremes as some (including you) imply; I'm saying it should be on the table for the most hopelessly egregious offences.


You're seriously comparing a single advertisement to crimes like murder? Crimes that land you in prison are generally crimes that even children can understand are wrong. You're using "extremely poor decision" for 2 wildly different things, and if you think they're remotely equivalent, perhaps you should reflect on why you think that.


I am seriously suggesting that a single bad decision (like taking some money from the cash register) can land you in prison, why do we hold jobs to a higher standard?

Learning from our mistakes is one thing, slip ups happen after all, but I’m just drawing a comparison to “a single misjudgement”.

If you don't know society's values (stealing is wrong) or a company's values (tarnishing the brand by looking cheap and desperate), the outcome should probably be the same: expulsion or exclusion.

Also, don’t go to the most extreme negative interpretation of what someone says, it’s against guidelines.


> the outcome should probably be the same

Why exactly besides the fact that you like extreme solutions?


Because accountability?

Either you’re suggesting jail is too punitive a punishment or that being fired should never be a viable option.

I’m not saying we should jump to extremes, I’m saying that the option should be on the table if you violate the core principles of the company, especially in a way that causes loss of consumer trust.

What's the difference between defrauding Ford out of $200M and causing $200M in damages because I decided that every new Ford will include the phrase "I solemnly swear I will shit on the American flag when requested"?

In essence, in either case I am putting my own needs above the needs of the company and above the needs of the consumer - in a way that undermines future sales for the company too.


There’s bad ideas like “it wasn’t possible to execute this the way we thought we could”, and bad ideas like “this goes against the core values of what this company is”.

The first is something that might have gone better in better circumstances, so it’s a learning opportunity. The second shows you either don’t understand the company and decided to carry on despite that, or you just don’t care about the company, but either way it reflects poorly enough on an individual that a firing should be on the table.


You definitely fire people for pitching ideas that are against the ethos of the company. Otherwise you have no culture. It shouldn’t come down to one approver on the wallet side to see how dumb this was


> Imo you don't fire people for generating bad ideas, that just creates a culture of not thinking outside the box.

No, you fire people for generating ideas that are shady and against your own policies.


disagree. brainstorming should never be seen as a negative. trying to _promote_ and _act_ on shady ideas is the problem.


“What if we just charge a bunch of hidden recurring fees?”

Some ideas are so bad they indicate that you aren’t aligned with the goals of the company


ok, i agree that an idea that’s actively malicious toward your customer should maybe be a fireable offense. That’s extreme but we can agree. :)


Agreed, even when brainstorming there needs to be left and right bounds. It needs to be constructive and it needs to align to the vision.


Yes, but there’s nuance. We each assume a version of events and nobody really knows. In my experience, big tech companies attract a certain type of person (among others) who will not only think of stuff like this, but actively fight for it and consequences to the long term be damned. VPs who actually approve this stuff will have limited time to think about it and a lot depends on the proposal.

This looks like a group PM level decision. Bluntly, at that level we get paid enough to exercise good judgement.


Tim Cook is in charge. This wasn't decided in a bubble. A single person can't do this. It takes a lot of people to do this. A culture that allows this. This wasn't a mistake. It wasn't malicious. It wasn't even the first time.

Tim Cook did this, and anyone who can't put the blame on him is lying to themselves.


You’d think he would have learned after that U2 album disaster 11 years ago, clearly not. He’s been doing this kind of stuff since he took over.

It seemed like Jobs used the products and was trying to make stuff that he would want to use. Cook seems like he doesn’t use any of these products, and is willing to sacrifice the user experience to try and make a few extra bucks.

It seems time for some new blood leading Apple. A product person who can get the company back to the core of trying to make insanely great products that people want to use, without compromise.


As the saying goes:

“Never attribute to incompetence that which is adequately explained by profit motives.”


Then you’re in agreement with the article:

> I try very seldom to call for anyone to be fired, but I think whoever authorized this movie ad through Wallet push notifications ought to be canned.


Apple should be sued for this. This is their responsibility. They built it, left it unsupervised, and allowed the obvious to happen.

This is not the fault of ONE low level worker and there is no reason to punish them and then walk away like you've accomplished /anything/ meaningful in the long term.

These are precisely the types of public cases that should be brought against them. It would lend a lot of aid to the anti trust efforts against them as well. They clearly privilege themselves and see the devices and app store as their asset, not something they maintain on behalf of customers and developers.


> destructive of the shared pool of trust

Will there actually be any short-, medium-, or long-term consequences for Apple? What real, tangible trust has Apple lost that could lead to meaningful harm to them?

The only thing I can come up with is people who hold Apple to some kind of high-minded ideal, which they constantly run afoul of for other reasons already.


Threads like this one are a short-term consequence for Apple.

People here, discussing it, a) demonstrate that they find the act to represent a breach of trust, and b) spread that understanding and opinion among those who read it.

That's not, in itself, a direct consequence for Apple, but it is something they need to be, and I genuinely believe are, worried about, because losing trust in them is precisely the kind of thing that will get people to stop buying their products. This is especially true given the way they've positioned themselves as a more trustworthy actor in the privacy field.


Apple does a lot of things that are not allowed for any of the 3p developers. A regulator like the EU could look at that (for instance, in this case, a direct-to-consumer marketing channel that they are using to favor their own properties) and say it violates the DMA.

Google is being forced to take Google Flights links out of Search results, for instance.


Apple’s behind the curve of its own third-party ecosystem. All of the apps on the App Store send spam notifications, violating Apple’s own guidelines, which it has no intention of enforcing.


The thing is, while we care about it here at HN, most people don't really care. Apple is a cult among consumers and they aren't going to switch even if they started putting in way more ads. They know, similar to Windows, that they have an ecosystem lock in and people aren't going to escape it.


People think they don’t care, or they tolerate it, but it still has an impact on the experience. It comes in the form of fewer glowing reviews, fewer recommendations to friends, more complaints and less forgiveness for problems. The pressure builds up over time, and then they snap.

Windows is the perfect example against the claim that Apple should be comfortable to abuse their users. Windows marketshare has been steadily dropping for the last 15 years. People are tired of the abuse, and slowly but surely leaving the platform. We now have people like PewDiePie making videos about switching to Arch Linux and self hosting, large companies offering employees a choice of Windows or Mac… things that would have sounded extremely unlikely 10+ years ago.

I’m pretty deep in the Apple ecosystem, having been in it since 2003. I could transition out of it within a week if I had to. There are some things I’d miss, for sure, but I’d live.


Exactly. Just because someone says they don't care, or they don't even consciously see it, doesn't mean it's not internalized in some way. A lot of the time it simply degrades the importance of the notification, making them more likely to be passively ignored in the future, however it probably runs deeper too.


I assume that this was approved by the CMO or at the very least at the VP level. Previously I was the eng TL for house ads at a big co. We would've run anything vaguely controversial all the way up the chain.


It shows they must be REALLY worried about this movie. All the reviews I’ve read say it sucks. I’m an F1 fan and from what I’ve read it sounds all pretty dumb and fake.


I saw it last night. The cinematography of the racing sequences was interesting. The races they actually depicted were not, though. The human story line was trash, just a really hokey "old guy uses gut feeling to beat young people using science" movie plot. And Pitt did himself no favours in this movie with his acting.


It was fun in IMAX. It's not cinema


>if everyone acted the way they did.

Everyone with the power like Apple does


Any evidence you'd care to offer, or are you just being racist on the internet?


One writer has a bunch of articles about student protestors being Hamas and the other does a bunch of consent manufacturing for war with Iran.

Nothing about what I said was racist either, weird you’d say that.


Calling out Israeli propaganda isn't racist.


Would you rather work with (hire for your startup) someone who:

1) Always pleased middle management in a large bureaucracy by moving metrics, then bailed out just before the project collapsed

2) Ignored the noise, fixed real problems and left the project better than they found it?

After 20 years of tech career and 3 FAANGs, I know my answer. This article is decent enough advice for the first 5 years of your career, so you get some seniority and money.

Once you have those two things, what they give you is the agency and the safety to walk away from bullshit.

After that the game changes: it's about credibility and being sought after by your peers, who, at this point, should also hold senior IC positions at companies whose help you need, sit on standards committees, have maintainer rights in the Kernel, etc.

Your long-term professional success will come from being an excellent technical peer, rather than pleasing random middle managers you will never work with again. Your personal job satisfaction will come from honing your craft and solving real problems for real customers, not from hitting some arbitrary business milestones. (Obviously those two things sometimes align; this is about the times you're forced to choose.)


I don't like mechanistic explanations of particle physics - they're an attempt to relate what's going on to our macroscopic experience, but they usually fall apart when you look too closely.

I find it much easier to understand the stability of hydrogen by thinking about ground states. If the electron in H could merge with the proton, they'd make a neutron. But a free neutron is not stable and decays in a few minutes back into a proton, electron and an antineutrino.

With very large, proton-rich nuclei, eventually you get to the opposite situation where the ground state is one less proton, one less electron and one more neutron and the atom decays into that state by, you guessed it, an electron "falling into" the nucleus.


Yes and my yoga teacher has an even better theory where the electrons are stopped from falling in by spirits of their time-traveling ancestors.


I want to make a joke about Feynman path integrals and the single electron universe etc., but I can't think of an appropriate yoga pun.


Directly imaging an exoplanet has been done about 20 times (maybe more, by now). If you're asking how far are we from resolving an exoplanet to more than a single point of light, the answer is we will never be able to do that from this distance.


There are proposals to use the solar gravitational lens.

Failing that, you’d need thousands of optical interferometers larger than the Hubble spread across a distance wider than the Earth.



When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures. It turns out that using CSP in any large, complex codebase is asking for trouble, and that this is true even about projects where members of the core Go team did the CSP.

If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind. If you're really determined, you can build anything out of anything. That doesn't mean it's always a good idea.
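To make the "you can build anything out of anything" point concrete, here is a toy sketch (my own illustration, not anything from Go's runtime): a spinlock built from a single atomic word. Real mutexes park the waiting thread (e.g. via a futex on Linux) instead of spinning, which is exactly why the abstraction exists.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// SpinLock is a toy mutex built from one atomic word. It demonstrates
// that higher-level primitives are abstractions over atomics; it is
// not a good idea in production, where contended waiters should sleep.
type SpinLock struct{ state int32 }

func (l *SpinLock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
		runtime.Gosched() // yield rather than hot-spin
	}
}

func (l *SpinLock) Unlock() { atomic.StoreInt32(&l.state, 0) }

// Count increments a shared total from several goroutines,
// guarded only by the spinlock.
func Count(n, workers int) int {
	var l SpinLock
	var wg sync.WaitGroup
	total := 0
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < n; i++ {
				l.Lock()
				total++
				l.Unlock()
			}
		}()
	}
	wg.Wait()
	return total
}

func main() { fmt.Println(Count(1000, 4)) } // prints 4000
```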

Looking back, I'd say channels are far superior to condition variables as a synchronized cross-thread communication mechanism - when I use them these days, it's mostly for that. Locks (mutexes) are really performant and easy to understand and generally better for mutual exclusion. (It's in the name!)
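The division of labor described above can be sketched in a few lines (illustrative names only): a channel to hand a value across a goroutine boundary, and a mutex to guard shared state.

```go
package main

import (
	"fmt"
	"sync"
)

// sum uses a channel as a communication mechanism: the worker
// goroutine hands its result across without any shared state.
func sum(nums []int) int {
	results := make(chan int)
	go func() {
		s := 0
		for _, n := range nums {
			s += n
		}
		results <- s
	}()
	return <-results
}

// Counter uses a mutex for what mutexes are named after:
// mutual exclusion over shared state.
type Counter struct {
	mu sync.Mutex
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	fmt.Println(sum([]int{1, 2, 3})) // 6
	c := &Counter{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 100
}
```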


> When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures.

That sounds reasonable. From what little Erlang/Elixir code I’ve seen, the sending and receiving of messages is also hidden as an implementation detail in modules. The public interface did not expose concurrency or synchronization to callers. You might use them under the hood to implement your functionality, but it’s of no concern to callers, and you’re free to change the implementation without impacting callers.
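A minimal sketch of that rule (the `Fetcher`/`Lookup` names are hypothetical, chosen for illustration): the channel plumbing stays unexported, so callers see only a synchronous method and the implementation could later switch to, say, a mutex-guarded map without breaking anyone.

```go
package main

import "fmt"

// request pairs a key with a private reply channel.
type request struct {
	key   string
	reply chan string
}

// Fetcher hides its channel behind a plain exported method,
// per the "no channels in exported signatures" rule.
type Fetcher struct {
	requests chan request // unexported: callers never see it
}

func NewFetcher() *Fetcher {
	f := &Fetcher{requests: make(chan request)}
	go f.loop()
	return f
}

// loop is the single goroutine that owns the state.
func (f *Fetcher) loop() {
	for req := range f.requests {
		req.reply <- "value-for-" + req.key
	}
}

// Lookup is the public interface: synchronous, no concurrency exposed.
func (f *Fetcher) Lookup(key string) string {
	reply := make(chan string, 1)
	f.requests <- request{key: key, reply: reply}
	return <-reply
}

func main() {
	f := NewFetcher()
	fmt.Println(f.Lookup("a")) // value-for-a
}
```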


AND because they're usually hidden as implementation detail, a consumer of your module can create simple mocks of your module (or you can provide one)


How large do you deem to be large in this context?

I had success in using a CSP style, with channels in many function signatures in a ~25k line codebase.

It had ~15 major types of process, probably about 30 fixed instances overall in a fixed graph, plus a dynamic sub-graph of around 5 processes per 'requested action'. So those sub-graph elements were the only parts which had to deal with tear-down, and clean up.

There were then additionally some minor types of 'process' (i.e. goroutines) within many of those major types, but they were easier to reason about as they only communicated with that major element.

Multiple requested actions could be present, so there could be multiple sets of those 5 process groups connected, but they had a maximum lifetime of a few minutes.

I only ended up using explicit mutexes in two of the major types of process. Where they happened to make most sense, and hence reduced system complexity. There were about 45 instances of the 'go' keyword.

(Updated numbers, as I'd initially misremembered/miscounted the number of major processes)


How many developers did that scale to? Code bases that I’ve seen that are written in that style are completely illegible. Once the structure of the 30 node graph falls out of the last developer’s head, it’s basically game over.

To debug stuff by reading the code, each message ends up having 30 potential destinations.

If a request involves N sequential calls, the control flow can be as bad as 30^N paths. Reading the bodies of the methods that are invoked generally doesn’t tell you which of those paths are wired up.

In some real world code I have seen, a complicated thing wires up the control flow, so recovering the graph from the source code is equivalent to the halting problem.

None of these problems apply to async/await because the compiler can statically figure out what’s being invoked, and IDE’s are generally as good at figuring that out as the compiler.


That was two main developers, one doing most of the code and design, the other a largely closed subset of 3 or 4 nodes. Plus three other developers co-opted for implementing some of the nodes. [1]

The problem space itself could have probably grown to twice the number of lines of code, but there wouldn't have needed to be any more developers. Possibly only the original two. The others were only added for meeting deadlines.

As to the graph, it was fixed, but not a full mesh. A set of pipelines, with no power of N issue, as the collection of places things could talk to was fixed.

A simple diagram represented the major message flow between those 30 nodes.

Testing of each node was able to be performed in isolation, so UT of each node covered most of the behaviour. The bugs were three deadlocks, one between two major nodes, one with one major node.

The logging around the trigger for the deadlock allowed the cause to be determined and fixed. The bugs arose due to time constraints having prevented an analysis of the message flows to detect the loops/locks.

So for most messages, there were a limited number of destinations, mostly two, for some 5.

For a given "request", the flow of messages to the end of the fixed graph would be passing through 3 major nodes. That then spawned the creation of the dynamic graph, with it having two major flows. One a control flow through another 3, the other a data flow through a different 3.

Within that dynamic graph there was a richer flow of messages, but the external flow from it simply had the two major paths.

Yes, reading the bodies of the methods does not inform as to the flows. One either had to read the "main" routine which built the graph, or better refer to the graph diagram and message flows in the design document.

Essentially a similar problem to dealing with "microservices", or plugable call-backs, where the structure can not easily be determined from the code alone. This is where design documentation is necessary.

However I found it easier to comprehend and work with / debug due to each node being a probeable "black box", plus having the graph of connections and message flows.

[1] Of those, only the first had any experience with CSP or Go. The CSP experience was with a library for C, the Go experience some minimal use a year earlier. The other developers were all new to CSP and Go. The first two developers were "senior" / "experienced".


>If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind.

You're going to be surprised when you learn that futexes are an abstraction too, ultimately relying on this thing called "cache coherence".

And you'll be really surprised when you learn how cache coherence is implemented.


I think the two basic synchronisation primitives are atomics and thread parking. Atomics allow you to share data between two or more concurrently running threads whereas parking allows you to control which threads are running concurrently. Whatever low-level primitives the OS provides (such as futexes) is more an implementation detail.

I would tentatively make the claim that channels (in the abstract) are at heart an interface rather than a type of synchronisation per se. They can be implemented using Mutexes, pure atomics (if each message is a single integer) or any number of different ways.

Of course, any specific implementation of a channel will have trade-offs. Some more so than others.
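The "channels are an interface" claim can be made concrete with a toy sketch: a channel-like blocking queue built from a mutex and a condition variable instead of Go's built-in `chan`. This is an illustration of the equivalence, not a recommendation to replace channels.

```go
package main

import (
	"fmt"
	"sync"
)

// Queue is a channel-like unbounded FIFO implemented with a mutex
// plus a condition variable, showing one of the many possible
// implementations of the "channel" interface.
type Queue struct {
	mu    sync.Mutex
	ready *sync.Cond
	items []int
}

func NewQueue() *Queue {
	q := &Queue{}
	q.ready = sync.NewCond(&q.mu)
	return q
}

// Send enqueues a value and wakes one waiting receiver, like ch <- v
// on a buffered channel (it never blocks, since the queue is unbounded).
func (q *Queue) Send(v int) {
	q.mu.Lock()
	q.items = append(q.items, v)
	q.mu.Unlock()
	q.ready.Signal()
}

// Recv blocks until an item is available, like <-ch.
func (q *Queue) Recv() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.ready.Wait() // releases the lock while waiting
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v
}

func main() {
	q := NewQueue()
	go q.Send(42)
	fmt.Println(q.Recv()) // 42
}
```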


To me message passing is its own thing. It's the most natural way of thinking about information flow in a system consisting of physically separated parts.


What you think is not very relevant if it doesn't match how CPUs work.


huh?


I think they mean that message channels are an expensive and performance unstable abstraction.

You could address the concern by choosing a CPU architecture that included infinite capacity FIFOS that connected its cores into arbitrary runtime directed graphs.

Of course, that architecture doesn’t exist. If it did, dispatching an instruction would have infinite tail latency and unbounded power consumption.


What is "20% on Go"? What is it 20% of?


At least historically, Google engineers had 20% of their time to spend on projects not related to their core role.


This still exists today. For example, I am on the payments team but I have a 20% project working on protobuf. I had to get formal approval from my management chain and someone on the protobuf team. And it is tracked as part of my performance reviews. They just want to make sure I'm not building something useless that nobody wants and that I'm not just wasting the company's time.


I never worked at Google (or any other large corp for that matter), but this sounds like the exact opposite of an environment that spawned GMail.

As you think back even to the very early days of computing, you'll find individuals or small teams like Grace Hopper, the Unix gang, PARC, etc that managed to change history by "building something useless". Granted, throughout history that happened less than 1% of the time, but it will never happen if you never try.

Maybe Google no longer has any space for innovation.


>I never worked at Google (or any other large corp for that matter), but this sounds like the exact opposite of an environment that spawned GMail.

Friendly fyi... GMail was not a "20% project" which I mentioned previously: https://news.ycombinator.com/item?id=39052748

Somebody (not me but maybe a Google employee) also revised the Wikipedia article a few hours after my comment: https://en.wikipedia.org/w/index.php?title=Side_project_time...

Before LLMs and ChatGPT even existed ... a lot of us somehow hallucinated the idea that GMail came from Google's 20% Rule. E.g. from 2013-08-16 : https://news.ycombinator.com/item?id=6223466


I see, thank you for debunking. But I think my general point still stands. You can progress by addressing a need, but true innovation requires adequate space.


I see why they do this, but man it almost feels like asking your boss for approval on where you go on vacation. Do people get dinged if their 20% time project doesn't pan out, or they lose interest later on?


Previously it could be anything you wanted. These days, you need formal approval. Google has changed a bit.


It has nothing to do with success. It's entirely for making sure some one besides the person doing the 20% agrees with the idea behind the project.


Lol. They’d be better off giving people the option to work 4 days if they also signed over right of first refusal for hobby projects.


Which misses the point of 20% imo; exploring space that would likely be missed in business as usual, encouraging creativity.


Google historically allowed employees to self-direct 20% of their working time (onto any google project I think).


I assume this means "20% of my work on go" aka 1 out of 5 work days working on golang


I guess I'm officially listed as a "staff engineer". I have been at this for 20 years, and I work with multiple teams in pretty different areas, like the kernel, some media/audio logic, security, database stuff... I end up alternating a lot between using Rust, Java, C++, C, Python and Go.

Coding assistant LLMs have changed how I work in a couple of ways:

1) They make it a lot easier to context switch between e.g. writing kernel code one day and a Pandas notebook the next, because you're no longer handicapped by slightly forgetting the idiosyncrasies of every single language. It's like having smart code search and documentation search built into the autocomplete.

2) They can do simple transformations of existing code really well, like generating a match expression from an enum. They can extrapolate the rest from 2-3 examples of something repetitive, like converting from Rust types into corresponding Arrow types.
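The enum-to-match transformation mentioned above is exactly the kind of mechanical, repetitive code a completion model extrapolates well from one or two examples. In Go terms (a hypothetical `Color` type, my own illustration), the pattern looks like:

```go
package main

import "fmt"

// Color is a hypothetical enum. After seeing one or two switch cases,
// a completion model can reliably fill in the remaining ones.
type Color int

const (
	Red Color = iota
	Green
	Blue
)

// String implements fmt.Stringer with the repetitive case-per-variant
// switch that is tedious to type by hand.
func (c Color) String() string {
	switch c {
	case Red:
		return "Red"
	case Green:
		return "Green"
	case Blue:
		return "Blue"
	default:
		return fmt.Sprintf("Color(%d)", int(c))
	}
}

func main() { fmt.Println(Green) } // prints Green
```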

I don't find the other use cases the author brings up realistic. The AI is terrible at code review and I have never seen it spot a logic error I missed. Asking the AI to explain how e.g. Unity works might feel nice, but the answers are at least 40% total bullshit and I think it's easier to just read the documentation.

I still get a lot of use out of Copilot. The speed boost and removal of friction lets me work on more stacks and, consequently, lead a much bigger span of related projects. Instead of explaining how to do something to a junior engineer, I can often just do it myself.

I don't understand how fresh grads can get use out of these things, though. Tools like Copilot need a lot of hand-holding. You can get them to follow simple instructions over a moderate amount of existing code, which works most of the time, or ask them to do something you don't exactly know how to do without looking it up, and then it's a crapshoot.

The main reason I get a lot of mileage out of Copilot is exactly because I have been doing this job for two decades and understand what's happening. People who are starting in the industry today, IMO, should be very judicious with how they use these tools, lest they end up with only a superficial knowledge of computing. Every project is a chance to learn, and by going all trial-and-error with a chatbot you're robbing yourself of that. (Not to mention the resulting code is almost certainly half-broken.)


This is pretty much how I use LLMs for coding. I already know what I want; I just don't want to type it out. I ask the LLM to do the typing for me, then I check it over, copy/paste it in, and make any adjustments or extensions.


This is the way.

Just last night I did a quick test on Cursor (first time trying it). Opened up my IRC bot project and asked it to "add relevant handlers for IRC messages".

It immediately recognised the pattern I had used before and added CTCP VERSION, KICK, INVITE and 433 (nickname already in use). It didn't try to add everything under the sun and just added those. Took me 20 seconds.


I mean this in the nicest way possible: this paragraph, with all the repetition and constant use of the word "expert", is completely unhinged. I really recommend re-reading what you write.


The term expert is used frequently in US government settings, per the US Office of Personnel Management: https://www.opm.gov/frequently-asked-questions/assessment-po...

Anyone above the lowest pay grades gets categorized as some type of "expert". As the gov tries to justify higher pay to keep up with inflation and compete with private job markets, more people become categorized as "experts" to fill higher pay grades. (For perspective, you can't afford to live independently in the DC metro area unless you're in the top 1/3 of pay grades) I can totally see how someone throwing the term around could appear unhinged to an outsider, but the reality is that the US government as a whole lives in it's own unhinged little world.


Anyone above the lowest pay grades gets categorized as some type of "expert".

I've read that at the F.B.I., anyone not pushing a broom gets the title "agent."


It was absolutely not like that, at least up until 10 years ago. Agents and the operational staff were totally separate.


I am not OP and I see nothing of the sort you are implicating. The writing is dry humor and funny. The expert repetition of the word "expert" for the obvious non-expert expert delivers a good bit of the story.


It's a bit dramatic, but "unhinged" is excessive. I imagine the repetition is a stylistic choice. It builds up the conclusion, and turns a one-line anecdote into a story.


It echoes the "cosmonauts just used a pencil!" copypasta.


(It's a bot)


"he logged into his remote profile"

Yes.


And his post history. It's always one sentence about who he is, then a paragraph of text of one of his many careers slightly related to OP


What you're describing is not how it works. Chiefly, the hiring pipelines are not set up for a single role, but a whole family of them. They are filled ahead of need. (Or were, at a time when this would've been taking place.)

There are other inaccuracies, but suffice it to say, this comment section is full of comments by people who have never been hiring managers talking about how hiring works.


I re-read the wiki page on the DSA multiple times. It does explicitly spell out that at least one "diverse" candidate must be on the slate for each role. Yes, candidates are considered for multiple roles as they go through the hiring pipeline, but that doesn't change the fact that it prohibits moving forward with a hire if the candidate pool for the role does not include a diverse candidate.

If this is wrong, but all means explain how the DSA actually works.


Nothing you just said contradicts the OP in any way, as those details don't change any aspect of the argument.


The phrase "confidently incorrect" comes to mind

