> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
> A Vercel employee got compromised via the breach of an AI platform called Context.ai that he was using.
> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.
> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.
> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.
Still no email blast from Vercel alerting users, which is concerning.
> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.
Blame it on AI ... trust me... it would have never happened if it wasn't for AI.
> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI.
Reads like the script of a hacker scene in CSI. "Quick, their mainframe is adapting faster than I can hack it. They must have a backdoor using AI gifs. Bleep bleep".
> Still no email blast from Vercel alerting users, which is concerning.
On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.
But on the other hand... It's Sunday. Unless you're tuned-in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.
> On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams
This is not how things work. In a crisis like this there is a war room with all stakeholders present. Doesn’t matter if it’s Sunday or 3am or Christmas.
And for this company specifically, Guillermo is not one to defer to comms or legal.
Has anyone actually gotten an email from Vercel confirming their secrets were accessed? Right now we're all operating under the hope (?) that since we haven't (yet?) gotten an email, we're not completely hosed.
Hope-based security should not be a thing. Did you rotate your secrets? Did you audit your platform for weird access patterns? Don’t sit waiting for that vercel email.
Most secrets are under your control, so sure, go ahead and rotate them, allowing the old version to continue being used in parallel with the new version for 30 minutes or so.
For other secrets, rotation involves getting a new secret from some upstream provider and having some services (users of that secret) fail while the secret they have in cache expires.
For example, if your secret is a Stripe key; generating a new key should invalidate the old one (not too sure, I don't use Stripe), at which point the services with the cached secret will fail until the expiry.
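The overlap-window idea above can be sketched like this (everything here is hypothetical: the in-memory store and the names; a real setup would do this inside your secret manager):

```python
# Sketch of rotation with an overlap window: accept both the old and new
# secret for a grace period so cached copies keep working, then retire the
# old one. All names here are hypothetical.
import secrets
import time

GRACE_SECONDS = 30 * 60  # 30-minute overlap, as suggested above

class SecretStore:
    def __init__(self, initial: str):
        self.current = initial
        self.previous = None
        self.rotated_at = None

    def rotate(self) -> str:
        """Issue a new secret; keep the old one valid for GRACE_SECONDS."""
        self.previous = self.current
        self.current = secrets.token_urlsafe(32)
        self.rotated_at = time.monotonic()
        return self.current

    def is_valid(self, candidate: str) -> bool:
        if candidate == self.current:
            return True
        in_grace = (self.rotated_at is not None
                    and time.monotonic() - self.rotated_at < GRACE_SECONDS)
        return in_grace and candidate == self.previous

store = SecretStore("old-secret")
new = store.rotate()
print(store.is_valid(new), store.is_valid("old-secret"))  # True True
```

The upstream-provider case in the next paragraph is exactly the one this sketch can't help with: if generating a new key immediately invalidates the old one, there is no grace window to offer.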
nope...I feel u, the "Hope-based security" is exactly what Vercel is forcing on its users right now by prioritizing social media over direct notification.
If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.
Sure, and the reason he is is because he DOES check stuff like this before sending it out.
Top leaders excel because they assemble a team around them that they trust. You can't do everything yourself; you need to delegate. And having people in those positions also means you shouldn't be acting alone, or those people will not stick around.
First of all, I would expect a top leader to be prepared for scenarios like this (including templates of customer communication).
And yeah, I would expect a CEO to have enough legal knowledge to handle such a situation (customer communication) on his own.
But I also have to mention that I'm not in the US. Not every country has the litigation system of the US, where you can basically destroy a company because you as the customer are too dumb to not spill hot coffee over yourself.
> you as the customer are too dumb to not spill hot coffee over yourself
presuming you're referring to the hot coffee lawsuit, maybe read the details of the story. McDonald's wasn't at all blameless, and the plaintiff had reasonable demands
You expect the CEO of a company to have the legal depth of knowledge AND knowledge of all their customers, contracts and SLAs to be able to wing a communication and not somehow trip over all of that? They also should understand every possible legal jurisdiction that could be affected? You realise even the head of their legal department (a HIGHLY competent lawyer) likely wouldn’t say they could do that without speaking to the key people in their team?
Should the CEO also bang out some dev estimates for the roadmap because, hey, they should be competent enough to do something like that. Why not submit the accounts for the year? How hard can it be, just reading a few lines off their Sage or Quickbooks accounts?
Let me be more clear on what I mean by “wing it,” because “having templates” doesn’t really cut it. Anyone can bang out a “we have a problem” template, so why does the CEO need to attach their name to it? Once you’re at the point of needing a CEO to communicate, you have a specific problem, with its own specific impacts that a single person can not be expected to have enough depth of knowledge in their brain to actually talk about without involving their domain experts, including legal, technical, whatever the situation needs.
> can not be expected to have enough depth of knowledge in their brain to actually talk about
What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?
Like what? Poor little CEO that doesn't understand anything about the world and how to run a company. Seems like helplessness is expected at every stage.
> What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?
Bit of a difference between “having depth of knowledge in their business” and “can speak off-the-cuff with the necessary accuracy to remain in compliance with every contract and legal jurisdiction their organisation is engaged in, without consulting the numerous domain experts they employ for just this purpose,” isn’t there.
Also, such a situation that requires the CEO’s direct attention has already gone FAR beyond your standard incidents where you can throw out a pre written statement. Do you want your organisation just cuffing it from the top down? Are you Elon Musk in disguise?
I'm going down with the ship over on X.com, the Everything App. There's a parcel of very important tech people running some playbook where posting to X.com is sufficient to be unimpeachable on communication, despite its rather beleaguered state and traffic.
The disaster plan says there is a process, but it has never been used and is probably outdated. Chances are the social media strategy requires posting on the Facebook and updating key Circles on Google+
The production network control plane must be completely isolated from the internet, with a separate computer for each network. The design I like best: admins have dedicated admin workstations that only ever connect to the admin network; on corporate workstations, you only ever connect to the internet from ephemeral VMs accessed via RDP or a similar protocol.
The actual app name would be good to have. Understandable they don’t want to throw them under the bus but it’s just delaying taking action by not revealing what app/service this was.
I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name) but it now says "Error 401: invalid_client; The OAuth client was not found." so it was probably deleted by the oauth client owner.
Yes. The OAuth ID is indisputable. It seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or… or…
Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.
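For the near-zero-effort lookup: one way is to build a consent URL for the published client ID and open it in a browser; Google answers "Error 401: invalid_client" if the OAuth client has been deleted, which matches what was observed above. A sketch (the redirect URI is a placeholder for illustration; the endpoint is Google's standard OAuth 2.0 authorization endpoint):

```python
# Build the Google OAuth consent URL for a suspect client ID so you can
# check in a browser whether the OAuth client still exists. The client ID
# below is the IOC published by Vercel; the redirect URI is a placeholder.
from urllib.parse import urlencode

GOOGLE_AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def consent_url(client_id: str) -> str:
    params = {
        "client_id": client_id,
        "redirect_uri": "https://example.com/callback",  # placeholder only
        "response_type": "code",
        "scope": "openid",
    }
    return f"{GOOGLE_AUTH_ENDPOINT}?{urlencode(params)}"

ioc = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
print(consent_url(ioc))
```

Note this only tells you whether the client still exists, not who owned it; Workspace admins should check their audit logs for grants to that client ID regardless.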
Idk exactly how to articulate my thoughts here, perhaps someone can chime in and help.
This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.
Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?
This isn't a web development concept. It's the unix philosophy of "write programs that do one thing and do it well" and interconnect them, being taken to the extremes that were never intended.
Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.
Instead of "programs that do one thing and do it well", "write programs which are designed to be used together" and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet, both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.
The Unix model wasn't simply do one thing and do it well.
It was also a different model on ownership and vetting of those focused tools. It might have been a model of having the single source tree of an old UNIX or BSD, where everything was managed as a coherent whole from grep to cc all the way to X11. Or it might have been the Linux distribution model of having dedicated packagers do the vetting to piecemeal packages into more of a bazaar, even going so far as to rip scripting language bundles into their component pieces as for Python and Perl.
But in both of those models you were put farther away from the third-party authors bringing software into the open-source (and proprietary) supply chains.
This led to a host of issues with getting new software to users and with a fractal explosion of different versions of software dependencies to potentially have to work around, which is one reason we saw the explosion of NPM and Cargo and the like. Especially once Docker made it easy to go straight from stitching an app together with NPM on your local dev seat to getting it deployed to prod.
But the issue isn't with focused tooling as much as it is with hewing more closely to the upstream who could potentially be subverted in a supply chain attack.
After all, it's not as if people never tried to do this with Linux distros (or even the Linux kernel itself -- see for instance https://linux.slashdot.org/story/03/11/06/058249/linux-kerne... ). But the inherent delay and indirection in that model helped make it less of a serious risk.
But even if you only use 1 NPM package instead of 100, if it's a big enough package you can assume it's going to be a large target for attacks.
> Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.
GP said it's about taking the Unix philosophy to extremes, you say something different.
Anything taken to extremes is bad; the key word there is "extremes". There is nothing wrong with the Unix philosophy, as "do one thing and do it well" never meant "thousands of dependencies over which you have no control, pulled in without review or thought".
I do not see what this has to do with Unix. The problem is not that programs interoperate or handle text streams, the problem is a) the supply chain issues in modern web-software (and thanks to Rust now system-level) development and b) that web applications do not run under user permissions but work for the user using token-based authentication schemes.
I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument because you know that authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet at the end, security vulnerabilities are abundant in both.
The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. stick the LLM in endless loops for every vertical you care about.
Pointing to failed experiments like the browser or compiler ones somehow doesn’t seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc. The goalpost is always one foot away.
I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy of you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self hosted service."
You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.
And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.
I used to believe that too, yet the dichotomy is what’s being pushed by what I called an “AI maximalist” and it’s what I was pushing against.
There is no “I wrote this code with some AI assistance” when you’re sending 2k line change PR after 8 minutes of me giving you permission on the repo. That’s the type of shit I’m dealing with and management is ecstatic at the pace and progress and the person just looks at you and say “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback”
It's such a bad faith argument, they basically make false equivalencies with LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.
Regarding the Unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].
In their Unix paper of 1974, Ritchie and Thompson quote the following design considerations:
- Make it easy to write, test, and run programs.
- Interactive use instead of batch processing.
- Economy and elegance of design due to size constraints ("salvation through suffering").
- Self-supporting system: all Unix software is maintained under Unix.
In what way does that correspond to "use dependencies" or "use AI tools"? This was then formalised later to
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.
This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.
Then there is the argument that "AI is just a higher level compiler". That is akin to me saying that "AI is just a higher level musical instrument" except it's not, because it functions completely differently to musical instruments and people operate them in a completely different way. The argument seems to be that since both of them produce music, in the same way both a compiler and LLM generate "code", they are equivalent. The overarching argument is that only outputs matter, except when they don't because the LLM produces flawed outputs, so really it's just that the outputs are equivalent in the abstract, if you ignore the concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!
It's not a hosting model, it's a fundamental failure of software design and systems engineering/architecture.
Imagine if cars were developed like websites, with your brakes depending on a live connection to a 3rd party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.
When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.
> [Google Threat Intelligence Group] attributes this activity to UNC1069, a financially motivated North Korea-nexus threat actor active since at least 2018, based on the use of WAVESHAPER.V2, an updated version of WAVESHAPER previously used by this threat actor. Further, analysis of infrastructure artifacts used in this attack shows overlaps with infrastructure used by UNC1069 in past activities.
> The car wash is only 50 meters from my house. I want to get my car washed. The car is at my house currently. Should I walk there or drive?
> Walk.
>
> At 50 meters, walking is faster than the full “start car → pull out → reposition → park” cycle, avoids cold-start wear, and you won’t risk arriving with dusty brakes/tires or water streaks from immediately driving afterward. Drive only if you can’t comfortably walk (mobility, heavy weather, you need to bring bulky items), or if the car wash explicitly requires you to drive it into a queue/track.
I wish they didn't call it "GPT-5.2 Thinking" when what the word "thinking" here means is "might choose to think if it wants to". I reproduce your result, but it does no reasoning; it eventually catches itself and ends with:
> Practical approach: walk over, check the setup/queue/payment, then walk back and drive the car the 50 meters when you’re ready to enter.
This file does not contain the terms of service of Slack. Rather, it contains the software licenses of third-party code that is embedded in Chromium, which in turn is embedded in the Slack app. Every dependency has its own license, which is why the file is so big (800× Apache-2.0, 237× MIT, 59× LGPL, and so on).
This is BSD Licence Hell, and for about 10 years I've been doing what minuscule part I can do to ameliorate it. Debian people are trying to do their parts, too.
That's how it is done in debian packages. The full text of each license is only mentioned once and given an identifier which is then used to link the license to the relevant copyright statements.
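The Debian-style dedup described above is easy to sketch (the package data here is made up for illustration): store each distinct license text once under an identifier, and give every dependency just its copyright line plus a reference.

```python
# Sketch of Debian-style license deduplication: one copy of each distinct
# license text, referenced by identifier from per-package copyright stanzas.
# The package entries are hypothetical.
deps = [  # (package, copyright holder, license id, full license text)
    ("left-pad", "Copyright (c) 2016 A. Author", "MIT", "MIT LICENSE TEXT..."),
    ("is-odd", "Copyright (c) 2017 B. Author", "MIT", "MIT LICENSE TEXT..."),
    ("zlib-bind", "Copyright (c) 2015 C. Author", "Zlib", "ZLIB LICENSE TEXT..."),
]

texts = {}        # identifier -> full text, stored exactly once
entries = []
for pkg, holder, lic_id, text in deps:
    texts.setdefault(lic_id, text)
    entries.append(f"{pkg}: {holder}\n  License: {lic_id}")

output = "\n".join(entries) + "\n\n" + "\n\n".join(
    f"--- {lic_id} ---\n{text}" for lic_id, text in texts.items()
)
print(output)
```

With 800 Apache-2.0 and 237 MIT dependencies, this collapses over a thousand repeated license bodies into two.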
The worst companies to work for are bad at differentiating risk especially ones that entertain the most remote legal risks. It seems to happen more with legal risks than security or technology risks.
I think it might be the case that licenses often include the authors’ names in the “this code is copyright of so-and-so” (as you can see, I Am Not A Lawyer) section, which might be considered part of the text of the license, thereby making it a requirement to include the full license text for each dependency.
It’s usually done in MIT-like licenses, which are quite short.
But I’d argue that replacing it with
Copyright (c) 207X Jonathan Fenimore
Licensed MIT, see the license text below
or even
Copyright (c) 207X Jonathan Fenimore
SPDX-License-Identifier: MIT
should be enough, but IANAL too.
---
In longer licenses like GPL or Apache, you are not supposed to change any copyright statement placeholders. For example, there’s this line in the GPL text:
Copyright (C) <year> <name of author>
But it’s a part of the “How to Apply These Terms to Your New Programs” section. You are supposed to copy it into your code and fill it out there instead.
---
Or they could just compress the license amalgamation! I think it would be a bit bigger but pretty reasonable, and their lawyers should be happy with this arrangement.
Not sure why Apple doesn't offer a compressed filesystem :p it makes writes a bit slower when compression fails, but otherwise the savings in I/O time often makes up for the increased processing on read and write.
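To put a rough number on the compression idea (synthetic stand-in text, not the real license file; repeated license bodies are close to a best case for gzip):

```python
# Rough illustration: a file full of repeated license texts compresses to a
# tiny fraction of its size. The license text here is a short stand-in.
import gzip

mit = ("Permission is hereby granted, free of charge, to any person "
       "obtaining a copy of this software and associated documentation...\n")
blob = (mit * 800).encode()   # 800 copies, like the 800x Apache-2.0 above
packed = gzip.compress(blob)
print(len(blob), len(packed))  # packed is a small fraction of blob
```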
I found it surprising that the slide in the article uses Calibri, a typeface that wasn’t publicly available at the time. The original discussion confirms that the slide in the article is a recreation of the original one:
> The slide in the article has the same text, but is a recreation of the original (The Calibri typeface used wasn't part of PowerPoint until 2007).
> The original slide can be seen in the full report linked in the article:
which is even better than what Bootstrap provides since you get type safety for component props (and more opportunities for customization than what Bootstrap allows)
Type safety is good, but the class soup inside the components is just abysmal. And honestly, more often than not I see it spilling out of the components and onto the page layouts.
Design tokens are the one Tailwind feature I genuinely like. Everything else – kill it with fire. Just use whatever scoped CSS your stack does (<style> in Svelte/Vue, Emotion in React?).
Class soup is what Tailwind is. It's terrible because it's just abbreviations of CSS attributes, which you still need to know because you'll inevitably fiddle around in devtools trying things out.
The only "smart" thing about it is leaning strongly on using rem.
how can it spill out onto the page? it's inline css. The (rare) inline selectors target only descendants.
Truth is that it's winning over because it works best with LLMs. Inline soup works better than looking for styling on different files in the context of the project, so here we are.
> how can it spill out onto the page? it's inline css.
Because people are lazy and don’t make a component for everything. And some people are even lazier and don’t make UI components at all. I’ve seen a project where there was a Button component and that’s it. Well, that was vibe coded probably so makes sense.
> Inline soup works better than looking for styling on different files in the context of the project, so here we are.
Only if you have a separate .css. If you do UI components with [insert your favourite CSS-in-JS solution], it stays in the same file. Maybe the proximity to markup within the file is important?
But no, Tailwind has been rising in popularity long before LLMs came along.
[I'm assuming styled-components was the preferred css-in-js solution until recently]
> If you do UI components with [insert your favourite CSS-in-JS solution], it stays in the same file.
I mean you can but "best practice" all around has been to put them separate and that's reflected in the majority of github repos in the training data of the LLMs.
> Maybe the proximity to markup within the file is important?
that's my assumption, yes. Seems to me LLMs work best when they output the relevant tokens right there with the markup instead of referencing some previous tokens even if relatively close.
styled components was the recommended solution in popular UI libraries like React MUI up until 2023 when chatgpt came out. Tailwind REALLY blew up with LLMs.
Yeah, I mean, with either Tailwind or styled-components you want to contain them to “UI components” – e.g. <Button> or <Navbar>. You could mix in some logic, and you’re right, looks like many styled-components folks will split it into two files then – but honestly I would just do in one.
Personally, I’m a Svelte fan, and I think they got it right – you just write <style> in your component and that’s it. I think Vue works in a similar way, too.
> Truth is that it's winning over because it works best with LLMs. Inline soup works better than looking for styling on different files in the context of the project, so here we are.
I saw this on another recent tailwind thread and I’m not sure I agree. LLM might be helping adoption, but I’ve been seeing and using tailwind for years without really going near LLM coding
> Indicators of compromise (IOCs)
> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
https://vercel.com/kb/bulletin/vercel-april-2026-security-in...