> The settings page for the feature in Workspace says that “you agree to let Google Workspace use your Workspace content and activity to personalize your experience across Workspace,” but according to Google, that does not mean handing over the content of your emails for AI training.
User-facing software is full of language like that these days, and I find it really frustrating, because it never answers the question attentive people actually have: will my emails get dumped into the next Gemini training run?
Maybe my brain has rotted from paying a bit more attention to privacy language than the average person, but in this case it seems fairly clear to me that using your activity “to personalize your experience across Workspace” does not mean using it to train Gemini. “Personalize” means it trains the per-user recommendation/categorization systems, like which emails get marked as "important" or not (the settings at https://mail.google.com/mail/u/0/#settings/inbox).
(In some very broad sense I guess critics could call this “training AI” as there's an ML system somewhere whose parameters associated with your account get updated, but I think we can all agree this is not what we think of as “training AI”, i.e. going into a cross-user dataset for training Gemini or whatever.)
(I guess what Google should do, and should have done years/decades ago, is create a fixed set of categories of how your data can be used (aggregate statistics, training Gemini, personalization…) and use the same language across products, legal, everything.)
I interpret it that way too, but my bigger problem is in explaining it to other people. I can't credibly point nervous users at that language and say "what this actually means is X", because there's too much vagueness and wiggle room built into the language that the companies publish.
Maybe circumstances have changed? I certainly trusted 2008 Google a lot more than Google in 2025. It's really amazing to see a company just throw trust and goodwill out the window, even worse to see that it pays.
It sounds like you have some serious knowledge gaps about AI. It's perfectly normal to use AI on a dataset without incorporating that dataset into training a new AI model. If you download a free model and run it offline on your data, your data doesn't get magically incorporated into the model on the site you originally downloaded it from.
This is the same thing, and it's backed by a contract and the threat of lawsuits from the many businesses using Google Workspace.
I thought their motto was "Don't be evil", until they shortened it for practicality to just "Be evil". It's certainly how they've been behaving in recent years.
It's not delisted. Anna's Archive is huge. The fact that Google participates in an entirely voluntary transparency log that gives you this information should illustrate where they stand on the issue of DMCA compliance. It isn't clear to me why online communities constantly invent fan fiction about evil enemies when organizations merely comply with a reasonable interpretation of the law of the land they are incorporated in.
Apparently the corpo doesn’t hesitate to remove it when it benefits the consumer, because “we just follow the law, citizen!” But when a change would benefit the consumer at the corpo's expense, it takes decades of lawsuits and multi-billion-dollar fines to make it happen.
Totally not evil, just business, comrade, amirite?
no one, and i mean no one, has to invent a history of evil corporations doing evil things. Climate change? Cigarettes? shit, let's go modern: CZ? SBF?
if it's not clear to you, may i suggest with the utmost respect that you read Surveillance Capitalism by Zuboff (a successor to Manufacturing Consent, in my humble opinion).
i guess my question is where you get the confidence or belief that these companies are doing anything BUT evil? how many of america's biggest companies' workers need food aid from the govt? look up what % of army grunts are food insecure. in the heart of empire.
Where on earth do you get this faith in companies from?
Publicly traded corporations are machines whose only lawful purpose is to make money. They are legally obligated to be sociopathic systems. They aren't evil like an axe murderer; they're evil like a gasoline fire. They may be useful when properly controlled, but they're certainly never worth defending in the way you seem to feel the need to.
>Publicly traded corporations are machines whose only lawful purpose is to make money.
Hey, so this isn't the case at all, publicly traded companies are under no lawful obligation to focus only on making money. Fiduciary duty does not mean this in any way. It's a common misconception whose perpetuation is harmful. Let's stop doing it.
> publicly traded companies are under no lawful obligation to focus only on making money
You changed the word "purpose" to "obligation"
I think there is a big difference between the two.
I would suggest a correction to both of these statements: the only purpose isn't to make money but rather to grow valuation (though that's the same thing most of the time).
They'd rather lose on profits, or even burn them, if that meant their valuation could grow faster.
But sooner or later profits have to catch up to the valuation (I hope), and in an efficient economy only profitable companies should have valuations built on top of them.
Publicly traded corporations get money from people indirectly, via retirement funds, or directly, via individual investment. The whole idea becomes that the return to a person retiring is not the company's profits but its valuation. Of course, they aren't under a legal obligation to be profitable, but I would consider them almost under a legal obligation to maintain their valuation; otherwise they risk being delisted, or dropped from indexes like the S&P 500.
As an example, in my limited knowledge, take Costco: some rich guy might tell them to raise the price of their hot dog from $1.50 to $3–4 for insanely more profit. Yet they have their own philosophy, and that philosophy is partially the reason for their valuation as well.
When a rumour spread that Costco was raising the price of its hot dogs, one might have expected the stock price to rise on the prospect of more "profit", but instead it dropped, by a huge margin if I remember correctly.
Most companies are investing in AI simply because it's driving their valuations up like crazy.
I don't think it's an exaggeration to say that companies are willing to do anything for their valuations.
Facebook would try to detect whether girls were insecure about their bodies and target advertisements at them. That is, in my opinion, predatory behaviour by the corporation. For what purpose? For the valuation.
It's not "a system". Each company is run by different people, is under different pressures, and makes different decisions. Treating them as one monolith is silly.
I find it sad that a lot of foundational open-source software is created/maintained by trading/crypto/money laundering companies. But OTOH it's great that they at least contribute _something_ to the society!
Also, Longbridge, who seem to be using this GPUI component library for their Longbridge Pro [1] app, look to me like a regular online brokerage company. What is your issue with that?
Bitcoin ethos (as in, the original 'banks are broken, let's fix this') is kinda similar to the hacker ethos ('this thing/program is broken, let's fix this'), so maybe this shouldn't be too surprising? Short term pain for long term gain etc.
(disclaimer: I think bitcoin is dumb, but the market disagrees)
> (disclaimer: I think bitcoin is dumb, but the market disagrees)
I believe that market is quite peculiar. Most people know that bitcoin is a bit dumb, but they buy it regardless because they believe they'll profit from it as part of a larger group. A very interesting social experiment mixed with market manipulation. It's less about the stability, independence, or usability of the currency and more about the opportunity for profit.
> I find it sad that a lot of foundational open-source software is created/maintained by trading/crypto/money laundering companies. But OTOH it's great that they at least contribute _something_ to the society!
React is unfortunately becoming more foundational than this project, and with it maintained by a company that was involved in the Rohingya genocide in Myanmar, the Cambridge Analytica scandal and so on.
This makes crypto / trading companies look like angels compared to what Facebook has done even though they made and open sourced React.
To that end, I don't see anything morally wrong with the former camp of companies (trading/crypto) supporting open source, since they didn't participate in and amplify an actual genocide.
I don't believe that to be the case. The ecosystem where this is most true would be Rust, which has a lot of crypto use, and maybe OCaml via Jane Street. But for the most part I have to doubt this.
"Think about where most breakthrough technologies actually come from. Not from some genius in a garage (though that’s a nice story). They come from decades of basic research funded by institutions that operate independently of political pressure. The internet you’re using to read this? That was DARPA. The GPS in your phone? Military research. The algorithms powering AI? University research.
Here’s how the ecosystem actually works: government funds basic research that has no obvious commercial application. Universities and research institutions build on that work, training graduate students who become the next generation of researchers and entrepreneurs. Some of those students go on to start companies that turn basic research into products. Others stay in academia and continue pushing the boundaries of what’s possible.
Yes, eventually the private markets and companies take over the commercialization, but so much of the core infrastructure of innovation comes from elsewhere.
And none of this happens overnight. The internet took decades to go from ARPANET to the web. GPS took years of satellite launches and signal processing advances. The machine learning techniques powering today’s AI boom are built on decades of research in statistics, computer science, and neuroscience."
This level of nit-picking is disheartening...