It is not a tricky problem because it has a simple and obvious solution: do not filter or block usage just because the input includes a word like "gun".
I expected this to become less necessary over time as models got faster, but the opposite has happened. It feels like Claude has actually gotten slower (but in fairness does more per prompt), meaning worktrees are even more essential now.
It’s weirder than that. There is a surge of companies working on how to provide automated access to things like payments, email, signup flows, etc. to *Claw.
> There is no equivalent of the network effects seen at everything from Windows to Google Search to iOS to Instagram, where market share was self-reinforcing and no amount of money and effort was enough for someone else to break in or catch up.
The main direct network effect is that Google uses behavioral data from users to improve its search rankings (e.g. which links they click, whether someone returns quickly to Google after clicking a link, etc.).
Other factors that favor Google at scale:
- Sites often allow only the biggest search engines' crawlers and block every other bot to prevent scraping. This has been going on for more than a decade and is especially true now with AI crawlers everywhere.
- Google earns more per search than competitors thanks to its more mature ad network, which it can staff with large engineering teams to keep improving ad revenue. It can also serve more relevant ads simply because its ad network is bigger.
- Google can simply share costs (e.g. index maintenance) among many more users.
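The crawler allowlisting mentioned above typically happens in robots.txt. A sketch of a common pattern (the user-agent tokens are real crawler names, but this specific policy is a hypothetical example, and robots.txt is advisory rather than enforced):

```
# Allow only the dominant search crawlers
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# Block everyone else, including most AI crawlers
User-agent: *
Disallow: /
```

A new search engine crawling under its own user agent gets denied by the wildcard rule, so it cannot build a comparable index without negotiating access site by site.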
The argument is more like “humans always invent new things to want that are scarce”, and until AI literally replaces all human labour to the point of the marginal utility of a human being zero, this category will continue to exist.
Which is a fair and nuanced argument! It is also not the same as, "but historical data shows X," which is regurgitated so often without any context as to be appallingly ignorant.
We gotta take these bad actors at their word that they're creating AI meant to (eventually) wholesale replace human labor, and act accordingly. That doesn't mean burning down data centers or trying to shove AI back into Pandora's Box, so much as it means not letting them dictate societal trends or reforms necessary to ensure stability and survival through such an incredibly disruptive transformation, provided they're right.
Arguing against proactive reform with regard to AI is the same sort of ignorance I've heard about climate change for my entire life, and folks shouldn't stand for it. We have infinitely more to lose by doing nothing with a "wait and see" approach than if we proactively legislate around its definitive harms.
Time to make a deal with the kids - I’ll verify you for Instagram if you verify me for ChatGPT.