I think it's reasonable to knock someone for concluding a wildly incorrect theory (e.g., spontaneous generation from mud) even when they lacked evidence pointing to the correct answer. Aristotle did this a lot. The correct position in those scenarios is one of uncertainty.
They've graduated 5,000+ companies, so some fraud is hard to avoid, especially with young, hungry founders willing to do anything to succeed. Honestly, it's a pretty good track record that there have only been a handful of companies like this.
This is a teachable moment for YC. Maybe the cost of investing in a bad apple is a lot more than half a mil; maybe there's a brand or reputational cost, even in places you'd least expect. These two seemingly had everything laid out for them by investors. Did they even come up with compliance themselves? Who told them to work on that? Now look what happened. It's like everyone can't get far enough fast enough now. And what about their lead investor, Insight Partners? What's that conversation like?
It's all just very strange and stupid, ironically coming from the startup posing as auditors.
Every single technical auditor I've dealt with has been majorly incompetent and wanted to do things that would decrease security. And these were not cheap, bottom-of-the-barrel companies, but the big "industry leaders".
I don't understand exactly what is being banned. I have a vibe-coded context manager + chat thread UI that I use to manage multiple Claude Code CLI sessions simultaneously. Is this allowed? If not, how would it get identified versus other CLI usage? How is this different from OpenClaw?
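For what it's worth, the core of that kind of multi-session wrapper is just concurrent subprocess management. A minimal sketch (the CLI command here is a placeholder — `echo` stands in for whatever binary and flags your actual tool invokes):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_session(cmd, prompt):
    # Each "session" is just one CLI process; capture and return its output.
    result = subprocess.run(cmd + [prompt], capture_output=True, text=True)
    return result.stdout.strip()

def run_parallel(cmd, prompts):
    # Fan prompts out across concurrent CLI processes, one per prompt.
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        return list(pool.map(lambda p: run_session(cmd, p), prompts))

if __name__ == "__main__":
    # Placeholder command; substitute your real CLI invocation here.
    outputs = run_parallel(["echo"], ["first session", "second session"])
    print(outputs)
```

From the provider's side, traffic like this is largely indistinguishable from any other CLI usage, which is presumably why the question of what exactly gets flagged comes up.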
OpenClaw is too easy to set up and way too messy and context-heavy. They don't have to catch you; they just have to catch the guy at the market giving out free modified V8 F-150s while Anthropic is selling gas subscriptions in town.
It's not banned; it will just be charged as extra usage instead of going toward the sub when using a setup token. You can allocate money to extra usage, or create an Anthropic API key and use that.
OpenClaw! You just need to slightly change the definition of "good code": the point of code is ultimately to bring in money. The guy got hired by OpenAI, and who gives a shit what happens to the "project" next? Mission accomplished.
There are examples littered around threads on HN. What happens is when people provide the examples, the goalposts get moved. So people have stopped bothering to reply to these demands.
I'm guessing a lot of the high-multiplier productivity boost comes from a cycle of generating lots of code, having bug reports detected in (or hallucinated from) that code, and then generating even more code to close out those reports, and so on.
On the other hand, other humans may have intrinsic interests outside of your control that may lead them to harm you despite the mechanisms you mentioned, whereas bots by default don't have such motives.