> You don't actually write code with e2b. You write technical specs and then collaborate with an AI agent.
If I want to change 1 character of a generated source file, can I just go do that or will I have to figure out how to prompt the change in natural language?
> Because there are many services (each with their own readiness criteria), a cold boot takes a long time until all services stabilize (the worst case I've seen was over 30 minutes). With hibernation they can resume where they left off within a couple of minutes.
This is exactly what we use hibernation for in conjunction with EC2 Warm Pools -- fast autoscaling of services that have long boot times. There's an argument to be made that fixing slow boots should be the "correct" solution, but in large enough organizations, hibernated instances are a convenient workaround to buy you some time to navigate the organizational dynamics (and technical debt) that lead to the slow boot times in the first place.
This is a wonderful article, architecture, and project. Can anyone from Clickhouse comment on any non-technical factors that allowed such a rapid pace of development, e.g. team size, structure, etc.?
A couple of things:
1. Hiring globally - this was huge, as we are a distributed company and can hire the best talent anywhere in the world.
2. Flat team structure, especially in the first year, with a concept of verticals (technical areas like autoscaling, security, the proxy layer, operations, etc.) and having one engineer own and drive each vertical.
3. Lots of cross-team collaboration and communication (across product, engineering, business, sales).
4. Lastly, as mentioned in the blog post, it was very important for us to stick to milestone-based sprints for faster development, and the product team helped a lot to prioritize the right features so that engineering could deliver.
All modern languages heavily borrow from each other’s latest iterations. In the case of Java though, playing catch up is by design since it’s intended to be a conservative/stable language.
I think they're referring to the Early Access Program, which was a one-time pre-launch Dutch auction in advance of general availability. Note that this is orthogonal to a domain being premium or not. During the last day of EAP, you might have seen the price of a non-premium domain be something like $130 + $12/yr, whereas for a ~$70/yr premium domain it'd have been something like $130 + $70/yr.
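The pricing split described above (a one-time EAP fee on top of the regular renewal) can be sketched as a quick calculation; the dollar figures are the illustrative examples from this comment, not official registry prices:

```python
# Illustrative EAP pricing math. The amounts are the "something like"
# figures quoted in the comment above, not actual registry prices.
EAP_FEE = 130  # hypothetical one-time Early Access Program fee (last day)

def first_year_cost(renewal_per_year: float, eap_fee: float = EAP_FEE) -> float:
    """First-year cost during EAP: one-time fee plus one year of renewal."""
    return eap_fee + renewal_per_year

non_premium = first_year_cost(12)  # non-premium: $130 + $12/yr -> $142
premium = first_year_cost(70)      # ~$70/yr premium: $130 + $70/yr -> $200

print(non_premium, premium)  # 142 200
```

In later years only the renewal applies, which is why the premium/non-premium distinction outlives the EAP fee.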
Regardless, forum.dev was registered in 2021 (you can confirm via WHOIS), which was long after the Early Access Program for .dev ended in 2019.
There's a hierarchical modeling paradigm (with accompanying tooling) called C4 that, while still boxes and lines, helps with the zoom-in/zoom-out nature of understanding systems: https://c4model.com/
> I still can't get over the banana-pants insanity of the first count... arguing that Twitter lied about its numbers for years specifically so that someone would buy it at an inflated price?
Not a lawyer, so could you explain why this is banana-pants insanity? There's malicious fraud (unlikely), and then there's the more likely case of under-investing in bot-detection and expunging efforts (e.g. "in favor of other priorities") to keep DAUs, and consequently valuations, high for a potential sale.
Twitter's methodology for checking bot accounts is clear, consistent, and has been detailed in its SEC filings for years. Anyone who cared could have easily double-checked it. Recalling from memory, all they did was take a random sample of accounts and have a human rate each account as a bot or not--back when Twitter had an API it would have been even easier to do this.
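The sampling approach described above (draw a random sample of accounts, have a human label each one bot/not-bot, report the sample proportion) can be sketched as a simple binomial estimate. The sample size and true bot rate below are made-up illustrations, not Twitter's actual figures:

```python
import math
import random

def estimate_bot_fraction(labels):
    """labels: 1 if a human rated the sampled account a bot, else 0.
    Returns (point estimate, approximate 95% confidence half-width)."""
    n = len(labels)
    p = sum(labels) / n
    # Normal approximation to the binomial for the margin of error.
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Simulate human-rating a random sample of 9,000 accounts drawn from a
# population whose true bot rate is 5% (both numbers are hypothetical).
random.seed(0)
sample = [1 if random.random() < 0.05 else 0 for _ in range(9000)]
p, moe = estimate_bot_fraction(sample)
print(f"estimated bot fraction: {p:.3f} +/- {moe:.3f}")
```

At that sample size the margin of error is well under one percentage point, which is why a modest human-rated sample suffices for a headline figure like "fewer than 5%."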
No. Nobody outside of Twitter can repeat the analysis.
First, only Twitter knows which users are active (the population being analyzed is the DAUs). People doing bot analysis from publicly available data define activity based on the account tweeting, which will probably skew heavily toward spam bots.
Second, the mDAU metric is the DAU count with known bots removed. Nobody outside of Twitter knows which active accounts Twitter excluded from the metric. Even if 50% of Twitter's DAUs are bots, as long as Twitter detects roughly 95% of them and marks them as non-monetized, the 5% number stands.
Third, nobody outside of Twitter can actually do a proper job of evaluating whether an account is a bot, since they have orders of magnitude more signals than a simple tweet stream / public profile information.
Twitter's methodology is far better than any publicly available bot detection would be, but the flip side is that it's not a replicable methodology.
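The arithmetic behind the second point above can be sketched as follows; the 50% bot share and the detection rates are hypothetical inputs, not measured values:

```python
def residual_bot_share(bot_fraction: float, detection_rate: float) -> float:
    """Fraction of mDAU that are undetected bots.

    bot_fraction:   share of all DAUs that are actually bots (hypothetical)
    detection_rate: share of those bots detected and excluded from mDAU
    """
    undetected = bot_fraction * (1 - detection_rate)
    mdau = 1 - bot_fraction * detection_rate  # DAUs minus detected bots
    return undetected / mdau

# Even with half of all DAUs being bots, a high detection rate keeps the
# residual bot share of mDAU small:
print(round(residual_bot_share(0.50, 0.90), 3))  # ~0.091, i.e. ~9%
print(round(residual_bot_share(0.50, 0.95), 3))  # ~0.048, i.e. under 5%
```

The point is that the reported figure measures bots that slipped past the filter relative to the filtered population, so it depends almost entirely on the detection rate, which only Twitter can evaluate.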
They must have some internal documentation though, and the process can be looked at during the trial, right?
So it cannot be replicated by third parties, and Musk does not know what he’s talking about (shocker!), but it can be verified a posteriori. And I assume it will be at some point.
I don't think it would be verified by somebody being given access to Twitter's internal data and redoing their process. At most both sides will trot out some expert witnesses to talk about whether the process / rating guide described in the internal docs are reasonable (what Twitter does doesn't need to be perfect, just not outright fraudulent). I look forward to finding what kind of a kook Musk finds as his expert.
Maybe Musk hopes to find something in discovery to discredit the process, e.g. evidence of the process not being followed, or of the numbers being tampered with.
> I don't think it would be verified by somebody being given access to Twitter's internal data and redoing their process.
Indeed. But surely they have at least internal audits. They seem like they take this stuff seriously.
> I look forward to finding what kind of a kook Musk finds as his expert
Indeed! If he picks his experts like his lawyers, this could be spectacular.
> e.g. evidence of the process not being followed, or of the numbers being tampered with
Yeah, he sounds like he’s hoping to find a smoking gun where some Twitter higher-up admits fudging the numbers. To be fair, if that is true, then Twitter deserves to be raked over the coals, even though it would not be sufficient to get Musk out of this mess.