Hacker News | mayank's comments

> You don't actually write code with e2b. You write technical specs and then collaborate with an AI agent.

If I want to change 1 character of a generated source file, can I just go do that or will I have to figure out how to prompt the change in natural language?


I'm sure there would be a way to edit the artifacts... otherwise this would be a constant exercise in frustration!


I feel like this would be like trying to edit a confluence page.

It's in the frustration valley between WYSIWYG and just writing code. The worst of both worlds.


So, not too different from typical real world coding tasks today.


> Because there are many services (each with their own readiness criteria), a cold boot takes a long time until all services stabilize (worst case I've seen was over 30 minutes). With hibernation they can resume where they started off within a couple of minutes.

This is exactly what we use hibernation for in conjunction with EC2 Warm Pools -- fast autoscaling of services that have long boot times. There's an argument to be made that fixing slow boots should be the "correct" solution, but in large enough organizations, hibernated instances are a convenient workaround to buy you some time to navigate the organizational dynamics (and technical debt) that lead to the slow boot times in the first place.


This is a wonderful article, architecture, and project. Can anyone from Clickhouse comment on any non-technical factors that allowed such a rapid pace of development, e.g. team size, structure, etc.?


A couple of things:

1. Hiring the best engineers globally: this was huge, as we are a distributed company and can hire the best talent anywhere in the world.

2. A flat team structure, especially in the first year, with a concept of verticals (technical areas like autoscaling, security, the proxy layer, operations, etc.) and one engineer owning and driving each vertical.

3. A lot of cross-team collaboration and communication (across product, engineering, business, and sales).

4. Lastly, as mentioned in the blog post, it was very important for us to stick to milestone-based sprints for faster development, and the product team helped a lot to prioritize the right features so that engineering could deliver.


Passion and experience


All modern languages heavily borrow from each other’s latest iterations. In the case of Java though, playing catch up is by design since it’s intended to be a conservative/stable language.


Wow....confirmed!

`{"errors":[{"message":"Your current API plan does not include access to this endpoint, please see https://developer.twitter.com/en/docs/twitter-api for more information","code":467}]}`


Can you comment on a reply below that claims there was promotional pricing in 2019 at launch?

https://news.ycombinator.com/item?id=33929132


I think they're referring to the Early Access Program, which was a one-time pre-launch Dutch auction in advance of general availability. Note that that is orthogonal to a domain being premium or not. During the last day of EAP, you might have seen the price of a non-premium domain be something like $130+12/yr. Whereas for a ~$70/yr premium domain, it'd have been something like $130+70/yr.

Regardless, forum.dev was registered in 2021 (you can confirm via WHOIS), which was long after the Early Access Program for .dev ended in 2019.


You stated here that they paid $850 to initially register the domain name.

https://news.ycombinator.com/item?id=33929354

Do you have proof of the initial price paid for registration or is this in some part speculation?


According to https://news.ycombinator.com/item?id=33929472 they checked in their system that the domain was billed as premium upon initial registration.


Exactly! Hashcash was proposed in 1997 (https://en.m.wikipedia.org/wiki/Hashcash): similar mechanics for a different use case.
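For readers unfamiliar with hashcash, the core mechanic can be sketched in a few lines. This is a simplified illustration, not the full hashcash v1 stamp format (which also includes date, extension, and random fields); the resource string and difficulty here are made up for the demo:

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 20) -> str:
    """Brute-force a counter so SHA-1(stamp) has `bits` leading zero bits.

    Finding the counter costs ~2**bits hashes on average; verifying it
    costs one hash. That asymmetry is the whole anti-spam idea.
    """
    for counter in count():
        stamp = f"1:{bits}:{resource}:{counter}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

# Low difficulty so the demo finishes quickly; real stamps use ~20 bits.
stamp = mint("alice@example.com", bits=12)
```

The recipient re-hashes the stamp once and checks the leading zero bits, which makes verification essentially free while minting stays expensive.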


> - /organizations/:id

> - /blogs/:id

The pragmatic, large-company-only counterpoint is the narrow edge case where:

- :id must be human-readable for "SEO reasons"

- there are many competing organizations and blogs to the point where there may be a name collision.

Although in that case, I'd still suggest:

/:organization-name/:blog-name
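The collision-avoidance argument for the scoped form can be sketched with a toy resolver. All names and data here are hypothetical; the point is only that blog names which collide globally stay unique once keyed by organization:

```python
# Hypothetical in-memory store: "engineering" collides as a global blog
# name, but the (organization, blog) pair is unique.
blogs = {
    ("acme", "engineering"): {"id": 1},
    ("globex", "engineering"): {"id": 2},  # same blog name, different org
}

def resolve(path: str):
    """Resolve a /:organization-name/:blog-name path to a blog record."""
    org, blog = path.strip("/").split("/")
    return blogs.get((org, blog))
```

With a flat `/blogs/:name` scheme, both records above would fight over the same human-readable URL; scoping by organization sidesteps that without giving up readable slugs.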


What about /blogs/:id?organization=:id, doesn’t it solve the issue?


What if the company changes name?


What if you're using ids and two organisations merge? Whatever solution works for organisation ids will work for organisation names.


There's a hierarchical modeling paradigm/tools called C4 that (while being boxes and lines) helps with the zoom-in/zoom-out nature of understanding systems: https://c4model.com/


Thanks, I have seen that but never really looked into it. Have you found it useful?


> I still can't get over the banana-pants insanity of the first count... arguing that Twitter lied about its numbers for years specifically so that someone would buy it at an inflated price?

Not a lawyer, so could you explain why this is banana-pants insanity? There's malicious fraud (unlikely) and then there's the more likely case of under-investing in bot-detection and expunging efforts, e.g. "in favor of other priorities", to keep DAUs and subsequently valuations high for a potential sale.


Twitter's methodology for checking bot accounts is clear, consistent, and has been detailed in its SEC filings for years. Anyone who cared could have easily double-checked it. Recalling from memory, all they did was take a random sample of accounts and have a human rate each account as a bot or not. Back when Twitter had an API, it would have been even easier to do this.
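The sampling approach described here is straightforward to sketch. The numbers below are invented for illustration (a true bot rate of 5% and a sample of 1,000 accounts, roughly echoing Twitter's disclosed sample sizes), and the normal-approximation interval is just the textbook estimate:

```python
import math
import random

random.seed(0)

# Hypothetical population: 1 marks a bot, 0 a human; true bot rate is 5%.
population = [1] * 5 + [0] * 95

# Draw a random sample and estimate the bot fraction from human ratings.
sample = [random.choice(population) for _ in range(1000)]
p_hat = sum(sample) / len(sample)

# 95% normal-approximation confidence interval for the estimated fraction.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
```

Even a modest sample pins the estimate down to within a couple of percentage points, which is why the methodology itself is easy to audit on paper; the hard part, as the reply below notes, is that outsiders cannot sample the same population.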


No. Nobody outside of Twitter can repeat the analysis.

First, only Twitter knows which users are active (the population being analyzed is the DAUs). People doing bot analysis from publicly available data define activity based on the account tweeting, which will probably skew heavily toward spam bots.

Second, the mDAU metric is the DAUs with known bots having been removed. Nobody outside of Twitter knows which active accounts were excluded by Twitter from the metric. Even if 50% of Twitter DAUs are bots, as long as Twitter detects 90% of them as bots and marks them as non-monetized, the 5% number stands.

Third, nobody outside of Twitter can actually do a proper job of evaluating whether an account is a bot, since they have orders of magnitude more signals than a simple tweet stream / public profile information.

Twitter's methodology is far better than any publicly available bot detection would be, but the flip side is that it's not a replicable methodology.


> No. Nobody outside of Twitter can repeat the analysis.

This is true of most material statements provided by companies in every industry:

- Revenue? Trust the company, I cannot independently verify from outside the company.

- Retail same-store sales comparable? Have to trust the company's numbers.

- Headcount? I have to trust their number again here.

- Expenses? I have no way to verify this unless I work at the company, in a very senior position.

Outside verification of data is not a concern relevant to corporate disclosures.


They must have some internal documentation though, and the process can be looked at during the trial, right?

So it cannot be replicated by third parties, and Musk does not know what he’s talking about (shocker!), but it can be verified a posteriori. And I assume it will be at some point.


I don't think it would be verified by somebody being given access to Twitter's internal data and redoing their process. At most, both sides will trot out expert witnesses to talk about whether the process and rating guide described in the internal docs are reasonable (what Twitter does doesn't need to be perfect, just not outright fraudulent). I look forward to finding out what kind of kook Musk finds as his expert.

Maybe Musk hopes to find something in discovery to discredit the process, e.g. evidence of the process not being followed, or of the numbers being tampered with.


> I don't think it would be verified by somebody being given access to Twitter's internal data and redoing their process.

Indeed. But surely they have at least internal audits. They seem like they take this stuff seriously.

> I look forward to finding what kind of a kook Musk finds as his expert

Indeed! If he picks his experts like his lawyers, this could be spectacular.

> e.g. evidence of the process not being followed, or of the numbers being tampered with

Yeah, he sounds like he’s hoping to find a smoking gun where some Twitter higher-up admits fudging the numbers. To be fair, if that is true, then Twitter deserves to be raked over the coals, even though it would not be sufficient to get Musk out of this mess.

