Hacker News | mgartner's comments

It seems like you’re defining “scaling” as growth of a workload to the point that it cannot be handled by a single-server DB.

But for any service without a constant workload (I’d wager almost all services besides prototypes that get no users), you’re still going to have to scale that one machine by replacing it with a bigger one. When you have 50 users you’re not going to be paying for some yy.24xlarge; you’ll start with something much more affordable. When the service grows to 50,000 users, you certainly won’t be at “Facebook scale”, but that t3.small isn’t going to cut it. And should your service ever decline, it’d be nice to scale the machine down to save on costs.

At a previous job, we spent many human hours continually ratcheting up the size of our Postgres machine a few times a year. Not only did this take non-trivial engineering hours and mind-space, it also caused maintenance downtime due to the limitations of traditional DBMSs.

Self-managed CockroachDB eliminates the downtime needed to scale. To handle a more intense workload, add machines. If you want to vertically scale each machine, that can be done without downtime too.

CockroachDB Serverless takes this a step further by scaling up and down to suit the demands of a highly dynamic workload, while minimizing costs.

Maybe what looks like a mega-scale obsession to you is actually a bunch of people trying to avoid the common headaches of managing a moderately sized, dynamic service.


Cool stuff! I've linked to your blog in the pg_flame README.


Cool, thanks!


I think the text display is great too! I built pg_flame mostly to help understand the relative timing of each step. Your brain can compare the size of each bar in a flamegraph virtually instantly, while comparing a bunch of actual time numbers scattered throughout the text output takes some time.

tl;dr: use both as needed


Ya, I ran into problems with CTE InitPlan steps. However, I did do some extra work to have them display in the most correct way I could think of.

I’ll add a CTE demo with an explanation.


I built it as a CLI because of the reasons you mention, but also because I find that workflow most convenient for me and I didn’t want to need an internet connection.


If you run the psql client from your dev machine, there’s no need for step 2. Also, in practice I don’t save the output JSON to a file; I just pipe it directly to pg_flame. The README breaks it up into multiple steps, but maybe I should make it clearer that that’s not necessary.
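As a sketch, the whole workflow can collapse into one pipeline (the database name and query here are placeholders, and this assumes pg_flame accepts the EXPLAIN JSON on stdin):

```shell
# Run EXPLAIN ANALYZE with JSON output and pipe it straight to pg_flame,
# no intermediate file. -qAt suppresses headers/alignment so only the
# JSON reaches stdout.
psql mydb -qAt -c "EXPLAIN (ANALYZE, FORMAT JSON) SELECT * FROM users" \
  | pg_flame
```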

But in general I do agree that simplifying this type of tooling is a good thing and something to strive for.


Impossible with Aurora Serverless. Only VPC IPs can connect.


Would a bastion host help here?


Yes, that's how we do things.

SSH tunnel through the bastion host, forwarding local port 3333 to the DB, then psql to localhost:3333.
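Spelled out, that might look like the following (hostnames, user names, and the default Postgres port 5432 are all placeholders):

```shell
# Forward local port 3333 through the bastion to the database host.
# -N: don't run a remote command, just hold the tunnel open.
ssh -N -L 3333:db.internal.example.com:5432 user@bastionhost

# In another terminal, connect through the tunnel:
psql -h localhost -p 3333 -U dbuser mydb
```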


You're totally right. In our case, our test infrastructure spins up a new Postgres container for every build. Because it's not a long-running database, bloat is not a concern.


Litmus is an Elixir library built at Lob for validating user input. We use it to validate that API parameters are in the correct form. If you’ve ever used something like Joi in JavaScript, Litmus is similar, albeit not as feature-rich yet.


This is an evolving collection of useful SQL for checking in on some important PG metrics. It’s recently been a valuable resource at Lob for monitoring our Postgres usage and health.

Heroku's pg-extras inspired this tool. pg_insights is useful for PG instances outside Heroku's hosted DB service. https://github.com/heroku/heroku-pg-extras
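For flavor, a typical health-check query in this vein (illustrative only, not necessarily one of the pg_insights queries; the database name is a placeholder) computes the buffer cache hit rate from Postgres's built-in statistics views:

```shell
# Percentage of block reads served from the buffer cache, across all
# databases, via the pg_stat_database statistics view.
psql mydb -c "
  SELECT round(sum(blks_hit) * 100.0
               / nullif(sum(blks_hit) + sum(blks_read), 0), 2)
         AS cache_hit_pct
  FROM pg_stat_database;"
```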


Based on https://en.wikipedia.org/wiki/HTTP_persistent_connection#HTT... it sounds like your statement is correct.

But the fact that the underlying HTTP connection is kept alive by default doesn't necessarily mean that the client is going to actually re-use that connection for multiple HTTP requests. And, in fact, in Node.js the connection is not reused by default.

