Hacker News: gabrielruttner's comments

Hey - great writeup!

Curious how you're planning to handle Postgres versions going forward - will v16 be a hard requirement for new OSS deployments? What's your strategy for managing existing deployments on older versions?


Thank you! We generally try to stay in line with the supported versions of the technologies we use: Postgres, NodeJS, Redis, etc.

For Postgres in particular, part of the motivation for doing this upgrade when we did (besides the very meaningful performance improvements) was that Postgres 12 will no longer be supported by Aurora after February 25, 2025.[0]

Officially, Medplum v3 (our current newest version) requires Postgres 12+. We're just about to release v4 which will support 13-17 based on the support dates published by Postgres[1]. We're still finalizing more formal documentation on our versioning strategy going forward.

[0] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQ...

[1] https://www.postgresql.org/support/versioning/
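The "supports 13-17" window can be checked mechanically against a server's version. A minimal sketch, assuming Postgres's integer `server_version_num` convention (e.g. 160002 for 16.2); the function name and bounds are illustrative, not Medplum's actual code:

```python
# Sketch: validate a Postgres server_version_num against a supported range.
# The 13..17 bounds mirror the v4 support window described above.

SUPPORTED_MAJOR_MIN = 13
SUPPORTED_MAJOR_MAX = 17

def is_supported(server_version_num: int) -> bool:
    """server_version_num is Postgres's integer form, e.g. 160002 for 16.2."""
    major = server_version_num // 10000  # major version is the leading digits
    return SUPPORTED_MAJOR_MIN <= major <= SUPPORTED_MAJOR_MAX

print(is_supported(160002))  # True  (16.2 is in the window)
print(is_supported(120019))  # False (12.x is past end of support)
```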


Curious if opening the circuit prevents the fire?


Folks are using us both for long-lived tasks traditionally considered background jobs and for near-real-time work. Our latency is acceptable for requests where users may still be waiting, such as LLM/GPU inference. Some concrete examples:

1. Repository/document ingestion and indexing fanout for applications like code generation or legal tech LLM agents

2. Orchestrating cloud deployment pipelines

3. Web scraping and post-processing

4. GPU inference jobs requiring multiple steps, compute classes, or batches
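The jobs above all follow the same queue-and-worker shape. A minimal, self-contained sketch of that pattern using only the Python standard library; none of these names come from Hatchet's SDK:

```python
# Illustrative only: a generic queue-and-worker pattern for the kinds of
# jobs listed above (ingestion fanout, scraping, multi-step inference).
import queue
import threading

jobs: "queue.Queue[tuple[str, dict]]" = queue.Queue()
results: dict = {}

def enqueue(task_name: str, payload: dict) -> None:
    """Producer side: hand a task off to the queue and return immediately."""
    jobs.put((task_name, payload))

def worker() -> None:
    """Consumer side: pull tasks and run them outside the request path."""
    while True:
        task_name, payload = jobs.get()
        if task_name == "scrape":
            # a real worker would fetch the URL, then post-process the result
            results[payload["url"]] = "scraped"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
enqueue("scrape", {"url": "https://example.com"})
jobs.join()  # block until the worker has drained the queue
print(results)  # {'https://example.com': 'scraped'}
```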


We've heard (and experienced) the paradigm/terminology problem, and we're focusing heavily on devex. It's common to hear that only one engineer on a team has experience with, or knowledge of, how things are architected with Temporal, which creates silos and makes it very difficult to debug when things go wrong.

With Hatchet, the starting point is a single function call that gets enqueued according to a configuration you've set, respecting different fairness and concurrency constraints. Durable workflows can be built on top of that, but the entire platform should feel intuitive and familiar to anyone working in the codebase.
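One way to picture a concurrency constraint is a per-tenant cap on simultaneous tasks. A hedged sketch under that assumption, using stdlib semaphores; the names are illustrative, not Hatchet's API:

```python
# Sketch: at most MAX_CONCURRENT_PER_TENANT tasks run at once per tenant,
# approximating the fairness/concurrency constraints described above.
import threading
from collections import defaultdict

MAX_CONCURRENT_PER_TENANT = 2

# one semaphore per tenant, created lazily on first use
_tenant_slots = defaultdict(lambda: threading.Semaphore(MAX_CONCURRENT_PER_TENANT))

def run_task(tenant_id: str, fn, *args):
    """Run fn(*args), blocking if the tenant already has too many tasks in flight."""
    with _tenant_slots[tenant_id]:
        return fn(*args)

print(run_task("acme", lambda x: x * 2, 21))  # 42
```

A real scheduler would apply the cap across worker processes rather than threads in one process, but the shape of the constraint is the same.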


Hi Gabe, also Gabe here. Yes, this is a core use case we're continuing to develop. Prior to Hatchet I spent some time as a contractor building LLM agents, where I was frustrated with the state of tooling for orchestration and the lock-in of some of these platforms.

To that end, we're building Hatchet to orchestrate agents with common features like streaming from running workers to the frontend [1] and rate limiting [2], without imposing too many opinions on core application logic.

[1] https://docs.hatchet.run/home/features/streaming

[2] https://docs.hatchet.run/home/features/rate-limits
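Rate limiting of the kind linked above is commonly implemented as a token bucket. A small stdlib sketch of that idea; this is an assumption about the general technique, not Hatchet's implementation:

```python
# Token-bucket rate limiter sketch: tokens refill at `rate` per second up to
# `capacity`; each allowed call consumes one token.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
print(bucket.allow())  # True  (burst)
print(bucket.allow())  # True  (burst)
print(bucket.allow())  # False (bucket drained, refills at 1 token/sec)
```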

