Hacker News | qianli_cs's comments

That password is only used by the GHA to start a local Postgres Docker container (https://github.com/dbos-inc/dbos-transact-golang/blob/main/c...), which is not accessible from outside.


I think it was likely caused by the cache trying to compare the tag with Docker Hub: https://docs.docker.com/docker-hub/image-library/mirror/#wha...

> "When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content. Otherwise, it fetches and caches the latest content."

So if the authentication service is down, it might also affect the caching service.


Even cloud vendors can’t get distributed systems design right.


Those are great questions!

For versioning, we recommend keeping each version running until all workflows on that version are done. It's similar to a blue-green deployment: each process is tagged with one version, and all workflows in it share that version. You can list pending/enqueued workflows on the old version (via the UI or the list_workflow programmatic API), and once that list drains, you can shut down the old processes. DBOS Cloud automates this, and we'll add more guidance for self-hosting.
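The drain-then-shutdown check is simple to sketch. This is illustrative only: the `list_pending_workflows` stub below simulates the real listing API (whose exact signature I'm not assuming), and the simulated state is invented for the example:

    import time

    # Stand-in for the real listing API: returns pending/enqueued workflow IDs
    # still running on a given application version. Here it simulates workflows
    # finishing by removing one each time it's polled.
    def list_pending_workflows(version, _state={"v1": ["wf-1", "wf-2"]}):
        pending = _state.get(version, [])
        if pending:
            pending.pop()
        return list(pending)

    def wait_for_drain(version, poll_interval=0.01):
        """Block until no pending/enqueued workflows remain on `version`."""
        while list_pending_workflows(version):
            time.sleep(poll_interval)
        return True  # now safe to shut down processes pinned to this version
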

For bugfixes, DBOS supports programmatic forking and other workflow management tools [1]. We deliberately don't support code patching because it's fragile and hard to test. For example, patches can pile up on long-running workflows and make debugging painful.

The main limit is the database, whose size you control. DBOS writes workflow inputs, step outputs, and workflow outputs to it. There's no step limit beyond disk space. Postgres and SQLite allow up to 1 GB per field, but keeping inputs/outputs under ~2 MB helps performance. We'll add clearer guidelines to the docs.

Thanks again for all the thoughtful questions!

[1] https://docs.dbos.dev/python/reference/contexts#fork_workflo...


Thanks for sharing your insights! You nailed the key tradeoffs of most durable workflow systems. The callback-style programming model is exactly the pain point we aim to solve with DBOS.

Instead of forcing you into a custom async runtime, DBOS lets you keep writing normal functions (this is an example in Python):

    @DBOS.workflow()
    def do_thing(foo):
        bar = do_work(foo)  # do_work is any ordinary function you define
        return bar

    # You can still call the workflow function like this:
    result = do_thing(foo_input)
Under the hood, DBOS checkpoints inputs/outputs so it can recover after failure, but you don't have to restructure your code around callbacks. In Python and Java we use decorators/annotations so registration feels natural, while in Go/TypeScript there's a lightweight one-time registration step. Either way, you keep the synchronous call style you'd expect.

On top of that, DBOS also supports running workflows asynchronously or through queues, so you can start with a simple function call and later scale out to async/queued execution without changing your code. That's what the article was leading into.
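To make the "same function, different execution modes" point concrete, here's a plain-Python analogy using only the standard library (this shows the calling pattern, not the DBOS API itself):

    from concurrent.futures import ThreadPoolExecutor

    def do_thing(foo):
        # An ordinary function: no callbacks, no custom runtime.
        return foo.upper()

    # 1. Direct synchronous call.
    result = do_thing("hello")

    # 2. The same function dispatched asynchronously -- no restructuring needed.
    with ThreadPoolExecutor() as pool:
        handle = pool.submit(do_thing, "world")
        async_result = handle.result()
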


I think your use of Python decorators is a big usability improvement, with the point being that the glue is still there. You mention that in Go there's "a lightweight one-time registration step," but it seems that in addition to calling the registration steps, you also have to use `dbos.CallAsStep()` when calling sub-steps of a workflow, which is almost identical to the Temporal Golang SDK, which has you call `workflow.ExecuteActivity()`.

Can you explain what makes DBOS better to use in Golang vs Temporal?


:wave: Hey there, I'm working on the Go library and just wanted to confirm your suspicion:

"since Golang doesn't have decorators in the same way Python does, we still have to have code doing the kind of "manual callback" style I mentioned"

That's exactly right, specifically for steps. We considered other ways to wrap the workflow calls (so you don't have to do dbos.RunWorkflow(yourFunction)), but they got in the way of providing compile-time type checking.

As Qian said, under the hood the Golang SDK is an embedded orchestration package that just requires Postgres to automate state management.

For example, check the RunWorkflow implementation: https://github.com/dbos-inc/dbos-transact-golang/blob/0afae2...

It does all the durability logic in-line with your code and doesn't rely on an external service.

Thanks for taking the time to share your insights! This was one of the most interesting HN comments I've seen in a while :)


The main advantage is the same architectural benefit DBOS provides in other languages: you only need to deploy your application, so there's no separate coordinator to run. All functionality (checkpointing, durable queues, notification/signaling, etc) is built directly into the Go package on top of the database.


I've been building an integration [1] with Pydantic AI and the experience has been great. Questions usually get answered within a few hours, and the team is super responsive and supportive for external contributors. The public API is easy to extend for new functionality (in my case, durable agents).

Its agent model feels similar to OpenAI's: flexible and dynamic without needing to predefine a DAG. Execution is automatically traced and can be exported to Logfire, which makes observability pretty smooth too. Looking forward to their upcoming V1 release.

Shameless plug: I've been working on a DBOS [2] integration into Pydantic-AI as a lightweight durable agent solution.

[1] https://github.com/pydantic/pydantic-ai/pull/2638

[2] https://github.com/dbos-inc/dbos-transact-py


Yeah, we plan to add more languages. DBOS currently supports Python and TypeScript, and Go and Java will be released soon. We're previewing DBOS Java at our user group meeting on August 28: https://lu.ma/8rqv5o5z You're welcome to join us! We'd love to hear your feedback.

We welcome community contributions to the open source repos.


We heard you! Working on improvements based on user feedback. Stay tuned :)


Yup, some features are timeless and deserve a re-intro every now and then. SKIP LOCKED is definitely one of them.


with a nice NOWAIT when appropriate
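For readers who haven't used these clauses, here's a sketch of both claim patterns as SQL strings (the table and column names are made up for illustration):

    # Claim one pending task, silently skipping rows other workers have locked.
    claim_sql = """
    SELECT id, payload
    FROM tasks
    WHERE status = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1;
    """

    # NOWAIT variant: instead of skipping, fail immediately with a
    # lock_not_available error if the row is already locked -- useful when
    # you target a specific row and want to fail fast rather than block.
    nowait_sql = """
    SELECT id, payload
    FROM tasks
    WHERE id = %(task_id)s
    FOR UPDATE NOWAIT;
    """
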


Managing complex scheduled workflows at scale comes with a lot of nuances. This is exactly why we're building DBOS (shameless plug! https://github.com/dbos-inc), which provides durable cron jobs and exactly-once workflow triggering. Since it's just a library on top of Postgres, it doesn't require a centralized scheduler (well, think of Postgres as the coordinator).

One challenge is guaranteeing exactly-once processing across software upgrades. DBOS uses the cron-scheduled time as an idempotency key and tags each workflow execution with a version. We also use database transactions to guard against conflicting concurrent updates.
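A minimal sketch of the idempotency-key idea, using in-memory SQLite in place of Postgres (in Postgres the insert would be `INSERT ... ON CONFLICT DO NOTHING`; the schema here is invented for illustration, not DBOS's actual tables):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE scheduled_runs (
            workflow_name TEXT NOT NULL,
            scheduled_at  TEXT NOT NULL,  -- cron-scheduled time = idempotency key
            app_version   TEXT NOT NULL,
            PRIMARY KEY (workflow_name, scheduled_at)
        )
    """)

    def trigger_once(workflow_name, scheduled_at, version):
        """Return True iff this (workflow, scheduled time) hasn't been claimed yet."""
        cur = conn.execute(
            "INSERT OR IGNORE INTO scheduled_runs VALUES (?, ?, ?)",
            (workflow_name, scheduled_at, version),
        )
        return cur.rowcount == 1  # 0 means another process already claimed it

    first = trigger_once("nightly-report", "2025-01-01T00:00:00Z", "v42")
    dup = trigger_once("nightly-report", "2025-01-01T00:00:00Z", "v43")

Note that the duplicate is rejected even though it carries a newer version tag: the scheduled time alone decides whether the run has happened.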


My colleague did some internal benchmarking and found that LISTEN/NOTIFY performs well under low to moderate load, but doesn't scale well with a large number of listeners. Our findings were pretty consistent with this blog post.

(Shameless plug [1]) I'm working on DBOS, where we implemented durable workflows and queues on top of Postgres. For queues, we use FOR UPDATE SKIP LOCKED for task dispatch, combined with exponential backoff and jitter to reduce contention under high load when many workers are polling the same table.
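The backoff-with-jitter part is independent of the SQL and easy to show. The constants below are arbitrary placeholders, not DBOS's actual tuning:

    import random

    def next_delay(attempt, base=0.1, cap=5.0):
        """Exponential backoff with full jitter: sleep a random amount in
        [0, min(cap, base * 2**attempt)] before the next poll, so workers
        that collide don't retry in lockstep."""
        return random.uniform(0, min(cap, base * (2 ** attempt)))

A polling worker would sleep for `next_delay(attempt)` after each empty `SKIP LOCKED` claim, resetting `attempt` to 0 once it successfully dequeues a task.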

Would love to hear feedback from you and others building similar systems.

[1] https://github.com/dbos-inc/dbos-transact-py


Nice! I'm using DBOS and am a little active on the discord. I was just wondering how y'all handled this under the hood. Glad to hear I don't have to worry much about this issue


Why not read the WAL?


We considered using WAL for change tracking in DBOS, but it requires careful setup and maintenance of replication slots, which may lead to unbounded disk growth if misconfigured. Since DBOS is designed to bolt onto users' existing Postgres instances (we don't manage their data), we chose a simpler, less intrusive approach that doesn't require a replication setup.

Plus, for queues, it's so much easier to leverage database constraints and transactions to implement global concurrency limits, rate limits, and deduplication.
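For instance, a global concurrency limit falls out of a counting query inside the claiming transaction. This is a single-process SQLite sketch with an invented schema; in Postgres with many workers you'd combine the same idea with row locking (e.g. `FOR UPDATE SKIP LOCKED`) to make the count-then-claim race-safe:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT NOT NULL)")
    conn.executemany("INSERT INTO tasks (status) VALUES (?)", [("pending",)] * 5)

    MAX_CONCURRENCY = 2

    def try_claim():
        """Claim one pending task unless the global limit is already reached."""
        with conn:  # count + claim happen in one transaction
            (active,) = conn.execute(
                "SELECT COUNT(*) FROM tasks WHERE status = 'running'"
            ).fetchone()
            if active >= MAX_CONCURRENCY:
                return None
            row = conn.execute(
                "SELECT id FROM tasks WHERE status = 'pending' LIMIT 1"
            ).fetchone()
            if row is None:
                return None
            conn.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
            return row[0]

    claims = [try_claim() for _ in range(4)]  # later claims hit the limit
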

