Hacker News | kentonv's comments

Cloudflare's Durable Objects puts your Worker and SQLite DB on the same physical server (and lets you easily spawn millions of these pairs around the world).

D1 is a simplified wrapper around DO, but D1 does not put your DB on the same machine. You need to use DO directly to get local DBs.

https://developers.cloudflare.com/durable-objects/
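To make the "Worker and SQLite on the same machine" idea concrete, here's a rough sketch of the shape of such an object. The names (`SqlExec`, `VisitCounter`, `fakeSql`) are illustrative stand-ins, not the real platform API; the SQL layer is injected so the sketch runs anywhere.

```typescript
// Hedged sketch: the general shape of an object that keeps its state in a
// local SQLite database. Because the SQLite file lives on the same machine
// as this code, each call below is a local read/write, not a network hop.

// Minimal stand-in interface: "run a SQL statement, get rows back".
type SqlExec = (query: string, ...params: unknown[]) => { rows: Record<string, unknown>[] };

// One of these exists per logical database -- e.g. one per user or document.
class VisitCounter {
  constructor(private sql: SqlExec) {
    this.sql("CREATE TABLE IF NOT EXISTS visits (page TEXT)");
  }
  record(page: string): void {
    this.sql("INSERT INTO visits (page) VALUES (?)", page);
  }
  total(): number {
    return this.sql("SELECT COUNT(*) AS c FROM visits").rows[0].c as number;
  }
}

// Tiny in-memory stand-in for the SQL layer so the sketch is runnable
// outside the platform; it only understands the statements used above.
function fakeSql(): SqlExec {
  const visits: string[] = [];
  return (query, ...params) => {
    if (query.startsWith("INSERT")) visits.push(params[0] as string);
    if (query.startsWith("SELECT")) return { rows: [{ c: visits.length }] };
    return { rows: [] };
  };
}

const counter = new VisitCounter(fakeSql());
counter.record("/home");
counter.record("/about");
```

On the real platform the SQL interface comes from the object's storage API rather than being passed in, but the access pattern is the same: plain synchronous-feeling local queries.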

(I am the lead engineer for Cloudflare Workers.)


Very cool, thanks for the response!

Just a heads up, the naming might be a little confusing vs:

https://github.com/cloudflare/kumo


Hey, sorry this has been a confusing transition.

You can actually still create new Pages projects, but it is de-emphasized in the UI in favor of Workers.

We've done a lot of work to make static sites on Workers just as easy to configure as Pages was. Have you tried it lately? Would love to hear what aspects you feel are still more complicated than Pages.

Just to be clear, we will not break existing sites using Pages. We will either auto-migrate them to Workers once we have all the tools in place to do so, or we'll keep supporting Pages forever.


Nice to hear! I haven’t tried it since last summer. What I tried to do was deploy Jekyll from GitHub to Pages, but the only option I saw was to deploy to Workers, and I couldn't find the documentation for doing that on Workers. Maybe this has improved since then. I'll try again.

To clarify, there are two approaches you can take to handle large-scale databases on Cloudflare:

With Durable Objects[0], you can create and orchestrate millions of SQLite databases that live directly on Cloudflare's edge machines. The 10GB limit applies to one database, but the idea is you design your system to split data into many small databases, e.g. one per user, or even one per document. Since the database is literally a local file on the machine hosting the Durable Object that talks to it, access is ridiculously fast. Scalability of any one database is limited, but you can create an unlimited number of them.
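Here's a rough sketch of that sharding pattern (illustrative names, not the real namespace binding API): each user id maps deterministically to a stable object name, and the platform routes every call for that name to wherever that object, and its database, lives.

```typescript
// Hedged sketch of "one small database per user". The interfaces here are
// stand-ins; the real platform's namespace binding API differs in detail,
// but the core idea is the same: a stable name per user selects the shard.

interface UserDb {
  query(sql: string): unknown[];
}

// Stand-in for a namespace binding that hands back the object for a name.
interface Namespace {
  getByName(name: string): UserDb;
}

// All of a user's data lives in their own small database, so a request
// only ever touches one shard and stays under the per-database size limit.
function dbForUser(ns: Namespace, userId: string): UserDb {
  // Deterministic name -> always the same object for the same user.
  return ns.getByName(`user:${userId}`);
}

// In-memory stand-in namespace so the sketch is runnable anywhere.
function fakeNamespace(): Namespace {
  const objects = new Map<string, UserDb>();
  return {
    getByName(name) {
      if (!objects.has(name)) {
        objects.set(name, { query: () => [`rows for ${name}`] });
      }
      return objects.get(name)!;
    },
  };
}

const ns = fakeNamespace();
```

The design consequence: cross-user queries require fan-out at the application layer, which is the trade you make for unlimited horizontal scale.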

If you really need a single big database, you can use Hyperdrive[1], which provides connection management and caching over plain old Postgres, MySQL, etc. Cloudflare itself doesn't host the database in this case but there are many database providers you can use it with.

[0] https://developers.cloudflare.com/durable-objects/

[1] https://developers.cloudflare.com/hyperdrive/

(I'm the lead engineer on Cloudflare Workers.)


I can assure you that nobody at Cloudflare ever thought that open sourcing workerd would be a way to get "free labor from the commons". On the contrary, we are wary of external contributions. The Workers Runtime is a complicated codebase, and we invest a lot of time into getting new team members up to speed on how to write code correctly. We cannot make such an investment in external contributors who are only there to land one PR. Usually, a one-time contributor trying to do something complicated will waste more of the team's time than they save.

But in practice, we almost never receive major contributions from outside the team. Which is fine. We're happy just to have our team working in the open.

The reasons we open sourced it are:

1. Support a realistic local dev environment (without binary blobs).

2. Provide an off-ramp for customers concerned about lock-in. Yes, really. We have big customers that demand this, and we have had big customers that actually did move off Cloudflare by switching to workerd on their own servers. It makes business sense for us to support this because otherwise we couldn't win those big customers in the first place.


Cloudflare Workers is big on capabilities.

The recently released Dynamic Workers directly provides an API for capability-based sandboxing: https://developers.cloudflare.com/dynamic-workers/usage/bind...

But the platform has used caps internally all along. Cloudflare makes heavy use of Cap'n Proto (https://capnproto.org/), a capability-based RPC protocol, and recently released Cap'n Web (https://capnweb.dev/), a JavaScript-oriented version of the same idea. The "Cap'n" in both is short for "Capabilities and". (Dynamic Workers sandboxing is based around Cap'n Web capabilities.)

Most successful sandboxes use capabilities, though it's not often something you hear about. Android's IPC system, Binder, is a capability system. And Chrome has a capability-based IPC system called "Mojo".

Capabilities really shine when used for sandboxing, but here's a blog post I wrote that tries to explain the benefits beyond sandboxing: https://blog.cloudflare.com/workers-environment-live-object-...
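The core idea can be shown in plain TypeScript (my illustration here, not Cap'n Web's actual API): a capability is just an object reference. You grant authority by handing the reference over, and you attenuate it by wrapping it in a weaker one.

```typescript
// Illustration of the capability idea in plain TypeScript (not Cap'n Web's
// actual API): authority is an object reference. If you hold the reference,
// you can use it; if you don't, there is nothing to name and attack.

interface Directory {
  read(name: string): string;
  write(name: string, data: string): void;
}

function makeDirectory(): Directory {
  const files = new Map<string, string>();
  return {
    read: (name) => files.get(name) ?? "",
    write: (name, data) => void files.set(name, data),
  };
}

// Attenuation: wrap a full capability to produce a weaker one. The sandbox
// receives only the read-only wrapper, so no ambient permission-checking is
// needed -- it simply has no write method to call.
function readOnly(dir: Directory): Pick<Directory, "read"> {
  return { read: (name) => dir.read(name) };
}

const dir = makeDirectory();
dir.write("config.txt", "hello");
const sandboxView = readOnly(dir);
```

This is why capabilities compose so well with sandboxing: the security policy is the object graph you chose to pass in, not a separate ACL to keep in sync.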

(I am the lead developer of Cloudflare Workers, and the creator of Cap'n Proto and Cap'n Web.)


When using Dynamic Workers, you generally don't run the AI harness inside the Dynamic Worker itself, but rather as a regular worker. But your harness would have a tool call that's like "executeCode" which runs code in the dynamic worker.

You could certainly set it up to allow the AI to import arbitrary npm modules if you want. We even offer a library to help with that:

https://www.npmjs.com/package/@cloudflare/worker-bundler
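A rough sketch of that harness shape (all names invented for illustration): the AI loop is ordinary trusted code, and one of its tools ships code into an isolated evaluator. The evaluator below is a local stand-in using `new Function()`, which provides no isolation at all; on the real platform it would be a freshly loaded sandboxed worker.

```typescript
// Hedged sketch of the harness pattern: the agent loop runs as a regular
// worker, and an "executeCode" tool hands code to an isolated evaluator.
// new Function() here is only a runnable stand-in -- it is NOT a sandbox.

interface Tool {
  name: string;
  run(input: string): string;
}

function makeExecuteCodeTool(): Tool {
  return {
    name: "executeCode",
    run(code: string): string {
      // Stand-in evaluator: evaluate an expression, return it as a string.
      const result = new Function(`"use strict"; return (${code});`)();
      return String(result);
    },
  };
}

// The harness dispatches model-requested tool calls by name.
function handleToolCall(tools: Tool[], name: string, input: string): string {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.run(input);
}

const tools = [makeExecuteCodeTool()];
```

The point of the split is that the untrusted, model-written code runs behind the tool boundary, while the loop that talks to the model keeps full trust.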


Dynamic Workers don't have a built-in filesystem, but you can give them access to one.

What you would do is give the Worker a TypeScript RPC interface that lets it read the files -- which you implement in your own Worker. To give it fast access, you might consider using a Durable Object. Download the data into the Durable Object's local SQLite database, then create an RPC interface to that, and pass it off to the Dynamic Worker running on the same machine.

See also this experimental package from Sunil that's exploring what the Dynamic Worker equivalent of a shell and a filesystem might be:

https://www.npmjs.com/package/@cloudflare/shell
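A sketch of the approach described above (interface names are mine, purely illustrative): you expose files to the sandboxed worker through a narrow RPC-style interface that you implement yourself. The `Map` below stands in for the Durable Object's local SQLite database.

```typescript
// Hedged sketch: a narrow read-only "filesystem" interface handed to the
// sandboxed worker. The Map stands in for the Durable Object's local
// SQLite database, which would sit on the same machine as the sandbox.

interface FileReader {
  listFiles(prefix: string): string[];
  readFile(path: string): string | null;
}

function makeFileReader(files: Map<string, string>): FileReader {
  return {
    listFiles: (prefix) =>
      [...files.keys()].filter((p) => p.startsWith(prefix)),
    readFile: (path) => files.get(path) ?? null,
  };
}

// The data would be downloaded into the Durable Object's storage once,
// then served to the co-located sandbox with local-disk latency.
const store = new Map<string, string>([
  ["src/index.ts", "export const answer = 42;"],
  ["README.md", "# demo"],
]);
const reader = makeFileReader(store);
```

Because the interface is just an object you construct, it's also a capability: the sandbox can read exactly the files you loaded, and nothing else.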


Cloudflare Workers was actually pushing for web standards on the server side several months before Deno was announced. :)

Though Ryan of course had a lot more clout from day 1 than I did.


(I love Cloudflare Workers, and thanks for that!) But credit where it's due: Deno's push for server-side web standards also helped the general ecosystem.


Since a lot of people here aren't familiar with the private credit situation, here's my understanding, which comes almost entirely from reading Money Stuff, a daily column by Matt Levine. If you are a tech person who wants to learn about finance, I recommend it! It's a lot more entertaining than most finance industry reporting.

"Private credit" is an idea that has been hot in finance for the last several years, originating from the great financial crisis (GFC). After the GFC, regulations made it very hard for banks to make business loans with any kind of risk anymore. So instead, new non-bank institutions stepped in to make loans to businesses. These "private credit" institutions raise money from investors, and lend it to businesses.

The investors are usually institutions that are OK with locking up their money long-term, like insurance companies and pension funds. This all seems a lot safer than having banks make the loans: banks get their funding from depositors, who are allowed to withdraw their deposits at any time, so a bank really needs to hold liquid assets in case of a bank run, and corporate loans are not very liquid. Insurance companies and pension funds have much more predictability about when they will actually need their money back, so they can safely put it into private credit with long horizons.

It's not quite so clean, though.

It's actually common for banks to lend money directly to private credit lenders, who then lend it out to companies. But when this happens, the bank typically lends only a fraction of the total and arranges to get paid back first, so it's significantly less risky than lending directly to the companies. Of course, the non-bank investors get higher returns on their riskier investment.
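To make the "paid back first" mechanics concrete, here's a toy waterfall with invented numbers (mine, purely illustrative): a $100 loan pool funded by a $30 senior bank loan and $70 of junior investor money. Recoveries repay the bank first, so losses hit the junior investors before touching the bank.

```typescript
// Toy illustration (invented numbers): whatever the loan pool recovers is
// paid to the senior lender (the bank) first; only the remainder goes to
// the junior investors, who therefore absorb losses first.

function waterfall(recovered: number, senior: number, junior: number) {
  const toSenior = Math.min(recovered, senior);
  const toJunior = Math.min(recovered - toSenior, junior);
  return { toSenior, toJunior };
}

// Pool recovers only $60 of the $100 lent out:
const r = waterfall(60, 30, 70);
// The bank gets its full $30 back; the juniors get $30 of their $70.
```

So the bank takes a $40 pool loss as a $0 loss, while the juniors eat all of it, which is exactly why the juniors demand higher returns.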

And the returns have been pretty good. Or were. With the banks suddenly retreating from this space, there was a lot of money to be made filling the gap, and private credit got a reputation for delivering really good returns while being more predictable than the stock market.

But this meant it got hot. Really hot.

It got so hot that there were more people wanting to lend money than there were qualified borrowers. When that happens, naturally standards start to degrade.

And then interest rates went up, after having been near-zero for a very long time.

And now a lot of borrowers are struggling to pay back their loans on time. And the lenders need to pay back their investors, so sometimes they compromise by using money from new investors to pay back the old ones. It's getting precarious.

Meanwhile a lot of private credit institutions are hoping to start accepting retail investors. Not because retail investors have a lot of money and are gullible, no no no. 401(k) plans are by definition locked up for many years, so obviously should be perfect for making private credit investments! Also those 401(k)s today are all being dumped into index funds which have almost zero fees, whereas private credit funds have high fees. Wait, that's not the reason though!

But just as they are getting to the point of finding ways to accept retail investors, it's looking like the returns might not be so great anymore. Could be a crisis brewing. Even if the banks are pretty safe, it's not great if pensions and insurance companies lose a lot of money...

