My understanding of these prefixed IDs is that translation happens at the API boundary - what you store in the database is the unprefixed ULID/UUID in an efficient binary column.
Then, whenever the API includes an id in a response, it adds the appropriate prefix. When receiving an id in a request, validate that the prefix is correct and then strip the prefix.
That gets you all the advantages of prefixed IDs but still keeps all 128 bits (or however many bits, you don't have to stick to UUIDs) for the actual id.
Or, to put it another way, there's no need to store the prefix in the column because it will be identical for all rows.
EDIT: this is not to knock your work - quite the opposite. If you do have a use case where you need rows in the same table to have a dynamic prefix, or the client takes the IDs and needs to put them in their own database, then your solution has a lot of advantages. I think what I'm getting at is that if you're using prefixes, there's a worthwhile discussion to be had about where you apply the prefix.
My understanding from Stripe was that they're stored with the prefix as text in the DB, but I might be wrong.
As for doing the translation at the API boundary, my only gripe is that it's likely to be error-prone: every dev needs to remember to add/strip the correct prefix in every route. Of course you can add some middleware that is context-aware, but there will still be cases (e.g. live querying while talking to a non-tech team) where not having to translate back and forth would be great!
Anyway, appreciate the comment and definitely agree that for most teams just using a UUID and adding a bit of code is a more obvious route than using a new ID format someone just made up!
As someone who has done that exact sort of stripping at a previous job, it's not that bad. If you mess it up your tests immediately fail, and it's just so easy to have a library that does it for you.
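Such a library can be very small. Here's a minimal sketch of what the add/strip helpers described above might look like (all names are hypothetical, not any particular library's API), using plain UUIDs for the stored value:

```python
import uuid

def add_prefix(prefix: str, raw_id: uuid.UUID) -> str:
    """Render a stored UUID as a prefixed public ID, e.g. 'user_<uuid>'.

    Used at the API boundary when serializing responses; the DB column
    only ever holds the bare UUID.
    """
    return f"{prefix}_{raw_id}"

def strip_prefix(prefix: str, public_id: str) -> uuid.UUID:
    """Validate the expected prefix on an incoming ID and return the bare UUID.

    Raises ValueError if the prefix doesn't match, which catches a client
    sending e.g. an order ID where a user ID was expected.
    """
    expected = prefix + "_"
    if not public_id.startswith(expected):
        raise ValueError(f"expected a {prefix!r} ID, got {public_id!r}")
    return uuid.UUID(public_id[len(expected):])
```

With helpers like these, route handlers only deal in typed calls like `strip_prefix("user", ...)`, so a wrong or missing prefix fails loudly in tests rather than silently querying the wrong table.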
You could take the final checkpoint from that page and run it for some additional steps and see if it improves? You could always publish the final checkpoint and training curves - someone might find it useful.
Honestly I think Google needs to be broken up. It's not a novel idea but the more I think about it the more I like it.
So, Google becomes two orgs: Google indexing and Google search. Google indexing must offer its services to all search providers equally without preference to Google search. Now we can have competition in results ranking and monetisation, while 'google indexing' must compete on providing the most valuable signals for separating out spam.
It doesn't solve the problem directly (as others have noted, inbound links are no longer as strong a signal as they used to be) but maybe it gives us the building blocks to do so.
Perhaps also competition in the indexing space would mean that no single SEO strategy works everywhere, disincentivising 'SEO' in favour of what we actually want, which is quality content.
I’m afraid the problem is not indexing, but monetization. An alternative Google search will not be profitable (especially if it has to pay a share to Google indexing) because no one will buy ads there - it's a challenge even for Bing.
The hope though is that by splitting indexing that puts search providers on an equal footing in terms of results quality (at least initially). Advertisers go to Google because users go to Google. But users go to Google because despite recent quality regressions, Google still gives consistently better results.
If search providers could at least match Google quality 'by default' that might help break the stranglehold wherein people like the GP are at the mercy of the whims of a single org
How sure are you about that? I find them to be subpar when compared to Bing, especially for technical search topics (mostly, PHP, Go, and C related searches).
Not a bad idea, but there are lots of details that need to be filled in and, you know, the devil is in the details.
Google's index is so large that it's physically very hard to transfer out while it's being updated. Bandwidth cost is non-negligible outside Google's data centres.
In terms of data structures, I can imagine the index is arranged in a way that makes Google Search easy - and competitors' searches hard.
Replication can be for HA, not just for scale. All depends on your business requirements.
Also replication can be good for other operational reasons, such as zero downtime major version upgrades. Again depends on the business need/expectations.
Same; as it stands, you the user are legally liable for the full bill unless Netlify graciously forgives it.
Even in the OP's case, they didn't (still charging $5k!).
If there were an option to cap billing, or at least some legally binding limit on liability, then I could countenance using Netlify.
Until then, it's just not worth the risk.
I'm also interested to know this. I have a couple of static sites running on the free tier for friends/family and now I'm planning on moving them all to a VPS as soon as I can.
It is beyond ridiculous that serverless providers don't offer a way to cap spending. The idea that it might cause your site to go offline is a complete non-argument. That's what I _want_ to happen. I want to be able to say: sure, I'm happy to sustain 10x traffic for a few hours, and maybe 3x sustained over days, but after that take it offline. I don't want infinitely scaling infra precisely because of the infinitely scaling costs.
1,656 rivers is still not a lot of rivers to account for 80% of all ocean plastic, though, considering the actual total number of rivers in the world.
Because the key bit of data there is that ocean-borne plastic is not coming primarily from beaches or city storm run-off (at least in modernized areas) in an even distribution, but is very obviously a product of local regulation (which in turn suggests that other measures - like foreign aid, or imposing standards of behavior on local companies with foreign suppliers/subsidiaries - would likely solve the problem).
The issue is that no one's truly freed themselves from the "individual sacrifice" narrative of environmental remediation: the desire is to accuse people on an individual basis of ruining the world, and to require all solutions to involve individual consequences for their "sins". There's much less enthusiasm for the reality, which is that other than some slight changes in tax allocation we might be able to just solve the entire problem and get a slightly improved standard of living anyway (i.e. people in wealthier cities generally like their waterways and beaches not to be clogged with trash).
The worst thing that I've found with Teams is that the latency in a video call is _just slightly_ too high (and noticeably higher than Google Meet and Zoom). I find this latency to be absolutely critical in keeping the flow of conversation and preventing people from accidentally talking over each other. After a forced switch to Teams when the whole company was WFH, the detrimental impact on every meeting was extremely obvious. It's the most basic technical aspect of the product; it doesn't matter how great the rest is if you don't nail it. That's why I hate Teams.
Just letting you know it's not you - this is a well-known, long-standing issue with Keycloak. Typically users see a significant performance cliff at around 300-400 realms. While one realm is not necessarily the same as one tenant in Keycloak, it does make it a significantly larger headache to support multi-tenant setups with SSO integrations in a single realm.
I'm afraid I can't give you more details than that; we just moved on from Keycloak at that point.