I'm a big fan of Postgres.
I've used it for a while and prefer it (partly due to experience) over other options.
There is a lot that it _can_ do, especially once you take FDWs, extensions, procedural languages, etc. into account.
However, it's not an "always better" situation. These other specialized services excel in ways that Postgres cannot at the moment. _If_ you have a small system, _and_ the feature set you need overlaps with Postgres's abilities, _and_ you don't expect to outgrow those two properties, then it could definitely make sense to use Postgres.
Let's imagine that adding all of that pressure to the same system _didn't_ impact other parts of it (you wouldn't want a surge in Kafka-style write traffic causing latency on your basic CRUD routes, right?).
Redis vs Postgres unlogged tables:
* Redis has a bunch of battle-tested algorithms that ease implementation of common patterns. Need a Bloom filter, or key expiration with a TTL? You don't have to write extra code for those.
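To make that concrete, here is roughly what you'd be signing up to hand-roll without Redis: a toy Bloom filter sketch in Python (sizes and hash counts here are arbitrary illustrations, not tuned values). With Redis, the RedisBloom module gives you this via `BF.ADD`/`BF.EXISTS`, and TTL expiry is a single `SET key val EX 60`.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into a bit array.
    May report false positives, never false negatives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _probes(self, item):
        # Derive k independent positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._probes(item))

bf = BloomFilter()
bf.add("user:42")
assert bf.might_contain("user:42")  # added items are always reported
```

And that's before you've thought about persistence, concurrency, or memory sizing, which Redis handles for you.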
Kafka/SQS vs Postgres queueing:
* There are pros and cons here, but you definitely don't want your other Postgres work being impeded by high spikes in traffic. Distributed logs and dedicated message queues are built for elastic scalability in ways that are difficult to achieve with Postgres. What if certain, super-busy tenants send their queueing traffic in giant batches at unexpected times? With SQS's recent fair queues, you not only don't need to worry about the spikes (as long as your writers can write fast enough), but you also don't need to worry about in-flight consumer work being distributed unevenly due to the spikiness of a single tenant.
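The fairness idea can be sketched in a few lines of Python (a toy round-robin drain, not SQS's actual algorithm): interleave tenants instead of draining in arrival order, so one tenant's giant batch can't monopolize the in-flight window.

```python
from collections import defaultdict, deque

def fair_drain(messages):
    """Round-robin across tenants so a single spiky tenant can't
    monopolize consumer capacity. `messages` is a list of
    (tenant, payload) tuples in arrival order."""
    queues = defaultdict(deque)
    for tenant, payload in messages:
        queues[tenant].append(payload)
    order = []
    tenants = deque(queues)            # insertion order of first arrival
    while tenants:
        t = tenants.popleft()
        order.append((t, queues[t].popleft()))
        if queues[t]:
            tenants.append(t)          # re-queue tenant only if more work remains
    return order

# One tenant dumps a batch of 4 before a second tenant's single message:
msgs = [("big", i) for i in range(4)] + [("small", 0)]
print(fair_drain(msgs))
# [('big', 0), ('small', 0), ('big', 1), ('big', 2), ('big', 3)]
```

The "small" tenant is served after a single "big" message rather than waiting behind the whole batch, which is the imbalance fair queues avoid.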
Elasticsearch:
* Postgres FTS can work for a number of simple scenarios, but there are a number of edge cases where performance deteriorates as well. What happens when large documents are the norm? https://old.reddit.com/r/PostgreSQL/comments/1q5ts8u/postgre... shows some metrics on what happens when you get into TOAST territory. FTS also puts a decent load on the db, both for indexing and for searching.
MongoDB:
* Mongo, for all of the deficiencies it's had over the years (which I hear are mostly resolved these days), can scale writes in a _muuuuch_ simpler way than Postgres.
The claim is that you won't hit these limits "unless you're at unbelievable scale", but I don't think that's true.
I've worked on multiple Postgres databases that tapped out due to the amount of work it would take to scale things up further (full tenant sharding by db, multiple shards for some larger tenants, and keeping all of that going and working at a larger scale).
Every additional piece of logic you add into Postgres complicates the story of how you fix things once it becomes too much. Once you've got a single write that triggers 10 different table writes with 80 index updates, and you want to scale those writes up, you might hit scenarios where you need to choose what gets migrated out. Or you get smart with materialized views, but those require full refreshes. So you create your own version of incrementally maintained views, which means more writes and more work.
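In miniature, that tradeoff looks like this (a toy Python sketch, not Postgres internals): keeping the "view" current costs an extra write on every insert, which is the write amplification just described, while the alternative is a full recompute, which is what `REFRESH MATERIALIZED VIEW` does.

```python
class IncrementalCount:
    """Hand-rolled 'incrementally maintained view': keep a per-key
    aggregate current on every write instead of re-scanning the
    base table."""

    def __init__(self):
        self.rows = []    # the base "table"
        self.view = {}    # maintained aggregate: row count per key

    def insert(self, key, value):
        self.rows.append((key, value))
        # The extra write that incremental maintenance costs you:
        self.view[key] = self.view.get(key, 0) + 1

    def full_refresh(self):
        """The full-refresh equivalent: recompute from every row."""
        fresh = {}
        for key, _ in self.rows:
            fresh[key] = fresh.get(key, 0) + 1
        return fresh

t = IncrementalCount()
for key, val in [("a", 1), ("a", 2), ("b", 3)]:
    t.insert(key, val)
assert t.view == t.full_refresh() == {"a": 2, "b": 1}
```

Both answers match, but one costs a write per insert forever and the other costs a full scan per refresh; neither is free, and in Postgres you get to build and operate whichever one you choose.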
That all being said, I do think Postgres can work for more concerns than it currently is used for, even if I think the recent glazing is a bit much.
Postgres is great, and having everything in a single transactional workload is extremely convenient and can remove a lot of race conditions and buggy behavior.
This is the best response to the article, IMO. We've kept our tech stack small and lean heavily on Postgres in a lot of cases. But having Redis able to scale independently of Postgres has been a lifesaver, and has saved us a lot of money compared to the kind of Postgres instance we'd need to match what it's doing.
Small scale, small app: You Just Need Postgres is on point. Funneling all of your scaling issues into Postgres, which isn't always the easiest to scale horizontally, can start to be a problem.
Although you inevitably end up writing some OOP code in F# when interacting with the dotnet ecosystem, F# is a really good OOP language. It's also concise, so I don't spend as much time jumping around files switching my visual context. Feels closer to writing Python.
They killed off VB, and if I recall the announcement correctly, it noted that VB statistically had a larger user base (by Microsoft's metrics) than F#. There are a number of companies relying on F# for critical operations, and MS has some internal use of F# which, as I understand it, has no plans of being replaced, which helps balance out the fear.
I think saying that Spring is representative of Java metrics is somewhat equivalent to saying that full ASP.NET MVC is representative of dotnet metrics.
On the dotnet side, both Oxpecker and Giraffe (Giraffe being written by the author of that post) perform very well with simple code and from what I see, no "tricks". It's all standard "how the docs say to write it" code (muuuuch different than those platform benchmarks that were rightfully scrutinized in the referenced blog post).
On the JVM side, I started looking for a reference near the top without any targeted non-default optimizations (which is really what I personally look for in these). The Inverno implementation has a few things that I'd call non-standard (any time I see a byte buffer, I imagine that's not how most people are using the framework), but otherwise looks normal. I recall an earlier Quarkus implementation that I read through a couple of years ago (same repo) that wasn't as optimized in those small ways and still performed very well, but it seems they've since added some of those types of optimizations as well.
All to say: if you venture outside the standard of either platform (full-fat ASP.NET/EF or Spring/Hibernate), you can trade framework convenience for performance. However, when it comes to the cost/benefit ratio, you're most likely either going to be joining a company using the former, or writing your own thing using the latter.
I'm at my current company (actually writing mostly typescript and node services now) because of a YC "who's hiring" post that mentioned F# positions (bait and switch /s, but my experience lined up heavily with the team I ended up joining which didn't use F#).
The main release note here is more stable async. I’m curious how folks using nim feel about the async situation.
One of the most recent opinions from the discord is:
“we have async and asyncdispatch but its leaky and bug prone esp when used with threads and causes deep ownership issues, i prefer using taskman when possible but it leaves the IO problem yet unsolved”
I’ve also been told to just use regular threads whenever possible.
As someone who was interested in Nim and tried it out for some personal projects, I found that this was the biggest problem with the project. There are several options for any given need (including internal tooling like the LSP server or installing the compiler), with no clear way to choose between them. Each has different (dis)advantages that you must discover primarily by digging through forum posts or GitHub issues.
In some ways it's the sign of a very strong language, like the curse of Lisp: someone can easily just write their own version of whatever they need. But as a user trying to navigate the ecosystem it is frustrating. I do keep meaning to try it out again though; the language itself is very pleasant to use.
Not only do you need to walk a mere block or two from the tourist line to find charming quiet spots, but there are tons of people that walk directly past beautiful and interesting places to get _to_ the jam-packed spots.
Small private gardens with interesting history and splendid views sit nearly empty while a train of tens to hundreds of tourists per minute walks directly past them. Or small hiking trails within a stone's throw of a packed entrance that see a tiny fraction of the foot traffic. They aren't obscured either, just not the “main attraction”.
No real confidence, so they don't wander around and use their own brains. They just go to the top 10 spots that ChatGPT, TikTok, or some other list dictates.
It's like how people go to the Louvre and stand in line for 3 hours to look at one famous painting that they probably don't even really like all that much, and not see anything else in arguably the world's best gallery.
This is doubly true for a hot spot like Japan, because it's the current number one on the dumb lists.
What I intended to point out was that regrettably few people actually pay any attention to the Nozze di Cana, despite it being both more physically accessible and worthy of interest.
That's what you get when people travel to that one point they saw on Instagram. People crossing the world to take that exact photo a million people before them took.
From the docs, it looks like it's building a graph to retrieve data, though the comparison it gives contrasts this with making many small individual queries and passing them to other methods to be evaluated.
I find that in the apps I'm working on, either services build complex queries themselves, or they need to make multiple queries because data needs transformations between queries that aren't easy to do in the database itself (these services also tend to avoid code in the database, which I'm mixed on).
In the "Deep Composition" section it has a comment in the code `// These three tiles run in parallel`. Does that mean that the way of composition is through pulling in multiple different pieces of data then joining at the application layer?
I'm coming from a very much sql mindset and trying to understand the intended mechanism for data retrieval here. It kind of reminds me of how ad-hoc LINQ queries use Expression trees to resolve sql queries, but not exactly the same.
Or is the thought more that this would be used when you have many disparate data stores (microservices, databases, caches, etc.) and doesn't make sense for a monolithic single-database application?
I think the disconnect we’re having here is what you’re looking for in a framework. Mosaic doesn’t help you accept incoming traffic or make your upstream requests, it just helps manage your business logic in between.
For an application with just one upstream data source, you’re right it probably doesn’t make sense to use this framework. My background is in fintech where we deal with dozens of data sources at the same time in large orchestration APIs and that’s where this system shines. It allows you to run all of these upstream requests in parallel without writing any coroutine boilerplate and access their results anywhere in your logic without needing to pass around the various responses.
As for how it works, it really isn’t doing anything too surprising. There is no actual graph being created; the behavior just acts like one exists. Mosaic inserts stubs into short-lived caches, which causes tiles to wait on each other at runtime. Once a tile is completed, the stub receives the result and all the waiting tiles get it too. It eagerly runs every piece of logic you give it while guaranteeing that it will never run twice for a request.
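That stub pattern is easy to sketch outside of Mosaic (hypothetical names throughout; this is the general idea in Python asyncio, not Mosaic's actual API): the first caller installs a Future as a stub in a per-request cache, and every later caller awaits that same Future, so a tile's logic runs at most once.

```python
import asyncio

def tile_cache():
    """Per-request cache of stubs (Futures). The first caller of a
    tile installs a stub and runs the work; later callers await the
    same stub, so each tile executes at most once per request."""
    cache = {}

    async def run_once(name, coro_fn):
        if name in cache:
            return await cache[name]              # wait on the existing stub
        fut = asyncio.get_running_loop().create_future()
        cache[name] = fut                         # install the stub first
        result = await coro_fn()                  # then actually do the work
        fut.set_result(result)                    # wake all the waiters
        return result

    return run_once

calls = []

async def user_tile():
    calls.append("user")          # count real executions
    await asyncio.sleep(0)        # stand-in for an upstream request
    return {"id": 1}

async def main():
    run = tile_cache()
    # Two tiles both depend on user_tile; it still executes only once.
    return await asyncio.gather(run("user", user_tile),
                                run("user", user_tile))

a, b = asyncio.run(main())
assert a == b == {"id": 1} and calls == ["user"]
```

A real implementation would also need to propagate exceptions to the waiters and scope the cache to the request's lifetime, which is presumably what "short-lived" covers.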
Yeah that makes a lot more sense. I can see how this would be a nice direction to take things instead of trying to retrofit graphql or some other layer onto an existing architecture.
Yep! It’s perfect for microservices, orchestration APIs, BFF APIs, etc. It’s designed as a way to augment your favorite ORMs and HTTP clients rather than replace the whole stack.
Not every developer needs to know about all of these things. I'd take this more as a "list of interesting details related to common things you might depend on". It's akin to suggesting that doctors of one specialty (dermatology) should know about random things that are part of another specialty (proctology).