> That's what makes Bank REST APIs so tricky - processing doesn't ever actually occur at request time.
HTTP has provisions to enable async semantics between the client and the server. I would always advocate for teams to return 202 Accepted responses with a Location header containing the absolute or relative path to an endpoint that can be interrogated for the status of the underlying async operation. This can work fine for lower-scale services (or even higher-scale services with rate limits enforced at the gateway). You POST a payment. You get 202 Accepted with some Location value like (simplified) `/payments/<identifier>`, against which GET can be used to retrieve current status (e.g. pending, failed, committed, etc.).
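A minimal sketch of that flow, using an in-memory store and illustrative endpoint names (the handler signatures and `/payments/<identifier>` shape are assumptions, not any particular framework's API):

```python
# Hypothetical sketch of the 202 Accepted + Location pattern.
import uuid

payments = {}  # payment id -> current status of the async operation

def post_payment(body):
    """POST /payments: accept the request and defer actual processing."""
    payment_id = str(uuid.uuid4())
    payments[payment_id] = "pending"
    # Return 202 with a Location header the client can poll.
    return 202, {"Location": f"/payments/{payment_id}"}

def get_payment(payment_id):
    """GET /payments/<identifier>: report status of the async operation."""
    if payment_id not in payments:
        return 404, None
    return 200, {"status": payments[payment_id]}

status_code, headers = post_payment({"amount": 100})
payment_id = headers["Location"].rsplit("/", 1)[-1]
print(status_code, get_payment(payment_id))  # 202 (200, {'status': 'pending'})
```

The client keeps GETing the Location until the status moves out of "pending"; the server flips the stored status whenever the background processing actually completes.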
If a polling model is not desirable, you can also use webhooks and inform the caller whenever a given task is finished.
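The webhook variant, sketched with the same in-memory approach; the function names, payload shape, and callback URL are all hypothetical:

```python
# Illustrative webhook sketch: the client registers a callback URL at
# submission time, and the server notifies it when the task completes.
callbacks = {}  # task id -> callback URL registered by the caller
delivered = []  # stand-in for outbound HTTP POSTs to callers

def submit_task(task_id, callback_url):
    """Record where to notify the caller once the task finishes."""
    callbacks[task_id] = callback_url

def notify(url, payload):
    # In a real system this would be an HTTP POST to the caller's endpoint,
    # typically with retries and a signature header for verification.
    delivered.append((url, payload))

def complete_task(task_id, status):
    """Called by the background processor when the work is done."""
    notify(callbacks[task_id], {"task_id": task_id, "status": status})

submit_task("pmt-123", "https://client.example/hooks/payments")
complete_task("pmt-123", "committed")
```

The trade-off versus polling is that the caller must now run an internet-reachable endpoint and handle retries/verification, which is why both models coexist in practice.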
Both of these strategies work fairly well in practice, in my experience. However, I must agree that simply dropping a big file off at an SFTP share and moving on with life is certainly "easier" from the client's perspective. But...do you then roll your own mechanism for checking status? Why not just use the HTTP semantics that already exist?
I've lived this life first-hand, and it is refreshing after experiencing the vast open sea of half-assed Confluence sprawl, huge PDF design documents that aren't kept up to date with changing requirements, etc.
The other sibling comment regarding rendering documentation source to Confluence or <required external system> to keep the corp overlords happy is a great middle ground IMO.
Yep, FedNow is a good step in the right direction, but the list of banks that participate needs to grow substantially. Many small and mid-sized banks/FIs aren't having a great 2023/2024, so I wouldn't expect to hear much about them integrating with new payment rails until their fundamental economics improve a bit allowing the decision makers to loosen up the purse strings again.
source: have worked in the industry consulting and doing technical design and implementation
I work in a firm very closely associated with CC and ACH processors, and I feel like the only time FedNow ever came up was when I mentioned it.
The low cost of acceptance and fast response times would seem to appeal to any merchant who's already begrudgingly accepting ACH, even if it's not a direct replacement for card payments.
If this might apply to you/your company and you don't have a lot of time to get into deep research on the topic just go see the Eclipse Temurin[0] project and grab a build or image from them.
If you depend on Java, I'd recommend spending some time looking into the best distribution for your needs.
There's no need for deep research, but take maybe 5 minutes to understand the landscape of OpenJDK distributions. For instance, if you run a Red Hat-based Linux like many companies do, Adoptium brings nothing new; there's no reason not to use Red Hat's OpenJDK packages.
Fair. My comment was aimed more at saving others time, as I've personally looked into this previously (granted, some years ago) and found the Temurin packages to be of very high quality while avoiding the legal risk that comes with the Oracle packages.
Agree. I've always thought that PostgREST is an interesting project for some niche use-cases and teams. However, his argument about replacing GET request handling with a new tool that lives outside of/alongside your existing application architecture is not particularly compelling. With properly-factored application code, adding a GET (list) or GET-by-id endpoint is fairly trivial.
The only complexity I've ever run into there is implementing a pagination scheme that is not the typical "OOTB framework" limit/offset mechanism. I still don't think this makes the argument much stronger.
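To illustrate the non-OOTB case: a minimal keyset (cursor) pagination sketch, as an alternative to limit/offset. The table, column names, and page size are hypothetical:

```python
# Keyset pagination: seek past the last-seen id rather than offsetting,
# which stays fast on large tables because it uses the primary-key index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO payments (id, amount) VALUES (?, ?)",
                 [(i, i * 10) for i in range(1, 8)])

def page(after_id, limit):
    """Return one page of rows plus the cursor for the next page."""
    rows = conn.execute(
        "SELECT id, amount FROM payments WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, limit)).fetchall()
    next_cursor = rows[-1][0] if rows else None
    return rows, next_cursor

rows, cursor = page(0, 3)
print(rows, cursor)  # [(1, 10), (2, 20), (3, 30)] 3
```

The wrinkle versus limit/offset is that the cursor must come from a stable, indexed ordering, which is exactly the part most frameworks don't hand you out of the box.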
> a new tool that lives outside of/alongside your existing application architecture
It need not live either outside of or alongside application code. Substituting application code with PostgREST is an option.
> With properly-factored application code adding a GET (list) or GET-by-id is fairly trivial.
If it's trivial then it sounds like needless busywork to me. I'd rather generate it or have ChatGPT write it than pay a developer to write it, if it's being forced on me. I'd rather dispense with it altogether if I'm allowed.
I appreciate that the engineers at Spotify wrote this post, though not because it's a novel solution or because most software engineers don't know this stuff (well, some don't, I suppose). Rather, I find it and posts like it helpful to engineering groups that exist outside of tech companies.
Why? It gives those groups and their technical leads something to point to when discussing trade-offs, timelines and investments in technology with product teams or more senior engineering leadership who might not have a background in software engineering and architecture. "Don't take my word for it, look at what the engineers at Spotify, et al. wrote about this topic."
Public service notice: Employing these patterns in small companies that don't have any need for the complexity is an anti-pattern.
I'm going to disagree with your last point, because it should carry the caveat that these patterns make sense when working with third-party libraries. It makes sense to implement a wrapper with sensible defaults that covers 80%+ of use cases.
I can agree with that wholeheartedly and admit to the practice myself...particularly with libs that have a large surface area or that are used in many other places in the company.
It's literally out of control. We cannot have a healthy democracy in a systematically surveilled society. When people understand they're being actively and constantly surveilled, they tend to self-censor their expression. When we do not have a healthy and honest exchange of expression (something that has arguably been eroding for a long while now), one of the pillars of our society fundamentally weakens. Kudos to CA and IL for attempting to do something about this legislatively.
As a citizen with some modicum of hope for the future I will vote for strong privacy protections. As an engineer I will not work on products that progress our state of surveillance capitalism (yep, realize the constraints here). I hope others agree and act accordingly.
I’m not sure that these requirements are actually due to Surveillance Capitalism; Government KYC requirements may be the culprits, as they’re steadily on the rise (especially with all the recent sanctions).
These are not contradictory things. Capitalism is something that is maintained by the government in the first place - it exists so long as government continues to operate a property right framework that is in favor of large corporate entities. And, of course, said entities bribe the politicians who run the government to continue this state of affairs.
In short, surveillance state capitalism that strangles actual free markets and reduces choice is the natural form of capitalism.
Exactly, and this is why I always try to steer teams away from "one metric to rule them all" whether this be "always fail if coverage is less than X percent" or "random code metric like CC is beyond limit". Reality is simply more complicated than that, and it takes experienced engineers to actively manage and balance the tradeoffs appropriately over time. Putting arbitrary one-size-fits-all rules in place is almost never the answer.
Unfortunately, in some (many?) companies there simply aren't enough experienced engineers who have the time to do the active balancing...leading us back to "just stick this rule in pipeline so the teams are held to _some standard_".
Agreed. I like to do these scans but for informational purposes, not as a gate. Also most tools allow you to annotate code to turn off warnings, which can help when used intelligently.
Of course, some teams will overuse such tools and turn off the metrics left and right.
In the end there is no substitute for experienced engineers.
afaik the IANA is the body that's authoritative on this type of information. RFC1918[0] defines the address blocks reserved for private routing. From there it's up to the various routers and their software to decide whether or not to actually flow the packets over their interfaces.
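The RFC 1918 blocks are easy to check programmatically; here's a sketch using Python's stdlib `ipaddress` module with the three reserved ranges from the RFC:

```python
# Check whether an address falls in one of the RFC 1918 private blocks.
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # 10.0.0.0 - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 192.168.0.0 - 192.168.255.255
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_BLOCKS)

print(is_rfc1918("192.168.1.5"))  # True
print(is_rfc1918("8.8.8.8"))      # False
```

Note that `ipaddress.ip_address(...).is_private` exists too, but it also covers non-RFC-1918 ranges like loopback and link-local, so the explicit block list above is closer to what the RFC itself defines.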