It's unfortunate that the loudest advocates of crypto in the US and Europe are screeching libertarians and grifters/scammers. They drown out the everyday users in far-off places like Argentina and Nigeria that nobody (in the US and Europe) seems to care about.
That's because it's harder to see the forest for the trees when you're out in the woods. Just like the author of this blog post, users in Argentina and Nigeria aren't going to stop and think about the far-reaching problems with crypto if that crypto is helping them with their day-to-day lives.
Different countries have different regulations and differing levels of permeation within society. Nobody in the US will take crypto as payment (other than a few niche retailers), but it's a pretty popular payment option in places like Argentina, Estonia, El Salvador, Vietnam, India, Singapore, and Nigeria. So popular in Nigeria, in fact, that it's displacing the local currency and the government is (unsuccessfully) trying to ban p2p transactions.
I’ve been to Estonia a lot, and have never seen any retail business that would accept crypto.
I’m sure there’s a niche somewhere. But when you say “pretty popular payment option”, it sounds like it’s a rival to Visa and Mastercard, and that prices would be listed in crypto instead of euro. That’s absolutely not true.
Re: Nigeria, anecdotally my few friends from the country tell me that the crypto boom is primarily about local influencers promoting their own coins as investments, not actual payments.
No. If you're happy with USD, then use USD. If you are Argentinian (or Venezuelan, or Iraqi, or Zimbabwean) and your currency is getting devalued to nothing and the government is restricting access to your funds and preventing you from holding USD, then Bitcoin or Stablecoins are a pretty good option.
The attitude I see a lot (mostly from Americans and Western Europeans) is that "Crypto is not useful to me, therefore it is not useful to anyone" - forgetting that it's a global network open to anyone and everyone, and that other countries exist that are difficult places to live in at times.
Exactly this. People in those countries really don't understand the problems facing the rest of the world.
When an international fintech player gets into a market (TransferWise, PayPal, etc.), they always optimize the experience for users from the US and western EU first. I doubt many people know you can't send money to Ukraine via PayPal unless you mark the transaction as "for Friends or Family", which means you lose all the customer protection. PayPal is worse than crypto in almost every aspect in this case.
That supposes a government that isn't liable to shut down the electricity grid arbitrarily, or to randomly kill people who own the hardware to run computers off the grid.
Circumventing the policies of a government deemed defective can help in the short term and from a narrow perspective, but it won't fix the fundamental issue.
Currency, be it a banker's bill or digital crypto-whatever, is just a marginal side apparatus here.
I'm not an economist, and I have no idea what insight Fernando de la Rua had into the outcomes of his decisions. What should the government have tried in order to stop a precipitating bank run? Would a world where everyone followed maraoz's economic wishes necessarily be a better one?
Exactly, but the value proposition of crypto is that it's *very* difficult to disrupt. See Nigeria for a current example of a national government trying and failing to get its population to give up on crypto.
It's only difficult to disrupt if the people involved stay entirely in the crypto ecosystem. The moment they want to interact with the real world, they open themselves up to the hard power of the state. The US government has been very successful in disrupting crypto off-ramps; they don't need to break crypto math when they can just put the squeeze on people wanting to trade it for something else.
It's even easier to disrupt than dollars, at least exchanging dollars as cash is popular whereas cryptocurrencies rely predominantly on exchanges and bank transfers.
Maybe they hacked together something that can feasibly be marketed as an AI assistant, knowing that whatever they build now will get "steamrolled" by GPT-5 (Sam's words, not mine). When GPT-5 gets released, update the OS and it'll work as advertised... EZ-PZ!
"Avoiding the sun" does not mean "avoid going outside"
Long sleeves, a hat, and sunglasses go a long way toward preventing burns and skin cancer. Sunscreen, too, but something tells me this guy doesn't do sunscreen.
It's almost impossible to run a good UV exposure association study because going outside correlates with health. Everything clever you can do to get around that (UV-B areas are what the article chooses) runs into issues with introducing even more uncontrolled variables.
I'm getting a bit jaded with vitamin D and UV studies and it's possible that one will come along and prove me wrong, but thus far they've generated a lot of hype but failed when experimentally tested.
Note that this advice is probably moot for people with high enough melanin living in low-to-mid sun exposure areas. Sunscreen becomes useful only when sun doses exceed the skin's natural absorption capacity.
Wait, who am I supposed to believe here?!? Prime Video tore down their micro services in favor of a monolith just last year! Which trillion dollar globocorp is my tiny, insignificant company supposed to emulate?
If you read the Prime Video blog post, the takeaway is definitely not "always use a monolith". I haven't used Step Functions, but they specifically mention step functions with a lot of state transitions (and the pricing model is per state transition), plus storing things in S3 and having to access them there all the time (which shocked me a bit, having used S3 before, since it seemed obvious that was going to be really expensive). The takeaway for me was that it's important to actually understand the tools that you're using.
As an aside, the Prime Video article is a bit funny: at one point they have the line (which I hope is sarcastic, but I fear it isn't) "We experimented and took a bold decision: we decided to rearchitect our infrastructure", when their original design just obviously chose tools that didn't fit their workflow.
> The takeaway for me was it's important to actually understand the tools that you're using.
no no no, that takes time, the hype train doesn't wait for anyone. the sacred monolith it is. all hail the Monolith! crush the microgerms, destroy the filthy tiny services.
They're pretty clear here on what the benefits are. Microservices allow you to scale up and down individual system components without having to carry along the rest of a vertically scaled monolith for the ride. This makes for more efficient utilization of compute resources. For a company renting compute resources from the cloud, like Netflix, this can save a lot of money.
What Amazon did, according to this needlessly snarky article that is not Amazon's tech blog, does not conflict with this. It's all theory. In reality, you should not be dogmatic and religious about your architecture choices, but empirical wherever possible. They measured utilization and cost and found they could do better in some cases with monolithic sub-systems. This doesn't mean all of Prime Video abandoned SOA.
> This makes for more efficient utilization of compute resources.
No it doesn't. The rest of the monolith is just a chunk in your compiled binary sitting on disk, which is trivial in terms of resource cost. If that code is not running, it is not using any runtime resources.
Microservices will, however, greatly increase resource requirements if they lead to additional serialization/deserialization, which is relatively expensive. If you're doing video encoding, this isn't such a big deal. For web services, it is likely to be the bulk of the resource cost. This is only exacerbated in modern infrastructures where services are more and more expected to use TLS to talk to each other.
Swapping out silicon calls for network calls is the definition of complicated...And starting complicated because you want to achieve some premature optimization is insane. Monoliths are preferable for 90% of the use cases that you see in public or private industries.
Honestly, I get that it somehow became cool to ignore the grammatically correct "computation" and "computational resources" in favor of just grunting "compute" — why not go all the way?
"efficient utility of compute resources", etc. Just shorthand everything.
Does it take a famous developer to do it first for everyone to feel comfortable doing it?
IIRC The Prime Video guys were processing videos using AWS step functions - which probably makes sense if you are processing a few videos every so often. If you are processing videos continuously then it’s much more cost effective to just have some big boxes running 24x7 crunching through a queue of jobs.
The meta takeaway is that you shouldn't be afraid to resist trends (in either direction: monolith or microservices). If you're 'wrong' today, you may be 'right' in 5 years.
I watch ThePrimeagen (coding content creator on YouTube/Twitch) from time to time; he works as an engineer at Netflix.
My impression is that he doesn't like the container infrastructure, which reflects my own opinion, though he never explicitly calls out the infrastructure at Netflix as something bad. But every time he talks about work at Netflix, it sounds about as complex as I'd imagine if I gave the job to a CV-driven engineer.
"Micro service" and "Monolith" don't have precise definitions anyway. Ideally, there's only one right architecture: the one that's sufficient for the problem at hand, all things considered (latency, availability, cost, provider, maintenance, conceptual integrity, ...).
I feel like the term "microservice" is open to misinterpretation (people tend to focus on the micro part a bit much; I remember someone waxing poetic about something like a csv parsing service they had). But surely monolith is quite unambiguous: a system which is deployed as a whole. That is, you cannot deploy some part of the system on its own. You want to get changes in part X to prod? Better be prepared to deploy A through W and Y and Z as well.
That feels fairly precise. But maybe some folks would disagree with this definition of a monolith.
IIRC, the rebuild-to-monolith post doesn't tell the whole story. They aren't ditching all of their microservices in favor of a monolith. Someone responsible at Prime gave a clearer picture of this on Twitter (sadly I didn't save the tweet).
I was highly commended for bringing up 6 new microservices a year ago during my performance review. Late in development I noticed that at least 2 of them were useless (they're essentially message-routing gateways; I planned to "enrich" messages inside them, but ultimately never did). It was already done and I didn't want to waste time integrating those two services into the others, so I left it as is and, well, my boss loved it.
I mean you could just read a variety of different sources, look at the different trade-offs, and make informed decisions based on the compromises of both different architectural designs and your own products’ needs?
Jesus the quality of conversation here is not good today.
Ridiculed by whom? We've seen many competitors try to make a streaming service and beyond Apple, they all provide a laggy experience even in the menus.
If you're going to emulate someone, it's not a bad idea to emulate whoever has the best results.
Even Apple TV is pretty sluggish on my LG TV. And it sometimes makes my TV crash so I have to reboot it. It's maybe not as bad as the steaming pile of garbage that is Sky-Showtime, but it's got a ways to go before it's comparable to Netflix on my TV. Amazon Prime is pretty terrible on my TV too.
That’s your TV having a shitty processor and WebOS not being the best. Even expensive smart TVs don’t ship with good silicon.
Get something like an Apple TV or Fire TV Cube and you’ll have a better experience. The Apple TV 4K in particular ships with a very powerful processor, it’s far snappier than any other streaming box I’ve tried.
It's been a few years since I let go of my TV, a not-new-at-the-time but high-end LG, and I loved WebOS on it. I considered it the best, even better than Apple TV, especially for Netflix. The new owner runs it without complaints.
Yeah, no shit. Still, Netflix, Plex and Youtube work just fine, so Apple TV should be able to work at least as well. I'm not buying an extra device just to compensate for shitty software, that's silly. I prefer to unsubscribe.
> This is Apple's entire MO. You are expected to replace all your devices every year or two.
As someone who previously was an "anti-fan" of Apple's (we're talking 2000s, early 2010s) for their ridiculous prices (and that still stands for things like the Vision Pro), I've now seen the light (or gone to the dark side if you prefer) and now believe Apple provides better value for money than most of their competition due to the longevity of their devices. I know this is anecdotal and a sample size of one but I'd be curious to see data backing up your claim above.
Apple was a rip off luxury brand back in the day if you had a Samsung Fascinate or something. MacBooks were horrible and macOS was annoying to deal with. Now they're the default price/performance choice if you want a decent reliable machine, and iPhones are obviously very good value if you just want a phone that works for as long as possible.
Incorrect. Frequently, UI lag on components that hit server-side backend services is made significantly worse by naïve microservices, especially in the face of organic growth.
Specifically, every API call that traverses a machine boundary necessarily imparts additional latency, and uncontrolled microservice sprawl can have a multiplicative effect.
I agree that a bad implementation may lead to poor performance. However, this is irrespective of the architecture. The effects of an architecture are more noticeable in the context of maintainability, scalability, and extensibility.
it's not actually irrespective of architecture. Some architectures are significantly more prone to certain kinds of problems than others. For example, monoliths can become so large as to make development, especially many-team development, inconvenient or impossible. In the specific case of microservices, the key benefit (multiple teams can develop and deliver in parallel without stepping on each other, separating concerns into their own ownership areas) has the tradeoff of distributed systems overhead, which can range from high latency (when a number of microservices are in a serialized hot path and the complexity is not being effectively managed) to lowered availability or consistency (when data is radiating through a network of microservices asynchronously and different services 'see' different data and make conflicting correct decisions). Monoliths see this set of performance problems much, much later in their lifecycle, because they have much better data locality and local function call characteristics.
Ad serving and metrics are asynchronous, so they won't block any UI. And authentication/identity has the same behaviour with a monolith or microservices: it's ultimately just a "look up this user in some database" call.
It's the serving of the content that requires coordination across multiple services and most of that should be cached at the serving layer.
Incorrect, in most apps nontrivial content is highly personalized and dynamically served, auth in microservices is frequently two or more hop rather than one hop, and ad serving and metrics frequently involve synchronous steps.
Disney-owned streaming services and HBO Max are far from laggy, thanks to BamTech.
As for the menus being laggy: when you're trying to keep the bill of materials for a streaming device under $20, as in Roku's case, what do you expect?
The AppleTV box is $140 and the difference in quality shows
The circumstances when microservices make sense are pretty well documented, but of course that's not widely known, especially because it doesn't fit into the 'microservices are the future' slogan.
Jack has ADD
Chili's sister is infertile
Winton, Judo, and the Terriers are from single parent households
There's a deaf kid featured in Turtleboy
There's a wheelchair kid in Quiet Game
"Chocolate Milk" from Tradies is in a mixed breed relationship
In fact, the very fact that different breeds are interacting is an allegory for racial inclusion.
So yeah, Bluey gets a little "woke" here and there.
Sorry, not sorry.
I got mine from Edible Landscaping in Virginia. The owner is super friendly and very helpful. One of my grafted saplings died almost immediately after planting, and he replaced it no questions asked.
https://ediblelandscaping.com/collections/pawpaw
Just don't get NC-1, Mango, Rappahannock, Prolific, Overleese, Wells, or PA Gold.
Don't get me wrong, if you get one of those when they're perfectly ripe, it may well still be a completely life-changing experience. But you can also do better, and given that you're going to have them for the next 25 years, it's worth doing the extra work to get the absolute best cultivars you can rather than just buying the first ones that are in stock, even if it takes an extra year to get started.
Use graphviz[0] and forget manual graphing. While many tools can generate a call graph, confidence in rolling your own is a good superpower to have.
For example, write a quick script to add a panic (a call like nonexistentFunction()) as the first line of each function. Then call each function in turn, save each panic's stack trace, and process them into a combined graphviz file.
This simple and efficient hack will get you all the most important edges in most cases for most languages. It won't get you the internal links; for those you need a more capable parser or more exotic means of obtaining branched call stacks.
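A minimal Python sketch of the same idea (function names and structure are illustrative, not from any real project): instead of an actual panic, it captures the stack with the stdlib traceback module at each instrumentation point, records caller -> callee pairs, and emits a graphviz DOT file.

```python
import traceback

edges = set()  # (caller, callee) pairs observed at runtime

def record_call():
    # Capture the current stack, dropping record_call's own frame.
    # Adjacent frames give caller -> callee edges on the active path.
    stack = traceback.extract_stack()[:-1]
    for caller, callee in zip(stack, stack[1:]):
        edges.add((caller.name, callee.name))

def to_dot(edges):
    # Render the collected edges as a graphviz digraph.
    lines = ["digraph callgraph {"]
    for a, b in sorted(edges):
        lines.append(f'  "{a}" -> "{b}";')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical instrumented functions: record_call() stands in
# for the injected panic at the top of each function body.
def helper():
    record_call()

def main():
    record_call()
    helper()

main()
print(to_dot(edges))
```

Pipe the output through `dot -Tsvg` to get the picture. A real script would inject the `record_call()` line automatically and would need to exercise every entry point, which is the "call each function in turn" step above.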