I do not understand the concept that a property of a sum of a bunch of numbers somehow applies to those components.
Whether something is net zero depends on what you are adding it to; it's not inherent in that thing.
It's like, if you have a pile of blue things, then the pile is also blue. But other characteristics do not work that way. If you have a pile of things that each weigh 1 lb, then the pile does not weigh 1 lb.
If you make a hydrocarbon by taking CO2 from the air, and you use a carbon-free energy source to do it, then burning that hydrocarbon is net-zero carbon. That is, if all the hydrocarbons you burn were made this way, then atmospheric carbon doesn't increase because of your activity.
Exactly. It's not a solution, but it's a way to stop the problem from getting worse.
It's like moving from a variable rate credit card to a 0% loan. You still have this pile of debt you need to deal with, but you more or less stop the problem from getting any worse.
It's actually strange that no gas company is advertising something like this. I've seen so many industries advertise even the tiniest carbon offsets, so why can't BP or Exxon sell a net-zero, or at least low-emissions, fuel?
Is it really strange to you that an oil company is not advertising a product that it doesn't have, a product which would compete with its regular product, a product which is hard to sell (more expensive, and of dubious value)? From the standpoint of these companies, there is plenty of oil down there; making a synthetic one has no upside.
Well, maybe not synthetic, but what if they said "use this gas for 10% more, we planted a tree for every gallon" or whatever. Even if it was complete bs, I'd think about it
You can get the required high amount of heat either through concentrating solar collectors or through any renewable form of electricity that you run through electric heaters. If you want to be fancy, you can use microwaves to very selectively heat only the parts you want to heat.
I think you need to pause before considering this route. Are you that exceptional? This level of intelligence is incredibly rare and unless you're 100% sure you're capable of this, I wouldn't do it.
The guy has brains to burn of course, but tbh, it's simply about motivation. Medicine is a grind, more than an intellectual exercise. If you have the brains for actual tech there's no doubt you have the requisite intelligence. The only barrier is the stamina for the grind. I know I couldn't do it. Not as I am now. Maybe in the future. I couldn't say for sure, given I'm not clairvoyant.
I agree that you should give serious pause. Probably the most serious pause of your life, but still. I wouldn't shy away too quickly. When you know, you know.
Very true. Not to discount MD training or doctors, but it's less about connecting disparate dots or learning how to solve problems and more about working out the brain like a muscle.
1. I don't agree with your conclusion that it's Python specific. You don't have evidence for that--you made that up. And no I'm not interested in whatever benchmark you're going to want to post, because it's not a test of this situation--it cannot possibly be, because when you introduce JS, you're also going to be introducing literally hundreds of other factors which could affect the performance. The assertion you are making is not one you can possibly know.
2. For this specific test, there is a downside to async, as shown by the test. Even if what's going on here were Python-specific (which is still something you made up), downsides to async which only occur in a Python environment are still downsides to async. The title of this post is "Async Python is not faster"--that conclusion is incorrect for many reasons, but none of those reasons include the words "NodeJS", "JS", or anything else that is not in the Python ecosystem.
3. What is going on is probably specific to the tools being used, which is why I said "those downsides certainly don't apply to every project". In fact, they probably don't apply to the idiomatic ways of implementing this in Tornado, for example. But note how I said "probably" because I don't know for sure, and I'm not comfortable with making things up and stating them as facts.
>And no I'm not interested in whatever benchmark you're going to want to post,
That's rude.
Let's put it this way. NodeJS and nginx leveled the playing field. They destroyed the LAMP stack and made async the standard way of handling high loads of IO. From that alone it should indicate to you that there is something very wrong with how you're thinking about things.
You know the theory of asyncio? Let me restate it for you: if coroutines are basically the SAME thing as routines, but with the extra ability to let tasks be done in parallel with IO, then what does that mean?
It means that 5 async workers in theory should be more performant than 5 sync workers FOR highly concurrent IO tasks.
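That claim is easy to sketch in isolation, with `asyncio.sleep` standing in for IO (a toy demonstration of overlapping versus stacked waits, not the article's benchmark):

```python
import asyncio
import time

async def fake_io(delay: float) -> None:
    # Stand-in for a database call: all the wall time is spent waiting.
    await asyncio.sleep(delay)

async def five_concurrent() -> float:
    start = time.perf_counter()
    await asyncio.gather(*(fake_io(0.05) for _ in range(5)))
    return time.perf_counter() - start

async def five_sequential() -> float:
    start = time.perf_counter()
    for _ in range(5):
        await fake_io(0.05)
    return time.perf_counter() - start

conc = asyncio.run(five_concurrent())   # waits overlap: roughly one 0.05 s wait
seq = asyncio.run(five_sequential())    # waits stack: roughly five 0.05 s waits
```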
The logic is inescapable.
So what does it mean, if you run tests and see that 5 async workers are NOT more performant than 5 sync workers ON PYTHON exclusively? The theory of asyncio makes perfect logical sense right? So what is logically the problem here?
The problem IS PYTHON. That's a theorem derived logically. No need for evidence or data driven techniques.
There's this idea that data drives the world and you need evidence to back everything up. How many data points do you need to prove 1 + 1 = 2? Put that in your calculator 200 times and you've got 200 data points. Boom, data-driven buzzword. That's what you're asking from me, btw. A benchmark, a data point to prove what is already logical. Then you hilariously decided to dismiss it before I even presented it.
Look, I say what I say not from evidence, but from logic. I can derive certain issues with the system from logic. Just follow the logic I gave you above and tell me where it went wrong, and why I need some dumb data point to prove 1+1=2 to you?
There is NOTHING made up above. It is pure logic derived from the assumption of what AsyncIO is doing.
>But note how I said "probably" because I don't know for sure, and I'm not comfortable with making things up and stating them as facts.
But you seem perfectly comfortable being rude and accusing me of making stuff up. I'm not comfortable going around the internet and trashing other people's theories with accusations that they are making shit up. If you disagree, say it, I respect that. I don't respect the part where you're saying I'm making stuff up.
> > And no I'm not interested in whatever benchmark you're going to want to post,
> That's rude.
It wasn't rude, it was predictive, and I predicted correctly. You literally ignored the second half of the sentence where I already explained why your incorrect conclusion is incorrect.
Your logic makes perfect sense, in a world where I/O bound processes, JIT versus interpretation differences, garbage collection versus reference counting differences, etc., don't exist. But those things do exist in the real world, so if your logic doesn't include them, you're quite likely to be wrong. In general, an interpreted concurrent system is far too complex to make performance predictions about based only on logic, because your logic can't possibly include all the relevant variables.
> No need for evidence or data driven techniques.
Well, there's where you're wrong. It turns out that if you actually collect evidence through experimentation, you'll discover results that are not predicted by your logic.
> Then you hilariously decided to dismiss it before i even presented it.
Well, you presented basically what I predicted, so... I wasn't wrong.
> But you seem perfectly comfortable in being rude and accusing me of making stuff up. I'm not comfortable in going around the internet and trashing other peoples theories with accusations that they are making shit up. If you disagree say it, I respect that. I don't respect the part where you're saying I'm making stuff up.
You are, in fact, making stuff up. If you are offended by accurate description of your behavior, behave better.
>Your logic makes perfect sense, in a world where I/O bound processes, JIT versus interpretation differences, garbage collection versus reference counting differences, etc., don't exist. But those things do exist in the real world, so if your logic doesn't include them, you're quite likely to be wrong. In general, an interpreted concurrent system is far too complex to make performance predictions about based only on logic, because your logic can't possibly include all the relevant variables.
Hey Genius. Look at the test the benchmark ran. The benchmark is IO bound. This is a SPECIFIC test about IO-bound processes. The benchmark is not referring to real-world applications; it is SPECIFICALLY referring to IO.
The literal test is thousands of requests, and all each request handler does is query a database. If you look at a single request, almost 99% of the time is spent on IO.
Due to the above, anything that has to do with the Python interpreter, JIT, garbage collection, and reference counting becomes NEGLIGIBLE in the context of the TEST in the ARTICLE ABOVE. I suspect you didn't even read it completely.
Does that concept make sense to you? You can use relativity rather than Newtonian physics to calculate the trajectory of a projectile, BUT it involves UNNECESSARY overhead coming from relativity, because the accuracy gained from the extra calculations IS NEGLIGIBLE.
>You are, in fact, making stuff up. If you are offended by accurate description of your behavior, behave better.
Now that things have been explained completely clearly to you, do you now see how you are the one who is completely wrong, or are you incapable of ever admitting you're wrong and apologizing to me like you should? I mean, literally, that statement above is embarrassing once you get how wrong you are.
> Anything that has to do with the python interpreter, JIT, garbage collection and reference counting becomes NEGLIGIBLE
Well, it's odd that you say that, when previously you were claiming that the result was caused by Python. Is it caused by Python, or is Python negligible?
> You can use relativity rather than Newtonian physics to calculate the trajectory of a projectile, BUT it involves UNNECESSARY overhead coming from relativity, because the accuracy gained from the extra calculations IS NEGLIGIBLE.
Man, you sure are willing to make broad statements that you cannot possibly know.
You're really sure that there's no context where the accuracy gained by relativity would be useful?
Again, this is you making stuff up. The worst part here is that a basic application of logic, which you claim to have such a firm grasp on that you don't need evidence, would indicate that you cannot possibly know the things you are claiming to know. You really think you know all the possible cases where someone might want to calculate the trajectory of a projectile? Really?
Please don't post flamewar comments to HN. Even though the other user broke the site guidelines worse, you started it and you provoked it further. We ban accounts that do that, regardless of how wrong the other person is or you feel they are.
I'm not going to ban you for this because it isn't repeated a lot in your account history, but please don't do it again.
>Well, it's odd that you say that, when previously you were claiming that it was caused by Python. Is it caused by Python, or is Python negligible?
It's not odd. Think harder. I'm saying that under the benchmark, and according to the logic of what SHOULD be going on under AsyncIO, it SHOULD be negligible. So such performance differences between Python and Node SHOULDN'T matter, and that's why you CAN compare NodeJS and Python.
But actual testing does show an unexpected discrepancy, which says the problem is Python-specific, because logically there's no other explanation.
>You're really sure that there's no context where the accuracy gained by relativity would be useful?
Dude I already know about that. I just didn't bring it up, because I'm giving you an example to help you understand what "NEGLIGIBLE" means. You're just being a pedantic smart ass.
There are many many cases where relativity is not needed because it's negligible. In fact the overwhelming majority of engineering problems don't need to touch relativity. You are aware of this I am aware of this. No need to be a pedantic smart ass. The other thing that gets me is that it's not even anything new, many people know about how satellites travel fast enough for relativity to matter.
>Again, this is you making stuff up
Dude nothing was made up.
I never said "there's no context where the accuracy gained by relativity would be useful". "No context" is something YOU made up.
In fact "made up" is too weak of a word. A better word is an utter lie.
That's right. You're a liar. I'm not being rude. Just making an observation. Embarrassed yet?
I've banned this account for repeatedly doing flamewars. Would you please stop creating accounts to break HN's rules with? You're welcome here if, and only if, you sincerely want to use this site in the intended spirit.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.
This is not what is happening with flask/uwsgi. There is a fixed number of threads and processes with flask. The threads are only parallel for io and the processes are parallel always.
Which is fine until you run out of uwsgi workers because a downstream gets really slow sometime. The point of async python isn't to speed things up, it's so you don't have to try to guess the right number of uwsgi workers you'll need in your worst case scenario and run with those all the time.
Yep, and the test being shown is actually saying that about 5 sync workers acting on thousands of requests are faster than Python async workers.
Theoretically it makes no sense. A task manager executing tasks in parallel with IO, instead of blocking on IO, should be faster... So the problem must be in the implementation.
Plain sanic runs much faster than the uvicorn-ASGI-sanic stack used in the benchmark, and the ASGI API in the middle is probably degrading other async frameworks' performance too. But then this benchmark also has other major issues, like using HTTP/1.0 without keep-alive in its Nginx proxy_pass config (keep-alive again has a huge effect on performance, and would be enabled on real performance-critical servers). https://sanic.readthedocs.io/en/latest/sanic/nginx.html
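For reference, the relevant nginx directives look like this (a generic sketch; the upstream name and port are placeholders, the directives are standard nginx):

```nginx
upstream app_server {
    server 127.0.0.1:8000;   # placeholder backend address
    keepalive 32;            # pool of idle upstream connections to reuse
}

server {
    listen 80;
    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;          # HTTP/1.1 instead of the 1.0 default
        proxy_set_header Connection "";  # clear "close" so connections persist
    }
}
```

Without the `keepalive` pool and the HTTP/1.1 upgrade, nginx opens a fresh upstream connection per request, which dominates a benchmark like this one.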
You're not completely off. There might be issues with async/await overhead that would be solved by a JIT, but also if you're using asyncio, the first _sensible_ choice to make would be to swap out the default event loop with one actually explicitly designed to be performant, such as uvloop's one, because asyncio.SelectorEventLoop is designed to be straightforward, not fast.
There's also the major issue of backpressure handling, but that's a whole other story, and not unique to Python.
My major issue with the post I replied to is that there are a bunch of confounding issues that make the comparison given meaningless.
If I/O-bound tasks are the problem, that would tend to indicate an issue with the I/O event loop, not with Python and its async/await implementation. If the default asyncio.SelectorEventLoop is too slow for you, you can subclass asyncio.AbstractEventLoop and implement your own, such as building one on top of libuv. And somebody's already done that: https://github.com/MagicStack/uvloop
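Swapping it in is a two-line change at startup (a sketch; uvloop is a third-party package, so this falls back to the stock loop when it isn't installed):

```python
import asyncio

# uvloop is a third-party drop-in event loop built on libuv
# (pip install uvloop); fall back to the default loop if it's absent.
try:
    import uvloop
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    loop_impl = "uvloop"
except ImportError:
    loop_impl = "asyncio default"

async def which_loop() -> str:
    # Report which concrete loop class actually ended up running.
    loop = asyncio.get_running_loop()
    return f"{loop_impl}: {type(loop).__name__}"

print(asyncio.run(which_loop()))
```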
Moreover, even if there's _still_ a discrepancy, unless you're profiling things, the discussion is moot. This isn't to say that there aren't problems (there almost certainly are), but that you should get as close as possible to an apples-to-apples comparison first.
When I talk about async await I'm talking about everything that encompasses supporting that syntax. This includes the I/O event loop.
So really we're in agreement. You're talking about reimplementing python specific things to make it more performant, and that is exactly another way of saying that the problem is python specific.
No, we're not in agreement. You're confounding a bunch of independent things, and that is what I object to.
It's neither fair nor correct to mush together CPython's async/await implementation with the implementation of asyncio.SelectorEventLoop. They are two different things and entirely independent of one another.
Moreover, it's neither fair nor correct to compare asyncio.SelectorEventLoop with the event loop of node.js, because the former is written in pure Python (with performance only tangentially in mind) whereas the latter is written in C (libuv). That's why I pointed you to uvloop, which is an implementation of asyncio.AbstractEventLoop built on top of libuv. If you want to even start with a comparison, you need to eliminate that confounding variable.
Finally, the implementation matters. node.js uses a JIT, while CPython does not, giving them _much_ different performance characteristics. If you want to eliminate that confounding variable, you need to use a Python implementation with a JIT, such as PyPy.
Do those two things, and then you'll be able to do a fair comparison between Python and node.js.
Except the problem here is that those tests were bottlenecked by IO. Whether you're testing C++, PyPy, libuv, or whatever, it doesn't matter.
All that matters is the concurrency model, because the application he's running is barely doing anything except IO, and anything outside of IO becomes negligible: after enough requests, those sync worker processes will all be spending the majority of their time blocked on an IO request.
The basic essence of the original claim is that sync is not necessarily better than async for all cases of high-IO tasks. I bring up Node as a counterexample because that async model IS faster for THIS same case. And bringing up Node is 100% relevant, because IO is the bottleneck, so it doesn't really matter how much faster Node is at executing, as IO should be taking most of the time.
Clearly and logically, the async concurrency model is better for these types of tasks, so IF tests indicate otherwise for PYTHON, then there's something up with Python specifically.
You're right, we are in disagreement. I didn't realize you completely failed to understand what's going on and felt the need to do an apples-to-apples comparison when such a comparison is not needed at all.
No, I understand. I just think that your comparison with _node.js_ when there are a bunch of confounding variables is nonsense. Get rid of those and then we can look at why "nodejs will beat flask in this same exact benchmark".
> I just think that your comparison with _node.js_ when there are a bunch of confounding variables is nonsense
And I'm saying all those confounding variables you're talking about are negligible and irrelevant.
Why? Because the benchmark test in the article is a test where every single task is 99% bound by IO.
What each task does is make a database call AND NOTHING ELSE. Therefore you can safely say that, for either a Python or a Node request, less than 1% of a single task will be spent on processing while 99% of the task is spent on IO.
You're talking about scales on the order of 0.01% vs. 0.0001%. Sure maybe node is 100x faster, but it's STILL NEGLIGIBLE compared to IO.
It is _NOT_ nonsense.
You do not need an apples-to-apples comparison to come to the conclusion that the problem is specific to the Python implementation. There ARE NO confounding variables.
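To put rough numbers on "negligible" (hypothetical per-request figures, purely illustrative):

```python
# Invented per-request numbers: a 50 ms database round trip, plus
# interpreter work that differs 100x between a fast and a slow runtime.
io_ms = 50.0
cpu_fast_ms = 0.005   # e.g. a JIT-compiled runtime
cpu_slow_ms = 0.5     # e.g. an interpreted runtime, 100x slower

total_fast = io_ms + cpu_fast_ms
total_slow = io_ms + cpu_slow_ms

# A 100x CPU gap moves total request latency by under 1%.
gap_pct = (total_slow - total_fast) / total_fast * 100
```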
> And I'm saying all those confounding variables you're talking about are negligible and irrelevant.
No, you're asserting something without actual evidence, and the article itself doesn't actually state that either: it contains no breakdown of where the time is spent. You're assuming the issue lies in one place (Python's async/await implementation) when there are a bunch of possible contributing factors _which have not been ruled out_.
Unless you've actually profiled the thing and shown where the time is used, all your assertions are nonsense.
Show me actual numbers. Prove there are no confounding variables. You made an assertion that demands evidence and provided none.
>Unless you've actually profiled the thing and shown where the time is used, all your assertions are nonsense.
It's data science that is causing this data-driven attitude to invade people's minds. Do you not realize that logic and assumptions play a big role in drawing conclusions WITHOUT data? In fact, if you're a developer, you know about a way to DERIVE performance WITHOUT a single data point or benchmark or profile. You know about this method; you just haven't been able to see the connections, and your model of how this world works (data-driven conclusions only) is highly flawed.
I can look at two algorithms and I can derive with logic alone which one is O(N) and which one is O(N^2). There is ZERO need to run a benchmark. The entire theory of complexity is a mathematical theory used to assist us at arriving AT PERFORMANCE conclusions WITHOUT EVIDENCE/BENCHMARKS.
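The complexity point is fair on its own terms: you can read the asymptotic cost of these two functions straight off the code, no benchmark needed (a generic illustration, unrelated to the async benchmark):

```python
def has_duplicate_quadratic(xs):
    # O(N^2): compares every pair of elements.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_linear(xs):
    # O(N): one pass, with O(1) set membership checks.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```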
Another thing you have to realize is the importance of assumptions. Things like 1 + 1 = 2 will remain true always and that a profile or benchmark ran on a specific task is an accurate observation of THAT task. These are both reasonable assumptions to make about the universe. They are also the same assumptions YOU are making everytime you ask for EVIDENCE and benchmarks.
What you aren't seeing is this: The assumptions I AM making ARE EXACTLY THE SAME: reasonable.
>you're asserting something without actual evidence, and the article itself doesn't actually state that either: it contains no breakdown of where the time is spent
Let's take it from the top shall we.
I am making the assumption that tasks done in parallel ARE Faster than tasks done sequentially.
The author specifically stated that he made a server where each request fetches a row from the database, and he is saying that his benchmark consisted of thousands of concurrent requests.
I am also making the assumption that, for thousands of requests and thousands of database calls, MOST of the time is spent on IO. It's similar to deriving O(N) from a for loop: I observe the type of test the author is running and I make a logical conclusion about WHAT SHOULD be happening. Now you may ask: why is it reasonable to assume that IO takes up most of the time of a single request? Because all of web development is predicated on this assumption. It's the entire reason why we use inefficient languages like Python, Node or Java to run our web apps instead of C++: the database is the bottleneck. It doesn't matter if you use Python or Ruby or C++, the server will always be waiting on the db. It's also a reasonable assumption given my experience working with Python, Node and databases. Databases are the bottleneck.
Given this highly reasonable assumption, and in the same vein as using complexity theory to derive performance, it is highly reasonable for me to say that the problem IS PYTHON-SPECIFIC. No evidence NEEDED. 1 + 1 = 2. I don't need to put that into my calculator 100 times to get 100 data points for some kind of data-driven conclusion. It's assumed, and it's a highly reasonable assumption. So reasonable that only an idiot would try to verify 1 + 1 = 2 using statistics and experiments.
Look, you want data and no assumptions? First you need to get rid of the assumption that a profiler and benchmark are accurate and truthful. Profile the profiler itself. But then you're making another assumption: the profiler that profiled the profiler is accurate. So you need to get me data on that as well. You see where this is going?
There is ZERO way to make any conclusion about anything without making an assumption. And even with an assumption, the scientific method HAS NO way of proving anything to be true. Science functions on the assumption that probability theory is an accurate description of events in the real world, AND even under this assumption there is no way to sample all possible EVENTS for a given experiment, so we can only verify causality and correlations to a certain degree.
The truth is blurry and humans navigate through the world using assumptions, logic and data. To intelligently navigate the world you need to know when to make assumptions and when to use logic and when data driven tests are most appropriate. Don't be an idiot and think that everything on the face of the earth needs to be verified with statistics, data and A/B tests. That type of thinking is pure garbage and it is the same misguided logic that is driving your argument with me.
Nodejs is faster than Python as a general rule, anyway. As I understand it, Nodejs JIT-compiles JavaScript, while Python interprets Python code.
I do a lot of Django and Nodejs and Django is great to sketch an app out, but I've noticed rewriting endpoints in Nodejs directly accessing postgres gets much better performance.
CPython, the reference implementation, interprets Python. PyPy interprets and JIT-compiles Python, and more exotic things like Cython and Grumpy statically compile Python (often through another, intermediate language like C or Go).
Node.js, using V8, interprets and JIT compiles JavaScript.
Although note that, while Node.js is fast relative to Python, it's still pretty slow. If you're writing web-stuff, I'd recommend Go instead for casually written, good performance.
The comparison of Django against no-ORM is a bit weird, given that rewriting your endpoint in Python without Django or an ORM would also have produced better results, I suppose.
Right, but this test focused on concurrent IO. The bottleneck is not the interpreter but the concurrency model. It doesn't matter if you coded it in C++; the JIT shouldn't even be a factor here, because the bottleneck is IO, and therefore ONLY the concurrency model should be a factor. You should only see differences in speed based on which model is used. All else is negligible.
So you have two implementations of async that are both bottlenecked by IO. One is implemented in node. The other in python.
The Node implementation behaves as expected in accordance with theory, meaning that for thousands of IO-bound tasks it performs faster than a fixed number of sync worker threads (say 5 threads).
This makes sense, right? Given thousands of IO-bound tasks, eventually all 5 threads must be doing IO and therefore blocked on every task, while the single-threaded async model is always context switching whenever it encounters an IO task, so it is never blocked and always doing something...
Meanwhile, the Python async implementation doesn't perform in accordance with theory: 5 async workers are slower than 5 sync workers on IO-bound tasks. The 5 sync workers should eventually be entirely blocked by IO, and the 5 async workers should never be blocked, ever... Why is the Python implementation slower? The answer is obvious:
It's python specific. It's python that is the problem.
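The expected behavior is easy to reproduce in miniature when the per-task overhead is tiny relative to the wait (a toy sketch; `time.sleep` and `asyncio.sleep` stand in for the database call):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

DELAY, TASKS, WORKERS = 0.02, 20, 5

def blocking_io() -> None:
    time.sleep(DELAY)           # a worker thread is blocked for the full wait

async def nonblocking_io() -> None:
    await asyncio.sleep(DELAY)  # the event loop is free during the wait

# 5 sync workers: 20 tasks queue behind 5 threads -> ~4 rounds of waiting.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(lambda _i: blocking_io(), range(TASKS)))
sync_elapsed = time.perf_counter() - start

# One async worker: all 20 waits overlap on a single event loop -> ~1 round.
async def run_all() -> None:
    await asyncio.gather(*(nonblocking_io() for _ in range(TASKS)))

start = time.perf_counter()
asyncio.run(run_all())
async_elapsed = time.perf_counter() - start
```

When the async path loses anyway, as in the article's benchmark, the overhead is coming from somewhere other than the waiting itself.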
I see this elitist attitude all over the internet. First it was people saying “Guys why are you over reacting to corona the flu is worse.”
Then it was people saying “Guys, stop buying surgical masks, The science says they don’t work it’s like putting a rag over your mouth.”
All of these so-called expert know-it-alls were wrong, and now we have another expert on asynchronous Python telling us he knows better and he's not surprised. No dude, you're just another guy on the internet pretending he's a know-it-all.
If you are any good, you’ll realize that nodejs will beat the flask implementation any day of the week and the nodejs model is exactly identical to the python async model. Nodejs blew everything out of the water, and it showed that asynchronous single threaded code was better for exactly the test this benchmark is running.
It's not obvious at all. Why is the Node framework faster than Python async? Why can't Python async beat Python sync when Node can do it easily? What is the specific flaw within Python itself that is causing this? Don't answer that question, because you don't actually know, man. Just do what you always do: wait for a well-intentioned, humble person to run a benchmark, then comment on it with your elitist know-it-all attitude, claiming you're not surprised.
Is there a word for these types of people? They are all over the internet. If we invent a label maybe they’ll start becoming self aware and start acting more down to earth.
Except IO is the bottleneck here. The concurrency model for IO should determine overall speed. If Python async is slower for IO tasks than sync, then that IS an unexpected result and an indication of a Python-specific problem.
If you say IO is the bottleneck, then you're claiming there is no significant difference between python and node. That's what a bottleneck means.
> The concurrency model for IO should determine overall speed.
"Speed" is meaningless, it's either latency or throughput. Yeah, yeah, sob in your pillow about how mean elites are, clean up your mascara, and learn the correct terminology.
We've already claimed the concurrency model is asynchronous IO for both python and node. Since they are both doing the same basic thing, setting up an event loop and polling the OS for responses, it's not an issue of which has a superior model.
> If Python async is slower for IO tasks than sync, then that IS an unexpected result and an indication of a Python-specific problem.
Both sync and async IO have their own implementations. If you read from a file synchronously, you're calling out to the OS and getting a result back with no interpreter involvement. This[2] is a simple single-threaded server in C. All it does is tell the kernel, "here's my IO, wake me up when it's done."
When you do async work, you have to schedule IO and then poll for it. This[1] is an example of doing that in epoll in straight C. Polling involves more calls into the kernel to tell it what events to look for, and then the application has to branch through different possible events.
And you can't avoid this if you want to manage IO asynchronously. If you use synchronous IO in threading or processes, you're still constructing threads or processes. (Which makes sense if you needed them anyway.)
So unless an interpreter builds its synchronous calls on top of async, sync necessarily has less involvement with both the kernel and interpreter.
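Python's stdlib exposes both paths directly, and the asymmetry is visible even in a toy (a minimal sketch using a local socketpair, not a server; `selectors` is the portable wrapper over epoll/kqueue):

```python
import selectors
import socket

# A local socketpair gives two connected endpoints to demonstrate both paths.
a, b = socket.socketpair()

# Synchronous path: one blocking call; the kernel wakes us when data is ready.
b.sendall(b"ping")
data_sync = a.recv(4)

# Asynchronous path: register interest, poll for readiness, then read.
# Strictly more bookkeeping (and more kernel trips) per IO operation.
sel = selectors.DefaultSelector()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ)
b.sendall(b"pong")
events = sel.select(timeout=1.0)          # extra call into the kernel
data_async = a.recv(4) if events else b""
sel.unregister(a)
sel.close()
a.close()
b.close()
```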
The reason the interpreter matters is because the latency picture of async is very linear:
* event loop wakes up task
* interpreter processes application code
* application wants to open / read / write / etc
* interpreter processes stdlib adding a new task
* event loop wakes up IO task
* interpreter processes stdlib checking on task
* kernel actually checks on task
Since an event loop is a single-threaded operation, each one of these operations is sequential. Your maximum throughput, then, is limited by the interpreter being able to complete IO operations as fast as it is asked to initiate them.
I'm not familiar enough with it to be certain, but Node may do much of that work in entirely native code. Python is likely slow because it implements the event loop in python[3].
So, not only is Python's interpreter slower than Node's, but it's having to shuffle tasks in the interpreter. If Node is managing a single event loop all in low level code, that's less work it's doing, and even if it's not, Node can JIT-compile some or all of that interpreter work.
>If you say IO is the bottleneck, then you're claiming there is no significant difference between python and node. That's what a bottleneck means.
This is my claim about what SHOULD be happening, under the obvious logic that tasks handled in parallel with IO should be faster than tasks handled sequentially, and under the assumption that IO takes up way more time than local processing.
Like I said the fact that this is NOT happening within the python ecosystem and assuming the axioms above are true, then this indicates a flaw that is python specific.
>The reason the interpreter matters is because the latency picture of async is very linear:
I would say it shouldn't matter, if done properly, because the local latency picture should be a tiny fraction of the time compared to the round-trip travel time and database processing.
>Python is likely slow because it implements the event loop in python
Yeah, we're in agreement. I said it was a python specific problem.
Take a single task in this benchmark for Python. If the interpreter spends more time processing the task locally than the total round-trip travel time and database processing time... then that means the database is faster than Python. And if database calls are faster than Python, then this is a Python-specific issue.
In Erlang (in the BEAM VM, to be more precise) you can update an application while it is running. It was a requirement for the language; even though these days it is a lot less common, it's still possible.
The way the BEAM does that is by storing both the old and the current version of the module: any new calls will use the functions exported by the new module, but any process already running will keep calling the functions of the old module.
But how do you handle types in a system like that, where the type of a message can change at runtime? There are no guarantees that the types the compiler just approved will still be valid at runtime...
learnyousomeerlang has this quote about rolling upgrades:
"We're getting into one of the most complex parts of OTP, difficult to comprehend and get right, on top of being time consuming. In fact, if you can avoid the whole procedure (which will be called relup from now on) and do simple rolling upgrades by restarting VMs and booting new applications, I would recommend you do so. Relups should be one of these 'do or die' tools. Something you use when you have few more choices."
It is reversible. If you make a mistake like this, oftentimes you can rely on law enforcement or the bank to force the money to be given back. Not sure if this is possible with crypto.
So you harvest CO2 into fuel. That fuel is burned and the CO2 is released back into the air. So the net difference is zero.
But to harvest the CO2 you needed to generate a high amount of heat. That's extra CO2. So net CO2 is more.
It depends on the cost of harvesting that CO2.
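A toy carbon ledger makes that dependency explicit (the per-litre figures are invented, purely for illustration):

```python
# Invented per-litre figures for a synthetic fuel.
captured_kg = 2.3     # CO2 pulled from the air to make one litre
released_kg = 2.3     # CO2 returned to the air when that litre is burned
heat_fossil_kg = 0.4  # CO2 emitted making the process heat from fossil power
heat_clean_kg = 0.0   # ...or from solar / renewable heat

net_fossil = released_kg - captured_kg + heat_fossil_kg  # positive: worse than zero
net_clean = released_kg - captured_kg + heat_clean_kg    # zero: genuinely net zero
```

The capture and release terms always cancel; the sign of the total is set entirely by the energy source used for harvesting.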