The question that the referenced paper (1) is trying to answer is "do the 3D incompressible Euler equations develop a finite time singularity from smooth initial data of finite energy?" This is an important question in the theory of nonlinear partial differential equations, but is probably not as relevant to real fluid flow as a lay reader might imagine.
The incompressible Euler equations model a very strange and unphysical kind of fluid. Incompressibility means that the speed of wave propagation in such a fluid is infinite, which means that normal causality is not respected. Effects in such a fluid happen simultaneously with their causes.
For example, if you apply a force to one end of a pipe full of Euler fluid, the fluid instantly starts coming out of the other end of the pipe, with no time taken for this effect to propagate from one end of the pipe to the other. You could use a long pipe full of Euler fluid as a superluminal communication device!
Intuitively, it seems reasonable that in such an unphysical fluid, it would be possible to form a singularity even from smooth initial conditions. The difficulty, of course, is proving that intuition, which is what the paper is trying to do.
1) https://arxiv.org/pdf/2210.07191.pdf "Stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth data", Jiajie Chen and Thomas Y. Hou.
A simple example of a function with a singularity is f(t)=1/t. Note that at t=0, f(t) is undefined due to division by zero. On either side of zero, the absolute value of f(t) approaches infinity.
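A quick numerical illustration of that blow-up (plain Python, with arbitrarily chosen sample points):

    # |1/t| grows without bound as t shrinks toward zero.
    for t in [1.0, 0.1, 0.01, 0.001, 1e-6]:
        print(t, abs(1.0 / t))

    # At t = 0 itself, 1/t is undefined (Python raises ZeroDivisionError).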
In this case, we are tracking the flow of an incompressible fluid over time. This flow is represented by a velocity field evolving over time, under the constraint that there is no net inflow or outflow of material for any region of space. Thus, the singularity corresponds to some portion of the fluid speeding up without bound, its speed approaching infinity as time approaches some finite value.
Because the fluid cannot be compressed, the only way the singularity can be produced is for a portion of the fluid to swirl, increasingly rapidly, about some point: hence the discussion in the article about vorticity.
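In symbols (these are the standard definitions, not anything specific to this paper): incompressibility says the velocity field has zero divergence, and the swirling is measured by the vorticity, the curl of the velocity:

    \nabla \cdot \mathbf{u} = 0, \qquad \boldsymbol{\omega} = \nabla \times \mathbf{u}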
As isoprophlex pointed out, this undefined value of the velocity field prevents you from computing (or at least complicates computing) the further evolution of the fluid.
Do these swirls shed energy? Is it considered in these equations that for example friction within the swirls would slow them down (and hence not reach a singularity)?
In real fluids yes, absolutely: they basically transform/branch/divide/split into smaller and smaller scale vortices, and then those dissipate the energy into the fluid (heating it up a bit). The Euler equations themselves, however, have no viscosity term, so there is no internal friction in the model to slow the swirls down.
The incompressible Euler equations model a fluid as a two-valued field. This means that at every point in space, the field has two values, density and velocity (1).
To me (2), a singularity in a field like this means that one or more of the field values "blows up", i.e. goes to infinity as you run the time variable forward.
But how could this ever happen? The Euler equations model the "conservation" (i.e. constant-ness) of three real physical quantities: mass, momentum, and energy. If these three quantities are finite and constant when you add them up over the whole field, how can any part of it "blow up" into an infinite value?
The answer is that the blow-up must occupy a volume that shrinks as the blow-up grows, so the conserved quantities are still constant. The singularity would be infinitely small in space, and have an infinite value of density or velocity (or both).
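A rough scaling argument (my own illustration, not the paper's actual construction) shows how that can happen. Suppose that as the blow-up time T approaches, the speed in a small region grows like $(T-t)^{-\alpha}$ while the radius of that region shrinks like $(T-t)^{\beta}$. The kinetic energy contributed by the region then scales like

    E(t) \sim |u|^2 \times \text{volume} \sim (T-t)^{-2\alpha} \, (T-t)^{3\beta} = (T-t)^{3\beta - 2\alpha},

which stays bounded as $t \to T$ whenever $3\beta \ge 2\alpha$, even though the speed itself goes to infinity.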
The hard question is, are these blow-ups merely artifacts of a particular numerical simulation technique, or are they essential somehow to the incompressible Euler equations themselves? That's what these papers are trying to figure out.
To me, an "essential" (i.e. inherent-in-the-equations) blow-up seems intuitively reasonable because of the acausal nature of the field. When you simulate the incompressible Euler equations, it superficially looks like it's a physical fluid doing physical-fluid things, swirling and flowing around. But in a real fluid, a change in one part of the fluid propagates to the other parts at finite velocity, creating real cause and effect.
An Euler fluid's time evolution is not a phenomenon that ripples forward through time in a normal way. Instead, every point in the fluid responds to every other point simultaneously. If you poke a cube of incompressible Euler fluid with your finger, there is no pressure wave that ripples through it, where the fluid parcels push each other along and get out of each other's way. Instead, the whole cube of fluid somehow instantly adopts a new flow pattern that conserves mass/momentum/energy in response to that finger-poke.
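The mathematical reason for that instantaneous response (a standard fact about incompressible flow, not something specific to this paper): for a constant-density fluid, taking the divergence of the momentum equation and using the incompressibility condition turns the pressure into the solution of a Poisson equation,

    \nabla^{2} p = -\rho \, \nabla \cdot \big( (\mathbf{u} \cdot \nabla)\, \mathbf{u} \big),

which is elliptic: it has no propagation speed, so the pressure at every point depends on the velocity field everywhere at the same instant.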
1) Note that velocity is a vector, since it has a direction. This means that in 2D the velocity is two numbers, and in 3D it's three numbers. So technically the 3D incompressible Euler equations have four values at every point: one density, and three velocity components, one each in the x, y, and z directions.
2) I'm a numerical simulation guy, not a mathematician. Real math experts have rigorous definitions of a singularity, e.g. in https://arxiv.org/pdf/2203.17221.pdf "Singularity formation in the incompressible Euler equation in finite and infinite time," Theodore D. Drivas and Tarek M. Elgindi.
>The incompressible Euler equations model a fluid as a two-valued field. This means that at every point in space, the field has two values, density and velocity
I don't get it. If the fluid is incompressible, how can density have a value at every point in space? Isn't it just a constant?
The density can be constant, but it doesn't have to be. If the density field starts out with some variation in it, then those variations move around as the fluid flows. Incompressibility just means that those density variations can't get bigger or smaller, they can only move, shear, and rotate.
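In equation form (the standard variable-density incompressible model, not something from the paper), the density is simply carried along by the flow:

    \frac{\partial \rho}{\partial t} + \mathbf{u} \cdot \nabla \rho = 0, \qquad \nabla \cdot \mathbf{u} = 0.

The value of ρ following any given fluid parcel never changes; only where that value sits in space changes.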
When you work with near-supercritical and supercritical fluids under laboratory conditions, you can turn the pump by hand and you feel when the density hits the ceiling.
So you know something is up.
Systems would be modeled mathematically using a fluid's individual component values, but we were paid for the real-world laboratory data.
I think (not a physicist) it's, simply put, an infinity or NaN value. Since these are step-wise methods, having such a value show up anywhere will seriously mess up all subsequent calculation steps.
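A minimal sketch of that failure mode (NumPy, with a made-up neighbour-averaging update; the stencil is purely illustrative):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    u[1] = np.inf  # pretend one grid point has just blown up

    # A made-up explicit update that mixes neighbouring values,
    # standing in for a real time-stepping stencil.
    for step in range(3):
        u = 0.5 * (np.roll(u, 1) + np.roll(u, -1))
        print(step, u)

    # The inf spreads to every point it touches, and combinations like
    # inf - inf produce nan; either way, every later step is garbage.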
This sounds similar in some ways to the discussion that arose between AlphaPhoenix and Veritasium YouTube channels concerning the speed of electrons along a closed loop of wire.
https://youtu.be/2Vrhk5OjBP8
There's a relevant illustration at 9:41, which is where I think these topics intersect conceptually.
Yes, in pure/applied math, we know a lot about various cases of approximation. But in practice there are more cases of approximation, and, right, the Euler equations are another such case. Or, to be a little flippant, generally in applications to real problems, we look at a lot of the features and throw out some, modify some, and actually honor some!!
So, a question is, can we improve our ability to make such approximations and know something about the accuracy of the solutions we will get? E.g., for the Euler equations, will that approximation of an "incompressible" fluid ever work in practice and, if so, when and, there, how accurate can/will it be?
Or, what about, hmm, just to be picky and pick something, friction on the side of the tube? What if the tube is not a perfect tube?
A few grains of dirt: What if the liquid is water but, like most real water, has some solids floating around in it? Right, we can say, so there are a few grains of dirt floating around in the water, and they won't matter -- to be picky, that's an approximation, and we are likely correct, but where is an actual math theorem that says we are correct or how correct, i.e., accurate, are we? Right, a few grains of dirt -- we don't much care. But that's practical judgment and not really theorem/proof math.
And similarly for other approximations we get as we throw out, modify, or honor real features?
So, as stated, this is too difficult as a pure/applied math research direction. Okay, ..., then, is there anything at all in that direction that might be not absurdly difficult as a research direction?
Or, to be simplistic, we work hard and get a numerical solution to a boundary value problem. Now someone tweaks the boundary. Can we say that our numerical solution is only tweaked? Or, when can we say that small changes in the problem statement will result in only small changes in the solution? Right, we are into some topology and looking for a case of continuity .... Hmm .... If we had some linearity ...!!! Right, the two pillars of analysis are continuity and linearity ...! But here with Euler we were considering nonlinear partial differential equations!
Again I ask, is there any hope we can do anything for some corresponding math??
Wow, I'm honored :) These days, I try to only comment when an article is really in my wheelhouse, but that's not very often, given my narrow interests in fluid dynamics and computational physics.
> I try to only comment when an article is really in my wheelhouse
Which is why your comment is exceptionally worthwhile. I also know enough about fluid mechanics to both understand and appreciate it.
People often wonder on HN what the point of a STEM degree is (after making money). To me I've had a lifetime of pleasure from understanding how things work. It's so much better than things being mysterious black boxes.
I once asked a date if she wanted to understand how airplanes worked. She said no, that understanding them would make her afraid of flying. For me, it was the opposite. Knowing how airplanes fly and how it all works made me a much less anxious passenger.
> This is an important question in the theory of nonlinear partial differential equations, but is probably not as relevant to real fluid flow as a lay reader might imagine.
What kinds of problems does it solve to know an answer to this question? Honestly curious, please do not take this as offensive/dismissive.
If mathematicians could solve these kinds of problems, they could answer valuable questions like "Will this equation always have a physically meaningful solution?" If the answer was "No", then we would know that the equation can't be a faithful model of reality.
We already know that the incompressible Euler equations can't be a faithful model, for reasons I've mentioned elsewhere in the thread. But I think the hope is that if they can answer these questions for incompressible Euler, then they can eventually extend their techniques to more complex fluid equations like Navier-Stokes, which people generally assume (but can't yet prove) is physically reasonable.
Simulation has great practical value, but it doesn't give you any guarantees about the behavior of the solutions for all the cases you haven't actually tried.
This raises a question I hadn't thought of before. Real-world fluid flow is ultimately well-modeled by the equations of many-body Newtonian mechanics, right (atoms bumping around)? Are those equations vulnerable to blow-ups?
Pretty much any mathematical model of a real phenomenon can have some sort of singularity or discontinuity in it.
If you model atoms as dimensionless points (1), then any kind of force law with the distance between atoms in the denominator can lead to a singularity when that distance is zero. In practice, you write the simulator to disallow this, but it's still there in the equations, you're just ignoring it.
If you model your atoms as finite-sized but incompressible billiard balls, then when they hit each other it's a discontinuity, since they instantly change direction when they collide. These collisions conserve total momentum and energy, but they're unphysical because real physical quantities can't jump from one value to another (in classical physics).
Even if you model your atoms as little rubber balls, the model can still be singular. Linear elasticity (the most common choice) allows you to compress a finite-sized object down to zero size with finite energy, which yields infinite energy density. Again, you'd have to disallow that in the simulator, which is very practical, but not theoretically satisfying.
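A back-of-the-envelope version of that claim (my own illustration, using a linear pressure law p = K(1 - V/V0)): the work needed to squeeze the object from its rest volume V0 all the way down to zero volume is

    W = \int_0^{V_0} K \left(1 - \frac{V}{V_0}\right) dV = \frac{K V_0}{2},

which is finite, while the stored energy per unit of remaining volume grows without bound as the volume goes to zero.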
It's the equations themselves that are singular. When we write simulators, we usually have to paper over the singularities that are inherent in the math.
For example, if you're simulating charged particles moving around, and you use a force equation F = k q1 q2 / d^2 (1), then as d (the distance between the particles) approaches zero, the force F goes to infinity.
For atoms, it works the same way. If you use a force law like Lennard-Jones (2), it also has the interatomic distance in the denominator, so the equation has a singularity baked right in.
You could always adopt a more complex force equation that doesn't have a singularity in it. But in practice, it's easier to use a simple but singular equation, and then selectively ignore its bad behavior.
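A small sketch of both halves of that point, in plain Python (the softening length eps is an arbitrary choice of mine, not a value from any particular simulator):

    def coulomb_force(d, k=1.0, q1=1.0, q2=1.0):
        # Bare inverse-square law: singular at d = 0.
        return k * q1 * q2 / d**2

    def softened_force(d, k=1.0, q1=1.0, q2=1.0, eps=1e-3):
        # Common workaround: a softening length keeps the force finite
        # at d = 0, at the cost of being slightly wrong at short range.
        return k * q1 * q2 / (d**2 + eps**2)

    for d in [1.0, 0.1, 0.01, 1e-6]:
        print(d, coulomb_force(d), softened_force(d))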
The presence of a singularity in the force doesn't mean it will cause a blow up in the solution. Two positively charged point particles interacting electrostatically can be shot at each other at any angle or speed and blowup will never occur.
There are all kinds of blow-ups in Newtonian mechanics and in other equations of physics. The singularity at the center of a black hole in general relativity is a famous example. The ultraviolet catastrophe in classical thermodynamics was another. The presumption is that blow-ups in an equation indicate a mismatch between the equation and the true physical world, telling physicists to look for better theories, whose equations don't blow up. For the ultraviolet catastrophe, the mystery was solved through the discovery of quantum mechanics. For GR, it is still unsolved, and the solution is expected to come from a theory of quantum gravity that hasn't yet been invented, but is the target of tons of research.
That's a great article! I'd seen a lot of other cool stuff from the same author over the years (see https://math.ucr.edu/home/baez/README.html), but had somehow missed this one.
I would think that nothing in reality is infinite, but allegedly sound waves collapsing bubbles in a fluid (sonoluminescence) can cause a very small amount of plasma to become hotter than the surface of the sun and emit light. Some controversial research claims it might even be possible to create nuclear fusion this way.
The infinite implied speed is of course because the Euler equations are just a rough approximation of what is happening, right? Much like Newtonian physics was an approximation?
There's a real causality bound at the speed of light.
If you use the theory of nonlinear partial differential equations to analyze the behavior of compressible materials, you can find what are called the "characteristic" speeds, which are the speeds that various types of waves propagate at.
Compressible materials tend to have two different characteristic speeds: one for sound waves and one for shock waves.
The speed of sound basically works out to speed = sqrt(stiffness / density). So as a material gets stiffer, the speed of sound goes up. An infinitely stiff (i.e. incompressible) material by implication would have an infinite speed of sound, though this can't happen in any real material.
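As a sanity check on that formula, with round numbers for water (bulk modulus about 2.2 GPa, density 1000 kg/m^3):

    c = \sqrt{K / \rho} \approx \sqrt{\frac{2.2 \times 10^{9}\ \text{Pa}}{1000\ \text{kg/m}^{3}}} \approx 1.5 \times 10^{3}\ \text{m/s},

which matches the measured speed of sound in water (roughly 1480 m/s), and which clearly diverges if you let the stiffness go to infinity.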
Shock waves travel faster than sound, at a speed related to the pressure difference across the shock. The greater the pressure difference, the faster the shock travels. So if you had an infinite pressure difference, you could have an infinitely fast shock wave, but again this can't happen in the real world.
However, sound and shock speeds only apply to pressure waves in a material. Other influences like gravity and electromagnetism travel at the speed of light. So for example, if you're doing fluid dynamics for plasma, then you'll have a third characteristic speed, the speed of light, because of the charged nature of the material attracting and repelling itself.
There are also more exotic characteristics, like the speed of a propagating combustion front in a flammable material. But when you get to this level you're no longer just solving one simple set of differential equations.
The authors have shown a very nice and (to me) non-intuitive result. But they're playing a little fast and loose with their comparison to Mathematica. They're comparing their algorithm's accuracy (solution correctness vs. incorrectness), with Mathematica's ability to find the correct solution in less than 30 seconds. This is a very important distinction! Mathematica will never silently return an incorrect solution (barring software bugs, of course). And Mathematica can often take minutes to evaluate what appears to be a simple integral, so a 30-second timeout is far too short, unless you're simply trying to compare the computational efficiency of the two approaches.
There may be other subtleties as well. Mathematica works in the complex domain by default, which makes many operations more difficult, but the authors discard expressions which contain complex-valued coefficients as "invalid", which makes me think they're implicitly working in the real domain. Do they restrict Mathematica to the real domain when they invoke it? Perhaps, but they don't say one way or the other. And do they try common tricks like invoking FullSimplify[] on an expression/equation before attempting to operate on it? I'd like to see more details of their methodology.
> They're comparing their algorithm's accuracy (solution correctness vs. incorrectness), with Mathematica's ability to find the correct solution in less than 30 seconds. This is a very important distinction!
I had the same initial reaction as you, but then I realized that this is still extremely useful. In a ton of examples, only one direction of differentiation/integration is hard while the other direction is easy. You could build a system that attempted to solve it directly, and failing that attempted to guess-and-verify using this approach. My intuition is that such an overall system would be strictly superior to Mathematica's approach as it exists today.
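A toy version of that guess-and-verify loop, sketched in SymPy (the candidate antiderivative is hard-coded here; in the system being discussed it would come from the model's guess):

    import sympy as sp

    x = sp.symbols('x')
    integrand = x * sp.exp(x)        # the integral we want to solve
    candidate = (x - 1) * sp.exp(x)  # a guessed antiderivative

    # Verification is just differentiation plus simplification,
    # which is the "easy direction" mentioned above.
    residual = sp.simplify(sp.diff(candidate, x) - integrand)
    print(residual)  # 0 -> the guess checks out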
That's a good point. Guess-and-verify could be a handy additional heuristic method if Mathematica's other methods came up empty on a problem. I've also heard of machine learning being used to choose between internal algorithms available in formal proof systems, to try to pick the algorithm that's most likely to work instead of just trying them all sequentially.
I suspect that at least some of these "flaws" are intentional, and are meant to make programming easier, at the expense of some performance.
For example, three of the poster's points (not allowing device property querying, not allowing the programmer to choose where a kernel runs, and not exposing local memory to the programmer) all make programming easier, though they also disallow some types of performance tuning.
One big potential reason for doing GPGPU on a mobile device is to get better energy efficiency per gigaflop, rather than to get huge overall performance like on a desktop GPGPU. In this context, squeezing out all possible performance may not be as important.
It's easy to see why OpenCL hasn't rolled out fully on mobile GPUs yet: writing and debugging a full OpenCL software stack is very expensive and time-consuming, and there's still not that much real programmer demand for OpenCL on mobile.
As for Renderscript, it's always sounded like a bit of "not invented here" syndrome on Google's part -- we've already got CUDA and OpenCL, and RS doesn't really bring much new to the table. They've already deprecated the 3D graphics part of Renderscript in Android 4.1, so perhaps they'll do the same to Renderscript Compute soon.
I'd much rather see Google invest their time in an Android version of something like the Accelerate API from iOS. It would be a lot more generally useful.
When I saw the TEDx guys' caution about a "physics-related speaker [who] has a degree in engineering, not physics", it struck a bit of a nerve for me.
My Ph.D. is in electrical engineering, not physics. I recently got a computational physics paper accepted in a peer-reviewed journal, and I'm hard at work on a second paper. And in a different world, I could speak at a conference and no one would have to worry about my bona fides.
But I think the TEDx caution is well-founded.
Engineers are often accustomed to knowing more about science than the average person. It can be very easy for them, with the best of intentions, to convince themselves and others that they know more than they really do. It's easy to think you've got some great new idea if you don't engage existing experts in the field via peer review and reading papers.
This is not to say electrical engineers can't be authorities on physics topics -- quite the contrary! But I agree with the TEDx guys that it does merit a bit of extra checking, especially in their situation.
> Engineers are often accustomed to knowing more about science than the average person. It can be very easy for them, with the best of intentions, to convince themselves and others that they know more than they really do.
I agree, but that's true of scientists also, many of whom have a very narrow specialty in modern times and may not be qualified to speak outside their field of expertise. For example, Nobel Prize winner William Shockley and his now-infamous lectures on the topic of race.
There was some discussion a while ago about the tendency of Nobel prizewinners to go off into crazyland (see Linus Pauling, others [1]). I recall hearing some speculation that this was (somewhat) inherent to getting a prize. I think the willful ignorance of established thought (to make the great discovery) plus the feedback loop of the prize made Nobel winners uniquely confident in their more off-the-wall theories.
I work in the R&D division of a microprocessor company. Most of what we do, I would call "research", but not quite "science". We investigate things that are too risky or time-consuming for our product design groups to look into, with an eye towards making our company money in the future. We fund Ph.D. students in electrical engineering and computer science at university labs, and collaborate with them. But we don't (generally) push back the frontiers of human knowledge in our day-to-day work.
Things might be different at (say) IBM Research Zürich, doing work on atomic force microscopy, but that sort of thing seems to be the exception rather than the rule in industrial R&D. I don't know if I would have expected to see hardcore science at Sun Labs where the article writer's friend worked.
I like this as a definition of research in a corporate environment: "We investigate things that are too risky or time-consuming for our product design groups to look into, with an eye towards making our company money in the future."
Stross' comparison between Cray 1 performance and that of a modern smartphone seems off. He says: "A regular ARM-powered smartphone, such as an iPhone 4S, is some 12-13 orders of magnitude more powerful as a computing device than a late 1970s-vintage Cray 1 supercomputer."
A Cray 1 could peak at about 250 MFLOPS (http://en.wikipedia.org/wiki/Cray-1), and a modern smartphone like the Galaxy Nexus peaks at about 9.6 GFLOPS (using ARM Neon instructions on both cores). That's less than two orders of magnitude difference.
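For what it's worth, the arithmetic behind that comparison, using the figures quoted above:

    import math

    cray1_peak = 250e6   # ~250 MFLOPS (figure quoted above)
    phone_peak = 9.6e9   # ~9.6 GFLOPS (figure quoted above)

    ratio = phone_peak / cray1_peak
    print(ratio)              # ~38x faster
    print(math.log10(ratio))  # ~1.6 orders of magnitude, nowhere near 12-13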
Floating-point power efficiency seems to have improved by about 6-7 orders of magnitude in that time though, which is very nice :)
Maybe he's comparing integer operations? ARM has relatively underpowered floating-point because most of the uses for an ARM (at least historically) don't involve it.
It sounds like the NCSU guys are using the CPU as a prefetcher to speed up GPU kernel execution, not using the GPU to speed up normal CPU programs as the ExtremeTech article implies.
The CPU parses the GPU kernel and creates a prefetcher program that contains the load instructions of the GPU kernel. This prefetcher runs on the CPU, but slightly ahead of kernel execution on the GPU. This warms up the caches, so that when the GPU executes a load instruction, the data is already there.
Yes, you are right. In fact we don't have to infer this. The researchers state it directly in their abstract, "...a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs..."
> It sounds like the NCSU guys are using the CPU as a prefetcher to speed up GPU kernel execution, not using the GPU to speed up normal CPU programs as the ExtremeTech article implies.
The article says the same thing you are -- that the CPU is used as a prefetcher for the GPU; read the 3rd paragraph:
> To achieve the 20% boost, the researchers reduce the CPU to a fetch/decode unit, and the GPU becomes the primary computation unit. This works out well because CPUs are generally very strong at fetching data from memory, and GPUs are essentially just monstrous floating point units. In practice, this means the CPU is focused on working out what data the GPU needs (pre-fetching), the GPU’s pipes stay full, and a 20% performance boost arises.
Humorously enough, early television in the UK (from 1929-1935) had only 30 lines of resolution and so could be recorded at audio frequencies onto a "Phonovision" disk. You can see some examples of the (quite low-quality) results at http://www.tvdawn.com/recordng.htm.
For a little while, the BBC broadcast 30-line video using two AM radio frequencies (one for the video, one for the audio) that could be picked up by special receivers.
Having to reject someone can be a very stressful experience. I don't like the idea of purposefully inflicting that stress on others unless there's a chance we both could benefit.
"Getting rejected on purpose" sounds like I'd be picking situations where I know for sure I'll be rejected -- an inappropriate proposition, an undeserved request. I don't want to do that to someone else just to try to desensitize myself to rejection.