precsim's comments | Hacker News

Nowadays I often find GNU GFortran the same speed as or faster than Intel Fortran for FEM/CFD codes, while 10-15 years ago Intel Fortran was generally 10-20% faster. So these days I typically just recommend going with GFortran, which is easily available on most platforms.


And not only on such codes. I've posted variations on this before:

In my experience GNU Fortran was always competitive with proprietary compilers at around the 20% level on average once the scheduling was sorted for a new architecture. That's from the times I could try SPARC SunOS and GNU/Linux, MIPS/Irix, Alpha/Tru64(?), RS6000/AIX, and x86 GNU/Linux. (I don't know about the times between those and Opteron.)

I don't have the numbers to hand, but it's at that level on the Polyhedron benchmarks relative to Ifort on SKX with roughly equivalent options. I think it was twice as fast in one case, and it basically only lost out badly where it unfortunately doesn't use vectorized maths routines, in a couple of cases unusually dominated by them, whereas libmvec would be used for C. GNU Fortran is also surprisingly more reliable than Ifort, but Ifort has the bizarre mystique that had me ordered to use it against the advice of the code's maintainers, notwithstanding a previous comparison with Ifort, and even though the result crashed with the Ifort du jour -- which wasn't an immediate clincher.

I don't remember the numbers relative to XLF on POWER, but they were respectable, and I don't have access to the proprietary Arm compiler.

Anyhow, typical HPC run times are relatively insensitive to code generation compared with MPI (especially collectives), and typically aren't even reproducible below the 10% level. [Broad picture, mileage varies, measure and understand, etc.]


I've seen a similar change in recent years. 10 years ago Intel Fortran was usually faster, by a good 20% at least. GFortran seems to have caught up a lot.


For anyone doing HPC work who hasn't tried GFortran >= 10, I highly suggest giving it a go. We switched to it for the arm64 improvements, but surprisingly also found a 20% speedup on the x86-64 target. My best guess is that it's a combination of IPO and autovectorization enhancements.


Profiling and -fopt-info should tell you why. (If you care about speed, use them anyway!) I'd be surprised if it's vectorization improvements, other than a specific bug fix.
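For reference, a minimal sketch of the kind of compile line meant here (the `-fopt-info` flag names are real GCC options; the source file name is made up):

```shell
# Compile with aggressive optimization and ask GCC to report which
# loops it vectorized, and log which it missed (and why) to a file.
gfortran -O3 -march=native \
    -fopt-info-vec-optimized \
    -fopt-info-vec-missed=vec-missed.log \
    solver.f90 -o solver
```

The missed-optimization log in particular tends to explain surprising speedups or slowdowns between compiler versions.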


Very interested in LFortran. Are there any benchmarks showing how it compares in performance with GFortran or Intel Fortran?


There is speed of compilation (I did some very preliminary benchmarks and I think it will be very good), and there is speed of the generated code, where currently we just use stock LLVM. But down the road we will have special optimizations on top, just like Intel Fortran does. One of the things I personally would like to take a close look at is array operations, where I've heard from many users that they are slower than explicit loops; also function inlining and other such optimizations.
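One common reason array operations lose to explicit loops in Fortran is temporaries: a whole-array expression can materialize intermediate arrays that a fused loop avoids. A toy Python sketch of the two evaluation strategies (illustrative only, not LFortran's actual code generation):

```python
def whole_array(b, c):
    # Mimics the Fortran array expression "a = b*c + b": each
    # operation materializes a full temporary before the next runs.
    t1 = [bi * ci for bi, ci in zip(b, c)]        # temporary for b*c
    return [t1i + bi for t1i, bi in zip(t1, b)]   # result of t1 + b

def explicit_loop(b, c):
    # Mimics the hand-written DO loop: one fused pass, no temporaries.
    return [bi * ci + bi for bi, ci in zip(b, c)]

b = [0.0, 1.0, 2.0, 3.0]
c = [2.0, 2.0, 2.0, 2.0]
assert whole_array(b, c) == explicit_loop(b, c) == [0.0, 3.0, 6.0, 9.0]
```

Both forms compute the same values; the optimization question is whether the compiler can fuse the first form into the second.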

We want to have a dedicated repository for benchmarking compilers:

https://github.com/fortran-lang/benchmarks/issues/2


This is very interesting, and I would never have guessed it was possible. I would love to read a more detailed write-up of how all this works (is everything bundled into one large blob for the Octave interpreter, for example?). Also, is there a specific reason you used v4.4.1? And playing with the PWA everything seems instant, but Octave is ~1 GB installed; wouldn't the Octave WASM have to be downloaded to the client (which it doesn't seem to do)?


The GNU Octave interpreter and its dependencies are all in the matpower.wasm file. 19 MB is still quite large for a website, but it is loaded asynchronously and the WASM gets compiled as it streams in. The interpreter runs in a WebWorker with a Promise-based interface so the UI doesn't get held up. Many of the dependencies (e.g. FFTW, CHOLMOD, ARPACK, Qt, HDF5) are disabled as they are not needed to run MATPOWER. There was no particular reason for using v4.4.1: the core functionality of GNU Octave doesn't change much, and that version was stable and sufficient for my needs.


Thank you for explaining. I will have to explore this further, as it opens up a lot of interesting possibilities. I find it very impressive that you have managed to get virtually no perceptible loading time at all for Octave running on a client (I remember threads now and then about statically compiling Octave for portability, which all eventually concluded it wasn't possible, so I would never have guessed this was viable).


> Also if you want to ask me questions about my thinking as a user who thinks a lot in these problems space, feel free to email me (I will want it to move to a phone call, but don't want to post my phone number on hacker news).

Thank you for your offer. I could not find your contact info anywhere though, so I left mine in my profile if anyone wants to get in touch for collaborations, or anything at all really.


> Seems like you are just comsol.

Maybe it isn't too clear, but Comsol is just one solver (depending on how you look at it, I guess), while the idea of FEATool is to be a fully integrated platform for "any" solver, and by extension to be able to mix and match and combine them (plug and play, so to speak). At present I have only been able to implement the built-in MATLAB-based multiphysics solver, with interfaces to FEniCS, OpenFOAM, and SU2.

> Why FENICS and not DEAL.II or MOOSE (I don't know, I am just curious. I used to use FENICs but got frustrated because their input syntax kept changing on me.

No particular reason really; I just started with FEniCS as it is quite popular and I found the FEATool PDE syntax easy to convert to FEniCS Python scripts. The plan is to add more and more external solver options.

> As someone who does this 50+ hours a week in industry (although only structural modelling but frequently coupled with optics/heat transfer/) and is reasonably well versed in the up and coming research I have a couple of questions (or would have if I was a potential buyer).

Thank you, I really appreciate the interest and feedback. Unfortunately the answer to all your technical questions is "not implemented/available yet". Although FEATool technically can solve any system of PDEs, I haven't yet pre-defined and set everything up so that it is easy for end users. Many things are possible by going down to the FEM matrix assembly syntax, but from a user's standpoint that is most often the same as the feature not existing.

> Also really cool. I have been thinking about writing software in this space but more nichy for about the past year and am beginning to get started, any interest in collaboration?

Thanks again for the feedback, I'm certainly open to all and any collaborations (email in my profile). Yes, looking back I think starting in a niche would have been a better approach.


I am sceptical that you are going to find customers willing to pay for using multiple solvers, especially since they use the Fortran routines on the back end and most FEA solvers have been heavily verified by e.g. NAFEMS. That said, it might be a differentiating feature when choosing between COMSOL and FEATool Multiphysics, but I doubt it is a technical driver.


Thanks for the honest feedback. I started building FEATool far too many years ago, after not getting anywhere in academia despite too many postdocs, since it was the one thing I was passionate about and good at, and felt was "missing" when I was working. But yes, it seems more and more that each user has a very specific problem they want to solve, which is very hard to generalize for. And although I still think most (if not all) software could be much more user friendly, I have probably bitten off more than I can chew (I only keep going now since I feel I have no choice, given the years I've already invested).


Really cool project. I've used Comsol in the past for some fluidics simulations. I agree, the open-source multiphysics packages come with a steep learning curve (easier for the more tech-literate).

How did you get out of the post-doc loop?


I can understand that, having experienced something similar myself. The problem with users wanting things is they're often things that aren't really compatible with your vision, so you have to compromise and eventually, if you "succeed", you end up being a giant mess that everybody complains about like ANSYS.

Supporting multiple solvers is surely going to mean having solver-specific features, isn't it? Which would eat away at what seems like your vision. Or are you able to keep it abstracted from that so they really are interchangeable?


> I can understand that, having experienced something similar myself. The problem with users wanting things is they're often things that aren't really compatible with your vision, so you have to compromise and eventually, if you "succeed", you end up being a giant mess that everybody complains about like ANSYS.

Yes, unfortunately that is exactly right, and I am already getting the heat. No matter what goes wrong (Matlab, meshing, OpenFOAM, FEniCS, etc.) it always comes back to me, even if the issue is in an external solver or component, which leads to a lot of support work. So yeah, I know many things could be much better, and I'm far from what my "vision" is, but I don't have the resources, so I have to try to do the best I can.

> Supporting multiple solvers is surely going to mean having solver-specific features, isn't it? Which would eat away at what seems like your vision. Or are you able to keep it abstracted from that so they really are interchangeable?

I have so far kept it abstracted with a minimal set of features, but some solvers only support some physics, models, or features, so as you allude to, sooner or later it will turn into specialized modes for each solver, which is not ideal, and I'm not yet sure of the best way to move forward.


Don't really know how this ended up on the front page now, but the overall "idea" that I have yet to find anyone else understand and believe in (even my old prof and colleagues don't seem to "get it") is this: wouldn't it be great if there were a unified, ideally simple, interface from which you could select any solver at the click of a button, whether you wanted to use OpenFOAM, FEniCS, or maybe even Ansys? Then you could set up your simulation model once and run it on "any" and many solvers (to validate and check results, etc.). That's what I've been trying to do by myself for the past 7 years or so. For better or worse it initially seemed like a good idea to use Matlab/Octave, but I have been planning to eventually move to either a Python or Pascal/Delphi backend.
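A rough Python sketch of what such a unified interface could look like (all class and method names here are invented for illustration, not FEATool's actual API):

```python
class Model:
    """A solver-agnostic simulation definition: set up once, run anywhere."""
    def __init__(self, equation, mesh_size):
        self.equation = equation
        self.mesh_size = mesh_size

class SolverBackend:
    name = "base"
    def run(self, model):
        raise NotImplementedError

class OpenFOAMBackend(SolverBackend):
    name = "openfoam"
    def run(self, model):
        # A real backend would write an OpenFOAM case directory and
        # invoke the solver binaries; here it just echoes the request.
        return {"solver": self.name, "equation": model.equation}

class FEniCSBackend(SolverBackend):
    name = "fenics"
    def run(self, model):
        # A real backend would generate and execute a FEniCS Python script.
        return {"solver": self.name, "equation": model.equation}

BACKENDS = {b.name: b for b in (OpenFOAMBackend(), FEniCSBackend())}

def solve(model, backend_name):
    # Same model definition, solver chosen "at the click of a button".
    return BACKENDS[backend_name].run(model)

model = Model(equation="poisson", mesh_size=0.1)
results = [solve(model, name) for name in BACKENDS]
```

The point of the abstraction is that adding a backend means writing one adapter class, while the model definition and the rest of the tooling stay untouched.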


I agree. In my lab our typical solution is a MATLAB/Python routine managing the simulations (submitting to a cluster, gathering the results, producing a report via LaTeX, changing some parameters, submitting again...), but these routines need to be heavily customized for each project, version of the software, cluster, etc., and they always end up being very fragile.
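A stripped-down sketch of that kind of driver routine in Python (the parameter names and file layout are invented; a real version would render solver input files and submit cluster jobs instead of writing stub results):

```python
import itertools
import json
import os
import tempfile

def run_case(params, workdir):
    # Stand-in for "render input files, submit to the cluster, wait,
    # gather results"; here it just records the parameters to a file.
    name = "result_re{re}_mesh{mesh}.json".format(**params)
    path = os.path.join(workdir, name)
    with open(path, "w") as f:
        json.dump({"params": params, "status": "done"}, f)
    return path

def sweep(grid, workdir):
    # One run per point of the Cartesian product of the parameter grid.
    keys = sorted(grid)
    combos = itertools.product(*(grid[k] for k in keys))
    return [run_case(dict(zip(keys, vals)), workdir) for vals in combos]

with tempfile.TemporaryDirectory() as d:
    results = sweep({"re": [100, 1000], "mesh": [0.1, 0.05]}, d)
```

The fragility usually creeps in exactly at the `run_case` boundary, where project- and cluster-specific details accumulate.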


Yes, I think the initial simulation definition/setup can be "easy", but once you get to parallel shared-memory clusters it unfortunately has to be much more hands-on. I had hoped to eventually not only allow running individual solvers from a single interface, but also allow coupling them, similar to preCICE [1]; however, making that easy to use is something I've not dared attempt yet.

[1]: https://www.precice.org
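As a toy sketch of partitioned coupling in the preCICE spirit: two "solvers" exchange interface values until they reach a fixed point. The solvers below are scalar stand-ins invented for illustration, not real physics codes:

```python
def fluid_step(wall_temperature):
    # Pretend fluid solver: returns heat flux at the interface.
    return 0.5 * (1.0 - wall_temperature)

def solid_step(heat_flux):
    # Pretend solid solver: returns wall temperature for a given flux.
    return 0.8 * heat_flux

def couple(t0=0.0, tol=1e-10, max_iter=100):
    # Fixed-point (Picard) iteration over the interface value; real
    # coupling libraries add acceleration (e.g. quasi-Newton) on top.
    t = t0
    for i in range(max_iter):
        q = fluid_step(t)
        t_new = solid_step(q)
        if abs(t_new - t) < tol:
            return t_new, i + 1
        t = t_new
    raise RuntimeError("coupling did not converge")

t, iters = couple()  # converges to t = 0.4 / 1.4
```

Making this robust and easy to use for real solvers (sub-cycling, data mapping between non-matching meshes, convergence acceleration) is exactly the hard part.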


You mean like FEMAP or Hypermesh? Granted they are geared towards structural mechanics, but they will write input files for a host of solvers including ANSYS, ABAQUS, NASTRAN, MARC and LS-DYNA.


Is there really much value in checking multiple solvers if you use the same pre/post, which might introduce the same common errors, including user error, into all of them? I know it would be handy from time to time, but validating solvers against each other seems like a small thing for an entire pre/post to be built around. It sounds a bit like compiling your software with multiple compilers to make sure it's correct. The bugs aren't usually in the compilers.


I guess the thing is most people only use one solver, but at least when I was doing research it was rare that two physics solvers based on different discretizations would give exactly the same results (sometimes they were wildly different). So I would say yes, it's actually very valuable, although time consuming, and something most users don't really consider or think about (just because your physics solver converges doesn't necessarily mean the physics are represented accurately).
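The simplest version of this kind of consistency check is running the same model twice and comparing values at the same point. A pure-Python sketch: solve u'' = -1 on [0,1] with u(0) = u(1) = 0 by second-order finite differences on two grids (the exact solution is u(x) = x(1-x)/2, so both should recover u(0.5) = 0.125; for a real problem, a discrepancy here is exactly the signal worth investigating):

```python
def solve_poisson(n):
    # n interior points; Thomas-algorithm solve of the tridiagonal
    # system (u[i-1] - 2*u[i] + u[i+1]) / h**2 = -1, zero boundaries.
    h = 1.0 / (n + 1)
    diag = [-2.0] * n
    rhs = [-h * h] * n
    for i in range(1, n):          # forward elimination
        m = 1.0 / diag[i - 1]
        diag[i] -= m               # sub-diagonal entries are all 1
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n                  # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - u[i + 1]) / diag[i]
    return u

coarse = solve_poisson(9)[4]      # value at x = 0.5 on the coarse grid
fine = solve_poisson(99)[49]      # same location, 10x finer grid
```

Agreement between the two runs (and with the exact 0.125) builds confidence; two genuinely different discretizations disagreeing beyond their expected error is what makes cross-validation worth the time.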


The first versions were open source, asking for funding via donations. Although I would have really liked for FEATool to be open source (and I think such a tool would be ideally suited to it), not having any funding besides my personal savings, it was needless to say financially unsustainable (not that it's financially much easier now).


Very interesting. Making simulations "easy and accessible" really resonates with me, having spent far too many years single-handedly trying to get traction for my "easy to use" engineering physics, FEA, and PDE ideas and simulation tools [1]. I would be very interested in collaborating if anyone from hash.ai (or anyone else) is reading this and ever thinks of including, or moving into, the "easy to use" PDE-based simulation space.

[1]: https://www.featool.com


I've been thinking about these kinds of things a bit. PDE and FEA seem core to lots of interesting simulations, from chemistry and climate to engineering.

Firedrake [1] and FEniCS [2] seem like interesting approaches to this. I know that Firedrake is being explored for some climate modelling with the Fluidity project [3][4].

The value of any simulation platform is based on the value of the models and datasets in that platform. So the key question is: can you attract the people making the valuable models to your platform? Working with things that people are already using seems important, as does talking to those people about it. I've got a Google form currently asking questions to try to get more information about what people who make and use simulations need. [5]

[1] https://www.firedrakeproject.org/

[2] https://fenicsproject.org/

[3] http://fluidityproject.github.io/

[4] https://www.archer.ac.uk/community/eCSE/eCSE06-01/IC15-ecse-...

[5] https://docs.google.com/forms/d/e/1FAIpQLSfH-ns1CSQlyVyo7oj_...


Although not exactly a structured course or lessons, FEATool has been designed to be a very easy-to-use toolbox for physics simulations and for learning by experimentation and trial and error with the built-in tutorials (there is currently a free license for home use during the lockdowns [1]). For fluid flow and heat transfer, the simpler CFDTool might be more appropriate, with an even simpler interface that "hides" the underlying PDEs [2].

[1] https://www.featool.com

[2] https://www.cfdtool.com

