I’m using Salome to mesh a computational domain for a simulation that captures ultrasound. In its simplest form, the domain is a 100 mm sphere with a 5 mm core.
I am using the NETGEN plugin to create a tetrahedral mesh for the 3D geometry. To capture the physical phenomenon with the required precision, I found that the maximum element size should be 0.8. I also parameterized the hypothesis as very fine and second order.
I tried to compute the mesh on a laptop with a high-end graphics card, but Salome has been stuck processing for well over 24 hours. If I change the max size to 1.8, the mesh computes in about 11 minutes.
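For reference, a minimal sketch of how this setup might look as a Salome Python script. The geometry construction and all names here are my assumptions; the hypothesis values are the ones quoted in this thread (min size 0.1 is mentioned further down):

```python
import salome
salome.salome_init()
from salome.geom import geomBuilder
from salome.smesh import smeshBuilder

geompy = geomBuilder.New()
smesh = smeshBuilder.New()

# Assumed reconstruction of the domain: a 100-unit sphere partitioned
# by a concentric 5-unit core.
core = geompy.MakeSphereR(5)
domain = geompy.MakePartition([geompy.MakeSphereR(100)], [core])

mesh = smesh.Mesh(domain, "domain")
params = mesh.Tetrahedron(algo=smeshBuilder.NETGEN_1D2D3D).Parameters()
params.SetMaxSize(0.8)
params.SetMinSize(0.1)
params.SetFineness(4)       # 4 = "Very Fine" in the NETGEN hypothesis
params.SetSecondOrder(True)
mesh.Compute()
```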
So I have a few questions:
a) Can I compute a mesh with max size < 1.0? The numeric text entry accepts it as a valid value, but will a complex geometry compute successfully and in a reasonable time?
b) If I understood correctly, Salome doesn’t work with absolute units. Would scaling up the model by one order of magnitude, including the mesh sizes (to max = 8 and min = 1), be a valid workaround, depending on how I configure the scaling of the simulation? (See the sketch after this list.)
c) My model will grow in complexity as smaller geometries populate the sphere. Can I use parallelisation to break down the meshing process, compute submeshes, and then reassemble them into a single output mesh? Any pointers on how to go about this would be appreciated.
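To make question b) concrete, here is a rough sketch of the scaling idea, reusing the handles from the sketch above. Note that scaling geometry and size parameters together by 10 produces a geometrically similar mesh with the same element count, so it reinterprets units rather than reducing computation:

```python
# Question b) made concrete: scale geometry and sizes together by 10.
origin = geompy.MakeVertex(0, 0, 0)
scaled = geompy.MakeScaleTransform(domain, origin, 10)

mesh10 = smesh.Mesh(scaled, "domain_x10")
p = mesh10.Tetrahedron(algo=smeshBuilder.NETGEN_1D2D3D).Parameters()
p.SetMaxSize(8.0)   # 0.8 * 10
p.SetMinSize(1.0)   # 0.1 * 10
mesh10.Compute()
```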
Here are my answers to your questions (I’m an intermediate user):
a) The max mesh size can be lower than 1. However, the lower the mesh size, the higher the computation time.
b) The mesh size is the length of the tetrahedron edges in NETGEN. If your model is a 0.1 × 0.1 × 0.1 cube (the unit is implicit), a mesh size of 0.01 will create 10 tetrahedron edges along each cube edge. (See the cube sketch after this list.)
c) I think Salome 9.8.0 uses NETGEN version 6 with multithreaded processing, so it should use all the cores on your machine. It may not be enabled on Windows, though; a developer’s expertise would better answer that question.
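A quick sketch of the cube example from b), under the same assumed setup as the earlier script (geompy/smesh handles and names are assumptions):

```python
# The cube example from b): a 0.1-unit cube meshed at max size 0.01
# gives roughly 0.1 / 0.01 = 10 segments along each of the 12 edges.
box = geompy.MakeBoxDXDYDZ(0.1, 0.1, 0.1)
cube = smesh.Mesh(box, "cube")
cube.Tetrahedron(algo=smeshBuilder.NETGEN_1D2D3D).Parameters().SetMaxSize(0.01)
cube.Compute()
print(cube.NbEdges())   # expect on the order of 12 * 10 edge elements
```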
May I make a few suggestions:
You can use the gmsh HXT algorithm, which in my experience is much faster than NETGEN (I don’t know whether it is multithreaded, however).
If you can find a revolution symmetry, you can exploit it to generate a 2D mesh and then revolve it. If your code only accepts tetrahedral meshes, you can then split the resulting hexahedra and prisms into tetrahedra, roughly as sketched below.
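A rough sketch of that revolution approach, assuming mesh2d is an already computed 2D mesh on a half-section face; exact call signatures may vary between SALOME versions, and dumping a GUI-built study (File > Dump Study) shows the right calls:

```python
import math
import SMESH
from salome.smesh import smeshBuilder

# Assumes mesh2d is a computed 2D mesh lying in a plane through the Z axis.
axis = SMESH.AxisStruct(0, 0, 0, 0, 0, 1)       # revolve about the Z axis
mesh2d.RotationSweep(mesh2d.GetElementsId(), axis,
                     math.pi / 18, 36, 1e-6)    # 36 steps of 10 deg = full turn
# If the solver only accepts tetrahedra, split the swept prisms/hexahedra:
mesh2d.SplitVolumesIntoTetra(mesh2d, smeshBuilder.Hex_5Tet)
```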
I just checked the maximum amount of memory for a single process running on 64-bit Windows, and the value I reported does seem strange. I started the process on Monday evening, waited three days, and it never terminated. Just before killing it, I checked its memory usage: it was below 1 GB.
I just restarted and launched a new instance of SALOME_Session_Server. It initialised below 50 MB of memory. As soon as I loaded the script with the mesh for the sphere (mesh max size = 0.8, min = 0.1), memory gradually increased to 13 GB, where it stabilised.
Interestingly, CPU usage is 8% (i9-10900K @ 3.7 GHz, 20 logical cores). I wonder if I could make better use of the processor with some configuration flag?
@MRousseau, my model is a sphere of 100 units and the mesh size is 0.1. Can I use gmsh HXT from within Salome, or would I have to do it in gmsh? My Salome model is fully procedural.
Thanks for point 2. I can’t make use of revolution symmetry for the current use case, but it will be good to keep in mind for the future. Same for point 3.
You can do it in Salome (at least in the Ubuntu 20 version, like mine). Select Gmsh under the “Advanced” algorithms, then go into the hypothesis and, under the 3D algorithm, choose Parallel Delaunay (HXT). I tested it on my 12-thread laptop. The 2D meshing algorithm didn’t seem to be parallelized, but the 3D one is. The total time to generate a 2.7-million-tetrahedron mesh on my laptop (max size = 2 on a 100-unit sphere) was 2 minutes. That seems pretty efficient from my point of view, more so than the mesher from my earlier point 3, for example.
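For completeness, since your model is fully procedural: the same HXT setup can also be driven from gmsh’s own Python API, roughly like this (a sketch; in recent gmsh 4.x versions, Mesh.Algorithm3D = 10 selects HXT). You would then need to bring the resulting mesh back into your pipeline via a format both tools support:

```python
import gmsh

gmsh.initialize()
gmsh.model.add("sphere")
gmsh.model.occ.addSphere(0, 0, 0, 100)            # 100-unit sphere
gmsh.model.occ.synchronize()
gmsh.option.setNumber("Mesh.MeshSizeMax", 2)      # max size = 2
gmsh.option.setNumber("Mesh.Algorithm3D", 10)     # 10 = HXT (parallel Delaunay)
gmsh.option.setNumber("General.NumThreads", 12)   # use all available threads
gmsh.model.mesh.generate(3)
gmsh.write("sphere.msh")
gmsh.finalize()
```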