Parameterisation and computation of an ultra fine tetrahedral mesh

Hello

I’m using Salome to mesh a computational domain for a simulation that captures ultrasound. In its simplest case, my computational domain is a sphere of 100 mm with a core of 5 mm.

I am using the NETGENPLUGIN to create a tetrahedral mesh for a 3D geometry. To capture the physical phenomenon with the required precision, I found that the maximum mesh element size should be 0.8. I also parameterised the mesh to be very fine and second order:

# Standard SALOME mesh scripting imports; Mesh_1 is created earlier in the
# script from the sphere geometry.
from salome.smesh import smeshBuilder
smesh = smeshBuilder.New()

NETGEN_1D_2D_3D = Mesh_1.Tetrahedron(algo=smeshBuilder.NETGEN_1D2D3D)
NETGEN_3D_Parameters_1 = NETGEN_1D_2D_3D.Parameters()
NETGEN_3D_Parameters_1.SetMaxSize( 0.8575 )
NETGEN_3D_Parameters_1.SetMinSize( 0.1616 )
NETGEN_3D_Parameters_1.SetSecondOrder( 1 )
NETGEN_3D_Parameters_1.SetOptimize( 1 )
NETGEN_3D_Parameters_1.SetFineness( 4 )  # 4 = Very Fine

I tried to compute the mesh on a laptop with a high-end graphics card, but Salome has been stuck processing for well over 24 hours. If I change the max size to 1.8, the mesh computes in about 11 minutes.
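For reference, this is roughly how I am comparing the two cases (a minimal timing sketch; Mesh_1 and NETGEN_3D_Parameters_1 are the objects from the script above):

# Timing sketch: recompute the mesh with a different max size and report the
# wall-clock time and the number of tetrahedra.
import time

NETGEN_3D_Parameters_1.SetMaxSize( 1.8 )  # 0.8 is the size I actually need
t0 = time.time()
ok = Mesh_1.Compute()
print("Computed:", ok,
      "| tetrahedra:", Mesh_1.NbTetras(),
      "| elapsed: %.1f s" % (time.time() - t0))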

So I have a few questions:

a) Can I compute a mesh with a max size < 1.0? The numeric text entry indicates this is a valid value, but will a complex geometry compute successfully and in a reasonable time?

b) If I understood correctly, Salome does not work with absolute units. Would scaling the model up by one order of magnitude, including the mesh size (to max = 8 and min = 1), be a valid workaround, depending on how I configure the scaling of the simulation? (There is a rough sketch of what I have in mind after these questions.)

c) My model will increase in complexity, with smaller geometries populating the sphere. Can I make use of parallelisation to break down the meshing process, compute submeshes and perhaps reassemble them into one single output mesh? Any pointers on how to go about this would be appreciated.
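Concretely, the scaling workaround of question b) would look something like this (a rough sketch only; Sphere_1 stands for my existing sphere geometry and geompy for the usual geomBuilder instance):

# Sketch of question b): scale the geometry up by 10x around the origin and
# relax the mesh sizes by the same factor.
from salome.geom import geomBuilder
geompy = geomBuilder.New()

origin = geompy.MakeVertex(0, 0, 0)
Sphere_scaled = geompy.MakeScaleTransform(Sphere_1, origin, 10.0)

Mesh_scaled = smesh.Mesh(Sphere_scaled)
NETGEN_scaled = Mesh_scaled.Tetrahedron(algo=smeshBuilder.NETGEN_1D2D3D)
params = NETGEN_scaled.Parameters()
params.SetMaxSize( 8.0 )  # 0.8 x 10
params.SetMinSize( 1.0 )  # ~0.16 x 10, rounded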

Many thanks,
Francisco

Hi Francisco
I think what could probably help is more RAM. AFAIK, NETGEN does not use graphics cards.
St.Michael

Hi St. Michael,

Thank you for your recommendation.

I’m running Salome 9.8.0 on a Windows 11 machine with 32 GB of RAM.

I just had a look at SALOME_Session_Server and the process seems to be using 1.8 GB of memory in a stable way.

Is it possible that SALOME is configured with a memory threshold, and would I be able to change it?

Thanks,

Kind regards,
Francisco

Hi Francisco
It’s strange. SALOME_Session_Server usually takes > 2GB just to start.
I have never heard of any memory threshold in SALOME.
St.Michael

Here are my answers to your questions (I’m an intermediate user):

a) The max mesh size can be lower than 1. However, the lower the mesh size, the higher the computation time (see the back-of-the-envelope sketch after these answers).
b) The mesh size is the size of the tetrahedron edges in NETGEN. If your model is a 0.1 x 0.1 x 0.1 cube (the unit is implicit), a mesh size of 0.01 will create about 10 tetrahedron edges along each cube edge.
c) I think Salome 9.8.0 uses NETGEN version 6 with multithreaded processing, so it should use all the cores on your machine. Maybe it is not enabled on Windows; a developer's expertise may better answer that question.
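To give an order of magnitude for a), here is a back-of-the-envelope sketch (assuming roughly regular tetrahedra at the maximum edge size, so real counts will be higher):

# Rough estimate of how many tetrahedra of edge size h fit in a sphere of
# diameter 100 (regular-tet volume = h**3 / (6 * sqrt(2))). Real meshes are
# graded, so treat these numbers as lower bounds.
from math import pi, sqrt

def estimated_tets(diameter, h):
    sphere_volume = 4.0 / 3.0 * pi * (diameter / 2.0) ** 3
    tet_volume = h ** 3 / (6.0 * sqrt(2.0))
    return sphere_volume / tet_volume

for h in (1.8, 0.8):
    print("max size %.1f -> ~%.1f million tets" % (h, estimated_tets(100.0, h) / 1e6))

# The ratio (1.8 / 0.8)**3, roughly 11, is why halving the max size makes the
# computation so much longer.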

May I make a few suggestions:

  1. You can use the gmsh HXT algorithm, which in my experience is much faster than NETGEN (I don’t know if it is multithreaded, however).
  2. If you can find a revolution symmetry, you can benefit from it to generate a 2D mesh and then revolve it. If your code only considers tetrahedral meshes, you can then split the hexahedra and prisms into tetrahedra (see the sketch after this list).
  3. I’ve created a parallel variational mesher for Salome using the geogram library (see GitHub - MoiseRousseau/Variational-Tet-Mesh: Variational tetrahedral mesher). However, I did not test it on Windows, and compiling the mesher and the library might be complicated.
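For the last step of suggestion 2, smeshBuilder has a helper to split volume elements into tetrahedra; a minimal sketch, assuming Mesh_1 already contains the hexahedra/prisms produced by the revolution:

# Sketch: split all volumes (hexahedra and prisms) of an existing mesh into
# tetrahedra. Hex_5Tet is one of the standard splitting patterns.
Mesh_1.SplitVolumesIntoTetra( Mesh_1, smeshBuilder.Hex_5Tet )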

Moise

Thank you @smeap @MRousseau

I just checked the maximum amount of memory for a single process running on 64-bit Windows, and the value that I reported does seem strange. I ran the process on Monday evening, waited three days, and it never terminated. Just before killing the process I checked its memory and it was below 1 GB.

I restarted and launched a new instance of SALOME_Session_Server. It initialised below 50 MB of memory. As soon as I loaded the script with the mesh for the sphere (mesh max size = 0.8, min = 0.1), it gradually increased to 13 GB, where it stabilised.

Interestingly, CPU usage is 8% (20 logical cores, i9-10900K @ 3.7 GHz). I wonder if I could make better use of the processor with some configuration flag?

@MRousseau, my model is a sphere of size 100 and my mesh size is 0.1. Can I use gmsh HXT from within Salome, or would I have to do it in gmsh? My Salome model is fully procedural.

Thanks for point 2; I can’t make use of revolution symmetry for the current use case, but it will be good to consider it in future. Same for point 3.

Many thanks,
Francisco

You can do it in Salome (at least in the Ubuntu 20 version like mine). Select Gmsh under the “Advanced” algorithms, then open the hypothesis and, for the 3D algorithm, choose Parallel Delaunay (HXT). I tested it on my 12-thread laptop: the 2D meshing algorithm did not seem to be parallelised, but the 3D one is. The total time to generate a 2.7 million tetrahedron mesh on my laptop (max size = 2 on a 100-unit sphere) was 2 minutes. That seems pretty efficient from my point of view, more so than my mesher from point 3 above, for example.
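Since your model is fully procedural: the safest way to get the exact Python calls is to set the Gmsh algorithm up once in the GUI and then use File > Dump Study. A dump typically looks roughly like the sketch below; I am assuming the usual GMSHPlugin pattern here, and the Set3DAlgo value for Parallel Delaunay (HXT) depends on the plugin version, so take it from your own dump:

# Sketch only, assuming the GMSHPlugin is available through smeshBuilder.
# HXT_3D_ALGO is a placeholder: replace it with the value your dumped study
# shows after selecting "Parallel Delaunay (HXT)" in the GUI.
HXT_3D_ALGO = 0  # placeholder value

GMSH_algo = Mesh_1.Tetrahedron(algo=smeshBuilder.GMSH)
Gmsh_Parameters = GMSH_algo.Parameters()
Gmsh_Parameters.Set3DAlgo( HXT_3D_ALGO )
Gmsh_Parameters.SetMaxSize( 0.8 )
Gmsh_Parameters.SetMinSize( 0.1 )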
