The following Material Point Method (MPM) simulation of sloshing fluid goes “haywire” at the end, just when things are starting to settle down:
(if the animated gif isn’t visible, please wait for it to load)
Similar anomalies have been observed in penetration simulations, where a sudden pressure pulse forms in material that had once been severely deformed but is relatively quiescent at the time of the anomalous pulse. The fundamental reason for this anomaly (much less its resolution) remains an open research question, which is why it is being described here in this blog post.
The problem definition is provided in WaterColumnKinematicAnomalyProblemStatement.pdf. We challenge (or plead with) the MPM community to articulate proper algorithms that will give realistic and stable solutions to this problem for a range of realistic material property values.
2017-09-15:
update: investigation results from Craig Schroeder (craig — at — ucr –dot — edu)
SUMMARY:
* If I run the simulation with implicit integration, it never fails.
* If I run with explicit, two failure modes occur: (a) instability, (b) J becomes negative
* Running with a modified F update (more on this below) prevents (b).
* Explicit sims fail because I am taking time step sizes that are too large.
* The equations of state are very stiff; a reliable explicit scheme will need to choose dt very wisely (see my advice at the end if you are interested).
WHAT I RAN:
I ran all combinations of the following (2*4*2*3*5*2*2 = 960 simulations)
* Constitutive model: both the power law and the Tait equation.
* Stiffness: 15 kPa, 150 kPa, 1.5 MPa, 15 MPa
* Time integration scheme: symplectic Euler (explicit) and backward Euler (implicit).
* Transfers: FLIP, PIC, APIC
* Maximum time step size: 0.03, 0.01, 0.003, 0.001, 0.0003
* Interpolation kernel: quadratic and cubic splines
* Deformation gradient (F) update: regular, modified (more on this below)
DEVIATIONS FROM DOCUMENT:
As far as I am aware, I deviate from the document in the following ways:
* Boundary conditions. I did not do reflection boundary conditions for my walls because (a) it makes the velocity field non-smooth, which messes up the update of F and (b) I don’t know how to implement this boundary condition with implicit time integration. Instead, I use the boundary conditions in [1]. It is essentially a sort of separating condition.
* Particle seeding. 4 particles per cell, irregularly seeded (using Poisson sampling).
WHAT I HAD TO CHANGE:
* My nonlinear solver had a bug, which I found and fixed.
* At the end of each time step, I replace F with sqrt(J)*I. Since the deviatoric part of F does not contribute to the constitutive model, errors here can grow without bound. Note that if J<0 at the end of the time step, the simulation will end.
* In both constitutive models, I test for J<1e-10. If this is true, then I return a “huge” result (1e20) instead of trying to compute the actual constitutive model. I need to do this because the nonlinear solver does line searches, which can evaluate the constitutive model at configurations where J<0. Since my simulations were all run with trapping floating point errors, this would terminate the simulations. If I instead return a large energy value, the line search will realize that this is a very bad configuration and ignore it.
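A minimal sketch of these two safeguards, assuming a 2D simulation (names like `reset_F` and `safe_energy` are illustrative and not taken from the actual code):

```python
import numpy as np

# Sketch of the two safeguards described above, for a 2D simulation.
# All names here are illustrative; none come from the actual code.

J_FLOOR = 1e-10   # below this, skip the real constitutive model
HUGE = 1e20       # sentinel energy for "very bad" line-search configurations

def reset_F(F):
    """End-of-step reset: replace F with sqrt(J)*I (2D), preserving J.
    Only J enters the constitutive model, so deviatoric errors in F
    would otherwise grow without bound."""
    J = np.linalg.det(F)
    if J <= 0.0:
        raise RuntimeError("J <= 0: terminate the simulation")
    return np.sqrt(J) * np.eye(2)

def safe_energy(J, energy_fn):
    """Guard an equation of state against J near or below zero, so a
    line search probing an inverted configuration sees a huge penalty
    instead of raising a floating-point exception."""
    if J < J_FLOOR:
        return HUGE
    return energy_fn(J)
```

In 3D the same reset would use J^(1/3)*I instead of sqrt(J)*I so that the determinant is still preserved.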
EVALUATION CRITERIA:
A simulation is deemed a failure if any of these occur:
* The simulation terminates early. This can occur as a result of floating point exceptions.
* The velocity magnitude of any particle exceeds 10 in any time step.
* The time step falls below 1e-6 for any time step.
RESULTS:
* All implicit simulations succeed (480 sims).
* Explicit simulations go one of 4 ways
– success (81 sims)
– finish but big velocity (6 sims, max velocities: 10.7, 11.9, 12.2, 18.7, 34.7, 43.1)
– floating point error (183 sims); these are caused by J<0
– big velocity and/or tiny dt (210 sims)
* The modified F update prevents J<0, but it does not improve success much (for explicit: 43 with vs 38 without)
* The vast majority of failures occur immediately. The simulations are run to time 5s, but 345 of the simulations fail to reach time 0.05s. Of these, 169 die in the second time step. These sims are taking time step sizes that are unstable. My code calculates CFL based on the time it takes a particle to traverse one cell given the velocity at the beginning of the time step; the first time step has velocity zero, so it always takes a step at the maximum velocity allowed. No explicit sims finished at the highest stiffness, and only two succeeded at the second-highest stiffness (both using the smallest time step limit).
* The velocity range for successful sims is:
– explicit pic 1.6-2.7
– explicit apic 5.4-7.0 (first fail: 10.7)
– explicit flip 6.2-8.1 (first fail: 11.9)
– implicit pic 1.6-2.7
– implicit apic 5.3-6.1
– implicit flip 5.4-8.2
We see that there is a distinct gap between the usual range of velocities (which I am calling successful) and the cutoff of 10, which suggests that this is a good choice for the cutoff.
MODIFIED F UPDATE:
The usual update for F is: F^(n+1) = (I + dt gradV) F^n. The modified update is from [2]: F^(n+1) = (I + dt gradV + 1/2 dt^2 gradV^2) F^n. This update is a better approximation for the “ideal” update rule F^(n+1) = exp(dt gradV) F^n and has the advantage that det(I + A + 1/2 A^2) >= 0 for any matrix A. If you are updating J instead of F, you can apply the same trick.
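The two updates can be sketched as follows (a 2D illustration; function names are mine, not from any code). The key identity is I + A + A^2/2 = ((I + A)^2 + I)/2, whose determinant is never negative:

```python
import numpy as np

# Sketch of the regular vs. modified F updates (2D; names are illustrative).
def update_F_regular(F, grad_v, dt):
    # F^(n+1) = (I + dt gradV) F^n
    return (np.eye(2) + dt * grad_v) @ F

def update_F_modified(F, grad_v, dt):
    # F^(n+1) = (I + dt gradV + 1/2 dt^2 gradV^2) F^n
    # Since I + A + A^2/2 = ((I + A)^2 + I)/2, its determinant is
    # always nonnegative, so this update cannot produce J < 0.
    A = dt * grad_v
    return (np.eye(2) + A + 0.5 * (A @ A)) @ F
```

For example, with dt = 1 and grad_v = diag(-2, 1), the regular update gives det(I + A) = -2 (inversion), while the modified update gives det = 2.5.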
MY ADVICE:
If you are going to run explicit, make sure your CFL computation takes forces into account. (My code does not, because it is intended for implicit simulation, which does not require this.) Ideally, compute your forces before deciding on a time step size. Then, choose a time step size small enough that, at the end of the time step:
* J does not change by more than a specific fraction (e.g., by at most 20%)
* x does not move more than a specific fraction of a cell
* total energy does not increase (careful about boundary conditions!), or at least a quadratic approximation of it (using the Hessian of the potential energy function) does not, which should be good enough.
I suspect that if you do these things, the failures you see in this sim will go away.
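The first two criteria above might be sketched as follows (the function name and the specific fractions of 20% and half a cell are assumptions for this sketch, not a definitive recipe):

```python
import numpy as np

# Illustrative dt selection for the first two criteria above.
# Names and the fractions (20% J change, half a cell) are assumptions.
def choose_dt(dt_max, dx, velocities, velocity_gradients,
              max_J_change=0.2, max_cell_fraction=0.5):
    dt = dt_max
    for v, grad_v in zip(velocities, velocity_gradients):
        # J evolves as dJ/dt = J tr(grad_v), so bound |dt tr(grad_v)|
        rate = abs(np.trace(grad_v))
        if rate > 0.0:
            dt = min(dt, max_J_change / rate)
        # bound particle motion to a fraction of a cell
        speed = np.linalg.norm(v)
        if speed > 0.0:
            dt = min(dt, max_cell_fraction * dx / speed)
    return dt
```

The point of the J-based limit is that it remains active even when velocities are nearly zero, which is exactly the calm-fluid situation where a velocity-only CFL check would permit a large dt.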
REFERENCES:
[1] Gast, T., Schroeder, C., Stomakhin, A., Jiang, C., Teran, J., “Optimization Integrator for Large Time Steps,” IEEE Transactions on Visualization and Computer Graphics, 21(10), pp. 1103-1115 (2015).
[2] Jiang, C., Schroeder, C., Teran, J., “An Angular Momentum Conserving Affine-Particle-in-Cell Method,” Journal of Computational Physics, 338, pp. 137-164 (2017).
P.S. sent in separate email from Craig:
I have an explanation (without proof, of course) for why you see the explosions after things cool down. When the sim is calm, your velocities are small. This might be telling your code that it is safe to take a big time step. That gives you the opportunity to make a big mistake (e.g., bring J close to 0). In particular, small v does not necessarily imply small grad v, so you can mess up J without doing anything bad to x or v. Then, in the next time step, you get a big force, and your particles are off to the racetrack. If that is the case, then including your F or J update in your time step calculation should make the problem go away.
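A numerical illustration of this point (all values below are made up for the sketch):

```python
import numpy as np

# Small |v| does not imply small grad(v): a |v|-only CFL check passes,
# yet one regular F-update step crushes J. (All values are made up.)
dx = 0.01
v = np.array([1e-3, -1e-3])          # calm: particles barely moving
grad_v = np.array([[-20.0, 0.0],     # but strongly compressive locally
                   [0.0, -20.0]])

dt = 0.03                            # "safe" by a |v|-based CFL
assert dt * np.linalg.norm(v) < 0.01 * dx   # motion is a tiny fraction of a cell

J_new = np.linalg.det(np.eye(2) + dt * grad_v)   # starting from F = I
# J drops from 1.0 to (1 - 0.03*20)^2 = 0.16 in one step; a stiff
# equation of state then responds with a huge pressure next step.
```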
I can’t wait to find out what’s causing this.
Hi Rebecca,
I’m not sure if it is related, but a graduate student and I have been seeing the same sort of thing with MPM: not the same type of problem, but one in which very large deformations have occurred for quite a long time during the simulation. Then, all of a sudden, it explodes! Did you ever figure it out? Anyway, thanks for posting this. Hope you are well!
Kind Regards,
Reuben
No resolution or rigorous explanation has been brought to my attention (it might be the ringing instability discussed in an obscure article by Brackbill; a publication about that is supposed to be submitted soon by Martin Berzins, and I urged him to include this problem in his test suite). Several people have contacted me with their own observations of this anomaly, so it would be a great topic for a publication. Hope you or someone fixes it!
I’m not an MPM practitioner, but several years ago I worked a little in the field of turbulence. I feel strongly that your problem could be related to the blow-up-time problem of the Navier–Stokes equations. I would check the distribution of enstrophy in the field just before the explosion. I mean, maybe some terms in your method of extrapolating to the grid cause enstrophy to diverge. It’s just a possibility. Congratulations on the great work!
Sorry that this comment wasn’t seen in a timely manner. Please provide a citation to a journal article that discusses the phenomenon you are describing, especially one that details the role of enstrophy.