This post has the following aims:
- Provide documentation and source code for simulating spherically symmetric wave propagation in a linear-elastic medium.
- Tell a story illustrating how this simple verification problem helped to validate a complicated rate-dependent and history-dependent geomechanics model.
- Warn against believing previously reported material parameters, since they might have been the result of constitutive parameter tweaking to compensate for unrelated errors in the host code.
This 2007 Book Chapter on the basics of plasticity theory reviews the terminology and governing equations of plasticity, with emphasis on amending misconceptions, providing physical insights, and outlining computational algorithms. Plasticity theory is part of a larger class of material models in which a pronounced change in material response occurs when the stress (or strain) reaches a critical threshold level. If the stress state is subcritical, then the material is modeled by classical elasticity. The boundary of the subcritical (elastic) stress states is called the yield surface. Plasticity equations apply if continuing to apply elasticity theory would predict stress states beyond the yield surface. The onset of plasticity is typically characterized by a pronounced slope change in a stress–strain diagram, but load reversals in experiments are necessary to verify that the slope change is not merely nonlinear elasticity or a reversible phase transformation.
The threshold yield surface can appear to be significantly affected by the loading rate, which has a dominant effect in shock physics applications.
In addition to providing a much-needed tutorial survey of the governing equations and their solution (defining the Lode angle and other Lode invariants, and addressing the surprisingly persistent myth that closest-point return satisfies the governing equations), this book chapter includes some distinctive contributions: a simple 2D analog of plasticity that exhibits the same basic features of plasticity (such as the existence of a “yield” surface with associative flow and vertex theory), an extended discussion of apparent nonassociativity, stability and uniqueness concerns raised by nonassociativity, and a summary of apparent plastic wave speeds in relation to elastic wave speeds (especially noting that nonassociativity admits plastic waves that travel faster than elastic waves).
For the full manuscript with errata, see the 2007 Book Chapter on the basics of plasticity theory.
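The "yield check followed by a return to the yield surface" structure described above can be illustrated with a toy example. The sketch below is a standard elastic-predictor/return-map update for 1D rate-independent plasticity with linear isotropic hardening; it is a generic illustration of the idea, not the chapter's own algorithms, and all parameter values are arbitrary.

```python
def radial_return_1d(strain_path, E=100.0, H=10.0, sigma_y=1.0):
    """Elastic-predictor / return-map update for 1D plasticity with
    linear isotropic hardening.

    strain_path : sequence of total-strain values (monotone or not).
    E, H, sigma_y : elastic modulus, hardening modulus, initial yield
        stress (illustrative values, not tied to any real material).
    Returns the stress at each step of the path."""
    sigma = 0.0        # current stress
    accum = 0.0        # accumulated plastic strain
    prev_eps = 0.0
    history = []
    for eps in strain_path:
        d_eps = eps - prev_eps
        prev_eps = eps
        trial = sigma + E * d_eps                 # elastic predictor
        f = abs(trial) - (sigma_y + H * accum)    # yield function
        if f <= 0.0:
            sigma = trial                         # subcritical: stay elastic
        else:
            d_gamma = f / (E + H)                 # consistency condition
            sigma = trial - E * d_gamma * (1.0 if trial > 0.0 else -1.0)
            accum += d_gamma
        history.append(sigma)
    return history
```

For monotonic loading with linear hardening this update is exact: ramping the strain to 0.1 gives a final stress of sigma_y + H*(E*0.1 - sigma_y)/(E + H), well below the purely elastic prediction E*0.1, which is the "slope change" signature of yielding noted above.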
Constitutive modeling refers to the development of equations describing the way that materials respond to various stimuli. In classical deformable body mechanics, a simple constitutive model might predict the stress required to induce a given strain; the canonical example is Hooke’s law of isotropic linear elasticity. More broadly, a constitutive model predicts increments in some macroscale state variables of interest (such as stress, entropy, polarization, etc.) that arise from changes in other macroscale state variables (strain, temperature, electric field, etc.).
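The canonical Hooke's-law example can be written in a few lines. The sketch below is a generic illustration (not code from any particular course or host code), using the standard isotropic form sigma = lambda*tr(eps)*I + 2*mu*eps with illustrative default moduli:

```python
import numpy as np

def hooke_stress(strain, E=200e9, nu=0.3):
    """Isotropic linear elasticity: sigma = lam*tr(eps)*I + 2*mu*eps.

    strain : 3x3 small-strain tensor.
    E, nu  : Young's modulus (Pa) and Poisson's ratio; the defaults are
             illustrative values roughly appropriate for steel."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # Lame's first parameter
    mu = E / (2.0 * (1.0 + nu))                     # shear modulus
    return lam * np.trace(strain) * np.eye(3) + 2.0 * mu * strain

# Uniaxial strain of 0.1% along x (lateral strains held at zero):
eps = np.diag([1e-3, 0.0, 0.0])
sigma = hooke_stress(eps)
```

Driving a small function like this with a prescribed strain path and comparing against a host code's single-element output is one simple way to check that a code applies a constitutive model as advertised.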
Constitutive equations are ultimately implemented into a finite element code to close the set of equations required to solve problems of practical interest. This course describes a few common constitutive equations, explaining what features you would see in experimental data or structural behavior that would prompt you to select one constitutive model over another, how to use them in a code, how to test your understanding of the model, how to check if the code is applying the model as advertised in its user’s manual, and how to quantitatively assess the mathematical and physical believability of the solution.
Sanders, A., I. Tibbitts, D. Kakarla, S. Siskey, J. Ochoa, K. Ong, and R. Brannon. (2011). “Contact mechanics of impacting slender rods: measurement and analysis.” Society for Experimental Mechanics Annual Meeting. Uncasville, CT, June 13-16.
To validate models of contact mechanics in low speed structural impact, slender rods with curved tips were impacted in a drop tower, and measurements of the contact and vibration were compared to analytical and finite element (FE) models. The contact area was recorded using a thin-film transfer technique, and the contact duration was measured using electrical continuity. Strain gages recorded the vibratory strain in one rod, and a laser Doppler vibrometer measured velocity. The experiment was modeled analytically using a quasi-static Hertzian contact law and a system of delay differential equations. The FE model used axisymmetric elements, a penalty contact algorithm, and explicit time integration. A small submodel taken from the initial global model economically refined the analysis in the small contact region. Measured contact areas were within 6% of both models’ predictions, peak speeds within 2%, cyclic strains within 12 microstrain (RMS value), and contact durations within 2 µs. The accuracy of the predictions for this simple test, as well as the versatility of the diagnostic tools, validates the theoretical and computational models, corroborates instrument calibration, and establishes confidence that the same methods may be used in an experimental and computational study of the impact mechanics of artificial hip joints.
H.W. Meyer Jr. and R.M. Brannon
[This post refers to the original on-line version of the publication. The final (paper) version with page numbers and volume is found at http://dx.doi.org/10.1016/j.ijimpeng.2010.09.007. Some further details and clarifications are in the 2012 posting about this article]
Continuum mechanics codes modeling failure of materials historically have considered those materials to be homogeneous, with all elements of a material in the computation having the same failure properties. This is, of course, unrealistic but expedient. But as computer hardware and software have evolved, the time has come to investigate a higher level of complexity in the modeling of failure. The Johnson-Cook fracture model is widely used in such codes, so it was chosen as the basis for the current work. The CTH finite difference code is widely used to model ballistic impact and penetration, so it also was chosen for the current work. The model proposed here does not consider individual flaws in a material, but rather varies a material’s Johnson-Cook parameters from element to element to achieve inhomogeneity. A Weibull distribution of these parameters is imposed, in such a way as to include a size effect factor in the distribution function. The well-known size effect on the failure of materials must be physically represented in any statistical failure model not only for the representations of bodies in the simulation (e.g., an armor plate), but also for the computational elements, to mitigate element resolution sensitivity of the computations. The statistical failure model was tested in simulations of a Behind Armor Debris (BAD) experiment, and found to do a much better job at predicting the size distribution of fragments than the conventional (homogeneous) failure model. The approach used here to include a size effect in the model proved to be insufficient, and including correlated statistics and/or flaw interactions may improve the model.
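The core idea of a Weibull strength distribution with a built-in size effect can be sketched as follows. This is a minimal generic illustration, not the CTH/Johnson-Cook implementation described in the paper; all function and parameter names are invented for the example.

```python
import numpy as np

def sample_strengths(n_elements, element_volume, median_strength,
                     weibull_modulus, reference_volume, rng=None):
    """Draw per-element strengths from a Weibull distribution whose scale
    is adjusted for element size: larger elements sample more flaws and
    are therefore weaker on average (the classical Weibull size effect).

    The size effect enters through the factor (V_ref / V)**(1/m); the
    median strength at the reference volume is preserved because the
    median of a unit Weibull variate with modulus m is (ln 2)**(1/m)."""
    if rng is None:
        rng = np.random.default_rng()
    m = weibull_modulus
    size_factor = (reference_volume / element_volume) ** (1.0 / m)
    scale = median_strength * size_factor / np.log(2.0) ** (1.0 / m)
    return scale * rng.weibull(m, size=n_elements)
```

In a mesh-resolution study, assigning each element a strength drawn this way (with V taken as the element volume) counteracts the tendency of finer meshes to fail at systematically different loads, which is the resolution-sensitivity concern raised in the abstract.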
R.M. Brannon, J.M. Wells, and O.E. Strack
Validating simulated predictions of internal damage within armor ceramics is preferable to simply assessing a model’s ability to predict penetration depth, especially if one hopes to perform subsequent ‘‘second strike’’ analyses. We present the results of a study in which crack networks are seeded by using a statistically perturbed strength, the median of which is inherited from a deterministic ‘‘smeared damage’’ model, with adjustments to reflect experimentally established size effects. This minor alteration of an otherwise conventional damage model noticeably mitigates mesh dependencies and, at virtually no computational cost, produces far more realistic cracking patterns that are well suited for validation against X-ray computed tomography (XCT) images of internal damage patterns. For Brazilian, spall, and indentation tests, simulations share qualitative features with externally visible damage. However, the need for more stringent quantitative validation, software quality testing, and subsurface XCT validation is emphasized.
Swan, S. and R. Brannon (2009)
Current simulations of material deformation are a balance between computational effort and accuracy of the simulation. To increase the accuracy of the simulated material response, the simulation becomes more computationally intensive with finer meshes and shorter timesteps, increasing the time and resource requirements needed to perform the simulation. One method for improving predictions of brittle failure while minimizing computational overhead is to implement statistical variability for the material properties being simulated. This method has low computational overhead and requires a relatively small increase in resource requirements while significantly increasing the precision of simulation results. Currently, most simulation frameworks inaccurately describe brittle and heterogeneous materials as uniform bodies of equal strength and consistency. This over-simplification underscores the need to implement statistical variability to help better predict material response and failure modes for materials that contain intermittent abnormalities such as changes in hardness, strength, and grain size throughout the specimen. Uintah, the computational framework developed by the University of Utah’s C-SAFE program, has a simplistic native Gaussian distribution function that was hard-coded into select material models. The goal of this research is to create an easily duplicable method for enabling dynamic global variability according to a Weibull distribution in constitutive models in Uintah and to implement said ability into the constitutive model Kayenta. The main application of Kayenta is to simulate geological response to penetration and perforation. For the purpose of simulating failure in brittle geological samples, the Weibull distribution produces realistic statistical scatter in constituent properties that correlates well to flaws and irregularities observed in laboratory tests.
The capability of the generalized interpolation material point (GIMP) method in simulation of penetration events is investigated. A series of experiments was performed wherein a shaped charge jet penetrates into a stack of aluminum plates. Electronic switches were used to measure the penetration time history. Flash x-ray techniques were used to measure the density, length, radius, and velocity of the shaped charge jet. Simulations of the penetration event were performed using the Uintah MPM/GIMP code with several different models of the shaped charge jet. The predicted penetration time history for each jet model is compared with the experimentally observed penetration history. It was found that the characteristics of the predicted penetration were dependent on the way that the jet data are translated to a discrete description. The discrete jet descriptions were modified such that the predicted penetration histories fell very close to the range of the experimental data. In comparing the various discrete jet descriptions it was found that the cumulative kinetic energy flux curve represents an important way of characterizing the penetration characteristics of the jet. The GIMP method was found to be well suited for simulation of high rate penetration events.