M.Y. Lee, R.M. Brannon and D.R. Bronowski
Explosive failure of the SICN-UC02 specimen (12.7 mm in diameter and 25.4 mm in length) subjected to the unconfined uniaxial compressive stress condition
To establish mechanical properties and failure criteria of silicon carbide (SiC-N) ceramics, a series of quasi-static compression tests has been completed using a high-pressure vessel and a unique sample alignment jig. This report summarizes the test methods, set-up, relevant observations, and results from the constitutive experimental efforts. Combining these quasi-static triaxial compression strength measurements with existing data at higher pressures naturally results in different values for the least-squares fit of the pressure-dependent strength function, making the fit appropriate over a broader pressure range. These triaxial compression tests are significant because they constitute the first successful measurements of SiC-N compressive strength under quasi-static conditions. Because SiC-N has an unconfined compressive strength of ~3800 MPa, it had heretofore been tested only under dynamic conditions, which can achieve a load large enough to induce failure. Obtaining reliable quasi-static strength measurements required the design of a special alignment jig and load-spreader assembly, as well as redundant gages to ensure alignment. When combined with existing dynamic strength measurements, these data significantly advance the characterization of the pressure dependence of strength, which is important for penetration simulations, where failed regions are often at lower pressures than intact regions.
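To illustrate why adding higher-pressure data changes a least-squares strength fit, here is a minimal sketch using hypothetical numbers and a simple linear strength-versus-pressure form; neither the data points nor the fit function are those of the actual SiC-N study:

```python
# Hypothetical strength-vs-pressure data in MPa; NOT the paper's SiC-N measurements.
low_pressure = [(0.0, 3800.0), (50.0, 4100.0), (100.0, 4350.0)]   # quasi-static points
high_pressure = [(500.0, 5200.0), (1000.0, 5800.0)]               # higher-pressure points

def linear_lstsq(points):
    """Closed-form least squares for strength = a + b * pressure."""
    n = len(points)
    mx = sum(p for p, _ in points) / n
    my = sum(s for _, s in points) / n
    cov = sum((p - mx) * (s - my) for p, s in points)
    var = sum((p - mx) ** 2 for p, _ in points)
    b = cov / var
    return my - b * mx, b  # (intercept a, slope b)

a_low, b_low = linear_lstsq(low_pressure)
a_all, b_all = linear_lstsq(low_pressure + high_pressure)
# The broader-range fit has a shallower slope, reflecting how strength
# gains with pressure taper off at high pressure.
print(b_low, b_all)
```

The point is only that the fitted coefficients shift once the pressure range broadens; a realistic fit for ceramics would use a nonlinear saturating form rather than a straight line.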
H.W. Meyer Jr. and R.M. Brannon
[This post refers to the original on-line version of the publication. The final (paper) version with page numbers and volume is found at http://dx.doi.org/10.1016/j.ijimpeng.2010.09.007. Some further details and clarifications are in the 2012 posting about this article]
Simulation results for a reference volume of 0.000512 cm^3; sf is the size effect factor
Continuum mechanics codes modeling failure of materials have historically considered those materials to be homogeneous, with all elements of a material in the computation having the same failure properties. This is, of course, unrealistic but expedient. But as computer hardware and software have evolved, the time has come to investigate a higher level of complexity in the modeling of failure. The Johnson-Cook fracture model is widely used in such codes, so it was chosen as the basis for the current work. The CTH finite difference code is widely used to model ballistic impact and penetration, so it also was chosen for the current work. The model proposed here does not consider individual flaws in a material, but rather varies a material's Johnson-Cook parameters from element to element to achieve inhomogeneity. A Weibull distribution of these parameters is imposed, in such a way as to include a size effect factor in the distribution function. The well-known size effect on the failure of materials must be physically represented in any statistical failure model, not only for the representations of bodies in the simulation (e.g., an armor plate) but also for the computational elements, to mitigate element-resolution sensitivity of the computations. The statistical failure model was tested in simulations of a Behind Armor Debris (BAD) experiment and found to do a much better job of predicting the size distribution of fragments than the conventional (homogeneous) failure model. The approach used here to include a size effect in the model proved to be insufficient, and including correlated statistics and/or flaw interactions may improve the model.
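The element-to-element Weibull assignment with a size effect can be sketched by inverse-CDF sampling from a volume-scaled Weibull distribution. This is a schematic illustration, not the paper's CTH implementation; the modulus m, reference strain eps0, and reference volume vref are assumed values:

```python
import math
import random

def sample_failure_strain(rng, vol, m=10.0, eps0=1.0, vref=1.0):
    """Draw one element's failure-strain scale factor from a volume-scaled
    Weibull distribution via inverse-CDF sampling:
        P(x) = 1 - exp(-(vol / vref) * (x / eps0)**m)
    Larger elements draw from a weaker distribution: the size effect."""
    u = rng.random()
    return eps0 * (-math.log(1.0 - u) * vref / vol) ** (1.0 / m)

rng = random.Random(0)
small = sorted(sample_failure_strain(rng, vol=1.0) for _ in range(5001))
large = sorted(sample_failure_strain(rng, vol=8.0) for _ in range(5001))
# Median scales as (vref/vol)**(1/m): an 8x larger element is
# roughly 8**0.1 ~ 1.23x weaker for m = 10.
print(small[2500], large[2500])
```

Tying the distribution to element volume in this way is what mitigates mesh-resolution sensitivity: refining the mesh shrinks each element's volume and correspondingly raises its sampled strength.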
R.M. Brannon, J.M. Wells, and O.E. Strack
Realistic-looking, uneven damage zones in Brazilian simulations compare favorably with laboratory data for observable damage
Validating simulated predictions of internal damage within armor ceramics is preferable to simply assessing a model's ability to predict penetration depth, especially if one hopes to perform subsequent "second strike" analyses. We present the results of a study in which crack networks are seeded by using a statistically perturbed strength, the median of which is inherited from a deterministic "smeared damage" model, with adjustments to reflect experimentally established size effects. This minor alteration of an otherwise conventional damage model noticeably mitigates mesh dependencies and, at virtually no computational cost, produces far more realistic cracking patterns that are well suited for validation against X-ray computed tomography (XCT) images of internal damage patterns. For Brazilian, spall, and indentation tests, simulations share qualitative features with externally visible damage. However, the need for more stringent quantitative validation, software quality testing, and subsurface XCT validation is emphasized.
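A "statistically perturbed strength whose median is inherited from the deterministic model" can be sketched as multiplying the deterministic strength by a Weibull-distributed factor whose median is exactly one. This is a hypothetical illustration, not the production model's implementation; the Weibull modulus m is an assumed value:

```python
import math
import random

def perturbed_strength(rng, median_strength, m=12.0):
    """Multiply the deterministic (median) strength by a Weibull-distributed
    factor whose median is exactly 1, so half the elements come out weaker
    and half stronger than the smeared-damage baseline."""
    u = rng.random()
    # Inverse CDF of a Weibull, scaled so that factor = 1 when u = 0.5.
    factor = (math.log(1.0 - u) / math.log(0.5)) ** (1.0 / m)
    return median_strength * factor

rng = random.Random(42)
strengths = sorted(perturbed_strength(rng, 300.0) for _ in range(4001))
print(strengths[2000])  # sample median stays near the deterministic 300.0
```

Anchoring the median (rather than the mean) to the deterministic value keeps the perturbation a strict generalization: turning the scatter off recovers the original smeared-damage model exactly.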
A. F. Fossum and R. M. Brannon
This paper summarizes the results of a theoretical and experimental program at Sandia National Laboratories aimed at identifying and modeling key physical features of rocks and rock-like materials at the laboratory scale over a broad range of strain rates. The mathematical development of a constitutive model is discussed and model predictions versus experimental data are given for a suite of laboratory tests. Concurrent pore collapse and cracking at the microscale are seen as competitive micromechanisms that give rise to the well-known macroscale phenomenon of a transition from volumetric compaction to dilatation under quasistatic triaxial compression. For high-rate loading, this competition between pore collapse and microcracking also seems to account for recently identified differences in strain-rate sensitivity between uniaxial-strain "plate slap" data compared to uniaxial-stress Kolsky bar data. A description is given of how this work supports ongoing efforts to develop a predictive capability in simulating deformation and failure of natural geological materials, including those that contain structural features such as joints and other spatial heterogeneities.
Swan, S. and R. Brannon (2009)
Illustration of stair-stepping typical of finite sampling from a Weibull distribution
Current simulations of material deformation are a balance between computational effort and accuracy of the simulation. To increase the accuracy of the simulated material response, the simulation becomes more computationally intensive, with finer meshes and shorter timesteps increasing the time and resource requirements needed to perform the simulation. One method for improving predictions of brittle failure while minimizing computational overhead is to implement statistical variability for the material properties being simulated. This method requires only a relatively small increase in resource requirements while significantly increasing the fidelity of simulation results. Currently, most simulation frameworks inaccurately describe brittle and heterogeneous materials as uniform bodies of equal strength and consistency. This over-simplification underscores the need to implement statistical variability to help better predict material response and failure modes for materials that contain intermittent abnormalities such as changes in hardness, strength, and grain size throughout the specimen. Uintah, the computational framework developed by the University of Utah's C-SAFE program, has a simplistic native Gaussian distribution function that was hard-coded into select material models. The goal of this research is to create an easily duplicable method for enabling dynamic global variability according to a Weibull distribution in constitutive models in Uintah and to implement said ability into the constitutive model Kayenta. The main application of Kayenta is to simulate geological response to penetration and perforation. For the purpose of simulating failure in brittle geological samples, the Weibull distribution produces realistic statistical scatter in constituent properties that correlates well with flaws and irregularities observed in laboratory tests.
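The "stair-stepping typical of finite sampling" shown in the figure above is simply the empirical CDF of a finite Weibull sample, which rises in discrete jumps of height 1/n. A small sketch with assumed parameters (not Kayenta's actual sampling code):

```python
import math
import random

def weibull_sample(rng, n, m=5.0, scale=1.0):
    """Inverse-CDF draws from a Weibull distribution, returned sorted."""
    return sorted(scale * (-math.log(1.0 - rng.random())) ** (1.0 / m)
                  for _ in range(n))

def empirical_cdf(sorted_vals, x):
    """Fraction of samples <= x: a staircase with steps of height 1/n."""
    return sum(v <= x for v in sorted_vals) / len(sorted_vals)

rng = random.Random(1)
vals = weibull_sample(rng, 20)
# With only 20 samples the CDF climbs in jumps of 0.05 -- the
# stair-stepping of finite sampling; it smooths out as n grows.
steps = [empirical_cdf(vals, v) for v in vals]
print(steps[:5])
```

In a simulation context, each "step" corresponds to one element or particle receiving its own sampled property value, so coarse meshes exhibit coarser stair-stepping in the realized strength distribution.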
R.M. Brannon and S. Leelavanichkul
RHT Model: Contour plots of damage: side, front, and back view of the target (top to bottom).
Four conventional damage plasticity models for concrete, the Karagozian and Case model (K&C), the Riedel-Hiermaier-Thoma model (RHT), the Brannon-Fossum model (BF1), and the Continuous Surface Cap Model (CSCM), are compared. The K&C and RHT models have been used in commercial finite element programs for many years, whereas the BF1 and CSCM models are relatively new. All four models are essentially isotropic plasticity models for which plasticity is regarded as any form of inelasticity. All of the models support nonlinear elasticity, but with different formulations. All four models employ three shear strength surfaces. The yield surface bounds an evolving set of elastically obtainable stress states. The limit surface bounds stress states that can be reached by any means (elastic or plastic). To model softening, it is recognized that some stress states might be reached once, but, because of irreversible damage, might not be achievable again. In other words, softening is the process of collapse of the limit surface, ultimately down to a final residual surface for fully failed material. The four models being compared differ in their softening evolution equations, as well as in their equations used to degrade the elastic stiffness. For all four models, the strength surfaces are cast in stress space. For all four models, it is recognized that scale effects are important for softening, but the models differ significantly in their approaches. The K&C documentation, for example, mentions that a particular material parameter affecting the damage evolution rate must be set by the user according to the mesh size to preserve energy to failure. Similarly, the BF1 model presumes that all material parameters are set to values appropriate to the scale of the element, and automated assignment of scale-appropriate values is available only through an enhanced implementation of BF1 (called BFS) that regards scale effects to be coupled to statistical variability of material properties.
The RHT model appears to similarly support optional uncertainty and automated settings for scale-dependent material parameters. The K&C, RHT, and CSCM models support rate dependence by allowing the strength to be a function of strain rate, whereas the BF1 model uses Duvaut-Lions viscoplasticity theory to give a smoother prediction of transient effects. During softening, all four models require a certain amount of strain to develop before allowing significant damage accumulation. For the K&C, RHT, and CSCM models, the strain-to-failure is tied to fracture energy release, whereas a similar effect is achieved indirectly in the BF1 model by a time-based criterion that is tied to crack propagation speed.
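The softening idea shared by all four models, collapse of the limit surface toward a residual surface as damage accumulates, can be sketched as a damage-weighted interpolation between two strength surfaces. This is a schematic, not any one model's actual evolution equations; the linear (Drucker-Prager-like) surface forms and the numbers are assumed:

```python
def shear_limit(pressure, damage,
                intact=(50.0, 1.2),    # hypothetical intact (cohesion, slope)
                residual=(0.0, 0.6)):  # hypothetical fully-failed surface
    """Damage-weighted interpolation between an intact linear limit surface
    and a residual surface in (pressure, shear strength) space.
    damage = 0 -> intact strength; damage = 1 -> residual strength."""
    c_i, s_i = intact
    c_r, s_r = residual
    cohesion = (1.0 - damage) * c_i + damage * c_r
    slope = (1.0 - damage) * s_i + damage * s_r
    return cohesion + slope * pressure

# As damage grows, the obtainable shear strength at fixed pressure shrinks
# monotonically from the intact limit surface down to the residual surface.
print([round(shear_limit(100.0, d), 1) for d in (0.0, 0.5, 1.0)])
```

Where the four models differ is in how the damage variable itself evolves (strain-based, fracture-energy-based, or time-based criteria) and in whether the elastic stiffness degrades along with the strength.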