Continuum homogenization and RVEs

For now (to help with a conversation that I’m having with a few collaborators) this post provides only the following “infographic” to illustrate the concept of approximating a periodic discrete system with an effective continuum over a sufficiently large scale. (More information will be added about this topic as needed and/or as requested).

Below is shown a five-link chain (in red-blue-green-orange-black). Immediately below this colorful chain is a dark-gray shaded plot of the exact (mesoscale) lineal density, which is defined at a location “x” to be the mass within an infinitesimal segment dx at that location divided by the segment’s length dx. This local density appears in the upper-left corner of the infographic, and it is the slope of the black line in the graph in the lower-left corner.

The continuum concept. Homogenization and identification of an RVE are based on the value of x for which perturbations fall below numerical round-off error for a given application.

The exact homogenized (macroscale) lineal density at a location “x” is defined as the exact total mass falling inside the span from zero to x, divided by the chain’s length (x itself). While the mesoscale density is the local slope at location “x” of the black line in the graph, the macroscale density is the secant slope at location “x” of the same black line. The continuum (red-dashed) approximation of the local mass distribution ignores local fluctuations from the fact that the chain is actually heterogeneous. For short chain lengths, the exact macroscale density is significantly different from the continuum density, but this discrepancy asymptotes toward zero as the chain length is increased.
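The distinction between the fluctuating mesoscale density and the converging macroscale (secant) density can be sketched numerically. The following Python snippet, with made-up link masses chosen purely for illustration, tiles a five-link unit into a long periodic chain and shows that the secant slope (cumulative mass divided by x) approaches the continuum value even though the local density never stops fluctuating:

```python
import numpy as np

# Hypothetical link masses for one five-link unit (illustrative values only).
link_masses = np.array([1.0, 3.0, 2.0, 1.5, 2.5])   # sums to 10
link_lengths = np.ones(5)                           # each link has unit length

# Tile the unit to mimic a long periodic chain.
n_units = 200
masses = np.tile(link_masses, n_units)
lengths = np.tile(link_lengths, n_units)

x = np.cumsum(lengths)               # right edge of each link
cumulative_mass = np.cumsum(masses)  # the "black line": mass from 0 to x

mesoscale_density = masses / lengths        # local slope (fluctuates link to link)
macroscale_density = cumulative_mass / x    # secant slope from the origin

# Homogenized continuum value: total unit mass over total unit length.
continuum_density = link_masses.sum() / link_lengths.sum()

# Near the start of the chain the secant density is far from the continuum
# value; far down the chain the discrepancy has nearly vanished.
early_error = abs(macroscale_density[0] - continuum_density)
late_error = abs(macroscale_density[-5] - continuum_density)
```

Here `early_error` is large (the first link alone is not representative) while `late_error` is tiny, which is exactly the asymptotic behavior described above.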


The theoretical representative volume element (RVE) size corresponds to the size for which the discrepancy (like the plot in the lower-right corner of the infographic) falls below some tolerable threshold, which is determined by considering the tolerable error in an engineering simulation.

These concepts apply to other properties besides density. For example, the macroscale elastic stiffness would be defined as the force applied to the chain divided by the corresponding induced displacement. Like density, this macroscale property varies with the number of links in the chain, but it asymptotes to a steady value as the chain length increases.

Density has a nice asymptotic continuum limit that isn’t sensitive to dilutely distributed statistical perturbations in the local (microscopic) density. If, for example, 1 in 10000 links is made of light aluminum while the others are made of heavy steel, then the continuum density will nevertheless be close to that of a chain made entirely of steel links. The continuum elastic stiffness is likewise not highly sensitive to slight variations in local constituent (link) stiffness. A chain’s failure strength, on the other hand, is profoundly affected by the existence of even a minuscule fraction of weaker links. A mostly steel chain that contains relatively few aluminum links would have a continuum strength equal to the strength of the weaker (aluminum) link. That’s because (in the limit) an infinitely long chain would contain at least one aluminum link. For short chains made of, say, 10 links (each of which has a 1 in 10000 chance of being made of aluminum), the macroscale strength would be higher on average than that of longer chains. The strength data for short chains would also be more variable.
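A quick Monte Carlo sketch makes the weakest-link argument concrete. The strength values below are made-up normalized numbers (steel = 1.0, aluminum = 0.4); only the 1-in-10000 aluminum probability comes from the example above:

```python
import numpy as np

rng = np.random.default_rng(42)

STEEL_STRENGTH = 1.0        # normalized steel link strength (assumed)
AL_STRENGTH = 0.4           # weaker aluminum link strength (assumed)
P_ALUMINUM = 1.0 / 10000    # chance that any given link is aluminum

def chain_strength(n_links, n_trials):
    """Strength of sampled chains: a chain is as strong as its weakest link."""
    n_al = rng.binomial(n_links, P_ALUMINUM, size=n_trials)  # aluminum links per chain
    return np.where(n_al > 0, AL_STRENGTH, STEEL_STRENGTH)

short = chain_strength(10, 200_000)       # 10-link chains
long_ = chain_strength(100_000, 2_000)    # very long chains

# Short chains rarely contain an aluminum link, so they are stronger on
# average but show more trial-to-trial scatter; long chains almost surely
# contain one, so their strength collapses to the aluminum value.
mean_short, mean_long = short.mean(), long_.mean()
```

For the long chains the probability of containing no aluminum link is about exp(-10), so the mean strength is essentially the aluminum strength, while the short-chain mean stays near the steel strength with larger scatter.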

These observations give insight into what a modeler must pay attention to when using continuum macroscale properties in simulations of engineering structures.  To design for the structure’s daily (i.e., normal and therefore usually elastic) usage conditions, homogenized continuum properties would be fine. However, continuum strength properties would need to be appropriately perturbed based on the size of the finite elements. This explicit incorporation of statistical variability in continuum properties is required when those perturbations strongly influence the engineering objective of the analysis (such as computing failure risk). In fact, it can be argued that such revisions are crucial to predict fracture and fragmentation whenever the finite-element size is smaller than a few kilometers. For more details on scale-dependent and statistically variable macroscale properties, see Publication: Aleatory quantile surfaces in damage mechanics and the more recent 2015 IJNME article, “Aleatory uncertainty and scale effects in computational damage models for failure and fragmentation” by Strack, Leavy and Brannon.

Undergraduate researcher applies binning to study aleatory uncertainty in nonlinear buckling foundation models


Sophomore undergraduate, Katharin Jensen, has developed an easily understood illustration of the effect of aleatory uncertainty, which means natural point-to-point variability in systems. She has put statistical variability on the lengths of buckling elements in the following system:


Continue reading

Publication: A model for statistical variation of fracture properties in a continuum mechanics code

NEWS FLASH: The print version of the Meyer-Brannon paper on statistical variation of fracture patterns in a continuum code (CTH) is now available at

Perforation with Aleatory Uncertainty

Perforation with Aleatory Uncertainty of high-pressure strength in an Eulerian Simulation.

Continue reading

Publication: A model for statistical variation of fracture properties in a continuum mechanics code

H.W. Meyer Jr. and R.M. Brannon

[This post refers to the original on-line version of the publication. The final (paper) version with page numbers and volume is found at the link below. Some further details and clarifications are in the 2012 posting about this article.]

Simulation results for a reference volume of 0.000512 cm³; sf is the size effect factor

Continuum mechanics codes modeling failure of materials historically have considered those materials to be homogeneous, with all elements of a material in the computation having the same failure properties. This is, of course, unrealistic but expedient. As computer hardware and software have evolved, the time has come to investigate a higher level of complexity in the modeling of failure. The Johnson-Cook fracture model is widely used in such codes, so it was chosen as the basis for the current work. The CTH finite difference code is widely used to model ballistic impact and penetration, so it also was chosen for the current work. The model proposed here does not consider individual flaws in a material, but rather varies a material’s Johnson-Cook parameters from element to element to achieve inhomogeneity. A Weibull distribution of these parameters is imposed, in such a way as to include a size effect factor in the distribution function. The well-known size effect on the failure of materials must be physically represented in any statistical failure model not only for the representations of bodies in the simulation (e.g., an armor plate), but also for the computational elements, to mitigate element resolution sensitivity of the computations. The statistical failure model was tested in simulations of a Behind Armor Debris (BAD) experiment, and found to do a much better job at predicting the size distribution of fragments than the conventional (homogeneous) failure model. The approach used here to include a size effect in the model proved to be insufficient, and including correlated statistics and/or flaw interactions may improve the model.

Available Online:

Research: Weibull fragmentation in the Uintah MPM code

Tough disk impacting brittle disk

Below are links to two simulations of disks colliding. The first is elastic and the second uses a fracture model with spatially variable strength based on a scale-dependent Weibull realization. Both take advantage of the automatic contact property of the MPM.

WeibConstMovie:  disks colliding without fracture

WeibPerturbedGood: disks colliding with heterogeneous fracture

This basic capability to support statistically variable strength in a damage model has been extended to the Kayenta plasticity model in Uintah.

Publication: Statistical perturbation of material properties in Uintah

Swan, S. and R. Brannon (2009)

Illustration of stair-stepping typical of finite sampling from a Weibull distribution

Current simulations of material deformation are a balance between computational effort and accuracy of the simulation. To increase the accuracy of the simulated material response, the simulation becomes more computationally intensive with finer meshes and shorter timesteps, increasing the time and resource requirements needed to perform the simulation. One method for improving predictions of brittle failure while minimizing computational overhead is to implement statistical variability for the material properties being simulated. This method requires only a relatively small increase in resource requirements while significantly improving the realism of simulation results. Currently, most simulation frameworks inaccurately describe brittle and heterogeneous materials as uniform bodies of equal strength and consistency. This oversimplification underscores the need to implement statistical variability to help better predict material response and failure modes for materials that contain intermittent abnormalities such as changes in hardness, strength, and grain size throughout the specimen. Uintah, the computational framework developed by the University of Utah’s C-SAFE program, has a simplistic native Gaussian distribution function that was hard-coded into select material models. The goal of this research is to create an easily duplicable method for enabling dynamic global variability according to a Weibull distribution in constitutive models in Uintah and to implement said ability into the constitutive model Kayenta. The main application of Kayenta is to simulate geological response to penetration and perforation. For the purpose of simulating failure in brittle geological samples, the Weibull distribution produces realistic statistical scatter in constituent properties that correlates well with flaws and irregularities observed in laboratory tests.
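The stair-stepping in the figure comes from sorting a finite number of Weibull draws: each sampled strength contributes one step to the empirical distribution. The sketch below uses hypothetical parameter values (a Weibull modulus of 10 and a reference strength of 50, not Kayenta defaults) and also shows the weakest-link volume scaling commonly paired with Weibull strength statistics:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical Weibull strength parameters (illustrative only).
m = 10.0         # Weibull modulus: higher m means less scatter
sigma_0 = 50.0   # reference (scale) strength

# Finite sample of per-element strengths; sorting the draws produces the
# stair-stepped empirical distribution seen in the figure.
samples = sigma_0 * rng.weibull(m, size=30)
empirical = np.sort(samples)

# Weakest-link scale effect: an element n_ref times larger than the
# reference volume has its strength scaled down by n_ref**(-1/m).
n_ref = 8.0
median_ref = np.median(empirical)
median_scaled = median_ref * n_ref ** (-1.0 / m)
```

Larger elements thus receive systematically lower strengths, which is one way a size effect can be folded into per-element realizations.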

Available online: