J. Burghardt, B. Leavy, J. Guilkey, Z. Xue, R. Brannon
The capability of the generalized interpolation material point (GIMP) method in simulating penetration events is investigated. A series of experiments was performed in which a shaped charge jet penetrates a stack of aluminum plates. Electronic switches were used to measure the penetration time history, and flash x-ray techniques were used to measure the density, length, radius, and velocity of the shaped charge jet. Simulations of the penetration event were performed with the Uintah MPM/GIMP code using several different models of the jet. The predicted penetration time history for each jet model is compared with the experimentally observed penetration history. The characteristics of the predicted penetration were found to depend on how the jet data are translated into a discrete description. The discrete jet descriptions were modified so that the predicted penetration histories fell very close to the range of the experimental data. In comparing the various discrete jet descriptions, the cumulative kinetic energy flux curve was found to be an important way of characterizing the penetration characteristics of the jet. The GIMP method was found to be well suited to simulation of high-rate penetration events.
The CSM group has independently confirmed a case study demonstrating the truth of a claim in the literature that any non-associative rate-independent plasticity model admits a non-physical dynamic achronistity instability. By stimulating a non-associative material in the “Sandler-Rubin wedge” (above yield but below the flow surface), plastic waves are generated that travel faster than elastic waves, introducing negative net work in a closed strain cycle that essentially feeds energy into a propagating wave, producing unbounded growth in displacement with time.
Sandler-Rubin instability: an infinitesimal pulse grows as it propagates
The Uintah computational framework (UCF) has been adopted for simulation of shaped charge jet penetration and the subsequent damage to geological formations. The Kayenta geomechanics model, as well as a simplified model for shakedown simulations, has been incorporated within the UCF and is undergoing extensive development to account for fluid in the pore space.
A generic penetration simulation using Uintah
The host code (Uintah) itself has been enhanced to accommodate material variability and scale effects. Simulations have been performed that import flash X-ray data for the velocity and geometry of a particulated metallic jet so that uncertainty about the jet can be reduced to develop predictive models for target response. Uintah’s analytical polar decomposition has been replaced with an iterative algorithm to dramatically improve accuracy under large deformations.
Analysis and computations have been performed by the Utah CSM group to support experimental investigations of unvalidated assumptions in plasticity theory. The primary untested assumption is that of a regular flow rule in which it is often assumed that the direction of the inelastic strain increment is unaffected by the total strain increment itself. To support laboratory testing of this hypothesis, the general equations of classical plasticity theory were simplified for the case of axisymmetric loading to provide experimentalists with two-parameter control of the axial and lateral stress increments corresponding to a specified loading trajectory in stress space. Loading programs involving changes in loading directions were designed. New methods for analyzing the data via a moving least squares fit to tensor-valued input-output data were used to quantitatively infer the apparent plastic tangent modulus matrix and thereby detect violations of the regular flow rule. Loading programs were designed for validating isotropic cap hardening models by directly measuring the effect of shear loading on the hydrostatic elastic limit.
Michael Braginski (postdoc, Mech. Engr., UofU)
Jeff Burghardt (PhD student, Mech. Engr., UofU)
Stephen Bauer (Manager, Sandia National Labs geomechanics testing lab)
David Bronowski (Sandia geomechanics lab technician)
Erik Strack (Manager, Sandia Labs Computational Physics)
I’ve been doing a lot of writing lately. I’ve come to believe that writing well is at least as important for engineers as calculus. This past semester I took a dissertation writing class from the writing department here at the University of Utah. It was very interesting to read dissertations from fields as diverse as literature, materials science, nursing, and nuclear engineering. I think it’s safe to say it was beneficial for everyone involved. One nice resource that another student suggested is a book titled “The Elements of Style” by William Strunk and E.B. White. Yes, that’s E.B. White of “Charlotte’s Web” fame. I picked up a copy at the library and have found it an excellent, and readable, resource for writing well. I’ve also discovered that nearly everyone else on the planet knew about it and I was somehow left in the dark. So for any of you who might still be in the dark about this wonderful resource, I highly recommend it.
Abstract: This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.
You may download the full report here.
I have found the ‘xargs’ command line utility to be very useful. Often you can avoid writing a special shell script by using xargs to run a command on a list of files. Its use is best described through an example. I have recently been migrating files from one svn repository to another. I began by copying all of the files over using rsync, which also copied all of the .svn directories from the old repository, which I didn’t want. This left me with at least 30 directories, each containing a .svn directory that needed to be removed. What to do? Use xargs. Here is the command:
command-prompt> find . -name .svn | xargs rm -rf
This uses the find command to generate a list of all files and directories named ‘.svn’ (case sensitive). The list is then piped to the xargs command, which runs ‘rm -rf’ on each entry. That’s it! Note that as written, this command searches the current directory and all of its subdirectories for files named ‘.svn’. To have ‘find’ search a different directory, give that directory’s path as the first argument to find, before the -name test. See the man page for find for more info.
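The same pipeline can be made robust against file names containing spaces by using null-terminated output. A minimal sketch (the ‘demo’ directory tree is made up just for this illustration):

```shell
# Build a throwaway tree with a couple of .svn directories (hypothetical names).
mkdir -p demo/a/.svn demo/b/.svn
touch demo/a/file.txt

# -print0 emits NUL-separated names and xargs -0 reads them, so names
# with spaces or newlines can't be split into bogus arguments.
find demo -name .svn -print0 | xargs -0 rm -rf
```

Afterwards the .svn directories are gone while ordinary files such as demo/a/file.txt survive.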
I’ve been running a lot of simulations on the ‘updraft’ parallel computing cluster at the University of Utah. My input files often have to wait in the queue for quite a while (a few days sometimes) before they can be run. The simulations generate large data sets which I then need for post-processing. The directory where these files are created on the cluster is regularly wiped by the administrators to keep space free for other users, which means you don’t want to leave important data sitting around on that file system. I’d been moving it back to my home directory on the cluster using ‘mv’, and eventually transferring it to my workstation using ‘scp’. This was kind of a pain and took FOREVER! I also discovered something that caused me to completely abandon ‘mv’ for any data that is even somewhat important. I was using ‘mv’ to transfer the data to my home directory when I lost my internet connection. Big deal, right? I logged back in only to find out that the data files had been corrupted by the interrupted ‘mv’ command. Now I had to run the simulation all over again to generate a new data file. Bummer. I did a little research and found that if ‘mv’ is interrupted for any reason, it often loses data. Not good.

Enter rsync. rsync is a tool that copies files and directories. If it gets interrupted, you can simply restart it and it will essentially continue where it left off. Why not just use cp or scp? Two reasons. First, if cp or scp is interrupted and then issued again, it simply starts over from the beginning. This is a real problem when the transfer takes an hour and you need the data NOW. Which brings me to the second reason: speed. If you call rsync with the -z flag, it compresses the data before copying it, which results in a HUGE speedup on remote transfers. Of course, with rsync, once the files are transferred you need to manually delete the unwanted copy.
You can use ‘rdiff’ to verify that the two copies are in fact identical before deleting the unwanted files. Did I mention that rsync is great for backups too?
I remember learning how to use a debugger at Utah State University as an undergraduate. We learned to program in Fortran 90 and used a Microsoft debugger. It seemed cool, but honestly, with the types of programs I was writing at the time, print statements seemed effective enough and less of a hassle. Recently I have been working with the Uintah MPM code, which is MUCH larger and more complex than any code I’ve ever written. After several months of wading through the code I finally came across a problem that “cout” just could not help me solve. The code was suddenly crashing with an allocation error and giving no useful backtrace information. A colleague suggested that I use gdb to find out where the problem was occurring. It turned out to be very, very helpful and surprisingly easy to use. I’ve recently become a convert to Emacs, and found that the gdb interface in Emacs is especially nice. So here it goes: