Publication: Initial inclusion of thermodynamic considerations in Kayenta

T.J. Fuller, R.M. Brannon, O.E. Strack, J.E. Bishop

Figure: Displacement profile for Thermo-Kayenta at the end of the simulation; the red dots represent the experimental profiles.

A persistent challenge in simulating damage of natural geological materials, as well as rock-like engineered materials, is the development of efficient and accurate constitutive models. The common feature of these brittle and quasi-brittle materials is the presence of flaws such as porosity and networks of microcracks. The desired models must be able to predict the material response over a wide range of porosities and strain rates. Kayenta [1] (formerly called the Sandia GeoModel) is a unified general-purpose constitutive model that strikes a balance between first-principles micromechanics and phenomenological or semi-empirical modeling strategies. However, despite its sophistication and ability to reduce to several classical plasticity theories, Kayenta is incapable of modeling deformation of ductile materials, in which deformation is dominated by dislocation generation and movement that can lead to significant heating. This stems from Kayenta's roots as a geological model, where heating due to inelastic deformation is often neglected or presumed to be incorporated implicitly through the elastic moduli. The sophistication of Kayenta and its large set of features, however, make it an attractive candidate model to which thermal effects can be added. This report outlines the initial work in doing just that: extending the capabilities of Kayenta to include deformation of ductile materials, for which thermal effects cannot be neglected. Thermal effects are included under an assumption of adiabatic loading by computing the bulk and thermal responses of the material with the Kerley Mie-Grüneisen equation of state and adjusting the yield surface according to the updated thermal state. This new version of Kayenta, referred to as Thermo-Kayenta throughout the report, is capable of reducing to classical Johnson-Cook plasticity in special-case single-element simulations and has been used to obtain reasonable results in more complicated Taylor impact simulations in LS-DYNA. Despite these successes, however, Thermo-Kayenta requires additional refinement for it to be consistent in the thermodynamic sense and to be considered superior to other, more mature thermoplastic models. The initial thermal development, results, and required refinements are all detailed in the full report.
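For readers unfamiliar with it, a Mie-Grüneisen equation of state expresses pressure as a function of density and specific internal energy. In its generic textbook form (shown here only as background; the exact Kerley formulation implemented in Thermo-Kayenta is given in the report itself),

p(ρ, e) = p_ref(ρ) + Γ(ρ) ρ [ e − e_ref(ρ) ],

where p_ref and e_ref are the pressure and specific internal energy along a reference curve (such as the Hugoniot) and Γ is the Grüneisen parameter. Under the adiabatic-loading assumption mentioned above, inelastic work contributes to e, and the resulting updated thermal state is what drives the adjustment of the yield surface.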

Available Online:

http://www.mech.utah.edu/~brannon/pubs/7-2010FullerBrannonStrackBishopThermodynamicsInKayenta.pdf

Tutorial: Use of the shell 'xargs' command

I have found the 'xargs' command-line utility to be very useful.  Often you can avoid writing a special shell script by using xargs to run a command on a list of files.  Its use is best described through an example.  I have recently been migrating files from one svn repository to another.  I began by copying all of the files over using rsync.  This also copied all of the .svn directories from the old repository, which I didn't want.  That left me with at least 30 directories, each with a .svn directory that needed to be removed.  What to do?  Use xargs.  Here is the command:

command-prompt> find . -name .svn | xargs rm -rf

This uses the find command to generate a list of all files and directories named '.svn' (case sensitive).  That list is then piped to the xargs command, which runs 'rm -rf' on each entry in the list.  That's it!  Note that as written the command searches the current directory and all of its subdirectories for entries named '.svn'.  If you want 'find' to search a directory other than the current one, specify that directory's path before the -name option (for example, 'find /some/path -name .svn').  See the man page for find for more info.
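One caveat worth knowing (it doesn't matter for '.svn' directories, but it will bite you eventually): xargs splits its input on whitespace, so file names containing spaces can be mangled.  A safer variant uses find's -print0 option together with xargs -0 so that entries are separated by null characters instead:

command-prompt> find . -name .svn -print0 | xargs -0 rm -rf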

Tutorial: Use rsync instead of mv or scp when it really matters

I’ve been running a lot of simulations on the ‘updraft’ parallel computing cluster at the University of Utah.  My input files often have to wait in the queue for quite a while (a few days sometimes) before they can be run.  The simulations generate large data sets which I then need for post-processing.  The directory where these files are created on the cluster is regularly wiped by the administrators to keep space free for other users, so you don’t want to leave important data sitting around on that file system.  I had been moving data back to my home directory on the cluster using ‘mv’, and eventually transferring it to my workstation using ‘scp’.  This was kind of a pain and took FOREVER!

I also discovered something that caused me to completely abandon ‘mv’ for any data that is even somewhat important.  I was using ‘mv’ to transfer data to my home directory when I lost my internet connection.  Big deal, right?  I logged back in only to find that the data files had been corrupted by the interrupted ‘mv’ command, and I had to run the simulation all over again to generate a new data file.  Bummer.  A little research on ‘mv’ confirmed that if it is interrupted for any reason, it often loses data.  Not good.

Enter rsync.  rsync is a tool that makes a copy of files and directories.  If it gets interrupted, you can simply restart it and it will essentially continue where it left off.  Why not just use cp or scp?  Two reasons.  First, if cp or scp is interrupted and then issued again, it simply starts over from the beginning.  This is a real problem when the transfer takes an hour and you need the data NOW.  Which brings me to the second reason: speed.  If you call rsync with the -z flag, it compresses the data before copying it, which results in a HUGE speed-up on remote file transfers.  Of course, with rsync the source files are left in place, so once the transfer is complete you need to manually delete the unwanted copy.  You can use ‘rdiff’ to verify that the two copies are in fact identical before deleting the unwanted files.  Did I mention that rsync is also great for backups?
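As a concrete illustration (the host name and paths below are placeholders, not the actual cluster addresses), a compressed, resumable transfer from the cluster to a workstation might look like:

command-prompt> rsync -avz user@cluster.example.edu:/path/to/run_output/ ~/simulations/run_output/

Here -a preserves permissions and time stamps, -v prints progress, and -z compresses the data in transit.  If the connection drops partway through, re-issuing the exact same command picks up roughly where the transfer left off rather than starting over.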