CONTENTS:

- Status of Women in Japan
- Diary of Food Activities
- Unusual Japanese customs (by US standards)

**Trip Report, Unofficial Appendix**

**1. Status of Women in Japan**

**1.1 Girls Day, March 3, 1994**

I found out that March 3rd in Japan is called “girl’s day”, to honor girls. There is also a “boy’s day”, in May. The one in May is a national holiday. The one in March is not.

**1.2 Science Hall of Fame, March 6, 1994**

When I went to the Japan Science Foundation Science Museum in Tokyo, they had a dining hall which was wallpapered by the great contributors to science throughout history. They had about a hundred pictures with short biographies. I noticed that there were only two women. Almost all of the men had their own box on the wall, but both women were sharing theirs. The first was Marie Curie, who was with her husband Pierre. He was listed first; I assumed that was because he was older, even though he had only half the number of Nobel Prizes. However, I saw that Chien-Shiung Wu (co-discoverer of non-conservation of parity during beta decay) was listed last behind her two male collaborators, even though she was the oldest.

I was happy to see that Jean-Baptiste de Monet de Lamarck, the French naturalist famous for the long-discredited Lamarckian theory of evolution, had his own place on the wall!

**1.3 News Item, March 8, 1994**

In the paper today, there was an article about Japan’s VERY FIRST female train conductor. She has a master’s degree in physics, but many are concerned that a woman might have trouble doing the job, because conductors need to know how to handle drunken passengers and what to do in emergency situations.

**1.4 Institute Literature**

Quoting from an information booklet I received from one of the institutes I visited: “The Institute invites **men** of learning and experience to serve as the advisory committee to the Director-General,” (emphasis added).

**1.5 Office Ladies**

In Japanese companies, most of the young women in the offices are called “OLs” (Office Ladies). Their job is to serve tea, mop up spills, hold doors, answer the phone, and provide a soft, feminine presence. Some people call them “*shokuba no hana*”–office flowers, implying that they are there more for decoration than for work. They generally stay around until they reach “tekireiki”–the suitable age to marry, generally 25 to 27 nowadays.

By the way, many women at the upper limit of tekireiki, born in 1966, are destined to go unmarried. 1966 was the Japanese year of the fiery horse; these women are called “hinoe uma”–fiery horses–and are considered to be bad luck. (If you don’t believe that the Japanese take such superstitions seriously, all you have to do is look at birth statistics–in 1966, the birthrate was 25% lower than in 1965.)

**1.6 Dinner Conversation, March 15, 1994**

However, things seem to be changing. At dinner I sat next to a woman scientist from [a prestigious Japanese university]–one of the few women at the conference. She is the discoverer of [a new and unusual high-pressure phase produced in the laboratory]. She got married last year and has retained her original name (she said that her name raised some eyebrows when she and her husband checked into the hotel for the meeting). Japan is one of two countries in the world in which it is ILLEGAL for a woman not to take her husband’s name (the other is India). So her name is officially the same as her husband’s, but she uses her old name as a “pen name” and on her badge at the conference. She is also a [sport] pilot, [adventurer/explorer], and former field [scientist]. She is an active ballet dancer as well–a fact she withholds from her colleagues because she feels they would not approve. Not all Japanese women are shokuba no hana. Unfortunately, she did not give a presentation at the conference.

**1.7 Progress**

The existence of women such as the researcher I met is evidence that the status of women in Japan is improving. I would compare the situation to that in the U.S. about 25 years ago. While I was in Japan I refereed a paper [on the results of supercomputer simulations at an American university]. The principal author was a female Japanese graduate student, who had recently asked me about job opportunities at [my employer, a major US laboratory]. All the time I was in Japan, the Japanese scientists were telling me that there was a shortage of scientists. They are hiring a lot of foreigners, but they are very strict about immigration. As more and more Japanese women go into science, I think the solution is obvious.

**2 Diary of Food Activities**

**2.1 Breakfast, March 2, 1994**

When I checked into the hotel last night, the woman at the desk asked me if I wanted a “Japanese” or “English” breakfast. Being that I was in Japan, I ordered Japanese. I sat down for breakfast at a table with a funny tray on the opposite side. The waitress moved the tray in front of me. I took off the cover and there was food inside. It was time to embark upon my first meal in Japan.

The tray was like one of those TV dinner trays, with several depressions for different food items. A sad, dried-out looking fish was flattened out in one compartment, with eyes still in its head looking back at me. Some unrecognizable stuff was in another depression, and some white stuff (potato salad?) with two meat balls in another. There was a little plastic package with some strange green crackers (I later figured out this was seaweed). Finally, there was something I recognized–an egg.

I decided to start with the most familiar. I cracked it, expecting hard-boiled, and discovered it was raw. I had no idea what to do with it, and was sitting there staring at it when the waitress brought me a bowl of rice and a bowl of soup. The solution was obvious–I dumped it in the soup. I figured I’d let it sit and cook awhile when a businessman sat down at a table in front of me and opened his TV dinner tray. I decided to wait and do whatever he did. He cracked his egg, dumped some soy sauce on it, scrambled it with his chopsticks, and — put it on his rice. Oh well.

**2.2 Breakfast, March 3, 1994**

Instead of a raw egg I got some nondescript, slimy mush (probably raw egg mixed with something). I put it on my rice this time. In the other tray holes were: some kind of vegetables, a little weenie, chunks of sweet potato, some little edible packages of something fishy, and another dried-out, flattened staring fish.

**2.3 Dinner, March 5, 1994**

I was invited to dinner at the home of a Canadian couple who both have temporary jobs in Tsukuba (they are from Edmonton and are good friends of a couple I knew from grad school, and who hosted me when I gave a seminar there a few years ago). We had raw octopus as an appetizer. One of their young sons loves octopus, and scarfed most of it up, suction cups and all. I tried not to let the suction cups get stuck to my tongue.

**2.4 Dinner, March 8, 1994**

After visiting the Tokyo Institute of Technology, the professor who hosted me escorted me and another Japanese colleague to the Ginza district in Tokyo. It was raining hard, and we folded up our umbrellas and entered a high-rise building. The TIT professor pulled out a magnetized card and put it in a slot next to an elevator–the doors opened, we got in, and it took us to the top of the building. There was an umbrella rack full of dripping umbrellas. I was shown how to put my umbrella in the rack, close a clamp-like device around it, and remove a key like the ones on coin-op lockers. Apparently umbrellas are one of the few items in Japan that are subject to theft.

We then entered a door, and were greeted by several waitresses–in playboy bunny suits! We proceeded to sit down at a table, and the bunnies brought us course after course of unusual food items. Among these were shark-fin soup and jellyfish, along with the usual suction-cup-laden tentacle-bearing invertebrates.

**2.5 Dinner, March 10, 1994**

Tonight we were working late in the lab, and the NMR spectroscopist took me to dinner. Like a lot of Japanese fish places, the restaurant had tanks with fish swimming around, and you get to choose your meal while it is still alive. The menu has pictures of the food. In many of these pictures, a fish head is propped up and prominently displayed, eyes staring off into space.

First I ordered a beer, and with it they brought a bowl of something that looked like SQUID FETUSES! [My host] said they were glow-worm cuttlefish.

We ordered something called chunogunabe (or something), which is a big pile of stuff you put in boiling water (mushrooms, vegetables, tofu, chunks of chicken, fish, meat, oysters, clams, etc.). We had all the stuff piled into the boiling water, and we went to put the prawns (giant shrimp) in, and THEY TRIED TO JUMP BACK OUT!!!! THEY WERE STILL ALIVE!! I had to watch the poor little invertebrates die a horrible death, so I could eat them.

It’s a good thing we didn’t order this other thing on the menu that my host likes. They bring you a live fish and FILLET it on your plate while it’s STILL SQUIRMING, GASPING, WRITHING and TWITCHING!!! At least you know it’s fresh that way.

I observed to [my host] that fish heads and eyeballs would not be considered appetizing to most Americans. He said that fish eyeballs are quite a delicacy, and that tuna eyeballs are considered brain food. There is always a strong demand for them during examination times, when Japanese students are qualifying for various universities. They can fetch as much as 6000 yen (~$50) apiece (I don’t know if they need to be purchased in pairs).

**2.6 Dinner, March 15, 1994**

I had a cuttlefish with very large eyeballs. I noticed that when I chewed the head, the eyeballs exploded like that gum with the fluid centers, but it wasn’t the same flavor (not everyone would appreciate the humor in that!) The gum is supposed to make your breath smell better, but I’m sure the cuttlefish eyeballs don’t. They didn’t make me feel any smarter afterwards, either.

**3 Unusual Japanese Customs (by U.S. standards)**

- Lots of Japanese wear cotton surgical masks around. Is it because they consider it rude to blow their noses in public? A mask would be the only thing to do in the event of a cold.
- Japanese commuters ride bicycles in the rain carrying umbrellas. I never got the hang of that, but I got soaked to the skin riding home late one night after work.
- Bicycles and umbrellas are the only items subject to theft in Japan. If they are not locked up they seem to be considered community property. There are racks of umbrellas outside of every building.
- The ticket-taker at the Imperial Palace in Tokyo had to put on his white glove before he took my ticket. I wasn’t sure if he always did that or if it was just me.
- Menus always show pictures of fish heads, usually tastefully arranged with garnish.


Below is shown a five-link chain (in red-blue-green-orange-black). Immediately below this colorful chain is a dark-gray plot of the exact (mesoscale) lineal density, which is defined at a location “x” to be the mass within an infinitesimal segment dx at that location divided by the segment’s length dx. This local density is shown as the dark-gray shaded plot in the upper-left corner, and it is the slope of the black line in the graph in the lower-left corner.

The exact homogenized (macroscale) lineal density at a location “x” is defined as the exact total mass falling inside the span from zero to x, divided by the chain’s length (x itself). While the mesoscale density is the local slope at location “x” of the black line in the graph, the *macroscale* density is the secant slope at location “x” of the same black line. The continuum (red-dashed) approximation of the local mass distribution ignores local fluctuations from the fact that the chain is actually heterogeneous. For short chain lengths, the exact macroscale density is significantly different from the continuum density, but this discrepancy asymptotes toward zero as the chain length is increased.
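The local-slope vs. secant-slope distinction is easy to reproduce numerically. Below is a minimal sketch; the piecewise density profile (each unit-length link carries all of its mass in its first half) is an invented stand-in for a real chain’s mass distribution, not the one in the infographic:

```python
import numpy as np

rho = 1.0                        # continuum (average) lineal density
x = np.linspace(0.01, 20.0, 2000)

def cumulative_mass(x, rho=1.0):
    """Exact mass in [0, x] for an invented heterogeneous profile: each
    unit-length link has density 2*rho over its first half and 0 over its
    second half, so the local density fluctuates while the average is rho."""
    n_full = np.floor(x)                          # whole links to the left of x
    frac = x - n_full                             # position inside the current link
    return n_full * rho + 2.0 * rho * np.minimum(frac, 0.5)

macro = cumulative_mass(x) / x                    # secant-slope (homogenized) density
discrepancy = np.abs(macro - rho) / rho           # shrinks roughly like 1/x
```

The macroscale density oscillates about rho with an envelope that decays as the chain length grows, which is exactly the asymptotic behavior described above; picking a tolerable discrepancy threshold then picks the RVE size.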

The theoretical representative volume element (RVE) size corresponds to the size for which the discrepancy (like the plot in the lower-right corner of the infographic) falls below some tolerable threshold, which is determined by considering the tolerable error in an engineering simulation.

These concepts apply to other properties besides density. For example, the macroscale elastic stiffness would be defined as the force applied to the chain divided by the corresponding induced displacement. Like density, this macroscale property varies with the number of links in the chain, but it asymptotes to a steady value as the chain length increases.

Density has a nice asymptotic continuum limit that isn’t sensitive to dilutely distributed statistical perturbations in the local (microscopic) density. If, for example, 1 in 10000 links is made of light aluminum while the others are made of heavy steel, then the continuum density will nevertheless be close to that of a chain made *entirely* of steel links. The continuum elastic stiffness is likewise not highly sensitive to slight variations in local constituent (link) stiffness. A chain’s failure strength, on the other hand, is profoundly affected by the existence of even a minuscule fraction of weaker links. A mostly steel chain that contains relatively few aluminum links would have a continuum strength equal to the strength of the weaker (aluminum) links. That’s because (in the limit) an infinitely long chain would contain at least one aluminum link. For short chains made of, say, 10 links (each of which has a 1 in 10000 chance of being made of aluminum), the macroscale strength would be higher on average than the strength of longer chains. The strength data for short chains would also be more variable.
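The weakest-link argument can be checked with a quick Monte Carlo sketch (the link strengths, sample count, and variable names below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p_weak = 1e-4                  # chance any given link is aluminum (the weak phase)
s_steel, s_alum = 10.0, 4.0    # illustrative link strengths (arbitrary units)

def mean_chain_strength(n_links, n_samples=200_000):
    """Monte Carlo estimate of the average strength of an n-link chain:
    the chain is only as strong as its weakest link, so its strength is
    s_alum whenever at least one link is aluminum."""
    p_has_weak = 1.0 - (1.0 - p_weak) ** n_links
    has_weak = rng.random(n_samples) < p_has_weak
    return np.where(has_weak, s_alum, s_steel).mean()

short = mean_chain_strength(10)        # almost surely all steel -> near 10
long_ = mean_chain_strength(100_000)   # almost surely has a weak link -> near 4
assert short > long_
```

A 10-link chain almost never contains an aluminum link, so its average strength stays near the steel value, while a very long chain almost surely contains one and its strength collapses to the aluminum value.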

These observations give insight into what a modeler must pay attention to when using continuum macroscale properties in simulations of engineering structures. To design for the structure’s daily (i.e., normal and therefore usually elastic) usage conditions, homogenized continuum properties would be fine. However, continuum strength properties would need to be appropriately perturbed based on the size of the finite elements. This explicit incorporation of statistical variability in continuum properties is required when those perturbations strongly influence the engineering objective of the analysis (such as computing failure risk). In fact, it can be argued that such revisions are crucial for predicting fracture and fragmentation whenever the finite-element size is smaller than a few *kilometers*. For more details on scale-dependent and statistically variable macroscale properties, see the publication “Aleatory quantile surfaces in damage mechanics” and the more recent 2015 IJNME article, “Aleatory uncertainty and scale effects in computational damage models for failure and fragmentation” by Strack, Leavy and Brannon.



Similar anomalies have been observed in penetration simulations, where a sudden pressure pulse forms in material that had once been severely deformed but is relatively quiescent at the time of the anomalous pulse. The fundamental reason for this anomaly (much less its resolution) remains an open research question, which is why it is being described here in this blog posting.

The problem definition is provided in WaterColumnKinematicAnomalyProblemStatement.pdf. We challenge (or plead with) the MPM community to articulate proper algorithms that will give realistic and stable solutions to this problem for a range of realistic material property values.

2017-09-15:

Update: investigation results from Craig Schroeder (craig -at- ucr -dot- edu)

SUMMARY:

* If I run the simulation with implicit integration, it never fails.

* If I run with explicit, two failure modes occur: (a) instability, (b) J becomes negative

* Running with a modified F update (more on this below) prevents (b).

* Explicit sims fail because I am taking time step sizes that are too large.

* The equations of state are very stiff; a reliable explicit scheme will need to choose dt very wisely (see my advice at the end if you are interested).

WHAT I RAN:

I ran all combinations of the following (2*4*2*3*5*2*2 = 960 simulations)

* Constitutive model: both the power law and tait equation.

* Stiffness: 15 kPa, 150 kPa, 1.5 MPa, 15 MPa

* Time integration scheme: symplectic Euler (explicit) and backward Euler (implicit).

* Transfers: FLIP, PIC, APIC

* Maximum time step size: 0.03, 0.01, 0.003, 0.001, 0.0003

* Interpolation kernel: quadratic and cubic splines

* Deformation gradient (F) update: regular, modified (more on this below)
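A sweep like this is easy to enumerate with `itertools.product`; the values below are transcribed from the lists above (the string labels are illustrative):

```python
from itertools import product

models    = ["power_law", "tait"]
stiffness = [15e3, 150e3, 1.5e6, 15e6]            # Pa
schemes   = ["symplectic_euler", "backward_euler"]
transfers = ["FLIP", "PIC", "APIC"]
dt_max    = [0.03, 0.01, 0.003, 0.001, 0.0003]
kernels   = ["quadratic", "cubic"]
f_updates = ["regular", "modified"]

# Every combination of the seven parameter axes:
runs = list(product(models, stiffness, schemes, transfers,
                    dt_max, kernels, f_updates))
assert len(runs) == 960   # 2*4*2*3*5*2*2
```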

DEVIATIONS FROM DOCUMENT:

As far as I am aware, I deviate from the document in the following ways:

* Boundary conditions. I did not do reflection boundary conditions for my walls because (a) it makes the velocity field non-smooth, which messes up the update of F and (b) I don’t know how to implement this boundary condition with implicit time integration. Instead, I use the boundary conditions in [1]. It is essentially a sort of separating condition.

* Particle seeding. 4 particles per cell, irregularly seeded (using Poisson sampling).

WHAT I HAD TO CHANGE:

* My nonlinear solver had a bug, which I found and fixed.

* At the end of each time step, I replace F with sqrt(J)*I. Since the deviatoric part of F does not contribute to the constitutive model, errors here can grow without bound. Note that if J<0 at the end of the time step, the simulation will end.

* In both constitutive models, I test for J<1e-10. If this is true, then I return a “huge” result (1e20) instead of trying to compute the actual constitutive model. I need to do this because the nonlinear solver does line searches, which can evaluate the constitutive model at configurations where J<0. Since my simulations were all run with trapping floating point errors, this would terminate the simulations. If I instead return a large energy value, the line search will realize that this is a very bad configuration and ignore it.

EVALUATION CRITERIA:

A simulation is deemed a failure if any of these occur:

* The simulation terminates early. This can occur as a result of floating point exceptions.

* The velocity magnitude of any particle exceeds 10 in any time step.

* The time step falls below 1e-6 for any time step.

RESULTS:

* All implicit simulations succeed (480 sims).

* Explicit simulations go one of 4 ways

– success (81 sims)

– finish but big velocity (6 sims, max velocities: 10.7, 11.9, 12.2, 18.7, 34.7, 43.1)

– floating point error (183 sims); these are caused by J<0

– big velocity and/or tiny dt (210 sims)

* The modified F update prevents J<0, but it does not improve success much (for explicit: 43 with vs 38 without)

* The vast majority of failures occur immediately. The simulations are run to time 5s, but 345 of the simulations fail to reach time 0.05s. Of these, 169 die in the second time step. These sims are taking time step sizes that are unstable. My code calculates CFL based on the time it takes a particle to traverse one cell given the velocity at the beginning of the time step; the first time step has velocity zero, so it always takes a step at the maximum velocity allowed. No explicit sims finished at the highest stiffness, and only two succeeded at the second-highest stiffness (both using the smallest time step limit).

* The velocity range for successful sims is:

– explicit pic 1.6-2.7

– explicit apic 5.4-7.0 (first fail: 10.7)

– explicit flip 6.2-8.1 (first fail: 11.9)

– implicit pic 1.6-2.7

– implicit apic 5.3-6.1

– implicit flip 5.4-8.2

We see that there is a distinct gap between the usual range of velocities (which I am calling successful) and the cutoff of 10, which suggests that this is a good choice for the cutoff.

MODIFIED F UPDATE:

The usual update for F is: F^(n+1) = (I + dt gradV) F^n. The modified update is from [2]: F^(n+1) = (I + dt gradV + 1/2 dt^2 gradV^2) F^n. This update is a better approximation for the “ideal” update rule F^(n+1) = exp(dt gradV) F^n and has the advantage that det(I + A + 1/2 A^2) >= 0 for any matrix A. If you are updating J instead of F, you can apply the same trick.
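A quick numerical check of the determinant claim (a sketch, not the code used in the investigation above): the factor in the modified update is the second-order truncation of exp(dt gradV), and 1 + x + x^2/2 has no real roots, so the determinant can never be driven negative.

```python
import numpy as np

def f_update_regular(F, gradV, dt):
    return (np.eye(2) + dt * gradV) @ F

def f_update_modified(F, gradV, dt):
    # Second-order truncation of exp(dt*gradV): det(I + A + A^2/2) >= 0
    # for any real A, so this update cannot produce J < 0.
    A = dt * gradV
    return (np.eye(2) + A + 0.5 * A @ A) @ F

# A violent compressive velocity gradient that inverts the regular update:
gradV = np.array([[-3.0, 0.0], [0.0, 1.0]])
det_reg = np.linalg.det(f_update_regular(np.eye(2), gradV, dt=1.0))   # -4: J < 0!
det_mod = np.linalg.det(f_update_modified(np.eye(2), gradV, dt=1.0))  #  6.25: safe

# Randomized spot check of the nonnegativity claim:
rng = np.random.default_rng(1)
for _ in range(1000):
    Fmod = f_update_modified(np.eye(2), rng.normal(scale=5.0, size=(2, 2)), dt=1.0)
    assert np.linalg.det(Fmod) >= 0.0
```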

MY ADVICE:

If you are going to run explicit, make sure your CFL computation takes into account forces. (My code does not do this because it is intended for implicit simulation, which does not require this). Ideally, compute your forces before deciding on a time step size. Then, choose a time step size small enough so that at the end of the time step:

* J does not change by more than a specific fraction (e.g., by at most 20%)

* x does not move more than a specific fraction of a cell

* total energy does not increase (careful about boundary conditions!). Or at least a quadratic approximation of it (using the Hessian of the potential energy function), which should be good enough.

I suspect that if you do these things, the failures you see in this sim will go away.
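A sketch of what such a kinematics-aware step chooser might look like (all names and factors are illustrative, not from any particular code). The J criterion becomes a bound on the velocity-gradient trace via the identity dJ/dt = J tr(gradV), so |tr(gradV)| * dt limits the relative change in J:

```python
import numpy as np

def suggest_dt(dt_max, v_max, gradV_list, h, dJ_frac=0.2, cell_frac=0.5):
    """Pick a step small enough that (a) J changes by at most dJ_frac and
    (b) no particle moves more than cell_frac of a cell width h."""
    div_max = max((abs(np.trace(L)) for L in gradV_list), default=0.0)
    dt_J = dJ_frac / div_max if div_max > 0.0 else np.inf
    dt_x = cell_frac * h / v_max if v_max > 0.0 else np.inf
    return min(dt_max, dt_J, dt_x)

# A nearly quiescent state (small v) can still carry a large velocity
# gradient, so the J criterion, not the CFL criterion, limits the step:
L_big = [np.array([[-40.0, 0.0], [0.0, 2.0]])]
dt = suggest_dt(dt_max=0.01, v_max=0.05, gradV_list=L_big, h=0.1)
```

Note that a purely velocity-based CFL would happily return `dt_max` here; only the gradV term catches the danger, which is consistent with the small-v/large-grad-v failure mechanism described in the postscript.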

REFERENCES:

[1] Gast, T., Schroeder, C., Stomakhin, A., Jiang, C., Teran, J., “Optimization Integrator for Large Time Steps,” IEEE Transactions on Visualization and Computer Graphics, 21(10), pp. 1103-1115 (2015).

[2] Jiang, C., Schroeder, C. and Teran, J., “An angular momentum conserving affine-particle-in-cell method,” Journal of Computational Physics, 338, 137-164 (2017).

P.S. sent in separate email from Craig:

I have an explanation (without proof, of course) for why you see the explosions after things cool down. When the sim is calm, your velocities are small. This might be telling your code that it is safe to take a big time step. That would give you the opportunity to make a big mistake (e.g., bring J close to 0). In particular, small v does not necessarily imply small grad-v, so you can mess up J without doing anything bad to x or v. Then, in the next time step, you get a big force, and your particles are off to the racetrack. If that is the case, then including your F or J update in your time step calculation should make the problem go away.


*Copyright statement: This infographic may be used freely as long as it isn’t altered in any way.*

Keep reading for important clarifications!

For resilience, C and A were identified as roughly equal because the yield energy per unit volume (i.e., the area up to the yield point) was not obviously different for either C or A. Because the beginning parts of these functions were each idealized to be straight lines, the yield energies are just half the yield strain times the yield stress. In comparison to material A, material C has a larger yield strain but smaller yield stress, while the *product* of these is about the same for both of them to give roughly equal resilience, which is also known as yield energy per unit volume. By a similarly rough “eyeball inspection,” the rupture energy per unit volume (i.e., the area under the curve up to the “X” rupture point where the material breaks apart into separate pieces) looks about the same for all three graphs, so the toughness has been identified to be approximately equal for all of them.
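The “equal resilience” eyeball argument amounts to a one-line computation (the numbers below are invented, chosen so that the strain-stress products match):

```python
def resilience(eps_y, sig_y):
    """Yield energy per unit volume for an idealized linear rise to yield:
    the area of the triangle = (1/2) * yield strain * yield stress."""
    return 0.5 * eps_y * sig_y

# Material A: stiff and strong; material C: compliant and weaker.
# The products of yield strain and yield stress match, so the resiliences agree.
U_A = resilience(0.002, 500e6)   # J/m^3
U_C = resilience(0.010, 100e6)   # J/m^3
```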

The term “hardness” (not listed explicitly in the infographic) has multiple definitions. Loosely speaking, though, hardness goes in *inverse* proportion to one of the above-listed measures of inelastic strain (usually rupture strain). The key thing to realize is that a high-hardness material has a low strain at failure, which means it will be able to hold its size and shape all the way up to the failure threshold better than a low-hardness material. Thus, for example, a billiard ball is hard in comparison to a ball of solid gold. If some material “D” has higher strength than material “E,” then “D” will *usually* (but not necessarily) have higher hardness than “E.” High-hardness materials tend to be poor choices for design against structural fatigue fracture; low-hardness materials, on the other hand, can “flow” at crack tips to effectively blunt the tip and thereby reduce stress concentrations that would otherwise tend to propagate the fracture.

Please keep in mind the following cautions:

- The yield threshold is often misleadingly identified (for beginners) as the location of a sharp bend in the slope of the stress vs. strain diagram, with the graph being a straight line up to that point. However, yield is better defined to be the value of stress at which unloading back to zero stress would produce a nonzero residual strain. Real materials have a little bit of yielding (by this definition) even at small stress values, so the ASTM standard definition sets the yield stress to correspond to a 0.2% residual strain upon unloading. A far more useful and practical definition of yield stress is that it is the value of stress at which modeling the material to be elastic would produce unacceptable errors in a given engineering design or analysis. The definition based on 0.2% offset is merely the standard guideline at which it is believed (by experienced or influential engineers) that models for engineering structures would require some form of inelasticity to be reliable. If you choose to define yield stress to be the stress at which an elastic model would become unacceptable for a given engineering purpose, then its value naturally is not fixed, because “acceptability” would depend on the problem at hand. Moreover, this definition of yield stress would, unlike the precise definition based on 0.2% residual strain, include damage as well as plasticity (see below for the distinction).
- A material is elastic if stress is truly a function of strain, meaning that unloading the material will cause the stress-strain data pair to retrace (backwards) the same path that it followed to get there during the loading phase. There is nothing in this definition that requires the elastic portion of the stress-strain plot to be a straight line as shown here in the fake materials of the infographic. Not only can elasticity be nonlinear, there isn’t even a requirement for the elastic portion of the plot to monotonically increase, so it is incorrect to assert that reaching a peak in a stress-strain plot is necessarily a sign of material damage. The *only* way to determine that there is inelastic deformation is to include unloading in your measurements of stress-strain response in order to look for signs of hysteresis. If the material unloads along a different path than it used for loading (i.e., if unloading and loading have different slopes), then the deformation is inelastic.
- Though not covered in this infographic, the term **plasticity** refers to a form of inelastic deformation that gives a nonzero strain (called residual strain) after the loads are removed, whereas **damage** refers to a change (usually a reduction) in the unloading slope such that unloading causes BOTH stress and strain to return *to the origin* (contrast this with yielding, which has strain return to a nonzero residual value).
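The 0.2%-offset construction mentioned above is straightforward to apply to digitized stress-strain data. Here is a sketch (the data are synthetic elastic/perfectly-plastic values with an invented modulus and yield stress):

```python
import numpy as np

def offset_yield(strain, stress, offset=0.002):
    """Estimate the 0.2%-offset yield stress from monotonic loading data:
    draw a line with the elastic slope E shifted right by `offset`, and
    report the stress at its first crossing with the stress-strain curve."""
    E = stress[1] / strain[1]                 # elastic modulus from the initial slope
    gap = stress - E * (strain - offset)      # curve minus the offset line
    i = np.argmax(gap <= 0)                   # index of the first crossing
    return stress[i]

# Synthetic data: E = 200 GPa, perfectly plastic plateau at 400 MPa
strain = np.linspace(0.0, 0.01, 1001)
stress = np.minimum(200e9 * strain, 400e6)
sigma_y = offset_yield(strain, stress)        # recovers the 400 MPa plateau
```

For real (noisy, gradually curving) data, the same construction applies, but the elastic modulus should be fit over the initial linear range rather than taken from a single point.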

For a beginner to understand the distinction between damage and plasticity, it is very helpful to consider a simple mechanical system like the following, which shows a sliding top plate beneath which are springs and rigid links that will buckle when they experience a critical axial force. The buckling threshold is determined by both the lengths of the rigid links (shown here short and long, respectively) and the properties of the lateral support (shown here as simple springs).

Let’s now consider three buckling components (rather than just the two pictured above). The goal is to determine the overall force required to induce a given displacement of the top plate. To get a basic sense of the underlying reasons for variety in material behavior, we list below three cases corresponding to different choices for the lateral supports. In each of these plots, the solid blue line corresponds to statistically variable lengths of the rigid links, while the red dashed line shows the result corresponding to all equal-sized links to help you see that statistical variability of a material’s microstructure most profoundly affects peak load-carrying capacity rather than the initial elastic properties.

CASE 1: ELASTIC LATERAL SUPPORTS

Here the lateral supports are purely elastic and attached to a rigid wall. The other (vertical) springs are taken to be nonlinear so that infinite force would be needed to squash them to zero length. The resulting three-unit system response is as follows:

This example proves, by the way, that reaching a peak in a force vs. displacement experiment does NOT necessarily imply inelasticity. This system reaches a peak, but it follows exactly the same path upon unloading as it did in the initial compression phase. Accordingly, this system is 100% elastic!

This system furthermore illustrates the importance of specifying the nature of loading control. Here, we ran the simulation by controlling the displacement of the top plate, which gives a unique force of loading. It would be impossible to get the same result via force control because, as seen, a given force corresponds to more than one displacement. An elastic system is defined by force being a proper function of displacement. There is no requirement that this relationship be invertible to give displacement as a function of force.

Variability in the link geometry (solid blue line) causes a reduction in peak load-carrying capacity (i.e., the solid blue line does not go as high as the red-dashed line).

CASE 2: BREAKABLE LATERAL SUPPORTS

Instead of attaching each spring to a rigid wall, as was done in CASE 1, let’s suppose that they are attached to a breakable support. The lateral support behaves the same as in CASE 1 up until reaching a critical load in the lateral support, at which point the response is the same as if there is no lateral support at all. Then the overall three-component force vs. displacement plot becomes

This example is inelastic because the force vs. displacement response does not follow the same path upon unloading as it did during the initial compression phase. This example furthermore illustrates that elasticity must not be described as having the feature that the system returns to its original shape upon removal of the load. This system returns to its original configuration (from a macroscale perspective), but it isn’t elastic!

The hallmark trait of inelasticity is hysteresis, which means that the force vs. displacement plot has a loop in it. This system illustrates the concept of DAMAGE, which is irreversible changes in a system’s overall elastic stiffness. For pure damage, the force and displacement return back to the origin upon load reversal as is seen to happen in this example. Pure damage has no permanent inelastic strain, yet it is still an inelastic phenomenon. The fact that damage affects elastic stiffness would be clearer if we had included a recompression phase, where we would see that the system follows its damaged unloading path (not the original loading path) up until more damage is induced to break that final lateral support.
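The “unloads to the origin with a degraded secant stiffness” behavior is easy to capture in a toy 1D damage model (all constants below are invented):

```python
import numpy as np

E0 = 100.0                 # undamaged stiffness
eps_0, eps_f = 0.1, 1.0    # strains where damage starts / completes (illustrative)

def damage(eps_max):
    """Scalar damage variable driven by the largest strain ever experienced."""
    d = (eps_max - eps_0) / (eps_f - eps_0)
    return float(np.clip(d, 0.0, 1.0))

def sigma(eps, eps_max):
    """Pure damage: the secant stiffness degrades, but unloading aims at the origin."""
    return (1.0 - damage(eps_max)) * E0 * eps

# Load to eps = 0.5, then unload: the stress returns to zero AT zero strain
# (no residual strain -- damage, not plasticity), along a reduced slope.
eps_max = 0.5
assert sigma(0.0, eps_max) == 0.0           # returns to the origin
assert sigma(0.5, eps_max) < E0 * 0.5       # reduced secant stiffness
```

Reloading in this model follows the damaged secant line back up (not the original loading curve) until `eps_max` is exceeded again, which is exactly the recompression behavior described above.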

Variability in geometry (solid blue line) also profoundly affects the post-peak response in comparison to the loading part of the curve. If the system had consisted of thousands of units (rather than just the 3 considered here), then variability would be seen to strongly influence the post-peak (softening) part of the response plot.

CASE 3: FRICTIONAL LATERAL SUPPORT

In this scenario, the lateral support behaves like a tightly fitted piston (e.g., a cork in a bottle) that behaves the same as a rigid support up until the force on the piston reaches a critical value needed for sliding. In contrast to CASE 2, where hitting a critical lateral force caused the lateral force to suddenly drop to zero, a frictional sliding causes the lateral force to suddenly stop increasing but instead stay at a constant value. When the loading is reversed, the frictional piston then drags against the motion to produce a residual internal lateral spring force even when the force on the three-component pusher plate is zero:

The hysteresis shows the system to be inelastic. The hallmark trait of internal friction is residual “permanent” displacement after the force on the pusher plate has been removed. In materials modeling, this phenomenon is called **PLASTICITY**. For many materials, plasticity is explained by movement of dislocations in the crystal structure, but that’s just one way to get plasticity. The broader definition, which requires no evidence of the physical source of plasticity, is merely that you get permanent strain upon load removal.
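This broader definition can be sketched with the textbook one-dimensional elastic/perfectly-plastic model (a spring in series with a frictional slider; the stiffness and yield force below are invented for illustration, and this is not the multi-link system in the plots). Once the slider slips, permanent strain accumulates, so the system unloads to zero force at a nonzero displacement:

```python
def ep_force(strain_path, k=100.0, f_yield=50.0):
    """1-D elastic/perfectly-plastic sketch (spring in series with a
    frictional slider): the force is elastic until it reaches +/- f_yield,
    beyond which the slider slips and permanent strain accumulates."""
    eps_p = 0.0                              # permanent (plastic) strain
    forces = []
    for eps in strain_path:
        f_trial = k * (eps - eps_p)          # elastic trial force
        if abs(f_trial) > f_yield:           # slider slips
            f = f_yield if f_trial > 0 else -f_yield
            eps_p = eps - f / k              # update permanent strain
        else:
            f = f_trial
        forces.append(f)
    return forces, eps_p

# Load to eps = 1.0 (the slider yields at eps = 0.5), then remove the force:
forces, eps_p = ep_force([0.0, 0.25, 0.5, 1.0, 0.5])
print(forces)   # [0.0, 25.0, 50.0, 50.0, 0.0]
print(eps_p)    # 0.5  <-- residual "permanent" strain at zero force
```

Note that no dislocations were mentioned anywhere in this sketch: permanent strain upon load removal is the defining observation, regardless of its physical source.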

Interestingly, friction seems to regularize variability-induced sensitivities. As seen, after all links have experienced buckling and sliding, there is little difference between the variable and deterministic (red and blue) lines.

CASE 4: DASHPOT and MASS in the lateral support

Each of the previous cases considered all components to have zero mass. Accordingly, each of the previous three cases is said to be "rate independent," which means that the force vs. displacement plot is the same for those problems regardless of how quickly the system is deformed. This CASE 4 scenario, on the other hand, allows mass and damping to exist in the system, which leads to dynamic system response that is sensitive to the rate of loading. The following simulations include a mass at the point marked with a red dot. If the mass is zero and the damping coefficient is infinite, this system behaves as it did in CASE 1.

These plots show various results depending on the loading rate and the amount of damping in the system. Even when mass and damping are high, VERY slow loading also looks nearly the same as CASE 1.
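Rate dependence of this kind can be illustrated with a trivial Kelvin-Voigt sketch (a spring in parallel with a dashpot, ignoring inertia for simplicity; the parameter values are invented for illustration, and this is not the multi-link simulation shown in the plots). At a constant displacement rate, the dashpot adds a rate-proportional force, and setting the dashpot coefficient to zero recovers a rate-independent response:

```python
def force_curve(displacements, rate, k=100.0, c=5.0):
    """Force for a spring (stiffness k) in parallel with a dashpot
    (coefficient c): F = k*u + c*(du/dt), with du/dt held at a constant
    loading rate.  Setting c = 0 recovers a rate-independent
    (CASE 1-like) response."""
    return [k * u + c * rate for u in displacements]

us = [0.0, 0.5, 1.0]
print(force_curve(us, rate=0.01))         # slow: nearly the elastic line k*u
print(force_curve(us, rate=10.0))         # fast: shifted up by the dashpot force
print(force_curve(us, rate=10.0, c=0.0))  # no dashpot: same at any rate
```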

In materials modeling, rate effects can be profound (i.e., non-negligible) and easily observed for materials like Silly Putty or other rubbery materials like bread dough and human flesh. All materials have an intrinsic time scale required to reach equilibrium (i.e., to stop wiggling) after a loading perturbation has stopped. If the loading occurs on a time scale shorter than this characteristic material response time, then we say it is a "high Deborah number event," in which case there is insufficient time for inertial and damping effects to play out, making the system behave more elastically than it would under slower loading. Thus, for example, bouncing a ball of Silly Putty puts it into a high-Deborah-number state, while slowly squashing it between your fingers is a low-Deborah-number event.
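The Deborah number is just the ratio of the material's characteristic response time to the time scale of the loading event. The numbers below (the putty relaxation time and the two event durations) are purely illustrative assumptions, not measured values:

```python
def deborah(material_time, process_time):
    """De = (material characteristic/relaxation time) / (loading time scale).
    De >> 1: behaves elastically (solid-like); De << 1: flows (fluid-like)."""
    return material_time / process_time

tau = 0.3                    # assumed putty relaxation time in seconds (illustrative)
print(deborah(tau, 0.005))   # bounce: ~5 ms impact  -> De = 60 (elastic rebound)
print(deborah(tau, 60.0))    # slow squash: ~1 min   -> De = 0.005 (it flows)
```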

Fun trivia: the moniker "Deborah" was inspired by Judges 5:5 in the Bible, which refers to the mountains flowing before the Lord (i.e., on His time scale, the mountains flow like a fluid and hence have a low Deborah number)!


The following infographic furthermore shows that engineering textbooks appropriately and overwhelmingly favor MoM over Strengths (to see details, click to open in separate page and then zoom to fit the page):

Thanks go to Dr. Ashley Spear for stimulating this commentary/flame.


- Clods of soil impact a plate: A major advantage of the Material Point Method (developed as part of this research effort) is that it automatically allows material interaction without needing a contact algorithm.

- Extrapolated buried explosive ejecta. The sample is in a centrifuge to get higher artificial gravity, so the particles move to the side because of the Coriolis effect!


Here she is 47 years later with the same messy hair and similar expression on her face…


**AUTHORS:** Michael A. Homel · James E. Guilkey · Rebecca M. Brannon

**ABSTRACT:** A practical engineering approach for modeling the constitutive response of fluid-saturated porous geomaterials is developed and applied to shaped-charge jet penetration in wellbore completion. An analytical model of a saturated thick spherical shell provides valuable insight into the qualitative character of the elastic–plastic response with an evolving pore fluid pressure. However, intrinsic limitations of such a simplistic theory are discussed to motivate the more realistic semi-empirical model used in this work. The constitutive model is implemented into a material point method code that can accommodate extremely large deformations. Consistent with experimental observations, the simulations of wellbore perforation exhibit appropriate dependencies of depth of penetration on pore pressure and confining stress.

http://link.springer.com/article/10.1007%2Fs00707-015-1407-2

Bibdata:

@article{ year={2015}, issn={0001-5970}, journal={Acta Mechanica}, doi={10.1007/s00707-015-1407-2}, title={Continuum effective-stress approach for high-rate plastic deformation of fluid-saturated geomaterials with application to shaped-charge jet penetration}, url={http://dx.doi.org/10.1007/s00707-015-1407-2}, publisher={Springer Vienna}, author={Homel, Michael A. and Guilkey, James E. and Brannon, Rebecca M.}, pages={1-32}, language={English} }


Well, I have so many pressures on my time that this hobby gets pushed to the bottom of the stack. This said, I find myself often discussing fourth-order tensor operations, the distinction between Voigt and Mandel components, and scalar measures of anisotropy. To help with such discussions, I am here posting two excerpts from my unpublished tensor-analysis notes. Enjoy this enthralling topic!

The first excerpt, 150606tensorsVoigtMandelExcerpt, discusses Voigt and Mandel notation, and introduces some helpful operations on fourth-order tensors. Here are some highlights taken from this PDF excerpt:

What we have labeled as the 9×1 "contravariant Voigt array (without factors of 2)" is typically called a "stress-like" array in the composites community, while the 9×1 "covariant Voigt array (with factors of 2)" is called a "strain-like" array. When these arrays actually represent stress and strain, their last three entries are zero because of symmetry. Likewise, the 9×9 array is, in this context, the elastic stiffness, so its last three columns and last three rows are zero because of minor symmetry. Accordingly, in constitutive modeling, you typically see this matrix relationship truncated down to only 6 dimensions. When using computer code to work out the components of such tensors, we recommend keeping all 9 dimensions just to serve as a visual cue that you have indeed enforced the symmetries properly in your equations.
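Here is a small sketch of why the factors of 2 matter, using the truncated 6-dimensional form and an ordering (11, 22, 33, 23, 31, 12) and tensor values chosen arbitrarily for illustration. The tensor inner product σ:ε comes out right only when a stress-like array is paired with a strain-like array:

```python
import numpy as np

def voigt_stress_like(T):
    """Contravariant ("stress-like") Voigt array of a symmetric 3x3 tensor,
    no factors of 2, ordered 11, 22, 33, 23, 31, 12."""
    return np.array([T[0,0], T[1,1], T[2,2], T[1,2], T[2,0], T[0,1]])

def voigt_strain_like(T):
    """Covariant ("strain-like") Voigt array: factors of 2 on the shears."""
    v = voigt_stress_like(T).copy()
    v[3:] *= 2.0
    return v

sigma = np.array([[1., 4., 5.],
                  [4., 2., 6.],
                  [5., 6., 3.]])
eps   = np.array([[.1, .4, .5],
                  [.4, .2, .6],
                  [.5, .6, .3]])

# sigma:eps pairs a stress-like with a strain-like array;
# pairing two stress-like arrays gets the shear terms wrong:
exact = np.tensordot(sigma, eps)
print(exact)
print(voigt_stress_like(sigma) @ voigt_strain_like(eps))   # matches exact
print(voigt_stress_like(sigma) @ voigt_stress_like(eps))   # does not
```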

It is mystifying that the composites community doesn’t seem to even realize that the Voigt representations would be more properly referred to as covariant and contravariant, so please add a comment to this post if you have ever seen any composites articles use this mathematically proper terminology. These are not just matrix equations. There is a tensor basis that goes with Voigt representations, and that basis is a set of mutually orthogonal tensors. The basis is not normalized, so that leads to co/contravariant representations, in which the factors of 2 are metrics. Whenever you have an orthogonal but not normalized basis, the obvious thing to do is to normalize it! That is what gives the following Mandel form, which you should note has no more of that ugly distinction between contravariant (stress-like) arrays and covariant (strain-like) arrays. Both arrays are treated the same!

In this list, the very last set of basis tensors are the ones that pair with ordinary components of a tensor. For example, the 11 component of an ordinary second-order tensor is paired with a basis tensor whose component matrix is all zeros everywhere except a 1 in the 11 spot. The 12 component goes with a basis tensor that has all zeros everywhere except for a 1 in the 12 spot, and so on. The Voigt and Mandel representations merely represent a change of basis so that the first six basis tensors span the manifold of all possible symmetric tensors, while the last three basis tensors span the space of all possible skew tensors.
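As a concrete sketch of this change of basis (again in the truncated 6-dimensional ordering 11, 22, 33, 23, 31, 12, with arbitrary illustrative tensor values), the Mandel components simply carry a factor of sqrt(2) on the shear slots, after which inner products and norms need no special bookkeeping for either stress-like or strain-like tensors:

```python
import numpy as np

R2 = np.sqrt(2.0)

def mandel(T):
    """Mandel (orthonormal-basis) 6x1 array of a symmetric 3x3 tensor:
    the shears carry sqrt(2), so stress-like and strain-like tensors are
    treated identically and the tensor inner product is an ordinary dot."""
    return np.array([T[0,0], T[1,1], T[2,2],
                     R2*T[1,2], R2*T[2,0], R2*T[0,1]])

sigma = np.array([[1., 4., 5.],
                  [4., 2., 6.],
                  [5., 6., 3.]])
eps   = np.array([[.1, .4, .5],
                  [.4, .2, .6],
                  [.5, .6, .3]])

print(np.tensordot(sigma, eps))      # exact sigma:eps
print(mandel(sigma) @ mandel(eps))   # the same, with no factor bookkeeping
print(np.linalg.norm(mandel(sigma))) # = the Frobenius norm of sigma, for free
```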

If you are not convinced that the Mandel representation is the better choice, try comparing it with Voigt for the components of the fourth-order identity tensor: the result is the identity matrix in Mandel form (not so for Voigt). Also, you really need the Mandel form for spectral analysis. Ordinary eigenvalue analysis of the Voigt representation is completely meaningless; only the Mandel form gives meaningful eigenvalues and eigentensors of a stiffness tensor.
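This comparison is easy to verify numerically. The following sketch (with the same illustrative 6-dimensional ordering as above) builds the symmetric fourth-order identity and then forms its Voigt-style and Mandel 6×6 matrices:

```python
import numpy as np

# Symmetric fourth-order identity: I_ijkl = (d_ik*d_jl + d_il*d_jk)/2
d = np.eye(3)
Isym = 0.5*(np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))

pairs = [(0,0), (1,1), (2,2), (1,2), (2,0), (0,1)]
fac = np.array([1., 1., 1., np.sqrt(2.), np.sqrt(2.), np.sqrt(2.)])

# Voigt-style 6x6 (raw components) vs. Mandel 6x6 (sqrt(2) weights):
voigt6  = np.array([[Isym[i,j,k,l] for (k,l) in pairs] for (i,j) in pairs])
mandel6 = voigt6 * np.outer(fac, fac)

print(np.allclose(mandel6, np.eye(6)))   # True: Mandel gives the identity matrix
print(np.diag(voigt6))                   # [1 1 1 0.5 0.5 0.5]: Voigt does not
print(np.linalg.eigvalsh(mandel6))       # all ones: the meaningful spectrum
```

The spurious eigenvalues of 0.5 in the Voigt matrix are an artifact of the non-normalized basis, which is exactly why spectral analysis must be done in Mandel form.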

Page 658 (PDF page 30) of my second excerpt, 150606tensorsFourthOrderOperationsAndMeasureOfIsotropyExcerpt, shows how to find the isotropic (IFOET) part of a fourth-order tensor (which is NOT generally some multiple of the identity), and how to define a scalar measure of anisotropy in the range from zero to one, as determined from the “angle” that the tensor makes with the linear manifold of isotropic tensors. Doesn’t this sound exciting? Here is an infographic summary of this process of finding the isotropic part of a fourth-order tensor, as well as setting the scalar measure of anisotropy (equal to 1 minus the scalar measure of isotropy):

In this infographic, the acronym IFOET stands for “isotropic fourth-order engineering tensor” (labeled “ISO” elsewhere in the image). In the scalar measure of isotropy, the denominator is the L2 norm of the original fourth-order tensor, equal to the square root of the sum of the squares of the tensor’s Mandel components (which is another benefit of Mandel over Voigt because getting the magnitude of a Voigt tensor would require insertions of factors of 2 and 4 — Yuck!). As you can see, the isotropy is 100% if the original tensor is isotropic, and it ranges down to 0% isotropy if the isotropic part of the original tensor is zero.
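Assuming the tensor has both minor and major symmetry, the projection can be sketched in a few lines using the Mandel representation, where the Frobenius inner product of the 6×6 matrices equals the tensor inner product. Here the isotropic manifold is taken to be spanned by the spherical and deviatoric projectors J and K, and the isotropic part is the orthogonal projection onto that two-dimensional subspace (the numerical moduli below are invented for illustration; consult the PDF excerpt for the full engineering-tensor treatment):

```python
import numpy as np

# Mandel 6x6 forms of the isotropic projectors:
#   J = (1/3) dyad(delta, delta)   (spherical projector, ||J||^2 = 1)
#   K = Isym - J                   (deviatoric projector, ||K||^2 = 5)
v = np.array([1., 1., 1., 0., 0., 0.])
J6 = np.outer(v, v) / 3.0
K6 = np.eye(6) - J6

def iso_part(C6):
    """Orthogonal projection of a (Mandel) stiffness onto the 2-D linear
    manifold of isotropic tensors spanned by J and K.  Note the result is
    a combination of J and K, not a multiple of the identity."""
    return (np.sum(C6*J6)/1.0)*J6 + (np.sum(C6*K6)/5.0)*K6

def isotropy(C6):
    """||isotropic part|| / ||C||: the cosine of the 'angle' between C and
    the isotropic manifold.  Anisotropy = 1 - isotropy."""
    return np.linalg.norm(iso_part(C6)) / np.linalg.norm(C6)

# An isotropic stiffness (bulk modulus 3, shear modulus 1) is 100% isotropic:
C_iso = 3*3.0*J6 + 2*1.0*K6
print(isotropy(C_iso))          # 1.0 (to machine precision)

# Perturb one shear entry to introduce anisotropy:
C = C_iso.copy()
C[3, 3] += 4.0
print(1.0 - isotropy(C))        # a nonzero anisotropy measure between 0 and 1
```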
