Ellis Barstow, the protagonist in Nick Arvin's most recent novel, is a reconstructionist—an engineer who uses forensic analysis and simulation to piece together, in minute detail, what happened at a car crash site and why.

The novel is based on Arvin's own experiences in the field of crash reconstruction: Arvin thus leads an unusual double life as a working mechanical engineer and a successful author of literary fiction. Following an introduction to Arvin's work from writer, friend, and fellow explorer of speculative landscapes Scott Geiger, Venue sat down with Arvin on the cozy couches of the Lighthouse Writers Workshop in Denver for an afternoon of conversation and car crash animations.




Flipping open his laptop, Arvin began by showing us a "greatest hits" reel drawn from his own crash reconstruction experience. Watching the short, blocky animations—a semi-truck jack-knifing across the center line, an SUV rear-ending a silver compact car, before ricocheting backward into a telephone pole—was surprisingly uncomfortable. As he hit play, each scene was both unspectacular and familiar—a rural two-lane highway in the rain, a suburban four-way stop surrounded by gas stations and fast-food franchises—yet, because we knew that an impact was inevitable, these everyday landscapes seemed freighted with both anticipation and tragedy.

The animations incorporated multiple viewpoints, slowing and replaying the moment(s) of impact, and occasionally overlaying an arrow, scale, or trajectory trace. This layer of scientific explanation provided a jarring contrast to the violence of the collision itself and the resulting wreckage—of lives, it was hard not to imagine, as well as the scattered vehicles.



As we went on to discuss, it is precisely this disjuncture—between the neat explanations provided by laws of physics and the random chaos of human motivation and behavior—that The Reconstructionist takes as its territory.

Our conversation ranged from the art of car crash forensics to the limits of causality and chance, via feral pigs, Walden Pond, and the Higgs boson. The edited transcript is below.

• • •

Nicola Twilley: Walk us through how you would build and animate these car crash reconstructions.

Nick Arvin: In the company where I worked, we had an engineering group and an animation group. In the engineering group, we created what we called motion data, which was a description of how the vehicle moved. The motion data was extremely detailed, describing a vehicle’s movement a tenth of a second by a tenth of a second. At each of those points in time we had roll, pitch, yaw, and locations of vehicles. To generate such detailed data, we sometimes used a specialized software program (the one we used is called PC-Crash), or sometimes we just used some equations in Excel.
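To make "motion data" concrete: a single record in such a time series might look something like the sketch below. The field names and units are our own illustration, not PC-Crash's actual output format.

```python
# Illustrative only: a guess at what one sample of "motion data" might contain.
# Field names and units are ours, not PC-Crash's actual output format.
from dataclasses import dataclass

@dataclass
class MotionSample:
    t: float      # time since the start of the reconstruction, seconds
    x: float      # position in a site-fixed coordinate system, feet
    y: float
    roll: float   # degrees
    pitch: float  # degrees
    yaw: float    # heading, degrees

TIME_STEP = 0.1   # one sample every tenth of a second, as described above
```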


A screenshot from the PC-Crash demo, which boasts that the "Specs database contains vehicles sold in North America from 1972 to the present," and that "up to 32 vehicles (including cars, trucks, trailers, pedestrians, and fixed objects such as trees or barriers) can be loaded into a simulation project."

When you’re using PC-Crash, you start by entering a bunch of numbers to tell the program what a vehicle looks like: how long it is, where the wheels are relative to the length, how wide it is, where the center of gravity is, how high it is, and a bunch of other data I’m forgetting right now.

Once you’ve put in the parameters that define the vehicle, it’s almost like a video game: you can put the car on the roadway and start it going, and you put a little yaw motion in to start it spinning. You can put two vehicles in and run them into each other, and PC-Crash will simulate the collision, including the motion afterward, as they come apart and roll off to wherever they roll off to.

We then fed that motion data to the animators, and they created the imagery.
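The solver inside PC-Crash is proprietary, but the basic loop that produces motion data like this is easy to imagine. The toy model below is our own sketch, not PC-Crash: a single vehicle leaves an impact with some speed and yaw rate, sliding friction bleeds off speed at each tenth-of-a-second step, and the result is a table of positions and headings of the kind the animators would receive.

```python
import math

def spin_out(speed_fps, yaw_rate_dps, mu=0.7, g=32.2, dt=0.1):
    """Toy post-impact trajectory: constant yaw rate, friction deceleration.
    Returns (time, x, y, heading) tuples at tenth-of-a-second intervals.
    A stand-in for what a real simulator does, not PC-Crash's actual model."""
    t = x = y = heading = 0.0
    samples = []
    while speed_fps > 0.0:
        samples.append((round(t, 1), x, y, heading))
        x += speed_fps * math.cos(math.radians(heading)) * dt
        y += speed_fps * math.sin(math.radians(heading)) * dt
        heading += yaw_rate_dps * dt
        speed_fps = max(0.0, speed_fps - mu * g * dt)  # sliding friction slows the car
        t += dt
    samples.append((round(t, 1), x, y, heading))       # the point of rest
    return samples

# e.g. a car leaving an impact at 44 ft/s (30 mph), spinning at 90 degrees per second
trajectory = spin_out(44.0, 90.0)
```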


A screenshot of PC-Crash's "Collision Optimizer."


As the demo promises, "in PC-Crash 3D, the scene can be viewed from any angle desired."

Often, you would have a Point A and a Point B, and you would need the animation to show how the vehicle got from one point to the other.

Point A might be where two vehicles have crashed into each other, which is called the “point of impact.” The point of impact was often fairly easy to figure out. When vehicles hit each other—especially in a head-on collision—the noses will go down and gouge into the road, and the radiator will break and release some fluid there, marking it. Then, usually, you know exactly where the vehicle ended up, which is Point B, or the “point of rest.” But connecting Points A and B was the tricky part.
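One textbook relation reconstructionists lean on when connecting the two points: if a vehicle slides from the point of impact to its point of rest under friction alone, the length of the slide implies the speed at which the slide began. A minimal sketch, with purely illustrative numbers:

```python
import math

def speed_from_slide(distance_ft, mu, g=32.2):
    """Speed (ft/s) at the start of a slide that ends at rest after distance_ft,
    assuming a constant friction coefficient mu: v = sqrt(2 * mu * g * d)."""
    return math.sqrt(2.0 * mu * g * distance_ft)

# e.g. a car that slid 120 feet from impact to rest on dry asphalt (mu roughly 0.7)
v = speed_from_slide(120, 0.7)
print(f"{v:.0f} ft/s, about {v * 3600 / 5280:.0f} mph")   # ~74 ft/s, ~50 mph
```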

Twilley: In real life, are you primarily using these kinds of animations to test what you think happened, or is it more useful to generate a range of possibilities for which you can then look for evidence on the ground? In the book, for example, your reconstructionists seem to do both, going back and forth between the animation and the actual ground, generating and testing hypotheses.

Arvin: That’s right. That’s how it works in real life, too. Sometimes we would come up with a theory of what happened and how the vehicles had moved, and then we’d recreate it in an animation, as a kind of test. Generating a realistic-looking animation is very expensive, but you can create a crude version pretty easily. We’d watch the animation and say, “That just doesn’t look right.” You have a feel for how physics works; you can see when an animation just doesn’t look right. So, very often, we’d look at an animation and say to ourselves: we haven’t got this right yet.


Screenshot from a sample 3D car crash animation created by Kineticorp; visit their website for the video.

One of the challenges of the business is that when you’re creating an animation for court, every single thing in it has to have a basis that’s defensible. An animation can cost tens of thousands of dollars to generate, and if there is one detail that’s erroneous, the other side can say, “Hey, this doesn’t make sense!” Then the entire animation will be thrown out of court, and you’ve just flushed a lot of money down the toilet.

So you have to be very meticulous and careful about the basis for everything in the animation. You have to look at every single mark on the vehicle and try to figure out exactly where and how it happened.

In the novel there is an example of this kind of thinking when Boggs shows Ellis how, when looking at a vehicle that has rolled over, you literally examine each individual scratch mark on the vehicle, because a scratch can tell you about the orientation of the vehicle as it hit the ground, and it can also tell you where the vehicle was when the scratch was made, since asphalt makes one kind of scratch, while dirt or gravel will make a different type of scratch.

For one case I worked on—a high-speed rollover where the vehicle rolled three or four times—we printed out a big map of the accident site. In fact, it was so big we had to roll it out down the hallway. It showed all of the impact points that the police had documented, and it showed all of the places where broken glass had been deposited as the vehicle rolled. We had a toy model of the car, and we sat there on the floor and rolled the toy from point to point on the map, trying to figure out which dent in the vehicle corresponded to which impact point on the ground.

I remember the vehicle had rolled through a barbed wire fence, and that there was a dent in one of the doors that looked like a pole of some kind had been jammed into the sheet metal. We figured it had to be one of the fence posts, but we struggled with it for weeks, because everything else in the roll motion indicated that, when the car hit the fence, the door with the dent in it would have been on the opposite side of the vehicle. We kept trying to change the roll motion to get that door to hit the fence, but it just didn’t make sense.

Finally, one of my colleagues was going back through some really poor-quality police photographs. We had scarcely looked at them, because they were so blurry you could hardly see anything. But he happened to be going back through them, and he noticed a fireman with a big crowbar. And we realized the crowbar had made the dent! They had crowbarred the door open.


Screenshots from sample 3D car crash animations created by Kineticorp; visit their website for the video.

Sometimes, though, even after all that meticulous attention to detail, and even if you believe you have the physics right, you end up playing with it a little, trying to get the motion to look real. There’s wiggle room in terms of, for example, where exactly the driver begins braking relative to where tire marks were left on the road. Or, what exactly is the coefficient of friction on this particular roadway? Ultimately, you’re planning to put this in front of a jury and they have to believe it.
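That wiggle room is easy to quantify. Rerunning the slide-to-stop relation sketched earlier with a range of plausible friction coefficients shows how much the speed estimate moves on a single assumption (again, illustrative numbers only):

```python
import math

def speed_from_slide(distance_ft, mu, g=32.2):
    return math.sqrt(2.0 * mu * g * distance_ft)

# The same 120-foot slide implies different pre-slide speeds depending on the
# friction coefficient assumed for that particular roadway.
for mu in (0.55, 0.65, 0.75, 0.85):
    mph = speed_from_slide(120, mu) * 3600 / 5280
    print(f"mu = {mu:.2f}  ->  about {mph:.0f} mph")      # roughly 44 to 55 mph
```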

Twilley: So there’s occasionally a bit of an interpretive leeway between the evidence that you have and the reconstruction that you present.

Arvin: Yes. There’s a lot of science in it, but there is an art to it, as well. Pig Accident 2, the crash that Ellis is trying to recreate at the start of my book, is a good example of that.

It’s at the start of the book, but it was actually the last part that was written. I had written the book, we had sold it, and I thought I was done with it, but then the editor—Cal Morgan at Harper Perennial—sent me his comments. And he suggested that I needed to establish the characters and their dynamics more strongly, early in the book.

I wanted an accident to structure the new material around, but by this time I was no longer working as a reconstructionist, and all my best material from the job was already in the book. So I took a former colleague out for a beer and asked him to tell me about the stuff he’d been working on.

He gave me this incredible story: an accident that involved all these feral pigs that had been hit by cars and killed, lying all over the road. Then, as a part of his investigation, he built this stuffed pig hide on wheels, with a little structure made out of wood and caster wheels on the bottom. They actually spray-painted the pig hide black, to make it the right color. He said it was like a Monty Python skit: he’d push it out on the road, then go hide in the bushes while the other guy took photographs. Then he’d have to run out and grab the pig whenever a car came by.



But there wasn’t any data coming out of that process that they were feeding into their analysis; it was about trying to convince a jury whether you can or can’t see a feral pig standing in the middle of the road.

Geoff Manaugh: That’s an interesting analogy to the craft of writing fiction, related to the question of what is sufficient evidence for something to be believable.

Arvin: Exactly. It’s so subjective.

In that case, my friend was working for the defense, which was the State Highway Department—they were being sued for not having built a tunnel under the road for the wild pigs to go through. In the novel, it takes place in Wisconsin, but in reality it happened in Monterey, California. They’ve got a real problem with wild pigs there.

Monterey has a phenomenal number of wild pigs running around. As it turned out, the defense lost this case, and my friend said that it was because it was impossible to get a jury where half the people hadn’t run into a pig themselves, or knew somebody who had had a terrible accident with a pig. The jury already believed the pigs were a problem and the state should be doing something about it.


Screenshot from a sample 3D car crash animation created by Kineticorp; visit their website for the video.

Manaugh: In terms of the narrative that defines a particular car crash, I’m curious how reconstructionists judge when a car crash really begins and ends. You could potentially argue that you crashed because, say, a little kid throws a water balloon into the street and it distracts you and, ten seconds later, you hit a telephone pole. But, clearly, something like a kid throwing a water balloon is not going to show up in PC-Crash.

For the purpose of the reconstructionist, then, where is the narrative boundary of a crash event? Does the car crash begin when tires cross the yellow line, or when the foot hits the brakes—or even earlier, when it started to rain, or when the driver failed to get his tires maintained?

Arvin: It’s never totally clear. That’s a grey area that we often ended up talking about and arguing about. In that rollover crash, for example, part of the issue was that the vehicle was traveling way over the speed limit, but another issue was that the tires hadn’t been properly maintained. And when you start backing out to look at the decisions that the drivers made at different moments leading up to that collision, you can always end up backing out all the way to the point where it’s: well, if they hadn’t hit snooze on the alarm clock that morning…

Twilley: Or, in your novel’s case, if they weren’t married to the wrong woman…

Arvin: [laughs] Right.

We worked on this one case where a guy’s car was hit by a train. He was a shoe salesman, if I remember right, and he was going to work on a Sunday. It just happened to be after the daylight savings time change, and he was either an hour ahead or an hour behind getting to work. The clock in the car and his watch hadn’t been reset yet.

He’d had this job for four years, and he’d been driving to work at the same time all those years, so he’d probably never seen a train coming over those tracks before—but, because he was an hour off, there was a train. So, you know, if he’d remembered to change his clocks…


Screenshots from sample 3D car crash animations created by Kineticorp; visit their website for the video.

Twilley: That reminds me of something that Boggs says in the book: “It’s a miracle there aren’t more miracles.”

Arvin: Doing that work, you really start to question, where are those limits of causality and chance? You think you’ve made a decision in your life, but there are all these moments of chance that flow into that decision. Where do you draw a line between the choices you made in your life and what’s just happened to you? What’s just happenstance?

It’s a very grey area, but the reconstructionist has to reach into the grey area and try to establish some logical sequence of causality and responsibility in a situation.

Twilley: In the novel, you show that reconstructionists have a particular set of tools and techniques with which to gain access to the facts about a past event. Other characters in the book have other methods for accessing the past: I’m thinking of the way Ellis’s father stores everything, or Heather’s photography. In the end, though, it seems as though the book is ambivalent as to whether the past is accessible through any of those methods.

Arvin: I think that ambivalence is where the book is. You can get a piece of the past through memory and you can get a piece through the scientific reconstruction of things. You can go to a place now, as it is physically; you can look at a photograph of how it was; you can create a simulation of the place as it was in your computer: but those are all representations of it, and none of them are really it. They are all false, to an extent, in their own way.

The best I think you can hope to do is to use multiple methods to triangulate and get to some version of what the past was. Sometimes they just contradict each other and there’s no way to resolve them.


Screenshots from sample 3D car crash animations created by Kineticorp; visit their website for the video.

Working as a reconstructionist, I was really struck by how often people’s memories were clearly false, because they’d remember things that just physically were not possible. Newton’s laws of motion say it couldn’t have happened. In fact, we would do our best to completely set aside any witness testimony and just work from the physical evidence. It was kind of galling if there just wasn’t enough physical evidence and you had to rely on what somebody said as a starting point.

Pedestrian accidents tended to be like that, because when a car runs into a person it doesn’t leave much physical evidence behind. When two cars run into each other, there’s all this stuff left at the point where they collided, so you can figure out where that point was. But, when a car runs into a person, there’s nothing left at that point; when you try to determine where the point of impact was, you end up relying on witness testimony.


Screenshots from a PC-Crash demo showing load loss and new "multibody pedestrian" functionality.

Twilley: In terms of reconciling memory and physical evidence—and this also relates to the idea of tweaking the reconstruction animation for the jury—the novel creates a conflict about whether it’s a good idea simply to settle for a narrative you can live with, however unreliable it might be, or to try to pin it down with science instead, even if the final result doesn’t sit right with you.

Arvin: Exactly. It sets up questions about how we define ourselves and what we do when we encounter things that conflict with our sense of identity. If something comes up out of the past that doesn’t fit with who you have defined yourself to be, what do you do with that? How much of our memories are shaped by our sense of identity versus the things we’ve actually done?

Twilley: It’s like a crash site: once the lines have been repainted and the road resurfaced, to what extent is that place no longer the same place where the accident occurred, yet still the place that led to the accident? That’s what’s so interesting about the reconstructionist’s work: you’re making these narratives that define a crash for a legal purpose, yet the novel seems to ask whether that is really the narrative of the crash, whether the actual impact is not the dents in the car but what happens to people’s lives.

Arvin: I always felt that tension—you are looking at the physics and the equations in order to understand this very compressed moment in time, but then there are these people who passed through that moment of time, and it had a huge effect on their lives. Within the work, we were completely disregarding those people and their emotions—emotions were outside our purview. Writing the book for me was part of the process of trying to reconcile those things.


Screenshot from a sample 3D car crash animation created by Kineticorp; visit their website for the video.

Manaugh: While I was reading the book, I kept thinking about the discovery of the Higgs boson, and how, in a sense, its discovery was all a kind of crash forensics.

Arvin: You’re right. You don’t actually see the particle; you see the tracks that it’s made. I love that. It’s a reminder that we’re reconstructing things all the time in our lives.

If you look up and a window is open, and you know you didn’t open it, then you try to figure out who in the house opened it. There are all these minor events in our lives, and we constantly work to reconstruct them by looking at the evidence around us and trying to figure out what happened.

Manaugh: That reminds me of an anecdote in Robert Sullivan’s book, The Meadowlands, about the swamps of northern New Jersey. One of his interview subjects is a retired detective from the area who is super keyed into his environment—he notices everything. He explains that this attention to microscopic detail is what makes a good detective as opposed to a bad detective. So, in the case of the open window, he’ll notice it and file it away in case he needs it in a future narrative.

What he tells Sullivan is that, now that he is retired, it’s as though he’s built up this huge encyclopedia of little details with the feeling that they all were going to add up to this kind of incredible moment of narrative revelation. And then he retired. He sounds genuinely sad—he has so much information and it’s not going anywhere. The act of retiring as a police detective meant that he lost the promise of a narrative denouement.

Arvin: That’s great. I think of reconstruction in terms of the process of writing, too. Reconstruction plays into my own particular writing technique because I tend to just write a lot of fragments initially, then I start trying to find the story that connects those pieces together.

It also reminds me of one of my teachers, Frank Conroy, who used to talk about the contract between the reader and the writer. Basically, as a writer, you’ve committed to not wasting the reader’s time. He would say that the reader is like a person climbing a mountain, and the author is putting certain objects along the reader’s path that the reader has to pick up and put into their backpack; when they get to the top of the mountain there better be something to do with all these things in their backpack, or they are going to be pissed that they hauled it all the way up there.

That detective sounds like a thwarted reader. He has the ingredients for the story—but he doesn’t have the story.


Screenshots from sample 3D car crash animations created by Kineticorp; visit their website for the video.

Twilley: In the novel, you deliberately juxtapose a creative way of looking—Heather’s pinhole photography—and Ellis’s forensic, engineering perspective. It seems rare to be equipped with both ways of seeing the world. How does being an engineer play into writing, or vice versa?

Arvin: I think the two things are not really that different. They are both processes of taking a bunch of little things—in engineering, it might be pieces of steel and plastic wire, and, in writing a novel, they’re words—and putting them together in such a way that they work together and create some larger system that does something pleasing and useful, whether that larger thing is a novel or a cruise ship.

One thing that I think about quite a bit is the way that both engineering and writing require a lot of attention to ambiguity. In writing, at the sentence level, you really want to avoid unintentional ambiguity. You become very attuned to places where your writing is potentially open to multiple meanings that you were not intending.

Similarly, in engineering, you design systems that will do what you want them to do, and you don’t have room for ambiguity—you don’t want the power plant to blow up because of an ambiguous connection.

But there’s a difference at the larger level. In writing, and writing fiction in particular, you actually look for areas of ambiguity that are interesting, and you draw those out to create stories that exemplify those ambiguities—because those are the things that are interesting to think about.

Whereas, in engineering, you would never intentionally take an ambiguity about whether the cruise ship is going to sink or not and magnify that!


Screenshot from a sample 3D car crash animation created by Kineticorp; visit their website for the video.

Twilley: I wanted to switch tracks a little and talk about the geography of accidents. Have you come to understand the landscape in terms of its potential for automotive disaster?

Arvin: When you are working on a case—like that rollover—you become extremely intimate with a very small piece of land. We would study the accident site and survey it and build up a very detailed map of exactly how the land is shaped in that particular spot. You spend a lot of time looking at these minute details, and you become very familiar with exactly how the land rolls off and where the trees are, and where the fence posts are and what type of asphalt that county uses, because different kinds of asphalt have different friction effects.

Manaugh: The crash site becomes your Walden Pond.

Arvin: It does, in a way. I came to feel that, as a reconstructionist, you develop a really intimate relationship with the roadway itself, which is a place where we spend so much time, yet we don’t really look at it. That was something I wanted to bring out in the book—some description of what that place is, that place along the road itself.

You know, we think of the road as this conveyance that gets us from Point A to Point B, but it’s actually a place in and of itself and there are interesting things about it. I wanted to look at that in the book. I wanted to look at the actual road and the things that are right along the road, this landscape that we usually blur right past.

The other thing your question makes me think about is this gigantic vehicle storage yard I describe in the novel, where all the crashed vehicles that are in litigation are kept. It’s like a museum of accidents—there are racks three vehicles high, and these big forklift trucks that pick the vehicles up off the racks and put them on the ground so you can examine them.


A vehicle scrapyard photographed by Wikipedia contributor Snowmanradio.

Manaugh: Building on that, if you have a geography of crashes and a museum of crashes, is there a crash taxonomy? In the same way that you get a category five hurricane or a 4.0 earthquake, is there, perhaps, a crash severity scale? And if so, then you can imagine at one end of it, the super-crash—the crash that maybe happens once every generation—

Arvin: The unicorn crash!

Manaugh: Exactly—Nicky and I were talking about the idea of a “black swan” crash on the way over here. Do you think in terms of categories or degrees of severity, or is every crash unique?

Arvin: I haven’t come across a taxonomy like that, although it’s a great idea. The way you categorize crashes is single vehicle, multiple vehicle, pedestrian, cyclist, and so on. They also get categorized as rollover collision, collision that leads to a rollover, and so on. So there are categories like that, and they immediately point you to certain kinds of analysis. The way you analyze a rollover is quite a bit different from how you analyze an impact. But there’s no categorization that I am aware of for severity.

I only did it for three years, so I’m not a grizzled reconstructionist veteran, but even in three years you see enough of them that you start to get a little jaded. You get an accident that was at 20 miles an hour, and you think, that’s not such a big deal. An accident in which two vehicles, each going 60 miles an hour, crash head-on at a closing speed of 120 miles an hour—now, that’s a collision!


Screenshot from a sample 3D car crash animation created by Kineticorp; visit their website for the video.

You become a little bit of an accident snob, and resisting that was something that I struggled with. Each accident is important to the people who were in it. And there was a dark humor that tended to creep in, and that worried me, too. On the one hand, it helps keep you sane, but on the other hand, it feels very disrespectful.

Twilley: Have you been in a car accident yourself?

Arvin: I had one, luckily very minor, accident while I was working as a reconstructionist—around the time that I was starting to work on this book. I heard the collision begin before I saw it, and what I really remember is that first sound of metal on metal.

Immediately, I felt a lurch of horror, because I wasn’t sure what was happening yet, but I knew it could be terrible. You are just driving down the road and, all of a sudden, your life is going to be altered, but you don’t know how yet. It’s a scary place—a scary moment.



Twilley: Before we wrap up, I want to talk about some of your other work, too. An earlier novel, Articles of War, was chosen for “One Book, One Denver.” I’d love to hear about the experience of having a whole city read your book: did that level of public appropriation reshape the book for you?

Arvin: That’s an interesting question. There were some great programs: they had a professional reader reading portions of it, and there was a guy who put part of it to music, so it was reinterpreted in a variety of ways. That was really, really fun for me. It brought out facets of the book that I hadn’t been fully aware of.

The whole thing gave me an opportunity to meet a lot of people around the city who had read the book. I did a radio interview with high school students who had read the book—this was when we were deeper into the Iraq war and there were a lot of parallels being drawn with that war. And these were kids who were potentially going off to that war, so that was very much on their mind.

You had this concentrated group of people looking at the book and reading it and talking about it, and everybody’s got their own way of receiving it. It helped me see how, once a book is out there, it isn’t mine anymore. Every reader makes it their own.

Manaugh: Finally, I’m interested in simply how someone becomes a reconstructionist. It’s not a job that most people have even heard of!

Arvin: True. For me, it was a haphazard path. Remember how we talked earlier about that grey area between the choices you made in your life and what’s just happened to you?

I have degrees in mechanical engineering from Michigan and Stanford. When I finished my master’s at Stanford, I went to work for Ford. I worked there for about three years. Then I was accepted into the Iowa Writers’ Workshop, so I quit Ford to go to Iowa. I got my MFA, and then I was given a grant to go write for a year. My brother had moved to Denver a year earlier, and it seemed like a cool town so I moved here. Then my grant money ran out, and I had to find a job.

I began looking for something in the automotive industry in Denver, and there isn’t much. But I had known a couple people at Ford who ended up working in forensics, so I started sending my resume to automobile forensics firms. It happened that the guy who got my resume was a big reader, and I had recently published my first book. He was impressed by that, so he brought me in for an interview.

In that business, you write a lot of reports and he thought I might be helpful with that.


Screenshots from sample 3D car crash animations created by Kineticorp; visit their website for the video.

Twilley: Do you still work as an engineer, and, if so, what kinds of projects are you involved with?

Arvin: I work on power plants and oil and gas facilities. Right now, I am working on both a power plant and an oil facility in North Dakota—there’s lots of stuff going on out there as part of the Bakken play. It’s very different from the forensics.

Twilley: Do you take an engineering job, then quit and take some time to write and then go back into the engineering again? Or do you somehow find a way to do both?

Arvin: I do both. I work part time. Part-time work isn’t really easy to find as an engineer, but I’ve been lucky, and my employers have been great.

Engineers who write novels are pretty scarce. There are a few literary writers who started out in engineering but have gotten out of it—Stewart O’Nan is one, George Saunders is another. There’s Karl Iagnemma, who teaches at MIT. There are a few others, especially in the sci-fi universe.

I feel as though I have access to material—to a cast of characters and a way of thinking—that’s not available to very many writers. But the engineering work I’m doing now doesn’t have quite the same dramatic, obvious story potential that forensic engineering does. I remember when I first started working in forensics, on day one, I thought, this is a novel right here.


When European farmers arrived in North America, they claimed it with fences. Fences were the physical manifestation of a belief in private ownership and the proper use of land—enclosed, utilized, defended—that continues to shape the American way of life, its economic aspirations, and even its form of government.

Today, fences are the framework through which the national landscape is seen, understood, and managed, forming a vast, distributed, and often unquestioned network of wire that somehow defines the "land of the free" while also restricting movement within it.

In the 1870s, the U.S. faced a fence crisis. As settlers ventured away from the coast and into the vast grasslands of the Great Plains, limited supplies of cheap wood meant that split-rail fencing cost more than the land it enclosed. The timely invention of barbed wire in 1874 allowed homesteaders to settle the prairie, transforming its grassland ecology as dramatically as the industrial quantities of corn and cattle being produced and harvested within its newly enclosed pastures redefined the American diet.

In Las Cruces, New Mexico, Venue met with Dean M. Anderson, a USDA scientist whose research into virtual fencing promises equally radical transformation—this time by removing the mile upon mile of barbed wire stretched across the landscape. As seems to be the case in fencing, a relatively straightforward technological innovation—GPS-equipped free-range cows that can be nudged back within virtual bounds by ear-mounted stimulus-delivery devices—has implications that could profoundly reshape our relationships with domesticated animals, each other, and the landscape.

In fact, after our hour-long conversation, it became clear to Venue that Anderson, a quietly-spoken federal research scientist who admits to taping a paper list of telephone numbers on the back of his decidedly unsmart phone, keeps exciting if unlikely company with the vanguard of the New Aesthetic, writer and artist James Bridle's term for an emerging way of perceiving (and, in this case, apportioning) digital information under the influence of the various media technologies—satellite imagery, RFID tags, algorithmic glitches, and so on—through which we now filter the world.


The Google Maps rainbow plane, an iconic image of the New Aesthetic for the way in which it accidentally captures the hyperspectral oddness of new representational technologies and image-compression algorithms on a product intended for human eyes.

After all, Anderson's directional virtual fencing is nothing less than augmented reality for cattle, a bovine New Aesthetic: the creation of a new layer of perceptual information that can redirect the movement of livestock across remote landscapes in real-time response to lines humans can no longer see. If gathering cows on horseback gave rise to the cowboy narratives of the West, we might ask in this context, what new mythologies might Anderson's satellite-enabled, autonomous gather give rise to?

Our discussion ranged from robotic rats and sheep laterality to the advantages of GPS imprecision and the possibility of high-tech herds bred to suit the topography of particular property. The edited transcript appears below.

• • •

Nicola Twilley: I thought I'd start with a really basic question, which is why you would want to make a virtual fence rather than a physical one. After all, isn’t the role of fencing to make an intangible, human-determined boundary into a tangible one, with real, physical effects?


Pasture fence; photograph via Cheyenne Fence.

Dean M. Anderson: Let me put it this way, in really practical terms: When it comes to managing animals, every conventional fence that I have ever built has been in the wrong place the next year.

That said, I always kid people when I give a talk. I say, “Don't go out and sell your U.S. Steel stock—because we are still going to need conventional fencing along airport runways, interstates, railroad rights-of-way, and so on.” The reason is that, when you talk about virtual fencing, you're talking about modifying animal behavior.

Then I always ask this question of the audience: “Is there anybody who will raise their hand, who is one hundred percent predictable, one hundred percent of the time?”

The thing about animal behavior is that it’s not one hundred percent predictable, one hundred percent of the time. We don’t know all of the integrated factors that go into making you turn left rather than right when you leave the building, and so on. Once you realize that virtual fencing is capitalizing on modifying animal behavior, then you also realize that if there are any boundaries that, for safety or health reasons, absolutely cannot be breached, then virtual fencing is not the methodology of choice.

I always start with that disclaimer. Now, to get back to your question about why you’d want to make a virtual fence: On a worldwide basis, animal distribution remains a challenge, whether it’s elephants in Africa or Hereford cows in Las Cruces, New Mexico.


Photograph via Singing Bull Ranch, Colorado.

You will have seen this, although you may not have recognized exactly what you were looking at. For example, if you fly into Albuquerque or El Paso airports, you will come in quite low over rangeland. If you see a drinking water location, you will see that the area around that watering point looks as brown and devoid of vegetation as the top of this table, whereas, out at the far distance from the drinking water, there may be plants that have never seen a set of teeth, a jaw, or any utilization at all.

So you have the problem of non-uniform utilization of the landscape, with some places that are overutilized and other places that are underutilized. The overutilized locations with exposed soil are then vulnerable to erosion from wind and water, which then leads to all sorts of other challenges for those of us who want to be ecologically correct in our thinking and management actions.

Even as a college student, animal distribution was something that I was taught was challenging and that we didn't have an answer to. In fact, I recently wrote a review article that showed that, just in the last few years, we have used more than sixty-eight different strategies to try to affect distribution. These include putting a fence in, developing drinking water in a new location, putting supplemental feed in different locations, changing the times you put out feed, putting in artificial shade, so that animals would move to that location—there are a host of things that we have tried. And they all work under certain conditions. Some of them work even better when they’re used synergistically. There are a lot of combinations—whatever n factorial is for sixty-eight.


Cattle clustered under a neatly labeled portable shade structure; photograph via the University of Kentucky College of Agriculture.

But one thing that all of them basically don’t allow is management in real time. This is a challenge. Think of this landscape—the Chihuahuan Desert, which, by the way, is the largest desert in North America. If you’ve been here during our monsoon, when we (sometimes) receive our mean annual nine inches plus of precipitation, you’ll see that where Nicola is sitting, she can be soaking wet, while Geoff and I, just a few feet away, stay bone dry. Precipitation patterns in this environment can be like a knife cut.


Students learning rangeland analysis at the Chihuahuan Desert Rangeland Research Center; photograph by J. Victor Espinoza for NMSU Agricultural Communications.

You can see that, with conventional fencing, you might have your cows way over on the western perimeter of your land, while the rainfall takes place along the other edge. In two weeks, where that rain has fallen, we are going to have a flush of annuals coming up, which would provide high-quality nutrition. But, if you have the animals clear over three pastures away, then you’ve got to monitor the rainfall-related growth, and you’ve got to get labor to help round those animals up and move them over to this new location.

You can see how, many times as a manager, you might actually know what to do to optimize your utilization, but economics and time prevent it from happening. Which means your cows are all in the wrong place. It’s a lose-lose, rather than a win-win.


One of Dean Anderson's colleagues, Derek Bailey, herds cattle the old-fashioned way on NMSU's Chihuahuan Desert Rangeland Research Center. One aspect of Bailey's research is testing whether targeted grazing, made possible through Anderson's GPS collar technology, could reduce the incidence of catastrophic western wildfires. Photograph courtesy NMSU.

These annual plants will reach their peak of nutritional quality and decline without being utilized for feed. I’m not saying that seed production is not important, but basically, if part of this landscape’s call is to support animals, then you are not optimizing what you have available.

My concept of virtual fencing was basically to have that perimeter fence around your property be conventional, whether it’s barbed wire, stone, wood, or whatever. But, internally, you don't have fences. You basically program “electronic” polygons, if you will, based upon the current year’s pattern of rainfall, pattern of poisonous weed growth, pattern of endangered species growth, and whatever other variables will affect your current year’s management decisions. Then you can use the virtual polygon to either include or exclude animals from areas on the landscape that you want to manage with scalpel-like precision.

To go back to my first example, you could be driving your property in your air-conditioned truck and you notice a spot that received rain in the recent past and that has a flush of highly nutritious plants that would otherwise be lost. Well, you can get on your laptop, right then and there, and program the polygon that contains your cows to move spatially and temporally over the landscape to this “better location.” Instead of having to build a fence or take the time and manpower to gather your cows, you would simply move the virtual fence.



This video clip shows two cows (the red and green dots) in a virtual paddock that was programmed to move across the landscape at 1.1 m/hr, using Dean Anderson's directional virtual fencing technology.

It’s like those join-the-dots coloring books—you end up with a bunch of coordinates that you connect to build a fence. And you can move the polygon that the animals are in over in that far corner of the pasture. You simply migrate it over, amoeba-like, to fit in this new area.
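A minimal sketch of that join-the-dots idea in code: the paddock is just a list of coordinates, a containment test decides whether a given GPS fix is inside it, and moving the fence is simply translating those coordinates. This is our own illustration, not Anderson's actual DVF software.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test. point = (x, y); polygon = [(x, y), ...]."""
    x, y = point
    result = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            result = not result
    return result

def shift(polygon, dx, dy):
    """Migrate the whole paddock, amoeba-like, toward a new patch of forage."""
    return [(x + dx, y + dy) for (x, y) in polygon]

paddock = [(0, 0), (300, 0), (300, 200), (0, 200)]   # corner coordinates, meters
cow = (150, 100)
print(inside(cow, paddock))         # True: the cow is inside, no cue needed
paddock = shift(paddock, 1.1, 0.0)  # e.g. nudge the paddock 1.1 m, as in the clip above
```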

You basically have real-time management, which is something that is not currently possible in livestock grazing, even with all of the technologies that we have. If you take that concept of being able to manage in real time and you tie it with those sixty-eight other things that have been found useful, you can start to see the benefit that is potentially possible.

Twilley: The other thing that I thought was curious, which I picked up on from your publications, is this idea that perhaps you might not be out on the land in your air-conditioned pickup, and instead you might actually be doing this through remote sensing. Is that possible?


Dean Anderson's NMSU colleague, remote sensing scientist Andrea Laliberte, accompanied by ARS technicians Amy Slaughter and Connie Maxwell, prepares to launch an unmanned aerial vehicle from a catapult at the Jornada Experimental Range. Photograph USDA/ARS.

Anderson: Definitely. Currently we have a very active program here on the Jornada Experimental Range in landscape ecology using unmanned aerial vehicle reconnaissance. I see this research as fitting hand-in-glove with virtual fencing. However—and this is very important—all of these whiz-bang technologies are potentially great, but in the hands of somebody who is basically lazy, which is all human beings, or even in the hands of somebody who just does not understand the plant-animal interface, they could create huge problems.

If you don’t have people out on the landscape who know the difference between overstocking and understocking, then I will want to change my last name in the latter years of my life, because I don't want to be associated with the train wreck—I mean a major train wreck—that could happen through using this technology. If you can be sitting in your office in Washington D.C. and you program cows to move on your ranch in Montana, and you don't have anybody out on the ground in Montana monitoring what is taking place… [shakes head] You could literally destroy rangeland.

We know that electronics are not infallible. We also know that satellite imagery needs to be backed up by somebody on the ground who can say, “Wow, we've got a problem here, because what the electronic data are saying does not match what I’m seeing.”

This is the thing that scares me the most about this methodology. If people decouple the best computer that we have at this point, which is our brain, with sufficient experience, from knowing how to optimize this wonderful tool, then we will have a potential for disaster that will be horrid.


NMSU and USDA ARS scientists prepare to launch their vegetation surveying UAV from a catapult. Photograph USDA/ARS.

Twilley: One of the things I was imagining as I looked at your work was that, as we become an increasingly urban society, maybe farmers could still manage rural land remotely, from their new homes in the city.

Anderson: They can, but only if they also have someone on the ground who has the knowledge and experience to ground-truth the data—to look at it and say, “The data saying that this number of cows should be in this polygon for this many days are accurate”—or not.

You need that flexibility, and you always need to ground-truth. The only way you can get optimum results, in my opinion, is to have someone who is trained in the basics of range science and animal science, to know when the numbers are good and when the numbers are lousy. Electronics simply provide numbers.


Multispectral rangeland vegetation imagery produced by Andrea Laliberte's UAV surveys. Image from "Multispectral Remote Sensing from Unmanned Aircraft," by Andrea S. Laliberte, Mark A. Goforth, Caitriana M. Steele, and Albert Rango, 2011.

Now, you’re right, we are getting smarter at developing technology that can interpret those numbers. I work with colleagues in virtual fencing research who are basically trying to model what an animal does, so that they can actually predict where the animal is going to move before the animal actually moves. In my opinion, if they ever figure that out, it’s going to be way past my lifetime.

Still, if you look at range science, it’s an art as well as science. I think it’s great that we have these technologies and I think we should use them. But we shouldn’t put our brain in a box on a table and say, “OK. We no longer need that.” Human judgment and expertise on the ground is still essential to making a methodology like this be a positive, rather than a negative, for landscape ecology.


Drawings from Anderson's patent #7753007 for an "Ear-a-round equipment platform for animals."

Geoff Manaugh: I'm curious about the bovine interface. How do you interface with the cow in order to stimulate the behavior that you want?

Anderson: I think that basically my whole career has been focused on trying to adopt innate animal behaviors to accomplish management goals in the most efficient and effective ways possible.

Here’s what I mean by that. I can guarantee that, if a sound that is unknown and unpleasant to the three of us happens over on that side of the room, we’re not going to go toward it. We’re going to get through that door on the other side as quickly as possible.

What I’m doing is taking something that’s innate across the animal world. If you stimulate an animal with something unknown, then, at least initially, it’s going to move away from it. If the event is also accompanied by an unpleasant ending experience, and the sequence of events leading up to that unpleasant event is repeatable and predictable, then after a few such experiences animals will try to avoid the ending event—if they’re given the opportunity. This is the principle that has allowed the USDA to receive a patent on this methodology.

The thing, first of all, about our technique is that it’s not one-size-fits-all. In other words, there are animals that you could basically look at cross-eyed and they’ll move, and then there are animals like me, where you’ve got to get a 2x6 and hit them upside the head to get their attention before anything happens.

When these kinds of systems have been built for dog training or dog containment in the past, they simply had a shock, or sometimes a sound first and then a shock. The stimulus wasn’t graded according to proximity or the animal’s personality.


Dean Anderson draws the route of a wandering cow approaching a virtual fence in order to show Venue how his DVF™ system works.

[stands up and draws on whiteboard] Let’s say that this is the polygon that we want the animal to stay in. If we are going to build a conventional fence, we would put a barbed wire fence or some enclosure around that polygon. In our system, we build a virtual belt, which in the diagrams is shaded from blue to red. The blue is a very innocuous sound, almost like a whisper. Moving closer to the edge of the polygon, into the red zone, I ramp that whisper up to the sound of a 747 at full throttle takeoff. I can have the sound all the way from very benign up to pretty irritating. At the top end, it’s as if a fire alarm went off in here—we’re going to get out, because it sounds terrible.
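A sketch of how that blue-to-red gradient might be computed: the audio cue ramps linearly from a whisper at the inner edge of the warning band to something like that full-throttle 747 at the outer edge. The band width of roughly 200 meters appears in the diagram caption below; the decibel endpoints here are our own guesses, not the real DVF settings.

```python
def sound_cue_db(distance_into_band_m, band_width_m=200.0,
                 whisper_db=30.0, jet_db=110.0):
    """Ramp the audio cue from whisper-quiet at the inner (blue) edge of the
    virtual band to jet-engine-loud at the outer (red) edge.
    The decibel endpoints are illustrative, not the real DVF settings."""
    fraction = min(max(distance_into_band_m / band_width_m, 0.0), 1.0)
    return whisper_db + fraction * (jet_db - whisper_db)

for d in (0, 50, 100, 150, 200):                 # meters into the band
    print(d, "m:", round(sound_cue_db(d)), "dB")
```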



This video clip captures the first-time response of a cow instrumented with Dean Anderson's directional virtual fencing electronics when encountering a static virtual fence, established using GPS technology.

I’ve based the sounds and stimuli that I’ve used on what we know about cow hearing. Cows and humans are similar, but not identical. These cues were developed to fit the animal that we are trying to manage.

Now, if we go back to me as the example, I’m very stubborn. I need a little higher level of irritation to change my behavior. We chose to use electric stimulation.

I used myself as the test subject to develop the scale we’re using on this. My electronics guys were too smart. They wouldn't touch the electrodes. I’m just a dumb biologist, so…


Diagram showing how directional virtual fencing operates. The black-and-white dashed line (8) shows where a conventional fence would be placed. A magnetometer in the device worn on the cow’s head determines the animal’s angle of approach. A GPS system in the device detects when the animal wanders into the 200m-wide virtual boundary band. Algorithms then combine that data to determine which side of the animal to cue, and at what intensity. From Dean M. Anderson's 2007 paper, "Virtual Fencing: Past, Present, and Future" (PDF).

If I’m the animal and I’m getting closer and closer to the edge of the polygon, then the electrodes that are in the device will send an electrical stimulation. In terms of what those stimulations felt like to me, the first level is about like hitting the crazy bone in your elbow. The next one is like scooting across this floor in your socks and touching a doorknob—that kind of static shock. The final one is like taking and stopping your gas-powered lawnmower by grabbing the spark plug barehanded.

What we did was cannibalize a Hot-Shot that some people buy and use to move animals down chutes. I touched the Hot-Shot output and I could still feel it in my fingertips the next morning, so we cut it right down for our version.

As the cow moves toward the virtual fence perimeter, it goes from a very benign to a fairly irritating set of sensory cues, and if they’re all on at their highest intensity, it’s very irritating. It’s the 747s combined with the spark plug. Now, thinking back to your eighth-grade geometry, you know that you have an acute angle and you have an obtuse angle. As the cow approaches a virtual fence boundary, we send the cues on the acute side, to direct her away from the boundary as quickly and with as little irritation as possible. If we tried to move the cow by cuing the obtuse side, she would have to move deeper into the irritation gradient before being able to exit it.
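A rough sketch of the directional part of that logic: combine the magnetometer heading with the bearing back toward the inside of the polygon, and cue whichever side produces the smaller corrective turn. It assumes the animal turns away from the side being irritated; this is our reconstruction of the rule Anderson describes, not the patented algorithm itself.

```python
def ear_to_cue(heading_deg, inward_deg):
    """Pick the side to cue so the cow turns back toward the paddock interior
    with the smallest rotation.
    heading_deg: the cow's heading from the magnetometer (compass degrees).
    inward_deg:  bearing from the cow back toward the inside of the polygon.
    Assumes the cue is applied on the side opposite the desired turn, because
    the animal moves away from the irritation. Illustrative only."""
    turn = (inward_deg - heading_deg + 180) % 360 - 180   # signed smallest turn
    # positive turn = clockwise = turn right, so cue the left side, and vice versa
    return "left" if turn >= 0 else "right"

# A cow walking east (90 degrees) whose paddock interior lies to the northwest (315 degrees):
print(ear_to_cue(90, 315))   # 'right': cue the right side so she turns left, back inside
```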

We don’t want to overstress the animal. So we end up, either in distance or time or both, having a point at which, if this animal decides it really wants what’s over here, it’s not going to be irritated to the point of going nuts. We have built-in, failsafe ways that, if the animal doesn’t respond appropriately, we are not going to do anything that would cause negative animal welfare issues.


Heart rate profile (beats per minute) of an 8-year-old free-ranging cross-bred beef cow before, during, and after an audio plus electric stimulation cue from a directional virtual fencing device. The cue was delivered at 0653 h. The second spike was not due to DVF cues; the cow was observed standing near drinking water during this time. From Dean M. Anderson's 2007 paper, "Virtual Fencing: Past, Present, and Future" (PDF).

The key is, if you can do the job with a tack hammer, don’t get a sledgehammer. This is part of animal welfare, which is absolutely the overarching umbrella under which directional virtual fencing was developed. There’s no need to stimulate an animal beyond what it needs. I can tell you that when I put heart rate monitors on cows wearing my DVF™ devices, I actually found more of a spike in their heart rates when a flock of birds flew over than when I applied the sound.

Now, there are going to be some animals where you either get your rifle and put the product in your freezer, or you put the animal back into a pasture fenced with four-strand barbed wire. Not every animal on the face of the earth today would be controllable with virtual fencing. You could gradually increase the number of animals in your herd that do adapt well to being managed using virtual fencing through culling.

But the vast majority of animals will react to these irritations, at some level. They can choose at which point they react, all the way from the whisper to the lawnmower.


Diagram showing two cows responding differently to the virtual boundary: Cow 4132 (in green) penetrates the boundary zone more deeply, tolerating a greater degree of irritation before turning around. From Dean M. Anderson's 2007 paper, "Virtual Fencing: Past, Present, and Future" (PDF).

Here is the other thing: We all learn. Whatever we do to animals, we are teaching them something. It’s our choice as to what we want them to learn.

Of course, I don’t have data from a huge population that I can talk about. But, of the animals with whom I have worked—and the literature would support what I’m going to say—cows are, in fact, smarter than human beings in a number of ways. If I give you the story of the first virtual fencing device that I built, I think you’ll see why I say that.

What our team did initially was cannibalize a kids’ remote control car to send a signal to the device worn by the animal. I had a Hereford/Angus cross cow, and she was a smart old girl. I started to cue her. I was close to her and she responded to the cues exactly the way I wanted her to. But she figured out, in less than five tries, that, if she kept twenty-five feet between me and her, I could press a button, and nothing would happen. I tried to follow her all over the field. She just kept that distance ahead of me for the rest of the trial—always more than twenty-five feet!

So that’s the reason why we are using GPS satellites to define the perimeter of the polygon. You can’t get away from that line.


A cow being fitted with an early prototype of Dean Anderson's Ear-A-Round DVF device. Photograph via USDA Jornada Experimental Range, AP.

What sets DVF™ apart from other virtual fencing approaches is that it’s not a one-size-fits-all. The cues are ramped, and the irritating cues are bilaterally applied, so we can make it directional, to steer the animals—no pun intended—over the landscape.

What’s interesting is that if you have the capacity to build a polygon, you can encompass a soil type, a vegetation situation, a poisonous plant, or whatever, much better than you can if you have to build a conventional fence. In building conventional fences, you have to have stretch posts every time you change the fence’s direction. That increases both materials and labor costs in construction, which is why you see many more rectangular paddocks than multi-sided polygons. Right now, you can assume that, on flat country, about fifty percent of the cost in a conventional fence is labor, and the other fifty percent is material.

Stretching barbed wire around a corner, shown in this engraving from A Treatise Upon Wire: Its Manufacture and Uses, Embracing Comprehensive Descriptions of the Constructions and Applications of Wire Ropes, J. Bucknall Smith, 1891.

Twilley: Which raises another question: Is virtual fencing cost-effective?

Anderson: It depends. I’ll give you an example to show what I mean. The US Forest Service over in Globe, Arizona, is interested in possibly using virtual fencing. Some of the mining companies over there have leases that say that before they extract the ore, and even after, the surface may be leased to people with livestock.

That country over there is pretty much like a bunch of Ws put together. In March 2012, for two-and-a-half miles of four-strand barbed wire, using T posts, they were given a quote of $63,000.

That's why they called me. [laughs]

Now, if that fence ran next to a road, then even if it cost $163,000 for those two-and-a-half miles, it would be essential, in my opinion, that they not think about virtual fencing—not in this day and time.

In twenty years from now—somewhere in this century, at least—after the ethical and moral issues have been worked out, instead of stimulating animals with external audio sound or electrical stimulation, I think we will actually be stimulating internally at the neuronal level. At that point, virtual fencing may approach one hundred percent effective control.


The DARPA "Robo Rat," whose movements could be directly controlled by three electrodes inserted into its brain; photograph via.

It's been done with rodents. The idea was that these animals could be equipped with a camera or other sensors and sent into earthquake areas or fires or where there were environmental issues that humans really shouldn’t be exposed to. Of course, even if it can be done scientifically, there are still issues in terms of animal welfare. What if there is a radiation leak? Do you send rodents into it? You can see the moral and ethical issues that need to be worked out.

Twilley: If that ever becomes a real-world application, will you sell your shares in U.S. Steel?

Anderson: [laughs] I have a feeling that we never will have a landscape devoid of visible boundaries. If nothing else, I want a barbed wire fence between Ted Turner’s ranch and our experimental ranch up the road here. With a visible boundary, there’s no question—this side is mine and that side is yours.


Fencing photograph via InformedFarmers.com. Incidentally, Ted Turner's Vermejo ranch in New Mexico and southern Colorado is said to be the largest privately-owned, contiguous tract of land in the United States.

Twilley: Aha—so it’s the human animals that will still need a physical fence.

Anderson: I think so. Otherwise you’re looking at the landscape and there’s absolutely nothing out there—whether it be to define ownership or use or even health or safety hazards.

Manaugh: Do you think this kind of virtual fencing would have any impact on real estate practices? For example, I could imagine multiple ranchers marbling their usage of a larger, shared plot of land with this ability to track and contain their herds so precisely. Could virtual fencing thus change the way land is controlled, owned, or leased amongst different groups of people?

Anderson: If you were to go down here to the Boot Heel area of New Mexico, you could find exactly that: individual ranchers are pooling areas to form a grass bank for their common use.

Anything that I can do in my profession to encourage flexibility, I figure I’m doing the correct thing. That’s where this all came from. It never made sense to me that we use static tools to manage dynamic resources. You learn from day one in all of your ecology classes and animal science classes that you are dealing with multiple dynamic systems that you are trying to optimize in relationship to each other. It was a mental disconnect for me, as an undergraduate as well as a graduate student, to understand how you could effectively manage dynamic resources with a static fence.

Now, there are some interesting additional things you learn with this system. For example, believe it or not, animals have laterality. You probably didn’t see the article that I published last year on sheep laterality. [laughter]


USDA ARS scientists testing cattle laterality in a T-Maze. Photograph by Scott Bauer for the USDA ARS.

Twilley and Manaugh: No.

Anderson: Our white-faced sheep, which have Rambouillet and Polypay genetics, were basically right-handed. You’ll want to take a look at the data, of course, but, basically, animals are no different than you and me. There are animals that have a preference to turn right and others that have a preference to turn left.

Now, I didn’t do this study to waste government money. Think about it in terms of what I have told you about applying the cues bilaterally. If I know that my tendency is right-handed, then in order to get me to go left, I may need a higher level of stimulation on my right side than I would if you were trying to get me to go right by applying a stimulus on my left side, because it’s against my natural instincts.

With the computer technology we have today, everything we do can be stored in memory, so you can learn about each animal, and modify your stimulus accordingly. There is no reason at all that we cannot design the algorithms and gather data that, over time, will make the whole process optimized for each animal, as well as for the herd and the landscape.
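As a rough illustration of that per-animal bookkeeping, the sketch below keeps a learned gain for each animal and each side, nudging the gain up when a cue fails to turn the animal and down when it succeeds. The class, step size, and bounds are hypothetical, not part of Anderson's system.

```python
# Hypothetical per-animal cue calibration: stored response histories
# gradually encode each animal's laterality and general sensitivity.
from collections import defaultdict

class CueCalibrator:
    def __init__(self, base_gain=1.0, step=0.1):
        # gain[(animal_id, side)] scales the baseline cue intensity
        self.gain = defaultdict(lambda: base_gain)
        self.step = step

    def intensity(self, animal_id, side, base_level):
        """Scale a baseline cue level by what this animal has taught us."""
        return min(base_level * self.gain[(animal_id, side)], 1.0)

    def record_outcome(self, animal_id, side, turned):
        # A right-handed animal being pushed left may need a stronger cue
        # on its right side; repeated outcomes capture that bias over time.
        key = (animal_id, side)
        if turned:
            self.gain[key] = max(self.gain[key] - self.step, 0.5)
        else:
            self.gain[key] = min(self.gain[key] + self.step, 3.0)
```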


Cow equipped with a collar-mounted GPS device; photograph by Dave Ganskoop for the USDA ARS.

Twilley: Going back to something you said earlier about animal memory—and this may be too speculative a question to answer—I’m curious as to how dynamic virtual fencing affects how cows perceive the landscape.

Anderson: The question would be whether, if the virtual fence is on or near a particular rock, or a telephone pole, or a stream, and they have had electrical stimulation there before, do they associate that rock or whatever with a limit boundary? In other words, do they correlate visual landmarks with the virtual fence? Based on some non-published data I have collected, the answer is yes.

In fact, to give some context, there have been studies published showing that for a number of days following removal of an electric fence, cattle would still not cross the line where it had been located.

So this could indeed be an issue with virtual fencing, but—and my research on this topic is still very preliminary—I have not seen a problem yet, and I don’t think I will. Part of the reason is that cows want to eat, so if the polygon that contains the animals is programmed to move toward good forage, the cows will follow. It’s almost like a moving feed bunk, if you will. I'm sure that, in time—I would almost bet money on this—if you were using the virtual fence to move animals toward better forage, you could almost eliminate the virtual fence line behind the animals, especially if the drinking water was kept near the “moving feed bunk.”

The other thing is that the consumer-level GPS receivers I have used in my DVF™ devices cannot have their fixes differentially corrected using DGPS, which means that a fix may actually vary from the “true” boundary by as much as the length of a three-quarter-ton pick-up. That’s to my benefit, because there is never an exact line where the animal is sure to be cued, and hence the animal cannot match a particular stone or other environmental object with the stimulation event, even if the virtual boundary is held static. It’s always going to be just in the general area.
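To put a rough number on that: a three-quarter-ton pick-up is on the order of six or seven meters long, so an uncorrected fix can wander across a large fraction of any plausible cueing zone. The figures below are assumptions, used only to show the scale of the effect.

```python
# Back-of-the-envelope check with assumed numbers.
gps_error_m = 6.5    # roughly the length of a three-quarter-ton pick-up
zone_width_m = 15.0  # assumed cueing-zone width, as in the earlier sketch

# The boundary the animal experiences is smeared over the GPS error, so a
# particular stone or post can only coincide with the cue onset by chance.
fraction = gps_error_m / zone_width_m
print(f"Cue onset wanders over roughly {fraction:.0%} of the zone width")
```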


A cow fitted with an early prototype of Anderson's Ear-A-Round DVF system at the Jornada Experimental Range; photograph via AP/Massachusetts Institute of Technology, Iuliu Vasilescu.

Manaugh: So imprecision is actually helpful to you.

Anderson: Yes, I believe so—although I don’t have enough data that I would want to stand on a podium and swear to that. But I think the variability in that GPS signal could be an advantage for virtual paddocks that spatially and temporally move over the landscape.

Twilley: We’ve talked about optimizing utilization and remote management, but are we missing some of the other ways that virtual fencing might transform the way we manage livestock or the land?

Anderson: Ideas that we know are good, but are simply too labor-intensive right now, will become reasonable. The big thing that has been in vogue for some time—and it still is, in certain places—is rotational stocking. The idea is that you take your land and divide it into many small paddocks and move animals through these paddocks, leaving the animals in any one paddock for only a few hours or days. It’s a great idea under certain situations, but think of the labor of building and maintaining all those fences, not to mention moving the animals in and out of different paddocks all the time.


A fence in need of repair; photograph via.

With the virtual paddock you can just program the polygon to move spatially and temporally over the landscape. Even the shape of the virtual paddock can change dynamically over time. It can be slowed down where there’s abundant forage and sped up where forage is limited, so you have a completely dynamic, flexible system in which to manage free-ranging animals.
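A minimal sketch of that moving virtual paddock, assuming the polygon is simply translated each scheduling interval and that forage abundance at the current location is available as a single number between 0 and 1. Both assumptions, and all the parameter values, are ours rather than Anderson's.

```python
# Illustrative sketch of a virtual paddock drifting across a pasture.
# Parameters are assumptions, not values from Anderson's system.

def advance_paddock(polygon, direction, forage_index, dt_hours,
                    max_speed_m_per_h=20.0):
    """Shift every vertex of the paddock polygon along `direction`.

    polygon       -- list of (x, y) vertices in meters
    direction     -- unit vector (dx, dy) pointing toward better forage
    forage_index  -- 0.0 (bare ground) to 1.0 (abundant forage) here and now
    """
    # Slow the paddock down where forage is abundant, speed it up where
    # forage is limited, so utilization stays roughly even.
    speed = max_speed_m_per_h * (1.0 - forage_index)
    step = speed * dt_hours
    dx, dy = direction
    return [(x + dx * step, y + dy * step) for x, y in polygon]
```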

Here’s another thing. Like anybody who gathers free-ranging animals, I have a song I use. My song is pretty benign and can be sung among mixed audiences. [sings] “Come on sweetheart, let’s go. Come on. Come on. Come on, girls. Let’s go.”



In this video clip, a cow-calf pair are moved using only voice cues (Dean Anderson's gathering song) delivered from directional virtual fencing (DVF™) electronics carried by the cows on an ear-a-round (EAR™) system.

That’s the way I talk to them, if I want them to move. One day when I was out manually gathering my cows on an ATV I put a voice-activated recorder in my pocket and recorded my song. We later transferred the sounds of my manual gathering into the DVF™ device. Then when we wanted to gather the animals we wirelessly activated the DVF™ electronics and my “song”—“Come on, girls, let’s go”—began to play. Instead of a negative irritation, this was a positive cuing—and it worked.

The cows moved to the corral based on the cue, without me actually being present to manually gather them—it was an autonomous gather.

I think this type of thing also points to a paradigm shift in how we manage livestock. Sure, I can get my animals up in the middle of the night to move them, but why do that? Why not try to manage on cow time, rather than our own egotistical needs—“At eight o’clock, I want these cows in so I can brand them,” or whatever. Why not mesh management routines with their innate behaviors instead? For example, my song could maybe be timed to correspond with a general time of day when the animals might start drifting in to drink water anyway.

Twilley: I see—it’s a feedback loop where you’re cuing behavior with the GPS collars, but you’re also gathering data. You can see where they are already heading and change your management accordingly.

Anderson: Absolutely. You are matching needs and possibilities.

Manaugh: To make this work, does every animal have to be instrumented?

Anderson: This is a very valid question, but my answer varies. All the research needed to answer this question is not in, and the answers depend on the specific situation being addressed. I have a lot of people right now who are calling me and asking for a commercial device that they can put on their animals because they are losing them to theft. With the price of livestock where it is currently, cattle-rustling is not a thing of the nineteenth century. It is going on as we speak.

If that’s your challenge, then you’re going to need some kind of electronic gadgetry on every animal for absolute bookkeeping. For me, the challenge is how do you manage a large, extensive landscape in ways that we can’t do now, and I don’t think we necessarily need to instrument every animal for virtual fencing to be effective.

Instead, if you’ve got a hundred cows, you need to ask: which of those cows should you put instruments on? As a producer, you probably have a pretty good idea which animals should be instrumented and why: you would look for the leaders in the group.


Positions of two cows grazing similar pastures in Montana, recorded every ten minutes over a two-week period. The difference in their grazing patterns reveals one cow to be a hill climber and one to be a bottom dweller. Image from a USDA Rangeland Management publication (PDF) co-authored by Derek Bailey, NMSU.

What’s interesting is that there are cows that prefer foraging up on top of hills. There are others that prefer being down in a riparian area. A colleague of mine at New Mexico State University calls them bottom-dwelling and hill-climbing cows, and this spatial foraging characteristic apparently has heritability. So it’s possible that you could select animals that fit your specific landscape. If, as I mentioned earlier, the ease with which an animal can be controlled by sensory cues also has heritability, it seems logical to assume that you could create high-tech designer animals tailored to your piece of land.
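One simple way to turn ten-minute GPS fixes like those in the figure above into a hill-climber versus bottom-dweller label is to compare each cow's average fix elevation with the herd average. The data layout and the ten-meter margin below are assumptions for illustration, not the method used in the published study.

```python
# Hypothetical classification of cows from logged GPS elevations.
from statistics import mean

def classify_terrain_use(fixes_by_cow, margin_m=10.0):
    """fixes_by_cow maps cow_id -> list of elevations (m), one per GPS fix."""
    herd_mean = mean(e for fixes in fixes_by_cow.values() for e in fixes)
    labels = {}
    for cow_id, fixes in fixes_by_cow.items():
        avg = mean(fixes)
        if avg > herd_mean + margin_m:
            labels[cow_id] = "hill climber"
        elif avg < herd_mean - margin_m:
            labels[cow_id] = "bottom dweller"
        else:
            labels[cow_id] = "intermediate"
    return labels
```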

Now, when you start adding all of these things together, using these electronic gadgetries and really leveraging innate behaviors, it points to a revolution in animal management—a revolution with really powerful potential to help us become much better stewards of the landscape.


This photograph shows a worm fence, an American invention. It was the most widely built fence type in the US through the 1870s, until Americans ran out of readily accessible forests, triggering a "fence crisis," in which the costs of fencing exceeded the value of the land it enclosed. The "crisis" was averted by the invention of mass-produced woven wire in the late 1800s. Photograph from the USDA History Collection, Special Collections, National Agricultural Library.

Twilley: None of this is commercially available yet, though, right?

Anderson: That’s true—you cannot go out today and buy a commercial DVF™ system, or for that matter any kind of virtual fence unit designed specifically for livestock, to the best of my knowledge. But there is a company that is interested in our patent and they are trying to get something off the ground. I’m trying to feed this company any information that I can, though I am not legally allowed to participate in the development of their product as a federal employee.

Manaugh: What are some of the obstacles to commercial availability?

Anderson: The largest immediate challenge I see is answering the question of how you power electronics on free-ranging animals for extended periods of time. We have tried solar and it has potential. I think one of the most exciting things, though, is kinetic energy. I understand that there are companies working on a technology to be used in cellphones that will charge the cell phone simply by the action of lifting it out of your purse or pocket, and the Army has got several things going on now with backpacks for soldiers that recharge electronic communication equipment as a result of a soldier’s walking movement.


Lawrence Rome's kinetic backpack.

I don’t think the economics warrant animal agriculture developing any of these power technologies independently, but I think we can capitalize on technology being developed in other, more lucrative industries and simply adapt it for our needs. When I developed the concept of DVF™ I designed it to be a plug-and-pray device. As soon as somebody developed a better component, I would throw my thing out and plug theirs in—and pray that it would improve performance. Sometimes it did and sometimes it didn’t!

Manaugh: Have you looked into microbial batteries?

Anderson: That’s an interesting suggestion that I have not looked into. However, I have thought a lot about capturing kinetic energy. If you watch cows, their ears are always moving, and so are their tails. If we can capture any of that movement….

The other thing we need is demand from the market. In 2007, I was invited to the UK to discuss virtual fencing—the folks in London were more interested in virtual fencing than anybody else I have ever talked to in the world.

The reason was really interesting. England has a historic tradition of common land, which is basically open “green space” that surrounds the city and was originally used for grazing by people who had one or two sheep or cows. Nowadays, it’s mostly used by dog walkers, pony riders—for recreation, basically. The problem is that they need livestock back on these landscapes to actually utilize vegetation properly so certain herbaceous vegetation does not threaten some of the woody species. However, none of the present-day users want conventional fencing because of the gates that would have to be opened and shut to contain the animals. So they were interested in virtual fencing as a way to get the ecology back into line using domestic herbivores, in a landscape that needs to be shared with pony riders and dog walkers who don’t want to shut gates and might not do it reliably, anyway.

But it’s an interesting question. I’ve had some sleepless nights, up at two in the morning wondering, “Why is it not being embraced?” I think that a lot of it comes strictly down to economics.

I don’t know, at this point, what a setup would cost. But, in my opinion, there are ways we could implement this immediately and have it be very viable. You wouldn’t have every animal instrumented. You can have single-hop technology, so information uploads and downloads at certain points the animals come to with reliable periodicity—the drinking water or the mineral supplement, say. That’s not real-time, of course—but it’s near real-time. And it would be a quantum leap compared to how we currently manage livestock.
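The single-hop arrangement Anderson describes amounts to store-and-forward logging: the collar accumulates positions and cue events, then uploads the whole buffer whenever the animal walks within range of a base station at the water or mineral supplement. A sketch under those assumptions; the class, record layout, and thirty-meter range are illustrative, not a real device's interface.

```python
# Sketch of store-and-forward telemetry for a virtual-fencing collar.
# Names, ranges, and the record layout are illustrative assumptions.

class CollarLogger:
    def __init__(self, station_range_m=30.0):
        self.buffer = []  # records held on the collar until the next visit
        self.station_range_m = station_range_m

    def log(self, timestamp, position, cue_event=None):
        self.buffer.append({"t": timestamp, "pos": position, "cue": cue_event})

    def maybe_upload(self, distance_to_station_m, station):
        # Uploads only happen at the water or mineral supplement, so the
        # data arrive in near real-time rather than real-time.
        if distance_to_station_m <= self.station_range_m and self.buffer:
            station.receive(self.buffer)  # `station` is a hypothetical interface
            self.buffer = []
```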


Barbed wire, patented by Illinois farmer Joseph Glidden in 1874, opened up the American prairie for large-scale farming. Photograph by Tiago Fioreze, Wikipedia.

Twilley: What do the farmers themselves think of this system?

Anderson: What I’ve heard from some ranchers is something along the lines of: “I've already got fences and they work fine. Why do I need this unproven new technology?”

On the other hand, dairy farmers who have automatic milking parlors, which allow animals to come in on their own volition to get milked, think virtual fencing would be very appropriate for their type of operation, for reasons of convenience rather than economics.


Robotic milking parlor; photograph via its manufacturer, DeLaval.

Now, let me tell you what I think might actually work. I think that environmentalists could actually be very beneficial in pushing this forward. Take a situation where you have an endangered bird species that uses the bank of a stream for nesting or reproduction. Under current conditions, the rancher can’t realistically afford to fence out a long corridor along a stream just for that two-week period. That’s a place where virtual fencing is a tool that would allow us to do the best ecological management in the most cost-effective way.

But the larger point is that we cannot afford to manage twenty-first century agriculture using grandpa’s tools, economically, sociologically, and biologically.


I.L. Elwood & Co. Glidden Steel Barb Wire, non-dated Advertising Posters, Advertising Ephemera Collection, Baker Library Historical Collections, via.

Some people have said, “Well, I think you are just ahead of your time with this stuff.” I’m not sure that’s true. In any case, in my personal opinion, if I’m not doing the research that looks twenty years out into the future before it’s adopted, then I’m doing the wrong kind of research. In 2005, Gallagher, one of the world’s leading builders of electric fences, invited me to talk about virtual fencing. During that conversation, they told me that they believe that, by the middle of this century, virtual fencing will be the fencing of choice.

But here’s the thing: none of us have gone to the food counter and found it empty. When you have got a full stomach, the things that maybe should be looked at for that twenty-year gap are often not on the radar screen. As long as the barbed wire fences haven’t rusted out completely, the labor costs can be tolerated, and the environmental legislation hasn’t become mandatory, then why spend money? That’s human nature. You only do what you have to do and not much more.

The point is that it’s going to take a number of sociological and economic factors, in my opinion, for this methodology of animal control to be implemented by the market. But speaking technologically, we could go out with an acceptable product in eighteen months, I believe. It wouldn’t have multi-hop technology. It would equal the quality of the first automobile rather than being comparable to a Rolls Royce in terms of “extras”—that would have to await a later date in this century.

And here’s another idea: I think that there ought to be a tax on every virtual fencing device that is sold or every lease agreement that’s signed in the developed world. That tax would go to help developing countries manage their free-ranging livestock using this methodology because that’s where we need to be better stewards of the landscape and where we as a world would all benefit from transforming some of today’s manual labor into cognitive labor.


Herding cattle the old-fashioned way on the Jornada Experimental Range; photograph by Peggy Greb for USDA ARS.

Maybe with this technology, a third-world farmer could put a better thatched roof on his house or send his kids to school, because he doesn’t need their manual labor down on the farm. It’s fun for a while to be out on a horse watching the cows; what made the West and Hollywood famous were the cowboys singing to their cows. I love that; that’s why I’m in this profession. Still, I’m not a sociologist, but it seems as though you could take some of that labor that is currently used managing livestock in developing countries and all of the time it requires and you could transfer it into things that would enhance human well-being and education.

It’s in our own interest, too. If non-optimal livestock management is creating ecological sacrifice areas, where soil is lost when the rains come or the wind blows, that particulate matter doesn’t stop at national boundaries.

I always say that virtual fencing is going to be something that causes a paradigm shift in the way we think, rather than just being a new tool to keep doing things in the same old way. That’s the real opportunity.


Thanks to a well-timed tip from landscape blogger Alex Trevi of Pruned, Venue made a detour on our exit out of Flagstaff, Arizona, to visit the old black cinder fields of an extinct volcano—where, incredibly, NASA and its Apollo astronauts once practiced their then-forthcoming landing on the moon.



The straight-forwardly named Cinder Lake, just a short car ride north by northeast from downtown Flagstaff, is what NASA describes as a "lunar analogue": a simulated offworld landscape used to test key pieces of gear and equipment, including hand tools, scientific instruments, and wheeled rovers.

Astronauts Jim Irwin and Dave Scott in experimental vehicle "Grover." Photograph courtesy of NASA/USGS, from this informative PDF.

As Northern Arizona University explains, NASA's Astrogeology Research Program "started in 1963 when USGS and NASA scientists transformed the northern Arizona landscape into a re-creation of the Moon. They blasted hundreds of different-sized craters in the earth to form the Cinder Lake crater field, creating an ideal training ground for astronauts."

Photo courtesy of NASA/USGS; see PDF.

The sculpting of the landscape began in July 1967, with a series of carefully timed and very precisely located explosions.

Photo courtesy of NASA/USGS; see PDF.

In the first round alone, this required 312.5 pounds of dynamite and 13,492 pounds of fertilizer mixed with fuel oil.

Photo courtesy of NASA/USGS; see PDF.

Photo courtesy of NASA/USGS; see PDF.

At the end of a four-day period of controlled explosions, USGS scientists had succeeded in creating a 500-foot-square "simulated lunar environment" in Northern Arizona: forty-seven craters of between five and forty feet in diameter, designed to duplicate at a 1:1 scale a specific location (and future Apollo 11 landing site) on the moon, in the region called Mare Tranquillitatis.

On the left, an aerial view of the first stage of Cinder Lake Crater Field, designed to duplicate a small area of the Apollo 11 landing site shown in the Lunar Orbiter image to the right. Photographs courtesy NASA/USGS; see PDF

An aerial view of the second crater field constructed at Cinder Lake. This is more than double the size of the first field, and contains 354 craters. Photo courtesy of NASA/USGS; see PDF.

Geologic map of the crater field that was used to plan astronaut EVA traverses. Image courtesy of NASA/USGS; see PDF.

Sadly, the craters today are very much reduced both in scale and in perceptibility.

Indeed, at a certain point nearly every dent and divot in the landscape began to seem like it might also be part of this monumental project of planetary simulation, a possible detail in the stage-set used to rehearse hopeful astronauts.



This pronounced fading of the craters is due to at least two things.

One factor, of course, is simply long-term weathering and exposure in the absence of any plans for the historic preservation of the site.

As we'll discuss in a future post in relation to another of Venue's visits—specifically, to see the so-called "Mars Yard" at the Jet Propulsion Laboratory in Pasadena—these sites of offworld simulation are intellectually thrilling but also integral parts of the U.S. national space project.



That these locations—works of scientific utility, not art—can be discarded so easily is a shame, although exactly how, and under what departmental authority, they would be preserved is a thorny question.



Of course, all questions of budget or federal jurisdiction aside, an Offworld Landscapes National Park or National Monument is an incredible thing to contemplate.

A National Park—or, why not, a UNESCO Offworld Heritage Site—that consists only and entirely of landscapes designed to simulate other planets!



In any case, the other major factor in the craters' gradual disappearance is Cinder Lake's current recreational status as a place for off-road vehicles of a much more terrestrial kind.



Indeed, for much of the two hours or so that Venue spent out on the volcanic field—where walking is very slow, at best, as you sink ankle-deep into tiny pieces of black gravel that make a sound remarkably like dipping a spoon into dry Ovaltine—distant bikes, buggies, and trucks kicked up dust clouds, giving the landscape a distinct and quite literal holiday buzz.

Oddly, though, it's hard to complain about such a use, as this is more or less exactly what NASA was doing, albeit with taxpayer support, better costumes, and a much larger budget.

Apollo Field Test-13: astronauts Tim Hait and David Schleicher are in spacesuits, testing equipment and protocols, with a simulated Lunar Module ascent stage in the background. Photograph courtesy of NASA/USGS; see PDF.

As Northern Arizona University goes on to describe, the astronauts "ran lunar rover simulations and practiced soil sampling techniques wearing replica space suits in the shadows of the San Francisco Peaks. The training gave them the skills essential for the first successful manned missions to the Moon."

Photograph courtesy of NASA/USGS; see PDF.

Off-road to off-world, by way of a black lake of pumice on the outskirts of a college town in Arizona.

Astronauts Jack Schmitt and Gene Cernan practice describing crater morphology to Mission Control. Photograph courtesy of NASA/USGS; see PDF

Better yet, you can visit the lake quite easily; here is a map, with driving directions from the best breakfast in Flagstaff.

Final photo courtesy of NASA/USGS; see PDF.
 