NASA Johnson Space Center
Oral History Project
Tacit Knowledge Capture Project
Edited Oral History Transcript
John S. Chapman
Interviewed by Rebecca Wright
Huntsville, Alabama – 16 May 2008
Wright: Today is May 16th, 2008. We are in Huntsville, Alabama to
speak with John Chapman, who currently leads the External Tank Project
Office at the Marshall Space Flight Center. The interview is being
conducted for the JSC Tacit Knowledge Capture Project for the Space
Shuttle Program. Interviewer is Rebecca Wright, assisted by Jennifer
Ross-Nazzal. We thank you for finding time in your schedule to talk
with us today. We'd like to start by you giving us an idea of how
you became involved with the Space Shuttle Program.
Chapman:
I've been interested in things that fly since I was too small to describe
here. I remember making model airplanes when I was six or seven. I've
been interested in things that fly ever since then. It doesn't matter
what kind of flying machine it is, whether it's kites or model airplanes
or balloons or hot air balloons or real airplanes or jets or sailplanes
or rockets or spacecraft. Anything that somehow cheats gravity has
been very appealing to me since the very earliest days, and that has
shaped almost everything that I've ever been involved in.
In fact, my dad has a picture of me when I was about nine years old
at the old [Smithsonian National] Air and Space Museum on the mall
in Washington [D.C.]—when the rocket display used to be outdoors,
when it was at the Castle part of the Smithsonian there, the old reddish-brick
building. I'm standing right next to an Atlas rocket that's there.
Dad says that right after he took the picture I told him that I was
going to be involved in making rockets. He can still remember this
very vividly, and we've got the slide in the family archives. So this
has been an inevitable type of progression that I would end up here
doing these things. Very interested in all things aviation and aerospace-related.
The standard old story: built model airplanes when I was in junior
high and high school, built my first radio-controlled model when I
was watching [President John F.] Kennedy's funeral on TV, in fact.
I can remember sitting on the floor in the den sanding on the ribs
and the wing while I was watching the funeral procession. If you were
to go to my house today and were to go downstairs to my little model-building
shop in the basement, you would see the current radio-controlled model
airplane I'm working on right now. That has not changed at all. The
equipment has changed, the planes have changed, but Chapman is still
building model airplanes.
I have also been very interested in full-size aviation, not
just models. Back in high school while I was working on a particular
model, I went out to the local general aviation airport because I
had heard that you could go to the mechanic’s shop at the airport
and buy in bulk quantities—like quart cans and gallon cans—some
of the liquids that you use to make models at much better prices and
larger quantities than you could at your friendly local hobby shop.
So I went out there and asked the guy who ran the shop if I could
buy some of that material, and he said, "Yeah sure, come with
me." So we went out back to the storage room, and he found a
gallon can of this stuff called clear butyrate dope that's used to
seal up the covering material on a plane, and I bought it from him.
Just casually, as I was walking away, I said, "Do you guys ever
hire anybody for summer jobs?" He said, "Well, we might
be interested. Why, are you interested in working out here?"
and I said, "Yeah, I sure might be." One thing led to another,
and he offered me a job. I started working there in the 11th grade.
Worked there for over four years—full-time that summer, part-time
during the school year, senior year in high school, full-time for
the next summer and the next summer and the next summer—in other
words, coming back from college, any weekends that I came home from
college, I'd work at the airport.
Over those years I progressed from starting off sweeping the hangar
and getting it cleaned out, to four summers later, designing and installing
aircraft avionics and navigation systems for general aviation aircraft.
Along the way I learned about engine overhauls and all the general
aviation airframe and power plant mechanic stuff. It was an absolutely
fantastic job! Looking back on it, I probably learned more stuff directly
related to what I do on a daily basis today in that job working at
the airport than years in college and with the theoretical stuff.
The airport job gave me so much of the practical knowledge of what
it takes to “cheat gravity”. You're going to hear that
theme throughout all the discussions that we have today.
In the middle of that airport job I went off to college at Georgia
Tech [Georgia Institute of Technology, Atlanta, Georgia], where I
started off in aerospace engineering. I switched over to industrial
engineering about halfway through, principally because of two reasons:
First of all I was disappointed that the program at that time at Georgia
Tech was highly theoretical with very little practical aerospace engineering.
I'd already earned my private pilot's license (I took my flight test
the day before Apollo 11 was launched in July of '69), so here I was
flying as well as working at the airport in the summers.
I'd come back to Georgia Tech and I'd ask questions of a practical
nature about “Why do you do this, and why do they do that, and
what is the purpose of this other thing?” The answers would
always come back from a theoretical standpoint and say that, “Well,
from a theory standpoint that doesn't make any sense, and you shouldn't
be doing it.” But I knew for a fact that this was being done,
and it was being done for a very practical and solid reason. So even
at that point I could see this gulf between theory and practice that
folks weren't really following. From my point of view at the time,
the theoretical side of aerospace was not as tightly coupled with
the practical side as perhaps it should be. I found this to be very
disappointing.
That, coupled with the fact that in the early '70s the aerospace business was really falling on hard times, meant the potential of getting a job in aerospace when I got out of school in '73 was looking pretty bleak.
So I switched over to industrial engineering because it looked like
I could still do aerospace-related things but would have a little
bit broader capability, in case I had to bide my time before going
into specific things in aerospace. Turns out that industrial engineering
was a very good fit for me, and it worked out well.
I graduated from college in August of '73, had several interesting
job offers, mainly in manufacturing type things, not aerospace manufacturing
but just manufacturing in general. Those were not for me, because
I'd already resolved early on that I was going to do something to
do with flying. It didn't matter what it was, and I might end up moving
from South Carolina, which is where I was born and raised, out to
Seattle [Washington] if I had to work for Boeing, or to Wichita [Kansas]
if I had to work in general aviation there, or to southern California,
which was the heart of a lot of the aerospace business, or wherever.
I really wasn't sure, but I was going to do something to do with “cheating
gravity”. It was not going to be just making diesel engines
or automotive brake parts. It was going to be something to do with
flying machines.
My college roommate at the time was also very enthused about aviation
and aerospace. He was about as much of a space fanatic as I was. In
fact, we had journeyed together to watch several Apollo launches at
the Cape [Canaveral, Florida] while in college. We wrote off to my
U.S. Senator and managed to get passes to go to the launches, and
so went down to watch Apollo 15 and Apollo 16 and then the Skylab
Workshop launch—by the way, the 35th anniversary of the Skylab
Workshop launch, which was the last Saturn V flight, was yesterday—I
was fortunate enough to be there to watch the last Saturn V launch.
My roommate found a job over here in Huntsville, Alabama working for
a company called Northrop Services, which was euphemistically known
as the “sweatshop division” of Northrop Aircraft. They
had a support contract with this U.S. Government NASA outfit in the
little town of Huntsville, Alabama called the Marshall Space Flight
Center. He got that job and came over and started. And after his first
day I talked to him on the phone, and asked, "Tell me what it
was like." He was enthusiastically describing what it was like,
and I said, "Well, do you think they'd be looking for anybody
else?" Again, I had not accepted a job anywhere, and I had finished
college about three weeks before. He said, "I don't know, but
I'll check." So he checked and amazingly they were looking for
people. So on October 1st of 1973 I started work for Northrop Services
over here as a support contractor for the Marshall Space Flight Center,
working in support of the engineering laboratories out here at Marshall.
The very first thing I started working on was this brand-new program
called Space Shuttle. Again, this was in October of 1973. Shuttle
had only been authorized by the [U.S.] Congress a year or two earlier.
In fact, it was authorized while Apollo 16 was on the Moon. If you
look at some of the tapes of John [W.] Young and Apollo 16 on the
Moon, he gets relayed the message. It’s really interesting.
Here's Young on the Moon with Charlie [Charles M.] Duke, and he gets the relayed message that Congress has just approved this new
thing called the Space Shuttle Program. Young's response is, "Well,
that's great. The country really needs that Shuttle." Then some
ten years later, John Young is in the Commander's seat for the first
flight of that very same vehicle. It's just interesting that it went
full circle there.
Anyway, I came over here to Huntsville in October 1973 and started
working at Northrop in the Space Shuttle Program doing theoretical
things. I say theoretical only because the vehicle had not yet been
built. These were “what-if” exercises… for example,
trying to figure out specifically how much reusable hardware for this
new portion of the Space Shuttle Program called the solid rocket boosters
would you need if you were going to fly a certain number of flights
every year. Each one of the components on the solid rocket booster
had a different useful life; some might be usable for five flights,
some might be usable for ten, some might be usable for twenty flights.
They had different attrition rates. In other words, they might get
banged up when they hit the water, or they might get involved in industrial
accidents that would take some parts out of the flow, just on the
ground as a normal flow of things.
But given all that, and how much time it took to get them ready, how many of those parts and pieces did you need to fly a certain number of flights? I was writing computer programs to simulate and compute
how much hardware was needed. Little did I know that here I was helping
to calculate the amount of hardware that we ultimately needed to order—and
then some 20 years later I would be in the position of Acting Project
Manager for the shuttle solid rocket booster and would have to live
with those very same hardware quantities that my software had helped
predict. So I couldn't do anything but look in the mirror and accuse
the person in the mirror of not having recommended enough hardware
because we were having trouble supporting the mission model.
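A minimal present-day sketch of the kind of fleet-sizing “what-if” simulation being described, with purely hypothetical component lives, attrition rates, and flight counts, might look like the following. It ignores turnaround time, which in the real studies forced several units into the processing flow at once.

```python
# A minimal sketch, with purely hypothetical numbers, of the kind of
# "what-if" fleet-sizing simulation described above.
import random

def units_needed(useful_life, attrition_rate, total_flights, trials=2000):
    """Monte Carlo estimate of how many units of one reusable booster
    component must be bought to support a given number of flights."""
    totals = []
    for _ in range(trials):
        bought = 0
        remaining = 0  # flights left on the unit currently in service
        for _ in range(total_flights):
            if remaining == 0:        # worn out or lost: buy a fresh unit
                bought += 1
                remaining = useful_life
            remaining -= 1            # this flight consumes one use
            if random.random() < attrition_rate:
                remaining = 0         # banged up at splashdown or on the ground
        totals.append(bought)
    return sum(totals) / trials

# Hypothetical case: a part good for 10 flights, 5 percent attrition
# per flight, and a mission model of 12 flights a year for 5 years.
print(units_needed(useful_life=10, attrition_rate=0.05, total_flights=60))
```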
So that's how I got over to Huntsville, and how I began working on
the Space Shuttle Program. Passionate interest in all kinds of aerospace,
including rockets and Apollo and everything. At the time I got out
from college, the Space Shuttle was at the absolute leading edge of
the aerospace business. I've been involved in one aspect or another
of Shuttle ever since—and have been involved in one way or another in every launch, and ultimately in every propulsion project for Shuttle at the Marshall Space Flight Center, either in the role of business
manager, or assistant project manager, or deputy project manager,
or project manager. Those have been my roles so far.
Wright:
Through all these roles, your duties changed, your responsibilities
evolved. Share with us some of the challenges that you faced going
through the roles that you've had, some of the memorable ones, and
some of the lessons that you learned from those that you continue
to use in your management position today.
Chapman:
Probably the greatest challenge in this business is the complexity of the systems, which is a direct response to the complexity of the ground and flight environments in which these systems operate. In a NASA project office—and most of my time has been spent in project offices—there's a need for a renaissance approach to
things. An approach in which you don't focus just on one aspect of
a problem, but you try to be knowledgeable about many, many aspects.
As I have moved between Shuttle propulsion projects—between
solid rocket propulsion (the Solid Rocket Boosters and Solid Rocket
Motors), liquid rocket propulsion (the Space Shuttle Main Engine),
or propellant storage and flow (the External Tank)—there are
many different challenges in each one of those, and each one of them
is an incredibly complex system in itself. They are complex not because we at NASA love complex things; that's not what drives it at all. It's the fact that we deal with an extremely complex physical flight environment.
Any time we talk about the kinds of things that we do in this business
that involve going into orbit around our planet, by definition we
have to begin from a standing stop. In our country's case, that means being bolted to a launchpad that's on the east coast of Florida. Then, after
lift-off of the rocket, some eight and a half minutes later, we're
going 17,500 miles an hour. That, in anybody's estimation, is a gigantic
challenge that requires a tremendous release of stored energy. To
release that energy in a controlled and safe manner that minimizes the risks involved is a tremendous, tremendous technical challenge
from many different perspectives.
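A rough back-of-the-envelope check of that challenge, using rounded figures (17,500 mph is about 7,800 meters per second, and eight and a half minutes is about 510 seconds):

```latex
% Rounded, illustrative figures only.
\[
\bar{a} \;\approx\; \frac{7{,}800\ \text{m/s}}{510\ \text{s}}
\;\approx\; 15\ \text{m/s}^2 \;\approx\; 1.6\,g
\qquad
\frac{E_k}{m} \;=\; \frac{v^2}{2}
\;\approx\; \frac{(7{,}800\ \text{m/s})^2}{2}
\;\approx\; 30\ \text{MJ/kg}
\]
```

In other words, every kilogram placed in orbit carries roughly 30 megajoules of kinetic energy, all of it released from stored propellant in those eight and a half minutes.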
Probably the biggest challenge that I've faced as I've been involved
in different aspects of this is to be able to fully grasp what all
of those constraints are and to be aware of the historical ways in
which we have dealt with these constraints or challenges. Challenges
such as: how do you keep a vehicle together when it's going through
that kind of velocity change? How do you cope with the changes in
going from atmospheric pressure at sea level on the coast of Florida
to the vacuum of space? How do you deal with machines that have temperatures
that on one side of a quarter-inch-thick piece of metal may be 423
degrees below zero and on the other side of that quarter-inch-thick
piece of metal may be 7,000 degrees above zero? What are the challenges
involved in making the systems work and making them work safely and
making them work correctly time and time again?
That's probably the thing I would say has been the biggest challenge
to me: to continually be curious and to always ask probing questions
like, “Why is it done this way? What were the historical lessons
learned that led us to this path? What did we try and if it didn't
work… why not? Why did we try that? Why do we do it the way
we do it now? What are the problems involved in the way we do it now?
What are the soft spots of how we do it now? Are our solutions robust?
Are they really able to cope with changing situations that we hadn't
necessarily thought of when we designed them, or are we successful
just because we have been lucky?”
You never want to have “we were just lucky” be the answer
in engineering. You want the design solution to be successful because
you've put a tremendous amount of thought, analysis and testing into
it, and you know how much margin you have on either side of the middle-of-the-road
answer and how much it can tolerate things being a little warmer,
a little colder, a little faster, a little slower, a little higher-pressure,
a little lower-pressure—how your solution can cope with all
those “off-nominal” conditions.
Wright:
You used the expression “need a sufficient knowledge of the
broader view.” How would you best share some techniques or some
instructions on how new people coming on board, or those that you
feel would be up-and-coming leaders within your area, how do you get
them to grasp that? How do you get them to understand how broad a knowledge base they need to have? What lessons have you learned
that will help get them to understand that?
Chapman:
Let me take you back to something I said earlier when I was talking
about the first real job that I had, which was working at the airport.
The really neat and important thing to me about that kind of a job
and experience—and how it affected me and continues to affect
me—is I got to see up close, real world, “go out and touch
it” hardware designs that were solutions to real-world “cheating
gravity” problems. You could walk right up and look at the physical
piece of hardware, and the longer you looked at that piece of hardware—whether
it was a landing gear strut or a part on an aircraft engine or a control
cable and pulleys or electronic cables and connectors and things like
that—the more you looked at it, the more you could almost read
the mind of the designer that came up with that solution.
The more you looked at it, you could almost decode what the environment
was and the problem that the designer was trying to solve. In a connector
he had little tabs that made sure that it locked in place when it
plugged in so that any environment over the life of the airplane wouldn't
cause it to come unplugged and cause you to lose communication or
something like that. Or the way the threads were designed on a bolt
so that it gave a tighter connection. In other words, not being theoretical,
but actually looking at the practical thing in front of you. Then
from that saying, "What was the theory that led to that? Why do we need to maintain it that way? What was going through the designer's mind when they did that, when they came up with those dimensions or those thicknesses or that type of material?" Starting from the hardware
and moving from the hardware back toward the theoretical numbers-based
understanding.
To me that's critically important, critically important. Too many
times as I've watched people work in this business, I've seen situations
where folks had a stunning grasp of the theory and the numbers involved
in it, but had zero understanding of the actual hardware and of the practical aspects of what all these numbers really meant. So probably more than anything else, I would say that we need to work much harder today than I believe we do on getting people, from the top to the bottom of the organization in new programs, in touch with the flight hardware,
and with the people that build and assemble and maintain the flight
hardware.
There's a tendency in this business to keep theory and practice separated,
and to say that the designers can go off and design stuff, and then
hand their design off to the builders, and the builders go off and
build it. And everybody's happy, and then the designers can go off
and design something else. What you find is when you talk to the builders
that have to build the stuff designed this way is that if great care
is not taken in the design, you can end up with a design that's not
buildable. Not buildable at all. You just absolutely can't make it.
So what happens is the builders have to, in essence, do what's sometimes
termed as “crutching” the design. You modify the design
and change it a little bit here and a little bit there to make it
buildable. There's a process for doing that, but ideally if the design
is correctly crafted to begin with by people who know and understand
what it really takes to build flight hardware, then the design will
have those features built into it, and it will be buildable to start
off with.
Perhaps the most telling thing I could use to illustrate that was
back about five or six years ago I was serving as the technical assistant
to the manager in the Space Transportation Directorate, which is an
organization that no longer exists here at the Marshall Space Flight
Center. It was a really neat organizational experiment in which technical
and project organizations were all under one umbrella rather than
having two separate organizations. This was created in an attempt
to foster good communications between the Project Office and the technical
arm—Engineering, let's say—supporting the project. For a variety of very valid reasons we've gone back to the older way, which separates those organizations in the organization structure at the Center here.
Anyway, I was a technical assistant to the director of the Space Transportation
Directorate. Several folks one morning came into my office, and these
were folks that were senior manager types in the Space Transportation
Directorate. They came in and they all had cups of coffee in hand
and they said, "Gee, Chap, you used to be chief engineer of solid
rocket booster over in Shuttle, right?" I said, "Yeah, sure
did." They said, "Well, how long were you there?" I
said, "Well, I was there for about three and a half years."
They said, "We're really curious. Tell us—what did you
do during a typical day in a flight program like that? What kind of
stuff did you work on?"
I had not gotten my cup of coffee yet that morning—and these
were sharp, sharp folks. I said, "This is a real interesting
question you're posing. But I'm going to turn it around, and I'm going
to ask you to tell me what YOU folks think I did as chief engineer
of solid rocket booster over in Shuttle, this ongoing flight program.”
So they looked at each other and they said, "That'd be easy to
do. Fine." So I erased my whiteboard and said, "I'll be
back in five minutes. Here's a fresh marker and a good eraser. You
have at it, mark on the board however you want, and I'll be back in
about five minutes and you can tell me what you think I did."
So I went to the coffeepot, got my coffee, came back, walked by my office; there was a lively discussion going on, stuff being marked on the board, so I walked around a little bit more, and came back—it totaled about ten minutes. I said, "Okay guys, time's up, tell me what
you think I did." They had this wonderfully esoteric view of
what I did as chief engineer on an ongoing flight program. They said,
"Well, we believe that probably three quarters of your time is
spent by trying to figure out how to get more performance out of your
systems to make them work better and faster and to work with the Project
Office to shave cost off. The Project Office is probably always pushing
you to cut cost, cut cost, cut cost, and do it faster, faster, faster.
You're probably always trying to resist that by saying no, no, we
need better and more technical performance, and this and that and
the other. You're continually trying to enhance the performance of
the system in the face of the tightwads in the budget world that would
try to keep that away from you. Probably that may take even 85 percent
of your time."
I said, "Well, what do you think the rest of my job was?"
"Well, it's probably just a little administrivia type stuff that
you have to do. But that's got to be it. That's the main part of your
job." I looked around. They're all nodding up and down, “Yeah,
yeah, that's what you do, that's what you do.” So I said, "Well,
guys, it's really going to come as a shock to you, but that's not
what I did most of the time as chief engineer in the solid rocket
booster project. And that's not what any chief engineer does in a
flight project.” The primary task of a chief engineer of a flight
project, an ongoing flight project—and most of our big propulsion
projects tend to be long projects—could be characterized by
one sentence and one sentence only. That sentence is: “As built”
does NOT match “as designed.”
In other words, that hardware that you’ve got there on the launch
pad or in stages of assembly is not really what you thought it was
going to be. It's not exactly what the drawing says. Either it's not
what the drawing says the hardware should look like, or the training
that the troops that built it had wasn't exactly like the training
you thought they had, or the ground support equipment that holds the
thing and gets it from point A to point B wasn't designed exactly
the way you thought it was, or the road surface that it had to drive
over wasn't graded to exactly the specifications that you thought
it was and it got a little bit bumpier and put a little more vibration
into it, or the weather that you're going to fly it in is not quite
what you thought it was going to be. In other words, what you envisioned
is not exactly what you got.
They looked at me with just total disbelief. Total disbelief. Their
response to that was, "Well, just make it match! The contractor
has built this piece of hardware so just tell him to go back and make
it match the design and the specifications.” If your design said that you needed three holes in a row exactly one inch apart on this piece of metal, and then you have discovered that the middle hole is a tenth of an inch over to one side—let's say
that that's the problem. Their answer was, "Well, go back to
the contractor and tell the contractor to make it right. We designed
it like it should be three holes like that. Tell the contractor to
make it right." I said, "Guys, you need to spend more time
with the real-world flight hardware.” Because seldom do you ever find out about such a discrepancy while you are looking at a part right when it comes out of the machine as it is being built. The
place that you typically will discover this is after everything has
been put together and built and assembled, and that particular little
part is embedded way down inside the rocket with all kinds of other
value added on top of it.
Someone will be doing a paper search and looking at what was done,
and they'll come across a little indication that that center hole
was over a little bit. To tear the rocket down and make it right,
fix it, is way too expensive, way too costly in terms of schedule,
could incur all kinds of other risk for the other parts associated
around it that are already built up. So now you've got to figure out
how do you cope with that. “Can you fly it like it is?”,
“Do you have to do analysis that says that it's okay for that
center hole to be off by a tenth of an inch?”, “Or is
it truly a matter of: this is unacceptable, and we really do have
to take the rocket apart all the way down to this little piece and
change it out to one that has the center hole in exactly the right
place."
They had zero concept of that. Zero concept, because the concept from
the design perspective is: "I designed it that way, so why can't
you build it that way?" But the real world that we live in says,
"Just because you designed it that way doesn't mean that every
one of them is going to be built that way." You need to understand
that perhaps your design should be sufficiently robust that it will
withstand that center hole being misdrilled by a tenth of an inch.
It goes back to my premise, almost all the way back to my little story
about working at the airport. The more familiar one is with the real
hardware, and what it takes to build and assemble the real hardware,
and how the instructions get to the folks on the shop floor that build
it, and how it gets checked, how those checked pieces go into inventory,
which ones get pulled from inventory to go into the rocket, how you
build the rocket up—the more you understand about that process,
the better and more effective your designs will be that will have
to use that process in order to fly. There's no substitute for that.
In the good old days—around here at Marshall that is defined
as the Saturn program—we used to build a lot of actual rocket
hardware right here at the Center. The way things used to work in
the “arsenal” concept was that we would come up with a basic
design for a rocket or a rocket subsystem internally within the government,
many times assisted with subcontracts to the major aerospace primes,
or even the minor aerospace primes coming on site here and helping
us with designs. We would then build the prototype of that rocket
or subsystem in-house here at the Marshall Space Flight Center, in
our laboratories. So the designers, the folks that were involved in
putting lead on paper (in the good old days before we had CAD [Computer-Aided
Design] systems), would be at most one building away from where the
fabrication was going on, and they could walk right over and watch the work and talk to the people that were getting those drawings and making
those parts. There would be this tightly coupled feedback between
the design organization and the fabrication organization. Realize, we would not yet be in a production environment in which you'd be making maybe 10 or 15 or 20 of them; this would be more of a “pre-production” or almost “prototype” phase.
The result was an engineering design cadre that was acutely aware
of the “buildability” of their designs and a manufacturing
cadre that would produce pre-production hardware that was truly ready
to enter production.
We don't have that today. So the question has got to be, from my perspective,
“What can we do to at least get the essence of that?”
I can't see us going back to the days where we built prototypes for
every single rocket in-house within the government. But we should
be able to set up programs that will allow designers, at certain key
formative stages in their careers, to go spend time at manufacturing
facilities where they will be able to see how we build things, what
we do, how designs get translated into shop floor instructions, so
that as they progress through their careers they can look back and
draw on those experiences and design better hardware because of it.
This is exactly the same way I look back on the time that I spent
twisting wrenches and working on things at the airport those many
years ago.
Wright:
It sounds like that would be an element of planning. Can you share
with us more ideas that you have about things that you've learned
through your career that you would like to see implemented into the
whole element of better planning?
Chapman:
What I'm talking about there in the sense of planning is almost career
planning—“How should we plan for the workforce within
the Agency or within the whole industry that is best postured to advance
us to the goals that we have in the future?” In fact, to me
it goes farther than just the technical aspect of things. I know if
you look over the list of my employment chronology, you'll find a
curious mix of things that I've been involved in. Several places on
there you'll see that I've been involved in business management. I'll
have to confess that wasn't a high-priority choice of mine to go do
that, because there's a tendency in the view of the technical world
that the money stuff is just an absolute necessary evil that you would
rather avoid at all costs and let somebody with the green eyeshade
and the arm garters go off and worry about that, because you'd rather
work on the real “rocket stuff”. But the reality of it
is, it's not the rockets that put the man on the Moon, it was the
dollars that purchased the rockets that put the man on the Moon.
So if you really want to go back to understand the full system—and
it gets back to this broader view that I talked about before—you
really ought to have some knowledge of the dollars, and how the dollars
flow, and what it takes to rationalize and defend budget estimates.
And to be able to convince those who have and control the money that
your ideas and plans are well worth allocating scarce fiscal resources
to fund. In this case the purse strings are controlled by the Legislative
Branch. It's put forward in a budget proposal by the Executive Branch
to the Legislative Branch, and of course we're part of the Executive
Branch, so we have to assemble budgets in support of the Executive
Branch and then defend those budgets as they go forward to the Legislative
Branch in an effort to get the resources needed to build the rockets.
The more people know about the inner workings of that process, the
better the likelihood that we will be able to get those resources
that we need to go build the rockets. I'm a strong proponent of everyone
spending some amount of time in the business management world. The
actual amount of time depends of course on the individual person and what their specific inclination is—but for people who are inclined
to progress up in the project management world particularly, there's
no substitute for spending some time understanding how the resources
flow and how that relates to making the rockets.
Wright:
Do you believe that investment would help in program or project efficiency
overall?
Chapman:
Absolutely.
Wright:
Are there any other ideas that you have that would be good for program
efficiency?
Chapman:
This is going to sound like I'm waving your flag here, because I know
that part of what you're trying to do with interviewing folks that
have been involved in Shuttle is to try to the maximum extent possible
to capture lessons learned and to get those in a form that's readily
passed on to other people so that we won't repeat the errors of the
past and we'll build on the best practices to go into the future.
I hope I captured what you're trying to do.
Wright:
Yes.
Chapman:
We don't do, in my estimation, a very good job of lessons learned.
When we're working on a particular task, that task is all-consuming,
and it's a miracle we even document what we did yesterday or the day
before, let alone step back far enough and document lessons learned. Then when the task ends, by definition we drop all the tools related to that task and run full-blast toward the next task, and we don't
have any time to look back at the old task. So we don't do a very
good job capturing a record of what we did. I think we need to look
hard at figuring out how we make it more palatable to capture lessons
learned. From my perspective, the most interesting way to do that,
and the way that—in my experience at least—has captured
my imagination, is spending time listening to our forefathers. The
folks that have built rockets before us.
There's a really nifty program that has been set up through the NASA
educational world called Project Management Shared Experiences Program,
PMSEP. I was fortunate to go through an early one of those classes.
The first one was up at Wallops [Wallops Flight Facility, Virginia]
for about a week. The next one was down at an FAA [Federal Aviation
Administration] training center at Palm Coast in Florida, again for
about a week. The particular group of current and former project managers
that they brought in to talk to us was fantastic. It was really great.
It was Jim [James S.] Martin talking about the Viking Program, which
was the first soft landing that we made on the planet Mars. That took
place in July of '76—targeted for the same time as the Bicentennial—and
he spoke at length of the challenges that he faced in designing that
and getting this huge contractor team going and facing challenges
that had never been faced before to soft-land on another planet. Fascinating
stories. It was just a classic case of grizzled and hardened project
managers sitting up on the stage almost in the spotlight, lights dimmed
in the rest of the room, just philosophizing from real sketchy notes,
telling war stories. Fascinating. Absolutely just wonderful stuff.
There were many others that talked to us. Martin just comes to mind
because of the challenges that he faced.
That and the other examples, both in that first PMSEP program and
in the follow-on, really set the hook for me to do more research on
my own and go read about these things, to spend time studying about
the trials and tribulations of other programs. The physics hasn't
changed. All the things that Sir Isaac Newton said would happen are
still happening. His laws of motion are still there. We’ve still
got to deal with all those things on a daily basis. Just because the
rocket looks a little bit different—the limitations of physics
are still there. Liquid hydrogen is still just as cold today as it
was when we first started working with it. It still causes all kinds
of headaches today just like it did before. The thrust inside the thrust chamber is still hot and melts things and is volatile and gets out and burns things that you don't want it to burn, just like it did before. It still does that today.
So there can be a tendency to say, "Well, the lessons learned,
those are lessons learned from a different age and a different environment,
and it may not be applicable." My take on that is absolutely
not. They're completely applicable. One of my favorite books is a
book by [Walter] Dornberger called V-2. The book by Dornberger is
the history of the development of the V-2 rocket. It's fascinating. The
first time I read it in depth was when I was business manager on main
engine after [Space Shuttle] Challenger [STS 51-L accident], and it
was fascinating to read the problems that the rocket team at Peenemunde
in Germany was having developing the components of the V-2.
I would read the book by Dornberger, and then look at weekly notes
in the Space Shuttle Main Engine project, and realize they're facing
very similar problems. The turbomachinery has difficulty, the control
mechanisms are troublesome, the reliability of the materials is very
troublesome, we can't get the materials to do what we really need
them to do. Same stuff. If we can package things in a way that ignites
curiosity, and we can motivate people to have the time and take the
time to read about it and to learn about it, I think the quality of
our decisions and therefore the quality of our hardware and the efficiency
with which we get to our end goals will be significantly enhanced.
Wright:
You've worked with so many people as a manager and, of course, as
an employee. Talk with us for a few minutes about where you suggest
improvement for manager performance. How do you motivate those people
who you are working with, that you're responsible for? How do you
build the teamwork in those efforts that you need to get a project
done and done well?
Chapman:
That's a great question. I've been really lucky to have worked with
some just superb folks over the years, just superb folks. Folks that
I treasure as friends and I treasure as mentors and examples in terms
of how they manage. I'll list a few of them here just so you know
who I'm talking about. Tops among them would be folks like Bob [Robert
E.] Lindstrom. Bob Lindstrom was the head of Shuttle projects here
at Marshall during all the development years of the Shuttle, up until
about the 15th, 16th, 17th flight of Shuttle. So all of the propulsion
element project managers reported to Bob. Bob still lives here in
Huntsville. He’s a superb technical manager. He doesn't get
in and work the technical problems for his managers, but he makes
sure they have the resources that they need to do their jobs, and
then keeps things off of them where he can, to shield them from so
many of the things that could be distractions. He would answer the
questions coming from above that he could answer to keep his managers
focused on their tasks without undue distractions.
Then there are the very detail-oriented technical managers. Folks
like Joe [Joseph A.] Lombardo. Joe was the main engine project manager
after Challenger. Had been involved in liquid rocketry for years and
years and years, first for the Army and then here within NASA at Marshall.
What an incredible intellect. Dr. Lombardo went to MIT [Massachusetts
Institute of Technology, Cambridge, Massachusetts]. Just great guy,
wonderful manager. Extremely personable. Knew the rocket, the main
engine for Shuttle, inside and out. Incredible grasp of the history
of why certain decisions were made in certain microscopic little places
inside the engine that improved reliability and got us away from problems.
Could always ask probing questions in meetings that were phenomenal.
Learned a lot from him.
Another one would be a great guy named Royce Mitchell. Royce Mitchell,
who also still lives here in Huntsville, has the distinction of having
managed the biggest rocket and the smallest rocket within NASA. The
smallest rocket was a little electric-powered thruster that was developed
down here in the laboratories that produced a fraction of an ounce
of thrust. When he retired from the Agency, he was the project manager
of the advanced solid rocket motor, which was being built over in
Iuka, Mississippi, and that was to be the most powerful single human-rated
rocket motor that the Agency had fully developed. I was fortunate
to be his deputy at the time of his retirement. Both ends of the
spectrum. A phenomenal manager, extremely personable in addition to
having an amazing grasp of the hardware and the theory. Royce also
has absolutely the best sense of humor of any manager I’ve ever
been associated with.
I hope you're sensing a common thread here when I talk about the hardware
and the theory, and hardware and theory, and hardware and theory.
All three of those managers I just mentioned, all had a fantastic
grasp of both. I truly treasure the ability to work closely with those
great managers—even as I've progressed on in my career, I've
been able to tap the resources that these managers can provide. We
recently did a study within the external tank project, which I manage right now, in which we assessed the internal decision-making environment
within the project. To help us, we were able to bring in the original
manager of the External Tank Project, a phenomenal guy by the name
of Jim [James B.] Odom. Jim also works in town here although he lives
over in Decatur [Alabama]. He was the original manager of external
tank up through about the fifth or sixth Shuttle flight.
Jim then went on to become a spectacular manager in many other areas.
The Hubble Space Telescope manager here at Marshall. He later moved
on to become a manager in [International] Space Station. In fact,
when he left the Agency, he was the Associate Administrator in charge
of Space Station up in Washington. He subsequently retired and came
back to live here in North Alabama where he works as a consultant.
We got Jim Odom and Bob Lindstrom to come in on this consulting task
and interview all of our people in the Project Office here to then
give us a report and an assessment of how we were doing, were we asking
the right questions, were we giving our people the right insight,
the right authority, the right respect, the right education across
the board. Very helpful, extremely helpful.
Again, trying to look at the lessons learned from a little bit different
perspective rather than an individual kind of thing. You go tap the
people that have the lessons and get them to come in and look at what
you're doing and see how those lessons can be applied. I think that's
something that is very beneficial. I guess from my perspective, in
terms of philosophy and how I go about doing these things, I try to,
where I can, push my people out in the limelight and try to, in general,
stay back away from it. Unless it's certain things that I know they really don't like at all and that I'm very comfortable with. If it's
those kinds of situations I'll bring them with me, but not necessarily
push them out in front.
A classic case might be dealing with the media. Most engineering folks
have this dread of dealing with the media, which is probably well-founded.
But that's never bothered me at all. My mom was a teacher for 40 years.
You can probably tell that. Probably doesn't surprise anyone. So I
approach that as being this is my chance to teach the media what they
don't know; I can tell them about this neat stuff we work on. Whereas
other folks just go into lockup, I must admit, I enjoy talking to
the media on most occasions. But other places, like particularly where
it's technical expertise and dealing with a contractor or dealing
with other parts of NASA management, I really try hard to get my folks
out in front and involved.
I guess there's another aspect of employee development. Of course,
there's always the one of MBWA—management by walking around—in
which you try to talk to people, spend time in their offices, talk
to people on the factory floor, talk to the engineers at the contractor,
talk to the engineers in the laboratories here at Marshall, go on
tours of the neat capabilities that exist across this Agency as much
as you can. The time spent doing those things is extremely valuable…
time that ought to be spent just to broaden your view of things.
The other aspect of this is—and this is Chapman's management
philosophy maybe—if I was going to divide the world of folks
that we work with (and that work for me and that I deal with) into
two big categories, those categories would be: the people that thrive
on structured problems and the people that thrive on unstructured
problems. There's a place for both of those in the world of project
management. Overall, it's probably best for us as managers to try
to move our people from a comfort zone of thriving on structured problems—and
I'll describe what I mean in a minute—toward the direction of
being more comfortable in dealing with unstructured problems.
When I talk about structured problems and unstructured problems, what
I mean is: If I'm going to give you a structured problem to work on
I'm going to say, "Well, we got this problem. Here it is as I
understand it," I'm going to describe it to you in gory detail,
and I'm going to say, "and we need to go move it toward a solution
that does this, this, and this." Let's say it's writing a report
or something like that. “We got to write a report, and it's
got to address this point, this point, this point, this point. It's
going to go to these five people over there and their backgrounds
are this, that, and the other, and this one will barbecue you on that
point, this one will barbecue you on that point, this one will be
asleep during the whole thing. The raw material that goes into that
report, there's that document over there, there's that document over
there, there's a research task that's going on that'll be finished up in two weeks. It'll be just a bunch of notes that come out of it, but
you can use that to help write this.” Basically lay out all
this stuff. Say, "Oh, and by the way, when you write it you need
to put it in MS Word format, and it needs to be readable by MS Office
2000, and you need to be sure you do backups." Really define
everything, structure the problem to the nth degree. There are lots of folks that really thrive on that, getting a problem presented to them that has a very high degree of structure, of “Here's the
framework that you need to use to solve your problem.”
There are other folks who really thrive on unstructured problems.
An unstructured problem may be, "We've been asked to write a
report. I got an e-mail that said we’ve got to write a report
on this, that and the other, and it needs to be finished and sent
in by such and such a date. I don't have a clue who's going to have
to read it. I don't have a clue what the inputs need to be. But we
got to write a report. The reputation of this organization is going
to hinge on that report. Have at it." That's an unstructured
problem, in which you give the person the freedom and latitude to
go research what are the inputs I need, what are the constraints,
what are the parameters that I operate within to do this, how polished
does the final product need to be, what's the accuracy of it. Obviously
it depends on whether you're talking about designing a piece of hardware,
solving an analytical problem, writing a report, whatever. But still
the concept of a structured assignment versus an unstructured assignment
is valid.
From a project management perspective, while we need both of those,
clearly I would rather have folks who are most comfortable in the
direction of the unstructured problems, because as part of that they
feel they need to understand how that task fits into the broader scheme
of the project. As a result, not only will they be doing that task well, but they'll be working in a way that is synergistic
with the goals of the project overall. Whereas if it's up to me to
structure the problem and I miss something, then the response can
be, "Well, you didn't tell me it had to be in Arial font. Times
New Roman was what I thought you wanted, and I did it in Times New
Roman. Now you're telling me that Arial was all they would accept,
and you should have told me if you wanted it that way."
My answer to that would be, "I would have thought maybe you would have asked what kind of font it should be in, and what the requirements and the style manual said to do, rather than waiting to be told." Those
sorts of things can get you into almost what a former boss of mine
used to wonderfully describe as “malicious compliance,”
in which they're maliciously going to comply with what you directed
them to do and almost take joy in the fact that you didn't supply
tight enough specifications. And, therefore, the failure of the product
is on your ticket and not on theirs.
Whereas with an unstructured problem—associated with that is ownership of the problem and the solution—malicious compliance is not an issue. So one of my roles as a manager is to try to move people,
move their comfort zone—through assignments and other coaching,
mentoring, all this—out of the structured problem arena and
toward comfort in and satisfaction in and pride in the use of an unstructured
approach, in which they are starting to take ownership of the broader
view and goal of the project and saying, "What do I need to do
to help?" It's like having somebody help you in the kitchen.
It's a whole lot easier to have somebody help you in the kitchen if
you just look around and all of a sudden that task is being taken
care of and you didn't even have to tell them to go take care of that
task. As opposed to giving explicit instructions, and sure enough
that explicit instruction got complied with, and the next thing I
know they’re standing in the corner saying, "Well, you
haven't assigned me another task yet."
That's a biggie for me in project management, and to me that's applicable
across everything that we do—to try to move them toward that
way. It also ties back into the business about flight hardware. Folks will be better at solving unstructured problems—and the solutions that they come up with for those unstructured problems will be better—if they continually accumulate an understanding and a background of the hardware and the hardware processes in advance of when they may need them to go solve a problem. It's like if you're
out in the boat and the boat starts sinking, that's not the time to
start pondering whether you should have learned to swim or not. It's
back to that kind of stuff.
Wright:
Speaking of boats and things, it leads into risk, because so much
of what you've done and the decisions that you've had to make through
the years always have underlying elements of risk that you have to
take in consideration. So share with us about risk assessment and
also risk mitigation.
Chapman:
Great topic. The way we do risk-type things in this Agency is one of my pet peeves. There's a strong, strong tendency, from my perspective, for us to not adequately differentiate between our risk assessment tools and our risk communication tools. We will go into never-ending round-and-round-and-round discussions about the likelihood and consequence of a particular failure mode, about what that value is.
“Is it very likely, less likely? Are the consequences higher
or lower? Whether you should put the checkmark in this box or that
box.” We will spend tremendous amounts of time arguing and going
round and round and round to decide which one of those boxes should
it be placed in. Without modifying the hardware.
The hardware is sitting there on the launchpad or in the processing
flow literally laughing at us while we go through these big arguments
about “No, no, no, the X ought to be in that box.” And
the real issue is: does the decision maker that is going to decide
it is time to “push the button” and launch this thing
or not launch this thing—does that decision maker understand
what the risk is? Not which box the checkmark is in, but have we adequately
communicated that? And once we've identified that there is some controversy
over how risky it is, as long as the decision-maker has the inputs
to say it's somewhere between here and here, and here are some arguments
on this side and here are some arguments on that side, then it's time
to quit arguing about it, because there's no more value added in that.
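The “boxes” in question are the cells of the likelihood-by-consequence risk matrix used in risk reporting. A minimal sketch of such a scoring scheme, with made-up thresholds rather than any official NASA scale, might be:

```python
# Minimal sketch of a likelihood-by-consequence risk matrix of the kind
# referred to above. The 5x5 grid and the color thresholds here are
# illustrative placeholders, not an official NASA scale.
def risk_cell(likelihood, consequence):
    """Map 1-5 likelihood and 1-5 consequence scores to a matrix cell
    and a coarse reporting band."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = likelihood * consequence
    if score >= 15:
        band = "red"      # typically demands mitigation before flight
    elif score >= 6:
        band = "yellow"   # accepted with rationale and tracking
    else:
        band = "green"    # accepted risk
    return (likelihood, consequence), band

# A failure mode judged unlikely (2) but catastrophic (5) lands in
# cell (2, 5); under these placeholder thresholds it reports yellow.
print(risk_cell(2, 5))
```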
Along those lines, one of my real pet peeves is that our scarce resource—particularly
in the Shuttle Program right now as we're winding down, or in any
new program in NASA—our scarce resource is brainpower. The question
should always be “how much brainpower should we devote to this
problem?” If we surround a nonproblem or a minor problem with
more brainpower than that problem deserves, we are doing a huge disservice
to this other problem over here, which may be crying for assessment
and attention. But we're not looking at that because this is the “problem
du jour” and we're really focusing on this one. We really need,
from a risk standpoint, to frequently back off as we're talking about
risk and managing risk and reporting risk, and ask ourselves, "Do
we have this in the right perspective?" There can be a tendency
to say, "Well, the reason that we're focusing on this particular
risky area is because there's a whole lot of interest and talk and
discussion about that." That shouldn't be the basis for deciding
whether this is risky or not, just because a whole lot of people are
talking about it.
We ought to be talking to the smart people who know the systems and
say, "What keeps you awake at night? What are the risky areas
from an “expert practitioner” standpoint? What are the
risky areas that bother you?" If you talk to that expert and
they say, "This area over here that is the current popular high-risk
topic doesn't bother me at all, because I know and understand the
hardware.” It all gets back to that, there's no substitute for
that knowledge. "Because I know and understand the hardware,
I'm not worried about that one. I am worried about this one over here
that nobody's focusing on because it doesn't happen to be the popular
one of the day." Or, worse than that—from a risk standpoint—if
we oversubscribe the brainpower in looking at problems that have already
been identified, then there is no brainpower left to contemplate the
areas that we haven't thought of yet.
Wright:
Can you give us an example of where you might have been able to apply
this theory for risk mitigation?
Chapman:
Probably the most immediate example, to those of us in the External
Tank Project, would be with the little bit of foam on the tank that
we call the ice/frost ramp. An ice/frost ramp is a wedge-shaped piece
of foam that is sprayed onto or molded onto the outside surface of
the tank, and it goes around a little metal bracket. The metal bracket
is attached to the metal part of the tank, the part that's covered
with foam all over. The lower part of the tank has got liquid hydrogen
in it. It's 423 degrees below zero. So any metal that comes in contact
with stuff that's 423 degrees below zero gets real cold, and then
any metal that's in contact with that little bracket sticking out conducts the cold—my thermodynamics guys would get mad at
me for saying “conduct cold,” because you don't conduct
cold, you conduct heat—it pulls the heat out and gets rid of
the heat so the bracket gets real cold.
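In textbook terms, that is just one-dimensional heat conduction, Fourier's law, with generic symbols rather than actual External Tank values:

```latex
% Fourier's law of one-dimensional heat conduction: heat flows from the
% warm outer end of the bracket toward the cryogenic tank wall at a rate
% proportional to the temperature gradient. Symbols here are generic,
% not actual External Tank values.
\[
\dot{Q} \;=\; -\,k\,A\,\frac{dT}{dx}
\]
% k      thermal conductivity of the bracket metal
% A      cross-sectional area of the conduction path
% dT/dx  temperature gradient along the bracket
```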
In the case of the Space Shuttle, if the bracket gets real cold and
the bracket is exposed by itself, just a piece of metal sticking out
there real real cold in humid Florida air, little beads of moisture
form on the outside of it—the same reason that your can of
Coca-Cola gets wet. If it got real cold it would make those little
beads of water on your Coke can turn into ice, and that's exactly
what would happen to a bracket sticking out in hot humid Florida air
if you didn't surround it with foam and insulate it to keep ice from
forming on it. And ice on the outside of the tank is bad because it
could possibly break off during ascent and damage the orbiter. So
what we do is we pour some foam around that bracket that has to stick
out there because the bracket holds some metal pipes and a cable tray.
You need the bracket, but you don't want the bracket to form ice on
it. So we surround that bracket with a piece of foam, and this little
wedge-shaped piece of foam is called an ice/frost ramp. We've got
a whole bunch of these ramps going all the way down the side of the
tank.
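[Editor's note: to make the condensation physics concrete, here is a minimal Python sketch using the standard Magnus dew-point approximation. The air temperature and humidity values are illustrative assumptions, not External Tank analysis numbers.]

```python
# Minimal sketch of the "Coke can" effect described above, using the
# standard Magnus dew-point approximation (illustrative numbers only).
import math

A, B = 17.62, 243.12  # Magnus coefficients (temperature in degrees C)

def dew_point_c(temp_c, rel_humidity_pct):
    """Surface temperature below which moisture condenses out of the air."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# Humid Florida launch-pad air, say 30 C (86 F) at 80% relative humidity:
print(f"dew point ~ {dew_point_c(30.0, 80.0):.1f} C")  # ~26 C

# A bare bracket chilled toward -253 C (-423 F) by the hydrogen tank sits
# far below both the dew point and freezing, so moisture condenses and
# immediately freezes unless the bracket is buried in insulating foam.
```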
We went through a big, big technical discussion about the potential
of shedding foam from ice/frost ramps because we saw a previously
unidentified failure mode with that foam on the ice/frost ramp, specifically
with the way that foam touches the foam underneath it and then the
metal of the tank below that. We noted some things in the tank that
had been filled up with liquid hydrogen several times at the Cape,
and then we decided not to fly it for a variety of reasons, and it
got shipped back to the manufacturing plant in New Orleans [Louisiana].
As a result, that was the very first chance we'd ever had to look at a full-size, honest-to-goodness flight external tank that had been gassed up and drained and gassed up and drained and gassed up and drained with this 400-and-some-degrees-below-zero liquid. We got it back into the factory in New Orleans so we could look at it real close and see how it actually performed.
When we started dissecting some of the foam and looking at it and
cutting it up and seeing what was there, we noticed some places where
that foam had come loose. That caused great alarm on the part of some
folks that said, "Gee whiz, if that comes loose in flight, then
that piece of foam will come off, and here we go back down the path
to [Space Shuttle] Columbia [STS-107 accident] again." So there was a big ordeal about what to do about it: “We need to redesign the ice/frost ramps to keep this from happening.”
It turns out that early in that process of redesigning the ice/frost ramps, those of us in the Project Office, as well as those within Marshall's engineering organization, realized that the effects we saw there on the launchpad, on the one that we dissected and took back to New Orleans, really were not applicable to what you would see in flight. The aspects of the physics that would cause that piece of foam to come off are really not present during flight. While this was all going on, our test programs were increasing our understanding of the physics, and the more we learned, the more we could see that it really wasn't nearly the problem that lots of people thought it was, because the real-world flight situation would never get you to a point that would produce the forces that would cause the foam to come off. Even if you had cracks like we saw in the foam, the forces would not be there to cause the foam to come off.
Unfortunately, things had already been set in motion that caused a
tremendous amount of work to be done in that area. Even though there
were other areas on the tank that we were struggling with and anxious
to put resources on to mitigate risk in other places, the die was
cast that we needed to go focus on risk mitigation in the ice/frost
ramp area. We did, and one of the high points that I have on my little list here of success stories would be the lessons that we learned from dissecting that one that came back to the plant. That particular
one was called ET-120 [External Tank], which ultimately flew on STS-120.
We repaired it, we dug out those cracked areas that we found on there, and we came up with a new way of applying foam in those areas. We don't think that the cracks recurred. But our understanding of the physics was such that we could say that even if they did recur, the foam would not come off. Then we flew it, and the
flight performance was spectacular, and we didn't lose a single piece
of any of the ice/frost ramps all the way to orbit. It was confirmed
by the separation photography, so it was a spectacular success story.
The problem, though, from a risk mitigation standpoint, is that the investigation and its subsequent redesign activity was a huge consumer of resources, and it consumed resources in a way that we're still paying the price for today, because it pulled people away from working on downstream tanks that are still to be delivered. As a result, delivery schedules downstream have had pressure put on them to move farther out, which is just the opposite of what we want to do. We want to deliver them sooner, because right now—I'm not sure if you guys are aware of this or not—the critical path to completion of the Space Shuttle Program
is delivery of external tanks. That is the critical path. Everybody
else has got their stuff there and ready and okay to be assembled
into flight vehicles.
External tanks are still being built, and we are absolutely hand-to-mouth.
Finish one at Michoud [Michoud Assembly Facility] in New Orleans,
put it on the barge, ship it to the Cape, stack it, and fly it. There
are no extra tanks in there in surge flow, and so we are absolutely
hand-to-mouth. For the Hubble rescue mission we have to deliver two
external tanks—because there will be two vehicles on the pad
at the same time—the Hubble flight and the potential rescue
flight. Since we don't have the lifeboat capability of Station when
it goes to Hubble, we've got to have the rescue flight on the pad
ready to go no more than seven days later. So that means we've got
to have two external tanks down there together, as opposed to being
spaced out by the flight-to-flight-to-flight time that we have had
in the past, which is causing us to have to really put resources on
those next two tanks, and so the subsequent tanks are going to suffer
a little bit in their delivery schedules.
My overall point in this: in terms of your question on risk reduction,
I don’t believe we do a very good job of looking at risk in
a relative or global sense. I believe we should say, "Gee whiz,
I may be minimizing risk in this area, but when I do that, what's
my effect on risk overall in other places?" By putting my resources—my
scarce resource, the brainpower—on this risk and not distributing
it over other risky areas and looking at what I can do there, or by
not having some free time for my risk assessors to sit back and say,
"What have I not thought of yet that may be causing me risk?"
Am I really increasing my risk when I think I'm decreasing it? Because I'm really putting it all on an area that may not be the real problem area. I believe we really need to rely more on our expert practitioners to advise us on where we need to focus risk mitigation activities, and not just let it be wherever the thundering herd happens to be going right now.
Wright:
We can jump a little bit more to the future, because we've talked a little bit about what we need, or what would be good to instill in the next group that's going to come behind you to be the leaders of the Space Agency. Could you share a little bit more of your thoughts about how best to train and equip this next generation of leaders? Including in that, how would you teach them how best to know who to trust as experts, and how to trust the people that they choose to work with them to get their missions accomplished?
Chapman:
That's also a great question. I’ll confess I’ve got real
mixed emotions on this one. I could be characterized as being the
ultimate Shuttle hugger here, so you have to take this with a little
grain of salt, because it's a spectacular vehicle. It's time to move
on. There's no question about it. It's time to move on. But I feel
that we are missing a spectacular opportunity in the waning days of
the Shuttle Program, which has been—even despite two terrible accidents—by all accounts an unbelievably successful program.
If I look at the capabilities that Shuttle has brought forward, the
advancements in the state of the art, the fostering of creativity
and out-of-the-box thought across the board in the development of
the flight vehicle itself, in the payloads that we've flown, in the
impact on education—across the board, it's been a spectacular
success. If we simply fly out these next ten, eleven missions and
then dust our hands together and say, "Ain't we great. We did
a wonderful job. Thank you very much, go read about it in the archives,
go look at the pieces in the Smithsonian," we've missed the boat.
We’ve truly missed the boat.
What we ought to do, in my estimation, is look at the remainder of
the Shuttle Program as a learning laboratory. Right now, it is the
only human spaceflight program outside of the Russians, and the small—but admittedly potentially getting bigger—program that the Chinese have. But it's the only routine large-scale human spaceflight program
that exists, certainly in this country. When it ends in 2010, there's
going to be a gap between this program and the next one. While the
next program is coming along, they really haven't been able to take
the time to look at Shuttle in detail and learn from it in terms of
“What does it really take to cheat gravity on a day-in-day-out
basis? What are the pitfalls? What are the high points, the low points? How do you deal with the people? How do you deal with the technician workforce? How do you deal with the physics? How do you deal with the weather?”—all
these things that it takes in the real world to make one of these
programs work. If they can’t take the time to really learn these
lessons, then I believe we've missed the boat. Shuttle is the absolute
perfect “learning laboratory.”
In my estimation, it would be marvelous if we could, in some way, cycle as many people as we could through our ongoing manufacturing programs—in our case, the external tank down at Michoud, which will still be building tanks right up until 2010—and through the launch site at the [NASA] Kennedy Space Center [Florida]. We should
consider setting up a specialized educational program where people
could come and spend, say, a two-month, three-month, four-month, five-month
internship. They could spend time in a structured formal educational
environment that would be at those locations so that they could really
see and absorb: What does it take to build the hardware? What does
it take to make this work? What are the systems by which we disposition
nonconformances? In other words, places where as-built doesn't match as-designed.
How do we get around that? What do we do? How do we still come up
with a flyable product? What are the meetings that are involved? What
are the decision boards that we use in the factory? Who do we have
to convince that our hardware is good and what's the rigor that we
go through to do that? If we don't use these remaining several years
as a learning laboratory to get people up to speed, we're going to
really miss things, because what's going to happen is: either those
techniques and approaches will have to be reinvented with a new program,
or they'll fall through the cracks.
We'll have serious problems. Big investigative boards will be set
up, people will be tortured (as a project manager would say), and someone will say, "Well, lookee here, if I go back to NSTS 07700"—which is the bible of the Shuttle Program—"and look at volume yak-yak, section yak-yak, paragraph yak-yak of 07700, it says set up a program that does duddle-uddle-uddle-uddle-uddle-duh. Where is that on the new program?" Either the answer would be,
"Well, we looked at that and we decided that was too inefficient,
and we didn't want to do it," or, "We didn't know about
it."
What I worry about is there can be—and I've already seen some
of this—a tendency to say, "Oh, that crappy old Shuttle.
The reason that we're building this new thing is because the Shuttle
is so inefficient. It takes so many people to go do it, it takes billions
of dollars to fund this thing on a year-to-year basis, and we only
get a handful of flights out of it. We're going to come up with a
new system that uses far fewer people and all that.” Which is wonderful, and I applaud that idea; that is a great approach. What I think is missing is that we've got to be careful that we don't throw the baby out
with the bathwater and just say a priori, "It's that crappy old
Shuttle, we're going to do something different."
To me the better way is to say, "Well, why did Shuttle do it
the way they did it? What's the history they brought forward that
caused the management systems, the evaluation systems, the shop floor
control systems, the way they build hardware—why do they have
the systems they have?" Because once I understand how it works, then I can say, "I can come up with a better system that will capture the essence of those lessons learned, and it will do exactly that, utilizing the newer technology that we have today with IT [information technology] resources and all that”—a better way to do it that will keep us from falling into the potential inefficiencies of the type the Shuttle has, but will still capture the essence of what Shuttle is doing.
I sense that in some places this is actually going on, but in a lot
of places it's not. I think that if we just let this remaining two
years go by, and we don't utilize Shuttle as a learning laboratory
to teach the future generations, we're really missing something. There
obviously are arguments on the other side. These are principally,
"I don't have time to study all that old history stuff because
I got to go develop the new system. If I were to take half of my resources
that are off developing the new system and send them to school on
Shuttle, then I'm not going to meet my development milestones."
So they'll just have to pick up this knowledge at the water fountain,
and ask when they put their silverware on their tray at the cafeteria,
or talk to the guy next to them in line at the Credit Union and say,
"Why'd you bring that change request forward on this?" and
learn it that way. Is that really the way we want to do this? I don't
know. There's no good answer for it. But I think that's something
that we need to strive to do. I also think that things like the Program Management Shared Experiences Program are a way of doing this that is very conducive to attendance and participation.
The way to get very little participation or grudging participation
at best is to tell people, “You got to go through all this preparation
work to get to go to one of these classes, and you got to document
all these things, and I want you to go pull all this from your archives,
and here's a whole ton of reading material.” What you're going
to get is malicious compliance again, and folks are not going to really
get from it what they should. If they show up at some of these classes
and they find themselves saying “Wow, this is really fascinating!
This is like sitting in an armchair next to an old friend, having
that old friend reminisce about the important and critical lessons
learned. And I find myself taking notes, because I want to take notes
on it because it's really fascinating. And I want to find out what
are the references that he's discussing and where can I get a copy
and does Amazon still sell them?” That's the right way to do it: make lessons learned forums extremely palatable and interesting.
For instance, our former manager of Shuttle, [N.] Wayne Hale [Jr.],
as part of the Shuttle Program Management Council, ordered and distributed
a series of books for his managers to read. The most recent one he
gave us was called Angle of Attack: [Harrison Storms and the Race
to the Moon by Mike Gray], and it's the history of Apollo, specifically
about Harrison Storms. Wayne sent those copies out to everybody, and
I was particularly enthused because he autographed the copy he sent
to me. It said, "As a history buff, I think you'll really enjoy
this one." Fascinating story. Absolutely Fascinating story. Plus,
when I looked through the references books that the author lists,
it turns out that probably half of them were favorites of mine anyway.
So I'm automatically going to like it.
Through that sort of thing—and that's a little bit tough, because
it's a reading assignment. But it's a reading assignment that sets
the hook very quickly because it's such a readable book. By having things like that, and intentionally making time to do the reading, and making sure that the managers of both the existing programs and the new programs know that it is important to look at lessons learned
and history and how things need to flow from one program to the other
and not just to be so consumed with delivering the product—and
I'm not implying that delivering the product is not important. Absolutely
it is; that's our most important product, and what we deliver has
got to work, it's got to fly.
But we need to make time to learn lessons and to think about what's
the best way to go forward. If we're so oversubscribed with what we're
doing with the tasks at hand that we don't have time to reflect on
“Do we have the right tools? Do we have the right perspective
on history? What does history tell us about how we ought to go forward?”
then our product is not going to be as good as it should be.
Wright:
What do you believe is the hardest lesson that you've learned in your
career? Or maybe the best lesson?
Chapman:
Probably communication. Situations where you form your method of communication—the
use of anecdotes, or the way you phrase things, or the depth of detail
that you go into—largely based on your experiences and what you've done and where you've come from and what you've been involved with. Obviously everybody comes from a different background and different
things and all that, and I'm not talking necessarily about technical
communication in the sense that “Here's a list of specifications,”—those
tend to be pretty routine and fairly cut and dried. What I'm talking
about is status information of where a certain program is, or problem
areas, or what we're doing to improve the production flow, or how
we're addressing problems—more soft-side type things rather
than the hard technical communication. There can be many instances
where you think you're communicating, and then you find out later
on that you really haven't communicated to the extent that you thought
you did. We can see that across the board in almost everything that
we do.
I think it's probably very valuable to continually look for and seek
examples of both sides of that coin… examples of where there
was communication taking place that was totally ineffective, and what
lessons can be learned from that; and to see communication that was
taking place that was phenomenally successful and what can be learned
from that. To me, one of the classic examples of a great way to do
communication—and he would probably die if he knew I was telling
you this—is a guy who used to be an astronaut. He left the Astronaut
Corps and is now working in private industry. His name is Jim [James D.] Halsell [Jr.]. Jim was an Air Force Colonel who for a good while was the launch integration manager for Shuttle down at the Cape. As part of that assignment, he chaired the Noon Change Board [Daily Program Requirements Control Board], which occurs at one o’clock Eastern Time. (Which is funny in itself, because people say, "Well, no wonder the Shuttle Program has got problems! You run this thing called the Noon Board and it occurs at one o’clock." That's right up there with the “Who's buried in Grant's tomb?” question.)
Anyway, Jim ran the Noon Board, and I really enjoyed working with
him—back to this communication question—because of the
way he would run this board. The launch integration manager has the responsibility to run the program-level decision-making at the launch site, and there were some very thorny issues discussed in this forum.
Typically items such as “As built does not match as designed. What are we going to do about it?” You got to really listen to
all the people that are involved. You may not know the individuals
that are talking, you may be intimately familiar with the individuals
who are talking. You may not know the particular subject, you may
be intimately familiar with the subject. The whole range, the whole
gamut.
Every time that Jim would run this thing, there would be some thorny
discussion, sometimes taking just a few minutes, sometimes taking
an hour or more. Every time, at the end of every issue, Jim would
summarize it. He would say, "Okay, here's what we just talked
about." He would say, "These were the issues that were brought
forward. Here were the topics, and this is the way we dispositioned them.
There was discussion this way and that way and here are the action
items that came out of it, and here's the decision. Any comments?"
More times than not, somebody would say, "Well, now wait a minute,
did we really do it this way? I didn't really understand it that way,"
and so they'd go through some more clarification and discussion about
it. After that happened, Jim would go back through and summarize it
again: "Okay, one more time." Now if there were things upon
which there was agreement, he would hit those really quickly, but
he would focus on that topic that was reexamined right there and say,
"Okay, and we now just have clarified that duh-duh-duh-duh-duh-duh
on this topic. So the actions still are boom, boom, and boom, and
here's the decision. Any questions?" Every time, like clockwork,
every single issue was done that way.
As a result, when you got through with a Jim Halsell-run Board, there
was wonderful communication. You knew exactly what had been decided
because the feedback to all the participants was alive and well. So
many times in this business we will be so rushed and hurried that
we will not seek feedback on our discussion topics, and as a result
there can be the assumption of communication when it really does not
exist. I would say that in terms of most important lesson learned,
it would be that lesson from Jim Halsell. Like I said, he would probably
fall over on the floor right now if he knew that I was talking about
that. He lives up here in Huntsville now, works for ATK [Alliant Techsystems],
so I see him more frequently than I did for many years. Great lesson—that
we need to focus on communication; that in many cases that can be
where things fall through the cracks.
Wright:
Before we close, I wanted to give you a chance to look at your notes.
I didn't want to lose any other best practices or sound ideas that
you'd like to pass on in this conversation.
Chapman:
I think we've covered most of these things. I mentioned some names to you—Royce Mitchell and Joe Lombardo and Bob Lindstrom and Jim Odom. There's another guy here in town that, if you needed somebody to talk to, he'd be a great resource: a guy named John Thomas. John Thomas worked for Marshall for many years and retired back in the late eighties. He worked in private industry for Lockheed for a while, and now he's part of a consulting partnership here in town and has lots of key project management experience.
Again the ones that I'm mentioning to you—specifically, Royce,
Joe Lombardo, Bob Lindstrom, John Thomas, Jim Odom—they're particularly
good resources because they've worked on both sides of the contractual
fence. They've worked on the government side for many many years,
retired, gone to work on the contractor side, and have been in very,
very influential positions addressing issues from that side of the
fence. So they provide a perspective that is invaluable to folks that
have not worked on both sides. I'm fortunate that I've worked on both
sides of the fence. That really helps you understand what it takes
to get this done, which is a real partnership.
Just some basic little tidbits: Frequently we have failure investigations.
We either have a problem—sometimes it occurs during flight and
it's an in-flight anomaly type investigation following flight, or
sometimes it can be something that we've discovered during checkout
and planning before we get ready to launch—and we have a big
investigation and a big hurrah about how we're going to do things.
There can be a tendency, whenever we have one of those, to not adequately
staff the investigative body. We typically overdo it in terms of the
technical resources that are available to help out, but we drastically
underdo it in terms of the administrative help.
A lesson learned from my vantage point is that whenever we set up an investigative team, we need to make sure that team has adequate administrative help for making charts, producing documents, tracking action items, setting up meetings, getting out minutes… all those things. Because if we make the technical leadership of the investigation responsible for doing all the administrivia too, the administrivia is going to suffer, and thus the end product is going to suffer.
So I think we need to really do a better job at making sure that we have that critical resource. On the investigative teams that I've been involved in that have worked very well, the common thread has been twofold: one, clear and concise leadership of the investigative team; and two, good administrative help that can set up websites to distribute information—now that we're in that age—good production of presentation material, and good coordination of telecons to get it going and talk to people, so that the leaders, the technical leadership, do not have to worry about those problems.
Another thing dealing with investigations and things—we need
to always put at the highest point of our list of things to think
about when we set up investigating groups, Tiger Teams, to go solve
problems—clear and concise definition of roles, expectations,
and limitations of the investigation. Because if not, these investigations
can tend to almost take on a life of their own. All the members of the investigating team can be unclear as to the extent of the expectation:
Are they supposed to solve this specific problem on this flight vehicle?
Are they supposed to solve it for now and forever more? Are they supposed
to solve it in this area and like areas? “What do we need to
do?”
If all the team members don't have that clear charter in mind, and
if the leadership can't continually go look at a piece of paper or
e-mail or something that says, "Here are the limitations of this
investigation," then it can get out of the box and it can go
on forever. Folks will all feel that they're not really contributing
like they thought they should. So it can be a good thing to concisely
define expectations and limitations, because the participants can
then feel the self-actualization that, "I feel like I'm contributing
to the final thing." We really need to think about what types
of personalities and capabilities we assign to Tiger Teams and that sort of thing, to make sure that when we do that we are not just
getting the best technical capability, but that we are getting the
best system to go solve the problem. The system includes the personalities involved—“Are we setting up ourselves for failure because
we have personality clashes?” Frequently that can get in the
way of getting to the end answer that we all want to get to.
You asked for best practices. I'd say to anybody, the absolute best practice has got to be to listen more and talk less. And always
be curious. Do as much reading as possible, discuss technical details
as much as possible. Just keep asking, “Why? Why did you do
it?” You asked for some specific examples and things. The specific examples of things that I've been involved in that had memorable outcomes and risk reduction would be on the solid rocket booster, specifically dealing with flight STS-97, when I was booster chief engineer.
We had a pyrotechnic device that did not fire. We discovered this
when we got it back and found an unfired pyro device. So that set
us off onto a big investigation to figure out why it didn't fire when it should have. The redundant side did fire, so the mission was a complete success.
But doing the failure investigation, we found that we had a problem
with a cable. The cable had broken internally, and it was not evident
through our testing that it had broken. Because when it broke internally,
when it flexed back together, it was making what we call “kissing
contact” inside the insulation, and it was still providing a
conductive path. What happened, though, is during ascent, with vibration,
it opened up and we couldn't get the electrical signal through there.
So we launched a big investigation to see what was going on there.
Some of the lessons learned were that folks that were involved in
the reusable, refurbishable cables were making assumptions that the
testing that would be done on this refurbishable piece of hardware
would detect problems later on, and it would not get back into the
flight inventory if problems were found. So it was treated as “flown
hardware” but not “flight hardware,” if you see the subtle distinction there; that in order to be flight hardware
again it was going to go through a whole bunch of exhaustive tests,
and that testing would show whether there were problems or not, and
if there were problems it wouldn't get back in the flight inventory.
So they could treat the hardware—I won't say with wild abandon—but
when they pulled these cables off they could flex them, they could
move them around—with what they felt was confidence that it
would be tested in a way that if it was broken, it wouldn't be flown
again.
It turns out our testing was not as robust as we had hoped it would
be to catch those problems. We ended up installing a cable that had
a break in it. When it was in a particular position it was making
contact, when it was in another position it wasn't. We did the testing
when it was making contact. When we installed it, it was just barely
making contact. So the big lesson there is to communicate to everybody that flight hardware is flight hardware—there's no differentiation between flight hardware and flown hardware—that you've got to treat it with care and kid gloves throughout the process, and not assume that our testing program is always robust enough to catch any problems.
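[Editor's note: a minimal Python sketch of the testing gap described above, assuming a hypothetical pass/fail continuity check. The resistance values and flex positions are invented for illustration and are not the actual SRB cable test procedure.]

```python
# An internally broken cable can read "good" in one flex position and
# "open" in another, so a single static continuity check can miss it.

def continuity_ok(resistance_ohms, limit_ohms=5.0):
    """Simple pass/fail continuity check against a resistance limit."""
    return resistance_ohms < limit_ohms

def broken_cable_resistance(flex_deg):
    """Simulated 'kissing contact': near-normal resistance when the broken
    ends touch, effectively an open circuit when flexed apart."""
    return 0.2 if abs(flex_deg) < 10 else 1e9  # ohms

# Static test at the as-installed position: passes, defect goes undetected.
print(continuity_ok(broken_cable_resistance(0)))  # True -> "good" cable

# Sweeping through a range of flex positions, as ascent vibration would,
# exposes the intermittent open.
print(all(continuity_ok(broken_cable_resistance(p))
          for p in range(-45, 46, 5)))  # False -> defect caught
```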
Other things that were particularly satisfying to me—when they gave me this job for external tank, we had just had the first Return to Flight mission (STS-114) following the Columbia accident, in which, much to everyone's chagrin, a piece of tank foam came off from the part of the tank called the PAL ramp, which stands for Protuberance Air Load. We already had an activity underway to remove those ramps.
We'd done studies that showed that they really were not necessary.
They were put on from a standpoint of conservatism early on in the
program, when our analytical tools were not as sharp as they are today,
and we were able to go back and redo wind tunnel tests and redo analysis
and show that we really did not need that foam. After that foam came
off on that first Return to Flight, we were subsequently able to demonstrate
to the whole community that it would be okay to remove the ramps and
that we would not cause any deleterious consequences because of that.
That was proven with the next flight (STS-121) that flew successfully,
and everything worked great.
That was a big success story for me to be involved in. That was basically
getting a diverse team together to capitalize on ongoing work and
keep an eye on the very near future, and keep them moving in a direction
that could quickly support resumption of shuttle flights. But, again,
that was a good example because the team that we put together was
very focused with a clearly defined objective.
There were other areas of the STS-114 investigation where we didn't characterize the scope of what the investigative team needed to work on. This lack of scope definition, coupled with a lack of specific limitations, caused the failure investigation to drag on much longer than necessary, which caused a corresponding delay in the resumption of Shuttle flights.
I already talked about the tank known as ET-120, which was the one that we tanked up several times and brought back to the factory and dissected and evaluated. Many people said, "That's a perfect tank to go to the Smithsonian. No project in their right mind would ever fly that thing again, because it's been sliced and diced on. There's no way you're going to get it back to a flight configuration." But we had the confidence that we could return it to flight configuration.
And it was one of the most successful flights that we've had so far,
in terms of debris generation. That was, again, a matter of keeping
everybody focused on the task: to make the repairs, do a good job
on them, prove that we had a good tank, and then go fly it and get
the photographic proof that it worked.
One of the most recent examples of best practices would be the investigative
and repair work leading up to the successful flight of STS-122, which
was the flight that was going to take place early last December, and
we had problems with this little itty-bitty piece of hardware called
an ECO sensor. ECO stands for engine cutoff. It keeps you from running
out of gas. You MUST turn the engines off before you run out of gas,
because if you run out of fuel before you run out of oxidizer, then
that's a bad day. So this little thing tells you when you're about
to run out. It tells the computer, "Hey, time to shut the engines
off," so you don't get a bad problem.
It turns out that because part of that system has to operate at 423
degrees below zero, it does strange, strange, strange things. At temperatures
that cold, air turns into a solid. That’s right… the air
we breathe becomes solid at 423 degrees below zero. Solid air is an
insulator. It's not conductive. So you can let solid air form in a connector, because the air was gaseous when it entered, then it became liquid as it got colder and colder, and then it became solid because it froze. As it gets colder and colder still, things contract. As things get cold they shrink up, and that caused the connector to go together and the pins of the connector to ride up on that solid air—and now it doesn't make contact anymore.
Then you do an investigation after it's warmed up and that solid air
becomes liquid air, becomes gaseous air, and it makes contact again—you
look at it and say, "Everything looks good, what's happening?
Why isn't it working?"
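[Editor's note: a back-of-the-envelope Python sketch of the shrinkage effect described above. Minus 423 degrees Fahrenheit is roughly 20 kelvin, below the freezing points of nitrogen (about 63 K) and oxygen (about 54 K), so air really does freeze solid. The connector length and the use of aluminum's room-temperature expansion coefficient are illustrative assumptions, not actual ECO sensor connector data.]

```python
# How much might a metal connector shell shrink between room temperature
# and liquid-hydrogen temperature? Linear contraction: dL = alpha * L * dT.

ALPHA_AL = 23e-6  # 1/K, aluminum's room-temperature expansion coefficient
                  # (the effective value averaged down to 20 K is lower)

def contraction_mm(length_mm, t_warm_k, t_cold_k, alpha=ALPHA_AL):
    """Linear thermal contraction of a part of the given length."""
    return alpha * length_mm * (t_warm_k - t_cold_k)

# A hypothetical 50 mm connector shell cooled from ~293 K to ~20 K:
print(f"{contraction_mm(50.0, 293.0, 20.0):.2f} mm")  # ~0.31 mm

# A few tenths of a millimeter is comparable to pin engagement margins, so
# cryogenic shrinkage plus solid air in the gap can lift pins off their
# sockets -- and the evidence melts away as soon as the hardware warms up.
```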
We figured out what caused the problem through some brilliant free-time detective work by our chief engineer, the contractor's chief engineer, and one guy from one of our laboratories here at Marshall.
Principally because we freed up a couple days of time where we said,
"You don't worry about anything else. You can go think about
this problem, and don't worry about the administrivia involved in
the investigation. Don't worry about the fact that the program is
breathing down our necks to fly this flight again and figure it out;
you go think about what could be causing it, and think outside the
box, and see what you can come up with."
They came up with the scenario and came up with a fix for it, and we then demonstrated in our test lab out here, beyond a shadow of a doubt, that it would fix the problem. We made the repairs to the flight hardware, flew it two and a half months later, and it worked absolutely perfectly. It totally solved the problem. We had complete confidence
that it would solve the problem. This was truly a great success story.
Those kinds of things are very satisfying. Again, it's a matter of having the right people, not burdening them so much with extraneous stuff, and letting them work on the problem with a concise understanding of the objective.
The last thing I was going to mention was acronyms. We've got to be really careful in this Agency with our use of acronyms. It gets back to communication.
Remember that was one of my things: “Are we communicating? Do
I think I'm communicating with you, and am I really not communicating
with you?” If everything I say is in acronyms, then the only
way I'm effectively communicating with you is if you know those acronyms.
If you're not familiar with those acronyms, then I might as well be
speaking a foreign language to you. When we both know them, it's a
wonderful shorthand that allows us to communicate ideas and concepts
quickly without having to go through all these long gobbledygook names
that we have in this rocket business. But I'm convinced that acronyms,
on a not infrequent basis, are used as a weapon.
What I mean is: if I wanted your help on a certain problem, I would
make sure you understood what I was talking about. But if I wanted
a quick test to find out if you really knew what I was talking about,
I would describe my problem in acronyms to you. And if you didn't
understand the acronyms, you wouldn't feel like you could add anything
to my problem. Even though it might be that you may have direct experience
that could be very applicable to my problem by abstracting my problem
a little bit into some other area. But if I describe my problem in
acronyms to you, then it's like saying, "Well, if you don't understand
the acronyms, then clearly you're not going to be able to add any
value to this discussion I have. So I'm going to use my acronyms as
a weapon to keep you from taking my valuable time in having to answer
any of your questions."
I believe this is a very, very real issue in this program. From my
perspective, I think we need to do a better job at reminding all aspects
of the program—because we're all, in varying degrees, guilty
of using acronyms too much—to make sure that we're not unfairly
impeding conversation and communication across the program. I have
a tendency, when I look at charts in big reviews, to count the acronyms
on the page. Here in recent months I've seen a chart at a shuttle
flight readiness review that had 44 acronyms on a given page. One
page had 44 acronyms on it. That does not foster communication.
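[Editor's note: a hypothetical Python sketch of the "count the acronyms on the page" habit. The regular expression is a rough heuristic invented for illustration, not a NASA tool; it flags runs of two or more capital letters as acronym-like tokens.]

```python
# Rough heuristic: treat any token of 2+ uppercase letters/digits
# (starting with a letter) as an acronym-like term on a chart.
import re

def count_acronyms(chart_text):
    """Return the unique acronym-like tokens found in the text."""
    tokens = re.findall(r"\b[A-Z][A-Z0-9]{1,9}\b", chart_text)
    return sorted(set(tokens))

slide = "ET TPS closeout per NSTS 07700; ECO sensor LCC waiver at FRR."
found = count_acronyms(slide)
print(len(found), found)  # 6 ['ECO', 'ET', 'FRR', 'LCC', 'NSTS', 'TPS']
```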
If the object is to make certain that the one person that I am talking
to knows exactly what I'm talking about, as long as that person and
I both know those same acronyms, that's perfect. That works great
if that's the objective. But if I have been asked to put all the other activities that I have going on on the back burner and come attend this meeting, and then those acronyms are being used as if only one person were the target of that communication, then I've got to ask, “Is that a good allocation of time, to go have a huge meeting and then embed a huge number of acronyms in the presentation material, and as a result exclude a significant portion of the audience from understanding the issues?”
Because one of the premises that we have here—and in general it's true—is that by assembling a brain trust each time we have a flight readiness review, for example, we will end up with better assurance of the success of the flight, by having a lot of smart people involved in it. If somebody over here hears a topic that's being discussed
that they're not really sure is the right conclusion, they can ask
a question about it. If that topic is so laden with acronyms that
it excludes that question, then we're missing the boat.
Wright:
I appreciate all the information. I think it's going to be very beneficial.
Appreciate all your time.
Chapman:
Let me know if there's anything else I can do.
[End
of interview]