NASA Johnson Space Center
Oral History Project
Edited Oral History Transcript
Robert L. Carlton
Interviewed by Kevin Rusnak
Houston, TX – 26 October 2001
Rusnak: Today is October 26, 2001. This oral history with Bob Carlton
is being conducted in the offices of the Signal Corporation in Houston,
Texas, for the Johnson Space Center Oral History Project. The interviewer
is Kevin Rusnak. I’d like to thank you again for taking the
time to come out here and provide us with some more of your insight.
Carlton:
Thank you. It’s my pleasure to be here. We’ve wrapped
up the recollections of the things that took place in the manned space
flight programs, my experiences in those and perspectives in those.
As we now come to the end of that, it seems to me that we somehow
ought to end up with—what to call it—lessons learned.
What did we learn out of all that, you know. A lot of people will
view it from the standpoint of here’s what happened. Intellectual
curiosity and a variety of other motivations will bring people to
look at the work you’ve done here to record all of what took place.
But it seems to me it’s important that if there’s something
that came out of all of that, that could find application in the future
efforts, it would be worthwhile for—this would be a proper forum
to put it. So what I’m trying to do today is to kind of present
just one lesson learned, but it’ll take me a while to elaborate
on it, and I greatly fear that I’m inadequate to really get
across the thought here.
The basic principle, or the basic thing, the biggest lesson I learned
in thinking back over my whole lifetime in operations, which is essentially
what it has been, operations, is this. What I see in looking back and
reflecting on all those things that happened is that this thing we call
operations is not a science today but, rather, is an art, and I see
that as a great deficiency. So the thrust of what I’m trying
to cover is to bring forth the realization of the need to translate
operations from being an art into becoming a science, a science such
that it could be applied and be applied uniformly in the future, and
people that were applying it would have an understanding of the principles
that were driving them in this application.
So I’ll discuss a little bit here some things, just thinking
back, that illustrate how it is an art and illustrate the need for
it to become a science, and then a few thoughts on some of the principles.
I won’t try to identify all the principles that are basic fundamental
operations engineering principles that make for a good design, but,
rather, just show enough examples so someone will see what I’m
talking about in the application of that.
So first let me reflect back on a few thoughts that try to illustrate
how operations engineering is now an art rather than a science
and illustrate the need for it to be a science. In the manned space
flight program we were very successful in the Apollo Program in bringing
a huge amount of technical expertise to bear on problems of designing
the systems in such a way that they were safe and would perform their
mission. But there was a tremendous cost associated with that, and that’s
not appreciated, and I didn’t appreciate it at the time that
we were doing it, how the cost must be balanced. The cost to program
management must be balanced against all the desires of good operations
engineering. There is a direct relationship between the operations
things you do to make the systems operationally acceptable and flexible
and able to perform their functions and what it
costs program management to do it. Well, let me just make an observation.
Our thrust was simply that we wanted to at all costs—and cost
was no object to us—we wanted to guarantee that the mission
was successful and that we got the crew back safely. Now, not all
programs would go to the extremes we did to do that. One of the things
that drove you to justify it was those programs were very important
from a national prestige standpoint, not just from the importance
of a human life, but if we lost a flight crew en route to the Moon
or landing on the Moon, it would be a national disgrace, and there’d
be a hue and cry that would be remembered even today. It would be a black
mark on the nation’s ability to do what it set out to do, far
beyond the loss of three human lives.
That importance to the program justified everything we did, but it
cost a tremendous amount of money and added time to the program and
complexity to the program. And I didn’t appreciate, you know,
what it really meant.
If we were to embark on another program, well, I and my cohorts in
operations would have set about doing it in the same way we did Apollo.
We’d want to be sure we had redundancy in the systems. We’d want to make
sure we had the ability to see into the systems, to diagnose them,
which demands a lot of telemetry be designed into the bird. Telemetry
costs money and it puts weight on the bird. It adds to the complexity
of the bird, a whole lot of wires you got to keep up with and a whole
lot of transducers you got to design. The telemetry system gets bigger
to flow that data to the ground, and the recording system gets bigger
to record that data for review. The presentation of that data to a
team of guys on the ground costs money in its complexity. The team
of guys on the ground looking at it costs money. The more of them there are,
the more they cost. The training of that big massive army of people
on the ground is at great expense.
Well, that was not appreciated by me as I came out of NASA. My
approach would be, “Let’s do that again in that same way,”
so the program would be super-reliable. Well, what I had failed to
realize and recognize was the costs in terms of operations. Operations
support has to be balanced against the program’s needs. What’s
the program there for?
I left NASA and went to work for the Aerospace Corporation, which was
fundamentally an engineering arm of the Air Force. I always thought
of it as going to work for the Air Force. The Air Force programs were
unmanned, and so as I went into that transition phase, in my mind-set
we should be doing it the way we did at NASA, and the Air Force guys
instantly began to challenge me. You know, they said, “Hey,
wait a minute. That costs money. We don’t need a man up there,
for one thing. We don’t need all that level of intense support
on the ground for our satellite. We got ten more up there. If one
of them breaks, we’ll replace it. So what does it cost us if
we lose a satellite? The bottom line is whatever it takes to build
another one and get it into orbit. That’s how much it costs.
What does all this support you’re proposing cost? You know,
it’s a massive burden. How does it increase the complexity of
the design of the bird? Tremendously. So you got to ask yourself,
is it worth the cost?”
Well, you know, that jarred my thinking, and that set me on the path
of thinking that, you know, really they’re right. And more fundamentally,
how do you quantify for a program manager the amount of operations
support you should have? So what’s the program manager trying
to do? What he’s trying to do is to build a system that performs
a function over a lifetime at the most economical cost he can.
In the long run it’s the economical cost. Now,
he may have some other parameters that are operations-oriented, how
fast it can react, how fast it can get him information. The downtime
may be of concern to him; he may be sensitive to that; he may not.
There are just a lot of parameters that are operations-oriented kind
of parameters that he will recognize. “I need this and I have
to pay for it.” But even so, he’ll probably have varying
levels of increased operations support that would enhance this system
if they were not too expensive. So there is a balance there between
what kind of operations capabilities he designs into it versus what
it costs him versus the long-term performance he wants out of it.
I can think back, and I’m trying to think of examples that illustrate
this thing, and examples that illustrate that we don’t really
have yardsticks in the operations world. We don’t
have yardsticks where we ourselves know the principles we’re
applying.
I remember one situation that illustrates this. The Cape [Canaveral,
Florida] guys were sitting down with the contractors and the Air Force
when they were thinking that they’d have their own launch site
out at Vandenberg [Air Force Base, California]. So the Cape guys came
up to give them their expertise, you know, the benefit of their expertise.
The guy from the Cape got up and he began to tell them they would
go through phases. He said, “The first phase you’ll go
through is your contractor will come down to the launch site and they’re
going to launch this vehicle under your management.” He said,
“The first phase they go through, you’re the all-knowing.
You wear the white hat. You know everything and they know nothing.”
Then he said, “Then they evolve through a phase where they begin
to realize the steps you want them to do cost money, and their management
is sensitive to money. They’re beginning now to get a feel for
what you do in a launch.” He said, “When that begins to
happen, then they begin to figure out ways to get around this oversight
you’re trying to ram down their throats, and you become their
enemy.”
So we went through the phases, which illustrated that the contractor
came into the program not having any grasp at all of what operations
were all about at a launch site. He got it through OJT [on-the-job
training]. There was nothing in his training, in his engineering
degree as he went through college, that equipped him or gave him
insight into the problem.
Same way with us in manned space flight. When I went through college,
I would say that probably about all we got in the way of operations
application was there was a little bit of discussion or one course
that talked about human factors in the design of a system, that human
beings got a certain amount of strength in the arms, certain amount
of reach, you know, have certain dimensions to them, they weigh so
much, and so forth. You would apply that where it restricted your
design or drove your design. The idea of how much instrumentation
it would take internal to a system for you to be able to diagnose
a problem, I don’t recall any subject anywhere that tried to
scope that for you, much less go the next step and say, “Here’s
how you balance the cost of that instrumentation to a program, and
here’s how much it gives you of an advantage or improvement
in your ability to troubleshoot the system,” much less the next
step to say what’s the balance between that cost and the cost
of just building another bird or another piece of hardware, whatever
it happened to be.
Those are sort of the things that you would balance out if you were
trying to come to the optimal balance of engineering operations capabilities
versus overall program impact.
It seems to me it boils down to—the whole thing boils down to
trying to obtain the right balance between the costs and impacts of
doing the things operations-wise that enhance your system versus the
cost to program management. That cost might be in dollars. It might
be in terms of how long it takes you to build the system. It might
be in terms of how much it reduces some of the system’s performance,
like all the weight you put on a spacecraft to enhance its ability
to troubleshoot it and so forth. Is it at the expense of payload it
could be carrying?
So we used to have a parameter that the program management used, that
a pound of weight is worth so many dollars, you know, and they could
give that as a guideline to a contractor and say to reduce the weight
and it’s worth this much money to us. I don’t recall there
ever being a parameter that said a recurring maintenance workload
is worth this many dollars, that if you could reduce the recurring maintenance
workload it’d be worth this much money to us, or guidelines
that said what amount of dollars you should spend on recurring
maintenance.
Now, the Air Force began to do this in the late fifties and the early
sixties. They woke up to the fact that maintenance cost was just a
stupendous impact to the programs, and they began to tell aircraft
designers, “We want some guidelines. Here are the maintenance
man-hours that you will apply per flight hour.” Where the maintenance
got extremely high, they’d become sensitive to it.
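
[As an illustration of the kind of yardstick being described, the following is a minimal sketch in Python with entirely hypothetical numbers: a dollars-per-pound guideline weighed against a dollars-per-maintenance-man-hour guideline over an assumed system lifetime. The constants, names, and example trade are illustrative assumptions, not figures from any actual program.]

    # Hypothetical program-level yardsticks of the kind described above:
    # dollars per pound of vehicle weight and dollars per recurring
    # maintenance man-hour, used to net out a proposed design change.

    DOLLARS_PER_POUND = 10_000      # assumed value of removing one pound
    DOLLARS_PER_MAINT_HOUR = 200    # assumed fully burdened labor rate
    SYSTEM_LIFETIME_YEARS = 15      # assumed service life

    def weight_penalty(pounds_added: float) -> float:
        """Cost charged for weight a design change adds to the vehicle."""
        return pounds_added * DOLLARS_PER_POUND

    def maintenance_benefit(man_hours_saved_per_year: float) -> float:
        """Lifetime value of reducing recurring maintenance workload."""
        return man_hours_saved_per_year * DOLLARS_PER_MAINT_HOUR * SYSTEM_LIFETIME_YEARS

    # Example: a change that adds 5 pounds of instrumentation but saves
    # 300 maintenance man-hours per year.
    penalty = weight_penalty(5)
    benefit = maintenance_benefit(300)
    print(f"weight penalty:      ${penalty:,.0f}")
    print(f"maintenance benefit: ${benefit:,.0f}")
    print(f"net value:           ${benefit - penalty:,.0f}")
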
Other programs, I doubt that’s ever been reduced to guidelines
that found their way into the college curriculum in such a way that
engineers in other applications could appreciate those same
principles that were being applied and how they translate from dollars
to reduced maintenance costs. But you can see the need for it in nuclear
power, in just about anything that operates. You can see the need
for it.
So a big part of the problem, as I see it, is how do you get all of
the factors in that quantify the operations parameters? How do you
quantify those and get them in and describe them such that a guy that’s
doing a design or a program manager that’s building a system
can see them, understand them, and quantify what they cost him? That’s
the problem.
It seems to me that the starting point is to recognize what is trying
to be done, and that is to establish operations engineering as a science.
Don’t depend on just the background experience of the people
that get brought to bear in the different programs as they come along.
But it backs up into if you can reduce it to a science, then you can
identify the principles that need to be applied at the training level
in colleges.
In college you teach a young engineer how to design for weight. If
you want something to hold so much load, it takes this much weight.
That becomes quantifiable. That’s when it begins to move into
being a science.
You need to teach them the same thing about the way to operate it. He
needs to understand, or be able to understand, “As I’m designing
this thing, an important thing to understand is what is this design
implying to the program, or whoever buys it, in terms of downstream
maintenance, ongoing year-in, year-out maintenance?” What does
it imply to him in terms of the direct operations workload, man hours,
that he’ll have to apply to it to operate it? There’d
be a host of other things. How long is its lifetime before it wears
out? That’s an operations consideration.
Some areas, they’ve learned to put a yardstick on that. The
automobile manufacturers, I suspect they design for obsolescence.
They’ve got parts made out of plastic where I guarantee you they
know very exactly when they’re going to wear out. They know that
because they can set the limit of the warranty so it doesn’t
happen before the warranty runs out.
But a program manager that’s designing something, or somebody
that’s buying something, ought to recognize that’s a principle,
an operations principle: I need to get up front and discuss this with
program management so they understand what they’re buying into,
and the implications of costs if they want it to go further. You might
spend 90 percent of your budget trying to get 10 percent more improvement.
There’s a balance between designing something with a whole lot of
redundancy in it so that it can last a long time versus just buying
a simpler one, buying three or four of them, and replacing them as
they come along.
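
[A minimal sketch of that balance, again with purely hypothetical numbers: a single unit designed with internal redundancy compared against a simpler unit replaced whenever it fails over the same service period. Every cost and failure rate below is an assumption made up for illustration.]

    # Hypothetical comparison: one highly redundant unit versus a simpler
    # unit replaced on failure, over the same ten-year service period.

    SERVICE_YEARS = 10

    # Option A: redundant design (assumed figures).
    REDUNDANT_UNIT_COST = 5_000_000
    REDUNDANT_FAILURES_PER_YEAR = 0.02

    # Option B: simple design, replaced when it fails (assumed figures).
    SIMPLE_UNIT_COST = 1_200_000
    SIMPLE_FAILURES_PER_YEAR = 0.25

    def expected_lifecycle_cost(unit_cost: float, failures_per_year: float) -> float:
        """Initial buy plus the expected number of replacements over the service life."""
        expected_replacements = failures_per_year * SERVICE_YEARS
        return unit_cost * (1 + expected_replacements)

    cost_redundant = expected_lifecycle_cost(REDUNDANT_UNIT_COST, REDUNDANT_FAILURES_PER_YEAR)
    cost_simple = expected_lifecycle_cost(SIMPLE_UNIT_COST, SIMPLE_FAILURES_PER_YEAR)

    print(f"redundant design, expected 10-year cost:   ${cost_redundant:,.0f}")
    print(f"simple-and-replace, expected 10-year cost: ${cost_simple:,.0f}")
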
Now, I’m groping around and around it, and I know what’s
happening here is it’s difficult for me to get this topic organized
in such a way it comes across real clear. So now let me back out of
the details and try to say it in a clear way once again.
There is a need to establish operations engineering as a science instead
of the current, I’ll just call it, “black art” mode
in which it operates today. To establish it as a science, there are
some things that need to be done. It’ll tend to close in on
a lot of different areas. There will be tentacles. One tentacle will
reach into the area of reliability, trying to quantify reliability.
Another tentacle will reach into the area of the human factors. How
do you design it to be easy to operate? Another will reach into the
area of how do you design for a lifetime. Another will reach into
the area of how you quantify the importance of the design as driving
its ability to be guaranteed to work. If it’s real important
that it work and never fail, then you put a lot of effort on it.
You know, if you got a program like the Apollo Program was, or a manned
program going to the Moon will be, or a nuclear power plant operation
is, you know, when something gets really important and it reaches
the level of importance that you’re going to say, “I want
to guarantee that we don’t lose it or it don’t quit or
it performs its function at an extreme. I’ll go to extreme cost
and trouble and effort to do that,” then there are certain guidelines
that will just permeate their way through just a whole lot of parts,
of components, of that system. You’ll bring more manpower to
bear. You’ll have training systems. You’ll argue about
whether or not you should have simulators to train them so that they
can react. How quick they need to react will get into it.
On the other hand, if your program is simple, like an unmanned satellite
or like a plant operation where it sort of works autonomously—if
it breaks, well, you can go in and fix it—or operating a car,
you’ll have the whole spectrum of complexity and importance.
In fact, I could go to the really—one end of the spectrum would
be a manned space flight program. The other end of the spectrum in
its very simplest form of a system I can think of operating would
be a refrigerator. You know, there’s the two bounds and most
everything falls in between those two bounds. The biggest thing that
drives it is the importance of the system and the cost it takes to
do things.
Some other tentacles of this operations engineering as a science would
be the costs of all the different things you might do. If somebody
were designing systems from an operations perspective, he’d
be interested in the lifetime of materials. In the airplane
industry we used to think of rubber as—if you got a component
made out of rubber, you’d better plan on it living about three
years. It’s better than that, but as a general yardstick that’s
what we used years ago. Now probably the rubber is better and it’s
five years, but it still has a limited lifetime. Plastic has a limited
lifetime, depending on its exposure to things.
Just off the top of my head, Kevin, I don’t think of other—is
the picture I’m trying to describe coming through to you now?
Rusnak:
Yes, it is.
Carlton:
I would bet you, just to show the need for what I’m proposing,
that if you set a bunch of operations people down in a
conference and you had representatives there that were going to talk
about a manned space flight program, from the Cape, from the Johnson
Space Center, from the Air Force, and foreign people that have had
a manned space flight program, and if you set them
all down here, fifteen or twenty or thirty groups of operations people,
and you begin to try to get them to agree on the basic principles
you should apply, I bet you would find fifteen or twenty different
opinions. In fact, I’ll guarantee that.
Now, why would that be? Now, let me contrast that with if you set
an engineering team down and said, “Let’s design a column
to hold this much weight.” They’d all come to the same
answer. Now, why? Because they have engineering principles that teach
them how to design this to do that job. The operations people don’t
have those principles to lay down in a way that you can universally
apply them to the job. I believe that’s probably my clearest
illustration of the need for the whole thing right there.
Now, I think what I have said here, it has in no way got down to the
depth that somebody could listen to this and say to themselves, “Now,
okay. Here’s what that guy’s proposing needs to be done
in human factors.” Probably even a program manager might not
quite grasp what I’m saying. It might be that if you took a
survey of the other people and maybe had a symposium together somewhere,
people smarter than I would be able to put it in better perspective
as to what’s needed.
But what I would hope might be forthcoming some day is that there would
be a discipline of engineering called operations engineering, a discipline
that was recognized in colleges and taught in colleges, and it would
grow and it would change with time as materials change, as reliability
of things gets better, as software’s ability to mechanize things becomes
better, etc. It would change. But the basic principles wouldn’t.
Let me illustrate an operations principle. The ability of man to operate
a system introduces a great flexibility to react to unforeseen problems.
At the same time it is the most probable point of failure. Most airplane
accidents are due to human error. There’s a principle to understand.
How do you eliminate human error? To take care of this, if you want
to gain the flexibility to react to unforeseen situations, how do
you counteract the additional risk you’ve incurred by putting
people in the loop? One way you do that is you have redundancy in the checks
and balances between the people, just like in our government. In a
system if you got checks and balances, one guy fails to see something,
the other one sees it, and so you don’t have a failure loom
up and not get recognized. Or if one guy wants to take a reaction,
some action to correct something, the other guy is sitting there to
sort of be sure you don’t do the wrong thing. Redundancy.
If you want an organization to operate quickly, you must have clean
lines of authority and responsibility. In the NASA manned space flight
program there were very clean lines of authority and responsibility
that went from the lowest flight controller to the flight director,
and those lines of authority are what allowed such quick decision-making
to take place. You knew who you had to tell, and he knew who to come
get the information from, and he knew who had to be coordinated in
this decision. They would make decisions measured in seconds, and
those decisions were just amazing. If you go back and look
at the simulations, you’d find it’s just amazing how
good their decisions were. Rarely did they ever come back
after a sim [simulation] and say, “Well, I messed up.”
Usually what they’d do was right. So, lines of authority and
responsibility were extremely clean, well understood. The buck stops
at each point; you know where it all ends. That’s a principle.
So in my fumbling way I think I’ve come at it from this direction,
then from a different direction, trying to illustrate
what I’m talking about. If I were to define what is operations
engineering, maybe that would help somebody listening to this later
or looking at this later to see what is in view. I’ll try to
make a definition for it. Operations engineering is the science of
applying engineering principles to systems in such a way as to achieve
their desired functions in the most economical way and with acceptable
risks. That might be a start. To do that, you’ve got to be able
to quantify risk. You’ve got to be able to quantify costs. You’ve
got to be able to understand that you’re going to—let’s
turn this off.
That, combined with what I said before, maybe will convey the idea
to you. If I had the time to do it, I’d write a book.
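
[One hedged way to make the definition above concrete is sketched below: among candidate designs, choose the one with the lowest lifetime cost whose risk the program manager has declared acceptable. This is an editorial illustration of the definition as stated, not a method drawn from any program; every name and number in it is an assumption.]

    # Illustrative reading of the definition above: pick the cheapest
    # design whose estimated risk is within the stated acceptable level.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        lifetime_cost: float      # design + build + operations over the service life
        loss_probability: float   # estimated probability of losing the system

    def select_design(candidates: list[Candidate], acceptable_risk: float) -> Candidate:
        """Cheapest candidate whose risk meets the acceptable level."""
        feasible = [c for c in candidates if c.loss_probability <= acceptable_risk]
        if not feasible:
            raise ValueError("no candidate meets the acceptable risk")
        return min(feasible, key=lambda c: c.lifetime_cost)

    designs = [
        Candidate("single string, heavy ground support", 900e6, 0.010),
        Candidate("dual redundant, modest ground support", 1_100e6, 0.001),
        Candidate("single string, minimal ground support", 700e6, 0.050),
    ]

    print(select_design(designs, acceptable_risk=0.01).name)
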
Rusnak:
[Laughs] Well, maybe you will have time to at some point.
Carlton:
Have you had this topic come up before in your discussions with anybody
else?
Rusnak:
Nowhere near this specifically. We’ve had, I guess, some illustrations
of this. I’ve been sort of running through my mind, I’ve
been trying to think of the names of the people we’ve been talking
to, but it’s mainly been from flight controllers and flight
directors who made the transition between Apollo to Skylab to Space
Shuttle, and then now even maybe Space Station a little bit. Then
the differences in the—“environment” is probably
the wrong word, but in the conditions under which those systems have
operated, how that’s affected operations and required different
applications of these principles, because obviously running a long-duration
program requires much different thinking operationally than something
very short like Apollo. Certainly when costs and risk, as you’ve
pointed out, when that varies between programs, then that affects
how you apply these principles to how the things are going to operate.
But, no, no one’s sat down and laid out this kind of thinking
about it before.
Carlton:
I don’t know who in NASA, someone in program management, somebody
at Headquarters. NASA does advanced studies, or they used to be doing
some kind of a special study all the time. That might be a candidate
for a special study, just to scope it better than I have here in describing
the need for it. It probably would be.
Rusnak:
Well, let me ask you, how much exposure have you had to the way NASA
currently does operations, either through the last decade of decision—
Carlton:
A good deal. My son-in-law’s still there. He talks to me all
the time about what they’re doing. So I think I know pretty
well what they’re doing now, and, in my opinion, it’s
an absolute disaster.
Rusnak:
Oh, really.
Carlton:
Yes.
Rusnak:
Okay.
Carlton:
The lines of responsibility, for a starting point. They’ve got
too many people that are—there are too many bosses running the show
simultaneously. It’s probably impossible to ever reconcile it.
You can’t tell a nation what they’re going to do. That’s
the problem. You can’t tell the Russians what they’re
going to do if it’s a big impact to them. Just the whole thing
is set up for conflict. A good operations organization resolves conflicts,
and it doesn’t have a mechanism to resolve conflicts very well.
The Russians decided—and I’ll illustrate this to you—they
were going to send a tourist [Dennis Tito] up there. Went totally
counter to all of the policy of American thinking. What happened?
We weren’t the boss, were we? [Laughs] I rest my case.
Rusnak:
Well, that brings up a very good point, though, that I was thinking
about as you—once you had mentioned the Russians and you were
talking about quantifying risk and all that. Your definition of operations
engineering here is based on applying the engineering principles.
I wonder, though, with operations you have a lot of factors, principles,
if you will, that are in a way more cultural than technical. If you
look at the way we as Americans think about risk, the value of human
life, the importance of people in a system, and then just compare
that with the Russians, who are the obvious analog there, their thinking
is very different. Both our systems work, but certainly as they’re
discovering now, they’re not necessarily compatible. So do you
think there’s a way to reconcile those sort of factors that
are non-technical and to be able to teach those?
Carlton:
Yes, I do. I think it’d be easy to reconcile them, and that
is you quantify them. Now, they’re not as different as you think.
If you go look at people that build bridges and build dams and so
forth, when they start into a project—I remember sitting in
a briefing that was telling about how to build one of the dams.
It said, “We expect there to be so many lives lost. If a man
falls in that big pile of concrete while it’s being poured,
we don’t interrupt the operation. He’s gone. We don’t
try to get him out.” There’d be, I forget now how many,
lives they said would be lost in each big major operation like this.
Seemed like it was twelve. Okay. There is a totally different—but
that’s more the Russian thinking, you know. There is a recognition
that there is a risk to human life in everything we do and a willingness
to quantify that: here’s just something we’re
going to have to accept.
Now, they could have gone to super extremes to not lose those twelve
lives. In the space biz, we did, but why did we? It was more than
the value in human lives. It was a different principle that came to bear:
it was national prestige. In a nuclear power plant, guaranteeing that
sucker won’t explode, and all of the trouble you go to to do
that, is not to protect the lives of those guys in there operating
it; it’s because this sucker spews stuff out like Russia had
in the nuclear catastrophe they had [Chernobyl].
So I suspect you’ll find that if you broaden your perspective
outside the space business, there’s not that much difference between
the different peoples, but even if there is a difference, you can still quantify
it. You can say, “If we want to guarantee you won’t lose
lives, what amount of trouble are we going to go to?” You know,
that’s a program management parameter there of the human life
risk and how much you’re willing to go to, to avoid the risk
of loss of human life. You don’t have to make a judgment on whether
it’s good or bad. You can quantify what it takes to guarantee
you won’t lose a guy on a mission.
You know, NASA used to say, “We want 99.99 percent probability
they’ll return alive.” Well, that drove costs. Now, if
we had tried, I’m sure we could have quantified how
much extra cost it took to guarantee that 99.99. Well, what if you had
reduced it to 50 percent? Aha! How much could we have reduced all this
support we were going to do? So it doesn’t matter what our
social position is on the importance of human life. We can still quantify
what it takes to guarantee it.
Rusnak:
I wonder how much of this is getting into things like engineering
ethics and this sort of thing.
Carlton:
Probably there’s some that will get in. What you try to do is
not have it get in, though. It ought to be basic principles that you
can apply, you see what they are, and you pick how much you want to
buy. I would hope that you could keep the ethics out of it, but it
might be. It will be an ethics decision on what a program manager
decides to do, how much he wants to buy into operations enhancements
in his system.
If it’s a big embarrassment to him if his system fails, you
know, like it might throw a pollutant all over, you know, you got
an oil spill or something like that. You might think, if it’s
a threat to human life, it certainly would be an ethics question.
You could say of the people that put the tires out on the road, there’s
an ethics question when they refused to acknowledge they’re killing
people. So you got ethics things entering into more than just the
operations.
Rusnak:
Surely that’s the case. But what I was thinking is that a lot
of this has to do with, as you’ve been saying, quantifying the
risk and quantifying the costs. Is getting that extra nine or those
extra couple of nines of redundancy to save a few lives, in the case
of anything where we’re dealing with people, is that worth the
extra millions of dollars, and where do you say that okay, well, we
can accept at this cost losing six people but we can save—
Carlton:
This much money.
Rusnak:
Yes, this much money if we can accept killing twelve people.
Carlton:
I think you’re putting your finger on the need for what I proposed.
Right now I don’t think we consciously say we ought to be able
to quantify how much improvement we get in the protection of people
by doing these extra things.
Rusnak:
No, you’re absolutely right.
Carlton:
We don’t ever bring ourselves to a point where we say that this
is something we do need to quantify and this is something we do need
to lay in front of whoever is buying the system, the program manager
who’s designing it or overseeing the development of it. We need to be
able to put it in front of him so he can see what the system’s
risk is with this amount of complexity. If you think
you need more or less risk, then here’s how much additional
complexity it would add to you, and if you need more risk yet, here’s
what it means to you.
To sort of quantify that in one application: when you’re trying
to get to 99.99 percent probability of coming back, well, systems
have a probability of failing. So you
design the system to be thus and so many nines able to perform this
mission in this period of time. If this one system just can’t quite
do it, you have redundant systems, two systems there, and that’s
what we had in Apollo.
Then, to even further guarantee that 99.99, we had some operations
principles come to bear. Human decisions came to bear. We said
if we had a failure in a system and lost the redundancy, well, yes,
the other system could go and complete the mission. But we wouldn’t
complete the mission. We’d come home as quick as we could, because
we wouldn’t put ourselves in a posture where we’d only
have one system to complete the mission. If it failed, we’d lose our
crew, and we will not lose a crew under any circumstances, fundamentally.
So there was an operations principle that came to bear that had
nothing to do with the design; the design’s all over with. The design
tried to give you that 99.99, but then we overlaid it with another
layer of just procedures, just human procedures.
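
[A back-of-the-envelope sketch of how those two layers combine, using a hypothetical single-system reliability chosen only for illustration: redundant hardware is what pushes toward the 99.99 percent figure, while the abort-on-first-failure posture gives up some mission completion in exchange for never flying on a single remaining system.]

    # Hypothetical illustration of redundancy plus the come-home-early rule.
    single_system_reliability = 0.99   # assumed chance one system lasts the whole mission

    # Hardware layer: the crew is safe as long as at least one of the two
    # redundant systems keeps working for the full mission.
    both_fail = (1 - single_system_reliability) ** 2
    crew_safety = 1 - both_fail

    # Operations layer: the mission rule says come home on the first failure,
    # so the full mission is completed only when both systems stay healthy.
    mission_completion = single_system_reliability ** 2

    print(f"one system alone:                  {single_system_reliability:.4f}")
    print(f"crew safety with two systems:      {crew_safety:.4f}")        # 0.9999
    print(f"missions completed under the rule: {mission_completion:.4f}") # 0.9801
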
Now, there’s another thing that would perhaps be a factor in
how you do operations, and that is how do you make it visible and
communicate it to a program manager. If you go back and listen to
the discussion we had about mission rules, you’ll see there
was a mechanism that made it visible. There needs to be means to make
it visible to a program manager what his decisions are doing. We had
that in the way of mission rules, but there also needs to be a way
of doing that in the front-end design of the system.
There probably needs to be invented some kind of a mechanism that allows
the communication with program management of what they’re getting
and what it’s costing them. You might go to a program manager
if you’re looking at—I’m trying to find an analogy.
If you have a guy that’s building a bridge and you go to a program
manager and you say, “Okay, here’s the design of the bridge.
It’ll hold the load you specified. You specified the lifetime
you want it to last, so that forces us to paint it and maybe some other
things. But esthetically it’s ugly.” And how do you quantify
that you want it to look pretty?
If he looked at it and said, “It’s ugly,” then you’d
find a way of communicating with him, with design reviews
that lay out before him what it will do and then extrapolate that
into costs so he makes his decisions. If he wanted it to look pretty,
you’d show him some alternatives to make it look prettier and
tell him what it costs, and then he’d decide whether he wanted
to pay that much for that much more prettiness. I don’t think
you’ll find a comparable communication taking place with respect
to the operations aspects of the system. There needs to be a mechanism
put in place so that gets communicated to him.
A whole bunch of little bits and pieces. You see why it’s kind
of hard to get your arms around it and to describe it. Well, I think
I’ve kind of outlined to you what was in my mind, and I don’t
feel any need to go back and edit the tape you put on this, but if
you want me to, I’ll be glad to.
Rusnak:
Well, we’ll certainly send you a copy just like we did with
the other ones.
[End of interview]