NASA at 50 Oral History Project
Edited Oral History Transcript
Bryan D. O'Connor
Interviewed by Sandra Johnson
Washington, DC – 19 March 2007
Johnson:
Today is March 19th, 2007. We are at NASA Headquarters in Washington,
D.C., to speak with Bryan O’Connor, [Chief] of the Office of
Safety and Mission Assurance [OSMA], for the NASA at 50 Oral History
Project. The interviewer is Sandra Johnson. In preparation for the
space agency’s fiftieth anniversary, the NASA Headquarters History
Office commissioned this oral history project to gather thoughts,
experiences, and reflections from NASA’s top managers. The information
recorded today will be transcribed and placed in the History Archives
here at NASA Headquarters, where it can be accessed for future projects.
Can I answer any questions before we begin?
O'Connor:
No.
Johnson:
Thank you for providing us with your time again today. In your current
position your office has functional responsibility for the safety,
reliability, maintainability, and quality assurance of all NASA programs.
You first joined NASA in 1980 as an astronaut candidate. If you will,
please briefly describe how your career with NASA has led you to your
current position.
O'Connor:
Sure, and thanks for asking, Sandra. Appreciate it. When I came to
NASA, I came from the flight test community and the flight test and
development engineering background with the [U.S.] Marine Corps. In
fact, I came from the Naval Air Systems Command. I had served as a
test pilot on the Harrier Program, and also as a Chief Engineer for
the Harrier Program at NAVAIR [Naval Air Systems Command] when I was
invited to come to NASA. So as I got into flight operations and learning
how to be a Shuttle crew member, I also got an opportunity to have
some collateral duties that were in the other area of development
and flight test matters and so on. So it was a real good education
and a follow-up to what I had done before.
Of course, NASA had some unique things in the way they do development
and flight test from their own history, going back to Mercury and
Gemini and Apollo. And in the early eighties it was nice to see some
of the Apollo and even Gemini and Mercury people that were still there
and talk to them about their history and the culture of NASA. A lot
of it was very similar to what I was used to in aircraft flight test,
but there were also some unique aspects to it that were very interesting
to me and served me later on when I got out of pure flight [test]
operations and got into some other jobs.
I was in the Astronaut Office for eleven years, and as was the case
with people that came when I did, it was punctuated, unfortunately,
by the accident. My first flight in the Shuttle was shortly before
the [Space Shuttle] Challenger [STS 51-L] accident, and my second
flight was after the accident, so it gave me an opportunity to see
some of the root cause things and to be able to participate in the
recovery and the Return-to-Flight activity and to watch the agency
as it learned from that catastrophe.
I think that learning for me was important in that it kind of steered
me towards flight safety even more strongly than I had been before
I came to NASA. I had long been a Safety Officer. When I was
in the Marine Corps, I was a trained, certified Aviation
Safety Officer. But when I got to NASA, I realized that that was an
area that would be of great interest to me if I stayed with the agency
after my flying days, and sure enough, that’s where I wound
up. In this, what is my third assignment at NASA, I’m working
in safety, reliability, and quality engineering, and I really enjoy
that. I think it’s a good calling for anybody, and I appreciate
the opportunity to serve in that way.
Johnson:
Since you’ve served in various positions, and as you said, a
lot of your work was in safety-related positions, how do you feel
that NASA has changed over time since the beginning, both in general
and also in your specific area?
O'Connor:
Probably the most important thing to me is the distinction between
flight operations and flight test operations. When we had the Challenger
accident, there were people who looked at what happened and advised
us. We looked at it ourselves. And I think it was a big realization
to us that this thing was more dangerous and higher risk than we thought.
There were things that we weren’t looking at hard enough. There
were processes that we backed off on because we didn’t think
that the risk required that kind of oversight and review.
In retrospect, we were wrong, and that piece of the story, I think,
hit us again on [Space Shuttle] Columbia [STS-107]. After the Columbia accident,
the board that looked at us thought that we were wrong to think of
ourselves as purely operational, and that a flight
test analogy fit us better. We should have more engineering oversight
and more safety oversight, more government oversight of the contractor
activities. When you look at that, I’d have to say that that’s
a common set of learning that, unfortunately, we had to learn twice.
When I look forward and occasionally hear people talk about how we’re
going to do a development activity with the new system and then we’ll
be operational, it always raises a little yellow flag with me that,
in fact, if I take it to the extreme, I could say that we’ll
never be operational in the way I think of operations. When I was
at the Naval Air Systems Command, we flew airplanes—when I was
there it was the F-18 and the AV-8B Harrier, and those two airplanes
went through a couple of thousand flights in a flight test environment
before we gave them over to the fleet pilots to operate and declared
them operational. That’s quite a few years to get those couple
of thousand flights.
I’m not saying that we need a thousand flights on a human-rated
space system to make it operational, but I am saying that it does
take a while, and it takes a lot of experience and a lot of tests.
We’re still learning on Shuttle to this day how it really operates
in the environment it’s in, flying the mission it flies, and
we’re somewhere where I would think of as mid to late flight
test on the Space Shuttle today. We’re certainly not operational
the way I remember it.
So that’s why I say the yellow flag comes up when I hear people
talk about operations. I think the bad implication of operations is
that it’s okay to back off and not watch what you’re doing
too much. Everything is all clear; procedures and the techniques are
all well established and tried and true, and you’re not getting
hit by surprises very often. I just don’t know that we’ll
get there anytime soon with the new systems, and we ought to keep
our eyes open and act more like a flight test community for the foreseeable
future.
Johnson:
Do you feel that that’s happening now, that it’s more
of a flight test feeling?
O'Connor:
Well, yes, and when I mention this to people, they say, “Oh,
sure, of course. You’re right,” and yet I see lapses occasionally
where folks will say, “Yeah, but this one here is already pretty
much proven out. The equipment that we’re using, we’re
not using high-tech, new technology. We’re using proven stuff,”
and so on. I see that as a path towards convincing ourselves once
again, as we have several times in the past, that we’re different;
that maybe we’re above and beyond the lessons learned in the
past because of differences. And I see similarities.
So maybe I’m a “glass half empty” kind of guy here
when it comes to this, but I don’t blame the folks at NASA.
They’re can-do folks. They’ve got a great attitude about
the future, about discovery, about their systems they’re developing.
That’s all wonderful. I just sometimes get a little bit concerned
that we forget some of those lessons from the past, and that’s
something that we need to keep in mind as we go forward.
Johnson:
Well, as you mentioned, maybe you’re a “glass half empty”
kind of guy, and I think maybe that’s the personality attribute
that may be good in your position in charge of safety. But if you
could, describe just a little bit for us the scope of your current
position and what you do, what you’re in charge of, as you see
it.
O'Connor:
Well, this job here as Office of Safety and Mission Assurance is responsible
for the functional oversight, the policy and direction and leadership
in the functions of safety, reliability, maintainability, and quality.
Now, each of those functions has different aspects to it. Safety,
for example, includes industrial and occupational safety for the workforce
in their day-to-day jobs. It also includes systems safety engineering
as a discipline in the engineering community.
Now, for years and years—in fact, ever since the Apollo
fire—NASA has decided that they would separate out safety and
reliability engineering from the Engineering organizations and put
them under a separate organization. They’re still engineering
disciplines, but they’re under separate organizations. Usually
it’s called Safety and Mission Assurance. I think there’s
one or two of the [NASA] Centers that have a slightly different term
for it, because they have Environmental or they have Occupational
Health.
But basically the system safety and reliability and quality engineers
in the agency fall under my functional leadership as well as those
who do pure assurance and verification of procedures. So we’re
not just the checkers; we’re also people who are actively involved
in the design and the development work with our people. So I think
maybe that helps to capture some of the scope of what we are responsible
for.
Johnson:
And as you mentioned, you’re also responsible for the oversight
of employee safety as well.
O'Connor:
Yes. Now, you mentioned up front that we have functional leadership
for safety and reliability and quality assurance for all the programs.
It’s also for all the institutions, and that’s basically
where the industrial and occupational safety piece comes in. We have
a close alliance with the Chief Medical Officer of the agency, who
is what they call the DASHO [Designated Agency Safety and Health Official].
I’m sorry that I mentioned an acronym, because now I’ll
have to figure out what that means, but he’s the safety and
health officer, by statute, for the agency. Every agency has to have
one.
Well, the word safety is in that title, but when it comes to mishap
prevention and pure accident safety kinds of matters, that’s
where our folks come in. When it’s health and the health environment
type of things, that’s where the health community tends to come
in.
Johnson:
And each one of those offices or the different Centers report to your
office.
O'Connor:
Yes. Well, the health community reports to the Chief Medical Officer,
and the system safety and occupational safety folks report functionally
to me, operationally to their Centers.
Johnson:
Let’s talk about the historical mission of this office as you
understand it historically. Has it changed? Or if you want to, talk
about when the office was first formed and how it’s changed.
O'Connor:
When I first came to the agency, this office did not exist here, but
it did at the Centers. Every Center had a Safety—and they used
to call them a variety of things. At the Johnson Space Center [Houston,
Texas] it was called the Safety, Reliability, and Quality Assurance,
SR&QA. At some point the “M” word came in there: Safety, Reliability,
Maintainability, and Quality Assurance.
Most of the Centers had similar titles to these offices. They were
a vestige of the post-Apollo time frame when the safety engineering
and the reliability engineering functions were put into those independent
offices for a check and balance. The check and balance was meant not
to be simply with the programs and the projects at the Centers, but
also with engineering. So the safety folks would look to the Engineering
organization, as part of their scope, as well as the projects that
were at the Center.
The reason I mention that is that you’ll find that in some other
aerospace companies and government organizations, the Safety
Engineer and especially the Reliability Engineer may not actually
be in a separate Safety organization. They may be assigned to the
Engineering organization as divisions of Engineering. There are other
places, in fact, the one I came from, the Naval Air Systems Command,
where the Safety Engineer wore two hats; reported to two different
organizations. They reported to the Chief Engineer or the Engineering
Director at the Systems Command. But they also had a separate reporting
line to an independent Safety organization.
The reason they did that was similar to the reason we came up
with ourselves after the Apollo fire: the Safety Engineer
needs to have a check-and-balance function over all the other engineering
that’s going on; not just worry about their hazard reports,
for example, that they’re being done on time or whatever, but
to also be able to step back and assure themselves that the safety
aspects of the other engineering disciplines were being carried out
properly, and that’s why they needed an independent path. They didn’t
want the Safety Engineer to be drowned out, or a Safety Engineer’s
input left out with no alternative route; that’s why they
had it that way.
At NASA we went a step further, and we actually took them out of Engineering
and put them in the Safety and Mission Assurance organization. Now,
the name Safety and Mission Assurance came about after the Challenger
accident, where we had a variety of names in the agency. We were adding
“-ilities” to the function, like Maintainability, in some
cases, Survivability or things like that. The titles got so long that
we decided to just keep it shorter by using the word Mission Assurance
to capture all the other things, including quality; quality engineering,
reliability, and maintainability.
That’s different from the Defense Department [Department of
Defense (DoD)]. In the Defense Department, Mission Assurance captures
different kinds of things, and we sometimes will confuse our friends
in the DoD because of that. But here at NASA Mission Assurance was
a term to capture all the other “-ilities” other than
safety and just make the title shorter.
The other thing that happened after the Challenger accident was that
the Challenger Mishap Board, the Rogers Commission, dedicated one
of their ten recommendations to the fact that we did not have an independent
Safety and Mission Assurance organization here at Headquarters like
we did at all the Centers. In fact, the Safety Engineer at Headquarters,
the Reliability and the Quality Engineers at Headquarters, were basically
assigned to the Chief Engineer’s Office here at NASA Headquarters.
So they thought that was a disconnect, that there ought to be a separate
Safety and Mission Assurance organization here just like we had at
the Centers, and that we should have functional ownership of the safety,
reliability, and quality disciplines under an Associate-Administrator-level
manager reporting directly to the Administrator. So we invented what
was then called Code Q and now is called the Office of Safety and
Mission Assurance.
Johnson:
From the beginning when this office was formed after Challenger and
compared to now, has the mission changed, and how do you see your
mission right now?
O'Connor:
From the beginnings of this Office of Safety and Mission Assurance
shortly after Challenger till now, I haven’t seen much change
in general scope and function. We have leadership and policy and directional
oversight of these “-ilities” that we call Safety and
Reliability and Quality. So that means we are responsible for directives,
the NPDs [NASA Policy Directive], NPRs [NASA Procedural Requirement],
those things that we call our directive system, standards that are
used, and so on. About a third of the people that I have here in Washington
deal with that every day, updating the standards and our policy directives,
which we try to update every five years, and that keeps them
busy, because we own about fifty or so of these directives.
We also, as part of our functional oversight, do audits and assessments
out of Headquarters here and try to keep track of what’s going
on in the Mission Directorates and at the Centers, so that I’m
aware of that and can participate in the reviews for those things
that come to the agency for top-level decisions, like, for example,
launches and major programs and so on. So those functions have not
really changed very much. Some of the things we’re doing about
those functions have changed a little bit, but the functions themselves
have been the same.
There have been several groups that have come in as late as the Columbia
Accident Investigation Board that have recommended that we actually
have more operational leadership of the things that are going on at
the Centers and in the programs and projects, and relieve the Centers
of those responsibilities that they’ve had all these years.
Functional leadership is not the same as operational management, and
there are people who every now and then will come in and suggest that
we combine those two, and that all the safety people, for example,
that work on programs and projects should actually be reporting operationally
to me, and that I would handle their budget.
Every time we look at this question—and we understand why it comes
up; this kind of un-delegating, or centralizing, suggestion
is not unique to our organization.
But every time we’ve looked at it, we’ve decided that
no, we think it’s better to allow a Center Director, for example,
to own and manage the Safety and Mission Assurance people at their
Center as part of the Center’s job in hosting and providing
the technical authority for the programs and projects that are at
that Center; the same with Engineering.
So we do occasionally challenge that notion, but so far we’ve
decided to pretty much keep it the same, and so I’d have to
say it hasn’t really changed too much over the years.
Johnson:
Why do you think it works better having the Centers somewhat independent?
O'Connor:
The reason I think it does is because the real work that goes on in
hosting a program and a project at one of the Centers happens at the
Center, and if we want to go to a model that says the Center Directors
are similar to what we think of as base commanders in the DoD analogy,
where their only responsibility is roads and commodes and providing
the paper and maybe the personnel but no technical authority whatsoever,
then it would be appropriate
to elevate the technical authority to me and the Chief Engineer of
the agency and the Chief Medical Officer.
But so far the agency has seen that it’s good that the Center
Directors have technical authority and that they be technical people,
they have technical staffs, that they be responsible for the technical
oversight that goes on in their Engineering and Safety organizations.
That’s why we’ve kind of kept it that way.
I’m sure we’ll look at this again as time goes on, but
so far—with one exception, and that is major programs like Shuttle
and Space Station: the Center Director, per se, does
not have technical authority over programs that are hosted at their
Center, just projects. So when you go down to a program review in
Houston for the Space Shuttle, for example, you’ll see an OSMA
placard at that review, and although the person wears a JSC badge,
they are exercising technical authority that’s a Headquarters
authority at that meeting. But when you go to an Orbiter project meeting
or any of the projects that you see at Goddard [Space Flight Center,
Greenbelt, Maryland] or JPL [Jet Propulsion Laboratory, Pasadena,
California], the Center Directors have the technical authority for
project oversight.
Johnson:
Let’s talk about the strategic vision for this office and what
you would see for the future. How would you like to shape the Office
of Safety and Mission Assurance?
O'Connor:
I told you earlier I’m sometimes a “glass half empty”
kind of guy, but I’m not the kind of guy who raises the red
flag every time it looks like somebody’s bumping up against
a rule or a regulation. I think that our community needs to be smart
enough to be an active member of any design or development team, and
not just a policeman. They need to be aware of what the safety requirements
are and be very familiar with them, but they also need to be smart
enough to understand those things in the context of the design as
a whole.
Now, it’s hard to find a Safety Engineer that knows all about
the entire integrated story, but they have a community back in their
organization that does. So you may have a Safety Engineer who turns
out to be a propulsion expert and is a little weaker on electrical;
that’s fine, as long as they know where to go to get electrical
help in the Safety organization. And they need to have a “yes if”
kind of attitude, not a “no because” attitude. It’s
a little easier to be “no because.” “No, you can’t
do that because it violates this standard,” for example. It’s
a little harder to be a “yes if.” “Yes, you can
do it that way if you come up with an equivalency for this standard
that you’re going to have to violate.” That’s the
kind of attitude they need to have.
I think that’s most helpful to the designers. Standards and
rules and regulations were all based on lessons learned from the past,
so we need to give them credit and understand why they’re there,
but we also need to realize that it would be virtually impossible
to design, develop, and operate a system that meets every rule we’ve
got. We’re going to have to find ways around some of those regulations
and rules, by definition.
The Space Shuttle, for example, had some requirements for reliability,
and it had to find its way around them several thousand times with
its design. We call those “waivers” or a “critical
items list” and those sorts of things. With the kind of work
we do, we have to make sure that if and when that happens, we’ve
got to have a safety and a reliability and a quality community that
can help the designers figure out the best ways to deal with these
things.
We’re not there yet. In this agency we have a Safety and Mission
Assurance community that is very good. In fact it’s a lot better
than what I had in the government ranks when I was at the Naval Air
Systems Command as far as their training and their education and their
understanding of what’s going on. But we can also improve ourselves,
become better at our disciplines, and be better systems
engineers as a whole, and help the agency move forward on Constellation.
Johnson:
Let’s talk about the budget for this office, and whether it’s
adequate now to accomplish this vision that you have, or if you could
increase it or even double it. What would you do with that?
O'Connor:
Well, we recently had an independent team come and look at us, and
they didn’t think we were spending enough time, effort, resources,
on some of what I’m going to call the engineering excellence
parts of our role—some of the things I told you about, like the
need to be “yes if” people. In order to get us to that
next step of competence we need to do some things, and some of those
will cost some money and some resources and training and so on. The
Chief Engineer is in the same boat. He’s trying to improve the
engineering excellence in the agency as a whole for all the disciplines,
and we’re sort of following in his footsteps there.
We’re also creating a NASA Safety Center up at Glenn Research
Center [Cleveland, Ohio]. Right outside the gate there is a facility
that’s the home of our new NASA Safety Center. That Safety Center
will have a big role in helping improve the training and the qualifications
of our people to where we can get to that next step, and that will
probably cost a few million bucks which we haven’t had in our
budget in the past, and we’ll need to step up to that.
Now, I think we’re spending pretty much a bare minimum on other
things that we have to do as well, so that means a little bit more
money will probably be required, and the agency has told us that they
expect that and will deal with it. Doubling my budget is not a good
idea. I wouldn’t know what to do with all that. Certainly improving
today’s posture a little bit so we can get a better handle on
agency-wide technical excellence is in the works.
Johnson:
So training and that sort of thing.
O'Connor:
Yes. We also need to improve how we do mishap investigation support
and a little bit on how we develop new tools and standards. None of
those are free. They all cost. Independent verification and validation
of the software, that’s a fairly sizeable piece of our budget,
and we need to keep doing that. I don’t think we need to double
it, but we can’t let that dwindle. That’s important for
the software.
Johnson:
How would you improve the mishap investigation support?
O'Connor:
For mishap investigations what I find is that when we convene a mishap
investigation board—and just to give you an idea of what we’re
talking about here, in the last two years we convened thirty-one Class-A
and -B mishap investigations. A Class-B is what you do when you have
damage to hardware in excess of $250,000 or an injury to personnel
that requires that they go to the hospital. A Class-A is when you have
a million dollars in damage or a serious injury, incapacitation, or death.
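[The damage and injury thresholds described here amount to a simple classification rule. A minimal sketch in Python, added for illustration only; the function name and injury categories are the editor's assumptions, not NASA's formal mishap definitions:

```python
# Illustrative sketch of the Class A/B thresholds as described in the
# interview. Names and injury categories are assumptions, not NASA policy.

def mishap_class(damage_usd, injury):
    """Classify a mishap by damage cost and injury severity.

    injury: "none", "hospitalization", or "death_or_incapacitation"
    """
    if damage_usd >= 1_000_000 or injury == "death_or_incapacitation":
        return "A"
    if damage_usd >= 250_000 or injury == "hospitalization":
        return "B"
    return "below Class A/B threshold"

print(mishap_class(1_200_000, "none"))          # A
print(mishap_class(300_000, "none"))            # B
print(mishap_class(10_000, "hospitalization"))  # B
```
]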
Like I said, we had thirty-one of those in a two-year period, and
each of those boards was a three- to five-member board. Each of them
had a Chair who depended more or less on members of the board to
help them navigate our mishap investigation process.
It takes a lot to do a good mishap investigation board, a lot of good
engineering, a lot of good analysis, and very good communication skills
in the way of findings, recommendations, and writing a good report,
and it takes time. Some of our boards struggled with the thing, not
necessarily just because it was a difficult technical challenge to
find out what happened and get to root cause, but because the tools
and techniques for developing your root cause analysis can take a
lot of time and effort.
Frankly, some of them struggled more than they needed to just because
they hadn’t done it before, and they hadn’t really sat
down and gone through the thinking that goes into findings and recommendations.
So they spent an awful lot of time writing their report. The technical
part was easy for the engineering team that they had, but the writing
of the report was hard, because it’s not the normal kind of
report that they’re used to writing.
All those difficulties that they had, I think they could use some
help in the way of experience, facilitation, and advice. One of the
things we will call for our Safety Center to do is to have a small
staff of people who are very good at mishap investigation, especially
that last part, the development of the findings, recommendations,
and the writing of the report, so that each team doesn’t have
to learn this on their own the hard way. They can have someone there
to advise them and help them get through that. I think that will cut
down the amount of time it takes to do these reports and improve the
standardization across the board. So we’re talking about four
or five, maybe six, people that would be dedicated to mishap investigation
support for the agency.
Johnson:
Kind of streamline the whole process that way.
O'Connor:
Yes.
Johnson:
Let’s talk about NASA’s impact on society in the past,
what it is now, and what you see in the future as far as what the
impact will be.
O'Connor:
Well, NASA’s had impacts in a variety of ways. You can look
at the old Spinoff magazines that we used to put out and see all kinds
of things that technology brought to the fore, but there are also
some process things. I couldn’t tell you that we invented some
of these processes that I’ve been impressed with, but we certainly
have taken them on and made good use of them. Process failure
mode and effects analysis, for example, was something that our [Morton] Thiokol [Incorporated]
folks developed on the solid rocket motors.
That’s an excellent mission assurance process that doesn’t
just look at the design of the hardware; it looks at the process that
people are using to build a motor or build a nozzle, for example.
It uses a process similar to what you do in a design to look for single-point
failures in your process where you might then solve the problem by adding
an inspection or an independent oversight function of some sort. That
process, I think, has improved the reliability of the product that
comes out the back end of that process. I know that there are other
people now using it outside of the agency for their own purposes.
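[The process-FMEA idea described here, scanning the steps of a manufacturing process for failures that nothing would detect or back up, and flagging them for an added inspection, can be sketched loosely. This is a hypothetical illustration added by the editor, not the Thiokol procedure; the step names and record fields are invented:

```python
# Hypothetical process-FMEA scan (not the actual Thiokol method): flag
# process steps whose failure mode would go undetected and has no backup,
# i.e. candidate single-point failures that might warrant an inspection.

process_steps = [
    {"step": "mix propellant", "failure_mode": "bad mix ratio",
     "detected": True,  "backup": False},
    {"step": "cast segment",   "failure_mode": "void in grain",
     "detected": False, "backup": False},
    {"step": "install O-ring", "failure_mode": "seal damage",
     "detected": False, "backup": True},
]

single_points = [s["step"] for s in process_steps
                 if not s["detected"] and not s["backup"]]
print(single_points)  # ['cast segment']
```
]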
Now, we tend to beat ourselves up and get advice from our own mishap
investigations on where our failings are, but we also have people
calling all the time asking us how we do this, that, and the other,
looking to benchmark NASA on how it does safety practices.
We have a pretty good record in the government for industrial and
occupational safety across our Centers. Part of that is because we
have stepped up to the Voluntary Protection Program, VPP, which is
an OSHA [Occupational Safety and Health Administration] process, which
basically gets the leadership much more involved and improves the
discipline on your [operational] hazard analysis and your [incident]
reporting. Those things have given us a big improvement in our mishap
statistics for slips, trips, falls, industrial and occupational safety
matters.
So I think there’s been some impact there, because we’ve
had other agencies and other companies come and look at us to see
what it is we’re doing and have taken some of those lessons
back. We like to benchmark other people, too, but we find ourselves
as a subject, or an object, of a benchmark every now and then.
Johnson:
What do you think the impact in the future might be?
O'Connor:
Well, hopefully we’ll help this country do what we do best,
and that is explore the unknown and try to answer some of those questions
that nobody else can come up with. We’ll do it in new and inventive
ways that are more efficient, more effective, and those spin-offs
or impacts on society will take root in other areas. But fundamentally
we don’t do our work here at NASA for spin-off reasons. We do
it because we have a mission to go and explore the unknown, and we
do find that, oh, by the way, when we do that work, there are things
that the country and its institutions learn from us in the way of
technology or procedure or process that help in other ways.
Johnson:
As you mentioned, exploring is something that NASA does best. Do you
believe that’s the most important role for NASA, for the nation?
O'Connor:
Yes. Yes, I do. I think any nation that is going to claim some sort
of a historical leadership role, in retrospect you would find that
they spent some part of their resources on exploring the unknown.
When you look back on ancient civilizations and so on, people focused
on things like the art and the technology that they developed that
made them great. Our job is the technology and answering the science
side of the questions. There are other people who work the art and the
architecture and that sort of thing that make great nations, and other
things in philosophy and social sciences and so on. But when it comes
to scientific unknowns, that’s one of the areas that we’ve
been asked to deal with, and that’s what we ought to be doing,
and we ought to focus on that.
Johnson:
What about the importance of human space flight and robotics, the
importance of both of them and how you feel about that?
O'Connor:
Yes. I think there’s a role for both humans and robotics in
exploration. We wouldn’t be putting human beings on inhospitable
places like Venus, but we have drawn a line somewhere between the
Venus inhospitability and the Mars inhospitability, and said that
maybe Mars is okay. I think we could probably deal with that.
Having said that, though, the role of the robots is to pave the way.
When we had Spirit and Opportunity doing their thing on Mars,
I remember on one of my visits to JPL talking to one of the scientists
out there about it, and he was going on and on about how much more
effective and efficient timewise that whole operation could be if
there were a human being actually there working with those robots
rather than having the big time delays, the limitations of telemetry
and so on to deal with on the ground.
So even our robotics people, I think, sometimes will tell you that
there are places where human beings can really work with the robots,
not instead of them but with them, to come out with a better exploration
model. So I’m looking forward to the day when we’ve got
human beings and robots on Mars working together.
Johnson:
What about the aeronautics side of NASA? Is that something that should
stay with NASA, and if so, why?
O'Connor:
Well, maybe I’m too simplistic about it. Being a Marine, I guess
maybe that’s what comes with that background. But we’re
one of the few agencies that has an “and” in our title. You know, Food
and Drug; they can’t just do one or the other. Their whole charter
says you do both, and so does ours. Aeronautics and Space, that’s
what we were set up for. If they take the “and” out of there and get
rid of aeronautics, then we won’t do it anymore. But as long
as we have that “and” in there, I think we owe it to the public, and
it just goes back to the beginnings. There’s a lot of discussion
about is it going away. Well, it can’t really, unless we go
change our charter.
Do we need to do more? Yes, sure, but it takes resources. There will
always be a balance in there of what’s the appropriate amount.
I think Lisa Porter working with other government agencies and the
White House recently was instrumental in establishing a framework
for how the government deals with aeronautics research and development,
and our role in that is going to be very pivotal and important in
doing advanced research stuff. Not so much the prototype work we used
to do; more basic research, and that’s great. Somebody needs
to do that, and that’s an important part of aeronautics.
Johnson:
You mentioned earlier about some lessons that we’ve learned
in NASA, but based on your experience with NASA and also based on
what you know about it historically, what do you feel that the lessons
learned are through the last fifty years?
O'Connor:
Well, in the job I’m in I tend to focus on lessons learned that
had to do with failures and how we recovered from them. I guess that’s
part of the nature of this job. I sometimes refer people in my community
and in the engineering community to a book by a fellow named [Henry]
Petroski called To Engineer Is Human. In that book his basic premise
is that all the great engineering advances throughout history tended
to come from recovering well from failures. Not to say that every
time there was a failure, people recovered well from it. Sometimes
people ignored failures, and so they didn’t get any learning
from them. But when you have a failure, you owe it to yourself, the
people who may have suffered in the failure, and the future, to learn
as much as you can about why it happened and how to avoid it in the
future.
So I tend to look at things like the Apollo fire, the failures we’ve
had in our space flight, you know the Atlas failure with lightning
back in 1987—twenty years ago this month, in fact—the
human space flight failures that we’ve had, failures in operations
where we lost people in aircraft, and some of the mission failures
we’ve had in our robotics programs, and I worry that we will
lose some of those lessons. I worry a little bit about how we capture
lessons learned. I think we have a lot to do there to make sure we
don’t lose those.
This office several years ago, in worrying about that—this is
before I got here—developed a system called Lessons Learned
Information System, LLIS. As you know, every two or three years any
kind of database or software you come up with to
do anything is pretty much outmoded, and it’s the same with
the LLIS. It was a great thing to do. It was meant to solve part of
that problem on not losing our lessons learned. When you look at it
today, you say, “We’ve got to do better than that.”
It’s not searchable like we’d like it to be. It’s
not using the latest technology and so on.
I’m a believer in lessons learned not just being in a database
or in a book somewhere, but also in the day-to-day operations, the
procedures, the design requirements, the standards that we have. Those
things need to capture our lessons learned. That’s how we would
not lose them.
An example is the Atlas failure I mentioned, the one struck by lightning.
Well, that lesson had been learned in Apollo. Apollo 12 was struck
by lightning. There was a lot of work in developing the science and
understanding of triggered lightning, which is a phenomenon that shows
up much more in [launch vehicles] with long ionized plumes coming
out of them than it would in aircraft, where it’s not a big
deal. But from the Apollo experience there was a lot of learning and
lessons that came out of that, and yet a few years later in 1987 we
were struck by triggered lightning and lost the payload and the Atlas
rocket.
In retrospect you’d say we failed to learn that lesson. It turns
out that when you go back and look at that accident investigation,
you find that there was a rule in the rule book, the launch commit
criteria, that dealt with that. It said don’t launch in clouds
that are a certain depth with the freezing layer going through them.
But there was a lack of understanding by the launch team about why
that was there, what it was for. It’s not clear from reading
the transcripts that they even knew that that rule had anything to
do with triggered lightning, because they were asking questions about
icing and so on.
So you could say that we had imperfect capture of lessons learned
there, and that that was part of the root cause of that accident.
That’s the kind of stuff I worry about.
Johnson:
That’s applying the lessons once they’ve been learned.
O'Connor:
Yes. How do we keep from repeating mistakes? Shame on us when we have
something happen twice. It’s just almost unforgivable, and yet
you really struggle with how to deal with it. There are so many lessons
we’re learning every day in our design and operational activities
that it’s really difficult to capture them and make sure that
the next generation doesn’t forget those. That’s not an
easy task.
Johnson:
Do you have any ideas on the best ways to tackle that?
O'Connor:
Sure. For example, when we develop our lessons learned from accidents
and failures, we should find homes for those things that include not
only the lesson itself but some reference to show you where it came
from and why it’s there, so that people understand that that’s
not something you can violate or waive without discussing and
understanding why it’s there. Just putting the rule in there
doesn’t necessarily prevent people in the future from having
a problem.
Human nature is such that in the “yes if” mode that I
told you about, yes, you can do this if you can come up with an approach
that matches that rule that you’re trying to waive or deviate
from. I know that’s going to happen in the future. We’re
not a rule-driven organization, and when people do challenge the rules
and the regulations, they need to do it from a knowledge base that
captures the real lesson learned, not just what the rule says, but
why it’s there and why it got there in the first place. That’s
a lot of effort to put a system like that into place.
There are people who have done it well. The Mission Operations people
in Houston, for example, have. The Atlas accident was not a human
space flight thing, but because of that accident they decided that
from now on the flight rules that we live by for human space flight
will have not
just the rule but a little italicized rationale behind that rule right
in there with the book so that everybody reading that rule will see
why it’s there.
It’s hard to capture the entire why. Sometimes the why it’s
there could be a volume. But in two or three sentences they capture
the essence of it and maybe a reference to something else. That’s
the way they tried to deal with that lesson learned.
There are other ways to do it. Training, of course, is a big piece
of that, making sure that people who are qualified as operators understand
the rules they live with, not just what they are but why they’re
there.
Anyway, all that stuff is important, and it’s one of the things
that I worry about as much as anything.
Johnson:
Do you feel that tapping into that corporate knowledge or the past
generation helping the next generation understand why those rules
and regulations are there, that that is important?
O'Connor:
Yes. I think, in fact, things like this oral history project are a
big piece of that. People should not put it on the shelf. They ought
to make use of it. They’re going to learn something every time
they touch it, and they may even find that it prevents a mishap.
Johnson:
Let’s hope so. Let’s talk about your perception of NASA
culture and what you feel that that culture is.
O'Connor:
I think the NASA Values statement helps a little bit with that, the
Core Values, which over the last ten years have pretty much been about
three or four items, integrity, safety, excellence, and teamwork.
Different words, maybe, have defined them in the various strategic
plans and so on, but those four things are things that NASA people
tend to strive for. They have a keen sense of awareness of safety.
The Snoopy Program and the Space Flight Awareness Program, for example,
are great examples of how NASA people really do worry about the people
they strap into the spacecraft, and the same with the airplane community.
That’s a cultural thing that I noticed when I first came to
this agency.
The integrity and the excellence, when you talk to NASA people, they
take pride in their work, and they take pride in the integrity of
their work. If they can’t trust somebody in the chain of command,
for example, they take offense at that, because they believe that
integrity is important in this agency, and somebody that maybe is
walking on the edge of an integrity issue or an ethical issue really
bothers NASA people. That shows up as a cultural aspect that I appreciate.
I know Steven [J.] Dick just did the culture survey, and one of the
things that really bothered us when we heard about that was that there’s
a higher than comfortable segment of our NASA population who believes
that there’s an integrity issue with their leadership, for example.
“Can you trust your leadership?” I think is the way the
question came out, and it didn’t come out 100 percent yes. When
it doesn’t come out 100 percent yes, we in the agency worry
about that.
Now, just because something is a Core Value doesn’t mean we’re
there. It does mean it’s something that we value, though, and I sense that.
Johnson:
Since you’ve been with NASA for several years and from your
perspective as a long-term NASA employee off and on, if a young person
came to you today and asked you about joining NASA as a long-term
career and staying with NASA, what would you tell them?
O'Connor:
I’d say don’t worry about the long-term part of it, but
if you have the drive and the interest in doing important work for
the nation in the area of discovering unknowns, and you don’t
mind long hours and hard work, you will enjoy this agency. You’ll
enjoy the people you work with, because they’re all of like
mind, and I think you’ll enjoy the values that we share. Now,
if you’re coming here for the money, for the retirement plan,
for the location of the Center, for example, forget about it. That’s
not why people come to this agency. They will be disappointed in all
those other things. If they’re not turned on by the mission
that we have, then we probably don’t need to take them on.
Johnson:
And why do you think people, the majority of people, come to this
agency?
O'Connor:
I think the majority of people that come to this agency do so because
they like the mission.
Johnson:
Is there anything else that we haven’t talked about on any of
these subjects that you’d like to mention?
O'Connor:
No, I don’t think so.
Johnson:
Okay. I appreciate your time today.
O'Connor:
Thanks, Sandra.
Johnson:
Thank you.
[End
of interview]