NASA Johnson Space Center
Oral History Project
Edited Oral History Transcript
Interviewed by Jennifer Ross-Nazzal
Houston, Texas – 9 July 2018
Today is July 9, 2018. This interview with Mark Voyton is being conducted
at the Johnson Space Center for the JSC Oral History Project. The
interviewer is Jennifer Ross-Nazzal. Thanks again for stopping by
on your way to LA [Los Angeles, California], that’s great.
My pleasure. Glad to help tell the story.
Tell us how you became involved with the James Webb [Space] Telescope.
For me, it happened back in around 2002. I was working for Northrop Grumman
at the time and was hired to do some electrical systems work to architect
the data system that would be ultimately used in James Webb. I accepted
that position, and then we started working on the avionics, which
was the ISIM [Integrated Science Instrument Module] Command and Data
Handling System, or ICDH. We had a lot of development to do. Most
specifically we developed SpaceWire, which was our backbone communications
network, or router scheme, that was primarily used—and is primarily
used—to extract data from each of the instruments. After they
make an observation that data image is stored, packetized, and sent
over that SpaceWire link to be stored and then ultimately telemetered
back to Earth. That started in 2002. I stayed, and I ultimately grew
into the lead role on that.
In about 2008, I was selected as the ISIM deputy [manager]. It’s
the Integrated Science Instrument Module, which is the instrument
package that lives on the back of the telescope. Accepted that position
and I worked that for about six years with a really exceptional project
guy, a guy named Jamie [L.] Dunn.
May 5th of 2014 I started the position of the OTIS, the Optical Telescope
Element and the Integrated Science Instrument Module, otherwise known
as OTIS manager. It’s our easy way of saying all that together.
So I’ve been doing that roughly for the last four years.
Most of my effort during that time has really been focused on getting
the telescope, the OTE and the Integrated Science Instrument
Module, integrated together, ambient-tested at [NASA] Goddard [Space
Flight Center, Greenbelt, Maryland]. Getting it readied for delivery
and prepared to be integrated into Chamber A over in [JSC] Building 32.
During that time, we also did a lot of risk reduction work. We had
three large risk reduction tests that really prepared us for the major
thermal cryo [cryogenic] vacuum test that lasted—it was 100
days actually. We conducted two optical GSE [ground support equipment]
tests and then a thermal Pathfinder test from 2014 to I guess the
end of ’16. During that time we checked out all of the equipment,
all of the instruments that were integrated into the chamber. Checked
out all of our procedures for the personnel that were going to be
running tests, operated all the equipment. Worked very closely with
the Johnson team who were really fine-tuning how to operate the chamber
and the pressure and temperature profile.
Quite frankly—I’m jumping ahead—it worked out really
well. All of that training and learning together really made for a
test that was very robust. And as you know, robust enough to survive
[Hurricane] Harvey. I think without all that planning, training, and
working together, Harvey would have been a challenge. I think that
mentally it would have been a challenge for the team. But since they
were so confident in the systems, their operation, their procedures,
their backup systems, they really were able to work through that.
How did you first become involved with the test? You moved up the
ranks and you were becoming head of the different instruments on the
telescope itself, but how did you first become involved with this
idea of taking the equipment and bringing it here to Johnson?
Let’s see. I’d like to tell you that the whole notion of it was mine,
but that would be a complete lie. Ultimately, I got to tell you, Lee
[D.] Feinberg was early on working with a number of folks: Jonathan
[L.] Homan [at JSC], John [F.] Durning back at Goddard, Charles [E.]
Diaz, who was all the way there from the beginning when we really
started evaluating Plum Brook [Station at NASA Glenn Research Center, Sandusky, Ohio].
I don’t know if you know Plum Brook was being evaluated as a
possible location to do the testing, but ultimately the team selected
Johnson. It was the right decision, but nevertheless all that was
going on while I was doing these other things that I described to
you earlier. What happened in 2014 was basically—the project
manager who was leading the effort prior to me was a guy named Jim
[James M.] Marsh. He might also be someone you might want to chat
with, by the way.
Yes, we’re always looking for other people.
He was moving out, and project management decided to select me to replace
him. At that point in time, in 2014, the chamber was being modified.
We were preparing for the risk reduction tests. At the same time,
the telescope and the instrument modules weren’t delivered;
they were still being tested. Mostly the ISIM was being tested.
For the telescope, this test was its cryo vacuum certification. It was
mostly for the telescope mirrors.
The Integrated Science Instrument Module came fully verified. It had
gone through three cryo vacuum tests and completed all of its environmental [testing].
I don’t want to say it was along for the ride—because
there were some very critical things that had to be evaluated and
tested between the telescope and the instrument—but the focus
was really on making sure the mirrors and the telescope were behaving
as planned and as expected.
It was a very high-paced effort from the moment I started. There was
really no breathing room for anybody on the team, and it’s been
a really long run, a very rewarding one, but nevertheless a really
long and exhausting run. A lot of folks traveling, a lot of folks
away from home.
What was interesting is when I came down, it wasn’t quite clear
in May of 2014 who was running the test. I don’t know if this
came across in any of your other interviews.
It wasn’t clear. Johnson, they thought they were running the test.
We thought we were running the test. One of the things we had to work
out early on was to figure out, “Hey, where are the strengths?
Let’s leverage the strengths.” It’s Johnson’s
facility, and Goddard developed the hardware.
Very quickly we put together an MOU [memorandum of understanding].
Scott [A.] Swan was involved with that. [Organizational code] EC4
[Crew and Thermal Systems Division, Systems Test Branch] was the branch
where much of the work was being performed to support it. We had to
craft an MOU to really make it clear who’s responsible for what.
Once we got that organized, it went pretty well after that.
It was confusing in that I think both Centers were really thinking
they were helping, but at the same time we had to sort out what helping
really meant. Once we got that worked out we were able to continue.
But there was a little period in there, a few months, where we weren’t
sure whether we should be handing over and then supporting [the test].
We finally had this division of labor that worked out really well.
I actually crafted the language for it. Johnson would operate the
chamber in accordance with the ambient pressure profile, as specified
by the test hardware team. And then when we conducted the test there
was a back-and-forth, almost hourly or as necessary with the handover.
It was choreographed very well, and the two teams just worked magnificently.
What did the chamber look like when you came down that first time?
What still had to be done?
Good question. There were bits and pieces. A lot of it was integrated,
but some critical systems weren’t. The Center of Curvature Optical
Assembly [CoCOA] was integrated, but there are some autocollimating
mirrors which are, you can imagine, the size of this table there [demonstrates].
I think that’s five or six feet in diameter. There’s three
of them. At the time, only one of them was integrated. One of them
had cracked in its cryo certification testing up at Harris [Corporation]
in Rochester [New York], so they had built another one. We had installed
one of three mirrors. We had installed the photogrammetry system.
There was a lot of other instrumentation.
Diodes that monitor temperature—I think we had 1,500 diodes
in our final configuration. They were constantly getting installed,
then there were also targets. This photogrammetry system has to see
targets. Based on all the targets, it can measure, to within millimeters,
how things are moving. You needed targets to do that,
so they were always putting targets on and scales. There were composite
scales that were added, and targets were put on those. So you could
see how composites were changing and understand the changes over temperature
with a composite material. There was additional contamination [monitoring]—CQCM
[cryogenic quartz crystal microbalance]. In effect they were able
to measure contamination, particulate accumulation, and rates of accumulation.
A number of those still had to be installed. Again, that was back then.
In our first test, which was OGSE [Optical Ground Support Equipment]-1,
we had one ACF [autocollimating flat] mirror, we had the PG [photogrammetry]
system, and we had CoCOA. Then all of the equipment to get the Pathfinder—which
was basically a surrogate telescope but it only had two mirrors on
it—was also installed. What we were missing was something that
would behave like ISIM. OGSE-1, we checked out everything, but we
weren’t able to run light through something that would actually detect it.
So OGSE-2 we installed what was called the Beam Image Analyzer, and
that basically mimicked the detectors of ISIM. We could actually evaluate
the optical path of the light we would run through the system in the
final test, as well as all the instrumentation that would evaluate
the quality of that light. That would ultimately evaluate how well
ISIM and the telescope performed together.
In the second test we added in the Beam Image Analyzer, then in the
third test we had a leak. The CoCOA had two pressure-tight enclosures
that had some really difficult requirements to meet. One of them leaked,
and we couldn’t have a leak at its rate. It was unacceptably
high. Ultimately, at the end of the day, at those temperatures, when
you’re leaking air, it’s freezing out. Oxygen, nitrogen,
all that freezes out at cryo temperatures.
A number of things have to happen. When you warm up, you have to liberate
that in a way that is managed so that it doesn’t create gradients
and cause hardware to possibly be damaged. You’re also at risk
if you accumulate a lot of gas and for some reason you lose your helium
system. Temperature goes up, that gas is released, and then you definitely
have what’s called a very high gas load. That would ultimately
couple warm surfaces to cold surfaces in an uncontrolled way, and
that would definitely damage the hardware.
The bottom line is we had to repair the leak. It was a big [concern].
We had a Failure Review Board populated with Johnson and Goddard personnel.
There was a lot of discussion about how to do it properly, but we
ultimately ended up using the appropriate vacuum-quality tubing. We
had some requirements in there to couple. We didn’t want a lot
of dynamics. We had basically some soft interfaces, and it was determined
that it would be better to just give up a little bit of dynamic performance
and have more coupling with some harder material. We ultimately were
able to fix that.
Then we had a leak on the other side, the pressure-tight enclosure,
and that was a tougher decision. Ultimately, I basically decided I
wasn’t going to rework that side the way I did the first side,
which is called the CSA [Canadian Space Agency] hexapod side. I didn’t
rework that in the same way, because the risk of changing it was too
high at that time. We were able to go in there and find the leak.
Basically it was some clamps, a set of three. They had loosened up,
so we retightened them and fixed it.
In the very end, the chamber was tighter than it had ever been through
all four years. Again, in that thermal Pathfinder test we were able
to identify the leak and repair it. And in that test we put in what
was called the Space Vehicle Thermal Simulator, which emulated how
the telescope attaches to an area of the spacecraft. That, thermally,
was very important to understand that performance. That was the final
piece. That test was called the thermal Pathfinder test, and it was
really an evaluation of our ability to cool down and to make sure
that all the temperatures at the various interfaces were correct for
when OTIS arrived. And that was successful as well.
What was your role during those three simulations?
For me, I was a test director. You have three shifts a day. Those first
three tests ran anywhere from 35 to 51 or 52 days. I think thermal
Pathfinder was 52 days. I was a test director, although not as much
of a test director as some of our other team members. That was more their role.
I was down here most of the time during those tests. Oftentimes you’re
here when things go wrong. They want to bring their story and have
an opinion. Most of the time I was here when things weren’t
going as planned, so that we could quickly make decisions, and move
efficiently, and move with certainty. Ultimately, my job is really
to keep my boss, Bill [William R.] Ochs, in the loop on how things
were progressing day to day and our challenges, how the team was doing.
We had a lot of issues with the team working too many hours early
on—I’d have to go back and look. I think it was during
OGSE-1 or between OGSE-1 and 2. We had someone fall; we had a safety
event. At that point in time I was down here to really make an assessment
of what is the right amount of hours to be working safely, because
that’s nonnegotiable. We worked through that.
At the end of the day, my job was to work through the problem/failure
reviews that we had. If we had a major problem that needed to be addressed,
you have to form a team. A PFR [Problem/Failure Review Board] is established
when there’s a failure. We had to work through a number of those
during our risk reduction test, much of which had to do with the hardware.
I mentioned the leak. That required a Failure Review Board to really
understand fully that solution and get the corrective action. Monitoring
that, trying to maintain a balance between cost, schedule, and getting
the right answer was my role, and then just general guidance and leadership
was the job.
It was nice to be a test director because it really got me involved
in the actual operation of each of the tests. It helped give me a
lot more insight into the challenges that the team that was living
in Building 32 running the test was facing. It gave me a good understanding
of all of the systems and the interaction of the systems. For example,
when you’re running PG, taking PG measurements, that limited
you from imaging. There were certain relationships that existed between
various pieces of test equipment that we learned during the risk reduction.
There were some interactions, so we needed to manage that. It was
good to have that understanding.
How well did things work? You mentioned that you had to get an MOU
between JSC and Goddard. Then of course you had all these contractors,
Jacobs [Engineering Group], Harris, and Ball [Aerospace & Technologies].
You had a lot of people. How did things work?
That’s a great question. It worked extraordinarily well. We
structured it so that all of the Johnson team worked with Jacobs.
PAE [Pacific Architects and Engineers] was managed basically out of
CTSD [Crew and Thermal Systems Division], but mostly out of EC4, [by]
Mary [P.] Cerimele and Jonathan Homan. They managed all the work when
it came to touching the chamber; they were integrally involved in that.
Then on our side, my job was to make sure that Harris, Northrop, Ball
Aerospace, and all of our subcontractors—one of my main roles
was to make sure that they all had the right statement of work and
scope to be able to work. And that the interfaces between them and
the other contractors were understood well enough that I wasn’t
just going to be spending time in meetings all day saying, “Hey,
Northrop isn’t doing a piece of the work that we required, and
we didn’t put it in our contract.” My job was really to
make sure that our technical folks did a thorough job of linking it all together.
I had a very good, stellar integration and test lead, manager—a
guy named Ed [Edward L.] Shade—who planned and coordinated every
day. There was a meeting in the morning at 7:00 a.m., sometimes 6:00
a.m., and they just coordinated all the work for the day. There was
a schedule that managed all of the individual companies. They knew
exactly which line items on the schedule they were responsible for.
They knew that each company has a different way of documenting how
they go about doing work. We had a work order authorization system
that we use at Goddard, and then Northrop Grumman uses what’s
called CAPE [Computer-Aided Process Engineering], a command media
and procedure system. Harris uses manufacturing instructions. You
had to wrap all of that under a WOA [work order authorization] to
then execute it. Then you had to make sure that each of the individual support
personnel, like Quality Assurance and Safety—you needed them from
Northrop, you needed them from Harris, you needed them from Ball.
Then you had Goddard and/or Johnson [review and sign off].
Our Safety and Mission Assurance manager hired two absolutely stellar
Johnson employees, a guy named Larry [M.] Starnes—he was our
Quality Assurance manager down here—and a guy named John [P.]
Byard. Both worked at Johnson their whole career, and they were able
to just work amazingly well. They were two of the best I’ve
worked with in terms of the quality of their work and their insights
and the way they went about doing their job. They fused it all together
from a quality assurance and a safety perspective.
Basically my job was to make sure that all the contractors and civil
servants knew what their role was and make sure that there was
adequate scope and the contracts and/or tasks were written to capture
the work they were going to perform.
You mentioned something I thought was interesting, that you had to
come down here to evaluate how many hours was ideal for a person to
be working. No more than, I don’t know, 8, 10 hours. Why do
you think people were working so hard and so madly on this project?
What was the interest?
It was a passion. I think it was pervasive across the whole team—Goddard,
Johnson, all of our contractors—the passion to prepare for the
largest and most capable space telescope ever assembled, that would
be tested here. It was the marquee test that had been prepared for.
Many started in ’96, but when 2014 rolled around—and then
ultimately in 2017, when OTIS was here—people were passionate
about preparing for this.
There were a number of things going on. The contractors, when they
come down here, and the civil servants—when you’re on
travel, you work. You’re down here to work. So it’s a
combination of that, along with their just desire to want to be able
to really maintain schedule, be ready, be complete, be thorough.
I think we ultimately ended up on six tens, times two. One shift:
10 hours a day, times 6 days, and then a second shift the same.
Effectively, you’ve got roughly 19 hours of coverage with an
hour of overlap. That was our pace when OTIS arrived. That seemed
to be about right. Folks couldn’t work more than 60 hours. We
had instances where folks were working more than that, and it was
wearing them down. They were physically and mentally fatigued, and
that’s not good. That’s not good for them safety-wise,
and it’s not good for the hardware.
The environment over there, you had rails and then you had very large
structures that you had to navigate around continually. I’m
actually impressed that we only had a couple of incidents. One of
them was severe. We had some broken bones. But in general the team
was pretty careful about working around the hardware, both for their
safety and hardware safety.
Did you have a chance ever to work on the floor?
Yes. I would occasionally go into the clean room when there was
something to see.
One of the major events we had to plan for was how we would integrate
this Space Vehicle Thermal Simulator. If you can imagine these 20-foot
harnesses, about four inches in diameter, having to be spread out
and fit through a cage. You have to kind of fly this thing in, because
OTIS sits there, and the harnesses have to be fed through this thing
that you fly in, and then ultimately anchor. We had to figure out
how we were going to do that. We had a lot of issues with how we were
going to route those harnesses, how they were going to get terminated
to the connector panels. I went in. I really wanted to get my eyes
on exactly what the routing challenges were. I was in there quite a bit.
I was also in the chamber when we were going to route our harnessing
out the chase. A chase is an area cooled to 100 Kelvin. We had to fit
everything through there—diodes, all of our flight harnesses
that would be driven during the test. That chase was too small, so
we had to redesign it. There were instances like that where you have
to go and get your eyes on it and really see what the team’s facing.
We had a case where our photogrammetry system—if you can imagine
a windmill-like system with a camera on the end. You have to feed
out the hose that maintains pressure in a pressure vessel, and you
have to maintain vacuum. We had an instance where the hose was coming
off of its reel, much like a water hose at your house. If you have
a circular take-up system, if you go too fast, the hose wants to expand
out and will run off. We had an instance where our PG system early
on was doing that, and I needed to go take a look at that with them.
Ultimately we had to build a shroud around it to contain it, so if it
did want to push up and out it would be caught by the shroud. That
was a significant redesign.
I was always walking the floor every day, because we positioned a
lot of hardware test sets outside the chamber. It was a tremendous
amount of work for the team to get that integrated into the chamber,
and electrically it was a challenge. It was worth also seeing what
they were doing. We had a few electrical challenges but not many.
That must have been exciting, as an engineer, to be able to work on
some of those things. Because I imagine once you get to be a manager
you don’t get to do as many technical things.
You hit the nail on the head, yes. For good reason. I think it’s
hard to do both well, but I think being immersed into the technical
area does allow you to make some better decisions. I was fortunate
to have a really strong team that supported us. There was never a
case where I felt like, “Oh, I have to go in and make a decision.”
Because the team, across the board—Goddard, Johnson, all the
contractors—they were a very capable team. It was really just
helping them work through and understanding the problem so that we
could support our decisions and articulate that up to management and
the stakeholders. That was fun.
As you mentioned, early on I was doing a lot of engineering, when
I was leading the design and development of the data system. Seeing
it integrated and seeing all those interfaces integrated, it was a
little bit special. Here really capable folks [were] taking care of it.
Were you here when OTIS officially arrived here at JSC?
I was en route. I was here the next day, but my deputy, Juli [A.] Lander,
was here when it came in on STTARS [Space Telescope Transporter for
Air Road and Sea]. She was here to greet it, but I was here the next
day. We were staggering our schedules at that point in time.
It was an exciting time, that whole week. As soon as it arrived, there
was a buzz. There were probably 150 people that converged on Building
32. We were running two shifts with about 20 to 40, sometimes 50,
people in the clean room at a time.
It’s a rather small space for that many people.
It was tight; it was really tight. But there was work going on in the
chamber and there was work going on where OTIS was going to be on
its stand. There were two teams operating in parallel, two shifts,
six days a week working. Our original schedule, I think, was six weeks
to get it prepared. We hit some snags with integrating the Deployable
Tower Assembly offloader, this big tube. You had to hold it and
offload it so it wouldn’t see loads. We had a couple of issues
with that which slowed us down. I think we ended up taking a couple
months, eight weeks or so, to integrate. I can get you the exact timeline.
The first day when everyone converged it was mayhem. We all looked
around and we said, “Man, how are we going to manage all these
people?” They were all over the place in Building 32. They were
all around the chamber, they were up on the fifth floor, they were
all the way up on the seventh floor with—they call it the “hooch.”
It was a conditioned enclosure that was built for electronics up there.
We had expanded up into the 32 annex, that whole area down on the
first floor was filled. Every conference room was filled with people
just setting up, trying to get their computers running, and getting
organized. Then upstairs above the annex, where we had room for about
40 people, that was over capacity.
The first week was a real challenge. We got our legs under us within
a couple days, but the first day or two was very difficult. I think
a lot of folks were questioning, “Man, are we going to be able
to pull this off,” including myself. But it got a rhythm. The
team quickly started to get organized. They got their legs under them,
and then we were up and running within a couple days just pretty much
as planned. It was an exciting time though.
I can imagine. Waiting this whole time to see that movement and seeing
it move forward at that point. What are your memories of seeing your
equipment finally go into that chamber? As I understand it, the clearance
was extremely tight.
It was running on the rails; the rails run you all the way into the chamber.
The OTIS sits on something called the HOSS, which is the Hardpoint
Offloader Support Subsystem. That sits another 8 or 10 feet off [the
ground]. So by the time you stack OTIS up on top of the rails, on
top of the HOSS, and having to go through the chamber—I think
we only cleared it by about a foot, which is not much.
When you’re looking at it from the viewing angle, it looks
like it’s not going to make it, but when you’re up top
you can see it’s going to clear. All that planning came through.
A lot of tight tolerances had to be met, and they were. All of the
measurements, the modeling, a lot of very detailed measurements were
made of every aspect of the system to ensure that it would go together
properly. And it did.
Like I said, we had a couple of instances where we were working off
of drawings or models that weren’t as accurate, weren’t
as representative of the actual hardware. Usually when that was the
case we had problems. We had to do some real-time figuring out. When
you think about the number of interfaces, the amount of wire, and
the fact that you had to decouple everything dynamically, we couldn’t
have the facility moving OTIS. We had to have a really controlled
dynamic environment. All of that had to come together with, like I
said, 1,500 or more harnesses hanging off this thing. All of which
had to be splayed out, so that they wouldn’t be hard enough
to impart force. They had to be soft enough so that we decoupled.
Having it come together and being able to meet all these opposing
requirements—thermal competes with the mechanical, the optical
was competing with thermal stresses, the thermal gradients created
stresses. They were opposing. The way we wanted to route it, efficiently,
didn’t create a dynamics environment that was acceptable, so
we had to really route everything inefficiently, meaning use the largest
possible volume to route harnessing, so that we could get the
softest interaction between the environment and OTIS.
To me, the whole notion of seeing the 18 mirror segments be phased
and aligned so that they looked like one mirror was the peak moment
of all my experiences on the job. When they came together and aligned,
you could see what’s called the fringe. They showed fringe patterns:
when the segments weren’t aligned, each fringe pattern was contained
within 1 of the 18 segments. Then once you aligned all 18 of them,
the whole mirror looked like one mirror, and those fringe patterns
went across all the segments. That was a telling moment
like, “Wow!” You couldn’t have planned it better.
The team really worked through that well. All those tests were done
to verify that CoCOA would really be able to phase and align the mirrors.
The algorithms that they were using to change the prescription of
the mirrors, the wavefront sensing and control—Ball Aerospace
team and the Space Telescope Science Institute were doing the algorithm
and adjustments to the mirrors to phase them based on the measurements
that were being made and using the NIRCam [Near-Infrared Camera] as
the sensor as well, because that’s how we’re going to
do it on-orbit. It all came together.
We did find an interaction. Luckily we did the test. It goes to show
you—we can do a lot of analysis, we can do a lot of modeling,
but it’s hard to replace a good test. This was a great test.
We did see when we turned on our CoCOA—we opened up a shutter
and the blanketing material around the edge of the mirrors heated
up. When it heated up it loosened, it wasn’t as taut, and basically
that changed the prescription of the mirror. We learned that we had
an interaction. Ultimately, we learned that some of the blanketing
around the edges of the mirror was too tight. It was too taut and
needed to be loosened up so that a variation in temperature wouldn’t
ultimately impart forces into the telescope and change its prescription.
It was changing what’s called “wavefront error.”
It changed its wavefront error too significantly.
We were able to spend a few weeks after the test characterizing a
number of different effects. We had an effect with the CoCOA shutter,
then we had an effect with the ISIM electronics compartment heaters.
If you can imagine heaters turning on and off and expanding ever so
slightly and imparting forces into a mirror, we were detecting it,
and it was unacceptably large. We were able to decouple that. We found
that it was a ground effect.
We’re still doing the final adjustments to the frill as we speak.
This month and next month we’ll do the final measurements and
adjustments to the frill to make sure that the blanketing material
is adequately loose so that we don’t have any temperature interactions.
But yes, it was amazing. You wouldn’t have found that if you
didn’t run that test. That would have been a significant reduction
of performance. The science community probably could have lived with
it, but it would have been a setback.
You had planned for so many things, but had you planned for a hurricane?
We did. We did. We planned; we planned. Yes, we did. This is where
the Johnson and Goddard teams worked so closely together. We understood,
and Johnson understood every primary and redundant system. They had
a full failure mode effects and criticality analysis for the whole
facility, which systems were redundant; which systems could fail;
which systems, if failed, would result in damage to hardware; which
systems could fail, but it would just turn into a test efficiency
issue. We’d just lose time. We wouldn’t damage the hardware.
They knew all the systems in and out.
At the end of the day, some of the largest concerns were losing control
of temperature. Because as I mentioned, you generate gradients that
at those temperatures can really damage composites. We spent a lot
of time doing simulations, turning on and off primary and redundant systems.
We actually spent a fair amount of time putting in a huge backup system
for the helium system. We didn’t want to lose the helium skid,
so we had a backup on that. We put a backup on the roughing system
so that we could pump out the gas that might accumulate if we lost
the helium system. That was a huge upgrade later in the flow and wasn’t
really ready until a couple months prior to running our final test.
That was a system that ultimately we never had to use, but it was
there just in case we did lose the helium shroud and had to pump out
that gas load.
The one thing that we didn’t—we had had some significant
storms during our years leading up to the test, and the team had weathered them.
The team was staying locally, and they knew what the roads would be
like. There was one evening I think we got 12 inches of rain here.
The building leaked. It leaked on things, but we had covered everything.
We had planned for this. We knew the building, where its weaknesses
were. We had covered our hardware. We had channels, water routing
systems. If water did get in it would get routed. Basically it routed
out by gutters off and away from the test sets. But it’s the
planning, like I said.
The actual hurricane itself presented different challenges. It went
from a hardware safety concern to a human safety concern. I think
I left the Wednesday before Harvey hit. I had been here two or three
weeks. I stayed on the phone with Lee Feinberg and Carl [A.] Reis—I
have to say maybe two to three hours every day, throughout the whole
day, through the whole evening. We were just communicating on what
we should do.
The hardware was holding in there. We were leaking. There were a lot
of leaks in the building. Our guys were in hotels. If they were in
their hotel they were safe, but the Johnson team had a lot of real
issues. Their families were having to deal with the excessive amount
of water. But they were here. They were coming in, the Johnson team.
The Jacobs team came in and secured everything and were able to manage
the water. Then it turned into, “What’s the minimum team
required to keep the hardware safe?” Then even that became an
issue with moving folks in and out of Building 32. We had cots and
we had air mattresses. They were able to stay. There was food.
They were able to hunker down for an evening or two, but I think the
one thing that we didn’t plan on—we definitely didn’t
plan on having a lot of issues getting folks safely to and from their
hotels. They had to do a little real-time [planning]. They ended up
gathering up all the folks who had large trucks, had better clearance.
They started shuttling folks, not having them out on their own individually.
The fact that we were able to focus on that and not have to worry
about the various systems that were supporting the hardware—because
we really did understand the system well, both the hardware and the
facility and the chamber. We had a pretty good understanding of how
that would behave and what the failure modes were. It let the team
focus on human safety. That became paramount for a couple of days, really.
Testing cut back to basically what you needed. You didn’t run
any tests that required anybody in addition to what it took to keep
the hardware safe. Say [to] keep the hardware safe took 12 people.
Well you would run some testing with those 12 people that was minimal,
but wouldn’t require any additional folks, and wouldn’t
put the hardware in a state that would allow it to be more vulnerable
than it was.
Lee and Carl were our test directors during that time. They just had
to make a lot of decisions over a period of about three or four days,
and they made good decisions. They were amazing. They made great decisions.
How soon did you get back down here?
As soon as I could. I chartered a plane. We chartered a plane and brought
like 24 folks down immediately, as soon as we could. I think Ball
Aerospace got in with their plane a day or two before us. We were
trying to get a NASA plane. We couldn’t get one that was large
enough, so we ended up just chartering one, flying in.
It was great. The team that was here was exhausted. They really were
happy to see fresh faces and go home to their families. It was exciting.
It was very stressful. I think there were points—and I know
this because I was down at [NASA] Headquarters [Washington, DC]. We
were doing our Senior Executive [Service] quarterly [meeting] while
we were struggling with the water thing, the LN2 [liquid nitrogen].
We had an issue. We needed liquid nitrogen, a lot of it, like three
or four trucks a day, to keep things going. There weren’t any
trucks. Once human safety was managed—the next focus was getting
LN2 replenished. I remember reporting, because that day we reported
to [NASA Acting Administrator] Robert [M.] Lightfoot [Jr.].
We were telling him how many gallons we had left. “What’s
the decision point for whether or not we start warming up?”
Which would have been a major setback in the timeline, and quite frankly
I’m not sure what would have happened if we really had to warm
up due to lack of LN2. We had a plan in place. We would have started
warming up if we ran out of LN2, and we knew where that was.
We were basically trying to manage the test. My boss and others were
actually having some discussions amongst themselves about, “Do
we continue this test? Do we really continue? Look at everything going
on in the community, look at the hardship, and we’re running
a test.” The problem was to stop the test it took 30 days to
warm up safely, so there was no easy answer.
We had to continue the test, because we couldn’t stop. It required
just as many resources to stop as it did to continue. Luckily, we
continued. I really do wonder if we stopped, given all that was going
on, if we really would have picked it back up. I really don’t
know the answer to that, but I know it would have been seriously discussed
if we had warmed up.
Were there specific lessons learned that you think other people should
be aware of for future testing?
Yes, and we’ll write up a lot of these, too. The more time you spend
preparing and simulating and working with the real hardware, you are
going to uncover a lot of things that you wouldn’t otherwise.
I think we found failure modes in our hardware in the facility that
we would have never found if we didn’t run real tests with real
hardware. We just couldn’t simulate it. We couldn’t model
it. You have to actually operate the hardware. I think that was our
biggest lesson learned.
The lesson learned is that we probably could have done more. There’s
always more you can do, but I do think that the amount we did was
adequate to survive a lot of the off-nominal. When things are going
well it’s somewhat easy. It’s when they go off-nominal
where all that planning and—we did some simulations. We had
really good models of the thermal performance and thermal behavior.
We probably could have done a few more simulations and a little more
modeling, especially more off-nominal simulations.
I think the only other thing is our hardware simulators. Probably
if we could go back in time we would put a little more fidelity in
those. Not so much from a hardware safety perspective, but from a
test efficiency perspective we found that a lot of our hardware simulators
didn’t have the same fidelity as the hardware. When we would
run scripts they would run in our simulator, but they would get jammed
up on the flight hardware. We’d have to sort it out and it would
just waste time.
But having done that, we still were able to conduct a 90-day test
in 100 days. It wasn’t too terribly painful. We probably could
have had some efficiencies if we had better simulators. That will
be a lesson learned.
Did you feel like the ghost of Hubble [Space Telescope] was sort of
following you around as you were working on this test?
That question had been asked over and over again. “Is James Webb
going to suffer the same spherical aberration issue that Hubble suffered?”
I think we never thought that, from the fact that this end-to-end
test that we conducted here was something that hadn’t been done before.
All of our test equipment that was used to build the individual components
was not used to test the final integrated product, so we used independent
test equipment. It’s not like we used the same sensor at the
element level, subsystem level, and used that at the fully super-element
level so that if it was a bad answer it would always be bad. We had
independent test equipment used in this test that hadn’t been
used to build the pieces.
The combination of that end-to-end test and the fact that we had independent
hardware I think gave us all [the confidence in the hardware]. And
the fact that we had mirrors that we could change the figure. We could
change the figure and the wavefront error and rephase them, realign
them if there was any small perturbation due to temperature gradient
that we didn’t anticipate. We could possibly null it out.
The combination of all those things I think gave us confidence that
we shouldn’t have any issues like that. However, we didn’t
plan on the frill. The frill got us. That was something; that was
really a surprise. Like I said, without the test we wouldn’t
have caught that. Without catching it we probably would have had a
significant reduction in wavefront-error margin, and the science community
wouldn’t have been happy with that.
What do you think was your biggest challenge working on this test
here at Johnson? Or even before, if there was a big challenge?
I think at the end of the day it was just balancing the enormity of
it all and trying to balance the whole notion of what’s good
enough. Because of the complexity—the thermal, the optical,
electrical complexity, dynamics complexity—balancing what’s
good enough. It’s an old phrase, [better is the enemy of the
good] but it really does [make sense]. You literally could have tunneled
into hundreds and hundreds of areas of the architecture, and you would
never do the test. You would never do it. You would always find something
new to do that would reduce your risk even more.
You really had to balance getting answers, knowledge, certainty with
an acceptable level of risk. It’s not quite objective, it’s
a little bit of an art form. You have to depend on your team. You
really have to work with a lot of the significant contributors to
develop an opinion and move together. It’s not always consensus.
You don’t really always have everybody on the same page, but
you still have to go forward. Yes, I think that was it, just balancing it all.
Honestly, I was telling Lee the other day, I said, “You know
Lee, given where we are and given the IRB”—we’ve
had this Independent Review Board come down and really burrow into this
mission success thing—“I really question whether or not
we’d have been allowed to do the test.” I think given
everything we’re experiencing about mission success, we would
have been asked to probably do more. I don’t think we would
have had a better result, quite frankly, I don’t. I think we
may have had a little more certainty, but we were at diminishing returns
on buying down the risk.
I really do wonder whether or not we’d have been able to run
the test given the newer environment that is being asked of us. Although
you have to go with your gut eventually, it comes down to that. A
lot of it was you have all the data, you have a compelling technical
story, it’s just a question of, “Is it enough?”
I think we proved that it was.
We just have a couple of minutes, but you mentioned that sometimes
you wouldn’t come to a consensus on something, but of course
you had to move forward because you couldn’t just stay stagnant.
Can you talk about one of those issues where there was a lot of disagreement
perhaps about something you were going to do in the test?
I alluded to it—not repairing the pressure-tight enclosure on
the CoCOA. We never had consensus. Part of the Failure Review Board
wanted to remove all of the vacuum tubing and replace it with higher
quality, lower risk tubing. We had a lot of interfaces that went through
that, and the risk of damaging the CoCOA coupled with the fact that
I was willing to bet that the leak wouldn’t get worse—if
the leak had gotten worse it’d have been a different outcome.
That was an example.
I think on the telescope side, after the test we were not—and
my boss wasn’t on board with really spending the time to try
to fully isolate one of the couplings I mentioned, that IEC [ISIM
Electronics Compartment] heating. We had to do some things to the
hardware that were a little bit risky. We bought down the risk. We
understood the risk well enough. We executed. I was here for that
whole three-week exercise and having to convince him that it was worthwhile,
even though we were taking a small amount of risk.
When you work on flight hardware, there’s always some risk that
you’re going to incur. We took a small, managed amount of risk.
The result was that we were fully able to understand one of the couplings
of the IEC heating into the telescope. There was a case where we weren’t
all on the same page, but we were able to get through it.
Just one last question. NASA really focuses on the value and importance
of teams, but I wonder if looking back there was something that you
would say, “This is the most significant accomplishment that
I made toward the success of the test.” Is there one thing?
You’re talking about me individually?
Individually, yes, or a decision that you made.
I think being calm. Being calm, and being able to navigate through all
the different organizations and continue to try to build as much consensus
and confidence that we’re going in the right direction. Try
to just be the glue that holds all these different organizations together.
That was my job at the end of the day, was to just keep it all together,
keep it going, keep people focused.
I didn’t have to have all the ideas. I didn’t have to
have all the solutions. I had to understand enough to talk to everybody
about it so that we could piece it forward, but I think for me individually
it was just keeping everybody together and maintaining the schedule,
managing the budget, and then executing the test. I think we proved
as a team that we had enough of a balance to do that.
That was a pretty large contribution, to keep all that going. And
we pulled in teams from all over the world to do this. Having the
problems be addressed—no finger-pointing, keeping the finger-pointing
to zero, was key. We always were looking at trying to solve the problem
and never really assigning blame, because that’s just not productive.
Just having a history on the job—having been on the job for
I guess, at that point in time, 12 years—and knowing a lot about
the whole infrastructure that we were using to test and all the test
support systems helped me be able to quickly address issues as they arose.
Yes. I don’t want to keep you because I know you’ve got
your one-hour parking, but is there anything else?
Yes, I try to keep close to time when I can. Is there anything else
that you thought you might want to cover or we should know about the test?
I’m going to write a paper on the whole thing for next March, and there’s
a lot I’m going to put in that that I wasn’t able to cover here.
We saw first light. There’s a really complicated scheme to move
light. If you can imagine the observatory is moving. The light coming
in, you want to make sure that when you’re putting that light
on the detector it’s solid, that the observatory motion isn’t
moving your image on your detector. There’s a fine steering
mirror that’s nulling out the motion of the observatory so that
you track. You have to guide on a star, and as the observatory is
moving the star is not moving, but it’s looking like the star
is moving. This mirror is nulling out all the observatory motion.
We were able to prove that 16 times a second we were able to evaluate
this loop. The loop involved contributions from the Goddard ISIM team
and the Northrop Grumman spacecraft team and the Canadians. They basically
were imaging it, and Ball Aerospace had built the steering mirror
and the avionics. So all these teams came together.
This loop had to close [in] 64 milliseconds. You had to take an image
of the mirror, of the star, centroid it, send it over to the attitude
control system, and then send a command back to the mirror to recenter,
to stay center, and see all that come together. You could only do
it cold, because the detectors don’t see light when they’re
warm. The first time that happened where they were able to track,
guide. Again, the guiding mode.
First light we put through here at temperature, and it worked. That
just blew me away, too. It was a matter of we have to be able to track,
or we can’t do the science. That was so important that we were
able to do that. All those organizations came together. Individually
we’re testing the pieces, and you never had the whole loop.
The loop came together here, and it worked the first time.
There’s more. There’s things like that, but yes.
Lots of high-fiving that day, or hugs?
Begoña Vila [María Begoña Vila Costas],
she was the lead for the closed-loop test. She’s one of the
Canadians that supports us, and she’s part of the ISIM team.
She’s world-renowned. She’s worth interviewing, by the way.
Is she? Would I find her in global [NASA email contacts]?
I can get you her contact. She’s amazing.
Okay yes, we haven’t interviewed any women.
During the hurricane she stepped up, too, and took a leadership role during
that time. Was able to stabilize the team, get them organized. That
little bit of testing we did with the minimal staffing, she coordinated
all that. It was all her thing. She was there. She’s pretty
impressive. She’s worth spending a little bit of time with.
Absolutely, yes, we’ll have to do her over the phone, but yes,
that’d be great.
There’s a few things, yes.
Okay. Thank you so much for spending part of your morning with me,
I appreciate it.
I really enjoyed it. It was great to talk about it and relive all
the successes. Thanks for having me.
Yes, good luck out in California.