NASA Johnson Space Center
Oral History Project
Edited Oral History Transcript
Lee D. Feinberg
Interviewed by Jennifer Ross-Nazzal
Redondo Beach, California – 3 July 2018
The
questions in this transcript were asked during an oral history interview
with Lee Feinberg. Feinberg has amended the answers for clarification
purposes. As a result, this transcript does not exactly match the
audio recording.
Ross-Nazzal:
Today is July 3, 2018. This telephone interview with Lee Feinberg is being conducted for the JSC Oral History Project between the Johnson Space Center and Northrop Grumman [Corp.] in California. The interviewer
is Jennifer Ross-Nazzal. Thank you for joining me today. Tell me how
you became involved with the James Webb Space Telescope and the development
of the beryllium mirror.
Feinberg:
It started with my work on Hubble [Space Telescope]. I was hired by
NASA to work on the first Hubble servicing mission. I was an optical
engineer who helped with the optical correction on Hubble. I then
worked on the second Hubble servicing mission as the instrument manager
for STIS [Space Telescope Imaging Spectrograph]. After Hubble, I spent
a few years as assistant chief for technology in the Instrument Systems
Division at Goddard [Space Flight Center, Greenbelt, Maryland] and
also worked at a startup company for a year. While at the startup
company, 9/11 [terrorist attacks of September 11, 2001] happened and
I realized that the company probably wasn’t going to do well
in the new environment and it was time to go back to NASA.
I called Bernie [Bernard D.] Seery, who was then the project manager
of the Next Generation Space Telescope [later renamed James Webb],
and asked him about working on the new space telescope. He said, “I’ve
got a job for you.” I was thinking he would probably want me
to work on the instruments, but he actually wanted me to be the telescope
manager. They call it Optical Telescope Element manager.
I went to work for NASA to be telescope manager in very late ’01,
early ’02. At the time, the prime contractor procurement was
underway. The prime contractor got selected a few months after I started.
Part of the plan was that there would be this process called the OTE
[Optical Telescope Element] Optics Review, or OOR, where we would
select the mirror material for Webb.
It was a very intense year where we put together a team to select
the mirror material. My counterpart from TRW—now Northrop Grumman—Scott
[C.] Texter, and I co-chaired this committee called the Mirror Recommendation
Board, which culminated in this review called the OTE Optics Review
where we selected the mirror material.
So that’s how we selected beryllium. It was through this team
that we put together, which analyzed the technical and the programmatic
considerations. We really had narrowed it down to a beryllium and
a glass option and did cryogenic testing of technology development
mirrors to characterize both. And then, from all of that, made a decision
to select beryllium primarily based on the technical material property
advantages of beryllium at cryogenic temperatures.
With respect to the final system test, when we first brought the prime
contractor on we had this big kickoff meeting with them. We went through
the whole observatory design. We broke into splinter sessions, and
there was one on the telescope. When they went through the design,
I looked at the telescope and looked at the options for the mirrors,
and it all made a lot of sense to me. It looked pretty doable. There
were a few things I knew we probably would want to address, but then
they went through how they were going to test it. That’s really
where it dawned on me, “the hardest part of this is going to
be figuring out how to test it,” because the testing just looked
really complicated and difficult.
My background was optical testing. When I got hired by NASA, in part
it was because I had experience doing interferometry, which is a way
you test optical mirrors and lenses. I had worked as a student for
a couple years at a laser fusion facility at the University of Rochester
[New York] where I went to school, so I’d actually done hands-on
optical testing and knew a lot about optical testing.
On Hubble, I literally tested some of the mirrors that we used to
correct the Hubble problem, the COSTAR [Corrective Optics Space Telescope
Axial Replacement] mirrors. I went out and did my own tests at the
facility where they were making the mirrors, etc., but this was a
whole different scale. Of course the cryogenic aspect was a big complexity
factor, but the size of it and just the number of different tests
you would need to do at these cryogenic temperatures [was much larger
in scope]. I knew right away that that was going to be one of our
hardest problems. Testing probably was the most challenging aspect
of building the telescope. But in terms of getting involved, that’s
how I got involved in it all and went from there.
Ross-Nazzal:
You’ve said that testing the telescope was the “hardest
engineering problem you had on Webb.” Please explain why that
was the case.
Feinberg:
To test the telescope, you have to cool down the equipment to 50 degrees Kelvin, so around minus 370 degrees Fahrenheit. It's an optical telescope with segmented mirrors. We knew that there were some optical tests that we needed to perform at cryogenic temperature, these very cold temperatures. There was just no way around that.
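[For reference, the temperature conversion behind that figure is straightforward. A minimal sketch in Python, illustrative only and not part of any test software:

```python
# Kelvin-to-Fahrenheit conversion for the Webb cryo test temperature.
# Standard formula: F = K * 9/5 - 459.67.

def kelvin_to_fahrenheit(kelvin: float) -> float:
    return kelvin * 9.0 / 5.0 - 459.67

# The 50 K test temperature works out to roughly -370 degrees F.
print(round(kelvin_to_fahrenheit(50.0), 2))  # -369.67
```
]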
It is also true that the early test architecture was very ambitious
and was going to try and do more than just test the minimum set of
things that we had to do. There were some comments in the press that
the team was very conservative, based on the Hubble experience. While
there was a little conservatism early on, it did not take us long
to realize that this was such a complicated test that we really had
to focus on the things that we really had to test at the cryogenic
temperatures.
For example, the alignments of the system. As the system cools down,
you actually get distortions. Of course you’re also in one G
[gravity], you’re under gravity load. We knew we had to model
the effects of gravity and had to understand the system at these cold
temperatures. We also had to demonstrate that the workmanship was right: that in the process of building it you didn't do something that could affect the performance in space.
The original test architecture that the prime contractor team had
was based on some of their experience from the Chandra X-ray Observatory.
The architecture was something done on a lot of telescopes—you
build a big tower, and you basically test the telescope in a tower.
But as we started to analyze it, it got pretty complicated.
First, they had this huge tower planned that was going to have the
telescope cup-down, so the primary mirror would be facing down. This
meant that you not only had to rotate the telescope and position it
in the chamber cup-down, but there almost had to be this elevator
thing that lifted the telescope up. That was really complicated because
now this whole tower had to cool down to 50 degrees Kelvin. Kodak [Remote
Sensing Systems], now called Harris Corporation, was the team in charge
of the integration and testing. They started modeling this tower,
and a huge number of technical concerns came up about what would happen
if the tower cooled down. “How long would it take to cool? How
much liquid nitrogen would it take to cool it down?” Their first
calculation was the tower was going to weigh about 600,000 pounds,
and we were worried about the gradients in the tower. That was just
one concern.
The other concern was optical complexity. The original vision was
to use six auto-collimating flats that were at the bottom of the chamber
and would rotate around and then you would do what’s called
interferometry, which is a technique that we used on Hubble to test
its mirrors and other systems.
You would have to put all of that data together in complex analysis in order to figure out how the system performed, which was complicated by the fact that we're at these very cold temperatures. All
the equipment had to work at these temperatures, but also we knew
inside of these big cryo vacuum chambers that there’s a lot
of vibration, plus the effects of gravity. When you put all that together,
it just became clear that this was going to be a really difficult
test. Both because of the tower and because of the complexity of the
optical architecture, we spent several years re-architecting the test.
The one that I was particularly instrumental in was going to what
was called the cup-up configuration and getting rid of the tower.
We literally realized one day that we didn’t really need that
humongous heavy tower. If the telescope was cup-up, it could simply be rolled into the chamber. The earlier version of this
had us actually put it on an isolated table and use what’s called
a laser truss. A later version that was a slight re-architecture was
to actually use some rods that would hang the telescope in that cup-up
configuration. These ideas got rid of the 600,000-pound tower. That
was a big re-architecture that made a huge difference on how we were
going to test it, but that was really geared towards the thermal challenges.
The other thing that happened around that time was that the original facility we were going to use was actually Plum Brook [Station], a different NASA Center, associated with Glenn Research Center in [Cleveland], Ohio. We had several concerns with Plum Brook, ranging from transportation challenges to get there—it was far from an airport, which complicates things—to the lack of a large infrastructure of expertise.
We decided to assess other options to make sure Plum Brook made the most sense from a total lifecycle and risk perspective, and a few members of our team went and visited the Johnson Space Center's facility. Ultimately we concluded the JSC facility would be the way to go, but we still needed several upgrades. We needed a new clean room. We needed
a new helium shroud and improvements to the liquid nitrogen system.
With the cup-up architecture a lot of the key optical hardware would
reside on the top of the chamber and actually not even be at cryogenic
temperature. It’d be in an ambient system that was somewhat
isolated from the cryo, but we had to make modifications to the top
of the chamber.
One thing you do get concerned about when you have a mirror that’s
face up is contamination and particles. That was one of the reasons
that the system was cup-down from the beginning. So we did re-budget
our contamination, and there was an independent Science Assessment
Team (SAT) that endorsed this new architecture and the slightly elevated
contamination budgets.
Another key change was we added in pathfinder tests. Pathfinder was a subscale version of the telescope with two primary mirror segments
(spares) and a secondary mirror spare. We added in one test of pathfinder
that was just thermal in nature and augmented another planned test so that anything we were going to do during the final test was checked out during one of the three pathfinder tests. Plus, we
also added an additional checkout of the chamber prior to any of the
pathfinder tests. All of these changes were really critical to our
success and were handled through replans and risk management. We also
re-architected the optical portion of the test along the way to rely more on methods like photogrammetry and less on analysis-dependent methods.
Also, along the way NASA took over responsibility for what’s
called OTIS. OTIS is the combination of the OTE (the Optical Telescope
Element) and the Integrated Science Instrument Module (ISIM). That’s
what we called OTIS. OTIS was actually what we wound up testing at
the JSC facility, but prior to that the prime contractor team was
responsible for all that. NASA, realizing that the test included the
ISIM—which already was being built under NASA’s direct
responsibility, and realizing the facility was being government-furnished,
and realizing there was a lot of complicated interfaces—took
over direct responsibility for the OTIS including the test of it,
which meant the test directorship, at that time.
So by about the 2011 replan we had completed the re-architectures
and added in these extra tests that we really needed to check out
the system. We had been very systematic at that point in terms of
looking at things like crosschecks, making sure that every optical
parameter was going to be tested at least somewhere and that all the
appropriate crosschecks were there.
We ultimately tested what we cared about, the things that were important,
and we made sure we had crosschecks of important parameters. There
always is more you could do but we were still very cost-aware and
tried to really just do what we had to do, but we also wanted to make
sure that when we got it in space it was going to work. In architecting
things there were a lot of lessons learned from the Hubble experience
that we really tried to take to heart. In fact, I even gave a talk
and wrote a paper called, “Applying the HST [Hubble Space Telescope]
Lessons Learned to Webb.” It was a very systematic look. There
was actually this report done by the [Lew] Allen [Jr.] Commission
[The Hubble Space Telescope Optical Systems Failure Report, 1990]
looking at what led to the Hubble failure. We took those items that
they listed in the report, and we made sure that we were addressing
them through our test program. So, we tried to take the lessons learned
to heart.
Early on we probably had an overly complex test, but we were able
to re-architect the test to focus on what we cared about, while still
addressing the lessons learned. Things like making sure that if you
do a crosscheck, you have the appropriate test criteria going into
the test. If you get a result that’s unexpected, you can’t
just brush it off, which is kind of what happened in the Hubble case.
On the Hubble primary mirror, they actually had two different tests with two different null lenses. The problem was that they believed the wrong data and didn't go back and reconcile the discrepant data—the second test they wound up dismissing as a crosscheck with a test device that they just felt they couldn't trust. Instead of saying, "Okay, well, we have discrepant data,
we need to go back and understand it,” they just said, “We’re
just going to believe the one test device that we think should work,”
which was what’s called a reflective null system.
A little more history on Hubble, because it’s really relevant
to Webb. They had what’s called a reflective null that uses
only reflection and a refractive null that uses lenses. Typically
a reflective null would be more accurate, so they felt, “Well,
we’re going to believe the reflective null results because it
should be more accurate.” The refractive null though was telling
them that they were way off. I mean, way beyond the level of accuracy
that that test should have done. Had they had good test criteria and
had they insisted on understanding any discrepant test data, they
would have gone back and checked and realized that it was actually
the refractive null lens that was made right.
In our case, on Webb, what we do is we rely a lot on crosschecks,
but we always have test criteria. If anything doesn’t meet those
test criteria, is discrepant, we’re very open about it and we
insist on understanding it. There were over 30 different optical tests
that we did. There were also a number of thermal tests that we had
to do to understand the thermal performance when we did the JSC tests.
I would say that overall the testing went incredibly well. I think
because we did all those pathfinder tests—every pathfinder test
had its issues when we did these practice tests. But they were incredibly
good at training the team, and they were also good at pointing out
some of the challenges that we had with the tests.
A couple of examples happened in the pathfinder program. Thinking back, there were two really critical operational moments for us. The first was during the first pathfinder test. We
called it OGSE-1, the Optical Ground Support Equipment test. I mentioned
that we had these rods that hold the telescope. The rods were connected
to these isolators, and the whole thing actually hung. It hung in
such a way that it would be isolated from vibration. There were load
cells that would tell you that you had the right load; as you cooled down, that load should stay at a very specific amount, or else you had some sort of short develop. When we did the first test, we saw the load going up, and it continued to go up as we cooled. We ultimately determined we had developed a mechanical short and were able to deal with it quickly, adjusting weights on the isolator system so we could continue the test.
During the second pathfinder test, we put the flight Aft Optics System
on. So this one we actually had flight hardware, which included the
tertiary mirror and the fine steering mirror. The rest of it was the
pathfinder, which was non-flight; it was flight spares and non-flight
hardware. It was the first time we were able to use our infrared fiber
sources where we could look through the whole telescope. As we did,
we could see vibration was higher than we had predicted and realized
immediately that our method for doing some of our image analysis—we
have a method called phase retrieval—was not necessarily going
to work the way we had hoped, because of the amount of vibration we
were still getting.
Actually one of the engineers from Ball Aerospace, [J.] Scott Knight,
had this idea of using what’s called a Hartmann test, which
is different than the method we were using, the phase retrieval method,
and is less vibration-sensitive. I think within 24 hours, in the middle
of the test, we came up with a way to test out a Hartmann-type test,
and quickly went and tried that out during the second pathfinder test.
This was the last test that was intended to be optical.
It turned out that during the third test, which was intended only to be a thermal test, we wound up doing some optical testing. In particular
we did interferometry on the primary mirror, center-of-curvature interferometry,
using the multi-wavelength interferometer that we had at the top of
the chamber. There was a separate set of challenges associated with that, some of which had to do with vibration, some of which had to
do with just understanding how to do multi-wavelength interferometry
on a segmented system in the presence of vibration.
That was really, at the time, what we considered to be some of the
most challenging aspects of the testing. It was the combination of
optics and vibration. The other aspects of the tests—the thermal
aspects seemed to go very well, the photogrammetry we added late also
worked very well. We always had some small nagging issues, and we
definitely had learned lessons from pathfinder.
I think the training of the teams during those pathfinder tests was incredibly important. All of the different groups—the thermal
and cryogenic groups, the operations teams, the facility team, the
JSC team itself—which I haven’t talked much about, but
they ran the facility, they had their own facility test director—all
of that got rehearsed multiple times. We went through contingencies, which was all-important leading up to the final test, the run-for-the-money test, which is probably what I should talk about next.
Ross-Nazzal:
That’s fine. I wonder though if you would talk about some of
the optical goals for the OGSE-1 and -2. You mentioned how important
they were, but what was it you were looking for?
Feinberg:
At the top level we had some very simple goals. One was that every
piece of optical test hardware—we called it ground support equipment—we
wanted to be able to check out during at least one of the tests. That
was goal number one.
Goal number two was we wanted to exercise every test. We wanted to
try every test, even though we didn’t have the full observatory.
We had things in the ground support equipment and the pathfinder that
were simulating the instruments. They weren’t quite like what
it would be during the flight final tests, but we were able to really
get through. We really had a matrix of every test, and we said, “Have
we tried every test, and what have we learned? Here’s what the
performance is that we need during the flight test. What performance
were we able to achieve?” We were trying to be very systematic,
trying every test, and checking out each piece of equipment.
In some cases we were learning things like the multiwave interferometer,
which was at the top of the chamber—we were learning about the
lasers in that thing. It had these very sensitive lasers. Some of
them failed, and we had to go work with the vendor to get new lasers.
It was a very unique piece of test equipment. I probably should say
a lot of this optical equipment I’m mentioning was all custom-built
just for the test. We had all sorts of different companies and vendors.
For example, the multi-wave interferometer was built by this company
4D Technology, which is in Tucson, Arizona. Started by one of the
fathers or grandfathers of electronic optical interferometry, Dr.
[James C.] Wyant. We had a team from Johns Hopkins [University, Baltimore,
Maryland] that built the cryo photogrammetry cameras because of experience
that they had in related things in the past.
The Harris team themselves made the auto-collimating flats, which
were three one-and-a-half-meter borosilicate mirrors. We had to understand
their performance at these cold temperatures, which meant you had
to figure out how to test them at cold temperatures. Testing flats
is actually very complicated. Those are just examples.
The other one was the AOS Source Plate Assembly (the ASPA), which
included over 50 optical infrared fibers. Turns out that as fibers
go cold their performance changes. You have to understand that. They
all had to be positioned in very accurate positions. That all was
built by Ball Aerospace. Ball and Harris were working with the Goddard team—probably at least 15 or 20 optical engineers from Goddard were working on this—and that is what made up the optical team.
Our goal was to exercise all the equipment, exercise all the tests.
Get everybody who was going to be involved in testing there to be
trained. During the first test, we were only able to test the primary
mirror essentially. So we would only be using the interferometer at
the top of the chamber and the photogrammetry.
During the second test, when we had the Aft Optics System and were
getting light through the whole telescope essentially, we brought
our whole data analysis team out to Johnson. In fact our analysis
facility wasn’t even in Building 32, which is where we had our
control room. It was across the way. They actually gave us this large
space that was in the mission ops [operations] building. It turned
out it was literally across the hall from where the Apollo Control
Center was, so you would walk by the original Apollo Control Center.
When we started going out there 15 years ago, in the very beginning, the door would be open sometimes. You'd look in and be amazed. That's where we put our analysis team. During the
actual test, during some of the rehearsals, we would have 15 or 20
people in there. They were just getting data and analyzing data. That
was a different team than the team that was executing the test, that
was going through the test procedures and running the ground support
equipment and doing quick-look analysis of data and saying, “Okay,
we’re ready to move on to the next step.”
Those people were actually in the main control room in Building 32,
which was the same control room that we had our thermal people in,
and our ops people in, and our OTIS test director. But the analysis
team was across the street. All of that was exercised during these
rehearsals, during these pathfinder tests. There were issues with
data and communications and all that that we had to work out.
We did it to the extent that we could. Obviously we were limited somewhat
by what the pathfinder was. The pathfinder did not have 18 segments.
It only had two primary mirror segments that were spares. One wasn’t
even coated. It did have a secondary mirror spare on it as well, again
not coated. It kind of acted like a telescope. Therefore we were able
to get through many of our steps, and we improved the test procedures
as a result of that where we could. Not having the real flight science
instruments there was probably the biggest thing that we couldn’t
rehearse.
We built something called the Beam Image Analyzer. Actually Genesis
[Engineering Solutions] and Ball helped us with that. Basically it
was an infrared detector that was on a two-dimensional stage that
simulated the science instruments during the second pathfinder test.
We could put it in position as though it was one of the instruments,
so that was very helpful, and that was a challenge just to get built
and use it. Using that allowed us to simulate optically what would
be going on during the final test, but we didn’t have the actual
science instruments. Those instruments are really complicated. There’s
four science instruments, and they each have their own detailed operations.
Three of them are contributed internationally—two from Europe,
one from Canada.
One of the big things during the final test was the fact that we actually
had the flight instruments there, the flight ISIM. That was a big
learning curve. I’d say the biggest delta, the biggest change,
from our pathfinder test to the flight test was having the actual
instruments there, which involved a huge international team that had
to support it, and a lot of operations aspects of things.
You had to protect those science instruments. Safety was a big concern,
especially as things cooled down. A lot of the instrument side of
things was learned though on the ISIM tests. ISIM itself, the Integrated
Science Instrument Module, they did their own level of testing. They
actually had three separate cryogenic tests. I think two of them had
the full complement of instruments.
They had their own, if you will, rehearsals. They were a pretty capable
team, having been through 200-ish cryogenic test days, living through
snowstorms and other things. They were a pretty battle-tested team,
but they weren’t able really to rehearse all that as part of
our pathfinder tests. Our pathfinder tests made our optical teams
and our test teams very battle-tested. Then it was those two different
groups that came together for the final OTIS test.
But other than where you actually needed the final flight instruments,
we were really able to rehearse everything and practice everything
and test out all the equipment and make changes to things that didn’t
work right. That was the whole intent of that whole very elaborate
test rehearsal campaign.
Ross-Nazzal:
Did you relocate to Houston during the time of the test, or were you
going back and forth between Goddard and JSC?
Feinberg:
The pathfinder tests were shorter because you could cool down and warm up. I think I came out for a few weeks for one of them, and a few weeks for the other. I typically would be there for the entire optical portion of those tests.
The final OTIS test originally was planned to be 93 days, but it wound up being a 100-day test. It turns out when you
work for the government it’s complicated to relocate formally,
but what I wound up doing—you can come out for I think it’s
like 28 days at a time, and so I came out for 28 days the first time.
Then I went home for 3 days and intended to come out for another 28
days, but then we had Hurricane Harvey, which I’m sure we’ll
talk more about.
I was able to get them to extend it by several weeks. I think I was
out there for almost two and a half months and only had to come home
once for a few days. So I wasn’t relocated, but I spent a good
chunk of two and a half, three months out in Houston and would come
out for large chunks of time before that.
Actually, during the test itself my wife was even able to join me
for almost two weeks. Because she works for NASA at NASA Headquarters
[Washington, DC] in the Legal Office, she was able to do a short stint
in Houston. That made it a little easier for me personally.
I felt the need to be there—because the final test was really
important. That’s where we had the flight hardware there, all
the flight hardware. There was a lot at stake, particularly from the
point of view of keeping the hardware safe. I don’t think people
appreciate that. You always think about a test like this being all
about making sure the equipment works so that when you’re in
space you have confidence it works. It turns out that’s the
second most important thing.
The most important thing is safety. Obviously first is personnel safety,
but second is flight hardware safety. With a test of this complexity—I mentioned the issue we had with the mechanical short, and we of course had the issues with the storms—the hardware safety issues were the ones that were the most intense.
We had two OTIS lead test directors, Carl [A.] Reis, who lives in
Houston, and myself. We both felt the need to be there almost nonstop.
There was so much at stake in such a short amount of time that we
just felt the need to constantly be there when we could. I didn’t
quite relocate, but I spent quite a bit of time there, especially
during the final test.
Ross-Nazzal:
Tell me about being a test director for the optical test of the OTIS.
Were you there when Webb arrived here in Houston?
Feinberg:
Probably not there when it arrived from a shipment point of view.
Prior to the test itself there was an ambient test phase. I personally
did not spend a lot of time in Houston during that phase, in part
because we had some really great people who led that effort that we
had a lot of confidence in, who had experience doing a lot of that.
There was a team from JSC that they were working directly with.
I would come out for certain key moments. I came out a lot for the
test planning, etc. Also, as a practical matter, the team that did the ambient activities—shipping it, getting it out of the shipping container, and checking it out—was often different people. Those people worked really, really
hard for two or three months getting ready. Then the test would start,
and it was a different team that needed to be ready to go 24 [hours
a day]/7 [days a week], so I was really part of the test team. There
isn’t a ton of optical work that’s done. There’s
some alignments, and we kept track of that. There were some people
from the Harris team who literally relocated or lived in Houston,
and they stayed there and did some of the ambient activities—ambient meaning being in the clean room.
We spent a lot of time, as we got closer, really checking on things
like the alignments. We wanted to know the telescope was in the right
position before we started the test. Even when we went to vacuum,
before we cooled down, there were a lot of checkouts that we did of
the interferometry. We didn't want to cool down and then find out that something was wrong and have to say, "Oh, we've got to warm back up." There was a lot of intense effort there.
I did go down, though, once we started what we call the functional
testing of the telescope and of the instruments. We had not the flight
ground system, but a simulation of the flight ground system, including
a simulator of the spacecraft. Once we started running everything
with that at ambient, before we even went to vacuum, and then when
we went to vacuum—when we did those things, I was down for that.
I came down right at the beginning of those tests.
Those tests were really almost rehearsals for the tests that we would
be doing at cryogenic temperatures, so it was important that we had
a lot of the same team, the test directors there. We started the test
directors taking shifts. The way it worked was we had probably over
a dozen different test directors that would take turns, because we
were running three shifts a day. Often the test director would be
on shift for four, five days at a time. The two lead test directors,
who were Carl and myself, pretty much made sure that one of the
two of us was there at least every day, and we would overlap with
the test director. We would have a lead test director. We sometimes
took shifts as the test director, so we’d be both the lead and
the test director. As we started getting into it, we backed off from
taking too many shifts. We were more focused on the bigger items,
a lot having to do with having to replan things and deal with storms.
The other folks, as they got more experienced at test directing, were
able to do that minute-by-minute directing, versus looking at the
bigger picture.
Literally every day from the OTIS side we had a meeting, and we would
invite the facility team too, so they would join us for this. Every
day at 2:30 we would have what we called a test configuration board.
That was something we instituted only for the final test. It was a
lesson learned from the pathfinder test, and it turned out to be maybe
the most important change we made.
Literally every day at 2:30 all the critical disciplines would be
part of this meeting where we would go through where we were in the
test, what changes needed to be made. We would also go through any
what we call problem reports (PRs) that had come up and that required
us to disposition or to do additional tests. Really every technical
issue that came up would get heard at that meeting, and any changes
we made to the test plan would be vetted at that meeting. That was
a really critical focal point. The two lead test directors, Carl and
myself, were really in charge of that daily meeting, and of all the
planning that went into coming up with the agendas, talking with the
people before the meetings to make sure that we were addressing the
right issues, and then after the meetings to make sure that the changes
got implemented.
That’s the way we managed the big picture test, because literally
we were testing 24/7, and there was never a moment where we weren’t
doing anything. There were often times when we were doing two or three
different tests simultaneously and having to make changes because
something didn’t work right. Then you would have to make a decision
whether to move on or do something different and how. There were all
sorts of constraints on, “You can’t do this while you’re
doing that.” A lot of that had to be aired and figured out as
we went along.
In addition to that daily meeting, which was a big focal point for
us, we had—and this was more for me. Carl, his background was
the facility itself and the cryo aspects of it. My background was
the optics and the flight hardware, being the telescope manager. So
I spent a lot of time with the optical team. Every night I think at
5:30 we would have what’s called a data analysis meeting. Every
day different analysts would report on the optical data they were
getting, and it would allow us a chance to look at how the results
were looking. That was a really important clearinghouse for issues
that came up, because it turns out that we did have some optical issues
that came up.
The most significant had to do with the stability of the
primary mirror, resulting from some of the insulation around the primary
mirror getting too taut as everything cooled down. Turns out that
the insulation shrank enough that it became taut, so as we changed
temperature just slightly the primary mirror was changing its shape.
There were early indications of that that were discussed at that meeting,
and I think that was really important because it gave us an opportunity
to then devise additional tests to understand the data. Those additional
tests helped us better understand exactly the source. It actually
turned out there was more than one issue that was going on.
One of them was very much associated with something that was not a
flight configuration issue: the test hardware configuration was
creating a mechanical short. Not one that was a big vibration
concern or a concern to safety, but it was just causing some cyclic
behavior that we were able to understand by some additional tests
we ran.
There was another issue, though, that did turn out to be the insulation
around the outside of the primary mirror, this thing we called the
frill, which served as a closeout insulation. That became taut, and
that’s something that after the test we actually went and changed.
We had to loosen up certain locations as a result of what we learned
during the test. It was probably the most important thing we learned
during the test from an optical point of view and maybe overall, because
I can’t think of anything else that we learned where it resulted
in us making flight hardware changes. We certainly learned some important
things about other things, including the NIRCam [Near Infrared Camera]
instrument had some things we learned about that were important to
learn there. But this was a thing that actually resulted in a hardware
change, and we were only able to understand it and get ahead of it
because we were doing these daily data analysis reviews. That was
the end of the day every night. We’d sometimes go for two or
three hours; people who had already been on shift would stick around
for a couple of extra hours doing that. It made for some very
long days, but I think it was critical to have that.
We had some really good people running the optical team. Kim [Kimberly
I.] Mehalick was really helpful, as were the entire optical teams
from Goddard, from Ball, and from Harris. We also had the Northrop teams
who operated the telescope. There were a number of tests that we ran
on the telescope as well: functional tests on the heaters and on the
latches and understanding the telemetry that we were getting.
There was a large team from Northrop Grumman that was responsible
for the telescope operations. They were not reporting at the data
analysis meeting, which was more optical, but they also were getting
results and sometimes reporting at our 2:30 meeting, our test configuration
board meeting. It was really multiple disciplines.
The thermal team also was getting data they were analyzing constantly.
It did turn out on the thermal side that there was a big surprise
during the cooldown. As we were cooling down there was a location
that was at the interface of the warm electronics to the backplane,
which is the composite structure, that ran warmer than we expected.
The gradient was higher than we had planned for. We had to do some
quick analysis during the cooldown with the mechanical team to make
sure it was safe, and we concluded it was. Over time we were able
to better understand that the 20, 25 Kelvin higher-than-expected temperature
in a very local region at the interface was not a problem. But that
was another critical thing that happened that we were tracking on
a daily basis at these daily meetings, working with the thermal team.
There was also the stability issue, as I mentioned. As that data came up, we
were injecting new tests to better understand that. That’s a
lot of the operational side of what the test directors were doing—what
I was doing, what Carl was doing, and what we were interacting with
the JSC facility team on as we changed things. Mostly the interactions
with the JSC team were during cooldown and warm-up, because once you
got stable there was about a 30-day stable period.
In general their jobs were a little quieter then, because you just
had to be stable. The exception being when we had a storm come, but
for the most part a lot of the interactions with the facility team
occurred during cooldown and warm-up. We were trying to manage the
liquid nitrogen systems, the liquid helium systems, the pressures
in the chamber, and all those kind of things.
Getting back to my earlier comment, it was all about safety and the
flight hardware safety. The big safety concerns were during cooldown
and warm-up, cooldown because of the gradients, and warm-up because
of the contamination issues and gradients. That’s a lot of what
we were worrying about during those periods, whereas during the actual
stable period where we were doing the testing it was more focused
on the optical results and thermal results.
Ross-Nazzal:
You mentioned the importance of safety. It’s my understanding
as soon as you really hit that cooldown period, that peak, that Harvey
started rolling in. Would you talk about the impact of the storm on
the test? Were you here at that point?
Feinberg:
Yes. That storm was just unbelievable, because ever since we started
working on this test, any time we’d ever have a review somebody
would mention the word hurricanes. It was almost a running joke about
hurricanes, as far as the fact that we thought we were well-prepared
for them but also thought the likelihood was just incredibly small.
In part we had done all these rehearsal tests, and the pathfinder
tests never had a storm like that.
What wound up happening—I remember the dates very specifically.
The first date I remember is August 21st, which happened to have been
the [2017 solar] eclipse. Some of our team actually went to see the
eclipse. The real die-hards were there, and we had a small team go
outside. To see the eclipse we had to work it out so that a few people
at a time from the team on shift could go outside. I remember
that day I walked outside, and that was the first time Carl Reis,
who was tracking weather on a daily basis, mentioned to me, “You
know, there’s a storm brewing in the Gulf [of Mexico]. It might
be a tropical storm, might head our way.”
The next day, August 22nd, is actually the day that we declared that
we had achieved what was called cryo-stable. That was the point at
which we were stable in temperature. Thanks to some of the many preparations,
we had prioritized our testing to do the most important test first,
in case there was a big storm, which we didn’t think would ever
happen, but nonetheless.
So some of the most important optical tests were going on those first
few days. Then I think on the 22nd, actually our deputy project manager
John [F.] Durning was visiting. I remember Carl and I, and him and
Mark [F.] Voyton [ISIM and OTIS manager] went into a conference room,
and we started talking about the fact that there was a tropical storm
coming and that we probably needed to ramp up the planning. That was
Tuesday.
By Wednesday morning, we had a sense that this tropical storm was
heading for Houston. That is when we really went into overdrive, Carl
and I in particular. We actually were able to meet with meteorologists,
and obviously were working closely with the facility team. We had
done a large amount of preparation, but as much as you prepare, there’s
always things that you need to worry about.
I remember one of the big things I was focusing on was just the power
systems. The storm was supposed to hit Friday night. Our thinking
at the time was it was going to be a tropical storm, and that there
could be a lot of wind and we could lose power. We had duplicate power,
and we had the UPS [uninterruptable power supply] systems, etc., but
it really forced us to say, “Okay, we need to go back through
and audit everywhere there’s power and make sure that we’re
not going to be in a situation where we can’t do something because
we’ve lost power.”
We went through every UPS and where each was located.
I actually wound up putting together diagrams of all this stuff, and
we did find that we did have some holes in our thinking. Had we lost
power, we would have been single string in our ability to command
the observatory. So one of the first things we did is say, “Well,
we better add some redundancy there for that.”
Then the other aspect was there was some ground support equipment.
For example, the cooler system had some heaters that you would need
to switch over, and we were not quite ready to do that. We had some
procedures that we had to prepare and some outside-the-chamber work
we needed to do. There were some very intense preparations in terms
of power initially.
Then on Thursday, that was the big day. We met with the meteorologists,
and they started indicating—I remember the words they used were
“pillow fight.” This storm might hang out over Houston.
There would be this kind of pillow fight over Houston of these different
parts of the systems interacting, so we needed to be prepared. They
were still calling it a tropical storm at that point, but we needed
to be prepared for this to last a long time. We needed to make sure
that everybody had enough water and food. This is Houston in the summer
when it’s very hot. One of the real things they pointed out
was people get stuck in their hotels without power. It can get really
hot, so you need a lot of water on hand.
We had over 120 people in Houston at the time supporting the test.
This is independent of the facility team, just the OTIS
team. This includes the international teams from Europe and Canada
that were supporting the tests, many of whom didn’t know much
about hurricanes.
We decided to have an all-hands meeting that Thursday. I think I have
these dates right. We had an all-hands meeting and went through what
we had learned from the meteorologists and that this could last a
long time. Everybody needed to go out and stock up on food and on
water. We explained to them, “Here’s exactly what we’re
going to do if we lose power.” We went through that sequence.
I think we gave everybody a sense that we were on top of every possible
issue. We were not overly worried at the time about the liquid nitrogen
supply, which is something that you would normally worry about, because
in the planning for the test we literally went through every possible
thing that could go wrong during the test, including what could be
the result of a storm. We had a five-day supply of liquid nitrogen,
which we thought would be more than adequate. So there wasn’t
a lot of discussion of that issue, but there was a lot of discussion
about the potential loss of power, personnel safety, what to do if
you can’t get in, who you call, how you deal with it. We went
through all that.
On Friday morning, I remember when I woke up—I was a pretty
early riser—I checked the weather immediately, of course. We
were getting updates from the JSC meteorologists, but the thing that
was really unique on Friday morning was they were like, “This
thing is going to be a hurricane.” Initially they thought Category
1 or 2. Later, as it got closer, it became a Category 4. When we woke up
Friday morning, initially we were still thinking it was a tropical
storm. It was only that morning that they indicated it was going to
be a hurricane. My initial thought was, “Wow, we may not be
able to get people to and from the facility.”
There were plans for a small rideout team; we’d always done
contingency planning with one. The rideout team was very small, and
really the thought was it would be a pretty short-duration kind of
thing. The thought was probably at that point the Center would
have been literally shut down. What we realized is that this storm
might be something a little different, this might be elongated multiple
days, and we might have to have teams stay overnight for more than
a day even. We weren’t sure.
One of the things we did on Friday was we went out and got 40 air
mattresses, which we put in a conference room down in the bottom of
the facility, just in case, as a precaution. I think we blew up maybe
about half of them. We started preparing the
team for the storm to hit Friday night. At this point, the thinking
was that there still was a good chance that it would just hit Friday
night, and then by Saturday everything would be a lot less impactful
and we wouldn’t have to worry too much.
Friday night we started going into an operational mode from the OTIS
perspective where we skinnied down the team and made sure that we
had the people we needed. There were a couple people who stayed overnight, one or
two, on Friday night. Carl stayed over; I stayed late. I think I was
even able to get back to my hotel for a few hours at some point that
evening. It turned out we made it through the night Friday night.
Even though there were pretty strong winds—I think they got up
to 40 miles an hour—we didn’t lose power.
Saturday morning we were just relieved. It turned out the weather
looked okay Saturday late morning. So we had a shift come in, and
we started getting back to testing, thinking we had survived the worst
of it, because the storm had at that point passed over us. They had
mentioned the thing about a pillow fight, but at that point I don’t
think we fully appreciated what that meant. As I mentioned, we were
doing some of our highest-priority testing. We had just finished aligning
the primary mirror. That morning we got some interesting results about
one of the segments which had made contact with some insulation that
turned out not to be a concern, but we were troubleshooting it.
I remember going home Saturday. Actually, it was even nice enough
that I went out to dinner. I went back to my hotel
a little bit early from dinner because I was just a little bit nervous
still. We got some emails from the JSC meteorologists that the storm
was in a spiral shape, and that there was a possibility that the next
big streak of the spiral could hit Clear Lake where JSC was, and that
it could be a very intense situation.
So I called Carl up. He had gone home too, it turned out, and we both
raced back to JSC. I was staying at literally the closest hotel, so
I was there like in five minutes. I was on the phone with Carl as
he was getting ready to get back. That was by far the most intense
evening from the whole test.
We had to make quick decisions. We sent people home and then brought
some other people in, some of whom we even told to go to sleep. “We
may need you working the night shift, so go take a nap.” There
were people that had been on for quite a while that we didn’t
want to be stuck there. We were quickly modifying the shifts, and
we’re calling off testing at this point and just going into
a safety mode with a skeleton team. We could never get much below
about 10 to 12 people. You always had to have a certain number of
people just to monitor the health and safety because this system is
operational. You’re at cold temperatures, so we needed to make
sure we had them. Normally you’re working shifts, and you can
work some longer shifts. We ultimately went to 12-hour shifts, so
twice a day.
That night was by far the scariest evening. The storm was just intense.
We got almost half the rainfall—there were 51 inches that week,
and over half of it was that one night. And there were four tornadoes
very close, where all our phones were going off indicating a tornado.
One of them was within a mile of the facility. I think it was even
at an intersection that people would have been at had we not changed
the shifts. So that was scary to us.
There were also some leaks that developed. We were dealing with a
lot of leaks in the building, almost doing triage on the leaks, and
having to cover up some of our equipment. It was never a concern with
the flight hardware as far as water, but it did run the risk of getting
on the ground support equipment. Some of it we turned off, a lot of
it we covered up, in terms of the optical computer hardware.
The other hardware we cared about, of course, was the thermal hardware
that you couldn’t turn off. You had to have that to monitor
the temperatures, etc., and be able to control heaters, so we were
carefully making sure that was all safe. There were a few moments
that evening that were pretty nerve-racking. But the team did a phenomenal
job, everyone—the facility team, the OTIS team—that was
there.
I remember we got a phone call early Sunday morning from somebody
who was stuck in their car in 18 inches of water. We basically told
them to call 911 because we weren’t able to get out to them.
There was significant flooding all around. If anybody’s seen
the images, the flooding in and around Houston was unbelievable. Many
people lost their houses. We had one of the members of the facility
team that was stuck at a gas station for over 24 hours—I don’t
remember exactly how long—and they eventually got rescued by
a boat. We were worried about personnel. The facility team was particularly
hard-hit because many of the towns surrounding the test site were impacted
by the flooding. A lot of our folks were staying at hotels that fortunately
were not in the worst-hit flooding area.
We were just very fortunate that both the test facility and the hotels
we were all staying at did not flood. There were things like some
of the local areas were recommending that people not flush their toilets
or use their water because of concerns about the sewage system. And
as the week went on all the grocery stores were shut, so there were
limited places you could get food.
Pretty much we went to 12-hour shifts. There were about three or four
people who had these large pickup trucks, and we quickly coordinated
shift changes where we’d bring in about a dozen people, three
or four per pickup truck. Then the person who drove the pickup
truck would take the people who had been there back to their hotels.
We were able to check the weather online and make sure it was safe,
because the storm was very predictable. You could see that there’s
this big streak heading towards you and that maybe 20 minutes from
now there was going to be humongous flooding. Any time there was
really intense rain we would not do a shift change, but roughly every
12 hours we were able to do shift changes and get about a dozen people
there.
After a couple days of that we realized instead of just sitting there
staring at computers, we actually had a big enough team to do some
testing, and we were actually able to get some testing done. Limited
amounts, but some, when we had the right people there. For the most
part we just rode it out for the five or six
days.
The one pretty significant thing that we realized—actually I
remember first hearing about on Sunday—was one of the people
said to me, “How’s your liquid nitrogen supply looking?”
A lot of folks don’t realize this, but when you run a test like
this you’re literally using three or four trucks of liquid nitrogen
per day that you have to refill in order to keep the liquid nitrogen
shroud cold enough.
That five-day supply was starting to deplete down. The company that
supplied the liquid nitrogen was literally—the word they used
was underwater. Their reprocessing facilities were not making liquid
nitrogen. That was a very intense situation, because if we didn’t
have enough there was a point at which we would have to start warming
back up.
So we started working closely with the facility team. The first thing
we did on our side was to look at, “Is there a way that we could
change how we operate the shroud?” Maybe running every other
zone essentially to maybe use less liquid nitrogen and extend it.
Then the facility team was looking at other options for getting liquid
nitrogen.
Finally on Tuesday I remembered an old boss of mine—when you
had a serious issue like this, he always said, “Call the president
of the company.” So we got this crazy idea to call the president
of the company. We didn’t quite get the international CEO [chief
executive officer] of the company that made the liquid nitrogen, but
we eventually got, I think, the president of the division
that had the liquid nitrogen, and they were able to work with us.
We explained to them the seriousness of the situation, “We have
this national asset,” etc.
They were able to work with us and get us the first two trucks of
liquid nitrogen that evening, which was a huge relief. Even though
they weren’t reprocessing, they had some available, and they
were able to find a driver and a truck, all the things you needed.
Then little by little we were able to get more and more supply as
the week went on. That was probably one of the more tense things that
we were dealing with that week, in addition to just keeping everybody
safe.
I think it was Thursday finally we got blue skies, and the storm had
passed. That was right before Labor Day weekend. Our team had lived
through quite a crazy week. It turned out that—I think it was
Ball Aerospace first decided they were going to fly a corporate jet
with a whole new team to replenish the people and offered to bring
out some fresh food. That was great. Then Northrop Grumman decided
they were going to fly a corporate jet, then actually even NASA found
a way through a private company to send a jet.
We probably had, I don’t know, 70, 80 people that were able
to go home for the Labor Day weekend, and new people came down
to take over. The people who had been working the shifts—these
were long shifts. Everybody had lived through a pretty intense week,
and everybody was exhausted, so that was really nice. So when Labor
Day weekend came, we had this whole new team. That was about the time
that we realized the extent to which the facility team, who lived
locally—we kind of knew all week that they had been impacted,
but we were so focused on getting the liquid nitrogen resupplied and
getting everything safe that it hadn’t fully registered.
As we got into Labor Day weekend, with a fresh group of people, our
team started being concerned about our facility team. So I think it
was the Monday of Labor Day weekend we
asked for volunteers to go help people with their houses, because
a lot of people had such huge flooding. A lot of the flooding—literally
the water went up the entire first level. People had to go to the
second level of their house, get rescued by helicopters. A lot of
people had so much damage where they had to get rid of all their carpeting
and the drywall. We had, I don’t know, 35, 40 people that we
all met at a church and broke into different teams. We had a list
of the people from the JSC facility team that needed help. Some of
the people went to those houses; other people went to just some of
the local communities.
There were also a lot of people who needed help with just filling
out their paperwork for the FEMA [Federal Emergency Management Agency]
application. For some of the hotel workers, English was their second language.
That was one of the things I was doing, helping them with the FEMA
application. Which was ironic, because at the Hilton [Hotel], where
I was staying, the whole basement had been flooded.
Yet the Hilton—initially we had all these [U.S. Army] National
Guard people staying there during the storm who were literally flying
choppers during the day and going to rescue people. And then as the
storm finished, they all left to go to some other part of where the
storm was hitting, and all these FEMA people were staying at the Hilton.
There were also a lot of displaced families at the hotel because even
the neighborhood right next to the Hilton—Hilton is the closest
hotel to JSC—even in that local community, the houses were flooded
in a major way. There were these pockets everywhere you went. Anyone
that was below the flood line, their house was probably underwater,
and it was for several days. There was a lot going on, even within
the city, during that whole week.
Yet while all of that was going on, as I mentioned, we were getting
some test results. Some of them were amazing. I think the first time
we did the loop test between the guider and the fine steering mirror,
which is a really critical interface in the observatory—it turned
out to be four or five days into the storm; we just happened to have
the right people there. We did the first demonstration of that, but we
also were starting to get a little bit of data indicating some issues
with that stability issue I had mentioned earlier.
I very specifically remember right after the test—I think it
was maybe two days after, three days after—Bill [William R.]
Ochs, who’s the project manager on Webb, came to visit. I remember
he and I sat down, and we were going through what happened. At that
time I was briefing him on, “We have some funnies in the data
on the primary mirror stability that we don’t understand.”
I naturally was ready to go home, but I was like, “We really
need to understand this. This is a really important issue.”
Cryo-stable came literally the day after we had already started
to worry about the storm. The storm came a couple days into it. When
the storm left we still had testing in front of us, but we also understood
that there was a stability issue that we needed to understand. Many
of the optical people didn’t even go home. Or if they went home
they just went home for a couple days and came back. We were cryo-stable,
and we were finally doing the tests that we had set out to do. So
we still had several weeks of testing that we did.
Fortunately, the rest of the testing went fairly smoothly at that
point. We did have to add additional tests, several days’ worth,
to understand the stability issue. Very focused tests to try and assess
it.
That was the story of the storm and what we did to deal with it. I
don’t have as much information on the facility side. They had
their own set of challenges and a lot of personal challenges. But
certainly on the OTIS side everyone was just incredible about helping
out during the storm. There were all sorts of different examples of
people who just were helping out in any way that they could. The people
who came in to basically sleep on an air mattress so they could take
a shift in the middle of the night, the people who found food to bring
to the team—it was just an amazing collection of people that
were willing to really make it successful. I think that probably is
a testament to the fact that this is the James Webb Space Telescope.
It’s maybe one of the most important scientific endeavors that
NASA has undertaken. I think people are just so incredibly committed
to it. That really came through when we were going through this whole
storm, just the amount of dedication everybody had.
In fact, I’d say maybe the biggest problem we had was when we
would bring a skeleton crew in to operate. Just a very limited crew
that would come in on the pickup trucks each day, but that meant a
lot of people were having to stay at their hotel. I think a lot of
them were going stir-crazy. We felt really bad for them, but obviously
from a safety point of view didn’t want a ton of people on the
roads. I think that was one of the hardest things, just having to
tell people not to come. I think finally by about Tuesday, Wednesday
people started just showing up. I think they were at the point where
they were like, “Look, I have to work. I need to help.”
That’s the story of the storm.
Ross-Nazzal:
I don’t want to break in here, but it’s after 2:00 here.
Are you okay to go a little bit longer? Do you have something?
Feinberg:
I am. I’ve given you most of what I could probably give you.
But yes, I’m happy to go a little bit longer. Let’s do
it.
Ross-Nazzal:
I have a couple more questions. You talked about Harvey. Are there
any other memorable events or moments that took place? It’s
probably hard to top Harvey, but were there any things that come to
mind?
Feinberg:
During the final test itself?
Ross-Nazzal:
During the final test, yes, the cryogenic test.
Feinberg:
That was so significant, it’s really hard to top that. I think
that whole five, six days was probably the part that’s just
hard to top.
Maybe when we ended the test was memorable. We had a consent to proceed—a consent to warm up, basically. After having been through the hurricane and having that meeting, given the amount of optical data that got analyzed, the quality of the data, and how well the telescope was working, I think that consent to warm up was very memorable. Just because, despite the storm, people were like, “Wow, this was really successful, this data looks great. You guys are ready to warm up.” I think that was memorable.
I don’t even remember if I was actually there the very final day. I might have been there at the end of warm-up, but maybe not when we repressurized; I can’t even remember. I don’t think I was there when we opened the door. That wouldn’t be one that I would remember; Carl would probably remember that better than I would. But yes, I think that’d be the other critical moment that was a huge relief for everybody.
Ross-Nazzal:
You mentioned of course the Canadians, the Europeans. This was a multi-Center
and an international effort. Would you talk a bit about that?
Feinberg:
Yes, I think that was a really good part of it. I mentioned earlier that when we did the rehearsals we didn’t have all the instruments there. Part of what that meant was we didn’t have this large international contingent.
The guider/NIRISS [Near Infrared Imager and Slitless Spectrograph]
instrument was provided by the Canadian Space Agency. The MIRI [Mid-Infrared]
Instrument and the NIRSpec [Near-Infrared Spectrograph] instrument
were both provided by Europe—one by ESA [European Space Agency] and one by the EC [European Consortium].
As a result of all that, you had maybe 8 to 10 people at least at any given time from each of those teams—so maybe 30 to 35 people who were internationals. During the storm, in a lot of ways I think those were some of the most critical people. They were less familiar with hurricanes and how to deal with them, and with our emergency systems.
One of the ways Carl and I communicated with the team was by writing emails; there was a listserv. During the storm, for example,
we were sending five, six emails a day sometimes, and I know a lot
of the members of the international community were extremely thankful
for a lot of the information that we provided. From, “Here’s
where you can go get groceries,” to, “Here’s what
you need, here are the supplies you need, here’s what you need
to worry about at your hotel, here’s the roads that are safe
to take,” all that kind of stuff.
The international team, a lot of them are scientists. The instrument teams tend to be maybe half scientists, half engineers. It was a wonderful experience. Some of the best memories I have of the test were being there working with members of the instrument teams, including the U.S. instrument team as well, the NIRCam [Near Infrared Camera] instrument.
Because it was one of the few times where some of us had really worked
closely with the scientists developing tests, looking at data together,
it really felt like what we expect it will feel like when we get the
observatory in space. We’re getting data down, and we’re
making decisions. That was a great experience. All of those teams
were just wonderful to work with. These are some of their top talent
that they send out, and they’re all really collaborative people
who really have the good intentions of Webb at all times. That was
great.
Then, like you said, it was multi-Center. Obviously JSC and Goddard
were the ones really executing the test. There was actually a small
contingent from JPL [Jet Propulsion Laboratory, Pasadena, California]
supporting the cooler. They were also supporting MIRI, but the team that came out was more the cooler people we interacted with. The cooler was a really critical part of the test as well.
Some of the teams from other Centers weren’t necessarily there. For example, the Marshall [Space Flight Center, Huntsville, Alabama] team supported a lot of the mirror testing when we were developing the mirrors. A couple of members of that team, I remember,
were on some of our review panels to make sure that the lessons learned
from the cryogenic optical testing that they did at Marshall were
input into our thinking. They weren’t physically there, although
they had some role in the review of it going along the way.
I think that’s probably it from the NASA Center perspective.
Of course there were some folks from NASA Headquarters always involved at the program level, tracking progress. They would come out and visit.
When the storm was going on also, they were of course very interested
in what was happening and understanding what we were going through.
But overall, when we were dealing with some of the issues of the storm—like the liquid nitrogen—I remember Ken [Kenneth J.] Anderle, who was one of the members of the facility team that’s part of the JSC community. He and I and Carl collaborated really closely on dealing with some of those issues. There were several other facility test directors we were working very closely with on just resolving issues.
It was a very collaborative effort. We were communicating both formally
and informally. The Facility Control Center was right next to the
OTIS Control Center, and a lot of times we would just walk over to
the other. Whether we were talking through the loop system—which is a formal communication channel over a headset—or just, “Let’s go sit around a table and talk,” there was so much to deal with during the test that it forced you to do that amount of communication.
Did that answer your question? I think I covered the international
and the One NASA aspect of it.
Ross-Nazzal:
Oh, yes. I wonder if, looking back—because it took a long time to get ready to do the tests, and actually doing the tests seemed like probably a much shorter period of time. But was there one thing
that you’d point to that was your greatest challenge? I know
you said it was the testing, but was there one thing in particular
that you would point to?
Feinberg:
I don’t know that you can narrow it down to one thing. I used
to call it the “mother of all tests,” just because I was
involved with a lot of tests for pretty complex instruments over the
years on Hubble, and had seen a lot of other observatories and how
they do testing. I don’t think there’s ever been a test
this big and this complicated.
We even had a whole external review panel just to look at the test
at one point. We had an independent team—we call it the Product
Integrity Team (the PIT)—that helped assess all of the optical
aspects of the observatory, but with a special focus on this test.
I don’t know that I would narrow it down to a single thing.
I think once we got into the test itself, once we got into the pathfinder
testing, it turned out the vibration aspects of the test were some
of the more challenging ones. Certainly being at 50 Kelvin, being cryogenic, makes everything difficult because working in that environment
and dealing with that environment is complicated. There were probably
9 or 10 different engineering disciplines involved in this test, and
I think every one of them would say that this test really challenged
what they do. I can’t pin it down to one thing.
Ross-Nazzal:
What do you think is your most significant contribution to the test?
Feinberg:
I think helping to architect it, especially the cup-up configuration.
Then I think Carl and I being the OTIS lead test directors, just managing the test day-to-day, especially during the storm. On the storm part—because honestly, Carl and I had this not-quite-agreement where Carl was more the person who did the typical test director things, and I was more the person who understood the optics.
I never really expected to get as involved as I did in the actual storm stuff, dealing with the personnel issues, but it was so intense. There was so much to deal with that I think probably the most important thing I did was helping the team there, beyond developing the architecture over all the years, including how we would check the test out and the discipline to check it out.
One of the things that made the test incredibly successful was the
number of really talented people who all worked collaboratively. I
think part of the reason we had all those great people was because
of what this was, the James Webb Telescope. I do think that putting
the team together, understanding it needed to be a badgeless team,
and creating that culture, maybe that is the most important thing.
Because at the end of the day these things are never one person; they’re large team efforts. So the most important thing is getting a great
team that communicates properly and is collaborative and has the expertise.
I think maybe that was an area where I played a very significant role
just because of where I was and where I came from, which I learned
from other programs, like the Hubble servicing missions. That was
always the way we operated. We tried to take a lot of what we learned
on the Hubble servicing and apply it here, and I think that was successful.
Ross-Nazzal:
You mentioned the people. I’ve been asking folks we’ve
been talking to, are there other folks that you think we might consider
interviewing?
Feinberg:
Have you talked to Carl? Carl Reis, he’d be a very important
person to talk to. I think Mark Voyton would be a really good one
from Goddard.
Ross-Nazzal:
I think he’s on my list.
Feinberg:
Let’s see. Gary [W.] Matthews, who worked for Harris Corporation
for a while, and Tom [Thomas R.] Scorse, who’s at Harris right now. Those guys probably from the Harris perspective would have a
lot of really good input.
Ross-Nazzal:
Thank you so much for your time today. I know you are busy with everything
out there, so I really appreciate it.
Feinberg:
No problem.
Ross-Nazzal:
All right, have a good afternoon.
Feinberg:
All right. Good luck, thank you very much.
Ross-Nazzal:
Thank you. All right, bye-bye.
Feinberg:
Take care.
[End
of interview]