Episode Transcript
Rhett:
On this episode of Pipeline Things, we dive into the Challenger failure. Probably, next to maybe Three Mile Island, it's the most talked-about safety-related incident in all of American culture, and one that, even 20 years after first hearing about it (I was taught a lot about it in school), really has a lot of meaning for me. I hope you will enjoy it as we dive in. And I'm just going to ask you one question before you get into this: when you make your decisions as an integrity engineer, is it your job to justify that the pipeline is conclusively safe to operate, or do you have to justify that you conclusively need to take action? We talk about why that's important on this episode. Hope you enjoy it.
- Intro –
Rhett:
So as we continue our Failure Files series, this particular episode, Chris, was interesting to me. I talked about it in the introduction: it's because I heard about it so much in school. Let me ask you this much: did y'all take an ethics course when you were at U of H? Nope, you didn't take an ethics course.
Chris:
Not in the undergrad. I took a supervisory course, like organizational systems and supervision, and it kind of touched on it a little bit, but it wasn't really ethics.
Rhett:
So we had a designated ethics course, which I understand from Alec, our new hire, doesn't exist at A&M anymore, but it did when we were there. And unfortunately, whatever its intentions were, it didn't register with me. That's what I'd say: it didn't hit home. But the episode today, the Challenger episode... one thing is, I heard about it not just in that class. I mean, it felt like Challenger was beat into us between 2000 and 2005 when I was at A&M, to the point where, honestly, man, I just tuned it out. And what's interesting to me is that at the time, I can literally remember thinking, my God, why do we talk so much about this? Just do the right thing. This is an easy decision.
Chris:
Did you always do the right thing when you were in college?
Rhett:
Well, I felt like as an engineer, it would always be easy to make the right decision, right? I mean, engineering is black and white. It's right and wrong. There's a right answer. There's a wrong answer. Don't defend wrong answers. Like I think as a student of engineering, that's how I would have approached it.
Chris:
What did Bruce Nestleroth tell us about engineering?
Rhett:
Oh, you're gonna have to refresh my memory. I mean, we were talking about MFL, but have you got a quote in mind?
Chris:
Yeah, he said it's about compromise.
Rhett:
Yeah, let me tell you what, Chris, that's fair. If you wanna make this about
compromise, I think the challenge is where do you draw the line on compromise?
Chris:
And that is why they wanted to give you a class in ethics.
Rhett:
That's why. I mean, that's the reality. But look, as a young engineer, when you're doing textbook problems, the homework problem is right or wrong, right?
Chris:
That's how we're behaviorally trained, right? A, B, C, D; right or wrong?
Rhett:
And so I think you assume that most of the decisions we make in engineering, and now as pipeline integrity engineers, are right or wrong. But the truth is much more complex than that, and I think that's borne out in this episode. So, man, let's get into it, 'cause I wanna set the stage. One thing I wanna do in this episode, and in some of the other episodes: man, there's always this tendency to paint a bad guy, a villain, in the picture. There's always hidden information; there are cover-up efforts after the fact that I think really obscure the lessons that can be learned. So I want to encourage the audience: if you're familiar with this, don't get lost in some of the obscurities and cover-ups that happened after the fact. I really want to focus on what took place.
Chris:
So story time with the Rhett starts now.
Rhett:
Let's go. So on January 28, 1986, space shuttle Challenger breaks apart 73 seconds after liftoff, killing all seven crew members: six astronauts and the teacher, Christa McAuliffe, who was on board. Again, I would have been only four years old at the time, almost four, so I would have been totally oblivious to it. But this was shortly after the space race between the United States and the Soviet Union, and the United States considered itself the technologically advanced superpower in the world, almost on a high horse of irreproachability. Like, we are it, right? And this really shocks everyone. How does this incident happen to us? We are the best at doing this. This doesn't happen to us. When you go back and read the story, you understand how deeply it shocked everybody. I would have been oblivious to that at the time, and even years later when I studied it, I didn't really understand it. But what happened, and everybody knows, everybody will tell you, is the O-rings. There's a whole book written about it, "Truth, Lies, and O-Rings." But I want to set the technical stage. The space shuttle, if you're familiar with it, you know what it looks like: you've got the big white orbiter, you've got the big orange fuel tank that sits in the middle, and then you've got the two SRBs, the solid rocket boosters, on either side. They're called solid rocket boosters because they run on solid rocket fuel. Because of challenges with the overall design of the space shuttle, those couldn't be manufactured in one big piece. The solid rocket boosters had to be manufactured in multiple segments and then shipped and assembled at Cape Kennedy. Whenever they assembled them, if you can imagine, each segment is really nothing more than a metal Coke can.

And you have to take the two pieces of the Coke can and stick them together, and you've got the solid rocket fuel in the middle. Well, between the two Coke cans, if you will, that you stack on top of each other, there's a sealing mechanism. And it's not really complex. It's just two O-rings, not one, but two, right? So imagine almost a pin-and-bell configuration: you slide the one segment inside and the two O-rings sit on the outside. Believe it or not, if this assembly fails while the solid rocket fuel is burning, hot gas will blow through the joint. And if it blows through, it's going to light up the fuel tank sitting right beside it. That was always the fear. So the sealing mechanism is important. The reason they had two O-rings is that the complexities of how the SRB joint actually sealed were never fully understood. They knew that when it lit up, the joint would expand, and as the joint expanded, there was a potential for it to leak. All they knew, after multiple tests, was that the O-rings themselves had to have some flexibility so that when the system expanded, the O-rings could fill the gap, seal it, and keep the hot gas from blowing by. And since they were kind of unsure how one worked, that's why they doubled up and made it two O-rings. Because two is always better than one.
Chris:
J-I-C.
Rhett:
J-I-C. Just in case. Yeah, exactly, right. So NASA was aware of the issue; I'm gonna get into that later because it plays into the conversation. On that particular morning, it is the coldest morning on record at Cape Canaveral, colder than it had been on any previous launch. Actually, fun fact: that record stands to this day as the coldest launch-morning temperature in that area.
Chris:
It was a cold day, great way to start the story.
Rhett:
So what happens, obviously, is the shuttle launches, both of the O-rings fail, and the solid rocket fuel burns through. When it burns through, it burns a hole in the external fuel tank, the big orange thing, and eventually the external fuel tank explodes. If you've ever seen the famous picture of the Challenger, it's got the one big white cloud of smoke and then the two contrails that go off to the sides. Those are actually the SRBs that separated; when they separated, they just started flying off until range safety hit the kill switch and had them explode. So that's the technical cause: it was so cold that the elastic O-rings couldn't expand to fill the gap, and that's why it failed. But that's not the most interesting part of the story. The most interesting part is all of the decisions leading up to launch. NASA, being the organization that it is, had a procedure called the Flight Readiness Review that required all critical parties to sign off on the launch. And when I say every party, I mean everybody involved: the SRB manufacturer, the heads of NASA, flight control. Everybody's got to say, hey, we sign off, this launch is good to go. One of the major parties in this is Morton Thiokol. Morton, as in Morton Salt, same entity, bought Thiokol, and Thiokol is who actually designed and manufactured the O-rings, and really the whole SRBs. And there's one particular character in this story, his name is Roger Boisjoly, and he had been studying the O-rings for quite some time before this, because he had observed burn-through on the O-rings in prior launches. They had seen multiple launches where the first O-ring had been burned through.
And the response was always, well, yeah, that's why we have the second one; if the first one gets burned through, you have the second one. His opinion was, no, the second one is a last resort. We always had two just in case something ridiculous happened, but we never expected the first one to burn through, right? And then he had multiple cases where both of them showed burn-through on prior launches. So he understood that there was an issue, and he had a lot of documentation around it. He also understood that it was in some way related to cold weather. But there were challenges: when he put forth all of the failures and looked at the burn-through, you'd see some that happened at a temperature a little bit warmer, and others a little bit colder. You couldn't just draw a clear trend and say, look, as the temperature decreases, the damage increases, the way you'd hope to in a complex problem like this, right? And so the night before, when they realize how cold it's gonna be, and it's no joke, this is where the story gets crazy. The night before, they're on an 11 o'clock PM conference call. Old-school conference call, not Teams, not Skype: a literal conference call. Roger Boisjoly's team has to assemble the information and fax it over to Cape Canaveral, where they then have a discussion about it. And Roger Boisjoly's team presents all the information. He believes, look, man, it was conclusive: you know at this point in time that we shouldn't launch. Right? And so when they finish, they go around the room: what do y'all think? Should we or shouldn't we launch? And all of the Morton Thiokol engineers on the phone and the VPs say, we shouldn't launch. That's the decision. The people in charge at NASA are like, well, okay, that's kind of crappy.
And they start thinking this is off. But then one person, and he's the one who gets painted as the bad guy in the story, unfortunately, his name is Larry Mulloy, says: I am appalled that Morton Thiokol, this late in the game, would come forward and suggest that we shouldn't launch. And his exact quote, looking at the data they'd produced, was, "My God, Thiokol, when do you want me to launch, next April?" And this is where you have to appreciate the complexities at play. The entire shuttle program's justification is built around having a reusable shuttle that could launch as many as 24 times a year. That's what they sold it on, and they've never been able to achieve it. Congress is wanting to cut their budget, so they're feeling this intense pressure: we have to live up to what we were designed to do, we have to meet this launch schedule, otherwise NASA's budget gets cut, and then what happens to our jobs? What happens to our livelihoods? So before you villainize Larry Mulloy, I think you have to appreciate that his justifications were, number one, he thought the data was inadequate, and number two, he had a strong bias toward preserving the organization he was working for, and felt that he needed to do that. Then on the flip side, we have Morton Thiokol, and I want to appreciate Morton Thiokol's biases too. At this point in time, when NASA is trying to condense budgets and squeeze them, the first place NASA goes is Morton Thiokol, and they say, "Hey, we're going to start looking at other manufacturers for the SRBs." And Morton Thiokol is like, "Oh, hell, this is billions of dollars' worth of work. We have a whole facility with 200 people that we employ." So whenever NASA starts putting pressure on Morton Thiokol, Morton Thiokol feels: if we don't deliver to NASA, NASA's going to go find someone else to build the SRBs, and then what am I gonna do with my plants in Utah that are manufacturing all this stuff? So here's where the story gets complex.
Chris:
Oh, it's pretty complex already.
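A side note on the "no clear trend" problem described above: post-accident analyses (the Rogers Commission report, and famously Edward Tufte's critique) pointed out that the launch-night charts included only flights that had shown O-ring damage. Include the damage-free flights and the temperature relationship becomes much clearer. A minimal sketch of that idea, using made-up numbers rather than the actual flight record:

```python
# Hypothetical data for illustration only -- NOT the actual shuttle flight record.
# Each tuple: (launch temperature in deg F, number of O-ring anomalies observed)
all_flights = [
    (53, 3), (57, 1), (58, 1), (63, 1), (66, 0), (67, 0), (67, 0),
    (68, 0), (69, 0), (70, 1), (70, 0), (72, 0), (75, 2), (76, 0),
    (78, 0), (79, 0), (81, 0),
]

# Looking only at flights WITH anomalies (what the launch-night chart did),
# the temperatures look scattered and no trend jumps out.
damaged_temps = sorted(t for t, n in all_flights if n > 0)
print("temps of damaged flights:", damaged_temps)

# Including the zero-anomaly flights changes the picture: almost every cold
# launch had damage, while most warm launches had none.
cold = [n for t, n in all_flights if t < 65]
warm = [n for t, n in all_flights if t >= 65]
cold_rate = sum(1 for n in cold if n > 0) / len(cold)
warm_rate = sum(1 for n in warm if n > 0) / len(warm)
print(f"anomaly rate below 65F: {cold_rate:.0%}, at/above 65F: {warm_rate:.0%}")
```

The point is the conditioning, not the numbers: filtering to damaged flights throws away exactly the comparison that matters.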
Rhett:
Oh, yeah. After Larry Mulloy pushes back, the VP at Morton Thiokol says, "Hey, give us a minute, we need to go on mute." And he goes on mute. He then turns to the room with the other VPs and says, "Guys, we need to talk about this. What do you think?" He turns to number one, number two, starts putting pressure on them: what do you think? And one by one, they start folding: I think it's okay to launch. I think it's okay to launch. And then he gets to the last one, the most engineering-minded of all of them, a guy named Bob Lund. He's uncomfortable with it and won't make a decision. And that's when the senior VP makes the famous statement: "Bob, I need you to take off your engineering hat and put on your management hat. I need you to make a management decision." And Bob, under pressure, eventually says, okay, okay, I agree. They come off mute and they say, hey, Morton Thiokol is okay to launch. NASA thinks it's odd, but they're like, okay, we're good to launch now. Al McDonald, the Morton Thiokol representative there at Cape Canaveral, wasn't in that caucus; he's on the other end of the call, literally completely confused about what's going on. NASA turns to him and says, this is odd, so we're going to need you guys to sign a document that says you're okay launching. And they give it to Al McDonald, and Al McDonald's like, I'm not signing this. You're going to have to get one of those other guys to sign it, because I don't understand how we just overruled all the engineers, right? And that's the conversation that I think was never presented to me in college. And even if it was, Chris, I couldn't have appreciated it, 'cause now, as a consultant, I think I can appreciate those cost pressures. You can appreciate the perspective of senior executives at a company thinking, oh my God.
Chris:
Yeah, but not because we're consultants; because we've been in organizations where we see how decisions are made, right? We understand the management pressures, not just the technical ones. Or let's put a specific name on this: pipeline safety. Right? We've been in situations where you can have this siloed approach of, we're gonna commercialize this, we're gonna build a service, and it's not, hey, we're a partner in pipeline safety.
Rhett:
Yeah. And when you realize that, you wish you could go back to that idyllic world in college where you think it's just two plus two equals four and you make a decision. But the reality is it's never that way, right? Pipeline decisions are never that way; pipeline integrity decisions are never that way. There's always: this is a critical line, this feeds something. We've seen that before. There's always: do you know how much it's going to cost to shut in this line? There are always legitimate cost pressures, legitimate operational pressures, that feed into decisions and oftentimes have to be weighed in light of the engineering. And that's what makes this particular episode so easy for me to empathize with, because I feel like I've been a part of those decisions. I feel like I could see myself in that room, right? And look, we talked about this on Macondo: the whole groupthink thing, and having the ability to be the lone person and say, I disagree with this, and not succumb to pressure, at least until I'm convinced.
Right. So, you know what, to our audience: this is a good chance to take a pause right now, because I just gave you guys the whole story. But when we come back, I want to break down some of the elements of the story, make them directly applicable to us in the pipeline industry, and come away, I think, with some very clear lessons learned. So hang on, and we'll be right back.
- Break -
Rhett:
All right, welcome back to part two of the Challenger episode of Failure Files. We set the stage in the beginning: I explained the technical side and what actually happened, with a focus on the conversation that took place the night before. Chris, and to the audience, I really want to get directly into the lessons learned. I've boiled them down to four, and we're going to talk about each one. And again, audience, I really don't think it will be difficult for you to see yourselves in these. The first one is normalization of deviance. When I say that, here's what I mean, Chris: NASA had gotten comfortable with flying a shuttle that they knew had a faulty, risky design to begin with. When you look back at the history of the SRBs, they didn't even fully understand exactly how the sealing mechanism worked, which is why they had two O-rings, right? But they knew they were having issues from very early on. The second launch was the one where they found the first evidence of burn-through, where one O-ring had been singed. They'd seen it happen multiple times, but they kept saying, yeah, but we've launched every time, right? We've launched every time. In fact, even when presented with the information, one of the engineers said, yeah, but look at it this way: we have two O-rings per joint, I think it's six joints on each SRB, two SRBs on each shuttle, and across all the launches we've done, it's less than 10% of the time that a single O-ring might have failed, and you need both of them to fail. Come on, this is not that big of a deal, right? So I want to ask you: when you think about normalizing something you should be alarmed by, becoming comfortable with it, where do you feel like we see this in the pipeline industry?
Chris:
We see this all the time. All the time. That's a big statement, yeah. We highlight this often when we speak about different ILI systems. So one of the things y'all have heard me say pretty often lately is that the standard that's now incorporated by reference into US regulation, API 1163, allows the technology companies to self-govern. And we've just become comfortable with that. So when you choose an ILI system, we begin to normalize what we believe performance or success is, and it's, "Well, it's always been like that. It's always been like that. Oh, we understand that." Where I think this is important for engineers, whether you're coming into this industry or you're in a leadership position (and I'm gonna throw this in here, this will be my theme for today: pipeline safety): just because we've done something a certain way, just because the CFR points you to 1163, and 1163 allows the ILI community to self-govern and establish what the norm is, that norm may not be grounded in safety. And so now we've normalized a standard of performance that is not geared around what our goal is, and that's the new normal. "We've never done that. Hey, can we do it this way?" "Well, no one does it that way, so no, we can't." Right? As an industry, we're not challenging ourselves to ask: is our normal performance acceptable? Is it? And we've kind of started thinking about that already with the failures we've talked about, like the Type A failures, and it's, well, we've always done it like this, and those are outliers, right? So for me, we can find plenty of situations where we've just always done it like this. And if we're not careful... we have aging infrastructure, right? Time is against us.

And if we keep doing things the way we do, a lot of things are going to work out well. Maybe the O-rings wouldn't have been that big of a deal; maybe it only happened 10% of the time. But you know what? Then you get that outlier: it was a little bit colder of a day, and maybe we couldn't have predicted this. Yeah, but maybe if we had the right system in place from a cultural perspective, and not this normalization of deviance, maybe we'd have different outcomes.
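The "come on, it only happens 10% of the time" redundancy argument recounted above has a well-known reliability-engineering flaw: it assumes the two O-rings fail independently, while cold weather degrades both at once, a common-cause failure. A minimal sketch with purely hypothetical probabilities (none of these numbers come from NASA data):

```python
# All probabilities below are invented for illustration.
p_primary = 0.02              # assumed chance the primary O-ring is compromised, per joint
joints_per_srb = 6            # per the rough count quoted in the discussion
srbs = 2
joints = joints_per_srb * srbs

# Naive "independent" view: the secondary ring fails on its own, unrelated
# to whatever compromised the primary. Both failing at one joint looks rare.
p_secondary_indep = 0.02
p_joint_breach_indep = p_primary * p_secondary_indep
p_any_breach_indep = 1 - (1 - p_joint_breach_indep) ** joints

# Common-cause view: cold stiffens BOTH rings, so if the primary fails,
# the secondary is much more likely to fail too (assumed correlation).
p_secondary_given_primary_cold = 0.5
p_joint_breach_cold = p_primary * p_secondary_given_primary_cold
p_any_breach_cold = 1 - (1 - p_joint_breach_cold) ** joints

print(f"naive per-launch breach risk:        {p_any_breach_indep:.2%}")
print(f"common-cause per-launch breach risk: {p_any_breach_cold:.2%}")
```

The specific numbers are made up; the design point is that a shared stressor like temperature can erase most of the benefit of a "redundant" seal, multiplying the effective per-launch risk by an order of magnitude.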
Rhett:
You know, but it's hard. As you were saying, you know what I thought about? Sometimes the justification we use when dealing with a new threat. We actually did it on the episode with Mike where we talked about hard spots: would you rather deal with this or that? Because the reality is, a lot of times we're like, well, this threat's not as bad as that one. And that mentality, I say this to myself, can have a tendency to lull us into a sense of complacency. Well, look, girth weld failures don't happen near as much as seam weld failures, so we don't need to worry about girth welds. Hydrotechnical failures don't happen near as much as this other failure mode. I think what that can do is lead you to not paying attention to where the critical ones are, where all the holes in the Swiss cheese (I hate to use that analogy) line up, so to speak. And I feel like, I told you this before, in my prior life, when we used to work in the Gulf of Mexico, I'd see it a lot. Because in 2005 we had all these hurricanes come through, and they wanted the platforms redesigned, and all the operators were pushing back, like, well, we survived 2005. And it was, look, past success is not a guarantee of future performance, right? And I think that's the natural tendency in the industry: look back, see that we did it this way before and we turned out okay. This story serves as a clear reason why we shouldn't do that.
Chris:
So what's our lesson learned here, right? Make it simple.
Rhett:
I don't think you should... you know, past success isn't a guarantee of future performance. Just because you did it that way before and it worked, or just because it's always been done that way, doesn't mean you shouldn't take a critical look at it. If it looks like a duck, quacks like a duck, and walks like a duck, then just because it didn't burn you before doesn't mean you shouldn't say, "Hey guys, why are we doing it this way?" That's my lesson.
Chris:
My message would be: most of us operating in pipeline integrity have a technical role, and the majority of people in leadership roles came from a technical background. Don't lose sight of that. So while engineering, as Bruce Nestleroth would say, is about compromise, let's be careful what we compromise.
Rhett:
Yeah, that's real. That's well said. If I had a gold star, I'd give it to you. That was good.
Chris:
We've done that before.
Rhett:
The next one is biased risk standards, and I want to explain it this way. Man, this one is really real to me; I get passionate about it. NASA's position had always been that if a contractor had an issue, the default was to prove the hardware conclusively acceptable. If you raised an issue, it fell on the side of unsafe until proven safe, to use that analogy. In this instance, when Larry Mulloy spoke up and said he was appalled, going back to that quote, "My God, when do you want me to launch, next April?", the engineer, Roger Boisjoly, said it was the first time he ever felt a dramatic shift in the position on risk, because he was being asked to conclusively prove that it was unsafe to launch, rather than conclusively prove that it was safe to launch. And look, I can tell you personally, I had this conversation with our team the other day. We came up on an issue where we were looking at a dent with metal loss, and it was unclear whether we could see the signal in the previous data; it had been reported in the new data. And the implication, Chris, was that if we couldn't see it in the previous data, we had to reanalyze the whole dent. It sounds small, but it was going to delay the report, the team would have to reanalyze it, and they didn't want to. I said, guys, look, we need to stop. First and foremost, our goal is to prove that this dent is conclusively safe. Our goal is not to see how we can get out of doing work, or to shortchange it. If it's not conclusive, if everybody on this call doesn't agree that that signal equals this feature, then we reanalyze it. Because that's the right way, the safe way, to do it, right? And I mean, man, I got in an argument with someone on a call the other day about the same thing. So what are your thoughts, man? Do you feel like you see this?
Chris:
Yeah. If the topic is biased risk standards, I feel like we've had the opportunity to see this in real time, before there are consequences. And I hope you guys will align with this and not be too upset with me for bringing it up. You know, you came away with an appreciation, after talking to Andy Drake in that episode, of the origins of our prescriptive regulation. Some of it has been very costly, right, and doesn't have much technical merit. But at the time, PHMSA's position was geared toward safety, because of the uncertainties in assessing the specific feature, which could be a defect, potentially a critical defect. So if you have a dent with metal loss in an HCA, what's the response? You're gonna dig it. And that's been expensive for a long time. How many times have we dug up dents and been like, oh man, another wasted dollar? I could have invested in my CP systems; I could have done some pipeline recoats with all the money we've spent on dents with metal loss. So our risk standard was very high, meaning we wanted to be very conservative when it came to dents. But much like in this scenario, the basis was proving it safe. Now the scale is flipping, and we're seeing it with dent ECAs. What did we just observe? A lawsuit against PHMSA, because a safety factor in a discussion was different from the safety factor that was published. And how many discussions have we had where there's now argumentation around how much we need to evaluate in dent ECAs? So I feel like we're kind of in that right now. We went from one side, being ultra-conservative, we're gonna dig all of these,

to now having this platform to say, hey, we recognize that was maybe too much and we have an opportunity to change it. But are we using engineering to establish that safety threshold from a process, and from a goal of: we want to be safe, but we don't want to waste dollars? We have that platform now. But are we going to just flip over and use it as an opportunity to make up for some of the wasted dollars? So I think we need to be really careful here and recognize that just because there's an opportunity, we don't have to take full advantage of it either. We need to find that balance.
Rhett:
So another place I want to make this real for the audience: geohazards fall into this a lot. The reason why is that usually when you find a geohazard, it's something that's been there forever, or for a very long time. And the first argument that's always made when you deal with a geohazard is, well, this happened years ago, or it's been this way since before we started this conversation; do we really have to do something now? I've found myself in that conversation numerous times. And I want the audience to think about it: when you approach a problem like that, do you feel it's your position to prove that the pipeline is conclusively safe to operate? Or are you asking your integrity engineers to conclusively prove that they need to dig it immediately? Because the implications are huge, right? And nobody likes that question. And again, I realize there are always financial implications, so there are people that might not like it. But I think when you stop and ask, what am I being asked? It really changes the nature of the conversation. Because if you force your integrity engineers to prove that it's conclusively unsafe to operate, you put them in a position where it's nearly impossible sometimes to take action on things they need to take action on, things all the data suggests you should take action on.
Chris:
And it's also inefficient, right? Which then becomes demotivating. It reminds me again of the Macondo discussion we had: are we testing to confirm, or are we testing to investigate? You run an ILI and you have features that are reported. Like you said, are you trying to prove it's safe, or are you trying to prove it's unsafe? That perspective matters, I think. Are you investigating, or are you just trying to confirm what you're hoping for, that the line still has integrity? Because those are two different things.
Rhett:
So the last one is organizational silence. I think this is a cultural thing. The senior-most launch officials at NASA weren't aware of the problems because they weren't passed up the chain correctly. In fact, after that 11 o'clock phone call concluded, I think it was like one in the morning, one of the guys said, you know, should we call the old man and wake him up? He was the head guy in charge. And the answer was, no, let him sleep. So when all the crap went down, so to speak, the people at the top were like, I had no idea that Morton Thiokol had changed its position the night before; nobody let us know. It spoke to the fact that organizational pressure silenced people from passing the information up the chain of command, right? And in fact, I told you, I had this conversation recently with somebody who struggled with an integrity decision, because they hit the red panic button to go after an issue, and the issue turned out to be a nothing burger, a total false alarm. And they felt like, oh man, I wish I hadn't done that. And I'm like, yeah, but you did the right thing based on the information you had. The information you had, although it might have been spotty and a lot of it problematic, indicated potentially something very serious, and you took action. You can't go back with 20/20 hindsight and say we shouldn't have done that, because then you create a culture where people might not want to speak up.
Chris:
It's what you just said. Are you trying to prove that it's safe or are you trying to prove that it's unsafe? When this one comes to mind in our sphere, I'm curious, obviously we can't poll the audience, but I think the vast majority of the people that listen to our podcast are integrity professionals. And obviously the higher you go up in your organization, the breadth of your domain changes, right? You're probably over other things on top of integrity, but in general, I think it still applies, right? When you think of integrity, we're thinking of plan-do-check-act, right? And what I almost wonder is, if we kind of were in this prescriptive mindset, right? We've been under prescriptive regulation for a long time. It's like, hey, I did what I was supposed to do, right? I ran an ILI, right? I responded to all the features, if it's gas, under 192.933, right? If I'm in liquids, 195.452(h). I've done all these things, right, it's compliance. But now we're starting to see pipeline safety management systems become a more relevant topic. And why that's important is because one way that I've heard it communicated is that it's a culture, right? And so we have to start asking ourselves a little bit about how pipeline safety management and integrity management coexist in our current environment. We have the silver tsunami, we have YPP doing great things, we have different cultural backgrounds. We have different financial drivers. We have a lot of political pressures. We also like to talk about relevant things. PHMSA recently put out a request for outdated regulation: which of these regulations are outdated so we can try to get rid of them? We have a lot of change happening.
I think one of the things that we need to be careful with is, you know, when I think of this as the left hand talking to the right hand, maybe it's a good catalyst or a good bridge for what we need to start doing next to keep pipe sound in the ground. And I'm not pushing pipeline safety management systems, but maybe that is the opportunity, right? It maybe becomes a platform for thinking differently. And I'll give another anecdote, and forgive me, audience, for not being able to be specific, but going back to the dent ECAs, right? When we saw the lawsuit come through, I'll be honest with you, I was quite a bit surprised, and it's because obviously we had some pretty decent experience, and in many cases we recognized that a safety factor of two versus a safety factor of five is, in most cases, irrelevant. Not the point, right? The lawsuits are for different reasons. I can appreciate that, right? All the precedents that were trying to be established or were established. But I remember reaching out to our network of pipeline operators, and I asked, how important was this to you as an integrity leader? And they said, I could care less about the safety factor change. I'm way more concerned about no longer being able to do an ECA, right? Because the financial implications of having to dig dents with metal loss removes resources for me to make better decisions, potentially, right? And so where I think that matters, it's the left hand and right hand. And so I wonder, the right people who understand the implications of that change, were they able, and was there the right platform for them, to communicate up the chain to those people who are making decisions as to which regulations matter and which ones don't? Or is that a disconnected process?
Rhett:
Well, we'll never know because we weren't in the room for that decision, man. That's for sure. So as we wrap this up, and I want to close out on the Challenger episode, I want to encourage you all, if you get a chance, there's a lot of really good books out there. One of them is the Challenger book written by Adam Higginbotham. Netflix also did a series on the Challenger recently, I think it was 2019. There's also Truth, Lies, and O-Rings, written by Allan McDonald. He was the engineer who refused to sign off on the launch. Definitely recommend that you check that out. So on that note, I want to thank you guys for listening to this episode of the failure files, part two. I hope that you enjoyed it, and hopefully your perspective is a little different than mine was in college. And as a practicing engineer, you can appreciate these situations a little bit better and learn from them so we don't repeat the past. Thanks for joining us.
- Outro –
Rhett:
This episode was executive produced by Sarah Etier and written by myself, Rhett Dotson. The source material for this episode is the Challenger Disaster series on the American Scandal podcast and also the book Challenger, written by Adam Higginbotham.