Episode 3

March 19, 2025

00:33:13

Statistically Speaking... We Have a Problem - with Tom Bubenik

Pipeline Things

Show Notes

On the newest episode of the PPIM 2025 series of Pipeline Things, we are joined by the incredible Tom Bubenik to discuss his paper, “Lessons Learned Using API 1163 for Metal Loss.” And the findings are surprising…

Tune in to this highly anticipated episode to learn more.

Highlights:

  • Background and history of API 1163
  • The difference between the second and third edition of API 1163
  • The validation of performance specifications
  • What percentage of inspections actually met specifications?
  • What further research should be done to improve this?

Connect:   

Rhett Dotson   

Christopher De Leon   

D2 Integrity   

Tom Bubenik

Be sure to subscribe and leave a comment or rating!   

Pipeline Things is presented by D2 Integrity and produced by FORME Marketing.    

D2 Integrity (D2I) is providing this podcast as an educational resource, but it is neither a legal interpretation nor a statement of D2I policy. Reference to any specific product or entity does not constitute an endorsement or recommendation by D2 Integrity. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent. Views and opinions expressed by D2I employees are those of the employees and do not necessarily reflect the view of D2I or any of its officials. If you have any questions about this disclaimer, please contact Sarah Etier at [email protected].   

  

Copyright 2025 © D2 Integrity  


Episode Transcript

Rhett On this edition of Pipeline Things, we bring on long-overdue guest Tom Bubenik to talk about his paper on ILI performance according to API 1163. And I'm just going to ask the audience why you should listen to this episode. If I asked you how many inspections you think can be qualified to API 1163 at the vendor's reported specification for tolerances, what would your answer be? 50%? 10%? 70%? 100%? You need to listen to this episode.

Chris You didn't offer a confidence interval.

Rhett You need to listen to this episode to find the answer to that question, and some discussion on confidence intervals. Thanks for joining us.

*intro*

Rhett Welcome to the PPIM edition of Pipeline Things. We are looking at another great guest from our 2025 paper selection, and that is going to be Tom Bubenik. Now I have to say, without question, we have a decent listener base, Tom, and they give feedback on topics they want us to cover. Guest-wise, you are the most requested guest.

Tom Thank you. I don't believe it, but thank you.

Rhett I'm telling you, after the Sage series, we got a raft of emails: why wasn't Tom on it?

Chris "Chris, if it's a Sage series, why was Tom not on it? How did you miss that? Why did you do this? Do you have a problem with Tom?" No.

Tom You guys must have a problem.

Chris You just ghosted me. You just ghosted me. Now I know what it feels like.

Rhett So actually, Tom, personally, I mean, you're one of those people in the industry, like I told Mike Rosenfeld, whose reputation really precedes them. You're very revered; people have an immense amount of respect for you. You're where I hope to be whenever I reach the end of my career, I want you to know that. That's how people look up to you. So it's really an honor to have you on the show.

Tom Thank you. I appreciate that.

Rhett But we didn't bring you here just to talk about that. See, people are stopping to take photos of you now.

Chris It's you, Tom.
Tom That really makes you feel good.

Rhett I mean, yes. And I don't know who that was, so I assume they took the photo for you.

Tom I have no idea.

Rhett There you go. So your paper was "Lessons Learned Using API 1163 for Metal Loss," and the trouble I'm going to have managing this episode is that 1163 is like Chris's baby. He's really passionate about it. You have a lot to say about it. I think it's going to be a great episode. But I'm just going to give you a little bit of leeway. Give me the background. What drove you to write this paper and look at 1163? Because we've had it for a while now.

Tom That's right. So I've been involved with 1163: the first edition I helped write, and I helped edit the third edition. The first edition was directed a lot by the inspection companies, with less input from the pipeline companies. That's because it was their vested interest to have a document like this. The second edition came along; there were some changes and improvements. By the time we got to the third edition, we wanted to see if we could strengthen it a little bit, change a few things that were optional in the second edition and make them mandatory in the third edition. The idea was to bring more consistency to industry. We wanted to make sure that when somebody ran an inline inspection, they could use 1163 to validate the inspection, and on that basis make a good decision about maintaining the integrity of their system.

Rhett Okay, I'm sorry, I have to ask a question out of ignorance. You mentioned three editions of 1163. Is the third edition the one that's incorporated by reference?

Tom No. The second edition. The second edition is the one that's incorporated by reference.

Rhett But today we're talking about the third edition.

Tom Yes, we are.

Rhett Okay. So given that it's the third edition, what did you do? What was the purpose of the paper? What were you investigating?

Tom Okay.
So I was involved in helping write the section of 1163 that the paper discusses, and that's the validation section. All right, it talks about how you demonstrate that the inline inspection met its performance spec, that it did what you expected it to do. And I was curious as to how it worked out. How would it work out when pipeline companies came along and tried to do this? Would it give good results? Would it make them do more digs than they're used to doing? Would it make them change their integrity management decisions? That's what I wanted to know, and that's why we did the paper.

Rhett And just to elaborate on that for listeners out there: when you qualify an ILI tool, it has a performance specification. We do FFS or burst pressures on the basis of that performance specification. If the tool meets that performance specification, the tolerances you added hold, and hence you have a lot of confidence in your assessment. If it doesn't, then potentially that raises questions about whether you added sufficient tolerance to your assessment. So this has significant implications when you say whether or not a tool met its stated performance specification.

Tom A lot of pipeline companies will take that tolerance number and add it to the reported depths. That gives them a conservative idea of what that defect might look like and a conservative burst pressure approximation.

Rhett And so you did this study using a data set of reported pipeline features and provided excavation results.

Tom Absolutely.

Rhett How many studies? So give me the details of the study. What are we talking about here?

Tom So we looked at a hundred cases. We looked at a hundred pipelines where pipeline companies had contracted with an inspection company, performed an inspection, and afterwards went out and had done digs.
And then in those digs they measured the depths of the anomalies, and they compared them to what the reported depth was from the ILI. And the results were surprising.

Rhett What do you mean by surprising?

Tom Well, the results showed me that things were more conservative than I expected, and by conservative here I mean they were saying that the tolerances that should be applied on an inline inspection are bigger than what the inline inspection companies were saying they are.

Rhett Tolerances that were applied were larger than what the in-line inspection company was saying they were.

Tom Basically that means the in-line inspection companies were a little bit optimistic in terms of how that tool would perform. So that's a big implication in terms of things like dig and repair. If in fact the tool is performing at, say, plus or minus 15% versus plus or minus 10%, it's not as accurate. You take that inaccuracy into account, you have to dig more. If you have to dig more, you have to take more measurements, you have to spend more money, and is it providing any benefit at the end of the day in terms of safety?

Rhett So I want to make sure that I understand this, 'cause now I'm curious about the actual data set. So for each pipeline, you would go through and take a look at the reported features and the reported excavations. But some of these runs you mentioned were, like, past ten years. So the operator at that time chose some number of excavations based on, I'm not sure what—

Tom Based upon their normal procedure.

Rhett And then you determined whether or not the tool met its performance specification based on the digs that you were provided.

Tom Correct, and based on the third edition of API 1163.

Chris Did you group all these? Or did you look at them individually?

Tom I looked at them individually.

Rhett Okay. And so what's curious to me about this, though, is: can't you, if you're not within your performance specification, for instance, do additional digs?
Tom You can. In fact, that's one of the findings of the study. A number of the dig programs that the operators did didn't have enough data points to do a correct statistical validation of results. So we had to look at that, and in those cases either the operator would have to do more digs and take more measurements, or they can revert to something called a Level 1 analysis if the defects are shallow or if they have a great deal of experience with the inspection system being used.

Chris Yeah, if I can, maybe let's back up just a wee bit, if you don't mind, and let's keep it in context of the paper. We briefly talked about this: there were more "shall" statements, is the way I'll say it. There are more requirements in the third edition that are absolute that were optional before.

Tom In the third edition, yeah, there are more shall statements than should statements.

Chris Were any of those making it harder to meet the requirements of 1163? Were any of those ones that we would say are a little bit game changers, or elevate the standard that's needed to say it achieves performance or achieves success?

Tom It made it harder to be able to say that my inspection met its performance specification.

Chris And how did it do that?

Tom Okay, it did it in a couple of different ways. The statistical techniques that are present basically allow you to test the actual performance of the tool. Okay? You look at the results and you say, "Hey, I expected to find, let's say, 8 out of 10 within specification. 8 out of 10 of my measurements should be within plus or minus 10 percent." Well, the statistical methods allow you to determine: is that met? In fact, did I meet that? And what we found was, again, some of the programs didn't have enough measurements to be able to do any kind of evaluation. Some of them had enough, but the evaluation said no, it did not meet its performance spec. And when it says no, it did not meet its performance spec, API 1163 says, hmm, you should reject that inspection.
Well, that's not the right answer, because a lot of times it was close to meeting, and by using a little bit larger specification, you could use that information.

Chris For example, plus or minus 14 percent instead of plus or minus 10 percent.

Tom Absolutely.

Rhett Man, this is such great work, Tom. I have questions I want to ask, because now I feel like I'm really getting onto this. You did 100. How many excavations, just on average per run, do you think you had?

Tom Let's see. We had about 14,000. Okay, so out of a hundred, that meant on average we had about a hundred and forty. That's a big number.

Rhett That is a big number per inspection. I would not expect most operators to do a hundred and forty.

Chris And was it digs, or joints, or anomalies?

Tom Well, it's measurements. If you did a laser map, for example, you might have hundreds or thousands of measurements on a single joint of pipe.

Rhett Got you. Okay. That makes a lot more sense. And so out of the hundred studies you did, and a hundred is nice because it allows us to speak in percentages, how many were outright rejected? Well, we should define rejected.

Tom Let's define.

Rhett Yeah, define rejected. You should define it; you wrote the paper.

Tom Well, there are several ways in which you can reject an inspection in 1163. One is if you don't have enough measurement points.

Chris Okay, so Level 3 now says you have to have a minimum number of features? Because before—

Tom For a Level 2 or a Level 3 evaluation, you have to have a minimum number of measurements. And that number for a Level 2 is six. The number for a Level 3 is ten. So some of the inspections had less than that. And basically what would happen is you'd be sent back to the schoolroom to get more data and more information so that you could run the evaluation.

Rhett But that's not a fault of the ILI vendor.
Chris Yeah, but I'm going to get some low-hanging fruit here: you don't care if it's in the same joint, as long as it's unique features.

Tom Unique features.

Rhett Got it. At least 10 measurements. So how many out of the 100, approximately, didn't meet the minimum threshold of measurements to validate the tool? Which, again, has nothing to do with the ILI vendor; they can't control that.

Tom In our case it was 13, so a little under 15%. So basically one in six or seven didn't meet the specification.

Rhett And then our second group, or what I'm calling the second group, maybe you called it a different thing: how many would have resulted in qualifying a larger tolerance than what was in the ILI vendor specification?

Tom I'm going to say the numbers are probably about 60 to 65%. It's a huge number.

Rhett That's a big number. Yeah, it's a big number. Two thirds of the ins— I want to state this, and correct me if I'm wrong, I want to make sure I'm not misstating this. Two thirds of the inspections that you looked at could not have justified using the ILI vendor specification—

Chris Stated performance spec.

Rhett Stated performance specification, and instead would have needed to use something larger to statistically justify their fitness for service.

Chris And I'll qualify that statement by saying these were all within the essential variables of the system. We're assuming that these were all within, like, normal wall thickness range, DQAs were reviewed and all. So we're going to put that aside and say it wasn't an essential-variable situation.

Tom Agreed.

Rhett So out of this two-thirds, I don't know if you can opine on this: is there any portion of those that, maybe if they'd done additional measurements, could have benefited, maybe gotten to that ILI tolerance?

Tom Yes.
We looked at that, and a handful, meaning maybe 10 or 15 of them, could have used more measurements and may have passed the API 1163 validation with more measurements.

Rhett That still leaves half, though. It still leaves a lot. I mean, that's still telling you half did not meet the performance spec.

Tom It gets a little worse than that.

Rhett Oh, my God, you're getting darker?

Tom Forty out of a hundred, the API validation actually said it did not meet its spec, period. And in API 1163, when you have this outcome, it's called Outcome 1 in the document. If you have Outcome 1, you're supposed to reject the inspection. Not a good idea.

Rhett It's an expensive idea.

Tom It's an expensive idea. And what makes a lot more sense is to look at whether you would have passed the validation using a bigger tolerance. And that's what we demonstrate.

Rhett There's not a loop that's currently in there that says evaluate for a larger tolerance.

Tom That's right. That's what it needs to have in the next edition of 1163: a way to go back and say, okay, I didn't pass at plus or minus 10 percent. What happens if I use 11% or 15% or 20%, and I pass?

Chris Tom, let's go through a scenario here, because I like where you're going with this, and I always like to try to make it practical. Rhett calls it, I say, the "so what": like, what's the so-what of this? So let's say we're dealing with the second edition, and we don't have a defined minimum number of samples needed to do a Level 2 or Level 3. Is there a scenario that you would buy off on where this operator says, I have 10 samples and it's at plus or minus 14%, can I not say that that is a Level 3, where the spirit of the Level 3 is to define its performance? And then you use that performance in your integrity assessment.

Tom Okay, right now API 1163 doesn't have a way to do that.
There are some things in there that allow you to use larger tolerances. For example, if you do a Level 2 validation, Level 2 provides what's called an equivalent tolerance, but those equivalent tolerances are sometimes pretty large. All right, I'm sorry, they're bigger than what you would get if you just increased the tolerance to see what would cause it to pass.

Rhett So before we take a break, I want to wrap up the results of the study. So I just want to reiterate it. We had about, let's just say, 15% that outright failed—or, no, didn't take enough data points. About 15% didn't take enough data points. About 10%, they didn't meet the specification, but had they taken more measurements, potentially could have met the specification. 50% outright failed the specification. And then I assume that leaves about another 10 or 15% that passed.

Tom It's in that vicinity. 20, 25% would have passed.

Chris I know what I was going to say. When we come back, I know what I was going to ask him.

Rhett I just want the audience to chew on that, 'cause I understand why you're surprised now. That does not sound... does that sound pleasant? That does not sound good. And I'm going to be honest: in the plenary, there was another paper that took a look at ILI performance over the last 10 years and presented thousands of data points. I think that was Sherry Bacham's paper. She did endeavor to drive that point home. But she did present a similar picture, with, I think, wider scatter than many of us may be aware of from our excavations.

Chris So the question when we come back is—

Rhett Why don't you take us to the break, Christopher?

Chris Why? Why would you say so? You know, why do we think that is? Why do we think that two thirds of the data sets are not meeting their performance specs?

Rhett So when we come back, we're going to hear Tom's thoughts on why we think that. Stay tuned and we'll be right back when you join us.
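For listeners who want to see the arithmetic behind the "8 out of 10 within spec" discussion, the kind of statistical check Tom describes, and the "missing loop" of widening the tolerance until the data passes, can be sketched in a few lines of Python. This is an illustration only: the function names, the 5% significance threshold, and the simple one-sided binomial test are our assumptions, not the exact API 1163 Level 2/3 procedure.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed from the exact terms."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def meets_spec(depth_errors, tol, certainty=0.80, alpha=0.05):
    """Hypothetical check of a 'certainty out of n within +/- tol' claim.

    depth_errors: field-measured minus ILI-reported depths, in %WT.
    If the tool truly delivered its certainty, a result this poor or
    poorer would be rare; we reject only when that probability
    drops below alpha (one-sided binomial test).
    """
    n = len(depth_errors)
    k = sum(abs(e) <= tol for e in depth_errors)  # hits inside the band
    return binom_cdf(k, n, certainty) > alpha

def smallest_passing_tolerance(depth_errors, start_tol=10):
    """The 'missing loop': if +/-10 %WT fails, try 11, 12, ... until pass."""
    tol = start_tol
    while not meets_spec(depth_errors, tol):
        tol += 1
    return tol
```

Under these assumed numbers, a ten-dig program with only six hits inside the band still passes (consistent with Tom's point that 6 out of 10 doesn't automatically mean the spec was missed), while five hits fails; widening the band one point at a time then finds the tolerance the data would actually support, which is the feedback loop Tom suggests for the next edition.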
*theme song*

Rhett All right, welcome back as we continue our conversation with Tom Bubenik. Extremely excited to be here with you. So we set the stage in the first part of the episode; I'm just going to recap it real quick. Tom looked at the real-world performance of API 1163 using approximately 100 inspections, and using the data from those inspections, we found that about 10% of those inspections had insufficient data points to assess at all. We found that about 15% of the inspections couldn't meet the stated performance specification of the tool but, had they taken additional measurements, potentially could have gotten over the hump. About 50%, no matter what, could not meet the stated performance specification. And we discussed the fact that there's a loop missing in 1163 right now that would allow them to go and evaluate for a larger specification. And then finally, there's 10% to 15%, and I did not verify during the break whether this adds up to 100. Somebody, I'm sure, will leave a comment on the podcast that it doesn't. But about 10% to 15% passed and got an A in the course.

Tom A little bit less than that. But—

Chris I had a question.

Rhett And then Chris asked a question.

Chris Why do you think it was... I remember two-thirds. Why is it that you think that about two-thirds of them were not able to achieve their specification? Did you rationalize it at all? Do you have any ideas?

Rhett Why is it surprising?

Chris Yeah, well, no, for me it's: why do you think that may happen? Like, what do you think the reason is that they're not able to achieve their spec?

Tom I think that, in general, when we think about ILI performance specifications, they're made under generally idealized conditions. All right? They're what the tool will do on a good day. It's what the tool would do when the sun isn't as bright and it's not raining in the ditch and all of those sorts of things.
And so instead what we have is the real world. And in the real world, things are not quite as good as what we would hope them to be.

Chris Wait, so when we look at the pull-test spools that have real nice, circular-type features at various depths in really cool patterns, you're saying that's not how corrosion looks in the real world?

Tom That's not how the real corrosion world looks.

Rhett Oh boy.

Tom And as a result the performance specs tend to be optimistic, maybe a lot optimistic. It depends upon the inspection technology.

Chris It's so subtle. Did you hear how he did that?

Rhett See, I was thinking that it's surprising to me, because I do still feel like we've done a good job of keeping our pipelines safe. And I guess the reality is that, as somebody who used to be at Stress Engineering used to tell me, fortunately steel is ductile. And we use appropriate safety factors, right?

Chris Yeah, I mean, that's why we like Modified B31G, right? Because, again, as long as you get the depth, it's close enough. You're digging the right stuff, and let's not get onto the RSTRENG conversations. We get the point.

Rhett No, that's not so. Tom, I want to play devil's advocate a bit now, right? Because you presented... so I want to play the opposite side, and I want to challenge your findings.

Tom Go ahead.

Rhett The first thing—

Tom Go ahead. You will lose, but you can challenge it.

Rhett You're a consultant, and as consultants we normally get the problems that are the tougher, more difficult, more challenging ones. In the hundred data sets, which I'm assuming you randomly selected from DNV's database, is it possible that you have some form of selection bias there, that you're only seeing the bad children? Because I would assume somebody came to you and gave you that data set because they had problems, potentially. You think any of that could potentially be a player?
So I'd say, maybe your dark-world picture is a little rosier?

Tom I think that's probably possible. The 100 cases come from a large variety of different pipeline operators, so it's not all from the same company.

Rhett How did you go about selecting them? Did you randomly do it?

Tom We actually looked at ones that we had analyzed, as you mentioned, for other purposes.

Rhett You should have said a machine-learning and a neural network. Oh, yeah.

Tom I love machine learning. I love statistics. I love all of that stuff.

Rhett So you think that it's unlikely that that played a role?

Tom Well, I think it could have impacted what we're doing. It could have made things look a little bit worse than they really are. But the fact is, if 50% or more don't pass the validation, that's a big number. It's a big number whether it's 50% or 40% or 60%.

Rhett I would say it is a big number. I agree. I think if there's one thing that strikes me from this episode, it's that number, because I don't think anybody would have guessed 50%, and I'm real curious what our listeners' reactions are to that number. But it really begs the question for me. I like to do a lot of studies like this myself, Tom, really looking at how good are we at doing something, whatever that is: whether it's bending strain, whether it's dents, we did repeatability studies on dents, how well do we do this? Where do we go from here, though? I mean, I feel like you picked a scab, or you turned over a rock, and now we can't put that rock back down. So what should we do next?

Tom I think there's a couple of things that need to be done.

Rhett More people take photos. We don't have this many people take photos of us, Tom. I'm just telling you, you're kind of a big deal.

Chris I told you I wanted him to come on the Sage series.

Tom Thank you. I appreciate that.

Rhett Where do we go from here?

Tom I think there's a couple of places that we can go.
One is, I think pipeline operators need to be prepared for this, all right? If PHMSA decides to incorporate by reference the third edition of API 1163, companies are gonna be required to face these problems themselves. And they're going to have to have a way to realistically estimate what the tolerances are, all right? If we fail at plus or minus 10%, we might pass at plus or minus 11 or 12 or 20%. So step number one is: be prepared. Step number two, I think, is to recognize that the tolerances that you will be using in your integrity decisions may be larger than what you've historically used. In other words, if you're used to adding 10% to the reported depths in order to look at what might be a problem in the future, maybe you have to add plus or minus 15, or plus or minus a different number.

Chris Yeah, and let's get there though, Tom, right? I mean, let's have a brief discussion on that. Am I wrong, from your perspective? I've been saying this: it's ILI-system dependent. And so while I'm keeping a database to understand the performance of the tools I've been using, whether I as an operator do that or I'm relying on the ILI vendor to document that, that's a different conversation. But like I said, it's system dependent, right? So if I run a newer-resolution tool, ultra-high, with all these fancy algorithms that are being developed and new data transformations, then I have to do that again and understand what the performance of that system is. And maybe the picture looks a little bit better, is a potential outcome.

Tom That's possible, but there's physics involved here too.

Rhett Oh my gosh.

Tom Part of the trouble is...

Chris You're starting to go like the Bruce Nestleroth podcast.

Tom That's exactly what it is. And Bruce and I are good friends. The physics of the problem is that there's only so much you can get out of an MFL-type system. Okay?
You're never gonna get to plus or minus one or two percent, or plus or minus five percent. Plus or minus ten percent is about as good as you're gonna get. Yeah, maybe plus or minus eight or nine.

Chris I think what I was getting to, Tom, and I started to cut you off, was more that, while maybe the data we looked at over, call it, ten years paints one picture, it's possible, like even in Sherry's presentation, right? She was like, but if I look at data only over the last so many years, with only the top quarter of ILI vendors, the performance went up and it was actually more attractive. So what I'm saying is, one of the messages I think we're trying to communicate is: this is a good lessons-learned.

Tom Right.

Chris Let's be diligent about what we're doing moving forward, because the ILI systems are improving, but we can learn about what we need to start looking out for.

Rhett Sherry's presentation didn't show that either. Just for the record, it didn't. In fact, she showed flat-line performance. You're right, she did argue that in the one quarter, potentially, it got better. But, so let me ask that question. Tom, is this the last study we're gonna see on this? It feels like you have to do more work on this now, 'cause all you've done is tell us that we have 50%. I mean, are we gonna do things like start slicing and dicing by newer inspections? Are we gonna broaden it? I mean—

Tom All of that's useful information. I think, yeah, all of those things make sense. And I think it is valuable to take a look at how technology is evolving and improving. Some of the studies we've done show, yeah, there's a little bit of a slope, that improvements are taking place, but they're not taking place at a rapid rate. This is a pretty mature technology.

Rhett I could not agree more. I absolutely agree. So I guess what I was asking is: are you gonna do an IPC follow-up on this?
Do we get an expanded data set, you think? Or have you turned this rock over, found the problem, and put it back down?

Tom Put it back down, that's right. No, what I'd like to do is, I'd like...

Rhett You opened Pandora's box, Tom. You let it out; Pandora's out running.

Chris It's one of those things, we've experienced this before, when you touch a sensitive topic and then all of a sudden somebody approaches you and snaps. It's like, you probably turned over a rock, there was a snake, and you're like, oh crap, put it back down, we don't want the guys coming after us.

Tom I think what we need to do, and maybe what we can do, and I'm looking in a crystal ball into the future, is get more data, get a broader range of ILIs and dig programs, and not just those that happen to be sent to my company to look at. Let's get some from day-to-day operations and run similar sorts of analyses.

Chris Man, if only PHMSA would have had an initiative to do a voluntary information system.

Tom Gee whiz.

Chris Man, oh, too bad that never took off. Wait, do we have one of those? Anyways, let's move on.

Rhett Y'all, I went right past that. I'm aware of that, and even I was slightly slow on that. So, do I get one more attack? Any chance that pipeline age played a role here? Was there any bias towards vintage of the pipeline, newer or older, do you think, in this data set?

Tom Good question. I think, in general, we believe that pipeline age itself doesn't necessarily have an impact on overall integrity, but corrosion is an ongoing process. So corrosion does tend to make systems more likely to have complicated corrosion: it gets deeper and deeper as they get older and older. And I think, from that perspective, yeah, I guess older systems might be a little bit tougher to inspect because they have a little bit worse corrosion on them, or more complex corrosion.

Chris So I want to—

Rhett Wait. What about diameter?
Because Sherry brought that up: larger diameter, they saw better performance. Any bias towards diameter in this study?

Tom I don't know. We didn't look at it. But diameter does have an impact on ILIs. Smaller diameters are just tougher to get things to work.

Rhett And that was the point that she brought up in her presentation as well. All right, go, go, go: free question.

Chris So often when people think 1163, we think about statistics.

Tom Right.

Chris And coming from an ILI vendor, one of the things we always either dreaded or enjoyed was: did we reach 80% certainty, or 8 out of 10 within the tolerance band of, let's just use, plus or minus 10%? Is that enough statistics? Can I just look at that? Can I just say, hey, I met 80% certainty on these features, so I'm good to go, I'm done? Or do you think I maybe need to look at something else?

Tom Well, I know you worked for an inline inspection company. And one of the things you—

Rhett Worked, past tense: work-ED.

Tom Inline inspection companies understand that just because you get maybe 6 out of 10, or 4 out of 10, that meet spec doesn't mean that your inspection didn't meet spec, and they're good at that. Basically, though, what's been put into 1163 are some good statistical tools to evaluate: if I get six out of ten, what's the chance that I really did meet my spec? If I get seven out of ten, is that good enough? If I get ten out of ten, does that guarantee that I've met the spec? That's what the statistics are used for.

Chris Yeah, so I appreciate you bringing that up, right? Because that's where we start using certain terminologies, like confidence, right? And we shouldn't confuse certainty with confidence. Those are two different things. So if PHMSA, perchance, says, hey, the system has to achieve at least a 90% confidence, that's not the whole story, right?

Tom No, that's not.

Chris Yeah, we need to have certainty and confidence. Would you agree?
Tom We need to have certainty and confidence. And it's a confusing topic for a lot of people. Certainty means: if I were to select 10 locations, and I took measurements at those 10 locations, how many of them are gonna be within spec? And we expect eight out of 10.

Chris 80%, roughly, yeah.

Tom Confidence has to do with: if I were to repeat that experiment over and over again, select a different ten locations, dig them up, measure them, let's say I get nine out of ten, or seven out of ten; on repeating this over and over again, how many times will I meet my eight-out-of-ten criterion? Now, that's not a statistical definition; that's how I understand it.

Chris It gives us a practical way to begin to understand certainty versus confidence. So I want to have, at least in this case, 90% confidence that I can achieve 80% certainty. There you go, guys.

Rhett So, Tom, again, I want to say this has been a pleasure talking to you. You and Mike Rosenfeld are two guests that I feel were long overdue; it's been great having both of you on. And if this podcast goes out of order, I don't know which of you is going first. Now I might have upset how it goes. But I want to give you the chance: based on this study, I'm going to give you the floor. What message do you want to leave with operators, or the audience, on this study moving forward?

Chris Or anyone who's listening to this podcast.

Rhett This is your moment.

Tom Okay, I can say anything I want.

Rhett You actually can.

Tom I could say anything I want.

Rhett We can also edit it.

Tom You can edit it out. That's right. And... okay, did we bring us back?

Rhett Go ahead, what would you like to say?

Tom Well, my main advice to the pipeline companies is to be prepared. This is coming down the pipe. We're gonna see it eventually in the regulations, and the regulations are gonna require, or call for, pipeline companies to validate that their inspections in fact work.
And so something like this is gonna come down, whether it's API 1163 third edition or anything else; something's gonna happen. So be prepared. Be prepared to use tolerances other than what you're given by the inspection company. Inspection companies are good at what they do, but the real world is not always as generous as they would like us to believe. And for that reason, sometimes the tolerance isn't plus or minus 10%; sometimes it is plus or minus 15%. And be prepared to address that sort of issue on your pipeline system when you get your inspection results.

Rhett Tom, thanks for joining us. Really appreciate it. To our audience, I think you now know why so many of you wanted to hear from Tom on our podcast. Certainly an honor to have him. To the people walking by filming him: they offered, you know, we did not text them and tell them to do that. Totally unsolicited photographs. But thank you for joining this episode, and we'll see you again in two weeks.
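Tom's certainty-versus-confidence distinction near the end of the episode can be made concrete with a small simulation of his "repeat the experiment" thought exercise. This is a sketch under assumed numbers (80% certainty, ten-dig programs, an 8-of-10 pass criterion); the function name and trial count are ours, not anything from the standard.

```python
import random

def observed_confidence(certainty=0.80, n_digs=10, need=8,
                        trials=20000, seed=1):
    """Repeat the dig program many times: if each dig independently lands
    in-spec with probability `certainty`, how often does a program of
    n_digs show at least `need` in-spec results?"""
    rng = random.Random(seed)
    passes = sum(
        sum(rng.random() < certainty for _ in range(n_digs)) >= need
        for _ in range(trials)
    )
    return passes / trials
```

Running this shows roughly a two-thirds pass rate: even a tool that genuinely delivers 80% certainty will miss a raw 8-of-10 count in about one dig program out of three, which is why certainty alone is "not the whole story" and the statistics pair it with a separate confidence level.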
