In this episode, Trish and Traci chat with David Jamieson, a former Formula One engineer and founder of Salus, who developed an AI copilot for HAZOP workshops to reduce time and improve efficiency. The tool is aimed at assisting process safety engineers by suggesting questions, referencing incident databases and providing augmentative support while emphasizing human critical thinking and final decision-making.
Transcript
Welcome to Process Safety with Trish and Traci, the podcast that aims to share insights from past incidents to help avoid future events. Please subscribe to this free podcast on your favorite platform so you can continue learning with Trish and me in this series. I'm Traci Purdum, Editor-in-Chief of Chemical Processing, and joining me, as always, is Trish Kerin, director of Lead Like Kerin. Hey Trish, how are you?
Trish: I'm really good. How are you doing?
Traci: I'm doing fine and I want to hear about your new role. Can you tell us a little bit?
Trish: Yeah, I'm very excited. This week, I started full operations in my own business, so Lead Like Kerin Pty. Ltd. is now fully up and operational, and I've got some really exciting work happening around the world coming up. So, I work in helping companies improve their process safety and material risk leadership and governance space, as well as helping people improve through storytelling because you all know how much I love storytelling. So, I've got some exciting storytelling workshops coming up in Houston at the start of April, and I'm looking forward to getting up and helping people improve their communication to improve their process safety.
Traci: Well, congratulations. That's so exciting.
Trish: Thank you.
Traci: Well, today we are welcoming a guest to our podcast, Salus Founder David Jamieson. David is a former Formula One engineer for Red Bull Racing. He started Salus with a vision of ensuring everyone working in a hazardous industry returns home safely. That's a vision that Trish and I have for this podcast series. So, we are really happy to have you joining us. Welcome, David.
David: Hello both. Thank you very much for having me on, and congratulations on the new venture, Trish. Sounds very exciting.
Trish: Thanks David.
Formula One Racing & Process Safety
Traci: Well, first, let's talk a little bit about how you got into Formula One racing and what led you to your current role as the founder of Salus?
David: Well, when I went to university in the UK, I knew that I wanted to do engineering, and I chose to study aeronautical engineering because I was told it was the hardest engineering degree in Scotland. That was the only reason I chose it. But in doing so, I fell in love with the aerodynamics and the CFD, the computational fluid dynamics side, and the ultimate way to pursue that kind of work was to try and get a job in Formula One, which I was fortunate enough to do. And yeah, as you say, I had two great seasons at Red Bull Racing, working as a CFD engineer on the rear diffuser of the car. That's the bit at the back that helps suck the car to the ground so it can go faster around corners.
That was hugely exciting. And when my wife and I moved back to Scotland, I transitioned using those same tools, but I was modeling fires, explosions and gas dispersion. That led to quantitative risk assessment (QRA), which eventually led to more frontline process safety. And I really just fell in love with working on the operator side, doing hazard studies and HAZOPs and all of that kind of thing. Then in November 2019, I set up on my own. I'd always wanted to have my own business, and I thought that continuing in process safety would be a really worthwhile thing to do. So, we named the company Salus after the Roman goddess of safety and well-being, and we've built a small engineering company. We've got a training course, we've got some software, and I believe at the last count we have seven customers in over 20 countries. So that's a very brief overview of how I got to where I am.
Traci: It's very fascinating, and I watch Formula One racing. My husband's really into it, so I told him that I was talking with you and Trish today. He was a little excited, and he's going to be listening to this later, so very cool. Now we're going to switch gears, obviously, no pun intended. Our discussion today revolves around an AI copilot that you developed that can reduce the time it takes to complete a HAZOP workshop, cutting the time in half, which is very interesting. I'm looking forward to this conversation. But before we jump into the AI copilot program, I want to set the stage a little bit. So Trish, I'm going to toss it over to you: can you talk about what goes into a HAZOP workshop?
What Is a HAZOP Workshop?
Trish: Yeah, so in general, a HAZOP workshop is a process where we use a set of guide words to help uncover potential hazards and risks in a particular part of an operating plant. We work within a small section at a time in each individual HAZOP, called a node, and we go through a series of these guide-word questions to try to determine what potential hazards we need to be managing more effectively. A whole lot of information needs to be prepared to go into the HAZOP. We need to have accurate drawings, and those drawings then need to have our nodes defined so that we can work through the specific nodes and the areas that we are looking at.
We also need to understand some of the operational or design information around how the plant was intended to operate, what sort of instrumentation is in the field, what the typical flow rates are, and the other process safety information that needs to feed into the HAZOP, as well as understanding past incidents that have occurred in similar systems or systems that could be similar to what we are doing the HAZOP on. So an enormous amount of data and information needs to be gathered to prepare for a HAZOP.
Then we get into the workshop, and we go through the guide-word process where we look at a particular line and talk about things like high flow, low flow, no flow and reverse flow, and what could eventuate in each case. So, a HAZOP is a very structured process, and it requires subject matter experts to be engaged in it: the operators who operate the plant, the maintainers who maintain it, and the engineering experts for the various elements around it too. That's why HAZOPs take so long; we're bringing together a lot of subject matter experts to sit together for quite a period of time and really focus on what's going on around those particular nodes.
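To make the node-and-guide-word structure Trish describes concrete, here is a minimal, illustrative sketch in Python. The class, guide words and example values are assumptions chosen for illustration only, not details from the discussion or from any particular HAZOP tool.

```python
# Illustrative sketch only: a minimal representation of a HAZOP node and the
# guide-word deviation questions a workshop team might walk through.
from dataclasses import dataclass, field

GUIDE_WORDS = ["no flow", "low flow", "high flow", "reverse flow",
               "high pressure", "low pressure", "high temperature", "low temperature"]

@dataclass
class HazopNode:
    name: str                                            # e.g. "Node 3: condensate transfer line"
    design_intent: str                                    # how this section is intended to operate
    drawings: list[str] = field(default_factory=list)     # P&ID references
    process_data: dict[str, str] = field(default_factory=dict)  # flows, pressures, etc.

def guide_word_questions(node: HazopNode) -> list[str]:
    """Generate the deviation questions the team discusses for one node."""
    return [f"{node.name}: what could cause '{gw}', and what would the consequences be?"
            for gw in GUIDE_WORDS]

node = HazopNode(
    name="Node 3: condensate transfer line",
    design_intent="Transfer condensate from the storage tank to export at 20 m3/h",
    drawings=["P&ID-1234 Rev C"],
    process_data={"normal flow": "20 m3/h", "design pressure": "10 barg"},
)
for question in guide_word_questions(node):
    print(question)
```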
Traci: So, streamlining it would be very beneficial. And David, I want you to tell us about the AI copilot: how it was developed, who was involved in creating it, and what part of the HAZOP workshop would benefit from AI?
Introducing AI to HAZOP
David: Well, as Trish quite rightly said, it's incredibly difficult to do a good HAZOP. There's so much work that has to go into it. You have to have all the right people, you've got to have them in the right frame of mind, and you've got to have the right information available. You've also got to either remember or have at hand lots of relevant information, like past incidents or recent management of change records. So I've always been fascinated by how we could improve that process while retaining the same high quality of risk assessment. We've got a small software division in Salus, and we'd been dabbling in quite a few things for a few years. Large language models, the technology behind apps like ChatGPT, were available for a few years before ChatGPT launched, although nowhere near as sophisticated as they are now.
And we were experimenting with those to see if they would help improve things, and we were finding that we weren't getting very far. So, in summer 2023, the entire company took a week out of being a company, and every single person, our deep learning AI people, our safety engineers, our software devs, even our sales and marketing team, spent the whole week seeing how far we could go with what we'd built so far. What came out in the end was a showcase product called HAZOP AI; the landing page is still available at hazop.ai. And that did three things. If you described a HAZOP node, as Trish has just explained, and the equipment that's there, what the process is and where it's located, it would first of all give you a list of likely questions to be raised in the HAZOP.
So, if you were someone preparing for the HAZOP, you could make sure that all of that information was to hand. Secondly, we took a copy of the IChemE process safety incident database, which I believe has several thousand incident abstracts. The tool would pull out the 10 incidents most relevant to the process you were studying, tell you all about them, and suggest what you could discuss in the HAZOP to see if you would be susceptible. And then the third part, which maybe we'll get onto because I'm not really supportive of this type of thing, was that it would start to pre-populate the HAZOP worksheet. So, the idea was just to see if it could be done, if it could be done to a decent level of accuracy, and really to see what other people thought of it.
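One simple way to implement the incident-lookup step David describes, ranking incident abstracts against a node description, is plain text similarity. The sketch below is an assumption-laden illustration, not Salus's implementation: the abstracts are placeholders standing in for a licensed database, and a production tool would more likely use embedding models or an LLM retrieval pipeline rather than TF-IDF.

```python
# Minimal sketch of ranking incident abstracts against a HAZOP node description.
# Placeholder data and TF-IDF similarity only; not Salus's actual approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

incident_abstracts = [
    "Overfilling of a condensate storage tank led to a vapour cloud release during export operations.",
    "Pump seal failure on an export pump caused a hydrocarbon leak and small fire.",
    # ... a real database would hold several thousand abstracts
]

def most_relevant_incidents(node_description: str, abstracts: list[str], top_n: int = 10):
    """Return the top_n abstracts ranked by cosine similarity to the node description."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(abstracts + [node_description])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:top_n]
    return [(abstracts[i], float(scores[i])) for i in ranked]

node_description = ("Condensate storage tank with level instrumentation, "
                    "feeding an export pump on an offshore installation")
for abstract, score in most_relevant_incidents(node_description, incident_abstracts, top_n=2):
    print(f"{score:.2f}  {abstract[:60]}...")
```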
And probably the most miraculous thing that came out of it was that over 80 companies got in touch, and we were a very small business. For that many companies to reach out and arrange a call with us is quite big, and despite people being skeptical, there was huge interest in trying to solve some of these things. So, I do think there's a lot of legs in pursuing this, but it has to be done in a way that's accurate and that improves process safety. We've since gone on to build a few bespoke products for companies. I think to do something as general as HAZOP, given it can be used in oil and gas, petrochemicals and pharmaceuticals, it's very difficult to get something that will apply or be accurate across all of those industries. So, I think we're not quite there yet on a tool that will do all of HAZOP, but certainly bespoke tools, or tools with a very narrow window of application, have a good future ahead.
Traci: You train it to extract good information, but how do you get it to apply to big things, like you said, across the pharmaceutical industry and oil and gas? What is the goal? How do you do that?
David: Yeah. Well, right now with large language models, although what they can do is hugely impressive, and I'm sure every single person listening has at least sampled ChatGPT, that final 5%, getting it to be accurate around all of those edge cases, is incredibly difficult. Even some of the people who developed these large language models don't fully know how they work, and there are many who believe they're not as sophisticated as we think they are: they're simply answering questions as if they had the answer written in front of them, because they've read it somewhere on the internet before. For applications such as HAZOP, there's probably not much published literature in their training data, so a model is almost certainly going to come across a process, or a group of equipment, that it's never seen before.
And it's really those cases where my concern is: how does it do there? So, what we've been doing up until now is giving it lots of really advanced prompt engineering. Even a simple task that seems like it could be done in one step, we actually do in 20 very small steps, just to minimize the risk of errors or hallucinations, as they're called. We also give it lots and lots of past HAZOPs, so it really understands the process and, as I've said, past incidents. So it's about giving it the right level of sophistication in the prompts, so it's very clear what it's got to do, and giving it the right data, so it's got the right context to answer. But, as perhaps we'll get onto, you're always going to run the risk of some inaccuracy. So what tools are you going to build using AI that minimize the risk of that happening?
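A rough sketch of the "many small steps" idea David mentions: instead of asking one broad question, the task is decomposed into a chain of narrow prompts, each grounded in supplied context. Everything here is hypothetical; `call_llm` is a placeholder for whatever model API is used, and the step wording is illustrative, not Salus's actual prompts.

```python
# Hypothetical sketch of decomposing one HAZOP-support task into small, tightly
# scoped prompt steps. call_llm is a placeholder, not a real provider API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Placeholder: wire up your model provider here.")

def review_node(node_description: str, past_hazops: str, past_incidents: str) -> dict:
    """Run a chain of narrow prompts, each building on the previous step's output."""
    steps = {
        "equipment": "List only the equipment items explicitly mentioned below.\n",
        "deviations": "For each equipment item from the previous step, list credible "
                      "flow, pressure and temperature deviations. Do not invent equipment.\n",
        "precedents": "Match each deviation against the past incidents provided. "
                      "Say 'no match' where nothing applies.\n",
    }
    context = (f"NODE:\n{node_description}\n\nPAST HAZOPS:\n{past_hazops}\n\n"
               f"PAST INCIDENTS:\n{past_incidents}\n")
    results: dict[str, str] = {}
    carry = ""
    for name, instruction in steps.items():
        # Each step sees the fixed context plus only the previous step's output,
        # keeping every prompt narrow to reduce the room for hallucination.
        results[name] = call_llm(instruction + context + "\nPREVIOUS STEP:\n" + carry)
        carry = results[name]
    return results
```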
Traci: I was going to get into the accuracy question, and you brought up the skepticism. We hear from a lot of our readers, and they're very skeptical of AI. They don't understand how they can apply it; they just think it's something Terminator/John Connor related and don't really understand how they can best apply it. A lot of people are dipping their toes into this and talking about accuracy in this setting. And Trish, I want to get your feel for what David just told us.
Addressing AI Skepticism
Trish: I think it's a really exciting space. I think there is a lot of potential here, but as David mentioned, this idea of AI hallucination is something that worries me quite a bit. I was talking about it with someone the other day: there are so many things we can be using AI for, and I love that it can scan data at a rate the human brain just can't. It has this enormous potential to scour all of this really valuable information and present it to us in a usable form very quickly, and I think that is an enormous game changer for what we need to do in process safety. That's fantastic. But I don't think AI, or large language models in particular, are going to be replacing any process safety engineers anytime soon, because someone still has to make sense of that information; the large language models don't yet make sense of it.
They scour the information, and as David said, it's like it reads you an answer that's written down in front of it. It has assembled this answer, and sometimes it does hallucinate and make stuff up. So we need competent process safety engineers who can look at it and say: no, that's just not right, that doesn't feel right, I need to go and do more work on that particular bit, or delve into that node, or challenge that. But I love the idea that it can bring together an enormous data input and give it to you in a more usable form than you could manage previously. I'm not quite sold yet on the full pre-populating part of the HAZOP, although I still think there's value in that too, provided there is discipline and competent facilitation in the workshop to make sure it's not just, oh well, the AI has told us, so we tick that and move on.
You actually have to challenge everything it's telling you. You still have to do the risk assessment on every node through all the guide words, even if it's pre-populating something for you. I think the risk is that we get caught up and think, well, it's there, that must be right, and so anchoring bias comes in: the system's told us that's the answer, so we just believe it and move on. We still need good ways to really challenge it, and maybe the better approach to that pre-population part is that we do the node ourselves first and then check what the AI said, as opposed to seeing what the AI said and then doing the node. So there are ways to manage this, and I think there's great potential here, but we do need to be aware of the challenges and the limitations of large language models.
Traci: And I think it takes some of the drudgery out of it. In essence, you could do more of these HAZOP workshops, because if you're streamlining it a little bit but still going in and verifying through the nodes, then I think it's very useful. But what about security concerns, David? Are there any? And before we get into that, has anybody utilized this and really found that it streamlined the process?
David: Let's take the second part of that first; it extends from what Trish just said about accuracy and about whether it's being used. There are two entirely different ways to use large language models, and I call them AI automation and AI augmentation. With AI automation, that's pre-populating the worksheet, filling in the risk assessment and writing the report for you, and to an extent you'll be rolling the dice on accuracy there. If the activity is writing social media posts for you, it's probably okay if they're not perfect. But if it's going to be used to make risk-based decisions, it needs to be very, very accurate. AI augmentation, which is the direction I believe we should go, is about imagining the best engineer you've ever worked with sitting on your shoulder, telling you things: oh, you've maybe missed that; do you remember that incident that took place; there's actually an open action linked to this system that we're discussing right now, let me bring that up.
So then that moves away from an accuracy discussion and on to a how-valuable-is-it discussion. One of the tools we built for someone was for this exact thing. They've got hundreds of offshore wells in the UK, and their problem was that, because the wells are all near the end of their field life, the administration of risk assessing all of the impairments was actually overshadowing talking about the risk. So what we helped them do is this: they've got an administration team and a safety department that start to populate all of these risk assessments, and then the experts come in to do the assessment. They compile all the information, and all we simply do is use an LLM that does a quality check on their input; it checks against their own governance, it checks good practice and it checks regulations.
And it just asks them open questions based on things they might have missed. So they might not have mentioned whether the well contains H2S, or whether the well could free flow on its own. What that's meant is that when everyone comes to do the risk assessment, they know all the information is there. Then the other thing it does, which is incredibly straightforward, is at the very end it assesses the overall risk assessment, exactly as Trish described: the human beings do the risk assessment, and at the very end it does that same check. That means when it goes to the technical authorities and management for review, they know it's passed the pre-screen and that all the prerequisites are there, so again, they can focus on the risk. So that's using AI to augment rather than to automate.
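As a very rough illustration of that pre-screen idea, the sketch below checks a draft risk-assessment input for commonly missed items and raises open questions before expert review. The field names and questions are hypothetical examples drawn from David's description; the real tool uses an LLM checked against the operator's governance, good practice and regulations rather than a fixed checklist.

```python
# Illustrative pre-screen sketch: flag missing items in a draft risk-assessment
# input as open questions for the team. Field names are hypothetical.
REQUIRED_FIELDS = {
    "h2s_present": "Does the well fluid contain H2S, and is that reflected in the assessment?",
    "free_flow_potential": "Could the well free flow on its own, and is that scenario covered?",
    "open_actions": "Are there open actions linked to this system that should be discussed?",
}

def pre_screen(assessment: dict) -> list[str]:
    """Return an open question for each required item that is missing or empty."""
    return [question for field, question in REQUIRED_FIELDS.items()
            if not assessment.get(field)]

draft = {"h2s_present": "No H2S expected", "open_actions": ""}
for question in pre_screen(draft):
    print("Open question:", question)
```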
And I think that's a really key thing. When you said right at the beginning that a lot of people are skeptical, I think that's because they assume you're going directly to automation with AI, and I'm skeptical of that too. I would describe myself as skeptically optimistic about AI; it really depends on the direction you go. Then just to touch on that last point about security: it's a huge concern today, but I think less of a concern tomorrow. Who owns the large language models just now? Well, there's big tech in the US or there are companies over in China.
I think neither of those are sources that have proven to be trustworthy with our data in the past. So absolutely, to get the very best cutting edge, there are definitely security risks. They do have enterprise models with increased security, but I think that problem is going to go away, because the freely available models that you can train yourself and run locally are only going to get more and more powerful. I don't know how far out this is, but I think within the next five years we're all going to have our own large language models that run on our own machines and that no one else can see. So I would hope that security concern will diminish over time.
Traci: What is some of the feedback you're getting?
David: The feedback we're getting is that everyone seems quite excited, but at the same time they remain quite skeptical. People might be worried about their jobs, and I think when it comes to process safety, for good reason, people are reluctant to make changes. So certainly I don't think there's much appetite for a magic red button that does everything for you. In many major hazard industries, in many countries, that would be against the regulations anyway, and I certainly don't want to remove a single person from the loop. So our biggest challenge, certainly from a Salus point of view, is getting across that what we advocate is having the best engineer in the room at all times, just to give you some pointers; that's what we want to use it for. All it's doing is taking data that already exists in various forms, formats and locations and putting it right in front of you in a manner that you can easily understand.
So yeah, I would say there's lots of skepticism, but lots of excitement too. My fear is that, maybe accidentally or on purpose, that becomes an over-reliance on AI that diminishes the quality of our decisions. Think back to being at university, for example: I'm sure you had to write a big report or a big dissertation near the end. I'm almost certain that every single university engineering project that's ever been written is of absolutely no use to anyone today, and I'm sure the lecturer probably filed it in the bin not long after the grade was awarded. But the value was in putting a student through the process of having to compile it, do the research, do the calculations, construct the argument and build conclusions. The value was in the journey, not the destination.
I think the trouble with using AI to complete a risk assessment is that it takes you straight to the destination, and then you lose the value of the journey. So it wouldn't surprise me, and it's maybe already happened, if we start to see a slight backward step in some areas, or maybe some incidents, as a result of someone just handing it over to the AI. A lot of industry surveys find that employees, particularly Gen Z and even some millennials like myself, are desperate to start using this in their work, so the chances are they probably are using it, whether their employers know it or not. So I am a bit worried that we start to use it a bit too much. I'd be interested to hear what you both think about that.
Traci: Trish, do you have any questions, or what are your thoughts on that?
Trish: I think that's a really valid observation. When we study engineering, we are actually studying how to solve problems with critical thinking; we're not studying much else. We learn how to solve problems in a specific discipline, whatever we study, but we can apply the process across disciplines, which is why just because you studied one form of engineering doesn't mean you can't do other things. David, you're an aeronautical engineer working in process safety; I'm a mechanical engineer working in process safety. I think if we can maintain that critical thinking we learn as engineers and make sure we continue to exercise it, then, as you summed it up, the augmentation use of AI is incredibly valuable and something we should be working with. I love that idea, as you describe it, of the best engineer you've ever worked with sitting on your shoulder.
If we can get to that point, I think that's a fantastic outcome that will help industry. But you're right, we could possibly see some backward steps for a period of time, where people take it for granted, assume it's correct, and use that automated action rather than augmentation. I think there was a case a couple of years ago, in the very early days of ChatGPT, where a lawyer used ChatGPT to write their legal briefs for a court case, and the judge was very, very unimpressed because the briefs were very, very wrong. It became public, and it was a big, obvious misuse at the time.
But maybe we also need to make sure that when we do see AI misused, we call it out and say: that's not what it's there for, that's not its purpose, and that's not helping safety; it's not going to deliver a safer outcome. We need to go back and really encourage and embrace the critical thinking and problem solving that we study as engineers, because that's what actually helps deliver safer outcomes, I think. So for me, it's about really pushing that focus.
David: And we had the exact same thing with HAZOP AI. We gave it loads of examples of condensate storage tanks, and they all had export pumps. Then, when we entered the information for a condensate storage tank that didn't have an export pump, no matter what we did or how much we prompted it, it was still obsessed with this export pump and all the hazards related to it, and that equipment simply wasn't there. It had enough information to know that, but it still got confused. Now, obviously the quality of these things is improving with time, and that problem may not be there anymore. But if you're asking it to solve the whole problem for you, you're always going to run the risk that it's got a little bit confused. And imagine that same problem the opposite way: imagine if it missed a key hazard. That could have a phenomenally bad impact.
So yeah, I'd be interested to hear over the next few years whether regulators want to say anything about how much or how little AI is used in various things. But certainly in the UK, the regulations are very clear about who is responsible and the type of rigor that needs to go into the risk assessment. So it is still people, at the end of the day, who have to sign off on these things and be accountable for their decisions, no matter how you got there. And hopefully that's a human being with some help from AI, not AI doing it for you.
Trish: Yeah, I think the key there is making sure that our decision-makers and authorized people, the ones who sign off that final safety report where there are safety case regimes, still understand that they are clearly accountable for what is in that document. If someone wants me to put my signature on a document, I certainly want to know what's in it, and I want to be across the details so I know what I'm liable for.
David: Absolutely. I agree.
Traci: Well, David and Trish, I appreciate the time today and pushing us to learn more about process safety into the future and helping to ensure that everybody gets to go home after their shift. Unfortunate events happen all over the world, and we will be here to discuss and learn from them. Subscribe to this free podcast so you can stay on top of best practices. You can also visit us at chemicalprocessing.com for more tools and resources aimed at helping you run efficient and safe facilities. On behalf of Trish and David, I'm Traci, and this is Process Safety with Trish and Traci.
Trish: Stay safe.