How (and why) to make Black Swans extinct

Hi everyone, welcome. People are still turning up, so I'll give it another thirty seconds before I start talking about black swans.

I thought this was a really topical cartoon from Michael Mittag. (By the way, do jump into the chat if you can't hear me or something's going wrong; in theory I should be live at the moment.) The cartoon goes back about ten years, and I used it in a book a little while ago, but given the current pandemic it's a pertinent little way of looking at the world.

So, where are we? 12:30. Let me blather about black swans for a moment and give people a minute to get in, in case they're having IT problems. I think most of you probably know me: I'm Julian Talbot, I write books about risk management, I consult on risk, and I like risk. Somehow it all came together so that I get to make a living doing the fun stuff I enjoy. And one of my pet peeves is black swans.

Part of the idea of black swans is reflected in a quote from one of my favourite movies, Fight Club: on a long enough timeline, everything is going to happen and everyone is going to die. The whole idea of black swans is predicated on the unexpected.

I'm glad so many of you turned up. About 120 registered and around 30 of you are here so far, so thank you. I know some of you are in places where it's the middle of the night, on the other side of the planet, so thanks, and as per a couple of emails I'll do my best to keep you awake and make it a bit of fun. If it's any consolation, here in Canberra today it's a very grey day, though not quite as dark; it's about midday here.

Let me first say that I have nothing against Cygnus atratus. I actually love black swans, the water bird: fantastic creature. I can see them from the balcony here, and as we walk around Lake Burley Griffin there are any number of them. They're not the black swans I'm talking about making extinct; they're a different type of black swan. And I'm not really having a shot at Nassim Taleb either. I love his books, I enjoy his thought processes, and I actually think the black swan is a really useful addition to our vocabulary and the way we think about risk. So I'm having a bit of a tongue-in-cheek shot here, but in reality I do think we need to make black swans extinct, because they're lazy thinking. Taleb has said that his essay is about a single idea: our blindness with respect to randomness, particularly large deviations, high volatility and things we don't expect. I think that gets to the heart of this idea of blind spots, because what's a blind spot for one person is really obvious to another.

Before I get into taking a shotgun to the proverbial black swans, let me quickly define terms so we're all on the same page. A black swan has three attributes: it's extremely rare, it has a severe impact, and it's only predictable in hindsight. I'm going to disagree with that and argue, very strongly, that none of it is true and there's no such thing as a bloody black swan.

Let me give you some examples of black swans; this is Wikipedia discussing Taleb's book, along with other people's definitions. These are held up as classic examples: major scientific discoveries,
historical events, artistic accomplishments, the rise of the internet and the personal computer, World War I, the dissolution of the Soviet Union, the September 11 attacks.

I'll start with a couple of these. The personal computer: yes, there were a few people who didn't see it coming, but when you look at the people who did see it coming, to many of them it was very obvious. I'm going to argue that the dissolution of the Soviet Union is not a black swan either. Most of Western Europe, the United States and half the world had been working for years to dissolve the Soviet Union. It wasn't something out of the blue; it was something we actively constructed and worked towards creating, and then suddenly everyone goes, "Oh, a black swan, that was unexpected." No, that's risk management: we were actively trying to dissolve the Soviet Union. The September 11 attacks are often referred to as a black swan too, and I'll come to some examples, but fundamentally we know people have been hijacking planes for years, and we know people have carried out suicide attacks, not just suicide vests but truck bombs and all sorts of things, even flying planes into buildings: in 1945 a B-25 bomber flew into the side of the Empire State Building in fog. So we've got a whole chain of events that says none of this should really come out of the blue. But what do we do about it? That's the whole point of this webinar: what do we actually do to improve?

I'll jump into the chat to see... someone's got a comment there. Oh, hello; a few people saying hi, all good, loud and clear, thank you very much. I'll come to questions at the end.

Let me talk first about this photo. It's not a very good picture, I'll grant you, but it's the North West Shelf gas project at Karratha, and it's a photo I took, so I can use it royalty-free and brag about it a little. I worked at this gas plant for a number of years; I was the manager of security and emergency there, and later came back as a consultant to do a few jobs. One of those consulting jobs, in early 2001, was a security risk assessment. In twenty years of operation they'd had a few security reviews, but this was the first enterprise-level security risk assessment, including the offshore rigs. There's a point to this story, by the way. The plant sits in a beautiful part of the world, where the ocean meets the desert.

Part of the security risk assessment was obviously to look at all the risks and improvements and so on, but I also looked at it with my security and military background. The site is about 230 hectares, roughly 500 acres for people using the old money. It has a couple of outlying facilities apart from the rigs, but basically it's a 500-acre body of pipes, gas tanks, a couple of jetties and the usual administration buildings: the sort of thing you'd expect when you're producing natural gas and oil. When you look at it from a defensive point of view, about 60% of it faces the water, right on the water; the fence goes to the water's edge, so you could turn up in a row boat, climb the fence, and you're in the plant.
And the reason I picked this picture is to show that you could also walk, or in fact drive, up a hill right next to the plant and have complete oversight of it with a high-powered rifle or a shoulder-launched device. Bear in mind, this was before 9/11. The idea of standing there with a MANPADS shoulder-launched missile, or lobbing RPGs into the gas tanks, seemed pretty far-fetched and fanciful, but it was part of the suite of risks I looked at, alongside a whole bunch of others: environmental risks, demonstrations, equipment breaking down, petty theft. When I looked at this risk and asked what we would do if we were facing a military or a terrorist threat (and again, it seemed far-fetched to me, and even more far-fetched to the management team, who were like, "yeah, we're never going to face that"), the bottom line was that we couldn't resolve it with our own resources. If it was a military threat, we would need the Australian military to defend the site; a terrorist threat, similarly. And we had a range of other processes in place.

Then 9/11 happened. By then I was working there as an employee in a different job, and the manager I'd done the review for was called into senior management's office: "What are we going to do? The 9/11 attacks have happened; we need to change posture." He very calmly said, "No, you don't. Here's the security review; it's all in there. Here's what we would do if..." It involved upgrading CCTV, upgrading liaison with the defence force, a whole range of upgrades, including call-off contracts for getting people to safety.

This is part of what I'm saying to us as risk managers, and it doesn't matter whether it's financial risk like the global financial crisis or any other sort of risk issue: think about the worst-case scenario and have something in your risk assessment, in your process, so that it's not a matter of reinventing everything when it happens. 9/11 was not a black swan in the world of security practitioners; a lot of people had been thinking about these things. The value was having the solution already in there: these are the steps we take, we already know what we're going to do, we don't need to rewrite it, we don't need to revisit it, it's been thought about.

So let me get into the nitty-gritty of a few more ideas. There are a couple of things we can do. One I like is the all-hazards approach, which simply says that when 9/11 happened we didn't have to create a whole new cadre of firefighters and ambulance officers and police, because we already had an approach to managing risk, just as we have insurance and contingency and all sorts of other things in place. Another topic I'm going to talk about is probability distributions: thinking about the range of possibilities. And then, crazy ideas. For example, think about the most outrageous risk, the biggest black swan of all you can imagine. What is that? I don't know; I read science fiction, and most of it's in there. Could the Earth be destroyed by a piece of antimatter at any point? Could the moon split in half and cause the end of the world, as Seveneves, a recent book by Neal Stephenson, proposed? All of these things can happen, and it's about entertaining the craziest ideas and accepting that it doesn't matter whether it's antimatter or dark matter or alien
invasion: on a long enough time frame, these are all risks, and we need to think about what we would do, whatever the source is. And then, of course, comes the reality check. I don't know about you, but I'm not going to build a meteorite-proof structure over the top of my roof on the off chance one lands here tomorrow. It's not a risk I care about, but it's not a black swan either: we know the planet is regularly hit by meteors and all sorts of space debris.

Let's frame it from another point of view. This is a Stroud matrix, one of the tools I use. I'm going to bag risk matrices (I've got a book on risk matrices coming out soon, and I've got a lot of bad things to say about them), but I also have a lot of good things to say about how they should be used, and one good use is as a discussion tool. The Stroud matrix is a really simple model for discussing risk and how we look at it.

First there's business as usual: all those little unlikely and minor things. Tools will get lost, computers will fail, a file will be corrupted, something will break at work. These routine things are business as usual. Then there's the routine quadrant; the example here is fraud, but routine also covers things like doors that stop working, or a leak in the warehouse roof that damages some stock. These are likely, but they're not big deals; they're not huge risks. Up in the top-right quadrant I've got the idea of a danger zone: things that are very likely to happen and would be very significant if they did. A cyber attack is something in that space, and my concept with the Stroud matrix is that if a risk is in the danger zone you should already be doing something about it as a really urgent priority: get it the hell out of there.

And then I've got swans in the bottom right: unlikely events that are major. These are all the things in your organisation, your life, your business. In a personal space, maybe that's coming down with a major disease; in a business space, maybe it's a catastrophic mechanical failure, a new product line that fails, or an industrial accident that causes a series of injuries, like the Longford gas plant explosion in Victoria, for example. I've just called them swans, not black swans, because there are white swans, the bleeding obvious ones we know about, but there are also grey swans and green swans and blue swans and all manner of swans we haven't anticipated. We may go somewhere in the Amazon and discover a blue swan, I don't know; we may go to a different planet and discover a new species of swan. The point is that these are all things in the unlikely-but-major quadrant, and that's the topic I'm trying to address with this extinction event for black swans.
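To make the quadrant idea concrete, here's a minimal Python sketch. Everything in it is an illustrative assumption: the 0-to-1 scoring, the 0.5 cut-offs and the function name are mine, not fixed features of the matrix.

```python
def stroud_quadrant(likelihood: float, consequence: float) -> str:
    """Classify a risk into a notional quadrant.

    Both scores run from 0.0 to 1.0; the 0.5 cut-off is an
    arbitrary illustrative threshold, not part of the real tool.
    """
    if likelihood >= 0.5 and consequence >= 0.5:
        return "danger zone"       # likely and major: act urgently
    if likelihood >= 0.5:
        return "routine"           # likely but minor: manage as a business process
    if consequence >= 0.5:
        return "swans"             # unlikely but major: the topic of this talk
    return "business as usual"     # unlikely and minor

print(stroud_quadrant(0.2, 0.9))   # -> swans
```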
Part of the process for looking at this is the ISO 31000 process, and you can relax: I'm not going to do an ISO 31000 webinar today, that's a different webinar. It's just up there as a reminder that there's a logical sequence of steps we can apply as part of finding black swans (I was going to say killing black swans; let's just say finding them) and mitigating those risks.

So let's talk about risk analysis in the context of inherent, current and residual risk, starting with a simple view of risk. You can use a risk matrix to explain, say, terrorism, cost management and capability risks. Capability, in the context of an organisation, might mean we can or can't get the staff and resources we need; in this example it's low likelihood but has the largest impact, so you change the colour to identify it. You can also change the size of a risk bubble to indicate the level of uncertainty around it: the larger the bubble, the larger the uncertainty.

Here's an example of using a risk matrix to convey information. Take this risk: it starts up here as a moderate risk. Take this with a pinch of salt; I'm using the risk matrix as a communication tool, not as an analysis tool, and I'll come to the limitations later. You can show the inherent risk, then show that if we treat it, it comes down to here; if we spend a little more money, we get it down to there; and if we spend ridiculous amounts of money, we can make it an almost irrelevant risk. I'll put this up online as a video so you can pause it and look at the various elements later; I don't want to get bogged down in the details. Essentially, this is a really useful communication tool: the order of magnitude of cost to get a risk down from one area to another (high, medium or low), how volatile it is and how rapidly it can change (shown with different shapes), and the confidence interval. If a bubble covers a larger area, we have less confidence about the exact rating for that risk; if a bubble covered most of the matrix, we'd basically be saying, "I have no idea how risky this is. I don't know how likely it is, and I don't know how big the impact will be."

Let me leave you with that and move on. Here's part of the problem with black swans: we try to think of risks as having a point value. If someone slips on the floor, we say it's "possible" and "negligible", and on the risk matrix we say: this is the risk, this is the reality. But it's not the reality, because risks are not that simple. I've seen risk assessments where people have rated risks to two or three decimal places, and I just think of the client who asked me, "Is it going to rain tomorrow?" I said I thought there was about a one percent chance of rain; I saw them that afternoon dripping wet, and they blamed me. Well, that was the one percent. This is the uncertainty of risk management, and it's about thinking through how we deal with risk when we can't pin it down to a point.

So think about slips, trips and falls, and all the things that can happen: in the mix you've got death, injury, fractures and bruising. As a simple example to play around with on the risk matrix, let's say this rating is what we decided. Again, I'm using this as an illustration, not for
analysis, but let's play around with it. We know that a point rating is pretty notional, a bit useless really in the scheme of things. What do you do with "someone will probably bruise themselves if they slip on a water leak somewhere"? Let's think about what could actually happen. You've got water on the floor, someone walks in and slips on it, and there's a whole range of possible outcomes. Again, this is the black swan idea: there's a whole range of possible outcomes of an aircraft being hijacked, and a whole range of possible outcomes of a new technology being invented. We're dealing now with the risk of artificial intelligence, trying to anticipate how we will manage it. Will it take over the world and create the Terminator scenario, or will it create a paradise we've never known before? These are all things we can anticipate, some not fully, but we can broadly anticipate the direction things are heading with enough thought. There are people whose entire job is to think about this category of risks; just because I don't know doesn't mean there aren't a heck of a lot of people who actually do know.

So think about slips, trips and falls and the range of possible outcomes. You might trip over and have no injury at all: just pick yourself up and go on. You might be bruised, you might break your arm, you might end up in hospital, you might even die. How likely is each of those? We don't know. So what do we do? How do we model it?

I'm going to give you a scenario to imagine: a bridge. This is a really simple risk and it's really easy to model, and I like simple things that are easy to model, because that's what we're trying to do here. Imagine the bridge is in a very cold climate, so it's wet and icy and rainy, and let's think about it in the context of inherent, current and residual risk; there's a reason I want to separate these out. Inherent risk asks: if we had no controls at all, what would happen? Current risk is the risk as it is today, with the existing controls, whatever they may be. And for residual risk, we're asking about our risk appetite: how many people would we allow to die, or what would be acceptable for our reputation as the local council responsible for this bridge?

So what do we need to know? This is the big question. There are a couple of great books on this (How to Measure Anything, I think it's titled, by Douglas Hubbard, who has written a lot on the topic, and a few other people have as well), but the fundamental question is: if we want to assess risk, what information do we have, and what do we not have? Start with the source of the risk: where is it coming from? In this case it's water, it's ice, it's snow, it's nature, so we can narrow it down; this is about scoping. What assets do we care about? In this case it's people; they're our resource, our key concern. What's our exposure? How many people are going to be using the bridge? If it only gets used once every ten years, that's a different kettle of fish from a hundred people an hour crossing it to go to work. What's the likelihood, what's the consequence, what's the range of possible outcomes and their individual likelihoods? And ultimately, what are our objectives? What are we trying to achieve?
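As a sketch of what that scoping information might look like captured in one place, here's a minimal Python record. The field names and every value in the example are hypothetical, purely to illustrate the questions above.

```python
from dataclasses import dataclass, field

@dataclass
class RiskScope:
    """Illustrative container for the scoping questions (hypothetical fields)."""
    source: str                  # where the risk comes from
    assets: list                 # what we care about
    exposure: str                # how often the assets meet the source
    outcomes: dict = field(default_factory=dict)  # outcome -> notional likelihood
    objective: str = ""          # what we're trying to achieve

bridge = RiskScope(
    source="water, ice and snow on the walkway",
    assets=["pedestrians"],
    exposure="about 1,000 crossings per year",
    outcomes={"no injury": 0.9, "bruising": 0.08, "fracture": 0.019, "death": 0.001},
    objective="people cross safely; council reputation intact",
)
print(bridge.outcomes["death"])  # the outcome that sets the consequence scale
```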
These are all things we need to know and understand, and often we don't. Sometimes we have the data: if we're looking at the law of large numbers with insurance claims, and twenty people fell off this bridge in the last year, then we've probably got a good data set, in a really bad way. But let's imagine we don't have data quite that simple. What do we do? How do we figure it out?

Well, we turn to our pool of trusty volunteers. We get a big budget and hire ten stunt people of different shapes and sizes, heights and widths, wrap them in padding and fall protection, and push them onto the bridge: we're going to get a data set. They're fitted with sensors designed to track when they fall, how hard they hit and where, and medical experts then interpret what that means, so we have some idea of the impact, the consequence; and we can assess likelihood based on how often they fall. I'm not actually proposing we do this, okay? It could be a fun experiment, but bags not being one of the stunt people.

We try different set-ups. We put different shoes on them (hiking boots with ice cleats, ballet shoes, thongs, slippers) to get an idea of what different people wear and how they fare. Imagine we take the handrails off; actually, if people hit the handrail, we'll call that falling off the bridge, because we like our stunt people and we need to keep pushing them across again, and if we break too many of them we won't get much of a data set because they'll all quit. So hitting the handrail counts as falling. Then we do another hundred crossings where they can use the handrail, and a hundred crossings again after we put a non-slip surface down. That is, if you like, simulating our inherent risk, our current risk with handrails, and our notional residual risk if we apply the proposed treatment. There is a point to this story, and hopefully it makes sense in context; you can extrapolate it to financial risks or operational risks.

Think about what we're doing here. Again, this is about presenting information. I'm not proposing we get a risk matrix and a group of experts in a room to dream up some numbers; we're using real data from real stunt people, who by now are black and blue and probably on workers' compensation. We make some assumptions. The bridge is always wet or icy. On our consequence scale, the existential risk is one or more deaths; in World War II the military would probably have called losing something the size of a battalion an existential threat, but we're in Sweden, or somewhere where the legislative environment is pretty strong, so one death is a bad thing. The time frame is one year, and we have a frequency: we know a thousand people a year use the bridge. Fantastic. So we model a hundred people crossing the bridge, we look at the current risk, and this is how many stunt people fell over; you can see the distribution of how injured they were in this notional study.
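Here's a minimal Monte Carlo sketch of that stunt-person thought experiment. The per-crossing fall probabilities for the three control states and the severity split are made up for illustration; none of these numbers come from the talk.

```python
import random

# Hypothetical probability that a single crossing ends in a fall,
# under each control state (illustrative numbers only).
FALL_P = {
    "inherent (no handrails)": 0.30,
    "current (handrails)": 0.10,
    "residual (non-slip surface)": 0.02,
}

# Hypothetical severity split for a fall: (outcome, probability).
SEVERITY = [("no injury", 0.60), ("bruising", 0.30),
            ("fracture", 0.09), ("death", 0.01)]

def crossing_outcome(p_fall: float) -> str:
    """Simulate one crossing: maybe fall, then draw a severity."""
    if random.random() > p_fall:
        return "safe crossing"
    r, cum = random.random(), 0.0
    for outcome, p in SEVERITY:
        cum += p
        if r < cum:
            return outcome
    return SEVERITY[-1][0]

random.seed(1)
for state, p in FALL_P.items():
    results = [crossing_outcome(p) for _ in range(1000)]  # ~1,000 crossings/year
    falls = sum(r != "safe crossing" for r in results)
    print(f"{state}: {falls} falls, {results.count('death')} deaths per 1,000 crossings")
```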
Then we look at the inherent risk: up here, with no guardrails and nothing to hold on to on this icy bridge, we've killed two people and badly injured pretty much everybody, one way or another. And then we look at the residual idea of putting a non-slip surface down, and at what that looks like.

So what are we talking about, in terms of all hazards and in terms of presenting and analysing data? We're looking at a different model. We're no longer saying a slip, trip and fall is a point-and-shoot value, and we're no longer saying "I never thought someone could fall to their death on it", because we modelled it. We looked, and we said: this is the worst-case scenario. The model is a bit more dangerous than I'd like to think the bridge is in real life, but it's a model. How do we then present risks in ways that make sense?

I'd argue there's a better matrix, where likelihood runs from zero to one. That's a probability; it's basic maths, and things don't get more likely than guaranteed. Consequence also runs from zero to one, where one is whatever your existential threat is: the entire budget of your organisation, the loss of your platoon in a military scenario, whatever you consider a consequence of one to be. Then you model risks not as points but as probability distributions. What you see very quickly in this model is that risk A is pretty benign, but risk C and risk D are right out there, off the Richter scale: they're going to start killing a lot of people, particularly risk D, whatever it is. Anything in that category is starting to become your black swan, except that it has been anticipated: it's a known bad risk.

I won't bore you further; you get the general idea. Risk is about likelihood and consequence: work out what it looks like and map it. And you can do better again. Look at risk C: it's a double dip, the camel with two humps. People might ask how you can have that sort of probability distribution, and the answer is that it's a bit like contingency. Say you're building a new submarine. You might say the cost overrun is likely to be this, and that's the left-hand hump of the curve: this is where we think our cost will be, and we can even do some analysis and say this part is starting to get unacceptable, so we have to think about how we manage our project risks. The right-hand hump is the part where we've actually had the discussion, instead of just going to our bosses or our politicians and saying, "No problem, we've got a great submarine, we can bring this in on time and on budget," which of course is what we all want to say. We might say: if you choose not to go with the Swedish design, or not to extend the existing Collins class, that introduces a whole new kettle of risk, and the cost will almost certainly blow out. That hump is, if you like, contingent, and you can make an assessment of how likely it is that the contingency becomes part of the outcome.
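One way to sketch that camel with two humps is as a mixture of two cost distributions: a base-case hump, and a contingency hump that fires some fraction of the time. All the parameters below are invented for illustration.

```python
import random
import statistics

def contingent_cost() -> float:
    """Sample a project cost in $M from a two-hump mixture (invented numbers).

    80% of the time we land in the planned hump; 20% of the time the
    contingency fires (say, a whole new design class) and costs blow out.
    """
    if random.random() < 0.8:
        return random.gauss(100, 15)   # base-case hump
    return random.gauss(250, 40)       # contingency hump

random.seed(7)
samples = sorted(contingent_cost() for _ in range(100_000))
print("median cost ($M):", round(statistics.median(samples)))
print("P90 cost ($M):  ", round(samples[int(0.9 * len(samples))]))
```

A histogram of those samples shows exactly the double dip: most of the mass around the planned cost, and a second hump out where the contingency lives.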
If you're looking at four options, as we were in Australia with the submarine project, well, one of the least likely options turned out to be the one we chose: a whole new model, a whole new class. Unsurprisingly, our cost overrun turned out to be at that end of the scale. But that was entirely anticipatable; it was entirely something we could model and understand.

Now let's look at how you get to this in another area, and I'll give you an example. This is a concept I've used before: consequences, assets, sources and events, or CASE. Once you have that idea, you can work out a likelihood and a range of consequences. The concept is to ask four things. What is the impact on objectives? What assets or resources do we care about, and which are critical or likely to be vulnerable? What are the sources of risk we care about? And what are the risk events, everything from fire to theft to fraud to financial market collapse? You can write each of these as a list, but to apply it you start to think about the causal chain. This is where a lot of my risk thinking is going at the moment: what's the causal chain leading to this? I don't want point-and-shoot, three-decimal-place accuracy; I actually want to think about what I can do, in a Swiss cheese model, to make sure it doesn't happen, or it's very unlikely, or we can respond if it does happen.

And again, look at this in terms of likelihood versus consequence management. Think about the classic: a meteorite strike, or a bit of dark matter or antimatter hitting the Earth, creating massive disruption, earthquakes, whatever you want to call it. There's not a lot we can do about the likelihood; we can't break the causal chain of a meteorite strike. Not yet, anyway; give it time, I know people are working on it, and when we're on Mars there will probably be a few Martians who don't care what meteorites do to planet Earth. But right now, I'm a bit concerned that we should have some sort of plan in place. This is the idea of escalation factors and escalation barriers. With bushfires, for example, there's a certain amount we can do to prevent them, but not a heck of a lot if we have a super drought, or the conditions we had a year and a half ago, when basically any spark turned into a thousand acres of fire.

Rest assured, take a deep breath: this is not a webinar about bow-tie theory and bow-tie tables. I just want to illustrate the idea, because when you start to think about CASE, the consequence, asset, source and event, you start to ask what all the possibilities are. Take the example of a network breach, and keep it simple; you could have a hundred risk events you cared about here, and I'm just trying to illustrate how you might mix and match some tables (I'll show you another example in a minute). Consider pathway number one: a state actor, a foreign intelligence service, causes a network breach, looking at your operations data. What's the consequence? Lost sales and profit is probably the impact on objectives. When you start modelling these in a herringbone diagram, a root cause analysis or an attack tree, you start to come up with all sorts of possibilities.
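Here's a toy sketch of that CASE mix-and-match: enumerate the combinations of source, event, asset and consequence, then filter out the pairings that don't make sense. The lists and the exclusion rule are mine, purely illustrative.

```python
from itertools import product

sources = ["state actor", "cyber criminal", "petty criminal"]
events = ["network breach", "armed robbery", "theft of stock"]
assets = ["operations data", "cash takings", "warehouse stock"]
consequences = ["lost sales/profit", "injury to staff", "reputation damage"]

# Illustrative exclusion rule: some source/event pairings just don't happen.
IMPLAUSIBLE = {
    ("cyber criminal", "armed robbery"),
    ("state actor", "theft of stock"),
}

scenarios = [
    (s, e, a, c)
    for s, e, a, c in product(sources, events, assets, consequences)
    if (s, e) not in IMPLAUSIBLE
]
print(len(scenarios), "candidate risk scenarios to triage")
```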
Now, some of those possibilities are ludicrous. Terrorists are not going to steal your building, and they're probably not going to steal your stationery either; likewise, petty criminals are not going to blow up your service station or put a truck bomb under your car park. So you can eliminate some of the combinations. But equally, some of them will be thought-provoking; some of them will make you stop and think. The way I see it, if someone goes to their boss and says, "Sorry boss, it was a black swan," the answer is: well, you really didn't think this through, or you listened to the groupthink. There are any number of examples of groupthink, which I won't go into now because some of them are kind of tragic events, but we all know what we mean by it.

So here's my rule of thumb: don't worry about the black swan idea. If you treat the top ten risks, you're going to address the most significant risks and you're going to have an all-hazards approach, provided the risk register is comprehensive, defensible and prioritised against your objectives. I mean prioritised in the sense that you can't treat everything. I remember coming into a job, I won't say where, and inheriting some risk registers and risk treatments, some of which were like "put some CCTV in the CEO's garden shed so nobody steals the lawnmower". Okay, it's on the risk register and I can't ignore it, and there were a couple of others a little more real that I couldn't ignore either. But I can prioritise. I can say: until I do my next risk assessment, I'll set aside the shed CCTV and the lawnmower lock, stop wasting resources and money on it, move it down the list, and work my way through. That's a strategic way of thinking about how to address them.

So move away from the black swan idea and start thinking about all the possible sources of risk and all the possible events that could happen. You might end up with 300-odd risks for a single enterprise security risk assessment; if you're doing an enterprise risk assessment covering not just security but finance, safety, procurement and supply chain, you might end up with a thousand risks using the same model. Then think about rolling them up and aggregating them. Where you are, the location, the facilities and the situation will vary a bit, but you might end up with an aggregated register of fewer than 50 risks.
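As a minimal sketch of that roll-up and prioritisation, you might score each aggregated risk by a notional expected impact and keep only the top ten. Every entry and number below is invented.

```python
# Hypothetical aggregated register: name -> (annual likelihood, impact 0-1).
register = {
    "major cyber attack": (0.30, 0.80),
    "warehouse roof leak": (0.90, 0.05),
    "CCTV for the CEO's garden shed": (0.10, 0.01),
    "industrial accident": (0.05, 0.90),
    # ... imagine 20 to 50 aggregated risks here
}

def expected_impact(scores):
    likelihood, impact = scores
    return likelihood * impact

top_ten = sorted(register.items(),
                 key=lambda kv: expected_impact(kv[1]),
                 reverse=True)[:10]
for name, scores in top_ten:
    print(f"{expected_impact(scores):.3f}  {name}")
```

The garden-shed CCTV sorts itself to the bottom, which is exactly the point: it stays on the register, but it doesn't get resources until the real risks are treated.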
This is a rule of thumb, okay? Do not go to your boss and say, "Julian Talbot said only 50 risks, or exactly 50 risks, on the risk register, please." That's not what I'm saying at all. The rule of thumb is that if you start with a lot of risks, you want to get them down to something in the space of about 20 to 50, maybe 30 to 50, and then focus on the top ten, in the context of the all-hazards approach, of course.

Just to model what I'm talking about, here's a notional concept. You start with a table and roll these things up: all the things that could possibly go wrong, and all the consequences I care about, my reputation, my costs, and so on, all in the context of objectives; risks don't really exist alone, and I hope we're on the same page there. We look at the assets we care about, whether it's information or people, and we look at the sources of threat. This is a simple example of a service station looking at security risks, but the same principle applies anywhere. Some of the combinations won't make sense when you join them together (again, cyber criminals are not going to do an armed robbery; that's not their MO, it won't happen), but when you start linking all the things that could impact each other, you start to see swans walking around. Whatever sound swans make, that honking noise is coming through loud and clear by this point. You're starting to see the likelihood of each of these events playing out.

Now, there are three ways of looking at risk, what I call the three laws: the law of large numbers, the law of the land, and the law of the jungle.

The law of large numbers applies where you've got good historical data. Insurance is an example, and cyber security is a classic, because you can tell how often your network is getting hit: you've got logs of how many attacks or attempts are happening, and you've got data in the broader world, reported to governments, about how many hacker groups are out there, what their techniques are and how they work. So you can do all sorts of great statistical modelling, and Monte Carlo especially (no, this is not a webinar on Monte Carlo; if you know what it is, fantastic, otherwise look it up) will give you a really good long tail. You might say, "I think a cyber attack is going to cost me a hundred thousand dollars," but the Monte Carlo model says that if everything goes wrong to the nth degree, you're up for ten million dollars. I can't afford ten million; my business isn't that big. So I've either got to cut that tail off with risk treatments, or get insurance above the amount I can afford: maybe I can afford a hundred thousand dollars but not five hundred thousand. That gives you information about where to set your cut-off point.
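Here's a minimal Monte Carlo sketch of that long-tail point, assuming for illustration that cyber-attack losses follow a lognormal distribution. The parameters and the hundred-thousand-dollar affordability line are invented, not real loss data.

```python
import random

random.seed(42)
# Lognormal loss model: median around $100k with a heavy right tail
# (mu and sigma are illustrative assumptions, not fitted to real data).
losses = sorted(random.lognormvariate(11.5, 1.2) for _ in range(100_000))

def pct(p: float) -> float:
    """p-th percentile of the simulated losses."""
    return losses[int(p / 100 * len(losses))]

print(f"P50 loss: ${pct(50):,.0f}")
print(f"P90 loss: ${pct(90):,.0f}")
print(f"P99 loss: ${pct(99):,.0f}")   # the tail that treatment or insurance must cover

affordable = 100_000
print("need insurance or treatment above the affordable level?",
      pct(99) > affordable)
```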
The next one is the law of the land. This is where you can use criminal data, but you can also think about competition: legitimate competition in the marketplace, where people are trying to out-market you, but it all works largely within a legal framework. Mostly, okay; there are a few notable exceptions, Enron being one classic example, but generally speaking you can model it. You've got historical data, you can look at what your competitors are doing, and you can look at contract law as a defence.

But now we come to the law of the jungle, and this is the world of black swans; all swans, all colours of swans. This is where anything goes. You have an adaptive adversary using asymmetric tactics against you, and you don't have a good data set. You're looking at new, evolving threats: maybe the use of AI for ransomware, maybe terrorist assaults or bomb attacks, the kinds of things that are very uncommon and very unlikely. You can't take these adversaries to court, sue them and have a nice little legal argument, and you don't have statistical models you can rely on. So this is where you turn to subject matter experts, attack trees, root cause analysis and causal chains, and maybe the Delphi technique and workshops, to explore the ideas.

I'll keep going and get you out of here within the hour; I did say 12:30 to 1:30, but maybe even a bit quicker. Let's look at these ideas with an example. Going back to that service station, where people might do an armed robbery: you could treat this as law-of-the-land stuff, so you can use some historical criminal data. It's a little bit adversarial and asymmetric, but the law of the land applies. Equally, though, you've got the law of the jungle, because you don't know. A boss of mine in the security business used to say that after twenty years in the business, "I know every trick in the book except the one they're using right now." That's classic thinking about swans: we know pretty much all the things that can go wrong, except the next thing that goes wrong. So how do we anticipate it?

Here are some of the techniques I use, and I'll rattle through them. Likelihood can be informed by things like demographics and local data. Think about the inherent risk depending on where you are: in some parts of South America, or Afghanistan right now, the inherent risk is likely to be quite high and there may not be existing controls. You can be informed by threat assessments, by how adequate the existing controls are, by historical events, and so on.

One of the areas I like to think about is P10, P50 and P90. (You might think the P5 on the slide is a typo; actually, no, that's correct: there's a best case and a worst case, and they're your outliers.) If you think there's a 90 percent chance that outcomes fall within a given range, you start thinking about the worst and best cases at five percent each. It's a little like the 80/20 rule, but separated out, and depending on your level of uncertainty or confidence you might use a P95, or a P80 with ten percent at the best case and ten percent at the absolute worst case. The important thing is that you're actually stating your level of uncertainty, without, or ideally with, mathematical models. But as I said, be very skeptical of mathematical models quoting three decimal places: check the assumptions again and again, because they will bite you.
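One simple way to state that uncertainty explicitly is a three-point estimate. Here's a sketch that samples a triangular distribution over invented best, likely and worst cases and reads off P10, P50 and P90; the triangular distribution is just one convenient stand-in for whatever shape you actually believe.

```python
import random

random.seed(3)
# Illustrative three-point estimate for some consequence, e.g. days of outage.
best, likely, worst = 1, 5, 30

samples = sorted(random.triangular(best, worst, likely)  # (low, high, mode)
                 for _ in range(100_000))
for p in (10, 50, 90):
    print(f"P{p}: {samples[int(p / 100 * len(samples))]:.1f} days")
```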
Now, if you look at all this and put it in the framework of, in this case, an Ishikawa or herringbone diagram, you can say: networks are compromised by hackers. This is only one example; there are any number, and a bunch of them are on the site I'll give you, the Security Risk Management Aide-Mémoire, where you can download a lot of these. The idea of red teaming is building on this and asking what could possibly go wrong. You could pick any one of these factors, say high staff turnover, and explore it. What does that do to my vulnerability? What happens if morale is high, or morale is low? What happens if I have untrained staff because I'm turning them over too fast? What happens if I can't get qualified staff? All of these things start to push your boundaries around the unexpected. They start to get you thinking, and to get the budget holders thinking, because you need to persuade the people with the budget; if it's not you, someone has to spend the money. You can say: if this is not addressed, here's our worst-case scenario, here's our swan. The white swan is pretty obvious: we might still get the project done on time, or we might be able to remediate when we're compromised. The black swan is that we don't have anybody, because the staff are so fed up with the turnover and the conditions, or we simply can't recruit, so when we do get attacked we have no recovery plan and no recovery staff.

So that's a very short version of my idea of what we do about black swans. The first thing is you load up a shotgun and you take them out of the vocabulary, apart from the birds down at the park that you can feed a bit of bread to. Start to take a real big-picture view, and take responsibility as risk managers to say that none of this should be a surprise. If you don't have a great imagination, you're probably in the wrong business as a risk manager; I suspect most of us have pretty wild imaginations, to the point of being accused of being wildly pessimistic or wildly optimistic, sometimes both. And for anyone who doesn't, it's just a matter of reading science fiction. Heck, you don't even need science fiction; I was reading a Jack Carr novel recently with bioweapons in it, exploring the whole what-if scenario of how bioweapons might be used against us.

So that's kind of it from me. Those URLs are pretty small, but I'll put this online as a video on YouTube, so if you're not already subscribed at juliantalbot.com, do, or just jump on the website every now and again. Have a look at the article I wrote there; I'll put a link to this with it, and I'll also let you know by email. Some of these models have been put up at the SRMAM website, the Security Risk Management Aide-Mémoire; they're free to download, so help yourself.

That's about it from me, at 49 minutes, so I'm going to say thank you now, turn off sharing and pull up the questions. If anyone has any questions, now's the chance. Let's see. Yes, lots of comments: loud and clear, good, thank you. Yes, there will be a recording sent out.

How do you perform bow-tie diagrams? No, I don't think there's an industry standard for how to perform bow-tie diagrams. There are a few pieces of software out there that are pretty good, I've got a few articles and a model, and if there's enough interest I'll do a seminar on it, because bow-ties have this wonderful
graphic interface, but when you turn them 90 degrees into a table, they start to perform as a really effective causal chain analysis, with good credibility and links into why this is a risk, what I'm doing about it, and what the actual controls are. Not just the notional "I will train more people"; the bow-tie table lets you ask, I'm training them to what standard, to what criteria? Where are the records kept? Are the records up to date? So it's pretty robust.

Mark says, "Great beard." Thank you. I did a walk recently on the Larapinta Trail in Central Australia, eighteen days in the bush, and I didn't shave, hence the Captain Ahab look.

On CPD points: I'm not sure. It's an ISRM presentation, so you might be able to claim them through the Institute of Strategic Risk Management (I chair the ACT chapter, and there's also the international body).

Who else has questions? Jonathan says, "If it's 100%, it's no longer a risk but a certainty." I'd love to say yes to that, but you know what they say: in life there's only death and taxes, and if you look at how some corporations go, even taxes are optional. So is anything really 100%? I don't know that there is such a thing. But yes, fair point. Thanks, MJ.

All right, everybody. If there are no more questions, I'm going to say thank you very much for coming along and close it off here. Please email me if you've got any questions or suggestions, or if you'd like to see any other topics covered, or more of this; that was a pretty quick rattle through some of it. Thanks for coming along, and nice to see so many people. Cheers, bye.