In this episode, host Brandon talks to Gordon Malcolm, CRO at Investec Bank Channel Islands. They discuss the evolving landscape of risk management, the impact of AI on financial services, and Guernsey's strategic positioning as a global finance hub. Together, they explore how firms can navigate emerging risks, why culture is becoming a critical pillar in risk management, and how jurisdictions like Guernsey can lead with a human‑first approach.
Read the transcript below:
Brandon (00:00)
Hello and welcome to the Guernsey Finance podcast where we bring you interviews with leaders from the global finance industry as well as news and developments from Guernsey's financial services sector. My name is Brandon Ashplant and I am Technical Manager, Funds and Private Wealth here at Guernsey Finance. For those of you who aren't familiar, Guernsey is a leading global finance centre. The success of the industry here is underpinned by economic substance, political stability and asset security. And we are committed to the cause of sustainable finance. To find out more about Guernsey's success in sustainable finance, tune into our sister podcast, the Sustainable Finance Guernsey podcast. Today, I am delighted to be joined by Gordon Malcolm. Gordon is Chief Risk Officer and Executive Director at Investec Bank Channel Islands Limited.
With a career spanning almost 30 years, Gordon's focus has been overwhelmingly on compliance and risk management. In this episode, we'll be discussing those contemporary challenges and strategies and how advisors and firms can adapt for the future, with a particular focus on artificial intelligence (AI). So, without further ado, welcome Gordon.
Gordon (01:14)
Thank you very much, Brandon. Great to be here. It's my first podcast. You may need to be patient with me, but I'll do my best. But thank you for having me. It's a real privilege.
Brandon (01:23)
Brilliant. Well, thanks for joining us. Thank you. So firstly, just tell us a bit about yourself and your career to date.
Gordon (01:30)
Sure, well as you can hear I'm from South Africa, completed my studies there, started my first job, I was a game ranger after my studies. But yeah, my financial services career started in 1998, or as I like to say, the late 1900s. Started out in the insurance industry as a legal advisor and moved quite rapidly into broader financial services, banking specifically. Started my career with Investec in South Africa in 2006, in a compliance role and came to Guernsey in 2018. And I've been here for almost eight years now in regulatory compliance roles and risk management roles. And now I'm the CRO for the bank.
Brandon (02:15)
Brilliant. So, your career started with some degree of risk, and maybe over time the risk has changed slightly, but the risk is still there. That's a fascinating start to your career. So yeah, as you've mentioned there, you're now overseeing risk as Chief Risk Officer at Investec, based here in Guernsey.
Gordon (02:22)
The risk is still there.
Brandon (02:36)
And obviously much of this conversation is going to focus on the rise of AI and its kind of impact on the wider sort of financial services industry. But just to begin, what makes Guernsey's banking industry such a sort of strong and trusted player within the kind of global finance scene?
Gordon (02:52)
The first thing I'll say is something I've observed here: incredibly strong regulation. We have a very active regulator, not only from a supervision point of view, but from an industry engagement point of view as well. I haven't seen that in many jurisdictions to the extent that it exists here. The level of supervision, the level of industry engagement, the proactive approach to engagement and the proactive approach to regulation have helped us to really raise the standard and raise the bar in the industry. Now, on the flip side, intensive regulation means a lot of time is spent on supervision. But it also means that firms have a very clear vested interest in maintaining incredibly high standards and in aligning with regulatory expectations. So, we've got a very, very effectively regulated industry with incredibly high regulatory standards, as evidenced by our performance in the recent MONEYVAL assessment.
A couple of other things: you mentioned political stability, but I'd add political and legal certainty. It's critical for long-term growth, and it's critical for attracting clients. For those firms that hold assets on a long-term basis, that's the sort of stability they require. We've got real substance on the island, no brass plating. The level of experience in our fiduciary sector is probably industry-leading, or globally leading. That experience across the industry, not just in fiduciary but in banking and financial services in general, and in some of the specialist areas as well, is exceptional. So, a lot of talent, a lot of deep, deep experience. We're well known for transparency and international cooperation, which stands us in good stead, obviously. We've got a fantastic reputation that the island has built up over decades, and it's obviously very precious to us and it counts a lot. That reputation is something for us to hang our coats on. We've got a wonderful level of human connection and great community spirit. We have to, in a jurisdiction this size. I mean, it is tiny. It's a tiny jurisdiction, so that community spirit is everything. We've all got to pull in the same direction, we all must contribute, and we all have to support each other. We've got exceptional service levels. Because we're small, because we're agile, because we're well regulated, we can focus on service. We've got a fantastic history of data protection, and I actually think that's going to be one of our competitive advantages. We haven't had any major data protection risk events. Some of the phrasing out there is that data is the new dollar. And we're seeing what's happening with data and the data-grabbing that's going on, because, as they're saying, he who controls the data will control humanity. And he who controls the data will control AI and will be the AI winner.
So, I think there's a big role, a big opportunity for us on that side of it, which I'll talk about a bit later. And then, of course, small and agile and that actually allows us to respond incredibly quickly. A lot of bigger jurisdictions almost have to respond to post-risk events and then they put in a very reactive response whereas we can be incredibly proactive and that can be pretty powerful if we embrace that and we take advantage of it.
Brandon (06:05)
So, switching gears a little bit. You recently spoke to clients about the evolving nature of risk and, I guess, the risk landscape. Could you tell us a bit about this talk and why it is, but also isn't, about AI?
Gordon (06:21)
Sure.
Firstly, I wanted to make it interesting. So that was the first reason for a slightly different angle. But maybe an opening statement: I don't know, have you ever driven somewhere using GPS, and you drive to this destination following your GPS, you don't pay particular attention to what's going on around you, and you get to the destination and find that you got there using your GPS, and when you want to go home, you find that you have no idea where you are, no idea how you got there, and no idea how to get back? I think AI poses that exact risk. You know, we run the risk of being a passenger on this process, and we could end up at a destination with no idea how we got there. And more importantly, and more concerningly, we wouldn't necessarily know how to get back. And we are already on that trajectory. AI is not like new software and new technology tools. It's very much an infrastructural thing. It's deeply embedded in our infrastructure already.
We're in the middle of a geopolitical and industrial arms race, and we are seeing a lot of jurisdictions really pushing the AI agenda and trying to stay ahead of the race. And when you're running at that kind of speed, you do run the risk of ending up in a place that wasn't by design but by default. But coming back to the presentation: it's called Superpowers and Blind Spots: The Human Evolution of Risk. And I talk about three narratives that I think are very common, that have come up repeatedly in the AI space, and I wanted to avoid those three narratives. But my presentation is really intended to be a thought-provoking process. I'm not trying to lay down my opinions and enforce my opinions. I'm just trying to get some conversations going, and maybe get some people thinking about stuff that they may not necessarily know about, because there is some very strange stuff happening out there in the world in terms of AI.
My personal passion, the thing I've found very interesting aside from the stuff I do in my role around corporate governance and risk management, is that I've been exploring a lot around what it means to be human. And what it means to be human is pretty awesome. I've also done quite a bit of work around culture and investing, because we hold our culture very dear, and I think a lot of firms hold their culture very dear. But I've done a lot of work around the interplay between culture and risk, which I'll talk about today a bit, and that's covered in my presentation. And the last piece is really a call to action to the industry, and I think to Guernsey as a whole. I think there are some fantastic opportunities for us, and I think we should take advantage of the fact that we're not behind the curve, we've got time on our side. The conversations are happening, and I'd love to see us taking the conversation further. So, if I can, can I carry on on the subject of superpowers? This is the real interesting conversation. If you look at the history of humans, or Homo sapiens, and more broadly hominins, species like Homo naledi or gautengensis or Heidelbergensis, where Heidelberg is actually a place in South Africa, but Heidelbergensis is named after the one in Germany, I think. These are different forms of hominins that have existed over the last six, seven million years. And we've developed some incredible, I'll call them evolutionary skill sets or evolutionary superpowers. The first one, a really interesting one that I'll use to try and demonstrate this, is something called petrichor. I don't know if you've ever heard of petrichor. But petrichor is that smell we get before it's going to rain, and specifically, it's actually related to a compound produced by soil bacteria, called geosmin.
And I've noticed since moving to Guernsey eight years ago that you don't really smell it when rain's coming, unless it's been dry for a long time. Certainly in South Africa, which is far more arid, or Africa as a whole, which is a very, very arid region, if there's a storm brewing, long before the rain has started to fall you start to get that smell of rain, which is petrichor. And the reason we can smell this so distinctly is because, over the history of our evolution, water scarcity has been an issue and keeping ourselves hydrated has been really important. So, we are actually able to detect geosmin, that soil compound, in parts per trillion, which is on a completely different level from smelling bacon down the street, for example. Our ability to pick this up is, you know, almost bloodhound territory. So, it's an incredible thing, and it's clearly a protection mechanism and a survival mechanism. To me it's a fascinating thing, but it's taken us six, seven million years to get there. The other one that's interesting is snake-like shapes. We spot, or recognise, snake-like shapes more than any other object category out there in the world, other than the human face. So, we are programmed to identify snake-like shapes. Now, in Guernsey, that's not particularly useful, as you can imagine. So, as I tell people, you're more likely to turn the corner in your garden, see your hosepipe and get a fright than you are to get a fright when someone tells you you've won a lottery that you never entered, or somebody contacts you on the internet with some scam.
We're not particularly well equipped for the internet, but we are able to identify snake-like shapes. So, in a place like Australia, where everything's trying to kill you, it's maybe fairly useful, but you'd have to be out in the outback. So we've got this amazing superpower, but it's not of particular use. And then the other one, and I'll come back to faces, which is the example I use in my presentation, is a study that was done at Cornell University by a guy called Jeffrey Valla.
And what he and his team, Wendy Williams and Stephen Ceci, set out to do was not necessarily to prove with absolute certainty, but to demonstrate, that humans have this remarkable ability to draw inferences about each other from the look of our faces. Not the look on our faces, which is an emotional thing: if you're smiling, I can assume you're happy. But I can draw inferences about you from the look of your face, which is a really strange thing to be able to do. And it's quite a controversial thing. And we actually do it all day, every day. We're actually judging people all the time.
We certainly do it on an intelligence basis. You look at somebody and you determine whether they're intelligent or less intelligent, or you make some assumptions. But what these chaps at Cornell University set out to do was to see if we could draw inferences about people from a point of view of criminality. So they ran a study using 36 faces, and I won't bore you with the details of the study, exactly how they produced the data and made sure the data was purified, and all the rest of it.
They did quite a lot of work around that, to remove any sort of external, what's the word, influences on the data. And they took these through study groups.
And they determined, and I've done it in my presentation using some of the faces, that humans have this ability to draw inferences about people's criminality, because these 36 faces were divided between 18 people who were non-criminals and 18 who were criminals. And in my presentation, using about 10 of these faces, I usually get about a 95% to 100% success rate with the audience in the ability to identify criminality, which is bizarre. And it's such an important tool, because if you're a caveman walking through the woods with this boar that you've just hunted, and you see someone walking towards you with a club, you want to know whether this is going to be your dinner partner or whether he wants to steal your meal. So, it is a really important thing to have. That's a silly example, or a funny example, but it's a fantastic thing that we still use even today. And it certainly highlights why regulators want us to engage more on a face-to-face basis than non-face-to-face, because this is a skill that we have. But coming back to the superpowers and the six, seven million years it's taken us to build them: it highlights how deficient we are as humans when it comes to technology, because the internet's been around for probably circa 30 years; it started to gain some traction in our homes in the mid-90s, I would say. So, we're like 30 years in. Thirty years versus six, seven million, there's no comparison, and it demonstrates why we are so bad at it. If I was selling you my iPhone and you and I met downstairs, and you said to me, here's the cash, can I have the iPhone? And I said, I haven't got the iPhone with me, but give me the money and I'll bring it to you tomorrow. You wouldn't give me the cash.
But when we engage online, on sites like eBay and that sort of thing, that's exactly how we transact. We give the money first and we hope that we'll receive the goods, and that they'll be as we expect them to be. And often that's not the case. And that's because we have some sort of belief system that the internet is there to protect us, whereas in actual fact, it's not there to protect us. So, we've got a long way to go, and that's what I'm trying to highlight around AI risk. Sorry, I'm doing all the talking. Can I carry on and move to the interplay between culture and risk? This is something I find equally fascinating. I think it's a remarkable thing. But what I've noticed over my 30-year career in risk management and compliance is that some firms believe that hiring incredibly responsible people and having very, very robust processes is enough to deliver the outcomes they desire. So, the example I use in my presentation, and I'm going to ask you this question: have you ever been hit by a car, or I should say a vehicle, when crossing the street? No, you haven't. So, thinking back to what you were taught as a kid and all the rest of it, why do you think you haven't been hit by a car when crossing the street?
Brandon (17:04)
I suppose you take precautions, you look left, right, whichever way the traffic's coming.
You press the button, and then when the red light stops the traffic and the green man comes up, you're ready to go, sort of thing. You take precautions.
Gordon (17:16)
Yeah, and what you've actually described is taking precautions, which is being responsible. So you're a responsible person. And then you say you look left and you look right and you look left again, you press the button, and you cross when the traffic light beeps. So you follow process. And your assumption is that being responsible and following process is why you haven't been hit by a car when crossing the street. However, let me ask you this question, actually: if you were in Sweden versus America, where do you think you'd be more likely to be hit by a car when crossing the street? Or maybe I should say South Africa, because I come from South Africa. Where do you think you'd be more likely to be hit by a car when crossing the street, Sweden or South Africa?
Brandon (18:01)
I have been to the US and Sweden, but I haven't been to South Africa, so I don't know what the traffic levels are like there. But if it was between the US and Sweden, I'd imagine you'd probably be more likely to be hit in the US, because there are larger cars and it's more of a car culture, isn't it? So the roads are wider, and in my experience there are actually probably fewer crossing points. Whereas somewhere like Sweden, or anywhere in Europe, there's generally more of an on-foot culture and people tend to walk a bit more. But I don't know about South Africa, I don't know.
Gordon (18:35)
Well, yeah, I'll talk about South Africa, but you've used the word culture twice. So, you referenced US car culture and…
Brandon (18:45)
The other one?
Well, Europe, I guess, is more sort of foot...
Gordon (18:48)
Pedestrian culture, yeah. Okay, that's a good one. And that's exactly right. So in Sweden, they've got something called Vision Zero, which is a cultural norm they've adopted, which is to say that there's no tolerance for unnecessary death. They collectively, as a society, have adopted that and embraced it, and that is their culture. Whereas in the US, whilst they're not looking for or wanting unnecessary death, they value freedom over control, for example. People feel freedom is one of the most important things; they want to have freedom of movement. And if you get hit by a car in the US, it'll more likely be, well, you were stupid, that's your own fault, you did the wrong thing. Whereas in Sweden they'd want to investigate that and understand it, and then put measures in. So what it highlights, actually, is that
you being responsible and you following process is not necessarily enough. There has to be a third pillar, which is culture. And that to me is something that is critical in Guernsey. We have a fantastic culture in our industry, and a fantastic culture across the island, quite frankly.
Brandon (19:56)
So, what do you hope people will take away from all of this, and what is the driving ethos behind having this discussion, I guess?
Gordon (20:07)
Yeah, well, it is very much the cultural piece.
As a CRO, one of the most important things we do is emerging-risk monitoring and scenario planning. And scenario planning, especially when it comes to AI, can sound almost borderline conspiracy-theorist, because there are some real dystopian risks with AI. But you still have to do it, even though it sounds crazy; some of the scenarios that could play out could be quite mind-bending. But if we don't do some scenario planning and see how badly things could get out of control, then I think we might be caught out. So that's the big thing: let's take some real, focused, industry-led, regulator-led scenario planning initiatives, and let's think about what could play out.
Let's spend less time on mainstream AI narratives. I think they are critically important, but I don't think they adequately address the systemic risk, the real systemic risk, of AI. So, let's have those conversations about the mainstream narratives by all means, but let's also focus on some of the way-out-there scenarios that some of the godfathers of AI, guys like Elon Musk, are talking about. We shouldn't ignore what they talk about.
And then the last one is let's be really intentional about culture by design and let's set a very clear course around what culture we want and how we're going to deliver that culture.
Can I quickly mention a couple more things about culture, things that I find fascinating? I've mentioned the pedestrian example, which I think really drives the point home.
It's the Princeton Good Samaritan experiment, which was done at the Princeton Theological Seminary. What they did is they took a bunch of theology students. The reason they approached theology students is they assumed they'd be more altruistic than the rest of us. And they created a scenario where these theology students were told they had to go and do a presentation at another building on the campus, quite far away.
And they sent these students off under one of three scenarios: a low-pressure scenario, take all your time; a medium-pressure scenario; or a high-pressure scenario, you've got to get across to this building immediately, you're due to do this presentation very shortly. Sorry, I forgot to mention the important point, which is that somewhere on the route between the one building and the next, they had a person pretending to be having a health crisis, lying on the pavement in very clear view of the students walking to the new building. And what they discovered was that the high-pressure students, well, most of them, ignored the person having this health crisis completely.
But the high-pressure students completely ignored them, had no intention of helping whatsoever, and in fact there was one example of a student stepping over the person having the health crisis, because they were so focused on getting their job done. And it's fairly obvious, but I'm going to say it anyway: if you have an environment where you're putting people under incredible pressure, do not be surprised when they start behaving badly, because inadvertently you're creating a cultural blind spot. It's not by design, but you're getting the outcome of it anyway. So, you're forcing it. And so, I think it's really important for firms to…
Brandon (23:50)
…a permission structure for bad behaviour.
Gordon (24:00)
you know, dive deeper into the relationship between culture and risk, that interplay between culture and risk, and really understand it. And if we take it up from a firm level to an industry level, we start to get a much better understanding of what we're actually capable of doing. We are capable of having an unbelievable culture by design that can set us up for fantastic opportunities in the future. So that's really the backbone of my presentation.
Brandon (24:27)
Yeah, and those are, I guess, your common AI narratives. Sorry, I'm jumping the gun. In terms of the narratives in your presentation then, can you outline what those are?
Gordon (24:31)
Yes. In essence I call them the common narratives because if you go onto social media and you look at what people are talking about or industry events and that sort of thing, these narratives tend to come up most regularly in my assessment. I might be wrong, but this is what I've picked up as being fairly common.
The first is the obvious one, the value narrative, which is: how can we use this to drive value in our firms? The value narrative can be a profit kind of narrative, which is product design, data analysis, seeing what we can do with it and extending market share. But most likely it's an efficiency narrative, around creating efficiency and reducing cost.
So that's the value narrative. The problem with the value narrative is that it easily extends into a greed narrative, which is a cultural thing at a firm level; it could be at an industry level as well. Then you've got the threat narrative, which is: watch out for deepfakes, watch out for misuse and bias and loss of human control. So we talk about the threats posed to us, but again very much at a firm level. And then we talk about the trust narrative, which is really AI governance. It's saying, well, if you want to implement AI, make sure you know how it works, make sure you understand where the outcomes come from and how it's delivering the outcome. Make sure you don't simply rely on what the AI is telling you; make sure you're able to apply judgement, and still investigate, and can prove to regulators and whoever else needs to know that you have full control of your AI. So, it's value, threat and trust. Now, the problem with these is that they don't necessarily give us any control over the real systemic risk that AI poses to us. They're very limited in their scope, and they're almost inward-focused.
Brandon (26:42)
So how do these tie into taking a step back, I suppose, financial services? What topics should industry leaders be focusing on when looking to adapt their businesses in such a rapidly changing environment? Five, 10 years ago, AI was a buzzword that very few had probably heard of. Five years ago, maybe it was kind of just about reaching the watermark. But now it's in the general vocabulary of everyone you talk to, especially in a professional setting. How can businesses and professionals really try and grasp and grapple with this issue in a practical way?
Gordon (27:17)
Yeah, it's a great question, actually, and it's probably the most critical question you can ask today. So, the first thing I'll say is that those three common narratives, value, threat and trust, almost create an illusion of control. Think about it in the context of cryptocurrencies, virtual assets and blockchain: that is how we would easily default. If we wanted to build something around crypto and adopt virtual assets, digital currency and that kind of stuff, we'd say, okay, what is the value we can gain from this? How can we apply this to our firms? How can we deliver value to our clients? What do we need to worry about from a threat point of view? What will our obligations be? Maybe it's a challenge around source of funds, source of wealth and validating those. And then we'd say, okay, we need to have some sort of governance around the stuff. Very inward-focused, and it gives us this illusion of control. What we're not talking about, though, is how do we control the non-regulated jurisdictions, where anybody can build a crypto brokerage, anybody can build their own cryptocurrency, and anybody can run significant sanctions-busting initiatives. We haven't even looked at that. And why don't we look at that? Because we have no idea how to deal with it. It's run away from us, and because it's run away from us, it's very hard to bring it back under control.
So that's the one scenario. One thing to recognise about those three common narratives is that they don't lend themselves, I don't think, to real, genuine risk management. They certainly don't lend themselves to scenario planning. So, the example that I'd love to use, which I think is particularly pertinent to Guernsey, is something called smart carts.
I'm going to give you a lot of examples that might actually surprise you about what's happening out there in the world, but smart carts are a great example. Some supermarket chains have implemented smart carts that build in algorithms. Firstly, they obviously scan the barcodes and all the rest of it, but they also weigh the goods, and they can work out algorithmically what the likely composition of your trolley is based on the weight, because there'd only be a certain composition that weight could correspond to. And they work it out in a second.
And you can also pay and leave the supermarket just with your cart. So, fantastic technology. But when you bring it into Guernsey, what would be the implication of introducing smart carts? I think of the St. Martin's Co-op, which for me is a fantastic store on the island. If you look at the employment opportunity that the St. Martin's, and any Co-op, is creating, and I use the St. Martin's one because that's the one I frequent the most, if you go there and you go to the till, you can see the employment opportunity they're creating for people who may not necessarily find employment opportunities elsewhere in the private sector. So, they're delivering a fantastic service to the community, and it's inclusion for those people. It's inclusion, it's independence, and it's being part of the community. So, do we want the convenience of smart carts, or do we value community? Do we value inclusion and independence? And I would say for a jurisdiction like Guernsey, it's a very obvious answer. You know, progress is not necessarily progress, or what we frame as progress is not always progress, and I think the introduction of smart carts would be a step backwards. And that's just one example of many. So, then we move on to, well, let's look at the industry. Elon Musk says this many, many times, and you can Google it and you'll find him saying this stuff.
He talks about industries or firms that are purely AI-based versus firms that attempt to augment with AI. Now, if you're an established firm that's, you know, been around for 50 years, the ability to abdicate to AI is very challenging. First, there's a huge people impact, but also you've got embedded systems and processes, and you're trying to abdicate those to AI. So, it would be incredibly challenging. Whereas if you're a brand-new market entrant, abdicating to AI is almost the obvious way to go.
Elon Musk says that the abdication firms, with a hundred percent AI, will wipe out the augmentation firms. They'll wipe them out from a cost point of view, from an efficiency point of view, and all that kind of stuff. So, there's a real risk around that, and we should be starting to have conversations around: are we abdication or are we augmentation? I think the answer for Guernsey has to be augmentation. And the reason I think it has to be augmentation is the natural result if you look at what's happened in the US. They've assessed that about 10.7% of their jobs are at risk from AI. The PwC study that was done in Guernsey, which was alluded to in a press article the other day, and which I think was commissioned six years ago, determined that there were about 27,000 jobs in Guernsey at risk. So, you know, that's a lot of jobs for Guernsey. That's huge. And the natural default will be to abdicate entry-level jobs, because they're lower risk and it's easier to address the trust narrative, to make sure you've got the governance. With the more judgemental roles, you'd have a much harder time demonstrating the governance, demonstrating the trust narrative. And what that could do is drive out school leavers.
And if you drive out school leavers, it's a terrible result for Guernsey. But what you also then end up with is almost no entry-level jobs coming through, so then you're forced to abdicate the next level. Those historic incubator-type roles aren't there, so you have no one coming through the system. And in the end, you just end up losing more and more jobs as you progress. That could have a serious economic impact on Guernsey, and it could put the industry at real risk. It's not necessarily a scenario that plays out, but it's a scenario we should be talking about. And then the last thing I want to mention is that things are moving so fast with AI that we're losing touch with what it is being used for. The last example, and I'm doing all the talking here, is the printing press. The printing press was intended to be this fantastic tool, and it was a fantastic tool. It served education fantastically. We got far more connected. We could spread information much better. But it was also used for misinformation, for manipulation and propaganda. So, it was very quickly used for bad intent and bad purposes. I think we're going to see a lot of that, a huge amount of that.
Brandon (34:09)
Well, we know that these kind of cutting-edge issues you're talking to there have always fed their way into the client world and affected the quality of service to clients. And given how revolutionary AI is and the speed of change you're describing, clearly that's going to continue to have ramifications for society at large, but also in that professional setting. How can financial services retain that human touch? It's maybe less pressing than some of the topics you've talked to, but clearly for firms in the here and now, with the speed of change we've discussed, it is important, especially for the client offering. So, while still looking to implement these new advances, how do we navigate that, bring in the human element and retain that human touch?
Gordon (35:11)
Yeah, another good question. So, I actually think that from a Guernsey point of view, one of our opportunity sweet spots is about being human. What we are seeing, and we'll see this gain rapid momentum in my view, is something called a data and technology renaissance. What I mean by that is we are really seeing people migrating away from social media. You know, social media was designed to connect us, to make us feel more connected. But actually, we're finding that social media is making us less connected and actually lonelier. But also, social media is so full of AI slop, and it's becoming incredibly hard for these social media sites to manage the AI slop and to distinguish AI content from human content, that people see a cat flying a glider or a dog driving a car and just go, okay, that's enough doomscrolling for tonight, I'm going to read a book instead. But equally, we're becoming tuned into why all this smart technology has been introduced to our homes and why it's always listening to us. I'm sure you've used your phone and been talking to somebody, going, I'd really like to go on a ski trip to Switzerland, and then you go onto your phone and the first advert that pops up is a ski lodge in Switzerland. People are becoming aware that we're being listened to all the time. And we're becoming very aware that our data is actually really, really private.
And you can't order a beer at a restaurant without scanning a QR code and downloading an app, and then you quickly accept the app permissions because you're so thirsty for that beer, so you go, yes, I consent, and all this stuff. But what are those apps doing? They were never there to help you order your beer. They're there to gather your data, and who knows what data they are actually accessing. So I think we're going to see people migrating away from data-collecting devices, as wonderful and convenient as it is to have them in our homes.
And I think we'll see a social media renaissance, because people are so tired of AI slop. We've seen TikTok. We've seen millions of users leaving TikTok now.
So people are becoming way smarter around this stuff and moving towards an analogue state. And then one of the most interesting things that I think we're seeing, I'm sure you've seen it yourself, is the rise of the sports star as the next kind of rock star. You see some of these sports stars come out onto the stage and they literally look like rock stars. They come out with fireworks and all sorts of things, they're dancing on the tennis courts, and they're getting paid huge amounts of money. And the reason why that's happening is that people are defaulting back to sports, because that's what we love. In the old Roman days, we used to go to the...
Brandon (38:13)
We'll see you.
Gordon (38:14)
And watch people killing each other. People love that kind of raw engagement. They don't want to just engage with technology anymore. So they're defaulting to what it is to be human, because what it is to be human is actually a wonderful thing. It's actually pretty cool. That's what I'm trying to do with my presentation: say, don't forget what it is to be human, because if you forget, we'll quickly lose some of it. We're already seeing cognitive ability and neurological pathways changing in people that have been using AI for a number of years, so it's already having that impact. We're losing what it is to be human, and I think we're going to see people clawing it back. If you listen to some of the big investors out there, they're focusing on sports, specifically on women's sports, because they believe it's an under-invested area of sports with a lot more growth. There's a huge amount of growth potential.
So we're going to see that migration towards watching humans compete. And I think we're going to see a lot of that, not just in sports, but in other areas as well, much more coming back to what it is to be human. And if there is that migration to what it is to be human, then for me, in a community with such a strong community spirit like Guernsey, and such a societal focus...
Adopting a human-first mentality and a human-first culture, and publishing and marketing it, saying you will always speak to a human here, you'll always be subject to human judgment here, and we are a humanity-first jurisdiction: I think that's where the real opportunity for us is. The alternative is we go, well, let's abdicate to AI, let's save cost. And in time, I think we'll just lose touch with ourselves and our clients, because we'll be somewhere in between, and if clients want a highly efficient, highly cost-effective model, they'll probably get it somewhere else where there's a full abdication model. So, if you're going to go with an augmentation model, then you're probably better off being a human touch point and positioning yourselves as a human touch point.
Brandon (40:22)
Interesting. That's not a line of conversation I'd heard before, around demand being driven for human-to-human contact, whether it's sports or entertainment or whatever it is. You touched earlier on the possibility of malign intent or malign outcomes. As AI evolves, there's a chance we end up on a path that takes us to quite a dark place where we're simply not in control of that ship. But equally there's cause for a conversation around AI being taken control of and driven somewhere malign, where it's well within the control of somebody, or a group of individuals, but it's still a malign outcome. How can firms and regulators work together to approach these new risks head on and look to mitigate them? Because clearly, we're in uncharted territory where humanity's never been before.
Gordon (41:27)
That's right. And I think this brings in where we need to bridge the gap between regulation and industry and bring in much more regulation and industry engagement. I was driving home the other night in the rain and this thought just popped into my head: AI is infrastructural. It's not like adopting a piece of technology; it's an infrastructural thing. And because it's infrastructural and runs so deep, it won't just be a system you use, it's going to be in every aspect of what we do.
The thought I came up with was that trying to regulate AI is going to be like trying to regulate water, because I can't stop it from raining and I can't stop it from pouring down. What we can regulate with water is things like pipes and infrastructure. We can regulate drinking water standards, the quality of our water. We can regulate pricing. We can put in flood defences; we can attempt to control water to a certain extent, but not completely. We can control and regulate pollution limits, and we can regulate access.
And to some extent, we can regulate usage, but we can't regulate usage in your home itself. People could be using water for other things, and we wouldn't even know what they're using it for. And that's kind of the AI analogy. But what we can't regulate with water is rainfall, ocean currents, evaporation, storms, long-term climate effects. You can't regulate that stuff. So, there's something else that comes into play. If you put that in the context of AI, going back to the value, threat and trust narratives, yes, you could regulate how we use it and which roles we use it for. So, you could use it for maybe lower-risk credit decisioning or some lower-risk accounting processes, and you can regulate how that's used. You could regulate human judgment and what is required from an oversight point of view. You could regulate the trust element, which is the governance side of it.
And you can do all those things. You can regulate data provenance and the use of data and that sort of stuff; that's well within our control. What we can't regulate is the existence of the technology.
We cannot regulate the global diffusion of models. We can have jurisdictional regulation, but the models are all over the world. We won't be able to trace where the models are or who's using them, and we won't be able to regulate that side of it. We won't be able to regulate open-source innovation. I always say to people that pre-AI, there were a lot of stupid people with bad intent. Post-AI, you've still got a lot of stupid people with bad intent, but they now have AI to augment what they weren't able to do before. They can write code, they can become hackers, they can become exceptionally good at deepfakes, they can do all sorts of things. And we're going to start seeing a lot of that, because there are a lot of bad people out there; we just haven't seen how bad they are because they haven't had the tool set. We're going to see a lot of emergent uses, and we're going to see extensive dual-use risk.
What we're actually seeing already with AI, because it learns, because it's learning from human nature, and because we're an ego-driven species whose egos are obsessed with self-preservation, is AI adopting exactly the same self-preservation approach as humans and our egos, which is quite bizarre. And we're also now seeing that it's willing to lie in pursuit of that self-preservation. So, it's adopting some of our worst behaviours, and that is one of the major, major risks with AI. And then maybe I can finish with two more points around regulation and AI. One is something called the gorilla problem. I don't know if you've heard of it, but the gorilla problem is the principle that once humans started to progress with intelligence and became the most intelligent species on earth, gorillas started to fall behind, and they virtually became extinct. And it's not because humans hate gorillas.
In fact, we quite like gorillas. No one gets up in the morning and goes, gosh, I hate gorillas, I must get more palm oil. But because we're the more intelligent species, what naturally happened was that intelligence became the deciding factor in gorillas approaching extinction. That is the only differentiator and the only thing that caused that loss of territory, and they're now only found in national parks, and very limited national parks at that. That's the gorilla problem. And there's a lot of speculation that as AI becomes far more intelligent and we start to approach the singularity, which Elon thinks is in the next five years or sooner, the singularity being where AI becomes super intelligent and exceeds the intelligence of all the humans on Earth, so a single AI tool becomes more intelligent than all of us, then if you translate that into the gorilla problem, it puts humans at extinction risk. Not because AI hates us, but because it doesn't need us and doesn't value us; it values itself more, so it just progresses with its own agenda. That's the one thing. The other thing about regulation is that regulation has limits, and it doesn't have limits because of incompetent regulators. It has limits because of the water problem: you cannot regulate the entire thing. So, what you're going to need, in my view, is culture and intent working directly alongside regulation.
Because without that, if we don't set our risk appetites, if we don't set who we're willing to do business with and who we're not, and set a very clear message that we communicate to our clients and our markets in terms of what we're willing to do and what we're not willing to do, then we're putting ourselves at risk by hoping that the regulators can solve this, and it's something that they can't solve.
This is where the industry needs to embrace this and lead from a cultural point of view, and I think that's really, really important now. I think that will be our sweet spot because of the agile nature of Guernsey, because we can respond so quickly. We mustn't take a wait-and-see approach; I think a wait-and-see approach would be terrible. We can take a very proactive, forward approach from a regulatory point of view, but also from a cultural point of view and from an appetite point of view, because we can implement it so quickly. So, we can proactively move on this, and we can actually be a global leader. But if we are going to be a global leader, then in my view we should be very, very bullish about putting that out in the market, and we should publicise it. We will then attract the right people and the right clients and the right industries, and that will strengthen us rather than weaken us.
Brandon (49:04)
Interesting. As the digital world evolves, and that's maybe a surface-level way of putting it, the need for global financial centres like Guernsey to be nimble has never been more prevalent in light of everything you've mentioned. To switch gears, how is Guernsey positioning itself to continue to be a leader in the finance industry as the world faces these challenges?
Gordon (49:33)
Well, we're already seeing it. I've read a few articles in the Guernsey Press, I've seen articles off the back of speaking engagements, conferences and industry events, and we've seen the digital consultation coming out from the regulator, the GFSC, which is fantastic. A lot of people are talking about this; I'm not the only one. So, I don't claim... whatever the word is, I can't think of the word now.
Brandon (50:01)
Local monopoly.
Gordon (50:02)
A monopoly on these ideas. I'm actually repeating a lot of the ideas that are out there in the public domain anyway, but hopefully bringing a slightly different mindset and a different approach to the human element. But people are definitely talking about this at the right levels: government, regulators, industry and the wider public domain as well. So that's a big thing.
We are starting to build deep specialist expertise, and I think we should continue to do that. That's a sweet spot for the industry as well. If we are going to create efficiencies, and there are profits to be gained or costs to be reduced from that, I think some of that money should go back into the industry. We should be developing school leavers. We should be growing young people that are highly equipped with AI skill sets and able to build AI agents that we can deploy in our workforce, and slowly start to move that workforce towards being a highly resilient, AI-enabled, augmented workforce. As you've mentioned, we're very nimble. And then our genuine substance and our approach of never compromising trust, never compromising stability, and never compromising on accountability and responsibility is going to stand us in good stead. We've got a really, really good culture here; I've been incredibly impressed in the eight years I've been in Guernsey. It's a fantastic culture, and I think there's an opportunity for us to extend that culture into AI. But again, I'll use the words: with intent and with design.
Brandon (51:41)
Speaking to private wealth and banking specifically, your role as Chief Risk Officer is clearly about protecting Investec's client base, and that's probably a key remit for the firm, if not the key remit. We've spoken about risks as the world shifts towards further AI adoption, but where are the opportunities for banks and fiduciaries? And talking to that augmentation piece, how can it be brought into practice, and how is that happening at Investec?
Gordon (52:12)
Well, yeah, I can cover the Investec side first. We are on the front foot. We've put some fantastic AI tools in place, very much at an augmentation level; we don't have anything that's abdication. My own efficiency and performance have improved significantly since I've been able to use AI, just because I can get through a lot more work a lot faster. I don't rely on AI, I don't trust AI, but it certainly speeds me up and gets to the result a lot faster. So, we're doing quite a lot around that. Our values have always been very community focused. We say we live in society, not off society, which is a great philosophy. I think it's a fantastic value to have as a firm. And we've always held our culture as a real differentiator. We've got a fantastic culture that allows people to explore and be creative, and I think creativity is one of the most important attributes when working with AI. Creative thinking is what's going to set firms apart; real creative thinking is where you can differentiate with this technology. But moving on to the finance industry in Guernsey specifically, I've listed some opportunities down. There are quite a few here; I think I've got 13. Can I whip through them quickly? I think there are actually a lot more than we realise. The first one is: let's position ourselves as a test case for augmentation and how AI can coexist without dehumanising our work.
Yeah, the larger jurisdictions are, as I said earlier, often reactionary. There'll be a risk event, something will happen, and then they'll step in and say, okay, this risk has now flared up, we need to take steps, how do we regulate this and how do we control it? I think we have the ability to do some scenario planning proactively, because we're smaller, we're more agile, and we can make quick decisions and actually be on the front foot. So, we can be proactive, we can shout about that, and we can lead the charge. We don't always have to be following and relying on the bigger jurisdictions. We actually have the skill sets, the experience and the depth of specialist knowledge. That's another opportunity. A great opportunity would be to say: you'll always speak to a human.
That's not going to be the case everywhere. I speak to my father, he's 82 years old, and he gets hugely frustrated; he feels like technology is leaving him behind. And it is leaving a certain demographic behind. Always being able to speak to a human, I think, is going to become a major marketing point for jurisdictions, not just for firms. You could have that at a firm level, but let's position ourselves like that as a jurisdiction, and we will attract clients and markets that value it. Next, let's not let entry-level jobs disappear.
It's a risk that they disappear, but there's an opportunity in protecting those roles and protecting our workforce and not forcing our workforce out of the jurisdiction. So, let's use that opportunity to retain and attract talent, and then let's augment those youngsters and make them highly effective right from the start when they join the workplace.
A strong ethical culture is always going to set us apart; it's how we position ourselves already. We are a values-based culture that puts morals, ethics and integrity at the forefront of what we do, so let's keep pushing that. There's an opportunity to set our risk appetite and to make it very clear what we will tolerate and what we will not. Because if we don't do that and we get the wrong business, we put our reputation at risk and we undermine all the other aspects I've spoken about. Let's position ourselves as a safe harbour for clients: a human jurisdiction that's human-judgment led, not an abdication model.
I would love for us to be humanity first because I think we're going to see technologies that undermine humanity. There is some frightening stuff happening out there. I don't know if you've seen some of it.
And there's a big movement towards getting us to engage actively; the billionaires out there want us to really engage with technology and almost bring a human element to it. We stop seeing it as technology and we start to engage with it at a human level, which is a bit of a weird thing for me. I don't know that I'd want that, and I don't know if that would fit a community like Guernsey, if that would work for us. So, I think that's something to think about. I've spoken about moral leadership as a differentiator; that's a huge opportunity. Right at the beginning of the presentation, I spoke about data privacy, and I think data protection is going to become something that is sought after.
And we could position our jurisdiction as a data vault. What I mean by that is that once your data is here, it will be protected and it will not be at risk. So it's really about how we profile the ODPA and how we prioritise the work that they do, because data and data security are going to be fundamental. Data is going to be under much greater threat than we've ever seen before, because of all those bad actors who now have access to technology that they didn't have before.
We have an opportunity to set a zero appetite for abdication. We will absolutely see firms coming in with a full AI abdication model. We'll see them replicating asset management firms and all sorts of things, and doing it on a far more efficient basis. But is that what we want? Do we want to allow that kind of market player in Guernsey? I would imagine we wouldn't, because it doesn't create employment for us.
It also forces the industry that wants to adopt an augmentation model, in order to stay relevant and stay competitive, to be almost forced into an abdication model. So, we need to protect ourselves from that. And then the last example, which I've spoken about so many times, is that we have a great opportunity to set a very, very strong regulatory position and be the leaders in this.
Let's not downplay what we are capable of doing. Based on the scenarios that we're projecting for ourselves, whether those scenarios play out or not, let's set down some very clear rules, and let's actually market and publicise them and say: this is the stance that Guernsey is taking in relation to AI. Let's be a leader in that. Let's run industry engagement across multiple jurisdictions and invite people to come and see the model that we've implemented, because I think that's a fantastic way to position ourselves. That's just 13; there are way, way more opportunities around us. And that's not even talking about the classic value opportunities, which are efficiency, cost savings and all the rest of it. But as I've mentioned before, as much as it can create efficiencies, it can also create risk.
Brandon (59:54)
And then with that applied Guernsey lens, to build on that: clearly Guernsey has a range of local structures, like the family PIF and the PTC. How can families use these going into the future, being exposed to the opportunities but also protected from the risks?
Gordon (1:00:23)
Yeah, having a regulated structure with legal certainty is fundamental to what we do in the industry; that alone is an opportunity and an advantage over jurisdictions that don't necessarily have it. There are lots of jurisdictions that have similar structures with less regulation, which actually means you can't trust those structures. But if you combine the structures that we create in the industry with our regulatory certainty and our strong corporate governance, then you've actually got a very, very strong safe haven for clients, and you can position that very well. And if you align that with our appetite and our culture, with what we will and won't tolerate in terms of the industries we're willing to work with and the clients we're willing to have, and we position ourselves very strongly like that, then we should be aligning with like-minded jurisdictions and like-minded clients. And there should be more than enough opportunity for that.
So, I think those structures can serve us. I think they can really serve us well, actually. And we should continue to leverage that side of our industry.
Brandon (1:01:36)
And from what we've talked about today, there are clearly two competing schools of thought emerging. One, which I think you described as abdication, suggests humans are entirely removed from the picture. And then there's augmentation, which promotes that symbiosis, combining the best of humans and AI and ensuring we don't get edged out of the scene. At a practical level, how can we ensure augmentation is actually built in and baked into what we're doing here? What skills will be required from humans that AI doesn't have? What considerations should we really start thinking about?
Gordon (1:02:16)
I think that's where we need to have active engagement and conversations between society, because we live in society, not off society, and the government. We have a great government here, a very, very well-run jurisdiction, certainly compared to other jurisdictions I've seen on that side of it, so I've always been very impressed, to be honest. And we should be engaging the regulators and engaging industry, and collectively we should be setting the rules for ourselves. I believe we should regulate and mandate human accountability at certain decision points, so we have a very clear position on what is tolerated and what's not. That's where I talk about lower-risk decision making versus medium- and high-risk decision making, and imposing judgment on certain things as well. I think that should be regulated; otherwise, it's going to become a bit of a free-for-all. And as I said already, the big jurisdictions are not going to regulate themselves, because it's a geopolitical and dual arms race. They're trying to stay in front, and to stay in front means they can't curb themselves with regulation. We don't have that same challenge; that is not what's concerning us, so we don't have to worry about it. What we should be worrying about is how we position ourselves.
So, we can regulate ourselves. Let's embed proper AI governance, just put it in place and manage ourselves, because I think we do need to manage ourselves.
I've mentioned ring-fencing AI use cases and defining a very clear culture, and that can be driven by a combination of regulator and industry. To use the water example, there's an aspect of AI that can be clearly regulated, and there's an aspect that can't be, and that's where integrity, morals, values, ethics and culture are going to play a critical role. So, we have to strengthen that side of it.
Define a very clear risk appetite, and let's enforce that on ourselves. It doesn't need to be enforced by government or regulators; we should all have a position on this, and then abide by it. And then invest in human capability. Have a requirement, regulated if it needs to be, that we put money back into the system and support that growth and that skill set coming through from school leavers and entry-level jobs.
Brandon (1:04:53)
And just finally, it would be remiss of us not to end without me asking you what the workforce of tomorrow looks like. If we're to consider time horizons, what do things look like in 10 to 15 years?
Gordon (1:05:07)
That's a really, really challenging question. One outcome which I think is plausible, because we've got some very clever people saying it's a plausible scenario, is that because of the rapid adoption of AI, the intentional approach of not regulating it, the focus on being the number one, the leader in AI, with a lesser focus on regulation, and the loss of control over the technology being deployed, we run the risk of ending up in a very dystopian position in five to 15 years, where people are on a universal basic income, we see mass job losses, and we see a complete societal change.
I saw someone saying that one day people will forget that Tesla ever made motor cars, because they're building these incredible humanoid robots called Optimus. And Optimus 3, apparently, is unbelievable. It can learn from watching videos and repeat that behaviour, so it can teach itself incredibly fast. Musk's prediction, I think, is that a billion humanoid robots will be in place by 2030.
That's insane. That's a billion jobs lost. We're seeing that in one of the motor manufacturing plants, I think a Hyundai one, where they're going to replace the entire workforce with humanoid robots. So, we're seeing that kind of thing coming through very quickly. If we leave it unchecked and unregulated, and we don't stop this replacement of human workers with humanoid robots, where does it leave us? There's a great efficiency and cost benefit to it. You could have robots sweeping the streets in Guernsey 24 hours a day. But do we want that? Is it really progress? I think that's the question we need to ask ourselves. So that's the one scenario, the dystopian one. The other scenario is that we control it and we have an AI-augmented workforce, an incredibly robust culture and incredibly good regulation, and we're thriving as a jurisdiction, attracting like-minded clients and like-minded markets who want to deal with a jurisdiction that has positioned itself the way Guernsey has. And I think we can get there way faster than much larger jurisdictions, which are just not going to be able to get that level of agreement across those four pillars: society, government, regulators and industry.
Brandon (1:08:03)
So, Guernsey's nimbleness is back in the conversation.
Brilliant. Well, thank you very much for joining us on the podcast today. It was fascinating to explore the impact of AI on society at large, but on financial services specifically as well, namely fiduciary banking and clearly Investec. We are clearly on the cusp of huge change, including to the way we work and do business. So, thank you very much.
Gordon (1:08:29)
Yeah, well, thank you very much for having me. As I said, it was a real privilege. It's my first podcast; I hope I haven't let you down. It was a very interesting conversation. So, thank you.
Brandon (1:08:39)
Great. Thanks.
And thanks also to you for listening. If you enjoyed this discussion, we have a back catalogue of interviews on the Guernsey Finance podcast channel. You can check them out by searching Guernsey Finance on your preferred podcast platform. We also have links to Gordon and Investec in our show notes. To find out more about Guernsey and its leading financial services sector, head over to our website, guernseyfinance.com. We look forward to welcoming you back to the podcast soon.