Braking Systems and Black Boxes: AI Ethics in Finance
There may be errors in spelling, grammar, and accuracy in this machine-generated transcript.
Isaac Heller: Hey, everyone, this is Isaac. I'm CEO at Trullion, and today on AI Accounting Intelligence we'll be talking with real people about real things related to AI and its impact on the accounting industry. Stay tuned. Welcome, everyone, to the AI Accounting Intelligence podcast. I'm Isaac Heller, CEO of Trullion, and I'm really excited to be joined by [00:00:30] Richard Jackson. Hey, Richard, how are you doing?
Richard Jackson: I'm good. Isaac, thanks for having me join you today for this conversation.
Isaac Heller: Absolutely. So Richard Jackson is the AI assurance leader for Ernst & Young, someone I've recently gotten to know, with some incredible thoughts and ideas about the future of AI and assurance. We'll get into all of that, and I'm excited to do it. But before we do, Richard, you've had this amazing arc in your career as an accounting leader. I know that you're [00:01:00] based in the Bay Area right now, but you don't sound like you're from the Bay Area, if I may say so. Tell us a little bit about that arc and how you got to where you are today.
Richard Jackson: Yeah, absolutely, Isaac. So I always tell people, for those with an astute ear, as you say, this is not a native Californian accent. I started my career in the UK, and I'm not sure if I've ever even admitted this to you, but I'm an English graduate by origin. And if that wasn't obtuse [00:01:30] enough, I actually went to Wales to study English, so wrap your head around that one. I started my career straight out of college at Ernst & Young in our assurance practice, and I spent a lot of my early days there. That was back in the beginning of the '90s, when, as anyone who's a student of the history of the tech industry knows, a lot of the US companies really started expanding into Europe, and many used the UK as their home base: similar language, similar legal system, all kinds of things. And so [00:02:00] from the get-go I was working with technology companies, albeit the European subsidiaries or European operations of some of the major houses. So increasingly, I'm just a tech nerd. I'm not a coder.
Richard Jackson: So, you know, don't ask me any coding questions, but I'm a genuine enthusiast for the technology. And so back around 2000, I decided I wanted to swim upstream to see what was going on at this mythical place called Silicon Valley. I joke because, again, anyone who's a student of history [00:02:30] remembers what happened to the tech industry in 2000. Instead of coming over to do a bunch of IPOs and work with massively expanding companies, I spent a lot of time dealing with restructurings and debt issuances and fundamental refiguring of businesses as we experienced the dotcom bust. And then, through a laundry list of iterations and reinventions of the technology stack, I've worked with everything from the hardware companies through to software, [00:03:00] the cloud generation, the SaaS generation, and now into AI. So it's been an exciting journey. It's one of those ones where you'd say it made absolutely no sense if you were trying to plot it on a forward graph, but when you look back, you can kind of see a pattern to it.
Isaac Heller: That's awesome. And just for everyone listening: today we're going to talk about AI and ethics, and I think we'll also get into some of Richard's perspectives on the future of audit and assurance as it relates to AI. [00:03:30] So that's really exciting. But before we do, we've got to unpack a little bit more. I have to wonder: you've now got this role of AI assurance leader at Ernst & Young. Do you feel like that Silicon Valley air out there made you uniquely positioned to be advanced as a tech leader? Or maybe it was the other way around, and it drew your tech-inclined mind to the Bay Area. You know, I'm a tech founder now, and you hear that there are always a lot of [00:04:00] early adopters out there in Silicon Valley. Do you think the audit and assurance practitioners are ahead of the game, too?
Richard Jackson: I'm not sure the words "assurance providers" and "ahead of the game" have ever been used in the same sentence before, Isaac. But no, you know, I describe to a lot of people that our profession has been around for 130 years. We've always done things in a certain way and at a certain pace, with an extensive amount of regulation. And we [00:04:30] started seeing this wave of disruption and technology coming. We've been in this game now for a couple of decades, building some really cool tools and technologies to enhance our business. But with this explosion of AI in the last few years, and especially generative AI in the last couple of years, when it broke onto the consciousness, I've described to people that our profession is going to change more in the next ten years than it has in the last 130. And so I think, to your point, some of that Silicon Valley magic sauce, [00:05:00] or a secret sprinkle of it, has kind of infused our ability to imagine the art of what is possible.
Richard Jackson: And when you think about it, our profession has been one where, like many professions, we've been the aggregators of knowledge. We've had this great knowledge-rich environment where, through accreditation, through the apprenticeship model, we've really become the experts in our field of knowledge aggregation. And in the way we take that to our clients, we've also [00:05:30] been dealing with very traditional distribution channels, right? We've done that typically through audits; we've done that through the way in which we consult and advise our clients. And both of those paradigms have been fundamentally blown up by generative AI. So I think, to our credit as a profession, we recognize when change and disruption come in. Are we moving fast enough? Well, time will tell. But if I have my way, I think we're going to be at the forefront of the conversation.
Isaac Heller: Very [00:06:00] nice. Well, that's super interesting. The way I hear it is, if you're tech-inclined, or you're in these tech environments, I don't want to say bubbles, but environments, places like Silicon Valley, it's almost like you do develop muscles to be attuned and maybe adapt to, or jump on, some of these technological trends a little bit better. That being said, AI is significant, [00:06:30] meaning even if you can recognize it, whether it's the DB or Excel or the cloud or AI, this one's moving a little bit faster. Either way, you've got to be ready to adapt, whatever it is. So that's really interesting. But you mentioned 130 years, right? By the way, you were an English major; I was a history major. So I love a little bit of history.
Richard Jackson: Yeah, well, I tell people I'm a storyteller of numbers, right? I mean, that's the English [00:07:00] major in me. And so, yeah, to your point, whether it's English, whether it's history, I think the liberal arts teach us a lot of things about technology.
Isaac Heller: Absolutely. And they say history repeats itself, right? And they say things like "nothing new under the sun." And I tend to subscribe to those things. So maybe let's jump ahead a little bit before we get into ethics.
Richard Jackson: Yeah.
Isaac Heller: When you see this change happening so fast, and you think about the history [00:07:30] of assurance and audit, and really your role, your team's role, and your company's role as the steward of assurance, does it fundamentally change how assurance is going to be looked at, or does it just need a reorientation around what it means to be an assurance practitioner?
Richard Jackson: Yeah. I think the simple answer is I see it as a fundamental [00:08:00] redesign or reshaping, and for the positive. Let me describe it this way. For some years now, there's been a challenge with the attractiveness of the profession. We see it through all kinds of lenses; even the number of graduates coming off campus into our profession has dwindled significantly, for a variety of reasons, but one of them is the attractiveness of the profession and the relevancy with which we work with our clients and the insights that we [00:08:30] can provide. And I think what the technology has done for us, and we'll talk a little bit about this, I'm sure, over the course of the next few minutes, is, as well as taking a lot of the brute force out of how we've traditionally done that work, it's fundamentally making us rethink, and revisit for our clients, what it is that an audit provides. What I mean by that is, back to my earlier comment: we spent the last decades doing audits where we pick samples from populations across [00:09:00] large volumes of data sets, and our methodologies and frameworks and professional training teach us that that's an appropriate way to do it. But you're always picking samples and then extrapolating or inferring conclusions from those.
Richard Jackson: Well, now we have the capability to actually capture entire populations of data from clients' environments, whether structured or unstructured. We can look at the entirety of the flows of transactions within financial statements. We can look for patterns. We can get [00:09:30] more readily closer to the issues that are of most relevance to the client and most relevance to the audit, and along the way unlock a huge volume of insights and observations. So it suddenly becomes not just about efficiency. Efficiency is absolutely going to be part of this journey, but this fundamentally retools the quality of what we do as an assurance provider. In our traditional product of financial statement audits, it starts to expand what that audit can involve. We can [00:10:00] start talking, and again, we're not there today, about the frequency with which we can do those audits. The way in which we report is going to fundamentally change as the capital markets adjust to new information demands from organizations. And that's before we even get into the fact that organizations are looking for a significant expansion of what they look for from an assurance provider. So it's not just assurance over the financial statements, but assurance across a much broader spectrum, including some of the ethical issues we'll talk about today. [00:10:30]
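To make the sampling-versus-full-population point concrete, here is a minimal sketch. It is entirely hypothetical, not any firm's actual tooling, and the ledger data and threshold are invented for illustration: every journal entry in the population is scored against the population's own statistics, so outliers are detected directly rather than inferred from an extrapolated sample.

```python
from statistics import mean, stdev

def flag_anomalies(entries, z_threshold=3.0):
    """Scan the ENTIRE population of journal entries (no sampling)
    and flag any entry whose amount is a statistical outlier."""
    amounts = [e["amount"] for e in entries]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for e in entries:
        z = (e["amount"] - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flagged.append({**e, "z_score": round(z, 2)})
    return flagged

# Hypothetical ledger: 199 routine postings plus one unusual entry.
ledger = [{"id": i, "amount": 100.0 + (i % 7)} for i in range(1, 200)]
ledger.append({"id": 200, "amount": 25_000.0})
print([e["id"] for e in flag_anomalies(ledger)])
```

A sample-based test could easily miss entry 200; examining the full population cannot. Real audit analytics are of course far richer (pattern mining, unstructured data, and so on), but the structural difference, testing everything versus extrapolating from a subset, is the same.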
Isaac Heller: Yeah, I think you're right. And, you know, I've now become the CEO of a startup, and it's grown and grown, and the more you grow, I think the more you value both your CFO, your finance partner, and also your audit and financial advisory partners. The more you grow, the more the complexity grows, and you kind of just want someone who gives you that sigh of relief about the complexity. [00:11:00] And a lot of times I'm hoping for a brief answer that makes sense to me in a brief time, but knowing people like you, Richard, and your team, it's backed up by years and mountains of data and experience. So I appreciate it a lot more now.
Richard Jackson: Well, thank you for those words. And, Isaac, as you and I have gotten to know each other, and as you say, from that vantage point of [00:11:30] a CEO, you as an organization are making lots of representations, whether it's internally to your board, whether it's to your shareholders and your investors, whether it's externally to your customers, and then increasingly to regulators and third parties. In all of those conversations, all of those representations, to your point, you want that sigh of relief. You want that level of comfort around: are all of those representations valid? Are they based on quality information, processes that are systemic [00:12:00] and repeatable, communications that you can feel good about? And that's assurance. That's the thing that, every day, I get up and think about: how can I provide that for our clients?
Isaac Heller: Yeah, that's amazing. So let's maybe jump into ethics a bit. The way I think about this: I had a joke with our head of AI today. [00:12:30] Five years ago, when we started, we felt like people weren't adopting enough AI. We were talking about AI in accounting and audit, and it fell on a lot of deaf ears. We said, where is everyone? And now, fast forward five years, we actually think they're blowing it up a little too much; we do think there's a little bit of hype. And I always wonder, you mentioned the profession changing and wanting to increase the attractiveness of the profession and [00:13:00] so on. On one hand, you've got to adopt the technology and adapt the business models to meet the demands. But on the other hand, if you move too fast, you can maybe drop some things when it comes to ethics, which is really a big part of the bedrock of being in this industry. So paint us a picture of the fundamentals. What role does ethics play in the accounting and audit industry? And then we'll unpack what AI is going to do [00:13:30] to this.
Richard Jackson: Yeah, absolutely. Well, I stole this phrase shamelessly from one of my colleagues, and I won't name him because he'll want copyright on it. But one of the things that I loved is this idea that an organization can only move as fast as its braking system, and the better the braking system, the faster you can move. And I think that's really, to your point, Isaac, where a lot of organizations to date have been challenged: by that combination of the brakes and the accelerator, [00:14:00] in my car analogy. In the early days, and by early days I mean especially when AI exploded onto everyone's consciousness, you and I know it's been around for a lot longer than that, but it certainly spilled into the boardroom and into everyone's consciousness a couple of years ago, the first reaction was to hear and see all these risks and all these horror stories of what could go wrong with the technology. It was everything from bias to hallucinations to transparency and explainability, [00:14:30] all the things that have become bywords. And there was a lot of reaction to that, which was to shut it down, whether it was corporations basically saying don't allow it into the house, whether it was blocking employees from being able to go out and experiment with some of the public tools. We even saw some countries where the whole country was trying to ban it. And I think it was very much a knee-jerk reaction to that fear of not having a braking system, which meant you had to have a heavy foot on the brake.
Richard Jackson: I think what's changed [00:15:00] since then is, first of all, people have started to get a better appreciation of how you build those brakes, what those brakes look like, and how you can make them proportionate. The other thing that's massively come along is that people have started to realize this technology, to your point, isn't going away. We can discuss and debate whether there's a hype bubble to it, but it is certainly transformative. And a lot of organizations have started to recognize that it is going to be a differentiator for them in the growth agenda. [00:15:30] It's not just about the margin effects; it's not just about taking cost out of the organization. It's actually going to be, increasingly, a top-line revenue growth generator, and for some organizations it's going to be the difference in whether or not they get disrupted at an existential level. And so because of that, there's been this pivot back to: okay, now how fast can I go on the accelerator? Now, within that, to your point, I think people are realizing it takes a lot of investment to go full throttle. Again, [00:16:00] we can talk a little bit about why that is. But I think we're in this interesting period where organizations know they've got to do something, they're not quite sure exactly what to do, and, as I'm sure we'll talk about today, they're looking at this against a very rapidly evolving regulatory and compliance environment, where they're trying to make sense of what good looks like.
Isaac Heller: Yeah, absolutely. And to keep going a little bit with the analogy: I remember my days [00:16:30] in the arcade. I don't know if anyone out there went to the arcade, but there was that driving game, at one point it was Cruis'n USA, and you could go as fast as you wanted, but you needed to use the brakes on the turns, or you hit the walls, right? And you would be okay bumping into the walls a little bit, but if you bumped too hard, you would just spin out. And so that's what I feel like it is. It's kind of that art of: I've got to brake, [00:17:00] I want to keep the speed going, I'm okay if there are some bumpers I hit along the way, but I don't want to spin out, and I don't want to finish last in the race. So I like that analogy. I mean, are there AI-driven tools, whether in finance or other areas, that are being implemented that you see as potential risks, or just things you're flagging as you look at these larger companies?
Richard Jackson: Yeah. [00:17:30] I think, first off, one of the things we're seeing is that different organizations have massively different ways in which they're trying to implement the technology, and that's given rise to different risks. What I mean by that is, we see some, and we're an example, where we have democratized the technology by putting something powerful, we call it EYQ, which is our version of a private large language model, within our environment, where it's [00:18:00] available to all of our 400,000 people. We're also deploying Copilot across our whole workforce over a multi-year journey. And what that does, as an example, is create a huge opportunity for innovation across the enterprise. But at the same time, it also exposes significant risks. And some of those risks, to keep it very tangible, can be: where's the data? What's happening with the data? Are you [00:18:30] satisfying your existing obligations? We've got hundreds of thousands of clients that trust us with the custodianship of their data in our everyday activities, whether it's in assurance, whether it's in our tax service lines, etc.
Richard Jackson: And so part of what we've had to do is prepare: how do you put in place an appropriate layer of governance, a layer of just good housekeeping, around some of those fundamentals, confidentiality of data, mitigating against the [00:19:00] misuse of data, as well as starting to understand what use cases people are putting this technology to, and how do you make sure of everything from it not being abused through to it not being duplicated? There are whole questions about efficiency, in terms of the cost element of this deployment, that we perhaps won't touch on today. But you do have to set those ground rules. And so we talk a lot about how you do that by establishing governance frameworks within an organization, [00:19:30] where you can actually start helping leadership teams and organizations understand even just what the ground rules are, what the table stakes are, in terms of how you start deploying this technology.
Isaac Heller: Yeah, absolutely. And I'll share a little confession. I mentioned that five years ago we said, oh, these companies should be adopting AI so much faster, what's going on? And that included the Big Four, audit and [00:20:00] accounting firms, the office of the CFO. Fast forward five years, I've gotten to know you and different people on the team, you've got a new leader with Janet, and we've been really impressed with a lot of the AI elements that EY has put into place. We don't know everything, obviously, but some of the things we've learned from the senior managers and the partners on up have been pretty cool. So it's kind of been fun to [00:20:30] see this almost fast adaptation. And I think it relates a lot to the ethical backbone that a lot of people come from. We want to move fast, but we want to make sure that we don't spin out, that we have healthy brake fluid, let's call it. Is that fair?
Richard Jackson: No, absolutely. And it kind of goes back to one of your earlier comments: having that heritage of being an assurance business for as long as we have been, [00:21:00] it's kind of in our DNA. So in some regards we're a little bit spoiled, in that when we come to a technology disruption like this, being thoughtful and intentional about deployment, being thoughtful about the risk management of it, that's just in the fabric of what we do. And so, if you like, the opportunity there is that you can move faster because that's your natural gear. For a lot of organizations, it's the other way around. It may be, to your point, [00:21:30] back to your analogy of the teenager at the arcade game: you just want to go fast, you want to go fast around that corner. And a lot of organizations are having to learn what it feels like to hit that wall too fast on a corner. They're having to realize that pumping the brakes sometimes isn't actually a dirty word. And now the trick comes: how do you actually do that in a way that balances speed and efficacy at the same time?
Isaac Heller: Interesting. I've got to ask you a curveball here, because you're in Silicon Valley, [00:22:00] and I always wonder, we're talking about AI and how to use AI and internalize it. Do you think that being around AI companies has helped you, your team, your colleagues? Meaning, we don't have to get into who audited whom or who advised whom, but Silicon Valley is the heart of some of these hyper-growth AI companies, the large language models. Does [00:22:30] that collaboration benefit you in staying at the forefront of how to innovate internally with AI?
Richard Jackson: Yeah, absolutely. I mean, the way I describe it to people is, over the years I've spent a lot of time working with clients, especially in this kind of DevOps environment that we see prolifically across the Valley. There's nothing more intellectually interesting, I think, than sitting down with an engineer and talking about what [00:23:00] you think your problem statement is. What I love about working with software engineers and developers is that they come back to you a couple of days later and say, hey, look, I thought about your problem, and one of the things I realized is that your problem wasn't just issues A, B, and C; it was actually A, B, C, D, E, and F. And so I built a solution for you. And that ability to work in environments where people help you critically challenge what you think is your problem, and then bring back a level of creativity [00:23:30] around a solution, plus, on steroids, has been one of the things that's helped my thinking about the art of what's possible with the technology. The other thing I think is interesting, being in the Valley for as long as I have, and you mentioned, Isaac, I'll go back to you being a student of history, is that there are these evolutions of technology that we've seen before, right? We've seen what it's like when technology has, in past iterations, in past generations, moved at speed [00:24:00] and then had to snap back against the regulatory environment. I think we're seeing that learning curve, that muscle memory. I'm not saying it's fully ingrained in Silicon Valley, but I think there's a much stronger appreciation nowadays of its impact on AI than perhaps we saw ten or twenty years ago, or certainly longer ago than that.
Isaac Heller: Yeah, it's interesting. There's a topic that I think is super interesting, [00:24:30] and I'd like to hear more about it. You're one of the few people, you and I have talked about this, and it's this idea of auditing the AI. It's like, wait a minute, who's going to audit the AI? And I also think about this as an important framing exercise, and maybe a reframing exercise, in the profession. So, are you an auditor? Are you assurance? Are you risk [00:25:00] advisory? At the highest level, what is your relationship? And I think about it like this: my relationship, it's not with "my auditor." I just want a partner, a partner in the business, or a partner in finance, someone who'll tell me what's super risky about what I'm doing that could lead to some major risk in my company. So I always think about that: who's going to audit the AI? Who's going to make sure it's safe and secure? I mean, where are you at, Richard, as you [00:25:30] start to think about this topic? Is it at the drawing board? Is it at the brainstorming board? Is it moving to regulation? What are you seeing out there?
Richard Jackson: Yeah. So I'll try to answer it this way, Isaac, because I do think it's multi-dimensional. Initially, I'll speak for EY, but I think it's similar for all of the larger accounting and auditing firms: we are certainly starting [00:26:00] to see AI come through in our clients' environments where we're doing our traditional role as the financial statement auditor. And, for all kinds of reasons we can discuss, finance, and the role of AI in finance, in a lot of organizations has perhaps been slower on the adoption curve. Personally, that doesn't surprise me, in that that community is probably the most risk-averse within any organization. It also tends to be the part of the business that's subject to the most scrutiny, third-party [00:26:30] verification, and compliance obligations. So, for all kinds of reasons, it doesn't surprise me that finance has been perhaps a little bit behind the curve in terms of adoption rates. But that's changing. And so we will, and we are starting to, see AI come through those traditional distribution channels of the financial statements, and then our role and obligations as the external party opining on those. We can talk a little bit about what we do specifically in [00:27:00] response to that. But at the same time, to your point, the explosion of the use of AI across the enterprise is leading management.
Richard Jackson: It's leading the CEO, it's leading the board, it's leading the investors to start asking the questions you're asking: hang on, who's going to help me understand? Are we doing this at the right pace? Are we doing it in the right way? Are we implementing the right governance and structures and training? [00:27:30] Are we even thinking about who we're partnering with on some of these technology solutions in the right way? And they naturally lean into organizations that have the breadth and the experience that organizations like EY have. They're also looking to organizations that have the credibility of that trusted assurance provider, that third party. But I agree with you that, in the first instance, they are looking at it through the lens of who can give the advice, the insight, and the perspective. [00:28:00] I'm careful about using the word "partnership," because a lot of people then misunderstand it and think you can't be objective. You absolutely can be. But it is, in the way you describe, really about giving that confidence, whether to the management team or the executive leadership within an enterprise. As for if, when, and how that will move into more external regulation and compliance, we can talk about that, but I think that other step is actually much, much more where we're seeing the demand and where we're [00:28:30] seeing the conversations with clients at the moment.
Isaac Heller: Got it. So I'll try to say it back and characterize how I'm thinking about it, because you mentioned specifically using AI in financial operations, which is a great place to start. The thought I would have, for anyone listening who's not an auditor or in assurance, or hasn't had experience in audit, is that you, as an assurance leader, [00:29:00] are in charge of verifying the accuracy of the financial statements. And presumably, as AI is introduced more and more into financial operations on the corporate side, they've leveraged AI to construct those financial statements, whether it's high-volume document review, anomaly detection in systems, connecting multiple systems together, looking for nonstandard terms or items, all these things. And [00:29:30] essentially you, as the audit person, come in the next day and ask, how did you arrive at this number? Oh, well, this system did it. Her name is Susie. Well, who's Susie? No, no, she's not our assistant controller, she's a bot, she's agentic, or whatever this conversation is. So is that the vibe? Is that what's happening? I'm not saying exactly like that, but are new financial tools being introduced, and then all of a sudden you and your team [00:30:00] have to think of the next level of frameworks? Is that what's happening now?
Richard Jackson: No, you're on the money. I'm not sure whether we call them Susie, but you're absolutely on the money in the way that you describe it, and it very much is. It starts in that conversation, as you say, when we come in and we try to dissect how the financial processes are being put together, how they're driving the numbers. You are absolutely starting to see that injection. Sometimes it could be management saying, hey, we've got this cool piece of technology we've introduced. Sometimes it [00:30:30] could be that they're working with a vendor or a third-party software provider and don't even realize the technology is embedded in there. So partly what we're doing is sometimes helping them discover where those issues are. And then it leads exactly where you're headed, which is, okay, you start getting into these conversations now of, well, what risks does that introduce into that ecosystem? Where's the capability come from? Is it built? Is it bought? What's the data that [00:31:00] it's trained on? What is the way in which it's been introduced into the environment? What are the tolerances? Where does the human sit in this? Are you giving it a large degree of autonomy? You mentioned agentic. Are you just letting it flow, or where have you engineered or designed in those control points, those touch points of the human in the workflow? All of those questions you get into, even before you get into that question you asked a few minutes ago, which is the ability to actually audit the AI, to actually get into the black box. And again, we'll come back to that [00:31:30] little, uh, problem statement as well.
Isaac Heller: Very good. So we're not in the black box yet. Right now, we're on the journey, you know, and we're trying to understand how you arrived at that number based on a series of steps that may or may not have AI, that you may or may not know about. Right? It could have been embedded. Right?
Richard Jackson: That's exactly right. Yep.
Isaac Heller: And then, you know, I have to ask. You mentioned regulation and regulators a few times. That's obviously a huge [00:32:00] part, a huge part, of the audit practice. You've got to be compliant, whether it's GAAP, IFRS, SOX, all the existing frameworks. And then, as an audit firm, you've got the PCAOB and internal inspections and all those areas. Do you think we're at a moment where, because AI is moving so fast, there's concern that we're not able to keep up in that dynamic between the regulators and [00:32:30] the auditors? Meaning, traditionally, the regulation has been pronounced and then it's been implemented. But you're sitting in Silicon Valley, you're at the cutting edge of all these advancements. You, as the audit or assurance leader, may pick up on a lot of these things more quickly or be able to adapt quickly. Do you have to take a lot of initiative at this stage, differently than you would before? Tell us a little bit about the dynamic of building those regulatory frameworks. [00:33:00]
Richard Jackson: Yeah, absolutely. I mean, one thing I would say is that regulators, stakeholders, the professional standard setters have all been very interested in this area for a period of time. So they've been watching all the same things. They've seen all the same trends and read the same headlines. And I think they are genuinely interested in engaging in conversations, both here in the US and internationally as well, where there are conversations [00:33:30] across the ecosystem around that understanding of, well, what does the technology look like? Where is it? Again, back to my comment of how quickly is it coming into these traditional areas where we have audited the financial statements, and then where do we see it going? Um, I would say there is a continued discussion of information sharing, of education on both sides. And I think it does come back very tactically to things that we both mentioned: identifying [00:34:00] what are the immediate risks that we can spot today, and then understanding how many of those can be solved for by the technology itself at the point of design, right, and how many of them can be or will need to be satisfied by the role of the human, interacting with and overseeing the technology, being that brake in the system relative to the acceleration of the technology. And then I think a lot of conversation around even just the professional standards against which we do audits. [00:34:30] Are they fit for purpose? Do we need new auditing standards? Do we need new guidance that keeps pace with, or anticipates, some of these technology changes? I think we're still very much a work in progress on those.
Richard Jackson: I think those of us that have been heavily involved in those conversations, we see that the auditing standards already envisage the use of technology in audit. We already have professional standards to understand where the data and its usage sit within any audit [00:35:00] process. We already have fairly expansive and very robust standards around the responsibilities of the human to interact with, to supervise and to review those technologies. But at the same time, the technology is pushing the boundaries every single day of the art of what is possible. And again, agentic AI is a great example, right, where you think about the ability to just fully automate and give autonomy to the technology itself. Well, at [00:35:30] some point you've got to be able to understand what the technology is doing, to be able to see the logs of: was it really following what it had been asked to do? Was it acting with the creativity that the technology can sometimes introduce into the process, or was it executing to a plan that the human had already understood and reviewed and blessed? My point being, it is a continual evolution and will continue to be an evolution, and I think [00:36:00] there will be a healthy tension between the pace at which the technology can evolve and the way in which the regulatory environment can either keep up with it or anticipate it, with varying degrees of specificity. Again, we've seen a little bit of a checkerboard, not just within audit, but more broadly, around what compliance in the AI space looks like.
Isaac Heller: It's funny. Well, first of all, if I'm starting my career in accounting or thinking [00:36:30] about going into it, I think these are pretty cool problems to solve. I think these are very interesting challenges, and challenges that some people who are graduating may be, you know, more uniquely suited to solve. Right? It's generational. You know, Richard, I'll be honest, I don't use ChatGPT. I saw someone with the app, and I'm sure I should use it more. But I've looked over the shoulder of some people who work here at Trillian, [00:37:00] and they prompt engineer everything, and they're good at it. It's like a skill, you know, on the list. And so I just think that's an incredible professional challenge to tackle. Um, you mentioned agents. Maybe let's try to unpack the black box. Can we just go inside and check it out? I mean, how do you audit the black box? Like, I don't even get it. Would it [00:37:30] be reperformance? Would it be, just show me what you arrived at? Is there something we're not even thinking about? Do you just leave the black box alone and focus on outcomes?
Richard Jackson: Um, up until your last point, I was going to say all of the above. I think it absolutely can't be about leaving the black box alone. Um, certainly today, because of the challenges of opening up the black box, we've had to look around the box, right? We've had to look to things like training. [00:38:00] We've had to look to how it's been developed. We've had to look to the data sources, all those things that I think a lot of people are familiar with. But now, as we move forwards and learn more, we are cracking open that box. I would still say it's a work in progress, and I would characterize it as combinations of the things you described, Isaac. It's the ability to actually create better logs of what the technology is doing, to create that breadcrumb trail that lets you understand, [00:38:30] at least on a look-back basis. And again, we'll come to whether or not you can start doing it more real time. But to look back and see: was the technology doing the things you expected it to? It's the ability to create parallel technologies and use bots that can actually be the quality control on the AI, so AI checking the AI itself, and perhaps having multiple versions of those quality control bots in the environment simultaneously checking things as processing occurs. And [00:39:00] then, for where we are right now, there's going to be that strong element of putting the human at the center of those conversations, right, and the ability to manually review some of those things.
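[Editor's note: the look-back approach Richard describes, an agent leaving a breadcrumb trail that is later checked against an approved plan, can be sketched in a few lines. Everything below (the log format, the `approved_plan` steps, the `review_log` helper) is an invented illustration, not any actual EY or client tooling.]

```python
# Toy sketch of a look-back review: an AI agent logs each action it takes,
# and a "quality control" check replays the log afterwards, flagging any
# step that was never part of the human-approved plan.
# All names and structures here are illustrative assumptions.

approved_plan = ["load_invoices", "match_to_po", "flag_anomalies", "post_journal"]

agent_log = [  # breadcrumb trail the agent emitted during processing
    {"step": "load_invoices", "records": 1200},
    {"step": "match_to_po", "records": 1187},
    {"step": "reclassify_expense", "records": 3},   # not in the approved plan
    {"step": "post_journal", "records": 1187},
]

def review_log(log, plan):
    """Return the logged actions that fall outside the approved plan."""
    allowed = set(plan)
    return [entry for entry in log if entry["step"] not in allowed]

for entry in review_log(agent_log, approved_plan):
    print(f"Needs human review: {entry['step']} touched {entry['records']} records")
```

Here the unexpected `reclassify_expense` step is surfaced for manual review, which is the "human at the center" control point rather than full autonomy.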
Richard Jackson: So that will mean we won't get to the logical extremities of efficiency, because you're not going to suddenly move overnight to get rid of humans, right? It's not a wholesale redundancy of the human, is my point. It's certainly going [00:39:30] to require a retooling of the human's skill sets and their role within those processes, but it's not going to be a removal of the human, at least in the short to medium term. And even in the longer term, it's going to be in discrete parts of processes. I do want to come back to one thing you mentioned, because I'll connect it back to the skills we're talking about. I do agree that these skills are most readily adapted and picked up by some of our youngest professionals. It's that level of interest and energy, and just the intuitive [00:40:00] way in which we see our youngest people engage. You know, we hire thousands of people off campuses every year across the globe, and they are just more naturally attuned to this way of thinking. They're more curious about it. They're less intimidated by the technology, and they perhaps don't draw the bright lines that people of my generation do when we think about all these challenges. They're actually much more willing to interact with it in the way you described and actually jump into the box, think around the box, [00:40:30] interact with the box. And I think that's hugely exciting. I agree with your observation that this is definitely an exciting time to be joining the profession.
Isaac Heller: Well, I mean, we need the best of both worlds, you know, the English, the history, all of that. I think people forget there's a lot of money at stake. There's a lot of trust that we've built up over decades. Accounting is the language of [00:41:00] finance, and it's the bedrock of financial reporting. So I just think it's fun to always remember that we're in this for the, you know, ethical considerations and concerns, stuff like that. So maybe leave us on one last note. Look into the crystal ball. What is a topic around the intersection of AI, finance and ethics, one area that you're personally interested in and exploring these days, [00:41:30] you know, in the next year?
Richard Jackson: Yeah. Um, I'm going to bring it back to that nice little word called data. Ooh. Um, because with a lot of organizations, we're finding that they've come to the stark realization that true disruption and transformation through AI and its capabilities is heavily predicated on the access, the quality and the availability of the data that they've got within the enterprise. And [00:42:00] they're increasingly realizing that they've probably underinvested in it. They probably haven't thought as creatively as they could around what all those different data sources within the enterprise are, and they are desperately keen to understand, well, what is the AI readiness of that data? How do you actually make sure that it's fit for purpose, so that you avoid issues like bias, so that you avoid issues like just basic quality of inputs? And [00:42:30] it's not sexy. You're kind of taking care of the plumbing. But a lot of organizations that I speak to have suddenly realized that that demand from the CEO of how are you going to fundamentally transform my business, or that ask of how are we going to fundamentally change the organization, the product offerings, the service offerings we've got, how will we stop ourselves from being disrupted? It's got to come back, at least to a significant degree, to that linchpin of data.
Richard Jackson: And [00:43:00] that's our sweet spot, right? Our sweet spot is coming back to understanding where that data comes from, being able to ask the good old traditional questions: Is it complete? Is it accurate? Is it fit for purpose? And then giving those insights and that value to clients around, okay, what is it you need to do now to transform that data and unlock all of the potential that the technology can bring to it. Um, so, you know, perhaps I'm doing myself a disservice. I started by telling you that the accounting profession was now getting attractive [00:43:30] again, and then I bring it back to data, which for a lot of people perhaps seems a little bit dull. But it is such a fundamental driver for a lot of our organizations. And again, it just plays to the skill sets that I think we as audit professionals bring to the conversation.
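[Editor's note: the "good old traditional" data questions Richard lists, complete, accurate, fit for purpose, translate naturally into automated readiness checks. The sketch below is a minimal illustration under invented assumptions; the field names, currency list, and thresholds are hypothetical, not any real client schema.]

```python
# Minimal sketch of AI-readiness checks on enterprise data:
# completeness (no missing required fields) and basic accuracy
# (values pass simple sanity tests). All names are illustrative.

REQUIRED_FIELDS = {"invoice_id", "amount", "currency", "posted_date"}

def completeness_issues(record):
    """Return the required fields this record is missing or left empty."""
    return sorted(f for f in REQUIRED_FIELDS if record.get(f) in (None, ""))

def accuracy_issues(record):
    """Flag values that fail basic sanity checks."""
    issues = []
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative amount")
    if record.get("currency") not in {"USD", "EUR", "GBP"}:
        issues.append("unknown currency")
    return issues

record = {"invoice_id": "INV-42", "amount": -125.0,
          "currency": "USD", "posted_date": ""}
print(completeness_issues(record))  # ['posted_date']
print(accuracy_issues(record))      # ['negative amount']
```

Checks like these are the unglamorous "plumbing" Richard refers to: run across a whole dataset, they give a concrete measure of whether the data is fit to train or feed an AI system.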
Isaac Heller: Yeah, I think so. Nothing new under the sun, you know, garbage in, garbage out. We can't do any of this fun stuff without the complete picture. So, Richard, it was good seeing you. Thanks for your time. I think this gave us a lot [00:44:00] of nuggets of wisdom around AI and ethics, but also a crystal-ball look into where some of the future could take us. It's always exciting to catch up with you.
Richard Jackson: Appreciate it. Thanks very much, Isaac.
Isaac Heller: Thank you.
