An interview with Vishal Marria
“This is becoming more of a board level conversation than ever before”
Maurice:
Hello everybody, welcome to C&F Talks. Today, I have with me Vishal Marria, who's the CEO and Founder of Quantexa. Vishal's going to be speaking at the AI and Digital Innovation Summit, which is being held as part of City Week on the 1st of July at the Royal Garden Hotel, London. Vishal, welcome.
Vishal:
Thank you, Maurice, and lovely to be here, thank you.
Maurice:
Great to have you with us.
Now, AI adoption is growing rapidly, but many organisations still struggle to move from proof-of-concept to scalable impact. In your view, what are the key enablers of success when it comes to realising the full potential of AI?
Vishal:
It's a great question, Maurice, and I think we need to distil what we mean by AI first. So, areas around predictive AI, neural networks, random forests, areas around deep learning and machine learning, those types of techniques and capabilities have been around for decades. They've been well deployed in organisations across financial services, government, telco and so on, well utilised, with great value.
Obviously, in 2022, a new form of AI, much more generative in nature, much more around predicting the next word, emerged through OpenAI and others. Now, what we're seeing in the industry today is a convergence: bringing together advanced analytics on trusted data with large language models or small language models, as organisations start to grapple with this great technology and put it to work. Now, as you rightly say, many organisations have run proof-of-concepts, hundreds of proof-of-concepts right across the enterprise, but only a small number are making it into production. And lots of people are asking, why is that?
So, in our experience, we've serviced a number of regulated markets, and the burden of proof on data, and on the transparency of those models, is really important when it comes to taking something from a POC into production. And those organisations that have had their AI policy in place, have worked with data stewards, and have invested time and money in technology and process around data have definitely found it easier to get AI into production.
So, where we have seen organisations struggle is where they haven't done that homework: they haven't put the grounding and the foundation in place to get the data trusted, and they haven't worked across the organisation on the policies for getting these models through risk management, model risk management and so on. If you haven't got that process in place, you're not going to get this into production.
Maurice:
So, get your data sorted before being more ambitious.
I suppose, leaping forward a little bit. Agentic AI, that seems to be, as many people say, the third phase of AI, able to take initiative itself and have a degree of autonomy. What do you see as the biggest opportunities for large enterprises from agentic AI?
Vishal:
So, agentic AI is the next wave of AI that we're seeing come more and more to the forefront. And let's also be very clear here, this is making a massive difference. The autonomous way of making decisions, taking actions and learning from interactions, especially when it's servicing multiple goals with, and now without, human intervention, is becoming even more topical.
But there's both opportunity here and risk. And I think sometimes organisations are quite fixated on the opportunity, of which there is a fair amount. But one also needs to consider all of the risk attached to this.
So again, if we go down to the next level of detail, agentic AI systems are designed to operate independently, make choices and, more importantly, form their own understanding of a situation rather than relying on pre-programmed instructions. So, we've got to be very careful here about what is feeding into these workflows and into these AI agents. And again, a lot of it, like my earlier point, is around that trusted view of data.
If the trusted view of data is inaccurate, then we're feeding something into an AI agent, which will be learning from actual inaccuracies. One thing I would also couple this with is the learning and the feedback loop into these workflows or these AI agents is going to be really critical. So having both the opportunity, which is there and the safety, which we need to consider.
If I was to summarise the era that we're about to go into, once again, it's going to come down to the trusted view of data. And for any thought leader or any organisation looking to adopt: have you got your data foundation right?
And the governance model, when it comes down to the feedback loop of those interactions and those outcomes, back into your analytics, back into the data, back into AI. These are some of the areas, but it's truly exciting. As an engineer by trade, this is becoming more of a board level conversation than ever before.
And contextualising this understanding is going to be a huge win as you look at the opportunity, but one does need to consider the safety around this too.
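The guard-plus-feedback pattern described above, where an agent only acts on trusted data and every outcome is fed back into the workflow, can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation; the function names (`check_quality`, `decide`, `agent_step`) and the threshold are all hypothetical.

```python
# Minimal sketch of an agentic step with a data-trust guard and a feedback loop.
# All names and rules here are illustrative assumptions.

feedback_log = []  # outcomes recorded for later learning / audit

def check_quality(record):
    # Hypothetical trust check: reject records with missing fields.
    return all(record.get(k) is not None for k in ("customer_id", "amount"))

def decide(record):
    # Stand-in decision rule for the agent.
    return "review" if record["amount"] > 1000 else "approve"

def agent_step(record):
    if not check_quality(record):
        return "rejected: untrusted input"   # bad data never reaches the agent
    outcome = decide(record)
    feedback_log.append((record, outcome))   # the feedback loop back into the workflow
    return outcome

print(agent_step({"customer_id": 7, "amount": 1500}))   # → review
print(agent_step({"customer_id": None, "amount": 20}))  # → rejected: untrusted input
```

The point of the sketch is the ordering: the trust check sits in front of the decision, and the log captures every decision so the workflow can learn from its own outcomes.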
Maurice:
Yeah, safety and governance are obviously key in this area.
Quantexa has built a reputation around contextual decision intelligence. How does your approach leverage AI to deliver clarity and accuracy in complex domains like financial crime, risk and data management?
Vishal:
So, I started Quantexa in 2016. It seems like a lifetime ago, but it's just nine years, and there have been many sleepless nights in that nine-year period. But what I would say is that AI was at the core of our business.
AI was at the core of the platform, the capabilities we've got in the platform and the use cases where we have deployed that platform. So, if I double-click into the next level: the way we connect siloed data together at scale, the way we define what we call our entities, and then how we stitch data together to form an entity, is all AI-driven. So, we have got a set of deep learning and machine learning components.
We have got heuristics for the different types of components that you want to stitch together for resolution. This again is baked into the platform. The second capability in the platform is, once I've resolved the entity, how do you then build real graphs from those entities?
So, understanding connectivity between entities is really important to tackle a number of different use cases for the enterprise. But distinguishing real associations from tenuous associations is a difficult challenge, one that can be solved using business rules, but also advanced analytics and machine learning. And all of that has been in the platform since we started, and it has been a journey.
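As a toy illustration of the rule-based side of that stitching, and only that (this is a sketch under my own assumptions, not Quantexa's implementation), two records from siloed sources can be resolved to one entity when their normalised name and postcode agree:

```python
# Minimal rule-based entity resolution sketch. Field names, the
# normalisation, and the match rule are illustrative assumptions.

def normalise(s):
    # Lowercase and strip punctuation/whitespace so "J. Smith Ltd." == "j smith ltd".
    return "".join(ch for ch in s.lower() if ch.isalnum())

def same_entity(rec_a, rec_b):
    # Heuristic: matching normalised name AND postcode => same entity.
    return (normalise(rec_a["name"]) == normalise(rec_b["name"])
            and normalise(rec_a["postcode"]) == normalise(rec_b["postcode"]))

crm_record = {"name": "J. Smith Ltd.", "postcode": "EC1A 1BB"}
payments_record = {"name": "j smith ltd", "postcode": "ec1a1bb"}

print(same_entity(crm_record, payments_record))  # → True
```

Production entity resolution layers many more signals (addresses, identifiers, fuzzy matching, learned weights) on top of rules like this, which is where the machine learning components come in.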
So, connecting data, building the context of that data, the use of AI has been paramount for the success of that platform. Now, solving use cases. So, we mentioned there financial crime, risk management and data management.
So, I'm just going to double-click on financial crime. Now, when you're trying to predict whether a transaction is high-risk with a high degree of efficacy, i.e. for anti-money laundering, one does need to think about the patterns of behaviour, the volume of transactions and the type of transactions; that's all very important. But what's also important when it comes to AML and contextualising patterns is looking at the context of that transaction.
So, if I send you a one-off $50 transaction, that might be interesting, or it might not be. If I'm sending you $50 every week for the last six months, that becomes even more interesting. If you then pass some of that money on to another party, and that person then passes it back to me, that becomes even more interesting still.
So, a transaction in isolation might look fine. But when you contextualise it across the graph, as well as looking at the sub-entities, suddenly the grey zone resolves: either this transaction is highly risky, or it's absolutely fine and carries no risk. So that, once again, is using a range of AI capabilities to find anomalous transactions that in isolation might look fine, but when you contextualise them, actually have a high likelihood of being suspicious.
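The round-trip pattern in that example ($50 goes out, passes through a third party, and comes back) is exactly a cycle in the transaction graph. A naive sketch of detecting it, with made-up names and amounts and no claim to resemble any production AML system, might look like this:

```python
from collections import defaultdict

# Toy transaction graph: (sender, receiver, amount). Illustrative data only.
transactions = [
    ("alice", "bob", 50),
    ("alice", "bob", 50),    # the repeated weekly transfer
    ("bob", "carol", 45),
    ("carol", "alice", 40),  # funds flow back to the origin
]

def find_round_trips(txns, origin):
    """Naive DFS for simple paths that leave `origin` and return to it."""
    graph = defaultdict(set)
    for src, dst, _amt in txns:
        graph[src].add(dst)
    cycles = []

    def dfs(node, path):
        for nxt in graph[node]:
            if nxt == origin and len(path) > 1:
                cycles.append(path + [nxt])      # closed the loop back to origin
            elif nxt not in path:                # avoid revisiting nodes
                dfs(nxt, path + [nxt])

    dfs(origin, [origin])
    return cycles

print(find_round_trips(transactions, "alice"))
# → [['alice', 'bob', 'carol', 'alice']]
```

Each transaction looks unremarkable on its own; only the graph view exposes the loop, which is the "context" point being made above.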
Maurice:
Very interesting. I mean, so context is key in the whole approach to understanding these sorts of patterns.
Looking at financial services more broadly, you stress the importance of data and governance and getting your homework right when it comes to data first of all. Is it particularly difficult in the financial services area because it's so highly regulated? Does it make the application of AI more problematic because of that highly regulated environment?
Vishal:
So, in our experience of working in financial services, there are many use cases where you can deploy AI with tremendous value. Once again, contextualising the data and building the patterns from it becomes really important.
But with regards to financial services, it is one of the most highly regulated markets. And as I've seen working with regulators domestically here in the UK, we can't apply the old policies for understanding models to this new type of technology. It's a fundamentally different type of technology.
So, if we try to use the same policies, the same standards, to understand a model in this new world as we did pre-AI, we're going to stifle innovation. We're going to hinder this powerful technology from getting more prime time in financial services.
And we've partnered with many organisations on how to get AI into production: how do you curate data, connect data, apply machine learning across that data, and so on. This becomes really important. And the transparency of these models becomes even more important for getting them through model risk management, model governance, and so on. So, it's a really important thing.
And this is a journey; one forgets this is a journey. We're two years into a 10-year journey. The advances in technology have been great in the last two years, and so has some of the adoption.
But there's so much more to do on adoption. AI will transform every single process in an organisation, and we have to be open to this change, because it is happening.
So, I think financial services are well poised to get the value out of this powerful technology. And I'm really looking forward to the next steps, how we embrace, adopt these technologies right across the enterprise.
Maurice:
Just taking that point a little bit further, do you think that regulators understand this need to adjust regulation, perhaps, to take account of this new form of technology? Do you think that they understand the difference between pre-AI and post-AI? And do you feel that not just here in the UK, but across the world, that there is a sea change in the approach of regulators to AI?
Vishal:
I think change is happening on this. In my personal dialogue with the home regulator here in the UK, they are very much open to change, to understanding how to take these models through. We've had a number of conversations with the regulator on what is our view on this model, what is our view on this data, and, more importantly, how one can take this through governance and bring it into production.
So, in my view, from the experiences I've had and the meetings so far with regulators around this topic, absolutely, they're very open to understanding how we do this, how we put this into production. So, there's definitely a willingness. But once again, this technology is moving at record-breaking speed.
The agility in these models, the fact that these models are being tuned constantly, means this change is happening at speed. So, one needs agility in their process, and agility in their models, to adapt this and bring it into production. And that's both on the financial services side and on the regulation side too.
Maurice:
Yeah, yeah, absolutely. We could continue this conversation for a very long time. There's so much we could cover. Sadly, I think we've probably run out of time.
For our viewers, we very much hope you'll be able to join us at the event itself to hear more on this and many other issues. And the event again, the AI and Digital Innovation Summit being held as part of City Week on the 1st of July at the Royal Garden Hotel in London. Further information available at www.cityandfinancial.com.
Vishal, I very much look forward to seeing you in July.
Vishal:
Likewise. Thank you for your time today, and I look forward to seeing you all on the 1st of July.