How can generative AI revolutionize financial services, particularly in the areas of risk management, early warning signals, and market condition monitoring?
Machine learning has become a near-standard tool, particularly in anomaly detection. Given the rivers of data emitted by markets, traders, and systems, firms are hungry to digest this information and identify patterns faster than ever. Even more significantly, the explosion of generative AI techniques is unlocking new possibilities for financial services, especially when used by workers as a “data-centaur” – where AI acts as a powerful assistant, but humans lead and make the decisions. For example, generative AI models could be used to brainstorm new stress tests based on a portfolio. When a firm’s risk surface drifts from the original models, AI tools can summarize that deviation automatically, and a risk manager can then decide how to address it strategically. Ultimately, AI enables firms to pursue more high-quality opportunities with fewer compromises.
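As a minimal illustration of the anomaly-detection piece, the sketch below flags outlier daily returns with an isolation forest. The synthetic data, contamination rate, and single-feature setup are hypothetical stand-ins, not a production model.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The synthetic returns and 1% contamination rate are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)
returns = rng.normal(loc=0.0, scale=0.01, size=(1000, 1))  # ordinary days
returns[::200] *= 12  # inject a few outsized moves to detect

model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(returns)  # -1 marks an anomaly, 1 is normal

anomalous_days = np.flatnonzero(labels == -1)
print(f"flagged {len(anomalous_days)} anomalous days: {anomalous_days}")
```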
What challenges do investment firms face in establishing a robust data governance framework, and how can these challenges be overcome?
The rule of “garbage in, garbage out” is well known – dirty data gets magnified through an AI lens. High-performing investment firms understand the importance of properly managing the massive amounts of data they collect. Like most things, data is easier to clean as you go than to clean up later. Still, firms have to do both: the volumes of disparate data from a vast array of sources make it a challenge to ensure data quality and consistency. For example, every data source has slightly different identifiers, terms, schedules, and approaches to correcting errors or publishing revisions.
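To make the identifier problem concrete, here is a toy cross-reference step that maps vendor-specific security identifiers onto a single internal ID. The vendor names and mapping table are invented for illustration.

```python
# Toy cross-reference: map each vendor's identifier to one internal ID.
# Vendor names and the internal ID are hypothetical examples.
XREF = {
    ("vendor_a", "US0378331005"): "AAPL-INTERNAL",  # ISIN
    ("vendor_b", "037833100"): "AAPL-INTERNAL",     # CUSIP
    ("vendor_c", "AAPL UW"): "AAPL-INTERNAL",       # ticker + exchange
}

def normalize_id(source: str, raw_id: str) -> str:
    """Resolve a vendor identifier to the internal ID, failing loudly on unknowns."""
    try:
        return XREF[(source, raw_id.strip())]
    except KeyError:
        raise ValueError(f"unmapped identifier {raw_id!r} from {source!r}")

print(normalize_id("vendor_b", "037833100"))  # -> AAPL-INTERNAL
```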
Additionally, as some data is extremely sensitive and not for companywide distribution, data governance must be built into all data handling processes to support security, compliance, and risk mitigation. Firms need a secure platform for their valuable data that includes a data catalog, automated data quality rules, and fine-grained data access controls. This acts like a data officer copilot, guiding the firm to best practices.
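As a hedged sketch of what automated data quality rules can look like in practice, the snippet below runs a few declarative checks on each incoming batch. The rule set and field names are assumptions for the example, not any specific product’s API.

```python
# Illustrative data quality rules applied to a batch of incoming records.
# Field names and rules are hypothetical.
RULES = [
    ("id_present",     lambda r: bool(r.get("instrument_id"))),
    ("price_positive", lambda r: r.get("price", 0) > 0),
    ("date_present",   lambda r: r.get("as_of_date") is not None),
]

def run_quality_checks(records):
    """Return records that pass every rule, plus (index, failed rules) pairs."""
    passed, violations = [], []
    for i, record in enumerate(records):
        failures = [name for name, check in RULES if not check(record)]
        if failures:
            violations.append((i, failures))
        else:
            passed.append(record)
    return passed, violations

batch = [
    {"instrument_id": "AAPL-INTERNAL", "price": 189.5, "as_of_date": "2024-05-01"},
    {"instrument_id": "", "price": -3.0, "as_of_date": None},  # fails every rule
]
good, bad = run_quality_checks(batch)
print(f"{len(good)} passed; violations: {bad}")
```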
When it comes to AI, data accuracy, cleanliness, and preparation are both a technical necessity and a strategic investment in future-proofing a business.
What role does a flexible technological framework play in the successful deployment of AI in investment firms, and why is it important?
A flexible technological framework gives investment firms the ability to stay current and scale along with their AUM – a “must-have” as we move into a period where speed is imperative. Long cycles for change crush the ROI of a firm’s AI strategy, while fast adaptability lowers the barrier to change and gives firms the ability to leverage AI not only to make informed decisions quickly, but also to improve their operational efficiencies and overall investment outcomes.
The best flexibility comes from a platform that operates on two layers. The base layer ensures all data is treated uniformly, while the second layer understands the financial context of the data. This frees teams from managing the data itself and unlocks the ability to invest time in testing new models or applying new AI technologies.
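One way to picture the two layers: a base layer that coerces every source into a uniform record shape, and a context layer that interprets those records as financial objects. This is a hypothetical sketch of the idea, not any particular platform’s actual architecture.

```python
# Sketch of a two-layer design: a uniform base layer plus a financial-
# context layer on top. Class and field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Record:      # base layer: every source lands in this uniform shape
    source: str
    entity_id: str
    fields: dict

@dataclass
class Position:    # context layer: financial meaning added on top
    instrument_id: str
    quantity: float
    market_value: float

def to_position(record: Record) -> Position:
    """Interpret a uniform record as a position, applying financial context."""
    f = record.fields
    return Position(
        instrument_id=record.entity_id,
        quantity=float(f["qty"]),
        market_value=float(f["qty"]) * float(f["px"]),
    )

raw = Record(source="vendor_a", entity_id="AAPL-INTERNAL",
             fields={"qty": "100", "px": "189.5"})
print(to_position(raw))
```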
What insights can you provide on constructing a durable data foundation and framework to empower successful AI integrations in the financial industry?
Weak or inconsistent data puts the entire AI structure at high risk and will produce false and unreliable information, leading to subpar decisions, reporting errors and even regulatory repercussions.
Firms must first be able to ingest, validate, cleanse, and normalize their data to make it suitable for downstream AI integrations. Data quality is paramount for predictive modeling tools that rely on historical information. A data catalog is crucial for helping firms reduce the risk of using incomplete or unfinalized data, and discoverability and proper tagging further verify reliability, giving firms confidence that their information is trustworthy. Investment firms should also encourage data exploration by publishing to a “data lake”, which allows for new use cases without having to bridge data silos.
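As a small illustration of how a catalog can keep unfinalized data out of downstream models, consider a lookup that refuses to serve datasets not tagged as finalized. The catalog structure, dataset names, and tags below are invented for the example.

```python
# Toy data catalog: datasets carry tags, and consumers may only fetch
# entries tagged as finalized. Names, tags, and paths are hypothetical.
CATALOG = {
    "eod_prices_2024_05_01": {"tags": {"finalized", "prices"},
                              "path": "/lake/prices/2024-05-01"},
    "eod_prices_2024_05_02": {"tags": {"preliminary", "prices"},
                              "path": "/lake/prices/2024-05-02"},
}

def resolve(dataset: str) -> str:
    """Return a dataset's path only if the catalog marks it finalized."""
    entry = CATALOG[dataset]
    if "finalized" not in entry["tags"]:
        raise PermissionError(f"{dataset} is not finalized; refusing to serve it")
    return entry["path"]

print(resolve("eod_prices_2024_05_01"))  # served
# resolve("eod_prices_2024_05_02")       # raises PermissionError
```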
Also, a platform that allows you to pivot to new models or onboard new data sets quickly will provide the agility to adapt to new challenges.
What are the key considerations for investment firms when deciding whether to build or buy generative AI tools, and what are the associated risks and costs?
The build vs. buy question comes down to what firms want to achieve both in the short and long term, as well as what is currently available in the market and how this fits to serve their unique needs.
If firms opt to build, there are significant considerations, including:
· Cost – Creating, training, and operating AI models can be incredibly expensive in both human capital and server costs.
· Environmental impact – A model’s water consumption for cooling processors is significant, not to mention its effect on energy use.
· Security – Even with a secure data foundation and protected “data lakes”, firms must ensure generative AI doesn’t inadvertently reveal sensitive information in response to an improper prompt (see the redaction sketch after this list).
· Speed – Finding the right talent in a competitive market can be a taxing, time-consuming endeavor.
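On the security bullet above, one common mitigation is to redact sensitive fields before any text reaches a generative model. The pattern below is a deliberately simple sketch – real deployments rely on far more robust detection than a pair of regular expressions, and the patterns here are invented for illustration.

```python
# Simplistic redaction guard: scrub obviously sensitive tokens before a
# prompt is sent to a generative model. Patterns here are illustrative.
import re

REDACTIONS = [
    (re.compile(r"\bACCT-\d{6}\b"), "[ACCOUNT-REDACTED]"),  # internal account codes
    (re.compile(r"\b\d{9}\b"), "[ID-REDACTED]"),            # bare 9-digit identifiers
]

def redact(prompt: str) -> str:
    """Apply each redaction pattern to the outgoing prompt."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Summarize exposure for ACCT-123456 holding 037833100."))
# -> "Summarize exposure for [ACCOUNT-REDACTED] holding [ID-REDACTED]."
```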
By leveraging pre-built solutions, firms can significantly reduce deployment times and quickly benefit from enhanced efficiencies and improved performance. When it comes to buying, firms should ask two key questions:
· Does the vendor understand your financial domain, and can their models handle the nuances of your industry’s data? A model that can compose poetry for you is probably not tuned to generate effective risk management models or sample investment portfolios for tests.
· Will the vendor be there for the long term, with a plan to support you as your needs evolve?
What strategies can investment firms employ to ensure that their AI-driven initiatives are both effective and sustainable in the long term?
Success starts with winning over internal stakeholders. A well-developed AI project will only get you so far if internal teams are hesitant or are not using the tools properly. Starting with small initiatives and announcing successes companywide will provide proof of concept. Designing your AI initiative to superpower your people as “data-centaurs” and improve their capabilities will help with adoption.
Also, those who fail to build a good foundation early will need to backtrack, which is costly and time-consuming. Building a proper data foundation with data governance should be a top priority for all firms. Whether a firm builds or buys its AI initiative, governance assures that only the right people and programs can access data, and lineage ensures there are credible, easy ways to trace a model’s sources or identify and mitigate unusable data.
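To ground the lineage point, here is a minimal sketch of recording which sources feed a derived result so the chain stays auditable. The artifact names and structure are a hypothetical illustration.

```python
# Minimal lineage record: every derived artifact remembers its inputs,
# so any result can be traced back to its sources. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    inputs: list = field(default_factory=list)  # upstream Artifacts

    def lineage(self, depth: int = 0) -> None:
        """Print the full upstream chain for this artifact."""
        print("  " * depth + self.name)
        for parent in self.inputs:
            parent.lineage(depth + 1)

prices = Artifact("eod_prices_2024_05_01")
positions = Artifact("positions_2024_05_01")
risk_report = Artifact("var_report", inputs=[prices, positions])
risk_report.lineage()
# var_report
#   eod_prices_2024_05_01
#   positions_2024_05_01
```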
Matt is a Senior Vice President and Technical Strategy Lead at Arcesium, where he heads the Forward Deployed Software Engineering team.