This week sees the release of an industry briefing we prepared for Sybase entitled ‘Managing Risk Data in the Siloed Enterprise’. I’ll be presenting the key findings at an event in London on Tuesday, where I’ll be joined by a group of risk managers and data architects – as well as Sybase’s Stuart Grant, IDC Financial Insights’ Matt Clay and Deloitte’s Julian Leake – to talk about what’s emerging as our industry’s thorniest problem: how to extract the data needed for senior management and regulatory reporting from the “spaghetti crow’s nest” of data repositories and plumbing connections that is today’s financial markets enterprise.
The event is sold out (although there are always cancellations, so if you’re interested in attending, drop me a note and we’ll try to squeeze you in). And I’m not surprised.
Two years ago, when we first started thinking about our Risk-Technology.net publication, this was the single compelling industry issue we sought to bring clarity to. Since then, covering the marketplace more closely, what’s emerged is a clear picture of opacity. That is to say, the legacy of 10, 15, even 20 years of market and credit risk systems is a siloed landscape of individual data repositories that struggle to speak to one another, are difficult to extract useful data from, and operate to their own timescales.
This leaves the data manager – and in particular the risk architect – with the task of cleaning up the problem. At stake is his or her firm’s ability to meet the ever more onerous requirements of the regulators, while at the same time giving management the business information it needs to remain nimble, agile and ahead of the competition, which is trying to do precisely the same thing.
Our paper on the topic was based on discussions with some people we consider to be innovators in the European marketplace, mostly from large sell-side organisations. What we found was a distinct commonality in the understanding of the challenge they are facing, and the adoption of some clever ways to clean things up from a risk data perspective. Remember: identifying, gathering, normalising and orchestrating the broad range of data required is a gargantuan task in itself; getting funding for help is even tougher.
Our interviewees were candid about the task in hand, and realistic about their ability to get things in order. They talked about their approach to adopting technologies to help them, about how to manage those technologies and the implementation projects from a governance standpoint, and about the importance of getting the balance right between buying and building their own.
There is reason for optimism: two years ago, few had identified the need for action in this area. Today, it’s widely recognised as a, if not the, top priority for data architects. Meanwhile, we’re talking to an increasing number of solution providers with tools that can help; pioneers are adopting fast database technologies and highly scalable distribution mechanisms that fit the bill for generating on-demand risk information in the large financial enterprise.
Tuesday’s event – at the Green’s & Runner Bar in the City – should be enlightening. If you can’t make it along, we’ll be posting the paper for free download in the coming days. You’ll hear from us, or you can check the research section of the website.