by Tim Baker.
The rather boringly titled BCBS 239, published in January 2013 by the Basel Committee on Banking Supervision, differs in approach from the committee's other regulations: it is less about the “what” and more about the “how” of organizing major banks to manage financial risk.
Covering areas such as data governance and information and data management, it sets out 14 principles (11 for banks and 3 for regulators). The opening paragraph quite succinctly sets up the need for these principles to be adhered to:
“One of the most significant lessons learned from the global financial crisis that began in 2007 was that banks’ information technology (IT) and data architectures were inadequate to support the broad management of financial risks. Many banks lacked the ability to aggregate risk exposures and identify concentrations quickly and accurately at the bank group level, across business lines and between legal entities. Some banks were unable to manage their risks properly because of weak risk data aggregation capabilities and risk reporting practices. This had severe consequences to the banks themselves and to the stability of the financial system as a whole.”
The Looming 2016 Deadline
BCBS 239 set a date of January 2016 for this to be fixed, or at least for institutions to make material improvements against the principles outlined in the report.
How are banks going to do this? For one thing, they’ve created positions that didn’t exist in the past, such as chief risk officer, chief data officer and chief information officer. Many have set up “239” working groups aimed at systematically addressing the requirements. But for most, the task is hardly trivial. When we started asking banking clients about the complexity of their operations, even we were surprised: the sheer number of databases can serve as a proxy for the challenge. One large global investment bank has about 90,000 databases. Another has 40,000, another 19,000. However you define a database, that’s a lot. With that kind of complexity, centralizing the data is not practical. What you can do, however, is organize it.
Using the PermID to Improve Risk Data Aggregation and Reporting
Five years ago, Thomson Reuters, then a newly merged company with a multitude of databases, needed an information model that would enable seamless cross-referencing and connection of data, so it could be delivered into product and client workflows in a joined-up fashion. A project called the Content Market Place (CMP) set about defining how to structure and organize data, establishing a clear set of rules around data governance, and creating a single “master copy” of every data fact, along with its associated linkage and metadata. Everything organized by the information model is identified by a unique and permanent identifier (PermID), a number similar to a barcode.
The PermID is a critical part of an information model that seeks to solve the challenges of managing and linking data. It is unique in several key regards:
1. While most identifier methods describe subsets of entity types or categories, PermID provides comprehensive identification capability across a wide variety of entity types, today consolidating 30 million financial instruments, 3 million issuers and 3 million people, as well as a myriad of other entity types.
2. Because PermID can be used to identify a wide variety of object types, it is an ideal method for better description of the relationships between those objects, as well as an anchor for description of an object’s properties or characteristics.
3. Most identifier methods are opaque, not openly sharing the meaning behind the identifier. PermID is accompanied by services supporting selection and dereferencing – lookup of an identifier based on supporting data, and conversion of an identifier back to supporting data.
4. Most identifier methods originated when processes were mostly focused on people. PermID fully supports use by machines, thus helping improve scale and reducing latency.
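The selection and dereferencing services in point 3 can be sketched as a simple bidirectional resolver. This is a hypothetical illustration, not the actual PermID API; the class, field names and sample identifier are all invented for the sketch.

```python
# Hypothetical sketch of a PermID-style resolver: selection (supporting
# data -> identifier) and dereferencing (identifier -> supporting data).
# Record shapes and the sample PermID value are illustrative assumptions.

class PermIdResolver:
    def __init__(self):
        self._by_id = {}    # permid -> record of supporting data
        self._by_name = {}  # normalized name -> permid

    def register(self, permid, record):
        """Master a record once under its permanent identifier."""
        self._by_id[permid] = record
        self._by_name[record["name"].lower()] = permid

    def lookup(self, name):
        """Selection: find the identifier from supporting data."""
        return self._by_name.get(name.lower())

    def dereference(self, permid):
        """Dereferencing: convert the identifier back to supporting data."""
        return self._by_id.get(permid)

resolver = PermIdResolver()
resolver.register(4295861160, {"name": "Example Corp", "type": "organization"})
```

Because the identifier is permanent and machine-readable, the same resolver can serve both human lookups and automated pipelines, which is what enables the scale and low latency described in point 4.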
Particularly powerful was the establishment of central data “authorities” to ensure that key entity data (such as company and people data) was mastered only once and linked to and from the legacy databases. This approach was a fundamental departure from the norm, which would typically involve putting the data in a shiny new data warehouse. Instead, under this federated model, the databases of record were left largely untouched, “re-mastered” and linked to common, centrally-managed references. Filings data thus referenced the same corporate entity as the security master. Officers and directors of that same company were properly linked to the correct corporation. As more databases adopted the PermID, the data model became more complete and powerful.
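A minimal sketch of that federated re-mastering follows, using invented record shapes: each legacy database of record keeps its own local keys and stays in place, but gains a link to the centrally-mastered identifier.

```python
# Hypothetical sketch of federated re-mastering: legacy records stay in
# place but are linked to a central identifier rather than migrated into
# a new warehouse. All record shapes and key values are invented.

# Central authority: one master copy of each company entity.
company_authority = {4295861160: {"name": "Example Corp"}}

# Legacy databases of record, each with its own local identifier scheme.
filings_db = [{"filing_id": "F-001", "issuer_name": "Example Corp"}]
security_master = [{"local_sec_id": "S-42", "issuer_name": "Example Corp"}]

def remaster(records, name_field, authority):
    """Attach the central identifier to each legacy record,
    leaving the record otherwise untouched."""
    name_to_id = {v["name"]: k for k, v in authority.items()}
    for rec in records:
        rec["issuer_permid"] = name_to_id.get(rec[name_field])
    return records

remaster(filings_db, "issuer_name", company_authority)
remaster(security_master, "issuer_name", company_authority)
```

After re-mastering, the filings record and the security-master record reference the same corporate entity through the shared identifier, without either database having been migrated or restructured.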
This federated approach has produced a rich, interconnected data model that increasingly spans the data assets of the whole firm.
Why PermID is Fundamental and Key
Our customers are looking to solve a similar set of challenges – legacy databases mastered on different identifiers, many populated from both internal and external data sources – but now with a burning need to connect and aggregate data for risk and reporting purposes. Something as simple as assessing a bank’s risk exposure to, say, a single corporation and its subsidiaries, where that exposure can run through a myriad of securities, private loans and derivatives, is a non-trivial exercise and requires a precisely-defined and accurate information model like the one we have developed.
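To illustrate why a common identifier matters for that aggregation, the sketch below rolls up exposures booked against different instrument types and subsidiaries to a single parent entity. The hierarchy, identifier values and exposure figures are all invented for the example.

```python
# Hypothetical sketch: aggregating a bank's exposure to one corporation
# and its subsidiaries, once every position is keyed by a common
# identifier. Hierarchy and figures are invented for illustration.

# Entity hierarchy: subsidiary id -> parent id (100 is the parent group).
parent_of = {200: 100, 300: 100}

def ultimate_parent(permid):
    """Walk up the hierarchy to the group-level entity."""
    while permid in parent_of:
        permid = parent_of[permid]
    return permid

# Positions across securities, loans and derivatives, keyed by issuer id.
positions = [
    {"issuer": 100, "type": "bond",       "exposure": 5_000_000},
    {"issuer": 200, "type": "loan",       "exposure": 2_000_000},
    {"issuer": 300, "type": "derivative", "exposure": 1_500_000},
]

def group_exposure(positions, group_permid):
    """Sum exposure across all entities rolling up to the given parent."""
    return sum(p["exposure"] for p in positions
               if ultimate_parent(p["issuer"]) == group_permid)
```

Without a shared identifier linking the loan book, the security master and the derivatives system to the same legal-entity hierarchy, this roll-up would require fuzzy name matching across every position, which is exactly the aggregation weakness BCBS 239 targets.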
When banks hear about how we solved the problem of fragmented and disconnected data, it suggests a relevant approach for addressing deficiencies in their own information architecture and data governance. Some clients have indicated that PermID should be included in all our products, along with linkage data that would help them work with, ingest and connect our data. Others want a copy of the whole data model or “graph,” which would provide a more effective “catcher’s mitt” for our data and perhaps the basis of their own data model. Others want us to open-license the ID so they can re-use the identifiers and encourage others to adopt it. And some want to become part of the “federation,” to be assigned IDs for their own use or to build onto our graph.
As banks look to improve their information model and data governance, and as a result make their risk management systems more robust, they can certainly learn from the journey we’ve been on. While few will adopt our approach in its entirety, we are committed to partnering with them to develop better data strategies.
BCBS 239 has put many firms on the defensive, but in the long run we are sure implementing better data management practices will support growth initiatives. By helping banks better understand the principles and approaches that govern the Content Market Place, we are actively helping them develop their 239 strategies.