
Volta Data Centres Blog

Putting Reference Data to Work

Posted by Matthew Dent on 05-Mar-2014 11:24:00

Matthew Dent, CEO of Volta Data Centres, explores the options for financial services companies under pressure to manage their regulated data as a result of recent regulatory requirements.

2014 is widely acknowledged as the year of Big Data, and recent regulatory requirements mean that financial services companies have to get to grips with “Regulated Data” whether or not they are ready to embrace the idea of Big Data itself.  Financial services companies need not only to untangle the different types of reference data required by each piece of legislation for reporting purposes, but also to familiarise themselves with advances in data management systems and the options available for storing and, more importantly, quickly and accurately retrieving the relevant data sets.
 
Requirements against risk
The European Market Infrastructure Regulation (EMIR) is the most recent legislative arrival, taking effect in early February 2014, and sits alongside the Dodd-Frank Act, the Foreign Account Tax Compliance Act (FATCA) and Solvency II - all of which need reference data for their reporting requirements.  These regulations, with the exception of FATCA, have been put in place to address the failings exposed by the collapse of Lehman Brothers in 2008, namely the need to dramatically improve transparency and risk management within the financial system.  Regulators want a system that shows the whole picture of market risk and exposure, so that counterparties can quickly identify their trades with a bank if that bank fails.  With these new requirements comes the need for derivatives traders, for example, to classify their counterparties, yet each counterparty needs to be classified differently according to the requirements of each regulation.  Protocols, Legal Entity Identifiers (LEIs), Unique Trade Identifiers (UTIs), valuation, margining and Know Your Customer (KYC) information all need to be rationalised and streamlined.
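As a rough illustration of the classification problem, the short Python sketch below shows how a single counterparty record might need to carry a separate classification for each regime. The field names and category values are invented for this example and are not taken from any regulation's official taxonomy.

```python
from dataclasses import dataclass

# Hypothetical sketch: one counterparty, several per-regulation classifications.
# Field names and category values are illustrative only.
@dataclass
class Counterparty:
    name: str
    lei: str                      # Legal Entity Identifier (20 characters)
    emir_category: str = ""       # e.g. "FC" (financial) or "NFC" (non-financial)
    dodd_frank_status: str = ""   # e.g. "Swap Dealer"
    fatca_status: str = ""        # e.g. "Participating FFI"
    kyc_reviewed: bool = False

cp = Counterparty(
    name="Example Bank plc",
    lei="5493001234567890ABCD",   # dummy LEI value
    emir_category="FC",
    dodd_frank_status="Swap Dealer",
    fatca_status="Participating FFI",
    kyc_reviewed=True,
)
print(cp)
```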
 
In order to have a proper overview of risk, existing technology silos and operational processes need to be brought together and information needs to be centralised.  Beyond standardising data, the challenge is managing its sheer volume while retaining the ability to access it accurately and quickly.  According to an Oracle white paper on Financial Services Data Management, traditional ETL (Extract, Transform and Load) systems take several days to extract, transform, cleanse and integrate data, yet regulatory pressure dictates that this entire process be run many times every day. Running risk scenarios can also generate terabytes of additional data on a daily basis.  Within this challenge, however, lies an opportunity: businesses that can successfully manage and mine both structured and unstructured data have the ability to dramatically improve business insight.
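To make the extract-transform-load cycle concrete, here is a minimal Python sketch of the three steps that would have to be rerun many times a day under this kind of regulatory pressure. The file names and column names are invented purely for illustration.

```python
import csv
from datetime import datetime, timezone

# Minimal ETL sketch (illustrative only): extract trade records from a CSV feed,
# cleanse/standardise them, and load the result into a target file.

def extract(path):
    """Extract: read raw rows from an incoming feed."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: standardise identifiers and drop malformed records."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "trade_id": row["trade_id"].strip(),
                "lei": row["lei"].strip().upper(),
                "notional": float(row["notional"]),
                "loaded_at": datetime.now(timezone.utc).isoformat(),
            })
        except (KeyError, ValueError):
            continue  # in practice, failed rows would be logged for remediation
    return cleaned

def load(rows, path):
    """Load: write the cleansed records to the target store."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["trade_id", "lei", "notional", "loaded_at"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    load(transform(extract("trades_feed.csv")), "trades_clean.csv")
```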

Advances in data management techniques allow for greater performance and scalability.  Distributed file systems and parallel processing have made gains on the traditional RDBMS (Relational Database Management System), which struggles to keep a grip on big data.  Distributed frameworks such as Hadoop, which pair a distributed file system with a parallel processing engine such as MapReduce, allow data to be spread over several physical machines and accessed in parallel, improving response times.
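A toy illustration of the map-reduce idea, written in plain Python rather than on a real Hadoop cluster: partitions of trade data are mapped in parallel across worker processes, and the partial results are then reduced into a single aggregate. The data and names are invented for this sketch.

```python
from multiprocessing import Pool
from collections import Counter
from functools import reduce

# Toy map-reduce sketch (not Hadoop): count trades per counterparty across
# data partitions processed in parallel. Records are invented for illustration.
PARTITIONS = [
    [("BANK_A", 1), ("BANK_B", 1), ("BANK_A", 1)],
    [("BANK_C", 1), ("BANK_A", 1)],
    [("BANK_B", 1), ("BANK_C", 1), ("BANK_C", 1)],
]

def map_partition(records):
    """Map step: produce partial counts for one partition."""
    counts = Counter()
    for counterparty, n in records:
        counts[counterparty] += n
    return counts

def reduce_counts(a, b):
    """Reduce step: merge two partial counts."""
    return a + b

if __name__ == "__main__":
    with Pool() as pool:
        partials = pool.map(map_partition, PARTITIONS)  # mapped in parallel
    total = reduce(reduce_counts, partials, Counter())   # reduced to one result
    print(dict(total))  # e.g. {'BANK_A': 3, 'BANK_B': 2, 'BANK_C': 3}
```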

Options for data management
The responsibility for ensuring a robust compliance infrastructure within a financial organisation lies with Heads of Risk and Chief Technology Officers, and banks hiring ‘Heads of Data’ is becoming more common.  The options available to them are to continue storing reference data on servers located on their own premises, to store it on outsourced servers in the form of a private or hybrid cloud, or to outsource the entire data management process to specialised reference data management solutions providers.   Significant resources are needed to collect, standardise and manage reference data, and the overhead of maintaining and upgrading server capacity in-house can be an unwanted fixed cost at a time of shrinking margins for financial organisations.

Outsourced cloud providers are attractive because they remove the need to constantly upgrade equipment and make it easier to move workloads between servers.  When choosing a cloud provider, it is best to ensure that its data centres are well connected, with a number of carriers providing ultra-low-latency links, since volumes of data will potentially be spread across a network of servers, making the speed of connectivity critical.  Volta’s Great Sutton Street data centre is a good example of interconnectivity, with more than 19 carrier providers, 8 of which have their own diverse fibre links.  Its central London location on the doorstep of the City’s financial services industry means low-latency connectivity to financial organisations themselves or to other London data centres.

Data management services are becoming more popular as a result of the recent regulatory drive.  According to Computer Weekly, these service suppliers can typically help banks make initial cost savings of 20-50%, and identifying the most accurate and timely feeds whilst eliminating redundant fees can lead to savings of up to $200m (£120m) for global organisations.  One of the most important criteria for these service providers is scope for growth, since the volume of data to be managed will naturally increase.  A key question for such suppliers is whether they have procedures in place to let organisations exceed their allotted number of virtual machines within a data centre if required.

Data diamonds in the rough 
The choice of how to handle the problem of reference data will come down to how comfortable the Head of Risk is with letting other companies handle the sensitive issue of data management.  Regulation is the main driver in getting a grip on data and ensuring that up-to-date data management systems are installed to get the most efficient and effective results from inevitably growing volumes of data.   An opportunity presents itself for those who implement more advanced systems: the data within them can be used to gain a fuller understanding of customers and markets, improving the data quality that ultimately powers financial algorithms and modelling.
