In the past year, the big data pendulum in financial services has officially swung from passing fad and experiment to large-scale deployment. That puts a somewhat different slant on this year's big data trends when compared to 2016.
The question of big data hype versus reality has finally been put to rest for banks, as the upward-trending spend on big data solutions in financials confirms. Bear in mind that the trends and themes discussed here apply to the next-generation infrastructure solutions and quantitative approaches that have come to market over the past seven to ten years. The past year squarely put the ROI and value generated from these efforts into question. This scrutiny is evident not only in management consulting reports, but also in CIO and CDO turnover at many banks, as these executives face increasing pressure after several years of spending.
Now that the hype has subsided, banks appreciate that big data is not a panacea for all woes. It nevertheless remains vital to reducing overall IT operating costs and delivering advanced data capabilities.
As we anticipated last year, risk, compliance, and marketing are the focus for major banks globally. Risk and compliance naturally continue to be most pressing for SIFIs or G-SIBs, as major mandates tied to heightened capital controls, risk oversight, and data management are coming into effect. Examples include Basel III, FRTB (Fundamental Review of the Trading Book), and MiFID II (Markets in Financial Instruments Directive).
The one theme that remains consistent is that banks are wringing costs from legacy reporting and storage, and are offloading analytic workloads and utilizing “next generation” big data solutions.
By looking at the overall market, we’ve identified what we believe are the key trends expected to reverberate through 2017 in the financial services industry:
What was once deemed principally unrealistic for banks is now a reality. Bank adoption of cloud solutions remains strong, and is occurring in a variety of ways. Although private clouds have dominated the past several years, the coming year will bring a marked increase in large test projects and a hardening of hybrid cloud environments.
The goal is to gain a clearer view of what a manageable path for large public cloud adoption looks like for financial services. This is not only true for storage and compute, but also for agile applications development. Banks will need to crack the code for big data management and to master nuances of updating, synchronizing, and governing data assets to effectively solve this.
The 2016 Wells Fargo account scandal only highlighted how stakes are raised for fraud each year, with 2017 being no exception. Fines are climbing, and sanctions compliance demands have forced banks to increase their transaction monitoring, KYC compliance (Know Your Customer), and money laundering detection and prevention efforts. Regulatory agencies will also increase their scrutiny of business practices and investigation of potential financial crimes.
We also expect announcements concerning a more formal risk assessment for financial services firms, especially with the incoming US administration. Data management and new-generation analytics are key tools for improving the detection of fraud and other criminal activity. Therefore, expect risk data aggregation, model risk, and data analytics to be the focus for banks.
As discussed last year, solutions for big data governance have improved. However, the conversation for banks regarding big data is still largely focused on centralization efforts and data lakes. Bank CDOs—whose importance is growing and who are largely spearheading data governance initiatives—will be more focused on gaining operational and business lift from “lake” projects.
CDOs are becoming less likely to put big data governance processes in their own bucket, and are instead incorporating them into the bank's overall data governance plans. There is a fair amount of consensus among banking CDOs that no single solution or tool enables data governance within data lakes. The focus will be on adopting solutions that manage critical aspects (lineage, wrangling, prep, quality) of the overall governance practice to realize greater usage of data lake environments.
As banks look to leverage big data solutions as systems of record for risk and compliance over the next year, the open source versus proprietary debate will shift. Banks are realizing that an open source-only approach is proving technically more difficult than expected as bank activity and requirements grow. Further, vendors with pure open source business models are under scrutiny, as the true costs of their solutions, especially at scale, are poorly understood.
This is not just a vendor question. Bank business and IT jobs are on the line. Some large financial services IT initiative decisions will begin to change the nature of how these environments are deployed. We will see large bank risk and compliance deployments that consist of a combination of both open source and proprietary solutions.
Financial firms such as institutional traders, payments companies, and credit card providers have always stored historical data, and increasingly they are analyzing and making decisions based on these data stores. These firms have also made substantial improvements to their real-time environments over the last decade. The integration of historical and real-time data, leveraging larger data volumes, is where banks are putting more focus. Further, operational and analytical systems are also being integrated, putting transactional data in line with analytical or modeling data for vastly improved efficiency and faster time to market.
For example, trading algorithms manage volumes of real-time data very well, though too often these systems don't go far enough. They often don't tie in other historical, timed data sources to inform a trader of past customer behavior and trading patterns. Yet this is commonplace in e-commerce, where recommendation engines integrate real-time shopping with historical customer activity.
In terms of card or payments, there is a strong drive to accelerate real-time digital payments. Here too, leveraging historical data—which could be measured in seconds, hours or days—can add the needed context to improve customer engagement, or to provide more detailed data to other payments’ parties, thereby improving security and fraud efforts. The ability to combine real-time operational data and historical data analytics into new converged application models using a microservices development approach will become more common over the coming year.
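The converged model described above can be illustrated with a minimal sketch: a real-time payment event is scored against the customer's historical spending pattern. The customer IDs, amounts, and the z-score threshold are illustrative assumptions, not any bank's actual method.

```python
from statistics import mean, pstdev

# Hypothetical historical transaction amounts per customer,
# e.g. pulled from an analytical (historical) store.
HISTORY = {
    "cust-001": [20.0, 22.5, 19.0, 25.0, 21.0],
}

def score_payment(customer_id, amount, history=HISTORY):
    """Score a real-time payment against historical context:
    flag amounts that deviate sharply from the customer's
    past spending (simple z-score heuristic)."""
    past = history.get(customer_id, [])
    if len(past) < 3:
        return "review"          # not enough historical context yet
    mu, sigma = mean(past), pstdev(past)
    if sigma == 0:
        sigma = 1e-9             # avoid division by zero
    z = abs(amount - mu) / sigma
    return "flag" if z > 3 else "approve"

print(score_payment("cust-001", 21.5))   # in line with history -> approve
print(score_payment("cust-001", 500.0))  # far outside the pattern -> flag
```

In a production architecture the historical aggregates would be precomputed and served alongside the operational path, which is exactly the convergence of analytical and transactional systems the text describes.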
IoT was a large theme in big data circles this past year, and while obvious for some industries (i.e., P&C and multi-line insurance carriers or manufacturers), it was not an obvious choice for financials. The basic applications for the financial sector revolved around mobile or ATMs, and those remain important for this year. Additionally, we expect discussions to expand on how "things" will involve spending or payment activities that could be value-added for the consumer. However, the fixation on physical assets (cars, fridges, etc.) caused many to stumble a bit on its applications in banking or capital markets. Re-examining the underlying aspect of IoT, namely "things" sending and receiving data or streaming information from the "edge" in a bi-directional manner, is giving rise to a host of use cases as the definition expands.
In the financial sector, streaming data can expand the speed, access, and ubiquity of market data throughout a financial firm's trade lifecycle, lowering costs and broadening usage, notably in the middle and back office. Expect the discussion to evolve on how banks define what "edges" are as they explore how to utilize event streaming data on a massive scale. Also expect a more rapid uptake from banks deploying streaming solutions that enable continuous computing in financial risk, monitoring, or transactions.
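The "continuous computing" idea above can be sketched as a stream consumer that maintains a sliding window over incoming events and evaluates a monitoring rule on every arrival. The window size and threshold below are illustrative assumptions.

```python
from collections import deque

class StreamMonitor:
    """Toy continuous-computing sketch: keep a sliding window of
    recent events (e.g. trade prices or transaction values) and
    emit an alert whenever the moving average breaches a threshold."""

    def __init__(self, window=5, threshold=100.0):
        self.events = deque(maxlen=window)   # old events age out automatically
        self.threshold = threshold

    def on_event(self, value):
        """Called once per incoming event; recomputes the rule continuously."""
        self.events.append(value)
        avg = sum(self.events) / len(self.events)
        return "ALERT" if avg > self.threshold else "OK"

mon = StreamMonitor(window=3, threshold=50.0)
for v in [40, 45, 50, 80, 90]:
    print(mon.on_event(v))   # OK, OK, OK, ALERT, ALERT
```

Real deployments would run this logic inside a streaming engine rather than a single-process loop, but the per-event evaluation pattern is the same.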
Financial services saw some of the greatest interest in blockchain technology over the past year. This technology is still in its early stages, and there’s still a lot of confusion about its usage and future in financial services. What are the legalities of contracts in a private vs. public blockchain? How is it different or more advantageous when compared to current settlement or clearing activities? What are the benefits, if any, in working with big data platforms? Many emerging technologies pose the same issues, yet financials will forge ahead in 2017.
Distributed ledgers offer an intriguing way to manage transactions, especially where workflows are repetitive. Ripe areas to explore are how blockchains may enhance big data security, blockchain analytics, or immutable compliance archives for transactional areas. State Street is one financial powerhouse that has indicated its commitment to going live with blockchain POCs in 2017 in areas such as syndicated loans, securities lending, and collateral management.
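The "immutable compliance archive" idea can be sketched as a hash-chained, append-only log: each record carries the hash of its predecessor, so any later edit is detectable. This is a minimal illustration of the tamper-evidence property, not a full blockchain (no consensus, no distribution); the record fields are hypothetical.

```python
import hashlib
import json

def _hash(payload):
    """Deterministic SHA-256 over a JSON-serializable payload."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Append a record, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    block["hash"] = _hash({"record": record, "prev": prev})
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every link; returns False if any block was altered."""
    prev = "0" * 64
    for block in chain:
        expected = _hash({"record": block["record"], "prev": block["prev"]})
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"loan_id": "SL-1", "event": "syndicated"})
append_block(chain, {"loan_id": "SL-1", "event": "collateral posted"})
print(verify(chain))                      # True
chain[0]["record"]["event"] = "edited"
print(verify(chain))                      # False: tampering breaks the chain
```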
Expect to see greater exploration and dialogue over the next year as to how blockchains will converge with big data platforms along with the testing of blockchain technology itself. This is still very much an early game.
Over the past few years, the financial services sector has come to understand how automation can disrupt services in areas such as portfolio management. The basic element of automation is machine learning, which trains algorithms on data so they can make automated decisions about how to handle incoming data and queries. These tasks are vastly improved by leveraging larger amounts of data.
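The train-on-history, decide-on-new-data loop described above can be shown with a deliberately minimal sketch: a nearest-centroid classifier fit on labeled historical transactions. The feature choice (amount, hour of day) and labels are hypothetical, and this stands in for whatever model a firm would actually use.

```python
def train(samples):
    """'Training' here is just computing a per-label mean vector
    (centroid) from historical labeled examples: a minimal stand-in
    for the model-fitting step."""
    grouped = {}
    for features, label in samples:
        grouped.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(col) for col in zip(*rows)]
        for label, rows in grouped.items()
    }

def predict(centroids, features):
    """Automated decision: assign the label of the nearest centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], features))

# Hypothetical (amount, hour-of-day) examples labeled ok / fraud
history = [
    ((20.0, 12), "ok"), ((35.0, 14), "ok"),
    ((900.0, 3), "fraud"), ((850.0, 2), "fraud"),
]
model = train(history)
print(predict(model, (25.0, 13)))    # ok
print(predict(model, (880.0, 4)))    # fraud
```

More data shifts the centroids toward the true class averages, which is the sense in which these tasks "improve by leveraging larger amounts of data."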
Many financial services firms are just getting underway with broader machine learning usage. To date, it has been mainly a niche activity in areas such as automated trading systems and high frequency trading.
Simple automated tasks that can assist middle and back office operations such as loan underwriting, reconciliation, and risk model development hold tremendous value across segments. While perhaps not seen as bleeding edge AI examples, they represent solid opportunities for financial firms. Expect wider adoption from banks pushing in this direction over the coming year as big data platform adoption grows.
Machine learning also facilitates predictive analytics, which will continue to receive considerable attention as banks with more mature data science groups push analytical boundaries. Not only are large and varied data sets being used to develop and improve predictive models, but real-time and historical data are being combined as well, as explained above. Deeper internal skill sets will be needed to apply predictive techniques that enhance existing human and descriptive analytical efforts.
Risk management (i.e., underwriting and credit modeling), has and will continue to be one of the strongest areas of predictive use. Attention and adoption over the coming year will be in cybersecurity and other security initiatives as well as customer activity. We expect that data scientists and data engineers will become important members of quantitative and risk assessment teams and that downmarket firms may struggle to fill open headcount.
Utility-based shared service models have seen substantial growth over the past several years in financial markets, but expect to see more strategic opportunities develop beyond the sell-side. Technologies such as blockchain, big data, advanced analytics, machine learning, and event streaming will increasingly underpin the next generation of data-driven processes. This will create opportunities to advance market structure, resiliency, and efficiency objectives, which serve as key areas for financial utilities.
Symphony is a great early example on the technical communications side, and blockchain consortia such as R3CEV represent newer groups facilitating data management. Buy-side, wealth management, and payments providers can look to the sell-side for examples of how to drive out costs with consortium-led utility models.
2017 will mark a particularly critical year for vendors and financial firms alike to partner together and ensure that large deployments are successful. This will usher in more opportunity to impact business productivity, and increase the pervasiveness and value of big data.
The coming year will continue to see a strong uptake in core use cases for the financial sector. This includes not only the IT-focused use cases like warehouse offload, storage, and reporting, but also line of business applications such as risk and marketing.
Fundamentals are the focus of big data projects for the coming year for a few reasons. First, banks, wealth management firms, asset managers, and insurance firms are non-digital natives, so the process of conversion is longer. Big data platforms require a fair amount of behavioral change—tools, functions, quality, and usage need agreement across business and technology groups. Without consensus, progress is hampered.
Next, security for financials is and must be at the forefront of big data efforts to enable the expansion of new projects. If data is the new currency of banks, they must accordingly treat it as a monetary asset.
Finally, while the pool of next-generation data scientists and developers is growing, banks still need to compete aggressively for talent. The supply of this type of talent thins even further down market.
While big data adoption in financials is still at a relatively early stage, it is important to note that banks, insurers, and asset management firms represent some of the most advanced big data users in the world. However, the gap between the haves and have-nots will likely close over time.
This year marks a great opportunity for a wider collection of firms to home in on the low-hanging fruit of big data use cases. Therefore, 2017 will see a greater focus on these basic use cases and an expanding uptake in the application of big data technologies in financial services.
The investment and groundwork laid by many major financial firms over the past several years have provided the larger financial market with solid proof that big data solutions are both real and beneficial. The financial advice banks often give their clients is something they should embrace this year for big data as well—it’s about time in the market, not market timing.