Making Sense of Big Data in Financial Services

Vo Thanh Trung (Tom)

Deputy Head of Market Operations - SGX

Tom Vo has over 11 years of working experience in trading, clearing, collateral, and payment operations. In his current capacity as the Head of Derivatives Operations, he is working with his team to build trading and clearing platforms for the derivatives market. Tom will be graduating from the Master of IT in Business Financial Technology and Analytics Track in 2019.


Big Data is a buzzword that has been bandied around in the financial services industry. The power of Big Data is commonly harnessed in the retail banking sector, where organisations derive multiple insights from a large client base. Its potential can also be tapped by many other sectors in the financial services industry. Crunching social media data, for example, can help investment firms sense the mood of investors, an important factor that in turn determines the “mood” of the market.

Have you thought about what Big Data is and how it could impact you and your organisation?

What is Big Data?

Big Data simply involves dealing with data. “Big” implies that the data embodies the characteristics of the “3-Vs”: large in Volume, fast in Velocity, and wide in Variety.

Take the retail banking sector for example. Just 20 years ago, before smartphones and social media platforms came into the picture, a customer would simply walk into a bank, talk to the banker and get some paperwork done to open a bank account. The only data generated and recorded would be her account details. The same action is now vastly different. In addition to the steps above, she may take a selfie with her smartphone and Instagram her experience with the message, “Finally opened a bank account! Great service! It’s time to start saving ☺”

What a contrast these two scenarios offer in terms of data: the former comprises about 50KB of delayed text data, while the latter makes up 2MB of real-time data in multiple forms, including text, images, ‘likes’ and location tags.

Why is Big Data important?

Big Data generates value, as we can draw out patterns that offer new insights and perspectives on anything from customer behaviour to whether products or services are sought after. This enables organisations to strategise how to acquire and retain customers, or how to differentiate their services.

With reference to the above-mentioned customer’s trip to the bank, let’s assess how Big Data could improve aspects of service delivery and operations in line with Big Data’s 3-V characteristics:

  1. Timely customer engagement and commercial opportunity (“Velocity”)

    Based on the customer’s real-time social media update about her experience, the bank may send a personalised thank-you message on her profile to acknowledge and support her feel-good story. This form of customer engagement provides timely feedback to the client, maintains and builds a good relationship with her, and reaches out to her followers at the same time.

    Another example that illustrates the speed at which data can be processed is algorithmic trading, which leverages ultra-low latency connectivity. Consider Trader X, who trades the same contract in two markets. As soon as her computer program detects that the price has moved up to $101 at market A, the algorithm sends a buy order at $100 at market B, right before market B’s price moves up. Simultaneously, her algorithm enters a sell order at $101 at market A, allowing her to make a profit of $1. For this to happen, transmission of data has to be extremely fast, on the order of microseconds or milliseconds, depending on the locations and efficiency of the two markets. (Note: This is an overly simplified and hypothetical example. In practice, the computer program has to take into consideration other factors such as the cost of trading, the bid-ask spread, market depth, etc.)

  2. Contextual marketing (“Variety”)

    New types of data shared by the customer and harvested by the bank can be quite granular. Details such as the customer’s location could enable the bank to prompt her with nearby credit card promotions. This marketing tactic could increase spending and generate more revenue.

  3. Predictive strategies (“Volume”)

    Working with data from the same customer and other new account holders, the bank could apply data mining and statistical modelling techniques to cluster and reveal the spending patterns of different demographic groups. This can lead to new marketing insights. Such patterns could also surface attempted fraudulent spending that falls outside the norm, making it easier for the bank to detect and stop the transaction. In this way, working with Big Data helps maintain a high quality of service and risk management.
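The latency-arbitrage scenario in point 1 can be sketched in a few lines of Python. This is purely illustrative: the price streams and the $1 edge are the article’s hypothetical numbers, and a real system would also model fees, bid-ask spreads and market depth.

```python
def check_arbitrage(prices_a, asks_b, min_edge=0.0):
    """Scan paired price updates and report (tick index, profit per contract)
    whenever market A's traded price exceeds market B's ask by more than
    min_edge. Trading costs are deliberately ignored in this sketch."""
    signals = []
    for i, (price_a, ask_b) in enumerate(zip(prices_a, asks_b)):
        edge = price_a - ask_b
        if edge > min_edge:
            # Buy at market B's ask, sell at market A's price.
            signals.append((i, edge))
    return signals

# Market A jumps to $101 at tick 1 while market B still quotes $100,
# then market B catches up at tick 2 and the opportunity disappears.
print(check_arbitrage([100.0, 101.0, 101.0], [100.0, 100.0, 101.0]))
# [(1, 1.0)]
```

In practice the race is won or lost on latency: by the time a slower program sees the $1 edge, market B’s quote has usually already moved.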
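Point 2 hinges on proximity: deciding whether a customer’s tagged location is close enough to a merchant to justify a promotion. A minimal sketch, using the haversine formula and invented coordinates with a hypothetical 500-metre radius, might look like this:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (haversine)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_promotions(customer, merchants, radius_m=500.0):
    """Return names of merchants whose outlets lie within radius_m of the
    customer's tagged location."""
    lat, lon = customer
    return [name for name, (mlat, mlon) in merchants.items()
            if distance_m(lat, lon, mlat, mlon) <= radius_m]

# Hypothetical merchant coordinates near the customer's location tag.
merchants = {"Cafe A": (1.2840, 103.8510), "Mall B": (1.3000, 103.8560)}
print(nearby_promotions((1.2839, 103.8512), merchants))
# ['Cafe A']
```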
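For point 3, one first cut at spotting spending outside the norm is a simple standard-deviation test over a customer’s transaction history. The figures below are invented, and production fraud models are far more sophisticated (clustering, supervised learning), but the idea is the same:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag transactions whose amount deviates from the customer's mean
    spend by more than `threshold` sample standard deviations."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Typical spends of roughly $40-60, plus one $5,000 charge outside the norm.
history = [42.0, 55.0, 48.0, 60.0, 39.0, 51.0, 5000.0]
print(flag_outliers(history))
# [5000.0]
```

Note that a single large outlier inflates the standard deviation itself, which is one reason real systems prefer more robust statistics or model-based approaches.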

How can Big Data drive competitive advantage?

Big Data provides unique insights that enable organisations to develop timely, contextual and highly customised services for their clients. In this way, it gives companies a competitive edge in product differentiation, especially for early adopters who can harness its power. However, as the adoption of Big Data increases over time, such insights are no longer unique to its early users. Hence, staying competitive means being cost-efficient: mining the most insight with the least effort and at greater speed. This helps the firm price its products and services more competitively.

In short, financial institutions need a roadmap to leverage Big Data to differentiate themselves in the medium term and to reduce the cost of harnessing and analysing such data in the long term. As the amount of data accumulated will only continue to grow, any Big Data plan needs to be agile, so that it can scale quickly in the long run without incurring huge costs in the short run. This is most apparent in data storage solutions. To manage very large volumes of data, how the bank creates, holds and distributes such information on commodity hardware and/or cloud storage has a critical impact on costs.

What enables the implementation of Big Data?

Big Data is not just about the technology. Getting started with this requires two key ingredients: system architecture and talent.

  1. System architecture

    At the start, the firm can consider using open-source frameworks and tools such as Spark, Hadoop, R, and Python. These are relatively low-cost, well established and backed by strong community support. In this way, the firm can quickly build successful use cases to obtain buy-in from internal stakeholders. Such tools are also good platforms on which to conduct experiments and gather requirements for the next phase of its Big Data competencies. Once the firm has achieved data-driven maturity, it can invest more time and resources to scale up its data storage capability and to research and develop proprietary predictive modelling systems, with software and applications customised to its needs.
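    To make the “quick use case” idea concrete, here is the kind of small proof-of-concept that needs nothing beyond plain Python’s standard library: counting new accounts opened per channel from a raw export. The data and channel names are invented for illustration.

```python
import csv
import io
from collections import Counter

# Invented sample export; in practice this would be read from a file or feed.
raw = """channel,account_id
mobile,A001
branch,A002
mobile,A003
mobile,A004
branch,A005
"""

# Parse the export and tally account openings by channel.
rows = csv.DictReader(io.StringIO(raw))
by_channel = Counter(row["channel"] for row in rows)
print(by_channel.most_common())
# [('mobile', 3), ('branch', 2)]
```

    A result like this, produced in an afternoon, is often enough to start a conversation with stakeholders about what a fuller Big Data pipeline could deliver.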

  2. Data-savvy talent

    Talent is another equally important enabler. Overall, three types of data-savvy talent are needed at any point in time: data engineers, machine learning engineers and software engineers. The ratio of these personnel can be adjusted over time as the firm’s Big Data capabilities develop. At the start, the firm will need more data engineers to build the right foundation for the system architecture to ingest and store the data. At the next stage, the firm will require more machine learning engineers to make sense of the data and generate valuable insights through predictive algorithms and data-mining models. Finally, to support the end business users, the firm will need more software engineers to develop proprietary front-end dashboards and visualisation tools for monitoring and decision-making.

    In order to build competitive advantages not only for the financial services industry but also for Singapore as a whole, it is important to meet the high demand for talent in the field of Big Data. Reputable universities can play a big role in producing such talent through their structured, high-quality programmes. SMU, for example, runs the Master of IT in Business programme, offering tracks in Analytics, Artificial Intelligence and Financial Technology and Analytics. I have personally benefitted from the rigorous training under the Financial Technology and Analytics track. It equipped me with the necessary skills and mindset for work, enabling me to drive plans that leverage data for risk management and operational scalability.

The Big Data Wave

Big Data is much more than a tool to meet business objectives. It can help firms plan processes to meet regulatory requirements, such as in stress testing, where there is a need to crunch large historical datasets, or in streamlining digital processes to reduce costs. Like any technology, it has its own risks. For example, high-frequency algorithmic trading could spiral into a self-fulfilling prophecy. When one signal triggers ‘buy’ orders within a server, these orders, once matched, become actual trades in the market and, essentially, additional data points. These additional data points may form a pattern that matches other market participants’ models, which could trigger them to enter more ‘buy’ orders and further influence the trading price.

Besides meeting business objectives, it is important that data-driven organisations put in place the right checks and balances on their machines and data models to ensure these behave not only accurately but also ethically. Undeniably, organisations will be hit by a Big Data mega wave. They must confront the onslaught by stepping up their data-driven game, duly prepared to ride it out.

Last updated on 09 Sep 2020.