Outcomes monitoring is not entirely new in financial services regulation, but it has gained real traction with the regulators in recent years and is now an obligation for firms. In the first part of this century, financial services regulation in the UK focused on compliance with overarching principles and then with principles and specific rules. The most recent work of the Financial Conduct Authority has introduced the shift from ‘principles and rules’ towards outcomes, which emphasises the results or impacts of business practices on consumers and asks firms to support customers to achieve their goals.
Findings from thematic reviews have highlighted that adherence to rules encouraged a tick-box approach and that principles can be somewhat ambiguous, leading to varied interpretations by different firms. The rules-and-principles framework seemed to encourage reactivity, rather than proactivity, and so the FCA started moving towards including an outcomes-based approach in their regulatory framework.

This was cemented with the Consumer Duty. The Duty requires the delivery of good outcomes for retail customers, and the monitoring of the achievement (or otherwise) of those outcomes. This includes collecting and analysing data on customer experiences, complaints, and the overall performance of financial products and services.
But the move from the older framework towards outcomes is a difficult one. The latest review of outcomes monitoring has found that some firms still take a ‘process over results’ approach, evidencing that a process has been completed rather than going a step further and showing what that process has actually achieved.
The shift from older methods of compliance towards outcomes and monitoring is a big leap. The FCA have, helpfully, highlighted in their reviews where they think firms are going ‘wrong’.
One recurring theme is the limited use of different types of data: firms often fail to collect ‘sufficient data’ to actually monitor outcomes, or to present that data effectively to the Board. Quantitative data is widely used, but a deep and detailed analysis of customer outcomes also requires qualitative data to build a clear view of real-world experiences.
The design of any qualitative method is the first sticking point with those new to the practice. With qualitative data, what you get out will depend on what you put in, so it’s important to clearly define, from the outset, what you want to understand about customer outcomes. Are you looking to understand customer satisfaction, identify specific issues, understand the customer journey, assess the impact of products or services, or a mix?
The development of research questions might seem like overkill, but they’re vital to gathering qualitative data that genuinely represents customer experience. Basing the research questions around objectives and defined good outcomes is a great start. For example, “how do customers perceive the value of our post-sale service?” or “what challenges do customers face when using our product?” Before you design the method, clarify which outcomes you are aiming to monitor, and set out how you want qualitative data to inform understanding. This may seem like a basic step, but it matters if the end narrative is to both make sense and add understanding.
The next step is to choose the method. There is a good range – interviews, document analysis (e.g. of customer feedback, complaints, support tickets), focus groups, observations, surveys (open ended questions) – so think about which are most suitable for your research question. For example, if you’re aiming to understand how easy customers find the on-boarding process, analysing documents alone might give you an idea of how many completed the process or what happens if things go wrong, but not what the experience felt like or what customers genuinely think.
Once data collection is up and running, it’s important to ensure consistency. This is more difficult with qualitative data than with quantitative. Tools and processes used to gather data should be applied uniformly, across all participants and settings. This will mean if more than one person is undertaking data collection, their methods and approaches must be standardised. If the data collected can’t be meaningfully compared and contrasted, then there will be variability that could undermine the findings. And this also has the potential to cause harm. If the findings miss something, then harms could continue without having been noticed and mitigated. The aim is to avoid experiences and perspectives being missed or influenced by the construction of the question or how observations are conducted.
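One simple way to sanity-check consistency is an inter-coder agreement measure: have two people code the same sample of excerpts against the shared codebook and compare their labels. The sketch below is a hypothetical illustration (the theme labels and excerpts are invented, not from any firm's data); it computes plain percent agreement, the most basic such measure.

```python
# Hypothetical sketch: percent agreement between two coders who have each
# applied the same codebook to the same interview excerpts.
# Theme labels below are invented examples.

def percent_agreement(coder_a, coder_b):
    """Share of excerpts where both coders assigned the same theme."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must label the same set of excerpts")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

coder_a = ["delay", "fees", "clarity", "delay", "fees"]
coder_b = ["delay", "fees", "delay",   "delay", "fees"]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # prints "Agreement: 80%"
```

A low agreement score is a signal that the codebook or the coders' training needs tightening before the findings can be trusted; more robust statistics (such as Cohen's kappa) also correct for chance agreement.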
Think about how the analysis will be presented. Thematic analysis is useful where you want to identify and report patterns, presenting themes with supporting data extracts to tell a clear story. Narrative analysis focuses on stories as told by individuals, useful where there are substantial interviews or written accounts to draw on. It’s important to cover the context of the story, including personal factors, and narrative analysis can be really useful where there’s a complex issue that needs to be brought to life.
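Once excerpts have been coded, a thematic summary can be as simple as grouping excerpts by theme, counting them, and pairing each theme with a supporting quote. The sketch below assumes feedback has already been manually coded; all themes and excerpts are invented for illustration.

```python
from collections import defaultdict

# Hypothetical sketch of thematic analysis output: coded feedback excerpts
# are grouped by theme, counted, and paired with a supporting extract.
# Themes and excerpts below are invented examples.

coded_feedback = [
    ("onboarding delay", "It took three weeks before I could use the account."),
    ("onboarding delay", "I chased twice before anyone responded."),
    ("unclear fees", "I didn't realise the monthly charge applied."),
]

themes = defaultdict(list)
for theme, excerpt in coded_feedback:
    themes[theme].append(excerpt)

# Report themes in descending order of frequency, each with one extract
for theme, excerpts in sorted(themes.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(excerpts)} mention(s)")
    print(f'  e.g. "{excerpts[0]}"')
```

The hard, skilled work is the coding itself; the tallying and presentation layer is deliberately trivial so that the story the themes tell stays front and centre.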
Mixed methods – combining quantitative and qualitative data collection and analysis – makes the most of the strengths of both and can offer a more comprehensive view of what is actually happening. This is ideal where the aim is to find out not just what outcomes are being delivered but why they are occurring.
Interpreting and then reporting are where data collection becomes meaningful. The aim is to provide context to any quantitative findings so that it is clear how and why outcomes are achieved. The final insights need to be understandable, actionable and effectively communicated. Those interpreting qualitative data need to have a thorough grounding in the material. Categories and themes can help audience understanding, and the findings must be related to objectives and outcomes. Use direct participant quotes to give an accurate flavour of real-world experience, but ensure that the anonymity and confidentiality of participants are maintained. Provide contextual information and interpretation to explain the significance and implications of the findings.
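Before quotes go into a report, participant names need replacing with stable pseudonyms so quotes stay attributable to a consistent participant without identifying anyone. The sketch below is a hypothetical illustration using an invented name list; real anonymisation would also need to catch addresses, account numbers and other indirect identifiers.

```python
import re

# Hypothetical sketch: replace known participant names with stable
# pseudonyms before quotes are included in a report. The names and the
# quote below are invented examples; real redaction needs a wider net.

participants = {"Jane Smith": "Participant 1", "Omar Ali": "Participant 2"}

def anonymise(quote: str) -> str:
    """Substitute each known name with its pseudonym."""
    for name, pseudonym in participants.items():
        quote = re.sub(re.escape(name), pseudonym, quote)
    return quote

print(anonymise('Jane Smith said: "The fees were never explained to me."'))
# prints: Participant 1 said: "The fees were never explained to me."
```

Using a fixed mapping rather than random replacement means the same participant keeps the same label across the whole report, which preserves the narrative thread without compromising confidentiality.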
And finally, the findings must be presented in a way that enables them to inform decisions and actions that must be taken by the Board.
Much of the support of customers towards achieving their goals and receiving good outcomes happens during contact with the customer. This is why it’s so important to support staff themselves, giving them the tools they need to understand what’s required, how to discern what the customer needs, and what to do in difficult situations. Our online training courses quickly teach the basics before demonstrating how to support customers in a range of situations. Our courses in financial difficulties, fair treatment of vulnerable customers and treating customers fairly are ideal for staff at the forefront of customer communication. Priced at just £20 a head, with discounts available for group bookings.