How to unlock the full potential of your data
Though every company recognizes the power of data, most struggle to unlock its full potential. The problem is that data investments must deliver near-term value while simultaneously laying the groundwork for rapidly developing future uses, even as data technologies evolve in unpredictable ways, new types of data emerge, and the volume of data keeps rising.
The experiences of two global companies illustrate how ineffective today’s predominant data strategies are at managing those challenges. The first, a large Asia-Pacific bank, took the “big bang” approach, assuming it could accommodate the needs of every analytics development team and data end user in one fell swoop. It launched a massive program to build pipelines to extract all the data in its systems, clean it, and aggregate it in a data lake in the cloud, without taking much time up front to align its efforts with business use cases. After spending nearly three years creating the new platform, the bank found that only some users, such as those seeking raw historical data for ad hoc analysis, could easily use it. In addition, the critical architectural needs of many potential applications, such as real-time data feeds for personalized customer offerings, had been overlooked. As a result, the program didn’t generate much value for the firm.
The second company, a large North American bank, took the opposite tack: it had individual teams tap into existing data sources and systems on their own and then piece together any additional technologies their business use cases required. The teams did create some value by solving challenges like improving customer segmentation for digital channels and enabling efficient risk reporting. But the overall result was a messy snarl of customized data pipelines that couldn’t easily be repurposed. Every team had to start from scratch, which made digital transformation efforts painfully costly and slow.
So if neither a monolithic nor a grassroots data strategy works, what’s the right approach?