Self-Service Analytics. You’ve Built It…but Will They Come?

Since the inception of business intelligence and analytics tools, organizations have invested enormous amounts of money and resources in giving the business access to their data assets. Yet even with the innovative platforms and technology now available, many of those 'decision support' initiatives have failed or faced serious challenges. Meanwhile, end users are more demanding than ever: we now live in a world where we expect answers as quickly as we need them. Enter: self-service analytics.

What is self-service analytics?

In many large enterprise analytics adoption programs, one of the first things I ask users is to define self-service in the context of analytics. Some perceive it as the ability to run or pull a report or dashboard 'whenever they need it'. Others see it as the ability to build their own explorations, one asset at a time, and analyze at their own pace with acceptable response times. Supporting both ends of the business-user spectrum can be challenging, especially when legacy assets are involved. In addition, many surveys conclude that trust in the data provided to the business is very low. It is therefore critical to deploy self-service on a solid foundation with strategic planning, which I elaborate on in the following six guidelines:

1. IT is your new friend. Embrace the new IT.

Traditionally, most business organizations saw IT as a 'necessary evil' and a bottleneck to supporting decision-making and ensuring data consistency. However, data use cases have grown more complex, requiring a more agile approach to data delivery, and those who make the data accessible need to understand how it is used. This paradigm shift is represented in the following diagram:

[Diagram: Self-service Analytics]

In an analytics-driven culture, organizations no longer have to sacrifice governance for self-service or vice versa. This new collaboration between IT and the business focuses on achieving a common end goal through people, processes, and technology, with both sides partnering to adopt a modern approach to enterprise analytics.

2. Get the one right tool for the job. Not a toolbox.

There has been an explosion of BI and analytics platforms offering self-service access to data, each developed with a different approach and focus. Some tools emphasize visualization and summarization; others deliver strong performance against granular data; still others require more advanced skills and data-preparation experience. Although it can be challenging, organizations should work toward consolidating and standardizing on a single self-service platform, even if this means separating operational mass reporting from analytics. When selecting a platform, focus on the criteria that matter, based on the common use cases, data, and user personas that make up your community.

3. Do not start without an Executive Sponsor. And keep them involved.

In a recent survey from McKinsey, 'executives say senior-leader involvement and the right organizational structure are critical factors in how successful a company's analytics efforts are, even more important than its technical capabilities or tools.' A self-service analytics project should focus on a use case or area that exposes the metrics that keep the sponsor up at night. A departmental analytics team then defines which improvement goals are achievable by elevating the value of the data and providing access to it.

For that reason, set up monthly meetings with the sponsor and change the format from monthly reports to data discovery. Engage their curiosity by exploring the data on the spot. Track those improvement goals, and remember that the data assets behind self-service are constantly evolving.

4. Garbage in, garbage out. The importance of trusted data.

The following factors critically impact the user adoption of a self-service analytics program:

  • User interface and experience
  • Performance
  • Trust in the data
  • Governance

Failing on any one of these factors can jeopardize the success of the deployment, and addressing all of them at once when making data assets available can be very challenging. A proven practice is to start with a narrower, more focused set of data and a smaller audience. This approach ensures that initial pitfalls, such as poor performance or data discrepancies, are addressed before expanding.

The platform of choice should be able to present the data in a meaningful, documented format, allowing flexibility in optimizing the presentation layer. In addition, an effective data-augmentation capability can greatly accelerate time to value, as long as it is made available to the right users and such processes can be easily reviewed and accessed by the rest of the audience. A business glossary or governance repository may be needed as the deployment grows.

5. One size will not fit all. There will be outliers. There will be rebels.

Just as organizations have historically failed to deploy the perfect enterprise data warehouse, it is unrealistic to believe that a self-service analytics environment can please every targeted user. Select a 'best-fit' solution architecture and deployment strategy. Define your scope around use cases, and be careful not to get caught defining it around an exhaustive list of requirements. That scope should address 80% of the core use cases; the other 20% can be addressed separately, sometimes with minor changes.

It is also normal to see different use cases delivered from different data stores. The emerging concept of the data lake facilitates the practice of co-existing technologies supporting different analytic needs.

The 'rebels' are those users who, often for credible reasons, are not satisfied by the environment deployed; they are commonly data scientists or advanced analysts. These exceptions can quickly influence adoption, so it is important to isolate those cases and work with them individually, assessing whether a separate set of tools, data storage, and, in many cases, data-preparation processes is necessary.

6. Deploy, assess, adjust, repeat (fail fast)

Finally, the deployment of self-service analytics is a perfect scenario for agile implementation. A typical challenge when gathering requirements is that 'users don't know what they don't know', so it is common to start with an initial scope that lacks clarity and precision. An iterative methodology, presented in the diagram below, helps by gathering more feedback from the user community, especially when starting with a smaller focus group.

[Diagram: Self-service Analytics]