From frontline supervisors to executive leadership, reporting is the lifeblood of call center operations and the window into Customer Experience. Yet after decades of development, the industry's reporting and analytics landscape remains in disarray.
The innovation quagmire in the Call Center Reporting and Analytics space is rooted in three key issues:
- Vendors whose contact center suites are amalgamations of various acquisitions, such as Cisco, Avaya, Genesys, NICE and Verint, have done a poor job of providing integrated data and reporting across their own portfolios.
- Call Center technology providers that have made reporting or analytics a priority in their offerings tend not to support third-party data, rendering their fancy dashboards useless beyond a few front-line users. Couple this with pricing strategies from CCaaS providers that place a premium on analytics capabilities, and these tools quickly become cost-prohibitive for many companies.
- Despite significant advancements in big data and open source tools, niche Contact Center Business Intelligence providers are still wedded to their own proprietary tools, leaving them saddled with technical debt and unable to innovate from within.
This leaves call center management to either figure it out on their own or invest in expensive data integration projects and BI tools such as Tableau or Microsoft Power BI. Unless a company already has a strong data team and executive support, this approach tends to fall flat.
Today, Xaqt is changing that.
The Xaqt Approach, A Paradigm Shift
As the foremost Contact Center AI company, we quickly realized that in order for companies to realize the full benefit of Artificial Intelligence, we needed to solve the data integration and reporting problem first. We also knew that launching just another business intelligence product for call centers would add little value.
So, we set out to architect an end-to-end framework and platform that could:
- Be adopted by anyone
- Eliminate cost as a barrier to implementation and adoption
- Remain agnostic to technologies a company may already have in place
- Be adaptable to changing market conditions and advancements in technology
- Be open source at each layer so as to avoid any proprietary lock-in
- Be the easiest platform to set up, use and operate
- Build a community of support and interoperability
The goal of this initiative is not to sell companies on yet another proprietary product or platform but rather to create a strategic direction for the industry that builds consensus around a common approach to data infrastructure and analytics. We're delivering a framework that's within reach of any call center and creating a global community with economies of scale.
Therefore, we're not making a big product announcement but rather accelerating a paradigm shift that puts companies in control of their own data and empowers them to act on it. And, we're offering it as a fully managed service with disruptive pricing as well as making the full stack open source for those that wish to manage it in their own cloud.
Out with the old guard, in with the new.
Xaqt: Cognitive Insights Platform (CIP)
For Contact Center Managers and Executives who are dissatisfied with the current reporting and analytics options on the market, Xaqt's Cognitive Insights Platform provides unified call center intelligence at a fraction of the price of existing commercial offerings.
CIP is an open framework architected and built from the ground up to power the modern contact center's analytics and AI needs. It leverages open source software end-to-end, and we're publishing the architecture for anyone to use or modify to their own needs.
It comprises six integrated modules:
- Data Ingestion and Data Connectors (real time and historical)
- Data Storage (Data Lake)
- Metadata Discovery and Data Dictionary
- Data Modeling and Normalization
- Data Access (SQL and query interface)
- Reporting, Dashboards and Visualization
Each of these components is exchangeable and interoperable with your existing infrastructure and tools.
Data Ingestion and Connectors
The first step in building a data lake is to get data out of your existing business systems. For the contact center, this typically includes your ACD, Self-Service Applications, Workforce Management or Workforce Optimization Suite, Speech Analytics, QA, HRIS, etc.
Most cloud-based Contact Center providers offer documented APIs for accessing your data, while with many legacy contact center vendors it's still a challenge to get your data out of their reporting tools or platforms. It often requires additional licensing, or a custom application developed against their APIs to extract the data.
We want to make it simple to get your data out of your systems and into the data lake with as little IT involvement as possible. To do this, we've developed a library of data connectors and interfaces to the leading contact center systems, and we'll be making them available to the community. Additionally, anyone may develop and publish their own connectors.
A key part of data ingestion is data orchestration, or the ability to schedule data imports. Apache Airflow is one of our favorite tools for this, but we support many others as well.
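To make the connector idea concrete, here is a minimal sketch of the pattern: a thin extraction function that pulls records from whatever a vendor exposes and lands them as newline-delimited JSON in a date-partitioned layout. All names here (`ConnectorConfig`, `run_extract`, the `dt=` layout) are illustrative, not Xaqt's published API; a local directory stands in for the data lake, and in practice a function like this would be wrapped in an orchestration task (for example, an Airflow operator) to run on a schedule.

```python
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Iterable


@dataclass
class ConnectorConfig:
    source_name: str   # e.g. "acd" or "wfm"
    landing_dir: Path  # local stand-in for the data lake landing zone


def run_extract(
    config: ConnectorConfig,
    fetch: Callable[[], Iterable[dict]],
    as_of: str,
) -> Path:
    """Pull records from a source system and land them as newline-delimited JSON.

    `fetch` wraps whatever the vendor exposes (a REST API call, a report
    export, a database query); the connector only cares that it yields dicts.
    """
    out_dir = config.landing_dir / config.source_name / f"dt={as_of}"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / "records.jsonl"
    with out_file.open("w", encoding="utf-8") as f:
        for record in fetch():
            f.write(json.dumps(record) + "\n")
    return out_file
```

Because the vendor-specific logic lives entirely in `fetch`, the same landing code serves every source system, which is what makes a shared connector library practical.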
Data Storage and Data Lake
Once you've established how you're going to get data out of your existing systems, you need to decide where to store the data. Your data lake creates the foundation for all downstream analysis and uses.
The idea of a Contact Center data lake seems simple in concept, yet is still elusive for most. Once upon a time, the only real choice for data storage was Microsoft SQL Server or Oracle. And they were both really, really expensive.
Today, however, data architecture is no longer a one-size-fits-all proposition. There are numerous commercial and open source options available depending on the type of data you're ingesting as well as its intended use. These expenses can also add up quickly and compound as you store more data.
With this in mind, Xaqt's default data lake is built on Amazon Simple Storage Service (S3), which we've found to be the go-to standard for data and object storage. This provides an inexpensive and flexible data "landing zone" for easy access and transformation. However, if a customer wants to host their own data somewhere else, we'll support it.
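One practical detail worth illustrating is how objects are laid out in the landing zone. A common convention, sketched below with hypothetical bucket and path names, is Hive-style date partitioning, which lets downstream query engines prune by date instead of scanning the whole bucket.

```python
from datetime import date

# Hypothetical bucket name, for illustration only.
BUCKET = "example-cc-landing"


def landing_key(source: str, table: str, day: date, filename: str) -> str:
    """Build a date-partitioned object key using Hive-style dt= partitions."""
    return f"{source}/{table}/dt={day.isoformat()}/{filename}"


def landing_uri(source: str, table: str, day: date, filename: str) -> str:
    """Full S3 URI for a landed file, e.g. for registering with a query engine."""
    return f"s3://{BUCKET}/{landing_key(source, table, day, filename)}"
```

A daily ACD export for January 1, 2024 would land at `s3://example-cc-landing/acd/calls/dt=2024-01-01/part-0.jsonl`, and a query filtered to that date only ever reads that prefix.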
Metadata Discovery and Data Dictionary
In order to build call center reports or conduct data analysis, all stakeholders and data consumers need to understand the underlying data, where it comes from and what it means, as well as know what data is available for analysis. However, no master data dictionary exists that defines and maps common data elements across the various call center vendors, and existing call center reporting and visualization tools have failed to provide this functionality.
Many contact centers have recently moved to the cloud and had to switch ACDs and vendors as a result. This can wreak havoc on contact center reporting because most vendors have their own calculations for metrics, and it's difficult to navigate the differences between them (e.g., does Total Handle Time include After-Call Work and AUX time? Does Service Level include abandoned calls?). This becomes even trickier when you need to create your own custom calculations and stakeholders aren't clear on the underlying data or formulas.
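The kind of divergence described above is easy to see in code. The formulas below are illustrative, not any vendor's actual specification: two hypothetical vendors disagree on whether handle time includes after-call work, and service level changes depending on whether abandoned calls stay in the denominator.

```python
def handle_time_vendor_a(talk_sec: int, hold_sec: int, acw_sec: int) -> int:
    """Hypothetical Vendor A: handle time = talk + hold + after-call work."""
    return talk_sec + hold_sec + acw_sec


def handle_time_vendor_b(talk_sec: int, hold_sec: int) -> int:
    """Hypothetical Vendor B: handle time excludes after-call work."""
    return talk_sec + hold_sec


def service_level(answered_in_threshold: int, offered: int, abandoned: int,
                  count_abandons: bool) -> float:
    """Service level with or without abandoned calls in the denominator."""
    denominator = offered if count_abandons else offered - abandoned
    return answered_in_threshold / denominator
```

For the same call (100s talk, 20s hold, 30s wrap-up), Vendor A reports 150 seconds of handle time while Vendor B reports 120; the same half-day of traffic can show two different service levels depending on one definitional choice. A shared data dictionary makes those choices explicit instead of leaving them buried in each vendor's reporting engine.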
So today, we're announcing the first industry-wide data dictionary and metadata repository. Making metadata openly available will improve the productivity of data analysts, data scientists and end users when interacting with data.
Best of all, you can customize it to your own needs and include data from your own sources as well. To accomplish this, we're embedding Amundsen, an open source data discovery tool developed at Lyft, into our core platform. The metadata dictionary will also be published and version controlled in GitHub for anyone to access and to contribute.
Data Modeling and Normalization
As there exists no master data dictionary for the call center industry, there also exists no normalized data model that maps data from the various vendors into a schema that can be easily queried by analysis engines. This is a problem that Latigent tackled back in 2005, but when Latigent was acquired by Cisco Systems in 2007, Cisco stopped work on that product in favor of using the reporting engine on their own data. Cisco is not alone in this strategy, as most contact center vendors fail to support data from third-party systems leaving call center managers to manually combine their data.
While some companies have taken on this challenge as part of their Business Intelligence products, the data models are not published or portable. The result is more expense, a lack of interoperability and a larger strain on internal resources.
We believe in bringing transparency and interoperability to the contact center ecosystem. We also realize that there is no one-size-fits-all data model; models need to be adapted to their intended use and as new data sources are considered. Therefore, Xaqt's data models will be open, publicly documented and version controlled in GitHub. By putting them into the public domain, we invite your feedback and contributions. Through global collaboration, we can develop models that work for everyone and put companies back in control of their own destiny.
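In its simplest form, a portable data model is a published mapping from each vendor's field names onto a common schema. The sketch below is a minimal illustration with made-up field names on both sides, not the actual model being published; it shows why keeping such mappings in version control makes them easy to review and extend.

```python
# Vendor-specific field names mapped onto a hypothetical common schema.
FIELD_MAPS = {
    "vendor_a": {"callId": "call_id", "talkTime": "talk_seconds", "acwTime": "acw_seconds"},
    "vendor_b": {"CALL_KEY": "call_id", "TALK_SEC": "talk_seconds", "WRAP_SEC": "acw_seconds"},
}


def normalize(record: dict, vendor: str) -> dict:
    """Rename a raw vendor record's fields into the common schema."""
    mapping = FIELD_MAPS[vendor]
    return {common: record[raw] for raw, common in mapping.items() if raw in record}
```

Two records that look nothing alike on the wire normalize to the same shape, so every downstream report and dashboard can be written once against the common schema.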
Data Access and Interactive Queries
Contact Center and Customer Experience data can provide core business insights beyond the contact center, yet accessing this data throughout the enterprise is often a challenge. Unless a company has gone through the effort of building its own data lake, reporting interfaces are dictated to end users by whichever call center vendor they've chosen.
However, some companies may already have business intelligence products, such as Tableau or Microsoft Power BI, in place. These applications should be able to access the underlying call center data to be combined with other data.
Now that can happen too.
Xaqt's Cognitive Insights Platform leverages Presto, an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.
It was built from the ground up for interactive analytics and approaches the speed of commercial data warehouses while scaling to the size of organizations like Facebook (where it was developed).
Presto provides the ability to virtualize data across various sources and access it from a single SQL interface without the need for an expensive consolidated data warehouse. In other words, more power and flexibility at a fraction of the cost of commercial databases.
We've benchmarked Presto against databases such as Amazon Redshift and Google BigQuery and are blown away by its performance and flexibility. Best of all, it can query raw data files stored in the data lake (S3) and negate the need for a database altogether, reducing overhead and expense.
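Presto itself requires a running cluster, so as a self-contained stand-in, here is the shape of the SQL access this layer provides, using Python's built-in sqlite3 over a few rows of sample interval data (the table and column names are hypothetical). The point is that the analytic query is plain SQL; pointed at Presto instead of SQLite, the same statement would run against files landed in S3.

```python
import sqlite3

# Sample interval data as it might look after landing and normalization.
rows = [
    ("2024-01-01", "sales", 120, 5),
    ("2024-01-01", "support", 300, 12),
    ("2024-01-02", "sales", 90, 3),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE intervals (dt TEXT, queue TEXT, calls INTEGER, abandons INTEGER)"
)
conn.executemany("INSERT INTO intervals VALUES (?, ?, ?, ?)", rows)

# Abandon rate by queue; against Presto, this SQL would run unchanged
# over the raw files in the data lake.
abandon_rate = conn.execute(
    "SELECT queue, 1.0 * SUM(abandons) / SUM(calls) "
    "FROM intervals GROUP BY queue ORDER BY queue"
).fetchall()
```

Swapping the engine underneath a standard SQL interface, rather than the other way around, is what lets existing BI tools sit on top of the data lake without a consolidated warehouse in between.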
Dashboards and Visualization
The Contact Center dashboard and visualization tool is where the rubber hits the road, or data hits the eyeballs, so to speak. Several call center vendors tout their reporting and dashboard tools, but they are built as walled gardens with little ability to extend to data outside their application.
On the other end of the spectrum, tools like Tableau, Looker and Microsoft Power BI exist for companies with data-savvy teams. But these are expensive and typically priced per user, which incentivizes companies NOT to get data into more people's hands because it adds expense. Additionally, these tools are not well suited to the various stakeholders in the Contact Center.
Having built our own Business Intelligence product over the last three years, we understand these challenges. With recent contributions to the open source community from digitally native companies such as Airbnb, Facebook, Lyft and Uber, we decided to conduct a fresh analysis of the market. After months of vetting, we've decided to adopt Apache Superset as our standard dashboard and visualization engine (and retire our own tool as a result).
Superset is a modern, enterprise-ready business intelligence web application developed and open sourced by Airbnb. In recent years, it has gained an active community of contributors and continues to accelerate its feature development.
This means that companies no longer have to worry about per-user licensing fees or proprietary reporting tools. As Apache Superset is open source, the only expense is hosting and managing it in the cloud, which any company may do on their own, or they can choose from one of Xaqt's low-cost hosting plans. Out of the box, Superset provides:
- An intuitive interface to explore and visualize datasets, and create interactive dashboards
- A wide array of beautiful visualizations to showcase your data
- Easy, code-free user flows to drill down and slice and dice the data underlying exposed dashboards; the dashboards and charts act as a starting point for deeper analysis
- The ability to port charts and dashboards between instances
- Enterprise-ready authentication with integration with major authentication providers (database, OpenID, LDAP and OAuth)
- An extensible, high-granularity security/permission model allowing intricate rules on who can access individual features and datasets
- A lightweight semantic layer, allowing you to control how data sources are exposed to the user by defining dimensions and metrics
- Native integration with Amundsen for data dictionary and discovery
Building a Community
Over the last couple of decades, an entire cottage industry has been built around developing and customizing reports for call centers. This is largely a result of the proprietary nature of the underlying systems (Avaya CMS, Cisco CUIC, etc.) and their lack of user-friendliness.
Rather than building a vendor specific practice or user group, we're building a community around the application of these open source tools in the contact center. Together, we can develop best practices and sharable templates for the entire industry.
In the coming weeks, we'll be creating GitHub repositories, slack channels, and email distribution lists. Most of all, we'll be recruiting your help to launch a new movement into the future.