
Five stages of event-streaming platform adoption (recommendations for business executives)

At Confluent, we've worked with a number of organizations that have architected themselves around event-streaming platforms, to the extent that the platform has become fundamental to how the business operates.

These organizations range from digital natives to more traditional companies across Financial Services, Retail, Automotive, Telecoms, Healthcare, and Government. In fact, we've seen adoption in all industries in which data is critical, which of course is every industry. Indeed, this year's Apache Kafka report found that 94% of organizations plan to deploy new applications or systems using Kafka as their event-streaming platform, and two-thirds (67%) plan to deploy between one and ten new applications or systems.

Use cases for streaming platforms vary from improving customer experience, to facilitating new business models, to driving increased efficiency and/or mitigating risk. Regardless of the use case, through working with many of these organizations we have synthesized some common themes of event-streaming maturity and identified five stages of adoption, as shown in Figure 1.

Figure 1. Migrating to an event-streaming platform is a multi-phase adoption journey. We see five stages to event-streaming maturity.

The journey starts with awareness and moves up a curve, ending with an event-streaming platform acting as the central nervous system of the enterprise. The aim of this five-part blog series is to describe these common stages and provide guidance for organizations aiming to increase their event-streaming maturity. First, we start with pre-streaming.

Stage 0. Pre-Streaming

Before an organization implements any form of streaming platform, it sits in the pre-streaming stage. Here, we often see groups of data engineers or developers struggling to move data around the organization to deliver required business functionality.

This struggle is usually a result of 'evolved complexity': legacy core systems that have grown around entrenched key business processes, three-tier architectures, and simple point-to-point connections, with a growing number of integrations. In most organizations, legacy architecture has become a tightly coupled, complex, and highly rigid ecosystem, as depicted in the diagram below. Data moves around in bespoke batches, so processing is often scheduled rather than real-time, and data becomes 'stuck' within the various legacy systems and/or in business silos.

Figure 2. Data moves around in bespoke batches and hence processing is often scheduled, rather than real-time.

Time and focus that should be spent building responsive event-driven applications are instead mostly spent on landing data in the right places. This complexity is exacerbated in organizations with a history of M&A, where different systems have been patched together.

We want to help organizations move away from this legacy complexity, not only to realize value from real-time data, but also to realize the many benefits of simplification, which include increased developer velocity and lower costs.

Stage 0 Recommendations:

  • Increase awareness of streaming.

  • Watch this short video by Confluent CEO Jay Kreps.

  • Undertake a Confluent Streaming Discovery.

  • We will work with you to assess, pilot, or develop proof of concept (POC) use cases, business processes, or data challenges that could benefit from streaming.

  • We typically see two options here:

  • Connecting to existing data stuck in external data stores or legacy systems.

  • Producing new streams of data (new business models).

  • Assess the value potential (saving money, making money, or protecting money) of moving to a streaming platform. Value could include:

  • Cost savings from architectural simplification.

  • New revenue streams or enhanced revenue through better customer experience.

  • Managing risk, for example by implementing fraud detection.

Stage 1. Streaming Awareness & Pilot

The journey to stage 1 typically starts when the data engineers or developers (mentioned in the previous section) hit a problem or pain point with their current or legacy infrastructure and architecture. Whereas enterprise technology transformations are traditionally driven from the top down, we see event-streaming platforms entering the enterprise from the bottom up.

The tech team may have already worked with products like Kafka, or they come across event-streaming at a community event or through their own research. This awareness initiates a pilot, or proof of concept (POC). And because Kafka is open source, the barrier to entry for small-scale POCs is minimal.

The teams start by tackling one or two edge cases, modest in scale and ambition. As we explained in stage 0 (pre-streaming), there are two key drivers for a pilot or POC:

  1. Connecting to data stuck in external data stores or legacy systems.

  • Examples include a bank off-loading data from the mainframe into an event-streaming platform in order to process the 'events' more rapidly, meet regulatory compliance requirements, or provide a better customer experience.

  2. Producing new streams of data (new business models).

  • The tech team may experiment with event-streaming, producing new streams of data that were previously not possible to process with legacy systems. An example is Audi's connected car initiative: moving data from car sensors to a central hub and processing the data in real time (see the sketch below).
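
To make the second driver concrete, here is a minimal sketch of a producer publishing connected-car sensor readings to Kafka. It is purely illustrative: the vehicle-telemetry topic name, the JSON payload, and the broker address are our assumptions, not details of Audi's actual implementation. (The first driver, freeing stuck data, is more typically handled with off-the-shelf Kafka Connect source connectors than with hand-written producers.)

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class VehicleTelemetryProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: a local dev broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by vehicle ID keeps each car's readings in one partition,
            // so they are consumed in the order they were produced.
            String vehicleId = "vehicle-4711"; // hypothetical ID
            String reading = "{\"vehicleId\":\"vehicle-4711\",\"speedKmh\":87,\"engineTempC\":92}";
            producer.send(new ProducerRecord<>("vehicle-telemetry", vehicleId, reading));
        } // closing the producer flushes any buffered records
    }
}
```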

POCs at stage 1 are mostly technology-driven rather than business-driven. That is, they are often focused on the data engineering challenges more than the business benefits.

It is also important to note that evolving the pre-streaming world into an event-driven one doesn't necessarily mean completely replacing existing or legacy infrastructure. The old request-response application paradigm can coexist with the new event-driven one. This stage is typically about supplementing the architecture where the change to event-centric thinking makes sense; traditional transactions might still be processed using synchronous communication, running in parallel, as in the sketch below.
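
As a rough sketch of this coexistence, consider a hypothetical order service: the synchronous request-response path stays exactly as it was, and the service additionally publishes each order as an event for downstream consumers to react to. The OrderService class and the orders topic are illustrative assumptions, not a prescribed design.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

/** Hypothetical service showing request-response and event-driven styles side by side. */
public class OrderService {
    private final KafkaProducer<String, String> producer;

    /** Expects properties with bootstrap.servers and string key/value serializers. */
    public OrderService(Properties producerProps) {
        this.producer = new KafkaProducer<>(producerProps);
    }

    /** Existing synchronous entry point: the caller still gets an immediate reply. */
    public String placeOrder(String orderId, String orderJson) {
        // ... existing synchronous work: validation, database write, etc. ...

        // New: also emit the order as an event, so fraud checks, analytics, or
        // notifications can react in real time without being called directly.
        producer.send(new ProducerRecord<>("orders", orderId, orderJson));
        return orderId;
    }
}
```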

Stage 1 Recommendations:

  • Move to early production use cases: start small, think big, and act fast.

  • The tech team will often be most interested in the increased agility and flexibility of a streaming platform. Their developer velocity increases significantly.

  • The business team will obviously see benefits here, but will also see wider business use cases, such as improved customer experience and reduced costs. They get a technology infrastructure that actually describes what their customers do, mapping those events onto existing business processes, so a business person can reason about how it all fits together.

  • Overall, this transition isn’t just about adopting a different technology, it’s about changing how you think about data, or events, in your business. It’s about making a fundamental shift to event-centric thinking where you are continuously capturing, assessing and responding to streams of events that matter to the business.

  • There may be minimal business value in the first POC step (what value there is will mostly be in the data engineering area), but the business value will come as you move up the maturity curve. Look for early opportunities to expand.


Stage 2. First Steps into Production Streaming

We saw that the transition from stage 0 to 1 is mostly driven from the bottom up. The tech team (developers, data engineers, architects) champions the need for event-streaming platforms because they are seeking efficiency and developer velocity.

Organizations typically run small-scale pilots and progress these into production for non-mission-critical (or experimental) use cases. It is in the move from stage 1 to stage 2 that tech and business minds meet. It is here that business application owners see clear business benefit in real-time data: improved customer experience (CX), new business models, and/or ways to mitigate risk.

Across the two broad categories of business-application adoption, we see slightly different value drivers:

  1. Connecting to data stuck in external data stores or legacy systems.

  • This year's Apache Kafka report found that the top benefit of Kafka is increased agility. Developers and data engineers appreciate being loosely coupled, with increased speed, flexibility, and extensibility. As such, they often champion the change, as it simply enables them to get stuff done. Developers and data engineers often realize this benefit before the business 'gets it'. The value here is mostly around savings: doing more for less.

  2. Producing new streams of data (new business models).

  • The tech team may experiment with event-streaming, producing new streams of data that were previously not possible to process with legacy systems. As with the connected car example, this often drives top-line business value via new business models.

Using an event-streaming platform, pilot teams (both tech and business) are able to show how data can be handled in real time, with the corresponding technical and business benefits, proving the value of the platform.

Whilst the tech and business teams work closely in stage 2, the project teams are typically self-contained, or 'bounded'. Only a small number of highly skilled people really appreciate the difference between the streaming platform and, say, legacy messaging technology. The wider business may remain unaware of the power of the platform and how event-streaming can be applied across multiple projects. Projects remain siloed.

Stage 2 Recommendations:

  • Tech: Educate the business around the early production use cases: freeing data from legacy systems or silos.

  • Business: Look for business opportunities enabled by real-time data flows.

  • For pilots from stage 1: Demonstrate the benefits of the pilot, either:

  • Freeing data that was stuck in legacy systems or external data stores, or

  • Producing new streams of data that were previously not possible to process with legacy systems.

  • Look for opportunities to expand early production applications to mission-critical apps.

  • Think about the team (skills and experience) and the operating model (see the next stage).

Stage 3. Mission-Critical Integrated Streaming

As organizations onboard more apps onto the streaming platform, they move into the mission-critical streaming stage. Rather than managing edge cases, this stage is characterized by the streaming platform being tied more closely to the business's overall vision and strategic objectives.

Mission-critical capabilities of a streaming platform that matter in this stage include security, durability, and exactly-once guarantees, together with the ability to monitor event flows across multiple applications and maintain data-completeness SLAs.
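
To show what exactly-once looks like in practice, here is a sketch of a Kafka transactional producer writing atomically across two topics. The topic names, keys, and transactional ID are assumptions for illustration; the configuration settings and the transaction API are standard Kafka producer features.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOncePaymentsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // assumption: a local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                             // durability: wait for all in-sync replicas
        props.put("enable.idempotence", "true");              // retries cannot create duplicates
        props.put("transactional.id", "payments-producer-1"); // stable ID enables transactions

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("payments", "acct-42", "{\"amount\":100.00}"));
                producer.send(new ProducerRecord<>("audit-log", "acct-42", "payment recorded"));
                producer.commitTransaction();  // both records become visible atomically
            } catch (KafkaException e) {
                producer.abortTransaction();   // read_committed consumers never see either record
                throw e;
            }
        }
    }
}
```

A consumer configured with isolation.level=read_committed will see both records or neither, which is the foundation of Kafka's exactly-once semantics.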

This stage is also about integrated streaming: different business processes or business units working together where before they might have worked in silos. We see a number of organizations breaking silos with Apache Kafka by joining events that were previously managed separately. An event-streaming platform can be used to join streams and tables, leveraging the stream-table duality to provide a unified view of data in real time.
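
Here is a hedged sketch of such a stream-table join using the Kafka Streams DSL: a stream of order events is enriched, in real time, against a continuously updated table of customer profiles. The topic names, application ID, and string payloads are illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class OrderEnricher {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // The stream side of the duality: an unbounded sequence of order
        // events, keyed by customer ID.
        KStream<String, String> orders = builder.stream("orders");

        // The table side: the latest customer profile per key, updated as
        // profile-change events arrive.
        KTable<String, String> customers = builder.table("customer-profiles");

        // Join each order with the customer's current profile and publish
        // the unified, enriched record downstream.
        orders.join(customers, (order, profile) -> order + " | " + profile)
              .to("orders-enriched");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher");    // assumption
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```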

In this stage, the organization has to think about its operating model. This includes how the business and data functions are structured, organized, and managed (the governance). For example, a new mission-critical tech stack most likely requires new skills within the organization and potentially a new delivery model (see the sidebar on organizational design in the footnotes).

Often, at this point, the C-suite and executives hear about an event-streaming platform for the first time. Whilst they see the benefits, some education around the technology is required. Even tech-savvy execs can fail to appreciate the differences between event-streaming and existing or legacy messaging technology, and why event-streaming is fundamental to business success.

Stage 3 Recommendations:

  • As with any major business initiative, event-streaming should have its own strategic direction. To create an event-streaming strategy that goes beyond a few use cases, look for opportunities to join up teams for efficiencies and economies of scale.

  • Focus on the mission-critical use cases and the architecture 'ilities' (reliability, scalability, operability, extensibility, etc.), as well as security.

  • Integration: Think about the business applications and how they are integrated across the organization.

  • Think about the operating model, including the delivery model and the team (skills, etc.). The C-suite, together with whoever is tasked with leading the company's event-streaming initiatives, should set up a series of workshops for the executive team to coach its members in the key tenets of event-streaming and educate the broader management audience. Three crucial questions for the C-suite and company leaders:

  • i. How can event-streaming help the company make more money?

  • ii. How can it help save operational costs?

  • iii. How can it mitigate risk?

Stage 4. Global Streaming

As organizations move beyond the integrated streaming stage, they enter the global streaming stage. In this stage, the streaming platform has grown within the business to the point where it must serve customers internationally.

There’s power in global streaming, but there are also several big challenges a business needs to address in this stage, such as:

  • How do you make data available across different regions?

  • How do you serve data efficiently from closer geos?

  • How do you comply with data-sovereignty rules, such as GDPR?

The old (pre-streaming) ways of solving these problems are now insufficient or operationally challenging: running hot standbys and/or performing manual disaster-recovery failovers that take several hours, or breaking up a global service into bespoke regional services, which requires building all the complexity of geo-partitioning into the application logic.
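
As one illustration of how the platform addresses the 'closer geos' challenge: since Apache Kafka 2.4 (KIP-392), a consumer can fetch from the nearest replica rather than always crossing regions to reach the partition leader, provided the brokers set broker.rack and replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector. The sketch below assumes hypothetical region and topic names; getting the data replicated across regions in the first place is a separate concern (for example, via Confluent Replicator or MirrorMaker).

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NearestReplicaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // assumption
        props.put("group.id", "eu-order-analytics");              // assumption
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Declare where this consumer runs; with a rack-aware replica selector
        // on the brokers, fetches are served from a replica in the same region.
        props.put("client.rack", "eu-west-1");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```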

Kafka, as a global event-streaming platform, is evolving, and at Confluent we are working with global organizations to ensure Kafka plays its part in truly enabling global streaming. Please contact us if you would like a Kafka global roadmap presentation.

Stage 4 Recommendations:

  • Look to qualify and quantify the business value of global streaming: how will it impact the global business?

  • Design the Global Operating Model (Target Operating Model, or TOM).

  • Understand and align with how the business is organized.

  • Event-streaming capabilities need to be embedded in the business, resulting in an effective event-streaming organization structure.

  • We have observed that organizations with successful event-streaming initiatives embed event-streaming capabilities into their core businesses.

  • Define roles and responsibilities around event-streaming.

Stage 5. Central Nervous System

The final stage in our maturity model is stage 5, in which the event-streaming platform effectively becomes the central nervous system of the entire enterprise.

This is the stage that is associated with the digital natives (businesses that were born digital). These organizations have often architected themselves around an event-streaming platform from the beginning, without the burden of any legacy complexity.

Everything in a digitally native business is an event, and all the data in the organization is managed through an event-streaming platform. These organizations tend to progress along the streaming journey fairly quickly, and tend to handle large amounts of data in real time in a hugely efficient manner.

Netflix is a prime example of a digital native using Kafka at massive scale. Netflix runs more than 50 Kafka clusters, with more than 4,000 brokers, processing an astonishing 2+ trillion messages every single day. This is a powerful state for a business to be in, and the possibilities are immense.

In this stage, everything happening in the business is available instantly to all applications in the company through the event-streaming platform. The technical team is happy because the architecture is greatly simplified and they can work efficiently. The business team is happy because they can get real-time insights and act on data, or events, as they happen, rather than when it is too late.

The key recommendations here are to look for opportunities for continuous improvement and benefits realization.

Summarizing the Stages

Along the stages of adoption, we have identified different areas of focus. In the earlier stages (0, 1, and 2), the technical team tends to drive event-streaming, and the focus is more bottom-up: the infrastructure and architecture that enable data flow. In the later stages (3, 4, and 5), the focus is more top-down, including the strategic objectives, the operating model, and the business applications.

We have summarized the areas of focus in our Streaming-Adoption framework:

Figure 4. Confluent’s Event Streaming Framework

A summary of the stages, including typical characteristics and recommendations, is shown in Figure 5.

In conclusion

We hope this guidance proves useful as organizations think about their use of Apache Kafka as a streaming platform. This document can be used to provide a common language for event-streaming adoption with fellow Kafka users, or to help pitch Apache Kafka internally within an organization.

If you have additional information to add, your own story to tell, or any questions or comments on this article, please get in touch. We would love to hear how your organization is moving along the phases of the streaming journey to arrive at the point where it is truly event-driven.

______________________________

Footnotes:

For further information on bottom-up vs. top-down driven change, this Deloitte article is a worthwhile read. Deloitte say that significant re-engineering technology projects often start from the bottom up, and this is especially the case when:

  • the technologies are poised to redefine business models and processes, and

  • when modernizing underlying infrastructure and architecture.

At Confluent, our objective is to complement the bottom-up entry of event-streaming with a top-down message. The WEF state that one of the key enablers of return on digital investments is leadership that is both agile and digital-savvy. The leadership should maintain a strategic vision, purpose, skills, intent, and alignment across management levels to ensure a nimble decision-making process on innovation. See the WEF's diagram below. We think it is important for the executive and middle management, as well as the [tech] workforce, to understand event-streaming and what it can do for the business.

_______________

McKinsey state, in their paper 'Implementing Data Analytics', that they "often see small scale data-analytics entering an organization via a few curious developers."

The paper goes on to say “These developers bring a technology into an organization from the bottom-up. Few executives can describe in detail what data analytics talent their organizations has, let alone where that talent is located, how it’s organized, and whether they have the right skills and titles”.

We believe the same to be true of event-streaming.

Likewise, McKinsey's recommendation for implementing data analytics is for the "CDO and chief human resources officer (CHRO) to lead an effort to detail job descriptions for all the data-analytics roles needed in the years ahead". According to McKinsey, the skills include a mix of business, analytical, and technology skills. Again, the recommendation also applies to event-streaming platforms, especially at stages 3 and 4 of maturity.

When it comes to Organizational Design, McKinsey’s view can be applied to event-streaming: Organizations can develop event-streaming capabilities in isolation, either centralized and far removed from the business, or within ‘projects’ which are typically ‘pockets of poorly coordinated silos’.

In contrast, an organization can centralize the event-streaming function. But, as McKinsey point out, over-centralization can create bottlenecks and can lead to a lack of business buy-in.

This diagram comes from McKinsey's 'Ten red flags signaling your analytics program will fail': https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/ten-red-flags-signaling-your-analytics-program-will-fail

After reading this article, you can turn it on its head, converting warning signs of failure into recommendations for success. We can also apply the same lessons learned to event-streaming platforms. And so, we have the following key recommendations:

  • The C-suite should consider a hybrid organizational model for event-streaming, in which agile teams combine talented professionals from both the business side and technology side.

  • A hybrid model will retain some centralized capability and decision rights (particularly around data governance and other standards), but the event-streaming teams are still embedded in the business and accountable for delivering impact.

Interestingly, on another McKinsey recommendation, around centralization, we would suggest the opposite.

McKinsey state that “for many companies, the degree of centralization may change over time. Early in a company’s data analytics journey, it might make sense to work more centrally, since it’s easier to build and run a central team and ensure the quality of the team’s outputs. But over time, as the business becomes more proficient, it may be possible for the center to step back to more of a facilitation role, allowing the businesses more autonomy.”

At Confluent, we see companies early in the event-streaming journey working in a decentralized manner. Initiatives are run as small pilot projects, or sometimes as skunkworks. It is later in the journey that the organization moves towards a centralized model, with the corresponding efficiencies and economies of scale that come with that move.
