Moving to the Cloud is NOT about the Cloud

One of the questions I get asked regularly by customers, colleagues and friends is “Why do I need to move my … to the cloud?“. Almost every time this question comes up, the conversation quickly turns to the technical capabilities of the cloud, cost savings or the latest service this or that cloud vendor has released. These are all valid reasons, but that’s not what drives people, teams or organizations.

Take, for example, the decision to buy a new house. Surely, it can be driven by the fact that one is simply interested in getting a shiny new home. But very often, it is about providing yourself or your family with more options. Whether it is upsizing or downsizing to adjust to a growing or shrinking number of people living under the same roof. Getting a backyard so your kids can practice their favorite sport. Or remodeling to fit new family needs. It’s rarely about the house itself; it’s about the value it adds to your family’s comfort.

Shortening Time to Value

One of the foremost aspirations of any business leader is how fast they can get “new” value into the hands of their customers. Imagine you want to build a brand-new information technology capability, test it, deploy it, secure it and monitor it. In addition, you need your globally distributed teams to work on it.

Traditionally this would require:

  • An engineering system to host your development team activities: code, build and test.
  • Test environments to allow validating the features in isolation and as an integrated product.
  • Secure production environments that elastically scale to respond to growing or shrinking demand.
  • Finally, most of this net new investment would be in the form of a capital expenditure (CAPEX).

Setting aside all the cost savings that result from economies of scale, getting the prerequisite logistics ready in the cloud for the example above could take a few minutes of provisioning. It could switch the investment model from CAPEX to OPEX. And it could be scaled down or discarded in minutes if priorities change.

For instance, let’s attempt an experiment on Microsoft Azure. I want to achieve the goals I have listed in this small use case in a matter of minutes.

1 – I am going to start by creating a “DevOps project“. (don’t worry if you don’t know what DevOps is)

2 – It then asks me what kind of language/technology I want to use:

3 – It then asks me what basic architecture components I will need:

4 – Then I choose how I want to host it. I am choosing Kubernetes, so that if I want to take it anywhere else, I can with no code modifications:

5 – I then specify a few details and choose the initial infrastructure size I want:

The end result is:

  • A backlog to plan, manage and track work required for my investment.
  • A git repository to host my source control securely.
  • A build pipeline to build, validate and test my application.
  • A release pipeline to continuously deploy new value and release it to my end users.
  • A Kubernetes cluster to host my application in a secure and elastically scalable way.
  • A private docker registry to host my application docker images.
  • Last but not least, monitoring capabilities to allow me to track the application usage and learn about opportunities for improvements or expansions:

Accelerating innovation

You are probably thinking that you can’t possibly compare a “hello world” sample to a production-ready application. That’s absolutely correct. But I have already prepared everything I need to start building the real application.

If you already have your own code, clone the sample application locally, replace the sample code with yours, push it to the git repo and let the pipeline do its work. This allows you to focus on the business goals you want to achieve rather than the infrastructure you might need. The on-demand, self-service capabilities of the cloud accelerate your innovation velocity.

For instance, as I take my initial sample application from demo form to production-ready, I might decide that the initial Kubernetes cluster size I chose is not enough; all I have to do is choose a new size. Better yet, I can scale up and down based on the demand from my new user base.

Enabling agility

One of the biggest advantages, in the use case I have walked through so far, is the ability to shift technology strategy, pivot the application’s business goals or even fork it into different verticals. None of those decisions now depends on the initial technical choices:

  • My application is portable to any cloud: public or private.
  • My development technology is cross platform.
  • My hosting model is elastically scalable.
  • My code base is in a distributed source control system that can allow parallel development efforts.
  • And all my investment aspirations are managed in a lean board:

What we have seen in this small example is that the cloud itself is not the goal. The ability to achieve your business goals faster and in a cost-effective way, or to adjust your aspirations as needed, is most likely the real answer to “Why do I need to move my … to the cloud”.


Safe on-premises releases using VSTS

I have recently interacted with many customers who were clearly under the impression that they should only consider DevOps practices if they are doing cloud development. So, I decided to bust this myth.

The concept that I want to address in this blog post is the idea of Safe releases and incremental feature rollout.

What are Safe releases:

First, safe releases are not about security, although they can and should integrate security into the pipeline. Rather, they are about limiting your exposure surface as you roll out new releases of your product, even within your enterprise. They are also about learning from your rollout and adjusting your plan.

The traditional thinking is to have Dev, QA, UAT, Staging and Prod environments. There is nothing wrong with that, and you may still have all those and a few more. However, releasing, as opposed to deploying, is about how you assess the positive or negative impact of the new value you are making available to your end users. Deploying is more about how the bits get to specific environments.

The ingredients:

End to end walkthrough:

Choose your (Sample) Application

Let’s assume that we are building an ASP.NET Core application. (It really could be anything you want; this is just an example.)

dotnet new mvc

Build your application locally:

dotnet build

Run your application locally:

dotnet run

Commit and push your code to a git repository

Then

Add some new functionality:

So, let’s imagine you are about to add a “Support page” to your application. Let’s track that with a product backlog item (or story).

Create a feature flag:

In this example, I am using an off-the-shelf product for managing feature flags called LaunchDarkly (this is not an endorsement, just an example).

Notice how I used the work item id in the feature flag key. This is not required. However, it is extremely convenient for reverse traceability.

Create a topic branch:

Name your topic branch using a convention that allows reverse traceability. Work item tracking to code is already ensured in VSTS. But naming your topic branch using your work item id allows anyone to look at the branch and easily find the source product backlog item:

Open the topic branch locally:

I am using VS Code as my code editor (use any code editor you like).

Add the feature to your code:

Let’s start by adding the “launchdarkly” dependency to the web application. This will allow us to integrate with the feature flag we created earlier.

Let’s set up our code to interface with the feature flag:
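(A minimal sketch of that setup, assuming the LaunchDarkly .NET SDK of the LaunchDarkly.Client era; the SDK key and the flag key below are placeholders.)

using LaunchDarkly.Client;

// Initialize the client once per application (the SDK key is a placeholder).
LdClient ldClient = new LdClient("YOUR_SDK_KEY");

// Evaluate the flag for a specific user. The flag key follows the
// "<work item id>-support-page" convention described above (placeholder id).
var user = User.WithKey("ilias@someapdomain.com");
bool showSupportPage = ldClient.BoolVariation("1234-support-page", user, false);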

As shown in the code above, we are limiting the visibility of the feature to the user ilias@someapdomain.com. In a real application you will want the list of users/roles/rules injected from your configuration source, so it can be passed in during a release! We will then integrate the flag with our controller code as follows:
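(The sketch below is illustrative, reusing the placeholder flag key and user from above; the LaunchDarkly User type is fully qualified because Controller already exposes a User property.)

using LaunchDarkly.Client;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    // LdClient is registered as a singleton at startup (not shown).
    private readonly LdClient _ldClient;

    public HomeController(LdClient ldClient) => _ldClient = ldClient;

    public IActionResult Support()
    {
        // In a real application the user key would come from the signed-in
        // identity, and the allowed users/roles/rules would come from
        // configuration or from the flag's targeting rules.
        var user = LaunchDarkly.Client.User.WithKey("ilias@someapdomain.com");

        if (_ldClient.BoolVariation("1234-support-page", user, false))
            return View();              // flag is on for this user: show the new page

        return View("ComingSoon");      // flag is off: keep the feature dark
    }
}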

In our sample feature flag, we have only enabled that page for that one user:

Since the feature flag is set to true for the user we are using to experiment, when we launch that page in the application we get the following response:

If we switch the flag to off for that specific user:

The application returns the following without any change to the code:

Commit and push your change to the topic branch:

Create a Pull Request to send your changes to the master branch:

Once the pull request is completed the CI build will be triggered:

Build the application using a CI build:

The CI build shows the full traceability between the feature we’re building, the code and the build itself:

Create a release definition to help you validate the new feature through variable validation rings:

Let’s first define our release pipeline. The pipeline is designed in a way that surfaces the feedback opportunities rather than the deployment environments:

Since we are going to deploy to internal servers, we will use the VSTS deployment group feature to execute our deployments. (you can learn more about deployment groups here: https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/deployment-groups/?view=vsts )

In this sample, I am using one deployment group with one environment that is tagged for the kind of activities I would like to use it for:

Using this deployment group, we can run an internal deployment to a quality assurance environment where we will execute security, performance and functional validations. (In real scenarios, performance would probably be a separate environment):

Once we are done with the validation steps, we start the controlled rollout steps. The product backlog item we are working on is linked to the feature flag.

Therefore, to enable the feature to the preview users we will have to start turning on the feature flag for those users:

 

Since we have full traceability and linking from the work item, through source control and the build, to the release, the release will know which feature flags it needs to control.

Before the feature flag is switched on, any user who tries to access the support page will get the following response:

Once the release is fully completed, the flag will be switched on for the preview users:

And then if those users try to access the support page, they will get the following response:

 

In summary, we’ve seen in this post that you can deploy to on-premises environments using VSTS, either by using deployment groups or private agents. You can also control the exposure of new, partially released features by using feature flags. Finally, you can use a release pipeline to define how you want to orchestrate the release of your features.

 

Automating releases to separate Azure resource groups using TFS and VSTS

This post is written in collaboration with my colleagues Daisy Chaussee and Wyn Lewis-Bevan. It will appear in their respective blogs as well.

Business Case:

One of our customers recently challenged us with a use case that is not very common. They wanted to keep using TFS 2015 for work item management, git source control and build management. However, they wanted to use VSTS for release management into Microsoft Azure. In addition, their deployment process is quite advanced. They deploy the same application to hundreds of customers in separate resource groups, but they do not want to have to create hundreds of release definitions, and above all they want close to zero manual intervention steps.

You may ask why? We did too. Their (decently large) team is currently heads down, focused on delivering the product they are building, and cannot afford any time to migrate to VSTS. They also had some tight upcoming deadlines. Hence, they didn’t want any disruption to their teams, but they did want to accelerate their deployment process using the built-in capabilities of VSTS release management.

Solution:

Azure to the rescue: The solution is extremely simple and relies heavily on existing features in Microsoft Azure.

  1. Bridging TFS and VSTS:

    The team continues doing what it has been doing without having to change anything. They push their commits to a source control repository in TFS 2015, and they have a continuous build definition that builds their application. What we added to that build definition is an additional step that uploads the build drops to an Azure file storage location. For simplicity we used Azure Files as the drop destination: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction . We named the folders using the build definition name.

  2. Automating deployments to separate resource groups:

    We chose to build a VSTS extension that encapsulates the whole deployment process. A release manager selects a customer name from a drop down. This triggers a release for that customer’s environment.

    For each customer that will be managed using this utility, we store all their target deployment information in an Azure Key Vault: https://azure.microsoft.com/en-us/services/key-vault/ . Once that customer name is picked from the drop down, we connect to the vault, get the information we need and trigger a release definition to deploy the application.

  3. Working around (current) known limitations:

    Currently, VSTS release definitions do not allow overriding variable values at run time. So, when we pick a customer from the drop-down we can’t really pass that customer name to the release definition. This created another challenge, since we needed to change this information on the fly. The solution is simple as well: we used a build definition in VSTS, since build definitions allow overriding variable values at queue time. This build, which doesn’t do any code building, simply takes a variable as an input (the customer name), downloads the TFS 2015 build artifacts from the Azure storage, then publishes them as artifacts to be used in the release definition. For simplicity the customer name is written into a text file and added to the build artifacts as well; simply the customer name and nothing else, because we don’t want secure information to be copied to text files or artifacts. From there everything is handled via a normal VSTS release definition.
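To make that intermediate build concrete, here is a hedged C# sketch of what its artifact-staging step might do, using the classic WindowsAzure.Storage file API; the share, folder and file names are placeholders, and the same logic could just as well live in a PowerShell task:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

static void StageDrop(string storageConnectionString, string definitionName,
                      string buildNumber, string customerName, string stagingDirectory)
{
    // Locate the TFS 2015 drop in Azure Files: folders are named after the
    // build definition, and the zip is labeled with the build number.
    var share = CloudStorageAccount.Parse(storageConnectionString)
                                   .CreateCloudFileClient()
                                   .GetShareReference("drops");              // placeholder share name
    var drop = share.GetRootDirectoryReference()
                    .GetDirectoryReference(definitionName)
                    .GetFileReference(buildNumber + ".zip");                 // placeholder file naming
    drop.DownloadToFile(Path.Combine(stagingDirectory, "app.zip"), FileMode.Create);

    // Only the customer name travels with the artifacts; no secrets in text files.
    File.WriteAllText(Path.Combine(stagingDirectory, "customer.txt"), customerName);
}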

What it looks like:

Let’s look at the TFS 2015 build definition. It builds a simple web application, zips it and then uploads it to Azure Files.

The PowerShell script connects to Azure Files and uploads the zip file:

(This is a proof of concept; in a real-life scenario you may want to use more advanced security practices for passing parameters to PowerShell files.)

Once these files are uploaded they are labeled using the build number:

The extension that we built fits into VSTS as follows:

  • It sits right under the “Build and Release” VSTS Hub.

  • It has an extremely simple user interface (this is just a sample):

The most important aspect of this extension is that the release manager picks a build and a customer, and the rest is done by the extension. It gets the information for that customer from Azure Key Vault, downloads the TFS 2015 build drops from Azure Files and kicks off a release pipeline through several validation environments before sending it to production. The release process itself is managed via the release definition:

Conclusion:

While the scenario we’ve built may not be very common, the possibilities we were able to explore show that the most important aspect is to define your business goals clearly, then quickly experiment.

The “P” in “PM”

In recent discussions with customers and colleagues about why large I.T. shops often struggle to adhere to Agile principles or to become DevOps organizations, I couldn’t help but notice something extremely simple, yet tremendously important. Large I.T. shops manage Projects, while software shops build Products. One would think they are the same thing, but that is where the gap starts. On one hand, you have product owners (or product managers); on the other, you have project managers. One is focused on the continuous delivery of value, whereas the other is focused on the delivery of a known set of artifacts constrained by time, budget and scope.

PM in DevOps organizations:

The first step in assessing the gap is to survey the common understanding of DevOps within your organization. Does it represent a culture, or solely a set of tools for automation? It’s probably a very good idea to level-set what everyone in your organization thinks the value of DevOps is.

Here is one I like a lot (I am probably biased): https://docs.microsoft.com/en-us/azure/devops/what-is-devops :

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications.”

If you were to accept this definition, you would realize that running your I.T. organization as a set of connected or disconnected projects is going to be challenging in this “new” reality. You will be faced with questions such as:

  • How do I measure progress?
  • Who is responsible for what?
  • How do I get my 18-month plan and how do I accept the deliverables?
  • … you can probably fill in the blanks based on other project-focused questions you’ve heard in your organization.

This can be summarized into one single question: Are you able to switch to a product focus instead of a project focus?

If your answer is yes, then the role of P… Manager switches from making sure deliverables are created within specific constraints, to enabling the multidisciplinary team to continuously deliver more value.

Delivery of value vs Delivery of deliverables:

One of the best examples of continuous delivery of value I have been watching is the VSTS team. Every 3 weeks, they ship a release with a bunch of new features. For instance, their latest release (at the time of writing this post) is: https://docs.microsoft.com/en-us/vsts/release-notes/2018/apr-16-vsts . As a user of VSTS, I am getting used to seeing constant increments of value enabled every few weeks. This completely changes the nature of my relationship with the product. I now expect to receive new value from them every few weeks, and my engagement with the product keeps increasing.

Planning for continuous delivery is not planning for specific deliverables. It is about making sure that we are able to get the most needed value into the hands of our users as early as we can. The plan will change, and should change, to reflect continuously changing realities.

This can be summarized into (potentially) two questions: Do you know the value you are delivering? And do you know how to maximize that value?

We don’t need PMs anymore?

Hopefully that’s not the conclusion you came to from reading this blog post. But what’s more important is to change your focus, measure the most important value and celebrate the right accomplishments.

The challenge that comes with such a switch towards a product focus is that your business metrics must align. If you are measuring success based on whether a certain number of deliverables were delivered on time and within a preset budget, then you will have to redefine success.

This itself can be summarized into one single question: Are you able to fund teams rather than projects?

A branching strategy for CI/CD using Git in VSTS

When it comes to branching and merging strategies, the internet is full of examples and “best practices”. I personally don’t believe in a single best practice, because “best” is usually subjective. I’d rather talk about a proven practice.

Topic Branches

We will start our branching strategy from the work item we are working on. This ensures traceability is baked into the rest of the process, and ensures that we are working on producing value (technical or functional).

Let’s consider the following work item.

We will create a topic branch, signaling the start of development activities on the work item:

I am placing all the topics under a folder, and naming the branch using the work item id. This allows anyone looking at the branch separately to easily know which work item it is for.

At any time during your development cycle you might have several ongoing topics. These topics should be short-lived to avoid complex merge conflict scenarios.

You can read more about topic branches here: https://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows

Continuous Builds

What we want to do next is have all these topic branches trigger a build whenever a commit is pushed to any of them.

In VSTS, there is a nice feature that allows a build to be triggered by many branches, for instance:

Hence with one build definition we can continuously build all the ongoing topics and generate their build artifacts in an organized way:

 

Testing in Isolation

The topic builds should trigger a release pipeline into environments where the main goal is to validate the work item in isolation from the rest of the ongoing work items.

You should use as many environments as you deem necessary to validate code quality, functionality, performance, security and the impact on the overall required infrastructure. If you’re thinking that this would require way too many environments, you are correct. However, in a world where the cloud is increasingly ubiquitous (public, private or hybrid), and where we now have techniques to create environments on the fly, use them and dispose of them once we’re done testing, this should be a non-issue.

Collaboration

Once validation is done for your work item, it’s time for you to merge your changes back into the master branch. You will do that using a pull request.

The pull request will show the team responsible for merging the changes into master what files changed or were added and what work item they relate to. They can comment on your changes, or request additional changes, before they approve the merge. Note that in many cases you are part of that same team; you shouldn’t, however, approve your own changes. It is very important not to think of this step as a gate but rather as a collaboration step. Everybody owns the code:

You should have branch policies on your master branch that ensure that pull requests are required for changes to be merged:

 

Continuous Integration

Once your pull request is approved, it will trigger a build from the master branch, which itself should trigger a release pipeline allowing your team to integrate your latest work item with others. You should validate the integration of these functionalities and their impact on the health of the application and its infrastructure before releasing it to production:

Continuous Delivery

Once everything is validated, you should be able to deploy the changes to your pre-prod environment and subsequently to your production environment. However, achieving continuous delivery requires much more than the ability to deploy continuously. This is where ideas such as “Feature Flags”, “Kill Switches” and “Testing in Production” help you raise your maturity until you can continuously get value into the hands of your customers.
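As a small illustration of the kill-switch idea, here is a hedged sketch that gates a hypothetical feature behind a configuration value, so it can be turned off per environment without redeploying; the controller, view and setting names are illustrative, and a feature-flag service like the one used earlier works just as well:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class CheckoutController : Controller
{
    private readonly IConfiguration _config;

    public CheckoutController(IConfiguration config) => _config = config;

    public IActionResult Index()
    {
        // "Features:NewCheckout" is a placeholder setting flipped per
        // environment by the release pipeline, not by a code change.
        if (!_config.GetValue<bool>("Features:NewCheckout"))
            return View("ComingSoon");   // switched off: feature stays dark
        return View();                   // switched on: feature is live
    }
}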

In summary, this is what your Branching CI/CD strategy would look like:

Agile work in progress tracking: Are we there yet?

In almost every engagement I have with customers seeking guidance on how to effectively run an Agile development shop, the question of work in progress tracking comes up. However, the question comes in different “flavors”, often signaling where in its Agile journey the organization is.

Project tracking vs Work in progress tracking

Organizations that were born agile have mostly never planned in a project-centric manner. They typically fund a product team and track the value they deliver to their customers and their customers’ satisfaction.

However, many large organizations have spent years planning their investments in a project-centric manner. Their key to successful delivery is a project that adheres to the famous “On Time, In Scope and Within Budget” triangle. Their challenge, once moving to Agile planning, is how to reconcile the two worlds.

According to the Agile manifesto, the primary measure of progress is working software. The word “project” isn’t even mentioned. Does that mean we stop tracking projects? Do we throw away everything we know how to do? How do we fund new work efforts? How do we know if we are going over budget? How do we know if we are running a profitable organization?

The answer to these questions is that Agile is a philosophy focused on better ways of writing software, and nothing else. Once you mix it with other concepts, confusion and conflicts arise. It’s even more important to avoid confusing “Product Lifecycle Management (PLM)”, “Project and Portfolio Management (PPM)” and “Application Lifecycle Management (ALM)”. All are needed, and they are very much related, but they are not the same thing.

Successful organizations focus on the flow of data, information and feedback loops throughout these 3 worlds. It’s crucial not to let your PLM or PPM dictate how you run your ALM (Process, People and Products).

Tracking progress

Since “working software is the primary measure of progress”, let us consider how we might measure such progress. We would first need to define what working software is. You should come to an agreement, as an organization, on what makes the software you build a working one.

Let us consider the following (made-up) definition: “Working software is a high-quality product that allows us to continuously deliver value to our customers”. This could be a good starting point. However, even with such a simple definition, there is room for debate on what defines a high-quality product. I personally like this definition by A.V. Feigenbaum: “Quality is a customer determination, not an engineer’s determination, not a marketing determination, nor a general management determination. It is based on the customer’s actual experience with the product or service, measured against his or her requirements — stated or unstated, conscious or merely sensed, technically operational or entirely subjective — and always representing a moving target in a competitive market.”.

Hence, for us to track progress we need to have indicators showing us quality measures while the product is being developed and others for when it is used by customers. More importantly we need to know whether we are continuously delivering value to our customers, and how long it takes us to ship an idea out of the door.

Here are some KPIs that, when used collectively, can provide insight into the quality of our software:

  • Code Analysis
  • Code Coverage
  • Functional Tests Coverage
  • Tests pass rate
  • Build pass rate
  • Cumulative flow diagram
  • Lead Time
  • Cycle Time
  • Live site incidents trends
  • Customer feedback Trends (5, 4, 3 … Stars)
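Two of these, Lead Time and Cycle Time, are simple to reason about: they are just differences between work item timestamps, and VSTS computes them for you in its widgets. A minimal sketch of the arithmetic, assuming the usual created/activated/closed dates:

using System;

// Lead Time: from the moment the idea was captured until value was delivered.
static TimeSpan LeadTime(DateTime createdDate, DateTime closedDate) =>
    closedDate - createdDate;

// Cycle Time: from the moment work actually started until value was delivered.
static TimeSpan CycleTime(DateTime activatedDate, DateTime closedDate) =>
    closedDate - activatedDate;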

In most ALM efforts I am involved in, we use either VSTS or TFS as the tool to manage the application lifecycle. I often get asked the question, “What does a dashboard look like for your team?”. Usually it’s a set of dashboards, not just one. The idea is to get a holistic view of your software development investment. I often align my dashboards with the DevOps pillars: Plan, Dev & Test, Release and Learn. In VSTS you can create as many dashboards as you need: https://www.visualstudio.com/en-us/docs/report/dashboards .

(Charts used in this blog are directly taken from the VSTS documentation website: https://www.visualstudio.com/en-us/docs/overview )

In the Plan dashboard, I often use the following widgets. However, I never use these charts or KPIs to draw conclusions in isolation. They are meant to help conversations within the team about how well we are doing as a team:

In the Dev & Test dashboard, I want to know how well our code base is progressing and how we are handling our technical debt. Hence, I usually add the following widgets:

  • Code Tile: It will display the number of recent changes in the code repository. I often track my master branch. What I am looking for is that the number of commits should be equivalent to the number of pull requests for the work items we work against.

  • Code coverage: It will display the code coverage from the build definition of your choice. What I am looking for is not necessarily the value alone, but rather whether we have progressed or regressed.

  • Code Complexity: There are several integration points in VSTS with well-known code quality tools. SonarQube, ReSharper and NDepend come to mind. You should use the one that you understand the most. (I will look into these tools more deeply in future blog posts; I am not advising or endorsing any of them.) In the meantime, take a look at the following sample from NDepend, and consider the insight your team gets from such an overview.

  • Test plan execution summary: I want to know how many tests are passing, failing or not run at all. Hence, I will consider different angles:
    • Test status by test suites:

    • Test status by sprint:

    • Test status by testers:

    • Test case creation trends:

  • Build Status History: showing me how often we break our build, and spikes in build duration signaling a significant change in our code base or its unit tests.

  • Requirements Quality: Once you associate your work items with automated tests, you can track the quality of your requirements and how they map to your release pipeline.

  • Test Results trends: This chart will display the trend of test results, such as passed or failed tests, for a selected build definition:

In the Release dashboard, my focus is how well our release pipeline is continuously allowing us to get value into the hands of our customers.

I am usually watching the progress of our topics:

And watching the evolution of our production path pipeline:

More importantly I want to know how well our testing is progressing within each environment in our pipeline:

In the Learn dashboard, I will want to integrate data from telemetry, customer feedback and any other source that provides insight into how well the value we delivered is being used, or even perceived.

For instance, I may add telemetry points from Application Insights:
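On the emitting side, here is a hedged sketch of tracking a custom event and metric with the Application Insights SDK; the event and metric names are placeholders:

using System.Diagnostics;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();
var stopwatch = Stopwatch.StartNew();
// ... serve the Support page, or whatever piece of value you want to learn about ...
stopwatch.Stop();

// Custom event: lets the Learn dashboard chart how often the feature is used.
telemetry.TrackEvent("SupportPageViewed");

// Custom metric: puts performance right next to usage on the same dashboard.
telemetry.TrackMetric("SupportPageRenderMs", stopwatch.Elapsed.TotalMilliseconds);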

More important than dashboards or widgets is team communication. These data points should simply help us drive change and progress within the team. If they are not, then we’re not measuring the right things.

Scaling Agile using Microsoft Visual Studio Team Services

When the Agile Manifesto was written about 16 years ago, it had a simple goal: to uncover better ways of writing software.

Source: http://agilemanifesto.org/

Writing software was, and still is, at the heart of it. In fact, the manifesto states that the primary measure of progress is working software.

Small teams are able to easily adopt and adhere to these principles, since they usually don’t have historical processes and practices they are bound to.

However, larger organizations have historically struggled to derive the same value from Agile methodologies, for a multitude of reasons.

In recent years, there have been many strategies aiming to scale Agile for larger enterprises.

SAFe, LeSS, DaD, LeadingAgile, Scrum and Business Mapping all take different approaches to adapting Agile to the realities of larger enterprises.

Whichever methodology you choose, keep in mind the original goal and the primary measure: better ways of writing software, measured by working software.

Let us choose SAFe as an example, and let us choose Visual Studio Team Services (VSTS) as a tool to implement it.

Setup:

Out of the box, when you create a VSTS project, it lets you choose one of 3 default process templates: Scrum, Agile or CMMI.

You may be asking yourself:

Sample Implementation:

Imagine the following Organization Structure:

Let’s see how such an Organization would organize its work in VSTS.

The initial setup would lead to the following areas hierarchy:

In addition, we would have the following cadences (for instance):

Each team will choose their areas of interest, and they will subscribe to the cadence they will use to plan.

The Portfolio Team:

For this example, let’s consider the “Modern Portfolio” Team. The team would mostly focus on Envisioning Epics, describing them, and most importantly defining their value stream or funding source.

They could manage their work using a Kanban view which allows them to visually plan and track the stages of their Epics as well as their related implementation features.

(All Epics in this example are imaginary, and names were hidden)

Each epic is tagged with its value stream or funding source. As each epic gets decomposed by the different program teams, the portfolio team can then monitor progress and the value it accrues. Results are much better when a bottom-up approach is followed. Most of their planning activities can be done directly from the board.

The portfolio team can monitor the flow of their backlog, to make sure they are working on a consistent amount of value at each stage of the lifecycle, and that their backlog is proactively populated. Hence, they might set up their dashboard with indicators showing how many Epics are being considered at each stage, continuously monitor the cumulative flow diagram and, more importantly, track the “Lead Time” for their Epics. This indicator shows them how fast they are able to take an idea from inception to market availability.

The Program Team:

The role of the program teams is to implement the vision and goals set by the portfolio team. They will create Features and map them to Epics:

They will plan their work using program Increments:

They will track progress through their workflow stages using the Kanban board, allowing them to see the value they accrue via the implementation PBIs from different feature teams:

More importantly they will track Lead Time and Cycle Time to make sure their release train is on a healthy path.

The Feature Team:

Sometimes feature teams are also referred to as implementation teams or product teams. They are mostly focused on implementing the Features defined by their parent program team, and they do that by implementing Product Backlog Items and resolving bugs.

They plan using Sprints that belong to the parent Program Increment:

Their kanban board reflects the stages they go through to deliver a working Story or Bug:

Their dashboard would have KPIs for sprint progress, bug counts, test results, PBI lead and cycle times, cumulative flow, test trends, build history, release progress and real-time telemetry from their production systems. In other words, KPIs allowing them to answer the question: do we have working software?

Roadmap:

All these teams would do shared planning and look at their delivery roadmap, across teams and across cadences: