This article is part of a series (links below). I would recommend reading the articles in order, starting with “Greenfield Opportunity”, which provides the required framing.

Over the next two years, we have a once-in-a-lifetime opportunity to rebuild IT from the ground up, covering technology, people and processes. In the article “Modern IT Ecosystem” I highlighted our philosophy and shared the vision of our future-state IT architecture.

We have defined ambitious goals regarding our use of modern technologies and techniques, including the aggressive adoption of Public Cloud, API-Centric Architecture, Automation, etc. To be successful, we will need to design and implement a daunting number of enterprise services, covering the Network, Hosting, Identity, Endpoints, Collaboration, etc. To further increase the complexity, these enterprise services must operate at scale from day one, whilst meeting all of our quality, compliance and regulatory requirements.

As part of this series, I plan to document our journey, covering our architecture, key technology decisions, and positioning. However, as a prerequisite, this article will highlight our service delivery approach, recognising that the architecture is only one piece of the puzzle.

Introduction

To help ensure quality and consistency, we have defined an enterprise service delivery plan, which outlines the key deliverables associated with the implementation of any new service.

It is important to note that the enterprise service delivery plan is a guide; not every deliverable is required or appropriate for every implementation. For example, at the extreme end of the spectrum, any implementation that must meet “GxP” quality guidelines and/or regulations will include mandatory deliverables, which must follow the principles and procedures outlined by Good Automated Manufacturing Practice (GAMP).

Enterprise Service Delivery Plan

The table below highlights our enterprise service delivery plan, including the phase, item, and RACI.

Service Delivery

Although “Delivery”, “Ops” and “IS” appear as separate roles within the RACI, the enterprise service delivery plan assumes a “DevSecOps” philosophy is being followed. Security practices are therefore integrated within the DevOps process, meaning that “Delivery”, “Ops” and “IS” are all actively engaged as part of the design phases.

Where required, the Design Qualification (DQ), Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ) are embedded as part of the associated phases.

Continuous Quality

To further highlight the complexity associated with operating within a regulated industry (GxP compliance), it is important to understand the basics of Qualification and Validation.

  • Qualification: The act of proving that equipment or ancillary systems are properly installed, work as designed, and comply with the specified requirements.

  • Validation: Documented, objective evidence that provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications.

Qualification is part of validation, but the individual qualification steps alone do not constitute process validation. For example, the infrastructure must be qualified, while the software running the processes on that infrastructure must be validated.

Achieving this outcome can be very time-consuming; however, modern technologies and techniques can streamline the process, whilst simultaneously improving quality.

Continuous Qualification

Software-Defined techniques and the use of Infrastructure-as-Code have a dramatic impact on how we provision and maintain infrastructure. The four steps outlined below are an example of how this can help to enable and maintain a Qualified State (QS).

  1. As outlined in the enterprise service delivery plan, infrastructure requirements must be captured as code (Infrastructure-as-Code). This code becomes a blueprint, accurately reflecting the desired state of the infrastructure.

  2. Establish a “continuous” qualification framework, targeting the blueprint. The framework (following standard CI/CD practices) can be used to consistently deploy the infrastructure in a non-production (NON-PRD) environment, where automated testing is performed, producing test execution reports (a minimal sketch follows at the end of this section).

  3. IT Quality can review and certify the blueprint and test execution reports.

  4. Publish the qualified blueprint as a Service Catalogue item, enabling enterprise re-use.

To ensure end-to-end quality, associated Automation and Monitoring tools must also be qualified, empowering them to proactively deploy and maintain the Qualified State (QS).
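
Below is a minimal sketch of steps 2 and 3, assuming the blueprint is authored as a Terraform configuration and the check runs under pytest. The blueprint path and test name are hypothetical placeholders, and `terraform plan -detailed-exitcode` is used purely as one possible way to detect drift from the qualified blueprint.

```python
# Minimal sketch of a "continuous" qualification check (steps 2 and 3).
# Assumptions: the blueprint is a Terraform configuration (already initialised
# with `terraform init`), and drift from the Qualified State is detected via
# `terraform plan -detailed-exitcode` (0 = no changes, 2 = drift, 1 = error).
import subprocess

BLUEPRINT_DIR = "blueprints/network-baseline"  # hypothetical blueprint location


def terraform_plan_exit_code(workdir: str) -> int:
    """Run a read-only plan against the blueprint and return its exit code."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    return result.returncode


def test_non_prd_matches_qualified_blueprint():
    """The deployed NON-PRD environment must show no drift from the blueprint."""
    assert terraform_plan_exit_code(BLUEPRINT_DIR) == 0, (
        "Drift detected: the environment no longer reflects the Qualified State"
    )
```

When this check runs on a schedule from the CI/CD pipeline (for example, via `pytest --junitxml=qualification-report.xml`), the resulting report becomes the test execution evidence that IT Quality reviews and certifies in step 3.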

Continuous Validation

As previously highlighted, similar modern software development techniques can be applied to validation. The three steps outlined below are an example of how to achieve continuous validation.

  1. Position automated testing and test-driven development (TDD) as a core part of the software development lifecycle.

  2. Functional tests become the backbone of the validation process, where product teams write the validation scripts alongside the code, executing the tests at regular intervals (see the sketch after this list).

  3. The regular execution of automated tests (before and after implementation) results in an application that is continuously validated.
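
As a minimal sketch of steps 1 and 2, the example below (Python with pytest) shows a functional test written alongside the code that doubles as a validation script. The requirement identifier (URS-042), module and function are hypothetical placeholders; the point is that each test traces back to a predetermined user requirement.

```python
# Minimal sketch: a functional test that doubles as a validation script (TDD).
# The requirement ID (URS-042), module and function below are hypothetical;
# in practice every test traces back to a predetermined user requirement.
import pytest

from batch_release import is_release_approved  # hypothetical module under test


@pytest.mark.parametrize(
    "qc_passed, qa_signed, expected",
    [
        (True, True, True),    # QC passed and QA signed off -> release approved
        (True, False, False),  # missing QA sign-off -> release blocked
        (False, True, False),  # failed QC check -> release blocked
    ],
)
def test_urs_042_release_requires_qc_and_qa(qc_passed, qa_signed, expected):
    """URS-042 (hypothetical): a batch may only be released when QC has passed
    and QA has signed off."""
    assert is_release_approved(qc_passed, qa_signed) is expected
```

Because the same script is executed before and after every change, it provides the repeatable, predetermined evidence that the validation process requires.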

A common misconception is that regulators require tests to be executed by a human and physically signed. Using the FDA as an example, the requirement calls for “objective evidence that software requirements describe the intended use and that the system meets those needs of the user.” Objective evidence can absolutely be produced through the use of automation.
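
As one hedged illustration of how that objective evidence can be produced automatically, the sketch below (standard-library Python; the file names are hypothetical) hashes and timestamps a machine-readable test report, such as the output of `pytest --junitxml`, so it can be archived with the pipeline run as a tamper-evident record.

```python
# Minimal sketch: turn an automated test report into tamper-evident objective
# evidence. File names are hypothetical; the report could be produced by
# `pytest --junitxml=validation-report.xml` within the CI/CD pipeline.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

REPORT = Path("validation-report.xml")       # hypothetical test execution report
EVIDENCE = Path("validation-evidence.json")  # hypothetical evidence record


def record_evidence(report: Path, evidence: Path) -> dict:
    """Hash and timestamp the report so the archived record is tamper-evident."""
    record = {
        "report": report.name,
        "sha256": hashlib.sha256(report.read_bytes()).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    evidence.write_text(json.dumps(record, indent=2))
    return record


if __name__ == "__main__":
    print(record_evidence(REPORT, EVIDENCE))
```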

Conclusion

Regardless of the defined compliance and/or regulatory requirement, our goal is to embed quality (Quality by Design) as part of every implementation, making it a continuous part of the delivery process. Where possible, we will look to automate quality, helping to drive higher accuracy and consistency, whilst unlocking agility.