What do we do when we need to validate ten or even twenty of the same computerized system? While it’s technically valid to create unique validation documentation and perform identical testing on each system, this is wildly inefficient in both time and cost.  Instead, one can create a risk-based plan to validate a family of systems that ensures diligence from a compliance perspective while reducing burdensome, redundant documentation and testing activities.

How do we define a system family?

A system family is a group of multiple identical assets that an organization uses for the same intended business use.  Additionally, only systems utilizing the same hardware and software (i.e. the same version) should be included in a system family.  Part of justifying a family approach in your validation plan is demonstrating that there is little to no variation from asset to asset.  Otherwise, the burden of proof is on the organization to provide a rationale showing that a family approach is still valid despite variations in its assets.

What are some cases in which a family approach should not be utilized?

Knowing when to use a family approach is just as important as knowing when not to use one.  It’s easy to fall into the trap of thinking a family approach is appropriate when, in reality, the logic used to group assets is not sound. Some examples include the following (a sketch of the grouping logic follows the list):

1. Assets with the same make but different model

   – There is not adequate evidence to assert that the assets will operate and perform in the same manner.

2. Assets running different software versions

   – Software patches may add, remove, or alter functionality or capability from one version to the next, making a like-for-like comparison invalid.

3. Assets that have different business purposes (i.e. identical assets that belong to different departments and have different use cases)

   – Different departments using the same type of system may have varying configuration needs and functional or performance requirements.  If it cannot be shown that the systems are held to identical standards, they cannot be grouped as a family for validation purposes.
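To make these criteria concrete, here is a minimal Python sketch of the grouping logic described above. It is purely illustrative: the Asset fields and example values are hypothetical, and the actual determination belongs in your documented validation plan, not in a script. Assets can only share a family when every element of the grouping key matches; any mismatch excludes the asset.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    """Hypothetical asset record; fields mirror the grouping criteria above."""
    asset_id: str
    make: str
    model: str
    software_version: str
    intended_use: str  # the documented business use, e.g. "QC weighing"

def family_key(asset: Asset) -> tuple:
    """Assets may share a family only if every element of this key matches."""
    return (asset.make, asset.model, asset.software_version, asset.intended_use)

def group_into_families(assets: list[Asset]) -> dict[tuple, list[Asset]]:
    """Partition assets into candidate families; a singleton group signals
    an asset that must be validated individually."""
    families: dict[tuple, list[Asset]] = defaultdict(list)
    for asset in assets:
        families[family_key(asset)].append(asset)
    return dict(families)

if __name__ == "__main__":
    assets = [
        Asset("BAL-001", "AcmeLab", "X100", "2.1.0", "QC weighing"),
        Asset("BAL-002", "AcmeLab", "X100", "2.1.0", "QC weighing"),
        Asset("BAL-003", "AcmeLab", "X100", "2.2.0", "QC weighing"),  # version mismatch
    ]
    for key, members in group_into_families(assets).items():
        print(key, "->", [a.asset_id for a in members])
```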

What does testing look like for systems in a family?

In a typical validation process, you may have some combination of an Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ) that comprises the testing required to show that your system is capable of meeting established user requirements.  However, is there truly a need to test the same functionality every single time for each asset in a family?  Unfortunately, it’s not as simple as a straightforward ‘Yes’ or ‘No’.  The decision to leverage testing performed on your first-in-family (FIF) unit for your remaining assets versus performing testing in its entirety on all subsequent systems should be tied to your Quality Risk Management (QRM) process and documented in the appropriate lifecycle documentation (e.g. in a Functional Risk Assessment).

Let’s take a look at the following examples:

– On a GAMP Category 3 (i.e. Commercial Off-the-Shelf) system, you are looking at the template creation functionality on an instrument with a desktop interface.  The assessed functionality has no capability for configuration or customization and is used exactly as delivered by the vendor.  Given the widespread testing and use of this standard functionality across the industry, and since no elements of customization are introduced, the chance of failure is minimal.  We can assert that the risk of this functionality failing is low and that the chance a failure would be caught (i.e. detectability) is high.  Therefore, there is no need to re-test this functionality on every unit in the family; the testing performed on the FIF unit can be leveraged for subsequent units.

– On a GAMP Category 4 (i.e. Configured) system, you are looking at the functionality used to maintain temperature within a critical operating range.  While the manufacturer may sell a similar platform to all of its customers, this system and this specific functionality were highly configured to meet your organization’s unique needs.  As such, we cannot assert that this is standard functionality, and there is not sufficient evidence to claim the chance of failure is minimal.  We may say that the risk of failure is medium and the detectability of a failure is medium.  Given the assessed risk, testing of this functionality shall be performed on each unit in the family.
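As a hedged illustration only (your documented QRM criteria, not code, govern the real decision), the pattern in these two examples can be reduced to a simple rule: leverage FIF testing when the assessed risk of failure is low and detectability is high, and test each unit otherwise. The function and risk levels below are assumptions for the sketch.

```python
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def leverage_fif_testing(failure_risk: Level, detectability: Level) -> bool:
    """Illustrative rule: leverage the FIF results only when the risk of
    failure is low and a failure would be highly detectable. The real
    criteria belong in your documented QRM process (e.g. the FRA)."""
    return failure_risk is Level.LOW and detectability is Level.HIGH

# GAMP Category 3 example above: low risk, high detectability
print(leverage_fif_testing(Level.LOW, Level.HIGH))       # True  -> leverage FIF testing
# GAMP Category 4 example above: medium risk, medium detectability
print(leverage_fif_testing(Level.MEDIUM, Level.MEDIUM))  # False -> test every unit
```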

There is no one-size-fits-all approach to QRM or to justifying a testing strategy within your validation lifecycle.  It’s important to ensure that your risk criteria, inputs/outputs, and controls are well documented and that the logic used to reach your conclusions is sound.

What does a typical lifecycle look like for a family of systems?

There is always an element of variation from lifecycle to lifecycle, but let’s take a look at an example list of lifecycle deliverables for a typical GAMP Category 4 system. For the sake of brevity, this example will not go into detail regarding what each of the deliverables contains, but further information can be found in GAMP guidance and is easily searchable on the internet.

Example Lifecycle Documentation for FIF (First-in-Family) Unit:

– GxP Assessment
– Data Integrity Assessment
– User Requirements Specification (URS)
– Functional Risk Assessment (FRA)
– Validation Plan (VP)
– Configuration/Design/Functional Specifications (CS/DS/FS)
– IQ/OQ/PQ (as appropriate)
– Executed IQ/OQ/PQ (as appropriate)
– Requirements Traceability Matrix (RTM)
– Validation Summary Report (VSR)
– Supporting SOPs/Work Instructions (i.e. System Use, System Administration, Business Continuity/Disaster Recovery Plans)

Example Lifecycle Documentation for Subsequent Units in the Family:

– Executed IQ/OQ/PQ (only a subset of the full testing is performed, based on the assessment documented in the FRA)
– RTM¹
– VSR¹

Note 1: RTM and VSR can be combined or kept separate for organization and readability purposes.

As can be seen, we’ve managed to cut a significant number of validation deliverables for subsequent units in a family.  As a rule of thumb, plans, assessments, and specifications tend to be produced at the family level, while testing, traceability, and reporting tend to be done at the asset level.  Regardless of how one chooses to partition lifecycle deliverables, combining or separating them as deemed appropriate, there are significant savings in time and effort that only grow with the number of assets in a family.
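To put a rough number on those savings, here is a back-of-the-envelope Python sketch using the family-level versus asset-level split above. The deliverable names are abbreviated from the example table, and the one-document-per-deliverable count is an assumption for illustration.

```python
# Abbreviated from the example table above; counts are illustrative only.
FAMILY_LEVEL = [
    "GxP Assessment", "Data Integrity Assessment", "URS", "FRA",
    "Validation Plan", "CS/DS/FS", "IQ/OQ/PQ protocols", "Supporting SOPs",
]
ASSET_LEVEL = ["Executed IQ/OQ/PQ", "RTM", "VSR"]

def documents_saved(asset_count: int) -> int:
    """Documents avoided by the family approach vs. validating each asset
    as a one-off; the saving works out to (asset_count - 1) family-level sets."""
    family_approach = len(FAMILY_LEVEL) + asset_count * len(ASSET_LEVEL)
    one_off = asset_count * (len(FAMILY_LEVEL) + len(ASSET_LEVEL))
    return one_off - family_approach

for n in (5, 10, 20):
    print(f"{n} assets: ~{documents_saved(n)} fewer deliverables")
```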

While this article is geared toward computerized systems, the same logic and rationale can be applied to non-computerized systems as well.  It’s important to remember that in the world of validation, more is not necessarily better.  Does your organization need support in creating a validation approach for a system family?  Assurea can help you with our expertise!