Is it mandatory to classify User Acceptance Tests (UATs) as raw data?
- Frederic Landry

The classification of User Acceptance Test (UAT) records as raw data has been a topic of debate within the pharmaceutical and biotechnology industries. Past discussions on the ISPE Forum have illustrated both the complexity of the issue and the diversity of perspectives, and one important question keeps recurring:
“Is it mandatory to classify User Acceptance Tests (UATs) as raw data?”
One viewpoint expressed in these discussions can be summarized as follows:
Raw data is often defined as information that cannot be reconstructed or recreated at a later time. From this perspective, the data generated during UAT execution is considered reconstructible. In the event of paper-based data loss, the information could theoretically be re-collected, as it already exists within the IT system under test and can be retrieved again. Under this view, the critical aspect of UATs is the execution of all planned tests, while the subsequent review and summarization of the results in a UAT report further reduces the criticality of the original execution records. As such, UAT data would not have a direct impact on product quality and would not require classification as raw data.
While this interpretation has merit, we believe the classification of UAT data is more nuanced.
It is true that, in principle, some UAT outputs may be technically reconstructible. However, test execution results are not determined solely by the test script itself. They are influenced by multiple variables, including the clarity and completeness of the script, the tester’s knowledge and experience, system configuration at the time of execution, and the inherent complexity of the system. As a result, re-executing the same test does not guarantee the same outcome. This variability challenges the assumption that UAT execution data can always be reliably reconstructed.
From a validation and data integrity perspective, preserving evidence of executed tests provides essential traceability. Executed UAT records allow organizations to investigate future issues effectively by answering questions such as:
- Was an issue related to test execution, test design, or system behavior?
- Did system behavior change after validation?
- Were acceptance criteria truly met at the time of release?
Without access to the original executed UAT records, answering these questions becomes difficult, if not impossible.
An additional consideration is that not all UAT activities are fully scripted. Exploratory or unscripted tests are often intentionally included in UATs to capture real-world user behavior and identify issues that scripted tests may miss. These tests, by definition, cannot be reconstructed if execution details are lost. In such cases, the evidence of execution clearly meets common definitions of raw data.
For these reasons, managing executed UAT records as raw data, or at minimum applying raw data–like controls, provides a more robust, consistent, and risk-based approach. Furthermore, treating all executed tests uniformly avoids subjective, case-by-case determinations of reconstructibility and supports stronger data integrity, traceability, and inspection readiness.
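To make "raw data–like controls" a little more concrete, the minimal sketch below shows, in Python, one way execution evidence might be captured at test time: an immutable record that is attributable (tester identity), contemporaneous (execution timestamp), and verifiable (an integrity checksum). This is purely illustrative; `ExecutedUatRecord` and its fields are hypothetical names, not part of any regulation, standard, or validation tool.

```python
# Hypothetical sketch of "raw data-like" controls on an executed UAT record.
# All class and field names are illustrative assumptions.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)  # frozen: the record cannot be altered once created
class ExecutedUatRecord:
    test_id: str              # link back to the approved test script
    script_version: str       # exact version of the script that was executed
    system_version: str       # system build/configuration under test
    tester: str               # attributable: who executed the test
    executed_at: str          # contemporaneous: when it was executed (ISO 8601)
    steps_observed: list      # what the tester actually saw, step by step
    result: str               # "pass" / "fail" against the acceptance criteria
    deviations: list = field(default_factory=list)  # anything unscripted

    def integrity_hash(self) -> str:
        """Checksum over the record contents, so later tampering with or
        accidental alteration of the stored evidence is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()


# Example: fixing the evidence of one executed test at execution time,
# rather than relying on later reconstruction from the system under test.
record = ExecutedUatRecord(
    test_id="UAT-042",
    script_version="2.1",
    system_version="LIMS build 5.3.0",
    tester="j.smith",
    executed_at=datetime.now(timezone.utc).isoformat(),
    steps_observed=["Logged in as QC analyst",
                    "Created sample batch",
                    "Generated report matched approved template"],
    result="pass",
)
print(record.integrity_hash())  # store alongside the record in the archive
```

The specific fields matter less than the principle the sketch illustrates: the evidence is fixed at execution time, independent of whether the system under test could later reproduce the same behavior.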