Layers of Defense Against Data Modification

Sometimes, as a test engineer, you have to perform testing in a tricky environment without affecting production with your activity. Autotests can help, but they require thorough preparation.

Before diving into autotests, I need to explain the architecture of my project and its testing environments. Without this context, it will be unclear what problem is being solved here:

In short: IoT devices send their telemetry (coordinates, fuel, temperature, etc.) to the «Control service» → the «Control service» stores this data in the database → the backend takes this data and returns it to the frontend over an HTTP API → the frontend, a web application, shows the data in the browser interface → users can see the devices’ status and send commands to any device → commands go to the «Control service», which triggers devices to perform some action (beep, move, reboot, etc.).

From a bird’s-eye view, the project does not look complicated; moreover, the «Control service» and the behavior of the IoT devices are out of testing scope. The testing environments, however, are tricky:

Of course, many testing problems could be solved by keeping a few real devices in production for testing purposes, but unfortunately that is not possible for business reasons.

Next, I will cover only one part of this project’s test automation, namely frontend automation testing based on Playwright.

One purpose of my autotests, besides regression testing, is to prevent accidental commands being sent to production devices from any non-production environment, while the autotests should still run in both testing and prestable environments (ideally also in production).

Thus my automation strategy follows the «zero trust» concept: any mock or aborted request can be overcome. I have to distrust any environment setup and add extra layers of protection against data modification on production IoT devices.

Layer 1: environmental restrictions — skip tests in undesirable environment

Some tests should be allowed to run only in a certain environment. This may be connected to limitations of the environment or severity of tests.

An example from my project: if an autotest contains a command that turns a device off, I really do not want it to be accidentally executed outside the testing environment.

For this case, Playwright allows skipping certain tests based on a condition (my condition is the environment, determined by the URL):

test('Should turn off the device', async ({ page, HOST }) => {
  test.skip(HOST !== 'https://testing.domain', 'Skip test in non-testing env');
  // … test steps
});
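The HOST fixture above is project-specific. The same guard can be factored into a plain predicate; a minimal sketch, where the host constant, the function name, and the HOST environment variable are all assumptions for illustration:

```typescript
// Hypothetical helper: the allowed host and the predicate name are
// illustrative assumptions, not part of Playwright itself.
const TESTING_HOST = 'https://testing.domain';

function shouldSkipOutsideTesting(host: string): boolean {
  // Skip everywhere except the dedicated testing environment.
  return host !== TESTING_HOST;
}

// Usage inside a Playwright test (sketch):
// test.skip(shouldSkipOutsideTesting(process.env.HOST ?? ''), 'Skip test in non-testing env');

console.log(shouldSkipOutsideTesting('https://testing.domain'));  // false
console.log(shouldSkipOutsideTesting('https://prestable.domain')); // true
```

Keeping the condition in one place makes it easy to reuse across many test files instead of repeating the URL comparison.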


As I mentioned earlier, I do not trust the environment setup, because the testing environment can change and may not be isolated from production data. Therefore, I need more safety nets.

Layer 2: user restrictions — forbid actions from test users

If you can limit access to certain features for certain users — be sure to do it for your test users. Do not grant full access to test users if they are used in autotests. Each test user should have only one particular role for executing only one particular action in a limited set of autotests.

An example from my project: if an autotest sends the turn-off command as a test user, I restrict that test user from triggering such commands at the level of the «Control service». As a result, the test user will send the command, but it will not be executed.

This applies not only to autotests — it is good practice to prevent random users from executing critical actions, or to hide such functionality from them, but it can only be implemented in applications with a sufficiently granular role-based access control model.
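As a toy illustration of such a model (the role names, command names, and permission table below are invented for the example; the real «Control service» model is not shown in the article), a server-side check might look like:

```typescript
// Toy RBAC sketch: roles and the permission table are assumptions
// made for illustration only.
type Role = 'viewer' | 'operator' | 'test-user';

const allowedCommands: Record<Role, string[]> = {
  viewer: [],
  operator: ['beep', 'move', 'reboot', 'turn-off'],
  'test-user': ['beep'], // a test user gets exactly one harmless action
};

function canSendCommand(role: Role, command: string): boolean {
  return allowedCommands[role].includes(command);
}

console.log(canSendCommand('test-user', 'turn-off')); // false
console.log(canSendCommand('operator', 'reboot'));    // true
```

With such a table, a test user’s turn-off command is accepted by the API but refused before it ever reaches a device.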

Layer 3: test objects restrictions — choose idle testable objects

This layer is not about restrictions but about sanitary measures. To reduce the impact of unexpected commands or data modification, if they do happen, it is worth using less important and less loaded objects for testing. Choose the least noticeable objects for testing in a prestable/production environment.

An example from my project: I select devices that are offline or hidden, rather than picking them at random.
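One possible way to encode that preference — the Device shape, its fields, and the selection order are assumptions made for this sketch:

```typescript
// Sketch: prefer hidden or offline devices as test targets.
// The Device interface and its fields are illustrative assumptions.
interface Device {
  id: string;
  status: 'online' | 'offline';
  hidden: boolean;
}

function pickTestDevice(devices: Device[]): Device | undefined {
  // Least noticeable first: hidden, then offline; never a live online device.
  return (
    devices.find((d) => d.hidden) ??
    devices.find((d) => d.status === 'offline')
  );
}

const fleet: Device[] = [
  { id: 'a', status: 'online', hidden: false },
  { id: 'b', status: 'offline', hidden: false },
  { id: 'c', status: 'online', hidden: true },
];
console.log(pickTestDevice(fleet)?.id); // 'c'
```

Returning `undefined` when no safe device exists lets the caller skip the test instead of falling back to a visible production device.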

Layer 4: mock responses — use synthetic data

Actually, your UI autotests can be based entirely on mocks, and then you may not need anything I wrote about earlier at all.

But not everything is so simple. In my project, for example, I have two cases:

If a test user does not have access to a certain command but still tries to call it (or the call is directed to a non-existent device), the response is 403 Forbidden — so I need to mock a 200 OK response for the autotest to pass.

Playwright’s API for modifying network traffic is a perfect fit for that (it can override the status code and response body of a specified response):

await page.route('https://testing.domain/handler', (route) =>
  route.fulfill({
    status: 200,
    body: `${mockedDevice}`,
  }),
);

Layer 5: request restrictions — prevent requests from frontend to backend

The final restriction layer is to prevent requests to the protected application or service altogether. As an ultimate solution, requests can be terminated at the network level through proxy settings, but that is not an option if you still occasionally need to make those requests.

An example from my project: after all the listed layers, my autotests should run only in the testing environment, test users are forbidden to trigger commands, and commands are sent only to offline or non-existent devices. But what if all of the above fails?

For this case, Playwright can abort HTTP requests:

await page.route('https://control.servise/handler', (route) => {
  if (route.request().method() === 'POST') {
    route.abort();
  } else {
    route.continue();
  }
});

I place this code in the beforeEach hook of my autotests (the page fixture is not available in beforeAll), and none of the specified requests reach the server during test runs. Which means none of the actions lead to undesirable consequences.

If you have a similar problem with testing environments, you can build your own defensive layers against data modification. Just do not trust your test data, test users, and so on.



Andrey Enin

Quality assurance engineer: I’m testing web applications, APIs and doing automation testing.