Integration Testing Dependent on External Dynamic Environment

Are you a developer who wants to test a shiny new piece of software to see how it fits into a larger software system?

Or would you like to check whether it still works after your latest changes? Or do you hope to create automated tests that run every night and report back whether your piece is still working in a rapidly changing environment?

OK, but what if your module's behavior also depends on the behavior of some other, external software that changes its responses and output dynamically?

Can you make sure that if your test passes today, it will still pass tomorrow, when the external software has changed the output it produces for certain requests, or, even worse, when the machine it runs on is down and this external “thingie” is not available at all?

A bad approach

One solution is to rely on some knowledge about the environment you work in. Sometimes, for example, you know that this environment will not actually change as dynamically as expected, because it currently lives in your own “garden” and you are probably the only one who uses it.

In this case, you can just “hardcode” the expected results to be whatever is currently being returned. That might work for you until the development cycle is finished, but what happens when you move on and start dealing with something else?
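
To make the risk concrete, here is a minimal sketch of such a hardcoded test in Python. Everything in it is hypothetical: the inventory.example.com service, the URL, and the JSON body are invented for illustration.

```python
import unittest
import urllib.request


class HardcodedInventoryTest(unittest.TestCase):
    """Brittle: asserts against whatever the live service returns today."""

    def test_stock_level(self):
        # Hits the real, external service directly over the network.
        url = "http://inventory.example.com/api/stock/42"  # hypothetical service
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode()
        # Expected value copied verbatim from today's live output.
        self.assertEqual(body, '{"item": 42, "stock": 17}')
```

The moment the external service changes its data, or its host goes down, this test starts failing even though your own code hasn’t changed at all.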

Can you rely on the system your module’s output depends on in a month or two? Will it still respond the same way, and will it still be up and running?

A good approach

Nowadays, separate software systems communicate in fairly standardized ways. In practice, you can easily sniff the SOAP or REST communication between your side and the other side.

But how can this data be useful for your testing? Well, what if you create some form of “simulator” that can simply load this gathered data and start responding to certain requests the same way the original system would?

In this case, you would have something like a “frozen system” that you could communicate with. So now you can go and make your test scenarios, save the communication between your side and the opposite one, and create sets of “expected results” and “frozen external behavior” that will always be the same and available. Now your tests will always work with the same data. And if they fail, you know it was because of a problem in the code.
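
As a sketch of what such a “simulator” could look like, here is a minimal replay server in Python. It assumes the captured traffic has already been saved into a hypothetical recordings.json file that maps request paths to recorded response bodies.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical file produced by sniffing the real SOAP/REST traffic:
# maps request paths to the response bodies the real system returned.
with open("recordings.json") as f:
    RECORDINGS = json.load(f)


class FrozenSystemHandler(BaseHTTPRequestHandler):
    """Replays recorded responses so tests always see the same data."""

    def do_GET(self):
        body = RECORDINGS.get(self.path)
        if body is None:
            # No recording for this request: fail loudly instead of guessing.
            self.send_error(404, "no recording for this request")
            return
        payload = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Point your module at localhost:8080 instead of the real system.
    HTTPServer(("localhost", 8080), FrozenSystemHandler).serve_forever()
```

Your tests then talk to localhost:8080 instead of the live system, so a failure points at your code rather than at the environment.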

A better approach

You want to sniff, huh? But what if the communication between the two systems is too large? What you can do is analyze the format of the communication data and strip out what you don’t need for the current test.

Each test needs only a small fraction of all the communication data, and getting rid of the rest is a good optimization. Also, once you are familiar with the format, you could go further and even create a generator of random but valid communication data. This kind of tool could make your testing much more reliable!
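
As a rough illustration of both ideas, the sketch below strips a recorded response down to the fields a test actually needs, and generates random but valid data. The field names and value ranges are invented; a real generator would follow whatever schema your sniffed traffic actually uses.

```python
import json
import random


def strip_recording(record, needed_fields):
    """Keep only the fields the current test actually asserts on."""
    return {k: v for k, v in record.items() if k in needed_fields}


def random_stock_response(item_id):
    """Produce a random but schema-valid response body.

    The schema here (item/stock/warehouse) is hypothetical; a real
    generator would mirror the format observed in the captured traffic.
    """
    return json.dumps({
        "item": item_id,
        "stock": random.randint(0, 1000),
        "warehouse": random.choice(["east", "west", "north"]),
    })
```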

Additional benefits

New versions are always being released, bringing new outputs and new behavior. It’s not safe to assume that automated tests created against one version will still hold against the versions released after it.

When you take this part of the development cycle into your own hands, by freezing or controlling the output of the system that your module depends on, you can create separate sets of tests and simulation data for each version and thus differentiate your testing, making it more agile and robust.
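
One simple way to organize this, sketched below with a hypothetical directory layout, is to keep a separate set of recordings per upstream version and run the same scenarios against each one.

```python
import json
from pathlib import Path


def load_recordings(version):
    """Load the frozen traffic captured for one upstream release.

    Assumes a hypothetical layout of recordings/<version>/recordings.json,
    one directory per version of the external system.
    """
    path = Path("recordings") / version / "recordings.json"
    return json.loads(path.read_text())


# Run the same test scenarios against each version's frozen data.
for version in ("v2.2", "v2.3"):
    recordings = load_recordings(version)
    # ... feed these into the replay simulator and run the suite ...
```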

Every project differs from others in its requirements and time frames, so the approach you choose depends largely on your current situation.

If you are developing something with unclear requirements, the “bad” approach might actually be useful, because the piece you’re going to test will change anyway, so there is no point in spending too much time on testing.

You only need something that guarantees a reasonable level of reliability.

But as soon as you have clear requirements and your module is developed in sync with them, it is best to test your software in a solid and reliable way! That can cost you a lot of time and resources, because developing “simulators,” decoding communication formats, and stripping unneeded data is hard, time-consuming work. But if you want to survive in the long term, working against dynamically changing environments and software is worth the effort.

Written by Nikolay Baklicharov
Senior Software Engineer

Nikolay Baklicharov is a senior software engineer at SevOne. He joined SevOne in August 2013 and has been working professionally as a software developer since 2008. In this role, he has gained deep experience with C++, networking, and various other technologies, and he enjoys continuing to improve his networking knowledge at SevOne.
