Pytest is the king of Python testing tools. With more than 12,000 stars on GitHub, a very active community, continuous improvement with new releases, and a lot of forks and plugins to extend its functionality, pytest is the most important reference when we need to test code in Python.
If you look for Python testing tools or frameworks on the Internet, you'll find articles like "Python Testing Frameworks" by E. Sales, "10 Best Python Testing Frameworks" by GeeksForGeeks, "Top 9 Python Testing Frameworks" by M. Echout, and many other blog posts with similar rankings where pytest is never missing.
The comparison with other tools seems unfair since pytest is a tool for unit and integration testing, whereas the other frameworks each compete in a very specific domain. Lettuce and Behave, for example, introduce behavior-driven development, but they're not for all development teams; Robot works for end-to-end tests and RPA, but it's not for unit or simple integration tests; Testify, TestProject, and others are dead projects… The unittest module (a built-in Python module) and nose2 could be pytest's biggest competitors, but they lack the support, community, and plugins pytest has.
At Fluid Attacks, we use and recommend pytest. Still, we decided to wrap it before using it in our codebase. In this post, I want to share why you should try it and how we achieved a new testing framework for highly maintainable and readable tests.
Testing with pytest
A previous blog post, "From Flaky to Bulletproof" by D. Salazar, guides our effort to improve our unit and integration tests. There, we recommend using a testing module (a pytest wrapper) for one big reason: standardization.
When your development team has many members, you need to define some rules so everyone speaks the same language. You can add linters and formatters to standardize code syntax, but the way programmers write code is trickier to standardize.
Some libraries are considered "opinionated" because they enforce standardized usage with specific file structures and their own methods and classes. Indeed, pytest has its own methods and files, but it is so flexible that it seems "unopinionated."
These are some of the pytest features we least wanted, all of them a product of its flexibility:
- Pytest has an official way to mock methods and classes, but you can also mock with other libraries (e.g., unittest) without conflicts.
- Pytest fixtures do actual magic: they can modify the behavior of every test without being referenced directly, messing up the expected test flows (see the sketch after this list).
- You can put tests anywhere, even beside the functional code.
- You can use real or mocked services because pytest execution is not sandboxed.
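To make the fixture point concrete, here is a minimal sketch (the environment variable is a made-up example): an autouse fixture placed in a conftest.py silently changes the behavior of every test under that directory, even though no test mentions it.

```python
# conftest.py (illustrative sketch; the flag name is made up)
import pytest


@pytest.fixture(autouse=True)
def force_feature_flag(monkeypatch: pytest.MonkeyPatch) -> None:
    # No test asks for this fixture, yet every test in this directory
    # now runs with the feature flag turned on.
    monkeypatch.setenv("NEW_CHECKOUT_FLOW", "true")
```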
If you assemble a growing team in which everyone has different experiences working with pytest, you can accumulate tons of technical debt, unreadable code, and exponentially increasing WTFs/minute.
Image: the WTFs/minute cartoon, from OSNews.
Therefore, we decided to implement our own pytest wrapper, which also bundles other testing libraries such as moto, freezegun, and coverage-py, so that test code is written in only one way.
Let's talk about our new guidelines and their benefits:
Pytest prohibition
Thanks to the importlinter library, we forbid the use of pytest and unittest anywhere other than our testing framework. This safeguards against misuse by ruling out pytest fixtures, unittest mocks, and any unnecessary features that might be introduced later. Thus, only the testing module can use pytest, and it exports the most important tools for everyone else to use.
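As a rough illustration, a forbidden-imports contract like the following is how that rule can be expressed; the package names are placeholders, not our actual layout:

```ini
# .importlinter (illustrative sketch; package names are placeholders)
[importlinter]
root_package = myapp
include_external_packages = True

[importlinter:contract:pytest-only-in-testing]
name = Only the testing module may import pytest or unittest
type = forbidden
source_modules =
    myapp.api
    myapp.db
    myapp.organizations
forbidden_modules =
    pytest
    unittest
```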
Wrapped utilities
pytest.raises (to catch exceptions), pytest.mark.parametrize (to handle multiple cases per test), and freezegun.freeze_time (to use a fake time to run the test) are the most common features that we use. We found a way to wrap them into functions that can be imported from our testing module, making them easy to use and allowing us to document examples of how to implement them.
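A minimal sketch of what those wrappers could look like, assuming a testing module named `testing` (the signatures are simplified for illustration, not our exact implementation):

```python
# testing/__init__.py (illustrative sketch)
from collections.abc import Callable, Iterator
from contextlib import contextmanager
from typing import Any

import freezegun
import pytest


@contextmanager
def raises(exception: type[BaseException]) -> Iterator[None]:
    """Assert that the wrapped block raises the given exception."""
    with pytest.raises(exception):
        yield


def parametrize(cases: dict[str, Any]) -> Callable:
    """Run the decorated test once per named case."""
    return pytest.mark.parametrize(
        "case", list(cases.values()), ids=list(cases.keys())
    )


def freeze_time(fake_time: str) -> Callable:
    """Run the decorated test at a fixed, fake point in time."""
    return freezegun.freeze_time(fake_time)
```

A test then imports `raises`, `parametrize`, and `freeze_time` from the testing module instead of importing pytest or freezegun directly.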
Mocked services
Fluid Attacks’ platform uses AWS services. We decided to mock them with moto to get very simple and fast tests, thanks to in-memory simulation of services like DynamoDB and S3. However, moto requires some boilerplate code to run and to ensure isolation among tests. Developers could end up adding complex mocking logic inside tests, or forget the right way to clean up those mocks because a copied-and-pasted step went missing.
For that reason, we included a decorator in our testing module to start the fake AWS environment. We also encapsulated all the startup and cleanup to ensure test isolation and simplify the tests. Developers only need the documentation to know how to preload data or files for testing using a standard declarative approach.
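A simplified sketch of that kind of decorator, assuming moto 5's `mock_aws` entry point (the parameters, key schema, and region are illustrative assumptions, not our exact implementation):

```python
# testing/aws.py (illustrative sketch)
import functools
import os
from collections.abc import Callable

import boto3
import moto


def mocked_aws(
    *, tables: tuple[str, ...] = (), buckets: tuple[str, ...] = ()
) -> Callable:
    """Run the decorated test inside an in-memory AWS environment."""

    def decorator(test: Callable) -> Callable:
        @functools.wraps(test)
        def wrapper(*args: object, **kwargs: object) -> object:
            # Dummy credentials so boto3 never talks to real AWS.
            os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
            os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")
            os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")
            # Everything inside this block hits fake, in-memory AWS;
            # leaving it discards all resources, so each test is isolated.
            with moto.mock_aws():
                dynamodb = boto3.client("dynamodb", region_name="us-east-1")
                for table in tables:
                    dynamodb.create_table(
                        TableName=table,
                        AttributeDefinitions=[
                            {"AttributeName": "pk", "AttributeType": "S"}
                        ],
                        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
                        BillingMode="PAY_PER_REQUEST",
                    )
                s3 = boto3.client("s3", region_name="us-east-1")
                for bucket in buckets:
                    s3.create_bucket(Bucket=bucket)
                return test(*args, **kwargs)

        return wrapper

    return decorator
```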
Fakers for data objects
Leaving aside the discussion about the differences between fakers, mocks, stubs, and spies, we needed to create fake data. We addressed this by implementing Fakers, a collection of functions designed to return specific fake objects based on our data types.
Fakers are simple to implement and can call other fakers to populate nested fields. Developers can override any field when calling them, which keeps them flexible. This approach significantly reduces boilerplate code, enabling the creation of well-structured test objects without defining every property manually.
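For instance, a faker for a hypothetical `Organization` type could look like this (the dataclasses, field names, and defaults are made up for illustration):

```python
# testing/fakers.py (illustrative sketch; types and fields are made up)
from dataclasses import dataclass


@dataclass(frozen=True)
class OrganizationState:
    status: str
    modified_by: str


@dataclass(frozen=True)
class Organization:
    org_id: str
    name: str
    country: str
    state: OrganizationState


def fake_organization_state(
    *, status: str = "ACTIVE", modified_by: str = "test@example.com"
) -> OrganizationState:
    """Return a valid state, overriding only what the test needs."""
    return OrganizationState(status=status, modified_by=modified_by)


def fake_organization(
    *,
    org_id: str = "ORG#fake",
    name: str = "fake-org",
    country: str = "Colombia",
    state: OrganizationState | None = None,
) -> Organization:
    """Return a valid organization; nested fields come from other fakers."""
    return Organization(
        org_id=org_id,
        name=name,
        country=country,
        state=state or fake_organization_state(),
    )
```

A test that only cares about the country then calls `fake_organization(country="India")` and gets every other field filled in with sensible defaults.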
Coverage per module
We took a deep dive into the coverage-py library to achieve a modular solution. This powerful tool generates detailed reports on test coverage for the executed code, serving as a critical resource for identifying gaps in our test suite. By analyzing these reports, we can pinpoint areas where additional tests are needed, and the modular approach helps developers focus on the most important tests first.
Also, the new file structure speaks for itself. Test files are next to normal files, giving developers a simple way to check for missing tests and extend them.
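A simplified sketch of how such a per-module setup can be expressed with coverage-py (the path and threshold are placeholders):

```ini
# .coveragerc for one module (illustrative; path and threshold are placeholders)
[run]
source = myapp/organizations

[report]
fail_under = 80
show_missing = True
```

Running `coverage run -m pytest myapp/organizations` followed by `coverage report` then yields the report for that module alone.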
Continuous discussion
If a case requires one of the pytest features we removed, any developer can start a discussion to validate whether a new feature is necessary or whether the current tools can handle the case.
We are open to discussions and improvements with the whole team, prioritizing testability and readability. Any component must be easily testable, and any test must be highly readable. Our solution for testing is declarative (explicit decorators in the test setup) and descriptive (tests are divided into Arrange, Act, and Assert sections).
Results
Our testing framework allows developers to test whether a user can create a new organization (a collection of groups or projects to be assessed) on our platform.
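The example below is only a sketch of that style, not our real code: the domain functions and the testing helpers reuse the hypothetical names from the sections above.

```python
# test_organizations.py (illustrative sketch; all names are assumptions)
from organizations.domain import add_organization, get_organization
from testing import freeze_time, mocked_aws
from testing.fakers import fake_organization


@mocked_aws(tables=("organizations",))
@freeze_time("2024-02-01")
def test_add_organization() -> None:
    # Arrange: a valid fake organization, overriding only the relevant fields
    organization = fake_organization(name="new-org", country="Colombia")

    # Act: call the production code under test
    add_organization(organization)

    # Assert: the new entity can be read back from the (mocked) database
    assert get_organization(name="new-org") is not None
```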
They can also test whether a file was uploaded to Amazon S3.
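Again, a sketch only; `upload_report` is a hypothetical function, and the helpers come from the sketches above.

```python
# test_reports.py (illustrative sketch; all names are assumptions)
import boto3

from reports.domain import upload_report
from testing import mocked_aws


@mocked_aws(buckets=("reports",))
def test_upload_report() -> None:
    # Arrange
    content = b"fake report content"

    # Act: the production code uploads the file to (mocked) S3
    upload_report(bucket="reports", key="group1/report.pdf", content=content)

    # Assert: the object exists in the fake bucket with the same content
    client = boto3.client("s3", region_name="us-east-1")
    stored = client.get_object(Bucket="reports", Key="group1/report.pdf")
    assert stored["Body"].read() == content
```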
It was an amazing change because the number of WTFs/minute the old tests could generate was considerable, even more so when they had to be maintained and some mocks could hide potential bugs.
Thus, we get very simple and readable code with our testing framework. The developers greatly appreciated the new testing experience and have frequently commented on the ease, speed, and confidence with which they run the new tests. That good experience progressively reduces the assertion-free tests problem and motivates developers to write the tests that matter. We even dedicated a few weeks, hackathon style, to migrating old tests to the new framework, prioritizing our code quality so we can move faster afterward.
Note: In Unit Testing: Principles, Practices, and Patterns, V. Khorikov calls "assertion-free tests" those tests that don't verify anything. Even with asserts at the end, a test could assert things that are not significant (e.g., that a function was called n times, instead of querying the database directly for the newly added entity).
A continuous testing practice is the first layer of any security strategy.
In this blog post, I shared our learnings from building a pytest wrapper for standardization and its huge benefits. Don't forget that enforcing standardization, reducing features to the essential ones, mocking only your external services, and being open to team discussions and feedback can improve your testing culture. The more testable and readable your code is, the faster you can deliver new value (and the more confident you'll be in your production releases).
Special thanks
- D. Salazar, for supporting, discussing, and implementing the core of this solution with me.
- D. Betancur and J. Restrepo, for adding the testing framework development to the roadmap and prioritizing it.
- The development team, for using the testing framework, giving meaningful feedback, and contributing to its maintenance and growth.