pytest is a unit testing library that, coupled with the requests package, can be used for writing integration tests. With integration tests, a couple of things we need are a way to distinguish between environments such as local, staging, production and different datacenters, and a way to run either all tests or only specific tests per env.

1. We need a way to pass the env during test runs. We will use pytest-variables for that, which makes variables passed in from the command line available to the tests. Another way is to write a conftest (a minimal sketch of that route follows below). So our requirements.txt file will now have pytest-variables[hjson]==1.5.1 apart from other packages.
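If you go the conftest route instead of pytest-variables, a minimal sketch could look like the following; the --env option name and the config/ directory layout are assumptions made for illustration, not something a library dictates:

# conftest.py -- load an env config and expose it to tests, mimicking
# what pytest-variables gives us, without the extra dependency.
import json
import os

import pytest


def pytest_addoption(parser):
    # e.g. py.test --env local
    parser.addoption("--env", action="store", default="local",
                     help="env config name under config/")


@pytest.fixture(scope="session")
def variables(request):
    """Return the config/<env>.json contents as a dict."""
    env = request.config.getoption("--env")
    with open(os.path.join("config", env + ".json")) as f:
        return json.load(f)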
2. We will name our pytest file, which starts with test, as something like test_api.py.
3. Now on to writing the tests. I would recommend that the python code be modular and well organised instead of spaghetti code. Read Modular Programming with Python by Erik Westra if you are interested. We could separate parts into different files, or keep things organised under classes within the same file. The point is separation of concerns, readability and maintainability (especially by other programmers). One OOP guideline: go for composition as opposed to inheritance when possible, as in the sketch below.
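A throwaway illustration of that last point (the names here are hypothetical): a test class that holds an API helper can swap or stub it freely, while one that inherits from the helper is tied to its internals.

class ApiClient:
    """Hypothetical helper that knows how to call the service."""

    def add_user(self, payload):
        # ...
        pass


# Avoid: the test IS-A client, so any change to ApiClient ripples into the tests.
class TestUsersByInheritance(ApiClient):

    def test_add_user(self):
        self.add_user({"name": "foo"})


# Prefer: the test HAS-A client and merely uses it (the fixture in test_api.py
# below follows the same pattern).
class TestUsersByComposition:
    client = ApiClient()

    def test_add_user(self):
        self.client.add_user({"name": "foo"})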
4. We will write the pytests in an object oriented fashion rather than just writing test_ functions. Let's call the module below test_api.py.
import base64
import json
import logging
import time

import pytest
import requests

__version__ = "0.0.42"  # corresponds to app version

logging.basicConfig(level=logging.DEBUG)

__timeout = 90  # seconds
def get_headers(props):
    """Return header map."""
    # ...


def http_post(params, data, headers=None):
    """Send an HTTP POST request."""
    return requests.post(params['server'] + params['endpoint'], data=data, headers=headers)


class FooAPI:
    """Sample Foo API helper class."""

    def __init__(self):
        pass

    def create_request(self, props):
        """Build the request payload for the add-user call."""
        # ...
@pytest.mark.timeout(__timeout)
class TestFooAPI:
    """Test Foo API service."""

    pytestmark = [pytest.mark.local, pytest.mark.ny1, pytest.mark.ams1, pytest.mark.ny1_green, pytest.mark.stg1]

    @pytest.fixture()
    def foo(self):
        """Return Foo object."""
        return FooAPI()

    def gen_name(self):
        """Helper method; not collected by pytest since its name does not start with test."""
        pass

    def test_add_user(self, variables, foo):
        """Test adding a user."""
        log = logging.getLogger('test_add_user')
        data = foo.create_request(variables)
        headers = get_headers(variables)
        http_resp = http_post(variables, data, headers)
        log.debug("response: %s", http_resp.content)
        assert http_resp.status_code == 200
    @pytest.mark.ams1_beta
    def test_beta_endpoint(self, variables, foo):
        """Test beta endpoint."""
        log = logging.getLogger("test_beta_endpoint")
        data = {'users': base64.b64encode(b'something'), 'bar': 'baz'}
        start = time.time()
        headers = get_headers(variables)
        log.info("\nheaders: %s", headers)
        http_resp = http_post(variables, data, headers)
        rtt = time.time() - start
        log.debug("\nrtt: %s\n", rtt)
        body = json.loads(http_resp.content)
        assert body['status'] is True
        assert body['code'] == 0
        assert http_resp.status_code == 200, 'Expecting 200 OK status code.'
class TestBarAPI:
    # ....
    pass
Any class that starts with Test and any function that starts with test will be executed by pytest by default; this behaviour can be configured. Fixtures are objects that are passed to each test as arguments before it runs. There are various pytest markers. If all the test functions within a pytest class need to run against many different environments, we can specify them as pytestmark = [pytest.mark.env1, pytest.mark.env2, ...], where env1, env2 correspond to the env config file names. If we need to execute specific tests only against a specific env, we exclude them from that list and annotate the particular function with a marker such as @pytest.mark.ams1_beta, which means the test will execute only when the env passed via the command line is ams1_beta; since that env is not in the common pytestmark list, the other tests will be skipped. An example use case is running sanity tests (a subset of tests) against production at a specific interval, where we do not have to run the full testsuite. The variables argument contains the config values defined in the config json, which are available to the test methods as a python dictionary.
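Note that pytest does not require these markers to be declared anywhere, but registering them documents the available envs and lets pytest's --strict mode reject misspelled marker names. A minimal sketch, assuming a conftest.py alongside test_api.py (the description strings are made up):

def pytest_configure(config):
    # Register the env markers used in test_api.py so `py.test --strict` can
    # flag typos; extend the tuple as new env configs are added.
    for env in ("local", "ny1", "ny1_green", "ams1", "ams1_beta", "stg1"):
        config.addinivalue_line("markers", "%s: tests to run against the %s env" % (env, env))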
5. The configuration files are under the config directory. An example local.json is below; the name of the file corresponds to the env that we pass to pytest via the command line.

{
"server": "http://localhost:8080",
"endpoint": "/foo/test/",
"key": "local_key.pem",
"cert": "local_cert.pem",
"env": "local",
"json": true
}
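The key and cert entries in the config are not used by the http_post helper above. If your endpoints require client certificates, one way to wire them in (a sketch of a possible variant, not something the original helper does) is via the standard cert argument of requests, which accepts a (cert, key) tuple of pem paths:

import requests


def http_post(params, data, headers=None):
    """Send an HTTP POST request using the env's client certificate, if configured."""
    cert = None
    if params.get('cert') and params.get('key'):
        cert = (params['cert'], params['key'])  # paths taken from the env config json
    return requests.post(params['server'] + params['endpoint'],
                         data=data, headers=headers, cert=cert)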
6. Tying it all together using a shell script. It is better to use virtualenv, so the script below does a bit of housekeeping before running the tests.

#!/bin/bash
# Integration test with pytest.
TEST_DIR=test_api
VENV_DIR=venv
CONFIG_DIR=config
function activateVirtualEnv {
    echo "Activating venv .."
    if ! hash virtualenv 2>/dev/null; then
        echo "virtualenv not found. Installing .."
        pip install virtualenv
    fi
    virtualenv $VENV_DIR
    source $VENV_DIR/bin/activate
}

function installDeps {
    echo "Installing dependencies .."
    pip install -r ../requirements.txt
}

function prepareForTest {
    echo "Running integration test .."
    activateVirtualEnv
    installDeps
}
function deactivateVirtualEnv {
    echo "Deactivating venv .."
    deactivate
}

function displayhelp {
    echo "Usage: $0 options"
    echo "where options include:"
    echo "  local   run test in local env"
    echo "  ams1    run test in Amsterdam production datacenter 1"
    echo "  stg1    run test in staging environment 1"
}
# $1 result xml file name
# $2 env name (corresponds to the config file name and marker)
function run {
    echo "Running test against $2"
    # -n          number of parallel workers (pytest-xdist)
    # -s          do not capture output, so logs are shown
    # -v          verbose
    # --variables the env config file (pytest-variables)
    # -m          marker expression, selects the tests for this env
    py.test -n 4 -s -v --junitxml="$1" --variables "$CONFIG_DIR/$2.json" -m "$2" test_api.py
}
# $1 results.xml
# $2 env : local, rqa2, etc. corresponds to the filename in config
# To run against all machines in staging and production, use env as stg and prod respectively
# For integration test
#   ./runtests.sh results.xml local
function main {
    result="$1"
    env="$2"
    echo "result: $result"
    echo "env: $env"
    echo "config: $CONFIG_DIR"
    cd $TEST_DIR
    prepareForTest
    if [[ $env == "stg" ]]; then
        run stg1.xml stg1
        run stg2.xml stg2
        python merge_junit_results.py stg1.xml stg2.xml > $result
    elif [[ $env == "prod" ]]; then
        run ny1.xml ny1
        run ams1.xml ams1
        python merge_junit_results.py ny1.xml ams1.xml > $result
    else
        # individual env
        run $result $env
    fi
    return_code=$?
    deactivateVirtualEnv
}
main "$@"
exit $return_code
The merge_junit_results.py script can be obtained from the cgoldber gist. To run the tests we run the command ./runtests.sh ${result}.xml ${env-config-file-name}, which will check for virtualenv and such and then run the tests. To run the tests in parallel, use xdist (the -n flag above).

7. A sample requirements.txt file would look like:

pytest==3.0.7
pytest-variables[hjson]==1.5.1
pytest-xdist==1.15.0
requests==2.13.0
pytest-timeout==1.2.0
8. To integrate the results with jenkins, it needs to know the final result of the testsuite, which will be present in the results.xml file. To parse that, use the code below.

#!/bin/python
"""Extract test result failure, error values for Jenkins job."""
import xml.etree.ElementTree as ET
import sys
def parse_result(filename):
    """Parse the given results.xml file."""
    root = ET.parse(filename).getroot()
    if root.attrib["failures"] == "0" and root.attrib["errors"] == "0":
        print("ALLTESTSUCCESS")
    else:
        print("ERROR")
        sys.exit(1)


if __name__ == '__main__':
    print("junit filename: " + sys.argv[1])
    parse_result(sys.argv[1])
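For reference, the root element written by --junitxml is a testsuite carrying the aggregate counts, which is exactly what parse_result inspects; a trimmed, hypothetical example:

<testsuite errors="0" failures="0" name="pytest" tests="12" time="42.7">
    <testcase classname="test_api.TestFooAPI" name="test_add_user" time="1.23"/>
    ...
</testsuite>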
Now we have a full blown integration testsuite which can be integrated with a CI/CD pipeline.

For clarity, a sample folder structure will look like below.
test
├── requirements.txt
├── runtests.sh
└── test_api
├── __init__.py
├── local.xml
├── config
│ ├── ams1.json
│ ├── ams1_beta.json
│ ├── local.json
│ ├── local_beta.json
│ ├── ny1.json
│ ├── ny1_beta.json
│ ├── ny1_green.json
│ └── vm.json
├── merge_junit_results.py
├── parse_xml.py
├── result.xml
├── test_api.py
└── venv