

Pytest Unit Testing Series [v1.0.0][Pytest Basics]

Pytest Installation and Configuration

Like unittest, pytest is a unit testing framework for the Python language. Compared with unittest, its test cases are easier to write, it offers more flexible ways to run tests, its failure messages are clearer, and its assertions are more concise; it can also run test cases written with unittest and nose.

Installing Pytest

Open a command prompt and install pytest with pip, as shown below.

C:\Users\Administrator>pip install -U pytest
Collecting pytest
  Using cached pytest-5.4.1-py3-none-any.whl (246 kB)
Requirement already satisfied, skipping upgrade: pluggy<1.0,>=0.12 in c:\program files\python38\lib\site-packages (from pytest) (0.13.1)
Requirement already satisfied, skipping upgrade: atomicwrites>=1.0; sys_platform == "win32" in c:\program files\python38\lib\site-packages (from pytest) (1.3.0)
Requirement already satisfied, skipping upgrade: colorama; sys_platform == "win32" in c:\program files\python38\lib\site-packages (from pytest) (0.4.3)
Requirement already satisfied, skipping upgrade: wcwidth in c:\program files\python38\lib\site-packages (from pytest) (0.1.8)
Requirement already satisfied, skipping upgrade: packaging in c:\program files\python38\lib\site-packages (from pytest) (20.3)
Requirement already satisfied, skipping upgrade: attrs>=17.4.0 in c:\program files\python38\lib\site-packages (from pytest) (19.3.0)
Requirement already satisfied, skipping upgrade: more-itertools>=4.0.0 in c:\program files\python38\lib\site-packages (from pytest) (8.2.0)
Requirement already satisfied, skipping upgrade: py>=1.5.0 in c:\program files\python38\lib\site-packages (from pytest) (1.8.1)
Requirement already satisfied, skipping upgrade: six in c:\program files\python38\lib\site-packages (from packaging->pytest) (1.14.0)
Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in c:\program files\python38\lib\site-packages (from packaging->pytest) (2.4.6)
Installing collected packages: pytest
Successfully installed pytest-5.4.1

A Code Example

Create a new Python file and add the following code:

def test_equal():
    assert (1, 2, 3) == (1, 2, 3)

Then run the file from the command line with pytest xxx.py. The output looks like this:

C:\Users\Administrator>pytest C:\Users\Administrator\Desktop\123.py
===================================== test session starts ==================================================
platform win32 -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Administrator
collected 1 item

Desktop\123.py .                                                                                                 [100%]

====================================== 1 passed in 0.09s ==================================================

To see more detailed results, add the -v or --verbose option, i.e. pytest -v xxx.py. The output looks like this:

C:\Users\Administrator>pytest -v C:\Users\Administrator\Desktop\123.py
======================================= test session starts =================================================
platform win32 -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1-- c:\program files\python38\python.exe
cachedir:.pytest_cache
rootdir: C:\Users\Administrator
collected 1 item

Desktop/123.py::test_equal PASSED                                                                                [100%]

============================================ 1 passed in 0.04s ===============================================

Now let's look at a failing example. Create another .py file with the following code:

def test_equal():
    assert (1, 2, 3) == (3, 2, 1)

Then run the file with pytest -v xxx.py. The output looks like this:

C:\Users\Administrator>pytest -v C:\Users\Administrator\Desktop\123.py
=========================================== test session starts ==============================================
platform win32 -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1-- c:\program files\python38\python.exe
cachedir:.pytest_cache
rootdir: C:\Users\Administrator
collected 1 item

Desktop/123.py::test_equal FAILED                                                                                [100%]

============================================= FAILURES =======================================================
____________________________________________ test_equal ______________________________________________________

    def test_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Full diff:
E         - (3, 2, 1)
E         ?  ^     ^
E         + (1, 2, 3)
E         ?  ^     ^

Desktop\123.py:2: AssertionError
====================================== short test summary info ===============================================
FAILED Desktop/123.py::test_equal - assert (1, 2, 3) == (3, 2, 1)
============================================= 1 failed in 0.20s ==============================================

Although the assertion fails, the output makes it very clear why: pytest uses carets (^) to point out where the two results differ.

Configuring PyCharm

In PyCharm, set pytest as the default test runner under File → Settings → Tools → Python Integrated Tools → Default test runner.

Uninstalling Pytest

C:\Users\Administrator>pip uninstall pytest
Found existing installation: pytest 5.4.1
Uninstalling pytest-5.4.1:
  Would remove:
    c:\program files\python38\lib\site-packages\_pytest\*
    c:\program files\python38\lib\site-packages\pytest-5.4.1.dist-info\*
    c:\program files\python38\lib\site-packages\pytest\*
    c:\program files\python38\scripts\py.test.exe
    c:\program files\python38\scripts\pytest.exe
Proceed (y/n)? y
  Successfully uninstalled pytest-5.4.1

C:\Users\Administrator>

Common Command-Line Usage

Pytest Execution Rules

  • To run tests from the command line, a complete pytest command is pytest followed by options and a file name or path
  • If no options or arguments are given, pytest looks for test files in the current directory and its subdirectories and runs the test code it finds
  • If one or more file names or directories are given, pytest looks through each of them and runs every test it finds; to discover all the tests, pytest recursively traverses each directory and its subdirectories, but it only collects files and functions that follow the naming rules below

The process by which pytest finds test files and test cases is called test discovery. As long as you follow these naming conventions, your tests will be discovered (a small sketch follows the list):

  • Test files should be named test_(something).py or (something)_test.py
  • Test functions and test class methods should be named test_(something)
  • Test classes should be named Test(something)
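For instance, a minimal sketch of these rules in action (the file and function names here are invented for illustration, not taken from the article's project):

# test_discovery_demo.py -- follows the test_*.py file naming convention

def test_something():          # discovered: function name starts with test_
    assert 1 + 1 == 2

class TestSomething:           # discovered: class name starts with Test (and has no __init__)
    def test_method(self):     # discovered: method name starts with test_
        assert "py" in "pytest"

def helper():                  # NOT discovered: name does not start with test_
    return 42
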
Test Code

Suppose we have the following code to test, saved in a .py file named tobetest.py:

import allure
import pytest


# Function under test
def add(a, b):
    return a + b


# Test equality
@allure.step
def test_add():
    assert add(3, 4) == 7


# Test inequality
@allure.step
def test_add2():
    assert add(17, 22) != 50


# Test less-than-or-equal
@allure.step
def test_add3():
    assert add(17, 22) <= 50


# Test greater-than-or-equal (this one fails: 39 >= 50 is False)
@pytest.mark.aaaa
def test_add4():
    assert add(17, 22) >= 50


# Test membership
def test_in():
    a = "hello"
    b = "he"
    assert b in a


# Test non-membership
def test_not_in():
    a = "hello"
    b = "hi"
    assert b not in a


# Helper used to check for primality
def is_prime(n):
    if n <= 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True


# Check that 13 is prime
def test_true():
    assert is_prime(13)


# Check that 7 is not prime (this one fails: 7 is prime)
def test_not_true():
    assert not is_prime(7)
Running a Single File
E:\Programs\Python\Python_Pytest\TestScripts>pytest tobetest.py
============================================= test session starts ====================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.13.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts, inifile: pytest.ini
plugins: allure-pytest-2.6.3, cov-2.7.1, emoji-0.2.0, forked-1.0.2, instafail-0.4.1, nice-0.1.0, repeat-0.8.0, timeout-1.3.3, xdist-1.29.0
collected 8 items                                                                                                                                                          

tobetest.py ...F...F                                                                                             [100%]

================================================ FAILURES ===========================================================
_________________________________________________ test_add4 __________________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
_________________________________________________ test_not_true ______________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
========================================= warnings summary ============================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.aaaa -is this a typo?  You can register custom marks to avoid
 this warning -for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

test_asserts.py::test_add4
test_asserts.py::test_not_true
  c:\python37\lib\site-packages\pytest_nice.py:22: PytestDeprecationWarning: the `pytest.config` global is deprecated.  Please use `request.config` or `pytest_configure` (if you're a pytest plugin) instead.
    if report.failed and pytest.config.getoption('nice'):

-- Docs: https://docs.pytest.org/en/latest/warnings.html
================================ 2 failed, 6 passed, 3 warnings in 0.46 seconds =========================================
  • The first line shows the operating system, the Python version, and the pytest version used for the run: platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.13.0
  • The second line shows the directory where the search starts and the configuration file in use; here a pytest.ini was found: rootdir: E:\Programs\Python\Python_Pytest\TestScripts, inifile: pytest.ini (if no configuration file were found, inifile would be empty)
  • The third line shows the pytest plugins currently installed: plugins: allure-pytest-2.6.3, cov-2.7.1, emoji-0.2.0, forked-1.0.2, instafail-0.4.1, nice-0.1.0, repeat-0.8.0, timeout-1.3.3, xdist-1.29.0
  • The fourth line, collected 8 items, means 8 test functions were found.
  • The fifth line, tobetest.py ...F...F, shows the test file name followed by one character per test: a dot means the test passed. Besides dots you may also see F (failure), E (error, an exception outside the test), s (skipped), x (xfail, expected to fail and did fail), and X (xpass, expected to fail but actually passed, which does not meet the expectation)
  • 2 failed, 6 passed, 3 warnings in 0.46 seconds summarizes the results and the elapsed time
Running a Single Test Function

Use the command

pytest -v path/filename::test_function_name

The output is as follows:

E:\Programs\Python\Python_Pytest\TestScripts>pytest test_asserts.py::test_true
==================================================== test session starts ===============================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.13.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts, inifile: pytest.ini
plugins: allure-pytest-2.6.3, cov-2.7.1, emoji-0.2.0, forked-1.0.2, instafail-0.4.1, nice-0.1.0, repeat-0.8.0, timeout-1.3.3, xdist-1.29.0
collected 1 item                                                                                                                                                           

test_asserts.py .                                                                                                [100%]

================================================== warnings summary ====================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.aaaa -is this a typo?  You can register custom marks to avoid
 this warning -for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================================== 1 passed, 1 warnings in 0.07 seconds =========================================
Other Command-Line Forms
  • Run a specific test function in a module: pytest test_mod.py::test_func
  • Run a specific test method of a class in a module: pytest test_mod.py::TestClass::test_method
  • Run a single test module: pytest test_module.py
  • Run all tests under a directory: pytest test/

Common pytest Command-Line Options

--collect-only

Before running a batch of test cases, we often want to know which tests will be executed and whether that matches our expectations. The --collect-only option covers this scenario, as the following output shows:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest --collect-only
================================================= test session starts ===================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items
<Package 'D:\\PythonPrograms\\Python_Pytest\\TestScripts'>
  <Module 'test_asserts.py'>
    <Function 'test_add'>
    <Function 'test_add2'>
    <Function 'test_add3'>
    <Function 'test_add4'>
    <Function 'test_in'>
    <Function 'test_not_in'>
    <Function 'test_true'>
  <Module 'test_fixture1.py'>
    <Function 'test_numbers_3_4'>
    <Function 'test_strings_a_3'>
  <Module 'test_fixture2.py'>
    <Class 'TestUM'>
      <Function 'test_numbers_5_6'>
      <Function 'test_strings_b_2'>
  <Module 'test_one.py'>
    <Function 'test_equal'>
    <Function 'test_not_equal'>
  <Module 'test_two.py'>
    <Function 'test_default'>
    <Function 'test_member_access'>
    <Function 'test_asdict'>
    <Function 'test_replace'>

============================================= no tests ran in 0.09 seconds ==============================================
-k

This option lets us use an expression to specify which test cases to run. If a test name is unique, or several test names share a common prefix or suffix, we can use this option to select them, as the following output shows:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest -k "asdict or default" --collect-only
================================================= test session starts ===================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items /15 deselected
<Package 'D:\\PythonPrograms\\Python_Pytest\\TestScripts'>
  <Module 'test_two.py'>
    <Function 'test_default'>
    <Function 'test_asdict'>

============================================ 15 deselected in 0.06 seconds ===========================================

From this output we can see that combining -k with --collect-only shows which test methods the given expression will select.
Then, removing --collect-only and using only -k actually runs test_default and test_asdict:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest -k "asdict or default"
================================================ test session starts =================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items /15 deselected

test_two.py ..                                                                                                   [100%]

============================================= 2 passed, 15 deselected in 0.07 seconds ================================

If we pay a little attention when naming our test cases, we can use -k to run whole groups of them; the expression may also contain and, or, and not.

-m

This option works with markers, which tag and group tests so that only the marked ones are executed; this is how you run a specific subset of tests. As shown in the code below, we add a marker to two of our earlier test methods:

# Task is the namedtuple under test (from the tasks project)

@pytest.mark.run_these_cases
def test_member_access():
    """Access members of the object by attribute name."""
    t = Task('buy milk', 'brian')
    assert t.summary == 'buy milk'
    assert t.owner == 'brian'
    assert (t.done, t.id) == (False, None)


@pytest.mark.run_these_cases
def test_asdict():
    """_asdict() returns a dictionary."""
    t_task = Task('do something', 'okken', True, 21)
    t_dict = t_task._asdict()
    expected_dict = {'summary': 'do something',
                     'owner': 'okken',
                     'done': True,
                     'id': 21}
    assert t_dict == expected_dict

Run the command

pytest -v -m run_these_cases

and the result is as follows:

D:\PythonPrograms\Python_Pytest\TestScripts>pytest -v -m run_these_cases
============================================== test session starts ======================================================
platform win32 -- Python 3.7.2, pytest-4.0.2, py-1.8.0, pluggy-0.12.0-- c:\python37\python.exe
cachedir:.pytest_cache
rootdir: D:\PythonPrograms\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 17 items /15 deselected

test_two.py::test_member_access PASSED                                   [50%]
test_two.py::test_asdict PASSED                                          [100%]

======================================= 2 passed, 15 deselected in 0.07 seconds =========================================

The -m option can also take an expression combining several marker names, for example -m "mark1 and mark2", -m "mark1 and not mark2", or -m "mark1 or mark2". A small sketch follows.
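As a minimal sketch (the marker names mark1 and mark2 are placeholders, not markers used elsewhere in this article), markers can be stacked on one test so that different -m expressions select different subsets:

import pytest

# register mark1 and mark2 in pytest.ini to avoid warnings (required under --strict)

@pytest.mark.mark1
def test_only_mark1():
    assert True

@pytest.mark.mark1
@pytest.mark.mark2          # a test may carry several markers at once
def test_both_marks():
    assert True

# pytest -m "mark1 and mark2"      -> runs only test_both_marks
# pytest -m "mark1 and not mark2"  -> runs only test_only_mark1
# pytest -m "mark1 or mark2"       -> runs both tests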

-x

Pytest runs every test case it discovers. If a test function fails an assertion or raises an unhandled exception, that test stops, pytest marks it as failed and moves on to the next test. When debugging, however, we often want the whole session to stop at the first failure; the -x option supports exactly that scenario, as the following output shows:

E:\Programs\Python\Python_Pytest\TestScripts>pytest -x
=============================================== test session starts =================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items                                                                                                                                                                                                                     

test_asserts.py ...F

===================================================== FAILURES ===========================================================
____________________________________________________ test_add4 ___________________________________________________________

    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:34: AssertionError
============================================ warnings summary ===========================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases -is this a typo?  You can register custom marks to avoid this warning -for details, see https://docs.pyt
est.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
=============================== 1 failed, 3 passed, 1 warnings in 0.41 seconds ==========================================

In the output we can see that 17 test cases were collected but only 4 were executed: 3 passed and 1 failed, and then execution stopped.
Running again without the -x option gives the following result:

E:\Programs\Python\Python_Pytest\TestScripts>pytest --tb=no
============================================ test session starts =====================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items                                                                                                                                                                                                                     

test_asserts.py ...F..F                                                                                                                                                                                                          [41%]
test_fixture1.py ..[52%]
test_fixture2.py ..[64%]
test_one.py .F                                                                                                                                                                                                                   [76%]
test_two.py ....                                                                                                 [100%]

============================================= warnings summary =======================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases -is this a typo?  You can register custom marks to avoid this warning -for details, see https://docs.pyt
est.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
==================================== 3 failed, 14 passed, 1 warnings in 0.31 seconds =================================

The output shows that 17 test cases were collected in total: 14 passed and 3 failed. We also used --tb=no to turn off traceback output; it is handy when we only want the results and not pages of error details.

--maxfail=num

-x stops the whole run at the first failure. What if we want to stop only after a certain number of failures? The --maxfail option supports this scenario, as the following output shows:

E:\Programs\Python\Python_Pytest\TestScripts>pytest --maxfail=2 --tb=no
============================================= test session starts =======================================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items                                                                                                                                                                                                                     

test_asserts.py ...F..F

================================================= warnings summary ======================================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases -is this a typo?  You can register custom marks to avoid this warning -for details, see https://docs.pyt
est.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================================= 2 failed, 5 passed, 1 warnings in 0.22 seconds ================================

The output shows that 17 test cases were collected and 7 were executed; execution stopped as soon as the number of failures reached 2.

--tb=

The commands and options for controlling traceback output are:

pytest --showlocals  # show local variables in tracebacks
pytest -l            # show local variables (shortcut)
pytest --tb=auto     # (default) 'long' tracebacks for the first and last entry, but 'short' style for the other entries
pytest --tb=long     # exhaustive, informative traceback formatting
pytest --tb=short    # shorter traceback format
pytest --tb=line     # only one line per failure
pytest --tb=native   # Python standard library formatting
pytest --tb=no       # no traceback at all
pytest --full-trace  # causes very long traces to be printed on error (longer than --tb=long)

-v (--verbose)

-v, --verbose: increase verbosity.

-q (--quiet)

-q, --quiet: decrease verbosity.

--lf (--last-failed)

--lf, --last-failed: rerun only the tests that failed at the last run (or all if none failed)

E:\Programs\Python\Python_Pytest\TestScripts>pytest --lf --tb=no
================================== test session starts ==========================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 9 items /6 deselected /3 selected                                                                                                                                                                                          
run-last-failure: rerun previous 3 failures (skipped 7 files)
test_asserts.py FF                                                                                                                                                                                                               [66%]
test_one.py F                                                                                                    [100%]

============================= 3 failed, 6 deselected in 0.15 seconds ============================
--ff (--failed-first)

--ff, --failed-first: run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown

E:\Programs\Python\Python_Pytest\TestScripts>pytest --ff --tb=no
================================= test session starts ==================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest\TestScripts
plugins: allure-pytest-2.6.3
collected 17 items                                                                                                                                                                                                                     
run-last-failure: rerun previous 3 failures first
test_asserts.py FF                                                                                                                                                                                                               [11%]
test_one.py F                                                                                                                                                                                                                    [17%]
test_asserts.py .....[47%]
test_fixture1.py ..[58%]
test_fixture2.py ..[70%]
test_one.py .[76%]
test_two.py ....                                                                                                 [100%]

======================== warnings summary ==========================================
c:\python37\lib\site-packages\_pytest\mark\structures.py:324
  c:\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.run_these_cases -is this a typo?  You can register custom marks to avoid this warning -for details, see https://docs.pyt
est.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
================= 3 failed, 14 passed, 1 warnings in 0.25 seconds ==========================
-s and --capture=method
-s is equivalent to --capture=no.
(venv) D:\Python_Pytest\TestScripts>pytest -s
============================= test session starts ============================================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items                                                                                                                                                                                                                        

test_asserts.py ...F...F
test_fixture1.py

setup_module================>
setup_function------>
test_numbers_3_4
.teardown_function--->
setup_function------>
test_strings_a_3
.teardown_function--->
teardown_module=============>

test_fixture2.py

setup_class=========>
setup_method----->>
setup----->
test_numbers_5_6
.teardown-->
teardown_method-->>
setup_method----->>
setup----->
test_strings_b_2
.teardown-->
teardown_method-->>
teardown_class=========>

test_one.py .F
test_two.py ....
========================================== FAILURES ============================================
____________________________________________ test_add4 ______________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
_____________________________________________________________________________________________________________ test_not_true ______________________________________________________________________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
_______________________________________ test_not_equal ________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Use -v to get the full diff

test_one.py:9: AssertionError
================================== 3 failed, 15 passed in 0.15 seconds =================================
--capture=method per-test capturing method: one of fd|sys|no.
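To make sense of the setup/teardown lines in the output above, here is a hedged sketch of what a module like test_fixture1.py probably contains (its code is not shown in this article, so treat this as an assumption): print() calls inside xunit-style setup/teardown functions, which only become visible because -s disables output capturing.

# Hypothetical sketch of test_fixture1.py (assumed, not taken from the article)

def setup_module(module):
    # runs once before any test in this module
    print("\nsetup_module================>")

def teardown_module(module):
    # runs once after all tests in this module
    print("teardown_module=============>")

def setup_function(function):
    # runs before each test function
    print("setup_function------>")

def teardown_function(function):
    # runs after each test function
    print("teardown_function--->")

def test_numbers_3_4():
    print("test_numbers_3_4")
    assert 3 * 4 == 12

def test_strings_a_3():
    print("test_strings_a_3")
    assert "a" * 3 == "aaa"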
-l (--showlocals)
-l, --showlocals show locals in tracebacks (disabled by default).
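A tiny sketch of what -l/--showlocals adds (the test below is invented for illustration): when the assertion fails, the traceback also lists the values of the local variables a and b.

# Hypothetical example: run with `pytest -l` to see the locals in the failure report

def test_showlocals_demo():
    a = [1, 2, 3]
    b = list(reversed(a))
    # fails, and with -l the values of a and b appear under the traceback
    assert a == b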
--durations=N
--durations=N show N slowest setup/test durations (N=0 for all).

This option is mostly used when tuning test code: it shows the N slowest test phases, and N=0 shows all of them, slowest first.

(venv) D:\Python_Pytest\TestScripts>pytest --duration=5
===================================================== test session starts ==============================================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items                                                             

test_asserts.py ...F...F                                                 [44%]
test_fixture1.py ..[55%]
test_fixture2.py ..[66%]
test_one.py .F                                                           [77%]
test_two.py ....                                                         [100%]

======================================================= FAILURES ======================================================
_______________________________________________________ test_add4 _____________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
_____________________________________________________ test_not_true _____________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
___________________________________________________ test_not_equal ______________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Use -v to get the full diff

test_one.py:9: AssertionError
================================================ slowest 5 test durations ===============================================
0.01s call     test_asserts.py::test_add4

(0.00 durations hidden.  Use -vv to show these durations.)
========================================== 3 failed, 15 passed in 0.27 seconds ==========================================

The output includes the hint (0.00 durations hidden. Use -vv to show these durations.). Adding -vv gives the following result:

(venv) D:\Python_Pytest\TestScripts>pytest --duration=5 -vv
============================= test session starts =============================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0-- c:\python37\python.exe
cachedir:.pytest_cache
rootdir: D:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items                                                             

test_asserts.py::test_add PASSED                                         [5%]
test_asserts.py::test_add2 PASSED                                        [11%]
test_asserts.py::test_add3 PASSED                                        [16%]
test_asserts.py::test_add4 FAILED                                        [22%]
test_asserts.py::test_in PASSED                                          [27%]
test_asserts.py::test_not_in PASSED                                      [33%]
test_asserts.py::test_true PASSED                                        [38%]
test_asserts.py::test_not_true FAILED                                    [44%]
test_fixture1.py::test_numbers_3_4 PASSED                                [50%]
test_fixture1.py::test_strings_a_3 PASSED                                [55%]
test_fixture2.py::TestUM::test_numbers_5_6 PASSED                        [61%]
test_fixture2.py::TestUM::test_strings_b_2 PASSED                        [66%]
test_one.py::test_equal PASSED                                           [72%]
test_one.py::test_not_equal FAILED                                       [77%]
test_two.py::test_default PASSED                                         [83%]
test_two.py::test_member_access PASSED                                   [88%]
test_two.py::test_asdict PASSED                                          [94%]
test_two.py::test_replace PASSED                                         [100%]

====================================================== FAILURES =========================================================
______________________________________________________ test_add4 ________________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
____________________________________________________ test_not_true ____________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
___________________________________________________ test_not_equal ____________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Full diff:
E         - (1, 2, 3)
E         ?  ^     ^
E         + (3, 2, 1)
E         ?  ^     ^

test_one.py:9: AssertionError
============================================== slowest 5 test durations ===============================================
0.00s setup    test_one.py::test_not_equal
0.00s setup    test_fixture1.py::test_strings_a_3
0.00s setup    test_asserts.py::test_add3
0.00s call     test_fixture2.py::TestUM::test_strings_b_2
0.00s call     test_asserts.py::test_in
========================================= 3 failed, 15 passed in 0.16 seconds =========================================
-r

Produces a short summary report. The characters that can be combined with -r are:

Option  Description
f       failed
E       errors
s       skipped
x       xfailed
X       xpassed
p       passed
P       passed with output
a       all except pP
A       all

For example, to see only the failed and the skipped tests, run:

(venv) E:\Python_Pytest\TestScripts>pytest -rfs
=================================================== test session starts =================================================
platform win32 -- Python 3.7.3, pytest-4.0.2, py-1.8.0, pluggy-0.12.0
rootdir: E:\Python_Pytest\TestScripts, inifile:
plugins: allure-adaptor-1.7.10
collected 18 items                                                             

test_asserts.py ...F...F                                                 [44%]
test_fixture1.py ..[55%]
test_fixture2.py ..[66%]
test_one.py .F                                                           [77%]
test_two.py ....                                                         [100%]

==================================================== FAILURES ===========================================================
____________________________________________________ test_add4 __________________________________________________________

    @pytest.mark.aaaa
    def test_add4():
>       assert add(17, 22) >= 50
E       assert 39 >= 50
E        +  where 39 = add(17, 22)

test_asserts.py:36: AssertionError
__________________________________________________ test_not_true _______________________________________________________

    def test_not_true():
>       assert not is_prime(7)
E       assert not True
E        +  where True = is_prime(7)

test_asserts.py:70: AssertionError
____________________________________________________ test_not_equal _____________________________________________________

    def test_not_equal():
>       assert (1, 2, 3) == (3, 2, 1)
E       assert (1, 2, 3) == (3, 2, 1)
E         At index 0 diff: 1 != 3
E         Use -v to get the full diff

test_one.py:9: AssertionError
================================================ short test summary info ================================================
FAIL test_asserts.py::test_add4
FAIL test_asserts.py::test_not_true
FAIL test_one.py::test_not_equal
======================================== 3 failed, 15 passed in 0.10 seconds ============================================
pytest --help

Getting More Options

Type pytest --help on the command line. The output below shows the usage pattern of the pytest command, usage: pytest [options] [file_or_dir] [file_or_dir] [...], followed by the full list of options and their descriptions.

C:\Users\Administrator>pytest --help
usage: pytest [options][file_or_dir][file_or_dir][...]
positional arguments:
  file_or_dir
general:
  -k EXPRESSION         only run tests which match the given substring
                        expression. An expression is a python evaluatable
                        expression where all names are substring-matched
                        against test names and their parent classes. Example:-k 'test_method or test_other' matches all test
                        functions and classes whose name contains
                        'test_method'or'test_other',while-k 'not
                        test_method' matches those that don't contain
                        'test_method'in their names. Additionally keywords
                        are matched to classes and functions containing extra
                        names in their 'extra_keyword_matches'set,as well as
                        functions which have names assigned directly to them.-m MARKEXPR           only run tests matching given mark expression.
                        example:-m 'mark1 and not mark2'.--markers             show markers (builtin, plugin and per-project ones).-x,--exitfirst       exit instantly on first error or failed test.--maxfail=num         exit after first num failures or errors.--strict              marks not registered in configuration fileraise
                        errors.-c file               load configuration from `file` instead of trying to
                        locate one of the implicit configuration files.--continue-on-collection-errors
                        Force test execution even if collection errors occur.--rootdir=ROOTDIR     Define root directory for tests. Can be relative path:'root_dir','./root_dir','root_dir/another_dir/';
                        absolute path:'/home/user/root_dir'; path with
                        variables:'$HOME/root_dir'.--fixtures,--funcargs
                        show available fixtures,sorted by plugin appearance
                        (fixtures with leading '_' are only shown with'-v')--fixtures-per-test   show fixtures per test
  --import-mode={prepend,append}
                        prepend/append to sys.path when importing test
                        modules, default is to prepend.--pdb                 start the interactive Python debugger on errors or
                        KeyboardInterrupt.--pdbcls=modulename:classname
                        start a custom interactive Python debugger on errors.
                        For example:--pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.--capture=method      per-test capturing method: one of fd|sys|no.-s                    shortcut for--capture=no.--runxfail            run tests even if they are marked xfail
  --lf,--last-failed   rerun only the tests that failed at the last run (orallif none failed)--ff,--failed-first  run all tests but run the last failures first. This
                        may re-order tests and thus lead to repeated fixture
                        setup/teardown
  --nf,--new-first     run tests from new files first, then the rest of the
                        tests sorted by file mtime
  --cache-show          show cache contents, don't perform collection or tests
  --cache-clear         remove all cache contents at start of test run.--lfnf={all,none},--last-failed-no-failures={all,none}
                        change the behavior when no test failed in the last
                        run or no information about the last failures was
                        found in the cache
  --sw,--stepwise      exit on test fail andcontinuefrom last failing test
                        next time
  --stepwise-skip       ignore the first failing test but stop on the next
                        failing test
  --allure_severities=SEVERITIES_SET
                        Comma-separated list of severity names. Tests only
                        with these severities will be run. Possible values
                        are:blocker, critical, minor, normal, trivial.--allure_features=FEATURES_SET
                        Comma-separated list of feature names. Run tests that
                        have at least one of the specified feature labels.--allure_stories=STORIES_SET
                        Comma-separated list of story names. Run tests that
                        have at least one of the specified story labels.

reporting:
  -v, --verbose         increase verbosity.
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity
  -r chars              show extra test summary info as specified by chars
                        (f)ailed,(E)error,(s)skipped,(x)failed,(X)passed,(p)passed,(P)passed with output,(a)allexcept pP.
                        Warnings are displayed at all times except when
                        --disable-warnings isset--disable-warnings,--disable-pytest-warnings
                        disable warnings summary
  -l,--showlocals      show localsin tracebacks (disabled by default).--tb=style            traceback print mode (auto/long/short/line/native/no).--show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on
                        failed tests. Default is'all'.--full-trace          don't cut any tracebacks (default is to cut).--color=color         color terminal output (yes/no/auto).--durations=N         show N slowest setup/test durations (N=0forall).--pastebin=mode       send failed|all info to bpaste.net pastebin service.--junit-xml=path      create junit-xml style report file at given path.--junit-prefix=str    prepend prefix to classnames in junit-xml output
  --result-log=path     DEPRECATED path for machine-readable result log.

collection:
  --collect-only        only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --deselect=nodeid_prefix
                        deselect item during collection (multi-allowed).--confcutdir=dir      only load conftest.py's relative to specified dir.--noconftest          Don't load any conftest.py files.--keep-duplicates     Keep duplicate tests.--collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --doctest-modules     run doctests inall.py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        choose another output formatfor diffs on doctest
                        failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        ignore doctest ImportErrors
  --doctest-continue-on-failure
                        for a given doctest,continue to run after the first
                        failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run. (warning:
                        this directory is removed if it exists)
  --version             display pytest lib version and import information.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin (multi-allowed). To avoid
                        loading of plugins, use the `no:` prefix, e.g.
                        `no:doctest`.--trace-config        trace considerations of conftest.py files.--debug               store internal tracing debug information in'pytestdebug.log'.-o OVERRIDE_INI,--override-ini=OVERRIDE_INI
                        override ini option with"option=value" style, e.g.
                        `-o xfail_strict=True-o cache_dir=cache`.--assert=MODE         Control assertion debugging tools.'plain' performs no
                        assertion debugging.'rewrite'(the default) rewrites
                        assert statements in test modules on import to provide
                        assert expression information.--setup-only          only setup fixtures, do not execute tests.--setup-show          show setup of fixtures while executing tests.--setup-plan          show what fixtures and tests would be executed but
                        don't execute anything.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        set which warnings to report, see -W option of python
                        itself.

logging:
  --no-print-logs       disable printing caught logs on failed tests.
  --log-level=LOG_LEVEL
                        logging level used by the logging module
  --log-format=LOG_FORMAT
                        log formatas used by the logging module.--log-date-format=LOG_DATE_FORMAT
                        log date formatas used by the logging module.--log-cli-level=LOG_CLI_LEVEL
                        cli logging level.--log-cli-format=LOG_CLI_FORMAT
                        log formatas used by the logging module.--log-cli-date-format=LOG_CLI_DATE_FORMAT
                        log date formatas used by the logging module.--log-file=LOG_FILE   path to a file when logging will be written to.--log-file-level=LOG_FILE_LEVEL
                        log file logging level.--log-file-format=LOG_FILE_FORMAT
                        log formatas used by the logging module.--log-file-date-format=LOG_FILE_DATE_FORMAT
                        log date formatas used by the logging module.

reporting:
  --alluredir=DIR       Generate Allure report in the specified directory (may
                        not exist)

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)       markers for test functions
  empty_parameter_set_mark (string) default marker for empty parametersets
  norecursedirs (args)     directory patterns to avoid for recursion
  testpaths (args)         directories to search for tests when no files or dire
  console_output_style (string) console output: classic orwith additional progr
  usefixtures (args)list of default fixtures to be used with this project
  python_files (args)      glob-style file patterns for Python test module disco
  python_classes (args)    prefixes or glob names for Python test classdiscover
  python_functions (args)  prefixes or glob names for Python test function and m
  xfail_strict (bool)      default for the strict parameter of xfail markers whe
  junit_suite_name (string) Test suite name for JUnit report
  junit_logging (string)   Write captured log messages to JUnit report: one of n
  doctest_optionflags (args) option flags for doctests
  doctest_encoding (string) encoding used for doctest files
  cache_dir (string)       cache directory path.
  filterwarnings (linelist) Each line specifies a pattern for warnings.filterwar
  log_print (bool)         default value for--no-print-logs
  log_level (string)       default value for--log-level
  log_format (string)      default value for--log-format
  log_date_format (string) default value for--log-date-format
  log_cli (bool)           enable log display during test run (also known as "li
  log_cli_level (string)   default value for--log-cli-level
  log_cli_format (string)  default value for--log-cli-format
  log_cli_date_format (string) default value for--log-cli-date-format
  log_file (string)        default value for--log-file
  log_file_level (string)  default value for--log-file-level
  log_file_format (string) default value for--log-file-format
  log_file_date_format (string) default value for--log-file-date-format
  addopts (args)           extra command line options
  minversion (string)      minimally required pytest version

environment variables:
  PYTEST_ADDOPTS           extra command line options
  PYTEST_PLUGINS           comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
  PYTEST_DEBUG             set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)

Understanding Pytest's Configuration Files

Pytest recognizes the following configuration files. Whichever one you choose, their formats are almost identical.

pytest.ini

The main pytest configuration file; it can change pytest's default behavior.

conftest.py

A local plugin library; the hook functions and fixtures it defines apply to the directory it lives in and all of its subdirectories.

__init__.py

When every test subdirectory contains this file, test files with the same name may exist in different test directories.

tox.ini

If you use the tox tool you will have a tox.ini; it is similar to pytest.ini, except that it is tox's configuration file. You can put the pytest configuration inside tox.ini so that you don't need both pytest.ini and tox.ini.

setup.cfg

It also uses the ini file format and can affect the behavior of setup.py. It matters when you publish a Python package: by adding a few lines to setup.py you can run all of the pytest test cases with python setup.py test (a hedged sketch of that wiring follows). If you plan to publish a Python package, setup.cfg can also store the pytest configuration.
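One common (now legacy) way to wire python setup.py test to pytest relied on the pytest-runner package; treat the following as a hedged sketch under that assumption rather than as this article's own project files:

# setup.py -- legacy pytest-runner integration (assumed example)
from setuptools import setup

setup(
    name='tasks',                      # hypothetical package name
    packages=['tasks'],
    setup_requires=['pytest-runner'],  # provides the `pytest` setup.py command
    tests_require=['pytest'],
)

Together with an [aliases] section in setup.cfg that maps test to pytest, python setup.py test then runs the pytest suite. In current projects, calling pytest directly (or via tox) is the usual choice instead.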

pytest.ini

;---
; Excerpted from "Python Testing with pytest",
; published by The Pragmatic Bookshelf.
; Copyrights apply to this code. It may not be used to create training material,
; courses, books, articles, and the like. Contact us if you are in doubt.
; We make no guarantees that this code is fit for any purpose.
; Visit http://www.pragmaticprogrammer.com/titles/bopytest for more book information.
;---
[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
; ... more options ...

tox.ini

;---
; Excerpted from "Python Testing with pytest",
; published by The Pragmatic Bookshelf.
; Copyrights apply to this code. It may not be used to create training material,
; courses, books, articles, and the like. Contact us if you are in doubt.
; We make no guarantees that this code is fit for any purpose.
; Visit http://www.pragmaticprogrammer.com/titles/bopytest for more book information.
;---
; ... tox specific stuff ...
[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
; ... more options ...

setup.cfg

; ... packaging specific stuff ...
[tool:pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
; ... more options ...

Running pytest --help shows all of the available ini-file options:

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)       markers for test functions
  empty_parameter_set_mark (string) default marker for empty parametersets
  norecursedirs (args)     directory patterns to avoid for recursion
  testpaths (args)         directories to search for tests when no files or directories are given in the command line.
  usefixtures (args)       list of default fixtures to be used with this project
  python_files (args)      glob-style file patterns for Python test module discovery
  python_classes (args)    prefixes or glob names for Python test class discovery
  python_functions (args)  prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool) disable string escape non-ascii characters, might cause unwanted side effects (use at your own

  console_output_style (string) console output: "classic", or with additional progress information ("progress" (percentage) | "count").
  xfail_strict (bool)      default for the strict parameter of xfail markers when not given explicitly (default:False)
  junit_suite_name (string) Test suite name for JUnit report
  junit_logging (string)   Write captured log messages to JUnit report: one of no|system-out|system-err
  junit_duration_report (string) Duration time to report: one of total|call
  junit_family (string)    Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args) option flags for doctests
  doctest_encoding (string) encoding used for doctest files
  cache_dir (string)       cache directory path.
  filterwarnings (linelist) Each line specifies a pattern for warnings.filterwarnings. Processed after -W and --pythonwarnings.
  log_print (bool)         default value for --no-print-logs
  log_level (string)       default value for --log-level
  log_format (string)      default value for --log-format
  log_date_format (string) default value for --log-date-format
  log_cli (bool)           enable log display during test run (also known as "live logging").
  log_cli_level (string)   default value for --log-cli-level
  log_cli_format (string)  default value for --log-cli-format
  log_cli_date_format (string) default value for --log-cli-date-format
  log_file (string)        default value for --log-file
  log_file_level (string)  default value for --log-file-level
  log_file_format (string) default value for --log-file-format
  log_file_date_format (string) default value for --log-file-date-format
  addopts (args)           extra command line options
  minversion (string)      minimally required pytest version
  rsyncdirs (pathlist)     list of (relative) paths to be rsynced for remote distributed testing.
  rsyncignore (pathlist)   list of (relative) glob-style paths to be ignored for rsyncing.
  looponfailroots (pathlist) directories to check for changes
  timeout (string)         Timeout in seconds before dumping the stacks.  Default is 0 which
                        means no timeout.
  timeout_method (string)  Timeout mechanism to use.  'signal' uses SIGALRM if available,
                        'thread' uses a timer thread.  The default is to use 'signal' and fall
                        back to 'th
  timeout_func_only (bool) When set to True, defers the timeout evaluation to only the test
function body, ignoring the time it takes when evaluating any fixtures
used in t
  pytester_example_dir (string) directory to take the pytester example files from

environment variables:
  PYTEST_ADDOPTS           extra command line options
  PYTEST_PLUGINS           comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
  PYTEST_DEBUG             set to enable debug tracing of pytest's internals

Plugins Can Add ini-File Options

Besides the options listed above, plugins and conftest.py files can add new ini options, and the newly added options also show up in pytest --help. A minimal sketch follows.
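As a minimal sketch (the option name nice_message and its use are hypothetical, not part of this article's project), a conftest.py can register an ini option via the pytest_addoption hook and read it back with config.getini():

# conftest.py -- hypothetical example of adding an ini option from a local plugin

def pytest_addoption(parser):
    # registers a new [pytest] ini option that will appear in `pytest --help`
    parser.addini("nice_message",
                  help="message printed in the report header",
                  default="Thanks for running the tests!")

def pytest_report_header(config):
    # read the ini value back and show it at the top of the test report
    return config.getini("nice_message")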

Changing the Default Command-Line Options

We have already touched on quite a few pytest options, for example -v/--verbose for verbose output and -l/--showlocals for inspecting the local variables in the tracebacks of failing tests. If you use such options all the time and don't want to type them repeatedly, you can rely on the addopts setting in pytest.ini:

[pytest]
addopts = -rsxX -l --tb=short --strict

What these options mean:

  • -rsxX tells pytest to report the reasons for all test cases that were skipped, expected to fail, or expected to fail but passed
  • -l tells pytest to report the local variables in the traceback of every failing test case
  • --tb=short shortens the traceback output, keeping only the file name and line number
  • --strict forbids using markers that are not registered in the configuration file

Registering Markers to Guard Against Typos

Register markers in pytest.ini:

[pytest]
markers =
    smoke: Run the smoke test functions for tasks project
    get: Run the test functions that test tasks.get()

Once the markers are registered, you can list them with pytest --markers:

(venv) E:\Programs\Python\Python_Pytest\pytest-nice>pytest --markers
@pytest.mark.smoke: Run the smoke test functions for tasks project

@pytest.mark.get: Run the test functions that test tasks.get()

This way, once --strict is added to addopts, unregistered markers can no longer be used, which keeps marker typos to a minimum. A small usage sketch follows.
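As a small sketch of how the registered markers are applied (the test bodies here are placeholders, not code from the tasks project):

import pytest

@pytest.mark.smoke
def test_added_task_has_id():
    # selected by: pytest -m smoke
    assert True

@pytest.mark.smoke
@pytest.mark.get
def test_get_returns_task():
    # selected by: pytest -m "smoke and get"
    assert True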

Specifying a Minimum pytest Version

The minversion option specifies the minimum pytest version required to run the tests. For example, to check that two floating-point values are approximately equal we use the approx() function, but that feature only appeared in pytest 3.0, so we can add the following to pytest.ini (a short approx() example follows the snippet):

[pytest]
minversion = 3.0
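A quick illustration of the approx() function mentioned above (the test itself is just an example, not from this article's project):

import pytest

def test_float_sum_is_close():
    # 0.1 + 0.2 is not exactly 0.3 in binary floating point,
    # but pytest.approx accepts it within a relative tolerance
    assert 0.1 + 0.2 == pytest.approx(0.3)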

Telling pytest to Ignore Certain Directories

When pytest searches for tests, it recursively traverses every subdirectory; the norecursedirs option can be used to narrow that search.
The default value of norecursedirs is

.* build dist CVS _darcs {arch} *.egg

To make pytest ignore the src directory of the Tasks project, add it to norecursedirs:

[pytest]
norecursedirs = .* venv src *.egg dist build

Specifying Test Directories

testpaths tells pytest where to look: it is a list of paths relative to rootdir that restricts the search for test cases. The option only takes effect when pytest is not given a file or directory argument or a test case identifier.

tasks_proj/
|------ pytest.ini
|------ src
|       |------ tasks
|               |------ api.py
|               |------ ......
|------ test
        |------ conftest.py
        |------ func
        |       |------ __init__.py
        |       |------ test_add.py
        |       |------ ......
        |------ unit
                |------ __init__.py
                |------ test_task.py
                |------ ......

Given a project structure like this, we want to make the test directory pytest's search path:

[pytest]
testpaths = test

Then, as long as pytest is started from tasks_proj, it will go straight to the test path.

Changing the Test Discovery Rules

pytest discovers and runs tests according to a fixed set of rules:

  • Start the search from one or more directories
  • Recursively search those directories and all their subdirectories for test modules
  • A test module is a file named test_*.py or *_test.py
  • Inside a test module, look for functions whose names start with test_
  • Look for classes whose names start with Test, first filtering out classes that define an __init__ method, then look for methods in those classes whose names start with test_

Now let's change these rules.
By default pytest looks for classes whose names start with Test and that have no __init__() method; this can be changed with python_classes:

[pytest]
python_classes = *Test Test* *Suite

Like python_classes, python_files changes the default discovery rule so that pytest no longer looks only for files starting with test_ or ending with _test:

[pytest]
python_files = test_* *_test check_*

In the same way, the naming rule for discovering test functions and methods can be changed (a brief example follows the snippet):

[pytest]
python_functions = test_* check_*
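As a brief sketch of the effect (assuming both ini settings above are in place; the file and function names here are made up for illustration):

# check_health.py -- collected because python_files now includes check_*

def check_service_responds():      # collected because python_functions now includes check_*
    assert "pong" == "pong"

def verify_nothing():              # still NOT collected: matches neither test_* nor check_*
    assert True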

Disabling XPASS

Setting

xfail_strict = true

causes test cases that are marked @pytest.mark.xfail but actually pass to be reported as failures as well. A small example follows.
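A minimal sketch of the behavior (both tests are invented for illustration):

import pytest

@pytest.mark.xfail(reason="division by zero is expected to raise")
def test_divide_by_zero():
    # really fails -> reported as xfail
    assert 1 / 0 == 1

@pytest.mark.xfail(reason="we expected this to fail, but it passes")
def test_addition():
    # passes although marked xfail -> reported as XPASS,
    # or as a failure once xfail_strict = true is set
    assert 1 + 1 == 2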

Avoiding File Name Collisions

duplicate
|------ dup_a
|       |------ test_func.py
|------ dup_b
        |------ test_func.py

Write a function test_a() in one of the files and test_b() in the other:

def test_a():
    pass


def test_b():
    pass

With this directory layout, the two files share the same name; even though their contents differ, they still collide. Each .py file can be run on its own, but running pytest from the duplicate directory fails with the following error:

(venv) E:\Programs\Python\Python_Pytest\SourceCode\ch6\duplicate>pytest
================== test session starts ===================================
platform win32 -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: E:\Programs\Python\Python_Pytest, inifile: pytest.ini
plugins: xdist-1.29.0, timeout-1.3.3, repeat-0.8.0, nice-0.1.0, instafail-0.4.1, forked-1.0.2, emoji-0.2.0, allure-pytest-2.6.3
collected 1 item /1 errors                                                                                                                                                

=========================================== ERRORS ============================
____________________ ERROR collecting SourceCode/ch6/duplicate/b/test_func.py __________________________
import file mismatch:
imported module 'test_func' has this __file__ attribute:
  E:\Programs\Python\Python_Pytest\SourceCode\ch6\duplicate\a\test_func.py
which is not the same as the test file we want to collect:
  E:\Programs\Python\Python_Pytest\SourceCode\ch6\duplicate\b\test_func.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================== 1 error in 0.32 seconds ==========================================

The error message does not clearly point out what the problem is. To fix it, simply add an empty __init__.py file to each subdirectory; adding __init__.py to every test subdirectory is a good habit.


Reposted from: https://blog.csdn.net/dawei_yang000000/article/details/140273888
Copyright belongs to the original author, Davieyang.D.Y.
