Are you looking for something like this?

    def pytest_generate_tests(metafunc):
        if 'file_name' in metafunc.fixturenames:
            files = []
            if metafunc.config.option.all_files:
                files = list_all_files()
            fn = metafunc.config.option.file_name
            if fn:
                files.append(fn)
            metafunc.parametrize('file_name', files, scope='module')

No need to define a file_name function.
To get a fixture by its name as a string, you can use request.getfuncargvalue() inside a test function or another fixture. You can try something along these lines:

    import pytest

    @pytest.fixture
    def fixture1():
        return "I'm fixture 1"

    @pytest.fixture(scope='module')
    def fixture2():
        return "I'm fixture 2"

    @pytest.fixture(params=[1, 2])
    def all_fixtures(request):
        your_fixture = request.getfuncargvalue("fixture{}".format(request.param))
        # here...
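For completeness, a minimal self-contained sketch of the same idea (the fixture names are illustrative; note that getfuncargvalue() was renamed getfixturevalue() in pytest 3.0):

    import pytest

    @pytest.fixture
    def fixture1():
        return "I'm fixture 1"

    @pytest.fixture
    def fixture2():
        return "I'm fixture 2"

    @pytest.fixture(params=[1, 2])
    def all_fixtures(request):
        # Look up the fixture whose name matches the current param.
        return request.getfuncargvalue("fixture{}".format(request.param))

    def test_all(all_fixtures):
        # Runs twice: once with fixture1's value, once with fixture2's.
        assert all_fixtures.startswith("I'm fixture")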
A better approach is to just write a fixture that creates NewObject for you and cleans up afterwards (note the obj fixture must request example_smtp as an argument to have it in scope):

    import pytest

    @pytest.fixture
    def example_smtp():
        return "example"

    class TestClass(object):
        @pytest.yield_fixture(autouse=True)
        def obj(self, example_smtp):
            obj = NewObject(example_smtp)
            obj.initialize()
            yield obj
            obj.cleanup()

        def test_function(self, obj, example_smtp):
            # use obj here
            some_action(obj)

But if you really...
I'm not sure you can access the exception message from a fixture, but you can implement a custom pytest_runtest_logreport hook (untested):

    def pytest_runtest_logreport(report):
        fo = open("/Users/mahesh.nayak/Desktop/logs/test1.log", "a")
        fo.write('%s (duration: %s)\n' % (report.nodeid, report.duration))
        fo.close()

Hope that helps.
D'oh, I figured it out after all: you add it in the py.test Options area of the Run/Edit Configuration dialog.
Use the conditional import machinery pytest provides:

    nlopt = pytest.importorskip('nlopt')

Put that line inside the specific test function that uses nlopt (or in the setup method for a set of functions) and it will skip only those tests when it can't do the import.
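A minimal sketch of how that looks in practice (the test body and the nlopt calls are illustrative):

    import pytest

    def test_optimizer():
        # Skips this test (rather than erroring) if nlopt is not installed.
        nlopt = pytest.importorskip('nlopt')
        opt = nlopt.opt(nlopt.LN_COBYLA, 2)  # example usage, adjust to your code
        assert opt.get_dimension() == 2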
python,unit-testing,mocking,py.test,monkeypatching
So, after some more digging into the matter, I found a solution which satisfies me for now. I want to share it in case anyone else runs into the same problem. Actually it is quite simple, and with a helper class from https://gist.github.com/daltonmatos/3280885 I came up with the following test code:...
Implement a pytest_exception_interact hook in a conftest.py file, which according to the documentation is "called when an exception was raised which can potentially be interactively handled":

    def pytest_exception_interact(node, call, report):
        if report.failed:
            # call.excinfo contains an ExceptionInfo instance

It's not clear from your question exactly what you want to gather from the...
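For example, a minimal sketch that logs each failure's exception text to a file (the log path and format are arbitrary):

    # conftest.py
    def pytest_exception_interact(node, call, report):
        if report.failed:
            with open("failures.log", "a") as fo:
                # call.excinfo is an ExceptionInfo instance
                fo.write("%s: %s\n" % (node.nodeid, call.excinfo.exconly()))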
python,unit-testing,testing,mocking,py.test
To use patch in this kind of test you should use the create parameter, which forces the attribute to be created if it does not exist. So your test should do something like this:

    def test_MyContextManager():
        with patch.object(MyClass, 'myfunc', create=True, return_value=None) as mock_obj:
            with MyContextManager():
                pass
python,decorator,py.test,python-decorators
It seems py.test doesn't use the test fixtures when evaluating the expression for skipif. By your example, test_ios is actually successful because it is comparing the platform function found in the module's namespace to the "ios" string, which evaluates to False, hence the test is executed and succeeds. If pytest...
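To make the failure mode concrete, a hedged sketch (the platform name and function are assumed from the question):

    import pytest

    def platform():
        return "ios"

    # The string skipif expression is evaluated in the module namespace, so
    # "platform" below is the function object itself, not its return value;
    # a function object is never equal to the string "ios", so no skip happens.
    @pytest.mark.skipif('platform == "ios"')
    def test_ios():
        assert platform() == "ios"  # runs, and passes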
You can use a pytest fixture and parametrize it:

    @pytest.fixture(
        scope='module',  # so that it's reused in the module scope
        params=[100, 200]
    )
    def simulation(request):
        speed = request.param
        # create the simulation
        return df

    class Test:
        def test1(self, simulation):
            ...

        def test2(self, simulation):
            ...
python,command-line-arguments,py.test
If I understand your question correctly, you can create a conftest.py file and add options to pytest using the pytest_addoption hook:

    def pytest_addoption(parser):
        parser.addoption('--arg1', dest="arg1", help="first argument")

Now py.test understands this option, so if you execute:

    py.test --arg1 hey

tests in the same directory and below can access the option...
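A minimal sketch of reading that option back inside a test, via the built-in request fixture (the test name is illustrative):

    def test_arg1(request):
        arg1 = request.config.getoption("--arg1")
        assert arg1 == "hey"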
It seems that this decorator does not work well with pytest's funcargs. The only solution I see is to manually call the httpretty.enable() and httpretty.disable() methods. Or create a fixture:

    @pytest.yield_fixture
    def http_pretty_mock():
        httpretty.enable()
        yield
        httpretty.disable()

    def test_write_file_from_datasource_failing(http_pretty_mock, tmpdir):
        tmpdir = str(tmpdir)
        # mock the connection ...
python,webdriver,py.test,allure
Instead of setting the type as the string png, you need to use the allure module's attachment type constant, which is an Enum with an extension attribute defined:

    from allure.constants import AttachmentType

    allure.attach('screenshot', driver.get_screenshot_as_png(), type=AttachmentType.PNG)
python,ios,instruments,appium,py.test
Ok, after digging for a while I finally found the answer. Unlike Instruments, where you can perform any tap relative to the full screen, Appium has chosen to limit you to the bounds of your application. This means that if the menu bar is showing, it will reduce your clickable area...
1) First of all, you can declare those fixtures not only in conftest.py but in any Python module you want, and you can import that module. You can also use fixtures the same way you used the setUp method:

    @pytest.fixture(scope='class')
    def input(request):
        request.cls.varA = 1
        request.cls.varB = 2
        request.cls.varC...
No, you're doing everything correctly as the documentation shows. I'm unable to get it working either. On the other hand, pytest has the tmpdir fixture, which does what you need in a test: it gives you a unique temporary directory for your test. Here is an example:

    def test_one(tmpdir):
        test_file = tmpdir.join('dir', 'file').ensure(file=True)...
python,unit-testing,python-2.7,py.test
Use the pytest.mark.usefixtures marker when you don't need to directly access the fixture object (the return value of the fixture function). If you do need to access the fixture object, use fixture-as-function-argument (the first way in your code). The reason for the error in the second code: my_fixture is not defined in...
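A short sketch contrasting the two styles (the fixture name is illustrative):

    import pytest

    @pytest.fixture
    def my_fixture():
        return 42

    # Style 1: fixture as function argument -- gives access to the value.
    def test_with_value(my_fixture):
        assert my_fixture == 42

    # Style 2: usefixtures -- the fixture runs, but its value isn't needed.
    @pytest.mark.usefixtures('my_fixture')
    def test_side_effect_only():
        assert True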
The function f accepts keyword arguments, so you need to assign your test parameters to keywords. Luckily, Python provides a very handy way of passing keyword arguments to a function: the dictionary:

    d = {'h': 4}
    f(**d)

The ** prefix before d will "unpack" the dictionary, passing each key/value pair as...
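If the parameters come from pytest.mark.parametrize, the same unpacking applies; a hedged sketch (f and its signature are assumed, not taken from the question):

    import pytest

    def f(g=1, h=2):
        return g + h

    @pytest.mark.parametrize('kwargs,expected', [
        ({'h': 4}, 5),
        ({'g': 3, 'h': 4}, 7),
    ])
    def test_f(kwargs, expected):
        # Unpack the parametrized dict into keyword arguments.
        assert f(**kwargs) == expected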
I found out that you can access the gateway id in the following way:

    slaveinput = getattr(session.config, "slaveinput", None)
    if slaveinput:
        gatewayid = slaveinput['slaveid']

Of course, you need to be in a place where you can access the session.config object.
The failure message will include all the parametrized values, so you can add a description text to the parameters:

    @pytest.mark.parametrize("description,unsorted,expected", [
        ("positive", [2, 1, 3], [1, 2, 3]),
        ("negative", [-2, -1, -3], [-3, -2, -1]),
        ("including zero", [2, 0, 1], [0, 1, 2]),
        ("duplicate values", [0, 1, 0], [0, 0, 1]),
        ("floats", [0.1, 0.3, 0.2], [0.1, 0.2, 0.3]),
    ])
    def test_merge_sort(description, unsorted, expected):
        assert merge_sort(unsorted) == expected
python,unit-testing,flask,py.test,werkzeug
The credentials for HTTP Basic authentication must be a username and a password separated by a colon, base64-encoded. Try this:

    from base64 import b64encode

    def test_index(test_client):
        credentials = b64encode(b"test_user:test_password").decode("ascii")
        res = test_client.get("/", headers={"Authorization": "Basic {}".format(credentials)})
        assert res.status_code == 200

(Note the .decode("ascii"): under Python 3, b64encode returns bytes, which would otherwise format as b'...'.)
There is a py.test plugin, pytest-pycharm, that will halt the PyCharm debugger when a test raises an uncaught exception.
python,linux,module,typeerror,py.test
As jcoppens mentioned, you will want to fix your imports. But your test has a couple of further issues. Your test should perhaps be:

    def test_getSex():
        assert len(person1.getSex()) == 1

Note getSex() - if you don't have the parentheses you are asserting the length of the method, not the result...
python,unit-testing,flask,flask-sqlalchemy,py.test
1. According to Session Basics - SQLAlchemy documentation:

    commit() is used to commit the current transaction. It always issues flush() beforehand to flush any remaining state to the database; this is independent of the "autoflush" setting.

So transaction.rollback() in the session fixture function does not take effect, because the transaction...
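A common pattern for this situation is SQLAlchemy's documented "Joining a Session into an External Transaction" recipe: wrap each test in an outer connection-level transaction that is rolled back at teardown, even if the code under test calls commit(). A hedged sketch (the engine URL and fixture name are illustrative):

    import pytest
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    engine = create_engine('sqlite://')

    @pytest.fixture
    def session():
        # Bind the session to a connection with a transaction in progress,
        # so the session joins that external transaction.
        connection = engine.connect()
        transaction = connection.begin()
        session = sessionmaker(bind=connection)()
        yield session
        session.close()
        transaction.rollback()  # rolls back even work that was commit()ed
        connection.close()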
I think a parametrized fixture is what will work very well for you:

    import pytest

    @pytest.fixture
    def backends():
        """Mapping of possible backend ids to their constructor functions."""
        return {
            1: connect_to_backend_1,
            2: connect_to_backend_2
        }

    @pytest.fixture(scope="session", params=[1, 2])
    def backend(request, backends):
        """Parametrized backend instance."""
        return backends[request.param]()

    def test_contract_method_1(backend):
        result = run_contract_method_1()
        assert...
Use this code:

    @pytest.fixture(params=['web01-east.domain.com', 'redis01-master-east.domain.com', 'web01.domain.com'])
    def patch_socket(request, monkeypatch):
        def gethostname():
            return request.param
        monkeypatch.setattr(socket, 'gethostname', gethostname)

    def test__get_pod(patch_socket):
        assert __get_pod() == 'east'

This will create 3 tests on the fly. If you run with -vv you will see something like:

    <FILE>::test__get_pod[web01-east.domain.com] PASSED
    <FILE>::test__get_pod[redis01-master-east.domain.com] PASSED...
You get this error because you are trying to mix two independent testing styles that py.test supports: classical unit testing and pytest's fixtures. What I suggest is not to mix them, and instead simply define a class-scoped fixture like this:

    import pytest

    class A_Helper:
        def __init__(self, fixture):
            print...
It looks like the script is trying to use ANSI escape sequences to display colours and other formatting. However, these seem to be getting interpreted as UTF-8. My recommendation would be to check your terminal settings.
According to the documentation, you should do either:

    self.pytest_args = ["--cov", "my_pkg"]

or:

    self.pytest_args = "--cov my_pkg"
The way I handle this situation is to move all shared fixtures to a top-level conftest file, like this:

    conftest.py
    tests/
        [conftest.py]
        test_integration.py
    server/
        tests/
            [conftest.py]
            test_server.py
    client/
        tests/
            [conftest.py]
            test_client.py

This does sometimes make things a bit less nice, as you end up with a bunch of not...
Mmm... it's not well documented, mainly because it's a bit confused and not that well defined. You can use 'and', 'or' and 'not' to match strings in a test name and/or its markers. At heart, it's an eval. For the moment (until the syntax is hopefully improved) my advice is...
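A few illustrative invocations (the test-name substrings are made up):

    # run tests whose name (or markers) contain "login" but not "logout"
    py.test -k "login and not logout"

    # run tests matching either substring
    py.test -k "smoke or regression"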
You need to add the marker names to your pytest.ini to register them. See http://pytest.org/latest/example/markers.html#registering-markers
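A minimal sketch of what that registration looks like in pytest.ini (the marker names are illustrative):

    [pytest]
    markers =
        webtest: mark a test as a webtest
        slow: mark a test as slow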
You can't specify the type of report in the .coveragerc file. If you want to stop using pytest-cov, then you need two commands: one to run the tests under coverage, and one to generate the report:

    $ coverage run -m py.test etc etc
    $ coverage xml
I see more options.

1) Use the path:

    py.test <path_to_my_app>

py.test will search for tests only in that subdir, or:

    py.test <path_to_file>::<test>

will execute only that test.

2) Decorate each test with a specific marker:

    @pytest.mark.app1
    def test_bla_bla():
        ....

and run with py.test -m app1

3) Use -k, passing the test function...
python,python-2.7,pandas,py.test
Suppose you want to select columns two and three to add:

    col_to_add = ['two', 'three']

Use sum(axis=1) to concatenate these columns:

    df['uid'] = df[col_to_add].sum(axis=1)
python,google-app-engine,unit-testing,py.test,gae-sessions
The problem is that your GAE sessions code is not called until the app is called, and the app is only called when you make a request to it. Try inserting a request call before you check for the session value. Check out the revised test_handlers.py code below:

    def test_session(anon_user):...
It is not yet explicitly documented, but pytest-allure-adaptor version 1.5.4 converts pytest's statuses to their allure counterparts as follows:

    PASSED  => PASSED
    FAILED  => FAILED
    SKIPPED => CANCELLED
    ERROR   => BROKEN
    xfail   => PENDING
    XPASS   => FAILED (because allure has no special status for that)
py.test,parallel-testing,astropy
I would suspect the SSD is the limiting factor there. Many of the tests are CPU-bound, but just as many make heavy use of the disk (temp files and the like). Those could perhaps be made even slower by running in parallel. Beyond that it's hard to say much, since it depends...
Ask yourself the question, "why is the order important?" If you can't tell the difference between the calls from the outside, you don't have to test them. If these are, for example, database updates, you have to write a database mockup which logs the order of updates, or make a select statement,...
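If you do need to assert on order, one hedged sketch using unittest.mock (the function and table names are illustrative): a single Mock records all calls on its child attributes, in order, in mock_calls:

    from unittest import mock

    def apply_updates(db):
        db.update("users", 1)
        db.update("orders", 2)

    def test_update_order():
        db = mock.Mock()
        apply_updates(db)
        # mock_calls preserves the order in which the calls were made.
        assert db.mock_calls == [
            mock.call.update("users", 1),
            mock.call.update("orders", 2),
        ]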
xdist output is a little different from standard pytest's because the tests run in parallel. When you see:

    test_save_load.py:135: test_save_load[obj0]

it means that the test has been sent to one of the nodes for execution. When you see:

    [gw0] FAILED test_save_load.py:135: test_save_load[obj0]

it just means that the test has...
nose,py.test,nosetests,python-unittest,python-nose
I found a solution for it using the pytest-ordering plugin provided here. I ran the tests with py.test MyTestModule.py -vv and the results were as follows; the tests ran in the order of their appearance:

    MyTestModule.py::test2::test1 PASSED
    MyTestModule.py::test2::test0 PASSED
    MyTestModule.py::test1::testB PASSED
    MyTestModule.py::test1::testA FAILED
Are you sure you have pytest-django installed? I installed pytest-django on my machine and ran a simple project.

Install:

    pip install pytest-django

Setup and run my sample test:

    platform linux -- Python 3.4.3 -- py-1.4.30 -- pytest-2.7.2
    rootdir: /home/matt/projects/test_app/src, inifile: pytest.ini
    plugins: django
    collected 1 items

    tests/test_example.py .

Sample code:...
Approach #1: This is, I think, the road you were heading down. Basically, just treat test.py as a black-box process and use the exit code to determine whether there were any test failures (i.e. a non-zero exit code):

    exit_code = subprocess.Popen(["py.test", "smoke_test_suite.py"]).wait()
    test_failures = bool(exit_code)

Approach...
postgresql,fixtures,py.test,python-decorators,python-asyncio
I’m currently trying to solve a similar problem. Here’s what I’ve come up with so far. It seems to work but needs some clean-up:

    # tests/test_foo.py
    import asyncio

    @asyncio.coroutine
    def test_coro(loop):
        yield from asyncio.sleep(0.1)
        assert 0

    # tests/conftest.py
    import asyncio
    import pytest

    @pytest.yield_fixture
    def loop():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        yield loop
        loop.close()...
You can make the __init__.py file aware of py.test as described here. So basically create a mymodule/conftest.py file with the following content (note sys must be imported at module level so pytest_unconfigure can see it):

    import sys

    def pytest_configure(config):
        sys._called_from_test = True

    def pytest_unconfigure(config):
        del sys._called_from_test

and in the __init__.py file simply check if you are inside the py.test session, like:

    import sys...
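The check in __init__.py would then look something like this (a sketch; the attribute name follows the conftest above, and what goes in each branch is up to you):

    # mymodule/__init__.py
    import sys

    if hasattr(sys, '_called_from_test'):
        # we are running inside a py.test session
        pass
    else:
        # we were imported "normally"
        pass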
Create a setUp() method (called before each test) or a tearDown() method (called after each test) and remove the persisting object there.
Thanks to help from Holger himself (thanks @hpk42!), I've got something that works. Only slightly magic/hacky. The solution is to use a py.test hook called pytest_pyfunc_call, along with a decorator called hookwrapper. Together they give me a way to hook in some code both before and after the test runs, but also...
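A minimal sketch of the hookwrapper pattern (what runs before and after is illustrative):

    # conftest.py
    import pytest

    @pytest.mark.hookwrapper
    def pytest_pyfunc_call(pyfuncitem):
        # code here runs before the test function is called
        outcome = yield
        # code here runs after; outcome.excinfo is set if the test raised
        print("test finished:", pyfuncitem.name)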
python,mocking,py.test,python-mock
The simpler and cleaner way to do it is:

    with mock.patch("mymodule.requests.post",
                    side_effect=[Mock(status_code=400), Mock(status_code=200)]) as mock_post:
        mymodule.some_function()

patch creates the mock_post object by MagicMock(side_effect=mock_responses) and replaces the mymodule.requests.post reference. You can also use mock_post to check the post() calls, with something like:

    mock_post.assert_has_calls([mock.call(first_url, first_params),
                                mock.call(second_url, second_params)])

You can do the same work by build...
As per https://pytest.org/latest/example/pythoncollection.html#changing-naming-conventions, I've added these lines to my setup.cfg file:

    [pytest]
    python_files=*py
I have changed my code to look like:

    try:
        import foo_fast as foo
    except ImportError:
        import foo

    def some_function(a, b):
        return foo.bar(a, b)

and can now test it like:

    @pytest.fixture(params=(True, False))
    def use_fast(request, monkeypatch):
        if request.param:
            import foo
            import foo_fast
            monkeypatch.setattr(foo_fast, 'bar', foo.bar)
        return request.param

    def test_foo(use_fast):
        assert some_function(1, 2)...
I'm not sure this will solve your problem, but you can pass --durations=N to print the slowest N tests after the test suite finishes.
This configuration option isn't part of pytest-cov. In the configuration file for the underlying tool coverage.py, which is called .coveragerc by default, you can add:

    [html]
    directory = differentname

See the documentation for details: https://github.com/nedbat/coveragepy/blob/master/doc/config.rst
You can put your stuff in other modules and reference them using a pytest_plugins variable in your conftest.py:

    pytest_plugins = ['module1', 'module2']

This will also work if your conftest.py has hooks in it.
python,code-coverage,nose,py.test
The issue was that a few tests were using setup and teardown but the classes were not inheriting from unittest.TestCase. pytest was skipping these tests.
python,apache-spark,py.test,pyspark
You can use your config file to sort out the problem: edit spark-defaults.conf and add the spark.driver.extraClassPath property:

    spark.driver.extraClassPath /Volumes/work/bigdata/CHD5.4/spark-1.4.0-bin-hadoop2.6/lib/spark-csv_2.11-1.1.0.jar:/Volumes/work/bigdata/CHD5.4/spark-1.4.0-bin-hadoop2.6/lib/commons-csv-1.1.jar

After setting the above, you don't even need the packages flag when running from the shell:

    sqlContext = SQLContext(sc)
    df = sqlContext.read.format('com.databricks.spark.csv').options(header='false').load(BASE_DATA_PATH + '/ssi.csv')
python,nose,py.test,python-unittest
That is a very broad question with a lot of resources available. However, I recommend py.test because getting started is very easy despite its full set of tools. Nose needs a bit more configuration than py.test before starting. unittest is like JUnit in Java, which is not...
Got it working finally! After downloading pytest, I ran the following commands and it worked like magic. I think I had earlier missed putting "sudo" in front of the install command:

    $ python setup.py build
    $ sudo python setup.py install

The output said:

    ...
    Installing py.test script to /usr/local/bin
    Installing py.test-2.7 script to /usr/local/bin
    Installed...
python,python-3.x,tdd,py.test,python-cmd
You can mock input or the input stream passed to cmd to inject user input, but I find it simpler and more flexible to test via the onecmd() Cmd API method and trust how Cmd reads input. That way you don't need to care how Cmd does the dirty work, and you test directly by...
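A minimal sketch of that style (the command class and its do_greet method are illustrative):

    import cmd

    class MyShell(cmd.Cmd):
        def do_greet(self, line):
            self.last_greeting = "hello " + line

    def test_greet():
        shell = MyShell()
        # Drive the command loop directly, bypassing stdin entirely.
        shell.onecmd("greet world")
        assert shell.last_greeting == "hello world"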
python,unit-testing,testing,py.test
Testing helper functions makes a lot of sense - in this context, these helper functions are the basic building blocks (read: units) for your application. Having tests that prove that they function properly will allow you to easily change their implementation without worrying about whether you're breaking something else or...
Instead of monkey-patching socket.gethostname, make __get_pod accept a parameter. It will make the code more testable. Here's an example with pytest.mark.parametrize:

    import re
    import pytest

    def __get_pod(hostname):
        # dummy impl.
        hostname = hostname.split('.', 1)[0]
        if '-' not in hostname:
            return 'Unknown'
        hostname = re.sub(r'\d+', '', hostname)
        return hostname.rsplit('-',...
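The truncated example might continue along these lines (a sketch, assuming the cut-off rsplit returns the segment after the last dash; the hostnames and expected pods are taken from elsewhere in the thread):

    @pytest.mark.parametrize('hostname,expected', [
        ('web01-east.domain.com', 'east'),
        ('redis01-master-east.domain.com', 'east'),
        ('web01.domain.com', 'Unknown'),
    ])
    def test_get_pod(hostname, expected):
        assert __get_pod(hostname) == expected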
If the plugin is not importable, it is because it is not on sys.path. Try explicitly adding it using the PYTHONPATH variable:

    PYTHONPATH=/path/to/dir/of/myplugin py.test -p myplugin ../mytool/test
python,testing,mocking,py.test
It is a little bit hard to guess the best approach from a synthetic example. When possible, use just the mocks you need and keep as many real objects as you can; that is best. But it is a trade-off game, and when creating the real object is hard and the dependencies are deep and...
I just tried your example verbatim and it worked fine in pytest 2.6.4. Perhaps you are misspelling parametrize? You misspelled it in the title, and it is a common mistake, as can be seen in this issue.
The first thing you need to do is modify your test function so that it takes an argument named patch_socket:

    def test__get_pod_single_dash(patch_socket):
        assert __get_pod() == 'east'

This means that py.test will call your fixture and pass the result to your function. The important thing here is that it does get...
python,python-3.x,flask,py.test
app.client is already an instance; you shouldn't call it again. Ultimately, this test makes no sense. Of course client is a test client; that's how you just created it in the fixture. Also, the clients will never be equal; they are different instances.

    from flask.testing import FlaskClient

    assert app.client ==...
Unfortunately, there seems to be no configuration or command-line flag for that, since that's hard-coded deep inside pytest: when you define --verbose, you get the whole package. However, I've managed to come up with this hackish hack. Put the following function into your conftest.py:

    def pytest_configure(config):
        terminal = config.pluginmanager.getplugin('terminal')...
You really don't need to subclass unittest.TestCase here. You can also "parametrize" tests using pytest:

    import pytest
    from app.objects import Root

    # Example
    known_links = [
        "http://www.google.com",
        "http://www.walla.com"
    ]

    @pytest.fixture()
    def root(request):
        return Root()  # Root object

    @pytest.mark.parametrize("known_link", known_links)
    def test_get_all_links_known_links(root, known_link):
        html = Parser(open(os.path.normpath(os.path.join(root, "test.html")))...
ddt is meant to be used by TestCase subclasses, so it won't work for bare test classes. But note that pytest can run TestCase subclasses which use ddt just fine, so if you already have a ddt-based test suite, it should run without modification under the pytest runner. Also note...
python,python-3.x,methods,py.test
Pytest captures stdout; print() writes to stdout, and you'll only see the output if there is a test failure. Use the -s flag if you want to see stdout output as the tests run:

    py.test -s
You can use metafunc: create a conftest.py file with pytest_addoption and pytest_generate_tests functions:

    def pytest_addoption(parser):
        parser.addoption("--libname", action="append", default=[],
                         help="name of the tested library")

    def pytest_generate_tests(metafunc):
        if 'libname' in metafunc.fixturenames:
            metafunc.parametrize("libname", metafunc.config.option.libname)

And in the function in your tests.py file you can use importlib and ask for libname:

    def test_import(libname):
        import...
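The truncated test might look something like this (a sketch using importlib, as the answer suggests; the sanity check is illustrative):

    import importlib

    def test_import(libname):
        # Import the library named on the command line and sanity-check it.
        module = importlib.import_module(libname)
        assert module is not None

Since the option uses action="append", it can be passed several times, e.g. py.test --libname=json --libname=collections, generating one test per library.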