Run automated tests from test plans in the Test hub

Last Update: 6/21/2017

Team Services | TFS 2017 Update 2

Automate test cases in your test plans and run them directly from the Test hub:

  • Provides a user-friendly process for testers who may not be well versed in running tests in Build or Release workflows.

  • Gives you the flexibility to run selected tests on demand, rather than scheduled testing in Build or Release workflows where all tests meeting the filter criteria are run.

  • Useful when you want to rerun a few tests that failed due to test infrastructure issues, or you have a new build that includes fixes for failed tests.

You will need:

Set up your environment

  1. In the Test plans tab of the Test hub, choose your test plan, open the shortcut menu, and choose Test plan settings.

    Choosing Test plan settings

  2. In the Test plan settings dialog, select the build definition that generates builds which contain the test binaries. You can then select a specific build number to test, or let the system automatically use the latest build when tests are run.

    Selecting the build and build number

  3. You will need a release definition that was created from the Run automated tests from Test Manager template to run tests from test plans in the Test hub. If you have an existing release definition that was created using this template, select it and then select the existing environment in the release definition where the tests will be executed. Otherwise, choose the Create new link in the dialog to create a new release definition containing a single environment with the Visual Studio Test task already added.

    Selecting a release definition or creating a new one

  4. To configure the Visual Studio Test task and the release definition, start by assigning meaningful names to the release definition and environment. Then select the Visual Studio Test task and configure it as follows:

    • Verify that version 2 of the Visual Studio Test task is selected. The version number is shown in the drop-down list at the top left of the task settings panel.

    • Verify that Select tests using is set to Test run (for on-demand runs). What does this setting mean?

    • If you have UI tests that run on real browsers or thick clients, set (tick) the Test mix contains UI tests checkbox. This is not required if you are running UI tests on a headless browser.

    • Select how the test platform is provisioned, and the version of Visual Studio or the location of the test platform installed on the test machines.

    • If your tests need input parameters such as app URLs or database connection strings, select the relevant settings file from the build artifacts. If this file is not included in the artifacts, you can use the Publish build artifacts task in your build definition to publish the settings file to a drop location. In the example shown below, the application URL is exposed in the run settings file, and is overridden to a staging URL using the Override test run parameters setting.

    Specifying the properties for the Visual Studio Test task
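    The settings file referenced above is a standard Visual Studio .runsettings file. A minimal sketch of one follows; the webAppUrl parameter name and value are illustrative, not values the task assumes:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <RunSettings>
      <!-- Parameters declared here can be read by tests at run time.
           Individual values can be replaced from the task's
           "Override test run parameters" setting, for example to
           point the tests at a staging URL. -->
      <TestRunParameters>
        <Parameter name="webAppUrl" value="http://localhost/myapp" />
      </TestRunParameters>
    </RunSettings>
    ```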

  5. Choose the Run on agent link and verify that the deployment queue is set to the one containing the machines where you want to run the tests. If your tests require special machines from the agent pool, you can add demands that will select these at runtime.

    Specifying the properties for the Agent phase

    You may be able to minimize test times by distributing tests across multiple agents: set Type of parallelism to Multi-agent and specify the number of agents. More details.

    Note: If you are running UI tests such as CodeUI or Selenium on physical browsers such as IE, Firefox, or Chrome, the agent on the machines must be running in interactive mode and not as a service. More details.
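    Conceptually, the multi-agent option splits the selected tests across the configured number of agents. The actual distribution is handled by the Visual Studio Test task; the sketch below only illustrates the idea with a simple round-robin split (the function and its inputs are hypothetical, not the task's real API):

    ```python
    def slice_for_agent(tests, total_agents, agent_number):
        """Return the subset of tests a given agent (1-based) should run.

        Illustrative only: the Visual Studio Test task performs its own
        test distribution; this shows a simple round-robin split.
        """
        return [t for i, t in enumerate(tests)
                if i % total_agents == agent_number - 1]

    all_tests = ["LoginTests.ValidUser", "LoginTests.InvalidUser",
                 "CartTests.AddItem", "CartTests.RemoveItem"]

    # With two agents, each agent picks up every other test.
    print(slice_for_agent(all_tests, 2, 1))  # → ['LoginTests.ValidUser', 'CartTests.AddItem']
    print(slice_for_agent(all_tests, 2, 2))  # → ['LoginTests.InvalidUser', 'CartTests.RemoveItem']
    ```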

  6. Open the Artifacts tab of the release definition and verify that the build definition containing the test binaries is linked to this release definition as an artifact source.

    Verifying the linked build artifacts

  7. Save the release definition.

  8. If you chose Create new in the Test plan settings dialog in step 3 of this example, return to the browser tab containing your test plan settings. In the Test plan settings dialog, select the release definition and environment you just saved.

    Selecting the release definition and environment

Run the automated tests

  1. In the Test hub, open the test plan and select a test suite that contains the automated tests.

  2. Select the test(s) you want to run, open the Run menu, and choose Run test.

    Selecting Run test

    The test binaries for these tests must be available in the build artifacts generated by your build definition.

  3. Choose OK to start the testing process. The system checks that only automated tests are selected (any manual tests are ignored), validates the environment to ensure the Visual Studio Test task is present and has valid settings, checks the user's permission to create a release for the selected release definition, creates a test run, and then triggers the creation of a release to the selected environment.

    Starting the test execution
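    The sequence of checks described in this step can be sketched as follows. The data shapes and function are purely illustrative, not the actual service API:

    ```python
    def validate_on_demand_run(selected_tests, environment, user):
        """Illustrative sketch of the pre-flight checks, not the real service code."""
        # Manual tests are silently ignored; only automated tests are queued.
        automated = [t for t in selected_tests
                     if t["automation_status"] == "Automated"]
        if not automated:
            raise ValueError("no automated tests were found")
        # The environment must contain the Visual Studio Test task.
        if "Visual Studio Test" not in environment["tasks"]:
            raise ValueError("environment is missing the Visual Studio Test task")
        # The user needs permission to create a release for the definition.
        if not user["can_create_release"]:
            raise PermissionError("insufficient permission to trigger a release")
        # All checks passed: create a test run and trigger the release.
        return {"test_run": [t["name"] for t in automated],
                "release_triggered": True}

    result = validate_on_demand_run(
        selected_tests=[
            {"name": "LoginTests.ValidUser", "automation_status": "Automated"},
            {"name": "Exploratory check", "automation_status": "Not automated"},
        ],
        environment={"tasks": ["Visual Studio Test"]},
        user={"can_create_release": True},
    )
    print(result)  # → {'test_run': ['LoginTests.ValidUser'], 'release_triggered': True}
    ```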

  4. Choose View test run to view the test progress and analyze the failed tests. Test results have the relevant information for debugging failed tests such as the error message, stack trace, console logs, and attachments.

  5. After test execution is complete, the Runs tab of the Test hub shows the test results. The Run summary page shows an overview of the run.

    Viewing the test run summary

    There is a link to the Release used to run the tests, which makes it easy to find the release that ran the tests if you need to come back later and analyze the results. Also use this link if you want to open the release to view the release logs.

    What are the typical error scenarios or issues I should look out for if my tests don't run?

  6. The Test results page lists the results for each test in the test run. Select a test to see debugging information for failed tests such as the error message, stack trace, console logs, and attachments.

    Viewing the test results details

  7. If tests are updated after test execution is complete, open the Test Plans page and select the test plan to see the current status of your tests. Select a test to see its recent test results.

    Viewing the test plan

Q & A

Q: Can I override the build or environment set at the test plan level for a specific instance of test run?

A: Yes, you can do this using the Run with options command. Open the shortcut menu for the test suite in the left column and choose Run with options.

Configuring the Run with options dialog

Enter the following values in the Run with options dialog and then choose OK:

  • Test type and runner: Select Automated tests using Release environment.

  • Build: Select the build that has the test binaries. The test results will be associated with this build.

  • Release Definition: Select a definition from the list of release definitions that can consume the selected build artifact.

  • Release Environment: Select the name of the environment configured in your release definition.


Q: Why use release environments to run tests?

A: Release Management offers a compelling orchestration workflow to obtain test binaries as artifacts and run tests. This workflow shares the concepts used in the scheduled testing workflow, so users who run tests in a scheduled workflow will find it easy to adapt; for example, by cloning an existing scheduled testing release definition.

Another major benefit is the availability of a rich set of tasks in the task catalog that enable a range of activities to be performed before and after running tests. Examples include preparing and cleaning test data, creating and cleaning configuration files, and more.

Q: How does selecting "Test run (for on-demand runs)" in the Visual Studio Test task work?

A: The Test management sub-system uses the test run object to pass the list of tests selected for execution. The test task looks up the test run identifier, extracts the test execution information such as the container and test method names, runs the tests, updates the test run results, and sets the test points associated with the test results in the test run. From an auditing perspective, the Visual Studio Test task provides a trace from the historical releases and the test run identifiers to the tests that were submitted for on-demand execution.
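Under assumed data shapes (the real objects and APIs differ), the task-side flow described above can be sketched as:

```python
def execute(container, method):
    # Placeholder for real execution, which loads the test container
    # (e.g. a .dll) and invokes the named test method.
    return "Passed"

def run_tests_from_run_object(test_run):
    """Illustrative sketch: resolve a test run to its points, execute, report."""
    results = []
    for point in test_run["points"]:
        outcome = execute(point["container"], point["method"])
        # Each result is recorded against its test point, so the test run
        # (and therefore the Test hub) reflects the outcome.
        results.append({"point_id": point["id"], "outcome": outcome})
    return results

run = {"id": 42, "points": [
    {"id": 1, "container": "App.Tests.dll", "method": "LoginTests.ValidUser"},
    {"id": 2, "container": "App.Tests.dll", "method": "CartTests.AddItem"},
]}
print(run_tests_from_run_object(run))
```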

Q: Should the agent run in interactive mode or as a service?

A: If you are running UI tests such as coded UI or Selenium tests, the agent on the test machines must be running in interactive mode, not as a service, to allow the agent to launch a web browser. If you are using a headless browser such as PhantomJS, the agent can be run as a service or in interactive mode. See Build and Release Agents, Deploy an agent on Windows, and Agent pools and queues.

Q: Where can I find detailed documentation on how to run Selenium tests?

A: See Get started with Selenium testing.

Q: What happens if I select multiple configurations for the same test?

A: Currently, the on-demand workflow is not configuration-aware. In future releases, we plan to pass configuration context to the test method and report the appropriate results.

Q: What if I need to download product binaries and test binaries from different builds? Or if I need to obtain artifacts from a source such as Jenkins?

A: The current capability is optimized for a single team build to be tested on-demand using a Release Management workflow. We will evaluate support for multi-artifact releases, including non-Team Build artifacts such as Jenkins, based on user feedback.

Q: I already have a scheduled testing release definition. Can I reuse the same definition to run tests on demand, or should I create a new definition as shown above?

A: We recommend you use a separate release definition and environment for on-demand automated testing from the Test hub because:

  • You may not want to deploy the app every time you want to run a few on-demand tests. Scheduled testing environments are typically set up to deploy the product and then run tests.

  • New releases are triggered for every on-demand run. If many testers execute a few on-demand test runs every day, your scheduled testing release definition could become cluttered with releases for these runs, making it difficult to find the releases that were triggered by scheduled testing and deployment to production.

  • You may want to configure the Visual Studio Test task with a Test run identifier as an input so that you can trace what triggered the release. See How does selecting "Test run (for on-demand runs)" in the Visual Studio Test task work?.

Q: Can I trigger these runs and view the results in Microsoft Test Manager?

A: No. MTM does not support running automated tests against Team Foundation builds; this capability is available only in the web-based interface for Team Services and TFS. All new manual and automated testing product development investments will be in the web-based interface. No further development is planned for MTM. See Guidance on Microsoft Test Manager usage.

Q: I have multiple testers in my team. Can they run tests from different test suites or test plans in parallel using the same release definition?

A: They can use the same release definition to trigger multiple test runs in parallel if:

  • The agent pool associated with the environment has sufficient agents to cater for parallel requests. If sufficient agents are not available, runs can still be triggered but releases will be queued for processing until agents are available.

  • You have sufficient concurrent pipelines to enable concurrent releases. See Concurrent pipelines in Team Services or Concurrent pipelines in TFS for more information.

  • Testers do not run the same tests in parallel. Doing so may cause results to be overwritten depending on the order of execution.

To enable multiple different test runs to execute in parallel, configure the Release Management environment trigger option that controls behavior when multiple releases are waiting to be deployed:

  • If your application supports tests running in parallel from different sources, set this option to Allow multiple releases to be deployed at the same time.

  • If your application does not support tests running in parallel from different sources, set this option to Allow only one active deployment at a time.

Q: What are the typical error scenarios or issues I should look out for if my tests don't run?

A: Check and resolve issues as follows:

  • The release definition and environment in which I want to run tests are not shown after I select the build.

    • Make sure the build definition that is generating the build is linked as the primary artifact in the Artifacts tab of the release definition.

  • I get an error that I don't have sufficient permission to trigger a release.

    • Configure Create releases and Manage deployments permissions for the user in the Security menu of the release definition. See Release permissions.

  • I get an error that no automated tests were found.

    • Check the automation status of the selected tests. Do this in the work item for the test case, or use the Column options link in the Test Plans page of the Test hub to add the Automation status column to the list of tests. See the prerequisites section for information about automating manual tests.

  • My tests didn't execute, and I suspect the release definition is incorrect.

    • Use the link in the Run summary page to access the release instance used to run the tests, and view the release logs.

  • My tests go into the error state, or remain "in-progress" even after release to the environment is triggered.

    • Check if the release environment that you selected has the correct task and version selected. You must use version 2 or higher of the Visual Studio Test task. Version 1 of the task, and the Run Functional Tests task, are not supported.

See Also

Help and support

Submit bugs through Connect, make suggestions on UserVoice, and send quick thoughts using the Send-a-Smile icon in the Visual Studio, Team Services, or TFS title bar. We look forward to your feedback.