diff --git a/README.md b/README.md index 0d7b625..9d4c00d 100644 --- a/README.md +++ b/README.md @@ -6,215 +6,190 @@ [![Build Status](https://github.com/hrcorval/behavex/actions/workflows/python-package.yml/badge.svg)](https://github.com/hrcorval/behavex/actions) [![GitHub last commit](https://img.shields.io/github/last-commit/hrcorval/behavex.svg)](https://github.com/hrcorval/behavex/commits/main) -# BehaveX - -BehaveX is a BDD testing solution designed to enhance your Behave-based testing workflow by providing additional features and performance improvements. It's particularly beneficial in the following scenarios: - -* **Accelerating test execution**: When you need to significantly reduce test run times through parallel execution by feature or scenario. -* **Enhancing test reporting**: When comprehensive and visually appealing HTML and JSON reports are required for in-depth analysis and integration with other tools. -* **Improving test visibility**: When detailed evidence, such as screenshots and logs, is essential for understanding test failures and successes. -* **Optimizing test automation**: When features like test retries, test muting, and performance metrics are crucial for efficient test maintenance and analysis. -* **Managing complex test suites**: When handling large test suites demands advanced features for organization, execution, and reporting. - -## Features provided by BehaveX - -* Perform parallel test executions - * Execute tests using multiple processes, either by feature or by scenario. -* Get additional test execution reports - * Generate friendly HTML reports and JSON reports that can be exported and integrated with third-party tools -* Provide images/screenshots evidence as part of the HTML report - * Include images or screenshots as part of the HTML report in an image gallery linked to the executed scenario -* Provide additional evidence as part of the HTML report - * Include any testing evidence by pasting it to a predefined folder path associated with each scenario. 
This evidence will then be automatically included as part of the HTML report -* Generate test logs per scenario - * Any logs generated during test execution using the logging library will automatically be compiled into an individual log report for each scenario -* Mute test scenarios in build servers - * By just adding the @MUTE tag to test scenarios, they will be executed, but they will not be part of the JUnit reports -* Generate metrics in HTML report for the executed test suite - * Automation Rate, Pass Rate and Steps executions & duration -* Execute dry runs and see the full list of scenarios into the HTML report - * This is enhanced implementation of Behave's dry run feature, allowing you to see the full list of scenarios in the HTML report without actually executing the tests -* Re-execute failing test scenarios - * By just adding the @AUTORETRY tag to test scenarios, so when the first execution fails the scenario is immediately re-executed - * Additionally, you can provide the wrapper with a list of previously failing scenarios, which will also be re-executed automatically - -![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true) - -![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_2.png?raw=true) - -![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_3.png?raw=true) - - -## Installing BehaveX - -Execute the following command to install BehaveX with pip: - -> pip install behavex - -## Executing BehaveX - -The execution is performed in the same way as you do when executing Behave from command line, but using the "behavex" command. +# BehaveX Documentation + +## Table of Contents +- [Introduction](#introduction) +- [Features](#features) +- [Installation Instructions](#installation-instructions) +- [Execution Instructions](#execution-instructions) +- [Constraints](#constraints) +- [Supported Behave Arguments](#supported-behave-arguments) +- [Specific Arguments from BehaveX](#specific-arguments-from-behavex) +- [Parallel Test Executions](#parallel-test-executions) +- [Test Execution Reports](#test-execution-reports) +- [Attaching Images and Evidence](#attaching-images-and-evidence) +- [Test Logs and Metrics](#test-logs-and-metrics) +- [Muting Test Scenarios](#muting-test-scenarios) +- [Handling Failing Scenarios](#handling-failing-scenarios) +- [FAQs](#faqs) +- [Show Your Support](#show-your-support) + +## Introduction + +**BehaveX** is a BDD testing solution designed to enhance your Behave-based testing workflow by providing additional features and performance improvements. It's particularly beneficial in the following scenarios: + +- **Accelerating test execution**: Significantly reduce test run times through parallel execution by feature or scenario. +- **Enhancing test reporting**: Generate comprehensive and visually appealing HTML and JSON reports for in-depth analysis and integration with other tools. +- **Improving test visibility**: Provide detailed evidence, such as screenshots and logs, essential for understanding test failures and successes. +- **Optimizing test automation**: Utilize features like test retries, test muting, and performance metrics for efficient test maintenance and analysis. +- **Managing complex test suites**: Handle large test suites with advanced features for organization, execution, and reporting. 
+ +## Features + +BehaveX provides the following features: + +- **Parallel Test Executions**: Execute tests using multiple processes, either by feature or by scenario. +- **Enhanced Reporting**: Generate friendly HTML and JSON reports that can be exported and integrated with third-party tools. +- **Evidence Collection**: Include images/screenshots and additional evidence in the HTML report. +- **Test Logs**: Automatically compile logs generated during test execution into individual log reports for each scenario. +- **Test Muting**: Add the `@MUTE` tag to test scenarios to execute them without including them in JUnit reports. +- **Execution Metrics**: Generate metrics in the HTML report for the executed test suite, including Automation Rate and Pass Rate. +- **Dry Runs**: Perform dry runs to see the full list of scenarios in the HTML report without executing the tests. +- **Auto-Retry for Failing Scenarios**: Use the `@AUTORETRY` tag to automatically re-execute failing scenarios. + +![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true) +![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_2.png?raw=true) +![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_3.png?raw=true) + +## Installation Instructions + +To install BehaveX, execute the following command: + +```bash +pip install behavex +``` -Examples: +## Execution Instructions ->Run scenarios tagged as **TAG_1** but not **TAG_2**: -> ->
behavex -t=@TAG_1 -t=~@TAG_2
+Execute BehaveX in the same way as Behave from the command line, using the `behavex` command. Here are some examples: ->Run scenarios tagged as **TAG_1** or **TAG_2**: -> ->
behavex -t=@TAG_1,@TAG_2
+- **Run scenarios tagged as `TAG_1` but not `TAG_2`:** + ```bash + behavex -t=@TAG_1 -t=~@TAG_2 + ``` ->Run scenarios tagged as **TAG_1**, using **4** parallel processes: -> ->
behavex -t=@TAG_1 --parallel-processes=4 --parallel-scheme=scenario
+- **Run scenarios tagged as `TAG_1` or `TAG_2`:** + ```bash + behavex -t=@TAG_1,@TAG_2 + ``` ->Run scenarios located at "**features/features_folder_1**" and "**features/features_folder_2**" folders, using **2** parallel processes -> ->
behavex features/features_folder_1 features/features_folder_2 --parallel-processes=2
+- **Run scenarios tagged as `TAG_1` using 4 parallel processes:** + ```bash + behavex -t=@TAG_1 --parallel-processes=4 --parallel-scheme=scenario + ``` ->Run scenarios from "**features_folder_1/sample_feature.feature**" feature file, using **2** parallel processes -> ->
behavex features_folder_1/sample_feature.feature --parallel-processes=2
+- **Run scenarios from the `features/features_folder_1` and `features/features_folder_2` folders using 2 parallel processes:** + ```bash + behavex features/features_folder_1 features/features_folder_2 --parallel-processes=2 + ``` ->Run scenarios tagged as **TAG_1** from "**features_folder_1/sample_feature.feature**" feature file, using **2** parallel processes -> ->
behavex features_folder_1/sample_feature.feature -t=@TAG_1 --parallel-processes=2
+- **Run scenarios from a specific feature file using 2 parallel processes:** + ```bash + behavex features_folder_1/sample_feature.feature --parallel-processes=2 + ``` ->Run scenarios located at "**features/feature_1**" and "**features/feature_2**" folders, using **2** parallel processes -> ->
behavex features/feature_1 features/feature_2 --parallel-processes=2
+- **Run scenarios tagged as `TAG_1` from a specific feature file using 2 parallel processes:** + ```bash + behavex features_folder_1/sample_feature.feature -t=@TAG_1 --parallel-processes=2 + ``` ->Run scenarios tagged as **TAG_1**, using **5** parallel processes executing a feature on each process: -> ->
behavex -t=@TAG_1 --parallel-processes=5 --parallel-scheme=feature
+- **Run scenarios from the `features/feature_1` and `features/feature_2` folders using 2 parallel processes:** + ```bash + behavex features/feature_1 features/feature_2 --parallel-processes=2 + ``` ->Run scenarios tagged as **TAG_1**, using **5** parallel processes executing a feature on each process: -> ->
behavex -t=@TAG_1 --dry-run
+- **Run scenarios tagged as `TAG_1`, using 5 parallel processes executing a feature on each process:** + ```bash + behavex -t=@TAG_1 --parallel-processes=5 --parallel-scheme=feature + ``` ->Run scenarios tagged as **TAG_1**, generating the execution evidence into the "**exec_evidence**" folder (instead of the default "**output**" folder): -> ->
behavex -t=@TAG_1 -o=execution_evidence
+- **Perform a dry run of the scenarios tagged as `TAG_1`, and generate the HTML report:** + ```bash + behavex -t=@TAG_1 --dry-run + ``` +- **Run scenarios tagged as `TAG_1`, generating the execution evidence into a specific folder:** + ```bash + behavex -t=@TAG_1 -o=execution_evidence + ``` ## Constraints -* BehaveX is currently implemented on top of Behave **v1.2.6**, and not all Behave arguments are yet supported. -* The parallel execution implementation is based on concurrent Behave processes. Therefore, any code in the **before_all** and **after_all** hooks in the **environment.py** module will be executed in each parallel process. The same applies to the **before_feature** and **after_feature** hooks when the parallel execution is set by scenario. - -### Additional Comments - -* The JUnit reports have been replaced by the ones generated by the test wrapper, just to support muting tests scenarios on build servers - -## Supported Behave arguments - -* no_color -* color -* define -* exclude -* include -* no_snippets -* no_capture -* name -* capture -* no_capture_stderr -* capture_stderr -* no_logcapture -* logcapture -* logging_level -* summary -* quiet -* stop -* tags -* tags-help - -IMPORTANT: It worth to mention that some arguments do not apply when executing tests with more than one parallel process, such as **stop**, **color**, etc. - -Also, there might be more arguments that can be supported, it is just a matter of extending the wrapper implementation to use these. - -## Specific arguments from BehaveX - -* **output-folder** (-o or --output-folder) - * Specifies the output folder where execution reports will be generated (JUnit, HTML and JSon) -* **dry-run** (-d or --dry-run) - * Overwrites the existing Behave dry-run implementation - * Performs a dry-run by listing the scenarios as part of the output reports -* **parallel-processes** (--parallel-processes) - * Specifies the number of parallel Behave processes -* **parallel-scheme** (--parallel-scheme) - * Performs the parallel test execution by [scenario|feature] -* **show-progress-bar** (--show-progress-bar) - * Displays a progress bar in console while executing the tests in parallel - -You can take a look at the provided examples (above in this documentation) to see how to use these arguments. - -## Parallel test executions -The implementation for running tests in parallel is based on concurrent executions of Behave instances in multiple processes. - -As mentioned as part of the wrapper constraints, this approach implies that whatever you have in the Python Behave hooks in **environment.py** module, it will be re-executed on every parallel process. - -BehaveX will be in charge of managing each parallel process, and consolidate all the information into the execution reports - -Parallel test executions can be performed by **feature** or by **scenario**. - -Examples: -> behavex --parallel-processes=3 - -> behavex -t=@\ --parallel-processes=3 - -> behavex -t=@\ --parallel-processes=2 --parallel-scheme=scenario - -> behavex -t=@\ --parallel-processes=5 --parallel-scheme=feature - -> behavex -t=@\ --parallel-processes=5 --parallel-scheme=feature --show-progress-bar - -When the parallel-scheme is set by **feature**, all tests within each feature will be run sequentially. - -### Identifying each parallel process - -BehaveX populates the Behave contexts with the `worker_id` user-specific data. This variable contains the id of the current behave process. 
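+As a reference, below is a minimal `environment.py` sketch that reads this value; the log message and the `context.worker_id` attribute are illustrative additions, not part of BehaveX:
+
+```python
+# environment.py: illustrative sketch. BehaveX populates the 'worker_id'
+# userdata entry for every parallel Behave process it starts.
+import logging
+
+
+def before_all(context):
+    # dict.get returns None if the key is absent (e.g. when running plain Behave).
+    worker_id = context.config.userdata.get('worker_id')
+    logging.info('Behave process managed by BehaveX worker %s', worker_id)
+    # Illustrative: keep the id handy to derive per-process resources
+    # such as ports or test accounts.
+    context.worker_id = worker_id
+```
+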
- -E.g If BehaveX is started with `--parallel-processes 2`, the first instance of behave will receive `worker_id=0`, and the second instance will receive `worker_id=1`. - -This variable can be accessed within the python tests using `context.config.userdata['worker_id']` - -## Test execution reports -### HTML report -This is a friendly test execution report that contains information related to test scenarios, execution status, execution evidence and metrics. A filters bar is also provided to filter scenarios by name, tag or status. - -It should be available by default at the following path: -> /report.html - - -### JSON report -Contains information about test scenarios and execution status. - -It should be available by default at the following path: -> /report.json - -The report is provided to simplify the integration with third party tools, by providing all test execution data in a format that can be easily parsed. - -### JUnit report - -The wrapper overwrites the existing Behave JUnit reports, just to enable dealing with parallel executions and muted test scenarios +- BehaveX is currently implemented on top of Behave **v1.2.6**, and not all Behave arguments are yet supported. +- The parallel execution implementation is based on concurrent Behave processes. Hooks in the `environment.py` module will be executed in each parallel process. + +## Supported Behave Arguments + +- no_color +- color +- define +- exclude +- include +- no_snippets +- no_capture +- name +- capture +- no_capture_stderr +- capture_stderr +- no_logcapture +- logcapture +- logging_level +- summary +- quiet +- stop +- tags +- tags-help + +**Important**: Some arguments do not apply when executing tests with more than one parallel process, such as **stop** and **color**. + +## Specific Arguments from BehaveX + +- **output-folder** (-o or --output-folder): Specifies the output folder for execution reports (JUnit, HTML, JSON). +- **dry-run** (-d or --dry-run): Performs a dry-run by listing scenarios in the output reports. +- **parallel-processes** (--parallel-processes): Specifies the number of parallel Behave processes. +- **parallel-scheme** (--parallel-scheme): Performs parallel test execution by [scenario|feature]. +- **show-progress-bar** (--show-progress-bar): Displays a progress bar in the console during parallel test execution. + +## Parallel Test Executions + +BehaveX manages concurrent executions of Behave instances in multiple processes. You can perform parallel test executions by feature or scenario. + +### Examples: +```bash +behavex --parallel-processes=3 +behavex -t=@ --parallel-processes=3 +behavex -t=@ --parallel-processes=2 --parallel-scheme=scenario +behavex -t=@ --parallel-processes=5 --parallel-scheme=feature +behavex -t=@ --parallel-processes=5 --parallel-scheme=feature --show-progress-bar +``` -By default, there will be one JUnit file per feature, no matter if the parallel execution is performed by feature or scenario. +## Test Execution Reports -Reports are available by default at the following path: -> /behave/*.xml +### HTML Report +A friendly test execution report containing information related to test scenarios, execution status, evidence, and metrics. Available at: +```bash +/report.html +``` -## Attaching images to the HTML report +### JSON Report +Contains information about test scenarios and execution status. 
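+The report is plain JSON (available by default at the path shown below), so it can be loaded with any JSON parser when integrating with other tools. A minimal, schema-agnostic sketch (the `output` folder is BehaveX's default output location; adjust the path if `-o`/`--output-folder` was used):
+
+```python
+# Illustrative sketch only: load the BehaveX JSON report and inspect its
+# top-level structure without assuming a particular schema.
+import json
+from pathlib import Path
+
+report_path = Path('output') / 'report.json'  # default output folder
+
+with report_path.open(encoding='utf-8') as report_file:
+    report = json.load(report_file)
+
+if isinstance(report, dict):
+    print('Top-level keys:', sorted(report))
+else:
+    print('Top-level element is a', type(report).__name__)
+```
+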
Available at: +```bash +/report.json +``` -It is possible to attach images or screenshots to the HTML report, and the images will be displayed in an image gallery linked to the executed scenario. +### JUnit Report +One JUnit file per feature, available at: +```bash +/behave/*.xml +``` -You can used your own mechanism to capture screenshots or retrieve the images you want to attach to the HTML report, and then call to the **attach_image_file** or **attach_image_binary** methods provided by the wrapper. +## Attaching Images and Evidence -The provided methods can be used from the hooks available in the environment.py file, or directly from step definitions to attach images to the HTML report. For example: +You can attach images or screenshots to the HTML report. Use the following methods: -* **Example 1**: Attaching an image file from a step definition +### Example 1: Attaching an Image File ```python -... from behavex_images import image_attachments @given('I take a screenshot from current page') @@ -222,116 +197,62 @@ def step_impl(context): image_attachments.attach_image_file(context, 'path/to/image.png') ``` -* **Example 2**: Attaching an image binary from the `after_step` hook in environment.py +### Example 2: Attaching an Image Binary ```python -... from behavex_images import image_attachments from behavex_images.image_attachments import AttachmentsCondition def before_all(context): - image_attachements.set_attachments_condition(context, AttachmentsCondition.ONLY_ON_FAILURE) + image_attachments.set_attachments_condition(context, AttachmentsCondition.ONLY_ON_FAILURE) def after_step(context, step): - image_attachements.attach_image_binary(context, selenium_driver.get_screenshot_as_png()) + image_attachments.attach_image_binary(context, selenium_driver.get_screenshot_as_png()) ``` -By default, the images will be attached to the HTML report only when the test fails. However, you can change this behavior by setting the condition to attach images to the HTML report using the **set_attachments_condition** method. - -![test execution report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report.png?raw=true) - -![test execution report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report_2.png?raw=true) - -![test execution report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report_3.png?raw=true) - -For more information, you can check the [behavex-images](https://github.com/abmercado19/behavex-images) library, which is already installed with BehaveX 3.3.0 and above. - -If you are using BehaveX < 3.3.0, you can also attach images to the HTML report, but you need to install the **behavex-images** package. You can install it by executing the following command: - -> pip install behavex-images - - -## Attaching additional execution evidence to the HTML report - -It is considered a good practice to provide as much as evidence as possible in test executions reports to properly identify the root cause of issues. - -Any evidence file you generate when executing a test scenario, it can be stored into a folder path that the wrapper provides for each scenario. - -The evidence folder path is automatically generated and stored into the **"context.evidence_path"** context variable. 
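+For instance, here is a minimal sketch that copies an artifact produced during a scenario into that folder; the `after_scenario` placement and the file name are illustrative:
+
+```python
+# environment.py: illustrative sketch. 'context.evidence_path' is the
+# per-scenario evidence folder described above.
+import os
+import shutil
+
+
+def after_scenario(context, scenario):
+    # 'api_response.json' is a hypothetical file generated by the scenario.
+    generated_file = 'api_response.json'
+    if os.path.exists(generated_file):
+        # Copy it into the evidence folder so it shows up in the HTML report.
+        shutil.copy(generated_file, context.evidence_path)
+```
+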
This variable is automatically updated by the wrapper before executing each scenario, and all the files you copy into that path will be accessible from the HTML report linked to the executed scenario - -## Test logs per scenario - -The HTML report provides test execution logs per scenario. Everything that is being logged using the **logging** library will be written into a test execution log file linked to the test scenario. - -## Metrics +## Test Logs and Metrics -* Automation Rate -* Pass Rate -* Steps execution counter and average execution time +The HTML report provides test execution logs per scenario. Metrics include: -All metrics are provided as part of the HTML report +- Automation Rate +- Pass Rate +- Steps execution counter and average execution time -## Dry runs +## Muting Test Scenarios -The wrapper overwrites the exiting Behave dry run implementation just to be able to provide the outputs into the wrapper reports. +To mute failing test scenarios, add the `@MUTE` tag. This allows the test to run without being included in JUnit reports. -The HTML report generated as part of the dry run can be used to share the scenarios specifications with any stakeholder. +## Handling Failing Scenarios -Example: +### @AUTORETRY Tag +This tag can be used for flaky scenarios or when the testing infrastructure is not stable. The `@AUTORETRY` tag can be applied to any scenario or feature, and it is used to automatically re-execute the test scenario when it fails. -> behavex -t=@TAG --dry-run - -## Muting test scenarios - -Sometimes it is necessary that failing test scenarios continue being executed in all build server plans, but having them muted until the test or product fix is provided. - -Tests can be muted by adding the @MUTE tag to each test scenario. This will cause the wrapper to run the test but the execution will not be notified in the JUnit reports. However, you will see the execution information in the HTML report. - -## What to do with failing scenarios? - -### @AUTORETRY tag - -This tag can be used for flaky scenarios or when the testing infrastructure is not stable at all. - -The @AUTORETRY tag can be applied to any scenario or feature, and it is used to automatically re-execute the test scenario when it fails. - -### Rerun all failed scenarios - -Whenever you perform an automated test execution and there are failing scenarios, the **failing_scenarios.txt** file will be created into the execution output folder. -This file allows you to run all failing scenarios again. - -This can be done by executing the following command: - -> behavex -rf=.//failing_scenarios.txt +### Rerun All Failed Scenarios +Whenever you perform an automated test execution and there are failing scenarios, the **failing_scenarios.txt** file will be created in the execution output folder. This file allows you to run all failing scenarios again. +To rerun failing scenarios, execute: +```bash +behavex -rf=.//failing_scenarios.txt +``` or +```bash +behavex --rerun-failures=.//failing_scenarios.txt +``` -> behavex --rerun-failures=.//failing_scenarios.txt - -To avoid the re-execution to overwrite the previous test report, we suggest to provide a different output folder, using the **-o** or **--output-folder** argument. - -It is important to mention that this argument doesn't work yet with parallel test executions - - -## Display the progress bar in console - -When executing tests in parallel, you can display a progress bar in the console to see the progress of the test execution. 
- -To enable the progress bar, just add the **--show-progress-bar** argument to the command line. +To avoid overwriting the previous test report, provide a different output folder using the **-o** or **--output-folder** argument. -Example: +## FAQs -> behavex -t=@TAG --parallel-processes=3 --show-progress-bar +- **How do I install BehaveX?** + - Use `pip install behavex`. -In case you are printing logs in the console, you can configure the progress bar to be displayed in a new line on every update, by adding the following setting to the BehaveX configuration file +- **What is the purpose of the `@AUTORETRY` tag?** + - It automatically re-executes failing scenarios. -> [progress_bar] -> -> print_updates_in_new_lines="true" +- **How can I mute a test scenario?** + - Add the `@MUTE` tag to the scenario. ## Show Your Support -**If you find this project helpful or interesting, we would appreciate it if you could give it a star** (:star:). It's a simple way to show your support and let us know that you find value in our work. - -By starring this repository, you help us gain visibility among other developers and contributors. It also serves as motivation for us to continue improving and maintaining this project. +If you find this project helpful, please give it a star! Your support helps us gain visibility and motivates us to continue improving the project. -Thank you in advance for your support! We truly appreciate it. +Thank you for your support!