diff --git a/README.md b/README.md
index 0d7b625..9d4c00d 100644
--- a/README.md
+++ b/README.md
@@ -6,215 +6,190 @@
 [![Build Status](https://github.com/hrcorval/behavex/actions/workflows/python-package.yml/badge.svg)](https://github.com/hrcorval/behavex/actions)
 [![GitHub last commit](https://img.shields.io/github/last-commit/hrcorval/behavex.svg)](https://github.com/hrcorval/behavex/commits/main)

-# BehaveX
-
-BehaveX is a BDD testing solution designed to enhance your Behave-based testing workflow by providing additional features and performance improvements. It's particularly beneficial in the following scenarios:
-
-* **Accelerating test execution**: When you need to significantly reduce test run times through parallel execution by feature or scenario.
-* **Enhancing test reporting**: When comprehensive and visually appealing HTML and JSON reports are required for in-depth analysis and integration with other tools.
-* **Improving test visibility**: When detailed evidence, such as screenshots and logs, is essential for understanding test failures and successes.
-* **Optimizing test automation**: When features like test retries, test muting, and performance metrics are crucial for efficient test maintenance and analysis.
-* **Managing complex test suites**: When handling large test suites demands advanced features for organization, execution, and reporting.
-
-## Features provided by BehaveX
-
-* Perform parallel test executions
-  * Execute tests using multiple processes, either by feature or by scenario.
-* Get additional test execution reports
-  * Generate friendly HTML reports and JSON reports that can be exported and integrated with third-party tools
-* Provide images/screenshots evidence as part of the HTML report
-  * Include images or screenshots as part of the HTML report in an image gallery linked to the executed scenario
-* Provide additional evidence as part of the HTML report
-  * Include any testing evidence by pasting it to a predefined folder path associated with each scenario. This evidence will then be automatically included as part of the HTML report
-* Generate test logs per scenario
-  * Any logs generated during test execution using the logging library will automatically be compiled into an individual log report for each scenario
-* Mute test scenarios in build servers
-  * By just adding the @MUTE tag to test scenarios, they will be executed, but they will not be part of the JUnit reports
-* Generate metrics in HTML report for the executed test suite
-  * Automation Rate, Pass Rate and Steps executions & duration
-* Execute dry runs and see the full list of scenarios into the HTML report
-  * This is enhanced implementation of Behave's dry run feature, allowing you to see the full list of scenarios in the HTML report without actually executing the tests
-* Re-execute failing test scenarios
-  * By just adding the @AUTORETRY tag to test scenarios, so when the first execution fails the scenario is immediately re-executed
-  * Additionally, you can provide the wrapper with a list of previously failing scenarios, which will also be re-executed automatically
-
-![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true)
-
-![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_2.png?raw=true)
-
-![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_3.png?raw=true)
-
-
-## Installing BehaveX
-
-Execute the following command to install BehaveX with pip:
-
-> pip install behavex
-
-## Executing BehaveX
-
-The execution is performed in the same way as you do when executing Behave from command line, but using the "behavex" command.
+# BehaveX Documentation
+
+## Table of Contents
+- [Introduction](#introduction)
+- [Features](#features)
+- [Installation Instructions](#installation-instructions)
+- [Execution Instructions](#execution-instructions)
+- [Constraints](#constraints)
+- [Supported Behave Arguments](#supported-behave-arguments)
+- [Specific Arguments from BehaveX](#specific-arguments-from-behavex)
+- [Parallel Test Executions](#parallel-test-executions)
+- [Test Execution Reports](#test-execution-reports)
+- [Attaching Images and Evidence](#attaching-images-and-evidence)
+- [Test Logs and Metrics](#test-logs-and-metrics)
+- [Muting Test Scenarios](#muting-test-scenarios)
+- [Handling Failing Scenarios](#handling-failing-scenarios)
+- [FAQs](#faqs)
+- [Show Your Support](#show-your-support)
+
+## Introduction
+
+**BehaveX** is a BDD testing solution designed to enhance your Behave-based testing workflow by providing additional features and performance improvements. It's particularly beneficial in the following scenarios:
+
+- **Accelerating test execution**: Significantly reduce test run times through parallel execution by feature or scenario.
+- **Enhancing test reporting**: Generate comprehensive and visually appealing HTML and JSON reports for in-depth analysis and integration with other tools.
+- **Improving test visibility**: Provide detailed evidence, such as screenshots and logs, essential for understanding test failures and successes.
+- **Optimizing test automation**: Utilize features like test retries, test muting, and performance metrics for efficient test maintenance and analysis.
+- **Managing complex test suites**: Handle large test suites with advanced features for organization, execution, and reporting.
+
+## Features
+
+BehaveX provides the following features:
+
+- **Parallel Test Executions**: Execute tests using multiple processes, either by feature or by scenario.
+- **Enhanced Reporting**: Generate friendly HTML and JSON reports that can be exported and integrated with third-party tools.
+- **Evidence Collection**: Include images/screenshots and additional evidence in the HTML report.
+- **Test Logs**: Automatically compile logs generated during test execution into individual log reports for each scenario (see the sketch after this list).
+- **Test Muting**: Add the `@MUTE` tag to test scenarios to execute them without including them in JUnit reports.
+- **Execution Metrics**: Generate metrics in the HTML report for the executed test suite, including Automation Rate and Pass Rate.
+- **Dry Runs**: Perform dry runs to see the full list of scenarios in the HTML report without executing the tests.
+- **Auto-Retry for Failing Scenarios**: Use the `@AUTORETRY` tag to automatically re-execute failing scenarios.
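+
+The test-logs feature relies on Python's standard `logging` module. Below is a minimal, illustrative step definition (the step text and log messages are invented for this sketch); records emitted this way are compiled by BehaveX into the individual log report of the scenario that executed the step:
+
+```python
+# steps/sample_steps.py -- illustrative sketch, not part of BehaveX itself.
+import logging
+
+from behave import step
+
+
+@step('I perform an action worth logging')
+def perform_logged_action(context):
+    # Messages logged here are collected into the per-scenario
+    # log report that BehaveX generates during the run.
+    logging.info('Starting the action')
+    logging.debug('Intermediate details useful for troubleshooting')
+    logging.info('Action completed')
+```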
+
+![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true)
+![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_2.png?raw=true)
+![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_3.png?raw=true)
+
+## Installation Instructions
+
+To install BehaveX, execute the following command:
+
+```bash
+pip install behavex
+```

-Examples:

+## Execution Instructions

->Run scenarios tagged as **TAG_1** but not **TAG_2**:
->
->behavex -t=@TAG_1 -t=~@TAG_2
+Execute BehaveX in the same way as Behave from the command line, using the `behavex` command. Here are some examples:

->Run scenarios tagged as **TAG_1** or **TAG_2**:
->
->behavex -t=@TAG_1,@TAG_2
+- **Run scenarios tagged as `TAG_1` but not `TAG_2`:**
+  ```bash
+  behavex -t=@TAG_1 -t=~@TAG_2
+  ```

->Run scenarios tagged as **TAG_1**, using **4** parallel processes:
->
->behavex -t=@TAG_1 --parallel-processes=4 --parallel-scheme=scenario
+- **Run scenarios tagged as `TAG_1` or `TAG_2`:**
+  ```bash
+  behavex -t=@TAG_1,@TAG_2
+  ```

->Run scenarios located at "**features/features_folder_1**" and "**features/features_folder_2**" folders, using **2** parallel processes
->
->behavex features/features_folder_1 features/features_folder_2 --parallel-processes=2
+- **Run scenarios tagged as `TAG_1` using 4 parallel processes:**
+  ```bash
+  behavex -t=@TAG_1 --parallel-processes=4 --parallel-scheme=scenario
+  ```

->Run scenarios from "**features_folder_1/sample_feature.feature**" feature file, using **2** parallel processes
->
->behavex features_folder_1/sample_feature.feature --parallel-processes=2
+- **Run scenarios located in the `features/features_folder_1` and `features/features_folder_2` folders using 2 parallel processes:**
+  ```bash
+  behavex features/features_folder_1 features/features_folder_2 --parallel-processes=2
+  ```

->Run scenarios tagged as **TAG_1** from "**features_folder_1/sample_feature.feature**" feature file, using **2** parallel processes
->
->behavex features_folder_1/sample_feature.feature -t=@TAG_1 --parallel-processes=2
+- **Run scenarios from the `features_folder_1/sample_feature.feature` feature file using 2 parallel processes:**
+  ```bash
+  behavex features_folder_1/sample_feature.feature --parallel-processes=2
+  ```

->Run scenarios located at "**features/feature_1**" and "**features/feature_2**" folders, using **2** parallel processes
->
->behavex features/feature_1 features/feature_2 --parallel-processes=2
+- **Run scenarios tagged as `TAG_1` from the `features_folder_1/sample_feature.feature` feature file using 2 parallel processes:**
+  ```bash
+  behavex features_folder_1/sample_feature.feature -t=@TAG_1 --parallel-processes=2
+  ```

->Run scenarios tagged as **TAG_1**, using **5** parallel processes executing a feature on each process:
->
->behavex -t=@TAG_1 --parallel-processes=5 --parallel-scheme=feature
+- **Run scenarios located in the `features/feature_1` and `features/feature_2` folders using 2 parallel processes:**
+  ```bash
+  behavex features/feature_1 features/feature_2 --parallel-processes=2
+  ```

->Perform a dry run of the scenarios tagged as **TAG_1**, and generate the HTML report:
->
->behavex -t=@TAG_1 --dry-run
+- **Run scenarios tagged as `TAG_1`, using 5 parallel processes executing a feature on each process:**
+  ```bash
+  behavex -t=@TAG_1 --parallel-processes=5 --parallel-scheme=feature
+  ```

->Run scenarios tagged as **TAG_1**, generating the execution evidence into the "**exec_evidence**" folder (instead of the default "**output**" folder):
->
->behavex -t=@TAG_1 -o=execution_evidence
+- **Perform a dry run of the scenarios tagged as `TAG_1`, and generate the HTML report:**
+  ```bash
+  behavex -t=@TAG_1 --dry-run
+  ```
+- **Run scenarios tagged as `TAG_1`, generating the execution evidence into the `execution_evidence` folder (instead of the default `output` folder):**
+  ```bash
+  behavex -t=@TAG_1 -o=execution_evidence
+  ```
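+
+The same commands can be scripted. Below is a small, illustrative Python wrapper for invoking BehaveX (e.g., from a CI job); the tag, process count, and output folder are example values, and it assumes the usual convention that a non-zero exit code signals failing tests:
+
+```python
+# run_tests.py -- illustrative wrapper around the behavex CLI.
+import subprocess
+import sys
+
+result = subprocess.run([
+    'behavex',
+    '-t=@TAG_1',
+    '--parallel-processes=4',
+    '--parallel-scheme=scenario',
+    '-o=output',
+])
+
+# Propagate the exit code so the CI job fails when tests fail.
+sys.exit(result.returncode)
+```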

 ## Constraints

-* BehaveX is currently implemented on top of Behave **v1.2.6**, and not all Behave arguments are yet supported.
-* The parallel execution implementation is based on concurrent Behave processes. Therefore, any code in the **before_all** and **after_all** hooks in the **environment.py** module will be executed in each parallel process. The same applies to the **before_feature** and **after_feature** hooks when the parallel execution is set by scenario.
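+
+Since each parallel process runs its own Behave instance, hook code in `environment.py` must be safe to execute more than once. Below is a minimal sketch of one way to guard one-time setup; the lock-file approach and paths are illustrative assumptions, not a BehaveX API, and a real suite would also need to clean the marker up between runs:
+
+```python
+# environment.py -- illustrative sketch of hooks under parallel execution.
+# With --parallel-processes > 1, before_all runs once in every process,
+# so suite-wide setup is guarded here with an atomically created marker file.
+import os
+import tempfile
+
+SETUP_MARKER = os.path.join(tempfile.gettempdir(), 'suite_setup.done')
+
+
+def before_all(context):
+    # Executed by each parallel Behave process, not once per test run.
+    try:
+        with open(SETUP_MARKER, 'x') as marker:  # only the first process succeeds
+            marker.write('done')
+        # One-time, suite-wide setup would go here (e.g., seeding test data).
+    except FileExistsError:
+        # Another process already performed the one-time setup.
+        pass
+
+
+def before_feature(context, feature):
+    # Also runs in each process when parallelizing by scenario.
+    pass
+```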
-
-### Additional Comments
-
-* The JUnit reports have been replaced by the ones generated by the test wrapper, just to support muting tests scenarios on build servers
-
-## Supported Behave arguments
-
-* no_color
-* color
-* define
-* exclude
-* include
-* no_snippets
-* no_capture
-* name
-* capture
-* no_capture_stderr
-* capture_stderr
-* no_logcapture
-* logcapture
-* logging_level
-* summary
-* quiet
-* stop
-* tags
-* tags-help
-
-IMPORTANT: It worth to mention that some arguments do not apply when executing tests with more than one parallel process, such as **stop**, **color**, etc.
-
-Also, there might be more arguments that can be supported, it is just a matter of extending the wrapper implementation to use these.
-
-## Specific arguments from BehaveX
-
-* **output-folder** (-o or --output-folder)
-  * Specifies the output folder where execution reports will be generated (JUnit, HTML and JSon)
-* **dry-run** (-d or --dry-run)
-  * Overwrites the existing Behave dry-run implementation
-  * Performs a dry-run by listing the scenarios as part of the output reports
-* **parallel-processes** (--parallel-processes)
-  * Specifies the number of parallel Behave processes
-* **parallel-scheme** (--parallel-scheme)
-  * Performs the parallel test execution by [scenario|feature]
-* **show-progress-bar** (--show-progress-bar)
-  * Displays a progress bar in console while executing the tests in parallel
-
-You can take a look at the provided examples (above in this documentation) to see how to use these arguments.
-
-## Parallel test executions
-The implementation for running tests in parallel is based on concurrent executions of Behave instances in multiple processes.
-
-As mentioned as part of the wrapper constraints, this approach implies that whatever you have in the Python Behave hooks in **environment.py** module, it will be re-executed on every parallel process.
-
-BehaveX will be in charge of managing each parallel process, and consolidate all the information into the execution reports
-
-Parallel test executions can be performed by **feature** or by **scenario**.
-
-Examples:
-> behavex --parallel-processes=3
-
-> behavex -t=@\