# Additional improvements in documentation
anibalinn committed Sep 24, 2024
1 parent a9805ea commit 774cb5a
Showing 1 changed file with 37 additions and 7 deletions (README.md).

BehaveX provides the following features:
- **Evidence Collection**: Include images/screenshots and additional evidence in the HTML report.
- **Test Logs**: Automatically compile logs generated during test execution into individual log reports for each scenario.
- **Test Muting**: Add the `@MUTE` tag to test scenarios to execute them without including them in JUnit reports.
- **Execution Metrics**: Generate metrics in the HTML report for the executed test suite, including Automation Rate, Pass Rate, steps execution counts, and average execution times.
- **Dry Runs**: Perform dry runs to see the full list of scenarios in the HTML report without executing the tests. This overrides Behave's `-d` argument.
- **Auto-Retry for Failing Scenarios**: Use the `@AUTORETRY` tag to automatically re-execute failing scenarios. You can also re-run all failing scenarios using the **failing_scenarios.txt** file.

![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true)
![Test Execution Report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report_2.png?raw=true)
Execute BehaveX in the same way as Behave from the command line, using the `behavex` command.
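For instance, a typical invocation mirrors Behave's CLI (the tag name below is illustrative):

```bash
behavex -t=@SMOKE --parallel-processes=3 --parallel-scheme=scenario
```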
## Constraints

- BehaveX is currently implemented on top of Behave **v1.2.6**, and not all Behave arguments are yet supported.
- Parallel execution is implemented using concurrent Behave processes, so any hooks defined in the `environment.py` module run in each parallel process, including **before_all** and **after_all**. The same applies to the **before_feature** and **after_feature** hooks when parallel execution is organized by scenario.
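Since every worker process runs the full set of hooks, it is worth writing them with this duplication in mind. A minimal `environment.py` sketch (the logging setup is illustrative):

```python
# environment.py -- these hooks run in EVERY parallel Behave process
import logging

def before_all(context):
    # Executed once per worker process, not once per test suite.
    logging.basicConfig(level=logging.INFO)
    logging.info("Initializing resources for this worker process")

def before_feature(context, feature):
    # With --parallel-scheme=scenario, scenarios from the same feature can be
    # spread across workers, so this may run for one feature in several processes.
    logging.info("Starting feature: %s", feature.name)
```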

## Supported Behave Arguments

## Parallel Test Executions

```bash
behavex -t=@<TAG> --parallel-processes=5 --parallel-scheme=feature
behavex -t=@<TAG> --parallel-processes=5 --parallel-scheme=feature --show-progress-bar
```

### Identifying Each Parallel Process

BehaveX populates the Behave context with a `worker_id` entry in the user data. This variable contains the ID of the current Behave process.

For example, if BehaveX is started with `--parallel-processes 2`, the first Behave instance will receive `worker_id=0`, and the second instance will receive `worker_id=1`.

This variable can be accessed within the Python tests using `context.config.userdata['worker_id']`.
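For example, the worker id can be used to give each process its own resources, such as log files or service ports (a minimal sketch; the file name and port arithmetic are illustrative):

```python
# environment.py -- using worker_id to isolate per-process resources
def before_all(context):
    worker_id = int(context.config.userdata.get('worker_id', '0'))
    # Give each parallel Behave process its own log file and port.
    context.log_path = f"logs/worker_{worker_id}.log"
    context.service_port = 8000 + worker_id
```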


## Test Execution Reports

### HTML Report
### JUnit Reports

One JUnit file per feature, available at:
```bash
<output_folder>/behave/*.xml
```
The Behave-generated JUnit reports have been replaced by the ones generated by the wrapper, in order to support muting test scenarios on build servers.

## Attaching Images to the HTML Report

You can attach images or screenshots to the HTML report using your own mechanism to capture screenshots or retrieve images. Utilize the **attach_image_file** or **attach_image_binary** methods provided by the wrapper.

These methods can be called from hooks in the `environment.py` file or directly from step definitions.

### Example 1: Attaching an Image File
```python
from behave import given
from behavex_images import image_attachments

@given('I take a screenshot from the current page')
def step_impl(context):
    image_attachments.attach_image_file(context, 'path/to/image.png')
```
### Example 2: Attaching an Image Binary

```python
from behavex_images import image_attachments

def after_step(context, step):
    # 'selenium_driver' is assumed to be a WebDriver instance created elsewhere
    image_attachments.attach_image_binary(context, selenium_driver.get_screenshot_as_png())
```
By default, images are attached to the HTML report only when the test fails. You can modify this behavior by setting the condition using the **set_attachments_condition** method.
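For instance, to attach images regardless of the test outcome, the condition can be set in a hook (a sketch assuming the `AttachmentsCondition` enum shipped with behavex-images):

```python
from behavex_images import image_attachments
from behavex_images.image_attachments import AttachmentsCondition

def before_all(context):
    # Attach captured images for passing and failing scenarios alike.
    image_attachments.set_attachments_condition(context, AttachmentsCondition.ALWAYS)
```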
![Test Execution Report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report.png?raw=true)
![Test Execution Report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report_2.png?raw=true)
![Test Execution Report](https://github.com/abmercado19/behavex-images/blob/master/behavex_images/img/html_test_report_3.png?raw=true)
For more information, check the [behavex-images](https://github.com/abmercado19/behavex-images) library, which is included with BehaveX 3.3.0 and above.

If you are using BehaveX < 3.3.0, you can still attach images to the HTML report by installing the **behavex-images** package with the following command:

> pip install behavex-images
## Attaching Additional Execution Evidence to the HTML Report

Providing ample evidence in test execution reports is crucial for identifying the root cause of issues. Any evidence file generated during a test scenario can be stored in a folder path provided by the wrapper for each scenario.

The evidence folder path is automatically generated and stored in the **`context.evidence_path`** context variable. This variable is updated by the wrapper before executing each scenario, and all files copied into that path will be accessible from the HTML report linked to the executed scenario.
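As an illustration, any hook or step can copy generated files into that folder (a minimal sketch; the log file name is illustrative):

```python
import os
import shutil

def after_scenario(context, scenario):
    # Copy a file produced during the scenario into the evidence folder so it
    # is linked to this scenario in the HTML report.
    os.makedirs(context.evidence_path, exist_ok=True)
    shutil.copy('outputs/service.log', context.evidence_path)
```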
## Test Logs and Metrics

The HTML report provides test execution logs per scenario. Metrics include:

- Automation Rate
- Pass Rate
- Steps execution counter and average execution time
