Flaky Test Workflow #8055

Merged · 23 commits · Jul 11, 2023
103 changes: 74 additions & 29 deletions .github/workflows/test-repeater.yml
@@ -1,6 +1,6 @@
# **what?**
# This workflow will test a single test a given number of times to determine if it's flaky or not. You can test with any supported OS/Python combination.

# This workflow will run all test(s) at the given path a set number of times to help determine whether they are flaky. You can test with any supported OS/Python combination.
# Runs are split into 10 batches so that more test iterations complete faster.

# **why?**
# Testing if a test is flaky and if a previously flaky test has been fixed. This allows easy testing on supported python versions and OS combinations.
@@ -38,29 +38,45 @@ on:
- 'ubuntu-latest'
- 'macos-latest'
- 'windows-latest'
num_runs:
description: 'Max number of times to run the test'
num_runs_per_batch:
description: 'Max number of times to run the test per batch. We always run 10 batches.'
type: number
required: true
default: '100'
default: '50'
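# e.g. with the default of 50 runs per batch across the 10 batches, one dispatch runs the test up to 500 times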

jobs:
pytest:
runs-on: ${{ inputs.os }}
env:
DBT_TEST_USER_1: dbt_test_user_1
DBT_TEST_USER_2: dbt_test_user_2
DBT_TEST_USER_3: dbt_test_user_3
permissions: read-all

defaults:
run:
shell: bash
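# defaulting to bash means the loop/arithmetic syntax in the steps below also works on windows-latest, whose run steps otherwise use pwsh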

jobs:
debug:
runs-on: ubuntu-latest
steps:
- name: "[DEBUG] Output Inputs"
run: |
echo "Branch: ${{ inputs.branch }}"
echo "test_path: ${{ inputs.test_path }}"
echo "python_version: ${{ inputs.python_version }}"
echo "os: ${{ inputs.os }}"
echo "num_runs: ${{ inputs.num_runs }}"
echo "num_runs_per_batch: ${{ inputs.num_runs_per_batch }}"

pytest:
runs-on: ${{ inputs.os }}
strategy:
# run all batches, even if one fails. This informs how flaky the test may be.
fail-fast: false
# using a matrix to speed up the jobs since the matrix will run in parallel when runners are available
matrix:
batch: ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
env:
PYTEST_ADDOPTS: "-v --color=yes -n4 --csv integration_results.csv"
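# PYTEST_ADDOPTS is read by pytest automatically: verbose colored output, 4 parallel workers via pytest-xdist (-n4), and per-test results written to integration_results.csv by the pytest-csv plugin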
DBT_TEST_USER_1: dbt_test_user_1
DBT_TEST_USER_2: dbt_test_user_2
DBT_TEST_USER_3: dbt_test_user_3

steps:
- name: "Checkout code"
uses: actions/checkout@v3
with:
@@ -76,30 +92,59 @@

- name: "Set up postgres (linux)"
if: inputs.os == 'ubuntu-latest'
uses: ./.github/actions/setup-postgres-linux
run: make setup-db
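# setup-db is a make target in this repo that stands up the test Postgres in docker, which is why the mac/windows runners fall back to composite actions (see the comment below)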

- name: Set up "postgres (macos)"
# mac and windows don't use make due to limitations with docker with those runners in GitHub
- name: "Set up postgres (macos)"
if: inputs.os == 'macos-latest'
uses: ./.github/actions/setup-postgres-macos

- name: "Set up postgres (windows)"
if: inputs.os == 'windows-latest'
uses: ./.github/actions/setup-postgres-windows

- name: Run test
id: pytest
- name: "Test Command"
id: command
run: |
echo "Running test ${{ inputs.test_path }} ${{ inputs.num_runs }} times with Python ${{inputs.python_version }} on ${{ inputs.os }} for branch/commit ${{ inputs.branch }}"
python -m pytest ${{ inputs.test_path }} --force-flaky --min-passes=${{ inputs.num_runs }} --max-runs=${{ inputs.num_runs }}
test_command="python -m pytest ${{ inputs.test_path }}"
echo "test_command=$test_command" >> $GITHUB_OUTPUT

- uses: actions/upload-artifact@v3
if: always()
with:
name: logs_${{ inputs.python_version }}_${{ inputs.os }}_${{ github.run_id }}
path: ./logs
- name: "Run test ${{ inputs.num_runs_per_batch }} times"
id: pytest
run: |
set +e
# count passing runs explicitly; leave failure unset so its output stays empty (falsy) when every run passes
success=0
for ((i=1; i<=${{ inputs.num_runs_per_batch }}; i++))
do
echo "Running pytest iteration $i..."
python -m pytest ${{ inputs.test_path }}
exit_code=$?

if [[ $exit_code -eq 0 ]]; then
success=$((success + 1))
echo "Iteration $i: Success"
else
failure=$((failure + 1))
echo "Iteration $i: Failure"
fi

echo
echo "==========================="
echo "Successful runs: $success"
echo "Failed runs: $failure"
echo "==========================="
echo
done

echo "failure=$failure" >> $GITHUB_OUTPUT

- name: "Success and Failure Summary: ${{ inputs.os }}/Python ${{ inputs.python_version }}"
run: |
echo "Batch: ${{ matrix.batch }}"
echo "Successful runs: ${{ steps.pytest.outputs.success }}"
echo "Failed runs: ${{ steps.pytest.outputs.failure }}"

- uses: actions/upload-artifact@v3
if: always()
with:
name: integration_results_${{ inputs.python_version }}_${{ inputs.os }}_${{ github.run_id }}.csv
path: integration_results.csv
- name: "Error for Failures"
if: ${{ steps.pytest.outputs.failure }}
run: |
echo "Batch ${{ matrix.batch }} failed ${{ steps.pytest.outputs.failure }} of ${{ inputs.num_runs_per_batch }} tests"
exit 1
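
For reference, dispatching this workflow from the GitHub CLI might look like the sketch below. The input names match the workflow_dispatch inputs shown above; the values (branch, test path, Python version) are placeholders, not taken from this PR.

gh workflow run test-repeater.yml \
  --ref main \
  -f branch=main \
  -f test_path=tests/functional/some_suspect_test.py \
  -f python_version=3.8 \
  -f os=ubuntu-latest \
  -f num_runs_per_batch=50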