feat(test): Add test reports that can be generated when executing markdown with ie test --report <file-path> #212

Merged 12 commits on Aug 5, 2024
4 changes: 4 additions & 0 deletions cmd/ie/commands/test.go
@@ -20,6 +20,8 @@ func init() {
String("subscription", "", "Sets the subscription ID used by a scenarios azure-cli commands. Will rely on the default subscription if not set.")
testCommand.PersistentFlags().
String("working-directory", ".", "Sets the working directory for innovation engine to operate out of. Restores the current working directory when finished.")
testCommand.PersistentFlags().
String("report", "", "The path to generate a report of the scenario execution. The contents of the report are in JSON and will only be generated when this flag is set.")

testCommand.PersistentFlags().
StringArray("var", []string{}, "Sets an environment variable for the scenario. Format: --var <key>=<value>")
@@ -40,6 +42,7 @@ var testCommand = &cobra.Command{
subscription, _ := cmd.Flags().GetString("subscription")
workingDirectory, _ := cmd.Flags().GetString("working-directory")
environment, _ := cmd.Flags().GetString("environment")
generateReport, _ := cmd.Flags().GetString("report")

environmentVariables, _ := cmd.Flags().GetStringArray("var")

@@ -67,6 +70,7 @@ var testCommand = &cobra.Command{
CorrelationId: "",
WorkingDirectory: workingDirectory,
Environment: environment,
ReportFile: generateReport,
})
if err != nil {
logging.GlobalLogger.Errorf("Error creating engine %s", err)
216 changes: 216 additions & 0 deletions docs/specs/test-reporting.md
@@ -0,0 +1,216 @@
# Test Reports

## Summary

When users are testing their executable documentation using `ie test`, being
able to see the results of their execution is important, especially in instances
where the test fails. At the moment, there are only two ways to see what
happened during the test execution:

1. The user can look through the standard output from `ie test <scenario>`
(Most common).
1. The user can look through `ie.log` (Not as common).

While these methods are effective for troubleshooting most issues, they have
a few drawbacks:

1. Storing the output of `ie test` in a file doesn't provide a good way to
   navigate the output and see the results of the test.
1. The log file `ie.log` is not user-friendly and can be difficult to navigate,
   especially when the user invokes `ie test` multiple times, since the log
   file is cumulative.
1. It's not easy to reproduce the execution of a specific scenario, as most of
   the variables declared by the scenario are randomized and don't have their
   values rendered in the output.
1. The output of `ie test` is not easily shareable with others.

To address these issues, we propose the introduction of test reports, a feature
for `ie test` that will generate a report of the scenario execution in JSON
format so that users can easily navigate the results of the test, reproduce
specific runs, and share the results with others.

## Requirements

- [x] The user can generate a test report by running `ie test <scenario> --report=<path>`.
- [x] Reports capture the YAML metadata of the scenario.
- [x] Reports store the variables declared in the scenario and their values.
- [x] The report is generated in JSON format.
- [ ] Just like the scenarios that generated them, reports are executable.
- [x] Outputs of the executed codeblocks are stored in the report.
- [x] Expected outputs for codeblocks are stored in the report.

## Technical specifications

- The report will be generated in JSON format, but in the future we may consider
  other formats like YAML or HTML. JSON was chosen for v1 because it is easy to
  parse and is a common format for sharing data.
- Users must specify `--report=<path>` to generate a report. If the path is not
  specified, the report will not be generated.

### Report schema

The actual JSON schema is a work in progress and will not be released with the
initial implementation, so until then we document each field inline in the
example JSON below.

```json
{
// Name of the scenario
"name": "Test reporting doc",
// Properties found in the yaml header
"properties": {
"ms.author": "vmarcella",
"otherProperty": "otherValue"
},

// Variables declared in the scenario
"environmentVariables": {
"NEW_VAR": "1"
},
// Whether the test was successful or not
"success": true,
// Error message if the test failed
"error": "",
// The step number where the test failed (-1 if successful)
"failedAtStep": -1,
"steps": [
// The entire step
{
// The codeblock for the step
"codeBlock": {
// The language of the codeblock
"language": "bash",
// The content of the codeblock
"content": "echo \"Hello, world!\"\n",
// The header paired with the codeblock
"header": "First step",
// The paragraph paired with the codeblock
"description": "This step will show you how to do something.",
// The expected output for the codeblock
"resultBlock": {
// The language of the expected output
"language": "text",
// The content of the expected output
"content": "Hello, world!\n",
// The expected similarity score of the output (between 0 - 1)
"expectedSimilarityScore": 1,
// The expected regex pattern of the output
"expectedRegexPattern": null
}
},
// Codeblock number underneath the step (Should be ignored for now)
"codeBlockNumber": 0,
// Error message if the step failed (Would be same as top level error)
"error": null,
// Standard error output from executing the step
"stdErr": "",
// Standard output from executing the step
"stdOut": "Hello, world!\n",
// The name of the step
"stepName": "First step",
// The step number
"stepNumber": 0,
// Whether the step was successful or not
"success": true,
// The computed similarity score of the output (between 0 - 1)
"similarityScore": 0
},
{
"codeBlock": {
"language": "bash",
"content": "export NEW_VAR=1\n",
"header": "Second step",
"description": "This step will show you how to do something else.",
"resultBlock": {
"language": "",
"content": "",
"expectedSimilarityScore": 0,
"expectedRegexPattern": null
}
},
"codeBlockNumber": 0,
"error": null,
"stdErr": "",
"stdOut": "",
"stepName": "Second step",
"stepNumber": 1,
"success": true,
"similarityScore": 0
}
]
}
```
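Consumers can deserialize the report with any JSON library. As an illustration, here is a minimal Go sketch; the struct below mirrors only a subset of the fields documented above and is an assumed consumer-side type, not the engine's internal one:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// report mirrors a subset of the report fields documented above.
// This is an illustrative consumer-side type, not the engine's struct.
type report struct {
	Name         string            `json:"name"`
	Properties   map[string]string `json:"properties"`
	Success      bool              `json:"success"`
	Error        string            `json:"error"`
	FailedAtStep int               `json:"failedAtStep"`
	Steps        []struct {
		StepName        string  `json:"stepName"`
		StdOut          string  `json:"stdOut"`
		Success         bool    `json:"success"`
		SimilarityScore float64 `json:"similarityScore"`
	} `json:"steps"`
}

// parseReport deserializes the raw JSON bytes of a report file.
func parseReport(raw []byte) (report, error) {
	var r report
	err := json.Unmarshal(raw, &r)
	return r, err
}

func main() {
	raw := []byte(`{
		"name": "Test reporting doc",
		"success": true,
		"error": "",
		"failedAtStep": -1,
		"steps": [{"stepName": "First step", "stdOut": "Hello, world!\n", "success": true, "similarityScore": 1}]
	}`)

	r, err := parseReport(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: success=%v, steps=%d\n", r.Name, r.Success, len(r.Steps))
	// prints: Test reporting doc: success=true, steps=1
}
```

In practice the raw bytes would come from reading the file passed to `--report` rather than an inline literal.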

## Examples

Assuming you're running this command from the root of the repository:

```bash
ie test scenarios/testing/reporting.md --report=report.json >/dev/null && cat report.json
```

The output of the command above should look like this:

<!-- Need to increase this score once I fix issue #214 -->
<!-- expected_similarity=0.8 -->

```json
{
"name": "Test reporting doc",
"properties": {
"ms.author": "vmarcella",
"otherProperty": "otherValue"
},
"environmentVariables": {
"NEW_VAR": "1"
},
"success": true,
"error": "",
"failedAtStep": -1,
"steps": [
{
"codeBlock": {
"language": "bash",
"content": "echo \"Hello, world!\"\n",
"header": "First step",
"description": "This step will show you how to do something.",
"resultBlock": {
"language": "text",
"content": "Hello, world!\n",
"expectedSimilarityScore": 1,
"expectedRegexPattern": null
}
},
"codeBlockNumber": 0,
"error": null,
"stdErr": "",
"stdOut": "Hello, world!\n",
"stepName": "First step",
"stepNumber": 0,
"success": true,
"similarityScore": 1
},
{
"codeBlock": {
"language": "bash",
"content": "export NEW_VAR=1\n",
"header": "Second step",
"description": "This step will show you how to do something else.",
"resultBlock": {
"language": "",
"content": "",
"expectedSimilarityScore": 0,
"expectedRegexPattern": null
}
},
"codeBlockNumber": 0,
"error": null,
"stdErr": "",
"stdOut": "",
"stepName": "Second step",
"stepNumber": 1,
"success": true,
"similarityScore": 1
}
]
}
```
17 changes: 9 additions & 8 deletions internal/engine/common/codeblock.go
@@ -5,14 +5,15 @@ import "github.com/Azure/InnovationEngine/internal/parsers"
// State for the codeblock in interactive mode. Used to keep track of the
// state of each codeblock.
type StatefulCodeBlock struct {
CodeBlock parsers.CodeBlock
CodeBlockNumber int
Error error
StdErr string
StdOut string
StepName string
StepNumber int
Success bool
CodeBlock parsers.CodeBlock `json:"codeBlock"`
CodeBlockNumber int `json:"codeBlockNumber"`
Error error `json:"error"`
StdErr string `json:"stdErr"`
StdOut string `json:"stdOut"`
StepName string `json:"stepName"`
StepNumber int `json:"stepNumber"`
Success bool `json:"success"`
SimilarityScore float64 `json:"similarityScore"`
}

// Checks if a codeblock was executed by looking at the
33 changes: 19 additions & 14 deletions internal/engine/common/commands.go
@@ -12,15 +12,17 @@ import (

// Emitted when a command has been executed successfully.
type SuccessfulCommandMessage struct {
StdOut string
StdErr string
StdOut string
StdErr string
SimilarityScore float64
}

// Emitted when a command has failed to execute.
type FailedCommandMessage struct {
StdOut string
StdErr string
Error error
StdOut string
StdErr string
Error error
SimilarityScore float64
}

type ExitMessage struct {
@@ -49,9 +51,10 @@ func ExecuteCodeBlockAsync(codeBlock parsers.CodeBlock, env map[string]string) t
if err != nil {
logging.GlobalLogger.Errorf("Error executing command:\n %s", err.Error())
return FailedCommandMessage{
StdOut: output.StdOut,
StdErr: output.StdErr,
Error: err,
StdOut: output.StdOut,
StdErr: output.StdErr,
Error: err,
SimilarityScore: 0,
}
}

@@ -62,7 +65,7 @@ func ExecuteCodeBlockAsync(codeBlock parsers.CodeBlock, env map[string]string) t
expectedRegex := codeBlock.ExpectedOutput.ExpectedRegex
expectedOutputLanguage := codeBlock.ExpectedOutput.Language

outputComparisonError := CompareCommandOutputs(
score, outputComparisonError := CompareCommandOutputs(
actualOutput,
expectedOutput,
expectedSimilarity,
@@ -77,17 +80,19 @@ func ExecuteCodeBlockAsync(codeBlock parsers.CodeBlock, env map[string]string) t
)

return FailedCommandMessage{
StdOut: output.StdOut,
StdErr: output.StdErr,
Error: outputComparisonError,
StdOut: output.StdOut,
StdErr: output.StdErr,
Error: outputComparisonError,
SimilarityScore: score,
}

}

logging.GlobalLogger.Infof("Command output to stdout:\n %s", output.StdOut)
return SuccessfulCommandMessage{
StdOut: output.StdOut,
StdErr: output.StdErr,
StdOut: output.StdOut,
StdErr: output.StdErr,
SimilarityScore: score,
}
}
}
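The diff above changes `CompareCommandOutputs` to return the computed similarity score alongside any comparison error, so both success and failure messages can carry the score into the report. A toy sketch of that return shape; the token-overlap scoring below is an assumed stand-in, not the engine's actual similarity algorithm:

```go
package main

import (
	"fmt"
	"strings"
)

// compareOutputs is a hypothetical stand-in for CompareCommandOutputs
// after this change: it returns the computed score even when the
// comparison fails, so callers can record it regardless of outcome.
func compareOutputs(actual, expected string, required float64) (float64, error) {
	expectedTokens := strings.Fields(expected)
	if len(expectedTokens) == 0 {
		return 1, nil // nothing expected, trivially matches
	}
	actualTokens := strings.Fields(actual)

	// Toy metric: fraction of expected tokens matched position-by-position.
	matches := 0
	for i := 0; i < len(actualTokens) && i < len(expectedTokens); i++ {
		if actualTokens[i] == expectedTokens[i] {
			matches++
		}
	}
	score := float64(matches) / float64(len(expectedTokens))
	if score < required {
		return score, fmt.Errorf("output similarity %.2f below required %.2f", score, required)
	}
	return score, nil
}

func main() {
	score, err := compareOutputs("Hello, world!", "Hello, world!", 1)
	fmt.Println(score, err)
	// prints: 1 <nil>
}
```

The key design point preserved from the diff is the `(score, error)` pair: on failure the caller still receives the score, which is what lets `FailedCommandMessage` populate `SimilarityScore` above.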