Add perf tests for eventgrid #16949

Merged
merged 4 commits on Feb 26, 2021
Changes from 2 commits
49 changes: 49 additions & 0 deletions sdk/eventgrid/azure-eventgrid/tests/perfstress_tests/README.md
@@ -0,0 +1,49 @@
# EventGrid Performance Tests

In order to run the performance tests, the `azure-devtools` package must be installed; it is included in the `dev_requirements`.
Start by creating a new virtual environment for your perf tests. This should be a Python 3 environment, preferably >=3.7.

### Setup for test resources

These tests will run against a pre-configured EventGrid topic. The following environment variables will need to be set for the tests to access the live resources:
```
EG_ACCESS_KEY=<access key of your eventgrid account>
EG_TOPIC_HOSTNAME=<hostname of the eventgrid topic>
```
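
As a quick sanity check before a run, here is a minimal preflight sketch; it is illustrative only, not part of the test framework, and simply mirrors the requirements and variable names above:

```python
import os
import sys

# Warn rather than fail on the Python version, since >=3.7 is only preferred.
if sys.version_info < (3, 7):
    print("warning: a Python >= 3.7 environment is recommended")

# The perf tests read these via PerfStressTest.get_from_env, which fails if unset.
for name in ("EG_ACCESS_KEY", "EG_TOPIC_HOSTNAME"):
    if not os.environ.get(name):
        raise RuntimeError(f"{name} must be set to run the perf tests")
```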

### Setup for perf test runs

```cmd
(env) ~/azure-eventgrid> pip install -r dev_requirements.txt
(env) ~/azure-eventgrid> pip install -e .
```

## Test commands

```cmd
(env) ~/azure-eventgrid> cd tests
(env) ~/azure-eventgrid/tests> perfstress
```

### Common perf command line options
These options are available for all perf tests; a sketch of how they surface inside a test follows the list:
- `--duration=10` Number of seconds to run as many operations (the "run" function) as possible. Default is 10.
- `--iterations=1` Number of test iterations to run. Default is 1.
- `--parallel=1` Number of tests to run in parallel. Default is 1.
- `--no-client-share` Whether each parallel test instance should share a single client, or use their own. Default is False (sharing).

Member:
You can remove this option unless you want to implement sharing a single client between test instances.
Probably not needed for now.

- `--warm-up=5` Number of seconds to spend warming up the connection before measuring begins. Default is 5.
- `--sync` Whether to run the tests in sync or async. Default is False (async).
rakshith91 marked this conversation as resolved.
- `--no-cleanup` Whether to keep newly created resources after test run. Default is False (resources will be deleted).
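
A hedged sketch of how these flags surface inside a test subclass, assuming argparse's usual dash-to-underscore mapping (`--warm-up` becomes `self.args.warm_up`, and so on); the attribute names are illustrative rather than taken from the framework source:

```python
from azure_devtools.perfstress_tests import PerfStressTest

class ExampleOptionsTest(PerfStressTest):
    """Illustrative only: shows where the parsed options land."""

    def __init__(self, arguments):
        super().__init__(arguments)
        # The framework parses the command line into self.args; the common
        # flags above are assumed to map to attributes like these.
        print(f"duration={self.args.duration}s, parallel={self.args.parallel}")

    def run_sync(self):
        pass  # no-op: this sketch only demonstrates option access

    async def run_async(self):
        pass  # no-op: this sketch only demonstrates option access
```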

### EventGrid Test options
These options are available for all eventgrid perf tests:
- `--num-events=100` Number of events to be published using the send method. Default is 100.

### T2 Tests
The tests currently written for the T2 SDK:
- `EventGridPerfTest` Publishes a list of eventgrid events.

## Example command
```cmd
(env) ~/azure-eventgrid/tests> perfstress EventGridPerfTest --num-events=100
```
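
For reference, here is a minimal, self-contained sketch of the operation this command measures: publishing a batch of events with the same payload shape used in `send.py` below. The endpoint and key come from the environment variables described above, and the fixed `"EventGrid"` value follows the single-value suggestion from the review discussion.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

client = EventGridPublisherClient(
    endpoint=os.environ["EG_TOPIC_HOSTNAME"],
    credential=AzureKeyCredential(os.environ["EG_ACCESS_KEY"]),
)

# Build the batch once, then send it; the send call is what the perf test times.
events = [
    EventGridEvent(
        event_type="Contoso.Items.ItemReceived",
        data={"services": ["EventGrid"]},
        subject="Door1",
        data_version="2.0",
    )
    for _ in range(100)
]
client.send(events)
```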
Empty file.
69 changes: 69 additions & 0 deletions sdk/eventgrid/azure-eventgrid/tests/perfstress_tests/send.py
@@ -0,0 +1,69 @@
import random
from azure_devtools.perfstress_tests import PerfStressTest

from azure.eventgrid import EventGridPublisherClient as SyncPublisherClient, EventGridEvent
from azure.eventgrid.aio import EventGridPublisherClient as AsyncPublisherClient

from azure.core.credentials import AzureKeyCredential

class EventGridPerfTest(PerfStressTest):
    def __init__(self, arguments):
        super().__init__(arguments)

        # auth configuration
        topic_key = self.get_from_env("EG_ACCESS_KEY")
        endpoint = self.get_from_env("EG_TOPIC_HOSTNAME")

        # Create clients
        self.publisher_client = SyncPublisherClient(
            endpoint=endpoint,
            credential=AzureKeyCredential(topic_key)
        )
        self.async_publisher_client = AsyncPublisherClient(
            endpoint=endpoint,
            credential=AzureKeyCredential(topic_key)
        )

        # Build the event payload once in setup so each run measures only the send call.
        services = ["EventGrid", "ServiceBus", "EventHubs", "Storage"]
        self.event_list = []
        for _ in range(self.args.num_events):
            self.event_list.append(EventGridEvent(
                event_type="Contoso.Items.ItemReceived",
                data={
                    "services": random.sample(services, k=random.randint(1, 4))
Member:
This will test a different data set every time; we should probably keep each test run consistent for throughput comparisons.
If you expect variation in consuming each of these values that you want to compare, you could make it a cmd flag:

parser.add_argument('-e', '--event-service', nargs='?', type=str, help='The event service type. Default is "EventGrid"', default='EventGrid')

But otherwise I think we can just stick to a single value.

Contributor Author:
Sticking to a single value should be good.

                },
                subject="Door1",
                data_version="2.0"
            ))
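        # Per the review thread above, a fixed payload such as
        # data={"services": ["EventGrid"]} would keep runs directly comparable.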

    async def close(self):
        """This is run after cleanup.

        Use this to close any open handles or clients.
        """
        await self.async_publisher_client.close()
        await super().close()

    def run_sync(self):
        """The synchronous perf test.

        Try to keep this minimal and focused, using only a single client API.
        Avoid any ancillary logic (e.g. generating UUIDs); put it in the setup/init
        instead so that we're only measuring the client API call.
        """
        self.publisher_client.send(self.event_list)

    async def run_async(self):
        """The asynchronous perf test.

        Try to keep this minimal and focused, using only a single client API.
        Avoid any ancillary logic (e.g. generating UUIDs); put it in the setup/init
        instead so that we're only measuring the client API call.
        """
        await self.async_publisher_client.send(self.event_list)

    @staticmethod
    def add_arguments(parser):
        super(EventGridPerfTest, EventGridPerfTest).add_arguments(parser)
        parser.add_argument('-n', '--num-events', nargs='?', type=int, help='Number of events to be sent. Defaults to 100', default=100)