
chore: Upgrade Ruff to 0.3.0 #20249

Merged: 2 commits, Mar 5, 2024
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -1,6 +1,6 @@
 repos:
 - repo: https://github.com/charliermarsh/ruff-pre-commit
-  rev: v0.2.0
+  rev: v0.3.0
   hooks:
   - id: ruff
     args: [--fix, --exit-non-zero-on-fix]
2 changes: 1 addition & 1 deletion Makefile
@@ -25,7 +25,7 @@ unannotated_pyright:
 	python scripts/run-pyright.py --unannotated

 ruff:
-	-ruff --fix .
+	-ruff check --fix .
A Contributor commented:
A breadcrumb for other reviewers: `ruff <path>` was deprecated in favor of `ruff check <path>`. See https://github.com/astral-sh/ruff/releases/tag/v0.3.0.
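For anyone updating similar targets in their own repos, here is a minimal sketch of a migrated Makefile recipe (the `lint` target name is illustrative; it assumes Ruff ≥ 0.3.0, where linting lives under the `check` subcommand):

```make
# Lint (with autofix) and then format using Ruff >= 0.3.0.
# The leading "-" on the recipe line tells make to keep going
# even when ruff exits non-zero because violations remain.
lint:
	-ruff check --fix .
	ruff format .
```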

ruff format .

check_ruff:
2 changes: 1 addition & 1 deletion docs/Makefile
@@ -1,5 +1,5 @@
 docs_ruff:
-	-ruff --fix ../examples/docs_snippets
+	-ruff check --fix ../examples/docs_snippets
 	ruff format ../examples/docs_snippets

apidoc-build:
33 changes: 11 additions & 22 deletions docs/content/concepts/assets/asset-auto-execution.mdx
@@ -39,13 +39,11 @@ from dagster import AutoMaterializePolicy, asset


@asset
-def asset1():
-    ...
+def asset1(): ...


@asset(auto_materialize_policy=AutoMaterializePolicy.eager(), deps=[asset1])
-def asset2():
-    ...
+def asset2(): ...
```

This example assumes that `asset1` will be materialized in some other way - e.g. manually, via a [sensor](/concepts/partitions-schedules-sensors/sensors), or via a [schedule](/concepts/partitions-schedules-sensors/schedules).
@@ -64,13 +62,11 @@ from dagster import (


@asset
-def asset1():
-    ...
+def asset1(): ...


@asset(deps=[asset1])
-def asset2():
-    ...
+def asset2(): ...


defs = Definitions(
@@ -101,8 +97,7 @@ wait_for_all_parents_policy = AutoMaterializePolicy.eager().with_rules(


@asset(auto_materialize_policy=wait_for_all_parents_policy)
-def asset1(upstream1, upstream2):
-    ...
+def asset1(upstream1, upstream2): ...
```

#### Auto-materialize even if some parents are missing
@@ -118,8 +113,7 @@ allow_missing_parents_policy = AutoMaterializePolicy.eager().without_rules(


@asset(auto_materialize_policy=allow_missing_parents_policy)
-def asset1(upstream1, upstream2):
-    ...
+def asset1(upstream1, upstream2): ...
```

#### Auto-materialize root assets on a regular cadence
@@ -136,8 +130,7 @@ materialize_on_cron_policy = AutoMaterializePolicy.eager().with_rules(


@asset(auto_materialize_policy=materialize_on_cron_policy)
-def root_asset():
-    ...
+def root_asset(): ...
```

### Auto-materialization and partitions
@@ -152,17 +145,15 @@ from dagster import AutoMaterializePolicy, DailyPartitionsDefinition, asset
partitions_def=DailyPartitionsDefinition(start_date="2020-10-10"),
auto_materialize_policy=AutoMaterializePolicy.eager(),
)
-def asset1():
-    ...
+def asset1(): ...


@asset(
partitions_def=DailyPartitionsDefinition(start_date="2020-10-10"),
auto_materialize_policy=AutoMaterializePolicy.eager(),
deps=[asset1],
)
-def asset2():
-    ...
+def asset2(): ...
```

If the last partition of `asset1` is re-materialized, e.g. manually from the UI, then the corresponding partition of `asset2` will be auto-materialized after.
@@ -181,8 +172,7 @@ from dagster import AutoMaterializePolicy, DailyPartitionsDefinition, asset
max_materializations_per_minute=7
),
)
-def asset1():
-    ...
+def asset1(): ...
```

For time-partitioned assets, the `N` most recent partitions will be selected from the set of candidates to be materialized. For other types of partitioned assets, the selection will be random.
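The selection rule described above is easy to restate in plain Python. The sketch below only illustrates the documented behavior, not Dagster's actual implementation; `select_candidates` and its parameters are invented names:

```python
import random


def select_candidates(candidates, limit, time_ordered):
    """Pick at most `limit` partition keys to materialize this tick.

    Time-partitioned assets keep the most recent keys; other
    partitioned assets are sampled at random.
    """
    if len(candidates) <= limit:
        return list(candidates)
    if time_ordered:
        # Sort ascending and keep the tail, i.e. the N most recent.
        return sorted(candidates)[-limit:]
    return random.sample(list(candidates), limit)


daily = ["2024-03-01", "2024-03-02", "2024-03-03", "2024-03-04"]
print(select_candidates(daily, 2, time_ordered=True))  # ['2024-03-03', '2024-03-04']
```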
@@ -208,6 +198,5 @@ def source_file():
deps=[source_file],
auto_materialize_policy=AutoMaterializePolicy.eager(),
)
-def asset1():
-    ...
+def asset1(): ...
```
18 changes: 6 additions & 12 deletions docs/content/concepts/assets/asset-checks.mdx
@@ -113,8 +113,7 @@ from dagster import (


@asset
-def my_asset():
-    ...
+def my_asset(): ...


@asset_check(asset=my_asset)
@@ -184,13 +183,11 @@ from dagster import (


@asset
-def orders():
-    ...
+def orders(): ...


@asset
-def items():
-    ...
+def items(): ...


def make_check(check_blob: Mapping[str, str]) -> AssetChecksDefinition:
@@ -250,18 +247,15 @@ from dagster import (


@asset
-def my_asset():
-    ...
+def my_asset(): ...


@asset_check(asset=my_asset)
-def check_1():
-    ...
+def check_1(): ...


@asset_check(asset=my_asset)
-def check_2():
-    ...
+def check_2(): ...


# includes my_asset and both checks
6 changes: 2 additions & 4 deletions docs/content/concepts/configuration/advanced-config-types.mdx
@@ -124,8 +124,7 @@ class MyDataStructuresConfig(Config):
user_scores: Dict[str, int]

@asset
-def scoreboard(config: MyDataStructuresConfig):
-    ...
+def scoreboard(config: MyDataStructuresConfig): ...

result = materialize(
[scoreboard],
@@ -161,8 +160,7 @@ class MyNestedConfig(Config):
user_data: Dict[str, UserData]

@asset
-def average_age(config: MyNestedConfig):
-    ...
+def average_age(config: MyNestedConfig): ...

result = materialize(
[average_age],
3 changes: 1 addition & 2 deletions docs/content/concepts/io-management/io-managers.mdx
@@ -405,8 +405,7 @@ class ExternalIOManager(IOManager):
        # setup stateful cache
        self._cache = {}

-    def handle_output(self, context: OutputContext, obj):
-        ...
+    def handle_output(self, context: OutputContext, obj): ...

    def load_input(self, context: InputContext):
        if context.asset_key in self._cache:
3 changes: 1 addition & 2 deletions docs/content/concepts/logging/loggers.mdx
@@ -203,8 +203,7 @@ from dagster import Definitions, define_asset_job, asset


@asset
-def some_asset():
-    ...
+def some_asset(): ...


the_job = define_asset_job("the_job", selection="*")
6 changes: 2 additions & 4 deletions docs/content/concepts/ops-jobs-graphs/graphs.mdx
@@ -272,13 +272,11 @@ from dagster import asset, job, op


@asset
-def emails_to_send():
-    ...
+def emails_to_send(): ...


@op
-def send_emails(emails) -> None:
-    ...
+def send_emails(emails) -> None: ...


@job
3 changes: 1 addition & 2 deletions docs/content/concepts/ops-jobs-graphs/job-execution.mdx
@@ -220,8 +220,7 @@ For example, the following job will execute at most two ops at once with the `da
}
}
)
-def tag_concurrency_job():
-    ...
+def tag_concurrency_job(): ...
```

**Note:** These limits are only applied on a per-run basis. You can apply op concurrency limits across multiple runs using the <PyObject module="dagster_celery" object="celery_executor" /> or <PyObject module="dagster_celery_k8s" object="celery_k8s_job_executor" />.
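Conceptually, a per-run tag limit is a counting semaphore keyed by (tag key, tag value). The toy sketch below illustrates that bookkeeping only; the function and parameter names are hypothetical, not the executor's actual code:

```python
from collections import Counter


def can_launch(op_tags, active_counts, limits):
    """Return True if an op with `op_tags` fits under every matching limit.

    `limits` maps (key, value) pairs to the max number of concurrently
    running ops in this run; `active_counts` tracks what is running now.
    """
    for key, value in op_tags.items():
        cap = limits.get((key, value))
        if cap is not None and active_counts[(key, value)] >= cap:
            return False
    return True


limits = {("database", "redshift"): 2}
active = Counter({("database", "redshift"): 2})
print(can_launch({"database": "redshift"}, active, limits))  # False
print(can_launch({"database": "postgres"}, active, limits))  # True
```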
6 changes: 2 additions & 4 deletions docs/content/concepts/ops-jobs-graphs/op-jobs.mdx
@@ -87,8 +87,7 @@ from dagster import graph, op, ConfigurableResource


class Server(ConfigurableResource):
-    def ping_server(self):
-        ...
+    def ping_server(self): ...


@op
@@ -222,8 +221,7 @@ from dagster import Definitions, job


@job
-def do_it_all():
-    ...
+def do_it_all(): ...


defs = Definitions(jobs=[do_it_all])
@@ -156,8 +156,7 @@ images_partitions_def = DynamicPartitionsDefinition(name="images")


@asset(partitions_def=images_partitions_def)
-def images(context: AssetExecutionContext):
-    ...
+def images(context: AssetExecutionContext): ...
```

Partition keys can be added and removed for a given dynamic partition set. For example, the following code snippet demonstrates the usage of a [sensor](/concepts/partitions-schedules-sensors/sensors) to detect the presence of a new partition and then trigger a run for that partition:
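The add/remove lifecycle the sensor drives can be mimicked with an ordinary set. This is purely illustrative: Dagster persists dynamic partition keys in instance storage rather than in memory, and the file names below are invented:

```python
# A stand-in for the dynamic partition set named "images".
partitions = set()

# A sensor notices new files and registers them as partition keys.
partitions.add("image-001.png")
partitions.add("image-002.png")

# A key can later be removed, e.g. after its source file is deleted.
partitions.discard("image-001.png")

print(sorted(partitions))  # ['image-002.png']
```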
@@ -256,8 +255,7 @@ partitions_def = DailyPartitionsDefinition(start_date="2023-01-21")


@asset(partitions_def=partitions_def)
-def events():
-    ...
+def events(): ...


@asset(
@@ -271,8 +269,7 @@ def events():
)
],
)
-def yesterday_event_stats():
-    ...
+def yesterday_event_stats(): ...
```

</TabItem>
@@ -296,8 +293,7 @@ partitions_def = DailyPartitionsDefinition(start_date="2023-01-21")


@asset(partitions_def=partitions_def)
-def events():
-    ...
+def events(): ...


@asset(
@@ -310,8 +306,7 @@ def events():
)
},
)
-def yesterday_event_stats(events):
-    ...
+def yesterday_event_stats(events): ...
```

</TabItem>
@@ -340,13 +335,11 @@ hourly_partitions_def = HourlyPartitionsDefinition(start_date="2022-05-31-00:00"


@asset(partitions_def=hourly_partitions_def)
-def asset1():
-    ...
+def asset1(): ...


@asset(partitions_def=hourly_partitions_def)
-def asset2():
-    ...
+def asset2(): ...


partitioned_asset_job = define_asset_job(
@@ -164,8 +164,7 @@ from dagster import build_schedule_from_partitioned_job, job


@job(config=my_partitioned_config)
-def do_stuff_partitioned():
-    ...
+def do_stuff_partitioned(): ...


do_stuff_partitioned_schedule = build_schedule_from_partitioned_job(
12 changes: 4 additions & 8 deletions docs/content/concepts/partitions-schedules-sensors/schedules.mdx
@@ -45,8 +45,7 @@ Here's a simple schedule that runs a job every day, at midnight:

```python file=concepts/partitions_schedules_sensors/schedules/schedules.py startafter=start_basic_schedule endbefore=end_basic_schedule
@job
-def my_job():
-    ...
+def my_job(): ...


basic_schedule = ScheduleDefinition(job=my_job, cron_schedule="0 0 * * *")
@@ -109,8 +108,7 @@ from dagster import build_schedule_from_partitioned_job, job


@job(config=my_partitioned_config)
-def do_stuff_partitioned():
-    ...
+def do_stuff_partitioned(): ...


do_stuff_partitioned_schedule = build_schedule_from_partitioned_job(
@@ -130,8 +128,7 @@ from dagster import (


@asset(partitions_def=HourlyPartitionsDefinition(start_date="2020-01-01-00:00"))
-def hourly_asset():
-    ...
+def hourly_asset(): ...


partitioned_asset_job = define_asset_job("partitioned_job", selection=[hourly_asset])
@@ -264,8 +261,7 @@ class DateFormatter(ConfigurableResource):
return dt.strftime(self.format)

@job
-def process_data():
-    ...
+def process_data(): ...

@schedule(job=process_data, cron_schedule="* * * * *")
def process_data_schedule(
@@ -97,8 +97,7 @@ Once a sensor is added to a <PyObject object="Definitions" /> object with the jo

```python file=concepts/partitions_schedules_sensors/sensors/sensors.py startafter=start_running_in_code endbefore=end_running_in_code
@sensor(job=asset_job, default_status=DefaultSensorStatus.RUNNING)
-def my_running_sensor():
-    ...
+def my_running_sensor(): ...
```

If you manually start or stop a sensor in the UI, that will override any default status that is set in code.
@@ -250,8 +249,7 @@ class UsersAPI(ConfigurableResource):
return requests.get(self.url).json()

@job
def process_user():
...
def process_user(): ...

@sensor(job=process_user)
def process_new_users_sensor(