
bpo-42316: Allow unparenthesized walrus operator in indexes #23317

Merged
merged 3 commits into python:master from named-expression-index
Nov 16, 2020

Conversation

lysnikolaou
Contributor

@lysnikolaou commented Nov 16, 2020
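For context, a minimal sketch of what this grammar change permits on Python 3.10 and later (where it landed):

```python
# A named expression can now appear unparenthesized as a subscript index.
data = ["a", "b", "c"]
print(data[idx := 1])   # "b"
print(idx)              # 1

# Slice bounds still need parentheses, e.g. data[(start := 0):2].
```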

@lysnikolaou merged commit cae6018 into python:master Nov 16, 2020
@lysnikolaou deleted the named-expression-index branch November 16, 2020 23:09
adorilson pushed a commit to adorilson/cpython that referenced this pull request Mar 13, 2021
@Brando753 (mannequin) mentioned this pull request Oct 6, 2022
zsol added a commit to Instagram/LibCST that referenced this pull request May 27, 2023
zsol added a commit to Instagram/LibCST that referenced this pull request Jun 7, 2023
(#936, #935, #934, #933, #932, #931)

* Allow walrus in slices

See python/cpython#23317

Raised in #930.
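A minimal round-trip check, assuming a LibCST version that includes this change (`parse_module` and the `.code` attribute are standard LibCST API):

```python
import libcst as cst

# A named expression used directly as a subscript element now parses.
code = "x[a := 0]\n"
module = cst.parse_module(code)
assert module.code == code  # LibCST round-trips source text exactly
```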

* Fix parsing of nested f-string specifiers

For an expression like `f"{one:{two:}{three}}"`, `three` is not in an f-string spec, and should be tokenized accordingly.

This PR fixes the `format_spec_count` bookkeeping in the tokenizer so that it is decremented when a closing `}` is encountered only if that `}` actually closes a format spec.

Reported in #930.
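A small sketch of the case above, assuming a LibCST version with this fix:

```python
import libcst as cst

# `three` should be tokenized as a normal nested expression; the `}` after it
# closes a replacement field, not a format spec, so `format_spec_count` must
# not be decremented for it.
code = 'f"{one:{two:}{three}}"\n'
assert cst.parse_module(code).code == code
```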

* Fix tokenizing `0else`

This is an obscure one.

`_ if 0else _` failed to parse with some very weird errors. It turns out that the tokenizer tries to read `0else` as a single number (`0e` looks like the start of a float exponent), but when it encounters the `l` it realizes this can't be a number and backtracks.

Unfortunately, the backtracking logic was broken: it failed to restore one of the offsets used for whitespace parsing (the byte offset from the start of the line). This caused whitespace nodes to refer to the wrong parts of the input text, eventually producing the errors above.

This PR fixes the bookkeeping when the tokenizer backtracks.

Reported in #930.
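A minimal parse check, assuming a LibCST version with this fix:

```python
import libcst as cst

# `0else` initially looks like a float exponent (`0e...`); the tokenizer has
# to backtrack to the number `0` and restore its whitespace offsets correctly.
code = "x = _ if 0else _\n"
assert cst.parse_module(code).code == code
```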

* Allow no whitespace between lambda keyword and params in certain cases

Python accepts code in which the `lambda` keyword is immediately followed by a `*` with no intervening whitespace, so this PR relaxes the whitespace validation rules for `Lambda` nodes.

Raised in #930.
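A minimal sketch, assuming a LibCST version with this relaxation:

```python
import libcst as cst

# No whitespace between the `lambda` keyword and its starred parameter.
code = "f = lambda*args: args\n"
assert cst.parse_module(code).code == code
```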

* Allow any expression in comprehensions' evaluated expression

This PR relaxes the accepted types for the `elt` field in `ListComp`, `SetComp`, and `GeneratorExp`, as well as the `key` and `value` fields in `DictComp`.

Fixes #500.
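A hedged construction sketch (node names are LibCST's; the exact rendered spacing is only illustrative):

```python
import libcst as cst

# With the relaxed field types, `elt` can be any expression node,
# for example a NamedExpr (walrus expression).
node = cst.ListComp(
    elt=cst.NamedExpr(target=cst.Name("y"), value=cst.Name("x")),
    for_in=cst.CompFor(target=cst.Name("x"), iter=cst.Name("xs")),
)
print(cst.Module(body=[]).code_for_node(node))  # roughly: [y := x for x in xs]
```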

* Allow no space around an ifexp in certain cases

For example in `_ if _ else""if _ else _`.

Raised in #930. Also fixes #854.

* Allow no spaces after `as` in a context manager in certain cases

Like in `with foo()as():pass`

Raised in #930.

* Allow no spaces around walrus in certain cases

Like in `[_:=''for _ in _]`

Raised in #930.

* Allow no whitespace after lambda body in certain cases

Like in `[lambda:()for _ in _]`

Reported in #930.
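A combined round-trip sketch for the spacing edge cases above, assuming a LibCST version that includes these fixes:

```python
import libcst as cst

# Each snippet should parse and round-trip to the identical source text.
snippets = [
    '_ if _ else""if _ else _\n',   # no space around `else`/`if` in a ternary
    "with foo()as():pass\n",        # no space around `as` in a with statement
    "[_:=''for _ in _]\n",          # no spaces around the walrus operator
    "[lambda:()for _ in _]\n",      # no space after a lambda body
]
for code in snippets:
    assert cst.parse_module(code).code == code
```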