
Fix llama tokenizer #22402

Merged: 23 commits merged into huggingface:main on Apr 3, 2023

Conversation

ArthurZucker (Collaborator)

What does this PR do?

Draft for now, but this PR:

  • Fixes the conversion script
  • Updates the llama default special tokens
  • Fixes compatibility issues
  • Cleans up the llama tokenization code
  • Adds tests

HuggingFaceDocBuilderDev commented Mar 27, 2023

The documentation is not available anymore as the PR was closed or merged.

ArthurZucker (Collaborator, Author)

cc @Narsil for visibility!

ArthurZucker marked this pull request as ready for review March 28, 2023 12:04
ArthurZucker (Collaborator, Author)

This will need to wait for #22341

Narsil (Contributor) left a comment


Let's add some tests before merging (and a PR description), probably focusing on the breaking changes we're making here.

ArthurZucker (Collaborator, Author)

Yes, on it!

ArthurZucker (Collaborator, Author)

Will finish this tomorrow!

Varal7 commented Mar 30, 2023

Hi! Does this PR fix the decoding part of the tokenizer? It seems like it always prefixes the output with a space.

For instance, tokenizer.decode(1) returns <s>.
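
A minimal sketch reproducing the report (the checkpoint path is hypothetical; token id 1 is the <s> BOS token in the llama vocabulary):

from transformers import LlamaTokenizer

# hypothetical local path to a converted llama checkpoint
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")
# per the report above, the decoded string comes back with a leading space
print(repr(tokenizer.decode(1)))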

Varal7 added a commit to Varal7/lama.vim that referenced this pull request Mar 30, 2023
Waiting for huggingface/transformers#22402 to fix llama tokenizer
ArthurZucker (Collaborator, Author)

Yes, it does: print(f'\'{tokenizer.decode(tokenizer.encode("Hello world"), skip_special_tokens=True)}\'') outputs 'Hello world' 😉
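
Spelled out (a sketch, again assuming a hypothetical local checkpoint path):

from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")  # hypothetical path
ids = tokenizer.encode("Hello world")  # encode prepends the <s> BOS token
text = tokenizer.decode(ids, skip_special_tokens=True)  # drops <s> on the way back
assert text == "Hello world"  # no stray leading space, matching the output above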

assert tokenizer_fast.clean_up_tokenization_spaces is False
assert tokenizer.clean_up_tokenization_spaces is False
ArthurZucker (Collaborator, Author)


this is such a small nit that I included it 😅
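
For context, clean_up_tokenization_spaces is an instance attribute on both the slow and fast tokenizers; a minimal sketch of the check above (the paths are hypothetical, and importing LlamaTokenizerFast here is an assumption):

from transformers import LlamaTokenizer, LlamaTokenizerFast

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")
tokenizer_fast = LlamaTokenizerFast.from_pretrained("path/to/llama-tokenizer")
assert tokenizer.clean_up_tokenization_spaces is False
assert tokenizer_fast.clean_up_tokenization_spaces is False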

Contributor


After rebasing, this test fails for me :( I just reproduced it on main:

>       assert decoded == "[CLS] this shouldn ' t be! he ' ll go. [SEP]"
E       assert "[CLS] this s...'ll go. [SEP]" == "[CLS] this s... ll go. [SEP]"
E         - [CLS] this shouldn ' t be! he ' ll go. [SEP]
E         ?                   - -        - -
E         + [CLS] this shouldn't be! he'll go. [SEP]
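
The two strings in the diff correspond to the two settings of the flag, which decode also accepts per call (illustrative; tok and ids stand in for the tokenizer and input ids of the failing test):

tok.decode(ids, clean_up_tokenization_spaces=False)
# -> "[CLS] this shouldn ' t be! he ' ll go. [SEP]"  (the expected string)
tok.decode(ids, clean_up_tokenization_spaces=True)
# -> "[CLS] this shouldn't be! he'll go. [SEP]"  (the observed string)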

ArthurZucker (Collaborator, Author)


This is not pointing to the correct part of the test. If clean_up_tokenization_spaces is indeed False, the failure can happen for cache reasons or something else (it also failed for me at some point).
Will check again.

sgugger (Collaborator) left a comment


Nice, thanks for all the fixes and for adding the tests!

Narsil (Contributor) left a comment


LGTM

sgugger merged commit c0f99b4 into huggingface:main Apr 3, 2023
raghavanone pushed a commit to raghavanone/transformers that referenced this pull request Apr 5, 2023
* draft

* update tokenization llama and conversion script

* more updates

* initial commit

* style

* default pad to None

* draft tokenization tests

* update test

* update tokenization tests

* nits

* update

* versioning test

* major fix

* fix more tests

* finish fixing special masks

* last nit

* more nits

* add encode decode tests

* add more

* fix token type ids

* style
xloem pushed a commit to xloem/transformers that referenced this pull request Apr 9, 2023
xloem pushed a commit to xloem/transformers that referenced this pull request Apr 10, 2023
novice03 pushed a commit to novice03/transformers that referenced this pull request Jun 23, 2023