dataset: `sacrebleu_aug-mix_mtedx/valid`

It worked fine for `flores_aug-mix_dev` and `mtdata_aug-mix_Neulab-tedtalks_dev-1-eng-ell`.
https://firefox-ci-tc.services.mozilla.com/tasks/KmZrLAdcQtGnQd8ZdgxDhQ/runs/0
```
[task 2024-09-03T23:26:56.604Z] tokenizer.json: 0%| | 0.00/1.96M [00:00<?, ?B/s]
[task 2024-09-03T23:26:56.604Z] tokenizer.json: 100%|██████████| 1.96M/1.96M [00:00<00:00, 30.5MB/s]
[task 2024-09-03T23:26:56.787Z] 2024-09-03 23:26:56,787 - simalign.simalign - INFO - Initialized the EmbeddingLoader with model: bert-base-multilingual-cased
[task 2024-09-03T23:27:03.176Z] Traceback (most recent call last):
[task 2024-09-03T23:27:03.176Z]   File "/builds/worker/checkouts/vcs/pipeline/data/dataset_importer.py", line 272, in <module>
[task 2024-09-03T23:27:03.176Z]     main()
[task 2024-09-03T23:27:03.176Z]   File "/builds/worker/checkouts/vcs/pipeline/data/dataset_importer.py", line 267, in main
[task 2024-09-03T23:27:03.176Z]     run_import(args.type, args.dataset, args.output_prefix)
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/checkouts/vcs/pipeline/data/dataset_importer.py", line 235, in run_import
[task 2024-09-03T23:27:03.177Z]     augment(output_prefix, aug_modifer)
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/checkouts/vcs/pipeline/data/dataset_importer.py", line 146, in augment
[task 2024-09-03T23:27:03.177Z]     corpus = add_alignments(corpus)
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/checkouts/vcs/pipeline/data/dataset_importer.py", line 119, in add_alignments
[task 2024-09-03T23:27:03.177Z]     sent_aln = aligner.get_word_aligns(src_sent, trg_sent)["itermax"]
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/.local/lib/python3.10/site-packages/simalign/simalign.py", line 209, in get_word_aligns
[task 2024-09-03T23:27:03.177Z]     vectors = self.embed_loader.get_embed_list([src_sent, trg_sent]).cpu().detach().numpy()
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/.local/lib/python3.10/site-packages/simalign/simalign.py", line 62, in get_embed_list
[task 2024-09-03T23:27:03.177Z]     inputs = self.tokenizer(sent_batch, is_split_into_words=True, padding=True, truncation=True, return_tensors="pt")
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2803, in __call__
[task 2024-09-03T23:27:03.177Z]     encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
[task 2024-09-03T23:27:03.177Z]   File "/builds/worker/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2889, in _call_one
[task 2024-09-03T23:27:03.178Z]     return self.batch_encode_plus(
[task 2024-09-03T23:27:03.178Z]   File "/builds/worker/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3080, in batch_encode_plus
[task 2024-09-03T23:27:03.178Z]     return self._batch_encode_plus(
[task 2024-09-03T23:27:03.178Z]   File "/builds/worker/.local/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 798, in _batch_encode_plus
[task 2024-09-03T23:27:03.178Z]     elif is_split_into_words and not isinstance(ids_or_pair_ids[0], (list, tuple)):
[task 2024-09-03T23:27:03.178Z] IndexError: list index out of range
```
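Judging from the failing check `ids_or_pair_ids[0]`, a plausible trigger is an empty source or target sentence somewhere in the mtedx valid set: `add_alignments` passes word lists to simalign, which hands them straight to the Hugging Face tokenizer with `is_split_into_words=True`, and an empty word list has no element 0. A minimal pre-filter sketch (the helper name is hypothetical, not part of the pipeline) would drop such pairs before alignment:

```python
def drop_empty_pairs(corpus):
    """Remove sentence pairs where either side has no words.

    An empty word list passed to the tokenizer with
    is_split_into_words=True makes _batch_encode_plus index
    ids_or_pair_ids[0] and raise IndexError, consistent with
    the traceback above. (Sketch only; assumes corpus is an
    iterable of (src, trg) string pairs.)
    """
    return [
        (src, trg)
        for src, trg in corpus
        if src.split() and trg.split()
    ]
```

Whether the mtedx valid set actually contains blank lines would need to be confirmed against the imported data; logging the offending pair before the `get_word_aligns` call would verify this.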