
train-text-from-scratch : fix assert failure in ggml-alloc #3618

Merged (1 commit merged into master from ttfs-alloc-fix on Oct 17, 2023)

Conversation

@slaren (Collaborator) commented Oct 13, 2023

Fixes #3617
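
For background, the llama.cpp training examples at this time used ggml-alloc in a two-pass pattern: a measure allocator first computes how large the compute buffer must be, then a real allocator backed by a buffer of that size allocates the graph, and the allocator asserts if the two passes do not line up. The sketch below only illustrates that general pattern; it is not the change made in this PR. It assumes a `ggml_context` created with `no_alloc = true` and enough metadata space for both passes, and the tensor names, shapes, and toy graph are placeholders.

```c
// Minimal sketch of the two-pass ggml-alloc pattern (measure, then allocate).
// Not the actual diff of this PR; names and shapes are placeholders.
#include <stdlib.h>
#include "ggml.h"
#include "ggml-alloc.h"

static struct ggml_cgraph * build_graph(struct ggml_context * ctx,
                                        struct ggml_allocr  * alloc) {
    // input tensors go through the allocator so that the measure pass
    // and the real pass see the same allocation order
    struct ggml_tensor * tokens = ggml_new_tensor_1d(ctx, GGML_TYPE_I32, 256);
    ggml_allocr_alloc(alloc, tokens);

    struct ggml_tensor * logits = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 32000, 256);
    ggml_allocr_alloc(alloc, logits);

    // intermediate/output tensors are allocated by ggml_allocr_alloc_graph
    struct ggml_tensor * probs = ggml_soft_max(ctx, logits);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, probs);
    return gf;
}

void run_with_ggml_alloc(struct ggml_context * ctx) {
    const size_t align = 32; // tensor alignment

    // pass 1: measure how much memory the graph needs
    struct ggml_allocr * alloc = ggml_allocr_new_measure(align);
    struct ggml_cgraph * gf    = build_graph(ctx, alloc);
    const size_t mem_size      = ggml_allocr_alloc_graph(alloc, gf) + align;
    ggml_allocr_free(alloc);

    // pass 2: allocate for real in a buffer of the measured size
    void * buf = malloc(mem_size);
    alloc = ggml_allocr_new(buf, mem_size, align);
    gf    = build_graph(ctx, alloc);
    ggml_allocr_alloc_graph(alloc, gf);

    // ... run the graph, e.g. ggml_graph_compute_with_ctx(ctx, gf, 4); ...

    ggml_allocr_free(alloc);
    free(buf);
}
```

The measured size is only valid if the second pass builds and allocates the same tensors in the same order as the measure pass; mismatches between the two passes are the kind of condition the ggml-alloc asserts guard against.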

@pliablepixels commented

Thank you! That seems to fix the initial build error; training is proceeding at the moment!

@slaren (Collaborator, Author) commented Oct 17, 2023

@ggerganov did you mean to merge this PR when closing #3617? It is the same fix as for finetune, but still needs to be merged.

@ggerganov (Owner) left a comment

Thanks for the ping - I missed that

@ggerganov ggerganov merged commit a5e8c1d into master Oct 17, 2023
35 of 40 checks passed
@slaren slaren deleted the ttfs-alloc-fix branch October 17, 2023 17:01
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 19, 2023
* 'master' of github.com:ggerganov/llama.cpp:
  fix embeddings when using CUDA (ggerganov#3657)
  llama : avoid fprintf in favor of LLAMA_LOG (ggerganov#3538)
  readme : update hot-topics & models, detail windows release in usage (ggerganov#3615)
  CLBlast: Fix temporary buffer size for f16 conversion (wsize)
  train-text-from-scratch : fix assert failure in ggml-alloc (ggerganov#3618)
  editorconfig : remove trailing spaces
  server : documentation of JSON return value of /completion endpoint (ggerganov#3632)
  save-load-state : fix example + add ci test (ggerganov#3655)
  readme : add Aquila2 links (ggerganov#3610)
  tokenizer : special token handling (ggerganov#3538)
  k-quants : fix quantization ranges (ggerganov#3646)
  llava : fix tokenization to not add bos between image embeddings and user prompt (ggerganov#3645)
  MPT : support GQA for replit-code-v1.5 (ggerganov#3627)
  Honor -ngl option for Cuda offloading in llava (ggerganov#3621)
@zamalex commented Jan 13, 2024

I've cloned github.com:ggerganov/llama.cpp and I'm still facing that error.
Can anyone help?
