
feat: add llama.cpp backend #94

Merged
merged 3 commits into main on May 23, 2024

Conversation

McPatate (Member)

@McPatate mentioned this pull request May 21, 2024

| Repository name | Source type | Average hole completion time (s) | Pass percentage |
| --- | --- | --- | --- |
| smol-rs/async-executor | github | 6.904 | 0.00% |
| jaemk/cached | github | 11.474 | 0.00% |
| tkaitchuck/constrandom | github | 17.101 | 0.00% |
| tiangolo/fastapi | github | 27.458 | 29.95% |
| huggingface/huggingface_hub | github | 25.113 | 40.00% |
| gcanti/io-ts | github | 20.485 | 60.00% |
| lancedb/lancedb | github | 120.696 | 0.00% |
| mmaitre314/picklescan | github | 6.164 | 0.00% |
| simple | local | 6.485 | 0.00% |
| encode/starlette | github | 12.268 | 0.00% |
| colinhacks/zod | github | 8.601 | 0.00% |

Note: The "hole completion time" represents the full process of:

  • copying files from the setup cache directory
  • replacing the code in the file with a completion from the model
  • building the project
  • running the tests

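The four steps above can be sketched as a simple timed pipeline. This is a minimal illustration, not the testbed's actual implementation: the step closures are stand-ins for the real copy/replace/build/test commands, and the function name `time_pipeline` is hypothetical.

```rust
use std::time::Instant;

// Hypothetical sketch: measure the wall-clock "hole completion time" of the
// full pipeline. Each step is a closure returning whether it succeeded, so
// the harness stays decoupled from the real commands (which presumably
// shell out to `cp`, a build command, and a test runner).
fn time_pipeline(steps: &[&dyn Fn() -> bool]) -> (f64, bool) {
    let start = Instant::now();
    // `all` short-circuits: a failed build skips running the tests.
    let passed = steps.iter().all(|step| step());
    (start.elapsed().as_secs_f64(), passed)
}

fn main() {
    // Stand-in closures for the four steps described above.
    let steps: [&dyn Fn() -> bool; 4] = [
        &|| true,  // copy files from the setup cache directory
        &|| true,  // replace the hole with the model's completion
        &|| true,  // build the project
        &|| false, // run the tests (a failure counts against the pass percentage)
    ];
    let (elapsed, passed) = time_pipeline(&steps);
    println!("hole completion time: {elapsed:.3}s, passed: {passed}");
}
```

A run like this would contribute its elapsed time to the average and a failure to the pass percentage, which is how a repository can show a low completion time yet a 0.00% pass rate.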
@McPatate merged commit 8ee6d96 into main May 23, 2024
13 checks passed
@McPatate deleted the feat/add-llama-cpp branch May 23, 2024 14:56
McPatate added a commit that referenced this pull request May 24, 2024
* feat: add `llama.cpp` backend

* fix(ci): install stable toolchain instead of nightly

* fix(ci): use different model

---------

Co-authored-by: flopes <FredericoPerimLopes@users.noreply.github.com>