Conversation

McPatate (Member)

@McPatate mentioned this pull request on May 21, 2024
@McPatate force-pushed the feat/add-llama-cpp branch from 2e33cec to 52a1699 on May 23, 2024 at 12:25
| Repository name | Source type | Average hole completion time (s) | Pass percentage |
| --- | --- | --- | --- |
| smol-rs/async-executor | github | 6.904 | 0.00% |
| jaemk/cached | github | 11.474 | 0.00% |
| tkaitchuck/constrandom | github | 17.101 | 0.00% |
| tiangolo/fastapi | github | 27.458 | 29.95% |
| huggingface/huggingface_hub | github | 25.113 | 40.00% |
| gcanti/io-ts | github | 20.485 | 60.00% |
| lancedb/lancedb | github | 120.696 | 0.00% |
| mmaitre314/picklescan | github | 6.164 | 0.00% |
| simple | local | 6.485 | 0.00% |
| encode/starlette | github | 12.268 | 0.00% |
| colinhacks/zod | github | 8.601 | 0.00% |

Note: The "hole completion time" represents the full process of:

  • copying files from the setup cache directory
  • replacing the code in the file with a completion from the model
  • building the project
  • running the tests
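The steps above can be sketched as a small timing harness. This is a minimal illustrative sketch, not the project's actual testbed code; the function name, parameters, and build/test commands are all hypothetical placeholders:

```python
import shutil
import subprocess
import tempfile
import time
from pathlib import Path


def time_hole_completion(
    cache_dir: Path,
    target_file: str,
    completion: str,
    build_cmd: list[str],
    test_cmd: list[str],
) -> tuple[float, bool]:
    """Time one hole-completion iteration and report whether its tests passed.

    All parameters are hypothetical placeholders for illustration.
    """
    start = time.perf_counter()
    with tempfile.TemporaryDirectory() as workdir:
        # 1. copy files from the setup cache directory
        repo = Path(workdir) / "repo"
        shutil.copytree(cache_dir, repo)
        # 2. replace the code in the file with a completion from the model
        (repo / target_file).write_text(completion)
        # 3. build the project
        built = subprocess.run(build_cmd, cwd=repo).returncode == 0
        # 4. run the tests (only counts as a pass if the build succeeded)
        passed = built and subprocess.run(test_cmd, cwd=repo).returncode == 0
    return time.perf_counter() - start, passed
```

Under this sketch, the "Average hole completion time" column would be the mean elapsed time over all holes for a repository, and "Pass percentage" the fraction of holes for which `passed` is true.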

@McPatate merged commit 8ee6d96 into main on May 23, 2024
@McPatate deleted the feat/add-llama-cpp branch on May 23, 2024 at 14:56
McPatate added a commit that referenced this pull request May 24, 2024
* feat: add `llama.cpp` backend

* fix(ci): install stable toolchain instead of nightly

* fix(ci): use different model

---------

Co-authored-by: flopes <FredericoPerimLopes@users.noreply.github.com>