
llama.cpp b3003 (new formula) #172915

Merged: 2 commits, May 27, 2024
1 change: 1 addition & 0 deletions .github/autobump.txt
@@ -1373,6 +1373,7 @@ literate-git
little-cms2
livekit
livekit-cli
llama.cpp
llm
llvm
lmdb
59 changes: 59 additions & 0 deletions Formula/l/llama.cpp.rb
@@ -0,0 +1,59 @@
class LlamaCpp < Formula
  desc "LLM inference in C/C++"
  homepage "https://github.com/ggerganov/llama.cpp"
  # CMake uses Git to generate version information.
  url "https://github.com/ggerganov/llama.cpp.git",
      tag:      "b3003",
      revision: "d298382ad977ec89c8de7b57459b9d7965d2c272"
  version "b3003"
  license "MIT"
  head "https://github.com/ggerganov/llama.cpp.git", branch: "master"

  depends_on "cmake" => :build
  uses_from_macos "curl"

  on_linux do
    depends_on "openblas"
  end

  def install
    args = %W[
      -DBUILD_SHARED_LIBS=ON
      -DLLAMA_LTO=ON
      -DLLAMA_CCACHE=OFF
      -DLLAMA_ALL_WARNINGS=OFF
      -DLLAMA_NATIVE=#{build.bottle? ? "OFF" : "ON"}
      -DLLAMA_ACCELERATE=#{OS.mac? ? "ON" : "OFF"}
      -DLLAMA_BLAS=#{OS.linux? ? "ON" : "OFF"}
      -DLLAMA_BLAS_VENDOR=OpenBLAS
      -DLLAMA_METAL=#{OS.mac? ? "ON" : "OFF"}
      -DLLAMA_METAL_EMBED_LIBRARY=ON
      -DLLAMA_CURL=ON
      -DCMAKE_INSTALL_RPATH=#{rpath}
    ]
    args << "-DLLAMA_METAL_MACOSX_VERSION_MIN=#{MacOS.version}" if OS.mac?

    system "cmake", "-S", ".", "-B", "build", *args, *std_cmake_args
    system "cmake", "--build", "build"
    system "cmake", "--install", "build"

    # Keep the real executables in libexec and expose namespaced symlinks in bin:
    # upstream's "main" becomes "llama"; every other tool gets a "llama-" prefix.
    libexec.install bin.children
    libexec.children.each do |file|
      next unless file.executable?

      new_name = if file.basename.to_s == "main"
        "llama"
      else
        "llama-#{file.basename}"
      end

      bin.install_symlink file => new_name
    end
  end
Check failure on line 54 in Formula/l/llama.cpp.rb
GitHub Actions / macOS 12-x86_64: `brew test --verbose llama.cpp` failed on macOS Monterey (12)!

==> Testing llama.cpp
==> /usr/local/Cellar/llama.cpp/b3003/bin/llama --hf-repo ggml-org/tiny-llamas -m stories15M-q4_0.gguf -n 400 -p I -ngl 0
Log start
main: build = 3003 (d298382a)
main: built with Apple clang version 14.0.0 (clang-1400.0.29.202) for x86_64-apple-darwin21.6.0
main: seed = 1716795763
llama_download_file: no previous model file found stories15M-q4_0.gguf
llama_download_file: downloading from https://huggingface.co/ggml-org/tiny-llamas/resolve/main/stories15M-q4_0.gguf to stories15M-q4_0.gguf
Error: llama.cpp: failed
An exception occurred within a child process:
  BuildError: Failed executing: /usr/local/Cellar/llama.cpp/b3003/bin/llama --hf-repo ggml-org/tiny-llamas -m stories15M-q4_0.gguf -n 400 -p I -ngl 0
  (Ruby backtrace elided.)

  test do
    system bin/"llama", "--hf-repo", "ggml-org/tiny-llamas",
           "-m", "stories15M-q4_0.gguf",
           "-n", "400", "-p", "I", "-ngl", "0"
  end
end
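The renaming rule in the formula's `install` step maps upstream's generically named `main` binary to `llama` and prefixes every other tool with `llama-`, so the formula's commands do not collide with other packages in `bin`. A minimal standalone sketch of that mapping (plain Ruby, outside the Homebrew DSL; `llama_link_name` is a hypothetical helper name, not part of the formula):

```ruby
# Sketch of the executable-renaming rule from the install step:
# "main" becomes "llama"; any other tool name gets a "llama-" prefix.
def llama_link_name(basename)
  basename == "main" ? "llama" : "llama-#{basename}"
end

puts llama_link_name("main")      # "llama"
puts llama_link_name("quantize")  # "llama-quantize"
```

In the formula itself the same rule feeds `bin.install_symlink file => new_name`, so `bin` contains only the prefixed symlinks while the real executables live in `libexec`.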