Apple Silicon Macs are pretty fast for this kind of thing and benefit from unified memory.

1. If I've got a MacBook with 24GB of RAM and one with 36GB, could I technically run ~60GB models? I assume I won't get the performance of the MLX Llama implementations, but can I expect performance similar to llama.cpp, given a fast network connection?
2. The `--workers` flag in the examples only has one IP address; do I add comma-separated values or a space-separated list?
Thanks in advance, will post in discussions with results if I get some answers. Might try and pool a few colleagues Macs together to see how far we can push it.
AWESOME PROJECT. Massive respect.
You just separate them with spaces, like so:

```
./dllama inference ... --workers 10.0.0.2:9998 10.0.0.3:9998 10.0.0.4:9998
```
You can also run several workers on the same IP, like so:

```
./dllama inference ... --workers 10.0.0.1:9996 10.0.0.1:9997 10.0.0.1:9998
```
As for 1: performance on workers with unified memory will be faster due to their higher memory bandwidth.
The root node consumes a bit more memory than the workers, so I'd use the 36GB MacBook as the root node. Typically the memory required to load the model is divided by the number of workers, though that number needs to be a power of 2 (2, 4, 8, etc.).
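As a rough back-of-the-envelope sketch of that split (assuming the model divides evenly across nodes; the root-node overhead figure here is an illustrative guess, not a measured value):

```python
# Sketch: estimated per-node memory for a model split across a power-of-2
# node count. The root overhead is an assumed placeholder, not measured.

def per_node_memory_gb(model_gb: float, nodes: int, root_overhead_gb: float = 2.0):
    """Return (root_gb, worker_gb) estimates for an evenly split model."""
    if nodes < 1 or nodes & (nodes - 1) != 0:
        raise ValueError("node count must be a power of 2")
    share = model_gb / nodes
    # Root holds its share plus some extra bookkeeping memory.
    return share + root_overhead_gb, share

# A ~60GB model over 4 nodes: each node holds ~15GB of weights,
# with the root needing a bit more on top.
root, worker = per_node_memory_gb(60, 4)
print(round(root, 1), round(worker, 1))  # → 17.0 15.0
```

So under these (assumed) numbers, a ~60GB model over four Macs would fit in 24GB machines, with the 36GB machine comfortably covering the root's extra overhead.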
Also, it's worth experimenting with the number of threads you specify; in my case I have 6 cores and 12 threads, but I get the best performance with 8 threads.
Larger models require more data to be transferred during each inference pass; something like Q80 Llama 70B might already hit the limits of gigabit Ethernet, and at that point the switching capacity of your Ethernet switch also becomes a factor.
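To get a feel for when the link becomes the bottleneck, here is a hedged sketch; the bytes-per-pass figure is a made-up illustration (measure your actual traffic), only the bandwidth arithmetic is real:

```python
# Sketch: wall-clock time spent on the wire per inference pass for a
# given link speed. The 10 MB/pass figure below is an assumed example,
# not a measurement from distributed-llama.

def wire_time_ms(bytes_per_pass: float, link_gbps: float = 1.0) -> float:
    """Milliseconds to move bytes_per_pass over a link_gbps link."""
    bits = bytes_per_pass * 8
    return bits / (link_gbps * 1e9) * 1000.0

# If a pass moved 10 MB between nodes, gigabit Ethernet alone would add
# roughly 80 ms per token, before any switch latency is counted.
print(round(wire_time_ms(10e6), 1))
```

The takeaway: per-token latency grows linearly with transferred bytes, so larger models on gigabit links spend a fixed, non-trivial slice of every pass just moving activations.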