Merge pull request #36 from undreamai/fix/release_fixes_and_readme
Fix/release fixes and readme
amakropoulos authored Jan 18, 2024
2 parents fa8996f + 700c0f1 commit 697988f
Showing 6 changed files with 59 additions and 27 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -11,3 +11,4 @@ hooks/pre-commit.meta
setup.sh.meta
*.api
*.api.meta
CHANGELOG.release.md.meta
13 changes: 0 additions & 13 deletions CHANGELOG.md.diff

This file was deleted.

45 changes: 41 additions & 4 deletions README.md
@@ -4,11 +4,27 @@
</p>

<h3 align="center">Integrate LLM models in Unity!</h3>
<br>
LLMUnity allows you to integrate, run and deploy LLMs (Large Language Models) in the Unity engine.<br>

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
<a href="https://discord.gg/tZRGntma"><img src="https://discordapp.com/api/guilds/1194779009284841552/widget.png?style=shield"/></a>
[![Reddit](https://img.shields.io/badge/Reddit-%23FF4500.svg?style=flat&logo=Reddit&logoColor=white)](https://www.reddit.com/user/UndreamAI)

LLMUnity allows you to integrate, run and deploy LLMs (Large Language Models) in the Unity engine.<br>
LLMUnity is built on top of the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) and [llamafile](https://github.com/Mozilla-Ocho/llamafile) libraries.


<sub>
<a href="#at-a-glance" style="color: black">At a glance</a>&nbsp;&nbsp;&nbsp;
<a href="#setup" style=color: black>Setup</a>&nbsp;&nbsp;&nbsp;
<a href="#how-to-use" style=color: black>How to use</a>&nbsp;&nbsp;&nbsp;
<a href="#examples" style=color: black>Examples</a>&nbsp;&nbsp;&nbsp;
<a href="#use-your-own-model" style=color: black>Use your own model</a>&nbsp;&nbsp;&nbsp;
<a href="#multiple-client--remote-server-setup" style=color: black>Multiple client / Remote server setup</a>&nbsp;&nbsp;&nbsp;
<a href="#options" style=color: black>Options</a>&nbsp;&nbsp;&nbsp;
<a href="#license" style=color: black>License</a>
</sub>


## At a glance
- :computer: Cross-platform! Supports Windows, Linux and macOS ([supported versions](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#supported-oses-and-cpus))
- :house: Runs locally without internet access but also supports remote servers
@@ -17,7 +33,13 @@ LLMUnity is built on top of the awesome [llama.cpp](https://github.com/ggerganov
- :wrench: Easy to set up, call with a single line of code
- :moneybag: Free to use for both personal and commercial purposes

[:vertical_traffic_light: Upcoming Releases](https://github.com/orgs/undreamai/projects/2/views/10)
🧪 Tested on Unity: 2021 LTS, 2022 LTS, 2023<br>
:vertical_traffic_light: [Upcoming Releases](https://github.com/orgs/undreamai/projects/2/views/10)

## How to help
- Join us at [Discord](https://discord.gg/tZRGntma) and say hi!
- Star the repo and spread the word about the project ❤️!
- Submit feature requests or bugs as [issues](https://github.com/undreamai/LLMUnity/issues) or even submit a PR and become a collaborator!

## Setup
To install the package you can follow the typical asset / package process in Unity:<br>
@@ -45,7 +67,7 @@ For a step-by-step tutorial you can have a look at our guide:
Create a GameObject for the LLM :chess_pawn: (a script-based sketch of the same setup follows this list):
- Create an empty GameObject. In the GameObject Inspector click `Add Component` and select the LLM script (`Scripts>LLM`).
- Download the default model with the `Download Model` button (this will take a while as it is ~4GB).<br>You can also load your own model in .gguf format with the `Load model` button (see [Use your own model](#use-your-own-model)).
- Define the role of your AI in the `Prompt`. You can also define the name of the AI (`AI Mame`) and the player (`Player Name`).
- Define the role of your AI in the `Prompt`. You can also define the name of the AI (`AI Name`) and the player (`Player Name`).
- (Optional) By default the LLM script is set up to receive the reply from the model as it is produced in real-time (recommended). If you prefer to receive the full reply in one go, you can deselect the `Stream` option.
- (Optional) Adjust the server or model settings to your preference (see [Options](#options)).
<br>
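
The same setup could also be done from a script, roughly as in the sketch below. This is only an illustration: the `LLMUnity` namespace and the field names (`prompt`, `AIName`, `playerName`, `stream`) are assumptions inferred from the Inspector labels above, not confirmed API.

``` c#
using UnityEngine;
using LLMUnity; // assumed namespace

public class LLMSetup : MonoBehaviour
{
    void Awake()
    {
        // Add the LLM component and mirror the Inspector settings described above.
        // Field names are assumptions based on the Inspector labels.
        LLM llm = gameObject.AddComponent<LLM>();
        llm.prompt = "A chat between a curious human and an AI assistant.";
        llm.AIName = "AI";
        llm.playerName = "Human";
        llm.stream = true; // corresponds to the `Stream` checkbox
    }
}
```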
@@ -95,6 +117,21 @@ That's all :sparkles:!
You can also:


<details>
<summary>Choose whether to add the message to the chat/prompt history</summary>

The last argument of the `Chat` function is a boolean that specifies whether to add the message to the history (default: true):
``` c#
void Game(){
  // your game function
  ...
  string message = "Hello bot!";
  await llm.Chat(message, HandleReply, ReplyCompleted, false);
  ...
}
```
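
For completeness, the `HandleReply` and `ReplyCompleted` callbacks referenced above could be as simple as the following sketch, assuming (as in this snippet's usage) that `HandleReply` receives the reply text and `ReplyCompleted` takes no arguments:
``` c#
void HandleReply(string reply){
  // do something with the reply, e.g. show it in the UI
  Debug.Log(reply);
}

void ReplyCompleted(){
  // do something once the full reply has been received
  Debug.Log("The AI replied!");
}
```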

</details>
<details>
<summary>Wait for the reply before proceeding to the next lines of code</summary>

23 changes: 15 additions & 8 deletions Runtime/LLM.cs
@@ -171,14 +171,22 @@ private void CheckIfListening(string message)
serverBlock.Set();
}
}
catch {}
catch { }
}

private void ProcessExited(object sender, EventArgs e)
{
serverBlock.Set();
}

// Quote spaces so paths containing spaces survive shell argument splitting;
// e.g. "My Models/model.gguf" becomes My' 'Models/model.gguf on Unix shells
// (Windows uses " " instead), which the shell joins back into the original path.
private string EscapeSpaces(string input)
{
if (Application.platform == RuntimePlatform.WindowsEditor || Application.platform == RuntimePlatform.WindowsPlayer)
return input.Replace(" ", "\" \"");
else
return input.Replace(" ", "' '");
}

private void RunServerCommand(string exe, string args)
{
string binary = exe;
@@ -194,11 +202,12 @@ private void RunServerCommand(string exe, string args)
if (Application.platform == RuntimePlatform.LinuxEditor || Application.platform == RuntimePlatform.LinuxPlayer)
{
// use APE binary directly if on Linux
arguments = $"{binary.Replace(" ", "' '")} {arguments}";
arguments = $"{EscapeSpaces(binary)} {arguments}";
binary = SelectApeBinary();
} else if (Application.platform == RuntimePlatform.OSXEditor || Application.platform == RuntimePlatform.OSXPlayer)
}
else if (Application.platform == RuntimePlatform.OSXEditor || Application.platform == RuntimePlatform.OSXPlayer)
{
arguments = $"-c \"{binary.Replace(" ", "' '")} {arguments}\"";
arguments = $"-c \"{EscapeSpaces(binary)} {arguments}\"";
binary = "sh";
}
Debug.Log($"Server command: {binary} {arguments}");
@@ -218,13 +227,11 @@ private void StartLLMServer()
loraPath = GetAssetPath(lora);
if (!File.Exists(loraPath)) throw new System.Exception($"File {loraPath} not found!");
}
modelPath = modelPath.Replace(" ", "' '");
loraPath = loraPath.Replace(" ", "' '");

int slots = parallelPrompts == -1 ? FindObjectsOfType<LLMClient>().Length : parallelPrompts;
string arguments = $" --port {port} -m {modelPath} -c {contextSize} -b {batchSize} --log-disable --nobrowser -np {slots}";
string arguments = $" --port {port} -m {EscapeSpaces(modelPath)} -c {contextSize} -b {batchSize} --log-disable --nobrowser -np {slots}";
if (numThreads > 0) arguments += $" -t {numThreads}";
if (loraPath != "") arguments += $" --lora {loraPath}";
if (loraPath != "") arguments += $" --lora {EscapeSpaces(loraPath)}";

string GPUArgument = numGPULayers <= 0 ? "" : $" -ngl {numGPULayers}";
RunServerCommand(server, arguments + GPUArgument);
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.0.1
1.0.2
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "ai.undream.llmunity",
"version": "1.0.1",
"version": "1.0.2",
"displayName": "LLMUnity",
"description": "LLMUnity allows to run and distribute LLM models in the Unity engine.",
"unity": "2022.3",
