node-llama-cpp

Run AI models locally on your machine

Pre-built bindings are provided with a fallback to building from source with cmake

Features

Run a text generation model locally on your machine
Metal and CUDA support
Pre-built binaries are provided, with a fallback to building from source without node-gyp or Python
Chat with a model using a chat wrapper
Use the CLI to chat with a model without writing any code (see the example after this list)
Up-to-date with the latest version of llama.cpp. Download and compile the latest release with a single CLI command.
Force a model to generate output in a parseable format, like JSON, or even force it to follow a specific JSON schema (see the grammar sketch after the Usage example)
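For example, here is a minimal sketch of the CLI; the `chat` and `download` subcommands and the `--model` flag are assumed from this package's CLI, and the model path is a placeholder:

```bash
# Chat with a local model straight from the terminal
npx node-llama-cpp chat --model ./models/codellama-13b.Q3_K_M.gguf

# Download and compile the latest release of llama.cpp
npx node-llama-cpp download
```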


Installation

```bash
npm install --save node-llama-cpp
```

This package comes with pre-built binaries for macOS, Linux and Windows.

If binaries are not available for your platform, it'll fall back to downloading the latest version of llama.cpp and building it from source with cmake.
To disable this behavior, set the environment variable `NODE_LLAMA_CPP_SKIP_DOWNLOAD` to `true`.
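For example, on a POSIX shell the variable can be set inline for a single install (a sketch; adjust the syntax for your shell):

```bash
NODE_LLAMA_CPP_SKIP_DOWNLOAD=true npm install --save node-llama-cpp
```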

Usage

```typescript
import {fileURLToPath} from "url";
import path from "path";
import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load a GGUF model from disk
const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "codellama-13b.Q3_K_M.gguf")
});
const context = new LlamaContext({model});

// The session keeps the conversation state across prompts
const session = new LlamaChatSession({context});


const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);


const q2 = "Summarize what you said";
console.log("User: " + q2);

const a2 = await session.prompt(q2);
console.log("AI: " + a2);
```
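To force the model to produce parseable output, as listed in the features above, here is a minimal sketch building on the same API; it assumes a `LlamaGrammar.getFor("json")` helper and `grammar`/`maxTokens` prompt options, so check the documentation for the exact API in your version:

```typescript
import {fileURLToPath} from "url";
import path from "path";
import {LlamaModel, LlamaGrammar, LlamaContext, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "codellama-13b.Q3_K_M.gguf")
});

// A built-in grammar that constrains sampling to valid JSON (assumed helper)
const grammar = await LlamaGrammar.getFor("json");
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

const q1 = 'Create a JSON that contains a message saying "hi there"';
console.log("User: " + q1);

const a1 = await session.prompt(q1, {
    grammar,
    maxTokens: context.getContextSize()
});
console.log("AI: " + a1);

// Since the sampler can only emit tokens the grammar allows,
// parsing the output should not throw
console.log(JSON.parse(a1));
```

Because generation is constrained at the token level, the model cannot produce output that violates the grammar, which is what makes the final `JSON.parse` call safe.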

For more examples, see the getting started guide.


Contributing

To contribute to node-llama-cpp, read the contribution guide.

Acknowledgements