Brain.js

GPU accelerated Neural networks in JavaScript for Browsers and Node.js


About


brain.js is a GPU accelerated library for Neural Networks written in JavaScript.

:bulb: This is a continuation of [harthur/brain](https://github.com/harthur/brain), which is no longer maintained. More info

Table of Contents


  - NPM
  - CDN
  - Download
    - [For training with RNNTimeStep, LSTMTimeStep and GRUTimeStep](#for-training-with-rnntimestep-lstmtimestep-and-grutimestep)
    - [For training with RNN, LSTM and GRU](#for-training-with-rnn-lstm-and-gru)
  - train
  - run
  - forecast
  - Example
  - Transform
  - [likely](#likely)
  - [toSVG](#toSVG)

Installation and Usage


NPM


You can install brain.js with npm:

```sh
npm install brain.js
```

CDN


```html
<script src="//unpkg.com/brain.js"></script>
```

Download



Installation note


Brain.js depends on a native module, headless-gl, for GPU support. In most cases installing brain.js from npm should just work. However, if you run into problems, it means the prebuilt binaries could not be downloaded from the GitHub repositories and you might need to build them yourself.

Building from source


Please make sure the following dependencies are installed and up to date and then run:

```sh
npm rebuild
```

System dependencies

Mac OS X


Ubuntu/Debian

- A GNU C++ environment (available via the build-essential package on apt)
- Working and up to date OpenGL drivers

```sh
sudo apt-get install -y build-essential libxi-dev libglu1-mesa-dev libglew-dev pkg-config
```

Windows

- run in cmd: npm config set msvs_version 2015
- run in cmd: npm config set python python2.7

If you are using Build Tools 2017, run npm config set msvs_version 2017 instead.

Examples


Here's an example showcasing how to approximate the XOR function using brain.js (more info on config here):


```js
// provide optional config object (or undefined). Defaults shown.
const config = {
  binaryThresh: 0.5,
  hiddenLayers: [3], // array of ints for the sizes of the hidden layers in the network
  activation: 'sigmoid', // supported activation types: ['sigmoid', 'relu', 'leaky-relu', 'tanh']
  leakyReluAlpha: 0.01, // supported for activation type 'leaky-relu'
};

// create a simple feed forward neural network with backpropagation
const net = new brain.NeuralNetwork(config);

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

const output = net.run([1, 0]); // [0.987]
```

Or, using a recurrent neural network (more info on config here):

```js
// provide optional config object, defaults shown.
const config = {
  inputSize: 20,
  inputRange: 20,
  hiddenLayers: [20, 20],
  outputSize: 20,
  learningRate: 0.01,
  decayRate: 0.999,
};

// create a simple recurrent neural network
const net = new brain.recurrent.RNN(config);

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

let output = net.run([0, 0]); // [0]
output = net.run([0, 1]); // [1]
output = net.run([1, 0]); // [1]
output = net.run([1, 1]); // [0]
```

However, there is no reason to use a neural network to figure out XOR. (-: So, here is a more involved, realistic example:

More Examples


You can check out this fantastic screencast, which explains how to train a simple neural network using a real world dataset: How to create a neural network in the browser using Brain.js.

- Experimental (NeuralNetwork only, but more to come!): using the GPU in a browser, or in Node with GPU fallback to CPU, plus a TypeScript version.

Training


Use train() to train the network with an array of training data. The network has to be trained with all the data in bulk in one call to train(). More training patterns will probably take longer to train, but will usually result in a network better at classifying new patterns.

Note


Training is computationally expensive, so you should try to train the network offline (or on a Worker) and use the toFunction() or toJSON() options to plug the pre-trained network into your website.

Data format


For training with NeuralNetwork


Each training pattern should have an input and an output, both of which can be either an array of numbers from 0 to 1 or a hash of numbers from 0 to 1. For the color contrast demo it looks something like this:

```js
const net = new brain.NeuralNetwork();

net.train([
  { input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1 } },
  { input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1 } },
  { input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 } },
]);

const output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.99, black: 0.002 }
```

Here's another variation of the above example. (_Note_ that input objects do not need to be similar.)

```js
net.train([
  { input: { r: 0.03, g: 0.7 }, output: { black: 1 } },
  { input: { r: 0.16, b: 0.2 }, output: { white: 1 } },
  { input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 } },
]);

const output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.81, black: 0.18 }
```

For training with RNNTimeStep, LSTMTimeStep and GRUTimeStep


Each training pattern can either:

- Be an array of numbers
- Be an array of arrays of numbers

Example using an array of numbers:

```js
const net = new brain.recurrent.LSTMTimeStep();

net.train([[1, 2, 3]]);

const output = net.run([1, 2]); // 3
```

Example using an array of arrays of numbers:

```js
const net = new brain.recurrent.LSTMTimeStep({
  inputSize: 2,
  hiddenLayers: [10],
  outputSize: 2,
});

net.train([
  [1, 3],
  [2, 2],
  [3, 1],
]);

const output = net.run([
  [1, 3],
  [2, 2],
]); // [3, 1]
```

For training with RNN, LSTM and GRU


Each training pattern can either:

- Be an array of values
- Be a string
- Have an input and an output
  - Either of which can have an array of values or a string

CAUTION: When using an array of values, you can use ANY value; however, each value is represented in the neural network by a single input. So the more _distinct values_ you have, _the larger your input layer_. If you have hundreds, thousands, or millions of floating-point values, _THIS IS NOT THE RIGHT CLASS FOR THE JOB_. Also, when deviating from strings, this gets into beta territory.

Example using direct strings:
Hello World using brain.js:

```js
const net = new brain.recurrent.LSTM();

net.train(['I am brainjs, Hello World!']);

const output = net.run('I am brainjs');
alert(output);
```

```js
const net = new brain.recurrent.LSTM();

net.train([
  'doe, a deer, a female deer',
  'ray, a drop of golden sun',
  'me, a name I call myself',
]);

const output = net.run('doe'); // ', a deer, a female deer'
```

Example using strings with inputs and outputs:

```js
const net = new brain.recurrent.LSTM();

net.train([
  { input: 'I feel great about the world!', output: 'happy' },
  { input: 'The world is a terrible place!', output: 'sad' },
]);

const output = net.run('I feel great about the world!'); // 'happy'
```

Training Options


train() takes a hash of options as its second argument:

```js
net.train(data, {
  // Default values --> expected validation
  iterations: 20000, // the maximum times to iterate the training data --> number greater than 0
  errorThresh: 0.005, // the acceptable error percentage from training data --> number between 0 and 1
  log: false, // true to use console.log, when a function is supplied it is used --> either true or a function
  logPeriod: 10, // iterations between logging out --> number greater than 0
  learningRate: 0.3, // scales with delta to affect training rate --> number between 0 and 1
  momentum: 0.1, // scales with next layer's change value --> number between 0 and 1
  callback: null, // a periodic callback that can be triggered while training --> null or function
  callbackPeriod: 10, // the number of iterations through the training data between callback calls --> number greater than 0
  timeout: Infinity, // the max number of milliseconds to train for --> number greater than 0
});
```

The network will stop training whenever one of the two criteria is met: the training error has gone below the threshold (default 0.005), or the max number of iterations (default 20000) has been reached.

By default, training will not let you know how it's doing until the end, but set log to true to get periodic updates on the current training error of the network. The training error should decrease over time. The updates are printed to the console; if you set log to a function, that function is called with the updates instead. If you want to use the values of the updates in your own output, set callback to a function instead.

The learning rate is a parameter that influences how quickly the network trains. It's a number from 0 to 1. If the learning rate is close to 0, training will take longer. If it is closer to 1, training is faster, but the results may be constrained to a local minimum and perform badly on new data (_overfitting_). The default learning rate is 0.3.

The momentum is similar to the learning rate, also expecting a value from 0 to 1, but it is multiplied against the next level's change value. The default value is 0.1.

Any of these training options can be passed into the constructor, or into the updateTrainingOptions(opts) method, and they will be saved on the network and used during training. If you save your network to JSON, these training options are saved and restored as well (except for callback and log: callback will be forgotten and log will be restored using console.log).

A boolean property called invalidTrainOptsShouldThrow is set to true by default. While it is true, passing a training option that is outside the normal range throws an error with a message about the abnormal option. When it is set to false, no error is thrown; instead, the related information is sent to console.warn.

Async Training


trainAsync() takes the same arguments as train() (data and options). Instead of returning the results object from training, it returns a promise that resolves to the training results object. It does NOT work with:

- brain.recurrent.RNN
- brain.recurrent.GRU
- brain.recurrent.LSTM
- brain.recurrent.RNNTimeStep
- brain.recurrent.GRUTimeStep
- brain.recurrent.LSTMTimeStep

```js
const net = new brain.NeuralNetwork();
net
  .trainAsync(data, options)
  .then((res) => {
    // do something with my trained network
  })
  .catch(handleError);
```

With multiple networks you can train in parallel like this:

```js
const net = new brain.NeuralNetwork();
const net2 = new brain.NeuralNetwork();

const p1 = net.trainAsync(data, options);
const p2 = net2.trainAsync(data, options);

Promise.all([p1, p2])
  .then((values) => {
    const res = values[0];
    const res2 = values[1];
    console.log(
      `net trained in ${res.iterations} and net2 trained in ${res2.iterations}`
    );
    // do something super cool with my 2 trained networks
  })
  .catch(handleError);
```

Cross Validation


Cross validation can provide a less fragile way of training on larger data sets. The brain.js API provides cross validation like this:

```js
const crossValidate = new brain.CrossValidate(
  () => new brain.NeuralNetwork(networkOptions)
);
crossValidate.train(data, trainingOptions, k); // note: k (or KFolds) is optional
const json = crossValidate.toJSON(); // all stats in json as well as neural networks
const net = crossValidate.toNeuralNetwork(); // get top performing net out of `crossValidate`

// optionally later, restore from the saved JSON
const restoredNet = crossValidate.fromJSON(json);
```

Use CrossValidate with these classes:

- brain.NeuralNetwork
- brain.RNNTimeStep
- brain.LSTMTimeStep
- brain.GRUTimeStep

An example of using cross validate can be found in examples/javascript/cross-validate.js

Methods


train(trainingData) -> trainingStatus


The output of train() is a hash of information about how the training went:

```js
{
  error: 0.0039139985510105032,  // training error
  iterations: 406                // training iterations
}
```

run(input) -> prediction


Supported on classes:

- brain.NeuralNetwork
- brain.NeuralNetworkGPU -> all the functionality of brain.NeuralNetwork, but run on the GPU (via gpu.js in WebGL2, WebGL1, or falling back to CPU)
- brain.recurrent.RNN
- brain.recurrent.LSTM
- brain.recurrent.GRU
- brain.recurrent.RNNTimeStep
- brain.recurrent.LSTMTimeStep
- brain.recurrent.GRUTimeStep

Example:

```js
// feed forward
const net = new brain.NeuralNetwork();
net.fromJSON(json);
net.run(input);

// time step
const timeStepNet = new brain.recurrent.LSTMTimeStep();
timeStepNet.fromJSON(json);
timeStepNet.run(input);

// recurrent
const recurrentNet = new brain.recurrent.LSTM();
recurrentNet.fromJSON(json);
recurrentNet.run(input);
```

forecast(input, count) -> predictions


Available with the following classes. Outputs an array of predictions that continue the input series.

- brain.recurrent.RNNTimeStep
- brain.recurrent.LSTMTimeStep
- brain.recurrent.GRUTimeStep

Example:

```js
const net = new brain.recurrent.LSTMTimeStep();
net.fromJSON(json);
net.forecast(input, 3);
```

toJSON() -> json


Serialize neural network to json

fromJSON(json)


Deserialize neural network from json

Failing


If the network failed to train, the error will be above the error threshold. This could happen if the training data is too noisy (most likely), the network does not have enough hidden layers or nodes to handle the complexity of the data, or it has not been trained for enough iterations.

If the training error is still something huge like 0.4 after 20000 iterations, it's a good sign that the network can't make sense of the given data.

RNN, LSTM, or GRU Output too short or too long


The net instance's maxPredictionLength property (default 100) can be set to adjust the length of the net's output.

Example:

```js
const net = new brain.recurrent.LSTM();

// later in code, after training on a few novels, write me a new one!
net.maxPredictionLength = 1000000000; // Be careful!
net.run('Once upon a time');
```

JSON


Serialize or load in the state of a trained network with JSON:

```js
const json = net.toJSON();
net.fromJSON(json);
```

Standalone Function


You can also get a custom standalone function from a trained network that acts just like run():

```js
const run = net.toFunction();
const output = run({ r: 1, g: 0.4, b: 0 });
console.log(run.toString()); // copy and paste! no need to import brain.js
```

Options


NeuralNetwork() takes a hash of options:

```js
const net = new brain.NeuralNetwork({
  activation: 'sigmoid', // activation function
  hiddenLayers: [4],
  learningRate: 0.6, // global learning rate, useful when training using streams
});
```

activation


This parameter lets you specify which activation function your neural network should use. There are currently four supported activation functions, sigmoid being the default:

- sigmoid (the default)
- relu
- leaky-relu
  - related option: leakyReluAlpha, an optional number, defaults to 0.01
- tanh

Here's a table (thanks, Wikipedia!) summarizing a plethora of activation functions — Activation Function

hiddenLayers


You can use this to specify the number of hidden layers in the network and the size of each layer. For example, if you want two hidden layers - the first with 3 nodes and the second with 4 nodes, you'd give:

```js
hiddenLayers: [3, 4];
```

By default brain.js uses one hidden layer with size proportionate to the size of the input array.

Streams


The network now has a WriteStream. You can train the network by using pipe() to send the training data to the network.

Example


Refer to [stream-example.js](examples/javascript/stream-example.js) for an example on how to train the network with a stream.

Initialization


To train the network using a stream you must first create the stream by calling net.createTrainStream() which takes the following options:

- floodCallback() - the callback function to re-populate the stream. This gets called on every training iteration.
- doneTrainingCallback(info) - the callback function to execute when the network is done training. The info param will contain a hash of information about how the training went:

```js
{
  error: 0.0039139985510105032,  // training error
  iterations: 406                // training iterations
}
```

Transform


Use a Transform to coerce the data into the correct format. You might also use a Transform stream to normalize your data on the fly.

Utilities


likely


```js
const likely = require('brain/likely');
const key = likely(input, net);
```

For a likely example, see: simple letter detection

toSVG


```html
<script src="../../src/utilities/svg.js"></script>
```

Renders the network topology of a feedforward network

```js
document.getElementById('result').innerHTML = brain.utilities.toSVG(
  network,
  options
);
```

For a toSVG example, see: network rendering


Neural Network Types


- [brain.NeuralNetwork](src/neural-network.ts) - Feedforward Neural Network with backpropagation
- [brain.NeuralNetworkGPU](src/neural-network-gpu.ts) - Feedforward Neural Network with backpropagation, GPU version
- [brain.recurrent.RNNTimeStep](src/recurrent/rnn-time-step.ts) - Time Step Recurrent Neural Network or "RNN"
- [brain.recurrent.LSTMTimeStep](src/recurrent/lstm-time-step.ts) - Time Step Long Short Term Memory Neural Network or "LSTM"
- [brain.recurrent.GRUTimeStep](src/recurrent/gru-time-step.ts) - Time Step Gated Recurrent Unit or "GRU"
- [brain.recurrent.RNN](src/recurrent/rnn.ts) - Recurrent Neural Network or "RNN"
- [brain.recurrent.LSTM](src/recurrent/lstm.ts) - Long Short Term Memory Neural Network or "LSTM"
- [brain.recurrent.GRU](src/recurrent/gru.ts) - Gated Recurrent Unit or "GRU"
- [brain.FeedForward](src/feed-forward.ts) - Highly Customizable Feedforward Neural Network with backpropagation
- [brain.Recurrent](src/recurrent.ts) - Highly Customizable Recurrent Neural Network with backpropagation

Why different Neural Network Types


Different neural nets do different things well. For example:

- A Feedforward Neural Network can classify simple things very well, but it has no memory of previous actions and has infinite variation of results.
- A Time Step Recurrent Neural Network _remembers_, and can predict future values.
- A Recurrent Neural Network _remembers_, and has a finite set of results.

Get Involved


W3C machine learning standardization process


If you are a developer, or if you just care about what an ML API should look like, please take part: join the W3C community and share your opinions, or simply support the opinions you agree with.

Brain.js is a widely adopted open source machine learning library in the JavaScript world. There are several reasons for this, but the most notable is its simplicity of use without sacrificing performance.
We would like to keep it simple to learn, simple to use, and performant when it comes to a W3C standard. We think the current brain.js API is quite close to what we could expect to become a standard.
And since supporting it doesn't require much effort and can still make a huge difference, feel free to join the W3C community group and support a brain.js-like API.

Get involved in the ongoing W3C machine learning standardization process here.
You can also join our open discussion about standardization here.

Issues


If you have an issue, whether a bug or a feature you think would benefit your project, let us know and we will do our best.

Create issues here and follow the template.

brain.js.org


Source for brain.js.org is available at the Brain.js.org repository. Built using the awesome vue.js & bulma. Contributions are always welcome.

Contributors


This project exists thanks to all the people who contribute. [Contribute].

Backers


Thank you to all our backers! 🙏 [Become a backer]


Sponsors


Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]