JS-Torch

A JavaScript library like PyTorch, built from scratch.


PyTorch in JavaScript


- JS-Torch is a Deep Learning JavaScript library built from scratch, designed to closely follow PyTorch's syntax.
- It contains a fully functional Tensor object that tracks gradients, Deep Learning layers and functions, and an Automatic Differentiation engine.
- Feel free to try out the Web Demo!

Note: You can install the package locally with `npm install js-pytorch`.
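
After installing, a quick smoke test (a minimal sketch using only the calls shown in the examples below):

```typescript
// CommonJS import, as used throughout this README:
const { torch } = require("js-pytorch");

// Create and print a small random tensor:
const t = torch.randn([2, 3]);
console.log(t);
```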


1. Project Structure


- `assets/`: Folder that stores images and the Demo.
  - `assets/demo/`: JS-Torch's Web Demo.
- `src/`: Framework source files (TypeScript); the sketch below shows how these surface at runtime.
  - `src/tensor.ts`: The Tensor class and all of the tensor operations.
  - `src/utils.ts`: Helper functions and shared operations.
  - `src/layers.ts`: Submodule of the framework; contains the full layers.
  - `src/optim.ts`: Submodule of the framework; contains the Adam optimizer.
- `tests/`: Folder with unit tests; contains `test.ts`.
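
How these submodules surface at runtime (a minimal sketch; it assumes `nn.Linear` exposes the same `parameters()` method that `nn.Module` subclasses use in the Transformer example below):

```typescript
const { torch } = require("js-pytorch");

// tensor.ts and utils.ts back the top-level torch namespace:
const a = torch.randn([2, 2]);

// layers.ts is exposed as torch.nn, optim.ts as torch.optim:
const nn = torch.nn;
const optim = torch.optim;

const layer = new nn.Linear(2, 4);
const opt = new optim.Adam(layer.parameters(), 1e-3, 0);
```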

2. Running it Yourself


Simple Autograd Example:


```typescript
const { torch } = require("js-pytorch");

// Instantiate Tensors (the second argument enables gradient tracking):
let x = torch.randn([8, 4, 5]);
let w = torch.randn([8, 5, 4], true); // requires_grad = true
let b = torch.tensor([0.2, 0.5, 0.1, 0.0], true); // requires_grad = true

// Make calculations:
let out = torch.matmul(x, w);
out = torch.add(out, b);

// Compute gradients on the whole graph:
out.backward();

// Get gradients from specific Tensors:
console.log(w.grad);
console.log(b.grad);
```
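
A note on the shapes above: `torch.matmul(x, w)` produces an `[8, 4, 4]` tensor, and `torch.add` broadcasts the length-4 `b` across its trailing dimension. After `backward()`, `w.grad` and `b.grad` come back with the same shapes as `w` and `b`.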

Complex Autograd Example (Transformer):


```typescript
const { torch } = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;

class Transformer extends nn.Module {
  constructor(vocab_size, hidden_size, n_timesteps, n_heads, p) {
    super();
    // Instantiate Transformer's Layers:
    this.embed = new nn.Embedding(vocab_size, hidden_size);
    this.pos_embed = new nn.PositionalEmbedding(n_timesteps, hidden_size);
    this.b1 = new nn.Block(hidden_size, hidden_size, n_heads, n_timesteps, p); // p is the dropout rate
    this.b2 = new nn.Block(hidden_size, hidden_size, n_heads, n_timesteps, p);
    this.ln = new nn.LayerNorm(hidden_size);
    this.linear = new nn.Linear(hidden_size, vocab_size);
  }

  forward(x) {
    let z;
    z = torch.add(this.embed.forward(x), this.pos_embed.forward(x));
    z = this.b1.forward(z);
    z = this.b2.forward(z);
    z = this.ln.forward(z);
    z = this.linear.forward(z);
    return z;
  }
}

// Define hyperparameters (example values):
const vocab_size = 52;
const hidden_size = 32;
const n_timesteps = 16;
const n_heads = 4;
const dropout_p = 0.2;
const batch_size = 8;

// Instantiate your custom nn.Module:
const model = new Transformer(vocab_size, hidden_size, n_timesteps, n_heads, dropout_p);

// Define loss function and optimizer (learning rate 5e-3, no regularization):
const loss_func = new nn.CrossEntropyLoss();
const optimizer = new optim.Adam(model.parameters(), 5e-3, 0);

// Instantiate sample input and output:
let x = torch.randint(0, vocab_size, [batch_size, n_timesteps, 1]);
let y = torch.randint(0, vocab_size, [batch_size, n_timesteps]);
let loss;

// Training Loop:
for (let i = 0; i < 40; i++) {
  // Forward pass through the Transformer:
  let z = model.forward(x);

  // Get loss:
  loss = loss_func.forward(z, y);

  // Backpropagate the loss using torch.tensor's backward() method:
  loss.backward();

  // Update the weights:
  optimizer.step();

  // Reset the gradients to zero after each training step:
  optimizer.zero_grad();
}
```
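
After training, inference is just a forward pass with no `backward()` call. A minimal sketch reusing the model and hyperparameters above:

```typescript
// Run the trained model on a fresh batch of token indices:
const x_test = torch.randint(0, vocab_size, [batch_size, n_timesteps, 1]);
const logits = model.forward(x_test);

// logits has shape [batch_size, n_timesteps, vocab_size]; the largest value
// along the last axis is the model's predicted next token at each position.
console.log(logits);
```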



3. Distribution & Devtools


- To build for distribution, run `npm run build`. CJS and ESM modules and `index.d.ts` will be output in the `dist/` folder (see the import sketch below).
- To check the code with ESLint at any time, run `npm run lint`.
- To improve code formatting with Prettier, run `npm run prettier`.
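
Since the build emits both module formats, the package can be consumed either way. A minimal sketch (the CommonJS form is the one used throughout this README; the ESM named export is assumed to mirror it):

```typescript
// ESM (bundlers or "type": "module" projects); named export assumed
// to mirror the CommonJS one:
import { torch } from "js-pytorch";

// CommonJS (Node require), as used in the examples above:
// const { torch } = require("js-pytorch");
```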

4. Results


- The models implemented in the unit tests all converged to near-zero losses (a rough sketch of such a check follows this list).
- Run them with `npm test`!
- This package is not as optimized as PyTorch yet, but I tried to make it more interpretable. Efficiency improvements are incoming!
- Hope you enjoy!
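
As a rough illustration of what those tests check (a hypothetical convergence sketch, not one of the actual unit tests; it reuses only the APIs shown above):

```typescript
const { torch } = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;

// Tiny model: project 16 features to 4 classes at each of 5 timesteps,
// mirroring the [batch, timesteps, classes] shapes of the Transformer example.
const model = new nn.Linear(16, 4);
const loss_func = new nn.CrossEntropyLoss();
const optimizer = new optim.Adam(model.parameters(), 1e-2, 0);

const x = torch.randn([8, 5, 16]);
const y = torch.randint(0, 4, [8, 5]);

// The loss should fall steadily as the layer fits this batch:
for (let i = 0; i < 100; i++) {
  const z = model.forward(x);
  const loss = loss_func.forward(z, y);
  loss.backward();
  optimizer.step();
  optimizer.zero_grad();
  if (i % 20 === 0) console.log(`iter ${i}:`, loss);
}
```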

5. Benchmarks


- Performance benchmarks are also included and tracked in the `tests/benchmarks/` directory (a minimal timing sketch follows this list).
- Run all benchmarks with `npm run bench`.
- Save new benchmarks with `npm run bench:update` and add the updated files to your commit.
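
The benchmark files follow the usual pattern of timing hot operations over repeated runs; a minimal sketch of the idea (not one of the tracked benchmarks):

```typescript
const { torch } = require("js-pytorch");

// Time a batched matmul, the hottest operation in the examples above:
const a = torch.randn([32, 128, 128]);
const b = torch.randn([32, 128, 128]);

console.time("matmul 32x128x128");
for (let i = 0; i < 10; i++) {
  torch.matmul(a, b);
}
console.timeEnd("matmul 32x128x128");
```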