Turbowatch 🏎

Extremely fast file change detector and task orchestrator for Node.js.

If you ever wanted something like Nodemon but more capable, then you are in the right place.

Basic usage:

```bash
npm install turbowatch
cat > turbowatch.ts <<'EOD'
import { defineConfig } from 'turbowatch';

export default defineConfig({
  project: __dirname,
  triggers: [
    {
      expression: ['match', '*.ts', 'basename'],
      name: 'build',
      onChange: async ({ spawn }) => {
        await spawn`tsc`;
      },
    },
  ],
});
EOD
npm exec turbowatch ./turbowatch.ts
```

Note See the [Logging](#logging) instructions to print logs that explain what Turbowatch is doing.


Refer to recipes:

- [Handling the AbortSignal](#handling-the-abortsignal)
- [Throttling spawn output](#throttling-spawn-output)

||Turbowatch|Nodemon|
|---|---|---|
|[Node.js interface (scriptable)](#api)|✅|❌¹|
|[Graceful termination](#gracefully-terminating-turbowatch)|✅|❌²|
|[Scriptable child processes (zx)](#spawn)|✅|❌|
|Retries|✅|❌|
|Debounce|✅|❌|
|Interruptible workflows|✅|❌|
|Concurrent workflows|✅|❌|
|[Log grouping](#throttling-spawn-output)|✅|❌|
|[Bring-your-own file-watching backend](#using-custom-file-watching-backend)|✅|❌|
|Works with Turborepo|✅|✅|
|Works with symbolic links|✅|✅|
|Watch specific files or directories|✅|✅|
|Ignoring specific files or directories|✅|✅|
|Open-source|✅|✅|

¹ Undocumented
² Nodemon only provides the ability to [send a custom signal](https://github.com/remy/nodemon#gracefully-reloading-down-your-script) to the worker.

API


Note defineConfig is used to export configuration for consumption by the turbowatch program. If you want to run Turbowatch programmatically, use watch instead. The API of both methods is equivalent.


Turbowatch defaults are a good choice for most projects. However, Turbowatch has _many_ options that you should be familiar with for advanced use cases.

```ts
import {
  watch,
  type ChangeEvent,
} from 'turbowatch';

void watch({
  // Debounces triggers by 1 second.
  // Most multi-file spanning changes are non-atomic. Therefore, it is typically desirable to
  // batch together information about multiple file changes that happened in short succession.
  // Provide { debounce: { wait: 0 } } to disable debounce.
  debounce: {
    wait: 1000,
  },
  // The base directory under which all files are matched.
  // Note: This is different from the "project root" (https://github.com/gajus/turbowatch#project-root).
  project: __dirname,
  triggers: [
    {
      // Expression that matches files based on their name.
      // https://github.com/gajus/turbowatch#expressions
      expression: [
        'allof',
        ['not', ['dirname', 'node_modules']],
        [
          'anyof',
          ['match', '*.ts', 'basename'],
          ['match', '*.tsx', 'basename'],
        ],
      ],
      // Indicates whether the onChange routine should be triggered on script startup.
      // Defaults to false. Leave it false if you would like the onChange routine to not run until the first changes are detected.
      initialRun: true,
      // Determines what to do if a new file change is detected while the trigger is executing.
      // If { interruptible: true }, then AbortSignal will abort the current onChange routine.
      // If { interruptible: false }, then Turbowatch will wait until the onChange routine completes.
      // Defaults to true.
      interruptible: false,
      // Name of the trigger. Used for debugging.
      // Must match the /^[a-z0-9-_]+$/ pattern and must be unique.
      name: 'build',
      // Routine that is executed when file changes are detected.
      onChange: async ({ spawn }: ChangeEvent) => {
        await spawn`tsc`;
        await spawn`tsc-alias`;
      },
      // Routine that is executed when the shutdown signal is received.
      onTeardown: async ({ spawn }) => {
        await spawn`rm -fr ./dist`;
      },
      // Label a task as persistent if it is a long-running process, such as a dev server or --watch mode.
      persistent: false,
      // Retry a task if it fails. Otherwise, the watch program will throw an error if a trigger fails.
      // Defaults to { retries: 0 }.
      retry: {
        retries: 5,
      },
    },
  ],
});
```

Motivation


To abstract the complexity of orchestrating file watching operations.

For context, we are using Turborepo. The reason this project came to be is that Turborepo does not have a "watch" mode (issue #986).

At first, we attempted to use a combination of tsc --watch, concurrently and Nodemon, but things started breaking left and right, e.g.:

- services restarting prematurely (before all the assets are built)
- services failing to gracefully shut down and then failing to start, e.g. because ports are in use

Furthermore, the setup for each workspace was repetitive and not straightforward, and debugging issues was not a great experience because many workspaces running in watch mode produce tons of logs. Because many of the workspaces are dependencies of each other, changes kept re-triggering watch operations, causing the issues mentioned above.

In short, it quickly became clear that we needed more control over the orchestration of what needs to happen, and when, as files change.

We started with a script. At first, I added _debounce_. That improved things. Then I added _graceful termination_ logic, which mostly made everything work. We still had occasional failures due to out-of-order events, but adding _retry_ logic fixed that too... In the end, while we got everything to work, it took a lot of effort and we were still left with a collection of hacky scripts that were hard to maintain and debug, and that's how Turbowatch came to be –

Turbowatch is a toolbox for orchestrating and debugging file watching operations based on everything we learned along the way.

Note If you are working on a very simple project, i.e. just one build step or just one watch operation, then you don't need Turbowatch. Turbowatch is designed for monorepos or otherwise complex workspaces where you have dozens or hundreds of build steps that depend on each other (e.g. building and re-building dependencies, building/starting/stopping Docker containers, populating data, sending notifications, etc).


We also shared these learnings with the Turborepo team in hopes that it will help them design an embedded file watching experience.

Use Cases


Turbowatch can be used to automate any sort of operations that need to happen in response to files changing, e.g.,

- You can run (and conditionally restart) long-running processes (like your Node.js application)
- You can build assets (like TypeScript and Docker images)

spawn


Turbowatch exposes a spawn function that is an instance of zx. Use it to evaluate shell commands:

```ts
async ({ spawn }: ChangeEvent) => {
  await spawn`tsc`;
  await spawn`tsc-alias`;
},
```

The reason Turbowatch abstracts zx is to enable graceful termination of child processes when triggers are configured to be interruptible.

Persistent tasks


Your setup may include tasks that are not designed to exit, e.g. next dev (starts Next.js in development mode).

It is important that these tasks are marked as persistent to distinguish them from tasks that run to completion, as that changes how Turbowatch treats them (see the sketch after the table below).

||Persistent|Non-Persistent|
|---|---|---|
|Ignore|||
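As an illustration (mirroring the dev-server example later in this README), a persistent trigger for a long-running process might look like this sketch:

```ts
// A long-running dev server: marked persistent so Turbowatch knows it never exits,
// and non-interruptible so it is not restarted on every file change.
{
  expression: ['dirname', __dirname],
  interruptible: false,
  name: 'start-server',
  onChange: async ({ spawn }) => {
    await spawn`next dev`;
  },
  persistent: true,
},
```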

Expressions


Expressions are used to match files. The most basic expression is match – it evaluates as true if a glob pattern matches the file, e.g.

Match all files with the *.ts extension:

```ts
['match', '*.ts', 'basename']
```

Expressions can be combined using allof and anyof, e.g.,

Match all files with the `.ts` or `.tsx` extension:

```ts
[
  'anyof',
  ['match', '*.ts', 'basename'],
  ['match', '*.tsx', 'basename']
]
```

Finally, not evaluates as true if the sub-expression evaluated as false, i.e. inverts the sub-expression.

Match all files with the *.ts extension, but exclude index.ts:

```ts
[
  'allof',
  ['match', '*.ts', 'basename'],
  [
    'not',
    ['match', 'index.ts', 'basename']
  ]
]
```

This is the gist behind Turbowatch expressions. However, there are many more expressions. Inspect the Expression type for further guidance.

```ts
type Expression =
  // Evaluates as true if all of the grouped expressions also evaluated as true.
  | ['allof', ...Expression[]]
  // Evaluates as true if any of the grouped expressions also evaluated as true.
  | ['anyof', ...Expression[]]
  // Evaluates as true if a given file has a matching parent directory.
  | ['dirname' | 'idirname', string]
  // Evaluates as true if a glob matches against the basename of the file.
  | ['match' | 'imatch', string, 'basename' | 'wholename']
  // Evaluates as true if the sub-expression evaluated as false, i.e. inverts the sub-expression.
  | ['not', Expression];
```

Note Turbowatch expressions are a subset of Watchman expressions. Originally, Turbowatch was developed to leverage Watchman as a superior backend for watching a large number of files. However, along the way, we discovered that Watchman does not support symbolic links (issue #105). Unfortunately, that makes Watchman unsuitable for projects that utilize linked dependencies (which is the direction in which the ecosystem is moving for dependency management in monorepos). As such, Watchman was replaced with chokidar. We are hoping to provide Watchman as a backend in the future. Therefore, we made the Turbowatch expression syntax compatible with a subset of Watchman expressions.


Note Turbowatch uses micromatch for glob matching. Note that you should use a forward slash (/) to separate paths, even on Windows.
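For example, to match TypeScript files by path rather than by basename, write the glob with forward slashes even on Windows (a minimal sketch using the wholename scope described above):

```ts
// Matches any *.ts file under a src directory, using / as the path separator.
['match', 'src/**/*.ts', 'wholename']
```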


Recipes


Rebuilding assets when file changes are detected


```ts
import { watch } from 'turbowatch';

void watch({
  project: __dirname,
  triggers: [
    {
      expression: [
        'allof',
        ['not', ['dirname', 'node_modules']],
        ['match', '*.ts', 'basename'],
      ],
      name: 'build',
      onChange: async ({ spawn }) => {
        await spawn`tsc`;
        await spawn`tsc-alias`;
      },
    },
  ],
});
```

Restarting server when file changes are detected


```ts
import { watch } from 'turbowatch';

void watch({
  project: __dirname,
  triggers: [
    {
      expression: [
        'allof',
        ['not', ['dirname', 'node_modules']],
        [
          'anyof',
          ['match', '*.ts', 'basename'],
          ['match', '*.graphql', 'basename'],
        ],
      ],
      // Because of this setting, Turbowatch will kill the processes that spawn starts
      // when it detects a change.
      interruptible: true,
      name: 'start-server',
      onChange: async ({ spawn }) => {
        await spawn`tsx ./src/bin/wait.ts`;
        await spawn`tsx ./src/bin/server.ts`;
      },
    },
  ],
});
```

Watching node_modules


There is more than one way to watch node_modules. However, through trial and error we found that the following set of rules works best as a generalized solution.

```ts
import path from 'node:path';
import { watch } from 'turbowatch';

void watch({
  project: path.resolve(__dirname, '../..'),
  triggers: [
    {
      expression: [
        'anyof',
        [
          'allof',
          ['dirname', 'node_modules'],
          ['dirname', 'dist'],
          ['match', '*', 'basename'],
        ],
        [
          'allof',
          ['not', ['dirname', 'node_modules']],
          ['dirname', 'src'],
          ['match', '*', 'basename'],
        ],
      ],
      name: 'build',
      onChange: async ({ spawn }) => {
        return spawn`pnpm run build`;
      },
    },
  ],
});
```

This setup assumes that your workspace sources are in the src directory and that the build task outputs to the dist directory.

Reusing expressions


This might be common sense, but since Turbowatch scripts are regular JavaScript scripts, you can (and should) abstract your expressions and routines.

How you do it is entirely up to you, e.g. you could abstract just the expressions, or you could go as far as abstracting the entire trigger:

```ts
import { watch } from 'turbowatch';
import {
  buildTrigger,
} from '@/turbowatch';

void watch({
  project: __dirname,
  triggers: [
    buildTrigger(),
  ],
});
```

Such abstraction helps to avoid errors that otherwise may occur due to duplicative code across workspaces.
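For illustration, the buildTrigger factory referenced above is not defined in this README; one possible sketch (the implementation shown is an assumption, not part of the Turbowatch API) could look like:

```ts
// @/turbowatch.ts — a shared module exporting a reusable trigger.
import { type ChangeEvent } from 'turbowatch';

// Returns a trigger that rebuilds TypeScript sources outside node_modules.
export const buildTrigger = () => ({
  expression: [
    'allof',
    ['not', ['dirname', 'node_modules']],
    ['match', '*.ts', 'basename'],
  ],
  name: 'build',
  onChange: async ({ spawn }: ChangeEvent) => {
    await spawn`tsc`;
  },
});
```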

Reducing unnecessary reloads


Something that is important to consider when orchestrating file watching triggers is how to avoid unnecessary reloads. Consider this "build" script:

```bash
rm -fr dist && tsc && tsc-alias
```

and let's assume that you are using an expression such as this one to detect when dependencies are updated:

```ts
[
  'allof',
  ['dirname', 'node_modules'],
  ['dirname', 'dist'],
  ['match', '*'],
],
```

Running this script will produce at least 3 file change events:

1. when rm -fr dist completes
2. when tsc completes
3. when tsc-alias completes

What's even worse is that even if the output has not changed, you are still going to trigger file change events (because dist gets replaced).

To some degree, the debounce setting helps with this. However, it only helps if there is no more than 1 second (by default) between consecutive commands.
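If the gaps between your build steps are longer than that, one option is to widen the debounce window (a sketch; wait is in milliseconds, as in the API example above):

```ts
debounce: {
  wait: 5000,
},
```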

One way to avoid this entirely is to output files to an intermediate directory and swap only the files that changed. Here is how we do it:

```bash
rm -fr .dist && tsc --project tsconfig.build.json && rsync -cr --delete .dist/ ./dist/ && rm -fr .dist
```

This "build" script will always produce at most 1 event, and won't produce any events if the outputs have not changed.

This is not specific to Turbowatch, but something worth considering as you are designing your build pipeline.

Retrying failing triggers


Retries are configured by passing a retry property to the trigger configuration.

```ts
/**
 * @property factor The exponential factor to use. Default is 2.
 * @property maxTimeout The maximum number of milliseconds between two retries. Default is Infinity.
 * @property minTimeout The number of milliseconds before starting the first retry. Default is 1000.
 * @property retries The maximum amount of times to retry the operation. Default is 0. Setting this to 1 means do it once, then retry it once.
 */
type Retry = {
  factor?: number,
  maxTimeout?: number,
  minTimeout?: number,
  retries?: number,
}
```
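For example, a trigger that retries a flaky build a few times with exponential backoff might be configured like this sketch:

```ts
{
  expression: ['match', '*.ts', 'basename'],
  name: 'build',
  onChange: async ({ spawn }) => {
    await spawn`tsc`;
  },
  // Retry up to 3 times, waiting 1 second before the first retry
  // and doubling the delay between subsequent retries.
  retry: {
    factor: 2,
    minTimeout: 1000,
    retries: 3,
  },
},
```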

Gracefully terminating Turbowatch


Note SIGINT is automatically handled if you are using the turbowatch executable to evaluate your Turbowatch script. This example shows how to programmatically gracefully shut down Turbowatch if you choose not to use the turbowatch program to evaluate your watch scripts.


Warning Unfortunately, many tools do not allow processes to gracefully terminate. There are open support issues for this in npm (#4603), pnpm (#2653) and yarn (#4667), but they haven't been addressed. Therefore, do not wrap your turbowatch script execution using these tools if you require processes to gracefully terminate.


watch returns an instance of TurbowatchController, which can be used to gracefully terminate the script:

```ts
const abortController = new AbortController();

const { shutdown } = watch({
  abortSignal: abortController.signal,
  project: __dirname,
  triggers: [
    {
      name: 'test',
      expression: ['match', '*', 'basename'],
      onChange: async ({ spawn }) => {
        // `sleep 60` will receive `SIGTERM` as soon as `abortController.abort()` is called.
        await spawn`sleep 60`;
      },
    }
  ],
});

// SIGINT is the signal sent when we press Ctrl+C
process.once('SIGINT', () => {
  void shutdown();
});
```

Invoking shutdown will propagate an abort signal to all onChange handlers. The processes that were initiated using [spawn](#spawn) will receive the SIGTERM signal.

Handling the AbortSignal


A workflow might be interrupted in two scenarios:

- when Turbowatch is being gracefully shut down
- when a routine is marked as interruptible and a new file change is detected

Implementing interruptible workflows requires that you define an AbortSignal handler. If you are using [zx](https://npmjs.com/zx), such an abstraction could look like so:

Note Turbowatch already comes with [zx](https://npmjs.com/zx) bound to the AbortSignal. Just use spawn. The example below demonstrates how to implement equivalent functionality.


```ts
import { type ProcessPromise } from 'zx';

const interrupt = async (
  processPromise: ProcessPromise,
  abortSignal: AbortSignal,
) => {
  let aborted = false;

  const kill = () => {
    aborted = true;

    processPromise.kill();
  };

  abortSignal.addEventListener('abort', kill, { once: true });

  try {
    await processPromise;
  } catch (error) {
    if (!aborted) {
      console.log(error);
    }
  }

  abortSignal.removeEventListener('abort', kill);
};
```

which you can then use to kill your scripts, e.g.

```ts
import { watch } from 'turbowatch';
import { $ } from 'zx';

export default watch({
  project: __dirname,
  triggers: [
    {
      expression: ['match', '*.ts', 'basename'],
      interruptible: false,
      name: 'sleep',
      onChange: async ({ abortSignal }) => {
        await interrupt($`sleep 30`, abortSignal);
      },
    },
  ],
});
```

Tearing down project


onTeardown is going to be called when Turbowatch is gracefully terminated. Use it to "clean up" the project if necessary.

Warning There is no timeout for onTeardown.


```ts
import { watch } from 'turbowatch';

const abortController = new AbortController();

export default watch({
  abortSignal: abortController.signal,
  project: __dirname,
  triggers: [
    {
      expression: ['match', '*.ts', 'basename'],
      name: 'build',
      onChange: async ({ spawn }) => {
        await spawn`tsc`;
      },
      onTeardown: async ({ spawn }) => {
        await spawn`rm -fr ./dist`;
      },
    },
  ],
});
```

Throttling spawn output


When multiple processes are sending logs in parallel, the log stream might be hard to read, e.g.

```yaml
redis:dev: 973191cf > #5 sha256:7f65636102fd1f499092cb075baa95784488c0bbc3e0abff2a6d853109e4a948 4.19MB / 9.60MB 22.3s
api:dev: a1e4c6a7 > [18:48:37.171] 765ms debug @utilities #waitFor: Waiting for database to be ready...
redis:dev: 973191cf > #5 sha256:d01ec855d06e16385fb33f299d9cc6eb303ea04378d0eea3a75d74e26c6e6bb9 0B / 1.39MB 22.7s
api:dev: a1e4c6a7 > [18:48:37.225]  54ms debug @utilities #waitFor: Waiting for Redis to be ready...
worker:dev: 2fb02d72 > [18:48:37.313]  88ms debug @utilities #waitFor: Waiting for database to be ready...
redis:dev: 973191cf > #5 sha256:7f65636102fd1f499092cb075baa95784488c0bbc3e0abff2a6d853109e4a948 5.24MB / 9.60MB 22.9s
worker:dev: 2fb02d72 > [18:48:37.408]  95ms debug @utilities #waitFor: Waiting for Redis to be ready...
redis:dev: 973191cf > #5 sha256:7f65636102fd1f499092cb075baa95784488c0bbc3e0abff2a6d853109e4a948 6.29MB / 9.60MB 23.7s
api:dev: a1e4c6a7 > [18:48:38.172] 764ms debug @utilities #waitFor: Waiting for database to be ready...
api:dev: a1e4c6a7 > [18:48:38.227]  55ms debug @utilities #waitFor: Waiting for Redis to be ready...
```

In this example, the redis, api and worker processes produce logs at almost exactly the same time, causing the log stream to switch between processes every other line. This makes the logs hard to read.

By default, Turbowatch throttles log output to at most once a second per task, producing much easier-to-follow output:

```yaml
redis:dev: 973191cf > #5 sha256:7f65636102fd1f499092cb075baa95784488c0bbc3e0abff2a6d853109e4a948 4.19MB / 9.60MB 22.3s
redis:dev: 973191cf > #5 sha256:d01ec855d06e16385fb33f299d9cc6eb303ea04378d0eea3a75d74e26c6e6bb9 0B / 1.39MB 22.7s
redis:dev: 973191cf > #5 sha256:7f65636102fd1f499092cb075baa95784488c0bbc3e0abff2a6d853109e4a948 5.24MB / 9.60MB 22.9s
redis:dev: 973191cf > #5 sha256:7f65636102fd1f499092cb075baa95784488c0bbc3e0abff2a6d853109e4a948 6.29MB / 9.60MB 23.7s
api:dev: a1e4c6a7 > [18:48:37.171] 765ms debug @utilities #waitFor: Waiting for database to be ready...
api:dev: a1e4c6a7 > [18:48:37.225]  54ms debug @utilities #waitFor: Waiting for Redis to be ready...
api:dev: a1e4c6a7 > [18:48:38.172] 764ms debug @utilities #waitFor: Waiting for database to be ready...
api:dev: a1e4c6a7 > [18:48:38.227]  55ms debug @utilities #waitFor: Waiting for Redis to be ready...
worker:dev: 2fb02d72 > [18:48:37.313]  88ms debug @utilities #waitFor: Waiting for database to be ready...
worker:dev: 2fb02d72 > [18:48:37.408]  95ms debug @utilities #waitFor: Waiting for Redis to be ready...
```

However, this means that some logs might come out of order. To disable this feature, set { throttleOutput: { delay: 0 } }.
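A minimal sketch of where this setting goes, assuming throttleOutput is passed at the top level of the watch configuration (check the exported types for the exact placement):

```ts
void watch({
  project: __dirname,
  // Set delay to 0 to disable output throttling entirely.
  throttleOutput: {
    delay: 0,
  },
  triggers: [],
});
```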

Watching multiple scripts


By default, turbowatch will look for a turbowatch.ts script in the current working directory. However, you can pass multiple scripts to turbowatch to run them concurrently:

```bash
turbowatch ./foo.ts ./bar.ts
```

You can also provide a glob pattern:

```bash
turbowatch '**/turbowatch.ts'
```

Using custom file watching backend


Many of the existing file watching solutions come with tradeoffs, e.g. Watchman does not track symbolic links (#105), chokidar fails to register some file changes (#1240), fs.watch behavior is platform specific, etc. For this reason, Turbowatch provides several backends to choose from and allows you to bring your own backend by implementing the FileWatchingBackend interface.

By default, Turbowatch uses fs.watch on macOS (Node.js v19.1+) and falls back to chokidar on other platforms.

```ts
import {
  watch,
  // Smart watcher that detects the best available file-watching backend.
  TurboWatcher,
  // fs.watch based file watcher.
  FSWatcher,
  // Chokidar based file watcher.
  ChokidarWatcher,
  // Interface that all file watchers must implement.
  FileWatchingBackend,
} from 'turbowatch';

export default watch({
  Watcher: TurboWatcher,
  project: __dirname,
  triggers: [],
});
```
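If you want to pin a specific backend instead of letting TurboWatcher pick one, pass it via the Watcher option (a sketch reusing the imports above):

```ts
// Force the chokidar backend regardless of platform.
export default watch({
  Watcher: ChokidarWatcher,
  project: __dirname,
  triggers: [],
});
```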

Logging


Turbowatch uses Roarr logger.

Export the ROARR_LOG=true environment variable to enable log printing to stdout.

Use @roarr/cli to pretty-print logs.

```bash
ROARR_LOG=true turbowatch | roarr
```

Alternatives


The biggest benefit of using Turbowatch is that it provides a single abstraction for all file watching operations. That is, you might get away with Nodemon, concurrently, --watch, etc. running in parallel, but using Turbowatch will introduce consistency to how you perform watch operations.

Why not use X --watch?


Many tools provide built-in watch functionality, e.g. tsc --watch. However, there are a couple of problems with relying on them:

- Running many file watchers is inefficient and is probably draining your laptop's battery faster than you realize. Turbowatch uses a single server to watch all file changes.
- Native tools do not allow you to combine operations, e.g. if your build depends on tsc --watch and tsc-alias --watch, you cannot combine them. Turbowatch, on the other hand, allows you to chain arbitrary operations.

Note Turbowatch is not a replacement for services that implement Hot Module Replacement (HMR), e.g. Next.js. However, you should still wrap those operations in Turbowatch for consistency, e.g.

```ts
void watch({
  project: __dirname,
  triggers: [
    {
      expression: ['dirname', __dirname],
      // Marking this routine as non-interruptible will ensure that
      // next dev is not restarted when file changes are detected.
      interruptible: false,
      name: 'start-server',
      onChange: async ({ spawn }) => {
        await spawn`next dev`;
      },
      // Enabling this option modifies what Turbowatch logs and warns
      // you if your configuration is incompatible with persistent tasks.
      persistent: true,
    },
  ],
});
```


Why not concurrently?


I have seen concurrently used to "chain" watch operations such as:

```bash
concurrently "tsc -w" "tsc-alias -w"
```

While this might work by brute-force, it will produce unexpected results as the order of execution is not guaranteed.

If you are using Turbowatch, simply execute one command after the other in the trigger workflow, e.g.

```ts
async ({ spawn }: ChangeEvent) => {
  await spawn`tsc`;
  await spawn`tsc-alias`;
},
```

Why not Turborepo?


Turborepo currently does not have support for watch mode (issue #986). However, Turbowatch has been designed to work with Turborepo.

To use Turbowatch with Turborepo:

1. define a persistent task
2. run the persistent task using --parallel

Example:

```json
"dev": {
  "cache": false,
  "persistent": true
},
```

```bash
turbo run dev --parallel
```

Note We found that using dependsOn with Turbowatch produces undesirable effects. Instead, simply use Turbowatch expressions to identify when dependencies update.


Note Turbowatch is not aware of the Turborepo dependency graph. Meaning, that your builds might fail at the first attempt. However, if you setup Turbowatch to [watch node_modules](#watching-node_modules), then Turbowatch will automatically retry failing builds as soon as the dependencies are built.