Bandwidth Throttle Stream

A Node.js and Deno transform stream for throttling bandwidth which distributes available bandwidth evenly between all requests in a "group", accurately simulating the effect of network conditions on simultaneous overlapping requests.

Features

  • Idiomatic pipeable Transform API for use in Node.js
  • Idiomatic pipeable TransformStream API for use in Deno
  • Distributes the desired bandwidth evenly over each second
  • Distributes the desired bandwidth evenly between all active requests
  • Abortable requests ensure bandwidth is redistributed if a client aborts a request

Node.js Installation

Firstly, install the package using your package manager of choice.

npm install bandwidth-throttle-stream

You may then import the createBandwidthThrottleGroup() factory function into your project.

import {createBandwidthThrottleGroup} from 'bandwidth-throttle-stream';

Deno Installation

In Deno, all modules are imported from URLs as ES modules. Versioned releases of bandwidth_throttle_stream are available from deno.land/x. Note that as per Deno convention, the package name is delineated with underscores (_).

import {createBandwidthThrottleGroup} from 'https://deno.land/x/bandwidth_throttle_stream/mod.ts';

The above URL will return the latest release, but it is strongly advised to lock your import to a specific version using the following syntax, where the x.y.z semver can be any published version of the library:

import {createBandwidthThrottleGroup} from 'https://deno.land/x/bandwidth_throttle_stream@x.y.z/mod.ts';

Usage

Creating a Group

Using the imported createBandwidthThrottleGroup factory function, we must first create a "bandwidth throttle group", which is configured with a specific throughput in bytes (B) per second.

// Create a group with a configured available bandwidth in bytes (B) per second.

const bandwidthThrottleGroup = createBandwidthThrottleGroup({
    bytesPerSecond: 500000 // 500KB/s
});

Typically, we would create only a single group per server running a simulation, through which all incoming network requests to be throttled are routed. However, we could also create multiple groups if we wanted to run multiple simulations with different configurations on a single server.
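
For example, a server hosting two independent simulations might create two groups side by side. A minimal sketch, where the group names and bytesPerSecond values are purely illustrative:

// Two independent groups, each with its own bandwidth budget

const slow3gGroup = createBandwidthThrottleGroup({
    bytesPerSecond: 50000 // ~50KB/s, shared between all requests in this group
});

const broadbandGroup = createBandwidthThrottleGroup({
    bytesPerSecond: 5000000 // ~5MB/s, shared between all requests in this group
});

Requests routed through one group have no effect on the bandwidth available to the other.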

Attaching Throttles

Once we've created a group, we can then attach individual pipeable "throttles" to it, as requests come into our server.

The simplest integration would be to insert the throttle (via .pipe, or .pipeThrough) between a readable stream (e.g. a file system readout, or a server-side HTTP response), and the response stream of the incoming client request to be throttled.

Node.js example: Piping between readable and writable streams
// Attach a throttle to a group (e.g. in response to an incoming request)

const throttle = bandwidthThrottleGroup.createBandwidthThrottle(contentLength);

// Throttle the response by piping a `stream.Readable` to a `stream.Writable`
// via the throttle

someReadableStream
    .pipe(throttle)
    .pipe(someWritableStream);
Deno example: Piping between a readable stream and a reader:
// Attach a throttle to a group (e.g. in response to an incoming request)

const throttle = bandwidthThrottleGroup.createBandwidthThrottle(contentLength);

// Throttle the response by piping a `ReadableStream` to a `ReadableStreamDefaultReader`:

const reader = someReadableStream
    .pipeThrough(throttle)
    .getReader()

Note that a number value for contentLength (in "bytes") must be passed when creating an individual throttle. This should be the total size of data for the request being passed through the throttle, and is used to allocate memory upfront in a single Uint8Array typed array, thus preventing expensive GC calls as backpressure builds up. When throttling HTTP requests, contentLength can be obtained from the 'content-length' header, once the headers of the request have arrived:

Node.js (Express) example: Obtaining content-length from req headers:
const contentLength = parseInt(req.get('content-length'))
Deno example: Obtaining content-length from fetch headers:
const { body, headers } = await fetch(destination);

const contentLength = parseInt(headers.get("content-length"));

If no contentLength value is available (e.g. the underlying server does not implement a content-length header), then it should be set to a value no smaller than the size of the largest expected request. To keep memory usage within reason, arbitrarily high values should be avoided.
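
As a rough sketch, a Node.js (Express) fallback might look like the following, where the 10MB ceiling is an arbitrary assumption and should be tuned to the largest request you expect to serve:

// Use the content-length header when present, otherwise fall back to an
// assumed upper bound (the 10MB value here is an arbitrary assumption)

const ASSUMED_MAX_REQUEST_BYTES = 10 * 1024 * 1024;

const contentLength =
    parseInt(req.get('content-length') ?? '', 10) || ASSUMED_MAX_REQUEST_BYTES;

const throttle = bandwidthThrottleGroup.createBandwidthThrottle(contentLength);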

Handling Completion

We may want to perform some specific logic once a request is complete, and all data has been processed through the throttle.

In Node.js, rather than piping directly to a response, we can use the data event to manually write data, and the end event to manually handle completion.

Node.js example: Hooking into the end event of a writable stream
request
    .pipe(throttle)
    .on('data', chunk => response.write(chunk))
    .on('end', () => {
        response.end();

        // any custom completion logic here
    });

In Deno, the call to request.respond() returns a promise which resolves once the request is completed and all data has been pulled into the body reader.

Deno example: responding to a request with a reader and a status code
import {readerToDenoReader} from 'https://deno.land/x/bandwidth_throttle_stream@x.y.z/mod.ts';

const reader = request
    .pipeThrough(throttle)
    .getReader()

await request.respond({
    status: 200,
    body: readerToDenoReader(reader, contentLength),
});

// any custom completion logic here

Converting Between Reader Formats in Deno

Note that in the Deno example above, a reader may be passed directly to request.respond() allowing real-time streaming of the throttled output. However, the Deno std server expects a Deno.Reader as a body (rather than the standard ReadableStreamDefaultReader), meaning that conversion is needed between the two.

The readerToDenoReader util is exposed for this purpose, and must be provided with both a reference to the ReadableStreamDefaultReader (reader), and the contentLength of the request.

Configuration Options

Each bandwidth throttle group accepts an optional object of configuration options:

const bandwidthThrottleGroup = createBandwidthThrottleGroup({
    bytesPerSecond: 500000, // 500KB/s
    ticksPerSecond: 20 // aim to write output 20x per second
});

The following options are available.

interface IConfig {
    /**
     * The maximum number of bytes allowed to pass through the
     * throttle, each second.
     *
     * @default Infinity
     */

    bytesPerSecond?: number;

    /**
     * Defines how frequently the processing of bytes should be
     * distributed across each second. Each time the internal
     * scheduler "ticks", data will be processed and written out.
     *
     * A higher value will ensure smoother throttling for requests
     * that complete within a second, but will be more expensive
     * computationally and will ultimately be constrained by the
     * performance of the JavaScript runtime.
     *
     * @default 40
     */

    ticksPerSecond?: number;
}
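
For example, with bytesPerSecond: 500000 and the default ticksPerSecond: 40, the group aims to process roughly 500000 / 40 = 12500 bytes on each tick, shared between all requests active in the group at that moment.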

Dynamic Configuration

A group can be reconfigured at any point after creation via its .configure() method, which accepts the same configuration interface as the createBandwidthThrottleGroup() factory.

// Create a group with no throttling

const bandwidthThrottleGroup = createBandwidthThrottleGroup();

// ... after some configuration event:

bandwidthThrottleGroup.configure({
    bytesPerSecond: 6000000
})

Aborted Requests

When a client aborts a request, it's important that we also abort the throttle, ensuring that the group can rebalance available bandwidth correctly, and that backpressure buffer memory is released.

Node.js example: Handling aborted requests
const throttle = bandwidthThrottleGroup.createBandwidthThrottle(contentLength);

request.on('aborted', () => {
    // Client aborted request

    throttle.abort();
});

request
    .pipe(throttle)
    .pipe(response);
Deno example: Handling aborted requests
const throttle = bandwidthThrottleGroup.createBandwidthThrottle(contentLength);

const reader = request
    .pipeThrough(throttle)
    .getReader()

try {
    await request.respond({
        status: 200,
        body: readerToDenoReader(reader, contentLength),
    });
} catch(err) {
    // request aborted or failed

    throttle.abort();
}

Repo Structure

This repository contains shared source code for consumption by both Deno (TypeScript ES modules) and Node.js (JavaScript CommonJS modules).

Wherever a Deno or Node.js specific API is needed, a common abstraction is created that can be swapped at build time. Platform specific implementations are denoted with either a .deno.ts or .node.ts file extension. Platform specific entry points to these abstractions reside in the lib/Platform/ directory.

The source code (contained in the lib/ directory) is written in ESNext TypeScript and is ready for direct consumption by Deno, but requires some modifications to produce Node.js-compatible NPM distribution code.

The Node.js build process comprises the following steps:

  1. Copy all contents of lib/ to src/ (git ignored)
  2. Remove all .ts file extensions from modules in src/ (see scripts/replace.ts)
  3. Replace any imports from src/Platform/* with a @Platform alias (see scripts/replace.ts)
  4. Run tsc on the contents of src/ using the ts-transform-paths plugin to replace the @Platform alias with Node.js entry points (a sketch of such an alias appears after this list).
  5. Output the compiled CommonJS code to dist/ (git ignored), and publish dist/ to NPM.
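
As a rough illustration of the alias mechanism in step 4, a @Platform path mapping would typically be declared via compilerOptions.paths in tsconfig.json, which the ts-transform-paths plugin then rewrites in the emitted output. The file names below are assumptions for illustration only, not the repository's actual configuration:

{
    "compilerOptions": {
        "baseUrl": ".",
        "paths": {
            // Hypothetical mapping: resolve the alias to the Node.js-specific
            // entry point so the compiled output contains no Deno code
            "@Platform": ["src/Platform/index.node.ts"]
        }
    }
}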