This documentation describes v3.3. Docs for older versions are available on github: v3.2, v3.1, v3.0, v2.3, v1.3.

Overview

loaders.gl is a collection of open source loaders and writers for file formats including tabular, geospatial, and 3D formats, focused on supporting visualization and analytics of big data. loaders.gl is packaged and published as a suite of composable loader modules with consistent APIs and features across file formats. It offers advanced features such as running loaders on workers and incremental parsing, and all loaders work in both the browser and in Node.js.

By design, other frameworks such as deck.gl and luma.gl integrate seamlessly with loaders.gl; however, loaders.gl itself has no dependencies on those frameworks, and all loaders and writers can be used with any JavaScript application or framework.

Loaders

loaders.gl provides a wide selection of loaders organized into categories:

  • Table Loaders - Streaming tabular loaders for CSV, JSON, Arrow, etc.
  • Geospatial Loaders - Loaders for geospatial formats such as GeoJSON, KML, WKT/WKB, Mapbox Vector Tiles, etc.
  • Image Loaders - Loaders for images, compressed textures, and supercompressed textures (Basis), plus utilities for mipmapped arrays, cubemaps, binary images, and more.
  • Pointcloud and Mesh Loaders - Loaders for point cloud and simple mesh formats such as Draco, LAS, PCD, PLY, OBJ, and Terrain.
  • Scenegraph Loaders - glTF loader.
  • Tiled Data Loaders - Loaders for 3D tile formats such as 3D Tiles, I3S, and potree.

Code Examples

loaders.gl provides a small core API module with common functions to load and save data, and a range of optional modules that provide loaders and writers for specific file formats.

A minimal example using the load function and the CSVLoader to load a CSV formatted table into a JavaScript array:

import {load} from '@loaders.gl/core';
import {CSVLoader} from '@loaders.gl/csv';

const data = await load('data.csv', CSVLoader);

for (const row of data) {
  console.log(JSON.stringify(row)); // => '{header1: value1, header2: value2}'
}

Streaming parsing is available using ES2018 async iterators, e.g. allowing "larger than memory" files to be incrementally processed:

import {loadInBatches} from '@loaders.gl/core';
import {CSVLoader} from '@loaders.gl/csv';

for await (const batch of await loadInBatches('data.csv', CSVLoader)) {
  for (const row of batch) {
    console.log(JSON.stringify(row)); // => '{header1: value1, header2: value2}'
  }
}

To quickly get up to speed on how the API works, please see Get Started.

Supported Platforms

loaders.gl provides consistent support for both browsers and Node.js. The following platforms are supported:

  • Evergreen Browsers - loaders.gl supports recent versions of the major evergreen browsers (e.g. Chrome, Firefox, Safari) on both desktop and mobile.
  • Node.js - Current Node.js LTS (Long-Term Support) releases are also supported. Note that the @loaders.gl/polyfills module should be imported under Node.js; it installs the required Node.js polyfills for fetch etc.
  • IE11 is no longer officially supported from v3.0; however, loaders.gl v2.3 is known to run on IE11.
    • To run on IE11, both @loaders.gl/polyfills and additional appropriate polyfills (e.g. babel polyfills) need to be installed, which will increase your application bundle size.
    • Note that because of the lack of regular testing on IE11, regressions can occur, so pinning your loaders.gl versions in package.json is advisable.
    • For IE11, additional transpilation of packages in your node_modules folder may also be required.

Design Goals

Framework Agnostic - Files are parsed into clearly documented data structures (objects + typed arrays) that can be used with any JavaScript framework.

Streaming Support - Several loaders can parse in batches from both Node.js and browser streams, allowing "larger than memory" files to be processed, with initial results available while the remainder of a file is still loading.
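The batched model can be illustrated with a plain ES2018 async generator. This is a hypothetical sketch of the consumption pattern, not the loaders.gl implementation: batches are yielded as they become ready, so the consumer never needs the whole file in memory.

```javascript
// Hypothetical sketch of batched, incremental processing using an
// ES2018 async generator. In a real streaming parser each batch would
// come from a network or file stream, not an in-memory array.
async function* parseInBatches(rows, batchSize) {
  for (let i = 0; i < rows.length; i += batchSize) {
    yield rows.slice(i, i + batchSize);
  }
}

async function countRows(rows) {
  let count = 0;
  // Each batch is processed as soon as it is yielded.
  for await (const batch of parseInBatches(rows, 2)) {
    count += batch.length;
  }
  return count;
}
```

The `for await...of` loop here is the same pattern used to consume `loadInBatches` in the example above.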

Browser Support - loaders.gl supports recent versions of evergreen browsers.

Worker Support - Many loaders are automatically run in web workers, keeping the main thread free for other tasks while parsing completes.

Node Support - All loaders work under Node.js and can be used when writing backend and cloud services, and when running your unit tests under Node.

Loader Categories - loaders.gl groups similar data formats into "categories". Loaders in the same category return parsed data in a "standardized" form, making it easier to build applications that can handle multiple similar file formats.
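The idea can be sketched with two toy parsers (hypothetical, not the loaders.gl loaders): because both return rows in the same standardized "table" shape, the application code downstream does not care which format was loaded.

```javascript
// Hypothetical sketch of the "category" idea: two toy parsers for
// different formats return the same standardized shape (an array of
// row objects), so one piece of application code handles both.
function parseCsvToy(text) {
  const [header, ...lines] = text.trim().split('\n');
  const keys = header.split(',');
  return lines.map(line => {
    const values = line.split(',');
    return Object.fromEntries(keys.map((key, i) => [key, values[i]]));
  });
}

function parseJsonToy(text) {
  return JSON.parse(text); // already an array of row objects
}

// Application code written once against the shared table shape:
function firstCity(rows) {
  return rows[0].city;
}
```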

Format Autodetection - Applications can specify multiple loaders when parsing a file, and loaders.gl will automatically pick the right loader for a given file based on a combination of file/url extensions, MIME types, and initial data bytes.
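A minimal sketch of the selection step, with made-up loader stubs: the real selection also weighs MIME types and initial data bytes, but this toy version matches only the url extension against each candidate loader's declared extensions.

```javascript
// Hypothetical loader descriptors -- not the actual loaders.gl objects.
const CsvLoaderStub = {name: 'CSV', extensions: ['csv', 'tsv']};
const JsonLoaderStub = {name: 'JSON', extensions: ['json', 'geojson']};

// Toy auto-selection: pick the first candidate loader whose declared
// extensions include the file's extension.
function selectLoaderToy(url, loaders) {
  const extension = url.split('.').pop().toLowerCase();
  const match = loaders.find(loader => loader.extensions.includes(extension));
  if (!match) {
    throw new Error(`No loader found for .${extension}`);
  }
  return match;
}
```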

Bundle Size Reduction - Loaders for each file format are published in independent npm modules to allow applications to cherry-pick only the loaders they need. In addition, modules are optimized for tree-shaking, and many larger loader libraries and web workers are loaded from CDN on use and not included in your application bundle.

Modern JavaScript - loaders.gl is written in standard ES2018, and the API emphasizes modern, portable JavaScript constructs, e.g. async iterators instead of streams, ArrayBuffer instead of Buffer, etc.

Binary Data - loaders.gl is optimized to load data into compact memory representations for use with WebGL frameworks (e.g. by returning typed arrays whenever possible). Note that in spite of the .gl naming, loaders.gl has no actual WebGL dependencies, and loaders can be used without restrictions in non-WebGL applications.

Multi-Asset Loading - Some formats like glTF, Shapefile, or mipmapped/cube textures can require dozens of separate loads to resolve all linked assets (external buffers, images etc). Tracking all the resulting async loads can cause complications for applications. By default, loaders.gl loads all linked assets before resolving the returned Promise.
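The behavior can be sketched in a few lines (a hypothetical illustration; `loadWithLinkedAssets` and `fetchAsset` are made-up names, not the loaders.gl API): all linked loads are started in parallel, and the main promise resolves only after every one completes, so the caller never has to track sub-loads.

```javascript
// Hypothetical sketch: resolve all linked assets (e.g. a glTF file's
// external buffers and images) before the main promise resolves.
// fetchAsset is an injected function that loads one linked asset.
async function loadWithLinkedAssets(mainAsset, fetchAsset) {
  // Kick off every linked load in parallel and wait for all of them.
  const resolvedAssets = await Promise.all(
    mainAsset.links.map(url => fetchAsset(url))
  );
  // The caller receives a single fully-resolved result.
  return {...mainAsset, resolvedAssets};
}
```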

Licenses

loaders.gl itself is MIT licensed, but various modules contain code under several permissive open source licenses, currently MIT, BSD, and Apache licenses. Each loader module comes with its own license, so if the distinction matters to you, please check the documentation for each module and decide accordingly. However, loaders.gl will never include code with non-permissive, commercial, or copyleft licenses.

Credits and Attributions

loaders.gl is maintained by a group of organizations collaborating through open governance under the Linux Foundation.

While loaders.gl contains a lot of original code, it is also partly a repackaging of superb work done by others in the open source community. We try to be as explicit as we can about the origins and attributions of each piece of code, both in the documentation page for each module and in the preservation of comments relating to authorship and contributions inside forked source code.

Even so, we can make mistakes, and we may not have the full history of the code we are reusing. If you think that we have missed something, or that we could do better in regard to attribution, please let us know.

Primary maintainers

The organizations and individuals that contribute most significantly to the development and maintenance of loaders.gl are:

Open Governance

loaders.gl is a part of the vis.gl framework suite, an open governance Linux Foundation project (hosted by the Urban Computing Foundation) that is developed collaboratively by multiple organizations and individuals.