
Michael Arnaldi for MATECHS


The Case for TS+

Hi!

So, what's this post gonna be about?

Well, with the Effect community, we've been hard at work for the last ~6 months on something new which I am now truly proud to introduce in detail for the first time.

Table Of Contents

  1. Why TS+
  2. Introducing TS+
  3. Installing TS+
  4. Fluent and Static Extensions
  5. Call Extensions
  6. Binary Operators
  7. Lazy Arguments
  8. Type Unification
  9. Global Imports
  10. Runtime Types and Derivation
  11. Module Structure
  12. Compiled Output
  13. What's Next

Why TS+?

Let's begin with a little bit of a back story. For the last four years (yeah, it's already been that long...), we have been working hard on the development of Effect.

Originally, Effect started as a small project inspired by the ideas brought forth by the developers of ZIO in Scala. However, it progressively evolved into a full-blown port of all the core modules of ZIO, together with a very rich set of extension modules that bridge the gap between the data structures available in Scala and in TypeScript.

Throughout this work we have grown to love much of the TypeScript language, and from the very first days we were surprised by how much we could do, and by how safe and pleasant the language was to use from a developer-experience perspective.

However, we also encountered a decent amount of limitations - not really limitations on what we could do in the language, but rather what we could do efficiently in the language, both in terms of verbosity and in terms of tree-shakeability of the produced code.

Let's start with an example:

The Development of Fluent APIs

A fluent API in TypeScript might look something like:

export class IO<A> {
  static succeed = <A>(a: A) => new IO(() => a);
  static succeedWith = <A>(a: () => A) => new IO(a);

  private constructor(readonly io: () => A) {}

  map<A, B>(this: IO<A>, f: (a: A) => B): IO<B> {
    return new IO(() => f(this.io()));
  }

  flatMap<A, B>(this: IO<A>, f: (a: A) => IO<B>): IO<B> {
    return new IO(() => f(this.io()).io());
  }

  zip<A, B>(this: IO<A>, b: IO<B>): IO<[A, B]> {
    return new IO(() => [this.io(), b.io()]);
  }

  run<A>(this: IO<A>) {
    return this.io();
  }
}

which can be used like:

export const program = IO.succeed(0)
  .map((n) => n + 1)
  .flatMap((r) =>
    IO.succeedWith(() => {
      console.log(`result: ${r}`);
    })
  );

program.run();

In this approach we have a data-type IO<A> together with a "companion object" IO. This approach lends itself to a certain organization of code:

  • Constructors like succeed and succeedWith are placed in the "companion object" as static members
  • Methods like map, flatMap, and zip are placed in the class instance as members of the type IO<A>

This approach is very common in object-oriented languages and its advantages lie in the ease of use. In fact, for the user of such a module, the editor experience is beautiful - if you want to construct an IO you type IO. and get a full list of suggestions on ways you can construct an IO. In addition, when you want to do something with a value of type IO<A>, you type value. and you are again prompted with a nice list of options.

This fluent type of API is "a dream", but also has a few drawbacks that have led the biggest part of the FP community away from it.

The limitations are staggering:

  • All methods have to be inside the class (or inside a common abstract class for ADTs)
  • Once a class is defined adding methods to it from the outside requires module augmentation and unsafe mutation of object prototypes
  • Worst of all, none of it can be optimized from a tree shaking perspective, so you'll end up with a huge bundle size that becomes prohibitive at a certain point

The current "solution" that the FP community has adopted is the use of pipeable APIs.

The Development of Pipeable APIs

In a pipeable API we would rewrite the prior example to:

// file: IO.ts
export class IO<A> {
  constructor(readonly io: () => A) {}
}

export const succeed = <A>(a: A) => new IO(() => a);

export const succeedWith = <A>(a: () => A) => new IO(a);

export function map<A, B>(f: (a: A) => B): (self: IO<A>) => IO<B> {
  return (self: IO<A>) => new IO(() => f(self.io()));
}

export function flatMap<A, B>(f: (a: A) => IO<B>): (self: IO<A>) => IO<B> {
  return (self) => new IO(() => f(self.io()).io());
}

export function zip<A, B>(b: IO<B>): (self: IO<A>) => IO<[A, B]> {
  return (self) => new IO(() => [self.io(), b.io()]);
}

export function run<A>(self: IO<A>) {
  return self.io();
}

// file: Function.ts

// omitted definition of pipe due to its length
// For the curious, take a look at https://github.com/Effect-TS/core/blob/master/packages/system/src/Function/pipe.ts

// file: Program.ts
import * as IO from "./IO";
import { pipe } from "./Function";

export const program = pipe(
  IO.succeed(0),
  IO.map((n) => n + 1),
  IO.flatMap((r) =>
    IO.succeedWith(() => {
      console.log(`result: ${r}`);
    })
  )
);

IO.run(program);

What we have essentially done is extracted all the constructors and methods outside of the class, and moved the this parameter to be a normal curried function parameter that the pipe function will carry through each function call.
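The pipe helper omitted above can be sketched as a small set of overloads over a variadic reduce. This is a simplified illustration; the real Effect-TS implementation linked above has many more overloads:

```typescript
// A minimal pipe: threads a value through unary functions left to right.
export function pipe<A>(a: A): A;
export function pipe<A, B>(a: A, ab: (a: A) => B): B;
export function pipe<A, B, C>(a: A, ab: (a: A) => B, bc: (b: B) => C): C;
export function pipe<A, B, C, D>(a: A, ab: (a: A) => B, bc: (b: B) => C, cd: (c: C) => D): D;
export function pipe(a: unknown, ...fns: Array<(x: unknown) => unknown>): unknown {
  // Each overload above gives precise types; the implementation just folds.
  return fns.reduce((acc, f) => f(acc), a);
}
```

Each additional overload supports one more step in the pipeline, which is why real implementations are long.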

The resulting API is still quite nice visually, but it does have a few drawbacks.

First of all, auto-imports do not work well with namespaced imports (recently folks at Unsplash have been working on language service plugins for this). Even if you do get auto-imports to work, you'll end up with tons of imports.

Additionally, we have lost the meaning and categorization of IO - namely, we no longer have a data-type IO<A> and a "companion object" IO. We now only have a data-type IO<A> and a module of functions that includes both constructors, such as IO.succeed, and pipeable functions (also called aspects), such as IO.flatMap. So when programming, a user needs to know exactly which module to import, which functions are constructors of the datatype, which are methods, and exactly how to use them.

Finally, having taught these concepts to other developers, we have found it fairly common for folks to have trouble reading pipeable function signatures.

Introducing TS+

TS+ is a new language developed as a super-set of TypeScript and maintained as a fork of the original TypeScript compiler that is rebased daily.

In order to guarantee full compatibility with the TypeScript ecosystem, we limit what can be extended. TS+ emits standard declaration files consumable from plain TypeScript, consumes plain declaration files, and does not add any new syntax.

TS+ diverges from TypeScript in that we do not limit ourselves from emitting code using type information. Instead, we leverage the type information as much as possible both to produce highly optimized code and to improve the developer experience.

With those decisions comes a set of trade-offs. TS+ can be compiled only with the TS+ compiler (a fork of tsc) and cannot be compiled by tools like babel which "simply" remove the types.

We suggest using a compilation pipeline where tsc emits ES2022 modules, followed by another tool such as esbuild/swc/babel that takes over from there leveraging the powerful infrastructure of project references to optimize compilation.

Installing TS+

To install TS+, you must first add it as a dev dependency with:

yarn add -D @tsplus/installer
# or
npm i -D @tsplus/installer

Then, you can add a postinstall script to your package.json. For example:

{
  "scripts": {
    "postinstall": "tsplus-install"
  }
}

This will replace the TypeScript that is installed in your node_modules with the TS+ version.

Note: this install process is temporary until a better mechanism for installation is determined.

After installing, you will want to also ensure that your IDE uses the TypeScript language service from the workspace and not the default one.

If you want to start playing around with a pre-configured repository, you can open up https://github.com/ts-plus/starter-lib in GitPod.

Using TS+ (Fluent Methods)

Using our initial example, we will start by extracting all methods from the main IO class:

/**
 * @tsplus type article.IO
 */
export class IO<A> {
  static succeed = <A>(a: A) => new IO(() => a);
  static succeedWith = <A>(a: () => A) => new IO(a);

  constructor(readonly io: () => A) {}
}

/**
 * @tsplus fluent article.IO map
 */
export function map<A, B>(self: IO<A>, f: (a: A) => B): IO<B> {
  return new IO(() => f(self.io()));
}

/**
 * @tsplus fluent article.IO flatMap
 */
export function flatMap<A, B>(self: IO<A>, f: (a: A) => IO<B>): IO<B> {
  return new IO(() => f(self.io()).io());
}

/**
 * @tsplus fluent article.IO zip
 */
export function zip<A, B>(self: IO<A>, b: IO<B>): IO<[A, B]> {
  return new IO(() => [self.io(), b.io()]);
}

/**
 * @tsplus fluent article.IO run
 */
export function run<A>(self: IO<A>) {
  return self.io();
}

One thing that you may notice in the example above is that we have added a JSDoc annotation to our IO class. This annotation identifies the IO type in TS+ with a "type tag" of article.IO. The "type tag" for each type you define with TS+ should be globally unique. Prefixing type tags with your package's name is common, since non-prefixed tags should be reserved for the TS+ standard library.

Now that we have added our type tag to the IO class, we can begin by extracting each method from inside the class to a function, renaming this to something else (in this case we have chosen to use self).

We further added a JSDoc annotation above each function, which associates the function with a type in TS+. For example, to associate the flatMap function to the IO datatype, we annotate flatMap with @tsplus fluent article.IO flatMap. This essentially tells the compiler "put this fluent method in any type that matches the tag article.IO and name it flatMap".

Having done that we can already use IO exactly as before:

export const program = IO.succeed(0)
  .map((n) => n + 1)
  .flatMap((r) =>
    IO.succeedWith(() => {
      console.log(`result: ${r}`);
    })
  );

program.run();

But we can already note from the IDE the following:

Fluent Method

We see from the IDE quick documentation that this is a "fluent" extension and not a classical "method".

That's about it for fluent extensions: they are declared as normal functions that you can even call directly, like map(a, f). They can be declared anywhere you want, and the path to the method will be resolved during compilation.

In order to support the folks who prefer a pipeable API, we also have a macro to derive a pipeable function from a data-first one. This can be used as follows:

export function map<A, B>(self: IO<A>, f: (a: A) => B): IO<B> {
  return new IO(() => f(self.io()));
}

const mapPipeable = Pipeable(map)

Note: this is a compiler extension that handles the generics of the signature correctly; it doesn't use conditional types.

Additionally, fluent methods can be defined directly using pipeable functions:

/**
 * @tsplus pipeable article.IO flatMap
 */
export function flatMap<A, B>(f: (a: A) => IO<B>): (self: IO<A>) => IO<B> {
  return self => new IO(() => f(self.io()).io());
}

This only solves half of the problem though. We managed to extract methods from a datatype, but constructors are still static members of the "companion object" (non-extensible and non-tree-shakable).

Let's solve that next:

/**
 * @tsplus type article.IO
 * @tsplus companion article.IO/Ops
 */
export class IO<A> {
  constructor(readonly io: () => A) {}
}

/**
 * @tsplus static article.IO/Ops succeed
 */
export const succeed = <A>(a: A) => new IO(() => a);

/**
 * @tsplus static article.IO/Ops succeedWith
 */
export const succeedWith = <A>(a: () => A) => new IO(a);

Here we identified a "companion" object for the IO datatype by annotating the IO class with @tsplus companion article.IO/Ops. Then, to extract the constructors from the class, we used @tsplus static article.IO/Ops succeed to say "put this value as a static member of any type tagged as article.IO/Ops".

Note: there is no difference between a plain type tag and one defined with companion, but a class has two types (the instance type and the constructor type), so we need a way of distinguishing which tag links to which.

An alternative pattern for those who prefer to avoid using a class is the following:

/**
 * @tsplus type article.IO
 */
export interface IO<A> {
  readonly io: () => A;
}

/**
 * @tsplus type article.IO/Ops
 */
export interface IOOps {
  <A>(io: () => A): IO<A>;
}

export const IO: IOOps = io => ({ io });

/**
 * @tsplus static article.IO/Ops succeed
 */
export const succeed = <A>(a: A) => IO(() => a);

/**
 * @tsplus static article.IO/Ops succeedWith
 */
export const succeedWith = <A>(a: () => A) => IO(a);

/**
 * @tsplus fluent article.IO map
 */
export function map<A, B>(self: IO<A>, f: (a: A) => B): IO<B> {
  return IO(() => f(self.io()));
}

/**
 * @tsplus pipeable article.IO flatMap
 */
export function flatMap<A, B>(f: (a: A) => IO<B>): (self: IO<A>) => IO<B> {
  return self => IO(() => f(self.io()).io());
}

/**
 * @tsplus fluent article.IO zip
 */
export function zip<A, B>(self: IO<A>, b: IO<B>): IO<[A, B]> {
  return IO(() => [self.io(), b.io()]);
}

/**
 * @tsplus fluent article.IO run
 */
export function run<A>(self: IO<A>) {
  return self.io();
}

//
// Usage
//

export const program = IO.succeed(0)
  .map((n) => n + 1)
  .flatMap((r) =>
    IO.succeedWith(() => {
      console.log(`result: ${r}`);
    })
  );

program.run();

Another thing we usually do is put types related to IO, such as ExtractResult, in a namespace named IO:

/**
 * @tsplus type article.IO
 */
export interface IO<A> {
  readonly io: () => A;
}

export declare namespace IO {
  export type ExtractResult<I extends IO<any>> = [I] extends [IO<infer A>] ? A : never;
}

/**
 * @tsplus type article.IO/Ops
 */
export interface IOOps {
  <A>(io: () => A): IO<A>;
}

export const IO: IOOps = io => ({ io });

This gives IO three meanings:

  1. As an interface/type (i.e. for methods)
  2. As a term/value (i.e. for constructors)
  3. As a namespace (i.e. for related types)
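The three meanings and the ExtractResult helper can be exercised together. Here is a standalone sketch (restating the definitions so it compiles on its own, with IO as a plain function for brevity):

```typescript
// Instance type (for methods):
interface IO<A> {
  readonly io: () => A;
}

// Term/value (for constructors):
const IO = <A>(io: () => A): IO<A> => ({ io });

// Related type, as in the IO namespace above:
type ExtractResult<I extends IO<any>> = [I] extends [IO<infer A>] ? A : never;

// ExtractResult recovers the result type of any IO:
type N = ExtractResult<IO<number>>; // number
const n: N = IO(() => 42).io();
```

Because TypeScript keeps value and type namespaces separate, the interface IO, the const IO, and (in the full version) the namespace IO coexist under one name.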

All of this together provides an extensible way of developing easy-to-use, discoverable APIs that are fully tree-shakable and optimized. In fact, the program we have written here is a valid program that utilizes the IO module we have defined.

Using TS+ (Call Extension)

There are cases where we would like to add a call signature to something that isn't a function. For example, we could refactor the above constructor like so:

/**
 * @tsplus type article.IO/Ops
 */
export interface IOOps {}
export const IO: IOOps = {};

/**
 * @tsplus static article.IO/Ops __call
 */
export function make<A>(io: () => A): IO<A> {
  return { io };
}

This allows us to construct values for a datatype using the __call expression that we define. For example, we can create an IO<string> using the __call expression for the IO datatype above:

const io: IO<string> = IO(() => "Hello, World!")

The name __call is a special name that basically says "use the function as the call signature for a type". In the example above, TS+ will resolve the __call expression for the IO datatype to the make function that we defined for IO.

In some extreme cases you may want access to this, and for that you can use a "fluent" variant of a __call expression instead of a "static" one.

/**
 * @tsplus fluent article.IO/Ops __call
 */
export function make<A>(self: IOOps, io: () => A): IO<A> {
  return { io };
}

You may think the set of features we have described thus far is enticing enough... ohh but we have only just started.

Using TS+ (Binary Operators)

There are many binary operators in JS - it's a shame there isn't a way of extending those (and the proposal to do so is inefficient in terms of tree shaking, limited, and may never come to fruition)... Yeah, we can do that too :)

Looking at the IO type that we defined above, the zip combinator that we defined for IO looks quite a bit like a binary operator. Given that the "+" symbol doesn't make any sense when used between two IO types in plain JavaScript/TypeScript, we can override it in TS+:

/**
 * @tsplus fluent article.IO zip
 * @tsplus operator article.IO +
 */
export function zip<A, B>(self: IO<A>, b: IO<B>): IO<[A, B]> {
  return IO(() => [self.io(), b.io()]);
}

With just that additional JSDoc annotation we can now call zip on two IO types using the + operator:

const zipped = IO.succeed(0) + IO.succeed(1);

And looking at the quick info from VSCode:

Operator

Furthermore, you can define multiple operators and multiple fluent extensions with the same target type by specifying an operator precedence. This enables neat DSLs, such as the ones we have created for @tsplus/stdlib/collections. (By the way, if you are here you should probably also install @tsplus/stdlib.)

Collection DSL

You can take a more detailed look at how these operators are used in the test suite for the TS+ standard library. For example: List.test.ts (and the other tests).

One last thing - we've also implemented go-to-definition for custom operators, so you can click an operator and use your IDE's go-to-definition to quickly navigate to the implementation of that operator.

Using TS+ (Lazy Arguments)

In JavaScript/TypeScript, lazy evaluation of a computation is generally implemented via a thunk (i.e. () => A). Deferring evaluation of computations can be quite beneficial in a variety of circumstances, particularly when we want to avoid eager computation of a value. This paradigm of "lazy" programming becomes even more powerful when combined with effect systems.

However, if you attempt to write a truly lazy program in JavaScript/TypeScript, you'll quickly find yourself writing a lot of annoying arrow functions like T.succeed(() => console.log("A")). This is because we want the effect system to assume control over whether or not the console.log is actually called.

To avoid this, TS+ allows us to define function parameters as "lazy":

/**
 * @tsplus type LazyArgument
 */
interface LazyArg<A> {
  (): A
}

/**
 * @tsplus static Effect/Ops succeed
 */
function succeed<A>(f: LazyArg<A>) {
  return f()
}

When calling Effect.succeed(x), if x is not already evaluated lazily (i.e. if x is not already a thunk), the compiler will transform it to Effect.succeed(() => x) making it possible to trim annoying boilerplate.

So something like:

Effect.succeed(() => console.log("Hello, world!"))

can now be written as:

Effect.succeed(console.log("Hello, world!"))

Note that this behavior is only applied to function arguments whose type carries the LazyArgument type tag, not to function arguments in general.
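In plain TypeScript, without compiler support, the closest approximation is accepting either a value or a thunk and normalizing at runtime. This is only an illustrative sketch (succeed and toThunk are hypothetical helpers, not the TS+ mechanism):

```typescript
type LazyArg<A> = () => A;

// Normalize a value-or-thunk to a thunk.
// Caveat: this misfires when A is itself a function type, which is
// exactly why compiler-level support (as in TS+) is preferable.
function toThunk<A>(value: A | LazyArg<A>): LazyArg<A> {
  return typeof value === "function" ? (value as LazyArg<A>) : () => value;
}

// A toy effect constructor built on the runtime check:
function succeed<A>(value: A | LazyArg<A>): { run: () => A } {
  const thunk = toThunk(value);
  return { run: () => thunk() };
}
```

The TS+ approach avoids both the runtime check and the function-typed-value ambiguity because the wrapping happens at compile time, driven by types.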

Using TS+ (Type Unification)

Ever ended up with something like Left<0> | Left<1> | Right<string> in something that should have been Either<number, string>?

That's because TypeScript keeps the most specific type it infers for each branch (which is a good default: if you manually annotate the type, it compiles).
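To see the problem in plain TypeScript, consider a hand-rolled Either (hypothetical names, purely for illustration):

```typescript
interface Left<E> { readonly _tag: "Left"; readonly left: E }
interface Right<A> { readonly _tag: "Right"; readonly right: A }
type Either<E, A> = Left<E> | Right<A>;

const left = <E>(e: E): Left<E> => ({ _tag: "Left", left: e });
const right = <A>(a: A): Right<A> => ({ _tag: "Right", right: a });

// Without an annotation, TypeScript infers the raw union of the branch
// types (Right<string> | Left<number>), not the unified Either:
const f = (n: number) => (n > 0 ? right("ok") : left(n));

// Only a manual annotation recovers the intended Either<number, string>:
const g = (n: number): Either<number, string> => (n > 0 ? right("ok") : left(n));
```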

We can be explicit about type unification in TS+ though:

/**
 * @tsplus type Option
 */
export type Option<A> = None | Some<A>;

/**
 * @tsplus unify Option
 * @tsplus unify Option/Some
 * @tsplus unify Option/None
 */
export function unifyOption<X extends Option<any>>(
  self: X
): Option<[X] extends [Option<infer A>] ? A : never> {
  return self;
}

or

/**
 * @tsplus type Eval
 */
export interface Eval<A> {
  readonly _A: () => A;
  readonly _TypeId: unique symbol;
}

/**
 * @tsplus unify Eval
 */
export function unifyEval<X extends Eval<any>>(self: X): Eval<[X] extends [Eval<infer AX>] ? AX : never> {
  return self;
}

TS+ will then use the return type of the unify function to unify the members any time such a union is generated.

So that for example:

Type Unification

Using TS+ (Global Imports)

As mentioned before, having a lot of imports can be quite painful. To avoid this, many users of Effect, for example, define their own "prelude" or "utils" files that re-export commonly used modules. Unfortunately, this often leads to edge cases in the tree-shaking algorithms used by bundlers, which have only recently begun to improve shaking of deeply nested dependency trees.

With TS+, we solve this problem using the concept of global imports.

A global import is an import defined in a declaration file (.d.ts) using the following syntax:

/**
 * @tsplus global
 */
import { Chunk } from "@tsplus/stdlib/collections/Chunk/definition";

When a type is defined as "global", TS+ makes the type and its associated constructors/methods available globally in the project. During compilation, TS+ resolves usage of the constructors/methods associated with a datatype and adds the relevant imports to each file that uses it. Note that imports are only added to a file if the global datatype is actually used in that file.

It is a common practice to define a prelude.d.ts in your root and add it to the "files" portion of the "tsconfig.json". For example:

// tsconfig.json
{
  "files": [
    "./prelude.d.ts"
  ]
}

You can also share preludes across projects, as we do with @tsplus/stdlib-global: import it in your prelude file and you get access to the full-blown standard library everywhere in your project.

Using TS+ (Runtime Types and Derivation)

Thought it couldn't get any better? Close.

One of the biggest pain points we've ever experienced in app development is the definition of types that are safe at runtime. Many solutions have been attempted so far, including but not limited to: io-ts, morphic-ts, zod, @effect-ts/schema, @effect-ts/morphic, etc.

All of the mentioned libraries are lovely because they are all attempting to solve a huge problem - encoding, decoding, guarding (and potentially generating arbitraries) for types.

They all use the same trick: instead of defining a type, you define some sort of value from which the type is derived. The difference between the libraries lies in the details of how that value is modeled.
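A minimal hand-rolled sketch of this trick, echoing the Guard shape used later in this article (not any particular library's API):

```typescript
// A Guard is a runtime value...
interface Guard<A> { readonly is: (u: unknown) => u is A }

const str: Guard<string> = { is: (u): u is string => typeof u === "string" };
const num: Guard<number> = { is: (u): u is number => typeof u === "number" };

// ...from which the static type can be derived back out:
type Infer<G> = G extends Guard<infer A> ? A : never;

// Combine field guards into a guard for a struct, deriving its type too.
function struct<S extends Record<string, Guard<any>>>(
  fields: S
): Guard<{ [K in keyof S]: Infer<S[K]> }> {
  return {
    is: (u): u is { [K in keyof S]: Infer<S[K]> } =>
      typeof u === "object" &&
      u !== null &&
      Object.entries(fields).every(([k, g]) => g.is((u as Record<string, unknown>)[k])),
  };
}

const Person = struct({ name: str, age: num });
type Person = Infer<typeof Person>; // { name: string; age: number }
```

The value Person checks data at runtime while the type Person is computed from it, so the two can never drift apart.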

This leads to a set of trade-offs: the libraries can end up emitting unoptimized types, being verbose, hard to extend, and sometimes quite painful to use.

To be clear, they are still better than the alternative, which would require manually implementing everything.

After months of work we think we finally have a solution to (limited/structural) custom typeclass derivation so that you can forget the problem!

Let's dig into it:

export interface Person {
  name: string;
  age: Option<number>;
  friends: Chunk<Person>;
}

export const PersonEncoder: Encoder<Person> = Derive();
export const PersonDecoder: Decoder<Person> = Derive();
export const PersonGuard: Guard<Person> = Derive();

That's it, you can now do:

const encoded = PersonEncoder.encodeJSON({
  name: "Mike",
  age: Option.some(30),
  friends: Chunk()
});

const decoded = PersonDecoder.decodeJSON(encoded);

if (decoded.isRight()) {
  //
}

const maybePerson = {};

if (PersonGuard.is(maybePerson)) {
  maybePerson.age;
}

You may ask: how?? Well, the compiler can tell you. Add an "explain" parameter to a call to Derive:

Derivation

And the best thing is that there is nothing special about the 3 instances above; in fact, all the rules are custom and extensible:
Guard.ts
Encoder.ts
Decoder.ts

Each derivable type is defined as an interface with a tag like:

/**
 * A Guard<A> is a type representing the ability to identify when a value is of type A at runtime
 *
 * @tsplus type Guard
 */
export interface Guard<A> {
  readonly is: (u: unknown) => u is A;
}

Then implicit instances are defined as:

/**
 * Guard for a number
 *
 * @tsplus implicit
 */
export const number: Guard<number> = Guard((u): u is number => typeof u === "number");

And rules as:

/**
 * @tsplus derive Guard lazy
 */
export function deriveLazy<A>(
  fn: (_: Guard<A>) => Guard<A>
): Guard<A> {
  let cached: Guard<A> | undefined;
  const guard: Guard<A> = Guard((u): u is A => {
    if (!cached) {
      cached = fn(guard);
    }
    return cached.is(u);
  });
  return guard;
}

/**
 * @tsplus derive Guard<_> 10
 */
export function deriveLiteral<A extends string | number>(
  ...[value]: Check<Check.IsLiteral<A> & Check.Not<Check.IsUnion<A>>> extends Check.True ? [value: A] : never
): Guard<A> {
  return Guard((u): u is A => u === value);
}

/**
 * @tsplus derive Guard[Option]<_> 10
 */
export function deriveOption<A extends Option<any>>(
  ...[element]: [A] extends [Option<infer _A>] ? [element: Guard<_A>]
    : never
): Guard<A> {
  return Guard((u): u is A => {
    if (u instanceof None) {
      return true;
    }
    if (u instanceof Some) {
      return element.is(u.value);
    }
    return false;
  });
}

The only special rule here is the one called lazy, which is used when the compiler encounters a recursive derivation; the rest are all custom.

A rule is a function with a rule tag which has the following format:

@tsplus derive Guard[Option]<_> 10

Where:

  • Guard is the tag of the typeclass we want to derive
  • [Option] (if not omitted) further scopes when the rule is applied (in this case to types like Guard<Option<A>>)
  • <_> tells the compiler how to call the function arguments and has to be specified for each generic parameter of the typeclass we are deriving (for example Refinement<_,_>); each slot can be:
    • _ calls it with the type itself (in this case Option)
    • | matches a union and calls it with a tuple of the members
    • & matches an intersection and calls it with a tuple of the members
    • [] matches a tuple and calls it with a tuple of the members
  • 10 is a priority that defines how early this rule is applied

Using TS+ (Module Structure and Config)

We've been going through a list of examples, and by now you have probably noticed we tend to use fully qualified imports like "@tsplus/stdlib/collections/Chunk/definition" even for local references.

You are not forced to do the same. However, for global imports (and extensions in general) your files must also be importable via a fully qualified name. The mapping of "file => fully qualified import", together with a map of "file => tracing name", is defined in a dedicated config file: tsplus.config.json.

The trace map is needed because TS+ supports compile-time tracing of function calls. For example, if you have a function like:

function hello(message: string, __tsPlusTrace?: string) {
}

when a call expression like hello("world") is found and a trace is not passed explicitly, the compiler fills it in as hello("world", "file_trace_as_per_in_map:line:col").
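In plain TypeScript terms, the transformation is equivalent to the hand-written call below (the file/line values are made up for illustration):

```typescript
// An optional trailing trace parameter that callers never pass by hand;
// under TS+ the compiler supplies it automatically.
function hello(message: string, __tsPlusTrace?: string): string {
  return `${message} @ ${__tsPlusTrace ?? "<no trace>"}`;
}

// Hand-written equivalent of what the compiler emits for hello("world"):
const traced = hello("world", "app/Program.ts:10:3");
```

Because the parameter is optional, the same function stays callable from plain TypeScript, just without the trace.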

While this is not something that can be relied upon as always present (a non-TS+ user's compiler will clearly not fill in the trace), when present it is a powerful tool for building easily debuggable systems. For example, the tracing system inside Effect allows for rendering full stack traces, even for programs with async operations.

Using TS+ (Compiled Output)

So how does it compile in practice? When an exported value has a tag like static/fluent/implicit/derive etc., the declaration files (.d.ts) we emit include an additional JSDoc annotation called "location" on each function:

/**
 * @tsplus fluent Decoder decode
 * @tsplus location "@tsplus/stdlib/runtime/Decoder"
 */
export declare function decode<A>(decoder: Decoder<A>, value: unknown): import("../data/Either").Either<string, A>;

This is how we know what to import and from where: when something like this is used in a JS file, an import is introduced accordingly.

This design ensures modules are always imported as close to the declaration as possible, which helps prevent circular references. It may lead to a large number of imports in the compiled JS files, but tree-shaking can inline them very well.

What's next for TS+

We think we are pretty much feature-ready for the first decent release. We still have to write tons of tests and finish extracting the standard library from the Effect codebase before this can be considered usable externally (internally, Effect is already written in TS+), but we feel very confident, and the TypeScript folks have been amazing in offering valuable advice.

Discussion (26)

Patrick Roza

First Effect, now TS+. There was no better day to be a functional programmer than today :)

After spending a lot of time with pipe, no useful language supported pipe in sight, and lots of imports, hard to discover modules and apis, ts+ is such a breath of fresh air, hitting all the right notes at the right time!

Already adopted fluent, static and call extensions, and globals. Can’t wait to go further with operators, lazy args, rules and beyond.
And soon, the stdlib with next gen Effect!

Nino Filiu

Maybe I didn't get something but I feel like the base issues could have been solved in a much easier fashion by not using class-based OOP

All methods have to be inside the class (or inside a common abstract class for ADTs)

No, just use functions

Once a class is defined adding methods to it from the outside requires module augmentation and unsafe mutation of object prototypes

No, just use functions

Worst of all, none of it can be optimized from a tree shaking perspective, so you'll end up with a huge bundle size that becomes prohibitive at a certain point

No, just put the functions in different files

The current "solution" that the FP community has adopted is the use of pipeable APIs.

No, some people just use functions

Michael Arnaldi Author

"No, just use functions" => this is in the context of wanting a fluent api, if you want a fluent API plain functions are of no help.

"No, just put the functions in different files" => by the way this is 100% irrelevant for tree shaking, you can put all the functions in the same file and the result is the same (actually even better in size terms because you have less modules).

"No, some people just use functions" => yes people like me use just functions, but they also know what a function actually is.

Michael Arnaldi Author

both RxJS and fp-ts use pipeable functions. Pipeable functions are, as you say, "just functions", the same way extension methods are "just functions"

tamusjroyce

“reactive” libraries like rxjs are event systems. Because they have a disconnect between the data being sent, and overhead for subscribing/unsubscribing, I wouldn’t consider them anything related to fluent design patterns.

I personally prefer pure functions, and changing state at the very last pop of the call stack (the first method that calls pure functions is responsible for setting state; nowhere else can set state). Similar to fluent, but without the overhead of returning a new object. And it's easier to debug what is going on (fluent can turn into a finite state machine).

Michael Arnaldi Author • Edited on

Fluent here refers to a style of API, not the design pattern, and has nothing to do with mutating state; the IO example above is 100% pure and functional in every possible interpretation.

Hamid Bluri

Use the Nim programming language and you don't have to care whether something is a method or a function.

You write in whatever style you want.

Michael Arnaldi Author

A big part of the project is to create something that is both really nice and has great interop with the wider TS ecosystem; it's not only about style :)

Nino Filiu

So when programming, a user needs to know exactly which module to import, exactly which functions are constructors of the datatype and which are methods for the datatype, and exactly how to use them.

That I really don't understand. I'm not familiar with IO or Effect, but for example in lodash, when I want to use cloneDeep I'll just do import cloneDeep from 'lodash/cloneDeep';. Yeah, I have to know which module to import, but it's not giving me a headache. Also there are no constructors and no additional methods, and the usage is evident. So I don't see any case where a similar lib would cause these issues.

Michael Arnaldi Author

Sure, you can use lodash directly with direct imports; the problem becomes more apparent with libraries that cover a bit more surface than lodash (like Effect, like fp-ts, etc.). In many codebases that translates to a large set of namespace imports (import * as F); when using F you have everything inside, and discoverability is problematic. For example, the Effect module alone in effect-ts provides 1000+ functions (they are all, as you say, "just functions"), but you have to know what those "just functions" do, i.e. does a function construct an effect or combine effects? E.g. map combines, succeed creates.

Michael Arnaldi Author

This is a fork of the compiler; it is not adding any further compilation step, and it is fully compatible with normal TypeScript (it really isn't meant to be against TS at all :))

Patrick Roza

I guess you could say it's a ttypescript or ts-patch with great patches already built in.

Jordan Brennan

And this can't be done with vanilla JS? Looks like a case of "TypeScript begets TypeScript" with no compelling value to justify the cost.

Michael Arnaldi Author

Which of the above can you do with vanilla JS? Derive runtime codecs? Have tree-shakable fluent APIs?

Jordan Brennan

Sorry, I'm not super familiar with FP, but it seems like the same benefits and fluent API can be, and have been, achieved with vanilla JS for more than a decade (e.g. jQuery, Underscore/Lodash, many others). Are you saying tree-shaking is not as effective?

Michael Arnaldi Author

Lodash/Underscore are not fluent APIs; they are just modules of functions, and that is ok. There is a usability compromise, though: take for example an array "a". To map "a" you do "a.map(f)"; doing "map(a, f)" isn't as nice, nor is "pipe(a, map(f))". jQuery has a fluent API, but that sort of comes from a different era, and nobody is tree shaking jQuery. Fluent APIs cannot be tree-shaken (at all): class methods cannot be removed nor renamed. Libraries like fp-ts or RxJS have moved away from those APIs because they aren't shakable and they aren't extensible (i.e. you can't add a method to a class coming from a library).
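The tree-shaking point can be made concrete with a sketch. In the hypothetical `Box` type below, a bundler must keep every method on the prototype even when only `map` is ever called, whereas an unused free function can be dropped per export:

```typescript
// Fluent style: methods live on the prototype, so a bundler cannot
// safely remove `debug` even if no caller ever uses it.
class Box<A> {
  constructor(readonly value: A) {}
  map<B>(f: (a: A) => B): Box<B> {
    return new Box(f(this.value));
  }
  debug(): string {
    return `Box(${JSON.stringify(this.value)})`; // dead weight if unused
  }
}

// Module style: each function is an independent export, so an unused
// counterpart of `debug` could be tree-shaken away entirely.
const mapBox = <A, B>(b: Box<A>, f: (a: A) => B): Box<B> => new Box(f(b.value));

const doubled = mapBox(new Box(21), (n) => n * 2);
console.log(doubled.value); // 42
```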

Jordan Brennan

Curious, why would a functional programmer not want to say this counts as a fluent API:

_.chain(lyrics)
  .map(line => line.words.split(' '))
  .flatten()
  .reduce((counts, word) => {
    counts[word] = (counts[word] || 0) + 1;
    return counts;
  }, {})
  .value()

My understanding is chaining = fluent.

Jordan Brennan

...and importing those functions (instead of using _) makes the code tree-shakable, right?

Michael Arnaldi Author

Yeah, but by chaining those functions (map/flatten/etc) you end up with all of them; for Effect that can be huge, there are modules with 1000+ functions

Patrick Roza • Edited on

@jfbrennan importing individually gains tree shaking, but would mean losing chaining/fluent, and therefore discoverability and usage context.

reduce(
  flatten(map(lyrics, line => line.words.split(' '))),
  (counts, word) => {
    counts[word] = (counts[word] || 0) + 1;
    return counts;
  },
  {}
)

or piped

pipe(
  lyrics,
  map(line => line.words.split(' ')),
  flatten,
  reduce((counts, word) => {
    counts[word] = (counts[word] || 0) + 1;
    return counts;
  }, {})
)

With TS+ you keep chaining/fluent, discoverability and usage context, while the code gets compiled down to individual imports for tree shakability and optimisations: win-win.
The bigger the library (or the more libraries), and the more type classes/modules, the bigger the win.
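As I understand the post, this works by tying a plain exported function to method syntax through a JSDoc annotation; the emitted JavaScript stays a direct, shakable call. A hedged sketch (the `Chunk` type and annotation target name here are made up for illustration):

```typescript
// Sketch: under TS+ (as described in the post), `map` stays a plain
// exported function, but the annotation would let you write
// `chunk.map(f)`; the compiler rewrites it back to `map(chunk, f)`.
interface Chunk<A> {
  readonly values: ReadonlyArray<A>;
}

/** @tsplus fluent Chunk map */
export function map<A, B>(self: Chunk<A>, f: (a: A) => B): Chunk<B> {
  return { values: self.values.map(f) };
}

// What the compiled output amounts to: a direct, tree-shakable call.
const doubled = map({ values: [1, 2, 3] }, (n) => n * 2);
console.log(doubled.values); // [2, 4, 6]
```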

Daniel Rodríguez Rivero

Are the functions shown here part of the compiler itself? So can I just use them without any extra installation?
It looks awesome, good work, and kudos for going one step further than what most libraries could do and forking the compiler.

Michael Arnaldi Author

The functions/values, as in their material implementation, live in library code; in this case what's shown is part of @tsplus/stdlib, which you can use with or without the compiler fork and which should be installed with classic npm/yarn. When using the compiler fork, the same functions defined in stdlib (or in any other package like @effect/core, or in your local code) are used as the concrete implementations, but additional syntax is generated and made available to use; that additional syntax includes fluent methods, operators, etc. Also, when using the compiler fork your functions can specify derivation rules, and values can be defined as implicit instances that are used when a call to Derive() is compiled, in order to generate things like Encoder/Decoder/Guard/Equals and any custom typeclass that you may want.
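To give a feel for what such a derivation amounts to, here is a hand-written version of (roughly) what a derived Guard instance for a simple struct could look like; the `Guard` shape and the names below are illustrative only, not the actual @tsplus/stdlib definitions:

```typescript
// Illustrative sketch: what a call like `Derive<Guard<Person>>()`
// might expand to if written by hand (not the real stdlib types).
interface Guard<A> {
  readonly is: (u: unknown) => u is A;
}

interface Person {
  readonly name: string;
  readonly age: number;
}

// Primitive instances the derivation would resolve implicitly:
const guardString: Guard<string> = { is: (u): u is string => typeof u === "string" };
const guardNumber: Guard<number> = { is: (u): u is number => typeof u === "number" };

// Derivation composes the field guards into a guard for the struct:
const guardPerson: Guard<Person> = {
  is: (u): u is Person => {
    if (typeof u !== "object" || u === null) return false;
    const r = u as Record<string, unknown>;
    return guardString.is(r.name) && guardNumber.is(r.age);
  }
};

console.log(guardPerson.is({ name: "Mike", age: 40 })); // true
console.log(guardPerson.is({ name: "Mike" })); // false
```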

milottit

Looks promising! But why the use of comments? It doesn't seem really friendly to use. Why did you end up with this solution over new keywords or TypeScript decorators, for instance?

Michael Arnaldi Author

Because changing syntax would break compatibility with tools like ESLint or Prettier. Additionally, you are supposed to be able to integrate TS+ progressively in code, potentially even directly in projects that have types maintained separately (so you need something that you can write in a d.ts file, that is add-only, and that doesn't break any standard TS/JS). TypeScript itself is progressively thinking more and more of types as comments, and their recent TC39 proposal goes in that direction too. I can see how it may be perceived as unfriendly to use, but in practice our feeling, after having built huge codebases like github.com/effect-TS/core/ with it, is that it is indeed very friendly, and those comments end up representing only a small portion of your code.

Patrick Roza • Edited on

I was not amused by that at first either, but because it's basically just enhancing wiring metadata (the actual types are still expressed as usual), it's actually pretty good!