
Maël Nison

Plug'n'Play and Tink

The npm folks recently made an FAQ on Tink, explaining some of the motivations behind Tink and the differences they see with Plug'n'Play. Although interesting, it obviously doesn't detail the context or reasoning behind some of those differences, so let's have a talk and clarify all this! 🔬

  • First of all, note that Plug'n'Play is a specification. As such, it's not meant to be unique to Yarn, and the most important thing about it is that it formalizes the contract between your application and your package manager. It often happened that incorrect assumptions were made, causing unexpected (and painful) breakages at the worst possible times. Deploying to production only to find that suddenly nothing worked wasn't a great experience; neither was cloning an old project to discover that the hoisting had changed.
  • Plug'n'Play has been specifically designed to be easy to integrate with third parties - much easier, in fact, than the current Node resolution relying on node_modules. It only takes a single line of code from the dedicated PnP API to find out the location of the package you're looking for, no matter which tool you use (see the sketch after this list). It's made to provide better tools to packages that previously had to reimplement the module resolution logic by themselves, and certainly not to make their lives harder. On top of this, we're fully transparent through the standard Node APIs, meaning that require.resolve works just as you expect. This makes Plug'n'Play compatible with the vast majority of the ecosystem, and the few packages that aren't can usually just migrate to require.resolve and be done with it.
  • Tink, for better or worse, overrides the fs and child_process built-in modules. While it's certainly an interesting approach with a lot of value for fast iterations, we decided it would put too much risk on our users to be a permanent feature. We're not perfect, and a bug creeping into a monkey-patched fs function could be devastating - both in terms of reliability and security. In contrast, hooking into the module resolution is much more contained and fails fast if something goes wrong, and it's only a temporary measure until Node finishes implementing proper loaders.
  • Something that I think is a bit misleading in npm's communication: Tink doesn't get rid of the node_modules. Or at least not much more than what pnpm already does, in that the node_modules still exists from Node's point of view, even if it is virtualized. See, two of the main issues with node_modules are that it makes the boundaries between packages blurry (allowing you to require undeclared dependencies by the sheer virtue of hoisting - illustrated after this list), and that it puts various optimizations entirely off the table due to the limitations of a filesystem-based resolution approach (the main one being path conflicts). Tink might be able to ensure better boundaries depending on its implementation, but perfect optimization and single-instancing simply cannot be achieved without additional hooks into the Node resolution. The node_modules being backed by the filesystem isn't the (only) problem - the fact that the resolution relies on the node_modules nested hierarchy at all is.
  • I'm a bit concerned that Tink doesn't promote good practices. I believe we have a responsibility to keep the ecosystem in a sane state that makes it possible for other tools to emerge and eventually replace us. This is one of the reasons why we went with an approach that allows us to throw semantic errors when something is off (rather than silently making it work). Compatibility at all costs is a worthy goal, but considering the speed at which JavaScript is scaling, I believe we need to rethink some things we took for granted before they drag us down (also see: Deno).
  • Tink's feature set seems closely aligned with what pnpm already promises, except that it doesn't work without a heavy monkey-patch layer on top of Node's primitives. If anything, I don't see why it couldn't have been a package built on top of pnpm (not necessarily in its core - just one that would use it internally, like Lerna uses workspaces).
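
To give a concrete idea of the integration story mentioned above, here's a minimal sketch of resolving a package through the PnP runtime. It assumes the process runs inside a PnP-enabled project (for example via yarn node), which is what makes the pnpapi builtin available; the package name and issuer path are just placeholders.

```ts
// Minimal sketch, assuming a PnP-enabled project (placeholder package name and issuer).
const pnp = require(`pnpapi`);

// Ask the PnP runtime where `lodash` lives, as seen from a given file or directory.
// This is the "single line" mentioned above: no node_modules traversal needed.
const resolved = pnp.resolveRequest(`lodash`, `${process.cwd()}/`);
console.log(resolved);

// And since PnP stays transparent through the standard Node APIs,
// the regular resolution keeps working as usual:
console.log(require.resolve(`lodash`));
```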
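
As for the "blurry boundaries" point: in the hypothetical example below, my-app only declares express in its manifest, yet requiring debug (one of Express' own dependencies) happens to work under a hoisted node_modules layout. Under Plug'n'Play the same call throws a semantic error explaining that debug isn't declared as a dependency - the kind of error mentioned two bullets above. The manifest excerpt and error text are illustrative, not verbatim.

```ts
// package.json of my-app (hypothetical):
//   "dependencies": { "express": "^4.16.0" }

// Works with hoisted node_modules: `debug` got hoisted to the top level
// because Express depends on it, even though my-app never declared it.
const debug = require(`debug`)(`my-app`);

// Under Plug'n'Play, the same require throws something along the lines of:
//   Error: my-app tried to access debug, but it isn't declared in its dependencies
// which forces the missing dependency to be declared explicitly.
debug(`server starting`);
```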

Overall: Tink is an interesting idea, with trade-offs very different from ours! I've always been a strong proponent of different tools for different uses, and as such I find it wonderful to see such approaches. I just think the problems it solves are quite different from those that we're addressing with Plug'n'Play, despite the communication made around their similarities 😉

Final notes:

Yarn's philosophy in general is very different from the one npm seems to have adopted for their long-term plans. It's clear that they now wish to wrap Node behind an opinionated layer with various native integrations, as evidenced by the "automatic TypeScript and JSX support" features, and the way they prefer to lazily install their dependencies at runtime.

As for Yarn, we believe in a sounder and more generic approach where as much as possible is done ahead of time, leaving little to no work for the CI and production servers, shortening the feedback loop, and increasing the trust you can have that your deployment will go through. Our responsibility lies in building a solid foundation that will allow your applications to stay stable and predictable for the years to come. You can be sure that we won't let that get in the way of the user experience though, and we'll soon have very exciting features to show you how serious we are 😊

Top comments (1)

Zoltan Kochan

Good article. I learned a lot!

And thanks for mentioning pnpm!