
What if we shared code instead of packages?

Valeria · 2 min read

Let's start a new project. You'd probably run something like npm init, followed by a list of packages you already have in mind. And that's how it goes, right? Sure, cleaning gigabytes of node_modules off our dev machines is annoying, but otherwise it's okay. Or is it?

Code trust issues

We can't possibly review the source code of every module we use. After all, it has 5.5K stars on GitHub and lots of projects rely on it. They know better, after all...

I hate to break it to you, but not always. I'm talking about a package called express-graphql that just so happens to validate the GraphQL schema on every single request (proof: line 274).

Surely, mistakes happen and pull requests fix them after a while. Long after the project's deadline.

Now you've probably realised it's not about this particular package: we base our work on code we can't trust. And I'm not even talking about cases where npm packages are purposely hazardous.

Everyone is aware of that, and some try to do something about it. Deno, for example, tries to solve this by allowing you to import code from any https URL. Which is great, except I still can't edit the code. I carry all the responsibility for my projects, yet I don't have control over half of them. So, what do we do?

StackOverflow Driven Development

Surprisingly, we've had a perfect solution all along, and we've been mocking it: copy-paste, change some letters, and you're good to go. An extremely inconvenient way of sharing code: in the form of code.

But wait, how would we fix issues or add new features? The code would be spread worldwide; with so many people changing it, it would be practically impossible to control, and it would probably require peer-to-peer connections, a global hub and encryption. In other words: some trustworthy distributed network...

Pretty much like blockchain, right?

I know, it's hard to take in; I'm just as surprised as you are. But it does solve all the problems I can think of:

Trust, Availability, Easy access and Small size.

We just need to build it, so who's with me?

Photo by ๐Ÿ‡จ๐Ÿ‡ญ Claudio Schwarz | @purzlbaum on Unsplash


Comments

Just some more info about the express-graphql example. Based on this closed issue (github.com/graphql/express-graphql...), it is intentional and comes down to the caching behaviour of the graphql function being used. Even if that's the case, it's an interesting illustration of your article's point -- layers of packages can tell you what a function does (validate the schema), but not what the code does or why it is or isn't a performance hit.

Instead of taking the devs' word on the issue, I checked for myself. Here's the code for validateSchema: the GraphQLSchema object contains an internal property called __validationErrors. The validation context returns an empty array, which still triggers the short-circuit if there has been a previous validation with no errors.
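To make the short-circuit concrete, here's a minimal sketch of that memoization pattern. This is not the actual graphql-js source, just an illustration of caching validation errors on the schema object, with the property name taken from the comment above:

```javascript
// Simplified sketch of the pattern: the first call runs the (expensive)
// checks and caches the result on the schema itself; every later call
// returns the cached array, empty or not, without re-validating.
function validateSchema(schema) {
  if (!schema.__validationErrors) {
    const errors = [];
    // ...the real implementation would collect schema errors here...
    schema.__validationErrors = errors;
  }
  return schema.__validationErrors;
}

const schema = { types: [] };
const first = validateSchema(schema);
const second = validateSchema(schema);

// Both calls hand back the very same cached array.
console.log(first === second); // true
```

So a per-request validateSchema call is cheap after the first request, which is what makes the behaviour defensible, if surprising.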


Thank you, I was not aware of that. In my defence, I can't imagine a use case where updating the schema at runtime wouldn't be a terrible idea.

Totally agree with you. Just wanted to add that complexity is a consequence of the "black box" approach to code sharing: it forces maintainers to introduce "what-if" and "who-knows" features to keep their modules universal.


No defense needed (in my opinion). It just made me sweat a little bit given my use of express-graphql, so I wanted to verify. I ran into a problem with Apollo just a couple of weeks back where their "hidden" caching of query Documents was giving me problems, because I hadn't been deep-copying parts of the resolveInfo that I was passing through part of my own library for modification. I mean, it's partially my fault for not writing a pure function in the first place, but there's also no disclaimer that their memoization/caching relies on you only ever using pure functions.
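The pitfall above is easy to reproduce in miniature. This is a hedged illustration, not Apollo's actual code; the cache and the field names are made up for the example:

```javascript
// A library memoizes a parsed document and hands every caller
// the same object instance.
const cache = new Map();

function parseQuery(query) {
  if (!cache.has(query)) {
    cache.set(query, { query, fields: ["id", "name"] });
  }
  return cache.get(query); // same object every time!
}

// An impure consumer mutates what it was handed...
const info = parseQuery("{ user }");
info.fields.push("email");

// ...and every later consumer now sees the mutated version.
const again = parseQuery("{ user }");
console.log(again.fields); // ["id", "name", "email"]

// The fix: deep-copy before modifying, so the cache stays untouched.
const safe = structuredClone(parseQuery("{ user }"));
safe.fields.push("avatar"); // only the copy changes
```

Without reading the library's source, nothing warns you that mutating the returned object corrupts the cache for everyone downstream.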

Edit: there used to be code for a GraphQL validation function here; it's explained much better in the comments above.
In my case this server was meant to run in Kubernetes, so if my schema is invalid I need it to crash on start and thereby stop the rollout of the new deployment version.


You know, I encounter binary builds on npm from time to time, and I start wondering: what if the build step was even slightly altered by the packager? It makes me uncomfortable in terms of openness when some step in the deployment is a black box and I can't be sure I'm getting a 100% genuine build of the source code, or whether some telemetry is being collected behind my back.
But at the same time, it's so convenient that everything is pre-built, so I can test it in a timely manner.

It would be extra exhausting to install a package AND the build tools required to build it on my machine, because why use a package then, LOL? 😆


Yeah, it's scary if you start to think about it. Packages are built out of whatever files you're packaging, and there's absolutely no check that it's the same source code provided on GitHub or whether it was altered.
As for binaries, WebAssembly could help with that: it's cross-platform and requires no extra build tools.

Everything we use in dev (or even in production) is based on someone else's work; and it's great, we are where we are because it was refined and perfected over and over. Wouldn't it be nice to be involved in that evolution?


Well, you still need a build tool to wasm-ify whatever you just wrote in some readable programming language, but I get the idea.

Nevertheless, even if wasm were somehow readable, it would be handier if builds and packaging were scheduled automatically in the repository, with no strings attached. It's more of an option that raises the credibility of a package.

I dunno what's even beyond WebAssembly for the web today? It's certainly awesome progress.


If by code sharing you mean code checked into your project, I feel that most of the time it's not the best idea, because it increases what you have to maintain. When I use a package, it's the maintainers who update it, not me...


I agree, there are packages that are not meant to be edited - a database protocol implementation, for example - and in that case a compiled binary library would do a better job.

Like it or not, we are maintaining the packages we use. It's not just install-and-forget: at the very least, you'll need to update them from time to time and adjust your code to the changes.

Debugging, optimisation and sometimes even testing get really hard without full access to the code.

The packages need to be a bit different too. Closer to Deno's philosophy - minimalistic, preferably with no dependencies, meant to be edited and adjusted.

Now you're probably thinking about git submodules. And I agree with you - working with them is hell. The smooth tech for this concept doesn't exist. Yet :)


Copy & paste is a great solution for code sharing. In fact, I was thinking of adding "copy & pastes" to my no-JS static site generator mkws.sh instead of extensions or plugins. You would just copy and paste, say, the code for social widgets to your preferred locations and you're done.


I would say we should go back to the good old times, when we wrote functional code and referenced the required functions when and where needed in our projects. All we needed to know about those functions was the interface (arguments and return values).

If we could simplify packages into plain, simple, dead functions that we use as-is in our projects, that would be great.

If there is an update, I replace the function with the new one.

If someone hits a bug, they can peek into the imported function and see if there's an error in the code.

The code is simply in front of you; if there is suspicious activity, you will see it right there.
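The vendor-a-plain-function approach described above might look like this. The slugify logic is a common textbook version written for this example, not taken from any particular package:

```javascript
// Lives in your own repo (e.g. lib/slugify.js), so the whole
// implementation is right in front of you: no dependency to audit,
// and a bug fix is a one-line edit in your own tree.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into dashes
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}

console.log(slugify("  Hello, World!  ")); // "hello-world"
```

The trade-off is exactly the one discussed below: you now own every bug in that function, upstream fixes no longer reach you automatically.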


There are plenty of examples where code duplicated from external resources contains bugs that are never fixed, and plenty more where external projects were cloned and changed, and the clones never received any bug fixes because they were no longer maintained.

The most expensive task in software development is maintenance. Adding more code to your project makes that task bigger.


I totally agree, but packages are also code: the less you have, the better. You rightly pointed out that having a limited number of maintainers is an issue - that's where decentralised code storage comes in handy.


Do you mean that every module is a separate blockchain and new versions are blocks?


That's a good point. Yes, having a separate chain for each module might just work, though duplicating data with each transaction seems like overkill. I think Git might have a solution for that, though.


I'm not sure I really understand. Isn't npm/PyPI/... doing exactly that with versions? Yes, you can delete a package, but otherwise where is the advantage? Do you mean something like nest.land/ for Deno, or something even more advanced?