Any time a new project is announced, especially in a relatively young language such as Rust, there will be the inevitable responses regarding "fragmentation", "wasted effort", "duplication of work" and so on.
I believe these responses are usually born out of a genuine desire to build a better, more mature ecosystem, and the fear that competing projects will mean slower progress towards that goal. There may be cases where these fears are justified, but in general I disagree with them.
In this article I aim to show why I think these responses are usually counter-productive.
The benefits of pooling resources
Firstly, let's look at some of the positives that come from pooling resources.
Maturity
Anyone familiar with Open Source knows that lots of eyes on the code can lead to more bug reports, more contributions towards fixing those bugs, and thus a more stable and mature project.
There is also the ecosystem that can build up around a project in the form of plug-ins, supporting libraries, interoperability with other projects, and perhaps more importantly, better tutorials, books, and online courses. The assurance that a project is going to be around for the long term should not be underestimated, as it allows developers and companies to invest resources in building products on top of the stable and mature codebase. These are all very good things.
Productivity
When a project becomes stable and mature, it is likely that it will meet the needs of a greater percentage of downstream developers, companies, and so on. That means they can use the project to build products that benefit their customers, and they can spend their time and effort building their own products rather than debugging the upstream project or trying to figure out how it works. The increased educational resources mentioned above also lead to greater productivity across the board. Now companies can mention the project in their hiring criteria and they can likely find developers who already know how to work with the upstream software.
The cost of pooling resources
The above benefits are great, but there are also some downsides.
Are larger teams always more productive?
Let's assess the idea that if everyone worked on a single project, that project would move faster, and conversely if many people worked on different projects, progress would be slower.
Anyone who has managed a large team of people knows that progress in a software project does not scale linearly with the number of people contributing to it. Real complexity emerges once a team grows beyond certain thresholds (the actual numbers are fuzzy). The more developers you have, the greater the potential for bugs, internal fragmentation (feature A does not interoperate well with feature B), and instability. Larger projects need more maintainers willing to oversee the project design and hold regular team meetings to determine the technical direction of the project.
I am not saying that large teams never work. Clearly they do. Projects such as the Linux kernel show that large software projects can work incredibly well. Open Source is built on the incredible ability of humans to collaborate and achieve amazing things. But this collaboration isn't easy, and it does come with several costs. More contributors will bring technical and practical challenges that must be overcome, and this will generally mean the project must move much slower in order to guarantee consistency and harmony between the contributions and the overall vision and direction of the project.
Do larger projects really move faster?
Once a project matures it will also see a growing user base, with many other projects that now depend on it. Stability and backwards-compatibility then become far more important goals than adding new experimental features. Every change requires more and more testing to ensure it does not break some other part of the software, and does not break any projects that rely on it.
The result of all of this is that, contrary to the original assumption, projects with a larger codebase may actually move much slower. The slowdown comes from some combination of a greater emphasis on API stability and backwards-compatibility, and the technical and practical complexity of managing a larger team of people.
In general (though there surely are exceptions), the larger the project, the less it is able to innovate rapidly. Backwards-compatibility and API stability are benefits that often come at the cost of reduced innovation.
An alternative view
If larger projects tend to move slower and innovate less, should we abandon all large projects in favour of smaller ones? Or should we cap the number of contributors?
No, of course not! As mentioned at the top of this article, there are clear benefits to having larger projects that are more robust, mature, and stable. This article is not suggesting we stop supporting these projects. We have to stop thinking in terms of "there can only be one". It's not either-or. We can do both.
Rather, what I am proposing is that we continue to support these larger projects where they provide benefits to us, while also allowing space in the ecosystem for more upstarts and offshoots. Look at them as budding experiments that may or may not provide valuable benefits down the track.
Where does innovation come from?
Choose any large software project and take a look back in history to when it was just starting out. How did it become the project it is today? Did it get there purely by people contributing their efforts to projects that already existed? Or was there instead some point where an individual or small team started a new project and tried to build something better? They may have seen a niche that needed to be filled, or a better way to do something. It is very often the entrepreneurial drive of such individuals that sees the creation of new paradigms, new frameworks, and new ways of making software.
Almost every piece of software we use today was once a competitor to some larger, more mature product. The new projects could innovate much faster because they did not have the baggage of legacy software, legacy features, and a long history of dependencies and unfortunate design decisions. There will always be early-adopters willing to try a new piece of software and put up with the bugs and instability in exchange for some benefit not available elsewhere. As more people join the project, the trend towards maturity and away from innovation begins, but the original innovation remains. Innovation can still happen, of course, but with every stable API release, the pace of innovation and experimentation must inevitably slow down.
Does fragmentation really slow the pace of software?
If we were building physical products, then it follows logically that if a person spends their time and energy on one product, that time and energy is not being spent on other products. However, software is not like that, especially when it comes to Open Source.
If I make a change to a piece of software, or implement a useful feature, the only thing preventing others from copying said feature would be Copyright/Trademark laws and the like. In the world of Open Source this restriction is often reduced or eliminated (depending on the licenses used). We can and should share code between projects, and benefit from each other's work, giving appropriate attribution generously as we go. Definitely ask for permission from the author if you are unsure. Software patents complicate this a lot but that is beyond the scope of this article.
If a feature is implemented in a competing project and becomes well-received by users, other projects should be encouraged to implement it also, either by re-using the code or the design or otherwise tweaking the overall idea to fit. In this way, fragmentation isn't zero-sum in terms of features and effort. Everyone benefits from the ideas of others, and we end up with the best of all possible features.
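To make the attribution point concrete, here is a minimal sketch of what borrowing a small function from another project might look like in practice. The crate name, URL, and function are hypothetical, purely for illustration:

```rust
// Ported from the (hypothetical) `other-http` crate, v0.3, MIT licensed:
// https://example.org/other-http
// The parsing logic is kept as-is; only the naming was adapted to this crate.
fn parse_header_line(line: &str) -> Option<(&str, &str)> {
    let (name, value) = line.split_once(':')?;
    Some((name.trim(), value.trim()))
}

fn main() {
    assert_eq!(
        parse_header_line("Content-Type: text/html"),
        Some(("Content-Type", "text/html"))
    );
}
```

A short comment like this costs nothing, keeps the license obligations visible, and makes it easy for future maintainers to pull in upstream fixes.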
Fragmentation = Experimentation, and that is a Good Thing™
When we view competing projects as little experiments designed to explore some corner or niche in design space, the picture changes. No longer are we looking at "wasted effort", but rather we are looking at parallel development that might lead to exciting new innovations down the track. Some of those innovations may even be incorporated back into the other projects once they have proven their effectiveness and success.
True, most of these experiments will go nowhere and may fizzle out early, but we can learn from those outcomes too. It is this repeated process of experiment, implementation, evaluation, and reincorporation of what we learned, that drives software innovation right across the industry. It has always been thus, and we are all the beneficiaries of it.
The downsides of fragmentation
There are some notable downsides to fragmentation, however, and I would like to mention a few of them so that we can learn how best to avoid them.
Confusion for newcomers
Probably the main downside of fragmentation is the confusing landscape presented to newcomers. Anyone who used Linux in the '90s or early 2000s knows the confusing process of figuring out which distro to try. Each distro had its own pros and cons, and each had its own band of proponents advocating it and disparaging the others.
This is clearly counter-productive and can put new developers off entirely. The state of npm packages in JavaScript is another example, where it is sometimes not clear which of the 900 competing frameworks one should use. We use the number of weekly downloads as a proxy for determining which projects are reliable or stable. Over time some clear winners do emerge, such as React, Angular, and Vue, and their popularity compounds their success to the point where they become the mature projects we mentioned at the start of this article.
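As a rough illustration of the download-count proxy, here is a minimal sketch that queries the npm registry's public downloads endpoint for a few well-known packages. It assumes the `reqwest` crate (with the `blocking` and `json` features) and `serde_json`; the package list is just an example:

```rust
// Fetch last-week download counts from the public npm downloads API
// (https://api.npmjs.org/downloads/point/last-week/<package>).
fn main() -> Result<(), Box<dyn std::error::Error>> {
    for pkg in ["react", "vue", "svelte"] {
        let url = format!("https://api.npmjs.org/downloads/point/last-week/{pkg}");
        let body: serde_json::Value = reqwest::blocking::get(url)?.json()?;
        let downloads = body["downloads"].as_u64().unwrap_or(0);
        println!("{pkg:>10}: {downloads} downloads last week");
    }
    Ok(())
}
```

Raw download numbers are a crude signal, of course, but they are often the first thing people reach for when comparing projects.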
But how do we get there without the confusion?
Unfortunately I don't have any clear solutions for this, mostly because it's difficult to predict ahead of time which project will become successful, and therefore which to promote. This kind of thing usually emerges naturally over time.
My suggestion is for the core team of any programming language to at least provide a single page outlining the various options, with pros and cons for each (as unbiased as possible), and to continually update it to reflect the latest state of the ecosystem, including popularity within the community. The Rust community has such central pages, for example https://www.arewewebyet.org/, and these are extremely helpful. By reporting this feedback regularly in one central location, the stand-out projects become known more quickly, and this accelerates the community out of the confusing landscape and towards a more mature ecosystem containing a few large projects and many experiments from which they can draw inspiration.
Disagreements, insults, and factions
The tendency to view fragmentation as unwelcome competition or even as a threat can unfortunately bring out the worst human behaviours in all of us. We compete for "customers" for our projects as if our very livelihoods depended on it (perhaps in some cases they do).
But if we have learned anything from past experience, it is that competition almost always leads to better outcomes for the consumer. Perhaps this sounds a little like free-market capitalism, and yes, there are some parallels, though not intentionally. I think any system where individuals or groups compete on an evolutionary playing field will show similar traits. It is notable that the cases where such markets do not serve the consumer are precisely those where there is a lack of competition, not too much of it. This would suggest we should look more favourably on those projects willing to experiment and try something different.
Too much reinventing the wheel
There probably is a point beyond which we should stop trying to reinvent the wheel or improve on past inventions, but the problem is that we don't know where on that scale we are. Perhaps if no new web framework delivers any benefits in ten years, that might suggest we should go back to incrementally improving existing ones, but I don't think we are there yet. The space of possible designs is potentially infinite, and we often cannot assess the effectiveness (or lack thereof) of a design until we try it out.
There is a practical aspect as well. In the modern world, we all need to earn a living. In this market, there is a significant cost to working on failed experiments and a corresponding benefit to being more cautious and sticking to tried-and-tested designs. The space of innovation is still open, but the innovations often need to deliver benefit in the short term. The upshot is that we should probably not be too concerned with people reinventing the wheel. The projects that should succeed probably will. The rest will cease due to lack of funding, or interest, or both.
So perhaps we should be hesitant to reinvent the wheel, but keep an open mind when it comes to reinventing the web framework, for example. We don't need to jump on board every new project, but we can wish their contributors all the best just the same.
Embrace fragmentation, but not too much
In general, fragmentation = experimentation. We should celebrate it, and encourage people to explore their ideas and see what they can build.
At the same time, we can (and should) still put our efforts and resources into building a stable ecosystem via the more popular community projects. This is not a zero-sum game. We can borrow ideas (and sometimes code if licenses permit) from other projects and all benefit as a result. Communities will form naturally around projects that deliver good results.
There is an optimal balance between the two, one that probably favours mature ecosystems and stable APIs somewhat, while still leaving room for experimentation to occur freely.
At the end of the day, having a few mature projects that can incorporate new features designed and perfected in little experiments and other offshoot projects is the best of both worlds. It's not fragmentation vs pooling resources. It's a big world full of amazing developers. We can support both approaches. And as consumers of the resulting software, we all benefit.
Top comments (1)
I think software dev is the only place where I've heard people complain about too much competition, with so many asking for more oligo-/monopolies; it's really weird to me. Some people seem convinced that if, instead of pair programming, you put 20 programmers on the task instead of 2, it would go faster. I wouldn't work faster with 10 more people working on the same thing as me. And what I find stranger still is that the "fragmentation" argument comes from within the FOSS community. If someone wonders how awesome a *NIX OS with less "fragmentation" would be, they should just look at iOS; that's exactly that. Why not just use Apple's "unfragmented" system and leave the FOSS "fragmented" way of doing things alone? It has worked so far, quite well I would say.