Sprockets, Webpack(er), importmaps, esbuild, Propshaft, Rollup, Sass, PostCSS, Tailwind?
It's easy to get lost among all the tools offered by Rails to manage your assets, especially if you're a beginner in web software development or new to the Rails ecosystem.
In this article, we'll take a deep dive into the ways Rails has helped us manage assets, starting with Sprockets back in 2011 and ending with the most recent tools released in 2022. We'll try to understand the reasons behind the different changes, and deeply comprehend how they work, reading some code along the way.
The Dark Age
Before Rails 3.1 in 2011, there was no official way to manage assets in a Rails app.
You would simply put your assets in the `public` directory and require them directly.
Importing an external library meant copying and pasting its source code into your `public` folder.
Back then, JavaScript modules did not exist, so all JS code was global, living in the `window` namespace; SPAs were not a thing, nor were React or TypeScript. What a blissful time...
Enter Sprockets
Sprockets was first released in 2009 but was integrated into Rails in 2011 with Rails 3.1.
Surprisingly, it hasn't changed much since.
Let's look at the release notes from Rails 3.1:
The asset pipeline provides a framework to concatenate and minify or compress JavaScript and CSS assets. It also adds the ability to write these assets in other languages such as CoffeeScript, Sass and ERB.
Prior to Rails 3.1 these features were added through third-party Ruby libraries such as Jammit and Sprockets. Rails 3.1 is integrated with Sprockets through Action Pack which depends on the sprockets gem, by default.
Making the asset pipeline a core feature of Rails means that all developers can benefit from the power of having their assets pre-processed, compressed and minified by one central library, Sprockets. This is part of Rails’ “fast by default” strategy as outlined by DHH in his keynote at RailsConf 2011.
Sprockets allowed us to integrate third-party dependencies much faster (through assets vendored in gems, remember `bootstrap-rails`?) and offered us out-of-the-box tools such as minification and the ability to use variables in our front-end code.
Let's have a look at how it works.
All code and behaviour described here comes from the latest version available at the time of writing, which is v4.2.0.
General behaviour
Assets go into `app/assets/javascripts` and `app/assets/stylesheets` respectively.
(Sprockets also supports `app/assets/images` and `app/assets/fonts`, but we won't look into them in this article.)
To include them on our web pages, we need to use the `javascript_include_tag` and `stylesheet_link_tag` helpers.
It is possible to include third-party libs or our own files by using special comments in JS files, and the classic `@import` statement in CSS files.
//= require jquery

$(document).ready(function () {
  // my custom code here
})
If `config.assets.compile` is set to true, assets will be compiled live when requested. This is perfect for development environments.
In production, assets are generally precompiled, meaning they are all generated at one point, copied to the `public/assets` directory, then served directly when requested. This is done by running the `rails assets:precompile` command.
To avoid caching issues, assets are fingerprinted: a hash of the file's contents is added to the filename, so any change to the file generates a new fingerprint, ensuring the browser has to request a fresh copy it does not have in cache. This happens in all environments, and is handled through a manifest file, a simple JSON file mapping original filenames to fingerprinted ones.
This way, when calling `javascript_include_tag('application')`, the helper will look into the manifest file to find the correct filename and generate the correct script tag, such as `<script src="/assets/application-1234567890.js"></script>`.
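The whole fingerprint-plus-manifest mechanism can be sketched in a few lines of plain Ruby. The helper names below are invented for illustration, not Sprockets' actual API:

```ruby
require "digest"
require "json"

# Compute a fingerprinted filename: a hash of the contents goes into the name,
# so any content change yields a new filename and busts browser caches.
def fingerprinted_name(logical_name, contents)
  ext = File.extname(logical_name)
  "#{File.basename(logical_name, ext)}-#{Digest::MD5.hexdigest(contents)}#{ext}"
end

# The manifest is just a JSON mapping from logical names to fingerprinted ones.
def build_manifest(assets)
  JSON.generate(assets.to_h { |name, contents| [name, fingerprinted_name(name, contents)] })
end

# A helper like javascript_include_tag only needs to look the name up.
def asset_path(manifest_json, logical_name)
  "/assets/#{JSON.parse(manifest_json).fetch(logical_name)}"
end

manifest = build_manifest("application.js" => "console.log('hi')")
puts asset_path(manifest, "application.js")
# e.g. "/assets/application-<md5 of the contents>.js"
```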
In the code
Sprockets::Server
The most interesting part to look at, to me, is the Sprockets web server.
It runs when the browser tries to fetch an asset through an HTTP request and compile mode is enabled. Otherwise, assets are served through the classic static-files middleware if they are present in `public/assets`.
As with most web-related Ruby things, it is indeed a simple Rack middleware.
Its `call` method is actually defined inside the Sprockets "environment" class, which is itself included in the middleware stack through the `sprockets-rails` gem.
Let's have a look at a (really) stripped-down version of the `call` method.
def call(env)
  # ...
  # Extract the path from everything after the leading slash
  full_path = Rack::Utils.unescape(env['PATH_INFO'].to_s.sub(/^\//, ''))
  path = full_path
  # ...
  # Strip fingerprint
  if fingerprint = path_fingerprint(path)
    path = path.sub("-#{fingerprint}", '')
  end
  # ...
  # Look up the asset.
  asset = find_asset(path)
  if asset.nil?
    status = :not_found
  # elsif ...
  else
    status = :ok
  end

  case status
  when :ok
    logger.info "#{msg} 200 OK (#{time_elapsed.call}ms)"
    ok_response(asset, env)
  end
  # ...
end
First, we get the path we want from the Rack env, with some cleaning.
Then the fingerprint is removed: we don't need it in the development environment, as we just want to find the correct file and compile it.
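Stripping the fingerprint boils down to a regex capture on the filename. Here's a simplified sketch of a `path_fingerprint`-style helper (not Sprockets' exact regex):

```ruby
# Extract a trailing "-<hex digest>" fingerprint from a path, if present.
# Returns the digest string, or nil when the path is not fingerprinted.
def path_fingerprint(path)
  path[/-([0-9a-f]{7,128})\.[^.]+\z/, 1]
end

path = "application-1234567890abcdef.js"
if (fingerprint = path_fingerprint(path))
  path = path.sub("-#{fingerprint}", "")
end
puts path # => "application.js"
```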
Then, we call `find_asset(path)`, which will return an `Asset` object.
Digging into it could be a whole article, so I'll try to describe what is done in my own words:
- Possibly return a cached version of the file
- Find the correct processor(s) given the file type (Sass, Babel, etc.)
- Run the processor(s), if any
- Run the compressor, to minify the file
- Build an `Asset` object with the result
One of the processors is the `DirectiveProcessor`; it is used to resolve the magic `//= require` comments and replace them with the correct file contents.
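The idea behind directive resolution can be illustrated with a naive sketch: recursive string substitution. The real processor also tracks dependencies, handles load paths, and avoids including the same file twice.

```ruby
# A toy directive processor: replaces each `//= require name` comment line
# with the contents of the named file, recursively.
def resolve_directives(source, files)
  source.gsub(/^\/\/= require (\S+)$/) do
    resolve_directives(files.fetch(Regexp.last_match(1)), files)
  end
end

files = {
  "jquery"      => "window.$ = function () {};",
  "application" => "//= require jquery\n$().ready();"
}

puts resolve_directives(files["application"], files)
# The require line is replaced by jquery's source, followed by the custom code.
```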
Note that the same code runs when using `assets:precompile`, in order to write the assets to the `public/assets` directory.
Once the asset is returned, and if everything is OK, `ok_response` will be called. It's as simple as that:
# Returns a 200 OK response tuple
def ok_response(asset, env)
  if head_request?(env)
    [ 200, headers(env, asset, 0), [] ]
  else
    [ 200, headers(env, asset, asset.length), asset ]
  end
end
It just passes the asset object to Rack.
Rack expects the body to respond to `each` and yield strings; let's have a look at the `Asset` class.
# Public: Add enumerator to allow `Asset` instances to be used as Rack
# compatible body objects.
#
# block
#   part - String body chunk
#
# Returns nothing.
def each
  yield to_s
end

# Public: Alias for #source.
#
# Returns String.
def to_s
  source
end
These methods make the asset object compatible with the Rack API: Rack will call `each` on it, and the block will be yielded once with the full source of the asset. Nice!
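To make that contract concrete, here's a minimal stand-in (a toy class, not Sprockets' `Asset`) that satisfies the same Rack body interface:

```ruby
# Minimal Rack-compatible body: anything responding to #each and yielding
# strings works, which is exactly what Sprockets' Asset does.
class TinyAsset
  def initialize(source)
    @source = source
  end

  def each
    yield to_s
  end

  def to_s
    @source
  end
end

body = TinyAsset.new("console.log('hi')")
chunks = []
body.each { |part| chunks << part } # Rack iterates the body like this
puts chunks.join # => "console.log('hi')"
```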
Webpacker
Around 2015, JavaScript bundlers were getting a lot of traction with the promise of easily using npm packages in the browser, transpiling modern JS, etc.
Webpack quickly became the reference, and soon inescapable for frontend-heavy applications.
If a Rails developer wanted to include an npm package in their app, they had to either rely on a gem being built and maintained to deliver the assets through Sprockets, or implement a fully-fledged JS bundler stack such as webpack in their app, which was quite tedious for our simple Ruby brains.
Webpacker was designed as a way to benefit from webpack inside a Rails app without having to know and understand how webpack worked.
It was added as the default in Rails 6.0, for JS only, Sprockets remaining the default for CSS, although it is possible to use Webpacker for CSS too.
General behaviour
Let's consider the default behaviour, with webpacker only used for JS.
Webpacker will look for JS files in `app/javascript`, and especially for webpack entry files in `app/javascript/packs`.
To include them on our web pages, we need to use the `javascript_pack_tag` helper.
This helper will generate a script tag with the correct path to the compiled JS file.
In development, the file will either be compiled directly by the `javascript_pack_tag` method, straight into the `public/packs` folder, and then served like any static asset, or proxied on the fly to a tool named `webpack-dev-server`, which allows for code reloading and other goodies.
In production, the file is served statically from the `public/packs` directory. It is generated when we run `rails assets:precompile`, which runs webpack in production mode under the hood.
Some configuration is possible through a YAML config file and some JS files; the `@rails/webpacker` package is used to generate the webpack config. I remember it being quite tedious, as it was not possible to override the webpack config directly. The web was full of articles and Stack Overflow answers that worked with webpack directly, but adapting them to Webpacker was not always easy.
Similarly to Sprockets, webpack (or its dev server) will fingerprint the assets, and store the mapping in a manifest file at `public/packs/manifest.json`.
In the code
In production, most of the work is done by webpack directly, webpacker acting as a wrapper and a configuration generator. It's not super interesting to look at.
In development though, since assets are compiled on-demand, or proxied to the webpack dev server, it's a bit more interesting.
Let's first look at the `javascript_pack_tag` method and follow the rabbit hole.
def javascript_pack_tag(*names, **options)
  javascript_include_tag(*sources_from_manifest_entries(names, type: :javascript), **options)
end

# ...

def sources_from_manifest_entries(names, type:)
  names.map { |name| current_webpacker_instance.manifest.lookup!(name, type: type) }.flatten
end
Basically, this helper will try to find the correct files in the Webpacker manifest, and then call the classic `javascript_include_tag` method.
Let's look at the `lookup!` method and its friends.
def lookup!(name, pack_type = {})
  lookup(name, pack_type) || handle_missing_entry(name, pack_type)
end

def lookup(name, pack_type = {})
  compile if compiling?
  find(full_pack_name(name, pack_type[:type]))
end

def compiling?
  config.compile? && !dev_server.running?
end
If the dev server is not running, and the config is set to compile on demand (which is true in development by default), it will run the compilation synchronously.
Then, it tries to find the asset in the manifest, and if it can't find it, it will call `handle_missing_entry`, which raises an error.
If the dev server is running though, it will skip the compilation step and just try to find the asset in the manifest.
The asset will be found, because `webpack-dev-server` adds it to the manifest when it starts.
We don't have the file compiled yet, but the dev server will compile it on the fly when the browser requests it.
This is done through the `Webpacker::DevServerProxy`, a Rack middleware added by Webpacker.
Let's see its complete code:
require "rack/proxy"

class Webpacker::DevServerProxy < Rack::Proxy
  delegate :config, :dev_server, to: :@webpacker

  def initialize(app = nil, opts = {})
    @webpacker = opts.delete(:webpacker) || Webpacker.instance
    opts[:streaming] = false if Rails.env.test? && !opts.key?(:streaming)
    super
  end

  def perform_request(env)
    if env["PATH_INFO"].start_with?("/#{public_output_uri_path}") && dev_server.running?
      env["HTTP_HOST"] = env["HTTP_X_FORWARDED_HOST"] = dev_server.host
      env["HTTP_X_FORWARDED_SERVER"] = dev_server.host_with_port
      env["HTTP_PORT"] = env["HTTP_X_FORWARDED_PORT"] = dev_server.port.to_s
      env["HTTP_X_FORWARDED_PROTO"] = env["HTTP_X_FORWARDED_SCHEME"] = dev_server.protocol
      unless dev_server.https?
        env["HTTPS"] = env["HTTP_X_FORWARDED_SSL"] = "off"
      end
      env["SCRIPT_NAME"] = ""

      super(env)
    else
      @app.call(env)
    end
  end

  private

  def public_output_uri_path
    config.public_output_path.relative_path_from(config.public_path).to_s + "/"
  end
end
We can see it inherits from `Rack::Proxy`, a Rack middleware that allows us to proxy requests to another server.
It will only proxy requests that start with `public_output_uri_path` (by default `packs/`), and only if the dev server is running.
When this is the case, it changes the request headers to point to the dev server instead of the Rails server, then calls `super(env)`, which invokes the `perform_request` method of the parent class, proxying the request to the dev server.
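Stripped of the header rewriting, the proxy's decision logic boils down to a path-prefix check plus a dev-server liveness check. A toy sketch (invented class, not Webpacker's actual code) could look like this:

```ruby
# A toy version of the dev-server proxy decision: requests under the packs
# prefix go to the dev server when it is running; everything else falls
# through to the app. Header rewriting and real proxying are omitted.
class TinyDevServerProxy
  def initialize(app, dev_server_running:, prefix: "/packs/")
    @app = app
    @dev_server_running = dev_server_running
    @prefix = prefix
  end

  def call(env)
    if env["PATH_INFO"].start_with?(@prefix) && @dev_server_running
      [200, { "content-type" => "text/plain" }, ["proxied to dev server"]]
    else
      @app.call(env)
    end
  end
end

app = ->(env) { [200, { "content-type" => "text/plain" }, ["served by app"]] }
proxy = TinyDevServerProxy.new(app, dev_server_running: true)
_status, _headers, body = proxy.call("PATH_INFO" => "/packs/application.js")
puts body.join # => "proxied to dev server"
```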
Propshaft & friends
Propshaft is a new asset pipeline that was introduced in Rails 7.0, although it is not expected to be the default until Rails 8.0.
Here are the first lines of its README :
Propshaft is an asset pipeline library for Rails. It's built for an era where bundling assets to save on HTTP connections is no longer urgent, where JavaScript and CSS are either compiled by dedicated Node.js bundlers or served directly to the browsers, and where increases in bandwidth have made the need for minification less pressing. These factors allow for a dramatically simpler and faster asset pipeline compared to previous options, like Sprockets.
Propshaft is, in a way, Sprockets stripped down to the bare minimum.
Its philosophy is based on the idea that assets will be bundled by external tools, such as bun, esbuild, Rollup, or webpack for JS, and PostCSS, Sass, or the Tailwind standalone CLI for CSS.
Those tools are integrated through the `jsbundling-rails` and `cssbundling-rails` gems, which provide simple configuration and generators, and hook into the commands we already know, such as `rails assets:precompile`. In development, they are configured in "watch" mode, so they recompile the assets when they change.
Note that you can also use `jsbundling-rails` and `cssbundling-rails` with Sprockets.
General behaviour
Propshaft will look for files in the `app/assets/builds` directory and handle them in a similar way to Sprockets. However, besides the fingerprinting, no extra processing is done, and the files are served as-is. All the heavy lifting is expected to be done by the external tools (esbuild, PostCSS, etc.).
In the code
Propshaft is much simpler than Sprockets, and it's easier to understand what is going on.
It has a Rack middleware as well, which is used to compile and serve the assets in development mode.
Here's its `call` method:
def call(env)
  path, digest = extract_path_and_digest(env)

  if (asset = @assembly.load_path.find(path)) && asset.fresh?(digest)
    compiled_content = @assembly.compilers.compile(asset)

    [
      200,
      {
        Rack::CONTENT_LENGTH => compiled_content.length.to_s,
        Rack::CONTENT_TYPE   => asset.content_type.to_s,
        VARY                 => "Accept-Encoding",
        Rack::ETAG           => asset.digest,
        Rack::CACHE_CONTROL  => "public, max-age=31536000, immutable"
      },
      [ compiled_content ]
    ]
  else
    [ 404, { Rack::CONTENT_TYPE => "text/plain", Rack::CONTENT_LENGTH => "9" }, [ "Not found" ] ]
  end
end
Nothing too fancy: if the asset is found, it is compiled and served.
The `@assembly` object is an instance of `Propshaft::Assembly`, the main object of the gem.
Here's its `compilers` method:
def compilers
  @compilers ||=
    Propshaft::Compilers.new(self).tap do |compilers|
      Array(config.compilers).each do |(mime_type, klass)|
        compilers.register mime_type, klass
      end
    end
end
`Propshaft::Compilers` is a class that holds a list of registered compilers and lets an asset be compiled by them when relevant (based on its MIME type).
At the time of writing, there are only two compilers:
- `css_asset_urls.rb`, to resolve asset URLs in CSS files (background images, etc.) and replace them with the fingerprinted version
- `source_mapping_urls.rb`, to do the exact same with source maps (`sourceMappingURL=xxx`)
That's the only processing that is done by propshaft, although it is built to support custom compilers.
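Custom compilers follow the same contract: they are instantiated with the assembly and respond to `compile(logical_path, input)`. As an illustration, here's a hypothetical one (invented for this article, not a real Propshaft compiler) that rewrites `url(...)` references using a supplied digest map:

```ruby
# A sketch of a Propshaft-style compiler: it receives the file's logical
# path and contents, and returns the transformed contents. Here we rewrite
# url(...) references in CSS to their fingerprinted versions.
class ToyAssetUrlCompiler
  def initialize(digests)
    @digests = digests # logical path => digest; normally resolved via the load path
  end

  def compile(logical_path, input)
    input.gsub(/url\(([^)]+)\)/) do
      name = Regexp.last_match(1)
      digest = @digests[name]
      ext = File.extname(name)
      base = File.basename(name, ext)
      digest ? "url(#{base}-#{digest}#{ext})" : "url(#{name})"
    end
  end
end

compiler = ToyAssetUrlCompiler.new("logo.png" => "abc123")
puts compiler.compile("application.css", "body { background: url(logo.png); }")
# => "body { background: url(logo-abc123.png); }"
```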
The `compile` method of the `Propshaft::Compilers` class is quite simple:
def compile(asset)
  if relevant_registrations = registrations[asset.content_type.to_s]
    asset.content.dup.tap do |input|
      relevant_registrations.each do |compiler|
        input.replace compiler.new(assembly).compile(asset.logical_path, input)
      end
    end
  else
    asset.content
  end
end
It looks up the relevant compilers for the asset and, if any are registered, calls their `compile` methods in turn, each returning the compiled content.
The same code path is used in production when running `assets:precompile`. The task is implemented in `lib/railties/assets.rake`.
It calls `Rails.application.assets.processor.process`, which writes the manifest and compiles the assets.
Importmaps
Importmaps are a relatively recent tool to control the way we import JS modules in the browser.
By specifying a mapping between a module name and a URL, we can import modules without having to specify the full path to the file.
For example, if we have an import map that looks like this:
<script type="importmap">
  {
    "imports": {
      "react": "https://ga.jspm.io/npm:react@17.0.1/index.js"
    }
  }
</script>
and a JS file that looks like this:
import React from "react"
The browser will know to fetch the module from the URL specified in the import map, allowing us to write code without having to specify the full path to the module.
Combined with some tooling such as importmap-rails, it allows us to use npm packages in the browser without having to use tools like yarn, npm, or even a bundler such as webpack, rollup, or esbuild.
The developer experience is quite nice as managing versions of packages is done through the import map, and the browser will fetch the correct version of the package automatically, without the need to change any code.
Usage
We can add libraries by using the CLI provided by the importmap-rails gem.
bin/importmap pin react
# or with a specific version
bin/importmap pin react@17.0.1
This will:
- add the react package (and its dependencies) to the import map, through the `config/importmap.rb` file
- download the files to the `vendor/javascript` directory, so they can be served by the asset pipeline
Then, in the layout, we can use the `javascript_importmap_tags` helper to generate the importmap script tag, as well as import the `application.js` entry point.
<%= javascript_importmap_tags %>
In the code
The importmap-rails gem is quite simple; most of the heavy lifting is done by the browser anyway!
It just adds a few rake tasks to manage the importmap and JS files, and a few helpers to generate the necessary tags in the layout.
First, let's have a look at what happens when you run `bin/importmap pin react`.
It is implemented as a Thor task in `lib/importmap/commands.rb`:
option :env, type: :string, aliases: :e, default: "production"
option :from, type: :string, aliases: :f, default: "jspm"
def pin(*packages)
  if imports = packager.import(*packages, env: options[:env], from: options[:from])
    imports.each do |package, url|
      puts %(Pinning "#{package}" to #{packager.vendor_path}/#{package}.js via download from #{url})
      packager.download(package, url)
      pin = packager.vendored_pin_for(package, url)

      if packager.packaged?(package)
        gsub_file("config/importmap.rb", /^pin "#{package}".*$/, pin, verbose: false)
      else
        append_to_file("config/importmap.rb", "#{pin}\n", verbose: false)
      end
    end
  else
    puts "Couldn't find any packages in #{packages.inspect} on #{options[:from]}"
  end
end

# ...

def packager
  @packager ||= Importmap::Packager.new
end
Looks simple enough: it will call the `import` method on the `packager` object, an instance of `Importmap::Packager`.
This returns a hash mapping the package names to their URLs, which are then used to generate the importmap entries and download the files.
Let's dig into it.
We'll start with the `import` method of `Importmap::Packager`.
class Importmap::Packager
  # ...
  self.endpoint = URI("https://api.jspm.io/generate")
  # ...

  def import(*packages, env: "production", from: "jspm")
    response = post_json({
      "install"      => Array(packages),
      "flattenScope" => true,
      "env"          => [ "browser", "module", env ],
      "provider"     => from.to_s,
    })

    case response.code
    when "200"        then extract_parsed_imports(response)
    when "404", "401" then nil
    else                   handle_failure_response(response)
    end
  end
  # ...

  def post_json(body)
    Net::HTTP.post(self.class.endpoint, body.to_json, "Content-Type" => "application/json")
  rescue => error
    raise HTTPError, "Unexpected transport error (#{error.class}: #{error.message})"
  end

  def extract_parsed_imports(response)
    JSON.parse(response.body).dig("map", "imports")
  end
end
`importmap-rails` uses the jspm.io Generator API to get the list of packages to be imported. Interestingly, it can return URLs on jspm (by default), but also on jsDelivr or unpkg.
It will return something like this:
{
  "dynamicDeps": [],
  "map": {
    "imports": {
      "react": "https://ga.jspm.io/npm:react@18.2.0/index.js"
    }
  },
  "staticDeps": [
    "https://ga.jspm.io/npm:react@18.2.0/index.js"
  ]
}
And `importmap-rails` will fetch the `imports` key from the `map` object in `extract_parsed_imports`.
Going back to the `pin` method, we can see that it will then call `download` on the `packager` object.
def download(package, url)
  ensure_vendor_directory_exists
  remove_existing_package_file(package)
  download_package_file(package, url)
end

# ...

def download_package_file(package, url)
  response = Net::HTTP.get_response(URI(url))

  if response.code == "200"
    save_vendored_package(package, response.body)
  else
    handle_failure_response(response)
  end
end
Gotta love those explicit method names!
This is pretty self-explanatory: it downloads the file from the URL and saves it in the `vendor/javascript` directory.
Finally, the `pin` method tries to update the `config/importmap.rb` file with the new entry, either updating the existing line if it exists (`packager.packaged?(package)`) or appending it to the end of the file.
`javascript_importmap_tags` handles most of the rest: it generates the importmap script tag from the `config/importmap.rb` file, which is a sort of glorified Ruby hash.
This method will also generate tags to preload module files (this is the default now), and to import the `application.js` entry point.
def javascript_importmap_tags(entry_point = "application", importmap: Rails.application.importmap)
  safe_join [
    javascript_inline_importmap_tag(importmap.to_json(resolver: self)),
    javascript_importmap_module_preload_tags(importmap),
    javascript_import_module_tag(entry_point)
  ], "\n"
end
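As for `config/importmap.rb` being a glorified Ruby hash: a minimal re-implementation of the `pin` DSL (a toy class invented here, far simpler than importmap-rails' real `Importmap::Map`) makes that concrete:

```ruby
require "json"

# A toy version of the config/importmap.rb DSL: `pin` collects entries,
# and the result serializes to the JSON the browser expects inside a
# <script type="importmap"> tag. The real gem also handles preloads,
# pin_all_from, and digested local paths.
class TinyImportmap
  def initialize
    @imports = {}
  end

  def pin(name, to:)
    @imports[name] = to
  end

  # Evaluate a config block in the instance, like the gem's drawing DSL.
  def draw(&block)
    instance_eval(&block)
    self
  end

  def to_json(*)
    JSON.generate({ "imports" => @imports })
  end
end

map = TinyImportmap.new.draw do
  pin "react", to: "https://ga.jspm.io/npm:react@18.2.0/index.js"
end

puts map.to_json
# => {"imports":{"react":"https://ga.jspm.io/npm:react@18.2.0/index.js"}}
```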
Conclusion
Over the years, Rails has evolved to adapt to the ever-changing landscape of web development, and the asset pipeline is no exception.
It has jumped through many hoops, and it's interesting to see how it has evolved and how it has been simplified over time. I really like the direction it's taking with Propshaft and the bundling gems, which I use professionally in production with great success.
Importmaps look very promising, and they are actually the default JS experience when you run `rails new`, but I feel the developer experience is not quite there yet, especially when it comes to managing package versions (no Dependabot support, no easy way to update everything).
I remember fighting with the asset pipeline many times over the years. Taking the time to understand how it works and reading some code helped me solve many issues, and I hope this article will help you too!