<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Viktoriia Lurie</title>
    <description>The latest articles on DEV Community by Viktoriia Lurie (@viktoriialurie).</description>
    <link>https://dev.to/viktoriialurie</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F993665%2F5f0321d8-88e0-4e71-84cd-ca8f7a424e02.png</url>
      <title>DEV Community: Viktoriia Lurie</title>
      <link>https://dev.to/viktoriialurie</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/viktoriialurie"/>
    <language>en</language>
    <item>
      <title>Module Federation v7 featuring Delegate Modules Part II</title>
      <dc:creator>Viktoriia Lurie</dc:creator>
      <pubDate>Thu, 13 Apr 2023 17:23:04 +0000</pubDate>
      <link>https://dev.to/valorsoftware/module-federation-v7-featuring-delegate-modules-part-ii-68</link>
      <guid>https://dev.to/valorsoftware/module-federation-v7-featuring-delegate-modules-part-ii-68</guid>
      <description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Viktoriia (Vika) Lurie is the product owner for Module Federation and works for Valor Software. Zackary Jackson is the creator of webpack Module Federation and a principal engineer at Lululemon.&lt;/p&gt;

&lt;p&gt;This interview is the second part of the Module Federation v7 featuring Delegate Modules interview released earlier. Read &lt;a href="https://valor-software.com/articles/module-federation-v7-featuring-delegate-modules"&gt;Part 1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="100%" height="166" src="https://w.soundcloud.com/player/?url=https://soundcloud.com/medusa-398299374/delegate-modules-audio&amp;amp;auto_play=false&amp;amp;color=%23000000&amp;amp;hide_related=false&amp;amp;show_comments=true&amp;amp;show_user=true&amp;amp;show_reposts=false&amp;amp;show_teaser=true"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;Magic comments&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
So, let's start our conversation by talking about magic comments and why it's significant that Rspack added support for them.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Yeah. So &lt;a href="https://webpack.js.org/api/module-methods/#magic-comments"&gt;magic comments&lt;/a&gt; are pretty much just a way to decorate an import, to give webpack hints about what it should do with it. You can get around magic comments and do that stuff via webpack rules, but sometimes that's a lot trickier. A magic comment lets you handle it on a case-by-case basis. There are a couple of them; for example, we can tell webpack what the chunk should be called. &lt;/p&gt;

&lt;p&gt;So with a dynamic import, we can define the name it should be generated as. We can tell it to do recursive imports, like: import this whole folder, import everything and chunk it all out, but don't bundle or chunk any .json files. That's done by changing the webpack context through the magic comments. And then there are other ones like webpackIgnore, which is probably the one I've used the most, for when I want to tell webpack: skip messing with this import, leave it as a vanilla ESM dynamic import, and the environment itself will handle it. So magic comments do quite a few bespoke webpack-y things. But the challenge we had originally with looking at this for SWC is that SWC didn't have anything to parse comments; a comment wasn't considered a valid part of the AST. ESTree, though, which is what Babel and webpack are based off of with Acorn, uses comments as metadata markers for additional things to perform on something. &lt;/p&gt;

&lt;p&gt;So anyway, it looks like they landed comment support for SWC, and that will unlock the whole magic comment thing, because originally that was the one limit. It was like: well, we could implement it, but we wouldn't be able to read any comments and act on them accordingly. Which isn't a huge deal, but lots of webpack users rely on magic comments to make webpack do more complex things on a chunk-by-chunk or import-by-import basis. So having that means there'll be a lot of feature parity in how certain things work, like when they depend on webpack importing the mega nav as, you know, mega-nav.js, not just 381.js or whatever name it would come up with. Preserving those kinds of capabilities in the parser itself is a really big bonus: not having to write everything as regular expressions or rules up front in the build, but being able to do it on the fly. It lets us do some interesting things, like creating a loader that generates imports with magic comments. &lt;/p&gt;
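&lt;p&gt;For reference, the comments described above look like this in practice, as webpack documents them (the file paths and the &lt;code&gt;locale&lt;/code&gt; variable here are illustrative):&lt;/p&gt;

```javascript
// Name the generated chunk instead of accepting a numeric id like 381.js.
import(/* webpackChunkName: "mega-nav" */ './mega-nav.js');

// Tell webpack to skip this import entirely: it stays a vanilla ESM
// dynamic import, and the runtime environment resolves it.
import(/* webpackIgnore: true */ 'https://cdn.example.com/widget.js');

// Chunk out a whole folder via the webpack context, but keep
// .json files out of the bundle.
import(
  /* webpackInclude: /\.js$/ */
  /* webpackExclude: /\.json$/ */
  `./locales/${locale}.js`
);
```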

&lt;p&gt;So you can get into meta-programming, because now I can say: okay, based on what you're doing, here's a fake file that does these imports, and change how you import it. And we can do that as we're printing out the require statements, not like having to go and reconfigure Rspack or something like that. So it offers this really nice capability of making generative adjustments to how the application is built while the application is in the middle of building. That's not a wide use case, but when you need it, you really need it. And I think another cool one that we do with magic comments is provided exports. Let's say you dynamically import icons, and it's going to make, you know, a 5000-file chunk, because you have 5000 icons. &lt;/p&gt;

&lt;p&gt;So usually I would go like import icons/wash-with-care, and then I just download the one icon. But if we go off the index file, I'm going to get everything, and if I'm doing a dynamic import, that means I'm going to split that file off and then pull what I need out of it. But how is webpack going to know what I need out of it after I've split it off? With a magic comment, and this is similar to our reverse tree-shaking ideas, you can pass a thing called provided exports, or used exports, and you can actually tell it: hey, I'm importing icons, but I'm only using three of them. So when you split this thing, tree-shake everything else out of this dynamic import except for the few exports that I actually use. That's really powerful for creating code-splittable, tree-shakable code in really advanced scenarios where you're trying to lazy load something that's usually a big library but you only want one thing out of it. So the magic comments for provided exports, or used exports, are super handy.&lt;/p&gt;
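&lt;p&gt;In webpack 5 this capability ships as the &lt;code&gt;webpackExports&lt;/code&gt; magic comment; a minimal sketch (the icon module path and export names are hypothetical):&lt;/p&gt;

```javascript
// Split the icon library into its own chunk, but tree-shake that
// chunk down to only the exports listed here.
const icons = await import(
  /* webpackChunkName: "icons" */
  /* webpackExports: ["WashWithCare", "Cart", "Search"] */
  './icons/index.js'
);
```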
&lt;h2&gt;Reference architecture updates&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie:&lt;br&gt;
That sounds really interesting and cool. But let's talk about delegate modules. The post we did about delegate modules got a lot of comments and a lot of interest. Can you share how you've updated your reference architecture?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Yeah, so I still have delegate modules filed as a beta capability of the Next.js federation plugin. Mostly I've just left it in there because our Next.js implementation is the most advanced we've got, so anything new that we want to do, we can do on the Next one quite easily, since we've created kind of our own monster inside of Next that lets us do anything we want. It's a really nice platform for reverse engineering Next.js at this point.&lt;/p&gt;

&lt;p&gt;So delegate modules fit in really easily there, because we already had a big, intelligent plugin on top of it to make Next play nicely. The idea with delegates is to extract this logic and move it into one of my other universal packages that isn't directly tied to the Next repo; I might put some of it in the Node package, or in the utils package, or something like that. And that'll give it to everybody. But yeah, the progress so far has been pretty good. We've been able to find a couple of bugs along the way and figure out how to implement these things just the right way everywhere. If you see examples starting with the word delegate in my examples folder, those are delegate module examples.&lt;/p&gt;

&lt;p&gt;So we've got a Next delegate, and we have a vanilla webpack delegate, which was the first one we did just to test the theory. And it worked, so we said: okay, cool, we'll make one more example with the delegate module, using Medusa, and all the vanilla ways. As for what delegates have given us so far in my examples, I think the best one is the Medusa plugin that we use in Federation. As of yesterday, I have a Next.js app deployed to Vercel: federated Next applications, where one of them is a shell and the rest are components or other pages. Now, with those delegate modules, I can go to any pull request I have open on Vercel, a different branch where my delegate module is implemented. Maybe I forked the branch and made, like, a blue version of the header to test it out. Then let's say I fork it again and make a red version of the header. Now I can go to the original, red, or blue versions of the header, and on any of them I can go into Medusa, change the version, make the blue one be red, or just be nothing, and I can change all the other pull requests the same way. So my open pull requests don't actually mean anything anymore; each is just a domain I can go and hit. If I make a change in Medusa, all three pull requests will show me the exact same code change, because I'm able to link it and say: use the header from pull request number two; even though you're currently green, use the one from red, pull it back in, and do that on the server or in the browser. Seeing that be manageable was a really, really big thing, because we've mostly only seen Medusa managing things up to the browser. Now it goes all the way into the server.
The server is now asking Medusa what to do, Medusa is telling it what to do, and then it comes back to life in the browser, without any hydration errors or warnings or anything like that. It is really, really impressive. And then on top of it, the delegate also has this concept of an override protocol. &lt;/p&gt;

&lt;p&gt;So this is very similar to what we wanted to do with Medusa by adding a Chrome extension, so that I can say: well, this is what Medusa is configured to, but when I go to production, I want to see the blue header, just for me, nobody else. We kind of implemented the poor man's version of it, where I wrote something that we pick up right at the beginning; I process it, update how webpack does something, call hot reload, and then it pushes me through to the updated site. So now Medusa is managing everything, and right above Medusa I have: if overrides exist in the buffer, read the override and find the current remote entry that the override is forcing. If it's, like, overrides home, and then here's the version and the hash or whatever, then just from a query string my browser will change the blue nav back to red or to green. And if I delete the query string and reload again, I'm getting the red one. I'm able to do that across any of the pull requests. So we now have Medusa, plus a way to override: before this thing asks Medusa, it'll ask some system that I have on top of it, and if that says, well, you're not doing anything with it, we'll go to Medusa for the main config. This is what's really powerful about delegate modules: we can keep adding layers above or below it, and Medusa can just be one of the calls. You know, I was speaking to somebody about security and compliance, and they were saying: well, if Medusa got hacked, couldn't somebody do a lot of damage, like changing your script URLs, since it's pointing to your source of truth? And I said: well, yes, but we've got several security layers kind of baked in here. We can also set policies inside the delegate module itself. So we could say: when you ask Medusa for something, check the domain Medusa gave back. Is the domain registered somewhere inside of your infrastructure?
Is the URL part of your company, with no rogue URL coming from some other location? We could have the delegate module be a safety check that reads what Medusa is about to give to webpack and validates whether that should be allowed or not. And if it's not allowed, we could always just short-circuit it again and say: okay, now fall back to the stable release. Maybe we have a bucket, like a stable channel, that we hard code, so we know whatever stable release we put up is lululemon.com/, you know, remote/stable/remoteEntry.js. So now I have three mechanisms available to me: I can override it on the fly; I can ask Medusa for it, and verify what Medusa is doing if I need any additional checks; and lastly, if both scenarios don't match the requirements, I have a third fallback on how to go and get something in there.&lt;/p&gt;
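&lt;p&gt;Stripped of the webpack wiring, those three mechanisms stack into a small resolver. Everything below, including the function names, the allow-list, and the stable-channel URL, is hypothetical glue code rather than the actual plugin API:&lt;/p&gt;

```javascript
// Hypothetical three-tier resolution for a remote entry URL:
// 1) a per-user override (e.g. parsed from a query string),
// 2) whatever Medusa says, validated against a domain allow-list,
// 3) a hard-coded stable channel as the last-resort fallback.
const ALLOWED_HOSTS = ['lululemon.com', 'vercel.app'];

function isAllowed(url) {
  const host = new URL(url).hostname;
  return ALLOWED_HOSTS.some((h) => host === h || host.endsWith('.' + h));
}

function resolveRemoteEntry({ override, medusaUrl, stableUrl }) {
  if (override) return override;                // layer 1: explicit override
  if (medusaUrl) {
    if (isAllowed(medusaUrl)) return medusaUrl; // layer 2: Medusa, policy-checked
  }
  return stableUrl;                             // layer 3: stable fallback
}
```

&lt;p&gt;A real delegate module would then hand the winning URL to webpack's runtime to load the remote container from.&lt;/p&gt;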

&lt;p&gt;But it's three completely different mechanisms for acquiring the connection interface to the two different webpack containers, so it just offers a ton of power. The way I see delegate modules is: with them, we could probably create our own meta-framework around Module Federation. That's how much power it gives you, because you've got middleware in there. If we want to do something like Next.js does, where every page loads data, we could probably wire a lot of that through the delegate module; if it needs to load data, we could attach that on. What webpack gets is an interface specific to any kind of side effect that we want to analyze or understand or respond to. So if we know, hey, the page coming in is going to be this type of data-fetching page, we could wrap the delegate module to return that kind of construct for fetching data, like getServerSideProps, which is something special in Next. It's really nice that we have that level of control. It makes me feel like delegate modules are Express middleware inside webpack's require function: in between asking for something and getting it back, you can do whatever you want with it, and then finally you feed it to webpack. So it's a ton of control compared to anything we've had before.&lt;/p&gt;
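&lt;p&gt;The Express-middleware analogy can be made concrete with a tiny compose helper. This is an illustrative sketch, not the delegate module API; the layer functions and URLs are invented:&lt;/p&gt;

```javascript
// Chain middleware around a module request the way Express chains
// handlers around an HTTP request: each layer can inspect or rewrite
// the request before the final handler produces the remote entry URL.
function compose(middlewares, finalHandler) {
  return middlewares.reduceRight(
    (next, mw) => (request) => mw(request, next),
    finalHandler
  );
}

// Hypothetical layers: logging, then an override check, then a default.
const loadRemote = compose(
  [
    (req, next) => {
      console.log('requesting', req.module); // observe the request
      return next(req);
    },
    (req, next) => (req.override ? req.override : next(req)),
  ],
  (req) => 'https://stable.example.com/' + req.module + '/remoteEntry.js'
);
```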

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie:&lt;br&gt;
Yeah, this one sounds really powerful.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
This is probably the biggest technical unlock since Federation was created. Of all the features it's got, this is probably the most powerful one made available, which is why I'm so excited about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie:&lt;br&gt;
Could you also use something like circuit breakers with delegate modules, switching a federated remote based on error percentage or latency?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
This was something I was speaking about recently, and it's also where I think Medusa could be useful. When we're talking about a lot of these capabilities, the one area that always gets blocked is: who ingests the information in order to respond to it? I can have a performance monitor, and that's great, and I can make it trigger something in my CI. But the problem I find is that whenever you do things in CI, it's very dumb. CI doesn't know much. We've made efforts to do things like static analysis for security, or linting, or other tools like that, but CI effectively doesn't understand what's happening; it's just going to do a job, and as long as it doesn't break doing that job, that's all it knows about. Performance monitoring, on the other hand, might know a little more in depth: here's the area, or here's where it's tagged as slow. But it doesn't actually know what to do with that. If it can only send me a very small piece of information, like "the header is slow," how do you translate that back in a big company with, like, 1000 repos that are created and destroyed all the time? How do you maintain that link, so you know that what they're talking about is actually this header over here? Delegate modules offer us the option to say: okay, we can retrieve some info to understand what our performance looks like and adjust accordingly. But somebody needs to be the adjuster. So if we use something like Medusa, where we start sending RUM information back to Medusa, Medusa could see: hey, the header was just released, and that slowed down only the sites that are using this newly pinned version of the header.&lt;/p&gt;

&lt;p&gt;So now we've reduced the scope: it's not "something slowed the site down," it's "this release just happened, and everybody who took it saw a similar increase in latency." So we already have a good understanding of what most likely caused it, and a good understanding of its impact radius. Now I could start reporting: hey, the navigation is having a performance problem, and it's currently impacting these four applications. If it's a critical problem, you could create rules, like a threshold for an alert: if it becomes X percent slower, Medusa sees a big change, pins it back to the previous version, and maybe does that as an A/B test. Set a cookie or something to track it, switch this user back to the mitigated mode under a different identifier, and make 10% of traffic get that mitigation response. Is mitigation mode improving performance, with no increase in errors? If yes, we could push that to all delegate modules, and now we've rolled back the site, but we were able to do it programmatically and validate what Medusa thought it was. It's almost a self-fulfilling validation: okay, let me tweak this, what did that do? Everything went well, let me roll it out. Oh, we rolled it out and suddenly see a problem; okay, undo that, and it's back to whatever it was. Either way, delegates give us these capabilities to dynamically change how things are done. In the browser, that could be rolling things back or rolling things forward. On the server side, I think it's a little more interesting, because if we look at edge workers with Netlify and Module Federation, we could measure what's cheaper.
Is it cheaper and faster to send a request to another edge worker to print out the header's HTML, and then have webpack do a federated import of the header, but with a delegate module that changes it to not download code, and instead fetch the HTML and return it as, like, a module exporting a string?&lt;/p&gt;
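&lt;p&gt;A circuit breaker of the shape described above is, at its core, a threshold check over recent telemetry. A hypothetical sketch; the metric names and limits are invented for illustration:&lt;/p&gt;

```javascript
// Decide which version of a remote to serve, based on the error rate
// and latency regression observed since a release. Trips back to the
// previously pinned version when either threshold is crossed.
function pickVersion({ current, previous, errorRate, latencyDeltaPct }, limits) {
  const tripped =
    errorRate > limits.maxErrorRate ||
    latencyDeltaPct > limits.maxLatencyDeltaPct;
  return tripped ? previous : current;
}
```

&lt;p&gt;A delegate module could evaluate a rule like this before handing webpack a remote entry URL, which is exactly the kind of layer the override protocol makes room for.&lt;/p&gt;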

&lt;p&gt;So now I'm importing a string that's actually the reply from another edge worker, and that becomes the output of all the work that other edge worker did to make my header. But if that's slow, if it takes 50 milliseconds to connect to the header worker and the header only takes two milliseconds to render, the system could self-optimize and say: well, we've seen that it's actually faster if we just pull the runtime down and run it on this one worker. So we'll do that, unless the worker comes under heavy strain, and then, on the next invocation, push it back out to another worker. Now we have an elastic computing system that can become a distributed parallel computing system, or fall back into more monolithic, in-memory patterns. Usually that's something you'd have to build a whole big framework around, and you'd have to deploy your application specifically for the limits of workers and things like that. With Federation and Node Federation on Netlify, you can kind of just deploy an app in whatever shape you want, and it will work. I can deploy this thing to Node.js, then say, okay, let me push this up to the Edge, and it would work just fine. I don't actually change how I wrote any code; it'll just know it's in the Edge network and that certain things need to be done a little differently. I didn't have to design and develop an Edge worker application; I just built the app and let the build tooling take care of making sure it runs wherever it's supposed to run. So it gives a ton of flexibility, even for things like: Edge workers are really good, but they're lightweight, so if you have a really heavy task that needs to be done, sometimes it's better to send that back to the Node Lambda.&lt;/p&gt;
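&lt;p&gt;That self-optimizing placement decision comes down to comparing measured costs per invocation. The thresholds and metric names below are purely illustrative:&lt;/p&gt;

```javascript
// Choose where to render a federated component: call out to a remote
// edge worker, or pull the runtime down and render it locally.
// The numbers would be measured per invocation and fed back in.
function choosePlacement({ remoteRoundTripMs, localRenderMs, localLoadPct }) {
  // Under heavy local strain, shed the work back out to another worker.
  if (localLoadPct > 80) return 'remote';
  // Otherwise run wherever the measured cost is lower.
  return localRenderMs > remoteRoundTripMs ? 'remote' : 'local';
}
```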

&lt;p&gt;So this gives us a kind of three-dimensional scaling, where we can scale horizontally across more workers, or contract down to fewer workers, or push the computing between Node.js and the Edge on the fly. Now you could have your slow Node server do the cold start and the one complex job it needs to do, and then, for the other 10 things it could do, say: well, those have been light in the past, let's send them out to 10 separate workers and process them all in one go, instead of sequentially doing one, two, three, four and then sending it back. That's one of the more out-there possibilities, but it's definitely something the design of this delegate system allows: stuff that was previously just not possible to make work, especially on an Edge layer. For us, it would just be one npm package wrapper, like a special delegate module called, say, the elastic compute delegate, and that thing is designed to know: okay, I can go here, I can go there, I can go wherever. How you use the component is similar to the normal Module Federation patterns we'd want, like how server components would be: you don't send it a bunch of data, you don't pass it context. Instead, you serialize a little bit of data and send it somewhere else, and it does the work. The little data I send is enough for it to understand what it's supposed to do, but it does its own heavy lifting, fetches its own data, and returns everything back, which is the component-level ownership model.&lt;/p&gt;

&lt;p&gt;So if you're already following that model to make distributed systems more reliable, there's a high chance you could start splitting work across different compute primitives as needed, and actually scale your workload up and down, because it follows a common construct. We're providing the glue code to let something like that happen, which would be very hard to do manually in any time-friendly way.&lt;/p&gt;
&lt;h2&gt;Medusa Demo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie:&lt;br&gt;
Thanks for sharing! All right, so now: demo. Speaking of making distributed systems more reliable, we haven't shared about this for a while. Let's do a quick demo of the new reference architecture and its configuration with delegate modules.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Zack Jackson:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Sure. So, um, heads up: I can't show my reference architecture right now, but I have a simpler app that's still working and lets me click around, so I can go through each of the important pages. Nothing super fancy, but it shows all the parts that we want working.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie: &lt;br&gt;
It works perfectly.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
So Medusa has undergone several drastic iterations of improvement. A lot of really good work has been done around the UX and the design of it. When this originally started, it was a very simple project. It worked, but it was more "here's a concept, proven achievable" than something you could run a real application off of. Since those early days, with the help of the Valor team, this thing has really exploded into a nice, first-class product. One of the big things I just saw is the new UML diagram. My old UML diagram was pretty flaky, but it mostly did the job. This one is much better laid out, and it offers quite a bit more room for improvement if we need to keep increasing the amount of data you can see in the UML view. You get better views and better interconnects; it's easier to see who connects to what. And in the future we'll see a lot more feature capability come out of the UI we've laid down here. I think the big question is: how do we build a UI that allows us to move forward without redesigning it five times over? You know, what are the most complicated use cases? Cool, those are far away; now let's just make things better. And it trends in a way that gives you more power over time.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/BpRdnt_6kkM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;So wherever UML is in here, we have our dependency table, which shows that shop is vending the pdp, the shop page, and a page map; checkout is vending, you know, the title, the checkout page, and its page map; and home is vending the navigation, the homepage, and its page map. In here we also see who is vending modules, so we can see everybody who shares: we're seeing all of these offer "this package is shared." It gives you a nice idea of what's available and what's required in various parts of the application.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/nMaIYlyTH3I"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We've also got our Node Graph here, which has come a long way as well. It's a lot more readable, and I love the sizing: nodes are scaled, so you better understand how big a remote is, or how many connections are made to one remote versus the others. If I want to see who uses this title component from home, I can click on it and see: okay, checkout depends on title. And I could see who uses shop: okay, shop is consuming shop as well, and it's also used in home and checkout. We could look at the product page and say: okay, the product page is used by shop. And looking at shop, we can see it's connected to nav, the page map, and the product page; it's connected to several parts of the application. You can also go in here and choose the node you're after, if you're trying to look a specific node up, so it's a lot easier to navigate as the systems get much larger. We can also look at the depth of the nodes, which is a really nice feature: how deep down do we go, and how many or few nodes do we want to display? Imagine you had, like, 1000 nodes in here; being able to filter the depth down would be useful, especially as nested remotes and things like that come along. We can also filter by direct connections. I'm not 100% sure if those are all wired up yet. Oh yeah, direction of connections: if I have that on, I can see which direction it's going. It's a little hard to see, but I have these arrows on here, so now I know who's consuming it and who's providing it. That's a big, useful thing to know: not just that these two are connected somehow, but who it is I need to go and update. Like, if I'm going to change nav, who do I need to go and update? Okay, I need to go and look at checkout, shop, and home, because nav is going to impact these three.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ReLPFx-IiYA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;And then we get into our Dependency graph, which is like the old Chord graph that we still have: just another way to visualize what's going on, to see all the interconnects overall and how everything spreads across and connects to our other dependencies.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hH_v3nWH68o"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;So once we've gone through these applications, if we go back to the UML, I can flick into home and go to the remote. Now I've got: what are the modules exposed, and where are they on the file system of this repo? If it requires anything additional, like a shared module or something like that, it usually gets listed here. I don't get React or some of the default Next.js things listed, because those are considered hardwired to Next; we just mark them as external, so the remotes don't even worry about negotiating React, because in order to live inside of Next, React has to be there. We don't really track those kinds of things in here. But if I were to add Lodash, it would pop up and say: hey, this thing requires Lodash, because that's the one shared vendor or outside package it depends on. We've also got everything that is shared and the versions that are shared out, so it's easy to understand who's on what. And we've got the direct dependencies, which is everything listed in your package.json, as well as who consumes it, so I can see: cool, this thing consumed shop, checkout, and title. Then up here, of course, I've got my version manager. I can go in here and choose between, you know, a timestamp; I also have the git commit hash; you could have a pull request number; or you could calculate a semantic version, like you would for an npm package. Those can all be listed here as what you're pinning to. And then the other thing is we also have version comparison, so you can see over time how this container has changed. If we upgrade React, I can see the date that happened. If I change what I'm sharing or consuming, I can see the date a new shared module was added, or when it started importing a new federated module. I mean, even here, you can see I was using 2.8.3
and now I'm using a 2.8 beta. And over here, you know, this dependency was 6.1 and now it's 6.2. So it's very useful to see when a change occurred in your dependency tree. In distributed systems, or even in a single repo, this kind of information is really hard to dig up. If I see a bug start occurring in production: well, what happened? Okay, a release went out, but did we only find the bug now, or did it actually start with that release? Normally you have to dig through the Git history and try to understand what might have happened. A view like this makes it a lot more digestible: go in here and see, okay, something's wrong, what recently changed in our supply chain? Okay, somebody updated some cookie utility right around this time. And imagine if this view had an API connection to Datadog or to Sentry, so under every release that gets cut, you could see the tags and error types that come along with it, or new errors that were never seen before and only showed up with this release. It helps you start correlating information. A lot of tools like Datadog aggregate so much about what's going on, but none of them natively understand how the application was built and how it's supposed to behave. Really only webpack has a deep understanding of that. So when you take these tools that don't know much and feed them into an ingest engine that understands the webpack part very well, we can start to draw conclusions: hey, this is likely this thing. It just adds a lot of new options.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/tW9Ff5jYanY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;That's been very hard to tame or control, even in a single-repo frontend. It's still hard to manage who's using what and where, even with npm packages: well, who's still using version one of the carousel? Because we want to remove it from the component library. Okay, now I have to do a search across 1,000 repos and hope that GitHub search is good enough. But if everybody was reporting to Medusa, I could just go into Medusa and say, okay, who uses this package everywhere? Cool, here's the exact file and line the import is on. And you instantly know this huge amount of information about your supply chain. There are two other really big things that have come along recently. One is we've got organizations. So now if you're a company, you can register an org, provide roles and permissions to your users under that org, and start to manage and scope it. Certain users might only have read access, and maybe you want only your AWS keys or something like that to have the write tokens to edit or change things. You've now got a policy in there, so it's not just a trust scenario, and you'd be able to scope out certain apps: hey, the retail group doesn't need to see the North American web app group.&lt;/p&gt;

&lt;p&gt;So we could have a Lululemon organization, but have two separate groups under there that each see everything about what they're doing, with no interconnect between them. Another idea, something we've thought about possibly doing in the future, is the ability to put policies around the apps, so you would have role-based access permissions. So now, if I'm in a bank, and accounting is trying to pull in a federated module that usually lives on the public frontend site, you could put security measures in place that say: you're not approved to consume that remote from this host; there's not allowed to be crosstalk here. That provides a layer of governance on top of something that's very hard to govern, because today I can just go and drop a script in anywhere or add a cookie, and once it's there, it's very hard to stop. You have to do something like, what is it, Content Security Policy, and that still only works at the domain level, or you have to build infrastructure to block it behind a reverse proxy. Meanwhile, if the glue code was driven by something like Medusa, all those rules could be applied right in the webpack runtime, and it would be much harder to circumvent webpack and reconnect something you're not supposed to, because Medusa is driving the whole graph and everybody's using it. So it adds a good layer of security, a good layer of separation, and multi-tenant users. It's a big feature I've always wanted in here for going after enterprise customers, where you often can't just do a single login: they're going to want it behind their SSO, and they're going to want an org-based way to revoke and grant access to users as they come and go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zackary Chapple:&lt;br&gt;
I would say, to dive into some details on what you're describing there: part of it is, as a developer creating a federated remote, you can specify this is just for EMEA, APAC, or internal, or this is just for external. Then other people can find stuff that's targeted for that. And when they do find something targeted for that which they can't use, they at least understand why, and have a way to reach out to those teams and communicate: okay, this is labeled as internal only, I need it for my external application, can we add it to be external?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Flexible Environment Management with Medusa
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Or can we go through an intake process, basically open a ServiceNow ticket saying: I request this federated module from a different director's umbrella. Now there's a governance process in place; you can't just hot-inject something. You still have that flexibility, but your team and governance know what's happening, which is really hard to do with npm packages. Either way, it's a really expensive problem to solve, managing code and permissions of who can do what: you'd have to set up your own custom npm registry, and those rules could be bypassed. And that's still deploy-based: if I want to approve something now, I have to update all the code bases. I can't just go to a central engine and say, yes, this authorized app wants to access this in these environments with this token, and only this token; no other read token is allowed to access it.&lt;/p&gt;

&lt;p&gt;So that does provide a lot of real maturity and flexibility, given the wide landscape of how different companies and compliance requirements come together. Then I think the last one, which is really great: for a long time, Medusa supported two environments, development and production, and they were kind of hard-coded into its database. That sounded good initially, because really you're either in dev mode or prod mode, but it gets a little trickier with staging servers and things like that. Maybe I want to control the staging environment, or I have about 15 different environments that are all hooked up to different backends or versions of, say, GraphQL APIs, and teams are testing a feature against a stage or preview or QA environment. So you might want to just say: okay, in Medusa, if you're this environment, here are your pin controls, here's how you're being managed. Then I can say, cool, bump stage to latest, now bump QA, or bump some other part of the application stack, not just development or production; now I can have multiple layers. The idea is that all the builds feed into the database, and a build isn't told that it's production or development in a hard manner. When you're sending the build to Medusa, you can say, yeah, this is intended for production, and if production's pin is latest, it'll grab this incoming one. But I could also say, well, this is a stage PR, and it would just show up tagged as stage; and I could still go into, say, production, see that release in there, and say, okay, use the one that's on stage, and connect them. Which again is really flexible: being able to add unlimited environments and change their lock files accordingly. It's very nice. You could create almost like a code freeze environment. So it's still production, but you just call a new one code freeze: as soon as we hit code freeze, this is the environment we're going for. It's the frozen one, which we know is solid and stable. And we can also set up another environment that's like a failsafe.&lt;/p&gt;
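To make the environment idea concrete, here is a small sketch of how per-environment pins and an environment alias (production pointing at a code-freeze pin set) might resolve. This is illustrative pseudo-registry logic; the names, URLs, and data shapes are made up, not Medusa's actual API:

```javascript
// Hypothetical per-environment pin sets; every name and URL is illustrative.
const environments = {
  production: { home: "https://cdn.example.com/home/2.8.3/remoteEntry.js" },
  stage: { home: "https://cdn.example.com/home/2.9.0-beta/remoteEntry.js" },
  codefreeze: { home: "https://cdn.example.com/home/2.8.1/remoteEntry.js" },
};

// An environment can be pointed at another one, which is how a freeze or
// rollback could be flipped in one central place, with no redeploys.
const aliases = { production: "codefreeze" };

function resolveRemote(env, remoteName) {
  const effective = aliases[env] || env;
  const pins = environments[effective] || {};
  return pins[remoteName] || null;
}
```

Under this sketch, unfreezing production is just deleting the alias entry; none of the consuming applications rebuild or redeploy.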

&lt;p&gt;So if in code freeze something goes wrong and we need to roll back, we could have a battle-tested QA backup configuration of the application. If we need to do anything in an emergency, we could just go to one place and say: okay, production, you're now going to read the frozen backup environment, and the next invocation will pick all of that up. I can go and swap those things out on the fly, reallocate what an environment is, or create another copy of an environment, apply changes to it, and point it to a different, more robust config. Which is really nice. And imagine if we had personal environments. What if I had, like, Zack's environment in here, and an override inside of my initial request: if I go to production with a use-Zack's-environment tag, then production does a one-off response with my environment's configuration for the entire app. So I don't have to go and tweak production to see what's going on, or override each remote individually; I could just say, hey, use my personal environment, execute my federation schema against some Lambda somewhere that's managed by Medusa. That's also very nice if you want a personalized thing. Say I'm working with four different teams on implementing the same feature, and we're all in separate repos: we could create a JIRA-ticket environment, and now, locally, in stage, wherever, you have every contributing party's code pulled together just for this feature, so you can all look at it and work on it easily. And you're all just pointing to an environment that you can remove later. It gives you a ton of flexibility to do things like that, or other things, where I could say, hey, use Zack's environment as the connection, and I can open a local tunnel on my machine, so your remote is actually my computer's local build serving to you over a tunnel.&lt;/p&gt;

&lt;p&gt;Now, if they're connected to my environment, I'm kind of acting as their remote, and I can edit things while they're working on my feature; we can work remotely in tandem, with our changes being pushed and pulled without Git. Every time they press save, I see the change show up when I refresh my page; I don't have to git pull or do anything. So that's also a really powerful potential impact: stuff like this could help change how we work and collaborate, especially in distributed systems, where there are usually many moving parts that need to come together. It gets very hard to scaffold how those moving parts come together just right for whatever the use case is, without creating a ton of infrastructure and manual work to recreate 10 services over here just so we can customize them. All I really want is to link 10 folders differently than usual; I don't actually need 10 servers available to do that. But traditionally, that's how we'd have to do it with ephemeral environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring the Potential of Medusa and Module Federation in Reducing Deployment Infrastructure Costs for Multiple Environments
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Viktoriia Lurie:&lt;br&gt;
And talking about multiple and unlimited environments, how much do you think Medusa and Module Federation can help to save on the deployment infrastructure?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Zack Jackson:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
So this has been a big one for me. I've personally been on this maybe-I'm-right, maybe-I'm-wrong kind of tangent. If you've ever read anything from Tyson: he worked with me on Aegis and Node Federation, actually. Tyson had a really good viewpoint when he started working with federated backends. He put it as: when you have something like Medusa and Module Federation together, the concept of CI starts to lose meaning. There isn't really CI anymore, it's just continuous delivery. Most of the whole build-and-deploy infrastructure is kind of eradicated under a system like this, because the whole reason these things get so complicated is that it's all based around uploading a zip file with everything it needs to this machine. If you need something else, you have to give it a new zip file, and that means you need lots of unique Lambdas, and so on, because they can only do one job at a time. But if we decouple the file system from the compute primitive, which is what federation does, then in theory a really large company could have all of its QA, all of its lower environments, be just one Lambda called stage. Every time you hit stage, it becomes a different codebase on the fly, just for you, and responds accordingly. I don't need ephemeral environments or anything, because stage doesn't have a file system that it's coupled to. The way I think of it is: imagine if you had everything on GitHub as a symlink folder.&lt;/p&gt;

&lt;p&gt;So then, all I'm doing is saying: okay, for this run, change what this folder links to, and go require the same thing. That's what webpack and federation give us: the ability to say the file system is anything, and we can change it whenever. And if that's true, we don't need hundreds of Lambdas and ephemeral environments and a big deploy system to manage, because fundamentally there's just not that much they need to do. I don't need an ephemeral environment, because the only reason I have one is that I need a different zip file. So you could just have two Lambdas, stage and production, and that would probably handle all of the development requirements for a team of 500. And it's just two Lambdas. That simplifies everything a ton in terms of maintenance, and offers companies things like the managed model. Similar to how Vercel does managed hosting, where you just connect the Git repo and don't have to think about much else, federation offers you a way to make your own managed service. Everybody wants, say, a Next.js SSR front end. But all they really want is to create a page; they don't actually want the whole Next app, maintaining it, having a Lambda and all the CI/CD. They just want the page and a little dev environment, and once that leaves their computer, as long as it runs, that's what everybody wants.&lt;/p&gt;
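The "one stage Lambda" idea can be sketched as a handler with no app code baked in: it just selects a federation config per request, say from a header, and the webpack runtime would do the rest. Everything here, the header name, tenants, and URLs, is illustrative, with an in-memory object standing in for a registry like Medusa:

```javascript
// Stand-in for a central registry like Medusa.
const registry = {
  "team-a": { shell: "https://cdn.example.com/team-a/remoteEntry.js" },
  "team-b": { shell: "https://cdn.example.com/team-b/remoteEntry.js" },
};

// One generic handler: the "file system" it serves is chosen per request,
// so a single Lambda can act as every lower environment on the fly.
function remotesForRequest(headers) {
  const tenant = headers["x-federation-env"] || "team-a";
  return registry[tenant] || registry["team-a"];
}
```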

&lt;p&gt;So these kinds of avenues allow you to offer that, where it's like: hey, you basically just create-react-app, upload some static assets, and that's the end of anything you do. There are just one or two servers in here that are actually real server Lambdas, and their only job is to do whatever webpack tells them to do per execution. If you have that kind of model, you don't need so much infrastructure; you eradicate it naturally, because there's just no longer a need for the problem that a lot of heavy, expensive infrastructure solves. Which is what I like the most about it, because I've always been frustrated: why is it so much work just to upload some JavaScript? If we think back to before server rendering and before Node.js, to WordPress and jQuery, it was super simple. You change something in PHP, you drag it to the server, you refresh the page, and it shows up right away, kind of like hot reloading; as soon as it's up there, you have it. There was no concept of a build or anything like that. It was real easy: you just FTP, and on the next invocation whatever you've done to the PHP is updated. And on the front-end side we had stuff like jQuery, where you could add a jQuery widget to the page. I feel like we probably made sites that couldn't scale forever, but you could create a pretty robust experience quite quickly, just because of how easy those pieces were. There were no builds, nothing complex was needed; it's just a couple lines of JS and there we go. I really liked that model, because it was so simple. It took a couple of minutes to upload a frontend, because it was just a folder inside of a PHP server and some jQuery widgets on a CDN. But we lost a lot of that when we moved over to built applications. So where I kind of see all of this is: hey, it brings us back to a simpler time, but allows us to keep using more advanced systems, and the operational expense doesn't have to keep bloating as the technology becomes more complicated. Seeing this simplify, with only two or three Lambdas, instead of focusing on scaling Lambdas and managing load balancers and Route 53 and all the other network stuff that comes with it, I could probably focus most of that effort on something like multi-region deployments.&lt;/p&gt;

&lt;p&gt;So instead of deploying everything to one or two availability zones, which gets tricky when you have 40, 50, 60 different code bases that all need to be deployed multi-region, just a lot of pieces to manage and a lot of network to repeat 60 times over, imagine if we only have one or two Lambdas: deploying multi-region is just changing the YAML, the GitLab or Terraform file, in one codebase. Now I can deploy this application across, you know, 50 availability zones in the US. I could scale it a whole lot faster, a whole lot further than you usually could, because there's not a big cost of change management anymore. It's managed: you make the change, everybody gets it, you don't have to ask anybody to go and do it. They just want to build their page or their feature, that's all they care about, and that's exactly what they get: a stable place to build the page. All the management pain is now in a centralized, more intelligent place. It just makes life easier. I can't imagine working on a non-federation-powered system after working with one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flexible Deployment Strategies with Edge, Node, and Container-Based Systems
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie:&lt;br&gt;
Make sense. And would you still need two Lambdas if you're using Netlify Edge?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack Jackson:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Possibly not. So I think when it comes down to the Edge, the only thing you've got to think about is what your application uses. Say you need to use fs, the Node file system API. That's a Node-only API, and edge workers are just V8, the JavaScript engine of Chrome. It's not actually Node itself, it's just the JavaScript engine, so it doesn't really know what a require is, or things like that. So it depends on what you're trying to do. In some cases it might be: hey, I need Node to handle these three or four pieces of workload, but maybe 70% of the app is just standard React components or something simple. Cool, only use Node for what's needed, and automatically propagate anything possible out to the Edge. And if you see that the Edge networks are getting slow to reply, consolidate it back onto the Node process.&lt;/p&gt;

&lt;p&gt;So now Node doesn't have to wait on a network call to the Edge; it's just in memory, and it can instantly do whatever it wants. Being able to flip back and forth as needed, capability by capability, is also a really big deal. If you have a more agnostic application, say, not something like Next.js, which has a lot of Node-specific implementations, but something like Remix, which is pretty agnostic about whether it runs on Node or Deno and so on, then with the federation capabilities on Netlify you don't really need an actual Node server, unless you need one for something where that makes sense. My default way of going would be similar to how I'm approaching Rspack: I'm going to start with Edge, and if the Edge hits its limit and I need to do this one thing, then I can just switch that part over to Node. I don't have to re-implement my entire system for Node; it could just be, okay, this won't work for me any further over here, so I drop it into a different spot and I'm still good to go. And I can still move things back and forth in the future. It keeps that interoperability there, so you can use the system best suited to whatever need you have. Say we used Edge, and we had Lambda for a couple of things, and let's imagine we also had Docker. Now we have EC2, persistent compute that's always online, always hot. We have Edge, super close to the user, but not extremely resource-powerful. And we have Lambda, which is kind of in between: cheaper, a little slower to start, but good for burst loads.&lt;/p&gt;
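A loader that has to run in both places usually starts with a runtime check like this, since fs and require exist in Node but not in a bare V8 edge worker. A minimal sketch; edge runtimes vary in which globals they expose, and the strategy names are just labels for illustration:

```javascript
// True when running under Node, where process.versions.node exists;
// a bare V8 edge isolate has no such global.
function isNodeRuntime() {
  return (
    typeof process !== "undefined" &&
    process.versions != null &&
    typeof process.versions.node === "string"
  );
}

// Pick a loading strategy per capability, as described above: Node can
// read remote entries off a file system or VPC, the edge is limited to fetch.
function pickLoaderStrategy() {
  return isNodeRuntime() ? "filesystem-or-http" : "fetch-only";
}
```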

&lt;p&gt;So now imagine we have something like a GraphQL endpoint, and we want to push GraphQL to the Edge. And we see we're actually not getting the level of caching or optimization that we want with GraphQL at the Edge, because there are too many invocations on different CPUs, so it can't build up an internal cache. So then you can say: okay, let's rather run that back on the containers, where they're always hot and can have a big in-memory cache of data. Through systems like this you could just say, okay, we'll send that over to the Docker container, and now Docker becomes GraphQL for me. And all my rendering, let's move that over to the Edge. And oh, this one little Lambda handler needs to do a couple of things and is a bit memory-heavy, so we'll put that on Lambda for now, and maybe if we optimize it in the future we'll send it back out to the Edge. Imagine doing that with almost like a UI, where you could just drag and drop bricks into a bucket: I want this remote to run here and that one to run there. You don't actually have to think about the networking and the wiring; it would be as simple as dragging the square onto the type of machine you want it to run on, and there you go. Or a possibly more upgraded version: we try to automatically figure out the best place to run this, and we learn from every successful execution. We can adjust how things get computed based on how it's working, and find the most optimized path that gives you the most performance. And if something changes in infrastructure, the system could immediately respond to that change, like an outage on AWS. We could say, okay, we'll move off Lambda to Edge. It might not be perfect, but we're just going to reallocate all the compute somewhere we know will run while AWS is having failures, which is quite nice. Usually that has to be done through multi-cloud, and it's all infrastructure-based, usually uploading zip files to several different places. But under this type of model, it's more just: here's a zombie computer, tell it what to do.&lt;/p&gt;

&lt;p&gt;So now all you care about is: what's the command I'm going to tell it to take care of at this point in time?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Viktoriia Lurie:&lt;br&gt;
All right, thanks for sharing. This was really super interesting and helpful.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>modulefederation</category>
      <category>medusa</category>
      <category>webpack</category>
    </item>
    <item>
      <title>Module Federation v7 featuring Delegate Modules</title>
      <dc:creator>Viktoriia Lurie</dc:creator>
      <pubDate>Thu, 16 Mar 2023 15:27:31 +0000</pubDate>
      <link>https://dev.to/valorsoftware/module-federation-v7-featuring-delegate-modules-45n</link>
      <guid>https://dev.to/valorsoftware/module-federation-v7-featuring-delegate-modules-45n</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Viktoriia (Vika) Lurie is the product owner for Module Federation and works for Valor Software. Zackary Jackson is the creator of webpack module federation and principal engineer at Lululemon. This interview is the first of hopefully many diving deeper into module federation to help the community better understand this rapidly growing and evolving technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika: &lt;br&gt;
Hello Zack! Welcome, I’m glad that we got the opportunity to have this conversation. I’d love for us to start by  talking about the upcoming release. You already shared some initial details on our community call. What would you like to add?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
Yeah, so module federation version 7 has a new main feature in beta and this is the use of delegate modules everywhere. &lt;/p&gt;

&lt;h2&gt;
  
  
  How did we get here?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
Could you explain what delegate modules are?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Delegate modules solve a challenge that has been in the module federation space since day one. This concept that everybody refers to as dynamic remotes. All the examples we currently have out around dynamic remotes are mostly about how you inject a script using the low level module federation API. &lt;/p&gt;

&lt;p&gt;When you do this you lose all the nice stuff that webpack has to offer, just so that you can programmatically inject a script. What I’ve found most engineers want is the ability to dynamically choose the right kind of “glue” code. &lt;/p&gt;

&lt;p&gt;What they are trying to achieve is something like: when a user clicks on a button, there is an import, based on a config the developer provided, that points to a remote application somewhere. I found that in most use cases, developers don't want fully dynamic remotes; they still want to be able to use “require” and “import from”. They really want to control the glue-code part: when webpack goes to request a remote, how it gets that container, and what methods it can use to retrieve it. &lt;/p&gt;

&lt;p&gt;The older implementation of this was achieved with the “promise new promise” syntax. The idea was that I could put in a giant string that webpack would take verbatim: webpack copies it in, and it'll do whatever that string says. &lt;/p&gt;
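For reference, the “promise new promise” style looks like this: the remote's value is a string of code that webpack inlines verbatim, and it must resolve to a container exposing get and init. A minimal example in the spirit of the webpack docs, where the remote entry URL and the window global are illustrative:

```javascript
// webpack.config.js
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      remotes: {
        // The whole string is taken verbatim into the bundle.
        app2: `promise new Promise(resolve => {
          const script = document.createElement('script');
          script.src = window.app2Url + '/remoteEntry.js';
          script.onload = () => {
            // window.app2 is the loaded container, with get() and init()
            resolve(window.app2);
          };
          document.head.appendChild(script);
        })`,
      },
    }),
  ],
};
```

As the interview notes, because this is an inlined string you cannot import helper libraries into it, which is exactly the limitation delegate modules remove.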

&lt;p&gt;The problem with that approach, though, was that it is not very scalable. It's great if you need to grab something off the window or make one API call. But if you're trying to use a library, or you want to do something like hook LaunchDarkly up to control decisioning, you couldn't, because you cannot directly import anything; in that case it's very brittle and restricting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diving into delegate modules
&lt;/h2&gt;

&lt;p&gt;Delegate modules allow us to just tell webpack that this remote entry is actually code inside the webpack build already. With delegate modules you can kind of make a framework out of it, because it can bundle all kinds of entry point logic. What you're exporting back is essentially a promise that resolves to a federated remote. &lt;/p&gt;

&lt;p&gt;If I want to use elastic file system (EFS) to get the remote entry on the server I can’t easily because by default the plugin only uses HTTP. While this is the easiest way to get a federated remote in the future I plan to add other bindings to read from the file system directly. The hope is to get this into version 7, but it will probably be like 7.1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
Reading from the file, can you dive into a little bit on the use case of that? I believe you mentioned before that it's for fallbacks but correct me if I'm wrong.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
One of the use cases is for fallbacks. Scenarios where something isn't there when we expect it to be. There's a couple of ways to handle it. If it's a React component, you could do an error boundary or dynamic import, then follow that with a catch. In that case, if it's offline, the application will throw an error and you would have to catch the error and handle it on an implementation by implementation basis to recover the federated remote. &lt;/p&gt;

&lt;p&gt;With the delegate modules, you can shim the module federation interface itself. What webpack gets back is a container, but the container’s functions are your own logic. So you can initialize it, however, you would normally initialize it. &lt;/p&gt;

&lt;p&gt;Then when you're calling the get property on it, you could say: if the get fails, look at what webpack is currently asking for. Say it's looking for the navigation remote, and navigation tried to get the mega nav and that failed. The catch could just be a dynamic import from node_modules: company name, slash, whatever the original request was, meganav. To webpack, it'll still look like it's retrieving a federated chunk. &lt;/p&gt;

&lt;p&gt;By doing this you've actually just redirected webpack to say: well, that didn't work, now go and get this other piece of code and return it in a federation-like way. Webpack doesn't know whether federation failed or not; you still just use that one import interface. With these changes you have a really robust set of middleware at the connection points between the webpack graphs, and a lot of control over the graphs and what happens. Fallbacks, yes, that is one very useful scenario. &lt;/p&gt;
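A sketch of that shimmed container: webpack receives an object with init and get, but get is our own logic with a fallback. The package name in the catch branch is illustrative, standing in for a locally installed copy of the same module:

```javascript
// Wrap a real federated container so that a failed get() falls back to a
// locally installed copy, while webpack still receives a module factory
// as if federation had succeeded.
function withLocalFallback(container) {
  return {
    init: (shareScope) => container.init(shareScope),
    get: (request) =>
      container.get(request).catch(() =>
        // e.g. request "./meganav" -> "@company/navigation/meganav"
        // (hypothetical package name for illustration)
        import(`@company/navigation/${request.replace("./", "")}`)
          .then((mod) => () => mod)
      ),
  };
}
```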

&lt;p&gt;The other big use case is on the server side. I might want to use HTTP to go and get a string off the VPC, evaluate that string inside of the VM, and then return it. However, that comes with some potential security issues. &lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative ways to fetch
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
So in that scenario you could be using the AWS SDK or even pulling that string from a database value, right?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
This is the beauty of it. A database is one of the potential options that I've spoken about a couple times with bigger organizations like BitDev. If we put entries in a database it would be super fast to query where the remote is. The entries themselves wouldn't be really that large. &lt;/p&gt;

&lt;p&gt;I think another really interesting aspect of using a database is from the security perspective. If you did use a database, you could have really strong user based access controls. If a host is not allowed to query a database or they don't have the roles and permissions needed they can't query this federated remote container back out. You could return a container that's allowed that has a similar interface, but it's not the admin one. The unauthenticated reply might still return a page, but it's a page that says you need to log in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
Does that also help with Edge Side Includes (ESI) and Key Value (KV) stuff on the edge that you talked about before?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
It can, because in places where delegates don't work natively, like CDNs that are not Netlify, what you can do is say: well, here's a delegate module, and when webpack requests a chunk, return your own remote entry, where all it's doing is fetching HTML and returning it as React components. &lt;/p&gt;

&lt;p&gt;On the edge network, you technically would have that ESI stitching layer, but it's the webpack runtime. It depends on when you render it; you have to be able to hit something that will render. It's not super automatic, but it's a whole lot less work when it comes to the implementation, because you only need a little infrastructure to do something with it. &lt;/p&gt;

&lt;p&gt;You wouldn’t have to build your application to run on the edge, which usually requires a very different kind of development look and feel when compared to a normal monolithic app. If you wanted to have some app run partially on node, and have part of it run on the edge, this would offer a more agnostic way for distributed systems to still work without you having to build your implementation top to bottom to be deployed to an edge worker. You could just say, I'm going to import this and this thing is going to live on an edge worker. &lt;/p&gt;

&lt;p&gt;This import then does the equivalent of markup stitching. Fetch the HTML, convert it into a little lightweight React component on the response and render that as if it was a React component.&lt;/p&gt;

&lt;p&gt;Another big concern has always been the security around fetch. What if you want to use your own fetch client, or you want cookies, bearer tokens or headers attached to the fetch request? It's currently very hard to offer that to the end user with the current module federation interface. With delegate modules, how the code gets to webpack is up to you; the only thing the delegate needs to do is resolve a remote entry container. &lt;/p&gt;

&lt;p&gt;In the browser, it's “window.remote”. On the server, it can be however you want to acquire that remote entry code. As long as you resolve back an executed remote, everything else is in your control. &lt;/p&gt;

&lt;p&gt;Another big use case I see for it is covering gaps like file system bindings, which I don't currently support in the plugin. With delegate modules, nobody has to wait for me or the team to build out that support. They don't have to think about how to differentiate between when to use HTTP and when to use something like Elastic File System. &lt;/p&gt;

&lt;p&gt;All they would need to do in the delegate module is something like fs.readFile, pointed at the remote they're asking for. Typically that's something like a mounted store, slash the team name, slash whatever version I'm after. From there I can just use vanilla require to get it. Another option is to use the utility based on the same one webpack uses for its async load target: fs.readFile, then vm.runInThisContext. That way, we can refresh the container whenever we want to, because there's no require cache that the container itself gets stuck in. It's reading a file and then passing it to a JavaScript VM. This is how webpack's async node target works today, which is also not really any different from how a standard webpack build works when you put it in async mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  What was wrong with promise new promise?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
Can you talk a little bit about how using delegate modules increases the reliability of the code versus the “promise new promise” syntax?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Reliability is a great topic to mention. Promise new promise is technically a sound option if you're only doing something simple. The problem is when you're sticking a bunch of code in a template string: there's no syntax highlighting, you can't use require, and you can't use anything that isn't already in transpiled form. The template string doesn't go through Babel or anything else. That also means I can't use ES6 in there, or optional chaining, which would be really helpful, or even async/await. It's also just brittle. Unless you make sure you're only putting simple ES5 in there, it gets a little tricky to manage. &lt;/p&gt;

&lt;p&gt;The bigger problem with a promise new promise template string is that you can't really test it; it's just a promise, so it's very hard to mock what it's going to do. When it's a file loaded with delegate modules, though, you can put a unit test on it. You can mock some environment for it to reach out to and confirm: hey, this thing resolves the mocked object that gets whatever was requested, or it returns a string saying "I'm the fallback". Then you know that when this import fails, the delegate module's failure mode is doing what we want it to do. &lt;/p&gt;

&lt;p&gt;The bottom line is it's testable, it has syntax highlighting, and it can be written in TypeScript, not just a string. At Lululemon our promise new promise is over 200 lines of code. That scale is where the problems start to come in. A lot of logic goes in there, because you can almost build a framework out of module federation now that you control the glue code. &lt;/p&gt;

&lt;p&gt;Webpack is your router, and how webpack gets to these chunks is basically up to you. So you can do a lot with delegate modules, from decisioning to permission-based access to failovers. Anything you'd really want to do, you could do without your developers having to learn another framework. They don't have to know how to inject the script and do all of that. It would be one file that one team owns. &lt;/p&gt;

&lt;p&gt;The idea is to extract this out to a more reliable location. Developers don't really need to know about it; it's more of a platform team thing. It offers that team as much control as possible over what's being fed to webpack and how it's going to work. The rest of the development team still just uses require or import from; their implementation doesn't change. Yet they now have one of the core concepts of dynamic remotes, which is: I know what I'm importing. &lt;/p&gt;

&lt;p&gt;It's not completely dynamic, where I don't even know that I want something like checkout, or what I want from checkout. This is the case where you know this is going to be a checkout page, and I want to import checkout/my-bag. That's a very common use case: we know the string of the thing we want to import, and we know the intent at a certain location. Often, though, teams can't control what remote gets loaded there, because it's usually hard coded. This is a very nice mix of static and dynamic: you still get to use import or require, but you also get to write the connection code between the webpack host and the incoming remote, and decide how that's all going to look.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic programming
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
You touched on something really interesting just now. You said dynamic, but what you described requires having an understanding of what you're importing. What about the folks who want it 100% dynamic, where you don't even know what you're importing; you just get a JSON from somewhere that gives you the remotes?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
We still haven't tested that fully in server-side environments, because I haven't had a good use for it. My general recommendation has been to try not to lean on the low-level API, just because there are some quirks to it. One of the issues we had with Next.js in the past was you'd always get an error like "can't initialize this external", and it would throw a warning in the browser. We could still make it initialize, but webpack wasn't able to start your remotes for you, so we had to put a proxy on top of the object so that when you tried to access it, we could initialize it at that point in time. &lt;/p&gt;

&lt;p&gt;What webpack wants is for all the remotes to get initialized up front. When you're doing the super dynamic remotes thing, webpack has no idea a federated module is on the way, so it can't prepare the thing ahead of time. The problem is that if you do a lot of daisy chaining of these super dynamic remotes, you can end up in a state where the first remote you initialized has less share scope than the last one initialized. This is because once you call init, webpack makes a copy of the share scope object and seals it. If you add more keys and more shared packages from other remotes later, webpack can't do that whole negotiation where it checks what the other remotes offer, shares all the packages, and everybody picks what we're going to use. You lose that, because it doesn't have the circular option to go around and check what everybody's got. It's going to initialize what it's got and share that. Then every time you tack something on, it initializes and seals it the same way. That's the main reason I try to avoid the fully dynamic option. &lt;/p&gt;
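&lt;p&gt;A toy illustration of that copy-and-seal behavior. This simulates the described effect with plain objects; it is not webpack's actual share scope code:&lt;/p&gt;

```javascript
// Each init gets a sealed snapshot of the share scope as it exists right
// now, so packages added by later remotes never reach earlier snapshots.
function initRemote(shareScope) {
  return Object.seal({ ...shareScope });
}

const scope = { react: '18.2.0' };
const firstRemote = initRemote(scope); // sees only react
scope.lodash = '4.17.21';              // a later remote adds a package
const lastRemote = initRemote(scope);  // sees react and lodash
```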

&lt;p&gt;We have other little low-level functions in there that let you do it, and developers and companies have used them in the past with minimal issues. Next.js used to work like this, so you know it's a viable option. It's just one where I prefer to say: if you can at least know what the import is, rather do that. But you can also hook in; we might need to adjust the tooling slightly, but we have all the low-level bits and pieces for you to access it. It's similar to saying window.remoteName.init and window.remoteName.get and manually calling things out of the interface yourself. We could do that server or client side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Slots and Zones
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
If you are doing SSR, the moment of the page request, you know all of the federated remotes for that user for that session. Does that resolve part of the issue?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
If you had a map of them, you could query, as the company: what are all the remotes this shell could use? Alright, there are 25 teams that work under this shell. We don't know when or where they're going to come from, but we know there are 25 teams. When the app starts, you could loop over all the remotes and call initialize on them, and just start initializing everybody. Initialization is almost separate from getting. &lt;/p&gt;

&lt;p&gt;Once initialization happens, you can dynamically flip between whatever you want programmatically, or flip between two different remotes on the fly. For ease of use, let's say we had a utility called getRemote: you give it a name, and it pulls the remote off the scope, whether that's window or the global scoping I have in node. getRemote with a name, cool, here's the container. Then you can initialize it or call a getter from it however you want. That offers you full programmatic control, while still ensuring we've initialized all potential things before we start trying to pull stuff out of them, so we're not fragmenting when new webpack runtimes are attached to the host. &lt;/p&gt;

&lt;p&gt;The other option that I really do like is this concept of building out slots. Since you can have a delegate module in there, what you're importing doesn't actually have to mean anything anymore. Imagine we just had a list of slots for remotes, zone one through zone fifty. Now you could say: this part of the header, I'm going to call that zone one, slot one. When I import zone one slot one, kind of like a template you'd have in a CMS, you could tell webpack that zone one slot one is assigned to the header team, and it's the mega nav. &lt;/p&gt;

&lt;p&gt;Now webpack is still using import from, and you have other static imports, but there are slots, and you're using this delegate module to assign meaning to those slots. Because it's aliased internally, webpack knows it's going to need this right away, since it's an import from, not a lazy import, so it can set up whatever it needs to. You can translate zone one to slot one. If we augment the little object we're sending back there, you can intercept it and go: okay, they just called the get method for slot one. &lt;/p&gt;

&lt;p&gt;That means I know the current remote is zone one. So if I'm in zone one, I'm the header, and slot one is going to be mega nav. You end up calling slot one, then getting ./mega-nav, and resolving and returning that container. It turns your site into a bunch of slots and zones with nothing assigned to them. Then, through a CMS or some kind of back end, you could assign meaning to every slot on the page. &lt;/p&gt;

&lt;p&gt;Imagine you were doing A/B testing; you have to create a zone the test gets injected into. If you don't know what's coming, you could just have a whole bunch of import zone statements, and if some of them don't exist, you resolve them to nothing. You could build out something where you say there are five possible things that could be here, so import zone one through five, slot one through five. &lt;/p&gt;

&lt;p&gt;If there are three on the page, these are the three teams we want in the first, second and third slots. The zone is just a unique alias so you can tell the bundler which remote you want. So there's remote one, remote two, remote three. Okay, which remote do you want? Remote one could be window.megaNav, but webpack doesn't care about its outer name; it only cares about the inner name we bound it to. That inner name is determined by creating a delegate module, so it's completely detached from what you're calling it internally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Circling back to version 7
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika: &lt;br&gt;
And all of this is going to become significantly easier with 7.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Yeah. With 7, I think this is the only thing that's really missing, because these are more advanced concepts. Usually, when you do need something like this, you're approaching the upper bounds of the standard API and you're looking for a bit more power. I think what will really help here is demonstrating some of the concepts, because that's where it's going to be harder to adopt: well, what can we do? It looks cool and it's really interesting, but we need to make sure the community fully realizes the scope of what's possible. Creating that zone-slotted example would be a really powerful one: hey, here's a map, a JSON file we just get off the network, and all I have is a bunch of very unspecific imports throughout this application. Webpack reads the JSON file to translate those nonspecific names into what should actually go there, like a schema. So you can basically say: here's a schema, I have a template that imports various things, and the schema now defines what they are. It's kind of like what you'd do in Contentful: you create the schema, it sends it down, and you have a loop that renders out the components according to the Contentful schema. But imagine having an import schema where your whole site is just zones of customization.&lt;/p&gt;

&lt;p&gt;I think one or two examples of using delegates in various ways will help a lot, just to understand: okay, this is a different way of using it that's not immediately obvious. If you see two or three wildly different scenarios, it would probably be enough to spark: I get what I can do, I get the extent to which I can change how I think about developing a system to be dynamic and respond to things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
Okay, and when do you think 7 will be ready to go live?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
So right now I'm busy working on the Medusa integration with version 6, so I'm leaving 7 in beta. Right now, if you want to use 7, there's a section on delegate modules in the readme that you just have to expand, and you can see how to do it. &lt;/p&gt;

&lt;p&gt;The code that's in there is essentially going to become what's in the plugin. I'm just going to call create delegate module. From there, whether you use the delegate module syntax or whatever syntax you're passing to the federation plugin, it's going to be reinterpreted into a delegate module. This is very similar to how we did it before, where your @ syntax was converted into promise new promise. &lt;/p&gt;
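&lt;p&gt;That reinterpretation step could be sketched like this. The "internal ..." request shape mirrors how delegate modules have been referenced in the beta, but treat the exact string format here, and the &lt;code&gt;toDelegateSyntax&lt;/code&gt; helper itself, as assumptions for illustration:&lt;/p&gt;

```javascript
// Rewrite familiar name@url remote config so each entry points at a
// delegate module instead of a generated promise-new-promise template.
function toDelegateSyntax(remotes, delegatePath) {
  const out = {};
  for (const [name, value] of Object.entries(remotes)) {
    const [globalName, url] = value.split('@');
    out[name] = `internal ${delegatePath}?remote=${globalName}@${url}`;
  }
  return out;
}
```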

&lt;p&gt;Instead of converting to promise new promise, we're going to convert it into something more robust. If you use the little delegate module creation function, my internal one won't get applied. Pretty much, either federation resolves what you pass to a generic delegate inside the webpack plugin, or it resolves to one that you point to, and it's yours. If it's yours, it'll still provide the underlying utility, importDelegatedModule, which allows you to just pass it a global and a URL and it will return a container. &lt;/p&gt;

&lt;p&gt;You can resolve that back to webpack; you don't necessarily have to think too much about what's going on. All the pieces are there for you. Timeline-wise, the beta is pretty much there; I don't think it's going to change in shape. I'm planning to roll that out sometime this month, if possible. Before that, I've turned some focus to Medusa: seeing if Medusa works with Next.js and updating the Medusa plugin. I'm verifying some wiring there. Medusa would use delegate modules anyway; it's built on the foundations of what's already there. &lt;/p&gt;

&lt;p&gt;That's my prioritization roadmap right now. I want to ensure the Medusa support is there, since we're putting all this together to actually work with it at Lululemon. Then, if everything is happy and it's all good, I'll probably make a few other slight adjustments to some of the default options. One that might be good to turn on is automatic async boundaries, so that pages dynamically import themselves and re-export themselves all the time, and you won't ever see an eager error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
That would solve a lot of issues that get reported.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
Yeah, async boundaries are currently behind a flag; if you just flip it to true, it works. I need to do a bit more work around the static analysis, because I have to understand what you're exporting. When it's evaluated, does it have the getInitialProps or getServerSideProps export? Is it a barrel export from somewhere else in a monorepo? Right now my loader just looks at the current page and checks for a string called getServerSideProps, getStaticProps or getInitialProps. If it sees that string in your file, it will manufacture the data loader, along with the dynamic import boundary that it wraps around it. &lt;/p&gt;

&lt;p&gt;It's quite important that if you're using it, you still have that word somewhere in the file, so that I can pick it up and stamp out an equivalent for it. That's the piece that still needs a bit of work. I would love to see it turned on by default, because that's the prime way you'd want to utilize it. It would follow the same rules we do everywhere else in webpack, where you start with an import of a bootstrap and everything else happens from there. &lt;/p&gt;

&lt;p&gt;Since the entry points in Next are its pages, this is what you'd want: each page is a dynamic import to the actual page you want it to render. Now everything you share is protected behind the fact that it's a dynamic import.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Vika:&lt;br&gt;
I think a lot of people could learn from this.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
It'd be great to get this kind of stuff documented down. I still haven't written anything about delegate modules, why they're cool, and what you can do with them. It would be nice to have something that just goes into a bit more depth on it. This interview helped get a lot of the information out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rspack and module federation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika: &lt;br&gt;
What’s next? Planning to work on Rspack support for Module Federation?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Zack:&lt;/strong&gt;&lt;/em&gt; &lt;br&gt;
With Rspack coming out as well: I mean, we already use webpack as our bundler for NPM packages. I've been looking at it, and I'm heavily considering using Rspack for everything that's not Next.js at Lululemon. For all of our NPM package builds, we can just use Rspack because it's going to be super fast, while we still have all the flexibility of webpack itself to build out these packages. There are a few bugs in webpack's ESM implementation that I've opened PRs for, but they've never been merged. &lt;/p&gt;

&lt;p&gt;If those bugs can be fixed in Rspack, that makes an even stronger case for using it. Once federation lands in Rspack, it's like: hey, it's way faster and it has federation. It's going to be the esbuild of the webpack era, the super quick thing that you default to, with some of the killer features that really offer scalability. I could also see federation support being one of the key things that helps it stand out from Turbopack, because we don't know if Turbopack is going to implement federation or not. If somebody wanted a module federation friendly Turbopack, Rspack is where you would go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Vika:&lt;br&gt;
It was really very interesting. Thank you!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Zack:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
It was great to chat about this. I'm super excited to see what Delegate Modules and Rspack end up unfolding for us. It's gonna be great. Have a great day, cheers!&lt;/p&gt;

</description>
      <category>modulefederation</category>
      <category>interview</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
