<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Emma Humphries</title>
    <description>The latest articles on DEV Community by Emma Humphries (@emceeaich).</description>
    <link>https://dev.to/emceeaich</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F162893%2F45704477-848b-4c7d-ac72-ef9efe1569ea.jpeg</url>
      <title>DEV Community: Emma Humphries</title>
      <link>https://dev.to/emceeaich</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/emceeaich"/>
    <language>en</language>
    <item>
      <title>React is a subsidy</title>
      <dc:creator>Emma Humphries</dc:creator>
      <pubDate>Sat, 12 Sep 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/emceeaich/react-is-a-subsidy-a0c</link>
      <guid>https://dev.to/emceeaich/react-is-a-subsidy-a0c</guid>
      <description>&lt;p&gt;After a week of job interviews over video, &lt;a href="https://blaseball.fandom.com/wiki/Landry_Violence#Incineration"&gt;while the sky was the color of Landry Violence&lt;/a&gt;, I decided to watch &lt;a href="https://youtu.be/e1L2WgXu2JY?t=30"&gt;Stuart Langridge’s GOTO; 2020 talk (YouTube) on JavaScript&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thirty seconds in, Langridge relates &lt;a href="https://twitter.com/zachleat/status/1169998370041208832"&gt;Zach Leatherman’s example of 8.5MB of tweets in static HTML rendering 1/5 of a second faster than a React site rendering a single tweet (Hellsite)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Reader, I howled, despite that being a bad idea after a week of dangerous air quality and six hours a day on video calls. Then &lt;a href="https://twitter.com/infinite_scream/status/1304905286164058122"&gt;I summoned The Infinite Scream (Hellsite)&lt;/a&gt; to do the howling for me while I wrote a blog post.&lt;/p&gt;

&lt;p&gt;I’ve been thinking about the costs of the JavaScript-first, and particularly the React-first, state of web development:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Users have to buy and use high-end devices (phones, tablets, and laptops) to access content&lt;/li&gt;
&lt;li&gt;Developers abandon the web for native applications

&lt;ul&gt;
&lt;li&gt;Which in turn demand rents (transaction fees)&lt;/li&gt;
&lt;li&gt;And concessions (non-political content, what content can be sold)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Orgs sticking with the web use tool chains with high overhead and requirements (every dev needs a high-end laptop and training in the React tool chain)&lt;/li&gt;
&lt;li&gt;Development jobs go to people who have the time and skills to use React and native frameworks instead of the open web&lt;/li&gt;
&lt;li&gt;JavaScript-first and native apps encourage privacy intrusive practices that siphon behavioral data, and reward getting a user “Hooked”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;JavaScript in general, and React in particular, is a tax on the Open Web which subsidizes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Device manufacturers&lt;/li&gt;
&lt;li&gt;App stores&lt;/li&gt;
&lt;li&gt;Surveillance capitalism&lt;/li&gt;
&lt;li&gt;Elite developers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;at the expense of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Users stuck on a device upgrade treadmill&lt;/li&gt;
&lt;li&gt;Projects which don’t fit the JavaScript-first economic model

&lt;ul&gt;
&lt;li&gt;Especially anti-racist, anti-policing, and anti-colonialist projects&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Creators who have to work under the precarity of large Social Media platforms&lt;/li&gt;
&lt;li&gt;Developers without access to tools and training for elite jobs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This subsidy will continue to harm all of us who were told that the Web was a boon for everyone.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>react</category>
      <category>openweb</category>
      <category>diversity</category>
    </item>
    <item>
      <title>Rediscovering APAs</title>
      <dc:creator>Emma Humphries</dc:creator>
      <pubDate>Fri, 11 Sep 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/emceeaich/rediscovering-apas-73g</link>
      <guid>https://dev.to/emceeaich/rediscovering-apas-73g</guid>
      <description>&lt;p&gt;&lt;a href="http://interconnected.org/home/2020/09/09/organizine"&gt;Matt Webb describes a fever dream of an idea for a website&lt;/a&gt; where a group of people agree to a publishing frequency, start ‘zines, and at the deadline the ‘zines are compiled and distributed to the membership.&lt;/p&gt;

&lt;p&gt;Those are APAs (amateur press associations), except instead of the web, members were cutting and pasting zines and making copies at Kinko’s (remember when Kinko’s wasn’t FedEx?) to mail to, or drop off at, the Organizing Editor’s (OE’s), who would host a compilation party where they would assemble everyone’s zines into an issue. You made as many copies of your zine as there were members in good standing, and a couple more for people who were on spec.&lt;/p&gt;

&lt;p&gt;Then the OE would mail copies to the out-of-town members (you contributed to a mailing fund for this), and you’d pick up your issue to take home and make notes in the margin for comments on everyone else’s zine.&lt;/p&gt;

&lt;p&gt;And yes, APAs influenced LiveJournal (and later Dreamwidth) culture.&lt;/p&gt;

&lt;p&gt;I was a member of the Madison SciFi fan APA, &lt;em&gt;The Turbo Charged Party Animal&lt;/em&gt;, for close to 10 years. &lt;a href="http://unionstreetdesign.com/portfolio.html"&gt;Jeanne Gomoll’s design portfolio has examples of her and her partner’s ‘zines for the APA over the years&lt;/a&gt;. My favorite memory was a multiple-issue narrative arc about Jeanne’s Diet Coke stash. Her friends may have bought her a room full of cases of Diet Coke for her birthday one year.&lt;/p&gt;

&lt;p&gt;Who wants to make an APA?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The virtue of boring</title>
      <dc:creator>Emma Humphries</dc:creator>
      <pubDate>Tue, 19 Nov 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/emceeaich/the-virtue-of-boring-2843</link>
      <guid>https://dev.to/emceeaich/the-virtue-of-boring-2843</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--avNHztV_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://upload.wikimedia.org/wikipedia/commons/thumb/e/ee/Everyday_african_urbanism_yasser_booley_61198.JPG/512px-Everyday_african_urbanism_yasser_booley_61198.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--avNHztV_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://upload.wikimedia.org/wikipedia/commons/thumb/e/ee/Everyday_african_urbanism_yasser_booley_61198.JPG/512px-Everyday_african_urbanism_yasser_booley_61198.JPG" alt="Everyday african urbanism yasser booley 61198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small&gt;Yasser Booley &lt;a href="https://creativecommons.org/licenses/by-sa/3.0"&gt;CC BY-SA 3.0&lt;/a&gt;, via &lt;a href="https://commons.wikimedia.org/wiki/File:Everyday_african_urbanism_yasser_booley_61198.JPG"&gt;Wikimedia Commons&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;

&lt;p&gt;At some point I’ll finish that essay on how toxic masculinity derailed the web, but &lt;a href="https://css-tricks.com/no-absolutely-not/"&gt;Robin Rendle’s essay on restraint in development&lt;/a&gt; made a few things clear.&lt;/p&gt;

&lt;p&gt;Rendle mentions diagnosing a performance problem that’s harming your users: they can’t load your site to find out whether they’ll be affected by power cuts, in part because of all the third-party scripts it loads:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;But fixing that problem? It requires going through each script, talking to the marketing department, finding out who owns what script, why they use it, what data is ultimately useful to the organization and what is not. Then, finally, you can delete the script. The solution to the problem is boring as dirt and trying to explain why the work is important—even vital—will get you nowhere in many organizations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is not exciting work. It’s not work that gets solved with a single heroic pull request, replacing the site’s templating with a new JavaScript library, or a whole new feature.&lt;/p&gt;

&lt;p&gt;Over the past few months &lt;a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1476564"&gt;we’ve been cleaning up old metadata in Bugzilla&lt;/a&gt;. It takes video calls, emails, and working shared documents with long runs of comments to get to a consensus to turn off a field. But all those fields removed are that much more cognitive overhead removed from people reporting, triaging, planning, fixing, and verifying bugs.&lt;/p&gt;

&lt;p&gt;It’s sorting the pantry so the cooking spices you use every day are always to hand when you need them. Hunting through the pantry, or through the show-bug page, gets in the way of getting work done. New cabinets and new UX might be engaging, but they’re expensive and not incremental.&lt;/p&gt;

</description>
      <category>development</category>
      <category>management</category>
      <category>process</category>
    </item>
    <item>
      <title>Extracting a list from a webpage</title>
      <dc:creator>Emma Humphries</dc:creator>
      <pubDate>Tue, 10 Sep 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/emceeaich/extracting-a-list-from-a-webpage-4ca0</link>
      <guid>https://dev.to/emceeaich/extracting-a-list-from-a-webpage-4ca0</guid>
      <description>&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;You have a webpage with a list of things: values, prices, emails, or links. And you want to copy that into a string you can use elsewhere, such as in a spreadsheet or a script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bs0jZyfK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://emmas.site/public/images/grid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bs0jZyfK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://emmas.site/public/images/grid.png" alt="Table of names"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There’s no API you can use to fetch these, but you know you can construct a CSS3 selector to match them all. So you can open the developer view of the page (a.k.a. F12) and use JavaScript on the console tab as your ‘API’.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extracting the list
&lt;/h2&gt;

&lt;p&gt;You look at the page in your browser’s inspector and the email addresses you want to pull out are coded as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;table&amp;gt;
&amp;lt;tr&amp;gt;
    …
    &amp;lt;td&amp;gt;&amp;lt;a class="email" href="mailto:a@b.tld"&amp;gt;a@b.tld&amp;lt;/a&amp;gt;&amp;lt;/td&amp;gt;
    …
&amp;lt;/tr&amp;gt;
…
    &amp;lt;td&amp;gt;&amp;lt;a class="email" href="mailto:e@m.tld"&amp;gt;e@m.tld&amp;lt;/a&amp;gt;&amp;lt;/td&amp;gt;
…
&amp;lt;/table&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The CSS3 selector is &lt;code&gt;'a.email'&lt;/code&gt;. That is, you want to pull every &lt;code&gt;A&lt;/code&gt; element with the class name &lt;code&gt;email&lt;/code&gt; out of the current page. And each of those &lt;code&gt;A&lt;/code&gt; elements has an &lt;code&gt;href&lt;/code&gt; of the form &lt;code&gt;mailto:name@example.tld&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So we’ll get the list and iterate over it, chopping up the &lt;code&gt;href&lt;/code&gt; values and turning them into a list.&lt;/p&gt;
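
&lt;p&gt;(The chopping is just &lt;code&gt;String.prototype.split&lt;/code&gt; on the colon; a quick sketch:)&lt;br&gt;
&lt;/p&gt;

```javascript
// A mailto: href splits on the colon into the scheme and the address.
const href = 'mailto:name@example.tld';
console.log(href.split(':')[1]); // name@example.tld
```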

&lt;p&gt;We open the JavaScript console on the page and run this one-liner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$('a.email') // &amp;lt;= $() is console shorthand for document.getElementsBySelector()
.map((el) =&amp;gt; { return el.href.split(':')[1]; })
.join('\n');
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But the browser reports an error here, because &lt;code&gt;$('a.mail')&lt;/code&gt; is a node list, not an array.&lt;/p&gt;

&lt;p&gt;You can use the static method &lt;code&gt;Array.from()&lt;/code&gt; to make that node list into an array.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Array.from($('a.email'))
.map((el) =&amp;gt; {
    return el.href.split(':')[1];
})
.join('\n')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you’ll get a list of email addresses, unsorted, and with duplicates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;e@m.tld
a@b.tld
c@d.tld
a@b.tld
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You could clean that up in a text editor but let’s go further.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleaning the list
&lt;/h2&gt;

&lt;p&gt;Sorting is simple.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Array.from($('a.email'))
.map((el) =&amp;gt; {
    return el.href.split(':')[1];
})
.sort()
.join('\n')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That doesn’t get rid of the duplicates.&lt;/p&gt;

&lt;p&gt;JavaScript supplies the &lt;code&gt;filter&lt;/code&gt; method, but the obvious way to use it means defining an accumulator on a separate line, so we don’t get a nice, context-minimal one-liner.&lt;/p&gt;
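
&lt;p&gt;(For the record, there is a &lt;code&gt;filter&lt;/code&gt;-only alternative, the index-of-first-occurrence trick, if you can accept a quadratic scan:)&lt;br&gt;
&lt;/p&gt;

```javascript
// Keep a value only when this is the first index at which it appears;
// later duplicates see an earlier indexOf and are dropped.
const emails = ['e@m.tld', 'a@b.tld', 'c@d.tld', 'a@b.tld'];
const unique = emails.filter((value, index, all) => all.indexOf(value) === index);
console.log(unique.join('\n')); // e@m.tld, a@b.tld, c@d.tld on separate lines
```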

&lt;p&gt;ES6 provides a new object, &lt;code&gt;Set&lt;/code&gt;. Sets don’t allow duplicate values. And it takes any &lt;em&gt;iterable&lt;/em&gt; type as an input.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new Set([1, 1, 2, 2, 3]) // =&amp;gt; Set(3) [1, 2, 3]
new Set('committee') // =&amp;gt; Set(6) ["c", "o", "m", "i", "t", "e"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So we can de-dupe the list using that, and turn it back into an array to sort and join it into a string.&lt;/p&gt;

&lt;p&gt;But what does Set use to de-dupe?&lt;/p&gt;

&lt;p&gt;It turns out that a &lt;code&gt;Set&lt;/code&gt; built directly from the node list doesn’t de-dupe at all: it keeps every element. This is because of &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set#Value_equality"&gt;how value equality works when creating the set&lt;/a&gt;: objects are compared by reference, and two distinct &lt;code&gt;A&lt;/code&gt; elements are never equal to each other, even when their &lt;code&gt;href&lt;/code&gt;s match.&lt;/p&gt;

&lt;p&gt;So you have to process the list into an array of strings before you turn it into a set.&lt;br&gt;
&lt;/p&gt;
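
&lt;p&gt;(A minimal illustration, using plain objects to stand in for DOM nodes: a &lt;code&gt;Set&lt;/code&gt; compares objects by reference but strings by value:)&lt;br&gt;
&lt;/p&gt;

```javascript
// Two distinct objects with identical contents are still two Set members;
// mapping to strings first is what lets Set collapse the duplicates.
const links = [{ href: 'mailto:a@b.tld' }, { href: 'mailto:a@b.tld' }];
console.log(new Set(links).size);                    // 2, compared by reference
console.log(new Set(links.map((l) => l.href)).size); // 1, strings compare by value
```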

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Array.from(new Set(Array.from($('a.email'))
.map((el) =&amp;gt; {
    return el.href.split(':')[1];
})));
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then you can sort the array of unique text values and join it into a string.&lt;/p&gt;

&lt;p&gt;The complete one-liner, formatted for legibility, is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Array.from(new Set(Array.from($('a.email'))
.map((el) =&amp;gt; {
    return el.href.split(':')[1];
})))
.sort()
.join('\n');
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Which will return:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a@b.tld
c@d.tld
e@m.tld
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



</description>
      <category>f12</category>
      <category>javascript</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Using SSH tunnels to deploy your site</title>
      <dc:creator>Emma Humphries</dc:creator>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/emceeaich/using-ssh-tunnels-to-deploy-your-site-106l</link>
      <guid>https://dev.to/emceeaich/using-ssh-tunnels-to-deploy-your-site-106l</guid>
      <description>&lt;p&gt;Note: &lt;em&gt;Updated on 2019-09-09&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Running a local service on the open Internet is still a thing, even in the age of the Cloud.&lt;/p&gt;

&lt;p&gt;For instance, I have a local set of dashboards using &lt;a href="https://github.com/mozilla/corsica"&gt;Corsica&lt;/a&gt; that I’d like to have access to from the road. You may want to show a client work in progress without having to bother with deploying to an external host. You may want to connect to a security camera you run attached to a Raspberry Pi. Or run a web application on a non-privileged port.&lt;/p&gt;

&lt;p&gt;For a long time, the way to do that would have been to set up port-forwarding on your router and use a dynamic DNS service. But port-forwarding requires router configuration, limits you to exposing one service per external port, and your ISP most likely blocks ports 80 and 443.&lt;/p&gt;

&lt;p&gt;With an SSH tunnel, you can expose your service to the internet without the hassle of router configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -R 80:localhost:8080 external.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Your service at &lt;code&gt;localhost:8080&lt;/code&gt; would be on the web at &lt;a href="http://external.example.com"&gt;http://external.example.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The host &lt;code&gt;external.example.com&lt;/code&gt; needs to be reachable from the public internet. You could host your endpoint yourself in the Cloud, but endpoints for SSH tunnels are available as a service from several providers.&lt;/p&gt;

&lt;p&gt;Here are my notes on two services I’ve tried.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pagekite
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pagekite.net/"&gt;Pagekite&lt;/a&gt; is a paid service. The company calls the tunnels ‘kites’ and meters bandwidth used by them.&lt;/p&gt;

&lt;p&gt;To use it, you install their &lt;a href="https://pagekite.net/pk/pagekite.py"&gt;script&lt;/a&gt; or &lt;a href="https://pagekite.net/pk/pagekite.py"&gt;a package&lt;/a&gt;, and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pagekite.py 8080 yourname.pagekite.me
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And your service will be available at &lt;a href="https://yourname.pagekite.me"&gt;https://yourname.pagekite.me&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you don’t have an account, the script will prompt you to set one up. Pagekite takes care of monitoring the tunnel for you. They provide documentation for &lt;a href="https://pagekite.net/wiki/Howto/GNULinux/ConfigureYourSystem/"&gt;starting up the service at boot using rc scripts&lt;/a&gt;. Note that you have to start your own service before you start the tunnel.&lt;/p&gt;

&lt;p&gt;If you only want to host a set of static files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pagekite.py /path/to/folder yourname.pagekite.me
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;They provide &lt;a href="https://pagekite.net/support/security/"&gt;built-in defenses for so-called ‘drive-by’ attacks&lt;/a&gt;, by blocking requests to &lt;code&gt;/wp-admin/&lt;/code&gt; and similar paths by default. You can also add passwords, or restrict access by IP address (useful for the IoT camera example.)&lt;/p&gt;

&lt;p&gt;For six hosts and 150,000 Mb of transfer, I pay 40 USD a year. They also provide a white label service for IoT device makers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serveo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://serveo.net/"&gt;Serveo’s&lt;/a&gt; port-forwarding service is free-as-in-beer. It’s offered by its creator, &lt;a href="https://twitter.com/trevordixon"&gt;@trevordixon&lt;/a&gt;. To use it for a service running on port 3000 run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -R 80:localhost:3000 serveo.net
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;which will return:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hi there
Forwarding HTTP traffic from https://randomname.serveo.net
Press g to start a GUI session and ctrl-c to quit.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice that the resulting URL is &lt;code&gt;https&lt;/code&gt;, so you don’t have to set up your own certificates.&lt;/p&gt;

&lt;p&gt;You can specify a subdomain in your ssh command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -R emma:80:localhost:8888 serveo.net
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If the subdomain &lt;code&gt;emma&lt;/code&gt; was available, you’d be reachable at &lt;code&gt;https://emma.serveo.net&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Using a tool like &lt;code&gt;autossh&lt;/code&gt; and startup scripts, you can make sure your tunnel survives restarts. Remember that you’ll also need to start the service you’re tunnelling. There’s a whole &lt;a href="https://www.everythingcli.org/ssh-tunnelling-for-fun-and-profit-autossh/"&gt;post on setting up &lt;code&gt;autossh&lt;/code&gt; with systemd&lt;/a&gt; which is useful here.&lt;/p&gt;
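
&lt;p&gt;(A sketch of what that looks like with &lt;code&gt;autossh&lt;/code&gt;, reusing the Serveo example above; &lt;code&gt;-M 0&lt;/code&gt; disables autossh’s monitoring port in favor of ssh’s own keepalives, and the subdomain and ports are placeholders:)&lt;br&gt;
&lt;/p&gt;

```shell
# Restart the tunnel automatically whenever the connection drops.
# -N: run no remote command, we only want the port forward.
# The ServerAlive options let ssh itself notice a dead connection.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R emma:80:localhost:8888 serveo.net
```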

&lt;p&gt;Serveo’s ease of use is also its flaw. Anyone can set up a forward with it, and the serveo.net certificate is a wildcard, so if a browser trusts your service behind Serveo, it trusts every other service forwarded through it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Services
&lt;/h2&gt;

&lt;p&gt;Chen Hui Jing has a post, written after the first publication of this article, on &lt;a href="https://dev.to/huijing/tunnelling-services-for-exposing-localhost-to-the-web-2in6"&gt;other free tunneling services&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;p&gt;A tunnel is only as good as its endpoint. You have to trust the endpoint with your service: if it’s compromised, then you have to consider your service compromised. Google has blocklisted these services before, when people acting with bad intent used them to distribute malware. A service which requires an account might not be &lt;em&gt;libre&lt;/em&gt;, but it will have some accountability.&lt;/p&gt;

</description>
      <category>indieweb</category>
      <category>sshtunnels</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Really, XML catalogs matter</title>
      <dc:creator>Emma Humphries</dc:creator>
      <pubDate>Fri, 16 Jan 2004 00:00:00 +0000</pubDate>
      <link>https://dev.to/emceeaich/really-xml-catalogs-matter-1gbf</link>
      <guid>https://dev.to/emceeaich/really-xml-catalogs-matter-1gbf</guid>
      <description>&lt;p&gt;Back in the early 2000's, I had a weblog, I had started it before the term &lt;em&gt;blog&lt;/em&gt; had been coined.&lt;/p&gt;

&lt;p&gt;This is a post, originally from January 16, 2004, about XML Catalogs, and figuring out what the right thing to test is. It was popular enough to be linked from PHP documentation for the DOMXML extension.&lt;/p&gt;

&lt;p&gt;I've updated links and corrected typos. I just realized that Marc's last name had been misspelled for many years!&lt;/p&gt;

&lt;p&gt;And some four years after this was originally posted, &lt;a href="http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dtd_traffic/"&gt;the W3C had noticed all the validation request traffic from all the XML applications not using catalogs or caches&lt;/a&gt; and did something about it.&lt;/p&gt;

&lt;p&gt;This post is a snapshot of an era. Remember when we were going to do &lt;em&gt;all the things&lt;/em&gt; in XML?&lt;/p&gt;

&lt;p&gt;— Emma&lt;/p&gt;




&lt;p&gt;This week I learned that XML Catalogs are important.&lt;/p&gt;

&lt;p&gt;This started when I updated &lt;a href="https://github.com/liyanage"&gt;Marc Liyanage's PHP binary for Mac OS X&lt;/a&gt; on my development machine.&lt;/p&gt;

&lt;p&gt;Pages went from taking milliseconds to over a minute to render. To say I was puzzled would be an understatement. I rolled back to an earlier version.&lt;/p&gt;

&lt;h4&gt;
  
  
  Looking for Clues
&lt;/h4&gt;

&lt;p&gt;Some initial testing on another machine determined that the slowdown was in the DOMXML extension to PHP, which exposes the Gnome XML and XSLT libraries as functions and objects.&lt;/p&gt;

&lt;p&gt;After searching Google, php.net, and xmlsoft.com, I sent an email to &lt;a href="https://blog.liip.ch/archive/author/chregu"&gt;Christian Stocker in Zurich&lt;/a&gt;. Christian works on the DOMXML extensions, and he might know of a bug.&lt;/p&gt;

&lt;p&gt;I had gotten it in my head that the problem lay in nesting XInclude statements. XInclude is a specification for including one XML document inside another. We use XInclude to keep content for one of our sites isolated to a well-formed, valid XHTML document that can be edited in BBEdit.&lt;/p&gt;

&lt;p&gt;A section of the intranet is described as an Atom feed, &lt;a href="http://web.archive.org/web/20031202023814/http://bitsko.slc.ut.us/blog/feed-data.html"&gt;and each article's contents included into the feed&lt;/a&gt;. The Atom feed is included in an &lt;a href="http://www.xmlpatterns.com/EnvelopeMain.shtml"&gt;envelope document&lt;/a&gt; that contains the rest of the XML needed to render any page in the section.&lt;/p&gt;

&lt;p&gt;I had jumped to the conclusion that somehow LibXML2 had changed and become inefficient at resolving nested XIncludes.&lt;/p&gt;

&lt;p&gt;Christian wrote back that there weren't any issues he knew of, but asked me to send a test case.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Wrong Test
&lt;/h4&gt;

&lt;p&gt;I devised this test case:&lt;/p&gt;

&lt;p&gt;foo.xml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;&amp;lt;foo xmlns:xi="http://www.w3.org/2001/XInclude"&amp;gt;&amp;lt;xi:include href="bar.xml" /&amp;gt;&amp;lt;/foo&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;bar.xml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;&amp;lt;bar xmlns:xi="http://www.w3.org/2001/XInclude"&amp;gt;&amp;lt;xi:include href="baz.xml" /&amp;gt;&amp;lt;/bar&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;baz.xml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;&amp;lt;baz&amp;gt;Content!&amp;lt;/baz&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When run with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php$dom = domxml\_open\_file ("foo.xml");$start1=gettimeofday(); $dom-&amp;gt;xinclude ();$end1=gettimeofday(); $totaltime1 = (float)($end1['sec'] - $start1['sec']) + ((float)($end1['usec'] - $start1['usec'])/1000000); echo "Time to handle includes: $totaltime1&amp;lt;br&amp;gt;"; echo $dom-&amp;gt;dump\_mem ();?&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That should return:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;&amp;lt;foo xmlns:xi="http://www.w3.org/2001/XInclude"&amp;gt;&amp;lt;bar xmlns:xi="http://www.w3.org/2001/XInclude"&amp;gt;&amp;lt;baz&amp;gt;Content!&amp;lt;/baz&amp;gt;&amp;lt;/bar&amp;gt;&amp;lt;/foo&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Which it did, but faster than I expected. It timed at less than a second instead of over a minute.&lt;/p&gt;

&lt;p&gt;I changed bar.xml to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;&amp;lt;bar xmlns:xi="http://www.w3.org/2001/XInclude"&amp;gt;&amp;lt;xi:include href="baz.html" /&amp;gt;&amp;lt;/bar&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and baz.html was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0"?&amp;gt;&amp;lt;!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"&amp;gt;&amp;lt;html xmlns="http://www.w3.org/1999/xhtml"&amp;gt;&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;Untitled&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;&amp;lt;body&amp;gt;&amp;lt;p&amp;gt;New document&amp;lt;/p&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Which did take several seconds as I thought it would.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Right Test
&lt;/h4&gt;

&lt;p&gt;That's where it dawned on me that, between the versions of the libraries PHP used, XInclude had started validating by default.&lt;/p&gt;

&lt;p&gt;The XHTML DTD URL: &lt;a href="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"&gt;http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd&lt;/a&gt; is a busy place. Try loading it and see. &lt;strong&gt;And I bet that URL gets loaded so much because a lot of people don't know their tool is calling over there every time it needs to load or validate something.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Commenting out the DTD declaration in baz.html and re-running the test brings back the earlier level of performance. However, I don't want to comment out the DTD references in my documents.&lt;/p&gt;

&lt;h4&gt;
  
  
  Going to Catalogs
&lt;/h4&gt;

&lt;p&gt;I wrote back to Christian asking if LibXML, as built for PHP, honored &lt;a href="http://www.cafeconleche.org/books/effectivexml/chapters/47.html"&gt;XML Catalog files&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With a catalog file, I can tell my validating processor to resolve any reference to "&lt;a href="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"&gt;http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd&lt;/a&gt;" as a local file. Catalogs can do more than that, but the local resolution of DTD files is important.&lt;/p&gt;

&lt;p&gt;Christian replied that by default, &lt;a href="http://xmlsoft.org/catalog.html"&gt;LibXML looks for a catalog at &lt;code&gt;/etc/xml/catalog&lt;/code&gt;&lt;/a&gt;. So I created a catalog there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0"?&amp;gt;&amp;lt;!DOCTYPE catalog PUBLIC "-//OASIS//DTD Entity Resolution XML Catalog V1.0//EN" "http://www.oasis-open.org/committees/entity/release/1.0/catalog.dtd"&amp;gt;&amp;lt;catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog"&amp;gt; &amp;lt;public publicId="-//W3C//DTD XHTML 1.0 Transitional//EN" uri="file:///etc/xml/xhtml/DTD/xhtml1-transitional.dtd" /&amp;gt;&amp;lt;/catalog&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I pointed "&lt;a href="http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"&gt;http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd&lt;/a&gt;" to a local directory Apache could read from, put a copy of the DTD files there, and tried the tests again. Not as fast as without validation, but certainly faster since it didn't have to go over the Internet to validate the included file.&lt;/p&gt;

&lt;p&gt;So there you go, catalog files, really important. I am chastened.&lt;/p&gt;

&lt;p&gt;Thanks to Christian for getting me pointed in the right direction on this.&lt;/p&gt;

&lt;p&gt;&lt;small&gt;Originally published on January 16, 2004 on whump.com (but nobody goes there anymore) and updated February 11, 2020, by ECH.&lt;/small&gt;&lt;/p&gt;

</description>
      <category>xml</category>
      <category>debugging</category>
      <category>php</category>
    </item>
  </channel>
</rss>
