<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexander Caro</title>
    <description>The latest articles on DEV Community by Alexander Caro (@alexcaro).</description>
    <link>https://dev.to/alexcaro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F375571%2Fa70ea5a6-196c-4851-a6f8-c986290dffba.jpeg</url>
      <title>DEV Community: Alexander Caro</title>
      <link>https://dev.to/alexcaro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexcaro"/>
    <language>en</language>
    <item>
      <title>The ACLU and EFF have lost the plot on TikTok</title>
      <dc:creator>Alexander Caro</dc:creator>
      <pubDate>Sat, 25 Mar 2023 12:14:39 +0000</pubDate>
      <link>https://dev.to/alexcaro/the-aclu-and-eff-have-lost-the-plot-on-tiktok-5gpl</link>
      <guid>https://dev.to/alexcaro/the-aclu-and-eff-have-lost-the-plot-on-tiktok-5gpl</guid>
      <description>&lt;p&gt;With discussions of a potential US government ban of TikTok increasing, civil libertarian organizations like the &lt;a href="https://twitter.com/ACLU/status/1630341121426505729?lang=en"&gt;ACLU&lt;/a&gt; and &lt;a href="https://www.eff.org/deeplinks/2023/03/government-hasnt-justified-tiktok-ban"&gt;EFF&lt;/a&gt;, &lt;a href="https://pen.org/letter-to-congress-opposing-federal-law-to-impose-a-ban-on-tiktok/"&gt;among others&lt;/a&gt;, have come to TikTok's defense. They have released impassioned defenses of the platform, arguing that a ban on TikTok would be an unconstitutional violation of the First Amendment's protections of free speech and expression by limiting the ability of Americans to express themselves.&lt;/p&gt;

&lt;p&gt;The ACLU's historic and principled defense of &lt;a href="https://www.aclu.org/issues/free-speech/rights-protesters/skokie-case-how-i-came-represent-free-speech-rights-nazis"&gt;deeply unpopular, offensive, and even dangerous speech&lt;/a&gt; and the EFF's "&lt;a href="https://www.eff.org/cyberspace-independence"&gt;A Declaration of the Independence of Cyberspace&lt;/a&gt;" manifesto have earned these organizations respect and cemented their reputations as principled defenders of free expression. However, rapidly evolving technologies and geopolitical circumstances lead to novel situations and uncertainty about how to interpret these deeply held principles.&lt;/p&gt;

&lt;p&gt;Take for instance the Berman Amendment, which the ACLU cites alongside the First Amendment in its defense of TikTok. As the ACLU &lt;a href="https://twitter.com/ACLU/status/1630341121426505729"&gt;argues&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[This ban] would also gut the Berman Amendment, which was passed in 1988 to protect our right to receive information regardless of what country that information was created in. It's what keeps the U.S. government from banning a book by a Chinese author simply because it's from China and in this case it means that you can't ban a social media app simply because the app is from a Chinese company.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ars Technica provides &lt;a href="https://arstechnica.com/tech-policy/2023/01/us-tiktok-ban-could-be-thwarted-by-import-law-enacted-in-the-1980s/"&gt;further details&lt;/a&gt; on the context in which the Berman Amendment was passed:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Back in 1977, Congress passed the International Emergency Economic Powers Act (IEEPA) to empower the president to impose sanctions on and oversee trade with hostile nations. The plan was to prevent average American citizens from assisting US enemies, but the law troubled publishers doing business with book authors and movie makers based in hostile nations. Those concerns led Congressman Howard Berman (D-Calif.) to propose an amendment in 1988, which passed, exempting "information and informational materials" from IEEPA and blocking presidents from regulating these materials. As technology evolved, in 1994, another IEEPA amendment specifically exempted electronic media, leading to today, when everything from a tweet to a TikTok would be free from presidential regulation under the so-called Berman amendments. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The nuance that the ACLU fails to grapple with is just how much the information and technology spaces have evolved from 1988 to 2023. In 1988, the concern was about ensuring Americans had the ability both to access information coming from and to publish information to hostile nations. A free society has little to fear from authors publishing books advocating for their country's political ideologies, whether it be Communism or nationalism, while the same cannot be said of the reverse. The intention of the marketplace of ideas is to allow even bad ideas to be openly argued for and, if found to be flawed, to be rejected.&lt;/p&gt;

&lt;p&gt;A ban on TikTok would violate neither the spirit nor the letter of the First Amendment or the principle of the marketplace of ideas. Such a ban would not be based on the content that Americans are viewing on the app, but rather based on the app's utility for the Chinese government in signals intelligence gathering. In other words, it would be a &lt;a href="https://www.mtsu.edu/first-amendment/article/1028/viewpoint-discrimination"&gt;viewpoint neutral ban&lt;/a&gt;. TikTok is not being considered for a ban because of its content, but rather because of its capabilities.&lt;/p&gt;

&lt;p&gt;TikTok is not like a book nor is it simply a repository of "cat videos". As the ACLU and EFF are well aware, the nature of modern apps is that they can harvest sensitive information on their users, including their contacts, location data, private messages, apps installed on their devices, and countless other pieces of sensitive Personally Identifiable Information. The bulk collection of such information is what made the &lt;a href="https://irp.fas.org/nsa/bulk-215.pdf"&gt;NSA's bulk telephony metadata&lt;/a&gt; collection so controversial: without even knowing the &lt;em&gt;contents&lt;/em&gt; of phone calls, simply knowing the metadata of calls (e.g., who you're calling, when you're calling, how often you're calling, the length of your calls, and where you're calling from) is extremely powerful. In the wrong hands, that information can be used for highly nefarious purposes.&lt;/p&gt;

&lt;p&gt;However, unlike with the NSA's program, there are no legal protections for Americans' data scooped up by TikTok or other foreign-owned apps. While Americans have legal protections against the collection of their data by US government organizations like the FBI or NSA, with limited exceptions for &lt;a href="https://www.intelligence.gov/foreign-intelligence-surveillance-act/1239-incidental-collection-in-a-targeted-intelligence-program"&gt;incidental data collection&lt;/a&gt;, foreign citizens generally have no such protections. Furthermore, once that data has been acquired, there is no means by which it can be clawed back. Once the US government has acquired data on foreign citizens, their governments cannot force the US to delete that data any more than the US government can force the Chinese government to delete data on Americans. Even if deletion could be compelled, there would be no way to verify that the foreign government had not simply kept a copy of the data.&lt;/p&gt;

&lt;p&gt;Consider for example China's hack of the &lt;a href="https://www.wired.com/story/china-equifax-anthem-marriott-opm-hacks-data/"&gt;Office of Personnel Management&lt;/a&gt;. The hack gave China access to the Central Personnel Data File, which includes "all personnel data for every federal employee, every federal retiree, and up to one million former federal employees". In response to this hack, the Director of National Intelligence, James Clapper, said, "You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don't think we'd hesitate for a minute." Additionally, thanks to the &lt;a href="https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data"&gt;leaks of the NSA's PRISM program&lt;/a&gt;, we know that American companies including Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple provided the US government with the contents of their users' communications. As &lt;a href="https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data"&gt;noted by the Guardian&lt;/a&gt;, "the 'home-field advantage' due to housing much of the internet's architecture" enabled NSA analysts to readily access a treasure trove of signals intelligence, with an ability to query the data of any user of these services.&lt;/p&gt;

&lt;p&gt;Likewise, the popularity of TikTok with Americans and others around the world gives the Chinese government a similar home-field advantage. If governments of liberal societies are willing and able to use their home-field advantage with digital technologies to acquire such signals intelligence, it is virtually guaranteed that the governments of illiberal, authoritarian societies are willing and able to do the same.&lt;/p&gt;

&lt;p&gt;Furthermore, there is the issue of reciprocity. The Chinese government has long banned the websites and apps of US Internet companies. Included in these blocks are websites such as Facebook, Google, Twitter, YouTube, and WhatsApp. As James Andrew Lewis of CSIS has &lt;a href="https://www.csis.org/analysis/ban-tiktok-again"&gt;noted&lt;/a&gt;, "The disparity in the treatment of Chinese and U.S. companies is part of a larger Chinese strategy to gain technological and commercial advantage—as when Chinese companies could do business in the United States, but U.S. companies were forced to take a Chinese partner or provide access to technology." This state of affairs resembles Popper's &lt;a href="https://en.wikipedia.org/wiki/Paradox_of_tolerance"&gt;Paradox of Tolerance&lt;/a&gt;, which states, "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them."&lt;/p&gt;

&lt;p&gt;The ACLU, EFF, and a number of others seem to have gotten caught up in this paradox, as evidenced by a &lt;a href="https://pen.org/letter-to-congress-opposing-federal-law-to-impose-a-ban-on-tiktok/"&gt;joint letter&lt;/a&gt; put out by a number of digital civil liberties organizations that argues, "a legislative ban on TikTok in the U.S. would set an alarming global precedent, lending legitimacy to authoritarian regimes in shutting down and excluding services they disfavor." This argument falls flat. Authoritarian regimes have already shown they are more than willing to ban services they disfavor, and democracies like India have already conducted nationwide bans of TikTok with few repercussions. Indian TikTok users simply moved to other platforms, like Instagram or the Indian-made TikTok alternative &lt;a href="https://www.bloomberg.com/news/articles/2022-04-13/josh-app-surges-in-india-as-tiktok-replacement"&gt;Josh&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One might argue that if the US bans TikTok based on national security concerns, China would also be able to legitimize its bans of US websites based on similar concerns. Where this argument fails is that China's bans include informational services like Wikipedia or the New York Times and privacy protecting services like DuckDuckGo and Signal. The blocks in China are so broad that even the architect of China's "Great Firewall" was &lt;a href="https://www.nytimes.com/2016/04/07/world/asia/china-internet-great-firewall-fang-binxing.html"&gt;forced to employ a VPN&lt;/a&gt; to complete a presentation at his alma mater. What this demonstrates is that China's decision to ban foreign websites is based on the fact that these sites provide users with the ability to freely access information and points of view contrary to the party line.&lt;/p&gt;

&lt;p&gt;This argument also suffers from a degree of whataboutism. It fails to recognize that in the US there is a clear separation between government and private companies. While the US government can request and incentivize private companies to participate in programs like PRISM, it cannot readily force them to, as is evidenced by &lt;a href="https://techcrunch.com/2013/06/17/apple-nsa/"&gt;Apple's resistance to joining PRISM until 2012&lt;/a&gt;, while Microsoft joined in 2007. On the other hand, consider the Chinese government's treatment of a number of Chinese tech giants like Jack Ma, many of whom the government has &lt;a href="https://www.bbc.com/news/business-64781986"&gt;simply disappeared&lt;/a&gt; for periods of time. Imagine the outrage if the US government one day extrajudicially abducted Bill Gates, Mark Zuckerberg, or Tim Cook until they acquiesced to the demands of the NSA. Such a scenario is completely unthinkable outside of authoritarian nations like China and &lt;a href="https://www.nytimes.com/2017/11/04/world/middleeast/saudi-arabia-waleed-bin-talal.html"&gt;Saudi Arabia&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition to the aforementioned arguments, there are a couple of less serious arguments that can be quickly dispensed with.&lt;/p&gt;

&lt;p&gt;There is the argument that none of this matters because TikTok is largely just a bunch of teenagers posting silly videos. What this argument fails to recognize is that these teenagers will one day be the leaders of the country. Allowing a foreign adversary to harvest the data of hundreds of millions of young Americans with little effort risks giving China the means to influence US persons in the future using data collected in their youth, such as embarrassing or sensitive messages, photos, and videos shared privately on the platform. Moreover, TikTok is a massive platform consisting of all sorts of people; it is no longer merely a platform of teenagers, even if they are a disproportionate share of its user base.&lt;/p&gt;

&lt;p&gt;Another spurious claim, &lt;a href="https://www.schneier.com/essays/archives/2023/02/why-the-u-s-should-not-ban-tiktok.html"&gt;made by Bruce Schneier&lt;/a&gt;, is that what TikTok is doing is little different than what Meta's Facebook and Instagram are doing. On the surface, this is true. Meta and TikTok's owner ByteDance are both companies trying to run profitable businesses. The primary difference is that TikTok provides a foreign adversary with the capability of accessing this data, while Meta does not, and the secondary difference is that, as was noted earlier, China long ago banned Facebook and Instagram themselves. Schneier and others have also argued that we should not be focusing on banning TikTok and instead focusing on shoring up US data privacy regulations. This is a false dilemma. Banning TikTok and shoring up US data privacy regulations can both be done.&lt;/p&gt;

&lt;p&gt;Yet another &lt;a href="https://www.vice.com/en/article/epv48n/banning-tiktok-is-unconstitutional-ludicrous-and-a-national-embarrassment"&gt;weak claim&lt;/a&gt; made is that there is no "proof that the Chinese government has used TikTok’s data for intelligence purposes." The Chinese government is neither dumb nor impatient, and it is well aware of the US intelligence community's capabilities. It is either accessing the data in a manner it is absolutely certain the US government is not capable of becoming aware of, or it is simply amassing as much data as possible for access at a later date. We do not have to wait for proof that the Chinese government has harvested the data of hundreds of millions of Americans to ban the app, and banning it at that point would not get that data back. We know the intentions and capabilities of the Chinese government. That alone is sufficient.&lt;/p&gt;

&lt;p&gt;Perhaps the most unserious claim, &lt;a href="https://www.nytimes.com/2023/03/22/nyregion/jamaal-bowman-tiktok.html"&gt;made by Rep. Jamaal Bowman&lt;/a&gt;, is that discussion of a ban of TikTok is a result of racial animus toward China. A social media company based in Taiwan, Japan, South Korea, Thailand, Malaysia, Germany, the UK, France, Italy, Brazil, Argentina, India, Sri Lanka, Nigeria, Ethiopia, or any other free society where companies are not beholden to totalitarian, hostile, single-party states would not be facing this kind of scrutiny. On the other hand, if Russia-based VK or Yandex achieved TikTok-levels of popularity, it is absolutely guaranteed there would be a similar discussion of banning such a platform among American policymakers.&lt;/p&gt;

&lt;p&gt;On the other hand, a serious point is the nature of &lt;em&gt;how&lt;/em&gt; TikTok should be banned. There are a number of ways this could be accomplished. All else being equal, it would be better to ban it in such a way that is less restrictive of what Americans are allowed to do with their own devices. That means Americans should not be banned from using TikTok on the web — we definitely do not want to create a "Great Firewall of America" — or installing it on their phones via methods like sideloading.&lt;/p&gt;

&lt;p&gt;America's Internet must remain totally free of censorship, as it has always been, and the threat posed by TikTok is insufficient to justify such a change. Instead, banning the app from the app stores in the US should be sufficient to achieve the desired goals without being unnecessarily restrictive. Those Americans who truly wish to access TikTok by jailbreaking their iPhones or sideloading the app to their Android devices would remain free to do so. Erecting this barrier to entry should be sufficient to achieve the desired goal, particularly since jailbreaking an iPhone is significantly more involved than sideloading an Android app, and iPhones are &lt;a href="https://www.axios.com/2021/10/14/teen-iphone-use-spending-habits"&gt;ubiquitous&lt;/a&gt; among American teens. If it is insufficient, then at that point the restrictions required to achieve the goal would be outweighed by their costs.&lt;/p&gt;

&lt;p&gt;Balancing the demands of national security against free expression is at times tricky. The incidental, warrantless collection of Americans' data by the NSA is one example of a difficult tension between ensuring national security and protecting domestic civil liberties. The case of TikTok, on the other hand, is not a tricky one. A ban on TikTok is viewpoint neutral and has minimal ramifications for free expression. Americans have plenty of other platforms on which they can post their cat videos that are not beholden to the whims of hostile, illiberal nations.&lt;/p&gt;

</description>
      <category>tiktok</category>
      <category>china</category>
      <category>firstamendment</category>
      <category>freespeech</category>
    </item>
    <item>
      <title>AI is Not a Legal Loophole</title>
      <dc:creator>Alexander Caro</dc:creator>
      <pubDate>Tue, 21 Feb 2023 08:39:54 +0000</pubDate>
      <link>https://dev.to/alexcaro/ai-is-not-a-legal-loophole-3coa</link>
      <guid>https://dev.to/alexcaro/ai-is-not-a-legal-loophole-3coa</guid>
      <description>&lt;p&gt;With the recent release of the open source Stable Diffusion software, the public now has access to a powerful AI tool for generating images using only text prompts. A number of artists have expressed frustration and anger at this new tool, as the models used to generate images was trained on millions of artists' copyrighted images without their knowledge or consent.&lt;/p&gt;

&lt;p&gt;The technology underlying Stable Diffusion, the latent diffusion model (LDM), was only published in 2022 by researchers at LMU Munich, and Stable Diffusion itself was released by Stability AI that same year. Yet the technology is already widely available for anyone to use, for any reason, free of charge. Anyone with an Internet connection can now run Stable Diffusion in the cloud or, if they have a reasonably modern computer, they can do so from the convenience of their personal computer. Given the power of this new technology and the extremely rapid pace at which it diffused to the public — Stability AI reports there are more than 10 million daily users and 1.5 million subscribers to their paid DreamStudio cloud service — it should come as no surprise that artists are feeling vulnerable.&lt;/p&gt;

&lt;p&gt;One such artist, Sarah Andersen, who creates the popular &lt;a href="https://sarahcandersen.com/" rel="noopener noreferrer"&gt;Sarah's Scribbles comic&lt;/a&gt;, &lt;a href="https://www.nytimes.com/2022/12/31/opinion/sarah-andersen-how-algorithim-took-my-work.html" rel="noopener noreferrer"&gt;recently wrote&lt;/a&gt; about this new technology in a New York Times op-ed. In the op-ed, she encapsulated well the grievances many artists hold about this novel technology:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication. When I checked the website &lt;a href="https://designtaxi.com/news/420442/New-Website%20Tells-You-If-Your-Images-Have-Been-Fed-To-Art-Generating-AI/" rel="noopener noreferrer"&gt;haveibeentrained.com&lt;/a&gt;, a site created to allow people to search LAION data sets, so much of my work was on there that it filled up my entire desktop screen.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While I empathize with the anger and frustration artists like Sarah are feeling, in my view, there are a few issues with how they are thinking about technologies like Stable Diffusion that I want to address. Firstly, let us consider the LAION data set Sarah is referring to here, the LAION-5B. It is a curated data set that LAION put together using data from Common Crawl. Much like Google or the Internet Archive, Common Crawl uses a technology called a web crawler that scours the Internet, moving from hyperlink to hyperlink, and downloading whatever it finds. The Common Crawl data has been used by a number of projects, including the other new AI technology making waves recently, Open AI's GPT-3.&lt;/p&gt;

&lt;p&gt;Given that the LAION-5B data set consists of nearly 5.8 billion images crawled from across the web, it should come as no more a surprise to anyone who publicly posts images online to find their work in that data set than it would be to find it in a Google search. Common Crawl &lt;a href="https://commoncrawl.org/big-picture/frequently-asked-questions/" rel="noopener noreferrer"&gt;supports&lt;/a&gt; the Robots Exclusion Protocol, and one step artists could take would be to tell CCBot not to crawl the pages where they post their content, if they are worried about it ending up in an AI data set. As Sarah's Scribbles is a wildly popular webcomic that has been running since 2011, and it is unlikely that she was aware of CCBot's scraping, it would be surprising if LAION contained only an "entire desktop screen" of her comics. I would in fact expect dozens of desktop screens' worth of Sarah's content to be included in that data set.&lt;/p&gt;

&lt;p&gt;However, the more important problem here is the issue of copyright. Intellectual property is an issue I am passionate about; it is the reason I spent several months working through Harvard Law School's challenging &lt;a href="http://copyx.org/harvard-law-school/" rel="noopener noreferrer"&gt;CopyrightX&lt;/a&gt; course, for example. The common sentiment among artists like Sarah is that this new technology exists thanks to a legal loophole in copyright law that did not anticipate tools that can generate art in the style of existing artists using nothing more than a text prompt.&lt;/p&gt;

&lt;p&gt;This way of thinking about Stable Diffusion and similar tools is, in my view, misguided. What Stable Diffusion does is provide novel technological affordances to its users by dramatically lowering the barrier to entry for creating artwork mimicking the style of existing artists. Prior to the advent of Stable Diffusion, you would yourself have to be quite skilled to successfully mimic the style of another artist, and even then it would take time and practice to do so successfully. Now any average Joe can do so.&lt;/p&gt;

&lt;p&gt;However, it is no more of a legal “loophole” to use Stable Diffusion to mimic the style of another artist than it is for a talented artist to do so. Artistic style is simply not copyrightable. Works created in the style of an existing artist are new works altogether, with copyright going to the person who used the tool to create the work, whether that tool be Stable Diffusion, Photoshop, or a pen and paper.&lt;/p&gt;

&lt;p&gt;Sarah touches on this point in her piece when she writes, "The way I draw is the complex culmination of my education, the comics I devoured as a child and the many small choices that make up the sum of my life." What Sarah neglects to consider is that Stable Diffusion's drawings are also informed by its education, i.e., the programming instructions provided to it by its developers, and the billions of images it has devoured. The artwork generated with Stable Diffusion need not be in the style of an existing artist, but it is certainly informed by the millions of pieces of art it has likewise devoured. Just as a skilled artist could mimic the style of another artist whose works they have previously absorbed, Stable Diffusion is capable of the same. However, the choice of whether to generate such works using artistic tools is still in the hands of the individual. What Stable Diffusion does is take the cognitive load of the requisite education and media consumption off of the human brain and move it to software; it democratizes a skill previously reserved for those who dedicated years to devouring comics and honing their talents.&lt;/p&gt;

&lt;p&gt;Whether it is ethical to mimic the style of an artist is separate from the question of whether it should be legal to do so. The law is not and cannot be the remedy to all problematic acts. There are other methods of regulating behavior. For example, when one artist mimicked the style of another artist and sold those works for a profit in the past, they risked a bad reputation in the art industry.&lt;/p&gt;

&lt;p&gt;Regulating such behavior was, however, easier when the skill barrier was much higher than simply entering a few words into a text box. The question we face now is how to mitigate the mimicking of artists' works now that this novel technology has made it possible for anyone to do so. It may simply no longer be possible to regulate using social pressures now that the genie is out of the bottle.&lt;/p&gt;

&lt;p&gt;Technological innovation will, for better or worse, continue to violate our cultural sensitivities. We should be wary of rushing to enact reactive legislation and regulations that enshrine our sensitivities into law and impede that innovation. To reduce the mistakes we make, we should be thinking proactively about the future capabilities of AI and other technologies and designing rules and regulations beforehand to meet these challenges, as Congress wisely did with the Genetic Information Nondiscrimination Act (GINA) in 2008. While it will not always be possible to anticipate future developments, we need to do better in this area so that we are having these debates on technology policy dispassionately and with the benefit of time on our side.&lt;/p&gt;

</description>
      <category>community</category>
      <category>forem</category>
      <category>gratitude</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Blocking user-unfriendly content</title>
      <dc:creator>Alexander Caro</dc:creator>
      <pubDate>Wed, 22 Jul 2020 09:25:14 +0000</pubDate>
      <link>https://dev.to/alexcaro/blocking-user-unfriendly-content-3j14</link>
      <guid>https://dev.to/alexcaro/blocking-user-unfriendly-content-3j14</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xKGOUMat--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/6mhcwxzj8khvl0ymzk5e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xKGOUMat--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/6mhcwxzj8khvl0ymzk5e.jpg" alt="Alt Text" width="640" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than following user-centric design principles, all too often, websites and apps are built using &lt;a href="https://screentimenetwork.org/apa?eType=EmailBlastContent&amp;amp;eId=5026ccf8-74e2-4f10-bc0e-d83dc030c894"&gt;persuasive design principles&lt;/a&gt; with the intention of being as addictive as possible. The companies that design apps are incentivized by profits, and profits are linked to clicks and time spent on their websites and in their apps. Despite some &lt;a href="https://wellbeing.google/our-commitment/"&gt;notable exceptions&lt;/a&gt;, companies generally do not stop to take the time to consider the impact design has on the wellbeing of their users, and we cannot expect them to.&lt;/p&gt;

&lt;p&gt;Instead of depending upon the goodwill of companies, users must take responsibility for their own digital wellbeing. With good reason, uBlock Origin is one of the most popular cross-browser extensions. The extension provides a simple method to help users in &lt;a href="https://github.com/gorhill/uBlock/blob/master/MANIFESTO.md"&gt;controlling their browser content&lt;/a&gt;. The blocking extension has proven so popular that uBlock Origin was the &lt;a href="https://blog.mozilla.org/addons/2020/02/06/ublock-origin-for-firefox-android-nightly/"&gt;first extension&lt;/a&gt; adapted for Mozilla’s reinvented Firefox Android app. While uBlock is excellent at its primary use — blocking ads — it is in fact intended to be used for blocking any annoying or otherwise undesired bits of websites. As uBlock itself puts it, “uBlock Origin is NOT an ‘ad blocker’: it is a wide-spectrum blocker – which happens to be able to function as a mere ‘ad blocker’.”&lt;/p&gt;

&lt;p&gt;To illustrate, consider LinkedIn’s news module. I can only speak for myself, but LinkedIn is not the first place that comes to mind when I think of news sources. I would rather use a web page dedicated to that purpose like Google News, or rather simply visit trusted sources directly instead of having an &lt;a href="https://en.wikipedia.org/wiki/Filter_bubble"&gt;algorithm curate my news for me&lt;/a&gt;. Instead of building this feature to help users looking for news, it seems more likely the news module was designed to make the site more addictive, to further the mission of endlessly increasing “engagement” metrics, and to provide another &lt;a href="https://news.linkedin.com/2017/introducing-trending-storylines-on-the-linkedin-feed"&gt;revenue stream through referrals&lt;/a&gt;. I do not blame them for that, as generating returns for their shareholders is their raison d'être, but I also am not interested in being engaged with news stories when I visit their site. I could just ignore the news module, but why do that when I can simply remove the anti-feature altogether?&lt;/p&gt;

&lt;p&gt;To accomplish this, first install uBlock Origin. I will be referring to uBlock Origin simply as uBlock; however, uBlock and uBlock Origin are &lt;a href="https://en.wikipedia.org/wiki/UBlock_Origin#History"&gt;technically different extensions&lt;/a&gt;, so be sure to install uBlock Origin.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Firefox Add-On&lt;/th&gt;
&lt;th&gt;Chrome Add-On&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FyRneMZ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://alex.caro.us/img/firefox-add-on.png" alt="Firefox add-on" width="172" height="60"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://chrome.google.com/webstore/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm?hl=en"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--91aYz1aY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://alex.caro.us/img/chrome-add-on.png" alt="Chrome add-on" width="172" height="60"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Versions of uBlock are also available for &lt;a href="https://github.com/gorhill/uBlock/blob/master/README.md#installation"&gt;other browsers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have installed the extension, click the uBlock icon in the top-right corner of your browser, then click the dashboard icon to open the dashboard. Navigate to the “My filters” tab and add the following rule:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;www.linkedin.com##.feed-shared-news-module&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The part before &lt;code&gt;##&lt;/code&gt; tells uBlock which site to apply the rule to, and the part after it tells uBlock which HTML element to hide using a &lt;a href="https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors"&gt;CSS selector&lt;/a&gt;. In this case, the selector matches all HTML elements whose “class” attribute contains &lt;code&gt;feed-shared-news-module&lt;/code&gt; (the leading dot is CSS class-selector syntax). Save the new rule by clicking the “Apply changes” button, then refresh your LinkedIn homepage. The news module should have disappeared!&lt;/p&gt;
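&lt;p&gt;The same &lt;code&gt;site##selector&lt;/code&gt; pattern generalizes to any cosmetic filter. For example (these selectors are purely illustrative; real class names vary by site and change over time):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;! Hide a hypothetical sidebar widget on example.com
example.com##.sidebar-widget

! Hide the third item of a hypothetical navigation list
example.com##ul.nav-list &amp;gt; li:nth-of-type(3)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Lines beginning with an exclamation mark are comments, which make your “My filters” list much easier to maintain as it grows.&lt;/p&gt;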

&lt;p&gt;You may also give this a try on other sites like Facebook. To get you started, here are two filters for Facebook (YMMV):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;www.facebook.com##div._2s24._cy6:nth-of-type(2) &amp;gt; div._4kny:nth-of-type(3)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;www.facebook.com###stream_pagelet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These block the notifications icon and the news feed, respectively. You do not need to know CSS selectors to block such elements, either: uBlock’s built-in element picker lets you rid yourself of minor annoyances and distractions. To use it, navigate to any website, click the uBlock extension icon, and then click the eyedropper icon. You will be presented with a crosshair cursor that lets you select an HTML element to block.&lt;/p&gt;

&lt;p&gt;Once you have selected your desired element, click it and the extension will suggest a CSS selector for it. If you are happy with the selector, click “Create” and uBlock will start blocking that element. You can remove the filter at any time by navigating back to the “My filters” tab in the dashboard. Note that if you want to get more creative about which elements you block, you will need to &lt;a href="https://css-tricks.com/almanac/selectors/"&gt;learn a bit about CSS selectors&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What do you think about designing with users' interests in mind? Is it up to the user to block undesirable content, or should designers and technologists play a role here, too?&lt;/p&gt;

</description>
      <category>socialmedia</category>
      <category>manipulation</category>
      <category>addiction</category>
      <category>design</category>
    </item>
    <item>
      <title>Say Goodbye to WWW</title>
      <dc:creator>Alexander Caro</dc:creator>
      <pubDate>Tue, 21 Jul 2020 09:08:06 +0000</pubDate>
      <link>https://dev.to/alexcaro/say-goodbye-to-www-2aem</link>
      <guid>https://dev.to/alexcaro/say-goodbye-to-www-2aem</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jAxe9X59--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/14xaitthvo0ld6usiq6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jAxe9X59--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/14xaitthvo0ld6usiq6x.png" alt="Alt Text" width="768" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google has recently taken two steps forward and one step back when it comes to domain security. On the one hand, Google has been pushing for increased adoption and normalisation of HTTPS. On the other, the company now strips “trivial” subdomains, hiding “www” and “m” (for mobile) from the URL bar and thereby presenting users with inaccurate information about the precise webpage they are visiting.&lt;/p&gt;

&lt;p&gt;Google’s HTTPS efforts have been largely laudable. To normalise HTTPS, among other things, Google now factors HTTPS support into its search rankings, supports the Let’s Encrypt initiative to provide HTTPS certificates readily and free of charge, and is gradually shifting toward explicitly marking websites that serve HTTP content as “not secure,” rather than marking websites that serve HTTPS as “secure”. Despite these advances in domain security, not all of the company’s efforts further the user interest.&lt;/p&gt;

&lt;p&gt;Unlike their HTTPS efforts, the decision to hide www and m subdomains is a clear setback for user security. While the www and m subdomains are frequently extraneous, they are not always so, and the browser should not assume them to be. Consider web hosts and social media sites that follow the convention of allowing users to register their own subdomains. While the www subdomain will usually be reserved, the m subdomain may not be, allowing a malicious actor to register &lt;a href="http://www.example.com"&gt;www.example.com&lt;/a&gt; or m.example.com and, as a result of this change, masquerade as the domain owner to phish users.&lt;/p&gt;

&lt;p&gt;Don’t get me wrong, I am no fan of the www subdomain. I am defending the presentation of www in the URL bar despite being an early supporter of the No-WWW movement. I learned about this approach when I was beginning the process of teaching myself web development back in 2003. As the now-defunct No-WWW website &lt;a href="https://web.archive.org/web/20030425083424/http://no-www.org:80/"&gt;put it&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By default, all popular Web browsers assume the HTTP protocol. In doing so, the software prepends the ‘http://’ onto the requested URL and automatically connect to the HTTP server on port 80. Why then do many servers require their websites to communicate through the www subdomain? Mail servers do not require you to send emails to &lt;a href="//mailto:recipient@mail.domain.com"&gt;recipient@mail.domain.com&lt;/a&gt;. Likewise, web servers should allow access to their pages through the main domain unless a particular subdomain is required.&lt;/p&gt;

&lt;p&gt;Succinctly, use of the www subdomain is redundant and time consuming to communicate. The internet, media, and society are all better off without it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Back then, I found this argument convincing, and I still do today. I think that in almost every case web developers should redirect traffic from the www subdomain to the domain itself. On the more extreme end, although I do not personally agree with this approach, some developers choose not to recognize the www subdomain at all. In No-WWW’s parlance, these are Class C domains, and attempts to visit &lt;a href="http://www.example.com"&gt;www.example.com&lt;/a&gt; on such sites will simply be met with an error message.&lt;/p&gt;

&lt;p&gt;Simply put, this approach is not particularly user-centric. Redirecting users, such as with an .htaccess rewrite, strikes me as the ideal approach. That said, there are some good justifications out there for avoiding the &lt;a href="https://dropwww.com/why"&gt;creation of extraneous subdomains&lt;/a&gt;, despite their popularity.&lt;/p&gt;
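&lt;p&gt;For those on Apache, such a redirect takes only a few lines of .htaccess (a minimal sketch, assuming mod_rewrite is enabled and HTTPS is configured; example.com stands in for your own domain):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Send any www request to the bare domain with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [R=301,L]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here &lt;code&gt;%1&lt;/code&gt; refers back to whatever the condition captured after “www.”, so the rule works for any domain the server answers for.&lt;/p&gt;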

&lt;p&gt;Regardless of my own aversion toward the www subdomain, the decision of how to present one’s website should be strictly left to the web developer and site owner. &lt;a href="https://www.yes-www.org"&gt;Those who prefer&lt;/a&gt; to redirect users from example.com to &lt;a href="http://www.example.com"&gt;www.example.com&lt;/a&gt; should feel free to do so and have the URL bar properly reflect the site’s URL. Furthermore, unless the DNS records of the domain and the www sub-domain are precisely the same, the browser is not justified in making assumptions about their content.&lt;/p&gt;

&lt;p&gt;In making such assumptions, the browser adds an extra layer of confusion and insecurity by misleading users about precisely which web page they are currently visiting. With phishers using increasingly advanced techniques to scam unwitting users, hiding valuable information from the URL bar can prove dangerous. With this change, a subdomain like &lt;a href="http://www.m.www.example.com"&gt;www.m.www.example.com&lt;/a&gt; simply appears as example.com or &lt;a href="http://www.hello.m.example.com"&gt;www.hello.m.example.com&lt;/a&gt; appears as hello.example.com, providing potential avenues for phishing unsuspecting users.&lt;/p&gt;

&lt;p&gt;To prevent such situations from arising and to do right by its users, Google should not simply copy Apple, which follows a similar practice in its Safari browser. As we have seen with the removal of the headphone jack from high-end smartphones, blindly copying Apple is not always the best route. Instead, it is helpful to take a lesson from Google’s own approach to HTTPS. There, Google strikes the ideal balance, fully and accurately informing the user while also nudging developers to modify their practices. By applying similar methods to the www and m subdomains, Google can likewise nudge developers down a desired path without presenting users with misleading, and potentially dangerous, information.&lt;/p&gt;

&lt;p&gt;What do you think? Is Google doing the right thing by simplifying the user interface? Or is it going too far in making changes that are likely to confuse users and provide malicious actors yet another avenue of attack?&lt;/p&gt;

</description>
      <category>google</category>
      <category>userinterface</category>
      <category>www</category>
      <category>phishing</category>
    </item>
  </channel>
</rss>
