Microsoft played a significant role in the damage caused by the WannaCry ransomware. Certainly the proximate cause lies with the malware's authors, and they should be held accountable. The NSA is also culpable for creating, or at least discovering, the exploit yet failing to report it. We can even say that users must share part of the blame for not keeping their systems up-to-date. But in no uncertain terms, it is the design of Microsoft's Windows operating system that allowed the attack to happen.
Remote code execution
WannaCry uses an exploit in the SMB (file server) subsystem of Windows. It executes arbitrary code and takes control of the machine. This raises a vital question:
Why is a component that is responsible for file sharing capable of taking over the machine?
If we look at Windows' architecture, it isn't hard to explain why this is possible. These subsystems are treated as privileged users and given extensive access to the computer. This is the core of the problem: a lack of privilege separation and an assumption that components are well-behaved.
If we had no alternative to this design we could cut Microsoft some slack. But we do have ways to mitigate such attacks, and it appears Microsoft has chosen not to implement them. Thus they must bear a significant part of the responsibility for the WannaCry ransomware.
Windows 10 does not appear to have been hit by the malware. If this is due to actual architectural changes, of the kind I describe here, then great! That's a solid reason to upgrade. But it's not clear that is the case; the security bulletin indicates they patched remote code execution on Windows 10 as well.
Injecting code
Let's assume for a moment that all software has defects, ones that would allow an attacker to compromise security. Given our known history this isn't a bad assumption to make. Yet we continue to ignore this while writing software. We are still coding as though the system is impenetrable, which is a terrible practice.
We need to be defensive. Obviously the first line of defense is safer coding and execution: buffer protection, safe types, address randomization, etc. There's lots of work in this direction, but it isn't perfect, so we have to assume we'll continue to fail here.
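To make that first line of defense concrete, here is a minimal C sketch (a hypothetical server function, not taken from any real codebase) of the kind of defect these mitigations target, along with the GCC/Clang flags that enable two of them:

```c
#include <stdio.h>
#include <string.h>

/* A classic stack-smashing defect: `request` may be longer than
 * `buf`, letting an attacker overwrite the return address. */
void handle_request(const char *request) {
    char buf[64];
    strcpy(buf, request);   /* unchecked copy -- the defect */
    printf("handling: %s\n", buf);
}

int main(void) {
    handle_request("GET /index.html");   /* benign input */
    return 0;
}

/* Hardening doesn't remove the bug; it degrades exploitation into
 * a crash. With GCC or Clang:
 *
 *   cc -fstack-protector-strong -fPIE -pie -o server server.c
 *
 * -fstack-protector-strong inserts a stack canary that aborts the
 * process when the buffer overflows toward the return address, and
 * -fPIE/-pie lets the loader randomize the program's addresses
 * (ASLR) so an attacker can't reliably predict where to jump. */
```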
The second line of defense is not allowing an attacker to run arbitrary code. It sounds so obvious, so why isn't it done? The WannaCry attacker injected their own code via the SMB system.
CPUs have no-execute and read-only flags for memory. An OS can keep executable code and writable data strictly separate. Had this been done, the attack vector would not have worked: the attacker could still corrupt the data memory, but there would be no way to execute the code they injected there.
CPUs didn't always have the no-execute ability, but it's been around for over 15 years now. Is Windows not using this feature? And if it is, how exactly was the code injected? (It's kind of understandable if WinXP didn't support this feature, as it wasn't widely available when that OS was released.)
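For illustration, here is a small sketch using the Win32 memory API; it's a generic demonstration of the DEP/NX behavior described above, not anything WannaCry-specific. The point is that memory is data by default, and only an explicit, auditable protection change makes it executable:

```c
#include <windows.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    /* Allocate a page as read-write only: it is data, not code. */
    void *page = VirtualAlloc(NULL, 4096,
                              MEM_COMMIT | MEM_RESERVE,
                              PAGE_READWRITE);
    if (page == NULL)
        return 1;

    /* Write attacker-style bytes into it (0xC3 is x86 `ret`). */
    memset(page, 0xC3, 16);

    /* With DEP/NX enforced, jumping into a PAGE_READWRITE page
     * raises an access violation instead of running the bytes:
     *
     *   ((void (*)(void))page)();   // would crash, not execute
     *
     * Executing it requires a deliberate protection change: */
    DWORD old;
    if (VirtualProtect(page, 4096, PAGE_EXECUTE_READ, &old))
        printf("page is now executable -- an explicit choice\n");

    VirtualFree(page, 0, MEM_RELEASE);
    return 0;
}
```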
Privilege escalation
Let's extend our assumption to distrusting software entirely. A typical downloaded application cannot take over the system on its own, so why can the SMB component?
Consider some of the features of file sharing: we need access to a particular set of files, not the entire filesystem; we need some way to authenticate users; we need a way to access the network. These are all well-defined interfaces that an operating system can provide. By partitioning privileges the OS can limit what an application is capable of doing.
Yet it seems the WannaCry malware gained full control of the system. This is only possible if the SMB component is not segregated. We know from Samba that this protocol can run as isolated software. There are also numerous technologies on other OSes that further segregate and isolate components. Were none of those employed in Windows' SMB?
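As a sketch of what such partitioning can look like, here is a minimal POSIX C example in the spirit of how a daemon like Samba can be confined. The path and IDs are hypothetical; the point is that the service gives up global privileges before touching the network:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical unprivileged identity for the file-sharing service. */
#define SHARE_UID 1500   /* e.g. a dedicated "fileshare" user */
#define SHARE_GID 1500

int main(void) {
    /* Confine the process's view of the filesystem to the share. */
    if (chroot("/srv/share") != 0 || chdir("/") != 0) {
        perror("chroot");
        return EXIT_FAILURE;
    }

    /* Drop group then user privileges, permanently. Order matters:
     * setgid must happen while we still have root. */
    if (setgid(SHARE_GID) != 0 || setuid(SHARE_UID) != 0) {
        perror("drop privileges");
        return EXIT_FAILURE;
    }

    /* From here on, even a full code-execution compromise of this
     * process is limited to /srv/share and the fileshare user's
     * rights -- it cannot take over the machine. */
    printf("serving files as uid %d\n", (int)getuid());
    return EXIT_SUCCESS;
}
```

With this structure an SMB bug is still a bug, but its blast radius is the share, not the system.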
I understand changing the structure of an OS is a phenomenal amount of work, but I have to assume Microsoft has the resources. Maybe they are doing this and it just isn't working. Why did the exploit gain so much access to the system?
And on and on
Assume the worst: that all our protections have failed. Surely we can still protect the user's data somehow. Isn't the sudden modification of many files something Windows Defender could detect? Even if it didn't, why isn't there a rollback mechanism?
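As a toy illustration of the kind of heuristic I mean (entirely hypothetical; I make no claim this is how Defender works), a monitor could flag any process that rewrites an abnormal number of user files in a short window:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical thresholds: few legitimate interactive programs
 * rewrite this many user files this quickly. */
#define MAX_WRITES_PER_WINDOW 100
#define WINDOW_SECONDS        10

static time_t window_start;
static int    writes_in_window;

/* Imagined to be called from a filesystem-event hook each time a
 * process overwrites a user file. Returns 1 to flag ransomware-like
 * behavior, e.g. to suspend the process and snapshot its targets. */
int on_file_overwrite(void) {
    time_t now = time(NULL);
    if (now - window_start > WINDOW_SECONDS) {
        window_start = now;    /* start a fresh counting window */
        writes_in_window = 0;
    }
    if (++writes_in_window > MAX_WRITES_PER_WINDOW) {
        fprintf(stderr, "suspicious mass-rewrite detected\n");
        return 1;
    }
    return 0;
}

int main(void) {
    /* Simulate a burst of rapid overwrites, as ransomware produces. */
    for (int i = 0; i < 150; i++) {
        if (on_file_overwrite()) {
            printf("flagged after %d writes\n", i + 1);
            break;
        }
    }
    return 0;
}
```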
In fairness, Windows has options for making versioned backups. It is a user error not to enable them, but there's obviously something preventing people from doing so. I'm also not sure whether those backup files would be protected from WannaCry.
We need to stop assuming our computers are safe and instead design assuming they will be compromised. This is a core tenet of secure server design, so why isn't it applied to desktop systems?
Microsoft puts a lot of effort into security, but this doesn't absolve them of blame in the WannaCry affair. Their system design has allowed this attack to happen, despite there being known techniques that could have prevented it, or at least mitigated the severity.
Top comments (32)
One of the most frustrating aspects of software engineering is this idea that we can solve problems by throwing more "resources" at them. It definitely helps, but you are trivializing the undertaking of fundamentally changing a 30-year-old operating system while not breaking backwards compatibility for millions of users.
They most certainly are taking steps, but the internet collectively throws a fit whenever they do:
Aggressively push users to update to the latest supported version of Windows, even making it a free upgrade? Check.
Aggressively push users to patch their systems? Check.
Vend a version of their Windows that will only run applications in a sandbox? Check.
I understand Microsoft is in a very tricky position here. Part of the problem is that Windows is trying to be everything to everyone, which realistically cannot be achieved.
There's no reason a medical facility should suffer under design decisions made to enable gaming. Nor should a server suffer for design allowances made for desktop computing.
There are definitely hurdles for Microsoft to surmount, but that in no way lessens their share of the blame. I do appreciate that in their release they acknowledge a responsibility, but at the same time they are trying to shift the blame.
So let me get this straight: Windows, originally released as a consumer operating system, should cease to exist as a consumer operating system because the healthcare industry chose to adopt it for their use?
To reiterate: It's Microsoft's fault that the healthcare industry built software on a platform that might not have been the most appropriate? That's of course playing along with the premise that it's a reasonable argument to make, which it isn't.
These weren't servers. These were desktop machines running Windows XP, originally released 16 years ago, which Microsoft stopped supporting 3 years ago, exploited via a vulnerability that Microsoft patched two months ago.
But yes, this is totally Microsoft's fault.
Microsoft actively pushes their OS into every market segment. It's not like they advertise it solely as a desktop consumer OS.
I've already excluded WinXP numerous times from my criticism, stating clearly that the techniques to mitigate this attack did not exist when it was designed. WannaCry however didn't just attack old systems. Indeed it appears an unpatched Windows 10 would have been affected as well.
I'm not blaming only Microsoft for WannaCry, I'm just establishing that they are not blameless, and unless they change something fundamental these attacks will never cease.
You continue to gloss over the fact that Microsoft patched the vulnerability far in advance of it being used (or at least, used widely).
With all these points being made, your argument boils down to "Microsoft is at fault because their software has vulnerabilities." Which, sure. Point me to a large C/C++ codebase that doesn't have any vulnerabilities. It's not reasonable to say that software just shouldn't have vulnerabilities.
What is reasonable to say is that vulnerabilities should be patched in an expedient manner. Which it was.
We have to assume there are vulnerabilities, precisely as you say. The goal is to design a system around this assumption. For this there are known techniques, which Windows does not appear to be using.
That is, I'm not holding anybody accountable for the particular error in SMB. This is unavoidable. What I take issue with is how this error allowed code injection and escalation.
The irony here is that the title of your article places blame on Microsoft.
Yes, it's a counter to arguments being made pinning blame primarily on the NSA or users who failed to upgrade their system. Both of those are clearly part of the problem, but I'm trying to specifically highlight that Microsoft itself shares a portion of the blame.
This has been a conversation in many cybersec circles. A lot of people blame poor patching and a disregard for updates. I'm more inclined to blame unpatched machines and slowness to update as well. A machine would have had to be two months behind on patches to be impacted (in regards to the SMB vuln specifically).
Blame an industry for running a 16 year old OS with support that ended 3 years ago. Still, an interesting perspective.
I exclude WinXP since it would not have been possible to use some of these techniques at the time. However, it appears an unpatched Windows 10 would still have been vulnerable, indicating the OS has still not been improved.
twitter.com/NerdPyle would probably disagree with you on that one.
He's been warning people for a LONG time to get their systems updated and sort out the SMB1 problems. Hardly MS's fault for consumers not patching / keeping their systems secure.
In fact, good-guy-Microsoft for patching the XP flaw while they were at it, as far as I'm concerned.
I hold WinXP users at fault for any problems they are having. It's clearly an unsupported and insecure operating system.
The issue I'm addressing is not one of individual patches. I applaud Microsoft for keeping their system patched in a timely manner.
What I'm taking issue with is that these types of exploits are allowed to happen at all. The OS could be designed to either prevent this type of exploit from happening, or at least significantly mitigate the damage. Until this underlying flaw is addressed we'll continue to see these attacks.
So, I see this argument as being equivalent to saying websites shouldn't allow 3rd party ads because those ads can be used to drop malware. Websites shouldn't allow iframes because an XSS could drop an iframe that delivers ransomware via a drive-by attack. In this regard, Microsoft should also be held responsible for allowing VB scripts to be linked in a Word document, because those are also common methods of malware dissemination.
Is that your line of thinking?
In a way, yes. We must design software assuming that these vectors will be used to attack a system. As you correctly show, this isn't a problem limited to just Microsoft. It's a design issue that all projects face. We continue to use designs that do not adequately protect our systems from attacks.
Websites allowing 3rd party ads is one particular thing that is a security/privacy issue. I mentioned this in another article of mine: mortoray.com/2017/05/02/fix-your-c...
The underlying flaw(s) in this case have been mitigated. The current SMB protocol is versions ahead of what was exploited here - the problem is that MS has to keep backwards compatibility for products / clients running older software. The onus is on the consumer to stay up-to-date.
A patch for vulnerabilities in SMBv1 was released by Microsoft in March.
microsoft has backdoors for the n.s.a.
so do all of the other big corporations.
it is not an "accident" or "poor coding".
you don't need to look for explanations.
it is a considered, intentional decision,
its purpose being to curry official favor.
(do you know how costly and unpleasant
it can be to go against the government?
backdoors are a bribe that must be paid
for a corporation to become a behemoth.)
-bowerbird
Okay let's not get all tin foil hat here. The NSA employs hackers to track and document zero days, as does Google (they just report them). It is dangerous to even begin to suggest or imply that large corporations intentionally leave their software or systems vulnerable to the government without any substantial proof/evidence. This is especially careless since many large tech organizations have repeatedly stated their positions on this topic and are on record as fighting this type of action.
you can believe whatever you want to.
and even cast "tin foil hat" aspersions.
but we have now actually witnessed the
uncovering of many widespread actions
on the part of our government to spy on
a wide array of private citizens; so if you
say backdoors are outlandish, who cares?
tracking has been built deep in the guts
of our computers since the earliest days,
with and without the complicit knowledge
of the hardware and software companies.
the public protestations of those companies
is a charade they are compelled to construct,
for the sake of covering up their butts so that
shareholders won't suffer if the truth gets out.
note that i'm not even saying it's a bad thing.
there are arguments in support of both sides.
(including a risk that bad guys will find them.)
but if you think there are no backdoors being
placed intentionally, i think that you are naive.
-bowerbird
And yet...you have no proof to back up your claims.
and you're completely correct that
i have absolutely no proof. none!
and if i were to have the slightest bit,
any at all, i would promptly "lose" it.
since that's the kind of stuff
that can and will get you killed
if you're not part of the plot.
whether or not you wear a tin-foil hat.
-bowerbird
Look, you either know something, or you don't. If it's the former, it would be great if you could explain; if it's the latter, you're the one being naive, believing in things with no proof supporting them.
Windows' code is constantly being combed by security experts all around the world, and bugs have constantly been found and eventually patched. It's pretty normal. So normal that I find it far easier to believe that the NSA simply hoarded those bugs for themselves rather than forcing an unwilling Microsoft to create holes for them... and in exchange for what, exactly?
Microsoft knows well that there's no such thing as a backdoor for the "good guys" only.
two months ago, nobody had "proof" of this backdoor.
except the government (for sure), and maybe microsoft.
now we "blame" microsoft because "it should've known",
and further, shouldn't have built its software so shoddily.
even though its code was "being constantly combed by
security experts around the world", who missed this hole.
speaking of swiss cheese, this line of arguments qualifies.
and thus has become too tedious to proceed.
please believe whatever you need to believe.
i know what i think.
-bowerbird
I think nobody with a minimal understanding of software development thinks Microsoft "should have known" - bugs happen, unbeknownst to their developers, period. And blaming Microsoft for having developed SMB the way it did is also generally disagreed with, because it doesn't take historical reasons into account.
As long as it's not religion, what I believe must be supported by facts. Otherwise I don't believe it, and even less do I speak about it. Yet you speak while providing no facts. I have no idea why you think that's reasonable.
so you disagree with this article's point. that's fine.
but the government knew about this vulnerability.
so, what you believe is that the government knows more about microsoft's code than microsoft itself, more than the programmers who wrote that code.
and you believe the government explicitly decided not to inform microsoft about its code's deficiency.
who knows? you might be right. i certainly don't know.
but you don't have any more "proof" for your position than i have for mine, and it's disingenuous to imply so.
i think it's far more likely that both the government and microsoft knew about this hole in the fence, and rather than patch it, they decided to monitor it closely instead, to catch any bad guys who might try to slip through it... (and yes, use it themselves, also to catch the bad guys.)
of course, once the hole was widely known to the public, and thus garden-variety criminals, they had to patch it.
but up until that time, it was more useful as a honeypot.
and once you see a "vulnerability" can be used this way, it doesn't take a whole lot of imagination to propose that you introduce a few of them, or a few dozen, as tools...
but, of course, you'd have to be very careful to not leave any "proof" that you'd done that. and you would have to publicly disavow such efforts, and have plausible deniability. maybe even have a law that you are not allowed to admit it. you could call it a "national security letter", or some such.
and now i doff my tin-foil hat to all of you, and exit...
but again, please believe whatever you need to believe.
-bowerbird
Hang on though. I don't actually know if there are any systems like this, but indulge me a hypothetical (one I suspect is not hypothetical somewhere, because bad software is everywhere). What if there were software for a defunct brand of MRI machines that, as written, actually requires some sort of sanctioned remote code execution?
It would be easy to say "the MRI manufacturer should pay for new software to be written that isn't terrible" but what if they don't exist anymore? Perhaps a very clever programmer could be hired to reverse engineer the machines and rewrite their code, but who would hire them? In the case of a greedy, or worse, actively malicious rights holder controlling the old company, how would you grant them legal protection?
Should Microsoft go for broke and remove the entire capability for remote code execution via your second line of defense, forcing hospitals to buy new MRI machines, which range in cost from the low hundred thousand to multiple million dollar range?
Or perhaps when Windows 11 comes out and is fully immune from code injection via SMB, the hospital will simply stay on Windows 10 to avoid that cost, and in 2040 will suffer a GonnaCry ransomware attack via an RCE only discovered after Windows 10 is EoL?
In an ideal world, everyone would update their software and everything would be fine. What should we do in our non-ideal world, where breaking backwards compatibility in the name of future proofed security might cripple businesses working towards bringing about a more ideal world?
I hesitate to ever defend Microsoft, but how would Microsoft be "forcing hospitals to buy new MRI machines", how are they responsible for increased security on their operating systems resulting in the breakage of insecure software? Even if the MRI software can't be re-written or upgraded, how is Microsoft responsible for that?
Sorry, may not have made my position clear there. If Microsoft decided to patch out remote execution entirely, both legitimately and illegitimately, that would be a hard decision with both pros and cons, and in some cases where I think the former outweighs the latter, I would applaud them for it.
But if they did, that still puts the hospitals between a rock and several hard places if their MRI machines depend on legitimate remote code execution. Do they not ever install the patch, leaving them open to RCE exploits that would likely never be patched?
Do they buy new MRI machines, which might be millions of dollars of one time investment, over something that only doesn't work because of a patch?
Do they risk who-knows-what legal trouble trying to get an unofficial patch for their machines, if the maker will not provide?
Do they spend the money on a top notch InfoSec team that can mitigate the risks, investing less up front but needing them around forever to keep the ship floating?
I don't blame Microsoft, but that doesn't erase the challenge for the hospital.
I love this because it is, in essence, why having a good cyber security program in place is so important. Not allowing outbound/inbound connections through a firewall over certain ports, having good endpoint protection software, having good IDS & IPS systems, having a good incident response plan and team in place, etc. All of these things, along with patching and updating software, are important to minimize the impact of an infection or breach. You're so right, there isn't a perfect world, and when it comes to security it's not an "if" but a "when". So having the proper staff and processes in place is crucial.
Somebody please help guide me to wisdom.
How do you run a mission-critical system (such as for a health care system), and have your machines running an OS that had its EoL some 3 years ago?
I see the blame more on the system admins than the OS or whoever made the OS. I'm no Microsoft fan (stopped using MS stuff some 10 years ago), however, I feel the pointing of fingers at MS has to stop when an admin doesn't take the necessary upgrade precautions.
3 years behind OS? And when attacked, the OS Maker is at fault, but the System admins walk?
If the attack affected only WinXP then I wouldn't hold Microsoft responsible at all. The problem is that the attack affects still-supported versions, and also required a patch on Windows 10.
The article is about a design failure, that unless addressed, means we will continue to see these attacks, even on up-to-date systems.
Organizations that deal with sensitive materials are often slow to apply M$ patches. They test and validate the patches before pushing them out to hundreds or thousands of systems. A 2-month gap seems overmuch. Still, a patch can break an existing system configuration. You can't necessarily just pass through updates.