Now that we have more information about what happened on Twitter, and how the company dealt with things, what are the major lessons software development and security practitioners can take away?
I am having trouble finding verified facts, so I think transparency is the biggest issue here. As of right now, I don't trust a darn thing Twitter says, because they want to defend their image more than they want to take responsibility for their data. If staff can take over accounts and manually override content, I am very concerned. I can think of no valid reason for that, and it sounds like it was done with no oversight or auditing either. Might be deleting mine tonight.
Accountability would be nice too. Don't play the victim when you're a multi-billion-dollar company. Your security sucked and you need to do something about it.
I agree with you. If my account is secured with a password that (presumably) is stored and encrypted in a manner where only I know it, and I have 2FA to further protect my account, but even one person could possibly perform actions with my account without possessing either, is it really secure? No, it is not.
The fact that this was even possible shows that in no way did Twitter take securing their application seriously.
Where does it say that they can gain direct access to an account? I haven't seen the news recently, but I think deleting a post is something that can be done by moderators.
Twitter said this directly in their posts following up on the ongoing investigation.
twitter.com/TwitterSupport/status/...
Additionally, based on this, they either have internal tools that allow their employees direct access to individual accounts without needing to authenticate with the account credentials, or lax security protocols in place that allow an employee to hijack an account credential reset without the owner having any knowledge of it (which is just as bad, if not worse).
Thanks, I hadn't read that yet. This is really concerning; I don't know why an employee has permission to take control of a user account. It's some comfort that this is a social media site, but if this were an enterprise website...
Agreed. I think that's the ultimate lesson learned from this attack. The weakest link in your security is always the human; do not allow the tools you create to expose this weakness any more than is necessary.
It'll be interesting to see how this plays out for them over the next few days/weeks.
So the lesson (or one of the lessons) is that their internal tools and their internal employees had far too many, and far too powerful, permissions granted to them. Oh, and (I saw this mentioned somewhere else) an internal employee doing something security- or privacy-sensitive should not be allowed to perform that task alone; there should always be someone else looking over their shoulder (the four-eyes principle).
Twitter has been exceptionally open about its investigation... and that is only day 2. The Twitter Support thread that Ben links to has a lot of detail for the very beginning of an investigation. We know pretty much what Twitter knows at this point.
If you are surprised that Twitter's customer service team can modify account settings, I would go to your company's support team and ask what abilities they have to help their customers.
Indeed. Impersonating a user is a common troubleshooting tool used in a lot of web applications. I don't believe this attack (it wasn't a "hack", not even a "crack") was made any worse by the presence of the tools, or their wide-ranging ability.
Usually the mitigations for security risks in such a tool are strong authentication (such as 2FA) for the operators and auditing of every action taken.
In this case, it appears that an engineer's credentials were obtained, and that 2FA was either ineffectual or not employed. The tool itself may already audit actions, which might have helped remove the fake posts quickly, as they would have been recorded as such.
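To make the auditing part concrete, here is a minimal sketch of what recording impersonation actions might look like. Every name in it (`AuditLog`, `impersonate`, and so on) is hypothetical; this is not Twitter's actual tooling.

```python
import json
import time

class AuditLog:
    """Append-only record of every privileged action taken in the tool."""
    def __init__(self, path):
        self.path = path

    def record(self, operator, action, target, **details):
        entry = {
            "ts": time.time(),
            "operator": operator,  # the support engineer, not the end user
            "action": action,
            "target": target,
            "details": details,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

audit = AuditLog("support_actions.jsonl")

def impersonate(operator, target_account, reason):
    """Open a support session acting as target_account.

    The impersonation is recorded *before* the session starts, so any
    fake posts made through the tool can later be identified and removed.
    """
    audit.record(operator, "impersonate", target_account, reason=reason)
    ...  # hand back a scoped, time-limited session here
```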
There are a few lessons to take away, as I see it:
If users can do bad things, users will do bad things.
It's only a matter of time before someone does something they weren't supposed to, or which goes against the principles of the organisation (see theguardian.com/politics/2020/may/...)
People are often the weakest point in security.
If this was a social engineering or bribery-for-access attack, then there's only so much you can do from a technical point of view. If the attackers had someone on the inside, that's not much different from the Cold War double-agent type of intelligence officer.
People are greedy
It doesn't matter if they are complicit in the attack, or victims. If someone was bribed to help with the attack, they are greedy. If someone actually believed that they would double their money because some prominent figure "said they would", they are greedy. It's a very easy attack vector.
Smaller organisations are screwed when it comes to security
If the big players can't get it right, whether through lax measures or through not caring, then smaller organisations are always going to struggle with security. They can't afford to pay the salaries the big players can for top talent.
The big lesson for me is you can NEVER put absolute trust in ANYTHING you see on the internet - always reckon with the possibility (even remote) that a server/source/account could be hacked and that info could be fake, manipulated, etc. As long as people ALWAYS behave with the mindset "this piece of info COULD be false or fake, so I am going to act as if it WERE false or fake" then collectively we should be a lot safer. So rule 1, be sceptical, rule 2, be sceptical and rule 3, see rule 1 and 2. :-)
Of course there are also lessons to be learned about how Twitter and other companies operate, but things will always go wrong no matter what, so an internet user's basic mindset should be "this info could be fake, hacked or manipulated".
A philosophical skeptic might tell you absolute trust can't be put in anything, not just things observed on the internet.
Oh, I agree absolutely. Rather than "the internet" as a whole, I was really comparing social media with traditional media, the difference being that with social media the threshold for publishing is almost zero; literally anyone can throw anything on social media. But with "traditional" news you can also go wrong.
As of this point (1.5 to 2 days after the incident), Twitter has stated it does not yet know exactly how attackers accessed its internal customer support admin tool. That makes sense, as these things take time to verify. They are being remarkably candid about their ongoing investigation in the Twitter Support thread that Ben has linked to in this post. I hope this sets a standard of communication for other companies, although I am not hopeful.
Joseph Cox at Vice published an article during the incident with sources inside the hacking group claiming they paid an internal customer support admin for either their credentials to the admin interface or paid them directly to modify account settings via the interface.
There was another recent article from a different journalist, which I'm not going to link to, revealing information about one of the hackers that seems to corroborate this reporting. As an aside, Joseph Cox is an excellent reporter to follow for information security journalism.
So, we have an internal customer management tool that can change an account's settings, such as changing the registered email address and disabling 2FA. These seem like typical customer support actions, presumably not available to tier 1 support, but escalatable to someone authorized to perform them after verifying a user. Hackers paid off one of these authorized customer support admins for access to this tool and used it to change the primary email address of accounts to one under their control. They additionally either disabled 2FA or set the registered phone number to one under their control as well. They then triggered a password reset, received the reset email at their own address, and proceeded from there to take over the account. They appeared to script this whole process to quickly capture a number of accounts.
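Laid out as pseudocode, that scripted flow reads roughly like the sketch below. Every name here is a hypothetical stand-in for an action in the internal tool, not a real API; the stub class just prints the step it represents.

```python
class AdminSession:
    """Stand-in for the internal admin tool; each method prints the step
    it represents rather than doing anything real."""
    def set_primary_email(self, account, email):
        print(f"{account}: primary email -> {email}")

    def disable_2fa(self, account):
        print(f"{account}: 2FA disabled")

    def trigger_password_reset(self, account):
        print(f"{account}: password reset email sent")

def take_over(session, account, attacker_email):
    # 1. Point the account's registered email at one the attacker controls.
    session.set_primary_email(account, attacker_email)
    # 2. Remove the second factor (or repoint it to an attacker's phone).
    session.disable_2fa(account)
    # 3. Trigger an ordinary password reset; the reset email now goes to
    #    the attacker, who completes it like any legitimate user would.
    session.trigger_password_reset(account)

session = AdminSession()
for account in ["high_profile_account_1", "high_profile_account_2"]:
    take_over(session, account, "attacker@example.com")
```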
Questions I want to leave you with:
How do you protect against someone using an internal tool in the way it was designed? Someone who has access to the tool as part of their regular responsibilities?
You can require two or more people to sign off on account activities like this, so instead of buying one person, attackers would need to buy two who could then modify settings; a minimal sketch of that is below. I'm sure Twitter's security teams will be implementing interesting new monitoring checks around these internal tools as well, which brings me to my next point.
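Assuming a hypothetical request/approval model (nothing here is a real Twitter system), the four-eyes gate might look like this:

```python
# Sensitive actions need at least one approver besides the requester,
# so compromising a single insider is no longer enough.
SENSITIVE_ACTIONS = {"set_primary_email", "disable_2fa", "trigger_password_reset"}

class ChangeRequest:
    def __init__(self, requester, action, account):
        self.requester = requester
        self.action = action
        self.account = account
        self.approvers = set()

    def approve(self, approver):
        # A requester can never approve their own change.
        if approver == self.requester:
            raise PermissionError("requester cannot approve their own request")
        self.approvers.add(approver)

    def execute(self):
        if self.action in SENSITIVE_ACTIONS and not self.approvers:
            raise PermissionError(
                f"{self.action} on {self.account} needs a second sign-off"
            )
        ...  # perform the change, audit-logging who requested and approved it

# Usage: execute() raises unless someone other than "alice" approves first.
request = ChangeRequest("alice", "set_primary_email", "@someuser")
request.approve("bob")
request.execute()
```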
How well architected are the monitoring and logging in your application? Are you capable of detecting anomalous behavior patterns, or are you only checking for increased error rates? Monitoring and logging are such an important aspect of information security that they made it into the OWASP Top Ten in 2017. It is hard to be effective at preventing many insider-threat scenarios and still be a functioning organization, so a company needs to be able to detect and respond to incidents quickly, which is where logging and monitoring come into play. If you are throwing everything into Splunk but don't have any automatic alerts acting on those logs, you're not helping anyone.
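To sketch what "alerts acting on the logs" could mean here, assuming an invented log shape and threshold:

```python
from collections import Counter

NORMAL_PER_HOUR = 3  # assumed typical volume of email changes per operator

def flag_anomalies(log_entries):
    """log_entries: dicts like
    {"operator": "alice", "action": "set_primary_email", "hour": "2020-07-15T20"}.
    Returns (operator, hour) pairs whose volume far exceeds the baseline."""
    counts = Counter(
        (entry["operator"], entry["hour"])
        for entry in log_entries
        if entry["action"] == "set_primary_email"
    )
    return [key for key, count in counts.items() if count > NORMAL_PER_HOUR]

# The last mile matters: a check like this has to page a human
# automatically, not sit in a dashboard nobody reads.
```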
Finally, let's cool it with the diatribes against Twitter's security teams. Even in situations like Equifax's breach, it is very rare for the security team to be behind any mistakes. I have yet to encounter a security group that is ambivalent toward protecting their users (Facebook, for example, has one of the best security teams in the world). It is usually the business who prevents the security team from implementing the controls they want due to real or perceived friction for business operations. If you want to be frustrated at Twitter, go ahead. But I will be highly surprised if future articles about this incident reveal that Twitter's security team had any part in this story.
Edit: and this is why I didn't link to the other article.
I feel like Twitter should've always audited verified accounts incredibly thoroughly since it seems so easy to just pop in and do whatever you want.
Major lessons:
Observation 2 follows from observation 1.
Support people will need to be able to recover accounts, but not without the owner's consent, and that consent should take the form of answers to questions only the owner might know. Not "what was your pet's name" or something like that; ask things like: when was the last time you changed your password? Which phone do you use? If support can't answer these, they shouldn't get access. And how will support get the answers? Only by talking to the real owner.
This is what the banks do.
Most of the 'something only you know' answers can be worked out from content that's already on Twitter.
I don't know if you actually read my full reply or not. I said not like that; things like that can be public knowledge,
but questions like:
Which phone do you use to make most of your tweets? (The system knows this, and it isn't public knowledge.)
Which 2FA method have you set up? (Same idea: the user doesn't set this as an answer, but things like, did you use SMS? Which phone number did you use?)
When you got your account verified, which identity document did you use? Your passport, or your national ID?
Which email did you use to create this account?
Tell me the phone number you've used on this account for 2FA.
Please tell me which of these you could work out from public information, and whether any of them wouldn't actually be relevant for those people.
A pet's name can be worked out, and not everyone has a pet, but you can't find Elon's phone number on some random site. And again, I'm saying: ask five of these questions, and only when they answer all five correctly can the customer support person do anything to the account, something like the sketch below.
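A rough sketch of that gate, with a made-up account record shape (this is not any real support system):

```python
# Support may act on an account only when every given answer matches
# what the system already knows. Field names are invented.
VERIFICATION_FIELDS = [
    "last_password_change",
    "primary_tweeting_device",
    "2fa_method",
    "verification_identity_document",
    "signup_email",
]

def support_may_proceed(account_record, given_answers):
    """True only if all answers match facts the real owner knows but an
    outsider cannot look up publicly."""
    return all(
        given_answers.get(field) == account_record.get(field)
        for field in VERIFICATION_FIELDS
    )
```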
Please read the reply properly first :) (not being toxic, I just thought you didn't read it before jumping into attack mode).
What is really amazing to me is that there are still people in the world who would fall for this.
You would think that if you could figure out how to purchase Bitcoin, that would qualify you for a slightly less stupid group.
It does not.
I think the target group was well picked. Bitcoin investors are already among the highest risk-takers. Especially after the HODL hype a few years ago, a good number of people looking for an easy way to get rich must have gathered together in the Bitcoin network. Not much different from scams like fake cloud farms or pyramid schemes.
What is scarier is that a social media post can be legally binding. This would have been very hard to defend against if, instead of targeting high-profile people in one organized attack, it had targeted ordinary individuals in disorganized, individual attacks.
I can't imagine what could happen if this were used to blackmail people, or worse, to defame them. It would then be quite possible to send someone to jail, or to silence their voice.
We learned that the principle of least privilege is not followed at Twitter.
Why on earth would ANY Twitter employee need to publish a tweet as someone else? I mean, ever?
Having some employees with the authority to delete a tweet? Fine.
But publish a tweet as someone else? Why would they give employees such enormous power in the first place? A sane permission model wouldn't even define that capability, as in the sketch below.
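With made-up role names (this is hypothetical, not Twitter's model), that might look like this; note what is absent: no role has a "tweet as user" permission at all.

```python
# Hypothetical least-privilege role table for support staff. "Publish a
# tweet as the user" simply does not exist as a grantable permission.
ROLE_PERMISSIONS = {
    "support_tier1": {"view_account"},
    "support_tier2": {"view_account", "trigger_password_reset"},
    "trust_safety":  {"view_account", "delete_tweet", "lock_account"},
}

def allowed(role, permission):
    return permission in ROLE_PERMISSIONS.get(role, set())

# No role can ever be granted this, because it is not modeled at all.
assert not allowed("trust_safety", "tweet_as_user")
```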
This is only going to foster people's suspicions of ideologically or politically motivated shadow behavior by Twitter employees. And now I'm thinking they might be right about it.
Correct me if I'm wrong, but from what I read in their tweets, no passwords were captured. Might this rather be a problem with role-based access control? It would not be the first time a system allowed a third party to act on behalf of someone with elevated privileges.
Also, shouldn't only the owner be able to change posts? If there is a role besides the owner that can change posts, there is the possibility of planting false evidence, and that would be a legal disaster.