Having a strong, complicated and unique password for each of your online accounts is super important, but also super difficult if you’re relying on remembering all of those passwords yourself. Writing them down is an option, but there are lots of caveats with that, which I explain here. Writing passwords down is only a good idea if you only need them in the house, never when you’re on the move or at work, and if you completely trust everyone you live with.
Enter password managers. Password managers act like a vault: you just need to remember one complicated password (do make it a good one!) for the password manager itself, and then you store all of your other passwords in the ‘vault’. This means you can have incredibly long, complicated passwords that offer high levels of security and you don’t need to try to remember them. You can use the password manager to suggest new passwords for you and they generally have copy and paste features so you don’t even have to type the password out when you go to log in to a site. Password managers also allow you to sync your passwords across your devices and so they are available to you when you’re on the go. Finally, password managers make it really effortless to change a password, which comes in handy when a site gets breached and you realise you were using an old password there which you may have used elsewhere.
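Under the hood, the "suggest new passwords" feature is simple: the manager draws each character independently from a cryptographically secure random source. A minimal sketch of that idea in Python (the function name and length are my own illustration, not any particular product's implementation):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Draw each character from a cryptographically secure RNG (not random.choice,
    which is predictable and unsuitable for secrets)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 24-character password every call
```

With roughly 94 possible characters per position, a 24-character password like this has far more entropy than anything a human would invent and remember, which is exactly why letting the vault generate and store passwords beats memorisation.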
People understandably worry about the idea of putting all of their eggs in one basket, and trusting a password manager with all of their login details. However, we know that there is no such thing as perfect security, it’s all about getting the right balance. Currently, too many people are in the position of reusing the same one or two weak passwords for all of their accounts because it is impossible to remember so many different, complicated passwords. A password manager, protected with one very strong password, offers much greater security than this. Don’t just take my word for it, see what the UK Government’s NCSC has to say on the matter, too.
So, hopefully I’ve convinced you to look into getting a password manager. They’re easy to set up and will make your online life easier and more secure. You may be wondering which one to use: three that are commonly recommended are 1Password, Dashlane and KeePass.
There are some simple things you can do, as an individual, to better protect yourself online.
Look After Your Accounts
Take care of your passwords. Make them strong and don’t use the same password across different websites. You might want to look into using a password manager; the UK government has provided some helpful information on that here.
For a home computer user, you can also consider writing your passwords down in a book and storing that book in a safe place. Bear in mind: what is the worst thing that can happen here? People you share the house with may find the book and use the passwords to get into your accounts. If that would be a problem for you, then don’t do it. But if this risk doesn’t pose a threat to you, then you can use complicated passwords without having to remember them or use a password manager, and just keep them in the book. Someone is more likely to break a weak password over the internet than they are to break into your house and steal your book of passwords as a way of getting into your accounts. This approach is fine for most people at home, but not for people who live with those they cannot trust and not for use in an office.
Don’t just rely on one password for each account. Setting up two-factor authentication adds a second layer of security to your accounts, which is important in case your password gets compromised (for example, if it is stored insecurely by the service provider and a criminal accesses it, as happened to Yahoo). It’s pretty easy to set up and use two-factor authentication and there is some really helpful guidance on setting it up for many popular sites and apps here.
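For the curious, the six-digit codes that authenticator apps display come from a published scheme (HOTP/TOTP, RFCs 4226 and 6238): a shared secret is mixed with the current time using HMAC, so your phone and the website can each compute the same short-lived code without it ever travelling over the network. A sketch using only Python's standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # 'dynamic truncation': last nibble picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window.
    This is what authenticator apps display."""
    return hotp(secret, int(time.time()) // step)
```

Because the code changes every 30 seconds and depends on a secret that only you and the service hold, a criminal who steals your password alone still cannot log in.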
Look After Your Browsing
Be careful what you do over the internet when using public wifi, as you don’t really know whether you can trust the other people on the network. For example, avoid checking your online banking or buying things over the internet when you are on a coffee shop’s wifi network. Look into using a VPN if you often use public wifi, and make sure you pick a trusted and verified VPN provider.
If you receive an email with links or attachments that you were not expecting, pick up the phone and check with the sender that it is legitimate. Some phishing emails are hard to spot. To understand what ‘phishing’ is, Wikipedia has a pretty good explanation here.
Look After Your Devices and Data
Buy an external hard drive and back up your data, so you won’t lose your data if you get infected with ransomware. Ransomware is often spread via phishing emails and it locks your device / data files before serving you a ‘ransom note’. There is more information on ransomware, and what to do if you get infected, here.
Keep your devices and software updated, as known vulnerabilities and bugs get fixed with each update.
Put a PIN or password on your phone / laptop / tablet and don’t leave your device unlocked and unattended.
Cyber insecurity often manifests as a combination of human, technical and physical dimensions. Yet, the cyber security industry is incredibly siloed when it comes to these different facets. Myself and FC have had many conversations about how frustrating it is that the industry is so fragmented when the problems are so interconnected.
This is why we have founded Redacted Firm. We’re bringing together our backgrounds in the human, technical and physical areas of cyber security to offer a more comprehensive and integrated approach to training, testing and talking about cyber security.
For the last few months we’ve been working with clients on behavioural and cultural-change training, penetration testing and various consultancy projects. We’ve also been talking about cyber security at events. In the coming months, we will be speaking in Sweden, Canada, the US, the UK, the Netherlands, Belgium and Austria (among many other places!). Details of our upcoming speaking events are here.
A couple of the events we have spoken at were for the Cyber Security Challenge. As individuals, we have been supporters of the Cyber Security Challenge for many years and we are delighted to be official Affiliate Sponsors of the initiative going forwards.
Check out our website, follow us on Twitter and please don’t hesitate to get in touch if you would like a more detailed conversation about what we do.
Today I went to register for a new GP practice. In order to register I was told that I needed a photo ID ✅, two utility bills with my name and address on ✅ and my NHS number ❎. I have never needed to know my NHS number before, but I was told it was the policy of this surgery that people provide them when they register so that they can verify the individual’s identity. I was surprised and a bit irritated, because my passport and utility bills should be sufficient, but it seemed non-negotiable. I was told if I rang my old GP surgery they would provide me with my NHS number 🤷♀️.
I dutifully rang my old GP surgery with my request. The receptionist who answered told me that I needed to put my request in writing as they cannot give out personal information over the phone. I pressed her on this and she explained it was for “data protection” as she could not verify my identity over the telephone (“you could be anyone”). I was told that if I sent an email, they would reply with my NHS number. The rest of the conversation went roughly like this:
Me: “But how can you verify my identity over email?”
🤔 In the pause before she replied, I could hear the realisation sink in
Her: “…well… do you have your name in your email address?”
Me: “I do, but anyone can create an email with somebody’s name in”
Her: “exactly and it’s the same with the phone” 🤦♀️
Her: “yes but we’re not supposed to give personal information over the phone” 😔
And so, we got to the crux of the matter. “Supposed”. Policies and training that are not fit for purpose. People being told what not to do, without training them in risk and why they should not do certain things. A policy that does not account for practicalities, that has been determined without consultation with the personnel who have to enact it; someone being told what they cannot do without being advised what they can do.
In trying to receive healthcare today, the policies of two GP surgeries acted as a blocker to me doing so. According to the NHS website, I should not have needed to know my NHS number to register with a new surgery, and my old practice should have told me what my number is. The two receptionists I communicated with were not malicious, they did not want to get in my way or to be interrogated about their data privacy and security policies. They were just trying to do their jobs, as much as I was just trying to see a doctor. Too often data security and privacy approaches undermine people, when they should empower them.
In the end, I did email the surgery and ask for my NHS number, as well as offering free consultation and training. I have not heard back, most likely because the recipient is paralysed over whether they can share my number via email or any means. I doubt they will take up the offer of free help, but I hope they do. Meanwhile, I went to a walk-in centre and was treated by a brilliant doctor who also gave me my NHS number. I gave my name, address and date of birth when I made the appointment, but at no point did I have to do anything further to verify my identity.
We need to push harder to make security and privacy more fit-for-purpose and less ridiculous.
It’s hard to get all of the interesting points about an issue into a quick news interview; the key points about the story from my perspective are:
The stealing and leaking of Game of Thrones scripts before the episodes air gives the criminals their headlines, but HBO are probably far more concerned with the internal documents which have been breached
This parallels the Sony hack a few years ago, in which many embarrassing internal emails were leaked and which ultimately led to the Co-Chair of Sony, Amy Pascal, stepping down
In HBO’s case, it seems that thousands of internal documents have been stolen, from personal information of employees through to legal and financial documents of the corporation
There are reports that Game of Thrones actors’ personal contact details, including phone numbers and email addresses, have been leaked
From reports, we can deduce that one person’s machine has definitely been compromised, a top executive who seems to be the Vice-President for Film Programming. Login details for dozens of her online accounts have been released by the criminals, including possibly her work email
I would speculate, as I mentioned in the Sky News interview, that the attack was most likely carried out by a spear-phishing email which compromised the VP for Film Programming, but there are of course many other scenarios which could have facilitated the attack. Film and TV studios work with many third parties, so a third party could have been leveraged as a means of attacking HBO. An episode of Game of Thrones was recently leaked via Star India, one of HBO’s distribution partners, so it is feasible that there is a link here, for example one scenario being that Star India (or another third party) was compromised and used as a vehicle to send a spear-phishing email
HBO are claiming that their internal email network has not been fully compromised
The criminals are claiming that they have 1.5 terabytes of HBO data, which could just come from compromising one machine
The attackers seemingly sent a ransom note to HBO, which has now been made public, in which they ask for ~$6m in bitcoin. They claim that they have spent six months on the attack and that ~$6m represents six months of their income. This is a novel way of calculating a ransom demand, to say the least. Despite this, the criminals also say that the attack was not financially motivated, that they are white-hats and that HBO should see them as partners. Like all good protection rackets, it seems they are trying to frame this as an on-going business relationship.
If the attackers really spent six months on the attack, it’s quite surprising that they didn’t get more than 1.5 terabytes of data
$6m is a lot of money, especially in bitcoin: only approximately 900 accounts in the world have over $6m worth of bitcoin in them
The timing of this attack could be particularly difficult for HBO, coming when AT&T are looking to close their acquisition of Time Warner (who own HBO) for $85bn
As an update, since the Sky News piece a few days ago, the attackers have claimed that HBO offered them a $250,000 “bug bounty” as part of their negotiations; HBO seem to be claiming that they made this offer as a delay tactic.
A piece of research published recently by Ponemon Institute has found that consumer trust in certain industries may be misplaced:
68% of consumers say they trust healthcare providers to preserve their privacy and to protect personal information. In contrast, only 26% of consumers trust credit card companies
Yet, healthcare organisations account for 34% of all data breaches while banking, credit and financial organisations account for only 4.8%. Banking, credit and financial industries also spend two-to-three times more on cyber security than healthcare organisations
It is worth noting that the research was conducted before WannaCry hit the NHS: it would be interesting to see whether the perception of trustworthiness in healthcare providers has been impacted by that high profile incident.
Why do some sectors have a much stronger image of trustworthiness when it comes to cyber security, even when this contradicts reality? Is this consumer wishful thinking, media reporting bias or something else? Should organisations that are not rated highly for trustworthiness be more vocal about the efforts they undertake to be more secure?
71% of IT respondents in the research do not believe that protecting their company’s brand is their responsibility
However, 43% of these respondents do believe a material cybersecurity incident or data breach would diminish the brand value of their company
The research surveyed three groups (IT practitioners, Chief Marketing Officers (CMOs) and consumers) to ascertain their perspectives on data breaches.
Perhaps unsurprisingly, CMOs allocate more money in their budgets to brand protection than IT:
42% of CMOs surveyed say a portion of their marketing and communications budget is allocated to brand preservation and 60% of these respondents say their department collaborates with other functions in maintaining company brand
Only 18% of IT practitioners say they allocate a portion of the IT security budget to brand preservation and only 18% collaborate with other functions on brand protection
Should IT practitioners be more proactive in pursuing brand preservation? Or, is it the responsibility of organisations to encourage their IT departments to be more engaged in reputational protection?
Stock price seems to be a blind spot for both CMOs and IT practitioners. Reputation loss due to a data breach is one of the biggest concerns for both groups, and yet:
Only 23% of CMOs and 3% of IT practitioners say they would be concerned about a decline in their companies’ stock price
In organisations that had a data breach, only 5% of CMOs and 6% of IT professionals say a negative consequence of the breach was a decline in their companies’ stock price
IT practitioners and CMOs share the same concern about the loss of reputation as the biggest impact after a breach, but after that, the concerns are specific to their function.
For CMOs, the top three concerns about a data breach are:
Loss of reputation (67% of respondents)
Decline in revenues (53% of respondents)
Loss of customers (46% of respondents)
For IT, the biggest concerns are:
The loss of their jobs (63% of respondents)
Loss of reputation (43% of respondents)
Time to recover decreases productivity (41% of respondents)
What are the implications for an organisation’s cyber security when the marketing department is so externally focused and the IT department is so internally focused?
A couple of months ago, I was invited to Seattle to discuss the human elements of cyber security for Microsoft Modern Workplace. We talked about topics like communication, fear, social engineering, how to engage with senior executives and the idea of people as the ‘weakest link’. It inspired me to pull together some of my thoughts and work regarding how I define the human nature of cyber security.
“If you engage in changing your culture, if you engage in empowering your staff… then people go from being the weakest link to the biggest part of defence” Dr Jessica Barker speaking to Microsoft
Bruce Schneier popularised the concept in 1999: cyber security is about people, process and technology. Yet almost two decades later, the industry still focuses so much more on technology than the other dimensions of our discipline. My consultancy work has always centred on the human nature of cyber security and I’ve been talking publicly (at conferences and in the media) about this subject in the four years I’ve been running my business. In the last year or so, it’s been gratifying and encouraging to see industry and the government really start to actively and vocally engage with the sociological and psychological dimensions of cyber security.
For a long time, when the industry has considered the human nature of cyber security, it has been within the context of a narrative that ‘humans are the weakest link’. My argument has long been that if that is the case, then that is our failing as an industry. Cyber security relies on engaging, educating and supporting people, and enabling them to create, share and store information safely. If people are failing to conceive of the threats online, if they are unable to appreciate the value of the information they are handling, and if they are struggling to enact the advice we provide to keep their information more secure, then we are not doing our job. At the recent NCSC flagship conference in Liverpool, Emma W expressed it perfectly:
For security to work for people, we need to communicate with people clearly, which means speaking in a way people understand. This sounds straightforward, but we use terms in this industry which most people don’t get, including terms that most of us would probably not think of as technical, like two-factor authentication. There is even a disconnect between what we, in the industry, call our profession and what the general public call it. We need communication skills to get people engaged with the subject, to empower them to behave more securely online and to get senior support of our work. We know from psychology that the more clearly and concisely we can communicate a message, the more likely people are to actually engage with it (known as the fluency heuristic).
I often hear that the solution to the ‘people problem’ is awareness-raising training. The logic behind this makes sense, but lacks nuance. Awareness is not an end in itself: you want to raise awareness to change behaviours and contribute to a positive cyber security culture. My research, such as this on passwords from a few years ago, suggests that awareness of cyber security is actually quite high, but we’ve been less successful in changing behaviours.
Why have we had less success in changing behaviours? One reason, of course, is that we put too much responsibility for security onto people. This, in turn, leads to security fatigue, as reported by NIST last year.
An inevitable part of cyber security awareness-raising involves talking about threats, which essentially embodies a ‘fear appeal’ (a message that attempts to arouse fear in order to influence behavior). Research on the sociology and psychology of fear teaches us that we have to be very careful using fear appeals. If we talk about scary things in the wrong way, this can backfire and lead to denial (“hackers wouldn’t want my data so I don’t need to worry about security”) or avoidance (“the internet is the wild west so I try not to use it”). I’ve talked in much more detail about the implications of the sociology and psychology of fear on cyber security elsewhere, such as here.
Why are these nuances so often overlooked when it comes to the design and delivery of cyber security awareness-raising training? Partly, it is because the people responsible for the training are usually experts in technology and security, but not in people, as this research from SANS Securing The Human shows (pdf link). Exacerbating this, how many organisations train their IT and information security teams in issues relating to sociology, psychology and communications? When it comes to awareness-raising, all of our focus is on training people about cyber security; we don’t train cyber security professionals about people. I spoke about this issue at Cybercon recently and the NCSC picked up on this at their flagship event, quoting a comment I made about the need to train not just ‘users’ in technology, but also technologists in ‘users’:
Lacking training, technical professionals are unaware of the psychological power of so-called hot states. Cyber criminals, on the other hand, are well aware of the psychological irresistibility of temptation, curiosity, ego and greed, which is why so many social engineering attacks capitalise on the power of these triggers.
Without an understanding of why people do what they do, is it any surprise that when people click links in phishing emails, the technical team will label them ‘stupid’? To the IT or infosec team, when people who have been trained to be wary of suspicious-looking emails continue to click links in suspicious-looking emails, they are being illogical and stupid. The problem with this (other than it not being true and not being very nice, of course) is that ‘the user is stupid’ narrative only creates more disconnect between cyber security and the rest of the business. When we expect little of people, we get little back (the Golem Effect) and when we expect a lot, we get a lot (the Pygmalion Effect).
Another problem with the ‘people are stupid’ narrative is that it undermines people within our industry, too. There is a tendency, in our culture, to tear people down when they make a mistake or get something wrong. Not only does this contribute to a culture of imposter syndrome, but it arguably also makes our organisations more insecure, too. Human vulnerabilities can lead to technical ones, as I discuss here. If we continue to lazily push the ‘people are stupid’ narrative, we continue to be a big part of the problem, not the solution.
NB: there are many, many other elements to the human nature of cyber security, of course. For example, I haven’t even begun to tackle the motivations of malicious cyber actors or issues around management and organisational learning here. I’ve also barely scratched the surface of the culture(s) of our industry or ethics and cyber security in this blog post. Nor have I commented on diversity, what that means and why it matters. I’ll save my thoughts on those topics, and more, for another day.
I spoke on Channel 4 News earlier this week about the debate surrounding end-to-end encryption. The debate, which is often framed in terms of privacy vs security, emerged last weekend when Amber Rudd (the UK’s Home Secretary) argued that it was “completely unacceptable” that the government could not read messages protected by end-to-end encryption. Her comments were in response to reports that Khalid Masood was active on WhatsApp just before he carried out the attack on Westminster Bridge on 22 March 2017. Rudd was, therefore, talking in this case about WhatsApp, although her comments obviously have connotations for other messaging services, and the use of encryption in general.
WhatsApp explain what end-to-end encryption means when using their app:
“WhatsApp’s end-to-end encryption ensures only you and the person you’re communicating with can read what is sent, and nobody in between, not even WhatsApp. Your messages are secured with a lock, and only the recipient and you have the special key needed to unlock and read your message. For added protection, every message you send has a unique lock and key. All of this happens automatically: no need to turn on settings or set up special secret chats to secure your messages” WhatsApp FAQs
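WhatsApp’s actual implementation (the Signal protocol) is far more sophisticated, but the property described above, that the two endpoints can agree on a secret key the relaying server never learns, ultimately rests on Diffie-Hellman key agreement. A toy sketch, using a deliberately tiny prime purely for illustration (real systems use 2048-bit-plus groups or elliptic curves, never anything this small):

```python
import secrets

# Toy Diffie-Hellman key agreement. The 64-bit prime below is insecurely
# small and chosen only so the demo runs instantly; do not use in practice.
P = 2**64 - 59  # a prime modulus (demo-sized assumption)
G = 2           # generator

def keypair() -> tuple[int, int]:
    priv = secrets.randbelow(P - 2) + 2   # private key never leaves the device
    return priv, pow(G, priv, P)          # public key can be sent in the open

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key.
# A server relaying alice_pub and bob_pub cannot feasibly compute this value.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared  # same secret, never transmitted
```

Because only the endpoints can derive the shared secret, any message key built from it is unreadable to everyone in between, which is precisely why a service provider cannot simply hand over message contents on request.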
This is not the first time that anti-encryption messages from the government have hit the headlines. In 2015, for example, then-PM David Cameron called for encryption to be banned. As I mentioned earlier, the argument is often presented as being about privacy vs security. Those in favour of banning encryption argue that it would protect national security, by preventing malicious actors (such as terrorists) from being able to communicate in secret. Those who make this argument claim that this overrides any individual right to privacy. The government have a challenging job to do and surely this is never more challenging than in the wake of a terrorist attack. We also cannot expect ministers to be subject matter experts on all issues which are presented before them, let alone have a deep understanding of technical complexities.
The issue, from my point of view, is that this is not about security vs privacy at all. To ban or undermine encryption has security implications, given that encryption underpins online banking, financial transactions and more sensitive communications. The UK government has pledged to make this country the most secure place in the world to do business online, which appears at odds with their messages on encryption. The true debate we need to have, then, is about security vs security.
I recently spoke to Jeremy Vine and his listeners on Radio 2 about ransomware.
Action Fraud reported last year that 4,000 people in the UK have been victims of ransomware, with over £4.5 million paid out to cyber criminals. As these are only the reported figures, the true number of people impacted, and the sum paid to criminals, will unfortunately be significantly higher.
The first known ransomware was reported in 1989 and called the ‘AIDS Trojan’. It was spread via floppy disks and did not have much of an impact. Times have changed since 1989 and ransomware as a means of extortion has grown exponentially in recent years due to a combination of factors:
Society’s growing use of computers and the internet
Developments in the strength of encryption
The evolution of bitcoin and the associated opportunity for greater anonymity
Last year we saw reports of strains whereby victims are promised the decryption key if they infect two of their contacts (called Popcorn Time) and others in which criminals promise to unlock data when the victim reads some articles on cybersecurity (known as Koolova). Ransomware-as-a-service, in which criminals essentially franchise their ransomware tools on the dark web, appears to be very profitable, with Cerber ransomware reportedly infecting 150,000 devices and extracting $195,000 in ransom payments in July 2016 alone.
Listen to my chat with Jeremy Vine and his listeners for more information on ransomware and what to do if you’re hit. *Spoiler*: I recommend offline back-ups a lot and plug The No More Ransom Project, an initiative between law enforcement and IT Security companies to fight against ransomware.