This morning I was invited on to BBC 5 Live to discuss Britain’s readiness in terms of state-on-state cyber activities. Listen to what I had to say below:
There are some simple things you can do, as an individual, to better protect yourself online.
Look After Your Accounts
As a home computer user, you can also consider writing your passwords down in a book and storing that book in a safe place. Bear in mind the worst thing that can happen here: people you share the house with may find the book and use the passwords to get into your accounts. If that would be a problem for you, then don’t do it. But if that risk doesn’t apply to you, a book lets you use long, complicated passwords without having to remember them or use a password manager. Someone is far more likely to break a weak password over the internet than to break into your house and steal your book of passwords as a way into your accounts. This approach is fine for most people at home, but not for people who live with those they cannot trust, and not for use in an office.
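To put rough numbers on that last claim, here is a quick back-of-the-envelope sketch in Python. The guess rate is an illustrative assumption (a plausible order of magnitude for offline password cracking), not a measured figure:

```python
# Back-of-the-envelope comparison: how long would it take to try every
# possible password of a given shape? The guess rate is an assumption.
def seconds_to_exhaust(charset_size: int, length: int,
                       guesses_per_second: float = 1e10) -> float:
    """Worst-case time to try every password of this length and alphabet."""
    return charset_size ** length / guesses_per_second

weak = seconds_to_exhaust(26, 8)      # 8 lowercase letters
strong = seconds_to_exhaust(94, 16)   # 16 characters from the full keyboard

print(f"8 lowercase letters: {weak:,.0f} seconds")
print(f"16 mixed characters: {strong / (60 * 60 * 24 * 365):.1e} years")
```

On those assumptions, the weak password falls in around twenty seconds, while exhausting the long one would take on the order of a hundred trillion years; the password book wins comfortably.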
Look After Your Browsing
Look After Your Devices and Data
Cyber insecurity often manifests as a combination of human, technical and physical dimensions. Yet the cyber security industry is incredibly siloed when it comes to these different facets. FC and I have had many conversations about how frustrating it is that the industry is so fragmented when the problems are so interconnected.
This is why we have founded Redacted Firm. We’re bringing together our backgrounds in the human, technical and physical areas of cyber security to offer a more comprehensive and integrated approach to training, testing and talking about cyber security.
For the last few months we’ve been working with clients on behavioural and cultural-change training, penetration testing and various consultancy projects. We’ve also been talking about cyber security at events. In the coming months, we will be speaking in Sweden, Canada, the US, the UK, the Netherlands, Belgium and Austria (among many other places!). Details of our upcoming speaking events are here.
A couple of the events we have spoken at were for the Cyber Security Challenge. As individuals, we have been supporters of the Cyber Security Challenge for many years, and we are delighted to be official Affiliate Sponsors of the initiative going forward.
Check out our website, follow us on Twitter and please don’t hesitate to get in touch if you would like a more detailed conversation about what we do.
Today I went to register with a new GP practice. In order to register, I was told that I needed photo ID ✅, two utility bills with my name and address on them ✅ and my NHS number ❎. I have never needed to know my NHS number before, but I was told it was this surgery’s policy that people provide it when they register, so that the surgery can verify their identity. I was surprised and a bit irritated, because my passport and utility bills should be sufficient, but it seemed non-negotiable. I was told that if I rang my old GP surgery, they would provide me with my NHS number 🤷‍♀️.
I dutifully rang my old GP surgery with my request. The receptionist who answered told me that I needed to put my request in writing as they cannot give out personal information over the phone. I pressed her on this and she explained it was for “data protection” as she could not verify my identity over the telephone (“you could be anyone”). I was told that if I sent an email, they would reply with my NHS number. The rest of the conversation went roughly like this:
Me: “But how can you verify my identity over email?”
🤔 In the pause before she replied, I could hear the realisation sink in
Her: “…well… do you have your name in your email address?”
Me: “I do, but anyone can create an email address with somebody’s name in it”
Her: “exactly, and it’s the same with the phone” 🤦‍♀️
Me: “exactly!”
Her: “yes but we’re not supposed to give personal information over the phone” 😔
And so we got to the crux of the matter: “supposed”. Policies and training that are not fit for purpose. People being told what not to do, without being trained in the risks or why certain things should not be done. A policy that does not account for practicalities, determined without consulting the personnel who have to enact it; someone told what they cannot do without being advised what they can do.
In trying to receive healthcare today, I was blocked by the policies of two GP surgeries. According to the NHS website, I should not have needed to know my NHS number to register with a new surgery, and my old practice should have told me what my number is. The two receptionists I communicated with were not malicious; they did not want to get in my way or to be interrogated about their data privacy and security policies. They were just trying to do their jobs, as much as I was just trying to see a doctor. Too often, data security and privacy approaches undermine people when they should empower them.
In the end, I did email the surgery to ask for my NHS number, as well as offering free consultation and training. I have not heard back, most likely because the recipient is paralysed over whether they can share my number via email or by any other means. I doubt they will take up the offer of free help, but I hope they do. Meanwhile, I went to a walk-in centre and was treated by a brilliant doctor, who also gave me my NHS number. I gave my name, address and date of birth when I made the appointment, but at no point did I have to do anything further to verify my identity.
We need to push harder to make security and privacy more fit-for-purpose and less ridiculous.
I was interviewed on Sky News on Tuesday 8 August discussing the HBO hack.
“Cyber attacks can happen to anyone” – security consultant Dr Jessica Barker talks about the #GameOfThrones hack #GoT #SNT pic.twitter.com/LSWe74S0ya
— Sky News Tonight (@SkyNewsTonight) August 8, 2017
It’s hard to get all of the interesting points about an issue into a quick news interview; the key points about the story from my perspective are:
Perceptions of trust
Research recently published by the Ponemon Institute found that consumer trust in certain industries may be misplaced:
It is worth noting that the research was conducted before WannaCry hit the NHS: it would be interesting to see whether the perceived trustworthiness of healthcare providers has been affected by that high-profile incident.
Why do some sectors have a much stronger image of trustworthiness when it comes to cyber security, even when this contradicts reality? Is this consumer wishful thinking, media reporting bias or something else? Should organisations that are not rated highly for trustworthiness be more vocal about the efforts they undertake to be more secure?
Is brand protection part of your job?
The survey, The Impact of Data Breaches on Reputation & Share Value: A Study of Marketers, IT Practitioners and Consumers in the United Kingdom (sponsored by Centrify), also found that IT practitioners do not believe that brand protection is their responsibility:
The research surveyed three groups (IT practitioners, Chief Marketing Officers (CMOs) and consumers) to ascertain their perspectives on data breaches.
Perhaps unsurprisingly, CMOs allocate more of their budgets to brand protection than IT departments do:
Should IT practitioners be more proactive in pursuing brand preservation? Or is it the responsibility of organisations to encourage their IT departments to be more engaged in reputational protection?
Impact blind spots
The loss of stock price seems to be a blind spot for both CMOs and IT practitioners. Reputation loss due to a data breach is one of the biggest concerns for both groups, and yet:
IT practitioners and CMOs share loss of reputation as their biggest concern following a breach, but beyond that, their concerns are specific to their function.
For CMOs, the top three concerns about a data breach are:
For IT, the biggest concerns are:
What are the implications for an organisation’s cyber security when the marketing department is so externally focused and the IT department so internally focused?
A couple of months ago, I was invited to Seattle to discuss the human elements of cyber security for Microsoft Modern Workplace. We talked about topics like communication, fear, social engineering, how to engage with senior executives and the idea of people as the ‘weakest link’. It inspired me to pull together some of my thoughts and work regarding how I define the human nature of cyber security.
“If you engage in changing your culture, if you engage in empowering your staff… then people go from being the weakest link to the biggest part of defence” Dr Jessica Barker speaking to Microsoft
Bruce Schneier popularised the concept in 1999: cyber security is about people, process and technology. Yet almost two decades later, the industry still focuses far more on technology than on the other dimensions of our discipline. My consultancy work has always centred on the human nature of cyber security, and I’ve been talking publicly (at conferences and in the media) about this subject throughout the four years I’ve been running my business. In the last year or so, it’s been gratifying and encouraging to see industry and government really start to engage, actively and vocally, with the sociological and psychological dimensions of cyber security.
For a long time, when the industry has considered the human nature of cyber security, it has been within the context of a narrative that ‘humans are the weakest link’. My argument has long been that if that is the case, then that is our failing as an industry. Cyber security relies on engaging, educating and supporting people, and enabling them to create, share and store information safely. If people are failing to conceive of the threats online, if they are unable to appreciate the value of the information they are handling, and if they are struggling to enact the advice we provide to keep their information more secure, then we are not doing our job. At the recent NCSC flagship conference in Liverpool, Emma W expressed it perfectly:
Emma W: people are not the weakest link. They are the /only/ link. If security doesn’t work for people, it doesn’t work #CYBERUK17 pic.twitter.com/WKHpAgQv5f
— Harry Metcalfe (@harrym) March 15, 2017
For security to work for people, we need to communicate with people clearly, which means speaking in a way people understand. This sounds straightforward, but we use terms in this industry that most people don’t get, including terms that most of us would probably not think of as technical, like two-factor authentication. There is even a disconnect between what we, in the industry, call our profession and what the general public call it. We need communication skills to get people engaged with the subject, to empower them to behave more securely online and to win senior support for our work. We know from psychology that the more clearly and concisely we communicate a message, the more likely people are to actually engage with it (known as the fluency heuristic).
I often hear that the solution to the ‘people problem’ is awareness-raising training. The logic behind this makes sense, but lacks nuance. Awareness is not an end in itself: you want to raise awareness to change behaviours and contribute to a positive cyber security culture. My research, such as this on passwords from a few years ago, suggests that awareness of cyber security is actually quite high, but we’ve been less successful in changing behaviours.
Why have we had less success in changing behaviours? One reason, of course, is that we put too much responsibility for security onto people. This, in turn, leads to security fatigue, as reported by NIST last year.
An inevitable part of cyber security awareness-raising involves talking about threats, which essentially embodies a ‘fear appeal’ (a message that attempts to arouse fear in order to influence behaviour). Research on the sociology and psychology of fear teaches us that we have to be very careful using fear appeals. If we talk about scary things in the wrong way, it can backfire and lead to denial (“hackers wouldn’t want my data so I don’t need to worry about security”) or avoidance (“the internet is the wild west so I try not to use it”). I’ve talked in much more detail about the implications of the sociology and psychology of fear for cyber security elsewhere, such as here.
Why are these nuances so often overlooked in the design and delivery of cyber security awareness-raising training? Partly, it is because the people responsible for the training are usually experts in technology and security, but not in people, as this research from SANS Securing The Human shows (pdf link). Making matters worse, how many organisations train their IT and information security teams in issues relating to sociology, psychology and communications? When it comes to awareness-raising, all of our focus is on training people about cyber security; we don’t train cyber security professionals about people. I spoke about this issue at Cybercon recently, and the NCSC picked up on it at their flagship event, quoting a comment I made about the need to train not just ‘users’ in technology, but also technologists in ‘users’:
Without such training, technical professionals are often unaware of the psychological power of so-called ‘hot states’. Cyber criminals, on the other hand, are well aware of the psychological irresistibility of temptation, curiosity, ego and greed, which is why so many social engineering attacks capitalise on these triggers.
Without an understanding of why people do what they do, is it any surprise that when people click links in phishing emails, the technical team labels them ‘stupid’? To the IT or infosec team, when people who have been trained to be wary of suspicious-looking emails continue to click links in suspicious-looking emails, they are being illogical and stupid. The problem with this (other than it not being true and not being very nice, of course) is that the ‘user is stupid’ narrative only creates more disconnect between cyber security and the rest of the business. When we expect little of people, we get little back (the Golem Effect), and when we expect a lot, we get a lot (the Pygmalion Effect).
Another problem with the ‘people are stupid’ narrative is that it undermines people within our industry, too. There is a tendency in our culture to tear people down when they make a mistake or get something wrong. Not only does this contribute to a culture of imposter syndrome, but it arguably makes our organisations more insecure, too. Human vulnerabilities can lead to technical ones, as I discuss here. If we continue to lazily push the ‘people are stupid’ narrative, we remain a big part of the problem, not the solution.
NB: there are many, many other elements to the human nature of cyber security, of course. For example, I haven’t even begun to tackle the motivations of malicious cyber actors or issues around management and organisational learning here. I’ve also barely scratched the surface of the culture(s) of our industry or ethics and cyber security in this blog post. Nor have I commented on diversity, what that means and why it matters. I’ll save my thoughts on those topics, and more, for another day.
I spoke on Channel 4 News earlier this week about the debate surrounding end-to-end encryption. The debate, which is often framed in terms of privacy vs security, emerged last weekend when Amber Rudd (the UK’s Home Secretary) argued that it was “completely unacceptable” that the government could not read messages protected by end-to-end encryption. Her comments were in response to reports that Khalid Masood was active on WhatsApp just before he carried out the attack on Westminster Bridge on 22 March 2017. Rudd was, therefore, talking in this case about WhatsApp, although her comments obviously have implications for other messaging services, and for the use of encryption in general.
WhatsApp explain what end-to-end encryption means for users of their app:
“WhatsApp’s end-to-end encryption ensures only you and the person you’re communicating with can read what is sent, and nobody in between, not even WhatsApp. Your messages are secured with a lock, and only the recipient and you have the special key needed to unlock and read your message. For added protection, every message you send has a unique lock and key. All of this happens automatically: no need to turn on settings or set up special secret chats to secure your messages” WhatsApp FAQs
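WhatsApp actually uses the Signal protocol, which adds per-message keys, forward secrecy and much more, but the core idea in that FAQ can be sketched with a general-purpose public-key library such as PyNaCl. This is a conceptual illustration, not WhatsApp’s implementation:

```python
# Conceptual sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustrative only: WhatsApp's real protocol (Signal) is far more sophisticated.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # never leaves Alice's device
bob_key = PrivateKey.generate()     # never leaves Bob's device

# Alice encrypts with her private key and Bob's *public* key
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"See you at 8?")

# A server relaying `ciphertext` sees only random-looking bytes.

# Only Bob can decrypt, using his private key and Alice's public key
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'See you at 8?'
```

The point the FAQ is making is visible here: the private keys exist only at the endpoints, so anyone in between, including the service provider, holds nothing that can decrypt the message.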
This is not the first time that anti-encryption messages from the government have hit the headlines. In 2015, for example, then-PM David Cameron called for encryption to be banned. As I mentioned earlier, the argument is often presented as being about privacy vs security. Those in favour of banning encryption argue that it would protect national security by preventing malicious actors (such as terrorists) from communicating in secret, and claim that this overrides any individual right to privacy. The government have a challenging job to do, and surely it is never more challenging than in the wake of a terrorist attack. We also cannot expect ministers to be subject matter experts on all the issues presented to them, let alone to have a deep understanding of technical complexities.
The issue, from my point of view, is that this is not about security vs privacy at all. Banning or undermining encryption has security implications of its own, given that encryption underpins online banking, financial transactions and other sensitive communications. The UK government has pledged to make this country the most secure place in the world to do business online, which appears at odds with its messages on encryption. The true debate we need to have, then, is about security vs security.
I recently spoke to Jeremy Vine and his listeners on Radio 2 about ransomware.
Action Fraud reported last year that 4,000 people in the UK had been victims of ransomware, with over £4.5 million paid out to cyber criminals. Since these are only the reported figures, the true number of people affected, and the sum paid to criminals, will unfortunately be significantly higher.
The first known ransomware, the ‘AIDS Trojan’, was reported in 1989. It was spread via floppy disks and did not have much impact. Times have changed since then, and ransomware as a means of extortion has grown exponentially in recent years due to a combination of factors:
Last year we saw reports of strains in which victims are promised the decryption key if they infect two of their contacts (called Popcorn Time), and others in which criminals promise to unlock data when the victim reads some articles on cyber security (known as Koolova). Ransomware-as-a-service, in which criminals essentially franchise their ransomware tools on the dark web, appears to be very profitable, with Cerber ransomware reportedly infecting 150,000 devices and extracting $195,000 in ransom payments in July 2016 alone.
Listen to my chat with Jeremy Vine and his listeners for more information on ransomware and what to do if you’re hit. *Spoiler*: I recommend offline back-ups a lot and plug The No More Ransom Project, an initiative between law enforcement and IT security companies to fight ransomware.
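For anyone who wants to act on the offline back-up advice, here is a minimal sketch of the sort of routine I mean, written in Python. The paths are placeholders for your own folders and external drive; the protection comes not from the code but from physically disconnecting the drive afterwards:

```python
# Minimal offline-backup sketch; the paths are placeholders for your own setup.
# The drive should only be connected while the backup runs, then unplugged,
# so ransomware on the computer can never reach the copies.
import hashlib
import shutil
from pathlib import Path

SOURCE = Path.home() / "Documents"             # folder to protect
DEST = Path("/media/backup-drive/documents")   # external drive, normally unplugged

def sha256(path: Path) -> str:
    """Checksum a file so the copy can be verified."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

for src in SOURCE.rglob("*"):
    if src.is_file():
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                 # copy file with timestamps
        assert sha256(src) == sha256(dst), f"copy failed: {src}"

print("Backup verified - now disconnect the drive.")
```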
For my second Digital Guardian blog post, I continued looking at GDPR. As research from (ISC)2 has highlighted, one of the biggest challenges with GDPR projects is securing senior-level support (and the budget that goes with it); as with many cyber security projects, winning that support requires effective communication. Read what I have to say in Digital Guardian for some tips on how to get the board on board.