Yesterday Yahoo confirmed that it was the target of the biggest known cyber attack in history. In 2014, 500m Yahoo accounts were compromised, with data including names, email addresses, telephone numbers, dates of birth, hashed passwords (most with bcrypt) and security question answers, some of which were unencrypted. Yahoo has claimed that this attack was carried out by a state-sponsored attacker.
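As an aside for the technically curious: the bcrypt detail matters because bcrypt is a deliberately slow, salted hash, which makes offline cracking of a leaked password database far more expensive than fast hashes like MD5 or SHA-1. A minimal sketch of the same idea using only Python's standard library (PBKDF2 standing in for bcrypt, which needs a third-party package):

```python
import hashlib
import os
import secrets

ROUNDS = 200_000  # work factor: each password guess costs this many iterations

def hash_password(password, salt=None):
    """Salted, deliberately slow hash (PBKDF2 here; Yahoo used bcrypt)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    # Constant-time comparison avoids leaking information via timing
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("letmein", salt, digest))                       # False
```

The per-user salt means identical passwords hash differently for each account, and the work factor is what buys users time to change their passwords after a breach.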
I’ve done a few radio interviews giving my thoughts today, and you can listen to what I had to say on BBC Radio London.
Or watch a quick clip of me discussing the breach on Channel 4 News:
I’ve talked about the need for people to change their Yahoo password, set up two-factor authentication and use strong, unique passwords on all of their accounts. This breach also highlights the need for people to deactivate accounts they no longer use.
Some thoughts which I didn’t get the chance to go into on the radio or TV:
This is the biggest known cyber attack or data breach in terms of number of accounts compromised, and the impact on users could be significant. However, it’s interesting to consider how we classify the ‘biggest hack ever’. While this is the largest in terms of volume, is it the biggest in terms of impact? I’m thinking particularly of the Ashley Madison breach, following which at least one user reportedly committed suicide, and the US Office of Personnel Management breach, when information, including fingerprint data, of US government employees (some of whom of course have security clearance) was compromised.
It also needs to be noted, of course, that this is the biggest known breach. When the Myspace breach of 360 million accounts came to light in May of this year, that was reported as the biggest breach only because we didn’t know about this one. Who knows what breaches have taken place that are simply not known?
When Yahoo confirmed the breach yesterday, many people highlighted the fact that security researchers informed Yahoo in July that account information stolen in 2012 was seemingly being sold on the dark web. At the time, Yahoo responded by saying they were investigating. Many people understandably assumed that the breach Yahoo were confirming yesterday was the same data advertised for sale earlier this summer. Apparently this is not the case. It seems, rather, that when Yahoo investigated the purported 2012 breach they found no evidence to support that it was legitimate but the investigation found another breach, the 2014 one we’re now hearing about.
Yahoo actually seem to be handling the breach communications pretty well, albeit belatedly. Many have expressed surprise that they are so confident that the attacker is a state-sponsored individual, not least because attribution in this space is so notoriously difficult. Putting that to one side, Yahoo have been prompting users to change their passwords and have put in place communications such as this FAQ, which are really helpful. It is a shame, however, that they aren’t using the opportunity to get more people using their two-step verification. My research suggests that only 20% of British people use 2FA, which is a real concern given how much more effective it is than simply having a password.
The news of this breach comes as Yahoo is in the process of finalising the sale of its business to Verizon, an acquisition which began in July. It will be interesting to see if the breach has any impact on this.
A couple of days ago I was interviewed on LBC radio about the recommendation from FBI director Jim Comey that everyone should cover their webcams. You can listen to what I had to say here:
The media response to this advice (much like the response to the fact that Mark Zuckerberg covers his) seems to have been one of surprise. However, for most in the cybersecurity industry, it won’t come as a shock. I’ve covered my laptop webcam for years as one of many precautions to stay safer online.
How a webcam can be hacked
Criminals can gain access to a webcam by using malware or Remote Administration Tools (RATs). Malware and RATs can be planted on your machine most commonly via infected files or malicious links, so being wary of what you click on whilst using the internet and opening emails is crucial.
Remote-access webcams are vulnerable to hacking like anything else connected to the internet, often by owners using default or weak passwords.
How often does it happen and why should I care?
Like all crime, let alone cybercrime, it’s impossible to say how often it happens. The key consideration for me, here, is impact. If your webcam is hacked, the impact of that can be huge. Think about the amount of time your laptop screen is left open, ‘looking’ at you. Perhaps you leave it open in your room while you get changed, perhaps you work in your underwear – and I’ll leave you to think about all of the other things you do in front of your laptop screen that you would rather not share with the rest of the world.
There have been some pretty well-known cases of webcam hacking:
In 2014, Jared James Abrahams was sentenced to 18 months in prison for hacking the webcams of women and girls and secretly taking photos of them while they were undressed. He then contacted his victims and threatened to publish the photos online if they did not send more or undress for him via Skype. Abrahams reportedly told investigators that he hacked the webcams of 150 women and girls. One of his victims was Miss Teen USA, Cassidy Wolf, who has since campaigned to raise awareness of cybersecurity among young people.
In 2014 it came to light that a Russian website was sharing videos illegally captured from 10,000 webcams worldwide (584 of which were in the UK). The site targeted remote-access cameras that were still ‘protected’ by the manufacturer’s default password, whilst also providing the information needed to hack into the camera systems, plus GPS locations and postcodes. The site proclaimed that it was in operation to highlight the importance of security settings.
In 2015, Stefan Rigo was convicted in the UK of using the malware ‘Blackshades’ to infect victims’ computers and take over their webcams. Forensic examination of his computers found images of people engaged in sexual acts over Skype or in front of their computers. During his trial he admitted to being addicted to monitoring people via their computers, spending 5 to 12 hours a day doing so over a three-year period.
So, should I cover my webcam and then I don’t have to worry about it?
I recommend covering your webcam. You probably don’t use it much and it’s easy to cover it with a little sticker or piece of sticky paper which you can simply remove when you need to use it. This will stop anyone being able to see you or take images of you via your webcam without you knowing about it.
However, this is – literally – a sticking plaster for the problem. Covering your webcam is one thing but if your webcam is hacked, that means your machine has been hacked and the attacker could be accessing all of your other information and / or using your machine as part of a DDoS botnet. So at the same time as covering your webcam, you should also:
Be wary of clicking links and downloading documents when you browse the internet and read emails, texts, WhatsApp messages, etc
Use anti-virus and anti-malware software
Keep devices and software up-to-date so that known bugs will be patched and can’t be exploited by attackers
Don’t use public wifi where you could become the victim of a man-in-the-middle attack
If you have a remote-access webcam, change the password from the default one. Use a strong password
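On that last point, a strong password doesn’t need to be memorable if you keep it in a password manager. Here’s a minimal sketch of generating one with Python’s standard library (the length and character set are my choices, not a standard):

```python
import secrets
import string

def strong_password(length=20):
    """Generate a random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())  # different every run
```

The `secrets` module exists precisely for this: unlike `random`, its output is suitable for security-sensitive use.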
Remember: there is a webcam on your mobile phone and your phone probably sees more intimate images of you than even your laptop does. Your mobile is a computer and can be hacked just like your laptop, so all of my advice relates to them, too. Chances are that you use your phone camera more than your laptop one and so a sticker might not be practical, in which case there are products available which can cover the front and back lenses whilst still giving you access to the camera.
In the final blog post of Chris Ratcliff’s series about speaking at a security conference, it’s SteelCon 2016 and the day of the presentation itself. Read on for a presenter’s-eye view of standing up and giving a talk – and why, if you haven’t already, you should give it a go.
So far in these blog posts I’ve gone from never having set foot in a security con, to doing my first talk to preparing my latest one. This post is the talk itself…
The day. Oh, what a day. Steelcon is a fantastic event, with great talks and a great atmosphere. Since it started I’ve been taking photos for the event. That means bouncing from hall to hall, taking photos, chatting with folk, Tweeting photos and generally keeping an eye on everything going on. The downside of this is that I never really get a chance to sit down and watch a full talk live, but on the upside I don’t have time to worry about my presentation. About half an hour before my slot I go into presentation mode. Boot the laptop, check my clicker is working, make sure the presentation is there and quickly run through the slides. I start pacing and I’ve got about 10 mins before I need to go up. I’m making idle chit chat with people, but mostly I’m thinking about the talk.
The talk. The easy bit. Once I’d resolved an AV issue getting my laptop hooked up to both the room and recording equipment, I could begin. This really is the easy bit, because the slides can’t be changed now, you’ve rehearsed (hopefully) the talking bit, it’s now just a case of doing it. The one piece of advice I was given very early on was “know your opening”. If you know the first thing you’re going to say when you open your mouth, then the rest will flow from there. And I fluffed the first thing I had to say.
The important thing is not to dwell on your mistakes. They’ve happened, and they can’t be changed. Just keep moving on. I knew the next bit, and off I went.
I made a few mistakes as I went through, especially getting myself horribly confused on the Fight Club reference about the cost of a safety recall vs the cost of the fines or compensation. I also knew going in I was struggling with the name of the airbag manufacturer. Takata, simply pronounced ta-cat-a. I can do it now, takatatakatatakatatakatatakatatakatatakatatakatatakatatakata, but I knew I’d struggled with it going into the talk and sure enough I saw it looming and tripped again.
I had to remember that at a friendly event, filled with likeminded people, the room is one of the most supportive you’ll ever get as a presenter. Everyone there wants to enjoy the talk, and wants you to do well. There’s also no harm in being vulnerable, being open, being honest. When I forgot one of the important researcher’s names I was citing and someone called it out, I had to say something so I was honest about why I kept forgetting. It’s humanising.
The review. The important bit. I was lucky this year that Steelcon had video recording running, and running reliably, across all the talks so I could watch my talk back. It’s weird, it’s a bit uncomfortable, but it’s crucial in being able to identify what went well, and what you can improve. I discovered errors I’d made and not even noticed – such as mixing up a clutch and throttle pedal – while other mistakes came from not having planned out a slide properly: instead of making the point I wanted, I’d ended up making a different one.
I was pleased with the response I received both at the event straight after the talk, and subsequently on social media. If you can, however, solicit feedback and critique. While it’s nice to get praise, constructive feedback will give you a different perspective on how things went and areas to work on next time.
Thinking back to my crisis of confidence I remembered the thing I worried about most; the lack of technical depth. The truth is that I’d laid my stall out well in my CFP so it was clear that my talk was going to be high level. On top of that I only really had 45 minutes or so, and a lot of ground to cover – 60 slides in fact! Those who knew the subject took away something new, but what I came to realise was that many people came in with little or no knowledge of the subject. What’s obvious or basic to me is only that because I’ve been dabbling in car systems for three or four years. The field of computer security is so huge that no one knows everything about everything, and what’s basic for one person is enlightening for another.
If you have a desire to talk at a security con (or anywhere!) then I hope this has been useful, or at the very least shown what it’s like from behind the microphone. It can be daunting to stand up and ask people to listen to you, but it’s a thrilling challenge, and one I would recommend anyone tries at least once.
In the second of a great three-parter, Chris Ratcliff talks us through what it’s like to go from attending security conferences to speaking at them. This post is specifically about honing the idea for a CFP through to preparing the presentation in its entirety. If you’ve ever thought about speaking at a conference, these posts are full of helpful, reassuring and practical advice. If you’re already experienced in conference speaking, there will be so much you can relate to (hi, google image search!) and probably something you can learn from, too.
In the last post I talked about my introduction into the world of security cons and why people talk at them. With a desire to present at Steelcon 2016 here’s how I went through the process step by step.
The subject. A tricky one. I still don’t have a technical field of research with breakthroughs I can talk about. Can I do people-y stuff again? Do I have anything to say? I keep work stuff separate from what I present about otherwise it opens a whole new can of worms.
I do have an interest in cars, and I’ve done a bit of tinkering with them. I start thinking, I start talking in my head. I start doing a presentation just to see where it leads, and I think I can bring some insight. I’m really interested and excited about the subject. This is key, as if you’re not passionate then that will come across. Most of all, I think I can add value to the event and to the community. Great, I have an idea, and I have a sense of purpose.
The CFP. This is the tricky bit. Most cons have a Call For Presentations where you submit your idea and an abstract that will become essentially your advert in the event schedule. You want to give a flavour of your talk without going through the content, to be detailed enough as to why you should present, and to attract attendees to come and see you! I’ve seen some really underplay their value, while others really go OTT to stand out.
My success ratio isn’t brilliant, so I wrote what I thought was best, and bounced it off a few people with more experience than me, who helped shape it into a couple of simple sentences. I submitted it, and it was accepted!
I had an idea, and I had a slot. So now I just have to write and prepare and give my talk. Easy, right?
When it comes to presenting, I’m constantly aware of other people who do it really, really well. Great presenters tell great stories, and I’m a huge stand-up comedy fan. Dave Gorman’s Googlewhack Adventure is a fantastic story, but it’s also a good example of working with Powerpoint. Steve Jobs’ Apple keynotes are case studies in what not to include in presentations, and how to build a narrative and take the audience with you. TED talks can be great too; watching people like Mark Ronson ooze his personal calmness or Adam Savage be passionate is inspiring and informative. It’s not just about what they say, it’s how they say it.
These people are professionals at this though, how does this apply to me?
Every time you watch anyone get up and talk, you can learn about presenting. If you attend an event with many speakers, watch each one and look for things you could do too: what the speaker did well, and where they could have improved, and apply all of those to yourself. It may feel like imitation at times, but it’s a great way to identify a good trait and try it out.
I had an idea, a concept. I wrote down four major bullet points for my talk:
I’d start slotting things into those headings, and discarding ideas that didn’t fit. I carried on presenting my talk to myself while out walking or doing the shopping or cleaning the kitchen. Letting my mind go down new paths and make new connections. I’d jot down ideas or update notes on my phone so I didn’t forget them – though ‘photo of footwell’ in isolation isn’t always very helpful. I started outlining what slides would be needed. With a couple of weeks to go before the event, I obviously had a full slide deck ready for polishing.
The more I outlined the slides, the more research ideas turned up, which meant more Googling, which opened up yet more avenues for discovery. I had my slide ideas though, I had my bullet points, I had to move on.
The Slides. Or how I learned to stop worrying and love Powerpoint. It’s really easy to get distracted, and I did, with relatively trivial things. With the presentations being recorded, should my slides be 4:3 or 16:9? I tweeted the organisers and the head AV guy, I looked up how to set them differently in Powerpoint. That probably lost me a couple of hours overall. It’s the little things, the animations, the choice of fonts and colours that can take hours to choose but add little if the content isn’t there. I was in a time crunch so ploughed on, which was actually a blessing in disguise. The worst habit people have with Powerpoint is too many words. Putting up sentence after sentence distracts the viewer, and can lead to you reading the words off the screen too – a bad habit, but a very easy one to fall into.
I didn’t have time to write words – words which I’d then have to stick to rigidly. Instead I hit Google Image search. I’m looking for images to illustrate my point – or be a punchline – and then it’s onto the next.
My way is not to write out talks long hand, but essentially make sure I know the bullet points I want to hit in a slide, and what gets me to the next slide. In some cases, too, knowing when to trigger an element within a slide.
On that note, if you’ve never used Powerpoint with a multi-screen setup then you’re missing out on Powerpoint’s great presenter view. On your screen you can see the next element to be displayed, your notes, the elapsed time and the current time. To have those available to you is a godsend. You don’t have to remember every slide, or turn to face the screen the audience are watching. One tip: make notes very short and in a decent-sized font!
The crisis. It will come, one day. About 10 days before Steelcon it hit. Should I really be doing a talk? Is it technical enough? Should I have more actual examples and less of my conjecture? I know I can stand up and talk, but what if people say it was too self-indulgent or, even worse, boring? The crisis is like stage fright for your Powerpoint deck. I reviewed the slides, I made sure things still fitted and were relevant. I even looked back at the stuff I’d cut out. Actually, would it be better in, or was I right to cut it? This is the moment that panic can set in and rash decisions get made, but it’s best to be methodical and stay the course.
For me the worry was that the more I researched the topic of car hacking, the more I found whole reams of working groups, forums, projects and documentation outlining how things work and the standards they use. If anyone had an interest in car hacking and did more than five minutes of Googling, they’d find most of the things I would talk about and more. Heck, I cited two PDFs in my talk that I find fascinating. They go into the greatest depth about car hacking and anyone who’d read them would realise I was barely scratching the surface. There was some original thinking, though it was around the industry and not massively technical. I hoped I wouldn’t lose my audience.
Rehearsal, or do as I say, not as I do. You should absolutely rehearse your talk. The process of doing it, refining and iterating will help you give a more polished talk and iron out any bugs or errors. And I didn’t. Well, that’s not strictly true, but I never performed it. I went through it again and again, I drummed in the words and the ideas. I stepped through the Powerpoint deck, going through the bullet points, the asides, the links, and making sure it worked.
So far so good. In the final blog of this series it’s the big day of the talk itself, and then reflecting on how it went.
Biometrics hit the headlines again recently with news that Barclays is rolling out voice recognition technology to its telephone banking customers as a replacement for passwords. In recent years, there has been an increased focus on biometrics, for example with many people getting used to fingerprint technology to access iPhones. It’s an interesting subject from a cybersecurity point of view, as any new technology brings with it the opportunity / threat of compromise, demonstrated, for example, by this story about exploring 3D printing to bypass fingerprint access to an iPhone.
With the news of Barclays voice recognition, I was approached by a few media outlets to comment on whether we are about to see the end of the password, and the cybersecurity implications of biometric systems, including this interview on Radio 4’s Today programme.
When I’m able to, I’m always happy to give my opinion to the media on issues which relate to cybersecurity, but with something like this it’s the opinions of the general public which interest me the most. After all, it’s the attitudes and behaviours of the ‘average user’ that we’re often trying to engage with and influence, so how do they feel about biometrics?
A few days ago, I asked members of the general public in the UK the following question:
Would you use a biometric system (voice activation, fingerprinting etc.) instead of a password to access your internet accounts (e.g. email and online banking)?
1,003 people completed the survey, 51.6% were male and 48.4% female. The overall findings were:
The most popular response was that people would consider using a biometric system, at 35.5% of the sample, closely followed by those who would not use it because they don’t trust it, at 28.7%. Behind that were those who wouldn’t use it because they don’t understand it, at 22.3%, and finally the group of people who already use it, 12.9% of the sample. There was a tiny percentage of people, 0.6%, who selected ‘other’ and their responses included ‘don’t know’, ‘too tech’, ‘maybe’, ‘not sure’ and ‘boring’*. I could dismiss the respondent who inputted ‘boring’ but this response has value in itself. The response rate for this survey was only 15.4% and this could be related to the perception that cybersecurity is boring and onerous – a challenge the industry faces when trying to encourage engagement from the ‘average user’.
Returning to the top level findings, the proportion of people that either currently use a biometric system in place of a password, or would consider doing so, is 48.4% and those who would not use it either because they don’t trust it or don’t understand it is 51%. So there are marginally more people in this sample who reject, rather than feel quite comfortable with, biometrics. However, it’s such a small margin that it’s hard to put stock in it, and so it seems pretty much 50/50 whether people in the UK are willing to embrace biometrics or not.
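For transparency, the aggregate figures are simply sums of the response categories:

```python
# Survey percentages from the findings above
would_consider, already_use = 35.5, 12.9
distrust, dont_understand = 28.7, 22.3
other = 0.6

accept = would_consider + already_use    # comfortable with biometrics
reject = distrust + dont_understand      # would not use biometrics

print(round(accept, 1), round(reject, 1))  # 48.4 51.0
```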
When we unpick the data further, the findings offer more insight.
People in the East of England were most likely to consider replacing their passwords with biometric systems, at 52.9%, and Londoners were least likely, at 29.3%.
The East of England was the area which displayed the most trust in biometrics (only 16.7% rejected the idea of biometrics due to distrust). The least trusting place was the North East, with 34.8% of people from that area saying they would not use biometrics due to distrust.
Comparing how women and men feel about biometrics shows that men have more faith and trust in replacing their passwords with biometrics. The gap between women who would consider replacing their password with biometrics (33.2%) and those who would not trust it (30.5%) was much smaller than the gap between men who would consider it (39.7%) and those who do not trust it (27.3%).
When we break down the findings by gender and geography, we discover that the least trusting population is women from Wales, 43.5% of whom would not use biometrics due to distrust. This contrasts quite sharply with the most trusting population, men from the East of England, where only 10.3% reject biometrics due to trust issues.
Attitudes by Age
Attitudes to biometrics also varied according to age group, and probably not in the way many people would expect. It is often said that ‘millennials’ have a laissez-faire attitude to privacy and security. However, my findings here contradict the notion that 18-24 year olds are oblivious to issues of technology and security.
18-24 year olds were the age group least likely to consider replacing their passwords with biometrics, with only 25.4% of that age group saying that they would consider doing so. They were also the age group least likely to trust biometrics, with 38.1% saying they would not use biometrics in place of passwords because of distrust.
It’s interesting to speculate why attitudes to biometrics vary according to age. Perhaps the younger age group feel more comfortable with passwords, having grown up with the internet? Are the older age groups more willing to trust biometrics because they, perhaps, have more work accounts and are fed up with trying to manage so many passwords? Could it be that younger people are more privacy conscious, and more aware of the pitfalls of technology, and so more considerate of the risks of giving away their biometric data?
Sharing this article on Twitter elicited the following suggestion regarding why young people in the UK may be the age group most likely to distrust biometrics:
@drjessicabarker@cyberdotuk it would be interesting to cross check this with whether they had been required to use biometrics in school
At least 3,500 schools in the UK use biometric security systems and as this article highlights “a data breach will mean these type of scans will be untrustworthy for the pupils – for the rest of their lives”. Perhaps the very experience of being expected to entrust their schools with their biometric data has instilled in many young people an awareness of the potential pitfalls of such systems?
Without more research, it is impossible to know exactly why people feel differently about passwords depending on where they live, their gender and their age. However, if organisations want consumers to use biometrics more, they will need to address the sections of the population which are most sceptical about how biometrics work and whether the systems can be trusted. In particular, 18-24 year olds are an important cohort they will have to engage with if they are to have any success. The password isn’t going to die anytime soon if the younger generation has little trust in the alternatives.
They say you always remember your first time. For me it was 1995 at the British Educational Training and Technology show at Olympia. The college I attended had a sponsorship deal with a company who were exhibiting, and they wanted someone to talk about the use of IT in day-to-day education. While my contemporaries stared at their shoes, I thrust my hand up, and a few weeks later I was being handed a full-on Madonna wireless headset and presenting to an audience of bemused show visitors and stand staff, who enthusiastically watched a 15-year-old geek talk about using IT while a friend of mine listened for various trigger words to know when to change slide or quickly alt-tab to another application to make a point.
21 years later, that’s the only time I’ve ever had a voice activated slide deck.
Since then I’ve MC’d mountain bike events for Future Publishing and Red Bull, appeared on the Extreme Channel and even had a few goes at doing car videos on YouTube. I’m not an extrovert, but it’s fair to say public speaking isn’t something I shy away from.
Then my first con. BSides London 2014. It was overwhelming and exhilarating. I’d never even watched a hacking talk online, but now I was surrounded by people sharing the results of their research, sharing their secrets, doing things I understood but could never actually do. On the train ride home, I started preparing my ideas for a talk of my own.
There are often-discussed reasons why you should do a talk at a con: it may be for personal development, commercial reasons, to try and raise your professional profile to help your career, or simply because you have a cool thing you want to share with people. There’s no requirement to, though. I think it’s a great way to give back to the security community, but there are many, many ways to do that. In my experience, a lot of presenters don’t do it because they think they should, but because they are driven to.
It’s also worth remembering that organisers choose talks, and that’s your first round of validation. If your talk is not selected then they think it wouldn’t be a good fit. It stings to be rejected, but if you are accepted then it’s a sign that other people believe in your idea.
My ideas weren’t technical, they didn’t unveil a new attack or highlight vulnerabilities in a thing, they talked about people. They were both offensive and defensive, they brought some experience I’d gained through work and hoped to fill in some of the gaps I saw at different BSides events. And no-one would accept them. Eventually I got a slot at the second Steelcon event in 2015 and I found the talk on the other track was about the myths of plane hacking. In hindsight this was A Good Thing, as it meant my first security talk was for a modest audience, and it was definitely a learning experience.
To build on that first experience I decided to do a presentation for Steelcon 2016. In my next guest post I’ll take you through the process; the highs, the lows and how Google image search is your friend.
News of a data breach at the UK software company Sage is a reminder of the potential damage which can be done by an insider. Sage is a FTSE 100 company and provides business management software for companies in 23 countries. It has reported the breach to the City of London police and has informed customers who it believes may have been affected that the personal information of employees at 280 firms may have been compromised by someone using an internal computer login.
The insider threat can take the form of malicious and non-malicious activity: from an attacker (or attackers) with inside access consciously stealing or using internal information for their own gain and/or to harm the organisation, to an individual accidentally emailing information to the wrong recipient or leaving documents on a train. Research suggests that the malicious insider is the most costly threat facing organisations.
What can your company do to mitigate the insider threat?
Access to information should be based on the principle of ‘need to know’. Ensure that this is in place with an access rights review, and set up procedures to make sure that IT is informed, and amends access accordingly, when employees move jobs or leave the organisation. Data should be segregated, with network segregation at least according to department.
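A basic access rights review can be largely automated. The sketch below is purely illustrative – the roles, users and permissions are invented – but it shows the shape of the check: compare what each person can access against what their current role needs, and flag the excess.

```python
# Hypothetical data: what each role needs, and what each user actually has.
ROLE_NEEDS = {
    "accounts": {"finance_db", "payroll"},
    "support": {"crm"},
}

USER_ACCESS = {
    "alice": ("accounts", {"finance_db", "payroll"}),
    "bob": ("support", {"crm", "finance_db"}),  # moved teams; old access never revoked
}

def excess_access(user):
    """Return any permissions this user holds beyond their role's needs."""
    role, access = USER_ACCESS[user]
    return access - ROLE_NEEDS[role]

for user in USER_ACCESS:
    extra = excess_access(user)
    if extra:
        print(f"{user}: revoke {sorted(extra)}")  # bob: revoke ['finance_db']
```

The ‘bob’ case is exactly the scenario described above: someone changes jobs and their old permissions linger until a review catches them.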
Train employees in cybersecurity, instilling them with an understanding of the threats and of the impact that their behaviours can have on keeping information safe.
Situational awareness is important on an organisational, as well as individual, level. Participate in threat sharing communities if at all possible. Communicate with peers to develop and maintain awareness of the threats which they are managing and consider what this means for you. Remain aware of current cybersecurity threats hitting the headlines.
Look at your password policy, and the practices of your employees. Do people leave passwords on post-it notes on their desks, or share them with colleagues? If so, this increases the potential for unauthorised insider access and could be an indication that you should simplify your approach to passwords.
Consider your other cybersecurity policies. Have you clearly and concisely communicated how you expect employees (and anyone else with access) to handle information? How are the policies communicated and enforced? It’s really important to understand where people find workarounds in the policies and procedures – what they routinely don’t comply with as a means of getting their job done. These workarounds show where policies and procedures are hindering the business and inducing risk. Work with the business to address these in a proportionate way – how can you find a balance that enables people to get their job done whilst maintaining security of information?
Review how employees and contractors access your network. Remote working and Bring Your Own Device (BYOD) open up new risks and so you may need to consider how you can support flexible working whilst minimising those risks.
Encrypt data at rest and in transit. Don’t make it easy for unauthorised people to access and view data.
Look at your personnel and physical security. How easy would it be for an attacker to take advantage of an internal weakness, such as tailgating, poor CCTV or a lax approach to wearing name badges? A social engineering test is a good way of attaining an ‘attacker’s eye view’ of your organisation.
Keep logs. Many organisations don’t make this a priority, and whilst logs of course will not protect you from the insider threat, they will provide an audit trail to help you unpick what has happened and provide supporting evidence in a criminal trial, should it come to that.
Have an incident response plan, which outlines roles, responsibilities and avenues of communication – and test it. Again, an incident response plan will not protect you from the insider threat, but it will enable you to respond as quickly and effectively as possible. However, it will only do this if you test it to ensure that when you really need it, the theory works in practice.
Some of these tips are aimed at minimising the insider threat, whilst others are about managing an incident should one occur. Ponemon’s latest data breach report highlighted that having an incident response team, extensive use of encryption, employee training, participation in threat sharing or business continuity management decreased the per capita cost of a data breach.
Finally, remember that the definition of an insider threat is not limited to employees, but rather relates to anyone (employees, contractors, third-party suppliers) who accesses your information or networks.
At Bsides London 2016, I gave a presentation on a topic I’ve been thinking about for a long time: why we should embrace the term ‘cyber’.
There’s a tendency for the industry to roll its collective eyes at the term cyber. There’s an unwritten rule that it’s not credible, that it’s a buzzword which means nothing and is used by people who don’t really belong in the field. Actually, it’s not an unwritten rule at all: you see references to it in memes and tweets all the time. Obviously as someone who describes herself as a ‘cyber security consultant’ and publishes on this domain name, I don’t subscribe to that view. I wanted to speak at Bsides about why, and why I would like more people in the industry to consider embracing cyber, too.
We have many terms for what we do. Information security, cyber security, information assurance, data security, IT security; the list goes on. While they all technically have their own definitions, as consulting NIST will confirm, we often use them interchangeably. In different contexts and speaking to different people, the terms get muddied and even contradict themselves. Only one of the terms is in the dictionary:
Words go in the dictionary when they’re used a lot. Only one of the different terms that we use to describe what we do has gained enough traction outside of our industry to go in the dictionary.
The history of cyber
Cyber is perceived to be a pretty new word, and is often accused of being a word which means nothing. In fact, it has quite a long heritage:
In Ancient Greece, the term kubernao was used to mean “steer a ship”
The Greek kubernetes, meaning “steersman”, gives us “cybernetes”
The Romans turned kubernao into guberno, from which we get “govern”
Plato used “kubernetika” to mean skill in steering
In the 1940s the American mathematician Norbert Wiener used “cybernetics” to mean “control and communication theory, whether in the machine or in the animal”
In the 1980s, William Gibson coined the phrase ‘cyberspace’ in his short story Burning Chrome; it became popular after he used it again in Neuromancer
The association of these terms with cyber and cyber security is obvious to me: cyber security is about governing information, it is about where humans and machines meet.
For the presentation I didn’t want to simply rely on my own assumptions and biases, so I did a couple of surveys to explore the terms which resonate most with people.
To elicit the opinions of my peers, I did a twitter poll. I was relying on the fact that most of my followers work in the industry, and most of their followers – if they retweeted it – probably do, too (according to my twitter analytics, 89% of my followers are interested in tech news, 76% in technology and 67% in network security).
Which of the following terms do you use to mean protecting against hacking and other data loss? (Please RT!)
The poll got a good response: over 8,000 impressions, 403 votes and a fair bit of discussion. Thanks to everyone who voted in it, commented on it and shared it. The wording of the question is of course open to criticism (it could have been more precise / it could have been more general) and I’m relying on the assumption that most of the people who responded to the poll are involved in, or identify with, the industry. I’m happy with those caveats and feel pretty confident that the poll is a good reflection of the industry, in which most people identify with the term ‘information security’ (over twice as many as ‘cyber security’).
As much as I was interested in the poll results, I was also keen to hear people’s opinions. Some which stood out to me, and summed up what others had to say, were:
I also did a survey with the UK general public. I asked the same question and got over 700 responses:
So, based on 737 responses, cybersecurity* was the top response. Information security, in contrast to the twitter poll, was the least favourite response. Most tellingly, it was less popular than e-security, which I put in on a bit of a whim (it was omitted from the twitter poll because you can only give four answer options on twitter). I have never heard anyone use the term e-security, so to discover that it was ever-so-slightly more popular with the general public than information security was pretty surprising.
Why does it matter?
If research tells us that the industry and the general public use different terms to refer to the protection of information, does that matter?
I think so.
Language has existed for perhaps 150,000 years, and for at least 80,000 – it is mainly used as speech, evolving as we talk. Language changes a lot, from Ye Olde English to textspeak. There are many words we use now which used to mean something completely different. So, for people who resist the use of ‘cyber’ because it meant something altogether different 20 years ago, I would say: that’s the nature of language, it changes. What we say is always ambiguous, and when people speak, they do so with the intention of being understood by the listener, or perhaps to intimidate and impress. Language relies on mutual understanding, and cooperative communicators consider the listener’s assumptions, knowledge and prior experience.
In our industry, we are trying to engage with, and change the behaviours of, individuals, organisations and society. At the micro, meso and macro levels, we want people to listen to us more. We want individuals to better protect themselves, for example with password managers, two-factor authentication and taking care of what they post online. We want organisations to be more responsible with the data they are entrusted with, we want them to build security into their products and give us the resources we need to do our jobs. We want the media to understand what we do so that the most important messages are represented, which helps us communicate more effectively with individuals and organisations. We want the law to reflect the realities of our jobs and the challenges we face, and the justice system to punish people intelligently and appropriately (i.e. criminals not researchers). If we truly want those things, then we can’t afford to reinforce silos of communication where everyone speaks a different language and fails to understand one another.
In psychology, heuristics are simple rules of thumb that explain how people make decisions and why they act in a certain way. The fluency heuristic explains that the more clearly, skilfully and elegantly an idea is communicated, the more people will engage with it. The media have embraced cyber. The board has embraced cyber. The public have embraced cyber. Far from being meaningless, it resonates far more effectively than ‘information’ or ‘data’. So, for me, the use of cyber comes down to one question: what is our goal? If our goal is to engage with and educate as broad a range of people as possible, using ‘cyber’ will help us do that. A bridge has been built, and I suggest we use it.
“Data space didn’t work and infospace didn’t work but cyberspace! It sounded like it meant something or it might mean something… My whole delight was that I knew it meant absolutely nothing so I would then be able to specify the rules for the arena”
Cyber is here to stay. We have a choice as an industry whether to keep trying to resist and undermine it, or whether to embrace it, engage with it and start shaping the rules of the arena ourselves. Otherwise, we can continue allowing businesses, governments and the media to define it for themselves.
*Notes: I haven’t tackled the fact that sometimes we use ‘cyber security’ as two words and sometimes (as in the dictionary) ‘cybersecurity’ as one word. Out of habit, I have always used the term as two words; compounding them in the public survey was a typo / auto-correct error. An interesting point made by someone at the end of my presentation: ‘information’ and ‘security’ have many meanings and can be applied to many contexts, whereas ‘cybersecurity’ has one meaning. This seems like a very valid point to me and, combined with the fact that it’s the term used in the dictionary, is making me think I should compound the words in future.
At Infosecurity 2016, I gave a keynote talk on the elements of human nature and social norms which make us so susceptible to social engineering attacks. Infosecurity invited me to give the keynote drawing on my background in sociology and experience helping organisations mitigate against social engineering.
What is social engineering?
Social engineering is about using psychological tools such as charm, manipulation and deceit to elicit information or access to places and systems from people who should be keeping it safe. We associate social engineering with cyber security, because the way we use technology – the way we share, store and use information on the internet – has increased the attack space. This is why phishing attacks are at a 12-year high.
However, social engineering has undoubtedly been around for as long as mankind. History is littered with examples of it, some of which have been pivotal in social, economic and global development. For example, when the American industrialist Francis Cabot Lowell visited England in the early 1800s, he feigned ill health to win the sympathy of Lancashire mill owners and, using this sympathy combined with flattery, he was given tours of the cotton mills. In doing so he took the opportunity to memorise the mill blueprints. He used this information to build the mill towns of Massachusetts, enabling the United States to become the global leader in the cotton industry.
Why is social engineering so successful?
Whether online or not, we fall for social engineering attacks because they take advantage of human nature, fundamental parts of how we all tend to think and act, and social norms, the cultural and social pressures to do what is generally expected of us. For example, some argue that reciprocity is part of human nature. Our ancestors survived by sharing goods and services before we had currency and governments and so, it is argued, reciprocity is ingrained in our survival instincts. Even when a favour is uninvited, people feel obliged to repay someone who gives them something. This explains why 47.9% of people gave away their password when they were given chocolate immediately before being asked for it.
Much like reciprocity, humanity is innately curious, which brings many benefits. It underpins education, innovation and social interaction. But, when it comes to cyber security, curiosity can be a huge obstacle. Phishing emails thrive on the irresistibility of curiosity, enticing the reader to open the email and click a link or download an attachment. Curiosity may have killed the cat, but it makes a phish live.
People have a tendency to believe stories, whilst being sceptical of facts. Even the most senior and successful people can be taken advantage of by a well-crafted story, and their success makes them more likely both to be targeted with social engineering attacks and, arguably, to fall for them. This was the case in the recent example of the Austrian aerospace CEO who fell for a spear-phishing attack that cost the organisation £40 million, and cost him his job.
Use of social media has risen phenomenally in the last decade or so, with 20% of the world’s population now on facebook. At the same time, narcissistic personality traits have risen sharply. While a correlation has not been proven, research does suggest narcissism is related to the way young people use social media: the desire to have as many friends as possible, and for those friends to know what they are doing, is higher among young people with narcissistic traits. This provides a perfect breeding ground for social engineering attacks.
We have a tendency to assume that people are rational, and always make rational decisions. But when you’re running between meetings, hurriedly checking emails across devices, and you receive a phishing email that plays to base emotions like those outlined above, taking time to make a rational, security-conscious decision is not the priority. Getting through the backlog of emails and progressing business issues is the priority. Mindlessness reigns and, as Dr Ellen Langer commented on a recent Security Through Education podcast, “when you’re not there, you’re not there to know you’re not there”. As Sunstein and Thaler outline so eloquently in Nudge, our brains are a battleground between Homer Simpson and Mr Spock – a rational, long-term planner trying to rein in a short-termist, impulsive thrill-seeker. Our challenge in battling social engineering attacks is encouraging people to engage more with the Spock in their brain and less with the Homer.
So, what can organisations do to mitigate social engineering threats? Having a robust cyber security culture, in which staff are empowered to challenge and prioritise security appropriately is the key. This culture provides the framework on which policies and procedures are designed and adhered to with security in mind. To achieve this, consider the following:
Awareness-raising training should be focused on changing behaviours and making people conscious of the most prevalent threats and how they relate to them. So, for example, senior executives and finance staff should be made particularly aware of ‘CEO Fraud’ phishing emails.
Procedures should be in place to ensure that financial transactions have to be signed off by more than one person. Pressure points in the process, such as a particular member of staff being overworked, need to be identified and managed so that people have more time to follow security procedures whilst meeting business requirements.
Receptionists should be trained to stick to security procedures regardless of the apparent seniority of the visitor. Senior staff should be trained to know that this is a good thing for the organisation and everyone’s security, not an affront to their status.
Wearing security passes on company premises should be mandatory – as should taking the passes off when outside the premises (so that copies cannot be easily made).
The organisation should have a social media policy which takes account of social engineering attacks.
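The dual sign-off procedure above can be sketched in code. Everything here is a hypothetical illustration (the threshold and approver model are invented for the example), but it shows the core idea: a large payment is released only when two distinct people have approved it, so a single phished employee is not enough.

```python
# Hypothetical sketch of a dual sign-off check for financial transactions:
# a payment above a threshold is released only with two *distinct* approvers.

def release_payment(amount, approvers, threshold=1000.0):
    """Return True if the payment may be released under dual sign-off rules."""
    distinct = set(approvers)  # the same person signing twice doesn't count
    if amount < threshold:
        return len(distinct) >= 1
    return len(distinct) >= 2

# A 'CEO Fraud' email convinces one person, but the payment still waits:
release_payment(40_000_000, ["cfo"])           # one approver: not released
release_payment(40_000_000, ["cfo", "cfo"])    # duplicate: not released
release_payment(40_000_000, ["cfo", "clerk"])  # two distinct people: released
```

The process control does the work the training alone cannot: even a perfectly crafted phish has to deceive two people instead of one.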
Developing a strong cyber security culture is not straightforward and it takes time, but it is worth it.
If you’d like to discuss your cyber security needs and how I might be able to help, or you would be interested in having me speak at your organisation or event, please email me at firstname.lastname@example.org.