It’s often said that you can either have security or you can have usability, but you can’t have both. Before I worked for a security company I believed this accepted wisdom without question – security was going to be something frustrating and annoying that would get added to my design at some point.
But once I started thinking about designing security products (and listening to people who’ve been thinking about that much longer than me) I changed my mind. It is possible to create better, lower friction experiences for people that also make them more secure.
Security is a human problem
Many negative attitudes towards the balance of security and user experience are based on the assumption that humans don’t care about security. But if we look at how people behave in the physical world, we can see that they do – indeed, safety and security are among the most basic human needs.
Did you lock the door of your house this morning? How about your car? Is all your money left on your kitchen table? We all take action (and make effort) every single day to keep our possessions secure.
How about your physical security? Is it important for you to live in a safe area? Would you walk down a dark alleyway in an inner-city area at night? We modify our own behaviour to keep safe – sometimes taking expensive, difficult action to look after ourselves and our family: even moving house, or country.
The safety of our things, ourselves, and our families is deeply important to us as humans.
Digital safety is difficult
Although it’s not perfect, we’re generally well adjusted for evaluating risk and staying safe in the physical environment. We’ve had millennia of evolution to develop senses for perceiving danger and reacting effectively to it. In our individual lives, we’ve been in training since we were small infants to be careful, know what’s risky, and modify our behaviour. If you’re a parent, you’re doubtless doing training like this every day.
The digital world is difficult though – it’s an abstract space that’s harder for us to understand; evolution hasn’t given us a head start in navigating it; and we’re not well-trained to know what to do.
Security failures have many costs
We’re all too aware of the breaches and attacks that are taking place – there’s barely a day that goes by without mainstream news carrying yet another story of an enormous breach. These are only the tip of the iceberg though, with the majority of attacks and security failures not making it to the news.
Obviously, these failures have enormous impacts on the organizations responsible for them – fines, lost customers, brand damage, official investigations, and so on. But there’s also a human cost – the individuals who’ve had their personal, financial, or medical information stolen and potentially used against them.
There’s also an impact on wider technological and civil society. Breaches like that of the US Office of Personnel Management expose detailed personal information that could be used against many security-cleared individuals. And the potential of technology to improve society is under threat if people start to withdraw from online activities, as the US Census Bureau found they were already doing in 2015.
We’re not making good design decisions
As people designing systems, we’re often not doing a good job. There are a few common mistakes we’re making that stop us creating software that’s both good for people and secure:
- Overloading memory – humans have limited memory resources. When we ask them to create complex passwords, change those passwords regularly, or answer esoteric security questions, we push them to choose simple passwords, reuse those passwords across sites, and pick easily guessable security answers
- Technically driven barriers – we deliberately put barriers in place that end up causing user frustration, abandonment, or bad practice. Examples include preventing password managers from working on websites and using CAPTCHAs
- Relying on users to make good decisions – users struggle to make good security decisions, yet we often rely on them to decide whether an application should be installed, a link should be clicked, or an email attachment should be opened
- Not promoting good practice – people have limited information and experience to help them do the right things. Sometimes commercial expediency is put before good practice – not suggesting people use two-factor authentication, promoting side-loading of Android apps, suggesting they allow macros, or showing meaningless padlocks.
We can do better
I once had a 1985 Nissan Micra, which I proudly secured with a gearstick lock and a steering wheel lock. Getting in and out of the car involved a lot of keys, locks, and jangling bits of metal. Despite all those locks, that car was stolen in 5 minutes while I went to the supermarket. My current car has keyless entry – I simply walk up to it and pull the door to get in, then press a button to start. But despite this much better experience, it’s also much harder to steal. And this is a physical object, built in a factory, shipped halfway around the world, and required to stand outside in the rain for 15 years.
As people responsible for software and systems, we can do lots of things to help humans understand risk, make better decisions, and have safer digital places they can spend time in.
Since situations differ (your applications, data, users, and regulations), these aren’t specific recommendations – they’re things to think about, and to consider whether they could work for you.
- Multi-factor authentication – implement and promote the use of a second factor. This doesn’t have to be text message codes: there are interesting new solutions based on smartwatches, mobile applications, and even ambient sound. We can step up security on unknown devices, and step down on devices we recognize – giving users lower-friction experiences much of the time, while backing them up with better underlying security.
- Don’t reinvent the wheel – consider using 3rd parties for identity or payments. While doing so might reduce the amount of control you have, you could benefit from the investments those 3rd parties make in updating and securing their systems, as well as making things easier for end users who already use those services. For example, you could use well-known payment providers rather than implementing your own version of a web store.
- Biometrics – the rapid advances and commercialization of technologies that use biometrics provide great opportunities to make people more secure using the things they can’t forget or lose – their faces, fingerprints, veins, and so on. Things like Google’s Trust API promise to make this type of seamless authentication broadly available.
- Passphrases – passphrases work better with human memory, making them easier to remember – and harder to brute force – than “complex” passwords (as this famous cartoon describes).
- Don’t make users change their passwords unless you think they’ve been compromised (as recommended by the National Cyber Security Centre).
- Look into honeypots on web forms and throttling, rather than using CAPTCHAs as a first resort – perhaps fall back to CAPTCHAs if there’s evidence of abuse or attack bypassing more user-friendly alternatives.
- Don’t break password managers – although not flawless, these are one of the best ways for humans to create and “remember” strong, unique passwords. Avoid customizations that stop them from working.
- Set safe defaults and be proactive – given that users mostly don’t change default settings, give them safe ones to start with. Because they’ll struggle to perceive and understand digital risk, be proactive in guiding them to make wise choices.
- Create secure by design places – think about how to create secure places, where people are less exposed to the risks of making bad decisions. By reducing their scope to accidentally install bad applications, expose their data by clicking dangerous links, or inadvertently execute malware in email attachments, you can give them a safer place to do their work and live their lives.
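The step-up/step-down idea from the multi-factor point above can be sketched as a simple policy check. This is a minimal illustration, not a production design: the device IDs and action names are invented, and real device recognition would rely on something like signed, long-lived tokens.

```python
# Illustrative step-up authentication policy: demand a second factor only
# on unrecognized devices or for high-risk actions. All names are hypothetical.

KNOWN_DEVICES = {"device-abc123"}  # e.g. IDs from long-lived, signed cookies
HIGH_RISK_ACTIONS = {"change_password", "add_payee", "export_data"}

def requires_second_factor(device_id, action):
    """Step up on unknown devices; on known devices, only for risky actions."""
    if device_id not in KNOWN_DEVICES:
        return True  # unrecognized device: always ask for a second factor
    return action in HIGH_RISK_ACTIONS
```

On a recognized device, routine actions then proceed without extra friction, while sensitive ones still trigger the stronger check.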
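The passphrase point can be made with a back-of-envelope calculation: five random words from a word list carry more entropy than a fully random eight-character password, yet are far easier to remember. A minimal sketch, assuming uniformly random choices, 95 printable ASCII characters, and a Diceware-sized list of 7776 words:

```python
import math

# Rough entropy comparison for uniformly random secrets.
# 95 = printable ASCII characters; 7776 = Diceware-style word list size.

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` independent picks from `pool_size` options."""
    return length * math.log2(pool_size)

print(f"random 8-char password:   {entropy_bits(95, 8):.1f} bits")    # ~52.6
print(f"random 5-word passphrase: {entropy_bits(7776, 5):.1f} bits")  # ~64.6
```

Real-world “complex” passwords fare even worse than this, since people don’t pick characters at random.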
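The honeypot-and-throttling alternative to CAPTCHAs can also be sketched briefly. The field name and limits here are illustrative assumptions, not a real framework API: the hidden “website” field is left empty by humans (it’s invisible via CSS) but filled in by naive bots, and repeated submissions are rate-limited per client.

```python
import time

# Two cheap bot defences, usable before falling back to a CAPTCHA.

WINDOW_SECONDS = 60   # sliding window for rate limiting
MAX_ATTEMPTS = 5      # submissions allowed per window
_attempts = {}        # client id -> list of recent submission times

def is_bot(form):
    """Honeypot check: humans never see the hidden 'website' field,
    so any non-empty value suggests an automated submission."""
    return bool(form.get("website"))

def throttled(client_id, now=None):
    """True once this client exceeds MAX_ATTEMPTS within the window."""
    now = time.monotonic() if now is None else now
    recent = [t for t in _attempts.get(client_id, []) if now - t < WINDOW_SECONDS]
    recent.append(now)
    _attempts[client_id] = recent
    return len(recent) > MAX_ATTEMPTS
```

A CAPTCHA then becomes the fallback, shown only when these cheaper signals suggest abuse.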
Security and UX are great bedfellows
As designers of systems we must not see security as an inconvenience, bolted on at the end. And as security experts, we need to remember that systems that prevent users from using them – or worse still, cause them to create insecure workarounds – are not serving their purpose.
Security is a basic human need – and crucially important for individuals, organisations, and society as a whole. It is possible to have security and usability, if we’re willing to try.