Speech

Front doors and strong locks: encryption, privacy and intelligence gathering in the digital era

Robert Hannigan, Director GCHQ, shares his thoughts about the pivotal topic of encryption with an influential audience at the Massachusetts Institute of Technology.

Speech - 08 Mar 2016

A full transcript of the speech by Robert Hannigan, Director GCHQ, as delivered at the Massachusetts Institute of Technology, on 07 March 2016.

Well, good afternoon everybody. It’s great to be back here in Cambridge and always a privilege to be at MIT. I want to thank Danny Weitzner and the Internet Policy Research Initiative for inviting me. Thank you also to our Consul General Susie Kitchens, who's an old friend, and to her excellent team, Rebecca Leshan in particular. I know these things take some setting up, so thank you very much.

Danny mentioned the Northern Ireland peace process, and it's appropriate because solving a four- or five-hundred-year problem can be done; interestingly, it was done, and critically it was done with US help, including a lot of Boston help and with people like Senator George Mitchell from Maine, so that is one of the reasons I'm back here. And thank you for taking the time to come here to the Media Lab!

Though I love technology and I love the Media Lab, I'm not a cryptologist, and so I approach this shrine with some humility. I'm hoping that not being a deep expert in this space may actually be an advantage; we'll see by the end of the talk. And above all, I'm here to learn and to pose problems rather than offer solutions.

When I took up the post of Director of GCHQ, as Danny mentioned, shortly after I'd spent time here in the US studying privacy policy, I wrote in the Financial Times about the challenges agencies like mine, MI5 and the police in the UK, or the FBI and NSA in the United States, were facing as we began to see a new generation of terrorists and criminals exploit, for their own purposes, the extraordinary opportunities offered to all of us by the internet and the web.

The comments caused a bigger stir than I expected to be honest, and were widely seen as an attack on the tech industry. In fact I wanted to start a debate in the UK about how democratic Governments and the tech sector could work together within a clear and sufficiently transparent legal framework. And I'm very grateful to MIT for allowing me to develop that a little further this afternoon.

It's hard to think of a more appropriate setting, partly because of MIT's role in the development of the technology itself - so many of the creators and shapers of the internet and web are here - but also because of wider Cambridge involvement in the policy implications: CSAIL's 'Keys Under Doormats' report and the Berkman Center's more recent 'Don't Panic' report, along with the US National Academies' report on Bulk Collection of Signals Intelligence, are for me the key contributions of the last few years in my area.

I was writing in the Financial Times at the end of 2014 and I suggested that it would be better to start this debate before rather than after acts of violence. I said that partly because we all want to stop those events happening - and I’ve never doubted the shared good intentions of all concerned - but also because in my experience the worst possible time for decision making is after an atrocity; certainly something I learnt in Northern Ireland. Emotions are heightened and positions polarised. One of terrorism’s objectives will always be to get free societies to over-react, or to turn in on themselves.

My own experience looking back over the past 18 months has been more encouraging than that to be honest: we've had some sensible and constructive dialogue with the tech sector and academia. As I have consistently said in private and in public, Government agencies do not have the answer here. The solutions lie with those who run the internet: that wonderful collaboration of industry, academia, civil society, governments and, above all, the public. The perception that there is nothing but conflict between Governments and the tech industry is a caricature; in reality companies are routinely providing help within the law and I want to acknowledge that today.

So, reflecting on the experience of the past year, I want to do three things this afternoon: first, to say a little about our own approach to encryption, if only to lay to rest a few myths; second, to look at one aspect of the moral problem presented by what I would describe as the abuse of encryption; and finally, to say how we might work together to address this shared problem, albeit from a UK perspective. Throughout, I am consciously avoiding offering solutions, because I don’t have them, and I think we will need to find them together, and many of them may be in this room. I suspect those solutions will be diverse and fragile and dynamic in the future: but they will not be 20th Century solutions.

Encryption

So, encryption first. The idea that we do not favour strong encryption is alien to anyone who works in my organisation, and there are a few of them here today. Information Assurance is at the heart of everything we do. And I am accountable to our Prime Minister just as much, if not more, for the state of cyber security in the UK as I am for intelligence collection.

For nearly 100 years we have been intimately involved in strengthening encryption. From traditional protection of military communications, through personal privacy online - including identity verification for critical Government digital services - and the security of domestic 'smart' electricity meters, where the design principle is that homeowners are in control of their data, to the security of the nuclear firing chain at the high end, we understand the importance of encryption for the economy and for the individual. That importance grows as more of our private lives move online and the economy becomes increasingly dependent on digital currency and blockchain systems. We advise Government, industry, and individuals on how to protect their information appropriately, as Danny said.

Much of GCHQ‘s work is on cyber security, and given the industrial-scale theft of intellectual property from our companies and universities, I’m acutely aware of the importance of promoting strong protections in general, and strong encryption in particular. The stakes are high and they are not all about counter terrorism.

GCHQ has a long-term strategy, called 'Secure By Default', to raise the security bar in commodity products. You'll see more of this over the coming year, but in 2012 we published a set of principles for secure platforms that was fully endorsed by the industry. We're starting to see those come to fruition now, making commodity platforms more secure off the shelf. My challenge to our new National Cyber Security Centre, launched as part of GCHQ later this year, is to emulate the breakthroughs of our predecessors by developing and promoting similar advances in secure communications and services for the benefit of our people and our economy.

We have a history here. We may have become famous for cracking encryption, most notably at Bletchley Park in the Second World War (you have seen The Imitation Game), but it’s worth remembering that Alan Turing, our staff member who is most publicly associated with defeating Enigma, actually spent slightly more of his career with us designing a secure telephony system - here in the US, alongside US industry and Government colleagues.

Strengthening and improving encryption has been the focus of our most brilliant mathematicians, and still is. Today, I am publishing on our website facsimiles of the two original papers by the late James Ellis: from January 1970, 'The Possibility of Secure Non-Secret Digital Encryption', and from May 1970, a parallel paper on the possibilities for analogue encryption. I will leave it to those of you far better qualified than I am in cryptography to judge their significance (and I will leave these first facsimiles with the MIT library).

I note that the analogue paper represents a direction which cryptography never took, seeming less relevant at a time when communications were becoming increasingly digital; but as we look again at ideas to introduce security at the physical layer, perhaps this paper will be revisited.

And I publish them for three reasons, apart from the obvious point that in an age of greater transparency, forty-six years seems a long-enough wait.

The three reasons:

First, the sheer boldness of Ellis' concept (and of Malcolm Williamson and Clifford Cocks' subsequent work) - mirrored independently on the outside by Diffie, Hellman, Rivest, Shamir, Adleman and others, names very familiar to you; and many congratulations to Whit Diffie and Martin Hellman on winning the Turing Award recently. That boldness is still staggering. It reversed centuries of assumptions about how communications could be protected, and it gives me some hope that our current difficulties can be overcome. In the face of their achievement, I instinctively question arguments that suggest technological innovation has no part in the solutions - I'm sure it does.

The second reason for publishing: the transformational power of PKI and RSA came, of course, from the combination of the altruistic academic brilliance of the people I've already mentioned, working on the same issues in the secret and the public academic domains, with the industry players of the emerging internet and web. Strong, relatively cheap encryption became 'democratised' and enabled more secure communications on a global scale. Encryption went from being a tool of strategic advantage between super-power blocs to a key enabler of individual freedom and safety.
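To make the idea concrete, here is a minimal toy sketch of the RSA scheme (the public-key system Cocks had earlier discovered in secret) in Python. The tiny textbook primes are for illustration only and are wholly insecure; real deployments use keys of 2048 bits or more.

    # Toy RSA: anyone may encrypt with the public key; only the private key decrypts.
    p, q = 61, 53
    n = p * q                  # public modulus (3233)
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, coprime to phi
    d = pow(e, -1, phi)        # private exponent (2753); Python 3.8+ modular inverse

    message = 65               # a message encoded as an integer smaller than n
    cipher = pow(message, e, n)          # encrypt with the public key (e, n)
    assert pow(cipher, d, n) == message  # decrypt with the private key (d, n)

The striking property, and the one that reversed those centuries of assumptions, is that the encryption key (e, n) can be published to the world without revealing the decryption key d.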

And, if we're honest, we were not instinctively keen to share with academia outside Government in those days. Things changed as the Cold War receded and academic research expanded; and most of all, as encryption became of such fundamental importance to us as individuals and to the societies we live in, we at GCHQ began to engage. We have since published several of Cliff Cocks' advances, which you may have seen. More recently, CESG (part of GCHQ) released details of a failed quantum-safe protocol, in order to help the wider cryptographic community better understand the complexity of that important area of research.

And third, economics was a key factor in the creation of PKI, as it has been in the development and direction of the internet itself. The sheer cost of securely distributing symmetric keys during the Cold War prompted Ellis to look for 'unthinkable' alternatives. And of course economics - the relatively high cost of processing - made early development of PKI difficult; the internet and the web, on the other hand, made it absolutely essential. If Ellis were alive, I'm sure he would be proud that almost every aspect of life online relies on PKI.
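The key-distribution problem that drove Ellis can also be sketched in a few lines. In a Diffie-Hellman style key agreement - the scheme Malcolm Williamson independently arrived at inside GCHQ - two parties derive a shared secret over an open channel without ever transmitting any secret key material. A minimal Python illustration, with a toy-sized modulus chosen purely for readability:

    import secrets

    p = 23   # public prime modulus (toy-sized; real systems use 2048-bit primes or elliptic curves)
    g = 5    # public generator

    a = secrets.randbelow(p - 2) + 1   # one party's private value, never sent
    b = secrets.randbelow(p - 2) + 1   # the other party's private value, never sent

    A = pow(g, a, p)   # exchanged in the clear
    B = pow(g, b, p)   # exchanged in the clear

    # Each side combines the other's public value with its own private value;
    # at realistic sizes, an eavesdropper seeing only p, g, A and B cannot
    # feasibly recover the shared secret.
    assert pow(B, a, p) == pow(A, b, p)

No couriers, no codebooks in diplomatic bags: the expensive Cold War machinery of symmetric key distribution becomes unnecessary, which is precisely the economic point.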

The Berkman Center's recent report 'Don't Panic', which some of you will have seen, makes this point very well - the economics of the internet makes 'going dark' more complex than it seems, especially where end-to-end encryption is concerned.

The moral issue

So, let me turn to the second area, the moral issue. Leaving aside those three reasons for publishing those papers, what the history of our cryptology teaches me above all is that the enduring problems in the debate over privacy and security are essentially moral rather than technical. And they're not really new, or at least new only in their application to the still relatively young domain of the internet and the web. In our generation, we are immersed in a debate which has much in common with that around wiretap - ‘alligator clips’ on wires - or the interception of letters for earlier generations. We should take some comfort from that; if our predecessors could reach a sensible accommodation, albeit on a much smaller scale, then so can we.

At its root, the ethical problem presented by encryption is the problem presented by any powerful, good invention, including the internet itself: namely, that it can be misused. Tor is the most topical example: a brilliant invention that is still invaluable to those who need high degrees of anonymity, notably dissidents, human rights advocates and journalists; but an invention whose traffic is these days dominated, in volume at least, by criminality of one sort or another. The technology of the internet and the web is morally neutral, but those of us who use it aren't.

The rational response is not to panic, or to assume that encryption itself is therefore bad, but to look for a sensible, pragmatic and proportionate response to a shared problem: the abuse of encrypted services by a minority of people who want to do harm to others.

This is the shared problem to which I referred earlier. It isn’t my problem or law enforcement’s or Government’s, but society’s.

The solution is not, of course, that encryption should be weakened, let alone banned. But neither is it true that nothing can be done without weakening encryption.

To avoid any doubt: I am not in favour of banning encryption. Nor am I asking for mandatory backdoors. I am puzzled by the caricatures in the current debate, where almost every attempt to tackle the misuse of encryption by criminals and terrorists is seen as a 'backdoor'. It is an over-used metaphor, or at least a mis-applied one in many cases, and I think it illustrates the confusion of the ethical debate in what is a highly charged and technically complex area.

One problem is that we approach this from very different perspectives. For those of us in intelligence and law enforcement, the key question is not which door to use, but whether entry into the house is lawful at all. In our case that means applying the European Convention on Human Rights, as set out in UK domestic law: is what we are doing lawful (and appropriately sanctioned), is it necessary, and is it proportionate, notably in its infringement of privacy?

Proportionality is, of course, the most contentious area. These are very difficult judgements. If I look back to the earlier examples, they seem straightforward, and life was simpler. The exploitation of a few key flaws in the otherwise brilliant design of the commercial Enigma machine, along with clever maths, early computing power, the outstanding engineering of Tommy Flowers and his colleagues, and some old-fashioned espionage, enabled Allied victory; it not only saved thousands of Allied lives, as Eisenhower acknowledged, but also brought the Holocaust to an end before the Nazis could complete their task.

This then is an easy example. Even though it was very large scale, it would be hard to argue that this was not proportionate, particularly in wartime. But it is worth remembering that the benefits at the time were far less clear than they appear with hindsight and with the benefit of history.

Nor would it make much sense to see this as the exploitation of a ‘backdoor’. What Turing and his colleagues recognised was that no system is perfect and anything that can be improved can almost inevitably be exploited, for good or ill. That is still true today. It does not follow that Enigma was a poor system, or ‘weak’, or easily broken by anyone. In fact we continued to build and use improved derivatives of Enigma - ‘Typex’ - for some years after the War. So for Turing, I do not think there was any such thing as an impregnable door, whether front, back or side: there were strong doors and degrees of security.

Turing also knew that human behaviour is rarely as consistent as technology: we are all, to some extent, too busy or careless in our use of it. To put that more positively, we routinely make trade-offs not just between privacy and security, but between privacy and usability. The level of security I want to protect the privacy of my communications with my family is high, but I don't need or want the level of security applied to protect a nuclear submarine's communications, and I wouldn't be prepared to make the necessary trade-offs. It is not inconsistent to want, on the one hand, to design out or mitigate poor user behaviour, but at the same time to exploit that behaviour when it is lawful, necessary and warranted.

Today, judging the benefits, and therefore what is proportionate, is made more complex by the diversity of the threats; by the diversity of technology - particularly terrorists' shift from bespoke systems to the commodity services we all use every day; and by the proliferation and transnational nature of communications, which heightens the potential for intrusion into the privacy of more individuals than ever before (though this can be exaggerated: as UK court judgements over the past two years have confirmed, our bulk collection does not equal bulk surveillance. Those are different things).

To come up to date from Enigma, much of our effort in GCHQ at the moment is inevitably directed against ISIL/Daesh. Few would disagree that this is a terrorist group that needs to be stopped. It is an easy area in which to establish common ground. Daesh is an organisation which crucifies young children in front of their parents and posts this online; that uses rape and sexual violence against girls and young women as a routine weapon of terror; that finds ever more creative and perverted ways of torturing and killing dissidents, journalists and gay people. And of course, it projects both radicalising propaganda and attacks back into western countries and many other countries too.

Faced with this threat, or with the proliferation of fissile material or chemical weapons, or the live-streaming of child-abuse to order, or the more routine day-to-day dramas of abduction and kidnap which affect ordinary families in both our countries, what is a proportionate response?

A key point I want to make this afternoon is that it is not for me, as an intelligence official and civil servant, or for a law enforcement officer, to make these broad judgements, whether about the use of data in general or encryption in particular; nor is it for tech company colleagues, or even for independent academics.

Since the trade-offs are for society as a whole, it must surely be for elected representatives to decide the parameters of what is acceptable. Within a transparent legal framework, it is for those involved - Government agencies, tech companies, academia, civil society - to work out what is possible together. And of course it is for the Courts to monitor, test, and enforce compliance.

Whether you are operating in a framework like the US's, with its constitutional protections and separation of powers, or the UK's, with its common law framework and the European Convention on Human Rights, lawmakers and the Courts are essentially trying to reconcile the state's first duty to protect its citizens with their right to privacy. Total security is not possible, and both frameworks, in their different ways, qualify the right to privacy - it does not extend to a right to harm others.

Democracy, for all its flaws, remains our best defence against the abuse of power and the infringement of liberty and privacy, whether by Governments, industry, or individuals. It was, after all, from those democratic values that the internet was created and flourished, not vice versa. The internet is enhancing democracy, I'd argue, in exciting new ways, but it is not a replacement for the democratic process, or a parallel universe.

The UK Position

Let me come to the third area, the UK position. In the UK we are at a slightly different stage of the debate and have just embarked on a new discussion of these broad issues and powers. Our Parliament is discussing at the moment a new Bill which is intended to set out what is necessary and acceptable in as transparent a way as possible - not an easy task. It is based on a number of independent reviews by lawyers, parliamentarians, and experts over the past 18 months.

It does not give the intelligence agencies new powers but tries to put in one place powers which were spread across numerous statutes.

On encryption, it simply repeats the position of earlier legislation: where access to data is legally warranted, companies should provide data in clear where it is practicable or technically feasible to do so. No-one in the UK Government is advocating the banning or weakening of encryption.

Defining what is reasonable and practicable of course immediately engages proportionality. Does providing the data in clear endanger the security of others' data? The unwelcome answer, which dissatisfies advocates at both ends of the spectrum, is: it depends. Not everything is a back door, still less a door which can be exploited outside a legal framework, so it really does depend.

The truth is that, within the parameters set by legislation, it should be possible for technical experts to sit down together and work out solutions to particular manifestations of what I would describe as the abuse of encryption. I suspect there will be no single solution; solutions will be diverse and increasingly dynamic, as the Berkman Center report suggests.

But here is the major challenge, which the Berkman report understandably leaves hanging: law enforcement in particular needs solutions now, often against a ticking clock, and cannot safely wait for complex or fragile possibilities to be offered. There is an urgency which needs to be met, even if a comprehensive solution is beyond reach at the moment. The kind of big-data solutions which are critical to us in what we call 'target discovery' - for example, finding people we don't know about in northern Syria who are planning attacks against the UK - are not a substitute for what law enforcement needs now against a known individual, for investigation and prosecution.

To address this, we recognise that we need a new relationship between the tech sector, academia, civil society and Government agencies. We should be trying to bridge the divide, sharing ideas and building a constructive dialogue in a less highly-charged atmosphere. That’s why I’m here, in short.

I’ve no doubt that we will need a new forum to facilitate this, bringing together the tech industry, Government agencies, academia and civil society. A space where we can build confidence, have a frank dialogue, and work out how we can best tackle the problems we all recognise within the law.

For our part we’re fully committed to a collaborative approach and want to support this actively. Our Prime Minister will be setting out further details in the coming months on how the UK Government plans to facilitate this dialogue on our side of the water.

And this will be a dialogue that starts from the position I’ve outlined today - that the Government and its agencies support, and want to actively promote, effective encryption and wider security.

I hope also that this process will find ways of increasing public understanding of the issues raised by encryption. There is a strong shared interest in tech companies' customers, and in concerned citizens and voters, fully understanding how their data is protected: they are, after all, the same people. And this may be generational: people in this room understand it far better than people of my generation and age.

While our jurisdictions are separate, the internet and its technologies are not, which is partly why I am here in Cambridge today. I’m sure that any UK process will also therefore want to consider the international dimension and what norms of proportionality and reasonableness might apply.

I do not know where this dialogue might take us - the best kind of dialogue, really. It would be surprising if it ever reached a final conclusion, not least because the internet and the technologies operating across it are unlikely to become static anytime soon. But pragmatic answers, developed in an atmosphere less heated than the current one, must be in everyone's interests.

I hope it will help bring us closer to the goal which I think is shared by all sides: moving those who misuse encryption and abuse the internet and web into the reach of the criminal justice system. Even agreeing this goal frees us to begin a new approach. The crucial point on which I hope everyone in the debate can agree is that there are solutions to this problem, some of which are technological. Goodwill, expertise and cooperation are the key to finding them.

We manage this cooperation in every other area of public safety. As The Spectator, a libertarian UK magazine, recently pointed out when writing about Government and tech industry relations, agricultural fertiliser can be misused; we work with producers to make that less likely, and we hope that retailers would naturally report obviously suspicious activity, in the way that any concerned citizen would under the law, although they are not mandated to do so.

The fact that technology is more complex, or that the internet is still young and developing, doesn't seem to me to alter the fundamental approach in a democracy: agreed objectives - what is 'good' and 'bad' - enshrined in transparent law-making, implemented by all with responsibility, and a presumption that we all want to help, within the parameters of what is lawful and practicable.

I think we would all of us - customers, companies, agencies - be happy to see the worst behaviour driven off major platforms. Tech industry leaders understand this: as Mark Zuckerberg’s comments in Berlin ten days ago about addressing anti-immigrant hate crime illustrate, we all worry about bad behaviour by a minority on those platforms.

Of course, some people will find new places to hide unlawful activities, and new channels of communication, but agencies like mine were created to tackle those most difficult problems; what we need to avoid is effort being diverted, on all sides, into tension between Governments and the world's major providers. Instead we should apply our collective goodwill, expertise, and technical brilliance to meeting the hardest threats to societies.

The problem at the moment for agencies like mine, in short, is that those who do harm are hiding in the noise of the internet by using what the rest of us use: pushing them off these channels is surely a shared goal for customers, industry and Government. We do not expect to reach perfection in this, but we need to clear some ground and know where to focus our efforts as agencies.

So finally, this brings me back to my first point. The intersection between the world I am trying to tackle - involving the worst of human behaviour - and the world of fantastic economic and social opportunity the internet is offering is relatively small. We are talking about a small minority. The security tail should not wag the dog. And of course sometimes there will be nothing we can do, and we will have to accept that; but those surely should be the exceptions, and we should seek to minimise them.

I do not for a second think that terrorism and the other abuses I've mentioned are any less abhorrent to leaders of the tech industry than they are to me. Nor do I think they are any less interested than my staff in the safety and security of their fellow citizens. But where this is my core business, I realise that it isn't theirs and shouldn't be theirs. Our worlds overlap, but they are not the same; my point this afternoon is that they do not need to collide.

For Governments, protection of their citizens is the primary duty. And those citizens expect both their safety and their privacy to be protected in a proportionate way, not one or the other. I think this is common ground between Governments and the tech industry, and a number of tech leaders have used exactly the same form of words. It cannot be an unreasonable or undeliverable demand, if approached as a shared problem. We can, for example, work together - industry, Governments, academia, civil society - to drive ISIL off the internet and bring them to justice, and that is beginning to happen.

So the debate, for me, as I said earlier, is not about backdoors or front doors. It is about whether entry into the house is lawful at all. It is about whether you risk letting anyone else in if you accept that the lawful authorities can enter with a warrant. This is a fundamental issue that all liberal democracies have to grapple with: striking the right balance. It is for constitutional and democratic processes, for elected lawmakers and, in some cases, for the Courts to determine the outcome.

The duty of those of us charged with public protection is twofold. One is to make clear to our elected leaders what assurance they can expect from different postures. I would never advocate a surveillance state, not least because I would not want my family to live in one; nor would my staff. The price of security can be too high.

My job is to say what my organisation can and cannot do under various legal and operational frameworks, and what impact that will have on the risks posed to the public.

And the second duty is then to operate whatever framework and risk posture the democratic process decides on, to the best of our ability. In the area of encryption, that must mean some very practical cooperation with industry - there is no other way to do it. Whatever high-level framework or posture democratic nations decide upon will need to be implemented by commercial providers. This will get very technical, which is why this is the right place to be, and why we will need goodwill on all sides. It is where, in the UK, I hope, the process the Prime Minister will set out in the coming months can shed some really useful light.

And for my part, finally, my promise today is to engage in that process on behalf of my agency with the tech industry openly, respectfully, and in good faith.

[Image: Director GCHQ Robert Hannigan speaking at MIT, 8 March 2016. © Bryce Vickmark]