Age Verification as the new cookie law?

Age Verification is just months away from becoming law and, for all the criticisms to date, opposition to it has been ineffective. When the ‘cookie law’ was introduced in 2011, regulators and others expected that a good design would supplant the initial, merely-compliant one. That did not happen. To avoid Age Verification becoming the new cookie law, there needs to be a design pattern that offers privacy as a benefit, not as a burden.

Age Verification is a narrow form of ‘identity assurance’ – one in which only a single attribute (age) need be confirmed. The method by which this is done is not prescribed, but it would be perverse were the desire for privacy and protection to create new databases and even more risk. These issues have been solved before, when the ID cards scheme was replaced with Verify; that infrastructure is rolling out EU-wide, and can be reused.

Given GDPR, the patterns should be as shown to the right:

Users get a choice of skipping verification this time – and being asked again next time – or of verifying their age with a separate service, which will provide the website with a single attribute of confirmation:

“The user is over the age you need them to be” (i.e. 13 / 16 / 18 / 21 / 65).

There is nothing else a site needs to store in order to be completely compliant with this requirement of the GDPR: when a site sends a user to a verification service, the user only comes back with a token signed by the service if they are of the required age. Simple!
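As a sketch only – the function names, token format and HMAC signing below are illustrative assumptions, not a specification (a real service would sign with an asymmetric key, per eIDAS) – the whole round-trip described above can be as small as this:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical key for the sketch; a real verification service would use
# asymmetric signatures rather than a shared secret.
SERVICE_KEY = b"demo-key-not-for-production"

def issue_token(required_age, user_is_old_enough):
    """Verification service: return a signed token only if the user
    meets the required age -- otherwise the user brings nothing back."""
    if not user_is_old_enough:
        return None
    claim = json.dumps({"over_age": required_age}).encode()
    sig = hmac.new(SERVICE_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def site_accepts(token, required_age):
    """Website: check the signature, then the single attribute.
    Nothing else about the user is learned or stored."""
    claim_b64, _, sig = token.partition(".")
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(SERVICE_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(claim).get("over_age") == required_age
```

The point of the design is visible in the code: the claim carries one boolean fact (“over the required age”), so there is nothing else for the site to retain.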

Such services should follow the EU’s eIDAS Regulation and the UK’s GOV.UK Identity Assurance standards, i.e. reusing credentials that already exist (also providing an incentive for countries to finish deployment). Whether an adult is approving a site for themselves, or for a child for whom they are responsible, would be a record held by the verification service.

If Facebook and Google want to get into the ID assurance game, that’s OK too; but they must provide only the same answer, and no provider may be preferred over any other. Each assurer must, of course, be insured.

We suggest “GDPR_Age_Verification” as an HTML button ID, but W3C and the browser vendors should agree a standard name.

If you provide a service that can validate age or parental responsibility internally, where you simply need to know that parental permission has been asserted for the child, the second button can link to a page with a unique reference code – so the URL for that page can be sent to the parent or carer, who can log in on a different device. When the parent then logs in using their credentials, they are asked to confirm responsibility, approval, and (optionally) any other parental-control settings you wish to offer. The child should not be expected to know their parent’s username.
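As a sketch of that flow (all names, the URL and the in-memory stores are illustrative assumptions, not a real API), the service only ever needs two operations – one on the child’s device, one on the parent’s:

```python
import secrets

# In-memory stand-ins for the service's datastore (illustrative only).
PENDING = {}
APPROVED = {}

def start_request(child_account_id):
    """Child's device: create a unique, unguessable reference code.
    The URL containing this code is what gets sent to the parent."""
    code = secrets.token_urlsafe(16)
    PENDING[code] = child_account_id
    return "https://example.service/approve/" + code

def parent_confirms(code, parent_account_id, settings=None):
    """Parent, logged in with their own credentials on their own
    device, confirms responsibility, approval and (optionally) any
    parental-control settings."""
    child = PENDING.pop(code, None)
    if child is None:
        return False  # unknown, expired or already-used code
    APPROVED[child] = {"parent": parent_account_id,
                      "settings": settings or {}}
    return True
```

Because the reference code is unguessable and single-use, the child never needs to know the parent’s username, and the parent never needs to log in on the child’s device.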

As better prototypes emerge, we’ll link to them from here. (If you are a company looking for changes in order to implement this, our friends at Projects By IF are the design house to talk to.)

N.B. A longer version of this post is on Sam’s blog.

Posted in age verification, ID cards, identity | 1 Comment

May the Fourth be with you!

It’s Local Election day today where I live, maybe where you live too.

What you may not know is the added significance of this and future local elections to the provision of NHS and care services in your area.

To explain: some of the Councillors you elect today will end up being on a Council body called the ‘Health and Wellbeing Board’ (HWB). You can read more about HWBs here:

Your local Health and Wellbeing Board will be the body that chooses how your area’s health budget gets spent – what gets funded, what gets cut, and what happens to your medical records.

This is extra significant in this election, and in future elections, because of a reorganisation of the NHS that many people don’t know about.

In England, the NHS is being divided up into 44 areas – called Sustainability and Transformation Plan (STP) “footprints” – each of which will be given a single devolved budget to spend on NHS and care services in that area. Essentially, central government is handing the tough decisions on cuts, etc. to local government and the CCGs – i.e. the Health and Wellbeing Boards, where the Councillors and the Commissioners sit together to decide things.

So, as these STP ‘footprints’ come into being – and the process is well underway in many parts of the country – the power of your Health and Wellbeing Board in helping determine where resources are spent (or not) in your area becomes much greater.

We’ve written about this in our last two e-mail newsletters, also published on medConfidential’s website:

and also on our Facebook page:

So, as you vote today, you may want to think what the candidate you are voting for might do if given greater power over NHS & care spending in your area.

They’ll all say nice things about the NHS of course – that’s a given in British party politics – but do you know what your Council candidates’ priorities are, or what and how they’ll decide cuts? Because, make no mistake, there’ll continue to be cuts WHOEVER ends up winning the General Election in June.

Do you trust they have the wisdom… the compassion… the sense… to make such decisions?

Because you’re the ones electing them – and, in this context, not voting’s a decision too.

Please vote mindfully today.

I know many people do already, but we’ve had a difficult time recently – lots of misinformation, anger and confusion. Some are fearful of what may happen, others eager for change – though without knowing what the consequences of that change will be.

I hope this information has been helpful in informing your choice today.

I’m certainly not going to suggest who to vote for – that’s for each of us to decide for ourselves. And it may be that you care about other things more than health and care (and, in my case, medical confidentiality and data privacy) issues. Which is entirely fair.

To #democracy! Not quite as much fun as sex, but certainly just as messy…

Posted in choice and consent, database state, democracy, Facebook, medical confidentiality, medical records, open data, privacy, transparency | Leave a comment

What does a Citizen’s View of Government look like?

Rather than a “single Government Department” that does whatever it wishes, the alternative is to operate a citizen’s view of Government: a view which doesn’t assume the citizen has to learn how all of Government works, but in which, for any and all interactions with public services, citizens can see how their data was used for those purposes.

Government is taking for itself the powers to copy any data it likes to anywhere. And, in the context of the power imbalance between citizen and State, even individual consent is insufficient as a check; “Give consent or you don’t get your benefits (and therefore you can’t eat)” is not consent freely given.

However, for citizens who have no need to deal with particular parts of Government – and whose data isn’t therefore used – there’s no necessity to know about them in the first instance. Over time, pretty much everyone touches pretty much everywhere, but the starting point isn’t a big list of Government organisations and abstract possibilities; the starting point should be each citizen and the reality of his or her interactions with Government.

In much the same way as you have a unique medical history – and you were in each of the appointments – you have a more or less unique series of interactions with Government, and you were probably at most of those appointments too.

As the rate of technologically-driven change gets ever faster, Government needs to meet expectations, or suffer from letting people down. GOV.UK was just a start of that process, not an end – and it has stalled.

On data, Government is using the same measures that suggested a junk mail leaflet about copying medical records would be enough. With ongoing transparency to citizens, and an expectation of engagement for those who are interested (not necessarily monthly, but regularly) the scale becomes equivalent to the rate of change, and the curve becomes a straight line.

It’s unclear how data will be used in 100 months’ time in any industry, but it is pretty clear that it will be only incrementally different to how it is used in 98 months’ time – and in between those two is the opportunity to talk to people about how their data is used, and how the world is moving.

A 20th century government relying on 19th century institutions is struggling in a 21st century world. Government could, if it chose to, use digital technologies to explain itself to its people – restoring confidence both in Government and in vital institutions, through accountability and integrity: doing what you say you will do, and showing that you have done it.

Below, for example, is a pretty on-screen dashboard for a Government Data Usage Report telling you how the various public services that you’ve logged into via GOV.UK Verify have used your data:

'How the Government uses your data' interface mock-up

To steal a phrase from Baroness Lane-Fox, accountability of how data is used should mean ‘reaching the furthest first’. Not solely in terms of digital exclusion – assisted digital is, of course, vital – but in terms of Departmental inclusion, and in terms of providing information (and meaningful action) for those whose trust in Government handling of their individual-level data has been most damaged.

Had the Cabinet Office run a substantive “Open Policy Making Process” around data, with real engagement and discussion, rather than the meaningless charade they chose, this is where it could have ended up.

Egregious cockups around public data will continue until there is leadership on a new approach

Accountability for the way in which patient data is used came out of the wreckage of – and, as a consequence of a separate nation-scale data breach, you can now see where your GP record has been accessed for direct care. Given the Government’s plans for data usage, it may be that accountability will come out of similar future wreckage – designed and implemented by those who see as a playbook, not a warning.

It is data projects done in secret that cause the most problems. Transparency drives up data quality, as citizens can see when their data wasn’t accessed when it should have been – or was accessed when it shouldn’t have been. Transparency to individuals about their own data provides a scalable feedback mechanism that allows projects to correct in small increments, rather than exploding.

Most data handling in Government, and in commercial contexts, is no worse than it was in the Health and Social Care Information Centre in 2013. It just so happens that public expectations of the NHS mean the issue was addressed there first. With rare exceptions, all data handling is terrible – the main difference being that the NHS has been more honest about it than most. The number of British passports issued to people born in the great country of Yorkshire might astound you. (They misread the form.)

It is deeply ironic, although not amongst particularly strong competition, that the Minister most accountable for the way in which UK citizens’ data is used is Theresa May – with the requirement that she must know how MI5 uses bulk personal datasets.

Without a properly independent “Partridge Report”-style process – doing one on yourself, like Public Health England, simply locks in broken thinking – and without a full accounting of the status quo, Government will not know what it is currently doing with data. And without knowing what is currently happening, it cannot get better.

The lack of coherent and consistent information asset registers across government departments, following events like HMRC’s child benefit data disaster, was striking. GDS 2010-2015 may have had a go, but Sir Humphrey fought back. How many databases does he use? We actually still don’t know – and it’s 2017 and, via the Digital Economy Bill, the Cabinet Office has just got the Single Government Department Clause it tried sneaking through in 2009…

Accountability, possible under a Digital Economy Act’s Codes of Practice, will need either high-level political leadership, or another data catastrophe. Or both. The burning question is whether leadership will come from the unfortunate Minister who finds a project in his or her portfolio, or whether there is a Minister willing to lead from the front.


Posted in choice and consent, communications data, database state, GDS, ID cards, medical confidentiality, open data, privacy, transparency | Leave a comment

Text of speech given at Rowntree’s Governance Seminar on The Database State, 22 October 2008

I am posting this here, on 20/03/17, as I cannot find a copy elsewhere on the web. This is the text of a speech I gave while I was national coordinator of NO2ID at a CAOS (‘Combining All Our Strengths’) meeting convened by the Joseph Rowntree Foundation, I think – certainly one of the Joseph Rowntree family of organisations.

For convenience, I have highlighted in red below my articulation of the concept of “informational privity”, which Guy and I had been discussing for a while at this point, which – as the footnote records – I edited shortly after giving the speech, before submitting a copy of my text to the organisers of the meeting.

[In passing, it is interesting to note that much of the thinking that underpins our work at medConfidential appears to have been pretty well developed by this point.]


Rowntree’s Governance Seminars: The Database State

I’m billed this afternoon as talking about ‘ID cards and the NIR’ but as it was my colleague, NO2ID’s general secretary, Guy Herbert, who coined the phrase ‘the database state’ back in 2004 when we were all setting up the public campaign, and this is a database state seminar, I hope you’ll afford me some leeway.

What do we mean by the database state? Simply, that tendency to try to use computers to manage society by watching people.

This afternoon I don’t so much want to talk about the problem – which I think we all acknowledge – but rather to focus on what is being sadly, and sometimes deliberately, overlooked in the so-called debate around compulsory state identity management and related initiatives.

NO2ID believes the time has come to talk about workable solutions and practical ways forward, though it is vital to note that the Home Office ID scheme, to which we remain implacably opposed, cannot be ‘altered’ to accommodate these proposals. The principles on which the ID scheme is based are fundamentally incompatible and it must be scrapped – and preferably the Identity Cards Act 2006 repealed – before any progress can be made.

The government has lost the argument on pretty much every front, and though it keeps cycling through the same tired excuses, ministers have recently tried to develop a narrative that its vision of the future – the database state – is inevitable [e.g. David Blunkett, re. biometric passports]. That, once started, the process is irreversible [e.g. Meg Hillier, at Labour conference]. That, with almost religious zeal, theirs is the only way forward, and anyone who disagrees is at best paranoid – at worst an accessory to terrorism.

It’s simply not true.

Indeed, Sir Ken Macdonald’s [outgoing Director of Public Prosecutions] analysis seems to be that it is the state’s obsession and paranoia driving this forward. Others would say that by taking away our freedoms and fundamentally changing the relationship between citizen and state, the government is actually doing the terrorists’ job for them. For if we end up losing our liberty, have the terrorists not achieved victory?

Our campaign may be named NO2ID but we are far from Luddite, as many of you will know. Our collective technical awareness certainly exceeds that of the Home Office, even when they are spending over £100,000 per day on consultants. And, though negative in name, we are a hugely positive bunch fighting for freedom, privacy and a future where these basic rights cannot be falsely traded off against security, either national or personal.

In the time I have left, I’d like to rehearse for you three possible positive approaches, and briefly try to draw out some basic principles.

The first is the concept of ‘informational privity‘. This is a new idea, which I may have previously explained misleadingly [1]. We are all used to leases of real property, or licenses of copyright, for example, occurring through a chain of contracts each of which gives specific and limited rights to the recipient – and just as clearly gives no rights to those outside the chain. It is natural that someone with rights can get direct redress against an infringer with no rights or someone lower down the chain who exceeds their authority.

Informational privity would be a new sort of enforceable property right, with some of the features of confidentiality, but extending to all personal information. Casual talk of data “ownership” leads to all sorts of traps. Were all transfers of Personally Identifying Information (PII) subject to such a right, and functionally constrained thereby – it is important that it should not be possible casually to waive it by contract – then we would have the means to build conceptual and technological structures that will far more effectively discipline the use and abuse of personal information.

Note that, perhaps controversially, this approach allows for individuals to assert their own rights, and can be developed by the courts, rather than relying on the foresight, vigilance or funding of regulators. And personal rights are politically harder to interfere with than the scope of statutory bodies. The net result would almost certainly be better data protection and most probably better privacy [2].

This is, after all, our data. It makes sense to give me – the person most motivated to act, as the one who will suffer any effects – the power to protect my data and to seek redress if wronged.

[The Information Commissioner himself freely admits his Office is too underfunded and overstretched to properly prosecute Data Protection. One has only to compare the annual 20p per head of population assigned to regulating DP, FOIA and environmental data with the Health and Safety Executive’s £800 million+ per year – on which even it struggles to keep up – to see that increased regulation is just a money pit.]

The second concept is simply to distinguish between identification (‘identity management’) and authentication (‘identity assurance’) – a crucial distinction that successive Home Secretaries and the so-called Minister for Identity seem wilfully to ignore.

If you need a way to prove something about yourself to a third party, use an appropriate trusted credential which verifies that particular fact.

The requisite technologies are already with us, in the form of digital certificates and encryption. One need give away no more personal information than is actually required for each authentication or verification event [cf. Dave Birch of Consult Hyperion’s ‘psychic paper’], and you are not putting all your eggs into one basket as you would with a National Identity Register – the logic and design of which would put it (and thereby the state) at the centre of every trusted relationship and transaction.

[This is especially true of irreplaceable data such as your biometrics. You only have ten fingers. Why should those unconvicted of any crime be forced to surrender them all to the state? As biometric technologies and markets mature, this is like saying UK citizens should lodge copies of every key they own with the authorities – your front door key, your car key, your computer password, the combination to your safety deposit box.]

In reality, we don’t need to know who someone is in order to be able to trust something about them. It is the paranoia of the database state that says otherwise.

A market of overlapping, interdependent ‘identity tokens’ – maybe even some issued by the government, but on a level playing field rather than on a monopolistic, bullying basis – could provide the security and trust required to transact in a 21st century society without compromising privacy, liberty or personal security.

The third concept is to precisely target the problem, not broadly prescribe against the symptom. And by this I clearly don’t mean touting around your solution (‘ID cards’) as a salve to all ills. An example of what I do mean, most recently endorsed by the Liberal Democrats but also supported by consumer advocates [e.g. NCC, now Consumer Focus] is ‘credit freezes’, giving each individual the ability to lock their own credit record so that no-one can gain credit in their name.

[Interestingly, NO2ID suggested credit freezes to the Home Office and members of the Home Affairs Committee after Meg Hillier challenged opponents of the ID scheme in the Financial Times in February 2008 to say how they would tackle ‘identity fraud’ without employing features of the Home Office ID scheme. Our suggestion, though we had confirmation at the time that it was received, has been studiously ignored for eight months.]

Credit freezes are a practical solution to a real problem, working effectively in the US and for victims of ‘identity fraud’ in the UK, but they are a radical departure from the database state in that they hand meaningful control over personal information already held by an organisation or group of organisations back to the individual. Credit freezes demonstrate that control can be given to the individual in ways that make the citizen (or consumer, though the two are not synonymous) safer.

If control is the first of my basic principles, then the other two are choice and consent. Without properly attending to all three, you will find it well nigh impossible to build or retain trust – which is absolutely essential for any identity system or infrastructure.

Let us not forget that the so-called ‘voluntary’ ID cards are being imposed by, in officials’ own words, “various forms of coercion” – picking on soft targets who can’t refuse or speak out, bullying those already heavily vetted [airside workers] or duping the young [though they don’t look like they’ll buy it]. The government’s notion of ‘choice’ is at best a perversion of both logic and language.

The intention – indeed the core design principle – has from the outset been universal compulsory registration on the National Identity Register (NIR), whether by designation of documents (e.g. passports or driving licenses) or making it more difficult or impossible to live your life, earn a living, travel, register with a GP, receive benefits or services without submitting to lifelong surveillance and surrendering the master copy of your personal details, and a copy of your only biometric keys to the state.

Citizens who live only at the behest of the state are not free, and any government contemplating “managing the identities” of its citizens would do well to remember that it serves at our pleasure.

Government, like business, must seek our consent – which should always be properly informed. For the last four years, NO2ID has been talking to people, based on Home Office documents, ministerial statements, etc. about what is actually intended and what the consequences might be. And all but one independent poll of which we are aware (not the IPS ‘tracking research’, which polling professionals inform us is a ‘push-poll’) since shortly before the HMRC child benefit disaster shows that more people now think ID cards are a bad idea than a good one. Opposition to the database behind the cards is more like 2 to 1 against.

I shall close with a thought, the significance of which has been properly understood, I think – based on their recent public pronouncements – by Sir Ken Macdonald and Dame Stella Rimington.

In an information society, things done to your data can have as much effect on your life as things done to your person.

We have to get this right. And we have to get this right NOW.

In a generation, it may be too late and we’ll be fighting generational battles against the information equivalent of slavery, people trafficking and ‘data rape’. If we get it wrong, we won’t just be living in a surveillance society – our very freedom will be subsumed by the database state.

Phil Booth, 22 October 2008

[1] After speaking at CAOS on 22/10/08, I have taken the liberty of editing this section on informational privity in order to more clearly explain what we consider to be a significant new concept.

[2] There is a distinction. As I keep having to remind people, information security is not data protection is not privacy.

Posted in choice and consent, database state, ID cards, identity, NO2ID, privacy, transparency | Leave a comment

A safer, fairer information society

Thoughts in response to Francis Irving’s post, Making our information society safe and fair, to which I added the following comment:

I don’t disagree with these, Francis, but would maybe (because I have increasingly tended to come at the problem from the campaigning end of things?) take a tougher – or at least different – line on some of them.

I’m glad your #1 was access to (use of?) culture, and your #2 literacy. Both essential. No point arguing chicken and egg, but I fear you have to be more radical yet if you’re relying on the public libraries to ‘save us’.

What I – and others, but possibly most articulately @billt – think we need is a genuine ‘Digital *Public* Space’. This is difficult to unpack, but (for me) lies somewhere around the notions of public parks, public libraries, public service broadcasting and pop-up art spaces. What most people think of as ‘public’ these days is nothing of the sort; this is becoming as true off-line as on.

Key to truly public is truly anonymous.

So, while I agree with #5, I believe ‘fair and equal’ requires the ability to join the network anonymously – though, of course, to be able to provide trustworthy bona fides when/if justifiably challenged. This requires a radical rethink of the network, which is why redecentralisation caught my attention when I first saw you mention it.

I’m pretty hard core on ‘literacy’. I was training as a teacher as the current National Curriculum was being issued, and had a go at what briefly became known as the BBC’s ‘Digital Curriculum’ in the late 90s/early 2000s – but what we have in schools and more generally these days is woefully inadequate.

Media Studies used to be a ‘joke’ subject; these days, I’m half convinced a radically-improved version of it should be a core subject or key component of every subject.

I now know several people from their 20s to mid-30s who were failed utterly by school, who are functionally illiterate when it comes to the written word, but who over the last 5-7 years have educated themselves on YouTube (or equivalent, but mainly YouTube). For free. They are interested/engaged, interesting to talk to and coherent – but it cost them a LOT of effort. What they lack is a map.

Maps are hard.

Search is easy. Search makes you think you know stuff you actually don’t – because if you can’t even identify the context you borrowed for the information, you don’t know what you ‘know’, and what you don’t.

Maps distil a bunch of stuff that helps people find their way around; to get a sense of what they know, and what they don’t. It’s entirely possible – if costly – to make (good) maps but we should do MUCH more of that, and publish them for free.

A person with a map can make all sorts of choices they otherwise wouldn’t know were there. People with maps tend to be freer / more autonomous than those without them…

(N.B. Better maps may also help other initiatives, such as ‘open’ – which is flailing around a lot at present, trying to find how it relates to principles and disciplines it barely appreciates and a landscape it hasn’t even really begun to explore.)

A lot of the (digital) learning and ‘literacy’ I see is misdirected at activities that aren’t fit for purpose; teaching people to drive software – rather than to build it themselves, or to be able to fix it, or at the very least to be able to appreciate the good, bad, ugly and dangerous parts.

#3 shows you appreciate that coders will always be an elite, so you clearly appreciate that teaching everyone to code isn’t the answer. I think the (mass) answer will ultimately be somewhere on the ‘aesthetic’ rather than the technical end of things – ‘play’ vs ‘study’; educating people, at a minimum, to be able to identify code/data products and services that safely meet their needs and desires.

Professionalising programming is, I fear, a more-than-generational problem.

I know folks at BCS and others are trying to think about this. I’ve spoken to several members of the Worshipful Company of Information Technologists(!) over the years – at least one a University Vice-Chancellor – and no-one who takes this seriously doubts that this is huge.

Take psychology as an analogy; a practice most people would recognise as some sort of scientific discipline. As a professional practitioner, you can be a Chartered Psychologist or member of one of the established Psychological or broader Scientific Institutions or (Royal) Colleges. You can study a bunch of internationally-recognised courses in established Universities to get a bunch of letters after your name.

This has been true across the world for quite a while. While it doesn’t stop, e.g., Scientology or NLP ‘life coaches’ continuing to abuse psychological techniques for money – and didn’t stop the Nazis doing appalling experiments in WWII – it does tend to mean that people who do such things are sanctioned to the extent that a professional community can do so, i.e. various forms of marginalisation, exclusion or removal of official approval.

And this has taken about 100 years.

Ethics in psychology ‘borrow from’ general research and medical ethics; programming has no such ‘base’ to work from, but – as we’ve seen with and NHS handling of medical information more generally – research and even medical ethics can be applied, when what you’re doing affects people (which, by definition, personal data does).

Of course, while people like @RossJAnderson build conversations between the psychologists and security engineers, Number 10 reads an interview with Malcolm Gladwell and builds itself a ‘nudge unit’ which a couple of years later privatises itself…

So I agree with #3, but would (pragmatically) prefer to encourage a feeling of ‘chivalry’ amongst the taught and self-taught for now – rather than put too much effort into creating yet another ‘priesthood’.

I do think you’re right about standards and ethics and professionalism. I just don’t think things will settle enough, for several decades or more, for anything other than a handful of highly dedicated people to keep steering things as best they can from the handful of national and international bodies that haven’t been corrupted or co-opted.

Revolutions aren’t the times to build institutions; they’re the times we discover (and defend) what our REAL values are – or what we want them to be.

The only one I instinctively disagree with is #4. Do we want to trigger another ‘Elf and Safety’ culture? Bad enough that Data Protection in the UK and elsewhere seems to have gone that way (it’s always a stupid idea to separate legal compliance from fundamental human rights).

Let’s leave laws for discernible crimes and transgressions, and be much clearer about (and stick to!) the underlying principles. Giving people handbooks makes them stupid – cf. the standards-compliant British e-passport, the chip with which ‘we’ were able to do pretty much everything the Home Office said we couldn’t. Because (a) we actually read the standards, and (b) we understood them.

I care a lot about #crypto and #control, but I confess I rely on trusted others’ deeper knowledge to guide me. I’ll really miss @CasparBowden for that, and e.g. recently @richietynan‘s take on the destruction of the Guardian laptops gave me even more serious pause for thought.

As I said above, I think (initial) anonymity is key. But what does this even mean if at a hardware/firmware level you can’t even guarantee your keypad and its invisible 2Mbit of storage isn’t a keylogger for the Chinese Central Communist Party?

Thanks for provoking what I hope wasn’t too verbose a response. I’ll cross-post to my blog, just in case this doesn’t upload.


Posted in uncategorized | Leave a comment