Digital identity: between trust and distrust

Digital identity is a fundamental concern for individuals wishing to access online services. With the dematerialization of many public services and administrative procedures, how can we successfully prove our identity online to gain access to the services to which we are entitled? The same question arises in the private sector: how do we identify ourselves online to gain access to the service or product we have purchased?

The solutions proposed by governments and companies have to strike a fine balance between ease of use and the need for security. A leak of sensitive personal data or, worse, digital identity theft can have catastrophic consequences for the individuals concerned [1].

Conversely, too much security can lead to low adoption - the tool is too complex to use - or even mistrust, when verification systems such as facial recognition are deemed too intrusive. The French Ministry of the Interior's Alicem project [2], for example, was abandoned because of this rejection by the users consulted and an unfavorable opinion from the CNIL (Commission Nationale de l'Informatique et des Libertés).

How can we design a digital identity system that respects users and delivers greater benefits than the existing identity system?

The Human Technology Foundation led an international research group to identify the key features of this optimal system. Last year, in association with the Digital ID and Authentication Council of Canada (DIACC), we published our "Recommendations on digital identity to optimize its benefits for users" [3].

 

The key points are as follows:

1. Adoption must be voluntary, not forced. Several alternatives must coexist.

2. The system must give users control over their data. In connection with point 1, users must be able to withdraw their consent.

3. The system must be robust: it must not be possible for the service to be interrupted, or for data to be lost or leaked.

4. It must be easy to use, both in its interface and in its operation (e.g. through interoperability between the various services, so that a user can do everything with a single tool).

5. To enable point 4, the system's governance and operations must be transparent and understandable.

6. Finally, responsibilities must be clearly defined. Every user must have a right to recourse if they encounter a problem or suffer damage.

Our white paper also introduces a number of concepts that are still little known to the general public but can significantly improve the protection of our privacy.

Data minimization, for example, is an advantage that only digital identity can offer. When you present a physical ID to access certain services (e.g. to prove you are of legal age to buy alcohol), the person checking it sees a great deal of information (date of birth, name, address), even though they do not need access to all of this sensitive personal data.

With a digital identity system, it becomes possible to prove that one is of legal age without sharing any irrelevant personal data beyond one's date of birth.

This is one of the benefits made possible by digital identity systems, which compartmentalize information and give control back to the user.
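To make this concrete, here is a minimal, purely illustrative sketch in Python of such a selective-disclosure check. The issuer and verifier roles, the "age_over_18" claim and the HMAC-based attestation are hypothetical simplifications introduced for this example only; they do not correspond to any specific standard or to any of the systems discussed in our report. The point is simply that the verifier learns a single derived fact rather than the full identity record.

```python
# Purely illustrative sketch of data minimization for age checks.
# The issuer/verifier roles, the "age_over_18" claim and the HMAC-based
# attestation are hypothetical simplifications, not a real standard.
import hashlib
import hmac
import json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's signing key


def issue_credential(full_record: dict) -> dict:
    """The issuer attests to a single derived claim ("age_over_18"),
    so the holder never has to show name, address or date of birth."""
    birth = date.fromisoformat(full_record["date_of_birth"])
    today = date.today()
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    claim = {"age_over_18": age >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_presentation(presentation: dict) -> bool:
    """The verifier checks the issuer's attestation on the single claim,
    without ever seeing the underlying personal data."""
    payload = json.dumps(presentation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, presentation["signature"])
            and presentation["claim"].get("age_over_18", False))


# The user's wallet holds the full record but discloses only the derived claim.
full_record = {"name": "Jane Doe", "address": "1 Main St", "date_of_birth": "1990-05-17"}
presentation = issue_credential(full_record)
print(verify_presentation(presentation))  # True, without revealing any other data
```

In a real deployment the issuer would sign the claim with an asymmetric key and the wallet would present it in a standard credential format, but the data-minimization principle is the same: only the claim that is strictly needed ever leaves the user's hands.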

Once an optimal system has been designed, a crucial task remains: how to encourage beneficiaries to adopt this new tool?

Unfortunately, simply imagining the best possible system is not enough to get it into widespread use. Launching a new system inevitably involves major changes, as well as raising awareness and training future users.

This is precisely what the Human Technology Foundation, in partnership with the Project Liberty Institute, is currently researching. We want to understand users' main points of mistrust when a new digital identity system is proposed to them, and to identify the levers of trust that ensure the benefits promised at the design stage materialize in everyday use.

Numerous initiatives are already being tested or deployed around the world [4]. In addition to these concrete examples of digital identity systems in daily use, several international institutions, such as the OECD (Organisation for Economic Co-operation and Development), have published their own recommendations on governance and deployment [5].

Our aim is to draw up as exhaustive a map as possible of the digital identity systems already in existence or under development around the world, in four specific verticals that we consider crucial:

1. Protecting children online via age verification: this is a particularly interesting area, as many underage users will want to bypass these security measures to access adult-only content.

2. Health data: this data is extremely sensitive, yet it must be shared to improve patient treatment, advance research and ensure the smooth operation of social security and health insurance systems.

3. Elections and the democratic process: remote voting is a solution that enables all citizens to take part in an election, and it can also reduce the operational costs and logistical complexity of an election (e.g. transporting ballot boxes). The question of inclusion is fundamental if we are to avoid part of the population feeling excluded from political participation because of the need to use an overly complex technological tool.

4. Finally, the question of responsibility for putting content online: with the widespread availability and proliferation of generative AI tools, it has become particularly easy to generate content that is false, manipulative, or even violent or obscene. Digital identity can be a relevant solution for identifying the people, particularly on social networks, who publish content generated in whole or in part by generative AI tools.

The analysis of the use cases identified around the world for each vertical will be carried out using a framework that draws on the information and lessons learned from our previous report [3]. We will study the use cases one by one, then identify common points in order to define best practices as well as the main points of friction.

The ultimate aim is to provide a concrete, actionable playbook for public and private decision-makers. We want to show that it is possible to deploy a large-scale digital identity system that benefits all users, while ensuring that they:

- Feel listened to in the implementation of the system and have the choice to use it.

- Understand how it works, and the additional benefits that using the system will bring them.

- Are certain that the system is reliable, robust, and secure.

- Are convinced that the system does not infringe on their privacy and respects data confidentiality.

This is an exciting project that we will be documenting, and for which several interim publications will be made available.

 

Further reading

[1] Eric A. Caprioli. L'usurpation d'identité numérique, un fléau grandissant (heureusement) sanctionné. L'Usine digitale, October 17, 2016. https://www.usine-digitale.fr/article/l-usurpation-d-identite-numerique-un-fleau-grandissant-heureusement-sanctionne.N451422

[2] Alice Vitard. Alicem sera déployée dès le mois de novembre malgré les critiques. L'Usine digitale, October 8, 2019. https://www.usine-digitale.fr/article/alicem-sera-deployee-des-le-mois-de-novembre-malgre-les-critiques.N892224

[3] Universal Digital Identity Policy Principles to Maximize Benefits for People: a shared European and Canadian Perspective. DIACC-CCIAN and Human Technology Foundation, 2022. https://diacc.ca/2022/11/02/policy-design-principles-to-maximize-people-centered-benefits-of-digital-identity/

[4] Andrew Sever. Digital Identity In Developing Countries: What Lessons Can Be Learned? Forbes, April 12, 2023. https://www.forbes.com/sites/forbestechcouncil/2023/04/12/digital-identity-in-developing-countries-what-lessons-can-be-learned/

[5] Recommendation of the Council on the Governance of Digital Identity. OECD, 2023. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0491
