In the final part of a three-part series, an anonymous banking insider explains the failures of modern digital identity protocols and the ways we can fix them.
As discussed in the first two parts of our series, the current state of digital identity in the US is suboptimal for citizens, businesses, and governments. While the American inclinations toward free markets and federalism have, under many circumstances, fostered experimentation that led to success, our fractured approach to digital identity has left the US without a comprehensive consumer privacy framework or a functional digital identity infrastructure. This vacuum has led to massive amounts of fraud, and government agencies, businesses, and consumers in the US remain largely unequipped to defend against the surge of identity-related threats posed by AI and nation-state attacks.
For example, in the US, businesses and governments have historically deferred to profit-driven entities, like financial institutions and their contracted identity verification service providers, to verify an individual’s identity. These identity verification processes use an individual’s possession of personal identifying information as de facto evidence of identity, thus relying on the assumption that our personal identifying information is private. That assumption is blatantly false, given the staggering volume of personal data breaches affecting US consumers. Ironically, many of the trusted entities that Americans rely on to verify their digital identity are also among the biggest culprits in disclosing (inadvertently or otherwise) personal information to bad actors.
It doesn’t have to be like this. In a non-digital context, Americans are accustomed to peer-to-peer interactions with individuals, businesses, and state agencies through which they assert their identities directly - for example, by presenting various forms of physical credentials, such as a driver’s license, birth certificate, or other privately issued credential. This time-tested model preserves individual autonomy by eliminating the need for third-party involvement in credential presentation and verification. It also preserves personal privacy, as once a person is issued a physical credential, they can use that credential without the issuing entity’s involvement or awareness.
Although the majority of economic growth in the future will likely take place in a digital context, the internet remains constrained by our inability to easily and economically establish trust that individuals are who they say they are. By evaluating the methods that we use to establish varying levels of confidence in someone’s identity in the physical realm, we can derive insights and policy recommendations that will help us improve our digital identity infrastructure here in the US. Additionally, by learning from the risks posed by current centralized identity systems, we can ensure that our future-state digital identity infrastructure is designed to avoid outcomes which are not in the best interest of citizens.
For the sake of personal autonomy, privacy, and security, we must move away from a world where only private, centralized entities can credibly author, own, and interpret identity claims. To do that, we must enhance and implement the standards, technology, and tools to empower any person or entity to author, hold, present, and verify identity claims in a resilient and purposefully designed digital identity infrastructure. Part 2 in our series discussed some of the open standards and technologies that enable individuals, businesses, and nation states to build digital trust: specifically, Decentralized Identifiers (DIDs), which enable the separation of our identifiers from our authenticators, and Verifiable Credentials (VCs), which are cryptographically secure and tamper-evident digital credentials that anyone can author, hold, present, and verify. These tools are the foundation for a secure, open, and progressive digital trust infrastructure in the US.
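To make these building blocks concrete, here is a minimal sketch (in Python, using entirely hypothetical DIDs and placeholder proof values) of the general shape of a W3C-style Verifiable Credential - an illustration of the data model, not output from any real issuance library:

```python
# Illustrative sketch of a W3C-style Verifiable Credential as a data structure.
# All DIDs, dates, and proof values below are hypothetical placeholders.
import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    # The issuer is identified by a DID (a made-up example method here).
    "issuer": "did:example:issuer123",
    "issuanceDate": "2024-01-15T00:00:00Z",
    # Claims about the holder, who is also identified by a DID.
    "credentialSubject": {
        "id": "did:example:holder456",
        "licenseStatus": "active",
    },
    # The proof is the issuer's signature over the credential, which makes
    # it tamper-evident: altering any field above breaks verification.
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:issuer123#key-1",
        "proofValue": "<issuer's signature, encoded>",
    },
}

print(json.dumps(credential, indent=2))
```

Anyone can author a structure like this; what makes it verifiable is the signature in the proof, and what makes it credible is the social trust placed in the issuer - a theme we return to below.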
In this piece, I introduce the notion of trust frameworks, which are the “glue” that binds human and social trust systems together, and make policy recommendations regarding the adoption and advancement of these technologies and standards.
Adoption of these frameworks, technologies, and policies will usher in a new age of digital identity in the US.
Part 1 highlighted the power of a digital signature and how it provides cryptographically verifiable trust that a given DID controller authored or signed a specific message. Part 2 covered the “trust triangle”, which introduced the notion of an issuer, holder, and verifier of claims - and how DIDs and VCs leverage the power of digital signatures to assert, present, and verify claims (of all types).
In this way, digital signatures enable others to verify which DID authored a message. But how do we know that the individual or entity who controls that DID is who they say they are? If we rely on a third party to assert that a DID controller is a specific person, how do we know that the entity making this assertion got it right? Cryptographic trust may solve for “who” authored a message, but only at an abstracted level. Further, cryptographic trust does not address the reliability or veracity of the claim itself. As noted in Part 2, belief in or reliance on any claim is a function of social trust.
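To ground this distinction, here is a short Python sketch using the widely available cryptography package. The keypair here simply stands in for a DID and its controller - no actual DID method is involved:

```python
# Sketch: what a digital signature does (and does not) prove.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The DID controller holds the private key; the public key is what a
# DID document would expose to the rest of the world.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I, the controller of this DID, authored this claim."
signature = private_key.sign(message)

# Anyone holding the public key can verify that the key's controller
# authored the message; tampering with even one byte fails verification.
try:
    public_key.verify(signature, message)
    print("Signature valid: the key's controller authored this message.")
except InvalidSignature:
    print("Signature invalid: message or signature was altered.")

# Note what this does NOT prove: nothing here tells us WHO controls the
# key, or whether the claim itself is true. That is social trust's job.
```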
So how can we derive sufficient social trust to verify that a DID controller is who they purport to be, and that their claims are reliable? To answer this question, we must realize that trust (in the physical and digital worlds) is not a binary concept. Different contexts will require varying levels of information and assurance to satisfy the individuals or entities who will be relying on those claims.
In his article “Musings of a Trust Architect: Building Trust in Gradients,” Christopher Allen argues that trust in the physical world is built in gradients, and that a progressive trust framework should inform the guiding principles for building digital trust. The current concept of trust on the internet is narrow and binary, which is a natural product of the internet’s maturation path. At the dawn of the web, not only were controls governing online interactions far less rigid (or nonexistent), but being denied access to an online service was more of an inconvenience than an impediment to daily life, as most online services were either discretionary or had alternative, physical-world methods of authentication through which we could remedy our issues. As the internet has scaled, however, online access to many services has become increasingly centralized and rigid due to the involvement of large commercial enterprises - they either let us in, or they don’t. This increase in centralized access provisioning coincided with the internet’s evolution into a public utility for most Americans; the amount and variety of “essential” digital interactions has grown exponentially, and many of them have no physical (in-person) means of remedying access issues related to identity verification.
As Mr. Allen states, this binary trust approach to online interactions has actually increased the risk of important digital interactions today, leaving us at the mercy of tech giants who give us the options of “blind trust or total skepticism,” with little between. This absolutist trust framework is not only unnecessarily restrictive and vulnerable to compromise; it also doesn’t reflect how trust is established or managed in the physical world.
In the physical world, the context of an interaction sets the stage for what information each party will require to move forward in building a relationship. Context also informs how much confidence (or “assurance”) each party will need that the information provided by others - including the claim that they are who they say they are - is trustworthy.
Take, for example, Mr. Allen’s illustration of a homeowner (“Hank”) and a prospective contractor (“Carla”) having a series of discussions about a kitchen remodel job. Due to the potential risks and costs to each party, Hank and Carla engage in a “progressive trust lifecycle,” in which trust is built incrementally over time. Hank realizes that hiring someone for this job is an important decision, so he wants to explore his options thoughtfully and take incremental, progressive actions sufficient to ensure his goals are reached and his risk is appropriately mitigated (such as contacting Carla’s references and reading online reviews, confirming her business’s license and insured status with the state registry, and executing a legally binding contract detailing the scope of work, timelines, applicable warranties, etc.). Similarly, given the potential financial, reputational, and other risks to Carla’s business, Carla needs assurances that Hank’s project is legitimate, that the work site will be safe, and that Hank will pay for services rendered in a timely manner. Carla’s progressive actions in this context could include checking with the local building authority regarding Hank’s ownership of the site and any restrictions/required permits, evaluating Hank’s credit/financial standing, inspecting the work site, and entering into a legally binding contract that states the fees, payment terms, and scope of work. Each step in the process that aligns with the parties’ expectations and requirements adds another layer of credibility to the interaction. If the context were different - for example, if Hank just needed someone to fix a loose door handle in his home - each party’s actions would likely be less thorough, due to the decreased risks and costs.
Throughout the progressive trust process, the parties accumulate several points of verified data related to the other party’s claims from a variety of sources, including individuals and government agencies. Certainly, some issuers of these claims may be more or less trustworthy than others, but by aggregating them all together, each party is able to acquire enough trust and assurance that the other party is who they say they are and that “the interaction is likely to fulfill their needs without exposing them to undue harm.”
Let’s compare this progressive trust framework to the way financial institutions assess risk related to identity claims. As mentioned in Part 1 of our series, banks broadly have two distinct risks to address: regulatory compliance and financial risk to the bank itself. Banks mitigate regulatory compliance risk by satisfying basic legal requirements around customer identification (and accompanying oversight routines), but these practices don’t generally satisfy concerns related to financial risk. As previously discussed, financial risk (a situational context) also drives requirements around identity verification. If the financial exposure to the bank is high, so too will be the amount of assurance the bank requires regarding its customer’s identity claim. If the financial risk to the bank is low, the bank’s requirements will reflect that - just as the assurances Hank required for a kitchen remodel differ from those he would require for a much smaller repair job.
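As a toy illustration of this risk-based logic - with entirely hypothetical dollar thresholds and tier names, drawn from no actual regulation or bank policy - the tiering might look like this:

```python
# Illustrative only: a toy risk-tiering rule in the spirit described above.
# Thresholds and tier names are hypothetical, not from any real policy.
def required_assurance(exposure_usd: float) -> str:
    """Map a bank's financial exposure to a required identity assurance tier."""
    if exposure_usd < 1_000:
        return "basic"      # e.g., regulatory-minimum customer identification
    if exposure_usd < 50_000:
        return "enhanced"   # e.g., document checks plus database verification
    return "high"           # e.g., certified in-person or remote proofing

for amount in (500, 10_000, 250_000):
    print(f"${amount:,} exposure -> {required_assurance(amount)} assurance")
```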
If we zoom out, we start to see that the credibility of any claim (be it a digitally signed attestation or otherwise) is a function of the context and ecosystem from which the claim originates.
In the example above, when Hank verified Carla’s contractor license number in the state registry, what gave him confidence that this information was trustworthy? In this case, the state is the authoritative, issuing source of these types of claims, so it would be the best place to verify this information. Would Hank trust this information if he got it from a different source, like an annual trade association publication? What about a county agency that periodically aggregates state licensure information? What if one of Carla’s former clients assured Hank that her license was valid and up to date? Perhaps; it depends.
We’ve discussed how cryptographic trust can help us derive certainty about the provenance of a claim (which DID authored it), but trust in the actual claim itself is only derived from human or social trust. We also discussed how trusting that a DID controller is in fact a verified entity is also a function of social trust to some degree. Social trust, however, is context-dependent and constructed progressively, and so can be time-consuming to establish.
To solve for this, we need to introduce the missing piece to the “trust triangle” articulated in Part 2 - trust and governance frameworks. Trust and governance frameworks help us simplify daily life by delegating the inspection and verification of underlying processes to others we trust. They can also be viewed as the business, legal, and technical rules that enable us to standardize and scale social trust.
Although they come in many forms, these frameworks help us establish and maintain trust in the counterparties issuing claims. They may outline underlying diligence and issuance procedures, what happens when there’s a dispute, and what rights different parties have in a given context. It’s also important to note that in many contexts, formal trust and governance frameworks are not necessary to create a network of credibly reliable claims. Take, for example, all of the research that Hank did in vetting Carla’s claims: while some of the input (her contractor license verification and insurance status) was authored by an “authoritative” source governed by a formalized set of processes, many other aspects of the assurance (checking references, reading reviews) were acquired solely via social exploration, which is built on individual trust and social reputation.
When we apply the notion of trust frameworks to digital identity trust in the US, we see that the private sector and the public sector take two distinct approaches to identity verification.
The private sector’s approach to identity verification assurance can be defined as having a floor of common requirements driven by regulatory mandate, with additional layers of internally-derived requirements that are contextually dependent and risk-based. State and federal agencies oversee practices and outcomes related to regulatory compliance (effectively verifying compliance via periodic audits of processes and results), but the rest of the discretionary identity verification is largely up to the private institutions themselves.
By contrast, with respect to public sector claims related to unemployment assistance or state-sponsored aid, the National Institute of Standards and Technology (NIST) publishes standards that government agencies must adhere to in order to obtain the legally mandated assurance that the agency is distributing benefits to their intended recipients, and not wasting taxpayer funds. Unfortunately, because state and federal agencies all have their own budgets and systems, which are not universally compatible or integrated with federal identity verification / authentication resources (like Login.gov), they must either build and maintain their own identity verification capabilities or contract with third-party identity verification service providers.
The public sector’s approach to identity proofing and authentication is much more rigorous and formalized than the private sector’s approach, due to standards imposed on the public sector by federal law and regulations. These standards inform the verification and authentication procedural guidelines that service providers must adhere to in order to claim they are compliant with certain “levels of assurance”. To demonstrate that they are adhering to these standards, the service providers obtain certifications from third-party technical audit companies who inspect and then attest to the providers’ practices.
These third-party audit / certification bodies (like the Kantara Initiative, a non-profit trade association) serve as independent inspectors of the identity verification service providers. Once the inspection of a service provider is successfully completed, Kantara publishes the results on its website, which serves as a verifiable credential of sorts (hosted on Kantara’s VDR). In this way, the Kantara Initiative serves as a delegated root of trust for the public sector ecosystem, and its claims in turn satisfy the requirements of those overseeing an agency’s compliance with its legal obligations.
Is this a perfect system? Likely not, but it enables clear delineation of requirements, standards, and verification of adherence to those standards across the ecosystem. By pointing to a discrete chain of trust (NIST standards -> service provider -> Kantara certification), state agencies have a clear and reasonable path to explain to the public and their overseers how they are adhering to the standards as required by law.
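A sketch of this delegated root-of-trust pattern - with made-up names and a made-up registry - might look like the following; the point is that a relying party only needs to trust the auditor, rather than inspecting every service provider itself:

```python
# Sketch of a delegated chain of trust, loosely modeled on the
# NIST standards -> service provider -> Kantara certification pattern.
# All names and registry contents below are hypothetical.
TRUSTED_AUDITORS = {"kantara"}              # roots of trust we accept
CERTIFICATIONS = {                          # published auditor attestations
    ("kantara", "acme-id-verify"): "certified against NIST guidelines",
}

def provider_is_trusted(provider: str) -> bool:
    """A relying party trusts a provider iff a trusted auditor certified it."""
    return any(
        (auditor, provider) in CERTIFICATIONS
        for auditor in TRUSTED_AUDITORS
    )

print(provider_is_trusted("acme-id-verify"))   # True: attested by Kantara
print(provider_is_trusted("unknown-vendor"))   # False: no attestation found
```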
To be clear, standards and governance frameworks don’t solve for all of the risk or uncertainty in a given context, including this specific one around identity verification. The standards can and should be updated regularly, as nothing is ever perfect; but the level of detail in the standards - along with independent inspection and verification of a service provider’s adherence to them - allows those who are interested in relying on the claims that come out of this ecosystem to have a moderately informed view of their value and credibility.
So if trust is context-dependent, and naturally manifests in gradients, how should we think about which ecosystems are more or less trustworthy than others, and how does this relate to digital identity?
While this question is broad, one of the biggest pieces of the answer may lie in understanding the level of decentralization of a given trust or governance framework. For example, in Part 2 we discussed the idea of a verifiable data registry (VDR), and how it can be used to anchor a DID. The Bitcoin network, as an example, serves as a robust VDR for a few important reasons: its rules are open source, no single party controls what is written to its ledger, and its history is publicly auditable and prohibitively expensive to alter.
The Bitcoin network is just one of countless possible VDRs, and examining it in this context helps us articulate some of the factors that ultimately inform the trustworthiness of VDRs and the other components on which this ecosystem relies, including various DID methods and trust frameworks. For example, consider one of the most widely used VDRs in the world: the DNS system. DNS is a globally trusted “lookup” system - taking in a human-readable domain name like “btcpolicy.com” and returning an IP address.
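This lookup is simple enough to demonstrate with Python’s standard library (the resolved address depends, of course, on the live DNS system):

```python
# DNS as a globally trusted lookup: human-readable name in, IP address out.
# Standard library only; the result depends on the live DNS system.
import socket

domain = "btcpolicy.com"
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")
```

Every application that performs a lookup like this implicitly trusts the governance framework behind DNS - which raises the question of who actually governs it.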
The DNS system is governed by ICANN, but unlike Bitcoin, the trust framework that governs ICANN is not open source. Instead, this governance framework relies on humans and a federation of for-profit enterprises which are delegated by ICANN (via a privately executed 10-step accreditation process) to determine how new and existing entries are managed in the global ledger of DNS entries. I’m not suggesting that this process is insufficient, but simply attempting to illustrate that even the most popular VDRs that act as the “root of trust” for many aspects of daily life have varying levels of transparency, rigor, and security. Some have trust and governance models that are articulated to the public to some degree, while others may do so to a lesser degree, or not at all. Depending on our individualized context, we may or may not find that a given system (and the trust and governance models that surround it) provides satisfactory transparency, security, decentralization, and / or assurance for our requirements.
Beyond VDRs, consider the governance frameworks of other entities that act as “authoritative” sources of information, like a state Department of Motor Vehicles (DMV) that maintains a database of vehicle registrations. The DMV’s processes rely on humans, as well as some automated processes, to manage entries in a permissioned, private database. These processes are subject to some degree of oversight, given that they are publicly funded, but the details of the internal processes are not available for public release or consumption.
Another example is Wikipedia, which has a distributed and quasi-open governance model. Anyone can make changes to entries on Wikipedia, subject to the approval of delegated moderators, who are bound to a guiding principle that information published in their given domain meets their interpretation of being “verifiable”. What does verifiable mean in that context? Well, it’s up to the moderator to decide whether a given source sufficiently meets their required “standards” of verification assurance - another layer of delegated and subjective interpretation. It’s trust frameworks all the way down!
Attempting to categorize or rank the trustworthiness of a given “identity system” is a fool’s errand, because requirements relating to information and identity assurance are context-dependent. That said, there are some good faith attempts at evaluating different models and approaches for establishing decentralized trust: the W3C recently released a rubric to evaluate different DID methods (which help a DID controller anchor their identifier in a public way), which covers important aspects like rulemaking, design, operation, enforcement and auditability, security, and privacy. As we zoom out further, it’s perhaps more helpful not to think about the “how” behind a given trust or governance framework, but instead “when” we might find ourselves wanting or needing to rely on them.
In general, when it comes to disclosing sensitive information, “less is more”. Because of the trust gap that we’ve discussed in modern digital life - where you’re either completely anonymous or fully vetted by third parties - we’ve become used to having very few options to self-assert our identity in a digital context.
We also discussed briefly in Part 2 how some states in the US are beginning to issue mobile driver’s licenses (mDLs) to citizens, making it easier for them to present an authoritative identification credential in a digital context. These initiatives are exciting because they can reduce identity-related fraud in both the public and private sectors; however, we need to ensure that we understand the broader contexts in which this solution will be used.
Making it easier for citizens to identify themselves with a high-assurance credential from a DMV is great, but we should also remember that when something becomes easier to do, it will generally happen more often. The internet grew to what it is today because it relied on an open set of protocols that anyone could use and build on, with no identification or credentials required to do so. While a citizen’s mDL may satisfy a multitude of identity assurance requirements of a given business, we need to ensure that the current digital trust gap is not filled solely by asking citizens to present their state-issued identification at every turn. Just as Hank and Carla chose to reveal their personal information selectively and incrementally to satisfy their real-world progressive trust contexts, so too should citizens be able to do this more broadly in a digital context, in a way that they fully control.
To fill this gap of progressive identity assurance requirements, the state DMV should not be the only issuer of identity-related claims, as the requirements of many relying parties are well below the level of assurance that the DMV provides. Certainly, an mDL could satisfy most requirements of an online business (financial institution or otherwise), but the identification requirements of a given digital interaction should ideally mirror the social norms of a similar action in a physical context. Just as it isn’t appropriate for the clerk at the bookstore to check my ID before I buy a book, I shouldn’t need to present identification online to read the news.
This notion of selective disclosure is grounded in how society works in a physical context. Fears of over-identification (and of the resulting normalization of requiring identification for benign commerce and social interactions) are also grounded in terrible but important lessons from history. As Mr. Allen articulates in one examination of this lesson, “ultimately, data will be used to the fullest extent that it can be and it may be used for the worst purposes possible, entirely at odds with the original purpose of the collection”.
Fortunately, emerging standards may address some parts of this risk, enabling citizens to use credentials of all kinds to make abstracted claims that are cryptographically derived from those credentials. With the help of these standards, citizens would be able to selectively share context-appropriate proofs of their claims. For example, a young adult trying to get into a bar would have the option to share verified proof that “I’m over 21,” as opposed to sharing all of the information available on a driver’s license (full name, address, date of birth, etc.).
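Production systems accomplish this with standardized formats and cryptographic schemes (salted-hash approaches like SD-JWT, or signature schemes like BBS+). The following is only a simplified, hash-based Python sketch of the core idea: the issuer commits to each claim separately, so the holder can disclose one claim without revealing the rest:

```python
# Simplified sketch of hash-based selective disclosure. Not a real
# implementation: a production system would use a standardized format,
# and the issuer would sign the digests (see the signature sketch above).
import hashlib
import os

def digest(claim: str, salt: bytes) -> str:
    """Commit to a claim without revealing it."""
    return hashlib.sha256(salt + claim.encode()).hexdigest()

# Issuer: salt and hash each claim individually, then sign the digests.
claims = {"name": "Jane Doe", "address": "123 Main St", "age_over_21": "true"}
salts = {key: os.urandom(16) for key in claims}
signed_digests = {key: digest(f"{key}={value}", salts[key])
                  for key, value in claims.items()}

# Holder: to enter the bar, disclose ONLY the age claim and its salt.
key, value, salt = "age_over_21", "true", salts["age_over_21"]

# Verifier: re-hash the disclosed claim and match it to the signed digest.
assert digest(f"{key}={value}", salt) == signed_digests[key]
print("Verified: holder is over 21. Name and address stay private.")
```

In a real deployment, the verifier would also check the issuer’s signature over the digests before accepting the disclosed claim.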
However, we should be thoughtful about the information we collect and require in a digital context broadly, and understand that standards alone will not curb the risk and trend of over-identification and surveillance. At the same time, we should work to enable citizens to present their information in ways that respect their privacy, align with the societal norms of non-digital interactions, and give them control over the use of their digital identity. To that end, and with all of the above context, the US would be well served to consider the following as it relates to digital identity policy moving forward:
Rethink what digital identity is / should be under the law, and what gaps may remain
Provide federal guidance and leadership on mobile driver’s license (mDL) standards to ensure that privacy is on-par with physical credentials
Seize the moment and elevate NIST’s role in setting domestic and international standards related to digital identity
Leverage our leadership role at FATF to advance American ideals and build more effective AML / CFT frameworks
Bring federal regulators together to provide explicit guidance on how reusable identity can thrive in financial services, and the US broadly
[Figure omitted. Source: https://www.w3.org/reports/identity-web-impact/#architecture, inspired and adapted by ToIP (https://trustoverip.org/wp-content/uploads/Introduction-to-ToIP-V2.0-2021-11-17.pdf)]