Identity and access management

High level local stuff

See also local files:

High level stuff

DNA and related


Identity proofing


DID, DID document and DID Auth

The purpose of DIDs is to facilitate the creation of persistent encrypted private channels between entities without the need for any central registration mechanism. They can be used, for example, for credential exchanges and authentication. An entity can have multiple DIDs, even one or more per relationship with another entity.

DIDs are the core component of a decentralised digital identity and PKI (DPKI) for the Internet. Previous GUID solutions (IETF RFC 4122) and URNs (IETF RFC 2141 and 8141) were either not resolvable (GUIDs) or required a centralised authority (URNs), and neither included the ability to cryptographically verify ownership.

The DID infrastructure can be thought of as a global virtual key-value database in which the database is all DID-compatible blockchains or distributed ledgers. It is the base layer of the decentralised identity infrastructure; the next layer is verifiable credentials (VCs).

Decentralized Identifiers (DIDs) are globally unique identifiers, implemented as did = "did:" method-name ":" method-specific-id. E.g. did:example:123456789abcdefghi.
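The did:method:id syntax above can be sketched with a small parser (a hypothetical helper; the regex is a simplification of the full ABNF in the W3C DID spec):

```python
import re

# Simplified pattern for: did = "did:" method-name ":" method-specific-id
DID_PATTERN = re.compile(r"^did:([a-z0-9]+):([A-Za-z0-9.\-_:%]+)$")

def parse_did(did: str) -> dict:
    """Split a DID into its method name and method-specific id."""
    match = DID_PATTERN.match(did)
    if not match:
        raise ValueError(f"not a valid DID: {did!r}")
    return {"method": match.group(1), "id": match.group(2)}

print(parse_did("did:example:123456789abcdefghi"))
# {'method': 'example', 'id': '123456789abcdefghi'}
```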

A DID method defines how a DID and DID document are created, resolved, and managed on a specific blockchain. As part of a DID method, a DID resolver takes a DID as input and returns the associated metadata, called a DID document, formatted as a JavaScript Object Notation for Linked Data (JSON-LD) object.

There is a distinction between the DID subject and the DID controller, which may or may not be the same.
DID document
A DID is associated with a DID document, which contains, among other things, a public key (the corresponding private key is stored under the control of the subject or controller; the DID specs do not address this).

The association is done through the id property of the DID document, which contains the matching DID.

The public key is defined, e.g., according to a DID key method. For example, a simple DID document:
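A hedged sketch of such a minimal DID document (the key material and values are made up for illustration); note how the id property holds the DID itself, which is what associates document and DID:

```python
import json

# Minimal illustrative DID document; all values are made up.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#keys-1",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",
    }],
}

print(json.dumps(did_document, indent=2))
```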
DID Auth and JWT

Verifiable Credentials

Previously called verifiable claims. A VC is a tamper-resistant credential, cryptographically signed by its issuer. A VC contains claims about a subject, expressed as subject-property-value relationships. The subject is identified in the VC as "credentialSubject": {"id": "did:example:abcdef1234567", "name": "Jane Doe"}, so a DID is used.

A simple VC:
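As a hedged sketch (all values are illustrative; a real VC carries a proof produced with the issuer's private key):

```python
import json

# Minimal illustrative verifiable credential; all values are made up.
vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer9876",
    "issuanceDate": "2020-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:abcdef1234567",
        "name": "Jane Doe",
    },
    "proof": {
        "type": "Ed25519Signature2018",
        "verificationMethod": "did:example:issuer9876#keys-1",
        "jws": "eyJhbGciOiJFZERTQSJ9..sig-goes-here",
    },
}

print(json.dumps(vc, indent=2))
```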

Illustration of a VC in the VC data model.

Verifiable Presentations

The W3C specification also defines Verifiable Presentations. A VP is a tamper-resistant presentation derived from one or more VCs and cryptographically signed by the subject disclosing it. Certain types of verifiable presentations might contain data that is synthesized from, but does not contain, the original verifiable credentials (for example, zero-knowledge proofs). Illustration of a VP in the VC data model.
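A hedged sketch of the VP shape (values are illustrative): the holder signs the presentation, while each embedded credential keeps the issuer's own proof.

```python
import json

# Minimal illustrative verifiable presentation; all values are made up.
vp = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": [{
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": "did:example:issuer9876",
        "credentialSubject": {"id": "did:example:abcdef1234567"},
    }],
    "proof": {
        "type": "Ed25519Signature2018",
        "proofPurpose": "authentication",
        "verificationMethod": "did:example:abcdef1234567#keys-1",
        "challenge": "1f44d55f-f161-4938-a659-f8026467f126",
        "jws": "eyJhbGciOiJFZERTQSJ9..sig-goes-here",
    },
}

print(json.dumps(vp, indent=2))
```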

DIF and W3C

OpenID Connect and VP


An API for accessing Public Key Credentials.


Global implementations


Refer to the Internet information.

Bearer and POP tokens

As per MDN: a Bearer Token is a security token with the property that any party in possession of the token (a bearer) can use it in any way that any other party in possession of it can. Using a bearer token does not require the bearer to prove possession of cryptographic key material (proof-of-possession, PoP). The bearer token or refresh token is created for you by the authentication server: when a user authorises your application (the client), the authentication server generates a bearer token (refresh token), which you can then use to get an access token. The bearer token is normally some kind of cryptic value created by the authentication server; it is not random but is created based upon the user granting access and the client (your application) getting access.


OAuth by IETF

OAuth 1 RFC 5849 - 2010

The OAuth 1.0 protocol was published as RFC 5849 in April 2010. It originated with engineers at Twitter who wanted to stop third-party applications from asking users for their passwords.

OAuth 2.0 basics - RFC 6749 and 6750 - 2012

The OAuth 2.0 framework was published as RFC 6749, and the Bearer Token Usage as RFC 6750 in October 2012. Why two RFCs? Because the original editor disliked bearer tokens.
Illustration of the OAuth 2.0 abstract protocol flow.

Illustration of the OAuth 2.0 refresh expired token protocol flow.

Access tokens are OAuth 2.0 bearer tokens, used e.g. by AWS in the implicit and authorisation code grants.
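The refresh flow boils down to one form-encoded POST to the token endpoint (RFC 6749, section 6). A hedged sketch of the request body (the endpoint, token, and client_id values are made up):

```python
from urllib.parse import urlencode

# Hypothetical token endpoint and credentials, for illustration only.
token_endpoint = "https://auth.example.com/oauth2/token"
body = urlencode({
    "grant_type": "refresh_token",
    "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA",
    "client_id": "s6BhdRkqt3",
})

# The client POSTs this body to the token endpoint to get a new access token.
print("POST", token_endpoint)
print("Content-Type: application/x-www-form-urlencoded")
print()
print(body)
```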

OAuth 2.1

The OAuth 2.1 protocol is work in progress.

OAuth interop

OAuth security

Consider: Testing OAuth security

OAuth extensions

FAPI (Financial API) to meet legal requirements (OpenID).

OAuth implementation

For implementations, refer to vendors such as Auth0 and okta.

OpenID - by the OpenID Foundation

OpenID is a decentralised authentication protocol promoted by the non-profit OpenID Foundation. It allows users to be authenticated using a third-party service, eliminating the need for webmasters to provide their own ad hoc login systems. The original OpenID authentication protocol was developed in May 2005.

It was succeeded by OpenID Connect which is an identity layer on top of the IETF's OAuth 2.0 protocol.

OpenID Connect - OIDC

OpenID Connect is an identity layer on top of the OAuth 2.0 protocol. It introduced the Identity Token and the UserInfo Endpoint. OIDC reinvented the terminology.

Bridging OIDC and W3C VC

JWT - JSON Web Token - a claims set

In a nutshell

JWT definitions are messy. Best to read RFC 8725 (best practices).

RFC 7519: JWTs represent a set of claims as a JSON object that is encoded in a JWS and/or JWE structure. This JSON object is the JWT Claims Set.

The member names within the JWT Claims Set are referred to as Claim Names. The corresponding values are referred to as Claim Values.

The overall structure is a sequence of base64url-encoded parts separated by dots (for a JWS: header, claims set, signature). A JWT may be enclosed in another JWE or JWS structure to create a Nested JWT, enabling nested signing and encryption to be performed.

JWTs are represented using the JWS Compact Serialization or the JWE Compact Serialization.


Before the JWT revolution, a token was just a string with no intrinsic meaning, e.g. 2pWS6RQmdZpE0TQ93X. That token was looked up in a database, which held the claims for that token. This means that DB access (or a cache) is required every time the token is used.

JWTs encode and verify (via signing) their own claims. This makes it possible to issue short-lived JWTs that are stateless (read: self-contained, not dependent on anybody else). They do not need to hit the DB. This reduces DB load and simplifies application architecture, because only the service that issues the JWTs needs to worry about hitting the DB/persistence layer (for the refresh_token).
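The self-contained nature of a JWT can be sketched with only the Python standard library (HS256 only; the key and claims are made up, and this is a sketch, not a substitute for a vetted JWT library and the RFC 8725 best practices):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWS spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(claims: dict, key: bytes) -> str:
    """Build an HS256 JWT in JWS Compact Serialization: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_jwt(token: str, key: bytes) -> dict:
    """Check the HMAC signature and return the claims set; no DB lookup needed."""
    header, payload, signature = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

key = b"made-up-demo-key"
token = mint_jwt({"sub": "alice", "exp": 1700000000}, key)
print(token.count("."))            # 2: header.payload.signature
print(verify_jwt(token, key)["sub"])  # alice
```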



JWT structure

JWT security


JWT libraries

JWT in JavaScript


AWS, Google, Facebook, ...

Federations and alliances

Federations and alliances - general

Federations and alliances - academic and education

Solutions and tools

Strong authentication

Directories - concepts and samples

Vendors - identity focus

Vendors - identity focus - Europe

Vendors - identity focus - UK, US, Australia and Canada

Vendors - identity focus - others

Vendors - directory focus

Access control - concepts - RACF - RBAC - Attrib Certs - ABAC - ZBAC

ABAC - Attribute Based Access Control and related

ABAC roots

ABAC implementations

AC - Attribute Certificates

Refer also to SAML and XACML.

RBAC - Role Based Access Control (NIST/ANSI 359-2004) and related

PSL - Policy Specification Languages