Identity and access management
High level local stuff
See also local files:
High level stuff
DNA and related
Other
Identity proofing
- DE BSI TR03147 - Assurance Level Assessment of Procedures for Identity Verification of Natural Persons
W3C
DID, DID document and DID Auth
DID
The purpose of DIDs is to facilitate the creation of persistent encrypted private channels between entities
without the need for any central registration mechanism.
They can be used, for example, for credential exchanges and authentication.
An entity can have multiple DIDs, even one or more per relationship with another entity.
DIDs are the core component of decentralized digital identity and of a decentralized PKI (DPKI) for the Internet.
Previous solutions fell short: GUIDs (IETF RFC 4122) are not resolvable, URNs (IETF RFC 2141 and 8141) require a centralised registration authority, and neither offers a way to cryptographically verify ownership of the identifier.
The DID infrastructure can be thought of as a global virtual key-value database in which the database is all DID-compatible blockchains or
distributed ledgers. It is the base layer for the decentralised identity infrastructure, the next layer is verifiable credentials (VCs).
Decentralized Identifiers (DIDs) are globally unique identifiers, implemented as
did = "did:" method-name ":" method-specific-id. E.g. did:example:123456789abcdefghi.
There is a distinction between the DID subject and the DID controller, which may or may not be the same.
DID method
DID methods are the mechanism by which a particular type of DID and its associated DID document are created, resolved, updated, and deactivated.
As part of a DID method, a DID resolver takes a DID as input and returns the associated metadata, called a DID document, formatted as a JavaScript Object Notation for Linked Data (JSON-LD) object.
DID document
A DID is associated with a DID document, which contains a.o. a public key (the corresponding private key is kept under the control of the subject or controller; key storage is not addressed by the DID specs).
The association is made through the id property of the DID document, which contains the matching DID.
The public key is expressed according to the verification method / key format defined by the applicable DID method.
- A DID document is a valid JSON-LD object that uses the DID context (the RDF vocabulary of property names) defined in the DID specification.
- Linked Data (LD) is a term commonly used to refer to data that complies with Tim Berners-Lee's
four principles formulated in 2006:
- Use URIs as names for things,
- Use HTTP URIs so that people can look up those names,
- When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL),
- Include links to other URIs so that they can discover more things.
- Linked Data also gave rise to Linked Data Proofs, a proof format used a.o. to sign Verifiable Credentials.
- A DID document includes six components (all optional):
- The associated DID
- A set of cryptographic material, such as public keys, that can be used for authentication or interaction with the DID subject.
- A set of cryptographic protocols for interacting with the DID subject, such as authentication and capability delegation.
- A set of service endpoints that describe where and how to interact with the DID subject.
- Timestamps for auditing.
- An optional JSON-LD signature if needed to verify the integrity of the DID document.
E.g. a simple DID document:

    {
      "@context": "https://www.w3.org/ns/did/v1",
      "id": "did:example:123456789abcdefghi",
      "authentication": [{
        "id": "did:example:123456789abcdefghi#keys-1",
        "type": "RsaVerificationKey2018",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyPem": "-----BEGIN PUBLIC KEY...END PUBLIC KEY-----\r\n"
      }],
      "service": [{
        "id": "did:example:123456789abcdefghi#vcs",
        "type": "VerifiableCredentialService",
        "serviceEndpoint": "https://example.com/vc/"
      }]
    }
DID Auth and JWT
- did-auth DID-based authentication -
challenge/response (relying party/identity owner) to demonstrate ownership of a DID
- DID JWT
- allows signing and verifying JSON Web Tokens (JWTs) using the ES256K, ES256K-R and Ed25519 algorithms (see the sketch below)
- DID JWT VC
- allows creating and verifying W3C VCs and VPs in JWT format
- the Issuer object must contain a did attribute and a signer function
- currently (2020-04) there is only support for ethr-did issuers to sign JWTs using the ES256K-R algorithm
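A minimal sketch of what signing and verifying a DID-bound JWT can look like with the did-jwt, did-resolver and ethr-did-resolver packages; the imports, option names, resolver configuration and key handling below are assumptions and should be checked against the libraries' current documentation:

    // Sketch only - assumes the did-jwt / did-resolver / ethr-did-resolver packages; APIs differ per version.
    const { createJWT, verifyJWT, ES256KSigner } = require('did-jwt');
    const { Resolver } = require('did-resolver');
    const { getResolver } = require('ethr-did-resolver');

    async function demo(privateKeyBytes) {
      const issuer = 'did:ethr:0x1234...';              // hypothetical ethr-did issuer
      const signer = ES256KSigner(privateKeyBytes);     // signer over key material held by the DID controller

      // Sign a claims set into a JWT whose issuer is a DID
      const jwt = await createJWT({ sub: issuer, name: 'Jane Doe' }, { issuer, signer });

      // Verify by resolving the issuer's DID document and checking the signature against its keys
      const resolver = new Resolver(getResolver({ infuraProjectId: '<project-id>' }));  // config shape varies per version
      const { payload } = await verifyJWT(jwt, { resolver });
      return payload;
    }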
Verifiable Credentials (VCs)
Previously called verifiable claims.
A VC is a tamper-resistant credential, cryptographically signed by its issuer.
A VC contains claims about a subject. Claims are expressed using subject-property-value relationships.
The subject is identified in the VC as "credentialSubject": {"id": "did:example:abcdef1234567", "name": "Jane Doe"}. So a DID is used.
Github information
A simple VC:

    {
      "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1"
      ],
      "id": "http://example.com/credentials/4643",
      "type": ["VerifiableCredential"],
      "issuer": "https://example.com/issuers/14",
      "issuanceDate": "2018-02-24T05:28:04Z",
      "credentialSubject": { "id": "did:example:abcdef1234567", "name": "Jane Doe" },
      "proof": { ... }
    }
Illustration of a VC in the VC data model.
Verifiable Presentations
The W3C specification also defines Verifiable Presentations. A VP is a tamper-resistant presentation derived from one or more VCs and cryptographically signed by
the subject disclosing it. Certain types of verifiable presentations might contain data that is synthesized from, but does not contain,
the original verifiable credentials (for example, zero-knowledge proofs).
Illustration of a VP in the VC data model. It includes (a minimal example follows the list):
- URI to uniquely identify contexts
- URI to identify the presentation
- One or more verifiable credentials, or data derived from them
- Cryptographic signature of the subject
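A minimal VP sketch in the style of the W3C VC data model examples (DIDs, timestamps and proof values are placeholders):

    {
      "@context": ["https://www.w3.org/2018/credentials/v1"],
      "type": ["VerifiablePresentation"],
      "verifiableCredential": [{ ... }],
      "proof": {
        "type": "RsaSignature2018",
        "created": "2018-09-14T21:19:10Z",
        "proofPurpose": "authentication",
        "verificationMethod": "did:example:abcdef1234567#keys-1",
        "challenge": "1f44d55f-f161-4938-a659-f8026467f126",
        "jws": "..."
      }
    }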
DIF and W3C
DIF - identity.foundation
- DIF - Decentralized Identity Foundation
- mostly participants involved in W3C DID etc
- creates specifications and reference implementations
- DID JWT and Auth code donated by uPort
Working Groups
- DIF Working Groups
- DIF DID Auth Working Group
- SIOP DID: Self-Issued OpenID Connect Provider DID profile, a specific flavour of DID AuthN used in the OIDC SIOP flow,
using DIDs to integrate SSI wallets into web applications. This is based on the SIOP specification, part of the OIDC Core specification. For clarity's sake: DID AuthN refers to a method of proving control over a DID for the purpose of authentication.
- DIDComm JS Lib, a shared effort with the HL Aries project to create a standardized means of authenticated general message passing between DID controllers
- created a Universal Resolver, enabling application code to be written against a single resolver interface that can communicate with multiple decentralized identifier systems.
- A DID-based blockchain IDMS that supports the Universal Resolver must define and implement a DID Driver that links the Universal Resolver to its system-specific DID Method for reading DID documents. This allows applications relying on the IDMS to query DIDs through a common interface, so they do not have to implement the system-specific DID methods themselves.
- DIF Secure Data Storage (SDS) Work Group
- Many other WGs on Identifiers and discovery, Storage and compute, Claims and credentials, Sidetree, ...
Specifics
DIDcomm messaging
DIDcomm messaging/routing
- The routing protocol defines how a sender and a recipient cooperate, using a partly trusted mediator
DIF github
- DIF github
- DIF specifications
- includes various did resolvers, did jwt, sidetree (i.e. a layer 2 PKI), identity hub, ...
- DIF sidetree
- a layer-2 protocol for anchoring and tracking DID Documents across a blockchain.
- The central design idea involves batching multiple DID Document operations into a single blockchain transaction.
- DIF Universal Resolver
- comparable to bind in DNS: provides information that explains how to communicate with the entity represented by the identifier
- discovers and retrieves this information, which at a minimum includes the service endpoints for communicating with the entity
as well as the cryptographic keys associated with it
- a publicly hosted instance of the Universal Resolver is available (see the example request below).
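Resolving a DID through the hosted instance is a plain HTTP GET; the host name and path below reflect the DIF-hosted instance as commonly documented and should be treated as an assumption:

    GET /1.0/identifiers/did:example:123456789abcdefghi HTTP/1.1
    Host: dev.uniresolver.io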
- DIF Universal Resolver Front-end
- DIF Identity Hub - replaced by 'secure-data-store' concept
- DID support for RSA and secp256k1, crypto libraries
- DID cryptographic extensions, did-auth-jose package
- OIDC Libs used to allow a user to log in with an SSI wallet onto an application server using an OIDC SSI Token; includes e.g. the node openid-client for JS and mod_auth_openidc 2.3.1 for the Apache HTTP server
- DIDcomm messaging/routing on github - specs and reference code
- The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.
- The fundamental paradigm for DIDComm Messaging is message-based, asynchronous, and simplex.
- This is different from the dominant paradigm in mobile and web development today, which is duplex request-response, where you call an API with certain inputs, and you wait to get back a response with certain outputs over the same channel, shortly thereafter. This is the world of OpenAPI/Swagger.
- Three types of messages (a plaintext example follows this list):
- encrypted = dcem = DIDComm Encrypted Message = JWE envelope: authN, confidentiality, integrity, routing (may contain a dcsm)
- signed = dcsm = DIDComm Signed Message = optional JWS envelope for non-repudiation (may contain a dcpm)
- plaintext = dcpm = DIDComm Plaintext Message = application data + metadata
- The casual phrase “DIDComm message” is ambiguous, but usually refers to dcem (DIDComm Encrypted Messages)
- All three message formats — plaintext, signed, and encrypted — can be correctly understood as more generic JWMs (JSON Web Messages) or even as arbitrary JOSE content.
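A minimal DIDComm plaintext message (dcpm) sketch, following the attribute names used in the DIDComm Messaging spec (id, type, from, to, created_time, body); the message type URI and DIDs are placeholders:

    {
      "id": "1234567890",
      "type": "https://didcomm.org/example/1.0/message",
      "from": "did:example:alice",
      "to": ["did:example:bob"],
      "created_time": 1516269022,
      "body": { "content": "hello" }
    }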
- DIDcomm.org
- DIDcomm v2 book - v2 is incubated by DIF, finalized in early 2022
OpenID Connect and VP
WebAuthn
An API for accessing Public Key Credentials.
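A browser-side sketch of registering a public key credential via the WebAuthn API (RP name, user data and algorithm choice are illustrative; the challenge normally comes from the server):

    // Runs in the browser, inside an async function; values are placeholders
    async function registerCredential() {
      const credential = await navigator.credentials.create({
        publicKey: {
          challenge: crypto.getRandomValues(new Uint8Array(32)),   // normally supplied by the relying party's server
          rp: { name: "Example RP" },
          user: {
            id: new TextEncoder().encode("user-1234"),
            name: "jane@example.com",
            displayName: "Jane Doe"
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }]      // -7 = ES256
        }
      });
      return credential;   // sent to the server for attestation verification
    }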
Other
Global implementations
HTTP and HTTPS
Refer to the Internet information.
Bearer and POP tokens
As per MDN: Bearer Token:
A security token with the property that any party in possession of the token (a bearer) can use the token in any way that any other party
in possession of it can.
Using a bearer token does not require a bearer to prove possession of cryptographic key material (proof-of-possession - POP).
The Bearer Token or Refresh Token is created for you by the authentication server.
When a user authorises your application (the client), the authentication server generates a Bearer Token
(refresh token), which you can then use to obtain an Access Token.
The Bearer Token is normally some kind of cryptic value created by the authentication server;
it is not random but is derived from the user granting access and from your application (the client) receiving that access.
SAML
OAuth by IETF
OAuth 1 RFC 5849 - 2010
The OAuth 1.0 protocol was published as RFC 5849 in April 2010. It originated at Twitter, which wanted to avoid third-party applications handling user passwords.
- OAuth - Open Authorization - simple web service authorisation - how to let a printing service access your private pictures without
passing it your credentials (by making you log in and approve)
OAuth 2.0 basics - RFC 6749 and 6750 - 2012
The OAuth 2.0 framework was published as RFC 6749, and the Bearer Token Usage as RFC 6750 in October 2012. Why two RFCs? Because the original editor disliked bearer tokens.
Figures: OAuth 2.0 abstract protocol flow | OAuth 2.0 refresh expired token protocol flow
It can be observed that:
- this was created for the use case where the 'user' IS the 'resource owner' - hence, in the protocol flows, the 'resource owner' step consists of interaction with the user (or is implied by the fact that the user initiates the request to grant access to e.g. his calendar)
- the term 'authorisation server' is far removed from an enterprise PEP - it is use case specific
Used e.g. by AWS where access tokens are OAuth 2.0 bearer tokens, used in implicit and authorisation code grants.
Specs
- OAuth 2 - Open Authorization - overview and list of RFCs
- OAuth 2 - grant types for different use cases
- Most common OAuth grant types:
- Authorization Code - allows confidential and public clients to exchange an authorization code for an access token. After the user returns to the client via the redirect URL, the application will get the authorization code from the URL and use it to request an access token.
- PKCE (RFC 7636: Proof Key for Code Exchange) - an extension to the Authorization Code flow to prevent CSRF and authorization code injection attacks. PKCE is not a replacement for a client secret, and PKCE is recommended even if a client is using a client secret (see the example request pair after the grant-type list).
- Client Credentials - used by clients to obtain an access token outside of the context of a user. Typically used by clients to access resources about themselves rather than to access a user's resources.
- Device Code - used by browserless or input-constrained devices in the device flow (RFC 8628) to exchange a previously obtained device code for an access token.
- Refresh Token - used by clients to exchange a refresh token for an access token when the access token has expired. Allows clients to continue to have a valid access token without further interaction with the user.
- Legacy grant types:
- Implicit Flow (exposes token to browser, also broken because it assumed browsers will not append anything to redirect uri, but today's browsers DO)
- Password Grant (based on password forwarding, included for migration purposes)
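A sketch of the two HTTP exchanges in the Authorization Code grant with PKCE; parameter names are per RFC 6749/7636, while the host names, client id and code values are illustrative (largely borrowed from the RFC examples):

    Front-channel: redirect the user agent to the authorization endpoint
      GET /authorize?response_type=code
          &client_id=s6BhdRkqt3
          &redirect_uri=https%3A%2F%2Fclient.example.org%2Fcb
          &scope=photos
          &state=af0ifjsldkj
          &code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
          &code_challenge_method=S256 HTTP/1.1
      Host: authorization-server.example.com

    Back-channel: exchange the returned code for tokens
      POST /token HTTP/1.1
      Host: authorization-server.example.com
      Content-Type: application/x-www-form-urlencoded

      grant_type=authorization_code&code=SplxlOBeZQQYbYS6WxSbIA
      &redirect_uri=https%3A%2F%2Fclient.example.org%2Fcb
      &client_id=s6BhdRkqt3&code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk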
- OAuth 2 Wikipedia summary
- provides a variety of standardized message flows based on JSON and HTTP; OpenID Connect uses these to provide Identity services
- IETF framework specified in RFCs 6749 and 6750 (published in 2012) designed to support authentication and authorization protocols
- RFC 6749 - The OAuth 2.0 Authorization Framework
- Enables a third-party application to obtain access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service,
or by allowing the third-party application to obtain access on its own behalf.
- Separates the roles (of the resource owner (user), client (i.e. application) and the authorisation server).
- Client requests access to resources controlled by the resource owner and hosted by the resource server, and is issued a different set of credentials than those of the resource owner.
- Instead of using the resource owner's credentials to access protected resources, the client obtains an Access Token -- a string denoting a specific scope, lifetime, and other access attributes. Access tokens are issued to third-party clients by an authorization server with the approval of the resource owner. The client uses the access token to access the protected resources hosted by the resource server.
- An Access Token is 'a string representing an access authorization issued to the client'
- Also: "Access token attributes and the methods used to access protected resources are beyond the scope of
this specification and are defined by companion specifications such as [RFC6750]."
- OAuth defines four key roles:
- resource owner: an entity capable of granting access to a protected resource. When the resource owner is a person, it is referred to as an end-user.
- resource server (RS): the server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.
- client: an application making protected resource requests on behalf of the resource owner and with its authorization. The term 'client' does not imply any particular implementation characteristics (e.g., whether the application executes on a server, a desktop, or other devices).
- authorisation server (AS): issuing access tokens to the client.
- RFC 6750 - The OAuth 2.0 Authorization Framework: Bearer Token Usage
- Introduces an Access Token of the bearer-type and describes how to use such tokens in HTTP requests to access OAuth 2.0
protected resources. Any party in possession of a bearer token (a 'bearer') can use it to get access to the associated resources
(WITHOUT demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be
protected from disclosure in storage and in transport.
- From the RFC: 'This specification defines the use of bearer tokens over HTTP/1.1 using TLS to access
protected resources. TLS is mandatory to implement and use with this specification.'
- Token contents:
- The RFC does not specify the format or content of a token
- The core part of the token is the bearer credential, which must match the b64token syntax (a restricted, base64-like character set).
- For the bearer credential one can use e.g. encrypted name-value pairs or JSON; arbitrary content is then typically base64/base64url encoded
so that it fits the allowed character set (base64url additionally keeps it URL-safe).
- There are three ways to transmit the token:
- In the HTTP Authorization Request Header Field
For example, using 'mF_9.B5f-4.1JqM' as value for the bearer credential:
- GET /resource HTTP/1.1
- Host: server.example.com
- Authorization: Bearer mF_9.B5f-4.1JqM
- In the HTTP request entity-body, by adding the token to the request body using the 'access_token' parameter
For example, the client makes the following HTTP request using transport-layer security:
- POST /resource HTTP/1.1
- Host: server.example.com
- Content-Type: application/x-www-form-urlencoded
- access_token=mF_9.B5f-4.1JqM
- In the HTTP request URI, by adding the access token to the request URI query component using the 'access_token' parameter
For example, the client makes the following HTTP request using transport-layer security:
- GET /resource?access_token=mF_9.B5f-4.1JqM HTTP/1.1
- Host: server.example.com
- RFC 6819 OAuth 2 Threat Model and Security Considerations
- OAuth 2 Security Best Current Practices - draft RFC
- RFC 8252 OAuth2 for native apps
- OAuth 2 for browser-based apps/SPAs (Single Page Application)
- Yes.com banking - based on OAuth
- Torsten Lodderstedt CTO
- idea: simply sign in to apps and websites with your online banking login, verify your identity, sign documents or initiate payments
- OAuth 2 resources from Aaron Parecki - an OAuth contributor
- OAuth 2 and OIDC update
- OAuth 2 and OIDC explained by okta
- OAuth 2 and OIDC in plain English with utilities
- OAuth 2 playground - okta sponsored
- OAuth 2 device flow
- OAuth 2 security best practice
- OAuth 2 development by okta
OAuth 2.1
The OAuth 2.1 protocol is work in progress.
OAuth interop
- RFC 7521 - Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants
- Provides a framework for OAuth 2.0 to interwork with other identity systems using assertions
and to provide alternative client authentication mechanisms.
- RFC 7522 - Security Assertion Markup Language (SAML) 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants
- Defines an instantiation for SAML 2.0 Assertions
- RFC 7523 - JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants
- Defines an instantiation of OAuth for JWTs, with issuer, subject, audience, etc
- There's also OAuth 2.0 Token Exchange (RFC 8693, developed as draft-ietf-oauth-token-exchange), which defines a protocol for an HTTP- and JSON-based Security Token Service (STS) by defining how to request and obtain security tokens from OAuth 2.0 authorization servers, including security tokens employing impersonation and delegation.
OAuth security
Consider:
Testing OAuth security
OAuth extensions
FAPI (Financial-grade API), by the OpenID Foundation, to meet financial-sector regulatory requirements.
OAuth implementation
For implementations, refer to vendors such as Auth0 and okta.
OpenID - by the OpenID Foundation
OpenID is a decentralised authentication protocol promoted by the non-profit OpenID Foundation.
It allows users to be authenticated using a third-party service,
eliminating the need for webmasters to provide their own ad hoc login systems.
The original OpenID authentication protocol was developed in May 2005.
It was succeeded by OpenID Connect which is an identity layer on top of the IETF's OAuth 2.0 protocol.
- OpenID.net (US driven, Not-for-profit) - your username as uri, your credentials managed by a party of choice, and an OpenID provider
- Yadis/OpenID (obsolete)
- OpenID Authentication 1.0, 2.0 (obsolete)
- OpenID.or.jp - OpenID Foundation Japan - established in 2008
OpenID Connect - OIDC
OpenID Connect is an identity layer on top of the OAuth 2.0 protocol. It introduced the ID Token and the UserInfo Endpoint. OIDC renamed the terminology:
- OAuth Authorisation Server (AS) = OIDC OpenID Provider (OP)
- OAuth Resource Owner (RO) = OIDC End-User
- OpenID Connect in plain English - Micah Silverman
- Great OpenID Connect and OAuth risk analysis (ETSI) - ETSI GS NFV-SEC 022 V2.8.1 (2020-06)
- OpenID Connect - successor to OpenID
- (Identity, Authentication) + OAuth 2.0 = OpenID Connect
- REST/JSON protocol
- allows for clients of all types, including browser-based JavaScript and native mobile apps, to launch sign-in flows and receive verifiable assertions about the identity of signed-in users
- Relationship to SAML: OpenID Connect can satisfy the same use cases as SAML but with a simpler, JSON/REST based protocol.
OpenID Connect was designed to also support native apps and mobile applications, whereas SAML was designed only for Web-based applications.
- OpenID Connect has many architectural similarities to OpenID 2.0, and its protocols solve a very similar set of problems.
However, OpenID 2.0 used XML and a custom message signature scheme that in practice sometimes proved difficult for developers to get right,
with the effect that OpenID 2.0 implementations would sometimes mysteriously refuse to interoperate.
OAuth 2.0, the substrate for OpenID Connect, outsources encryption to the TLS infrastructure, which is universally implemented on both
client and server platforms. OpenID Connect uses standard JSON Web Token (JWT) data structures when signatures are required.
This makes OpenID Connect easier for developers to implement, and resulted in better interoperability.
- Actors:
- OAuth defines four key roles:
- resource owner: an entity capable of granting access to a protected resource.
When the resource owner is a person, it is referred to as an end-user.
- resource server: the server hosting the protected resources, capable of accepting
and responding to protected resource requests using access tokens.
- client: an application making protected resource requests on behalf of the resource owner and with its authorization.
The term "client" does not imply any particular implementation characteristics
(e.g., whether the application executes on a server, a desktop, or other devices).
- authorization server: the server issuing access tokens to the client after successfully
authenticating the resource owner and obtaining authorization.
- End-user (e.g. a human person); OAuth refers to this as the resource owner
- Client (synonym: Relying Party, RP), meaning the website or service (e.g. an nginx-fronted application) the end-user wants to authenticate to; this corresponds to the OAuth client
- User agent (e.g. a browser), through which the end-user interacts with the RP and the OP
- OpenID Provider (OP), the IdP which authenticates the end-user and creates the ID Token and the Access Token (there are also Self-Issued OPs, SIOPs); it exposes:
- Token Endpoint, carried over from OAuth (AS); allows the requester to retrieve tokens directly. This is a machine-to-machine endpoint: it never needs to see the resource owner or be accessed via a front channel.
- UserInfo Endpoint - new to OpenID Connect; allows a request with an Access Token to obtain identity information (claims) about the authenticated end-user.
Otherwise stated: when presented with an Access Token by the Client, it returns authorized information about the End-User represented by the corresponding Authorization Grant (see the example response below).
The UserInfo Endpoint URL MUST use the https scheme and MAY contain port, path, and query parameter components.
- Authorization Endpoint, carried over from OAuth (Authorisation Server); this endpoint authorises access to a protected resource.
This resource could be the resource owner's identity or an API.
This endpoint requires the resource owner to first authenticate (log in, with human intervention) and then give their consent for the client to access their protected resources. Assume that this endpoint always requires interaction with the resource owner.
- Registration endpoint (optional) - for the OP's Dynamic Client Registration
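A sketch of a UserInfo request and a typical JSON response (the bearer value and claim values are placeholders in the style of the OIDC Core examples; the exact claims returned depend on the granted scopes):

    GET /userinfo HTTP/1.1
    Host: op.example.com
    Authorization: Bearer SlAV32hkKG

    {
      "sub": "248289761001",
      "name": "Jane Doe",
      "preferred_username": "j.doe",
      "email": "janedoe@example.com",
      "email_verified": true
    }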
- Mozilla on OIDC
- nginx OIDC deployment
- nginx SSO using OIDC with okta as IdP and nginx as RP
- Sample: Google OP configuration information listing the endpoints above and many more such as certificates
- Keycloak (Red Hat origin) - OIDC, SAML and SSO
- Certified Open ID Connect Providers - from Akamai, Google, Microsoft,... to Yahoo!
- Certified OpenID Connect Implementations - software libraries, RP clients etc
- Used e.g. by BE FAS in the Authorisation Code Flow, see https://dtservices.bosa.be/nl/Services/FAS
- Client application sends an Authorization Code Request towards the FAS Authorization server via the browser.
- User authenticates using one of the authentication methods.
- FAS authorization server sends an Authorization Code via the browser to the redirect-URI of the client application.
- The client application requests to exchange the Authorization Code for an Access Token, Refresh Token and ID Token using server
to server communication and OIDC client authentication method client_secret_basic.
- FAS sends an Access Token, Refresh Token and ID Token via server to server communication to the client application.
- The client application calls the userinfo endpoint of the FAS using the Access Token.
- FAS sends a signed JWT with extra user information to the client application.
- The client application should be able to create a local session for the user.
- OpenID Connect specifications:
- Core Defines the core OpenID Connect functionality: authentication built on top of OAuth 2.0 and the use of claims to communicate information about the End-User
- QUOTE - OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 [RFC6749] protocol.
It enables Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server,
as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner. - UNQUOTE
- QUOTE - This specification assumes that the Relying Party has already obtained sufficient credentials and provided
information needed to use the OpenID Provider. This is normally done via Dynamic Registration, as described in
OpenID Connect Dynamic Client Registration 1.0, or may be obtained via other mechanisms. - UNQUOTE ('other mechanisms'
include self-issued credentials)
- The OpenID Connect protocol, in abstract, follows the following steps.
- The RP (Client) sends a request to the OpenID Provider (OP). (Client corresponds to e.g. a website visited by an end user, NOT to the end user or his agent)
- The OP authenticates the End-User and obtains authorization.
- QUOTE from Core '3.1.2.3. Authorization Server Authenticates End-User'- The methods used by the Authorization Server
to Authenticate the End-User (e.g. username and password, session cookies, etc.) are beyond the scope of this specification. - UNQUOTE
- The OP responds with an ID Token (a JWT - the defining OIDC addition) and an Access Token (JWT or opaque);
a Refresh Token may also be issued to renew the Access Token. (A decoded ID Token example follows this list.)
- The Access Token is the OAuth 2 Access Token specified in RFC 6749.
- The ID token:
- Is the primary extension that OIDC makes to OAuth 2.0 to enable End-Users to be Authenticated.
It contains Claims about the authentication of an End-User by an Authorization Server when
using a Client, and potentially other requested Claims.
- Must be signed using JWS and optionally both signed and then encrypted using JWS and JWE.
- May contain a 'level of assurance' in the optional 'acr' field, a string specifying 'Authentication Context Class Reference'.
This identifies the Authentication Context Class that the authentication performed satisfied.
The value "0" indicates the End-User authentication did not meet the requirements of ISO/IEC 29115 [ISO29115] level 1.
Authentication using a long-lived browser cookie, for instance, is one example where the use of "level 0" is appropriate.
Authentications with level 0 SHOULD NOT be used to authorize access to any resource of any monetary value.
(This corresponds to the OpenID 2.0 PAPE nist_auth_level 0.)
- The RP can send a request with the Access Token to the UserInfo Endpoint.
- The UserInfo Endpoint returns Claims about the End-User.
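A sketch of a decoded ID Token payload (a JWT claims set), with values taken from the style of the OIDC Core examples; 'auth_time', 'nonce' and 'acr' are optional:

    {
      "iss": "https://op.example.com",
      "sub": "248289761001",
      "aud": "s6BhdRkqt3",
      "exp": 1311281970,
      "iat": 1311280970,
      "auth_time": 1311280969,
      "nonce": "n-0S6_WzA2Mj",
      "acr": "urn:mace:incommon:iap:silver"
    }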
- Three authentication flows, the flow used is determined by the response_type value contained in the Authorization Request:
- Authorization Code Flow (more secure flow), where all tokens are returned from the Token Endpoint.
- Returns an Authorization Code to the Client, which can then exchange it for an ID Token and an Access Token directly. This provides the benefit of not exposing any tokens to the User Agent and possibly other malicious applications with access to the User Agent. The Authorization Server can also authenticate the Client before exchanging the Authorization Code for an Access Token. The Authorization Code flow is suitable for Clients that can securely maintain a Client Secret between themselves and the Authorization Server.
- Implicit Flow (deprecated in 2018), where all tokens are returned from the Authorization Endpoint; the Token Endpoint is not used.
- This flow is mainly used by Clients implemented in a browser using a scripting language.
The Access Token and ID Token are returned directly (in the url) to the Client, which may expose them to the End-User and applications
that have access to the End-User's User Agent. The Authorization Server does not perform Client Authentication.
- The OAuth 2.0 specification included the Implicit Flow at a time when browser support for single page applications
was much more limited. In particular, JS did not have access to browser history or local storage.
And most providers did not allow cross-site POST requests to a /token endpoint, which is a requirement of the
Authorization Code flow.
- Hybrid Flow, where some tokens are returned from the Authorization Endpoint and others from the Token Endpoint.
- The mechanisms for returning tokens in the Hybrid Flow are specified in OAuth 2.0 Multiple Response Type Encoding Practices.
- More flows are possible:
- Microsoft's info on other grant flows
- Authorization code grant flow (regular) - preferably with Proof Key for Code Exchange (PKCE, a client-side created secret)
- Implicit Grant flow (regular)
- On behalf of flow (OBO, e.g. for chaining API2API calls)
- Client credential flow (for batch or server-to-server applications that must run in the background,
without immediate interaction with a user)
- Device code flow grant (simplified for applications running on input constrained devices)
- Resource owner password credential grant (for applications that are authorised to touch (and see) a users password - high risk)
- Discovery - how clients discover information about OPs
- OPs must make configuration information available at the well-known location {issuer}/.well-known/openid-configuration (an excerpt follows below)
- Here RPs can discover the End-User's OP and obtain the information needed to interact with it, including its OAuth 2.0 endpoint locations.
- Uses WebFinger (RFC 7033) to locate the issuer for a given user identifier
- Describes a.o. the scopes supported (e.g. openid, profile, email, phone) and the claims supported (the attributes backing those scopes)
- E.g. the mandatory scope openid yields an ID Token with claims such as 'sub(ject)', 'iss(uer)' and 'aud(ience)'
- While scope profile brings 'name', 'family_name', etc.
- And scope email brings 'email' and 'email_verified'.
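An excerpt of what a discovery document typically contains (field names per OIDC Discovery; the URLs and value lists are placeholders):

    {
      "issuer": "https://op.example.com",
      "authorization_endpoint": "https://op.example.com/authorize",
      "token_endpoint": "https://op.example.com/token",
      "userinfo_endpoint": "https://op.example.com/userinfo",
      "jwks_uri": "https://op.example.com/jwks.json",
      "registration_endpoint": "https://op.example.com/register",
      "scopes_supported": ["openid", "profile", "email", "phone"],
      "response_types_supported": ["code", "id_token", "token id_token"],
      "id_token_signing_alg_values_supported": ["RS256", "ES256"],
      "claims_supported": ["sub", "iss", "aud", "name", "email", "email_verified"]
    }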
- OpenID Connect Dynamic Registration - Defines how clients dynamically register with OpenID Providers
- OAuth 2.0 Multiple Response Types - Defines several specific new OAuth 2.0 response types
- OAuth 2.0 Form Post Response Mode - Defines how to return OAuth 2.0 Authorization Response parameters (including OpenID Connect Authentication Response parameters) using HTML form values that are auto-submitted by the User Agent using HTTP POST
- OpenID 2.0 to OpenID Connect Migration 1.0 - Defines how to migrate from OpenID 2.0 to OpenID Connect
- Implementations and libraries
- OpenID workgroups - includes F-API (financial-grade API), Health, ...
- WSO2 - including OIDC identity server, cloud identity, API security etc
- BE ITSME OpenID implementation - OpenID Connect - Core 1.0
OIDC and self-sovereignty
- A proposed extension to OpenID Connect for Self Issued OpenID Providers
- In the traditional OpenID Connect model, when an OP (OpenID Provider) acts as an ID token issuer, it is common for the OP to have a legal stake with the RPs (Relying Parties) and a reputation-based stake with both RPs and End-Users to provide correct information.
- In the Self-Issued OP model, the RPs' trust relationship is directly with the End-User. The Self-Issued OP allows the End-User to authenticate towards the RP with an identifier controlled by the End-User instead of an identifier assigned to the End-User by a third-party provided OP.
- An End-User controlled identifier might be a public key fingerprint or a Decentralized Identifier (see [DID-Core]).
- This changes the trust model and the way signatures of the Self-Issued ID Tokens are validated in comparison to the traditional OpenID Connect model.
Bridging OIDC and W3C VC
JWT - JSON Web Token - a claims set
In a nutshell
JWT definitions are messy. Best to read RFC 8725 (best practices).
RFC 7519: JWTs represent a set of claims as a JSON object that is encoded in a JWS and/or JWE structure.
This JSON object is the JWT Claims Set.
The member names within the JWT Claims Set are referred to as Claim Names.
The corresponding values are referred to as Claim Values.
The overall structure is:
- JOSE Header, indicating JWS or JWE, e.g. {"typ":"JWT", "alg":"HS256"}, this JWT is a JWS that is MACed using HMAC SHA-256
- JWT Claims Set, e.g. {"iss":"joe", "exp":1300819380, "http://example.com/is_root":true}
- Claims include "iss" (Issuer), "sub" (Subject), "aud" (Audience), "exp" (Expiration Time),
"exp" (Expiration Time), "nbf" (Not Before), "iat" (Issued At), "jti" (JWT ID)
- If the JWT is a JWS, the Claims Set is encoded into a message which becomes the JWS Payload,
if the JWT is a JWE, the Claims Set is encoded and used as the JWE plaintext
A JWT may be enclosed in another JWE or JWS structure to create a Nested JWT, enabling nested signing
and encryption to be performed.
JWTs are represented using the JWS Compact Serialization or the JWE Compact Serialization.
Rationale
Before the JWT revolution, a token was just a string with no intrinsic meaning, e.g. 2pWS6RQmdZpE0TQ93X.
That token was looked up in a database, which held the claims for that token.
This means that DB (or cache) access is required every time the token is used.
JWTs encode their own claims and allow them to be verified (via the signature).
This makes it possible to issue short-lived JWTs that are stateless (read: self-contained, not dependent on anybody else).
They do not need to hit the DB. This reduces DB load and simplifies application architecture, because only the service that
issues the JWTs needs to worry about hitting the DB/persistence layer (for the refresh_token).
Wikipedia
- JSON Web Token - Wikipedia
- IETF RFC for creating JSON-based access tokens that assert claims
- E.g. a server generates a token that has the claim 'logged in as admin' and provides that to a client. The client could use that token to prove that it is logged in as admin
- Tokens are signed by one party's private key (usually the server's), so that both parties (the other already being, by some suitable and trustworthy means, in possession of the corresponding public key) are able to verify that the token is legitimate
- Tokens are designed to be compact, URL-safe, and usable in a web-browser single-sign-on (SSO) context. JWT claims can typically be used to pass the identity of authenticated users between an identity provider and a service provider, or to carry any other type of claim required by business processes
- JWTs are preferably sent in the HTTP Authorization header as a Bearer credential over HTTPS. Sending them as a query-string attribute is discouraged (e.g. per Google good practice).
RFCs
JWT structure
- JSONwebtoken.io
- JOSE Header contains: {
"alg" : "HS256",
"typ" : "JWT" }
- Payload contains: {
"loggedInAs" : "admin",
"iat" : 1422779638 }
- Signature contains:
HMAC-SHA256(
base64urlEncoding(header) + '.' +
base64urlEncoding(payload),
secret
)
- creation: step 1: const token = base64urlEncoding(header) + '.' + base64urlEncoding(payload) + '.' + base64urlEncoding(signature)
- creation: step 2, result (usable in html/http): eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJsb2dnZWRJbkFzIjoiYWRtaW4iLCJpYXQiOjE0MjI3Nzk2Mzh9.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI
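A minimal sketch of the same construction in Node.js, using the built-in crypto module (assumes Node >= 15 for the 'base64url' encoding; the secret and claims are placeholders):

    const crypto = require('crypto');

    const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');

    const header  = { alg: 'HS256', typ: 'JWT' };
    const payload = { loggedInAs: 'admin', iat: 1422779638 };
    const secret  = 'some-shared-secret';

    // Signing input = base64url(header) . base64url(payload)
    const signingInput = b64url(header) + '.' + b64url(payload);

    // Signature = HMAC-SHA256(signingInput, secret), base64url encoded
    const signature = crypto.createHmac('sha256', secret)
                            .update(signingInput)
                            .digest('base64url');

    const token = signingInput + '.' + signature;
    console.log(token);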
JWT security
Consider:
JWT libraries
JWT in JavaScript
- NPM jsonwebtoken package
- developed against draft-ietf-oauth-json-web-token-08
- makes use of node-jws
- makes use of Node.js crypto module
which provides wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions
- Token creation: jwt.sign(payload, secretOrPrivateKey, [options, callback]) (a short usage sketch follows this block)
- Returns:
- Asynchronous: if a callback is supplied, the callback is called with the err or the JWT
- Synchronous: returns the JsonWebToken as string
- payload could be an object literal, buffer or string representing valid JSON
- secretOrPrivateKey is a string, buffer, or object containing either the secret for HMAC algorithms or the PEM encoded private key for RSA and ECDSA.
In case of a private key with passphrase an object { key, passphrase } can be used
- Token verification: jwt.verify(token, secretOrPublicKey, [options, callback])
- Returns:
- Asynchronous: if a callback is supplied, function acts asynchronously. The callback is called with the decoded payload if the signature is valid and
optional expiration, audience, or issuer are valid. If not, it will be called with the error.
- Synchronous: if a callback is not supplied, function acts synchronously. Returns the payload decoded if the signature is valid and optional expiration,
audience, or issuer are valid. If not, it will throw the error.
- token is the JsonWebToken string
- secretOrPublicKey is a string or buffer containing either the secret for HMAC algorithms, or the PEM encoded public key for RSA and ECDSA.
If jwt.verify is called asynchronously, secretOrPublicKey can be a function that fetches the secret or public key.
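A short usage sketch of the package's sign/verify calls described above (secret and claims are placeholders):

    const jwt = require('jsonwebtoken');

    const secret = 'some-shared-secret';

    // Create a token: HS256-signed JWT with an expiry claim
    const token = jwt.sign({ loggedInAs: 'admin' }, secret, { algorithm: 'HS256', expiresIn: '5m' });

    // Verify synchronously: returns the decoded payload or throws if invalid/expired
    try {
      const payload = jwt.verify(token, secret, { algorithms: ['HS256'] });
      console.log(payload.loggedInAs);
    } catch (err) {
      console.error('token rejected:', err.message);
    }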
- Using JWT
- jwt.io - by Auth0
- Auth0
- JWT security best practice from Logrocket
JOSE
- JOSE IETF data tracker
- JOSE JavaScript Object Signing and Encryption
- JOSE supports JWE (JSON Web Encryption) and JWS (JSON Web Signature) operations, e.g. jwe = jose.encrypt(claims, pub_jwk)
- A JSON Web Key (JWK) is a JSON data structure that represents a cryptographic key. Using a JWK rather than one or more raw parameters allows a generalized key to be used as input across different algorithms that may expect different numbers of parameters (a JWK example follows the header parameter list below).
All JWE and JWS operations expect a JWK rather than inflexible function parameters.
- JOSE Header Parameter Names
- "alg" (Algorithm)
- "jku" (JWK Set URL)
- "jwk" (JSON Web Key)
- "kid" (Key ID)
- "x5u" (X.509 URL)
- "x5c" (X.509 Certificate Chain)
- "x5t" (X.509 Certificate SHA-1 Thumbprint)
- "x5t#S256" (X.509 Certificate SHA-256 Thumbprint)
- "typ" (Type)
- "cty" (Content Type)
- "crit" (Critical)
AWS, Google, Facebook, ...
Federations and alliances
Federations and alliances - general
- US NIEF - DOJ - National Identity Exchange Federation
- US NIEF - attribute registry
- US NIEF - technical specs - Cryptographic Trust Model,
Web Browser User-to-System Profile, Web Services System-to-System Profile, REST Services Profile, Attribute Registry,
Attribute Profile, Attribute Encoding Rules
- NIEF supports different types of protocols, including
- SAML2 and SOAP, such endpoints are not REST and use the SAML Metadata format, and
- OIDC and OAuth, which are REST and use JSON formats.
- Therefore, the NIEF Cryptographic Trust Fabric is comprised of two Trust Fabric documents: one for SAML and one for REST
- US DOJ GFIPM initiative - Global Federated Identity and Privilege Management
- US GFIPM.net - Global Federated Identity and Privilege Management
- US NIEF - DOJ - National Identity Exchange Federation
- FIDO - Fast IDentity - let the user select - Google...
- an open industry association launched in 2013 to develop and promote authentication standards that help reduce the world's over-reliance on passwords
- addresses the lack of interoperability among strong authentication devices
- specifications: Universal Authentication Framework (UAF), Universal 2nd Factor (U2F),
FIDO 2.0 (contributed to the W3C), Client to Authenticator Protocol (CTAP)
- FIDO 2 project with W3C - WebAuthn
- Shibboleth.net - consortium - Internet2
- Liberty Alliance (2001) - Federated Identity Management - circles of trust - SAML
- Internet2 Middleware
- ID-FF - Identity Federation Framework (identity/account linkage, simplified sign-on, session management)
- ID-WSF - Web Services Framework
- ID - SIS - Services Interface Specification (personal identity profile service, alert, calendar services, wallet, contact, geo-location, presence, ...)
Federations and alliances - academic and education
Solutions and tools
ForgeRock
ForgeRock basics
ForgeRock enterprise - IDM
The ForgeRock Identity Platform includes:
- Access Management (based on the OpenAM open source project),
- Identity Management (based on the OpenIDM open source project),
- Directory Services (based on the OpenDJ open source project/LDAP), and
- Identity Gateway (based on the OpenIG open source project, gateway for web traffic and APIs).
ForgeRock Access Management provides access management. ForgeRock also offers a Profile and Privacy Management Dashboard for compliance with the EU General Data Protection Regulation (GDPR) and provides support for the User-Managed Access (UMA) 2.0 standard.
ForgeRock related
ForgeRock community - OpenAM
Shibboleth
Other
Strong authentication
- OpenAuthentication - OATH - a.o. Thales/Gemalto - rooted in SafeNet
- authentication: HOTP, HMAC-based OTP (RFC 4226); TOTP, time-based OTP (RFC 6238); OCRA, challenge/response (RFC 6287) - see the sketch below
- provisioning: PSKC, portable symmetric key container (RFC 6030); DSKPP, dynamic symmetric key provisioning protocol (RFC 6063)
- other: OATH token identifier specification (specification to make authentication credentials globally uniquely identifiable, classes A, B, C, work in progress), sharing transaction fraud data (RFC 5941)
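A minimal sketch of HOTP/TOTP as defined in RFC 4226 / RFC 6238, in Node.js (the shared key, digit count and 30-second time step are simplified placeholders):

    const crypto = require('crypto');

    // HOTP (RFC 4226): HMAC-SHA-1 over an 8-byte big-endian counter, then dynamic truncation
    function hotp(key, counter, digits = 6) {
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));
      const mac = crypto.createHmac('sha1', key).update(msg).digest();
      const offset = mac[mac.length - 1] & 0x0f;
      const binCode = ((mac[offset] & 0x7f) << 24) |
                      (mac[offset + 1] << 16) |
                      (mac[offset + 2] << 8) |
                      mac[offset + 3];
      return String(binCode % 10 ** digits).padStart(digits, '0');
    }

    // TOTP (RFC 6238): HOTP with the counter derived from Unix time and a 30-second step
    function totp(key, step = 30) {
      return hotp(key, Math.floor(Date.now() / 1000 / step));
    }

    console.log(totp(Buffer.from('12345678901234567890'))); // prints the current 6-digit code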
- SafeNet - eToken used by eg SWIFT
- Trust1Team - connector
Directories - concepts and samples
Vendors - identity focus
Vendors - identity focus - Europe
Vendors - identity focus - UK, US Australia and Canada
- CA - M-Tech - founded in 1992, based in Calgary, Canada
- UK - Avoco - including cloud identity
- UK - Onfido - including document verification
- UK - Yoti
- WSO2 - IAM of RHUL, Open Source, founded in 2005, offices in Australia, Brazil, Sri Lanka, UK and US
- US Auth0 - solutions for OAuth, OIDC, JWT, SAML, WS-Fed
- Cloud-based authentication API
- Auth0 sits between your app and the identity provider that authenticates the users (eg Google or Facebook).
Auth0 keeps your app isolated from changes of the provider's implementation
- US - Okta
- US - ForgeRock
- US - Centrify
- US e-DMZ security
- US - Sailpoint - roles, identity, compliance, ...(ex-Waveset people)
- US - Bridgestream (Oracle) - managing business relationships - founded in 2000 - partners with Tivoli, BEA, Oracle, Netegrity, ...
- US/IL - BMC - Control-SA suite
- US - BusinessLayers - provisioning - acquired by Netegrity (which was acquired by CA)
- US - Aveksa (Massachusetts/India) - ex-Netegrity roots
- US - Courion - provisioning
- US - IDMLogic NYC/Tel-Aviv - CA RCM/SAP impl
- US - VAAU Consulting - RBACX on top of CA/Netegrity
- US - Entrust (GetAccess/SelectAccess)
- US - HP - Select Identity (including Baltimore's SelectAccess and Trulogica)
- US - Microsoft - MIIS - Authorization Manager - CardSpace - ...
- US - IBM - TIM/TAM
- US - Computer Associates - e-TRUST
- US - Netegrity - now CA - including BusinessLayers (provisioning)
- US - Oracle - Thor's Xellerate Identity Manager
- US - Oblix (now Oracle)
- US - Oblix - old url - including Confluant (Web Services)
- US - RSA - ClearTrust (ex-Securant)
- US - Sun - SunIdM (ex-Waveset Lighthouse)
- US - Sun - Neogent - "bi-level RBAC" (bought by Sun)
- US - Thor Technologies
- Waveset - acquired by Sun
- Waveset - ROI calculator
Vendors - identity focus - others
Vendors - directory focus
- AU - ViewDS
- DE - Siemens - DirX and related tools
- FR - Calendra - directory and identity
- NO - Maxware - Norwegian directory and mail products
- NO - MetaMerge - Norwegian directory product now part of IBM
- US - RadiantLogic
- X - Isode.com
- Critical Path - X.500-LDAP-MetaDir - used by e.g. US government's FBCA, beTRUSTed, shipped with Entrust PKI, used by Nortel Networks, ...
- Critical Path's directory is based on ICL's i500 engine:
- PeerLogic acquired this engine from ICL, and was acquired by Critical Path, bTd uses PeerLogic's version 8 product (uid/psw authentication), version 9 supports SASL but this is not yet supported by the current CA's (2000-12)
- ISOCOR also acquired the engine, and was also acquired by Critical Path
- "InJoin Directory Server" will be the integrated product (merging the modifications both from PeerLogic and ISOCOR)
- Used by Elisa, the Finnish digital-ID card, together with Sonera/iD2
- "InJoin MetaDirectory" will span across all directories and provide a unified access mechanism. Consolidation seems to be an unrealistic approach for all the different information sources, so MetaDirectory will rather "join" them
- US - Novell - NDS - eChain ...
- BE - OPNS
- US - Netscape - Directory Server product - merged into iPlanet and later Sun
- US - Microsoft - MMS/MIIS - InfoCard/CardSpace (MMS = ex-Zoomit - used e.g. by EC because it was free)
Examples
Interoperability
EU
- EU - EA - European Accreditation
- Stork
- Stork2
- REFEDS/Terena Stork-SAML Interop WG (SSIWG)
- ABC4TRUST - FP7
- PrimeLife - FP7
- EU FIDIS.net - Future of Identity in Information Society ***
- EU - FIDIS-project - shaping the future of IdM - ref also fidis.net
- TAS3
- ZXID - TAS3 reference implementation
- OSOR.eu - eid community
- CEN TC 224 Workgroup 15 - ECC - European Citizen Card standardization (vendor initiative)
- EU IDABC programme including eIdentity with interop (EC DIGIT)
- EC modinis to check ***
- EC DG Infosoc - GUIDE-project - secure open systems
- EC DG Infosoc - eEpoch - demonstrate interoperable secure smartcard for the citizen
- EC DG Infosoc - PRIME-project - Privacy and Identity Management for Europe
- EC DG Infosoc - eEurope SmartCards
- EC DG Enterprise/CEN/ISSS - many working groups such as eInvoice, eBiz Interop, CyberIdentity, eAuthentication
- s-Travel - EC-sponsored bio-idm project with Alitalia, Swiss Office for Eduction, IATA, Gemplus, Bio-wise, ...
- EU - EuropePKI - European Open Source PKI based on requirements from EESSI, ETSI, ECMA
- EU Biometrics Portal
Scandinavia
US
Other
Access control - concepts - RACF - RBAC - Attrib Certs - ABAC - ZBAC
ABAC - Attribute Based Access Control and related
ABAC roots
ABAC implementations
AC - Attribute Certificates
Refer also to SAML and XACML.
RBAC - Role Based Access Control (NIST/ANSI 359-2004) and related
PSL - Policy Specification Languages