Colm O hEigeartaigh

OpenId Connect in Apache CXF Fediz 1.3.0

Colm O hEigeartaigh - Fri, 08/12/2016 - 18:02
Previous blog posts have described support for OpenId Connect protocol bridging in the Apache CXF Fediz IdP. This means that the Apache CXF Fediz IdP can bridge between the WS-Federation protocol and third-party OpenId Connect IdPs when the user must be authenticated in a different security domain. However, the 1.3.0 release of Apache CXF Fediz also sees the introduction of a new OpenId Connect IdP, based on Apache CXF, which is independent of the existing (WS-Federation and SAML-SSO based) IdP. This post will introduce the new IdP via an example.

The example code is available on github:
  • cxf-fediz-oidc: This project shows how to use interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service using OpenId Connect.
1) The secured service

The first module available in the example contains a trivial JAX-RS Service based on Apache CXF which "doubles" a number that is passed as a path parameter via HTTP GET. The service defines via a @RolesAllowed annotation that only users in the roles "User", "Admin" or "Manager" can access the service.
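
To make this concrete, here is a minimal sketch of what such a service might look like (the class, package and path names below are illustrative, not taken from the sample project):

  import javax.annotation.security.RolesAllowed;
  import javax.ws.rs.GET;
  import javax.ws.rs.Path;
  import javax.ws.rs.PathParam;

  @Path("/doubleit")
  public class DoubleItService {

      // Only authenticated users in one of these roles may invoke this method
      @GET
      @Path("/{number}")
      @RolesAllowed({ "User", "Admin", "Manager" })
      public int doubleIt(@PathParam("number") int number) {
          return number * 2;
      }
  }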

The service is configured via spring. The endpoint configuration references the service bean above, as well as the CXF SecureAnnotationsInterceptor which enforces the @RolesAllowed annotation on the service bean. In addition, the service is configured with the CXF OidcRpAuthenticationFilter, which ensures that only users authenticated via OpenId Connect can access the service. The filter is configured with a URL to redirect the user to. It also explicitly requires a role claim to enforce authorization.

The OidcRpAuthenticationFilter redirects the browser to a separate authentication endpoint, defined in the same spring file for convenience. This endpoint has a filter called OidcClientCodeRequestFilter, which initiates the OpenId Connect authorization code flow to a remote OpenId Connect IdP (in this case, the new Fediz IdP). It is also responsible for obtaining an IdToken after successfully receiving an authorization code from the IdP.

2) The Fediz OpenId Connect IdP

The second module contains an integration test which deploys a number of wars into an Apache Tomcat container:
  • The "double-it" service as described above
  • The Apache CXF Fediz IdP which authenticates users via WS-Federation
  • The Apache CXF Fediz STS which performs the underlying authentication of users
  • The Apache CXF Fediz OpenId Connect IdP
The way the Apache CXF Fediz OpenId Connect IdP works (at least for 1.3.x) is that user authentication is actually delegated to the WS-Federation based IdP via a Fediz plugin. So when the user is redirected to the Fediz OpenId Connect IdP, (s)he is in turn redirected to the WS-Federation based IdP for authentication, and then redirected back to the OpenId Connect IdP with a WS-Federation Response. The OpenId Connect IdP parses this (SAML) Response and converts it into a JWT IdToken. Future releases will enable authentication directly at the OpenId Connect service.

After deploying all of the services, the test code makes a series of REST calls to create a client in the OpenId Connect IdP, so that we can run the test without having to manually enter information in the client UI of the Fediz IdP. To run the test, simply remove the @org.junit.Ignore annotation on the "testInBrowser" method. The test code will create the clients in Fediz and then print out a URL in the console before sleeping. Copy the URL and paste it into a browser. Authenticate using the credentials "alice/ecila".
Categories: Colm O hEigeartaigh

Introducing Apache Syncope 2.0.0

Colm O hEigeartaigh - Thu, 08/11/2016 - 17:17
Apache Syncope is a powerful and flexible open-source Identity Management system that has been developed at the Apache Software Foundation for several years now. The Apache Syncope team has been busy developing a ton of new features for the forthcoming new major release (2.0.0), which will really help to cement Apache Syncope's position as a first class Identity Management solution. If you wish to experiment with these new features, a 2.0.0-M4 release is available. In this post we will briefly cover some of the new features and changes. For a more comprehensive overview please refer to the reference guide.

1) Domains

Perhaps the first new concept you will be introduced to in Syncope 2.0.0 after starting the (Admin) console is that of a domain. When logging in, as well as specifying a username, password, and language, you can also specify a configured domain. Domains are a new concept in Syncope 2.0.0 that facilitate multi-tenancy. Domains allow the physical separation of all data stored in Syncope (by storing the data in different database instances). Therefore, a single Syncope deployment can manage users, groups, etc. that belong to different domains.
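
As a rough illustration, a client built with the Syncope client library can target a specific domain when authenticating. The sketch below assumes a domain called "Two" has been configured; the address and credentials are placeholders for your own deployment:

  import org.apache.syncope.client.lib.SyncopeClient;
  import org.apache.syncope.client.lib.SyncopeClientFactoryBean;
  import org.apache.syncope.common.rest.api.service.UserService;

  public class DomainLogin {
      public static void main(String[] args) {
          // Authenticate against the "Two" domain instead of the default "Master" domain
          SyncopeClient client = new SyncopeClientFactoryBean()
              .setAddress("http://localhost:9080/syncope/rest")
              .setDomain("Two")
              .create("admin", "password");

          // All subsequent calls made via this client are scoped to the "Two" domain
          UserService userService = client.getService(UserService.class);
      }
  }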

2) New Console layout

After logging in, it becomes quickly apparent that the Syncope Console is quite different compared to the 1.2.x console. It has been completely rewritten and looks great. Connectors and Resources are now managed under "Topology" in the menu on the left-hand side. Users and Groups (formerly Roles) are managed under "Realms" in the menu. The Schema types are configured under "Configuration". A video overview of the new Console can be seen here.


3) AnyType Objects

With Syncope 1.2.x, it was possible to define plain/derived/virtual Schema Types for users, roles and memberships, but no other entities. In Syncope 2.0.0, the Schema Types are decoupled from the entity that uses them. Instead, a new concept called an AnyType class is available which is a collection of schema types. In turn, an AnyType object can be created which consists of any number of AnyType classes. AnyType objects represent the type of things that Apache Syncope can model. Besides the predefined Users and Groups, it can also represent physical things such as printers, workstations, etc. With this new concept, Apache Syncope 2.0.0 can model many different types of identities.

4) Realms

Another new concept in Apache Syncope 2.0.0 is that of a realm. A realm encapsulates a number of Users, Groups and Any Objects. It is possible to specify account and password policies per-realm (see here for a blog entry on custom policies in Syncope 2.0.0). Each realm has a parent realm (apart from the pre-defined root realm identified as "/"). The realm tree is hierarchical, meaning that Users, Groups etc. defined in a sub-realm are also defined in the parent realm. Combined with Roles (see below), realms facilitate some powerful access management scenarios.

5) Groups/Roles

In Syncope 2.0.0, what were referred to as "roles" in Syncope 1.2.x are now called "groups". In addition, "roles" in Syncope 2.0.0 are a new concept which associate a number of entitlements with a number of realms. Users assigned to a role can exercise the defined entitlements on any of the objects in the given realms (and any sub-realms).

Syncope 2.0.0 also has the powerful concept of dynamic membership, which means that users can be assigned to groups or roles via a conditional expression (e.g. if an attribute matches a given value).

6) Apache Camel Provisioning

An exciting new feature of Apache Syncope 2.0.0 is the new Apache Camel provisioning engine, which is available under "Extensions/Camel Routes" in the Console. Apache Syncope comes pre-loaded with some Camel routes that are executed as part of the provisioning implementation for Users, Groups and Any Objects. The real power of this new engine lies in the ability to modify the routes to perform some custom provisioning rules. For example, on creating a new user, you may wish to send an email to an administrator. Or if a user is reactivated, you may wish to reactivate the user's home page on a web server. All these things and more are possible using the myriad components available for use in Apache Camel routes. I'll explore this feature some more in future blog posts.

7) End-User UI

As well as the Admin console (available via /syncope-console), Apache Syncope 2.0.0 also ships with an Enduser console (available via /syncope-enduser). This allows users to edit only details pertaining to themselves, such as their attributes, their password, etc. See the following blog entry for more information on the new End-User UI.


8) Command Line Interface (CLI) client

Another new feature of Apache Syncope 2.0.0 is the CLI client. It is available as a separate download. Once downloaded, extract it and run (on Linux): ./syncopeadm.sh install --setup. Answer the questions about where Syncope is deployed and the credentials required to access it. After installation, you can run queries such as: ./syncopeadm.sh user --list.

9) Apache CXF-based testcases

I updated the testcases that I wrote before to use Apache Syncope 2.0.0 to authenticate and authorize web services calls using Apache CXF. The new test-cases are available here.
Categories: Colm O hEigeartaigh

Installing the Apache Ranger Key Management Server (KMS)

Colm O hEigeartaigh - Mon, 08/08/2016 - 13:40
The previous couple of blog entries have looked at how to install the Apache Ranger Admin Service as well as the Usersync Service. In this post we will look at how to install the Apache Ranger Key Management Server (KMS). KMS is a component of Apache Hadoop to manage cryptographic keys. Apache Ranger ships with its own KMS implementation, which allows you to store the (encrypted) keys in a database. The Apache Ranger KMS is also secured via policies defined in the Apache Ranger Admin Service.

1) Build the source code

The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting KMS archive to a location where you wish to install it:
  • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
  • cd apache-ranger-incubating-0.6.0
  • mvn clean package assembly:assembly 
  • tar zxvf target/ranger-0.6.0-kms.tar.gz
  • mv ranger-0.6.0-kms ${rangerkms.home}
2) Install the Apache Ranger KMS Service

As the Apache Ranger KMS Service stores the cryptographic keys in a database, we will need to set up and configure a database. We will also configure the KMS Service to store audit logs in the database. Follow the steps given in section 2 of the tutorial on the Apache Ranger Admin Service to set up MySQL. We will also need to create a new user 'rangerkms':
  • CREATE USER 'rangerkms'@'localhost' IDENTIFIED BY 'password';
  • FLUSH PRIVILEGES; 
You will need to install the Apache Ranger KMS Service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerkms.home}/setup.sh and add, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_91
Next edit ${rangerkms.home}/install.properties and make the following changes:
  • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar (see previous tutorial).
  • Set (db_root_user/db_root_password) to (admin/password)
  • Set (db_user/db_password) to (rangerkms/password)
  • Change KMS_MASTER_KEY_PASSWD to a secure password value.
  • Set POLICY_MGR_URL=http://localhost:6080
  • Set XAAUDIT.DB.IS_ENABLED=true
  • Set XAAUDIT.DB.FLAVOUR=MYSQL 
  • Set XAAUDIT.DB.HOSTNAME=localhost 
  • Set XAAUDIT.DB.DATABASE_NAME=ranger_audit 
  • Set XAAUDIT.DB.USER_NAME=rangerlogger
  • Set XAAUDIT.DB.PASSWORD=password
Now you can run the setup script via "sudo ./setup.sh".

3) Starting the Apache Ranger KMS service

After a successful installation, first start the Apache Ranger admin service with "sudo ranger-admin start". Then start the Apache Ranger KMS Service via "sudo ranger-kms start". Now open a browser and go to "http://localhost:6080/". Log on with "keyadmin/keyadmin". Note that these are different credentials to those used to log onto the Apache Ranger Admin UI in the previous tutorial. Click on the "+" button on the "KMS" tab to create a new KMS Service. Specify the following values:
  • Service Name: kmsdev
  • KMS URL: kms://http@localhost:9292/kms
  • Username: keyadmin
  • Password: keyadmin
Click on "Test Connection" to make sure that the KMS Service is up and running. If it is showing a connection failure, log out and log into the Admin UI using credentials "admin/admin". Go to the "Audit" section and click on "Plugins". You should see a successful message indicating that the KMS plugin can successfully download policies from the Admin Service:


After logging back in to the UI as "keyadmin" you can start to create keys. Click on the "Encryption/Key Manager" tab. Select the "kmsdev" service in the dropdown list and click on "Add New Key". You can create, delete and rollover keys in the UI.
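
Keys can also be managed programmatically. The following is a minimal sketch using the Hadoop KeyProvider API against the KMS URL configured above; it assumes the Hadoop client libraries are on the classpath and that the caller is permitted by the Ranger KMS policies:

  import java.net.URI;
  import java.util.List;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.crypto.key.KeyProvider;
  import org.apache.hadoop.crypto.key.KeyProviderFactory;

  public class RangerKmsClient {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();

          // Connect to the Ranger KMS using the same URI as the "KMS URL" above
          KeyProvider provider =
              KeyProviderFactory.get(new URI("kms://http@localhost:9292/kms"), conf);

          // Create a new key with the default cipher/key length and list the stored keys
          provider.createKey("newkey", new KeyProvider.Options(conf));
          provider.flush();

          List<String> keys = provider.getKeys();
          System.out.println("Keys in the KMS: " + keys);
      }
  }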



Categories: Colm O hEigeartaigh

Syncing users and groups from LDAP into Apache Ranger

Colm O hEigeartaigh - Fri, 07/22/2016 - 16:37
The previous post covered how to install the Apache Ranger Admin service. The Apache Ranger Admin UI supports creating authorization policies for various Big Data components, by giving users and/or groups permissions on resources. This means that we need to import users/groups into the Apache Ranger Admin service from some backend service in order to create meaningful authorization policies. Apache Ranger supports syncing users into the Admin service from both Unix and LDAP. In this post, we'll look at syncing users and groups from an OpenDS LDAP backend.

1) The OpenDS backend

For the purposes of this tutorial, we will use OpenDS as the LDAP server. It contains a domain called "dc=example,dc=com", and 5 users (alice/bob/dave/oscar/victor) and 2 groups (employee/manager). Victor, Oscar and Bob are employees, while Alice and Dave are managers. Here is a screenshot using Apache Directory Studio.



2) Build the Apache Ranger usersync module

Follow the steps in the previous tutorial to build Apache Ranger and to setup and start the Apache Ranger Admin service. Once this is done, go back to the Apache Ranger distribution that you have built and copy the usersync module:
  • tar zxvf target/ranger-0.6.0-usersync.tar.gz
  • mv ranger-0.6.0-usersync ${usersync.home}
3) Configure and install the Apache Ranger usersync service

You will need to install the Apache Ranger Usersync service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${usersync.home}/setup.sh and add, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_91
Next edit ${usersync.home}/install.properties and make the following changes:
  • POLICY_MGR_URL = http://localhost:6080
  • SYNC_SOURCE = ldap
  • SYNC_INTERVAL = 1 (just for testing purposes)
  • SYNC_LDAP_URL = ldap://localhost:2389
  • SYNC_LDAP_BIND_DN = cn=Directory Manager,dc=example,dc=com
  • SYNC_LDAP_BIND_PASSWORD = test
  • SYNC_LDAP_SEARCH_BASE = dc=example,dc=com
  • SYNC_LDAP_USER_SEARCH_BASE = ou=users,dc=example,dc=com
  • SYNC_GROUP_SEARCH_BASE=ou=groups,dc=example,dc=com
Now you can run the setup script via "sudo ./setup.sh". 

4) Start the Usersync service

The Apache Ranger Usersync service can be started via "sudo ./ranger-usersync-services.sh start". After 1 minute (see SYNC_INTERVAL above), it should successfully copy the users/groups from the OpenDS backend into the Apache Ranger Admin. Open a browser and go to "http://localhost:6080", and click on "Settings" and then "Users/Groups". You should see the users and groups synced successfully from OpenDS.

Categories: Colm O hEigeartaigh

Installing the Apache Ranger Admin UI

Colm O hEigeartaigh - Tue, 07/19/2016 - 14:22
Apache Ranger 0.6 has been released, featuring new support for securing Apache Atlas and NiFi, as well as a huge amount of bug fixes. It's easiest to get started with Apache Ranger by downloading a big data sandbox with Ranger pre-installed. However, the most flexible way is to grab the Apache Ranger source and to build and deploy the artifacts yourself. In this tutorial, we will look into building Apache Ranger from source, setting up a database to store policies/users/groups/etc. as well as Ranger audit information, and deploying the Apache Ranger Admin UI.

1) Build the source code

The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting admin archive to a location where you wish to install the UI:
  • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
  • cd apache-ranger-incubating-0.6.0
  • mvn clean package assembly:assembly 
  • tar zxvf target/ranger-0.6.0-admin.tar.gz
  • mv ranger-0.6.0-admin ${rangerhome}
2) Install MySQL

The Apache Ranger Admin UI requires a database to keep track of users/groups as well as policies for various big data projects that you are securing via Ranger. In addition, we will use the database for auditing as well. For the purposes of this tutorial, we will use MySQL. Install MySQL in $SQL_HOME and start MySQL via:
  • sudo $SQL_HOME/bin/mysqld_safe --user=mysql
Now you need to log on as the root user and create three users for Ranger. We need a root user with admin privileges (let's call this user "admin"), a user for the Ranger Schema (we'll call this user "ranger"), and finally a user to store the Ranger audit logs in the DB as well ("rangerlogger"):
  • CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';
  • GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
  • CREATE USER 'ranger'@'localhost' IDENTIFIED BY 'password';
  • CREATE USER 'rangerlogger'@'localhost' IDENTIFIED BY 'password'; 
  • FLUSH PRIVILEGES;
Finally,  download the JDBC driver jar for MySQL and put it in ${rangerhome}.

3) Install the Apache Ranger Admin UI

You will need to install the Apache Ranger Admin UI using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerhome}/setup.sh and add, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_91
Next edit ${rangerhome}/install.properties and make the following changes:
  • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar that you downloaded above.
  • Set (db_root_user/db_root_password) to (admin/password)
  • Set (db_user/db_password) to (ranger/password)
  • Change "audit_store" from "solr" to "db"
  • Set "audit_db_name" to "ranger_audit"
  • Set (audit_db_user/audit_db_password) to (rangerlogger/password).
Now you can run the setup script via "sudo ./setup.sh".

4) Starting the Apache Ranger admin service

After a successful installation, we can start the Apache Ranger admin service with "sudo ${rangerhome}/ews/start-ranger-admin.sh". Now open a browser and go to "http://localhost:6080/". Log on with "admin/admin" and you should be able to create authorization policies for a desired big data component.


Categories: Colm O hEigeartaigh

A new REST interface for the Apache CXF Security Token Service - part II

Colm O hEigeartaigh - Fri, 06/17/2016 - 12:38
The previous blog entry introduced the new REST interface of the Apache CXF Security Token Service. It covered issuing, renewing and validating tokens via HTTP GET and POST with a focus on SAML tokens. In this post, we'll cover the new support in the STS to issue and validate JWT tokens via the REST interface, as well as how we can transform SAML tokens into JWT tokens, and vice versa. For information on how to configure the STS to support JWT tokens (via WS-Trust), please look at a previous blog entry.

1) Issuing JWT Tokens

We can retrieve a JWT Token from the REST interface of the STS simply by making an HTTP GET request to "/token/jwt". Specify the "appliesTo" query parameter to insert the service address as the audience of the generated token. You can also include various claims in the token via the "claim" query parameter.

If an "application/xml" accept type is specified, or if multiple accept types are specified (as in a browser), the JWT token is wrapped in a "TokenWrapper" XML Element by the STS:

If "application/json" is specified, the JWT token is wrapped in a simple JSON Object as follows:

If "text/plain" is specified, the raw JWT token is returned. We can also get a JWT token using HTTP POST by constructing a WS-Trust RequestSecurityToken XML fragment, specifying "urn:ietf:params:oauth:token-type:jwt" as the WS-Trust TokenType.

2) Validating JWT Tokens and token transformation

We can also validate JWT Tokens by POSTing a WS-Trust RequestSecurityToken to the STS. The raw (String) JWT Token must be wrapped in a "TokenWrapper" XML fragment, which in turn is specified as the value of "ValidateTarget". Also, an "action" query parameter must be added with value "validate".

A powerful feature of the STS is the ability to transform tokens from one type to another. This is done by making a validate request for a given token, and specifying a value for "TokenType" that corresponds to the desired token type. In this way we can validate a SAML token and issue a JWT Token, and vice versa.
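
Here is a rough sketch of what a validate/transform call might look like with the CXF WebClient, in this case validating a raw JWT token and asking for a SAML 2.0 token back. The STS address is a placeholder, and the exact layout of the wrapped token (in particular any namespace on the "TokenWrapper" element) should be checked against the tests referenced below:

  import javax.ws.rs.core.Response;
  import org.apache.cxf.jaxrs.client.WebClient;

  public class TokenTransformClient {

      public static Response transformJwtToSaml(String jwtToken) {
          WebClient client = WebClient.create("https://localhost:8081/SecurityTokenService/token");
          client.query("action", "validate").type("application/xml").accept("application/xml");

          String wstNs = "http://docs.oasis-open.org/ws-sx/ws-trust/200512";
          // Requesting a SAML 2.0 TokenType triggers token transformation rather than a plain validate
          String rst =
                "<RequestSecurityToken xmlns=\"" + wstNs + "\">"
              + "<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>"
              + "<ValidateTarget><TokenWrapper>" + jwtToken + "</TokenWrapper></ValidateTarget>"
              + "</RequestSecurityToken>";

          return client.post(rst);
      }
  }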

To see some examples of how to do token validation as well as token transformation, please take a look at the following tests, which use the CXF WebClient to invoke on the REST interface of the STS.
Categories: Colm O hEigeartaigh

A new REST interface for the Apache CXF Security Token Service - part I

Colm O hEigeartaigh - Thu, 06/16/2016 - 17:46
Apache CXF ships a Security Token Service (STS) that can issue/validate/renew/cancel tokens via the (SOAP based) WS-Trust interface. The principal focus of the STS is to deal with SAML tokens, although other token types are supported as well. JAX-RS clients can easily obtain tokens using helper classes in CXF (see Andrei Shakirin's blog entry for more information on this).

However, wouldn't it be cool if a JAX-RS client could dispense with SOAP altogether and obtain tokens via a REST interface? Starting from the 3.1.6 release, the Apache CXF STS now has a powerful and flexible REST API which I'll describe in this post. The next post will cover how the STS can now issue and validate JWT tokens, and how it can transform JWT tokens into SAML tokens and vice versa.

One caveat: this REST interface is not a standard, unlike the WS-Trust interface, and so is specific to CXF.

1) Configuring the JAX-RS endpoint of the CXF STS

Firstly, let's look at how to set up the JAX-RS endpoint needed to support the REST interface of the STS. The STS is configured in exactly the same way as for the standard WS-Trust based interface, the only difference being that we are setting up a JAX-RS endpoint instead of a JAX-WS endpoint. Example configuration can be seen here in a system test.


2) Issuing Tokens

The new JAX-RS interface supports a number of different methods to obtain tokens. In this post, we will just focus on the methods used to obtain XML based tokens (such as SAML).

2.1) Issue tokens via HTTP GET

The easiest way to get a token is by doing an HTTP GET on "/token/{tokenType}", for example by simply typing the URL into a web browser.

 The supported tokenType values are:
  • saml
  • saml2.0
  • saml1.1
  • jwt
  • sct
A number of optional query parameters are supported:
  • keyType - The standard WS-Trust based URIs to indicate whether a Bearer, HolderOfKey, etc. token is required. The default is to issue a Bearer token.
  • claim - A list of requested claims to include in the token. See below for the list of acceptable values.
  • appliesTo - The AppliesTo value to use
  • wstrustResponse - A boolean parameter to indicate whether to return a WS-Trust Response or just the desired token. The default is false. This parameter only applies to the XML case.
Note that for the "Holder of Key" keytype, the STS will try to get the client key via the TLS client certificate, if it is available. The (default) supported claim values are:
  • emailaddress
  • role
  • surname
  • givenname
  • name
  • upn
  • nameidentifier
The format of the returned token depends on the HTTP application type:
  • application/xml - Returns a token in XML format
  • application/json - JSON format for JWT tokens only.
  • text/plain - The "plain" token is returned. For JWT tokens it is just the raw token. For XML-based tokens, the token is BASE-64 encoded and returned.
The default is to return XML if multiple types are specified (e.g. in a browser).
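
By way of illustration, a CXF WebClient can drive this GET request as well; the STS address below is a placeholder and TLS/truststore setup is omitted:

  import org.apache.cxf.jaxrs.client.WebClient;

  public class SamlRestClient {
      public static void main(String[] args) {
          WebClient client = WebClient.create("https://localhost:8081/SecurityTokenService/token");

          // Request a SAML 2.0 token in XML format, with an audience restriction and two claims
          client.path("saml2.0")
                .query("appliesTo", "https://localhost:8081/doubleit/services")
                .query("claim", "emailaddress")
                .query("claim", "role")
                .accept("application/xml");

          String assertion = client.get(String.class);
          System.out.println(assertion);
      }
  }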

2.2) Issue tokens via HTTP POST

While the GET method above is very convenient to use, it may be that you want to pass other parameters to the STS in order to issue a token. In this case, you can construct a WS-Trust RequestSecurityToken XML fragment and POST it to the STS. The response will be the standard WS-Trust Response, rather than the raw token that you have the option of receiving via the GET method.

3) Validating/Renewing/Cancelling tokens

It is only possible to renew, validate or cancel a token using the POST method detailed in section 2.2 above. You construct the RequestSecurityToken XML fragment in the same way as for Issue, except including the token in question under "ValidateTarget", "RenewTarget", etc. For the non-issue use cases, you must specify an "action" query parameter, which can be "issue" (default), "validate", "renew" or "cancel".
Categories: Colm O hEigeartaigh

SAML SSO support in the Fediz 1.3.0 IdP

Colm O hEigeartaigh - Fri, 05/27/2016 - 17:12
The Apache CXF Fediz Identity Provider (IdP) has had the ability to talk to third party IdPs using SAML SSO since the 1.2.0 release. However, one of the new features of the 1.3.0 release is the ability to configure the Fediz IdP to use the SAML SSO protocol directly, instead of WS-Federation. This means that Fediz can be used as a fully functioning SAML SSO IdP.

I added a new test-case to github to show how this works:
  • cxf-fediz-saml-sso: This project shows how to use the SAML SSO interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service. 
The test-case consists of two modules. The first is a web application which contains a simple JAX-RS service, which has a single GET method to return a doubled number. The method is secured with a @RolesAllowed annotation, meaning that only a user in roles "User", "Admin", or "Manager" can access the service. The service is configured with the SamlRedirectBindingFilter, which redirects unauthenticated users to a SAML SSO IdP for authentication (in this case Fediz). The service configuration also defines an AssertionConsumerService which validates the response from the IdP, sets up the session for the user, and populates the CXF security context with the roles from the SAML Assertion.

The second module deploys the Fediz IdP and STS in Apache Tomcat, as well as the "double-it" war above. It uses Htmlunit to make an invocation on the service and check that access is granted to the service. Alternatively, you can comment out the @Ignore annotation of the "testInBrowser" method, and copy the printed out URL into a browser to test the service directly (user credentials: "alice/ecila").

The IdP configuration is defined in entities-realma.xml. Note that the value "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser" is listed under "supportedProtocols" for the "idp-realmA" configuration. In addition, the default authentication URI is "saml/up". These are the only changes that are required to switch the IdP for "realm A" from WS-Federation to SAML SSO.
Categories: Colm O hEigeartaigh

An interop demo between Apache CXF Fediz and Facebook

Colm O hEigeartaigh - Fri, 05/06/2016 - 17:49
The previous post showed how to configure the Fediz IdP to interoperate with the Google OpenID Connect provider. In addition to supporting WS-Federation, SAML SSO and OpenID Connect, from the forthcoming 1.3.1 release the Fediz IdP will also support authenticating users via Facebook Connect. Facebook Connect is based on OAuth 2.0 and hence the implementation in Fediz shares quite a lot of common code with the OpenID Connect implementation. In this post, we will show how to configure the Fediz IdP to interoperate with Facebook.

1) Create a new app in the Facebook developer console

The first step is to create a new application in the Facebook developer console. Create a Facebook developer account, login and register as a Facebook developer. Click on "My Apps" and "Add a New App". Select the "website" platform and create a new app with the name "Fediz IdP". Enter your email, select a Category and click on "Create App ID". Enter "https://localhost:8443/fediz-idp/federation" for the Site URL. Now go to the dashboard for the application, copy the App ID and App Secret, and save them for later.

2) Testing the newly created Client

To test that everything is working correctly, open a web browser and navigate to the following URL, substituting in the App ID saved in step "1" above:
  • https://www.facebook.com/dialog/oauth?response_type=code&client_id=<app-id>&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=email
Login using your Facebook credentials and grant permission on the OAuth consent screen. The browser will then attempt a redirect to the given redirect_uri. Copy the URL and extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
  • curl --data "client_id=<app-id>&client_secret=<app-secret>&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" https://graph.facebook.com/v2.6/oauth/access_token
You should see a successful response containing the OAuth 2.0 Access Token.

3) Install and configure the Apache CXF Fediz IdP and sample Webapp

Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use at least Fediz 1.3.1-SNAPSHOT here for Facebook support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
  • https://localhost:8443/fedizhelloworld/secure/fedservlet 
3.1) Configure the Fediz IdP to communicate with the Facebook IdP

Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the Facebook Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
  • Change the port in "idpUrl" to "8443". 
In the 'trusted-idp-realmB' bean:
  • Change the "url" value to "https://www.facebook.com/dialog/oauth".
  • Change the "protocol" value to "facebook-connect".
  • Delete the "certificate" and "trustType" properties.
  • Add the following parameters Map, filling in a value for the client secret extracted above: <property name="parameters">
                <util:map>
                    <entry key="client.id" value="<app-id>" />
                    <entry key="client.secret" value="<app-secret>" />
                </util:map>
            </property>
It is possible to specify custom scopes via a "scope" parameter. The default value for this parameter is "email". The token endpoint can be configured via a "token.endpoint" parameter (defaults to "https://graph.facebook.com/v2.6/oauth/access_token"). Similarly, the API endpoint used to retrieve the user's email address can be configured via "api.endpoint", defaulting to "https://graph.facebook.com/v2.6". The Subject claim to be used to insert into a SAML Assertion is defined via "subject.claim", which defaults to "email".

3.2) Update the TLS configuration

By default, the Fediz IdP is configured with a truststore required to access the Fediz STS. However, this would mean that the requests to the Facebook IdP over TLS will not be trusted. To change this, edit 'webapps/fediz-idp/WEB-INF/classes/cxf-tls.xml', and change the HTTP conduit name from "*.http-conduit" to "https://localhost.*". This means that this configuration will only get picked up when communicating with the STS (deployed on "localhost"), and the default JDK truststore will get used when communicating with the Facebook IdP.

3.3) Update Fediz STS claim mapping

The STS will receive a SAML Token created by the IdP representing the authenticated user. The Subject name will be the email address of the user as configured above. Therefore, we need to add a claims mapping in the STS to map the principal received to some claims. Edit 'webapps/fediz-idp-sts/WEB-INF/userClaims.xml' and just copy the entry for "alice" in "userClaimsREALMA", changing "alice" to your email address used to log into Facebook.

Finally, restart Fediz to pick up the changes (you may need to remove the persistent storage first).

4) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Facebook authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it creates a SAML token representing the act of authentication via the trusted third party IdP,  and redirects to the web application.
Categories: Colm O hEigeartaigh

An interop demo between Apache CXF Fediz and Google OpenID Connect

Colm O hEigeartaigh - Fri, 04/29/2016 - 18:19
The previous post introduced some of the new features in Apache CXF Fediz 1.3.0. One of the new enhancements is that the Fediz IdP can now delegate WS-Federation (and SAML SSO) authentication requests to a third party IdP via OpenID Connect. An article published in February showed how it is possible to interoperate between Fediz and the Keycloak OpenID Connect provider. In this post, we will show how to configure the Fediz IdP to interoperate with the Google OpenID Connect provider.

1) Create a new client in the Google developer console

The first step is to create a new client in the Google developer console to represent the Apache CXF Fediz IdP. Login to the Google developer console and create a new project. Click on "Enable and Manage APIs" and then select "Credentials". Click on "Create Credentials" and select "OAuth client id". Configure the OAuth consent screen and select "web application" as the application type. Specify "https://localhost:8443/fediz-idp/federation" as the authorized redirect URI. After creating the client, a pop-up window will specify the newly created client id and secret. Save both values locally. The Google documentation is available here.

2) Testing the newly created Client

It's possible to see the Google OpenId Connect configuration by navigating to:
  • https://accounts.google.com/.well-known/openid-configuration
This tells us what the authorization and token endpoints are, both of which we will need to configure the Fediz IdP. To test that everything is working correctly, open a web browser and navigate to the following URL, substituting in the client id saved in step "1" above:
  • https://accounts.google.com/o/oauth2/v2/auth?response_type=code&client_id=<client-id>&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=openid
Login using your Google credentials and grant permission on the OAuth consent screen. The browser will then attempt a redirect to the given redirect_uri. Copy the URL and extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
  • curl -u <client-id>:<secret> --data "client_id=<client-id>&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" https://www.googleapis.com/oauth2/v4/token
You should see a successful response containing (amongst other things) the OAuth 2.0 Access Token and the OpenId Connect IdToken, containing the user identity.

3) Install and configure the Apache CXF Fediz IdP and sample Webapp

Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use Fediz 1.3.0 here for OpenId Connect support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
  • https://localhost:8443/fedizhelloworld/secure/fedservlet 
3.1) Configure the Fediz IdP to communicate with the Google IdP

Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the OpenID Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
  • Change the port in "idpUrl" to "8443". 
In the 'trusted-idp-realmB' bean:
  • Change the "url" value to "https://accounts.google.com/o/oauth2/v2/auth".
  • Change the "protocol" value to "openid-connect-1.0".
  • Delete the "certificate" and "trustType" properties.
  • Add the following parameters Map, filling in a value for the client secret extracted above: <property name="parameters">
                <util:map>
                    <entry key="client.id" value="<client-id>" />
                    <entry key="client.secret" value="<secret>" />
                    <entry key="token.endpoint" value="https://accounts.google.com/o/oauth2/token" />
                    <entry key="scope" value="openid profile email"/>
                    <entry key="jwks.uri" value="https://www.googleapis.com/oauth2/v3/certs" />
                    <entry key="subject.claim" value="email"/>
                </util:map>
            </property>
There are a few additional properties configured here compared to the previous tutorial. It is possible to specify custom scopes via the "scope" parameter. In this case we are requesting the "profile" and "email" scopes. The default value for this parameter is "openid". In addition, rather than validating the signed IdToken using a local certificate, here we are specifying a value for "jwks.uri", which is the location of the signing key. The "subject.claim" property specifies the claim name from which to obtain the Subject name, which is inserted into a SAML Token that is sent to the STS.

3.2) Update the TLS configuration

By default, the Fediz IdP is configured with a truststore required to access the Fediz STS. However, this would mean that the requests to the Google IdP over TLS will not be trusted. To change this, edit 'webapps/fediz-idp/WEB-INF/classes/cxf-tls.xml', and change the HTTP conduit name from "*.http-conduit" to "https://localhost.*". This means that this configuration will only get picked up when communicating with the STS (deployed on "localhost"), and the default JDK truststore will get used when communicating with the Google IdP.

3.3) Update Fediz STS claim mapping

The STS will receive a SAML Token created by the IdP representing the authenticated user. The Subject name will be the email address of the user as configured above. Therefore, we need to add a claims mapping in the STS to map the principal received to some claims. Edit 'webapps/fediz-idp-sts/WEB-INF/userClaims.xml' and just copy the entry for "alice" in "userClaimsREALMA", changing "alice" to your Google email address.

Finally, restart Fediz to pick up the changes (you may need to remove the persistent storage first).

4) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Google authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it converts the received JWT token to a token in the realm of Fediz (realm A) and redirects to the web application.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.3.0 released

    Colm O hEigeartaigh - Mon, 04/25/2016 - 18:06
    A new major release (1.3.0) of Apache CXF Fediz was released a few weeks ago. There are some major dependency updates as part of this release:
    • The core Apache CXF dependency is updated from the 3.0.x branch to the 3.1.x branch (3.1.6 to be precise)
    • The Spring dependency of the IdP is updated from the 3.2.x branch to the 4.1.x branch.
    Fediz contains a number of container plugins to support the Passive Requestor Profile of WS-Federation. The 1.3.0 release now supports container plugins for:
    • Websphere
    • Jetty 8 and 9 (new)
    • Apache Tomcat 7 and 8 (new)
    • Spring Security 2 and 3
    • Apache CXF.
    The Identity Provider (IdP) service has the following new features:
    • The IdP now supports protocol bridging with OpenId Connect IdPs (see previous article on an interop demo with Keycloak).
    • The IdP is now capable of supporting the SAML SSO protocol natively, in addition to the Passive Requestor Profile of WS-Federation.
    • A new IdP service is now available which supports OpenId Connect by leveraging Apache CXF. By default it delegates authentication to the existing Fediz IdP using WS-Federation.
    In a nutshell, the Fediz 1.3.0 IdP supports user authentication via the WS-Federation, SAML SSO and OpenId Connect protocols, and it can also bridge between all of these different protocols. This is a compelling selling point of Fediz, and one I will explore more in some forthcoming articles.
    Categories: Colm O hEigeartaigh

    Using the CXF failover feature to authenticate to multiple Apache Syncope instances

    Colm O hEigeartaigh - Wed, 03/02/2016 - 21:35
    A couple of years ago, I described a testcase that showed how an Apache CXF web service endpoint could send a username/password received via WS-Security to Apache Syncope for authentication. In this article, I'll extend that testcase to make use of the CXF failover feature. The failover feature allows the client to use a set of alternative addresses when the primary endpoint address is unavailable. For the purposes of the demo, instead of a single Apache Syncope instance, we will set up two instances which share the same internal storage. When the first/primary instance goes down, the failover feature will automatically switch to use the second instance.

    1) Set up the Apache Syncope instances

    1.a) Set up a database for Internal Storage

    Apache Syncope persists internal storage to a database via Apache OpenJPA. For the purposes of this demo, we will set up MySQL. Install MySQL in $SQL_HOME and create a new user for Apache Syncope. We will create a new user "syncope_user" with password "syncope_pass". Start MySQL and create a new Syncope database:
    • Start: sudo $SQL_HOME/bin/mysqld_safe --user=mysql
    • Log on: $SQL_HOME/bin/mysql -u syncope_user -p
    • Create a Syncope database: create database syncope; 
    1.b) Set up containers to host Apache Syncope

    We will deploy Syncope to Apache Tomcat. Download Tomcat and extract it twice (calling it first-instance and second-instance). In both instances, edit 'conf/context.xml', and uncomment the "<Manager pathname="" />" configuration. Also in 'conf/context.xml', add a datasource for internal storage:

    <Resource name="jdbc/syncopeDataSource" auth="Container"
        type="javax.sql.DataSource"
        factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
        testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
        validationQuery="SELECT 1" validationInterval="30000"
        maxActive="50" minIdle="2" maxWait="10000" initialSize="2"
        removeAbandonedTimeout="20000" removeAbandoned="true"
        logAbandoned="true" suspectTimeout="20000"
        timeBetweenEvictionRunsMillis="5000" minEvictableIdleTimeMillis="5000"
        jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;
        org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer"
        username="syncope_user" password="syncope_pass"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://localhost:3306/syncope?characterEncoding=UTF-8"/>

    The next step is to enable a way to deploy applications to Tomcat using the Manager app. Edit 'conf/tomcat-users.xml' in both instances and add the following:

    <role rolename="manager-script"/>
    <user username="manager" password="s3cret" roles="manager-script"/>

    Next, download the JDBC driver jar for MySQL and put it in Tomcat's 'lib' directory in both instances. Edit 'conf/server.xml' of the second instance, and change the port to "9080", and change the other ports to avoid conflict with the first Tomcat instance. Now start both Tomcat instances.

    1.c) Install Syncope to the containers

    Download and run the Apache Syncope installer. Install it to Tomcat using MySQL as the database. For more info on this, see a previous tutorial. Run this twice to install Syncope in both Apache Tomcat instances.

    1.d) Configure the container to share the same database

    Next we need to configure both containers to share the same database. Edit 'webapps/syncope/WEB-INF/classes/persistenceContextEMFactory.xml' in the first instance, and change the 'openjpa.RemoteCommitProvider' to:
    • <entry key="openjpa.RemoteCommitProvider" value="tcp(Port=12345, Addresses=127.0.0.1:12345;127.0.0.1:12346)"/>
    Similarly, change the value in the second instance to:
    • <entry key="openjpa.RemoteCommitProvider" value="tcp(Port=12346, Addresses=127.0.0.1:12345;127.0.0.1:12346)"/>
    This is necessary to ensure data consistency across the two Syncope instances. Please see the Syncope cluster page for more information.

    1.e) Add users

    In the first Tomcat instance running on port 8080, go to http://localhost:8080/syncope-console, and add two new roles "employee" and "boss". Add two new users, "alice" and "bob" both with password "security". "alice" has both roles, but "bob" is only an "employee". Now logout, and login to the second instance running on port 9080. Check that the newly created users are available.

    2) The CXF testcase

    The CXF testcase is available in github:
    • cxf-syncope-failover: This project contains a number of tests that show how an Apache CXF service endpoint can use the CXF Failover feature, to authenticate to different Apache Syncope instances.
    A CXF client sends a SOAP UsernameToken to a CXF Endpoint. The CXF Endpoint has been configured (see cxf-service.xml) to validate the UsernameToken via the SyncopeUTValidator, which dispatches the username/passwords to Syncope for authentication via Syncope's REST API.

    The SyncopeUTValidator is configured to use the CXF failover feature with the address of the primary Syncope instance ("first-instance" above running on 8080). It is also configured with a list of alternative addresses to try if the first instance is down (in this case the "second-instance" running on 9080).
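
    The equivalent wiring in code is roughly as follows (a simplified sketch rather than the Spring configuration used in the testcase; the addresses are the two local Syncope instances described above):

      import java.util.Arrays;

      import org.apache.cxf.clustering.FailoverFeature;
      import org.apache.cxf.clustering.SequentialStrategy;
      import org.apache.cxf.jaxrs.client.JAXRSClientFactoryBean;
      import org.apache.cxf.jaxrs.client.WebClient;

      public class SyncopeFailoverClient {

          public static WebClient createClient() {
              // Try the alternate address when the primary Syncope instance is unavailable
              SequentialStrategy strategy = new SequentialStrategy();
              strategy.setAlternateAddresses(Arrays.asList("http://localhost:9080/syncope/rest/"));

              FailoverFeature feature = new FailoverFeature();
              feature.setStrategy(strategy);

              JAXRSClientFactoryBean bean = new JAXRSClientFactoryBean();
              bean.setAddress("http://localhost:8080/syncope/rest/");   // primary instance
              bean.setFeatures(Arrays.asList(feature));
              return bean.createWebClient();
          }
      }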

    The test makes two invocations. The first should successfully authenticate to the first Syncope instance. Now the test sleeps for 10 seconds after prompting you to kill the first Syncope instance. It should successfully failover to the second Syncope instance on the second invocation.
    Categories: Colm O hEigeartaigh

    Support for OpenId Connect protocol bridging in Apache CXF Fediz 1.3.0

    Colm O hEigeartaigh - Fri, 02/26/2016 - 15:49
    Apache CXF Fediz 1.3.0 will be released in the near future. One of the new features of Fediz 1.2.0 (released last year) was the ability to act as an identity broker with a SAML SSO IdP. In the 1.3.0 release, Apache CXF Fediz will have the ability to act as an identity broker with an OpenId Connect IdP. In other words, the Fediz IdP can act as a protocol bridge between the WS-Federation and OpenId Connect protocols. In this article, we will look at an interop test case with Keycloak.

    1) Install and configure Keycloak

    Download and install the latest Keycloak distribution (tested with 1.8.0). Start keycloak in standalone mode by running 'sh bin/standalone.sh'.

    1.1) Create users in Keycloak

    First we need to create an admin user by navigating to the following URL, and entering a password:
    • http://localhost:8080/auth/
    Click on the "Administration Console" link, logging on using the admin user credentials. You will see the configuration details of the "Master" realm. For the purposes of this demo, we will create a new realm. Hover the mouse pointer over "Master" in the top left-hand corner, and click on "Add realm". Create a new realm called "realmb". Now we will create a new user in this realm. Click on "Users" and select "Add User", specifying "alice" as the username. Click "save" and then go to the "Credentials" tab for "alice", and specify a password, unselecting the "Temporary" checkbox, and reset the password.

    1.2) Create a new client application in Keycloak

    Now we will create a new client application for the Fediz IdP in Keycloak. Select "Clients" in the left-hand menu, and click on "Create". Specify the following values:
    • Client ID: realma-client
    • Client protocol: openid-connect
    • Root URL: https://localhost:8443/fediz-idp/federation
    Once the client is created you will see more configuration options:
    • Select "Access Type" to be "confidential".
    Now go to the "Credentials" tab of the newly created client and copy the "Secret" value. This will be required in the Fediz IdP to authenticate to the token endpoint in Keycloak.

    1.3) Export the Keycloak signing certificate

    Finally, we need to export the Keycloak signing certificate so that the Fediz IdP can validate the signed JWT Token from Keycloak. Select "Realm Settings" (for "realmb") and click on the "Keys" tab. Copy and save the value specified in the "Certificate" textfield.

    1.4) Testing the Keycloak configuration

    It's possible to see the Keycloak OpenId Connect configuration by navigating to:
    • http://localhost:8080/auth/realms/realmb/.well-known/openid-configuration
    This tells us what the authorization and token endpoints are, both of which we will need to configure the Fediz IdP. To test that everything is working correctly, open a web browser and navigate to:
    • http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth?response_type=code&client_id=realma-client&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=openid
    Login using the credentials you have created for "alice". Keycloak will then attempt to redirect to the given "redirect_uri" and so the browser will show a connection error message. However, copy the URL and extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
    • curl -u realma-client:<secret> --data "client_id=realma-client&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" http://localhost:8080/auth/realms/realmb/protocol/openid-connect/token
    You should see a successful response containing (amongst other things) the OAuth 2.0 Access Token and the OpenId Connect IdToken, containing the user identity.

    2) Install and configure the Apache CXF Fediz IdP and sample Webapp

    Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use Fediz 1.3.0 here (or the latest SNAPSHOT version) for OpenId Connect support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    2.1) Configure the Fediz IdP to communicate with Keycloak

    Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the OpenId Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
    • Change the port in "idpUrl" to "8443". 
    In the 'trusted-idp-realmB' bean:
    • Change the "url" value to "http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth".
    • Change the "protocol" value to "openid-connect-1.0".
    • Change the "certificate" value to "keycloak.cert". 
    • Add the following parameters Map, filling in a value for the client secret extracted above: <property name="parameters">
                  <util:map>
                      <entry key="client.id" value="realma-client"/>
                      <entry key="client.secret" value="<secret>"/>
                      <entry key="token.endpoint" value="http://localhost:8080/auth/realms/realmb/protocol/openid-connect/token"/>
                  </util:map>
              </property>
       
    2.2) Configure Fediz to use the Keycloak signing certificate

    Copy 'webapps/fediz-idp/WEB-INF/classes/realmb.cert' to a new file called 'webapps/fediz-idp/WEB-INF/classes/keycloak.cert'. Edit this file and delete the content between the "-----BEGIN CERTIFICATE----- / -----END CERTIFICATE-----" tags, pasting instead the Keycloak signing certificate as retrieved in step "1.3" above.

    Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

    3) Testing the service

    To test the service navigate to:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    Select "realm B". You should be redirected to the Keycloak authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it converts the received JWT token to a token in the realm of Fediz (realm A) and redirects to the web application.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.2 released

    Colm O hEigeartaigh - Wed, 02/17/2016 - 13:12
    Apache CXF Fediz 1.2.2 has been released. The issues fixed can be seen here. Highlights include:
    • The core Apache CXF dependency is updated to the recent 3.0.8 release.
    • A new HomeRealm Discovery Service based on Spring EL is available in the IdP.
    • Support for configurable token expiration validation in the plugins has been added.
    • Various fixes for the websphere container plugin have been added.
    A new feature in 1.2.2 is the ability to specify a constraint in the IdP on the acceptable 'wreply' value for a given service. When the IdP successfully authenticates the end user, it will issue the WS-Federation response to the value specified in the initial request in the 'wreply' parameter. However, this could be exploited by a malicious third party to redirect the end user to a custom address, where the issued token could be retrieved. In 1.2.2, there is a new property associated with the Application in the IdP called 'passiveRequestorEndpointConstraint'. This is a regular expression on the acceptable value for the 'wreply' endpoint associated with this Application. If this property is not specified, a warning is logged in the IdP. For example, a constraint such as "https://localhost:(\d)*/fedizhelloworld/.*" would only accept 'wreply' addresses that point back to the "fedizhelloworld" application.

    Categories: Colm O hEigeartaigh

    Javascript Object Signing and Encryption (JOSE) support in Apache CXF - part IV

    Colm O hEigeartaigh - Mon, 02/15/2016 - 17:36
    This is the fourth and final article in a series of posts on support for Javascript Object Signing and Encryption (JOSE) in Apache CXF. The first article covered how to sign content using JWS, while the second article showed how to encrypt content using JWE. The third article described how to construct JWT Tokens, how to sign and encrypt them, and how they can be used for authentication and authorization in Apache CXF. In this post, we will show how the CXF Security Token Service (STS) can be leveraged to issue and validate JWT Tokens.

    1) The CXF STS

    Apache CXF ships with a powerful and widely deployed STS implementation that has been covered extensively on this blog before. Clients interact with the STS via the SOAP WS-Trust interface, typically asking the STS to issue a (SAML) token by passing some parameters. The STS offers the following functionality with respect to tokens:
    • It can issue SAML (1.1 + 2.0) and SecurityContextTokens.
    • It can validate SAML, UsernameToken and BinarySecurityTokens.
    • It can renew SAML Tokens
    • It can cancel SecurityContextTokens.
    • It can transform tokens from one type to another, and from one realm to another.
    Wouldn't it be cool if you could ask the STS to issue and validate JWT tokens as well? Well that's exactly what you can do from the new CXF 3.1.5 release! If you already have an STS instance deployed to issue SAML tokens, then you can also issue JWT tokens to different clients with some simple configuration changes to your existing deployment.

    2) Issuing JWT Tokens from the STS

    Let's look at the most common use-case first, that of issuing JWT tokens from the STS. The client specifies a TokenType String in the request to indicate the type of the desired token. There is no standard WS-Trust Token Type for JWT tokens as there is for SAML Tokens. The default implementation that ships with the STS uses the token type "urn:ietf:params:oauth:token-type:jwt" (see here).
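
    As a rough sketch, a client can request a JWT token over WS-Trust using the CXF STSClient. The WSDL location and service/port names below are placeholders for a particular STS deployment, and the security configuration (e.g. UsernameToken credentials) is omitted:

      import org.apache.cxf.Bus;
      import org.apache.cxf.BusFactory;
      import org.apache.cxf.ws.security.tokenstore.SecurityToken;
      import org.apache.cxf.ws.security.trust.STSClient;

      public class JwtStsClient {

          public static String requestJwt() throws Exception {
              Bus bus = BusFactory.getDefaultBus();

              STSClient stsClient = new STSClient(bus);
              stsClient.setWsdlLocation("https://localhost:8081/SecurityTokenService/UT?wsdl");
              stsClient.setServiceName(
                  "{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService");
              stsClient.setEndpointName(
                  "{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port");

              // Ask for a JWT rather than the default SAML token
              stsClient.setTokenType("urn:ietf:params:oauth:token-type:jwt");

              // The returned token Element is the wrapper holding the raw JWT as its text content
              SecurityToken token =
                  stsClient.requestSecurityToken("https://localhost:8081/doubleit/services");
              return token.getToken().getTextContent();
          }
      }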

    The STS maintains a list of TokenProvider implementations, which it queries in turn to see whether it is capable of issuing a token of the given type. A new implementation is available to issue JWT Tokens - the JWTTokenProvider. By default tokens are signed via JWS using the STS master keystore (this is controlled via a "signToken" property of the JWTTokenProvider). The keystore configuration is exactly the same as for the SAML case. Tokens can also be encrypted via JWE if desired. Realms are also supported in the same way as for SAML Tokens.
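
    In code form, registering the new provider with the issue operation looks roughly like the following (in practice this is normally done in the Spring configuration of the STS, and the keystore/signature properties are omitted):

      import java.util.ArrayList;
      import java.util.List;

      import org.apache.cxf.sts.operation.TokenIssueOperation;
      import org.apache.cxf.sts.token.provider.TokenProvider;
      import org.apache.cxf.sts.token.provider.jwt.JWTTokenProvider;

      public class StsJwtIssueConfig {

          public static TokenIssueOperation createIssueOperation() {
              // The JWTTokenProvider signs the issued token (JWS) with the STS master keystore
              JWTTokenProvider jwtTokenProvider = new JWTTokenProvider();
              jwtTokenProvider.setSignToken(true);

              List<TokenProvider> providers = new ArrayList<>();
              providers.add(jwtTokenProvider);

              TokenIssueOperation issueOperation = new TokenIssueOperation();
              issueOperation.setTokenProviders(providers);
              // The STSPropertiesMBean (keystore, issuer name, etc.) must also be set - omitted here
              return issueOperation;
          }
      }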

    The claims inserted into the issued token are obtained via a JWTClaimsProvider Object configured in the JWTTokenProvider. The default implementation adds the following claims:
    • The current principal is added as the Subject claim.
    • The issuer name of the STS is added as the Issuer claim.
    • Any claims that were requested by the client via the WS-Trust "Claims" parameter (that can be handled by the ClaimManager of the STS).
    • The various "time" constraints, such as Expiry, NotBefore, IssuedAt, etc.
    • Finally, it adds in the audience claim obtained from an AppliesTo address and the wst:Participants, if either were specified by the client.
    The token that is generated by the JWTTokenProvider is in the form of a String. However, as the token will be included in the WS-Trust Response, the String must be "wrapped" somehow to form a valid XML Element. A TokenWrapper interface is defined to do precisely this. The default implementation simply inserts the JWT Token into the WS-Trust Response as the text content of a "TokenWrapper" Element.
      3) Validating JWT Tokens in the STS

      As well as issuing JWT Tokens, the STS can also validate them via the WS-Trust Validate operation. A new TokenValidator implementation is available to validate JWT tokens, called the JWTTokenValidator. The signature of the token is first validated using the STS truststore. Then the time-related claims of the token are checked, e.g. whether the token has expired or whether the current time is before the NotBefore time of the token.

      A useful feature of the WS-Trust Validate operation is the ability to transform tokens from one type to another. Normally, a client just wants to know whether a token is valid or not, and hence receives a "yes/no" response from the STS. However, if the client specifies a TokenType that doesn't correspond to the standard "Status" TokenType, but instead corresponds to a different token type, the STS will validate the received token and then generate a new token of the desired type using the principal associated with the validated token.

      This "token transformation" functionality is also supported with the new JWT implementation. It is possible to transform a SAML Token into a JWT Token, and vice versa, something that could be quite useful in a deployment where you need to support both REST and SOAP services for example. Using a JWT Token as a WS-Trust OnBehalfOf/ActAs token is also supported.

      An interop demo between Apache CXF Fediz and Keycloak

      Colm O hEigeartaigh - Mon, 02/01/2016 - 17:15
      Last week, I showed how to use the Apache CXF Fediz IdP as an identity broker with a real-world SAML SSO IdP based on the Shibboleth IdP (as opposed to an earlier article which used a mocked SAML SSO IdP). In this post, I will give similar instructions to configure the Fediz IdP to act as an identity broker with Keycloak.

      1) Install and configure Keycloak

      Download and install the latest Keycloak distribution (tested with 1.8.0). Start Keycloak in standalone mode by running 'sh bin/standalone.sh'.

      1.1) Create users in Keycloak

      First we need to create an admin user by navigating to the following URL, and entering a password:
      • http://localhost:8080/auth/
      Click on the "Administration Console" link, logging on using the admin user credentials. You will see the configuration details of the "Master" realm. For the purposes of this demo, we will create a new realm. Hover the mouse pointer over "Master" in the top left-hand corner, and click on "Add realm". Create a new realm called "realmb". Now we will create a new user in this realm. Click on "Users" and select "Add User", specifying "alice" as the username. Click "save" and then go to the "Credentials" tab for "alice", and specify a password, unselecting the "Temporary" checkbox, and reset the password.

      1.2) Create a new client application in Keycloak

      Now we will create a new client application for the Fediz IdP in Keycloak. Select "Clients" in the left-hand menu, and click on "Create". Specify the following values:
      • Client ID: urn:org:apache:cxf:fediz:idp:realm-A
      • Client protocol: saml
      • Client SAML Endpoint: https://localhost:8443/fediz-idp/federation
      Once the client is created you will see more configuration options:
      • Select "Sign Assertions"
      • Select "Force Name ID Format".
      • Valid Redirect URIs: https://localhost:8443/*
      Now go to the "SAML Keys" tab of the newly created client. Here we will have to import the certificate of the Fediz IdP so that Keycloak can validate the signed SAML requests. Click "Import" and specify:
      • Archive Format: JKS
      • Key Alias: realma
      • Store password: storepass
      • Import file: stsrealm_a.jks
      1.3) Export the Keycloak signing certificate

      Finally, we need to export the Keycloak signing certificate so that the Fediz IdP can validate the signed SAML Response from Keycloak. Select "Realm Settings" (for "realmb") and click on the "Keys" tab. Copy and save the value specified in the "Certificate" textfield.

      2) Install and configure the Apache CXF Fediz IdP and sample Webapp

      Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
      2.1) Configure the Fediz IdP to communicate with Keycloak

      Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the SAML SSO protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
      • Change the port in "idpUrl" to "8443". 
      In the 'trusted-idp-realmB' bean:
      • Change the "url" value to "http://localhost:8080/auth/realms/realmb/protocol/saml".
      • Change the "protocol" value to "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser".
      • Change the "certificate" value to "keycloak.cert".
      2.2) Configure Fediz to use the Keycloak signing certificate

      Copy 'webapps/fediz-idp/WEB-INF/classes/realmb.cert' to a new file called 'webapps/fediz-idp/WEB-INF/classes/keycloak.cert'. Edit this file + delete the content between the "-----BEGIN CERTIFICATE----- / -----END CERTIFICATE-----" tags, pasting instead the Keycloak signing certificate as retrieved in step "1.3" above.

      The STS also needs to trust the Keycloak signing certificate. Copy keycloak.cert into 'webapps/fediz-idp-sts/WEB-INF/classes'. In this directory, import keycloak.cert into the STS truststore via:
      • keytool -keystore ststrust.jks -import -file keycloak.cert -storepass storepass -alias keycloak
      Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

      3) Testing the service

      To test the service navigate to:
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
      Select "realm B". You should be redirected to the Keycloak authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it converts the received SAML token to a token in the realm of Fediz (realm A) and redirects to the web application.



      An interop demo between Apache CXF Fediz and Shibboleth

      Colm O hEigeartaigh - Tue, 01/26/2016 - 12:45
      Apache CXF Fediz is an open source implementation of the WS-Federation Passive Requestor Profile for SSO. It allows you to configure SSO for your web application via a container plugin, which redirects the user to authenticate at an IdP (Fediz or another WS-Federation based IdP). The Fediz IdP also supports the ability to act as an identity broker to a remote IdP, if the user is to be authenticated in a different realm. Last year, this blog covered a new feature of Apache CXF Fediz 1.2.0, which was the ability to act as an identity broker with a remote SAML SSO IdP. In this post, we will look at extending the demo to work with the Shibboleth IdP.

      1) Install and configure the Apache CXF Fediz IdP and sample Webapp

      Firstly, follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
      Now we will configure the Fediz IdP to authenticate the user in "realm B", by using the SAML SSO protocol with a Shibboleth IdP instance. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml':

      In the 'idp-realmA' bean:
      • Change the port in "idpUrl" to "8443". 
      In the 'trusted-idp-realmB' bean:
      • Change the "url" value to "http://localhost:9080/idp/profile/SAML2/Redirect/SSO".
      • Change the "protocol" value to "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser".
      • Add the following property: <property name="parameters"><util:map><entry key="require.known.issuer" value="false" /></util:map></property>
      Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

      2) Install and configure Shibboleth

      This is a reasonably complex task, so let's break it down into various sections.

      2.1) Install Shibboleth and deploy to Tomcat

      Download and extract the latest Shibboleth Identity Provider (tested with 3.2.1). Install Shibboleth by running the "install.sh" script in the bin directory, installing it to "$SHIB_HOME" and accepting the default values prompted during the installation process (enter a random password for the keystore, as we won't be using it). Download and extract an Apache Tomcat 7 instance, and follow these steps:
      • Copy '$SHIB_HOME/war/idp.war' into the Tomcat webapps directory.
      • Configure the Tomcat JVM to find the Shibboleth IdP home by defining: export JAVA_OPTS="-Xmx512M -Didp.home=$SHIB_HOME".
      • Next you need to download the jstl jar (https://repo1.maven.org/maven2/jstl/jstl/1.2/) + put it in the lib directory of Tomcat.
      • Edit conf/server.xml + change the ports to avoid conflicts with the Tomcat instance that the Fediz IdP is running in (e.g. use 9080 instead of 8080, etc.).
      • Now start Tomcat + check that everything is working by navigating to: http://localhost:9080/idp/profile/status
      2.2) Configure the RP provider in Shibboleth

      Next we need to configure the Fediz IdP as a RP (relying party) in Shibboleth. The Fediz IdP has the ability to generate metadata (either WS-Federation or SAML SSO) for a trusted IdP. Navigate to the following URL in a browser and save the metadata to a local file "fediz-metadata.xml":
      • https://localhost:8443/fediz-idp/metadata/urn:org:apache:cxf:fediz:idp:realm-B
      Edit '$SHIB_HOME/conf/metadata-providers.xml' and add the following configuration to pick up this metadata file:

      <MetadataProvider id="FedizMetadata"  xsi:type="FilesystemMetadataProvider" metadataFile="$METADATA_PATH/fediz-metadata.xml"/>

      As we won't be encrypting the SAML response in this demo, edit the '$SHIB_HOME/idp.properties' file and uncomment the "idp.encryption.optional = false" line, changing "false" to "true".

      2.3) Configuring the Signing Keys in Shibboleth

      Next we need to copy the keys that Fediz has defined for "realm B" to Shibboleth. Copy the signing certificate "realmb.cert" from the Fediz source and rename it as '$SHIB_HOME/credentials/idp-signing.crt'. Next we need to extract the private key from the keystore and save it as a plaintext private key. Download the realmb keystore, then extract the private key via:
      • keytool -importkeystore -srckeystore stsrealm_b.jks -destkeystore stsrealmb.p12 -deststoretype PKCS12 -srcalias realmb -deststorepass storepass -srcstorepass storepass -srckeypass realmb -destkeypass realmb
      • openssl pkcs12 -in stsrealmb.p12  -nodes -nocerts -out idp-signing.key
      • Edit idp-signing.key to remove any additional information before the "BEGIN PRIVATE KEY" part.
      • cp idp-signing.key $SHIB_HOME/credentials
      2.4) Configure Shibboleth to authenticate users

      Next we need to configure Shibboleth to authenticate users. If you have a configured KDC or LDAP installation on your machine, then it is easiest to use that, provided you configure some test users there. If not, in this section we will configure Shibboleth to use JAAS to authenticate users via Kerberos. Edit '$SHIB_HOME/conf/authn/password-authn-config.xml' + comment out the ldap import, instead uncommenting the "jaas-authn-config.xml" import. Next edit '$SHIB_HOME/conf/authn/jaas.config' and replace the contents with:

      ShibUserPassAuth {
          com.sun.security.auth.module.Krb5LoginModule required refreshKrb5Config=true useKeyTab=false;
      };

      Now we will set up and configure a test KDC. Assuming you are running this demo on linux, edit '/etc/krb5.conf' and change the "default_realm" to "service.ws.apache.org" + add the following to the realms section:

              service.ws.apache.org = {
                      kdc = localhost:12345
              }

      Now we will look at altering a test-case I wrote that uses Apache Kerby as a test KDC. Clone my testcases github repo and go to the "cxf-kerberos-kerby" test-case, which is part of the top-level "cxf" projects. Edit the AuthenticationTest and change the values for KDC_PORT + KDC_UDP_PORT to "12345". Next, remove the @org.junit.Ignore annotation from the "launchKDCTest()" method to just launch the KDC + sleep for a few minutes. Launch the test on the command line via "mvn test -Dtest=AuthenticationTest".

      2.5) Configure Shibboleth to include the authenticated principal in the SAML Subject

      For the purposes of our demo scenario, the Fediz IdP expects the authenticated principal in the SAML Subject it gets back from Shibboleth. Edit '$SHIB_HOME/conf/attribute-filter.xml' and add the following:

          <AttributeFilterPolicy id="releasePersistentIdToAnyone">
              <PolicyRequirementRule xsi:type="ANY"/>

              <AttributeRule attributeID="persistentId">
                  <PermitValueRule xsi:type="ANY"/>
              </AttributeRule>
          </AttributeFilterPolicy>

      Edit '$SHIB_HOME/conf/attribute-resolver.xml' and add the following:

      <resolver:AttributeDefinition id="persistentId" xsi:type="ad:PrincipalName">
            <resolver:AttributeEncoder xsi:type="enc:SAML1StringNameIdentifier" nameFormat="urn:mace:shibboleth:1.0:nameIdentifier"/>
            <resolver:AttributeEncoder xsi:type="enc:SAML2StringNameID" nameFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"/>
      </resolver:AttributeDefinition>

      Edit '$SHIB_HOME/conf/saml-nameid.xml' and comment out any beans listed in the "shibboleth.SAML2NameIDGenerators" list. Finally, edit '$SHIB_HOME/conf/saml-nameid.properties' and uncomment the legacy Generator:

      idp.nameid.saml2.legacyGenerator = shibboleth.LegacySAML2NameIDGenerator

      That will enable Shibboleth to process the "persistent" attribute using the Principal Name. (Re)start the Tomcat instance and we should be ready to go.

      3) Testing the service

      To test the service navigate to:
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
      Select "realm B". You should be redirected to the Shibboleth authentication page. Enter "alice/alice" as the username/password. You will be redirected to Fediz, where it converts the received SAML token to a token in the realm of Fediz (realm A) and redirects to the web application.





      Javascript Object Signing and Encryption (JOSE) support in Apache CXF - part III

      Colm O hEigeartaigh - Mon, 12/07/2015 - 18:03
      This is the third article in a series of posts on support for Javascript Object Signing and Encryption (JOSE) in Apache CXF. The previous article described how to encrypt content using the JSON Web Encryption specification. This post covers support for JSON Web Tokens (JWT) in Apache CXF. The JWT spec is not strictly part of the JOSE specification set, but it makes sense to cover it in this series of posts, as it is a natural extension of the JOSE specs to convey claims.

      1) Test Cases:

      As before, let's start by looking at some practical unit tests on github:
      • cxf-jaxrs-jose: This project contains a number of tests that show how to use the JSON Security functionality in Apache CXF to sign and encrypt JSON payloads to/from a JAX-RS service.
      These tests mostly follow the same basic format. The web service configuration defines a number of endpoints that map to the same web service implementation (a simple DoubleItService, which allows a user to POST a number and receive the doubled amount, where both the request and response are encoded via JSON). The client test code first constructs a JWT Token and then includes it in the message payload by adding a specific provider. The JWT Token is then processed by a provider on the receiving side. We will cover the individual tests in more detail below.

      2) Constructing JWT Tokens

      Apache CXF provides an easy-to-use API to create JWT Tokens. Let's look at how this is done in one of the tests - the JWTAuthenticationTest. The tests construct a JwtClaims Object, which has convenient set/get methods for the standard JWT Claims.
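      For example, a token might be built along the following lines (a minimal sketch; the claim values are illustrative rather than taken from the test code):

      import java.util.Collections;

      import org.apache.cxf.rs.security.jose.jwt.JwtClaims;
      import org.apache.cxf.rs.security.jose.jwt.JwtToken;

      // A minimal sketch of building a JwtToken; the claim values are illustrative.
      public class JwtTokenBuilder {

          public static JwtToken buildToken(String audienceAddress) {
              JwtClaims claims = new JwtClaims();
              claims.setSubject("alice");                              // "sub"
              claims.setIssuer("DoubleItIssuer");                      // "iss" (illustrative)
              claims.setIssuedAt(System.currentTimeMillis() / 1000L);  // "iat", in seconds
              claims.setAudiences(Collections.singletonList(audienceAddress)); // "aud"
              return new JwtToken(claims);
          }
      }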


      The JwtClaims Object can be passed through to the constructor of a JwtToken Object. To serialize the JwtToken in the request, simply add the JwtAuthenticationClientFilter to the client provider list, and specify the JwtToken Object as a message property under "jwt.token". Alternatively, the JwtClaims Object can be passed without having to construct a JwtToken via the property "jwt.claims". On the receiving side, a JwtToken can be processed by the JwtAuthenticationFilter provider.

      3) Authentication and Authorization using signed JWT Tokens

      By default, the JwtAuthenticationClientFilter signs the JWT Token. The signature configuration is the exact same as that used for JWS as covered in the first article. On the receiving side, the JwtAuthenticationFilter will set up a security context for the authenticated client, assuming that a public key signature algorithm was used to sign the token. The "sub" (Subject) claim from the token gets mapped to the principal of the security context.

      As the JWT Token can convey arbitrary claims, it can be used for RBAC authorization. This is demonstrated in the JWTAuthorizationTest. A role called "boss" is inserted into the token. On the receiving side, the JwtAuthenticationFilter is configured to use the "role" claim in the token to extract roles for the authenticated principal and to populate the security context with them. As part of the test-case, CXF's SimpleAuthorizingInterceptor is used to require that a client must have a role of "boss" to invoke on the web service method in question.
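      To sketch the idea in code (the claim name follows the description above, while the "roleClaim" setter on the filter is an assumption on my part):

      import org.apache.cxf.rs.security.jose.jaxrs.JwtAuthenticationFilter;
      import org.apache.cxf.rs.security.jose.jwt.JwtClaims;

      // A hedged sketch of the RBAC flow described above: the client adds a "role"
      // claim to the token, and the service-side filter is told which claim to read
      // roles from (the "roleClaim" setter name is an assumption on my part).
      public class JwtRoleExample {

          public static void addRoleClaim(JwtClaims claims) {
              claims.setProperty("role", "boss"); // arbitrary claim carrying the role
          }

          public static JwtAuthenticationFilter serviceSideFilter() {
              JwtAuthenticationFilter filter = new JwtAuthenticationFilter();
              filter.setRoleClaim("role"); // populate the SecurityContext roles from this claim
              return filter;
          }
      }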

      4) Encrypting JWT Tokens

      It is very easy to encrypt JWT Tokens (as well as to both sign and encrypt them) in CXF. The JwtAuthenticationClientFilter needs to be configured to also encrypt the token, and the same configuration is used as for JWE in the previous article. Similarly, on the receiving side the JwtAuthenticationFilter must have the property "jweRequired" set to "true" to decrypt incoming encrypted tokens. See the JWTEncryptedTest test for some examples.

      5) Token validation

      On receiving a JAX-RS request containing a JWT token, the JwtAuthenticationFilter will first parse the token, and then verify the signature (if present) and decrypt the token (if encrypted). It then performs some quality of service validation on the token claims, which I'll detail here. This validation can be easily modified by overriding the protected "validateToken" method in JwtAuthenticationFilter.
      • The "exp" (Expiration Time) claim is validated if present. If the expiry value is before the current date/time, then the token is rejected. 
      • The "nbf" (Not Before) claim is validated if present. If the not before value is after the current date/time, then the token is rejected. 
      • The "iat" (Issued At) claim is validated if present. To validate the "iat" claim, a "ttl" property must be set on the JwtAuthenticationFilter.
      • Either an "iat" or "exp" claim must be present in the token, as otherwise we have no way of enforcing an expiry on a token.
      • A clock-skew value can also be configured on the JwtAuthenticationFilter via the "clockOffset" property (see the sketch after this list).
      • The "aud" (Audience) claim is validated. This claim must contain at least one audience value which matches the endpoint address of the recipient.

      Javascript Object Signing and Encryption (JOSE) support in Apache CXF - part II

      Colm O hEigeartaigh - Wed, 12/02/2015 - 18:49
      This is the second in a series of blog posts on the support for the Javascript Object Signing and Encryption (JOSE) specifications in Apache CXF. The first article covered how to sign content using the JSON Web Signature (JWS) specification. In this post we will look at how to encrypt content using the JSON Web Encryption (JWE) specification.

      1) Test Cases:

      As before, let's start by looking at some practical unit tests on github:
      • cxf-jaxrs-jose: This project contains a number of tests that show how to use the JSON Security functionality in Apache CXF to sign and encrypt JSON payloads to/from a JAX-RS service.
      For now let's look at the tests contained in JWETest. These tests mostly follow the same basic format. The web service configuration defines a number of endpoints that map to the same web service implementation (a simple DoubleItService, which allows a user to POST a number and receive the doubled amount, where both the request and response are encoded via JSON). The client test code uses the CXF WebClient API to encrypt the message payload by adding a specific provider. The message in turn is decrypted by a provider on the receiving side. You can run the test via the command line "mvn test -Dtest=JWETest", and the message requests and responses will appear in the console.

      2) Compact vs. JSON Serialization

      Just like the JWS specification, there are two different ways of serializing JWE structures. Compact serialization is a URL-safe representation that is convenient to use when you only have a single encrypting entity. JSON serialization represents the JWE structures as JSON objects. As of the time of publishing this post, CXF only has interceptors for the compact approach. However, I'm told the JSON serialization case will be supported very soon :-)

      The providers to use for the compact case on both the client + receiving sides are:
      • Compact: JweWriterInterceptor (out) + JweContainerRequestFilter (in)
      3) Security Configuration

      As well as adding the desired providers, it is also necessary to specify the security configuration to set the appropriate algorithms, keys, etc. to use. The CXF wiki has an extensive list of all of the different security configuration properties. For encryption, we need to first load the encrypting key (either a JKS keystore or else a JSON Web Key (JWK) is supported).

      As well as defining the encryption key, we also need to configure the algorithms to encrypt the content as well as to encrypt the key. The list of acceptable encryption algorithms for JWE is defined by the JSON Web Algorithms (JWA) spec. For content encryption, the supported algorithms are all based on AES (CBC/GCM mode) with different key sizes. For key encryption, various schemes based on RSA, AES KeyWrap/GCM, Elliptic Curve Diffie-Hellman, PBES2 are supported, as well as a symmetric encryption option.

      For example, the tests use a configuration that loads a public key from a Java KeyStore, and uses the RSA-OAEP key encryption algorithm together with a 128-bit AES content encryption algorithm in CBC mode, using HMAC-SHA256 to generate the authentication code.
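      By way of illustration, a client along the following lines would encrypt requests using those algorithms (a hedged sketch: the property names are the rs.security.* ones documented on the CXF wiki, while the keystore details, alias and address are made up for illustration):

      import java.util.Collections;
      import java.util.HashMap;
      import java.util.Map;

      import org.apache.cxf.jaxrs.client.WebClient;
      import org.apache.cxf.rs.security.jose.jaxrs.JweWriterInterceptor;

      // A hedged sketch of a JWE client. The keystore details, alias and address are
      // illustrative; the property names are the rs.security.* ones documented on the
      // CXF wiki, and the algorithm values are the standard JWA identifiers.
      public class JweClientSketch {

          public static WebClient encryptingClient(String address) {
              WebClient client = WebClient.create(address,
                  Collections.singletonList(new JweWriterInterceptor()));

              Map<String, Object> props = new HashMap<>();
              props.put("rs.security.keystore.type", "jks");
              props.put("rs.security.keystore.file", "clientstore.jks");               // illustrative
              props.put("rs.security.keystore.password", "cspass");                    // illustrative
              props.put("rs.security.keystore.alias", "myservicekey");                 // the service's public key
              props.put("rs.security.encryption.key.algorithm", "RSA-OAEP");           // key encryption
              props.put("rs.security.encryption.content.algorithm", "A128CBC-HS256");  // content encryption
              WebClient.getConfig(client).getRequestContext().putAll(props);
              return client;
          }
      }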
      The service side configuration is largely the same, apart from the fact that we need to specify the "rs.security.key.password" configuration tag to load the private key. The encryption algorithms must also be specified to impose a constraint on the desired algorithms.

      4) Encrypting XML payload

      Similar to the signature case, it is possible to use JWE to encrypt an XML message, and not just a JSON message. An example is included in JWETest ("testEncryptingXMLPayload").  The configuration is exactly the same as for the JSON case. 

      5) Including the encrypting key

      It is possible to include the encrypting certificate/key in the JWE header by setting one of the following properties:
      • rs.security.encryption.include.public.key - Include the JWK public key for encryption in the "jwk" header.
      • rs.security.encryption.include.cert - Include the X.509 certificate for encryption in the "x5c" header.
      • rs.security.encryption.include.key.id - Include the JWK key id for encryption in the "kid" header.
      • rs.security.encryption.include.cert.sha1 - Include the X.509 certificate SHA-1 digest for encryption in the "x5t" header.
      It could be useful to include the encrypting certificate/key if the recipient has multiple decryption keys and doesn't know which one to use to decrypt the request.

      6) Signing + encrypting a message

      It is possible to both sign and encrypt a message by combining the JWS + JWE interceptors. For example, simply adding the JweWriterInterceptor and JwsWriterInterceptor providers on the client side will both sign and encrypt the request. An example is included in the github project above (JWEJWSTest). 
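      For instance, a hedged sketch of such a client (the rs.security.* signature and encryption properties shown earlier must also be supplied):

      import java.util.Arrays;

      import org.apache.cxf.jaxrs.client.WebClient;
      import org.apache.cxf.rs.security.jose.jaxrs.JweWriterInterceptor;
      import org.apache.cxf.rs.security.jose.jaxrs.JwsWriterInterceptor;

      // A minimal sketch: registering both writer interceptors results in the outgoing
      // payload being both signed (JWS) and encrypted (JWE). The rs.security.* signature
      // and encryption properties shown earlier must also be supplied on the client.
      public class SignAndEncryptClientSketch {

          public static WebClient create(String address) {
              return WebClient.create(address,
                  Arrays.asList(new JwsWriterInterceptor(), new JweWriterInterceptor()));
          }
      }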

      Javascript Object Signing and Encryption (JOSE) support in Apache CXF - part I

      Colm O hEigeartaigh - Mon, 11/30/2015 - 18:39
      The Javascript Object Signing and Encryption (JOSE) specifications cover how to sign and encrypt data using JSON. Apache CXF has excellent support for all of the JOSE specifications (see the documentation here), thanks largely to the work done by my colleague Sergey Beryozkin. This is the first in a series of blog posts describing how to use and implement JOSE for your web services using Apache CXF. In this post, we will cover how to sign content using the JSON Web Signature (JWS) specification.

      1) Test Cases:

      Let's start by looking at some practical unit tests on github:
      • cxf-jaxrs-jose: This project contains a number of tests that show how to use the JSON Security functionality in Apache CXF to sign and encrypt JSON payloads to/from a JAX-RS service.
      For now let's look at the JWSSignatureTest. These tests mostly follow the same basic format. The web service configuration defines a number of endpoints that map to the same web service implementation (a simple DoubleItService, which allows a user to POST a number and receive the doubled amount, where both the request and response are encoded via JSON). The client test code uses the CXF WebClient API to sign the message payload by adding a specific provider. The message in turn is validated by a provider on the receiving side. You can run the test via the command line "mvn test -Dtest=JWSSignatureTest", and the message requests and responses will appear in the console.

      For example, the output of the "testSignatureCompact" request consists of the BASE-64 URL encoded JWS header (containing the signature algorithm, amongst other values), the BASE-64 URL encoded message payload, and the BASE-64 URL encoded signature value, concatenated and separated by '.' characters.


      2) Compact vs. JSON Serialization

      The JWS specification defines two different ways of serializing the signatures. Compact serialization is a URL-safe representation that is convenient to use when you only have a single signing entity. JSON serialization represents the JWS structures as JSON objects, and as a result allows multiple signatures to be included in the request. The JWSSignatureTest includes both examples ("testSignatureListProperties" and "testSignatureCompact"). It's very easy to experiment with both approaches, as you only have to use different providers on the client + receiving sides:
      • JSON Serialization: JwsJsonWriterInterceptor (out) + JwsJsonContainerRequestFilter (in)
      • Compact: JwsWriterInterceptor (out) + JwsContainerRequestFilter (in)
      3) Security Configuration

      As well as adding the desired providers, it is also necessary to specify the security configuration to set the appropriate algorithms, keys, etc. to use. The CXF wiki has an extensive list of all of the different security configuration properties. For signature, we need to first load the signing key (either a JKS keystore or else a JSON Web Key (JWK) is supported).

      As well as defining a signing key, we also need to configure the signing algorithm. The list of acceptable signing algorithms for JWS is defined by the JSON Web Algorithms (JWA) spec. For signature, this boils down to a public key signature scheme based on either RSA or EC-DSA, or a symmetric scheme using HMAC.

      For example, the tests use a configuration that loads a private key from a Java KeyStore, and uses the RSASSA-PKCS1-v1_5 signature scheme with SHA-256.
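      By way of illustration, a signing client might be configured roughly as follows (a hedged sketch: the property names are the rs.security.* ones documented on the CXF wiki, while the keystore details and alias are made up for illustration):

      import java.util.Collections;
      import java.util.HashMap;
      import java.util.Map;

      import org.apache.cxf.jaxrs.client.WebClient;
      import org.apache.cxf.rs.security.jose.jaxrs.JwsWriterInterceptor;

      // A hedged sketch of a JWS signing client. The keystore details and alias are
      // illustrative; the property names are the rs.security.* ones documented on the
      // CXF wiki, and "RS256" is the JWA identifier for RSASSA-PKCS1-v1_5 with SHA-256.
      public class JwsClientSketch {

          public static WebClient signingClient(String address) {
              WebClient client = WebClient.create(address,
                  Collections.singletonList(new JwsWriterInterceptor()));

              Map<String, Object> props = new HashMap<>();
              props.put("rs.security.keystore.type", "jks");
              props.put("rs.security.keystore.file", "clientstore.jks");  // illustrative
              props.put("rs.security.keystore.password", "cspass");       // illustrative
              props.put("rs.security.keystore.alias", "myclientkey");     // illustrative
              props.put("rs.security.key.password", "ckpass");            // private key password
              props.put("rs.security.signature.algorithm", "RS256");
              WebClient.getConfig(client).getRequestContext().putAll(props);
              return client;
          }
      }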
      The service side configuration is largely the same, apart from the fact that we don't need to specify the "rs.security.key.password" configuration tag, as we don't need to load the private key. The signature algorithm must be specified to impose a constraint on the acceptable signature algorithm.

      4) Signing XML payload

      The JWS payload (the content to be signed) can be any binary content and not just a JSON Object. This means that we can use JWS to sign an XML message. This is a really cool feature of the specification in my opinion, as it essentially removes the need to use XML Signature to sign XML payload. An example of this is included in the JWSSignatureTest. The configuration is exactly the same as for the JSON case.

      5) Including the signing key

      It is possible to include the signing certificate/key in the JWS header by setting one of the following properties:
      • rs.security.signature.include.public.key - Include the JWK public key for signature in the "jwk" header.
      • rs.security.signature.include.cert - Include the X.509 certificate for signature in the "x5c" header.
      • rs.security.signature.include.key.id - Include the JWK key id for signature in the "kid" header.
      • rs.security.signature.include.cert.sha1 - Include the X.509 certificate SHA-1 digest for signature in the "x5t" header.
      One advantage of including the entire certificate is that the service doesn't need to store the client certificate locally, but only the issuing certificate.
