Latest Activity

Integrating Apache Camel with Apache Syncope - part I

Colm O hEigeartaigh - Wed, 08/31/2016 - 12:29
Apache Syncope is an open-source Identity Management solution. A key feature of Apache Syncope is the ability to pull Users, Groups and Any Objects from multiple backend resources (such as LDAP, RDBMS, etc.) into Syncope's internal storage, where they can then be assigned roles, pushed to other backend resources, exposed via Syncope's REST API, etc.

However, what if you wanted to easily perform some custom task as part of the Identity Management process? Wouldn't it be cool to be able to plug in a powerful integration framework such as Apache Camel, so that you could exploit Camel's huge list of messaging components and routing + mediation rules? Well, with Syncope 2.0.0 you can do just that with the new Apache Camel provisioning manager. This is a unique and very powerful selling point of Apache Syncope in my opinion. In this article, we will introduce the new Camel provisioning manager, and show a simple example of how to use it.

1) The new Apache Camel provisioning manager

As stated above, a new provisioning manager is available in Apache Syncope 2.0.0 based on Apache Camel. A set of Camel routes are available by default which are invoked when the User, Groups and Any Objects in question are changed in some way. So for example, if a new User is created, then the corresponding Camel route is invoked at the same time. This allows the administrator to plug in custom logic on any of these state changes. The routes can be viewed and edited in the Admin Console by clicking on "Extensions" and then "Camel Routes".

Each of the Camel routes uses a new "propagate" Camel component available in Syncope 2.0.0. This component encapsulates some common logic involved in using the Syncope PropagationManager to create some tasks, and to execute them via the PropagationTaskExecutor. All of the routes invoke this propagate component via something like:
  • <to uri="propagate:<propagateType>?anyTypeKind=<anyTypeKind>&options"/>
Where propagateType is one of:
  • create
  • update
  • delete
  • provision
  • deprovision
  • status
  • suspend
  • confirmPasswordReset
and anyTypeKind is one of:
  • USER
  • ANY
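Putting the pieces together, the default route invoked when a User is created might be shaped roughly as follows. This is an illustrative sketch only: the route id, the "direct" endpoint name and the comment placement are assumptions, while the propagate URI follows the pattern shown above:

```xml
<route id="createUser">
  <from uri="direct:createUser"/>
  <!-- custom logic (logging, mediation, calls to other endpoints)
       can be inserted at this point -->
  <to uri="propagate:create?anyTypeKind=USER"/>
</route>
```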
2) The use-case

In this post, we will look at a simple use-case of sending an email to an administrator when a User is created, with some details about the created User in the email. Of course, this could be handled by a Notification Task, but we'll discuss some more advanced scenarios in future blog posts. Also note that a video available on the Tirasa blog details more or less the same use-case. For the purposes of the demo, we will set up a mailtrap account where we will receive the emails sent by Camel.

3) Configure Apache Syncope

Download and install Apache Syncope (I used the "standalone" download for the purposes of this demo). Before starting Apache Syncope, we need to copy a few jars that are required by Apache Camel to actually send emails. Copy the following jars to $SYNCOPE/webapps/syncope/WEB-INF/lib:
Now start Apache Syncope and log on to the admin console. Click on "Extensions" and then "Camel Routes". As we want to change the default route when users are created, click on the "edit" image for the "createUser" route. Add the following information just above the "bean method=" line:
  • <setHeader headerName="subject"><simple>New user ${body.username} created in realm ${body.realm}</simple></setHeader> 
  • <setBody><simple>User full name: ${body.plainAttrMap[fullname].values[0]}</simple></setBody>
  • <to uri="smtp://<username>@<smtp-host>:<port>?password=<password>&amp;contentType=text/html&amp;to=<recipient>"/>
Let's examine what each of these statements does. The first statement sets the Camel header "subject", which corresponds to the Subject of the email. It simply states that a new user with a given name was created in a given realm. The second statement sets the message Body, which is used as the content of the message by Camel. It just shows the User's full name, extracted from the "fullname" attribute, as an example of how to access User attributes in the route.

The third statement invokes the Camel smtp component. You'll need to substitute in the username and password you configured when setting up the mailtrap account. The recipient is configured via the "to" option of the URI. One more change is required to the existing route: as we have overridden the message Body in the second statement above, we need to change the ${body} in the create call to ${exchangeProperty.actual}, which is the saved Body. Click on "save" to save the modified route.
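For orientation, the modified createUser route is shaped roughly like this. This is a hedged sketch: the smtp URI placeholders (host, port, credentials, recipient) and the exact form of the final bean call are illustrative, not copied from Syncope:

```xml
<route id="createUser">
  <from uri="direct:createUser"/>
  <setHeader headerName="subject">
    <simple>New user ${body.username} created in realm ${body.realm}</simple>
  </setHeader>
  <setBody>
    <simple>User full name: ${body.plainAttrMap[fullname].values[0]}</simple>
  </setBody>
  <to uri="smtp://<username>@<smtp-host>:<port>?password=<password>&amp;contentType=text/html&amp;to=<recipient>"/>
  <!-- the existing "bean method=" call follows here, now operating on
       ${exchangeProperty.actual} rather than ${body} -->
</route>
```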

Before creating a User, we need to add a "fullname" User attribute as the route expects. Go to "Configuration" and "Types", and click on the "Schemas" tab. Click on the "+" button under "PLAIN" and add a new attribute called "fullname". Then click on "AnyTypeClasses", and add the "fullname" attribute to the BaseUser AnyTypeClass.

Finally, go to the "/" realm and create a new user, specifying a fullname attribute. A new email should then arrive in the mailtrap account.

Categories: Colm O hEigeartaigh

Custom JSSE Truststore to enable XKMS Certificate Validation

Jan Bernhardt - Mon, 08/29/2016 - 08:56
Recently I was involved in a project which uses a central XKMS server for certificate and trust management. This was all working fine within the Talend runtime with a custom WSS4J crypto provider. However, the need arose to perform client certificate validation (mutual SSL) with Apache Fediz running inside an Apache Tomcat server.

Usually I would use a JKS truststore in Tomcat to add trusted certificates (CAs). However, this was not possible for this project, because all certificates are managed inside an LDAP directory accessible via an XKMS service. Searching for a solution to extend Tomcat to support XKMS-based certificate validation, I came across the JSSE standard.

Reading through the documentation was not so straightforward, but searching the internet finally helped me to achieve my goal. In this blog post, I'll show you what I had to do to enable XKMS-based SSL certificate validation in Tomcat. To manage your SSL truststore settings you can use standard System or Tomcat properties:

System Property | Tomcat attribute | Description | truststoreFile | Location of the JKS truststore | truststorePass | Password for the JKS truststore | truststoreType | Type (factory) of your truststore. Default is "JKS"
n/a | trustManagerClassName | Custom trust manager class to use to validate client certificates
Settings are considered in the following order:
  1. Tomcat truststore properties
  2. System Properties
  3. Tomcat keystore properties
  4. Default Values
If a trustManagerClassName is set, this implementation will be used and all other truststore settings will be ignored. If a truststore provider is defined any Java standard provider will be ignored.

You can review this behavior in the Tomcat JSSESocketFactory init method.

The easiest way to achieve my goal was to implement my own XKMSTrustManager, implementing the X509TrustManager interface:
import java.net.MalformedURLException;
import java.net.URI;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

import javax.net.ssl.X509TrustManager;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// XKMSService and XKMSInvoker come from the CXF XKMS client modules
public class XKMSTrustManager implements X509TrustManager {

    private static final Logger LOG = LoggerFactory.getLogger(XKMSTrustManager.class);

    private XKMSInvoker xkms;

    public XKMSTrustManager() throws MalformedURLException {
        XKMSService xkmsService = new XKMSService(
            URI.create(System.getProperty("xkms.wsdl.location",
                "http://localhost:8040/services/XKMS/?wsdl")).toURL());
        xkms = new XKMSInvoker(xkmsService.getXKMSPort());
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        LOG.debug("Check client trust for: {}", chain);
        // delegate to the XKMS-based validation below
        validateTrust(chain);
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        LOG.debug("Check server trust for: {}", chain);
        validateTrust(chain);
    }

    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[] {};
    }

    protected void validateTrust(X509Certificate[] chain) throws CertificateException {
        if (chain == null) {
            throw new CertificateException("Certificate chain is null");
        }
        if (!xkms.validateCertificate(chain)) {
            LOG.error("Certificate chain is not trusted: {}", chain);
            throw new CertificateException("Certificate chain is not trusted");
        }
    }
}
Next, reference the custom trust manager from the SSL connector in Tomcat's server.xml. The keystore settings and the fully qualified trust manager class name below are placeholders for your own values:

<Server port="9005" shutdown="SHUTDOWN">

  <Service name="Catalina">

    <Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="true" sslProtocol="TLS"
               trustManagerClassName="org.example.XKMSTrustManager" />

  </Service>
</Server>

However, setting a trust manager is only possible if this option is provided by your application or if you have access to the source code of the SSL socket factory. In all other cases you will have to implement your own security provider, providing your own truststore factory. This task is much more challenging. During my internet research on this topic I found several pages which should be good references for you if you have to go down this path:

  • JCA Reference Guide - Crypto Provider
  • Howto Implement a JCA Provider
  • JSSE Reference Guide - Customized Certificate Storage
  • Custom CA Truststore in addition to System CA Truststore
  • HowTo Register global security provider
  • Using a Custom Certificate Trust Store
  • Sun JSSE Provider Implementation
There are several ways to register your own security provider:
  • Globally in <java-home>/lib/security/ Advantage: multiple providers can coexist, adding just the "missing piece". Disadvantage: system-wide configuration.
  • Override security provider settings with system properties.
  • In code: Security.insertProviderAt(new FooBarProvider(), 1);
A provider registers its trust manager factories with entries such as put("TrustManagerFactory.SunX509", "$SimpleFactory"); and put("TrustManagerFactory.PKIX", "$PKIXFactory");
Categories: Jan Bernhardt

Pulling users and groups from LDAP into Apache Syncope 2.0.0

Colm O hEigeartaigh - Fri, 08/26/2016 - 17:54
A previous tutorial showed how to synchronize (pull) users and roles into Apache Syncope 1.2.x from an LDAP backend (Apache Directory). Interacting with an LDAP backend appears to be a common use-case for Apache Syncope users. For this reason, in this tutorial we will cover how to pull users and groups (previously roles) into Apache Syncope 2.0.0 from an LDAP backend via the Admin Console, as it is a little different from the previous 1.2.x releases.

1) Apache DS

The basic scenario is that we have a directory that stores user and group information that we would like to import into Apache Syncope 2.0.0. For the purposes of this tutorial, we will work with Apache DS. The first step is to download and launch Apache DS. I recommend installing Apache Directory Studio for an easy way to create and view the data stored in your directory.

Create two new groups (groupOfNames) in the default domain ("dc=example,dc=com") called "cn=employee,ou=groups,ou=system" and "cn=boss,ou=groups,ou=system". Create two new users (inetOrgPerson) "cn=alice,ou=users,ou=system" and "cn=bob,ou=users,ou=system". Now edit the groups you created such that both alice and bob are employees, but only alice is a boss. Specify "sn" (surname) and "userPassword" attributes for both users.

2) Pull data into Apache Syncope

The next task is to import (pull) the user data from Apache DS into Apache Syncope. Download and launch an Apache Syncope 2.0.x instance. Make sure that an LDAP Connector bundle is available (see here).

a) Define a 'surname' User attribute

The inetOrgPerson instances we created in Apache DS have a "sn" (surname) attribute. We will map this into an internal User attribute in Apache Syncope. The Schema configuration is quite different in the Admin Console compared to Syncope 1.2.x. Select "Configuration" and then "Types" in the left hand menu. Click on the "Schemas" tab and then the "+" button associated with "PLAIN". Add "surname" for the Key and click "save". Now go into the "AnyTypeClasses" tab and edit the "BaseUser" item. Select "surname" from the list of available plain Schema attributes. Now the users we create in Syncope can have a "surname" attribute.

b) Define a Connector

The next thing to do is to define a Connector to enable Syncope to talk to the Apache DS backend. Click on "Topology" in the left-hand menu, and on the ConnId instance on the map. Click "Add new connector" and create a new Connector of type "net.tirasa.connid.bundles.ldap". On the next tab select:
  • Host: localhost
  • TCP Port: 10389
  • Principal: uid=admin,ou=system
  • Password: <password>
  • Base Contexts: ou=users,ou=system and ou=groups,ou=system
  • LDAP Filter for retrieving accounts: cn=*
  • Group Object Classes: groupOfNames
  • Group member attribute: member
  • Click on "Maintain LDAP Group Membership".
  • Uid attribute: cn
  • Base Context to Synchronize: ou=users,ou=system and ou=groups,ou=system
  • Object Classes to Synchronize: inetOrgPerson and groupOfNames
  • Status Management Class: net.tirasa.connid.bundles.ldap.commons.AttributeStatusManagement
  • Tick "Retrieve passwords with search".
Click on the "heart" icon at the top of the tab to check whether Syncope is able to connect to the backend resource. If you don't see a green "Successful Connection" message, consult the logs. On the next tab select all of the available capabilities and click on "Finish".

c) Define a Resource

Next we need to define a Resource that uses the LDAP Connector.  The Resource essentially defines how we use the Connector to map information from the backend into Syncope Users and Groups. Click on the Connector that was created in the Topology map and select "Add new resource". Just select the defaults and finish creating the new resource. When the new resource is created, click on it and add some provisioning rules via "Edit provision rules".

Click the "+" button and select the "USER" type to create the mapping rules for users. Click "next" until you come to the mapping tab and create the user mappings; at a minimum, map the external "sn" attribute to the internal "surname" attribute defined earlier, and mark an attribute such as "cn" as the remote key.

Click "next", enable "Use Object Link" and enter "'cn=' + username + ',ou=users,ou=system'". Click "Finish" and "save". Repeat the process above for the "GROUP" type to create the mapping rules for groups.
Similar to creating the user mappings, we also need to enable "Use Object Link" and enter "'cn=' + name + ',ou=groups,ou=system'". Click "Finish" and "save".

d) Create a pull task

Having defined a Connector and a Resource to use that Connector, with mappings to map User/Group information to and from the backend, it's time to import the backend information into Syncope.  Click on the resource and select "Pull Tasks". Create a new Pull Task via the "+" button. Select "/" as the destination realm to create the users and groups in. Choose "FULL_RECONCILIATION" as the pull mode. Select "LDAPMembershipPullActions"  (this will preserve the fact that users are members of a group in Syncope) and "LDAPPasswordPullActions" from the list of available actions. Select "Allow create/update/delete". When the task is created,  click on the "execute" button (it looks like a cogged wheel). Now switch to the "Realms" tab in the left-hand menu and look at the users and groups that have been imported in the "/" realm from Apache DS.

Categories: Colm O hEigeartaigh

SwaggerUI in CXF or what Child's Play really means

Sergey Beryozkin - Tue, 08/23/2016 - 14:03
We've had an extensive demonstration of how to enable Swagger UI for CXF endpoints returning Swagger documents for a while, but the only 'problem' was that our demos only showed how to unpack a SwaggerUI module into a local folder with the help of a Maven plugin and make these unpacked resources available to browsers.
It was not immediately obvious to users how to activate SwaggerUI, and with the news coming from SpringBoot land that it is apparently really easy over there, it was time to look at making it easier for CXF users.
So Aki, Andriy and myself talked, and this is what CXF 3.1.7 users have to do:

1. Have Swagger2Feature activated to get Swagger JSON returned
2. Add a swagger-ui dependency  to the runtime classpath.
3. Access Swagger UI
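For step 2, the swagger-ui dependency is typically pulled in as a webjar. A sketch of the Maven dependency follows; the version shown is illustrative, so pick one compatible with your CXF release:

```xml
<!-- Swagger UI packaged as a webjar; the version is an example only -->
<dependency>
  <groupId>org.webjars</groupId>
  <artifactId>swagger-ui</artifactId>
  <version>2.1.4</version>
</dependency>
```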

For example, run a description_swagger2 demo. After starting a server go to the CXF Services page and you will see:

Click on the link and see a familiar Swagger UI page showing your endpoint's API.

Have you ever wondered what some developers mean when they say it is child's play to try whatever they have done? You'll find it hard to find a better example of it after trying Swagger UI with CXF 3.1.7 :-)

Note that in CXF 3.1.8-SNAPSHOT we have already fixed it to work for Blueprint endpoints in OSGi (with help from Łukasz Dywicki). The SwaggerUI auto-linking code has also been improved to support some older browsers better.

Besides, CXF 3.1.8 will also offer proper support for Swagger correctly representing multiple JAX-RS endpoints, based on the fix contributed by Andriy and available in Swagger 1.5.10, or when the API interface and implementations are available in separate (OSGi) bundles (Łukasz figured out how to make it work).

Before I finish let me return to the description_swagger2 demo. Add a cxf-rt-rs-service-description dependency to pom.xml. Start the server and check the services page:
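The dependency addition mentioned above can be sketched as follows; the version is chosen to match the CXF 3.1.7 release discussed here:

```xml
<dependency>
  <groupId>org.apache.cxf</groupId>
  <artifactId>cxf-rt-rs-service-description</artifactId>
  <version>3.1.7</version>
</dependency>
```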

Of course some users do and will continue working with XML-based services and WADL is the best language available around to describe such services. If you click on a WADL link you will see an XML document returned. WADLGenerator can be configured with an XSLT template reference and if you have a good template you can get UI as good as this Apache Syncope document.
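As a rough illustration, the XSLT reference can be wired onto the generator in the endpoint configuration. The property name and stylesheet path below are assumptions, so check the WADLGenerator javadocs for your CXF version:

```xml
<bean id="wadlGenerator" class="org.apache.cxf.jaxrs.model.wadl.WadlGenerator">
  <!-- assumed property name; points at your own XSLT template -->
  <property name="stylesheetReference" value="/wadl-to-html.xsl"/>
</bean>
```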

Whatever your data representation preferences are, CXF will get you supported.


Categories: Sergey Beryozkin

OpenId Connect in Apache CXF Fediz 1.3.0

Colm O hEigeartaigh - Fri, 08/12/2016 - 18:02
Previous blog posts have described support for OpenId Connect protocol bridging in the Apache CXF Fediz IdP. What this means is that the Apache CXF Fediz IdP can bridge between the WS-Federation protocol and OpenId Connect third party IdPs, when the user must be authenticated in a different security domain. However, the 1.3.0 release of Apache CXF Fediz also sees the introduction of a new OpenId Connect Idp which is independent of the existing (WS-Federation and SAML-SSO based) IdP, and based on Apache CXF. This post will introduce the new IdP via an example.

The example code is available on github:
  • cxf-fediz-oidc: This project shows how to use interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service using OpenId Connect.
1) The secured service

The first module available in the example contains a trivial JAX-RS Service based on Apache CXF which "doubles" a number that is passed as a path parameter via HTTP GET. The service defines via a @RolesAllowed annotation that only users allowed in roles "User", "Admin" or "Manager" can access the service.

The service is configured via spring. The endpoint configuration references the service bean above, as well as the CXF SecureAnnotationsInterceptor which enforces the @RolesAllowed annotation on the service bean. In addition, the service is configured with the CXF OidcRpAuthenticationFilter, which ensures that only users authenticated via OpenId Connect can access the service. The filter is configured with a URL to redirect the user to. It also explicitly requires a role claim to enforce authorization.
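A rough sketch of that wiring in Spring follows. The bean ids, property names, redirect URL and service bean are illustrative assumptions, not copied from the example project; only the filter class name comes from the post:

```xml
<!-- JAX-RS endpoint secured by the OIDC RP filter; property names are assumptions -->
<bean id="oidcRpFilter"
      class="org.apache.cxf.rs.security.oidc.rp.OidcRpAuthenticationFilter">
  <property name="redirectUri" value="/services/authenticate"/>
  <property name="roleClaim" value="roles"/>
</bean>

<jaxrs:server address="/doubleit">
  <jaxrs:serviceBeans>
    <ref bean="doubleItService"/>
  </jaxrs:serviceBeans>
  <jaxrs:providers>
    <ref bean="oidcRpFilter"/>
  </jaxrs:providers>
</jaxrs:server>
```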

The OidcRpAuthenticationFilter redirects the browser to a separate authentication endpoint, defined in the same spring file for convenience. This endpoint has a filter called OidcClientCodeRequestFilter, which initiates the OpenId Connect authorization code flow to a remote OpenId Connect IdP (in this case, the new Fediz IdP). It is also responsible for getting an IdToken after successfully getting an authorization code from the IdP.

2) The Fediz OpenId Connect IdP

The second module contains an integration test which deploys a number of wars into an Apache Tomcat container:
  • The "double-it" service as described above
  • The Apache CXF Fediz IdP which authenticates users via WS-Federation
  • The Apache CXF Fediz STS which performs the underlying authentication of users
  • The Apache CXF Fediz OpenId Connect IdP
The way the Apache CXF Fediz OpenId Connect IdP works (at least for 1.3.x) is that user authentication is actually delegated to the WS-Federation based IdP via a Fediz plugin. So when the user is redirected to the Fediz IdP, (s)he gets redirected to the WS-Federation based IdP for authentication, and then gets redirected back to the OpenId Connect IdP with a WS-Federation Response. The OpenId Connect IdP parses this (SAML) Response and converts it into a JWT IdToken. Future releases will enable authentication directly at the OpenId Connect service.

After deploying all of the services, the test code makes a series of REST calls to create a client in the OpenId Connect IdP so that we can run the test without having to manually enter information in the client UI of the Fediz IdP. To run the test, simply remove the @org.junit.Ignore annotation on the "testInBrowser" method. The test code will create the clients in Fediz and then print out a URL in the console before sleeping. Copy the URL and paste it into a browser. Authenticate using the credentials "alice/ecila".
Categories: Colm O hEigeartaigh

Introducing Apache Syncope 2.0.0

Colm O hEigeartaigh - Thu, 08/11/2016 - 17:17
Apache Syncope is a powerful and flexible open-source Identity Management system that has been developed at the Apache Software Foundation for several years now. The Apache Syncope team has been busy developing a ton of new features for the forthcoming new major release (2.0.0), which will really help to cement Apache Syncope's position as a first class Identity Management solution. If you wish to experiment with these new features, a 2.0.0-M4 release is available. In this post we will briefly cover some of the new features and changes. For a more comprehensive overview please refer to the reference guide.

1) Domains

Perhaps the first new concept you will be introduced to in Syncope 2.0.0 after starting the (Admin) console is that of a domain. When logging in, as well as specifying a username, password, and language, you can also specify a configured domain. Domains are a new concept in Syncope 2.0.0 that facilitate multi-tenancy. Domains allow the physical separation of all data stored in Syncope (by storing the data in different database instances). Therefore, Syncope can facilitate users, groups etc. that are in different domains in a single Syncope instance.

2) New Console layout

After logging in, it becomes quickly apparent that the Syncope Console is quite different compared to the 1.2.x console. It has been completely rewritten and looks great. Connectors and Resources are now managed under "Topology" in the menu on the left-hand side. Users and Groups (formerly Roles) are managed under "Realms" in the menu. The Schema types are configured under "Configuration". A video overview of the new Console can be seen here.

3) AnyType Objects

With Syncope 1.2.x, it was possible to define plain/derived/virtual Schema Types for users, roles and memberships, but no other entities. In Syncope 2.0.0, the Schema Types are decoupled from the entity that uses them. Instead, a new concept called an AnyType class is available which is a collection of schema types. In turn, an AnyType object can be created which consists of any number of AnyType classes. AnyType objects represent the type of things that Apache Syncope can model. Besides the predefined Users and Groups, it can also represent physical things such as printers, workstations, etc. With this new concept, Apache Syncope 2.0.0 can model many different types of identities.

4) Realms

Another new concept in Apache Syncope 2.0.0 is that of a realm. A realm encapsulates a number of Users, Groups and Any Objects. It is possible to specify account and password policies per-realm (see here for a blog entry on custom policies in Syncope 2.0.0). Each realm has a parent realm (apart from the pre-defined root realm identified as "/"). The realm tree is hierarchical, meaning that Users, Groups etc. defined in a sub-realm, are also defined on a parent realm. Combined with Roles (see below), realms facilitate some powerful access management scenarios.

5) Groups/Roles

In Syncope 2.0.0, what were referred to as "roles" in Syncope 1.2.x are now called "groups". In addition, "roles" in Syncope 2.0.0 are a new concept which associates a number of entitlements with a number of realms. Users assigned to a role can exercise the defined entitlements on any of the objects in the given realms (and any sub-realms).

Syncope 2.0.0 also has the powerful concept of dynamic membership, which means that users can be assigned to groups or roles via a conditional expression (e.g. if an attribute matches a given value).

6) Apache Camel Provisioning

An exciting new feature of Apache Syncope 2.0.0 is the new Apache Camel provisioning engine, which is available under "Extensions/Camel Routes" in the Console. Apache Syncope comes pre-loaded with some Camel routes that are executed as part of the provisioning implementation for Users, Groups and Any Objects. The real power of this new engine lies in the ability to modify the routes to perform some custom provisioning rules. For example, on creating a new user, you may wish to send an email to an administrator. Or if a user is reactivated, you may wish to reactivate the user's home page on a web server. All these things and more are possible using the myriad of components that are available to be used in Apache Camel routes. I'll explore this feature some more in future blog posts.

7) End-User UI

As well as the Admin console (available via /syncope-console), Apache Syncope 2.0.0 also ships with an Enduser console (available via /syncope-enduser). This allows users to edit only details pertaining to themselves, such as editing their attributes, changing their password, etc. See the following blog entry for more information on the new End-User UI.

8) Command Line Interface (CLI) client

Another new feature of Apache Syncope 2.0.0 is that of the CLI client. It is available as a separate download. Once downloaded, extract it and run (on linux): ./ install --setup. Answer the questions about where Syncope is deployed and the credentials required to access it. After installation, you can run queries such as: ./ user --list.

9) Apache CXF-based testcases

I updated the testcases that I wrote previously to use Apache Syncope 2.0.0 to authenticate and authorize web service calls using Apache CXF. The new test cases are available here.
Categories: Colm O hEigeartaigh

CXF Spring Boot Starters Unveiled

Sergey Beryozkin - Mon, 08/08/2016 - 23:51
The very first check some new users may do these days, while evaluating your JAX-RS implementation, can be: how well is it integrated into SpringBoot?

And the good news is that Apache CXF 3.1.7 users can start working with SpringBoot real fast.
We left it somewhat late; it is sometimes hard to prioritize between various new requirements, and we saw some users moving away. In such cases the community support is paramount. And the Power of Open Source Collaboration came to the rescue once again when it was really needed.

I'd like to start with thanking James for providing an initial set of links to various SpringBoot documentation pages and reacting positively to the initial code we had. But you know yourself - sometimes we all value some little 'starters' - the initial code contributions :-)

And then we had a Spring Boot expert coming in and getting the process moving. Vedran Pavic helped me to create the auto-configuration and starter modules for JAX-RS and JAX-WS, patiently explained how his initial contribution works, how these modules have to be designed, and helped with the advice throughout the process. I felt like I passed some SpringBoot qualification exam once we were finished which let me continue enhancing the JAX-RS starter independently before CXF 3.1.7 was released.

CXF Spring Boot starters are now documented at this page which is also linked to from a Spring Boot README listing the community contributions.

If you are working with CXF JAX-RS then do check this section. See the demos and get excited about the ease with which you can enable JAX-RS endpoints, their Swagger API docs (and auto-link Swagger UI - the topic of the next post).

See how you can run your CXF WebClient or proxy clients in Spring Boot, initialized if needed from the metadata found in a Netflix Eureka registry. The demo code on the master branch uses a CXF CircuitBreakerFailoverFeature written by a legendary DevMind, a sound, simple and lightweight Apache Zest based implementation.
Not all users may realize how flexible the CXF Failover Feature is.

While the most effort went into a JAX-RS starter I'm sure we will add more support for JAX-WS users too.

We'll need to do a bit more work: link CXF statistics to the actuator endpoints, support scanning JAX-RS Applications, and a few other things.

If you prefer working with Spring Boot, be certain that second-to-none support for running CXF services in Spring Boot will be there. Enjoy!

Categories: Sergey Beryozkin

Installing the Apache Ranger Key Management Server (KMS)

Colm O hEigeartaigh - Mon, 08/08/2016 - 13:40
The previous couple of blog entries have looked at how to install the Apache Ranger Admin Service as well as the Usersync Service. In this post we will look at how to install the Apache Ranger Key Management Server (KMS). KMS is a component of Apache Hadoop to manage cryptographic keys. Apache Ranger ships with its own KMS implementation, which allows you to store the (encrypted) keys in a database. The Apache Ranger KMS is also secured via policies defined in the Apache Ranger Admin Service.

1) Build the source code

The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting KMS archive to a location where you wish to install it:
  • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
  • cd apache-ranger-incubating-0.6.0
  • mvn clean package assembly:assembly 
  • tar zxvf target/ranger-0.6.0-kms.tar.gz
  • mv ranger-0.6.0-kms ${rangerkms.home}
2) Install the Apache Ranger KMS Service

As the Apache Ranger KMS Service stores the cryptographic keys in a database, we will need to setup and configure a database. We will also configure the KMS Service to store audit logs in the database. Follow the steps given in section 2 of the tutorial on the Apache Ranger Admin Service to set up MySQL. We will also need to create a new user 'rangerkms':
  • CREATE USER 'rangerkms'@'localhost' IDENTIFIED BY 'password';
You will need to install the Apache Ranger KMS Service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerkms.home}/ and add it in, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_91
Next edit ${rangerkms.home}/ and make the following changes:
  • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar (see previous tutorial).
  • Set (db_root_user/db_root_password) to (admin/password)
  • Set (db_user/db_password) to (rangerkms/password)
  • Change KMS_MASTER_KEY_PASSWD to a secure password value.
  • Set POLICY_MGR_URL=http://localhost:6080
  • Set XAAUDIT.DB.HOSTNAME=localhost 
  • Set XAAUDIT.DB.DATABASE_NAME=ranger_audit 
  • Set XAAUDIT.DB.USER_NAME=rangerlogger
  • Set XAAUDIT.DB.PASSWORD=password
Now you can run the setup script via "sudo ./".

3) Starting the Apache Ranger KMS service

After a successful installation, first start the Apache Ranger admin service with "sudo ranger-admin start". Then start the Apache Ranger KMS Service via "sudo ranger-kms start". Now open a browser and go to "http://localhost:6080/". Log on with "keyadmin/keyadmin". Note that these are different credentials to those used to log onto the Apache Ranger Admin UI in the previous tutorial. Click on the "+" button on the "KMS" tab to create a new KMS Service. Specify the following values:
  • Service Name: kmsdev
  • KMS URL: kms://http@localhost:9292/kms
  • Username: keyadmin
  • Password: keyadmin
Click on "Test Connection" to make sure that the KMS Service is up and running. If it shows a connection failure, log out and log back in to the Admin UI using the credentials "admin/admin". Go to the "Audit" section and click on "Plugins". You should see a success message indicating that the KMS plugin can successfully download policies from the Admin Service:

After logging back in to the UI as "keyadmin" you can start to create keys. Click on the "Encryption/Key Manager" tab. Select the "kmsdev" service in the dropdown list and click on "Add New Key". You can create, delete and rollover keys in the UI:

Categories: Colm O hEigeartaigh

Apache Fediz with Client Certificate Authentication (X.509)

Jan Bernhardt - Thu, 08/04/2016 - 12:25
In this blog post I will explain how to generate your own SSL key-pair to perform certificate based authentication for SSO purposes with Apache Fediz IDP.
Client Key Authentication

Generate Key-Pair

I like to use the keystore-explorer under Windows, because it makes certificate management very easy. You don't have to look up console commands; instead you get nice wizards to get it all done. If you are running Linux, I can recommend this page to you, because it contains the most common Java keytool commands you will need.

After starting keystore-explorer, create a new keystore (PKCS #12). Next click "Generate Key Pair". RSA with 2,048 bit should be fine. Now you should enter your name, and after that click on extensions to define an "Extended Key Usage" of "TLS Web Client Authentication":

Make sure that this extension flag is really set for your key-pair. I first tried without this extension, and I could not get any of my browsers to even show me a certificate selection popup when authenticating against the IDP.
Since you will have to import your personal certificate into the IDP truststore later on, I would recommend exporting your public certificate at this step:

Import Key-Pair to your Browser

Once your key generation was successful, you need to add this key-pair to your browser:

In Chrome you need to open your settings -> extended settings -> HTTPS/SSL -> Manage Certificates -> Import, select your p12 certificate, and make sure that all extensions from the certificate are included:

Chrome and IE use the same certificate store, so there is no need to do this twice if you have already done it for one of the two.

For Firefox you need to go to Options -> Advanced -> Certificates -> View Certificates -> Your Certificates -> Import

I had to restart my machine before my browsers would show me the option to select my certificates for client authentication. Some articles on the internet also recommend adding the IDP URL to your list of trusted sites in Internet Explorer.

Setup Fediz IDP

You can find full IDP / Web-App setup instructions in one of my previous articles. In this article I will only highlight the steps that are related to SSL client authentication.

Add SSL support to your tomcat conf/server.xml
<Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
sslProtocol="TLS" />
If you want all clients to authenticate with a client SSL certificate against your IDP, you must set the clientAuth attribute to "true" instead of "want". However, if you want to support multiple authentication styles, even without a client certificate, you should set clientAuth to "want".
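As a sketch, the connector above extended with optional client-certificate authentication could look like the following; the keystore and truststore file names and passwords are placeholders for your own values, not taken from the original setup:

```xml
<Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="want" sslProtocol="TLS"
           keystoreFile="idp-ssl-key.jks" keystorePass="changeit"
           truststoreFile="idp-ssl-trust.jks" truststorePass="changeit"/>
```

With clientAuth="want" the server requests a client certificate but still allows connections without one, which keeps other authentication styles working.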

Open your idp-ssl-trust.jks with your keystore-explorer to import your personal certificate from your desktop (see previous export step above).
Validate Setup

Open your browser to the Fediz Hello World page: https://localhost:9443/fediz-idp/. Your browser should show you a selection popup for your client certificate:

If you imported this certificate correctly to your tomcat IDP truststore you should now see a "Hello World!" welcome page from Fediz.

Please also take a look at Colm's blog about this topic.
Categories: Jan Bernhardt

[OT] Reuse Or Reimplement ?

Sergey Beryozkin - Wed, 08/03/2016 - 18:06
I said in one of my earlier posts that I'd share some thoughts I've had over the years on re-using vs re-implementing while working on various CXF projects. Some of it may be a bit CXF specific, but most of it might be of interest to non-CXF developers too.

When the time comes to implement a new feature the immediate decision that needs to be taken is how you do it. In general it is always a good idea to re-use a quality 3rd party library that can help in realizing the new feature fast.

Consider a task of providing a UI interface to have Swagger JSON documents nicely presented. One can invest the time and write UI scripts and pages. Or one can download a well-known Swagger UI module.

Another example: one needs a collection sort algorithm implementation that performs faster than the Java Collections code. One can invest the time and write a new library, or look around and try an Apache or Google library.

In both cases re-using the existing solution will most likely be better and help deliver the higher-level, complete offering faster.

Things may get more complicated when one works on a project in a competitive space. For example, at some point there were 6 active JAX-RS Java implementation projects, with other non JAX-RS implementations such as the one offered by Spring adding up to the total number.

When you work on a project like that, a number of important decisions need to be made: how complete would you like your project to be? Is supporting HTTP verbs and reading and writing the data all that is needed? What sort of security support around the service would you like to provide? What other extensions should your project have? How would you like your project to be recognized: as a library, or as something bigger that offers all sorts of relevant support for people writing HTTP services?

The higher the 'ambitions' of such a project, the more likely 're-implementing' becomes a viable option, nearly a necessity in some cases. In fact, re-implementing goes on all the time in such projects.

I've been involved in a fair number of re-implementation projects.

To start with, we began implementing JAX-RS at a time when Jersey was already riding high. Why? To keep Apache CXF open to users with different preferences on how to do HTTP services. It was hard at times, but it was really never simply because we wanted to prove we could do it.

The latest 're-implementation' was JOSE. Why? I won't deny I was keen to work more closely with the low-level security code but, overall, I wanted the CXF security story to be more complete. Implementing it, versus re-using the quality libraries I listed on the wiki, let us tune and re-work the implementation to be better integrated with the JAX-RS and core security support so many times that this would have been highly unlikely to happen if I were working with a 3rd party library.

I do not think re-implementing in an open way is unhealthy. For example, it has been acknowledged that having many JAX-RS implementations around helps make JAX-RS more popular. Re-implementing may offer more options to users.

On the other hand, reimplementing can prove a complete waste of time. Here are some basic 'guidelines' if you decide to try to re-implement in open source:
- think not twice but many times before you try it
- if you feel the urge then do it, get the experience, make the mistakes; next time you will make the best choice
- never expect that once you re-implement something everyone will stop using whatever they use and switch to what you have written - a lot of clever developers are working full time
- if you'd like others to use your project then you absolutely must love working with the users; don't even start if you think that it will be up to Customer Support
- you need to have the support of your colleagues
- expect that the only 'remuneration' you will get is the non-stop work to keep the project constantly evolving

Yes, very often re-using may be the very best thing :-)

Enjoy, Happy Re-Using, Happy Re-Implementing :-)


Categories: Sergey Beryozkin

Syncing users and groups from LDAP into Apache Ranger

Colm O hEigeartaigh - Fri, 07/22/2016 - 16:37
The previous post covered how to install the Apache Ranger Admin service. The Apache Ranger Admin UI supports creating authorization policies for various Big Data components, by giving users and/or groups permissions on resources. This means that we need to import users/groups into the Apache Ranger Admin service from some backend service in order to create meaningful authorization policies. Apache Ranger supports syncing users into the Admin service from both Unix and LDAP. In this post, we'll look at syncing users and groups from an OpenDS LDAP backend.

1) The OpenDS backend

For the purposes of this tutorial, we will use OpenDS as the LDAP server. It contains a domain called "dc=example,dc=com", and 5 users (alice/bob/dave/oscar/victor) and 2 groups (employee/manager). Victor, Oscar and Bob are employees, Alice and Dave are managers. Here is a screenshot using Apache Directory Studio:
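As a purely illustrative sketch (the object classes and attribute values here are my assumptions, not taken from the actual OpenDS data), the entries could look something like this in LDIF, using the ou=users and ou=groups subtrees that the usersync configuration below searches:

```
dn: uid=alice,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
uid: alice
cn: alice
sn: alice

dn: cn=manager,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: manager
member: uid=alice,ou=users,dc=example,dc=com
member: uid=dave,ou=users,dc=example,dc=com
```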

2) Build the Apache Ranger usersync module

Follow the steps in the previous tutorial to build Apache Ranger and to setup and start the Apache Ranger Admin service. Once this is done, go back to the Apache Ranger distribution that you have built and copy the usersync module:
  • tar zxvf target/ranger-0.6.0-usersync.tar.gz
  • mv ranger-0.6.0-usersync ${usersync.home}
3) Configure and build the Apache Ranger usersync service 

You will need to install the Apache Ranger Usersync service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${usersync.home}/ and add, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_91
Next edit ${usersync.home}/ and make the following changes:
  • POLICY_MGR_URL = http://localhost:6080
  • SYNC_SOURCE = ldap
  • SYNC_INTERVAL = 1 (just for testing purposes....)
  • SYNC_LDAP_URL = ldap://localhost:2389
  • SYNC_LDAP_BIND_DN = cn=Directory Manager,dc=example,dc=com
  • SYNC_LDAP_SEARCH_BASE = dc=example,dc=com
  • SYNC_LDAP_USER_SEARCH_BASE = ou=users,dc=example,dc=com
  • SYNC_GROUP_SEARCH_BASE=ou=groups,dc=example,dc=com
Now you can run the setup script via "sudo ./". 

4) Start the Usersync service

The Apache Ranger Usersync service can be started via "sudo ./ start". After 1 minute (see SYNC_INTERVAL above), it should successfully copy the users/groups from the OpenDS backend into the Apache Ranger Admin. Open a browser and go to "http://localhost:6080", and click on "Settings" and then "Users/Groups". You should see the users and groups synced successfully from OpenDS.

Categories: Colm O hEigeartaigh

Karaf JDBC JAAS Module

Jan Bernhardt - Wed, 07/20/2016 - 17:58
Karaf relies on JAAS for user authentication. JAAS makes it possible to plug in multiple modules for this purpose. By default Karaf will use the karaf realm with a JAAS module getting its user and role information from a property file under runtime/etc/.

In this blog post I will show you how to use the Karaf JAAS console commands and how to setup a JDBC module to authenticate against a database.

All code was tested on Karaf version 4.0.3.

JDBC Setup

Register Datasource

At first you need to install the Karaf JDBC feature:
karaf@trun()> feature:install jdbc
karaf@trun()> feature:install pax-jdbc-derby
Next you can create a new Datasource:
karaf@root()> jdbc:ds-create -dn derby -url "jdbc:derby:users;create=true" -u db_admin users
With the -dn derby option you define a datasource of type derby. Alternatively, you could also use generic, oracle, mysql, postgres, h2 or hsql as your datasource type. Please make sure to also install the matching pax-jdbc feature for your datasource type.
The -u db_admin option defines the datasource username. Finally, "users" is the datasource name.
Add sample data

jdbc:execute users CREATE TABLE users ( username VARCHAR(255) PRIMARY KEY NOT NULL, password VARCHAR(255) NOT NULL );
jdbc:execute users CREATE TABLE roles ( username VARCHAR(255) NOT NULL, role VARCHAR(255) NOT NULL, PRIMARY KEY (username,role) );
jdbc:execute users INSERT INTO users values('alice','e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4');
jdbc:execute users INSERT INTO roles values('alice','manager');

Validate your input:
karaf@trun()> jdbc:query users SELECT * FROM roles
manager | alice

JAAS Console Commands

Karaf provides some nice console commands to manage your JAAS realms.

List JAAS realms with assigned modules

karaf@trun()> jaas:realm-list
Index | Realm Name | Login Module Class Name
1     | karaf      |
2     | karaf      | org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule
3     | karaf      | org.apache.karaf.jaas.modules.audit.FileAuditLoginModule
4     | karaf      | org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule

List users and assigned roles

karaf@trun()> jaas:realm-manage --realm karaf

karaf@trun()> jaas:user-list
User Name | Group      | Role
tadmin    | admingroup | admin
tadmin    | admingroup | manager
tadmin    | admingroup | viewer
tadmin    | admingroup | systembundles
tadmin    |            | sl_admin
tesb      | admingroup | admin
tesb      | admingroup | manager
tesb      | admingroup | viewer
tesb      | admingroup | systembundles
tesb      |            | sl_maintain
karaf     | admingroup | admin
karaf     | admingroup | manager
karaf     | admingroup | viewer
karaf     | admingroup | systembundles
karaf@trun()> jaas:cancel

Adding a user

karaf@trun()> jaas:realm-manage --realm karaf
karaf@trun()> jaas:user-add alice secret
karaf@trun()> jaas:update
If you execute "List users" again you will see alice added to the realm. You will also find alice added to the file.
Install JDBC JAAS Module

Register Module

Create a file db_jaas.xml within the deploy folder of your Karaf installation:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<blueprint xmlns="">

    <!-- Allow usage of System properties, especially the karaf.base property -->
    <ext:property-placeholder placeholder-prefix="$[" placeholder-suffix="]"/>

    <!-- AdminConfig property place holder for the org.apache.karaf.jaas -->
    <cm:property-placeholder persistent-id="org.apache.karaf.jaas.db" update-strategy="reload">
        <cm:default-properties>
            <cm:property name="" value="basic"/>
            <cm:property name="encryption.enabled" value="true"/>
            <!--cm:property name="encryption.prefix" value="{CRYPT}"/>
            <cm:property name="encryption.suffix" value="{CRYPT}"/-->
            <cm:property name="encryption.algorithm" value="SHA1"/>
            <cm:property name="encryption.encoding" value="hexadecimal"/>
            <cm:property name="detailed.login.exception" value="true"/>
            <cm:property name="audit.file.enabled" value="true"/>
            <cm:property name="audit.file.file" value="$[]/security/audit.log"/>
            <cm:property name="audit.eventadmin.enabled" value="true"/>
            <cm:property name="audit.eventadmin.topic" value="org/apache/karaf/login"/>
        </cm:default-properties>
    </cm:property-placeholder>

    <jaas:config name="karaf" rank="10">
        <jaas:module className="org.apache.karaf.jaas.modules.jdbc.JDBCLoginModule" flags="required">
            datasource = osgi:javax.sql.DataSource/(
            insert.user = INSERT INTO USERS VALUES(?,?)
            insert.role = INSERT INTO ROLES VALUES(?,?)
            encryption.enabled = ${encryption.enabled}
   = ${}
            encryption.algorithm = ${encryption.algorithm}
            encryption.encoding = ${encryption.encoding}
            detailed.login.exception = ${detailed.login.exception}
        </jaas:module>
        <jaas:module className="org.apache.karaf.jaas.modules.audit.FileAuditLoginModule" flags="optional">
            enabled = ${audit.file.enabled}
            file = ${audit.file.file}
        </jaas:module>
        <jaas:module className="org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule" flags="optional">
            enabled = ${audit.eventadmin.enabled}
            topic = ${audit.eventadmin.topic}
        </jaas:module>
    </jaas:config>

</blueprint>

By adding a configuration file org.apache.karaf.jaas.db.cfg to your etc folder you will be able to update the configuration of your jaas bundle during runtime.
encryption.enabled = true = basic
encryption.algorithm = SHA1
encryption.encoding = hexadecimal
detailed.login.exception = true

Now you can log in to Karaf via SSH with your alice DB user:
ssh -p 8101 alice@localhost
The password is: secret
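A note on the stored password: with encryption.algorithm set to SHA1 and encryption.encoding set to hexadecimal, the value inserted for alice in the USERS table earlier ('e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4') is simply the hex-encoded SHA-1 digest of 'secret'. A quick sanity check (Python is used here purely as an illustration; the article itself does not use it):

```python
import hashlib

# Hex-encoded SHA-1 of the clear-text password "secret",
# matching the value stored in the USERS table
digest = hashlib.sha1(b"secret").hexdigest()
print(digest)  # e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4
```

This is also how you would compute the hash for any additional users you insert directly via jdbc:execute.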
Categories: Jan Bernhardt

Installing the Apache Ranger Admin UI

Colm O hEigeartaigh - Tue, 07/19/2016 - 14:22
Apache Ranger 0.6 has been released, featuring new support for securing Apache Atlas and NiFi, as well as a huge number of bug fixes. It's easiest to get started with Apache Ranger by downloading a big data sandbox with Ranger pre-installed. However, the most flexible way is to grab the Apache Ranger source and to build and deploy the artifacts yourself. In this tutorial, we will look at building Apache Ranger from source, setting up a database to store policies/users/groups/etc. as well as Ranger audit information, and deploying the Apache Ranger Admin UI.

1) Build the source code

The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting admin archive to a location where you wish to install the UI:
  • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
  • cd apache-ranger-incubating-0.6.0
  • mvn clean package assembly:assembly 
  • tar zxvf target/ranger-0.6.0-admin.tar.gz
  • mv ranger-0.6.0-admin ${rangerhome}
2) Install MySQL

The Apache Ranger Admin UI requires a database to keep track of users/groups as well as policies for various big data projects that you are securing via Ranger. In addition, we will use the database for auditing as well. For the purposes of this tutorial, we will use MySQL. Install MySQL in $SQL_HOME and start MySQL via:
  • sudo $SQL_HOME/bin/mysqld_safe --user=mysql
Now you need to log on as the root user and create three users for Ranger. We need a root user with admin privileges (let's call this user "admin"), a user for the Ranger Schema (we'll call this user "ranger"), and finally a user to store the Ranger audit logs in the DB as well ("rangerlogger"):
  • CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';
  • GRANT ALL PRIVILEGES ON * . * TO 'admin'@'localhost' WITH GRANT OPTION;
  • CREATE USER 'ranger'@'localhost' IDENTIFIED BY 'password';
  • CREATE USER 'rangerlogger'@'localhost' IDENTIFIED BY 'password'; 
Finally, download the JDBC driver jar for MySQL and put it in ${rangerhome}.

3) Install the Apache Ranger Admin UI

You will need to install the Apache Ranger Admin UI using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerhome}/ and add, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_91
Next edit ${rangerhome}/ and make the following changes:
  • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar that you downloaded above.
  • Set (db_root_user/db_root_password) to (admin/password)
  • Set (db_user/db_password) to (ranger/password)
  • Change "audit_store" from "solr" to "db"
  • Set "audit_db_name" to "ranger_audit"
  • Set (audit_db_user/audit_db_password) to (rangerlogger/password).
Now you can run the setup script via "sudo ./".

4) Starting the Apache Ranger admin service

After a successful installation, we can start the Apache Ranger admin service with "sudo ${rangerhome}/ews/". Now open a browser and go to "http://localhost:6080/". Log on with "admin/admin" and you should be able to create authorization policies for a desired big data component.

Categories: Colm O hEigeartaigh

Shifting the Power – Ending Police Traffic Stop Oppression

Francis Upton's blog - Fri, 07/08/2016 - 18:19

I sit in my room at the Pact camp. This camp is a significant conference about people (mostly white as it turns out) who adopt people of color. One of the issues that is discussed quite a bit is the issue of race, and in particular, how to handle the problem of black people (mostly men and boys) being harassed or killed by the police. The prevailing wisdom seems to be that you must prepare your black young man, as a boy, to behave in a way that will not attract the attention of the police, and then certainly, if said attention is attracted, to not upset them in any way so that you can be safe. However, this is not really working, as evidenced by the large number of people that continue to be killed by the police.

When this issue of unjustified violence by the police first came to my attention a few years ago, my thought was that the problem would be solved once the police had body cameras. I felt that these problems could not exist in an environment where everything was completely public and the police would have to account for their actions in a public way. I did not think much further about it, except for the hopeless feelings I had every time I heard about an unjustified death.

I am a computer person; I have always been one. Last week I went to the Hadoop Summit 2016, where we heard several presentations about the promise of big data to dramatically improve the world in areas ranging from game-changing medical breakthroughs (cure cancer, anyone?), to improving access to education in rural India, to reducing the cost of car insurance. We are only at the beginning of this huge technological shift, brought about by the dramatic improvements in the accessibility and usability of software, data, and hardware.

I want to see this technology applied to this problem of police oppression, so that we can give oppressed people the tools and power to have safety (the type of safety that I take for granted as a white person) at all times in all places. There are a lot of oppressed people, and there are a lot of people who sympathize with the oppressed people (like me). If we can bring technology to bear on this problem, we can shift the conversation to having the oppressors be the fearful ones (of their actions being exposed), rather than the oppressed. This can also level the playing field, giving people stopped in their cars a way to get instant support.

We have to mobilize and focus the enormous resources of the reasonable people (of all colors) in this country against the very small number of people who think (through action or inaction) that this can be tolerated.

We are taking a number of steps in this direction. There are ACLU apps that allow you to make a video and have it be uploaded so that even if the phone is confiscated the video survives. They also allow you to be aware of bad things happening around you so you can get involved and be a witness. With the use of social networks and improved applications, we can go much further with this.

As I have been writing this, the Philando Castile killing happened. The way this was recorded serves as another example of getting closer to the real time action that’s required: the immediate video of the aftermath was quickly shared on social media. However, what if everything was video and audio recorded from the beginning of the stop? What if Mr. Castile had the opportunity to be speaking with a trained person who could help him as the incident was happening? Maybe this could have been prevented. Even if not prevented, the evidence captured would go a long way in helping justice to be served.

I can imagine a situation where you have an app on your phone, and perhaps a hidden camera or two in the car. When you are stopped for traffic, you just say a phrase into your phone and you are immediately connected with someone who can support you, and everything is recorded both by video and audio. Video recording of police is currently legal in all states, and audio recording of this kind is legal in 47 states. You can let the police know this is all being recorded and that you are in communication with someone in real time. Your location is noted publicly.

I think this sort of setup could be provided at very little (or perhaps no) cost to those who wanted it, and working with the ACLU or similar organization, it’s possible to put together people that can be available around the clock to be the support person. The technology to do this relatively cheaply is there now. It just takes the coordination, will, and some money to make this happen.

If the police knew that everyone who wanted it would not only get a person to help them, but also that everything would be automatically recorded, then maybe the police departments would do what was required to fix this. Even though officers might not be adequately prosecuted for this bad behavior, the cost to the police agency of the unambiguous publicity would be very high and something they would seek to avoid.

Given my responsibilities and qualifications, I’m not in a position to personally lead this effort, or even contribute much in the way of time or money. I’m sure there are better qualified people who can lead and contribute to this. I will do what I can to help put people together and get the word out. Feel free to comment with your ideas of where we can go from here.

Categories: Francis Upton

Asynchronous JAX-RS Proxies in CXF

Sergey Beryozkin - Tue, 06/21/2016 - 23:35
Dan had an idea the other day to get CXF JAX-RS proxies enhanced a bit for them to support the asynchronous calls. After all, HTTP centric JAX-RS 2.0 and CXF WebClient clients support such calls with AsyncInvoker.

So here is what we started from. Simply register an InvocationCallback with a proxy request context as shown in the examples and make the asynchronous call. The proxy method will return immediately and the callback will be notified in due time once the typed response is available. As the examples show, one can register a single callback or a collection of callbacks bound to specific response types.

I suppose we can consider generating typed asynchronous proxy methods from the service descriptions such as WADL going forward.

This feature will be available in CXF 3.1.7. Please give it a try, refresh your JAX-RS proxy code a bit, and enjoy.

Categories: Sergey Beryozkin

A new REST interface for the Apache CXF Security Token Service - part II

Colm O hEigeartaigh - Fri, 06/17/2016 - 12:38
The previous blog entry introduced the new REST interface of the Apache CXF Security Token Service. It covered issuing, renewing and validating tokens via HTTP GET and POST with a focus on SAML tokens. In this post, we'll cover the new support in the STS to issue and validate JWT tokens via the REST interface, as well as how we can transform SAML tokens into JWT tokens, and vice versa. For information on how to configure the STS to support JWT tokens (via WS-Trust), please look at a previous blog entry.

1) Issuing JWT Tokens

We can retrieve a JWT token from the REST interface of the STS simply by making an HTTP GET request to "/token/jwt". Specify the "appliesTo" query parameter to insert the service address as the audience of the generated token. You can also include various claims in the token via the "claim" query parameter.

If an "application/xml" accept type is specified, or if multiple accept types are specified (as in a browser), the JWT token is wrapped in a "TokenWrapper" XML Element by the STS:

If "application/json" is specified, the JWT token is wrapped in a simple JSON Object as follows:

If "text/plain" is specified, the raw JWT token is returned. We can also get a JWT token using HTTP POST by constructing a WS-Trust RequestSecurityToken XML fragment, specifying "urn:ietf:params:oauth:token-type:jwt" as the WS-Trust TokenType.

2) Validating JWT Tokens and token transformation

We can also validate JWT Tokens by POSTing a WS-Trust RequestSecurityToken to the STS. The raw (String) JWT Token must be wrapped in a "TokenWrapper" XML fragment, which in turn is specified as the value of "ValidateTarget". Also, an "action" query parameter must be added with value "validate".

A powerful feature of the STS is the ability to transform tokens from one type to another. This is done by making a validate request for a given token, and specifying a value for "TokenType" that corresponds to a token type that is desired.  In this way we can validate a SAML token and issue a JWT Token, and vice versa.

To see some examples of how to do token validation as well as token transformation, please take a look at the following tests, which use the CXF WebClient to invoke on the REST interface of the STS.
Categories: Colm O hEigeartaigh

A new REST interface for the Apache CXF Security Token Service - part I

Colm O hEigeartaigh - Thu, 06/16/2016 - 17:46
Apache CXF ships a Security Token Service (STS) that can issue/validate/renew/cancel tokens via the (SOAP based) WS-Trust interface. The principal focus of the STS is to deal with SAML tokens, although other token types are supported as well. JAX-RS clients can easily obtain tokens using helper classes in CXF (see Andrei Shakirin's blog entry for more information on this).

However, wouldn't it be cool if a JAX-RS client could dispense with SOAP altogether and obtain tokens via a REST interface? Starting from the 3.1.6 release, the Apache CXF STS now has a powerful and flexible REST API which I'll describe in this post. The next post will cover how the STS can now issue and validate JWT tokens, and how it can transform JWT tokens into SAML tokens and vice versa.

One caveat - this REST interface is obviously not a standard, as compared to the WS-Trust interface, and so is specific to CXF.

1) Configuring the JAX-RS endpoint of the CXF STS

Firstly, let's look at how to set up the JAX-RS endpoint needed to support the REST interface of the STS. The STS is configured in exactly the same way as for the standard WS-Trust based interface, the only difference being that we are setting up a JAX-RS endpoint instead of a JAX-WS endpoint. Example configuration can be seen here in a system test.

2) Issuing Tokens

The new JAX-RS interface supports a number of different methods to obtain tokens. In this post, we will just focus on the methods used to obtain XML based tokens (such as SAML).

2.1) Issue tokens via HTTP GET

The easiest way to get a token is by doing an HTTP GET on "/token/{tokenType}". Here is a simple example using a web browser:

 The supported tokenType values are:
  • saml
  • saml2.0
  • saml1.1
  • jwt
  • sct
A number of optional query parameters are supported:
  • keyType - The standard WS-Trust based URIs to indicate whether a Bearer, HolderOfKey, etc. token is required. The default is to issue a Bearer token.
  • claim - A list of requested claims to include in the token. See below for the list of acceptable values.
  • appliesTo - The AppliesTo value to use
  • wstrustResponse - A boolean parameter to indicate whether to return a WS-Trust Response or just the desired token. The default is false. This parameter only applies to the XML case.
Note that for the "HolderOfKey" keyType, the STS will try to get the client key via the TLS client certificate, if it is available. The (default) supported claim values are:
  • emailaddress
  • role
  • surname
  • givenname
  • name
  • upn
  • nameidentifier
The format of the returned token depends on the HTTP application type:
  • application/xml - Returns a token in XML format
  • application/json - JSON format for JWT tokens only.
  • text/plain - The "plain" token is returned. For JWT tokens it is just the raw token. For XML-based tokens, the token is BASE-64 encoded and returned.
The default is to return XML if multiple types are specified (e.g. in a browser).
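As a minimal sketch of how such a GET request URL is put together (the STS address, the appliesTo value and the particular parameter choices are illustrative; only the "/token/{tokenType}" path template and the parameter names come from the description above):

```python
from urllib.parse import quote, urlencode

# Hypothetical deployment address -- substitute your own STS URL.
STS_BASE = "https://localhost:8443/SecurityTokenService"

def issue_token_url(token_type, **params):
    """Build the REST issue URL: GET /token/{tokenType}?param=value&..."""
    url = STS_BASE + "/token/" + quote(token_type)
    if params:
        url += "?" + urlencode(params)
    return url

# Request a SAML 2.0 Bearer token with a "role" claim for an illustrative service.
# The keyType value is the standard WS-Trust 1.3 Bearer URI.
url = issue_token_url(
    "saml2.0",
    keyType="http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer",
    claim="role",
    appliesTo="https://localhost:8081/doubleit",
)
print(url)
```

Setting the HTTP "Accept" header on this request then selects between the XML, JSON and plain representations listed above.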

2.2) Issue tokens via HTTP POST

While the GET method above is very convenient, you may want to pass other parameters to the STS when requesting a token. In this case, you can construct a WS-Trust RequestSecurityToken XML fragment and POST it to the STS. The response will be the standard WS-Trust Response; unlike the GET method, there is no option to receive just the raw token.
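As a sketch of what such a fragment looks like, here is a minimal Python snippet that builds an Issue RequestSecurityToken. The WS-Trust 1.3 namespace and the SAML 2.0 TokenType URI are the standard ones; whether your STS requires further child elements depends on its configuration:

```python
import xml.etree.ElementTree as ET

# WS-Trust 1.3 namespace
WST_NS = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"
ET.register_namespace("wst", WST_NS)

def build_issue_rst(token_type_uri):
    """Build a minimal RequestSecurityToken fragment to POST to the STS."""
    rst = ET.Element(f"{{{WST_NS}}}RequestSecurityToken")
    req_type = ET.SubElement(rst, f"{{{WST_NS}}}RequestType")
    req_type.text = WST_NS + "/Issue"
    token_type = ET.SubElement(rst, f"{{{WST_NS}}}TokenType")
    token_type.text = token_type_uri
    return ET.tostring(rst, encoding="unicode")

# SAML 2.0 token type URI from the WSS SAML Token Profile 1.1
body = build_issue_rst(
    "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0"
)
print(body)
```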

3) Validating/Renewing/Cancelling tokens

It is only possible to validate, renew or cancel a token using the POST method detailed in section 2.2 above. You construct the RequestSecurityToken XML fragment in the same way as for Issue, except that the token in question is included under "ValidateTarget", "RenewTarget", etc. For the non-issue use cases, you must specify an "action" query parameter, which can be "issue" (the default), "validate", "renew" or "cancel".
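Continuing the sketch above for the validate case (the placeholder token element and the STS address are illustrative; the namespace, the "ValidateTarget" element and the "action" query parameter are the ones described in this section):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

WST_NS = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"
ET.register_namespace("wst", WST_NS)

def build_validate_rst(token_xml):
    """Wrap an existing token in a Validate RequestSecurityToken."""
    rst = ET.Element(f"{{{WST_NS}}}RequestSecurityToken")
    req_type = ET.SubElement(rst, f"{{{WST_NS}}}RequestType")
    req_type.text = WST_NS + "/Validate"
    target = ET.SubElement(rst, f"{{{WST_NS}}}ValidateTarget")
    target.append(ET.fromstring(token_xml))
    return ET.tostring(rst, encoding="unicode")

# A placeholder element stands in for a real SAML Assertion here.
body = build_validate_rst("<DummyToken>opaque-token-content</DummyToken>")

# The "action" query parameter tells the STS this is not an issue request.
endpoint = "https://localhost:8443/SecurityTokenService/token?" + urlencode(
    {"action": "validate"}
)
print(endpoint)
print(body)
```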
Categories: Colm O hEigeartaigh

Apache CXF JAX-RS and SAML Assertions

Sergey Beryozkin - Thu, 06/02/2016 - 16:39
While the parts of the software industry interested in web security are enthusiastically embracing the latest and coolest technologies such as OpenId Connect and JOSE, with JSON Web Tokens being the stars of advanced security flows, the less 'glamorous' SAML security tokens have continued to help secure existing services.

CXF JAX-RS has provided comprehensive support for SAML assertions for a while now, and it is relied upon in a number of production deployments. I'd also like to encourage developers who work with SAML to give this access control feature a try.

A question that is often asked is how a JAX-RS client gets these assertions. Please read this informative blog post explaining how CXF JAX-RS clients can seamlessly get a SAML assertion from an STS service and use it, with the server validating it against the STS or locally.

Please also check this section if you are curious how to reuse SAML assertions in OAuth2 flows.

Categories: Sergey Beryozkin

Practical Cryptography with Apache CXF JOSE

Sergey Beryozkin - Tue, 05/31/2016 - 17:20
It has been a year since I had the chance to talk about Practical JOSE in Apache CXF at ApacheCon NA 2015.

We have significantly improved the CXF JOSE implementation since then, with Colm helping a lot with the code, tests and documentation. The code has become more thoroughly tested, the configuration better, and the documentation was updated recently.

The production-quality CXF STS service can now issue JOSE-protected JWT assertions, and the Fediz OpenId Connect project depends directly on JOSE in order to secure OIDC IdTokens.

But it is important to realize that doing JOSE does not mean you need to do OAuth2 in general or OpenId Connect in particular, though it is definitely true that understanding JOSE will help when you decide to work with OAuth2/OIDC.
As such, a web service developer can experiment with JOSE in a number of ways.

One approach is to use the JWS Signature or JWE Encryption helpers to sign and/or encrypt arbitrary data.

For example, have your service receive a confidential String over 2-way HTTPS, then JWE-encrypt it and save it to the database to keep the data safe, or JWS-sign it only and forward it on, assured that the data won't be modified; in either case you can choose between the JWS Compact and JSON representations.

Have you heard that JOSE sequences have their data Base64 URL encoded? Try JWS JSON with the unencoded payload option.
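In Java, CXF's JWS helpers build these sequences for you; purely to illustrate what a JWS Compact Serialization contains (base64url-encoded header, payload and signature joined by dots), here is a minimal HS256 signer using only the Python standard library. The key and payload are made-up demo values:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JOSE base64url encoding drops the '=' padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jws_compact_hs256(payload: bytes, key: bytes) -> str:
    """Produce a JWS Compact Serialization: header.payload.signature"""
    protected = b64url(json.dumps({"alg": "HS256"}).encode("utf-8"))
    encoded_payload = b64url(payload)
    signing_input = (protected + "." + encoded_payload).encode("ascii")
    signature = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return protected + "." + encoded_payload + "." + signature

key = b"0123456789abcdef0123456789abcdef"  # demo secret, not for real use
token = jws_compact_hs256(b"confidential data", key)
print(token)
```

The unencoded payload option mentioned above (RFC 7797) skips the base64url step for the payload part, at the cost of restricting which characters the payload may contain.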

Another approach is to let CXF do JOSE for you: use the CXF JOSE filters and secure your service data by typing a few lines in the configuration properties.
These filters make a best effort at streaming the outbound data while preparing JOSE sequences.

Would you like to link client JWT assertions, obtained from progressive services such as the CXF STS, to the data being protected? Add a couple of filters.

I honestly think that JOSE is the best technology to help many of us better understand what cryptography is.

Start by selecting a signature algorithm. You most likely have a Java JKS key store somewhere around, so go for 'RS256'. Get the private key out and sign, then get the public key and validate, as shown here.
Next try encryption: select RSA-OAEP, which is quick to get going with given that you have this JKS store. Use the public key to secure a content encryption key generated by CXF for you, and then do A128GCM content encryption. Finish by decrypting the content with the private key.

Works? Interested in trying different key sizes or combinations of JOSE algorithms? No problem, try them quickly. Then learn more about these algorithms, and see how it all works when the CXF JOSE filters do the work.

We've thought a lot about how to help developers start experimenting with JOSE as quickly and easily as possible, and I hope those of you who start working with the CXF JOSE code will help us make it even better.

Would you like to use other quality JOSE libraries, such as these ones? No problem: use them inside your custom JAX-RS filters or directly in the service code.

You may say: I don't really see others using JOSE in regular HTTP services. Let me finish with this advice: do not worry about it; be a pioneer, experiment, and find new and interesting ways to secure your services and prepare them for a world of JOSE-protected tokens and data flowing everywhere.

Do JOSE today, convince your boss your team needs it :-), become a cryptography expert. Enjoy!

Categories: Sergey Beryozkin

SAML SSO support in the Fediz 1.3.0 IdP

Colm O hEigeartaigh - Fri, 05/27/2016 - 17:12
The Apache CXF Fediz Identity Provider (IdP) has had the ability to talk to third party IdPs using SAML SSO since the 1.2.0 release. However, one of the new features of the 1.3.0 release is the ability to configure the Fediz IdP to use the SAML SSO protocol directly, instead of WS-Federation. This means that Fediz can be used as a fully functioning SAML SSO IdP.

I added a new test-case to github to show how this works:
  • cxf-fediz-saml-sso: This project shows how to use the SAML SSO interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service. 
The test-case consists of two modules. The first is a web application which contains a simple JAX-RS service, which has a single GET method to return a doubled number. The method is secured with a @RolesAllowed annotation, meaning that only a user in roles "User", "Admin", or "Manager" can access the service. The service is configured with the SamlRedirectBindingFilter, which redirects unauthenticated users to a SAML SSO IdP for authentication (in this case Fediz). The service configuration also defines an AssertionConsumerService which validates the response from the IdP, sets up the session for the user, and populates the CXF security context with the roles from the SAML Assertion.

The second module deploys the Fediz IdP and STS in Apache Tomcat, as well as the "double-it" war above. It uses HtmlUnit to make an invocation on the service and check that access is granted. Alternatively, you can comment out the @Ignore annotation of the "testInBrowser" method, and copy the printed-out URL into a browser to test the service directly (user credentials: "alice/ecila").

The IdP configuration is defined in entities-realma.xml. Note that the "supportedProtocols" list for the "idp-realmA" configuration contains the value "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser". In addition, the default authentication URI is "saml/up". These are the only changes required to switch the IdP for "realm A" from WS-Federation to SAML SSO.
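For illustration only, the relevant fragment of entities-realma.xml might look something like this. The bean class and the exact property layout are assumptions on my part, not a verbatim copy of the file; only the protocol URI and the "saml/up" value come from the configuration described above:

```xml
<!-- Illustrative sketch: bean class and property shapes are assumptions -->
<bean id="idp-realmA" class="org.apache.cxf.fediz.service.idp.service.jpa.IdpEntity">
    <property name="supportedProtocols">
        <util:list>
            <!-- SAML SSO instead of the WS-Federation protocol URI -->
            <value>urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser</value>
        </util:list>
    </property>
    <property name="authenticationURIs">
        <util:map>
            <!-- default authentication URI switched to the SAML login flow -->
            <entry key="default" value="saml/up"/>
        </util:map>
    </property>
</bean>
```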
Categories: Colm O hEigeartaigh
