
How to enable Fediz Plugin Logging

Jan Bernhardt - Thu, 09/22/2016 - 14:43
If you are using the Apache Fediz plugin to enable WS-Federation support for your Tomcat container, you will not see any log statements from the Fediz plugin by default. Especially when testing the plugin or analyzing issues with it, you will want to actually see some of its log output.

In this blog post I'll explain what needs to be done to get all DEBUG-level log statements from the Apache Fediz Tomcat plugin using Log4J. The Apache Tomcat documentation explains how to enable logging at the container level.
1. Adding Dependencies

First you need to ensure that the required libraries are available within your classpath. This can be done in one of two ways:

a) Adding Maven Dependencies to the Fediz Tomcat Plugin

Add the following dependency to cxf-fediz/plugins/tomcat7/pom.xml:
<project . . .>
. . .
. . .
Now build the plugin again with mvn clean package and deploy the content of cxf-fediz/plugins/tomcat7/target/ into your tomcat/lib/fediz folder.
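The dependency block above is elided. For Log4J logging via SLF4J it would typically be the SLF4J Log4J binding; this is a sketch, and the version is an assumption that should match the SLF4J version used by your Fediz release:

```xml
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.21</version>
</dependency>
```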
b) Adding lib files directly to your lib folder

Add the slf4j and log4j libs (in the desired version) to your Fediz plugin dependencies:

2. Adding a Log4J configuration file

Once your dependencies are added to your Tomcat installation, you need to add the Log4J configuration file to your tomcat/lib folder. Here is some example content for this file:
# Loggers
log4j.rootLogger = WARN, CATALINA, CONSOLE
log4j.logger.org.apache.cxf.fediz = DEBUG, CONSOLE, FEDIZ
log4j.additivity.org.apache.cxf.fediz = false

# Appenders
log4j.appender.CATALINA = org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File = ${catalina.base}/logs/catalina.out
log4j.appender.CATALINA.Append = true
log4j.appender.CATALINA.Encoding = UTF-8
log4j.appender.CATALINA.DatePattern = '.'yyyy-MM-dd
log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

log4j.appender.FEDIZ = org.apache.log4j.DailyRollingFileAppender
log4j.appender.FEDIZ.File = ${catalina.base}/logs/fediz-plugin.log
log4j.appender.FEDIZ.Append = true
log4j.appender.FEDIZ.Encoding = UTF-8
log4j.appender.FEDIZ.Threshold = DEBUG
log4j.appender.FEDIZ.DatePattern = '.'yyyy-MM-dd
log4j.appender.FEDIZ.layout = org.apache.log4j.PatternLayout
log4j.appender.FEDIZ.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

log4j.appender.CONSOLE = org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Encoding = UTF-8
log4j.appender.CONSOLE.Threshold = INFO
log4j.appender.CONSOLE.layout = org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

Now restart your Tomcat container and you will see Fediz INFO logs on your console and DEBUG messages within tomcat/logs/fediz-plugin.log.
Categories: Jan Bernhardt

Invoking on the Talend ESB STS using SoapUI

Colm O hEigeartaigh - Wed, 09/21/2016 - 17:11
Talend ESB ships with a powerful SecurityTokenService (STS) based on the STS that ships with Apache CXF. The Talend Open Studio for ESB contains UI support for creating web service clients that use the STS to obtain SAML tokens for authentication (and also authorization via roles embedded in the tokens). However, it is sometimes useful to be able to obtain tokens with a third party client. In this post we will show how SoapUI can be used to obtain SAML Tokens from the Talend ESB STS.

1) Download and run Talend Open Studio for ESB

The first step is to download Talend Open Studio for ESB (the current version at the time of writing this post is 6.2.1). Unzip it and start the container via:
  • Runtime_ESBSE/container/bin/trun
The next step is to start the STS itself:
  • tesb:start-sts
2) Download and run SoapUI

Download SoapUI and run the installation script. Create a new SOAP Project called "STS" using the WSDL:
  • http://localhost:8040/services/SecurityTokenService/UT?wsdl
The WSDL of the STS defines a number of different services. The one we are interested in is the "UT_Binding", which requires a WS-Security UsernameToken to authenticate the client. Click on "UT_Binding/Issue/Request 1" in the left-hand menu to see a sample request for the service. Now we need to do some editing of the request. Remove the 'Context="?"' attribute from RequestSecurityToken. Then paste the following into the Body of the RequestSecurityToken:
  • <t:TokenType xmlns:t=""></t:TokenType>
  • <t:KeyType xmlns:t=""></t:KeyType>
  • <t:RequestType xmlns:t=""></t:RequestType>
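The namespace and element values are elided above. Using the standard WS-Trust 1.3 namespace, and assuming we want a SAML 2.0 token with the "Bearer" KeyType (as this example uses), the elements would look something like this:

```xml
<t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
<t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
<t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
```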
Now we need to configure a username and password to use when authenticating the client request. In the "Request Properties" box in the lower left corner, add "tesb" for the "username" and "password" properties. Now right click in the request pane, and select "Add WSS Username Token" (Password Text). Now send the request and you should receive a SAML Token in response.

Bear in mind that if you wish to re-use the SAML Token retrieved from the STS in a subsequent request, you must copy it from the "Raw" tab and not the "XML" tab of the response. The latter adds in whitespace that breaks the signature on the token. Another thing to watch out for is that the STS maintains a cache of the Username Token nonce values, so you will need to recreate the UsernameToken each time you want to get a new token.

3) Requesting a "PublicKey" KeyType

The example above uses a "Bearer" KeyType. Another common use-case, as is the case with the security-enabled services developed using the Talend Studio, is when the token must have the PublicKey/Certificate of the client embedded in it. To request such a token from the STS, change the "Bearer" KeyType as above to "PublicKey". However, we also need to present a certificate to the STS to include in the token.

As we are just using the test credentials used by the Talend STS, go to the Runtime_ESBSE/container/etc/keystores and extract the client key with:
  • keytool -exportcert -rfc -keystore clientstore.jks -alias myclientkey -file client.cer -storepass cspass
Edit client.cer and remove the first and last lines (the lines containing BEGIN/END CERTIFICATE). Now go back to SoapUI and add the following to the RequestSecurityToken Body:
  • <t:UseKey xmlns:t=""><ds:KeyInfo xmlns:ds=""><ds:X509Data><ds:X509Certificate>...</ds:X509Certificate></ds:X509Data></ds:KeyInfo></t:UseKey>
where the content of the X.509 Certificate is the content in client.cer. This time, the token issued by the STS will contain the public key of the client embedded in the SAML Subject.
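The namespaces are elided in the UseKey element above. With the WS-Trust and XML Signature namespaces filled in, it would look something like this, where "..." stands for the Base64 content taken from client.cer:

```xml
<t:UseKey xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
    <ds:X509Data>
      <ds:X509Certificate>...</ds:X509Certificate>
    </ds:X509Data>
  </ds:KeyInfo>
</t:UseKey>
```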

Categories: Colm O hEigeartaigh

Securing an Apache Kafka broker - part II

Colm O hEigeartaigh - Mon, 09/19/2016 - 15:49
In the previous post, we looked at how to configure an Apache Kafka broker to require SSL client authentication. In this post we will add authorization to the example, making sure that only authorized producers can send messages to the broker. In addition, we will show how to enforce authorization rules per-topic for consumers.

1) Configure authorization in the broker

Configure Apache Kafka as per the previous tutorial. To enforce some custom authorization rules in Kafka, we will need to implement the Kafka Authorizer interface. This interface contains an "authorize" method, which supplies a Session object from which you can obtain the current principal, as well as the Operation and Resource upon which to enforce an authorization decision.

In terms of the example detailed in the previous post, we created broker, service (producer) and client (consumer) principals. We want to enforce authorization decisions as follows:
  • Let the broker principal do anything
  • Let the producer principal read/write on all topics
  • Let the consumer principal read/describe only on topics starting with "test".
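The three rules above are simple enough to sketch as plain code. The following is only an illustration of the decision logic, using the principal names from the previous post; it is a hypothetical helper class and does not implement the actual Kafka Authorizer interface (whose Session/Operation/Resource types are omitted here):

```java
// Hypothetical sketch of the authorization rules; not the real Kafka Authorizer API.
public class AuthorizationRules {

    // principal: "broker", "service" (producer) or "client" (consumer)
    // operation: e.g. "Read", "Write", "Describe"
    // topic: the topic name the operation targets
    public static boolean allowed(String principal, String operation, String topic) {
        if ("broker".equals(principal)) {
            return true; // the broker principal may do anything
        }
        if ("service".equals(principal)) {
            // the producer principal may read/write on all topics
            return "Read".equals(operation) || "Write".equals(operation);
        }
        if ("client".equals(principal)) {
            // the consumer principal may read/describe only on topics starting with "test"
            return ("Read".equals(operation) || "Describe".equals(operation))
                && topic.startsWith("test");
        }
        return false; // deny everything else
    }
}
```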
There is a sample Authorizer implementation available in some Kafka unit test I wrote in github that can be used in this example - CustomAuthorizer:

Next we need to package up the CustomAuthorizer in a jar so that it can be used in the broker. You can do this by checking out the testcases github repo, and invoking "mvn clean package jar:test-jar -DskipTests" in the "apache/bigdata/kafka" directory. Now copy the resulting test jar in "target" to the "libs" directory in your Kafka installation. Finally, edit the "config/" file and add the following configuration item:
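The configuration item itself is elided above. The broker property for registering an Authorizer is authorizer.class.name; the fully-qualified class name below is an assumption and depends on the package used in the testcases repo:

```
authorizer.class.name=org.apache.coheigea.bigdata.kafka.CustomAuthorizer
```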
2) Test authorization

Now let's test the authorization logic. Restart the broker and the producer:
  • bin/ config/
  • bin/ --broker-list localhost:9092 --topic test --producer.config config/
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/ --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/ --new-consumer
If everything is configured correctly then it should work as in the first tutorial. Now we will create a new topic called "messages":
  • bin/ --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic messages
Restart the producer to send messages to "messages" instead of "test". This should work correctly. Now try to consume from "messages" instead of "test". This should result in an authorization failure, as the "client" principal can only consume from the "test" topic according to the authorization rules.
Categories: Colm O hEigeartaigh

Securing an Apache Kafka broker - part I

Colm O hEigeartaigh - Fri, 09/16/2016 - 18:19
Apache Kafka is a messaging system for the age of big data, with a strong focus on reliability, scalability and message throughput. This is the first part of a short series of posts on how to secure an Apache Kafka broker. In this post, we will focus on authenticating message producers and consumers using SSL. Future posts will look at how to authorize message producers and consumers.

1) Create SSL keys

As we will be securing the broker using SSL client authentication, the first step is to create some keys for testing purposes. Download the OpenSSL ca.config file used by the WSS4J project. Change the "certificate" value to "ca.pem", and the "private_key" value to "cakey.pem". You will also need to create a directory called "ca.db.certs", and make an empty file called "ca.db.index". Now create a new CA key and cert via:
  • openssl req -x509 -newkey rsa:1024 -keyout cakey.pem -out ca.pem -config ca.config -days 3650
Just accept the default options. Now we need to convert the CA cert into jks format:
  • openssl x509 -outform DER -in ca.pem -out ca.crt
  • keytool -import -file ca.crt -alias ca -keystore truststore.jks -storepass security
Now we will create the client key, sign it with the CA key, and put the signed client cert and CA cert into a keystore:
  • keytool -genkey -validity 3650 -alias myclientkey -keyalg RSA -keystore clientstore.jks -dname "CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE" -storepass cspass -keypass ckpass
  • keytool -certreq -alias myclientkey -keystore clientstore.jks -file myclientkey.cer -storepass cspass -keypass ckpass
  • echo 20 > ca.db.serial
  • openssl ca -config ca.config -policy policy_anything -days 3650 -out myclientkey.pem -infiles myclientkey.cer
  • openssl x509 -outform DER -in myclientkey.pem -out myclientkey.crt
  • keytool -import -file ca.crt -alias ca -keystore clientstore.jks -storepass cspass
  • keytool -import -file myclientkey.crt -alias myclientkey -keystore clientstore.jks -storepass cspass -keypass ckpass
Now follow the same template to create a "service" key in servicestore.jks, with store password "sspass" and key password "skpass". In addition, we will create a "broker" key in brokerstore.jks, with storepass "bspass" and key password "bkpass".  

2) Configure the broker

Download Apache Kafka and extract it ( was used for the purposes of this tutorial). Copy the keys created in section "1" into $KAFKA_HOME/config. Start Zookeeper with:
  • bin/ config/
Now edit 'config/' and add the following:
  • ssl.keystore.location=./config/brokerstore.jks
  • ssl.keystore.password=bspass
  • ssl.key.password=bkpass
  • ssl.truststore.location=./config/truststore.jks
  • ssl.truststore.password=security
  • ssl.client.auth=required
  • listeners=SSL://localhost:9092
and start the broker and then create a "test" topic with:
  • bin/ config/
  • bin/ --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
3) Configure the producer

Now we will configure the message producer. Edit 'config/' and add the following:
  • security.protocol=SSL
  • ssl.keystore.location=./config/servicestore.jks
  • ssl.keystore.password=sspass
  • ssl.key.password=skpass
  • ssl.truststore.location=./config/truststore.jks
  • ssl.truststore.password=security
and start the producer with:
  • bin/ --broker-list localhost:9092 --topic test --producer.config config/
Send a few messages to the topic to make sure that everything is working ok.

4) Configure the consumer

Finally we will configure the message consumer. Edit 'config/' and add the following:
  • security.protocol=SSL
  • ssl.keystore.location=./config/clientstore.jks
  • ssl.keystore.password=cspass
  • ssl.key.password=ckpass
  • ssl.truststore.location=./config/truststore.jks
  • ssl.truststore.password=security
and start the consumer with:
  • bin/ --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/ --new-consumer
The messages sent by the producer should appear in the console window of the consumer. So there it is, we have configured the Kafka broker to require SSL client authentication. In the next post we will look at adding simple authorization support to this scenario.
Categories: Colm O hEigeartaigh

Integrating Apache Camel with Apache Syncope - part II

Colm O hEigeartaigh - Fri, 09/09/2016 - 16:58
A recent blog post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0. It also covered a simple use-case for the new functionality, where the "createUser" Camel route is modified to send an email to an administrator when a User is created, with some details about the created User in the email. In this post, we will look at a different use-case, where the Camel provisioning manager is used to extend the functionality offered by Syncope.

1) The use-case

Apache Syncope stores users in internal storage in a table called "SyncopeUser". This table contains information such as the User Id, name, password, creation date, last password change date, etc. In addition, if there is an applicable password policy associated with the User realm, a list of the previous passwords associated with the User is stored in a table called "SyncopeUser_passwordHistory":

As can be seen from this screenshot, the table stores a list of Syncope User Ids with a corresponding password value. The main function of this table is to enforce the password policy. So for example, if the password policy states that a user can't change back to a password for at least 10 subsequent password changes, this table provides the raw information that is needed to enforce the policy.

Now, what if the administrator wants a stronger audit trail for User password changes other than what is provided by default in the Syncope internal storage? In particular, the administrator would like a record of when the User changes a password. The "SyncopeUser" table only stores the last password change date. There is no way of seeing when each password stored in the "SyncopeUser_passwordHistory" table was changed. Enter the Camel provisioning manager...

2) Configure Apache Syncope

Download and install  Apache Syncope (I used the "standalone" download for the purposes of this demo). Start Apache Syncope and log on to the admin console. First we will create a password policy. Click on "Configuration" and then "Policies". Click on the "Password" tab and then select the "+" button to create a new policy. Give a description for the policy and then select a history length of "10".

Now let's create a new user in the default realm. Click on "realms" and hit the "edit" button. Select the password policy you have just created and click "Finish". This means that all users created in this realm will have the applicable policy applied to them. Now click on the "User" tab and the "+" button to add a new user called "alice". Click on "Password Management" and enter a password for "alice". Now we have a new user created, we want to be able to see when she updates her password from now on.

Click on "Extensions" and "Camel Routes". Find the "updateUser" route (might not be on the first page) and edit it. Create a new Camel "filter" (as per the screenshot below) just above the "bean method=" line with the following content:
  • <simple>${body.password} != null</simple>
  • <setHeader headerName="CamelFileName"><simple>${body.key}.passwords</simple></setHeader>
  • <setBody><simple>New password '${body.password.value}' changed at time '${date:now:yyyy-MM-dd'T'HH:mm:ss.SSSZ}'\n</simple></setBody>
  • <to uri="file:./?fileExist=Append"/>
So what we are doing here is to audit the password changes for a particular user, by writing out the password + Timestamp to a file associated with that user. Let's examine what each of these statements do in turn. The first statement is the filter condition. It states that we should execute the following statements if the password is not null. The password will only be non null if it is being changed. So for example, if the user just changes a given attribute and not the password, the filter will not be invoked.

The second statement sets the Camel header "CamelFileName" to the format "<user key>.passwords". This header is used by the Camel File component as the file name to write out to. The third statement sets the exchange Body (the file content) to show the password value along with a timestamp. Finally, the fourth statement is an invocation of the Camel File component, which appends the exchange Body to the given file name. As we have overridden the message Body in the third statement above, we need to change the ${body} in the update call to ${exchangeProperty.actual}, which is the saved Body. Click on "save" to save the modified route.
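Assembled, the filter built from the four statements above would look something like the following (a sketch; the rest of the updateUser route is unchanged):

```xml
<filter>
  <simple>${body.password} != null</simple>
  <setHeader headerName="CamelFileName"><simple>${body.key}.passwords</simple></setHeader>
  <setBody><simple>New password '${body.password.value}' changed at time '${date:now:yyyy-MM-dd'T'HH:mm:ss.SSSZ}'\n</simple></setBody>
  <to uri="file:./?fileExist=Append"/>
</filter>
```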

Now let's update the user "alice" and set a new password a couple of times. There should be a new file in the directory where you launched Tomcat containing the audit log of password changes for that particular user. With the power of Apache Camel, we could just as easily audit to a database, to Apache Kafka, etc.

Categories: Colm O hEigeartaigh

Apache CXF Fediz 1.2.3 and 1.3.1 released

Colm O hEigeartaigh - Thu, 09/08/2016 - 19:45
Apache CXF Fediz 1.2.3 and 1.3.1 have been released. The 1.3.1 release contains the following significant features/fixes:
  • An update to use Apache CXF 3.1.7 
  • Support for Facebook Login as a Trusted IdP.
  • A fix for SAML SSO redirection on ForceAuthn or token expiry.
  • A bug fix to support multiple realms in the IdP.
  • A fix to enforce that mandatory claims are present in the received token.
In addition, both 1.2.3 and 1.3.1 contain a fix for a new security advisory - CVE-2016-4464:
Apache CXF Fediz is a subproject of Apache CXF which implements the WS-Federation Passive Requestor Profile for SSO specification. It provides a number of container based plugins to enable SSO for Relying Party applications. It is possible to configure a list of audience URIs for the plugins, against which the AudienceRestriction values of the received SAML tokens are supposed to be matched. However, this matching does not actually take place.

This means that a token could be accepted by the application plugin (assuming that the signature is trusted) that is targeted for another service, something that could potentially be exploited by an attacker.
Categories: Colm O hEigeartaigh

Syncope User Synchronisation with a Database

Jan Bernhardt - Thu, 09/01/2016 - 13:12
In a previous post I explained how to set up a datasource for an embedded H2 database and how to use it with the Karaf DB JAAS plugin.

In this post, I'll explain how to set up Syncope to synchronize users from that database into Syncope. Of course you can also use any other database with a matching JDBC driver.

Install Syncope

In this post I'll refer to the Syncope installation which comes with the Talend 6.1.1 installer. If you need to set up Syncope manually, please take a look at some posts from Colm.

Setup DB Connection Module

Syncope uses connid to connect to other backend systems like LDAP. You need to download the DB connid bundle and follow the installation instructions.
1. Open webapps/syncope/WEB-INF/classes/ and define your connid bundle location:
   Windows style: connid.locations=file:/C:/Talend/6.1.1/apache-tomcat/webapps/syncope/WEB-INF/connid/
   Linux style: connid.locations=file:/opt/Talend-6.1.1/apache-tomcat/webapps/syncope/WEB-INF/connid/
2. Create the defined folder and copy your downloaded connid bundle (jar) into it
3. Download and copy your required JDBC driver to your tomcat/lib folder
4. Restart Syncope / Tomcat
5. Login to the Syncope Console: http://localhost:8080/syncope-console/
   Default username: admin
   Default password: password
Setup DB Connector

Next you need to set up a connection to your database, before you can define any synchronization pattern.
1. Switch to Resources -> Connectors and click Create
2. Enter your connection name and select your connid bundle:
3. Configure your connection settings:
   Since Syncope expects SHA-1 hashes to be uppercase you must set this checkbox, otherwise your users will not be able to authenticate against Syncope with their synchronized password.

   In Syncope 2.x and newer it will also be possible to avoid user password synchronization, and instead to do a "pass-through authentication". This will be especially helpful if your passwords are not just hashed but also salted and encrypted.
4. Perform a connection test by clicking on the world icon at the top right of the configuration tab

   If you are experiencing connection problems, take a look into the tomcat/logs/core-connid.log file for detailed information.
5. Select all checkboxes on the capabilities tab:
6. Save your connection
Define DB Resource

Now you can set up a new resource to define the attribute matching between the Syncope internal DB and the external DB.
1. Click on Resources -> Resources -> Create
2. Switch to the user mapping tab
3. Click Save

Add Synchronization Task

To import users from your database you need to set up a synchronization task.
1. Click on Task -> Synchronization Tasks -> Create
2. Click Save
3. Execute your new synchronization task
   If your run was successful you will see alice as a new user under Users.

Create a new User

To test user propagation, you must create a new user and add this user to the H2-users resource.
1. Click Users -> List -> Create
2. Select Resource
3. Save
You will now find Bob in your H2 database.

I was not able to do a role synchronization with my DB backend, due to missing support in the UI / connid handler.
Categories: Jan Bernhardt

Integrating Apache Camel with Apache Syncope - part I

Colm O hEigeartaigh - Wed, 08/31/2016 - 12:29
Apache Syncope is an open-source Identity Management solution. A key feature of Apache Syncope is the ability to pull Users, Groups and Any Objects from multiple backend resources (such as LDAP, RDBMS, etc.) into Syncope's internal storage, where they can then be assigned roles, pushed to other backend resources, exposed via Syncope's REST API, etc.

However, what if you wanted to easily perform some custom task as part of the Identity Management process? Wouldn't it be cool to be able to plug in a powerful integration framework such as Apache Camel, so that you could exploit Camel's huge list of messaging components and routing + mediation rules? Well with Syncope 2.0.0 you can do just this with the new Apache Camel provisioning manager. This is a unique and very powerful selling point of Apache Syncope in my opinion. In this article, we will introduce the new Camel provisioning manager, and show a simple example of how to use it.

1) The new Apache Camel provisioning manager

As stated above, a new provisioning manager is available in Apache Syncope 2.0.0 based on Apache Camel. A set of Camel routes are available by default which are invoked when the Users, Groups and Any Objects in question are changed in some way. So for example, if a new User is created, then the corresponding Camel route is invoked at the same time. This allows the administrator to plug in custom logic on any of these state changes. The routes can be viewed and edited in the Admin Console by clicking on "Extensions" and then "Camel Routes".

Each of the Camel routes uses a new "propagate" Camel component available in Syncope 2.0.0. This component encapsulates some common logic involved in using the Syncope PropagationManager to create some tasks, and to execute them via the PropagationTaskExecutor. All of the routes invoke this propagate component via something like:
  • <to uri="propagate:<propagateType>?anyTypeKind=<anyTypeKind>&options"/>
Where propagateType is one of:
  • create
  • update
  • delete
  • provision
  • deprovision
  • status
  • suspend
  • confirmPasswordReset
and anyTypeKind is one of:
  • USER
  • GROUP
  • ANY
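For example, with the placeholders filled in, the createUser route would end with an invocation along these lines (a sketch; the exact options used by the default routes may differ):

```xml
<to uri="propagate:create?anyTypeKind=USER"/>
```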
2) The use-case

In this post, we will look at a simple use-case of sending an email to an administrator when a User is created, with some details about the created User in the email. Of course, this could be handled by a Notification Task, but we'll discuss some more advanced scenarios in future blog posts. Also note that a video available on the Tirasa blog details more or less the same use-case. For the purposes of the demo, we will set up a mailtrap account where we will receive the emails sent by Camel.

3) Configure Apache Syncope

Download and install Apache Syncope (I used the "standalone" download for the purposes of this demo). Before starting Apache Syncope, we need to copy a few jars that are required by Apache Camel to actually send emails. Copy the following jars to $SYNCOPE/webapps/syncope/WEB-INF/lib:
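The jar list is elided above. For Camel's SMTP support it would typically be the camel-mail component plus the JavaMail API jar; the names and version placeholders below are assumptions and should match the Camel version shipped with Syncope:

```
camel-mail-2.x.y.jar
mail-1.4.x.jar
```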
Now start Apache Syncope and log on to the admin console. Click on "Extensions" and then "Camel Routes". As we want to change the default route when users are created, click on the "edit" image for the "createUser" route. Add the following information just above the "bean method=" line:
  • <setHeader headerName="subject"><simple>New user ${body.username} created in realm ${body.realm}</simple></setHeader>
  • <setBody><simple>User full name: ${body.plainAttrMap[fullname].values[0]}</simple></setBody>
  • <to uri="smtp://<username>&amp;password=<password>&amp;contentType=text/html&amp;"/>
Let's examine what each of these statements does. The first statement sets the Camel header "Subject", which corresponds to the Subject of the email. It simply states that a new user with a given name was created in a given realm. The second statement sets the message Body, which is used as the content of the message by Camel. It just shows the User's full name, extracted from the "fullname" attribute, as an example of how to access User attributes in the route.

The third statement invokes the Camel smtp component. You'll need to substitute in the username + password you configured when setting up the mailtrap account. The recipient is configured using the "to" part of the URI. One more change is required to the existing route. As we have overridden the message Body in the second statement above, we need to change the ${body} in the create call to ${exchangeProperty.actual}, which is the saved Body. Click on "save" to save the modified route.
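The SMTP host and "to" recipient are elided in the URI above. Assembled, with hypothetical values filled in (the mailtrap host, port and recipient address here are assumptions, not values from the original post), the third statement would look something like:

```xml
<to uri="smtp://smtp.mailtrap.io:2525?username=<username>&amp;password=<password>&amp;contentType=text/html&amp;to=admin@example.com"/>
```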

        Before creating a User, we need to add a "fullname" User attribute as the route expects. Go to "Configuration" and "Types", and click on the "Schemas" tab. Click on the "+" button under "PLAIN" and add a new attribute called "fullname". Then click on "AnyTypeClasses", and add the "fullname" attribute to the BaseUser AnyTypeClass.

        Finally, go to the "/" realm and create a new user, specifying a fullname attribute. A new email should be available in the mailtrap account as follows:

        Categories: Colm O hEigeartaigh

Custom JSSE Truststore to enable XKMS Certificate Validation

Jan Bernhardt - Mon, 08/29/2016 - 08:56
Recently I was involved in a project which uses a central XKMS server for certificate and trust management. This was all working fine within the Talend runtime with a custom wss4j crypto provider. However, the need arose to perform client certificate validation (mutual SSL) with Apache Fediz running inside an Apache Tomcat server.

Usually I would use a JKS truststore for Tomcat to add trusted certificates (CAs). However, this was not possible for this project, because all certificates are managed inside an LDAP accessible via an XKMS service. Searching for a solution to extend Tomcat to support XKMS-based certificate validation, I came across the JSSE standard.

Reading through the documentation was not so straightforward and clear, but searching the internet finally helped me to achieve my goal. In this blog post, I'll show you what I had to do to enable XKMS-based SSL certificate validation in Tomcat. To manage your SSL truststore settings you can use standard System or Tomcat properties:
System Property                  Tomcat Property          Purpose          truststoreFile           Location of the JKS truststore  truststorePass           Password for the JKS truststore      truststoreType           Type / factory of your truststore. Default is "JKS"
n/a                              trustManagerClassName    Custom trust manager class to use to validate client certificates
Settings are considered in the following order:
1. Tomcat truststore properties
2. System properties
3. Tomcat keystore properties
4. Default values
If a trustManagerClassName is set, this implementation will be used and all other truststore settings will be ignored. If a truststore provider is defined, any Java standard provider will be ignored.

You can review this behavior in the Tomcat JSSESocketFactory init method.

The easiest way to achieve my goal was to implement my own XKMSTrustManager implementing the X509TrustManager interface:
public class XKMSTrustManager implements X509TrustManager {

    private static final Logger LOG = LoggerFactory.getLogger(XKMSTrustManager.class);

    private XKMSInvoker xkms;

    public XKMSTrustManager() throws MalformedURLException {
        XKMSService xkmsService = new XKMSService(
            URI.create(System.getProperty("xkms.wsdl.location", "http://localhost:8040/services/XKMS/?wsdl")).toURL());
        xkms = new XKMSInvoker(xkmsService.getXKMSPort());
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        LOG.debug("Check client trust for: {}", chain);
        validateTrust(chain);
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        LOG.debug("Check server trust for: {}", chain);
        validateTrust(chain);
    }

    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[] {};
    }

    protected void validateTrust(X509Certificate[] chain) throws CertificateException {
        if (chain == null) {
            throw new CertificateException("Certificate chain is null");
        }
        if (!xkms.validateCertificate(chain)) {
            LOG.error("Certificate chain is not trusted: {}", chain);
            throw new CertificateException("Certificate chain is not trusted");
        }
    }
}
The custom trust manager is then registered via the trustManagerClassName attribute on the SSL connector in Tomcat's server.xml (the fully qualified class name below depends on how you package the class):

<Server port="9005" shutdown="SHUTDOWN">

  <Service name="Catalina">

    <Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               sslProtocol="TLS"
               trustManagerClassName="org.example.XKMSTrustManager" />

  </Service>

</Server>
        However, setting a trustManager is only possible if your application exposes this option or if you have access to the source code of the SSL socket factory. In all other cases you will have to implement your own security Provider that supplies your own truststore factory. This task is much more challenging. During my internet research on this topic I found several pages which should be a good reference for you if you have to go this way:

        JCA Reference Guide - Crypto Provider

        Howto Implement a JCA Provider

        JSSE Reference Guide - Customized Certificate Storage

        • Custom CA Truststore in addition to System CA Truststore
        • HowTo: Register a global security provider in <java-home>/lib/security/
          Advantage: supports multiple providers, adding just the "missing piece". Disadvantage: system-wide configuration.
        • Override security provider settings with system properties
        • Changing security settings via code: Security.insertProviderAt(new FooBarProvider(), 1);
        • Register a TrustManager: put("TrustManagerFactory.SunX509", "$SimpleFactory"); put("TrustManagerFactory.PKIX", "$PKIXFactory");
        • Using a Custom Certificate Trust Store
        • Sun JSSE Provider Implementation
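The "changing security settings via code" option above can be sketched as follows; the provider name and the SPI class name are hypothetical placeholders for your own javax.net.ssl.TrustManagerFactorySpi implementation:

```java
import java.security.Provider;
import java.security.Security;

// Hypothetical provider: "com.example.XKMSTrustManagerFactorySpi" is a placeholder
// for a custom javax.net.ssl.TrustManagerFactorySpi implementation.
public class FooBarProvider extends Provider {

    public FooBarProvider() {
        super("FooBar", 1.0, "Registers a custom TrustManagerFactory");
        // map the standard algorithm names to the custom factory SPI
        put("TrustManagerFactory.SunX509", "com.example.XKMSTrustManagerFactorySpi");
        put("TrustManagerFactory.PKIX", "com.example.XKMSTrustManagerFactorySpi");
    }

    public static void main(String[] args) {
        // insert at position 1 so it is consulted before the built-in providers
        Security.insertProviderAt(new FooBarProvider(), 1);
        System.out.println(Security.getProvider("FooBar") != null);
    }
}
```

Note the mapping only takes effect when code later calls TrustManagerFactory.getInstance("PKIX"); at that point the placeholder SPI class must exist on the classpath.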
        Categories: Jan Bernhardt

        Pulling users and groups from LDAP into Apache Syncope 2.0.0

        Colm O hEigeartaigh - Fri, 08/26/2016 - 17:54
        A previous tutorial showed how to synchronize (pull) users and roles into Apache Syncope 1.2.x from an LDAP backend (Apache Directory). Interacting with an LDAP backend appears to be a common use-case for Apache Syncope users. For this reason, in this tutorial we will cover how to pull users and groups (previously roles) into Apache Syncope 2.0.0 from an LDAP backend via the Admin Console, as it is a little different from the previous 1.2.x releases.

        1) Apache DS

        The basic scenario is that we have a directory that stores user and group information that we would like to import into Apache Syncope 2.0.0. For the purposes of this tutorial, we will work with Apache DS. The first step is to download and launch Apache DS. I recommend installing Apache Directory Studio for an easy way to create and view the data stored in your directory.

        Create two new groups (groupOfNames) in the default domain ("dc=example,dc=com") called "cn=employee,ou=groups,ou=system" and "cn=boss,ou=groups,ou=system". Create two new users (inetOrgPerson) "cn=alice,ou=users,ou=system" and "cn=bob,ou=users,ou=system". Now edit the groups you created such that both alice and bob are employees, but only alice is a boss. Specify "sn" (surname) and "userPassword" attributes for both users.
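For example, alice's entry and the "boss" group might look like the following LDIF sketch (the sn and userPassword values are placeholders):

```ldif
dn: cn=alice,ou=users,ou=system
objectClass: inetOrgPerson
cn: alice
sn: Smith
userPassword: password

dn: cn=boss,ou=groups,ou=system
objectClass: groupOfNames
cn: boss
member: cn=alice,ou=users,ou=system
```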

        2) Pull data into Apache Syncope

        The next task is to import (pull) the user data from Apache DS into Apache Syncope. Download and launch an Apache Syncope 2.0.x instance. Make sure that an LDAP Connector bundle is available (see here).

        a) Define a 'surname' User attribute

        The inetOrgPerson instances we created in Apache DS have a "sn" (surname) attribute. We will map this into an internal User attribute in Apache Syncope. The Schema configuration is quite different in the Admin Console compared to Syncope 1.2.x. Select "Configuration" and then "Types" in the left hand menu. Click on the "Schemas" tab and then the "+" button associated with "PLAIN". Add "surname" for the Key and click "save". Now go into the "AnyTypeClasses" tab and edit the "BaseUser" item. Select "surname" from the list of available plain Schema attributes. Now the users we create in Syncope can have a "surname" attribute.

        b) Define a Connector

        The next thing to do is to define a Connector to enable Syncope to talk to the Apache DS backend. Click on "Topology" in the left-hand menu, and on the ConnId instance on the map. Click "Add new connector" and create a new Connector of type "net.tirasa.connid.bundles.ldap". On the next tab select:
        • Host: localhost
        • TCP Port: 10389
        • Principal: uid=admin,ou=system
        • Password: <password>
        • Base Contexts: ou=users,ou=system and ou=groups,ou=system
        • LDAP Filter for retrieving accounts: cn=*
        • Group Object Classes: groupOfNames
        • Group member attribute: member
        • Click on "Maintain LDAP Group Membership".
        • Uid attribute: cn
        • Base Context to Synchronize: ou=users,ou=system and ou=groups,ou=system
        • Object Classes to Synchronize: inetOrgPerson and groupOfNames
        • Status Management Class: net.tirasa.connid.bundles.ldap.commons.AttributeStatusManagement
        • Tick "Retrieve passwords with search".
        Click on the "heart" icon at the top of the tab to check whether Syncope is able to connect to the backend resource. If you don't see a green "Successful Connection" message, consult the logs. On the next tab select all of the available capabilities and click on "Finish".

        c) Define a Resource

        Next we need to define a Resource that uses the LDAP Connector.  The Resource essentially defines how we use the Connector to map information from the backend into Syncope Users and Groups. Click on the Connector that was created in the Topology map and select "Add new resource". Just select the defaults and finish creating the new resource. When the new resource is created, click on it and add some provisioning rules via "Edit provision rules".

        Click the "+" button and select the "USER" type to create the mapping rules for users. Click "next" until you come to the mapping tab and create the following mappings:

        Click "next" and enable "Use Object Link" and enter "'cn=' + username + ',ou=users,ou=system'". Click "Finish" and "save". Repeat the process above for the "GROUP" type to create a mapping rule for groups as follows:
        Similar to creating the user mappings, we also need to enable "Use Object Link" and enter "'cn=' + name + ',ou=groups,ou=system'". Click "Finish" and "save".

        d) Create a pull task

        Having defined a Connector and a Resource to use that Connector, with mappings to map User/Group information to and from the backend, it's time to import the backend information into Syncope.  Click on the resource and select "Pull Tasks". Create a new Pull Task via the "+" button. Select "/" as the destination realm to create the users and groups in. Choose "FULL_RECONCILIATION" as the pull mode. Select "LDAPMembershipPullActions"  (this will preserve the fact that users are members of a group in Syncope) and "LDAPPasswordPullActions" from the list of available actions. Select "Allow create/update/delete". When the task is created,  click on the "execute" button (it looks like a cogged wheel). Now switch to the "Realms" tab in the left-hand menu and look at the users and groups that have been imported in the "/" realm from Apache DS.

        Categories: Colm O hEigeartaigh

        SwaggerUI in CXF or what Child's Play really means

        Sergey Beryozkin - Tue, 08/23/2016 - 14:03
        We've had an extensive demonstration of how to enable Swagger UI for CXF endpoints returning Swagger documents for a while, but the only 'problem' was that our demos only showed how to unpack a Swagger UI module into a local folder with the help of a Maven plugin and make these unpacked resources available to browsers.
        It was not immediately obvious to users how to activate Swagger UI, and with the news coming from SpringBoot land that it is apparently really easy to do over there, it was time to look at making it easier for CXF users.
        So Aki, Andriy and myself talked and this is what CXF 3.1.7 users have to do:

        1. Have Swagger2Feature activated to get Swagger JSON returned
        2. Add a swagger-ui dependency  to the runtime classpath.
        3. Access Swagger UI
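For step 2 with Maven, the dependency might look like the following; the org.webjars coordinates and the version shown are assumptions to verify against the CXF documentation for your release:

```xml
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>swagger-ui</artifactId>
    <version>2.1.4</version>
</dependency>
```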

        For example, run a description_swagger2 demo. After starting a server go to the CXF Services page and you will see:

        Click on the link and see a familiar Swagger UI page showing your endpoint's API.

        Have you ever wondered what some developers mean when they say it is child's play to try whatever they have done ? You'll find it hard to find a better example of it than trying Swagger UI with CXF 3.1.7 :-)

        Note in CXF 3.1.8-SNAPSHOT we have already fixed it to work for Blueprint endpoints in OSGI (with the help from Łukasz Dywicki).  SwaggerUI auto-linking code has also been improved to support some older browsers better.

        Besides, CXF 3.1.8 will also offer proper support for Swagger correctly representing multiple JAX-RS endpoints, based on the fix contributed by Andriy and available in Swagger 1.5.10, and for cases when the API interface and implementations are available in separate (OSGI) bundles (Łukasz figured out how to make it work).

        Before I finish let me return to the description_swagger2 demo. Add a cxf-rt-rs-service-description dependency to pom.xml. Start the server and check the services page:
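In Maven coordinates, that dependency is (version matching the 3.1.7 release discussed in this post):

```xml
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-service-description</artifactId>
    <version>3.1.7</version>
</dependency>
```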

        Of course some users do and will continue working with XML-based services and WADL is the best language available around to describe such services. If you click on a WADL link you will see an XML document returned. WADLGenerator can be configured with an XSLT template reference and if you have a good template you can get UI as good as this Apache Syncope document.

        Whatever your data representation preferences are, CXF will get you supported.


        Categories: Sergey Beryozkin

        OpenId Connect in Apache CXF Fediz 1.3.0

        Colm O hEigeartaigh - Fri, 08/12/2016 - 18:02
        Previous blog posts have described support for OpenId Connect protocol bridging in the Apache CXF Fediz IdP. What this means is that the Apache CXF Fediz IdP can bridge between the WS-Federation protocol and OpenId Connect third party IdPs, when the user must be authenticated in a different security domain. However, the 1.3.0 release of Apache CXF Fediz also sees the introduction of a new OpenId Connect Idp which is independent of the existing (WS-Federation and SAML-SSO based) IdP, and based on Apache CXF. This post will introduce the new IdP via an example.

        The example code is available on github:
        • cxf-fediz-oidc: This project shows how to use interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service using OpenId Connect.
        1) The secured service

        The first module available in the example contains a trivial JAX-RS service based on Apache CXF which "doubles" a number passed as a path parameter via HTTP GET. The service defines via a @RolesAllowed annotation that only users in the roles "User", "Admin" or "Manager" can access the service.

        The service is configured via spring. The endpoint configuration references the service bean above, as well as the CXF SecureAnnotationsInterceptor which enforces the @RolesAllowed annotation on the service bean. In addition, the service is configured with the CXF OidcRpAuthenticationFilter, which ensures that only users authenticated via OpenId Connect can access the service. The filter is configured with a URL to redirect the user to. It also explicitly requires a role claim to enforce authorization.
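A sketch of what that spring wiring might look like (namespace declarations omitted; bean ids and the service class name are placeholders; SecureAnnotationsInterceptor is pointed at the bean whose annotations it should enforce):

```xml
<bean id="doubleItBean" class="com.example.DoubleItService"/>

<bean id="authorizationInterceptor"
      class="org.apache.cxf.interceptor.security.SecureAnnotationsInterceptor">
    <!-- the bean whose @RolesAllowed annotations are enforced -->
    <property name="securedObject" ref="doubleItBean"/>
</bean>

<jaxrs:server address="/doubleit">
    <jaxrs:serviceBeans>
        <ref bean="doubleItBean"/>
    </jaxrs:serviceBeans>
    <jaxrs:inInterceptors>
        <ref bean="authorizationInterceptor"/>
    </jaxrs:inInterceptors>
</jaxrs:server>
```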

        The OidcRpAuthenticationFilter redirects the browser to a separate authentication endpoint, defined in the same spring file for convenience. This endpoint has a filter called OidcClientCodeRequestFilter, which initiates the OpenId Connect authorization code flow to a remote OpenId Connect IdP (in this case, the new Fediz IdP). It is also responsible for getting an IdToken after successfully getting an authorization code from the IdP.

        2) The Fediz OpenId Connect IdP

        The second module contains an integration test which deploys a number of wars into an Apache Tomcat container:
        • The "double-it" service as described above
        • The Apache CXF Fediz IdP which authenticates users via WS-Federation
        • The Apache CXF Fediz STS which performs the underlying authentication of users
        • The Apache CXF Fediz OpenId Connect IdP
        The way the Apache CXF Fediz OpenId Connect IdP works (at least for 1.3.x) is that user authentication is actually delegated to the WS-Federation based IdP via a Fediz plugin. So when the user is redirected to the Fediz IdP, (s)he gets redirected to the WS-Federation based IdP for authentication, and then gets redirected back to the OpenId Connect IdP with a WS-Federation Response. The OpenId Connect IdP parses this (SAML) Response and converts it into a JWT IdToken. Future releases will enable authentication directly at the OpenId Connect service.

        After deploying all of the services, the test code makes a series of REST calls to create a client in the OpenId Connect IdP, so that we can run the test without having to manually enter information in the client UI of the Fediz IdP. To run the test, simply remove the @org.junit.Ignore annotation on the "testInBrowser" method. The test code will create the clients in Fediz and then print out a URL in the console before sleeping. Copy the URL and paste it into a browser. Authenticate using the credentials "alice/ecila".
        Categories: Colm O hEigeartaigh

        Introducing Apache Syncope 2.0.0

        Colm O hEigeartaigh - Thu, 08/11/2016 - 17:17
        Apache Syncope is a powerful and flexible open-source Identity Management system that has been developed at the Apache Software Foundation for several years now. The Apache Syncope team has been busy developing a ton of new features for the forthcoming new major release (2.0.0), which will really help to cement Apache Syncope's position as a first class Identity Management solution. If you wish to experiment with these new features, a 2.0.0-M4 release is available. In this post we will briefly cover some of the new features and changes. For a more comprehensive overview please refer to the reference guide.

        1) Domains

        Perhaps the first new concept you will be introduced to in Syncope 2.0.0 after starting the (Admin) console is that of a domain. When logging in, as well as specifying a username, password, and language, you can also specify a configured domain. Domains are a new concept in Syncope 2.0.0 that facilitate multi-tenancy. Domains allow the physical separation of all data stored in Syncope (by storing the data in different database instances). Therefore, Syncope can facilitate users, groups etc. that are in different domains in a single Syncope instance.

        2) New Console layout

        After logging in, it becomes quickly apparent that the Syncope Console is quite different compared to the 1.2.x console. It has been completely rewritten and looks great. Connectors and Resources are now managed under "Topology" in the menu on the left-hand side. Users and Groups (formerly Roles) are managed under "Realms" in the menu. The Schema types are configured under "Configuration". A video overview of the new Console can be seen here.

        3) AnyType Objects

        With Syncope 1.2.x, it was possible to define plain/derived/virtual Schema Types for users, roles and memberships, but no other entities. In Syncope 2.0.0, the Schema Types are decoupled from the entity that uses them. Instead, a new concept called an AnyType class is available which is a collection of schema types. In turn, an AnyType object can be created which consists of any number of AnyType classes. AnyType objects represent the type of things that Apache Syncope can model. Besides the predefined Users and Groups, it can also represent physical things such as printers, workstations, etc. With this new concept, Apache Syncope 2.0.0 can model many different types of identities.

        4) Realms

        Another new concept in Apache Syncope 2.0.0 is that of a realm. A realm encapsulates a number of Users, Groups and Any Objects. It is possible to specify account and password policies per-realm (see here for a blog entry on custom policies in Syncope 2.0.0). Each realm has a parent realm (apart from the pre-defined root realm identified as "/"). The realm tree is hierarchical, meaning that Users, Groups etc. defined in a sub-realm, are also defined on a parent realm. Combined with Roles (see below), realms facilitate some powerful access management scenarios.

        5) Groups/Roles

        In Syncope 2.0.0, what were referred to as "roles" in Syncope 1.2.x are now called "groups". In addition, "roles" in Syncope 2.0.0 are a new concept which associate a number of entitlements with a number of realms. Users assigned to a role can exercise the defined entitlements on any of the objects in the given realms (and any sub-realms).

        Syncope 2.0.0 also has the powerful concept of dynamic membership, which means that users can be assigned to groups or roles via a conditional expression (e.g. if an attribute matches a given value).

        6) Apache Camel Provisioning

        An exciting new feature of Apache Syncope 2.0.0 is the new Apache Camel provisioning engine, which is available under "Extensions/Camel Routes" in the Console. Apache Syncope comes pre-loaded with some Camel routes that are executed as part of the provisioning implementation for Users, Groups and Any Objects. The real power of this new engine lies in the ability to modify the routes to perform some custom provisioning rules. For example, on creating a new user, you may wish to send an email to an administrator. Or if a user is reactivated, you may wish to reactivate the user's home page on a web server. All these things and more are possible using the myriad of components that are available to be used in Apache Camel routes. I'll explore this feature some more in future blog posts.
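To illustrate the idea, a custom step appended to a route might look like this. This is a hypothetical fragment using Camel's XML DSL; the route id and endpoint URIs are placeholders, not Syncope's actual shipped routes:

```xml
<route id="createUserWithNotification">
  <from uri="direct:createUser"/>
  <!-- the default provisioning step(s) shipped with Syncope would sit here -->
  <to uri="log:provisioning?level=INFO"/>
  <!-- custom rule: notify an administrator that a new user was created -->
  <to uri="smtp://mail.example.com?to=admin@example.com"/>
</route>
```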

        7) End-User UI

        As well as the Admin console (available via /syncope-console), Apache Syncope 2.0.0 also ships with an Enduser console (available via /syncope-enduser). This allows a user to edit only details pertaining to his/her-self, such as editing the user attributes, changing the password, etc. See the following blog entry for more information on the new End-User UI.

        8) Command Line Interface (CLI) client

        Another new feature of Apache Syncope 2.0.0 is that of the CLI client. It is available as a separate download. Once downloaded, extract it and run (on linux): ./ install --setup. Answer the questions about where Syncope is deployed and the credentials required to access it. After installation, you can run queries such as: ./ user --list.

        9) Apache CXF-based testcases

        I updated the testcases that I wrote previously to use Apache Syncope 2.0.0 to authenticate and authorize web services calls using Apache CXF. The new test-cases are available here.
        Categories: Colm O hEigeartaigh

        CXF Spring Boot Starters Unveiled

        Sergey Beryozkin - Mon, 08/08/2016 - 23:51
        The very first check some new users may do these days, while evaluating your JAX-RS implementation, can be: how well is it integrated into SpringBoot ?

        And the good news is that Apache CXF 3.1.7 users can start working with SpringBoot real fast.
        We left it somewhat late; it is sometimes hard to prioritize among various new requirements, and we saw some users moving away. In such cases community support is paramount, and the Power of Open Source Collaboration came to the rescue once again when it was really needed.

        I'd like to start with thanking James for providing an initial set of links to various SpringBoot documentation pages and reacting positively to the initial code we had. But you know yourself - sometimes we all value some little 'starters' - the initial code contributions :-)

        And then we had a Spring Boot expert coming in and getting the process moving. Vedran Pavic helped me to create the auto-configuration and starter modules for JAX-RS and JAX-WS, patiently explained how his initial contribution works, how these modules have to be designed, and helped with the advice throughout the process. I felt like I passed some SpringBoot qualification exam once we were finished which let me continue enhancing the JAX-RS starter independently before CXF 3.1.7 was released.

        CXF Spring Boot starters are now documented at this page which is also linked to from a Spring Boot README listing the community contributions.

        If you are working with CXF JAX-RS then do check this section. See the demos and get excited about the ease with which you can enable JAX-RS endpoints, their Swagger API docs (and auto-link Swagger UI - the topic of the next post).

        See how you can run your CXF WebClient or proxy clients in Spring Boot, initialized if needed from the metadata found in a Netflix Eureka registry. The demo code on master uses a CXF CircuitBreakerFailoverFeature written by a legendary DevMind - a sound, simple and light-weight Apache Zest based implementation.
        Not all users may realize how flexible the CXF Failover Feature is.

        While the most effort went into a JAX-RS starter I'm sure we will add more support for JAX-WS users too.

        We'll need to do a bit more work - link CXF statistics to the actuator endpoints, support scanning JAX-RS Applications and a few other things.

        If you prefer working with Spring Boot: be certain that a second to none support for running CXF services in Spring Boot will be there. Enjoy!

        Categories: Sergey Beryozkin

        Installing the Apache Ranger Key Management Server (KMS)

        Colm O hEigeartaigh - Mon, 08/08/2016 - 13:40
        The previous couple of blog entries have looked at how to install the Apache Ranger Admin Service as well as the Usersync Service. In this post we will look at how to install the Apache Ranger Key Management Server (KMS). KMS is a component of Apache Hadoop to manage cryptographic keys. Apache Ranger ships with its own KMS implementation, which allows you to store the (encrypted) keys in a database. The Apache Ranger KMS is also secured via policies defined in the Apache Ranger Admin Service.

        1) Build the source code

        The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting KMS archive to a location where you wish to install it:
        • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
        • cd apache-ranger-incubating-0.6.0
        • mvn clean package assembly:assembly 
        • tar zxvf target/ranger-0.6.0-kms.tar.gz
        • mv ranger-0.6.0-kms ${rangerkms.home}
        2) Install the Apache Ranger KMS Service

        As the Apache Ranger KMS Service stores the cryptographic keys in a database, we will need to setup and configure a database. We will also configure the KMS Service to store audit logs in the database. Follow the steps given in section 2 of the tutorial on the Apache Ranger Admin Service to set up MySQL. We will also need to create a new user 'rangerkms':
        • CREATE USER 'rangerkms'@'localhost' IDENTIFIED BY 'password';
        You will need to install the Apache Ranger KMS Service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerkms.home}/ and add it in, e.g.:
        • export JAVA_HOME=/opt/jdk1.8.0_91
        Next edit ${rangerkms.home}/ and make the following changes:
        • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar (see previous tutorial).
        • Set (db_root_user/db_root_password) to (admin/password)
        • Set (db_user/db_password) to (rangerkms/password)
        • Change KMS_MASTER_KEY_PASSWD to a secure password value.
        • Set POLICY_MGR_URL=http://localhost:6080
        • Set XAAUDIT.DB.IS_ENABLED=true
        • Set XAAUDIT.DB.HOSTNAME=localhost 
        • Set XAAUDIT.DB.DATABASE_NAME=ranger_audit 
        • Set XAAUDIT.DB.USER_NAME=rangerlogger
        • Set XAAUDIT.DB.PASSWORD=password
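Taken together, the edited properties might look like this (all passwords and the connector jar path are placeholders):

```properties
SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
db_root_user=admin
db_root_password=password
db_user=rangerkms
db_password=password
KMS_MASTER_KEY_PASSWD=StrongMasterKeyPassword
POLICY_MGR_URL=http://localhost:6080
XAAUDIT.DB.IS_ENABLED=true
XAAUDIT.DB.HOSTNAME=localhost
XAAUDIT.DB.DATABASE_NAME=ranger_audit
XAAUDIT.DB.USER_NAME=rangerlogger
XAAUDIT.DB.PASSWORD=password
```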
        Now you can run the setup script via "sudo ./".

        3) Starting the Apache Ranger KMS service

        After a successful installation, first start the Apache Ranger admin service with "sudo ranger-admin start". Then start the Apache Ranger KMS Service via "sudo ranger-kms start". Now open a browser and go to "http://localhost:6080/". Log on with "keyadmin/keyadmin". Note that these are different credentials to those used to log onto the Apache Ranger Admin UI in the previous tutorial. Click on the "+" button on the "KMS" tab to create a new KMS Service. Specify the following values:
        • Service Name: kmsdev
        • KMS URL: kms://http@localhost:9292/kms
        • Username: keyadmin
        • Password: keyadmin
        Click on "Test Connection" to make sure that the KMS Service is up and running. If it is showing a connection failure, log out and log into the Admin UI using credentials "admin/admin". Go to the "Audit" section and click on "Plugins". You should see a successful message indicating that the KMS plugin can successfully download policies from the Admin Service:

        After logging back in to the UI as "keyadmin" you can start to create keys. Click on the "Encryption/Key Manager" tab. Select the "kmsdev" service in the dropdown list and click on "Add New Key". You can create, delete and rollover keys in the UI:

        Categories: Colm O hEigeartaigh

        Apache Fediz with Client Certificate Authentication (X.509)

        Jan Bernhardt - Thu, 08/04/2016 - 12:25
        In this blog post I will explain how to generate your own SSL key-pair to perform certificate based authentication for SSO purposes with Apache Fediz IDP.
        Client Key Authentication

        Generate Key-Pair

        I like to use the KeyStore Explorer under Windows, because it makes certificate management very easy. You don't have to look up console commands; instead you get nice wizards to get it all done. If you are running Linux I can recommend this page to you, because it contains the most common Java keytool commands you will need.

        After starting KeyStore Explorer, create a new keystore (PKCS #12). Next click "Generate Key Pair". RSA with 2,048 bits should be fine. Now you should enter your name, and after that click on extensions to define an "Extended Key Usage" of "TLS Web Client Authentication":

        Make sure that this extension flag is really set for your key-pair. I first tried without this extension and could not get any of my browsers to even show me a certificate selection popup when authenticating against the IDP.
        Since you will have to import your personal certificate into the IDP truststore later on, I recommend exporting your public certificate at this step:

        Import Key-Pair to your Browser

        Once your key generation was successful, you need to add this key-pair to your browser:

        In Chrome you need to open your settings -> extended settings ->  HTTPS/SSL -> Manage Certificates -> Import select your p12 certificate and make sure that all extensions from the certificate are included:

        Chrome and IE use the same certificate store, so there is no need to do this twice if you have already done it for one of the two.

        For Firefox you need to go to Options -> Advanced -> Certificates -> View Certificates -> Your Certificates -> Import

        I had to restart my machine before my browsers would show me the option to select my certificates for client authentication. Some articles on the internet also recommend adding the IDP URL to your list of trusted sites in Internet Explorer.

        Setup Fediz IDP

        You can find full IDP / web application setup instructions in one of my previous articles. In this article I will only highlight the steps related to SSL client authentication.

        Add SSL support to your tomcat conf/server.xml:
        <Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
                   maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
                   clientAuth="want" sslProtocol="TLS" />
        If you want all clients to authenticate with a client SSL Certificate against your IDP you must set the clientAuth attribute to "true" instead of "want". However if you want to support multiple authentication styles even without a client certificate you should set clientAuth to "want".
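For reference, a fuller sketch of such a connector, including the keystore/truststore attributes Tomcat needs to present a server certificate and validate client certificates (file names and passwords are placeholders; idp-ssl-trust.jks matches the truststore mentioned below):

```xml
<Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="want" sslProtocol="TLS"
           keystoreFile="idp-ssl-key.jks" keystorePass="changeit"
           truststoreFile="idp-ssl-trust.jks" truststorePass="changeit" />
```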

        Open your idp-ssl-trust.jks with your keystore-explorer to import your personal certificate from your desktop (see previous export step above).
        Validate SetupOpen your browser to the Fediz Hello World page: https://localhost:9443/fediz-idp/. Your browser should show you a selection popup for your client certificate:

        If you imported this certificate correctly to your tomcat IDP truststore you should now see a "Hello World!" welcome page from Fediz.

        Please also take a look at Colm's blog about this topic.
        Categories: Jan Bernhardt

        [OT] Reuse Or Reimplement ?

        Sergey Beryozkin - Wed, 08/03/2016 - 18:06
        I said in one of my earlier posts that I'd share some thoughts I've had over the years on re-using vs re-implementing while working on various CXF projects. Some of it may be a bit CXF specific, but most of it might be of interest to non-CXF developers too.

        When the time comes to implement a new feature the immediate decision that needs to be taken is how you do it. In general it is always a good idea to re-use a quality 3rd party library that can help in realizing the new feature fast.

        Consider a task of providing a UI interface to have Swagger JSON documents nicely presented. One can invest the time and write UI scripts and pages. Or one can download a well-known Swagger UI module.

        Another example: one needs a collection sort algorithm implementation which will run faster than the Java Collections code. One can invest the time and write a new library, or look around and try an Apache or Google library.

        In both cases re-using the existing solution will most likely be better and help deliver the higher-level, complete offering faster.

        Things may get more complicated when one works on a project in a competitive space. For example, at some point there were 6 active JAX-RS Java implementation projects, with other non-JAX-RS implementations, such as the one offered by Spring, adding to the total number.

        When you work on a project like that, a number of important decisions need to be made: how complete would you like your project to be ? Is supporting HTTP verbs and reading and writing the data all that is needed ? What sort of security support around the service would you like to provide ? What other extensions should your project have ? How would you like your project to be recognized - as a library, or as something bigger that offers all sorts of relevant support for people writing HTTP services ?

        The higher the 'ambitions' of such a project, the more likely 're-implementing' becomes a viable option, nearly a necessity in some cases. In fact, re-implementing goes on all the time in such projects.

        I've been involved in a fair number of re-implementation projects.

        To start with, we began implementing JAX-RS at a time when Jersey was already riding high. Why ? To keep Apache CXF open to users with different preferences on how to do HTTP services. It was hard at times, but it was really never simply because we wanted to prove we could do it.

        The latest 're-implementation' was JOSE. Why ? I won't deny I was keen to work more closely with the low-level security code, but overall I wanted the CXF security story to be more complete. Implementing it, versus re-using the quality libraries I listed on the wiki, let us tune and re-work the implementation to be better integrated with the JAX-RS and core security support so many times that this would have been highly unlikely if I had been working with a 3rd party library.

I do not think re-implementing in an open way is unhealthy. For example, it has been acknowledged that having many JAX-RS implementations around helped make JAX-RS more popular. Re-implementing may offer more options to users.

Then again, re-implementing can prove a complete waste of time. Here are some basic 'guidelines' if you decide to try to re-implement in open source:
- think not twice but many times before you try it
- if you feel the urge then do it, get the experience, make the mistakes; next time you will make the best choice
- never expect that once you re-implement something, everyone will stop using what they already use and switch to what you have written; a lot of clever developers are working full time
- if you'd like others to use your project then you absolutely must love working with the users; don't even start if you think it can be left to Customer Support
- you need to have the support of your colleagues
- expect that the only 'remuneration' you will get is the non-stop work of keeping the project constantly evolving

        Yes, very often re-using may be the very best thing :-)

        Enjoy, Happy Re-Using, Happy Re-Implementing :-)


        Categories: Sergey Beryozkin

        Syncing users and groups from LDAP into Apache Ranger

        Colm O hEigeartaigh - Fri, 07/22/2016 - 16:37
The previous post covered how to install the Apache Ranger Admin service. The Apache Ranger Admin UI supports creating authorization policies for various Big Data components, by giving users and/or groups permissions on resources. This means that we need to import users/groups into the Apache Ranger Admin service from some backend service in order to create meaningful authorization policies. Apache Ranger supports syncing users into the Admin service from both Unix and LDAP. In this post, we'll look at syncing users and groups from an OpenDS LDAP backend.

        1) The OpenDS backend

        For the purposes of this tutorial, we will use OpenDS as the LDAP server. It contains a domain called "dc=example,dc=com", and 5 users (alice/bob/dave/oscar/victor) and 2 groups (employee/manager). Victor, Oscar and Bob are employees, Alice and Dave are managers. Here is a screenshot using Apache Directory Studio:
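The screenshot itself is not reproduced here, but the directory data can be sketched in LDIF form. The exact DNs, object classes, and attributes below are assumptions based on the description (a typical `inetOrgPerson`/`groupOfNames` layout under `ou=users` and `ou=groups`), not taken from the screenshot:

```ldif
# Assumed layout matching the SYNC_LDAP_USER_SEARCH_BASE and
# SYNC_GROUP_SEARCH_BASE settings used later in this post.
dn: uid=alice,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
uid: alice
cn: alice
sn: alice

dn: cn=manager,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: manager
member: uid=alice,ou=users,dc=example,dc=com
member: uid=dave,ou=users,dc=example,dc=com

dn: cn=employee,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: employee
member: uid=bob,ou=users,dc=example,dc=com
member: uid=oscar,ou=users,dc=example,dc=com
member: uid=victor,ou=users,dc=example,dc=com
```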

        2) Build the Apache Ranger usersync module

        Follow the steps in the previous tutorial to build Apache Ranger and to setup and start the Apache Ranger Admin service. Once this is done, go back to the Apache Ranger distribution that you have built and copy the usersync module:
        • tar zxvf target/ranger-0.6.0-usersync.tar.gz
        • mv ranger-0.6.0-usersync ${usersync.home}
        3) Configure and build the Apache Ranger usersync service 

You will need to install the Apache Ranger Usersync service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${usersync.home}/ and add, e.g.:
        • export JAVA_HOME=/opt/jdk1.8.0_91
        Next edit ${usersync.home}/ and make the following changes:
        • POLICY_MGR_URL = http://localhost:6080
        • SYNC_SOURCE = ldap
• SYNC_INTERVAL = 1 (just for testing purposes)
        • SYNC_LDAP_URL = ldap://localhost:2389
        • SYNC_LDAP_BIND_DN = cn=Directory Manager,dc=example,dc=com
        • SYNC_LDAP_BIND_PASSWORD = test
        • SYNC_LDAP_SEARCH_BASE = dc=example,dc=com
        • SYNC_LDAP_USER_SEARCH_BASE = ou=users,dc=example,dc=com
        • SYNC_GROUP_SEARCH_BASE=ou=groups,dc=example,dc=com
        Now you can run the setup script via "sudo ./". 

        4) Start the Usersync service

        The Apache Ranger Usersync service can be started via "sudo ./ start". After 1 minute (see SYNC_INTERVAL above), it should successfully copy the users/groups from the OpenDS backend into the Apache Ranger Admin. Open a browser and go to "http://localhost:6080", and click on "Settings" and then "Users/Groups". You should see the users and groups synced successfully from OpenDS.

        Categories: Colm O hEigeartaigh

        Karaf JDBC JAAS Module

        Jan Bernhardt - Wed, 07/20/2016 - 17:58
Karaf relies on JAAS for user authentication. JAAS makes it possible to plug in multiple modules for this purpose. By default Karaf uses the karaf realm with a JAAS module that gets its user and role information from a property file: runtime/etc/

In this blog post I will show you how to use the Karaf JAAS console commands and how to set up a JDBC module to authenticate against a database.

All code was tested on Karaf version 4.0.3.

JDBC Setup

Register Datasource

First you need to install the Karaf JDBC feature:
        karaf@trun()> feature:install jdbc
        karaf@trun()> feature:install pax-jdbc-derby
        Next you can create a new Datasource:
karaf@root()> jdbc:ds-create -dn derby -url "jdbc:derby:users;create=true" -u db_admin users

With the -dn derby option you define a datasource of type derby. Alternatively, you could also use generic, oracle, mysql, postgres, h2 or hsql as your datasource type. Please make sure to also install the matching pax-jdbc feature for your datasource type.
The -u db_admin option defines the datasource username. Finally, users is the datasource name.
Add sample data

jdbc:execute users CREATE TABLE users ( username VARCHAR(255) PRIMARY KEY NOT NULL, password VARCHAR(255) NOT NULL );
jdbc:execute users CREATE TABLE roles ( username VARCHAR(255) NOT NULL, role VARCHAR(255) NOT NULL, PRIMARY KEY (username,role) );
jdbc:execute users INSERT INTO users values('alice','e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4');
jdbc:execute users INSERT INTO roles values('alice','manager');

Validate your input:

karaf@trun()> jdbc:query users SELECT * FROM roles
manager | alice

JAAS Console Commands

Karaf provides some nice console commands to manage your JAAS realms.
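Note that the password value inserted for alice above is not plaintext but a hex-encoded SHA-1 digest, which matches the encryption.algorithm = SHA1 and encryption.encoding = hexadecimal settings configured later in this post. A quick Python sketch, assuming the plaintext is the "secret" password used for alice elsewhere in this post:

```python
import hashlib

# The users table stores hex-encoded SHA-1 digests rather than plaintext
# passwords; the login module hashes the supplied password the same way
# before comparing it to the stored column value.
stored = "e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4"
digest = hashlib.sha1("secret".encode("utf-8")).hexdigest()
print(digest == stored)  # prints True
```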
List JAAS realms with assigned modules

karaf@trun()> jaas:realm-list
Index | Realm Name | Login Module Class Name
1     | karaf      |
2     | karaf      | org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule
3     | karaf      | org.apache.karaf.jaas.modules.audit.FileAuditLoginModule
4     | karaf      | org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule

List users and assigned roles

karaf@trun()> jaas:realm-manage --realm karaf

        karaf@trun()> jaas:user-list
        User Name | Group      | Role
        tadmin    | admingroup | admin
        tadmin    | admingroup | manager
        tadmin    | admingroup | viewer
        tadmin    | admingroup | systembundles
        tadmin    |            | sl_admin
        tesb      | admingroup | admin
        tesb      | admingroup | manager
        tesb      | admingroup | viewer
        tesb      | admingroup | systembundles
        tesb      |            | sl_maintain
        karaf     | admingroup | admin
        karaf     | admingroup | manager
        karaf     | admingroup | viewer
        karaf     | admingroup | systembundles
karaf@trun()> jaas:cancel

Adding a user

karaf@trun()> jaas:realm-manage --realm karaf
        karaf@trun()> jaas:user-add alice secret
        karaf@trun()> jaas:update
        If you execute "List users" again you will see alice added to the realm. You will also find alice added to the file.
Install JDBC JAAS Module

Register Module

Create a file db_jaas.xml within the deploy folder of your Karaf installation:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0">

  <!-- Allow usage of System properties, especially the karaf.base property -->
  <ext:property-placeholder placeholder-prefix="$[" placeholder-suffix="]"/>

  <!-- AdminConfig property place holder for the org.apache.karaf.jaas -->
  <cm:property-placeholder persistent-id="org.apache.karaf.jaas.db" update-strategy="reload">
    <cm:default-properties>
      <cm:property name="encryption.name" value="basic"/>
      <cm:property name="encryption.enabled" value="true"/>
      <!--cm:property name="encryption.prefix" value="{CRYPT}"/>
      <cm:property name="encryption.suffix" value="{CRYPT}"/-->
      <cm:property name="encryption.algorithm" value="SHA1"/>
      <cm:property name="encryption.encoding" value="hexadecimal"/>
      <cm:property name="detailed.login.exception" value="true"/>
      <cm:property name="audit.file.enabled" value="true"/>
      <cm:property name="audit.file.file" value="$[karaf.base]/security/audit.log"/>
      <cm:property name="audit.eventadmin.enabled" value="true"/>
      <cm:property name="audit.eventadmin.topic" value="org/apache/karaf/login"/>
    </cm:default-properties>
  </cm:property-placeholder>

  <jaas:config name="karaf" rank="10">
    <jaas:module className="org.apache.karaf.jaas.modules.jdbc.JDBCLoginModule" flags="required">
      datasource = osgi:javax.sql.DataSource/(osgi.jndi.service.name=users)
      insert.user = INSERT INTO USERS VALUES(?,?)
      insert.role = INSERT INTO ROLES VALUES(?,?)
      delete.user = DELETE FROM USERS WHERE USERNAME=?
      delete.roles = DELETE FROM ROLES WHERE USERNAME=?
      encryption.enabled = ${encryption.enabled}
      encryption.name = ${encryption.name}
      encryption.algorithm = ${encryption.algorithm}
      encryption.encoding = ${encryption.encoding}
      detailed.login.exception = ${detailed.login.exception}
    </jaas:module>
    <jaas:module className="org.apache.karaf.jaas.modules.audit.FileAuditLoginModule" flags="optional">
      enabled = ${audit.file.enabled}
      file = ${audit.file.file}
    </jaas:module>
    <jaas:module className="org.apache.karaf.jaas.modules.audit.EventAdminAuditLoginModule" flags="optional">
      enabled = ${audit.eventadmin.enabled}
      topic = ${audit.eventadmin.topic}
    </jaas:module>
  </jaas:config>

</blueprint>

By adding a configuration file org.apache.karaf.jaas.db.cfg to your etc folder you will be able to update the configuration of your JAAS bundle during runtime:
encryption.enabled = true
encryption.name = basic
encryption.algorithm = SHA1
encryption.encoding = hexadecimal
detailed.login.exception = true

Now you can log in to Karaf via SSH with your alice DB user:
ssh -p 8101 alice@localhost

The password will be: secret
        Categories: Jan Bernhardt

        Installing the Apache Ranger Admin UI

        Colm O hEigeartaigh - Tue, 07/19/2016 - 14:22
Apache Ranger 0.6 has been released, featuring new support for securing Apache Atlas and NiFi, as well as a huge number of bug fixes. The easiest way to get started with Apache Ranger is to download a big data sandbox with Ranger pre-installed. However, the most flexible way is to grab the Apache Ranger source and to build and deploy the artifacts yourself. In this tutorial, we will look at building Apache Ranger from source, setting up a database to store policies/users/groups/etc. as well as Ranger audit information, and deploying the Apache Ranger Admin UI.

        1) Build the source code

        The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting admin archive to a location where you wish to install the UI:
        • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
        • cd apache-ranger-incubating-0.6.0
        • mvn clean package assembly:assembly 
        • tar zxvf target/ranger-0.6.0-admin.tar.gz
        • mv ranger-0.6.0-admin ${rangerhome}
        2) Install MySQL

        The Apache Ranger Admin UI requires a database to keep track of users/groups as well as policies for various big data projects that you are securing via Ranger. In addition, we will use the database for auditing as well. For the purposes of this tutorial, we will use MySQL. Install MySQL in $SQL_HOME and start MySQL via:
        • sudo $SQL_HOME/bin/mysqld_safe --user=mysql
        Now you need to log on as the root user and create three users for Ranger. We need a root user with admin privileges (let's call this user "admin"), a user for the Ranger Schema (we'll call this user "ranger"), and finally a user to store the Ranger audit logs in the DB as well ("rangerlogger"):
        • CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';
• GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
        • CREATE USER 'ranger'@'localhost' IDENTIFIED BY 'password';
        • CREATE USER 'rangerlogger'@'localhost' IDENTIFIED BY 'password'; 
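The Ranger setup script connects as the root "admin" user and will typically create the schemas and assign the necessary privileges itself, so manual grants are usually not needed. If your environment requires explicit grants, a minimal sketch might look as follows; the database names (ranger, ranger_audit) are assumptions matching the audit_db_name setting used in the next step:

```sql
-- Optional: grant the schema and audit users access explicitly.
GRANT ALL PRIVILEGES ON ranger.* TO 'ranger'@'localhost';
GRANT ALL PRIVILEGES ON ranger_audit.* TO 'rangerlogger'@'localhost';
FLUSH PRIVILEGES;
```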
        Finally,  download the JDBC driver jar for MySQL and put it in ${rangerhome}.

        3) Install the Apache Ranger Admin UI

You will need to install the Apache Ranger Admin UI using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerhome}/ and add, e.g.:
        • export JAVA_HOME=/opt/jdk1.8.0_91
        Next edit ${rangerhome}/ and make the following changes:
        • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar that you downloaded above.
        • Set (db_root_user/db_root_password) to (admin/password)
        • Set (db_user/db_password) to (ranger/password)
        • Change "audit_store" from "solr" to "db"
        • Set "audit_db_name" to "ranger_audit"
        • Set (audit_db_user/audit_db_password) to (rangerlogger/password).
        Now you can run the setup script via "sudo ./".

        4) Starting the Apache Ranger admin service

After a successful installation, we can start the Apache Ranger admin service with "sudo ${rangerhome}/ews/". Now open a browser and go to "http://localhost:6080/". Log on with "admin/admin" and you should be able to create authorization policies for a desired big data component.

        Categories: Colm O hEigeartaigh

