Latest Activity

Apache Karaf Tutorial Part 6 - Database Access

Christian Schneider - Tue, 07/28/2015 - 11:13

Blog post edited by Christian Schneider

Shows how to access databases from OSGi applications running in Karaf, and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally, JDBC and JPA examples show how to use such a DataSource from user code.

Prerequisites

You need an installation of Apache Karaf 3.0.3 for this tutorial.

Example sources

The example projects are on GitHub in Karaf-Tutorial/db.

Drivers and DataSources

In plain Java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work, as the ClassLoader of your bundle has no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver and publish it as an OSGi service. The user bundle then only uses the DataSource without knowing the driver specifics. This is quite similar to the best practice in application servers, where the DataSource is managed by the server and published to JNDI.

So we need to learn how to create and use DataSources first.
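To give an idea of where this is going: a user bundle would typically bind such a DataSource service with a blueprint reference like the following sketch (the filter value and the bean class are illustrative, not from the example project):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- binds the DataSource service published elsewhere; blueprint waits until it is available -->
    <reference id="dataSource" interface="javax.sql.DataSource"
               filter="(dataSourceName=person)" />
    <bean id="myDao" class="com.example.MyDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
</blueprint>
```

The consuming code then only depends on the javax.sql.DataSource interface.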

The DataSourceFactory services

To make it easier to create DataSources in OSGi, the specs define a DataSourceFactory interface. It allows creating a DataSource for a specific driver from properties. Each database driver is expected to implement this interface and publish it with properties for the driver class name and the driver name.

Introducing pax-jdbc

The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

  • Implement the DataSourceFactory service for Databases that do not create this service directly
  • Implement a pooling and XA wrapper for XADataSources (This is explained at the pax jdbc docs)
  • Provide a facility to create DataSource services from config admin configurations
  • Provide karaf features for many databases as well as for the above additional functionality

So it covers everything you need from driver installation to creation of production quality DataSources.

Installing the driver

The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

For several databases pax-jdbc already provides Karaf features that install a current version of the database driver.

For H2 the following commands will work:

feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
service:list DataSourceFactory

Strictly speaking we would only need the pax-jdbc-h2 feature, but we will need the others for the next steps.

This installs the pax-jdbc feature repository and the H2 database driver. The driver already implements the DataSourceFactory service, so the last command will display it.

DataSourceFactory
[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
 osgi.jdbc.driver.class = org.h2.Driver
 osgi.jdbc.driver.name = H2
 osgi.jdbc.driver.version = 1.3.172
 service.id = 691
Provided by :
 H2 Database Engine (68)

The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

pooled and XA DataSourceFactory
[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
 osgi.jdbc.driver.class = org.h2.Driver
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jdbc.driver.version = 1.3.172
 pooled = true
 service.id = 694
 xa = true
Provided by :
 OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

Technically this DataSourceFactory also creates DataSource objects, but internally these manage XA support and pooling. So we want to use this one for our later code examples.

Creating the DataSource

Now we just need to create a configuration with the correct factory pid to create a DataSource as a service.

So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content:

osgi.jdbc.driver.name=H2-pool-xa
url=jdbc:h2:mem:person
dataSourceName=person

The config will automatically trigger the pax-jdbc-config module to create a DataSource.

  • The property osgi.jdbc.driver.name=H2-pool-xa selects the H2 DataSourceFactory with pooling and XA support that we installed before.
  • The url configures H2 to create a simple in-memory database named person.
  • The dataSourceName will be reflected in a service property of the DataSource so we can find it later.
  • You could also set pooling configurations in this config, but we leave the defaults here.

karaf@root()> service:list DataSource
[javax.sql.DataSource]
----------------------
 dataSourceName = person
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = person
 service.factoryPid = org.ops4j.datasource
 service.id = 696
 service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
 url = jdbc:h2:mem:person
Provided by :
 OPS4J Pax JDBC Config (69)

So when we search for services implementing the DataSource interface, we find the person DataSource we just created.

When we installed the features above, we also installed the Aries jndi feature. This module maps OSGi services to JNDI objects, so we can also use JNDI to retrieve the DataSource; this will be used in the persistence.xml for JPA later.

The JNDI URL of the DataSource is osgi:service/person.

Karaf jdbc commands

Karaf contains some commands to manage DataSources and run queries on databases. The commands for managing DataSources in Karaf 3.x still work with the older approach of using blueprint to create DataSources, so we will not use those. We can, however, use the commands to list datasources, list tables and execute queries.

feature:install jdbc
jdbc:datasources
jdbc:tables person

We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
jdbc:query person "select * from person"

This creates a table person, adds a row to it and shows the table.

The output should look like this:

select * from person
NAME                | TWITTERNAME
--------------------------------------
Christian Schneider | @schneider_chris

Accessing the database using JDBC

The project db/examplejdbc shows how to use the DataSource we installed and execute JDBC commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class DbExample. The test method is then called as init method and runs some JDBC statements on the DataSource. The DbExample class is completely independent of OSGi and can easily be tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.
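The JDBC code itself only depends on the javax.sql.DataSource interface. The following self-contained sketch shows the pattern (class and method names are mine, not taken from the example project); a small reflection-proxy fake stands in for the real DataSource so the snippet runs without any driver:

```java
import javax.sql.DataSource;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JdbcSketch {

    // User code in the DbExample style: it only sees javax.sql.DataSource.
    // Which driver sits behind the interface (H2, PostgreSQL, ...) is irrelevant.
    public static List<String> readNames(DataSource ds) {
        List<String> names = new ArrayList<>();
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select name from person")) {
            while (rs.next()) {
                names.add(rs.getString(1));
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
        return names;
    }

    // Tiny reflection-proxy stand-in so the sketch runs without any database.
    // In Karaf the DataSource OSGi service injected via blueprint takes this role.
    public static DataSource fakeDataSource(List<String> rows) {
        ClassLoader cl = DataSource.class.getClassLoader();
        InvocationHandler rsH = new InvocationHandler() {
            private int i = -1;
            public Object invoke(Object p, Method m, Object[] a) {
                if (m.getName().equals("next")) return ++i < rows.size();
                if (m.getName().equals("getString")) return rows.get(i);
                return null; // close() and friends
            }
        };
        InvocationHandler stH = (p, m, a) -> m.getName().equals("executeQuery")
                ? Proxy.newProxyInstance(cl, new Class<?>[]{ResultSet.class}, rsH) : null;
        InvocationHandler conH = (p, m, a) -> m.getName().equals("createStatement")
                ? Proxy.newProxyInstance(cl, new Class<?>[]{Statement.class}, stH) : null;
        return (DataSource) Proxy.newProxyInstance(cl, new Class<?>[]{DataSource.class},
                (p, m, a) -> m.getName().equals("getConnection")
                        ? Proxy.newProxyInstance(cl, new Class<?>[]{Connection.class}, conH) : null);
    }

    public static void main(String[] args) {
        // prints: [Christian Schneider]
        System.out.println(readNames(fakeDataSource(Arrays.asList("Christian Schneider"))));
    }
}
```

In the real example the blueprint container injects the H2-backed DataSource instead of the fake, and the code stays exactly the same.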

Build and install

The build works as always using maven:

> mvn clean install

In Karaf we just need to install our own bundle, as we have no special dependencies:

> install -s
Using datasource H2, URL jdbc:h2:~/test
Christian Schneider, @schneider_chris,

After installation the bundle should directly print the db info and the persisted person.

Accessing the database using JPA

For larger projects, JPA is often used instead of hand-crafted SQL. Using JPA has two big advantages over JDBC:

  1. You need to maintain less SQL code.
  2. JPA provides dialects for the subtle differences between databases that you would otherwise have to handle yourself.

For this example we use Hibernate as the JPA implementation. On top of it we add Apache Aries JPA, which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

The project examplejpa shows a simple project that implements a PersonService managing Person objects. Person is just a Java bean annotated with JPA @Entity.

Additionally, the project implements two Karaf shell commands, person:add and person:list, that make it easy to test the project.


Like in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings, and lists the persistent classes. The DataSource is referred to using the JNDI name "osgi:service/person".
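The relevant part of such a persistence.xml looks roughly like this (the persistence unit name matches the example; everything else is a sketch):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="person" transaction-type="JTA">
        <!-- resolved through Aries JNDI to the DataSource OSGi service -->
        <jta-data-source>osgi:service/person</jta-data-source>
        <!-- plus the <class> entries for the persistent classes -->
    </persistence-unit>
</persistence>
```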

The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven-bundle-plugin in the pom. The Aries JPA container will scan for this attribute and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.
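In the pom this boils down to an instruction for the maven-bundle-plugin; a sketch, assuming the persistence.xml is packaged under META-INF:

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- tells the Aries JPA container where to find the persistence unit -->
            <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
        </instructions>
    </configuration>
</plugin>
```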


We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
The following snippet is the interesting part:

<bean id="personService" class="">
    <jpa:context property="em" unitname="person" />
    <tx:transaction method="*" value="Required" />
</bean>

This looks up the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread-safe EntityManager (using a ThreadLocal under the hood) into the PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.

Build and Install

> mvn clean install

A prerequisite is that the H2 DataSource is installed as described above. Then we have to install the bundles for Hibernate, Aries JPA, transaction, jndi and of course our db-examplejpa bundle.
See ReadMe.txt for the exact commands to use.

Test

person:add 'Christian Schneider' @schneider_chris

Then we list the persisted persons:

karaf@root> person:list
Christian Schneider, @schneider_chris

Summary

In this tutorial we learned how to work with databases in Apache Karaf. We installed drivers for our database and created a DataSource. We were able to check and manipulate the DataSource using the jdbc:* commands. In the examplejdbc we learned how to acquire a DataSource and work with it using plain JDBC. Last but not least we also used JPA to access our database.

Back to Karaf Tutorials

Categories: Christian Schneider

(Slightly) Faster WS-Security using MTOM in Apache CXF 3.1.2

Colm O hEigeartaigh - Fri, 07/17/2015 - 17:31
A recent issue was reported at Apache CXF to do with the inability to process certain WS-Security requests that were generated by Metro or .NET when MTOM was enabled. In this case, Metro and .NET avoid BASE-64 encoding bytes and inserting them directly into the message (e.g. for BinarySecurityTokens or the CipherValue data associated with EncryptedData or EncryptedKey Elements). Instead the raw bytes are stored in a message attachment, and referred to in the message via xop:Include. Support for processing these types of requests has been added for WSS4J 2.0.5 and 2.1.2.

In addition, CXF 3.1.2 now has the ability to avoid the BASE-64 encoding step when creating requests when MTOM is enabled, something that we will look at in this post. The advantage is that it is marginally more efficient, as it avoids BASE-64 encoding on the sending side and BASE-64 decoding on the receiving side.

1) Storing message bytes in attachments in WSS4J

A new WSS4J configuration property has been added in WSS4J 2.0.5/2.1.2 to support storing message bytes in attachments. This property is used when configuring WS-Security via the "action" based approach in CXF:
  • storeBytesInAttachment: Whether to store bytes (CipherData or BinarySecurityToken) in an attachment. The default is false, meaning that bytes are BASE-64 encoded and "inlined" in the message.
WSS4J is stack-neutral, meaning that it has no concept of what a message attachment actually is. So for this to work, a CallbackHandler must be set on the RequestData Object that knows how to retrieve attachments, as well as write modified/new attachments out. If you are using Apache CXF then this is taken care of for you automatically.

There is another configuration property that is of interest on the receiving side:
  • expandXOPIncludeForSignature: Whether to expand xop:Include Elements encountered when verifying a Signature. The default is true, meaning that the relevant attachment bytes are BASE-64 encoded and inserted into the Element. This ensures that the actual bytes are signed, and not just the reference.
So for example, if an encrypted SOAP Body is signed, the default behaviour is to expand the xop:Include Element to make sure that we are verifying the signature on the SOAP Body. On the sending side, we must have a signature action *before* an encryption action, for this same reason. If we encrypt before signing, then WSS4J will turn off the "storeBytesInAttachment" property, to make sure that we are not signing a reference.
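For the action-based approach this amounts to a property map handed to CXF's WSS4JOutInterceptor. A minimal sketch, showing only the properties discussed in this post (the usual user and crypto settings are omitted):

```java
import java.util.HashMap;
import java.util.Map;

public class WssOutConfigSketch {
    // Sending-side WS-Security "action" configuration. The Signature action
    // is listed before Encryption on purpose: WSS4J turns the attachment
    // optimization off when encrypting before signing, so that a signature
    // never covers just an xop:Include reference.
    public static Map<String, Object> outProps() {
        Map<String, Object> props = new HashMap<>();
        props.put("action", "Signature Encryption");
        props.put("storeBytesInAttachment", "true");
        // plus user, signaturePropFile, passwordCallbackClass, ... as usual
        return props;
    }
}
```

The map would be passed to `new WSS4JOutInterceptor(props)` on the client or endpoint.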

2) Storing message bytes in attachments with WS-SecurityPolicy

A new security configuration property is also available in Apache CXF to control the ability to store message bytes in attachments with WS-Security when WS-SecurityPolicy is used:
  • Whether to store bytes (CipherData or BinarySecurityToken) in an attachment. The default is true if MTOM is enabled.
This property is also available in CXF 3.0.6, but there it is "false" by default. Similar to the action case, CXF will turn off this property by default in either of the following policy cases:
  • If sp:EncryptBeforeSigning is present
  • If sp:ProtectTokens is present. In this case, the signing cert is itself signed, and again we want to avoid signing a reference rather than the certificate bytes.
3) Tests

To see this new functionality in action, take a look at the MTOMSecurityTest in CXF's ws-security systests module. It has three methods that test storing bytes in attachments with a symmetric binding, an asymmetric binding and an "action based" approach to configuring WS-Security. Enable logging to see the requests and responses. The encrypted SOAP Body now contains a CipherValue that no longer includes the BASE-64 encoded bytes.

The referenced attachment looks like:

Finally, I wrote a blog post some time back about using Apache JMeter to load-test security-enabled CXF-based web services. I decided to modify the standard symmetric and asymmetric tests so that the CXF service was MTOM enabled, and hence the ability to store message bytes in the attachments was switched on with CXF 3.1.2. The results for both test-cases showed that throughput was around 1% higher when message bytes were stored in attachments. Bear in mind that this change only covers the service side; the client request was still non-MTOM aware, as it is just pasted into JMeter. So one would expect up to a 4% improvement for a fully MTOM-aware client + service invocation.

Categories: Colm O hEigeartaigh

Apache CXF Fediz 1.2.0 tutorial - part IV

Colm O hEigeartaigh - Thu, 07/16/2015 - 17:40
This is the fourth in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The last two articles focused on how clients can authenticate to the IdP in Fediz 1.2.0 using Kerberos and TLS client authentication. In this post we will divert our attention from the IdP for the time being, and look at a new container-independent Relying Party (RP) plugin available in Fediz 1.2.0 based on Apache CXF.

1) RP plugins in Fediz

Apache Fediz ships with a number of RP plugins to secure your web application. These plugins are container-dependent, meaning that if your web app is deployed in say Apache Tomcat, you need to use the Tomcat plugin in Fediz. The following plugins were available prior to Fediz 1.2.0:
The CXF plugin referred to here was not a full WS-Federation RP plugin like the other modules. Instead, it consisted of a mechanism that allows the SSO (SAML) token retrieved as part of the WS-Federation process to be used by CXF client code, if the web application needs to obtain another token "on behalf of" the original token when making a subsequent web services call.

2) CXF RP plugin in Fediz 1.2.0

In Fediz 1.2.0, the CXF plugin mentioned above now contains a fully fledged WS-Federation RP implementation that can be used to secure a JAX-RS service, rather than using one of the container-dependent plugins. Let's see how this works using a test-case:
  • cxf-fediz-federation-sso: This project shows how to use the new CXF plugin of Apache Fediz 1.2.0 to authenticate and authorize clients of a JAX-RS service using WS-Federation.
The test-case consists of two modules. The first is a web application which contains a simple JAX-RS service, which has a single GET method to return a doubled number. The method is secured with a @RolesAllowed annotation, meaning that only a user in roles "User", "Admin", or "Manager" can access the service.

This is enforced via CXF's SecureAnnotationsInterceptor. Finally WS-Federation is enabled for the service via the JAX-RS Provider called the FedizRedirectBindingFilter, available in the CXF plugin in Fediz. This takes a "configFile" parameter, which is a link to the standard Fediz plugin configuration file:

It's as easy as this to secure your CXF JAX-RS service using WS-Federation! The remaining module in the test above deploys the IdP + STS from Fediz in Apache Tomcat. It then takes the "double-it" war above and also deploys it in Tomcat.

Finally, it uses HtmlUnit to make an invocation on the service, and checks that access is granted. Alternatively, you can comment out the @Ignore annotation of the "testInBrowser" method, and copy the printed-out URL into a browser to test the service directly (user credentials: "alice/ecila").
Categories: Colm O hEigeartaigh

Apache CXF Fediz 1.2.0 tutorial - part III

Colm O hEigeartaigh - Wed, 07/15/2015 - 17:22
This is the third in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The previous blog entry described how different client authentication mechanisms are supported in the IdP, and how to configure client authentication via an X.509 certificate, a new feature in Fediz 1.2.0. Another new authentication mechanism in Fediz 1.2.0 is the ability to authenticate to the IdP using Kerberos, which we will cover in this article.

1) Kerberos client authentication in the IdP

Recall that the Apache Fediz IdP in 1.2.0 supports different client authentication methods by default using different URL paths. In particular for Kerberos, the URL path is:
  • /federation/krb -> authentication using Kerberos
The default value for the "wauth" parameter added by the service provider to the request to activate this URL path is:
When the IdP receives a request at the URL path configured for Kerberos, it sends back a request for a Negotiate Authorization header if none is present. Otherwise, it parses the header, BASE-64 decodes the Kerberos token and dispatches it to the configured authentication provider. Kerberos tokens are authenticated in the IdP via the STSKrbAuthenticationProvider, which is configured in the Spring security-config.xml.
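On the wire this is the standard SPNEGO negotiation; schematically (the context path prefix is an assumption, only the /federation/krb path is from the configuration above):

```
C: GET /fediz-idp/federation/krb?...        (no Authorization header yet)
S: HTTP/1.1 401 Unauthorized
   WWW-Authenticate: Negotiate
C: GET /fediz-idp/federation/krb?...
   Authorization: Negotiate <BASE-64 encoded Kerberos token>
S: HTTP/1.1 200 OK                          (token dispatched to the authentication provider)
```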

2) Authenticating Kerberos tokens in the IdP

The IdP supports two different ways of validating Kerberos tokens:
  • Passthrough Authentication. Here we do not authenticate the Kerberos token at all in the IdP, but pass it through to the STS for authentication. This is similar to what is done for the Username/Password authentication case. The default security binding of the STS for this scenario requires a KerberosToken Supporting Token. This is the default way of authenticating Kerberos tokens in the IdP.
  • Delegation. If delegation is enabled in the IdP, then the received token is validated locally in the IdP. The delegated credential is then used to get a new Kerberos Token to authenticate the STS call "on behalf of" the original user. 
To enable the delegation scenario, simply update the STSKrbAuthenticationProvider bean in the security-config.xml: set the "requireDelegation" property to "true", and configure the kerberosTokenValidator property to validate the received Kerberos token.

Categories: Colm O hEigeartaigh

Securing Apache CXF with Apache Camel

Colm O hEigeartaigh - Fri, 07/10/2015 - 18:45
In the previous post I wrote about how to integrate Apache CXF with Apache Camel. The basic test scenario involved using an Apache CXF proxy service to authenticate clients, and Apache Camel to route the authenticated requests to a backend service, which had different security requirements to the proxy. In this post, we will look at a slightly different scenario, where the duty of authenticating the clients shifts from the proxy service to Apache Camel itself. In addition, we will look at how to authorize the clients via different Apache Camel components.

For a full description of the test scenario see the previous post. The Apache CXF based proxy service receives a WS-Security UsernameToken, which is used to authenticate the client. In the previous scenario, this was done at the proxy by supplying a CallbackHandler instance to verify the given username and password. However, this time we will just configure the proxy to pass the received credentials through to the route instead of authenticating them. This can be done by setting the JAX-WS property "ws-security.validate.token" to "false":

So now it is up to the Camel route to authenticate and authorize the user credentials. Here are two possibilities using Apache Shiro and Spring Security.

1) Apache Shiro

I've covered previously how to use Apache Shiro to authenticate and authorize web service invocations using Apache CXF. Apache Camel ships with a camel-shiro component which allows you to authenticate and authorize Camel routes. The test-case can be downloaded and run here:
  • camel-cxf-proxy-shiro-demo: Some authentication and authorization tests for an Apache CXF proxy service using the Apache Camel Shiro component.
Usernames, passwords and roles are stored in a file and parsed into a ShiroSecurityPolicy object:

The Camel route is as follows:

Note that the shiroHeaderProcessor bean processes the result from the proxy before applying the Shiro policy. This processor retrieves the client credentials (which are stored as a JAAS Subject in a header on the exchange) and extracts the username and password, storing them in special headers that are used by the Shiro component in Camel to get the username and password for authentication. 

The authorization use-case uses the same route, however the ShiroSecurityPolicy bean enforces that the user must have a role of "boss" to invoke on the backend service:

2) Spring Security 

I've also covered previously how to use Spring Security to authenticate and authorize web service invocations using Apache CXF. Apache Camel ships with a camel-spring-security component which allows you to authenticate and authorize Camel routes. The test-case can be downloaded and run here:
Like the Shiro test-case, usernames, passwords and roles are stored in a file, which is used to create an authorizationPolicy bean:
The Camel route is exactly the same as in the Shiro example above, except that a different processor implementation is used. The SpringSecurityHeaderProcessor bean used in the tests translates the user credentials into a Spring Security UsernamePasswordAuthenticationToken principal, which is added to the JAAS Subject stored under the Exchange.AUTHENTICATION header. This principal is then used by the Spring Security component to authenticate the request.

To authorize the request, a different authorizationPolicy configuration is required:

Categories: Colm O hEigeartaigh

Integrating Apache CXF with Apache Camel

Colm O hEigeartaigh - Mon, 07/06/2015 - 12:51
Apache Camel provides support for integrating Apache CXF endpoints via the camel-cxf component. A common example of the benefits of using Apache Camel with webservices is when a proxy service is required to translate some client request into a format that is capable of being processed by some backend service. Apache Camel ships with an example where a backend service consumes SOAP over JMS, and a proxy service translates a SOAP over HTTP client request into SOAP over JMS. In this post, we will show an example of how to use this proxy pattern to secure a client invocation to a backend service via a proxy, when the backend service and proxy have different security requirements.

The test scenario is as follows. The backend service is an Apache CXF-based JAX-WS "double-it" service that can only be called by trusted clients. However, we don't want to give the backend service the responsibility to authenticate clients. A CXF-based proxy service will be responsible for authenticating clients, and then routing the authenticated requests to the backend service via Apache Camel. The backend service is secured via TLS with client authentication, meaning that we have direct trust between the proxy service and the backend service. Clients must authenticate to the proxy service via a WS-Security UsernameToken over TLS.

The test-case can be downloaded and run here:
 The CXF proxy is configured as follows:

The CallbackHandler supplies the password used to authenticate the client credentials. The Camel route is defined as:

The headerFilterStrategy reference is to a CxfHeaderFilterStrategy bean which instructs Camel to drop the message headers (we don't need the security header beyond the proxy, as the proxy is responsible for authenticating the client). Messages are routed to the "doubleItService", which is defined as follows:
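An illustrative sketch of such a route (endpoint and bean names are assumed, not taken from the test-case):

```xml
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
        <!-- receive at the CXF proxy endpoint, dropping the WS-Security headers -->
        <from uri="cxf:bean:proxyService?headerFilterStrategy=#headerFilterStrategy" />
        <!-- forward the payload to the TLS-secured backend service -->
        <to uri="cxf:bean:doubleItService?headerFilterStrategy=#headerFilterStrategy" />
    </route>
</camelContext>
```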

Categories: Colm O hEigeartaigh

Karaf Tutorial Part 1 - Installation and First application

Christian Schneider - Thu, 07/02/2015 - 18:06

Blog post edited by Christian Schneider

Getting Started

With this post I am beginning a series of posts about Apache Karaf. So what is Karaf and why should you be interested in it? Karaf is an OSGi container based on Equinox or Felix. The main difference to these fine containers is that it brings excellent management features with it.

Outstanding features of Karaf:

  • Extensible console with bash-like completion features
  • SSH console
  • Deployment of bundles and features from maven repositories
  • Easy creation of new instances from the command line

All together these features make developing server-based OSGi applications almost as easy as regular Java applications. Deployment and management are on a level much better than in any application server I have seen so far. All this is combined with a small footprint, both of Karaf itself and of the resulting applications. In my opinion this allows a lightweight development style like Java EE 6 together with the flexibility of Spring applications.

Installation and first startup
  • Download Karaf 3.0.3 from the Karaf web site.
  • Extract and start with bin/karaf

You should see the welcome screen:

  __ __                  ____
 / //_/____ __________ _/ __/
 / ,<  / __ `/ ___/ __ `/ /_
/ /| |/ /_/ / /  / /_/ / __/
/_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (3.0.3)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

karaf@root()>

Some handy commands:

  • la - Shows all installed bundles
  • service:list - Shows the active OSGi services. This list is quite long; here it is quite handy that you can use unix pipes like "ls | grep admin"
  • exports - Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
  • feature:list - Shows which features are installed and can be installed.
  • feature:install webconsole - Installs a feature (a list of bundles and other features). This command installs the Karaf webconsole. It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.
  • log:tail - Shows the log. Use ctrl-c to go back to the console.
  • ctrl-d - Exits the console. If this is the main console, Karaf will also be stopped.

OSGi containers preserve state after restarts


Please note that Karaf, like all OSGi containers, maintains its last state of installed and started bundles. So if something no longer works, a restart will not necessarily help. To really start fresh again, stop Karaf and delete the data directory.

Check the logs


Karaf is very silent. To not miss error messages, always keep a tail -f data/karaf.log open!

Tasklist - A small osgi application

Without any useful application, Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy, and maven can help a lot. The difference to a normal maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

Get the source code

Import into Eclipse

  • Start Eclipse 
  • In the Eclipse Package Explorer: Import -> Existing Maven Project -> browse to the extracted directory and into the tasklist sub directory
  • Eclipse will show all maven projects it finds
  • Click through to import with defaults

Eclipse will now import the projects and wire all dependencies using m2eclipse.

The tasklist example consists of the following projects:

  • tasklist-model - Service interface and Task class
  • tasklist-persistence - Simple persistence implementation that offers a TaskService
  • tasklist-ui - Servlet that displays the tasklist using a TaskService
  • tasklist-features - Features descriptor for the application that makes installing in Karaf very easy

Tasklist-persistence

This project contains the domain model and the service implementation. The model is the Task class and a TaskService interface. The persistence implementation TaskServiceImpl manages tasks in a simple HashMap.
The TaskService is published as an OSGi service using a blueprint context. Blueprint is an OSGi standard for dependency injection and is very similar to a spring context.

<blueprint xmlns="">
    <bean id="taskService" class="" />
    <service ref="taskService" interface="" />
</blueprint>

The bean tag creates a single instance of the TaskServiceImpl. The service tag publishes this instance as an OSGi service with the TaskService interface.

The pom.xml has packaging bundle, and the maven-bundle-plugin creates the jar with an OSGi Manifest. By default the plugin imports all packages that are imported in java files or referenced in the blueprint context. It also exports all packages that do not contain the string impl or internal. In our case we want the model package to be imported but not the persistence.impl package. As this naming convention is followed, we need no additional configuration.


Tasklist-ui

The ui project contains a small servlet TaskServlet to display the tasklist and individual tasks. To work with the tasks, the servlet needs the TaskService.

To inject the TaskService and to publish the servlet the following blueprint context is used:

<blueprint xmlns="">
    <reference id="taskService" availability="mandatory" interface="" />
    <bean id="taskServlet" class="">
        <property name="taskService" ref="taskService"></property>
    </bean>
    <service ref="taskServlet" interface="javax.servlet.http.HttpServlet">
        <service-properties>
            <entry key="alias" value="/tasklist" />
        </service-properties>
    </service>
</blueprint>

The reference tag makes blueprint search for (and eventually wait for) a service that implements the TaskService interface, and creates a bean "taskService". The bean taskServlet instantiates the servlet class and injects the taskService. The service tag publishes the servlet as an OSGi service with the HttpServlet interface and sets a property alias. This way of publishing a servlet is not yet standardized but is supported by the pax web whiteboard extender. This extender registers each service with interface HttpServlet with the OSGi HTTP service, using the alias property to set the path where the servlet is available.

See also:


Tasklist-features

The last project installs a feature descriptor into the maven repository so we can install it easily in Karaf. The descriptor defines the tasklist features and the bundles to be installed from the maven repository.

<feature name="example-tasklist-persistence" version="${pom.version}">
    <bundle>${pom.version}</bundle>
    <bundle>${pom.version}</bundle>
</feature>
<feature name="example-tasklist-ui" version="${pom.version}">
    <feature>http</feature>
    <feature>http-whiteboard</feature>
    <bundle>${pom.version}</bundle>
    <bundle>${pom.version}</bundle>
</feature>

A feature can reference other features that should also be installed as well as the bundles to install. The bundles typically use mvn URLs, which means they are loaded from the configured maven repositories or your local maven repository in ~/.m2/repository.
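As an illustration of such mvn URLs (the coordinates below are made up for this example and are not taken from the original descriptor), a feature entry typically looks like this:

```xml
<feature name="example-tasklist-persistence" version="1.0.0">
    <!-- mvn:groupId/artifactId/version - resolved from the configured
         maven repositories or from ~/.m2/repository -->
    <bundle>mvn:net.lr.tasklist/tasklist-model/1.0.0</bundle>
    <bundle>mvn:net.lr.tasklist/tasklist-persistence/1.0.0</bundle>
</feature>
```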

Installing the Application in Karaf

feature:repo-add
feature:install example-tasklist-persistence example-tasklist-ui

Add the features descriptor to Karaf so it is added to the available features, then install and start the tasklist feature. After these commands the tasklist application should run.


Check that all bundles of tasklist are active. If not try to start them and check the log.

http:list
ID | Servlet         | Servlet-Name   | State    | Alias     | Url
-------------------------------------------------------------------------------
56 | TaskListServlet | ServletModel-2 | Deployed | /tasklist | [/tasklist/*]

This should show the TaskListServlet. By default the example will start at http://localhost:8181/tasklist .

You can change the port by creating a text file "etc/org.ops4j.pax.web.cfg" with the content "org.osgi.service.http.port=8080". This tells the HttpService to use port 8080. The tasklist application will then be available at http://localhost:8080/tasklist


In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

Back to Karaf Tutorials

Categories: Christian Schneider

Using SSH/SCP/SFTP with Apache Camel

Colm O hEigeartaigh - Thu, 07/02/2015 - 15:46
Apache Camel contains a number of components to make it easy to work with SSH/SCP/SFTP. I've created a new camel-ssh testcase in github to illustrate how to use these various components, continuing on from previous posts describing the security capabilities of Apache Camel:
  • SSHTest: This test-case shows how to use the Apache Camel SSH component. The test fires up an Apache MINA SSHD server, which has been configured to allow authenticated users to execute arbitrary commands (ok not very safe...). Some files that contain unix commands are read in via a Camel route, executed using the SSH component, and the results are stored in target/ssh_results.
  • SCPTest: This test-case shows how to use the Apache Camel JSCH component (which supports SCP using JSCH). An Apache MINA SSHD server is configured that allows SCP. Some XML files are read in via a Camel route, and copied using SCP to a target directory on the server (which maps to target/storage for the purposes of this test).
  • SFTPTest: This test-case shows how to use the Apache Camel FTP component. An Apache MINA SSHD server is configured that allows SFTP. Some XML files are read in via a Camel route, and copied using SFTP to a target directory on the server (target/storage_sftp).
Categories: Colm O hEigeartaigh

To the SNCF ticket inspectors: you do a fine job

Olivier Lamy - Wed, 07/01/2015 - 23:46
For our holidays in France we chose to spend a week in Brittany (good timing, since Paris is in the middle of a heat wave).
Actually, nothing was improvised: the tickets had been booked and PAID FOR almost two months in advance over the internet (this point matters for the rest of this post).
The train was to leave Paris Montparnasse at 10:04 on Tuesday June 30th 2015. Since the start of our holidays the plan had been to return our rental car and then take the train to beautiful, cool Brittany.
A good idea, right? We thought so, but we had forgotten a few small details...
We had been staying in the southern Essonne suburbs and expected to need about an hour and a half to reach the station. Big mistake!! It took us two hours.
So we finished the trip in a panic, constantly checking our watches, stressed, and facing the great incivility (I would even say selfishness) of French drivers... a mix of feelings we had forgotten existed.
We arrived at 10:00, with the train leaving in 4 minutes!!
Returning the car to the rental agency happened in the greatest possible chaos.
Then the race began. A bit of context: we are a family with 4 children, among them a 3-year-old (who does not walk all the time, so we have a stroller), two girls of 7 and 12 and a boy of 14. So yes, we were loaded down, and at that point we had only 3 minutes left to get from the Avis car park to the platform.
The children understood the situation and took charge of suitcases (for a 7-year-old a suitcase can be very heavy, but she helped us a lot).
With my stroller and two suitcases to pull, I understood the difficulties disabled people face in public places!!! Somehow we made it, climbing into the rear carriage just as the departure signal sounded.
A big thank you to the passengers who helped us lift our suitcases, stroller and bags on board. On top of everything, it was very hot on this heat-wave day!!
So we were on the train. Well, almost: the train splits in Rennes, so we would have to walk up through ten carriages to get as close as possible to the locomotive in order to change trains in under 4 minutes. More sweaty moments awaited us!!
In all this rush there had been no time to pick up the tickets that were ALREADY paid for more than two months earlier (yes, I keep insisting on this point). So one of my first concerns was to find an inspector and explain that, stuck in traffic, we had not had time to collect our tickets, but that I had the booking number and so on...
His cold and very ironic, even mocking, reply: "don't worry sir, we will take good care of your case". Remember, dear readers, that we had just run through the Paris heat wave loaded with bags, suitcases and a stroller, and this gentleman permits himself this kind of irony...
Very naively, I believed for a brief moment that this inspector was actually very friendly and would sort out our problem.
So we began our trek up the ten carriages. I assure you that moving up ten carriages with a stroller and the luggage of a family of six is really not simple, especially when the train is full and plenty of people do not even bother to move, even slightly, the bags they leave on the floor half blocking the way (yes, it is a bit hard to put your bag up on the rack so as not to bother others...).
Halfway up, we ran into the inspectors again (the ones we had already informed when boarding the train), who asked us: "tickets please".
So I tried to discuss, explained our case once more, and gave my booking number for verification (but apparently in 2015 inspectors do not have the means to check the tickets associated with my booking number). Apparently I had not quite grasped the subtle difference between an e-ticket and tickets to be collected.
Fine, but my ticket was already paid for, and we had simply been unlucky because of the traffic.
The inspector explained that I could in fact be cheating: getting my ticket refunded while still taking the train!!!
I admit that, as a father of 4, it is always a bit hard to be treated as a thief in front of your children.
So I showed him my order with the words "non-exchangeable, non-refundable". Frankly, I do not see how I could have pulled off such a scheme.
But no, these gentlemen were intransigent and issued us 5 fines of 122 euros each.
At this point I honestly did not understand. Our tickets had been paid for and booked more than two months in advance.
I calmed myself down and tried to make my children understand that no, we are not thieves; it was simply bad luck.
We finally reached the front carriage. And yes, we still had to change trains in under 4 minutes, transferring a stroller and the luggage of a family of six, in the middle of the French heat wave...
In the end we made it...
I still do not understand today how these inspectors could fine us like that. The excuse that they had no way to check the state of our booking seems a bit rich. After all, it is 2015 and this is a civilized country equipped with often state-of-the-art technology.
So yes, dear inspectors, I think you do a very fine job indeed, fining a family with 4 children (who had already paid for their tickets!!). The target is obviously an easy one; there are so many other places in France, but those targets are perhaps more complicated and require a bit more courage...
Categories: Olivier Lamy

An STS JAAS LoginModule for Apache CXF

Colm O hEigeartaigh - Tue, 06/30/2015 - 13:27
Last year I blogged about how to use JAAS with Apache CXF, and the different LoginModules that were available. Recently, I wrote another article about using a JDBC LoginModule with CXF. This article will cover a relatively new JAAS LoginModule added to CXF for the 3.0.3 release. It allows a service to dispatch a Username and Password to an STS (Security Token Service) instance for authentication via the WS-Trust protocol, and also to retrieve the user's roles by extracting them from a SAML token returned by the STS.

1) The STS JAAS LoginModule

The new STS JAAS LoginModule is available in the CXF WS-Security runtime module. It takes a Username and Password from the CallbackHandler passed to the LoginModule, and uses them to create a WS-Security UsernameToken structure. What happens then depends on a configuration setting in the LoginModule.

If the "require.roles" property is set, then the UsernameToken is added to a WS-Trust "Issue" request to the STS, and a "TokenType" attribute is sent in the request (defaults to the standard "SAML2" URI, but can be configured). The client also adds a WS-Trust "Claim" to the request that tells the STS to add the role of the authenticated end user to the request. How the token is added to the WS-Trust request depends on whether the "disable.on.behalf.of" property is set or not. By default, the token is added as an "OnBehalfOf" token in the WS-Trust request. However, if "disable.on.behalf.of" is set to "true", then the credentials are used according to the WS-SecurityPolicy of the STS endpoint. For example, if the policy requires a UsernameToken, then the credentials are added to the security header of the WS-Trust request. If the "require.roles" property is not set, then the UsernameToken is added to a WS-Trust "Validate" request.

The STS validates the received UsernameToken credentials supplied by the end user, and then either creates a token (if the Issue binding was used), or just returns a simple response telling the client whether the validation was successful or not. In the former use-case, the token that is returned is cached meaning that the end user does not have to re-authenticate until the token expires from the cache.

The LoginModule has the following configuration properties:
  • require.roles - If this is defined, then the WS-Trust Issue binding is used, passing the value specified for the "token.type" property as the TokenType, and the "key.type" property for the KeyType. It also adds a Claim to the request for the default "role" URI.
  • disable.on.behalf.of - Whether to disable passing Username + Password credentials via "OnBehalfOf".
  • disable.caching - Whether to disable caching of validated credentials. Default is "false". Only applies when "require.roles" is defined.
  • wsdl.location - The location of the WSDL of the STS
  • - The service QName of the STS
  • - The endpoint QName of the STS
  • key.size - The key size to use (if requesting a SymmetricKey KeyType). Defaults to 256.
  • key.type - The KeyType to use. Defaults to the standard "Bearer" URI.
  • token.type - The TokenType to use. Defaults to the standard "SAML2" URI.
  • - The WS-Trust namespace to use. Defaults to the standard WS-Trust 1.3 namespace.
In addition, any of the standard CXF security configuration tags that start with "ws-security." can be used as documented here. Sometimes it is necessary to set some security configuration depending on the security policy of the WSDL.

Here is an example of the new JAAS LoginModule configuration:
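The configuration example did not survive formatting here; as a sketch, a jaas.conf entry wiring up the module with the properties listed above could look like the following (the LoginModule class name and the WSDL location are assumptions for illustration, not taken from the original post):

```
sts {
    org.apache.cxf.ws.security.trust.STSLoginModule required
        require.roles="true"
        disable.on.behalf.of="true"
        wsdl.location="https://localhost:8443/sts/STSService?wsdl";
};
```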

2) A testcase for the new LoginModule

Using an STS via WS-Trust for authentication and authorization can be quite difficult to set up and understand, but the new LoginModule makes it easy. I created a testcase and uploaded it to github:
  • cxf-jaxrs-jaas-sts: This project demonstrates how to use the new STS JAAS LoginModule in CXF to authenticate and authorize a user. It contains a "double-it" module which contains a "double-it" JAX-RS service. It is secured with JAAS at the container level, and requires a role of "boss" to access the service. The "sts" module contains an Apache CXF STS web application which can authenticate users and issue SAML tokens with embedded roles.
To run the test, download Apache Tomcat and do "mvn clean install" in the testcase above. Then copy both wars and the jaas configuration file to the Apache Tomcat install (${catalina.home}):
  • cp double-it/target/cxf-double-it.war ${catalina.home}/webapps
  • cp sts/target/cxf-sts.war ${catalina.home}/webapps
  • cp double-it/src/main/resources/jaas.conf ${catalina.home}/conf
Next set the following system property:
  • export${catalina.home}/conf/jaas.conf
Finally, start Tomcat, open a web browser and navigate to:


Use credentials "alice/security" when prompted. The STS JAAS LoginModule takes the username and password, and dispatches them to the STS for validation.

    Categories: Colm O hEigeartaigh

    Apache Karaf Tutorial part 10 - Declarative services

    Christian Schneider - Tue, 06/30/2015 - 11:09

    Blog post edited by Christian Schneider

    This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

    You can find the full source code on github Karaf-Tutorial/tasklist-ds

    Declarative Services

    Declarative Services (DS) is the biggest contender to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

    At the core DS works with xml files to define SCR components and their dependencies. These typically live in the OSGI-INF directory and are announced in the Manifest using the header "Service-Component" with the path to the component descriptor file. Luckily it is not necessary to work with this xml directly as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin.
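The plugin configuration is not shown in the post; a sketch of how enabling DS annotation processing in the maven-bundle-plugin could look (the `_dsannotations` instruction follows the bnd DS annotation support of that era; verify it against your plugin version):

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- Process the standard DS annotations (@Component, @Reference, ...) -->
            <_dsannotations>*</_dsannotations>
        </instructions>
    </configuration>
</plugin>
```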



    DS vs Blueprint

    Let us look into DS by comparing it to the already better known blueprint. There are some important differences:

    1. Blueprint always works on a complete blueprint context. The context is started when all mandatory service dependencies are present and then publishes all offered services. As a consequence a blueprint context cannot depend on services it offers itself. DS works on components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately, e.g. start and stop it. It is also possible that a bundle offers two components but only one is started because the dependencies of the other are not yet present.
    2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
      You have a DS or blueprint component that offers a service A and depends on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up blueprint fails after a timeout and cannot recover from this. Once the blueprint context is up it stays up even if the mandatory service goes away. This is called service damping and its goal is to avoid restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies; internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service block the thread until a timeout and then throw a RuntimeException.
      In DS, on the other hand, a component's lifecycle is directly bound to its service dependencies. A component is only activated when all mandatory services are present and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied, so calls to it should always work.
    3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other, this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
    4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects via extension namespaces, for example Aries jpa, Aries transactions, Aries authz, CXF and Camel. Using these technologies with DS can therefore be a bit more difficult.
    5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean. This is used for security as well as transaction handling, for example. For this reason DS did not support JPA very well, as typical JPA usage relies on interceptors. See below how JPA can work with DS.

    So if DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with other projects.

    JEE and JPA

    The JPA spec is based on JEE which has a very special thread and interceptor model. In JEE you use session beans with a container managed EntityManager
    to manipulate JPA entities. It looks like this:

    @Stateless
    class TaskServiceImpl implements TaskService {
        @PersistenceContext(unitName="tasklist")
        private EntityManager em;

        public Task getTask(Integer id) {
            return em.find(Task.class, id);
        }
    }

    In JEE, calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction is committed; if there is an exception it is rolled back.
    The calls go to a pool of TaskServiceImpl instances, each of which is used by only one thread at a time. This matters because the EntityManager interface is not thread safe!

    The advantage of this model is that it looks simple and requires very little code. On the other hand it is difficult to test such code outside a container, as you have to mimic how the container works with this class. It is also difficult to access e.g. em,
    as it is private and has no setter.

    Blueprint supports a coding style similar to the JEE example and implements this using a special jpa and tx namespace and
    interceptors that handle the transaction / em management.

    DS and JPA

    In DS each component is a singleton, so there is only one instance that has to cope with multi-threaded access. Working with the plain JEE concepts for JPA is therefore not possible in DS.

    Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

    Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution here is the concept of a JPATemplate together with support for closures in Java 8. To see what the code looks like, peek at the persistence chapter below.

    Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We need to put the jpa code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JPATemplate will then guarantee the same environment as JEE inside the closure. As each closure invocation runs with its own
    EntityManager there is one em per thread. The code will also participate in or create a transaction, and the transaction commit/rollback also works like in JEE.

    So this requires a little more code but the advantage is that there is no need for a special framework integration.
    The code can also be tested much easier. See TaskServiceImplTest in the example.

    • features
    • model
    • persistence
    • ui

    The features module defines the karaf features to install the example as well as all necessary dependencies.


    The model module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of the model see the tasklist-blueprint example; the model is exactly the same here.

    Persistence

    TaskServiceImpl:

    @Component
    public class TaskServiceImpl implements TaskService {
        private JpaTemplate jpa;

        public Task getTask(Integer id) {
            return jpa.txExpr(em -> em.find(Task.class, id));
        }

        @Reference(target = "(")
        public void setJpa(JpaTemplate jpa) {
            this.jpa = jpa;
        }
    }

    We define that we need an OSGi service with interface JpaTemplate and a persistence-unit property with the value "tasklist".

    InitHelper:

    @Component
    public class InitHelper {
        Logger LOG = LoggerFactory.getLogger(InitHelper.class);
        TaskService taskService;

        @Activate
        public void addDemoTasks() {
            try {
                Task task = new Task(1, "Just a sample task", "Some more info");
                taskService.addTask(task);
            } catch (Exception e) {
                LOG.warn(e.getMessage(), e);
            }
        }

        @Reference
        public void setTaskService(TaskService taskService) {
            this.taskService = taskService;
        }
    }

    The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of what business code working with the task service can look like.
    @Reference on the setter injects the TaskService into the field taskService.
    @Activate makes sure that addDemoTasks() is called after injection of this component.

    Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special
    persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate
    to avoid having to install a JTA transaction manager for the test. The test code shows that indeed the TaskServiceImpl can
    be used as plain java code without any special tricks.


    The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over HTTP.

    TaskListServlet:

    @Component(immediate = true,
        service = { Servlet.class },
        property = { "alias:String=/tasklist" })
    public class TaskListServlet extends HttpServlet {
        private TaskService taskService;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Actual code omitted
        }

        @Reference
        public void setTaskService(TaskService taskService) {
            this.taskService = taskService;
        }
    }

    The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

    The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist".
    So it is available on the url http://localhost:8181/tasklist.


    Make sure you use JDK 8 and run:

    mvn clean install

    Installation

    Make sure you use JDK 8.
    Download and extract Karaf 4.0.0.
    Start karaf and execute the commands below

    # Create config for DataSource tasklist
    cat | tac -f etc/org.ops4j.datasource-tasklist.cfg

    # Install
    feature:repo-add
    feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

    Validate Installation

    First we check that the JpaTemplate service is present for our persistence unit.

    service:list JpaTemplate
    [org.apache.aries.jpa.template.JpaTemplate]
    -------------------------------------------
     = tasklist
    transaction.type = JTA
     = 164
    service.bundleid = 57
    service.scope = singleton
    Provided by : tasklist-model (57)
    Used by: tasklist-persistence (58)

    Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

    A likely problem would be that the DataSource is missing, so let's also check it:

    service:list DataSource
    [javax.sql.DataSource]
    ----------------------
    dataSourceName = tasklist
    felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
     = H2-pool-xa
     = tasklist
    service.factoryPid = org.ops4j.datasource
     = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
    url = jdbc:h2:mem:test
     = 156
    service.bundleid = 113
    service.scope = singleton
    Provided by : OPS4J Pax JDBC Config (113)
    Used by: Apache Aries JPA container (62)

    This is how it should look. Pax-jdbc-config created the DataSource from the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the configured pooling and XA properties. So the resulting DataSource should be pooled and fully ready for XA transactions.

    Next we check that the DS components started:

    scr:list
    ID | State  | Component Name
    --------------------------------------------------------------
     1 | ACTIVE |
     2 | ACTIVE |
     3 | ACTIVE |

    If any of the components is not active you can inspect it in detail like this:

    scr:details
    Component Details
      Name       :
      State      : ACTIVE
      Properties :
    References
      Reference  : Jpa
        State             : satisfied
        Multiple          : single
        Optional          : mandatory
        Policy            : static
        Service Reference : Bound Service ID 164

    Test

    Open the url below in your browser.

    You should see a list with one task. Another task can be added by opening:

     http://localhost:8181/tasklist?add&taskId=2&title=Another Task


    Categories: Christian Schneider

    Apache Karaf Tutorial Part 8 - Distributed OSGi

    Christian Schneider - Tue, 06/30/2015 - 09:59

    Blog post edited by Christian Schneider - "Updated to karaf 3.0.3 and cxf dosgi 1.6.0"

    By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows you to define services in one container and use them in another (even across machine boundaries).

    For this tutorial we use the DOSGi subproject of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 of the OSGi 4.2 Enterprise Specification).

    Example on github

    Introducing the example

    Following the hands-on nature of these tutorials, we start with an example that can be tried in a few minutes; the details are explained later.

    Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, the model and UI on container B, and install the dosgi runtime on both containers.

    As DOSGi should not be active for all services on a system, the spec defines that the service property "osgi.remote.interfaces" determines whether DOSGi should process a service. It lists the interface names the service should export remotely; setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets this property, so the service is exported with defaults.
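To illustrate the property-based export in plain Java (the class and service names here are illustrative; in a real bundle the dictionary would be passed to bundleContext.registerService in an activator):

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class RemoteServiceProps {

    // Builds the service properties that mark a service for remote export.
    public static Dictionary<String, Object> remoteProps() {
        Dictionary<String, Object> props = new Hashtable<>();
        // "*" exports all interfaces the service implements;
        // a specific interface name could be given instead.
        props.put("osgi.remote.interfaces", "*");
        return props;
    }

    public static void main(String[] args) {
        // In a real bundle activator:
        // bundleContext.registerService(TaskService.class, impl, remoteProps());
        System.out.println(remoteProps().get("osgi.remote.interfaces"));
    }
}
```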

    Installing the service

    To keep things simple we will install container A and B on the same system.

    • Download Apache Karaf 3.0.3
    • Unpack karaf into folder container_a
    • Start bin/karaf
    • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
    • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
    • feature:repo-add cxf-dosgi 1.6.0
    • feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
    • feature:repo-add
    • feature:install example-tasklist-persistence

    After these commands the tasklist persistence service should be running and be published on zookeeper.

    You can check the wsdl of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client from a zookeeper distro you can optionally check that there is a node for the service below the osgi path.

    Installing the UI
    • Unpack into folder container_b
    • Start bin/karaf
    • config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
    • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
    • feature:repo-add cxf-dosgi 1.6.0
    • feature:install cxf-dosgi-discovery-distributed
    • feature:repo-add
    • feature:install example-tasklist-ui

    The tasklist client ui should be in status Active/Created and the servlet should be available on http://localhost:8182/tasklist. If the ui bundle stays in status graceperiod then DOSGi did not provide a local proxy for the persistence service.

    How does it work

    The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found then the service is exported either with the named interfaces or with all interfaces it implements. The way the export works can be fine tuned using the CXF DOSGi configuration options.

    By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.

    The service information is then also propagated using the DOSGi discovery. In the example we use the Zookeeper discovery implementation, so the service metadata is written to a zookeeper server.

    The container_b will monitor the local container for needed services. It then checks whether a needed service is available via the discovery implementation (the zookeeper server in our case). For each service it finds it creates a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

    So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

    Categories: Christian Schneider

    A new Crypto implementation in Apache WSS4J

    Colm O hEigeartaigh - Mon, 06/29/2015 - 16:51
    Apache WSS4J uses the Crypto interface to get keys and certificates for asymmetric encryption/decryption and signature creation/verification. In addition, it also takes care of verifying trust in an X.509 certificate used to sign some portion of the message. WSS4J currently ships with three Crypto implementations:
    • Merlin: The standard implementation, based around two JDK keystores for key/cert retrieval, and trust verification.
    • CertificateStore: Holds an array of X509 Certificates. Can only be used for encryption and signature verification.
    • MerlinDevice: Based on Merlin, allows loading of keystores using a null InputStream - for example on a smart-card device.
    The next release(s) of WSS4J, 2.0.5 and 2.1.2, will contain a fourth implementation:
    • MerlinAKI: A new Merlin-based Crypto implementation that searches the truststore for the issuing certificate using the AuthorityKeyIdentifier extension bytes of the signing certificate, as opposed to the issuer DN.
    Trust verification for the standard/default Merlin implementation works as follows:
    1. Is the signing cert contained in the keystore/truststore? If yes, then trust verification succeeds. This can be combined with using regular expressions on the Subject DN as well.
    2. If not, then get the issuing cert by reading the Issuer DN from the signing cert. Then search for this cert in the keystore/truststore. 
    3. If the issuer cert is found, then form a cert path containing the signing cert, the issuing cert and any subsequent issuing cert of that cert. Then validate the cert path.
    However, the retrieval of the issuing cert in step 2 above falls down under certain rare scenarios, where there may not be a 1-to-1 link between the Subject DN of a certificate and a public key. This is where the new MerlinAKI implementation comes in. Instead of searching for the issuing cert using the issuer DN of the signing cert, it instead uses BouncyCastle to retrieve the AuthorityKeyIdentifier extension bytes (if present) from the cert. It then searches for the issuing cert by seeing which of the certs in the truststore contain a SubjectKeyIdentifier extension with a matching identifier value. You can switch to use MerlinAKI simply by changing the name of the Crypto provider in the Crypto properties file:
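    For example, a sketch of the relevant crypto properties (the truststore file name and password shown are placeholders):

    ```properties
    org.apache.wss4j.crypto.provider=org.apache.wss4j.common.crypto.MerlinAKI
    org.apache.wss4j.crypto.merlin.truststore.type=jks
    org.apache.wss4j.crypto.merlin.truststore.password=security
    org.apache.wss4j.crypto.merlin.truststore.file=truststore.jks
    ```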

    Categories: Colm O hEigeartaigh

    Using AWS KMS with Apache CXF to secure passwords

    Colm O hEigeartaigh - Fri, 06/26/2015 - 13:16
    The previous tutorial showed how the AWS Key Management Service (KMS) can be used to generate symmetric encryption keys that can be used with WS-Security to encrypt and decrypt a service request using Apache CXF. It is also possible to use the KMS to secure keystore passwords for asymmetric encryption and signature, which are typically stored in properties files when using WS-Security with Apache CXF.

    1) Encrypting passwords in a Crypto properties file

    Apache CXF uses the WSS4J Crypto interface to get keys and certificates for asymmetric encryption/decryption and for signature creation/verification. Merlin is the standard implementation, based around two JDK keystores for key/cert retrieval, and trust verification. Typically, a Crypto implementation is loaded and configured via a Crypto properties file. For example:
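    A sketch of such a Merlin crypto properties file (keystore name, alias and password are placeholders):

    ```properties
    org.apache.wss4j.crypto.provider=org.apache.wss4j.common.crypto.Merlin
    org.apache.wss4j.crypto.merlin.keystore.type=jks
    org.apache.wss4j.crypto.merlin.keystore.password=security
    org.apache.wss4j.crypto.merlin.keystore.alias=myalias
    org.apache.wss4j.crypto.merlin.keystore.file=keystore.jks
    ```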

    However, one issue with this style of configuration is that the keystore password is stored in plaintext in the file. Apache WSS4J 2.0.0 introduced the ability to store encrypted passwords in the crypto properties file instead. A PasswordEncryptor interface was defined to allow for the encryption/decryption of passwords, and a default implementation based on Jasypt was made available in the release. In this case, the master password used to decrypt the encrypted keystore password was retrieved from a CallbackHandler implementation.

    2) Using KMS to encrypt keystore passwords

    Instead of using the Jasypt PasswordEncryptor implementation provided by default in Apache WSS4J, it is possible to use the AWS KMS instead to decrypt encrypted keystore passwords stored in crypto properties files. I've updated the test-case introduced in the previous tutorial with an asymmetric encryption test-case, where a SOAP service invocation is encrypted using a WS-SecurityPolicy AsymmetricBinding policy.

    The first step in running the test-case is to follow the previous tutorial in terms of registering for AWS, creating a user "alice" and a corresponding customer master key. Once you have this, run the "testEncryptedPasswords" test available here, which outputs the encrypted passwords for the client and service keystores ("cspass" and "sspass"). Copy the output and paste the encrypted values into the "ENC()" tags in the client and service crypto properties files. For example:
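    A sketch of the resulting property (the value inside ENC() stands for the encrypted output of the test, elided here):

    ```properties
    org.apache.wss4j.crypto.merlin.keystore.password=ENC(...)
    ```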

    The client and service configure a custom PasswordEncryptor implementation designed to decrypt the encrypted keystore password using KMS. The KMSPasswordEncryptor is spring-loaded in the client and service configuration, and must be updated with the access key id, secret key, master key id, etc. as defined earlier. Of course this means that the secret key is in plaintext in a spring configuration file in this example. However, it could be obtained via a system property or some other means, and is more secure than storing a plaintext keystore password in a properties file. Once the KMSPasswordEncryptor is properly configured, then the AsymmetricTest can be run, and you will see the secured service request and response in the console window.
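    Schematically, the Spring wiring might look like the following. The bean class and property names here are assumptions based on the description above, not a published API; check the test project for the actual names:

    ```xml
    <!-- Illustrative only: class and property names are assumptions -->
    <bean id="kmsPasswordEncryptor"
          class="org.apache.coheigea.cxf.kms.common.KMSPasswordEncryptor">
        <property name="accessKey" value="<access key id>"/>
        <property name="secretKey" value="<secret key>"/>
        <property name="masterKeyId" value="<master key id>"/>
    </bean>
    ```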

    Categories: Colm O hEigeartaigh

    Integrating AWS Key Management Service with Apache CXF

    Colm O hEigeartaigh - Thu, 06/25/2015 - 13:20
    Apache CXF supports a wide range of standards designed to help you secure a web service request, from WS-Security for SOAP requests, to XML Security and JWS/JWE for XML/JSON REST requests. All of these standards provide for using symmetric keys to encrypt requests, and then using a master key (typically a public key associated with an X.509 certificate) to encrypt the symmetric key, embedding this information somewhere in the request. The usual use-case is to generate random bytes for the symmetric key. But what if you wanted instead to manage the secret keys in some way? Or if your client did not have access to sufficient entropy to generate truly random bytes? In this article, we will look at how to use the AWS Key Management Service to perform this task for us, in the context of an encrypted SOAP request using WS-Security.

    1) AWS Key Management Service

    The AWS Key Management Service allows us to create master keys and data keys for users defined in the AWS Identity and Access Management service. Once we have created a user, and a corresponding master key for the user (which is only stored in AWS and cannot be exported), we can ask the Key Management Service to issue us a data key (using either AES 128 or 256), and an encrypted data key. The idea is that the data key is used to encrypt some data and is then disposed of. The encrypted data key is added to the request, where the recipient can ask the Key Management Service to decrypt the key, which can then be used to decrypt the encrypted data in the request.

    The first step is to register for Amazon AWS here. Once we have registered, we need to create a user in the Identity and Access Management service. Create a new user "alice", and make a note of the access key and secret access key associated with "alice". Next we need to write some code to obtain keys for "alice" (documentation). First we must create a client:

    AWSCredentials creds = new BasicAWSCredentials(<access key id>, <secret key>);
    AWSKMSClient kms = new AWSKMSClient(creds);

    Next we must create a customer master key for "alice":

    String desc = "Secret encryption key";
    CreateKeyRequest req = new CreateKeyRequest().withDescription(desc);
    CreateKeyResult result = kms.createKey(req);

    The CreateKeyResult object returned as part of the key creation process will contain a key Id, which we will need later.
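    The data key flow described above can then be sketched as follows. This is a non-runnable sketch based on the AWS SDK for Java v1 (it requires the SDK and live AWS credentials), continuing from the kms client and CreateKeyResult above:

    ```java
    // Request a 128-bit data key plus its encrypted form
    String keyId = result.getKeyMetadata().getKeyId();
    GenerateDataKeyRequest dataKeyReq = new GenerateDataKeyRequest()
            .withKeyId(keyId)
            .withKeySpec("AES_128");
    GenerateDataKeyResult dataKeyResult = kms.generateDataKey(dataKeyReq);

    ByteBuffer plaintextKey = dataKeyResult.getPlaintext();      // use to encrypt, then dispose of
    ByteBuffer encryptedKey = dataKeyResult.getCiphertextBlob(); // embed in the request

    // The recipient asks KMS to decrypt the encrypted data key
    DecryptRequest decReq = new DecryptRequest().withCiphertextBlob(encryptedKey);
    ByteBuffer decryptedKey = kms.decrypt(decReq).getPlaintext();
    ```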

    2) Using AWS Key Management Service keys with WS-Security

    As mentioned above, the typical process for WS-Security when encrypting a request is to generate some random bytes to use as the symmetric encryption key, and then use a key wrap algorithm with another key (typically a public key) to encrypt the symmetric key. Instead, we will use the AWS Key Management Service to retrieve the symmetric key to encrypt the request. We will store the encrypted form of the symmetric key in the WS-Security EncryptedKey structure, which will reference the Customer Master Key via a "KeyName" pointing to the Key Id.
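    Schematically, the resulting EncryptedKey structure looks something like this (namespace declarations and the cipher value are elided; the key Id shown is a placeholder):

    ```xml
    <xenc:EncryptedKey>
        <ds:KeyInfo>
            <!-- "KeyName" pointing at the KMS customer master key Id -->
            <ds:KeyName>1234abcd-key-id</ds:KeyName>
        </ds:KeyInfo>
        <xenc:CipherData>
            <!-- the encrypted data key returned by KMS -->
            <xenc:CipherValue>...</xenc:CipherValue>
        </xenc:CipherData>
    </xenc:EncryptedKey>
    ```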

    I have created a project that can be used to demonstrate this integration:
    • cxf-amazon-kms: This project contains a number of tests that show how to use the AWS Key Management Service with Apache CXF.
    The first task in running the test (assuming the steps in point 1 above have been followed) is to edit the client configuration, entering the correct values in the CommonCallbackHandler for the access key id, secret key, endpoint, and master key id as gathered above, ditto for the service configuration. The CommonCallbackHandler uses the AWS Key Management Service API to create the symmetric key on the sending side, and to decrypt it on the receiving side. Then to run the test simply remove the "org.junit.Ignore" annotation, and the encrypted web service request can be seen in the console:

    Categories: Colm O hEigeartaigh

    Using a JDBC JAAS LoginModule with Apache CXF

    Colm O hEigeartaigh - Tue, 06/23/2015 - 13:20
    Last year I wrote a blog entry giving an overview of the different ways that you can use JAAS with Apache CXF for authenticating and authorizing web service calls. I also covered some different login modules and linked to samples for authenticating a Username + Password to LDAP, as well as Kerberos Tokens to a KDC. This article covers how to use JAAS with Apache CXF to authenticate a Username + Password to a database via JDBC.

    The test-case is available here:
    • cxf-jdbc: This project contains a number of tests that show how an Apache CXF service endpoint can authenticate and authorize a client using JDBC.
    It contains two tests, one dealing with authentication (no roles required by the service) and the other with authorization (a specific role is required). Both tests involve a JAX-WS service invocation, where the service requires a WS-Security UsernameToken over TLS. In each case, the service configures Apache WSS4J's JAASUsernameTokenValidator using the context name "jetty". The JAAS configuration file contains an entry for the "jetty" context, which references the Jetty JDBCLoginModule:
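    The configuration file is not shown in this aggregated post; an illustrative sketch, assuming Jetty 8-era packaging for the module class and using the tables and fields described below, might look like:

    ```
    jetty {
        org.eclipse.jetty.plus.jaas.spi.JDBCLoginModule required
        debug="true"
        dbUrl="jdbc:derby:memory:myDB"
        dbUserName=""
        dbPassword=""
        userTable="app.users"
        userField="name"
        credentialField="password"
        userRoleTable="app.roles"
        userRoleUserField="name"
        userRoleRoleField="role";
    };
    ```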

    The configuration of the JDBCLoginModule is easy to follow. The "dbUrl" refers to the JDBC connection URL (in this case an in-memory Apache Derby instance). The table containing user data is "app.users", where the fields used for usernames and passwords are "name" and "password" respectively. Similarly, the table containing role data is "app.roles", where the fields used for usernames + roles are "name" and "role" respectively.

    The tests use Apache Derby as an in-memory database. It is created in code as follows:
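    A minimal sketch of that code (a non-runnable fragment assuming the Derby jar is on the classpath; the database name is a placeholder):

    ```java
    // Load the embedded Derby driver and create an in-memory database
    Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
    Connection connection =
        DriverManager.getConnection("jdbc:derby:memory:myDB;create=true");
    Statement statement = connection.createStatement();
    ```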

    Then the following SQL file is read in, and each statement is executed using the statement Object above:
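    The SQL file itself is not shown in this aggregated post; an illustrative version matching the tables and fields described above might be (the user and role values are placeholders):

    ```sql
    CREATE TABLE app.users (name VARCHAR(50), password VARCHAR(50));
    CREATE TABLE app.roles (name VARCHAR(50), role VARCHAR(50));
    INSERT INTO app.users VALUES ('alice', 'security');
    INSERT INTO app.roles VALUES ('alice', 'boss');
    ```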

    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part II

    Colm O hEigeartaigh - Thu, 06/11/2015 - 14:47
    This is the second in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The previous blog entry gave instructions about how to deploy the Fediz IdP and a sample service application in Apache Tomcat. This article describes how different client authentication methods are supported in the IdP, and how they can be selected by the service via the "wauth" parameter. Then we will extend the previous tutorial by showing how to authenticate to the IdP using a client certificate in the browser, as opposed to entering a username + password.

    1) Supporting different client authentication methods in the IdP

    The Apache Fediz IdP in 1.2.0 supports different client authentication methods by default using different URL paths, as follows:
    • /federation -> the main entry point
    • /federation/up -> authentication using HTTP B/A
    • /federation/krb -> authentication using Kerberos
    • /federation/clientcert -> authentication using a client cert
    The way it works is as follows. The service provider (SP) should use the URL for the main entry point (although the SP has the option of choosing one of the more specific URLs as well). The IdP extracts the "wauth" parameter from the request ("default" is the default value), and looks for a matching key in the "authenticationURIs" section of the service configuration. For example:

    <property name="authenticationURIs">
        <map>
            <entry key="default" value="federation/up" />
            <entry key="" value="federation/krb" />
            <entry key="" value="federation/up" />
            <entry key="" value="federation/clientcert" />
        </map>
    </property>

    If a matching key is found for the wauth value, then the browser gets redirected to the associated URL. Therefore, a service provider can specify a value for "wauth" in the plugin configuration, and select the client authentication mode as a result. The values defined for "wauth" above are taken from the specification, but can be changed if required. The service provider can specify the value for "wauth" by using the "authenticationType" configuration tag, as documented here.

    2) Client authentication using a certificate

    A new feature of Fediz 1.2.0 is the ability for a client to authenticate to the IdP using a certificate embedded in the browser. To see how this works in practice, please follow the steps given in the previous tutorial to set up the IdP and service web application in Apache Tomcat. To switch to use client certificate authentication, only one change is required in the service provider configuration:
    • Edit ${catalina.home}/conf/fediz_config.xml, and add the following under the "protocol" section: <authenticationType></authenticationType>
    The next step is to add a client certificate to the browser that you are using. To avoid changing the IdP TLS configuration, we will just use the same certificate / private key that is used by the IdP on the client side for the purposes of this demo. First, we need to convert the IdP key from JKS to PKCS12. So take the idp-ssl-key.jks configured in the previous tutorial and run:
    • keytool -importkeystore -srckeystore idp-ssl-key.jks -destkeystore idp-ssl-key.p12 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass tompass -deststorepass tompass -srcalias mytomidpkey -destalias mytomidpkey -srckeypass tompass -destkeypass tompass -noprompt
    I will use Chrome for the client browser. Under Settings, Advanced Settings, "HTTPS/SSL", click on the Manage Certificates button, and add the idp-ssl-key.p12 keystore above using the password "tompass":
    Next, we need to tell the STS to trust the key used by the client (you can skip these steps if using Fediz 1.2.1):
    • First, export the certificate as follows: keytool -keystore idp-ssl-key.jks -storepass tompass -export -alias mytomidpkey -file MyTCIDP.cer
    • Take the ststrust.jks + import the cert: keytool -import -trustcacerts -keystore ststrust.jks -storepass storepass -alias idpcert -file MyTCIDP.cer -noprompt
    • Finally, copy the modified ststrust.jks into the STS: ${catalina.home}/webapps/fediz-idp-sts/WEB-INF/classes
    The last configuration step is to tell the STS where to retrieve claims for the cert. We will just copy the claims for Alice:
    • Edit ${catalina.home}/webapps/fediz-idp-sts/WEB-INF/userClaims.xml
    • Add the following under "userClaimsREALMA": <entry key="CN=localhost" value-ref="REALMA_aliceClaims" />
    Now restart Tomcat and navigate to the service URL:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    Select the certificate that we have uploaded, and you should be able to authenticate to the IdP and be redirected back to the service, without having to enter any username/password credentials!
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part I

    Colm O hEigeartaigh - Wed, 06/10/2015 - 17:16
    The previous blog entry gave an overview of the new features in Apache CXF Fediz 1.2.0. This post first focuses on setting up and running the IdP (Identity Provider) and the sample simpleWebapp in Apache Tomcat.

    1) Deploying the 1.2.0 Fediz IdP in Apache Tomcat

    Download Fediz 1.2.0 and extract it to a new directory (${fediz.home}). We will use an Apache Tomcat 7 container to host the IdP. To deploy the IdP to Tomcat:
    • Create a new directory: ${catalina.home}/lib/fediz
    • Edit ${catalina.home}/conf/ and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
    • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
    • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
    • Download and copy the hsqldb jar (e.g. hsqldb- to ${catalina.home}/lib
    Now we need to set up TLS:
    • The keys that ship with Fediz 1.2.0 are 1024 bit DSA keys, which will not work with most modern browsers (this will be fixed for 1.2.1). 
    • So we need to generate a new key: keytool -genkeypair -validity 730 -alias mytomidpkey -keystore idp-ssl-key.jks -dname "cn=localhost" -keypass tompass -storepass tompass -keysize 2048 -keyalg RSA
    • Export the cert: keytool -keystore idp-ssl-key.jks -storepass tompass -export -alias mytomidpkey -file MyTCIDP.cer
    • Create a new truststore with the cert: keytool -import -trustcacerts -keystore idp-ssl-trust.jks -storepass ispass -alias mytomidpkey -file MyTCIDP.cer -noprompt
    • Copy idp-ssl-key.jks and idp-ssl-trust.jks to ${catalina.home}.
    • Copy both jks files as well to ${catalina.home}/webapps/fediz-idp/WEB-INF/classes/ (after Tomcat is started)
    • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
    Now start Tomcat, and check that the IdP is live by opening the STS WSDL in a web browser: 'https://localhost:8443/fediz-idp-sts/REALMA/STSServiceTransport?wsdl'

    For a more thorough test, enter the following in a web browser - you should be directed to the URL for the service application (404, as we have not yet configured it):


    2) Deploying the simpleWebapp in Apache Tomcat

    To deploy the service to Tomcat:
    • Copy ${fediz.home}/examples/samplekeys/rp-ssl-server.jks and ${fediz.home}/examples/samplekeys/ststrust.jks to ${catalina.home}.
    • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${catalina.home}/conf/
    • Edit ${catalina.home}/conf/fediz_config.xml and replace '9443' with '8443'.
    • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
    • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${catalina.home}/webapps.
    3) Testing the service

    To test the service navigate to:
    • https://localhost:8443/fedizhelloworld/  (this is not secured) 
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    With the latter URL, the browser is redirected to the IDP (select realm "A") and is prompted for a username and password. Enter "alice/ecila" or "bob/bob" or "ted/det" to test the various roles that are associated with these username/password pairs.
    Categories: Colm O hEigeartaigh

    Enterprise ready request logging with CXF 3.1.0 and elastic search

    Christian Schneider - Mon, 06/08/2015 - 17:29

    Blog post added by Christian Schneider

    You may already know the CXF LoggingFeature. You used it like this:

    Old CXF LoggingFeature

    <jaxws:endpoint ...>
        <jaxws:features>
            <bean class="org.apache.cxf.ext.logging.LoggingFeature"/>
        </jaxws:features>
    </jaxws:endpoint>

    It allowed you to add logging to a CXF endpoint at compile time.

    While this already helped a lot, it was not really enterprise ready. The logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

    Logging feature in CXF 3.1.0

    In CXF 3.1 this code was moved into a separate module and gathered some new features.

    • Auto logging for existing CXF endpoints
    • Uses slf4j MDC to log meta data separately
    • Adds meta data for Rest calls
    • Adds MD5 message id and exchange id for correlation
    • Simple interface for writing your own appenders
    • Karaf decanter support to log into elastic search
    Auto logging for existing CXF endpoints in Apache Karaf

    Simply install and enable the new logging feature:

    Logging feature in Karaf

    feature:repo-add cxf 3.1.0
    feature:install cxf-features-logging
    config:property-set -p org.apache.cxf.features.logging enabled true

    Then install CXF endpoints like always. For example install the PersonService from the Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still the new logging feature will enhance the clients and endpoints and log all SOAP and Rest calls using slf4j. So the logging data will be processed by pax logging and by default end up in your karaf log.

    A log entry looks like this:

    Sample log entry

    2015-06-08 16:35:54,068 | INFO | qtp1189348109-73 | REQ_IN | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope xmlns:soap=""><soap:Body><ns2:addPerson xmlns:ns2="" xmlns:ns3=""><arg0><id>3</id><name>Test2</name><url></url></arg0></ns2:addPerson></soap:Body></soap:Envelope>

    This does not look very informative. You only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information though. You just need to configure the pax logging config to show it.

    Slf4j MDC values for meta data

    This is the raw logging information you get for a SOAP call:

    @timestamp: 2015-06-08T14:43:27,097Z
    MDC.address: http://localhost:8181/cxf/personService
    MDC.bundle.id: 90
    MDC.bundle.name: org.apache.cxf.cxf-rt-features-logging
    MDC.bundle.version: 3.1.0
    MDC.content-type: text/xml; charset=UTF-8
    MDC.encoding: UTF-8
    MDC.exchangeId: 56b037e3-d254-4fe5-8723-f442835fa128
    MDC.headers: {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
    MDC.httpMethod: POST
    MDC.messageId: a46eebd2-60af-4975-ba42-8b8205ac884c
    MDC.portName: PersonServiceImplPort
    MDC.portTypeName: Pers…
    message: <soap:Envelope xmlns:soap=""><soap:Body><ns2:getAll xmlns:ns2="" xmlns:ns3=""/></soap:Body></soap:Envelope>
    threadName: qtp80604361-78
    timeStamp: 1433774607097

    Some things to note:

    • The logger name is <service namespace>.<ServiceName>.<type>; the default Karaf log format cuts this down to just the type.
    • A lot of the details are in the MDC values

    You need to change your pax logging config to make these visible.

    You can use the logger name to fine-tune which services you want to log this way. For example, set the log level to WARN for noisy services so they are not logged, or route some services to another file.

    Message id and exchange id

    The messageId allows you to uniquely identify messages even if you collect them from several servers. It is also transported over the wire, so you can correlate a request sent on one machine with the request received on another machine.

    The exchangeId will be the same for an incoming request and the response sent out, or, on the other side, for an outgoing request and its response. This allows you to correlate requests and responses and so follow the conversations.

    Simple interface to write your own appenders

    Write your own LogSender and set it on the LoggingFeature to do custom logging. You have access to all meta data from the class LogEvent.

    So for example you could write your logs to one file per message or to JMS.
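    As a sketch, a file-per-message sender could look like the following. The interface and method names are based on the org.apache.cxf.ext.logging.event package and should be treated as assumptions; this fragment is not runnable without the CXF logging module on the classpath:

    ```java
    // Minimal custom sender: writes each message to a file named after its message id
    public class FileLogSender implements LogEventSender {
        @Override
        public void send(LogEvent event) {
            try {
                Path path = Paths.get("logs", event.getMessageId() + ".log");
                Files.createDirectories(path.getParent());
                Files.write(path, event.getPayload().getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }
    ```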

    Karaf decanter support to write into elastic search

    Many people use elastic search for their logging. Fortunately you do not have to write a special LogSender for this purpose. The standard CXF logging feature will already work.

    It works like this:

    • CXF sends the messages as slf4j events which are processed by pax logging
    • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
    • Karaf Decanter ElasticSearchAppender sends the log events to a configurable elastic search instance

    As Decanter also provides features for a local elastic search and kibana instance you are ready to go in just minutes.

    Installing Decanter for CXF logging

    feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/3.0.0-SNAPSHOT/xml/features
    feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana

    After that open a browser at http://localhost:8181/kibana. Once Decanter is released, Kibana will be fully set up out of the box. At the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

    Then you should see your cxf messages like this:

    Kibana makes it easy to filter for specific services and to correlate requests and responses.

    This is just a preview of decanter. I will do a more detailed post when the first release is out.


    Categories: Christian Schneider

    Apache CXF Fediz 1.2.0 tutorial - overview

    Colm O hEigeartaigh - Thu, 05/28/2015 - 17:59
    Apache CXF Fediz 1.2.0 has been released. Fediz is a subproject of the Apache CXF web services stack. It is an implementation of the WS-Federation Passive Requestor Profile for SSO that supports Claims Based Access Control. In layman's terms, Fediz allows you to implement Single Sign On (SSO) for your web application, by redirecting the client browser to an Identity Provider (IdP), where the client is authenticated and redirected back to the application. Fediz consists of a number of container-specific plugins (Tomcat, Jetty, Spring Security, Websphere, etc.) as well as an IdP which bundles the CXF Security Token Service (STS) to issue SAML Tokens.

    This is an overview of a planned series of articles on the new features that are available in Fediz 1.2.0, which is a new major release of the project. Subsequent articles will go into more detail on the new features, which are as follows:
    • Dependency update to use CXF 3.0.x (3.0.4).
    • A new container-independent CXF-based plugin is available.
    • Logout Support has been added to the plugins and IdP
    • A new REST API is available for configuring the IdP
    • Support for authenticating to the IdP using Kerberos has been added
    • Support for authenticating to the IdP using a client certificate has been added
    • It is now possible to use the IdP as an identity broker with a SAML SSO IdP
    • Metadata support has been added for the plugins and IdP
    Categories: Colm O hEigeartaigh

