Latest Activity

[OT] Apache CXF: Nothing Else Matters!

Sergey Beryozkin - Fri, 10/09/2015 - 18:40
One can ask: do web services still matter in today's world of emerging technologies such as Cloud and Big Data?

Of course they still matter. Take Big Data. The important thing to realize is that, as far as a remote client is concerned, an HTTP server that internally initiates Big Data flows is still just an HTTP server: the client submits the data and gets the response it needs. The mechanism used to produce this response is, and has to be, completely opaque to the remote client. Nothing unusual here, simply proper service design that leaks no implementation details to the client. In some cases one may not want to link Big Data response streams to remote clients, but in other cases it may make perfect sense.

So yes, web services do still matter, because one still needs a production-quality, secure, flexible HTTP layer between remote clients and internal data systems.

In fact, if you are an Apache CXF user, I can already hear you all saying (or even singing?), Apache CXF: Nothing Else Matters! Just make it loud enough for your colleagues from the other team to hear it :-)
Categories: Sergey Beryozkin

SurfCoastCentury 100km the video

Olivier Lamy - Fri, 10/09/2015 - 02:36
So I finally found a bit of time to make a movie (please be indulgent, I'm not a professional :-) )

If you have read the run report, here is the video:

Yes, I enjoyed it a LOT :-)

I hope that after watching this, you will want to come down under for a little run :-)
Categories: Olivier Lamy

Make your CXF JAX-RS servers OpenId Connect ready

Sergey Beryozkin - Thu, 10/08/2015 - 23:04
We've been doing a lot of work during the last year to ensure CXF developers can start experimenting quickly and effectively with the latest REST security advancements such as OAuth2 and JOSE, which are also the building blocks for OpenId Connect (OIDC).

With the OAuth2 and JOSE modules becoming quite solid, it was time to turn our attention to OIDC, with OIDC RP as the starting point: a mechanism to log users into servers by federating to OIDC IDP providers such as Google and Facebook. OIDC is a fairly complex protocol, but with OAuth2 and JOSE covered it was not that tricky after all.

The initial result is these two demos:

1. BigQuery

This demo shows a server that, acting as an OAuth2 client, accesses a user's BigQuery data sets. The demo checks a public data set of Shakespeare's works, but once you have a Google Developer account you can easily create your own BigQuery data set and use the demo to access it instead.

2. Basic OIDC

This demo shows that a server does not have to be specifically coded around OAuth2 flows to use OIDC; it only uses OIDC to log in users and then works with those users.

I'd like to encourage you to run these demos (ask on the CXF users list or #apache-cxf if you have any issues running them) and start making your CXF servers OIDC-aware now!

I look forward to the feedback from the early adopters. And please watch this space - this is only a start :-)

Categories: Sergey Beryozkin

Apache CXF and Aries Blueprint Everywhere

Sergey Beryozkin - Wed, 10/07/2015 - 18:15
Many times when developing JAX-RS demos, I have had to solve the following issue: how to describe demo endpoints so they can run in OSGi and the same endpoints can run in Tomcat.

Typically I'd create a Spring context file describing a few JAX-RS endpoints and use it when running a demo in Tomcat. Next I'd create an equivalent Blueprint context file and run the demo in Karaf.

It works, but having to duplicate the contexts in the Spring and Blueprint languages is unfortunate. Granted, one can use Spring DM to run endpoints described in Spring contexts in Karaf, but OSGi developers know Spring DM is at the end of its line.

So we first did some work to make a CXFBlueprintServlet referencing a Blueprint context work in OSGi, the same way a CXFServlet can work with Spring contexts in OSGi with the help of Spring DM.

Next, my colleague Alex suggested having the same mechanism work in non-OSGi deployments, for the reason described above, i.e., to reuse the same context language (Blueprint) when deploying CXF endpoints to both OSGi and servlet containers. As it happens, the Apache Aries team had already done some work on supporting Blueprint in non-OSGi setups, so after some more work in CXF and Aries we can now have CXFBlueprintServlet loading Blueprint contexts in standalone Tomcat/Jetty too. Some work still needs to be done here, particularly ensuring such endpoints can run offline, but overall it looks promising.

The short overview is here. Note that the same web.xml and Blueprint context are used in OSGi and non-OSGi setups; the only thing that changes is a single Maven Aries dependency.
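As a sketch, a web.xml along those lines might look like this (CXFBlueprintServlet is the class named above; the init parameter name and the Blueprint context location are assumptions for illustration, not taken from the overview):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <servlet>
    <servlet-name>CXFBlueprintServlet</servlet-name>
    <servlet-class>org.apache.cxf.transport.servlet.CXFBlueprintServlet</servlet-class>
    <init-param>
      <!-- Location of the Blueprint context describing the endpoints (path assumed) -->
      <param-name>config-location</param-name>
      <param-value>/WEB-INF/blueprint/context.xml</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>CXFBlueprintServlet</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```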

Note that this mechanism works for both CXF JAX-RS and JAX-WS endpoints.

If you are a Blueprint fan: Enjoy!  

Categories: Sergey Beryozkin

SurfCoastCentury 100km (my first ultra trail)

Olivier Lamy - Tue, 09/22/2015 - 05:45
I will try to tell you about my journey to finish this 100km trail, called SurfCoastCentury because it happens on the Surf Coast trails, with some beach runs!
So I had wanted to run this race for maybe more than a year (even before my first marathon, which I ran on 12 October last year!).
But I had to train a lot first! Especially because I had only restarted doing sport seriously around January 2013 (yup, only 18 months :-) ) after many years of very limited sport....
So during this year I managed to train a lot and do various long races, on bitumen and off-road:
  • 17 May 2015: Great Ocean Road Marathon (44km )
  • 21 March 2015: RollerCoaster run (44km trail)
  • 11 January 2015: Two bays run (28km trail)
  • 12 Oct 2014: Melbourne Marathon
I remember someone saying: "your body must recover so you normally cannot run more than one or two marathons per year".
My answer was: "As my goal is to run 100km, those marathons are just training :-)"
Looking at the training log, I have run 2000km since 1 January 2015.
The hardest part was the 3 months before the big run. Despite a trip to France in early July, I managed to run around 1200km from early June to the race (with a big August peak of 470km). The France trip was not too bad, as I ran a lot off-road compared to the city-style suburb we live in in Melbourne.
So here we are, the big day!!! Some stress coming!!! As I had never raced more than 44km (my longest training run was 48km), I didn't know what would happen :-)
It started with packing the bag: what to take, what to wear, etc... and what to ask my great support crew to carry to the various checkpoints...
The race started at 6:30am. We managed to find a house just 10 minutes' walk from the start. For some reason, I didn't want to ask my wife to drive me to a race start at 6:30am :-).
Obviously I didn't sleep a lot!
The program for today.

So here we are, race start on a beach with an amazing sunrise!!! (for some reason my wife prefers sunset over sunrise, so she missed that one :P)

The first leg of the run is 21km of beach running: sand, cliff climbing and sometimes water up to the knees (yup, really wet shoes). Luckily the sand is a bit humid, so it's not soft sand.
The scenery is just amazing!!

After 21km we arrived at Torquay, after passing through beaches such as Bells Beach and Point Addis (if you surf, you know those mythical names). I managed to run those 21km in 2h30. Then I stopped for 12 minutes. Yup, that's too long, as I didn't even change shoes; those long stops were the big mistake of this long run (but it was my first one, so I didn't want to burn myself out, which I didn't, for sure :-) ).
Now we are going back to Anglesea using the Surf Coast trails/walk (a mix of trail on the top of the cliff and a bit of bush). Again, amazing scenery.

I'm happy as the family is here at Checkpoint 3, so I got smiles and plenty of "Allez Papa!!" (Go Dad!!)

This leg 2 is a bit more hilly: 28km with 520m of elevation gain. I managed to run the first 11km at a good pace (6:22 min/km), then the last 17km at 7:20 min/km.
I arrived at checkpoint 4 (mid-race, 50km) after 6h. Now, time to change shoes/socks/t-shirt and get lunch. I think here I made a BIG mistake, as I stopped for 35 minutes!! Again, I didn't have the experience of such a race, so I did everything (eating/changing stuff) in a totally wrong order and did it badly. Next time: just have a printed TODO list with everything in the correct order. Another mistake: I ate too much and too fast (a sandwich and a banana), so my stomach was not really happy for the next 15km :-(.

So now we start leg 3, which is the most difficult part of the race: 28km / 760m of elevation gain. Here, another completely different scenery, with a real bush part!! You know, the famous red/orange Australian ground and a bush forest!! And here I started to run long distances alone (the 50km runners are not here anymore, and the runners are more stretched out after such a distance).

But that's ok, plenty of crazy birds!! The first 20km were really good; I managed a good 7:26 min/km for this hilly part. But the next 7km were bad (a really hard time on this last hard uphill): 9:13 min/km over 7km (yup, really slow!! Especially as I stopped too long at the CP5 water point). I was happy to finish this 3rd leg, but a bit tired :-). I got help from Nicolas, who biked with me during the last part. So now, another smile time with the family at checkpoint 6. 77km done!! Almost finished :-). Another mistake, as I stopped too long again :-( (19 minutes!!), but I was happy to have a chat with my wife and kids.

So now, time for the last leg. I still feel well (honestly, you cannot give up after having done 77% of the race :-) ). The program is: 23km. Elevation gain / loss: 426m / 466m.

The first 9km are still a bit of up/down bush single track, but then you are back at the ocean (you start thinking the finish line is not far anymore!!). Last checkpoint (CP7) (time to change into long sleeves and get the head torch); wife/kids are here again for a last smile! See you "soon" at the finish line :-) This sunset was really worth such a long run!! :-)

Now, time to finish. But it's dark. I mean really dark, as you are in the middle of nowhere without any lights from any cities.

And especially with the last beach run of around 4km (yup, again a beach run, after 92km!!! Sacrebleu!!). So I cannot see anything; I just want to avoid running into the ocean :-).

Finally I can see lights; wife/kids are here, and I can cross the line with the kids. That's it, it's finished!! (yup, you can't believe it; you only realise days after).

I believe I was crazy when I was thinking of running this race more than a year ago. But now it's done, and it was such an AMAZING experience.

It was really worth all the hard training (waking up at 5am 5 days/week for 2.5 months really needs devotion/motivation; the race is finally very easy compared to all the training).

I want to say a big THANKS to my wife and kids for all the support during this special day, and before, when I was not here because I was training somewhere!!!

More details on my race here, and the original race description (in really good English :-) ).

My result: I'm happy to have finished (and with dignity). I think I could have made a better time, especially with shorter breaks (but I'm French, so I need a long lunch break). It was my first experience over 100km, so the next one will be better :-)

And I already have some ideas for the next one. I'm pretty sure the family will be happy to visit the Blue Mountains :-) (see The North Face 100).

I ran the race with a GoPro, so I have plenty of video material to show you what a beautiful country Australia is (but I need a bit of time to do it).
Categories: Olivier Lamy

Support for Jetty 9 in Apache CXF Fediz 1.3.0

Colm O hEigeartaigh - Fri, 09/18/2015 - 18:31
Yesterday I gave a tutorial on how to deploy the Apache CXF Fediz simpleWebapp example to Jetty 8 using Fediz 1.2.1. Apache CXF Fediz 1.3.0 will ship with a new plugin to support Jetty 9. In this post I will cover how to deploy the simpleWebapp example to Jetty 9 using this new plugin.

1) Deploying the 1.2.0 Fediz IdP in Apache Tomcat

As per the previous tutorial on deploying to Tomcat, we will deploy the IdP and STS in Apache Tomcat. Download Fediz and extract it to a new directory (${fediz.home}). To deploy the IdP to Tomcat:

  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb- to ${catalina.home}/lib
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks from ${fediz.home}/examples/samplekeys to ${catalina.home}.
  • Edit ${catalina.home}/conf/server.xml and change the ports from 8080 -> 9080 and 8443 -> 9443, so as not to conflict with Jetty.
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml as well, e.g.: <Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat, and check that the IdP is live by opening the STS WSDL in a web browser: 'https://localhost:9443/fediz-idp-sts/REALMA/STSServiceTransport?wsdl'

2) Deploying the simpleWebapp in Jetty 9

Download Jetty 9 and extract it to a new directory (${jetty.home}). First let's set up TLS:
  • Copy ${fediz.home}/examples/samplekeys/rp-ssl-key.jks to ${jetty.home}/etc
  • Copy ${fediz.home}/examples/samplekeys/ststrust.jks to ${jetty.home} *and* to ${jetty.home}/etc
  • Edit ${jetty.home}/start.ini to include the ssl, https and fediz modules, and set up the TLS configuration as follows:
  •  The "fediz" module referred to above must be placed in ${jetty.home}/modules/fediz.mod with content:
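The module definition itself did not survive the formatting here. A plausible sketch, following Jetty 9's standard .mod file format (the exact contents are an assumption), would simply add the plugin jars copied in below to the server classpath:

```
[lib]
lib/fediz/*.jar
```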
Now we will deploy the simpleWebapp:
  • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${jetty.home}/etc
  • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
  • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${jetty.home}/webapps
  • Create a new directory: ${jetty.home}/lib/fediz
  • Copy ${fediz.home}/plugins/jetty9/lib/* to ${jetty.home}/lib/fediz (note you may want to copy in a slf4j logging binding in here to see logging output, e.g. slf4j-jdk14-1.7.12.jar).
  • Create a new file in ${jetty.home}/webapps called "fedizhelloworld.xml" with content as follows, and then start Jetty as normal: 
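The context file content did not survive the formatting here. As a rough sketch in the standard Jetty 9 deployment descriptor form (the real file additionally wires in the Fediz authenticator, which is not reproduced in this sketch):

```xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/fedizhelloworld</Set>
  <Set name="war"><Property name="jetty.home"/>/webapps/fedizhelloworld.war</Set>
</Configure>
```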

3) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/  (this is not secured) 
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
With the latter URL, the browser is redirected to the IDP (select realm "A") and is prompted for a username and password. Enter "alice/ecila" or "bob/bob" or "ted/det" to test the various roles that are associated with these username/password pairs.
Categories: Colm O hEigeartaigh

Deploying the Apache CXF Fediz simpleWebapp to Jetty

Colm O hEigeartaigh - Thu, 09/17/2015 - 18:41
In previous tutorials about Apache CXF Fediz, I have always described deploying the simpleWebapp example that ships with Fediz in Apache Tomcat. However, Fediz also supports deploying secured applications in Jetty (7 and 8 as yet; support for Jetty 9 is forthcoming). As it can be somewhat confusing to set up the security requirements correctly, I will briefly cover how to deploy the simpleWebapp in Jetty 8 in this blog post (see the Fediz wiki for a dedicated page on deploying to Jetty).

1) Deploying the 1.2.0 Fediz IdP in Apache Tomcat

As per the previous tutorial on deploying to Tomcat, we will deploy the IdP and STS in Apache Tomcat. Download Fediz 1.2.1 and extract it to a new directory (${fediz.home}). To deploy the IdP to Tomcat:
  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb- to ${catalina.home}/lib
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks from ${fediz.home}/examples/samplekeys to ${catalina.home}.
  • Edit ${catalina.home}/conf/server.xml and change the ports from 8080 -> 9080 and 8443 -> 9443, so as not to conflict with Jetty.
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml as well, e.g.: <Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat, and check that the IdP is live by opening the STS WSDL in a web browser: 'https://localhost:9443/fediz-idp-sts/REALMA/STSServiceTransport?wsdl'

2) Deploying the simpleWebapp in Jetty 8

Download Jetty 8 and extract it to a new directory (${jetty.home}). First let's set up TLS:
  • Copy ${fediz.home}/examples/samplekeys/rp-ssl-key.jks to ${jetty.home}/etc
  • Copy ${fediz.home}/examples/samplekeys/ststrust.jks to ${jetty.home} *and* to ${jetty.home}/etc
  • Edit ${jetty.home}/start.ini and make sure that 'etc/jetty-ssl.xml' is included.
  • Edit ${jetty.home}/etc/jetty-ssl.xml and configure the TLS keys, e.g.:
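The configuration snippet did not survive the formatting here. A sketch of what the TLS section of a Jetty 8 jetty-ssl.xml typically looks like, pointing at the keys copied in above (the keystore passwords shown are placeholders, not the actual sample-key passwords):

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
      <Arg>
        <New class="org.eclipse.jetty.util.ssl.SslContextFactory">
          <!-- Keys copied into ${jetty.home}/etc in the steps above -->
          <Set name="KeyStore"><Property name="jetty.home" default="."/>/etc/rp-ssl-key.jks</Set>
          <Set name="KeyStorePassword">changeit</Set>
          <Set name="TrustStore"><Property name="jetty.home" default="."/>/etc/ststrust.jks</Set>
          <Set name="TrustStorePassword">changeit</Set>
        </New>
      </Arg>
      <Set name="port">8443</Set>
    </New>
  </Arg>
</Call>
```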

Now we will deploy the simpleWebapp:
  • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${jetty.home}/etc
  • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
  • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${jetty.home}/webapps
  • Create a new directory: ${jetty.home}/lib/fediz
  • Copy ${fediz.home}/plugins/jetty/lib/* to ${jetty.home}/lib/fediz (note you may want to copy in a slf4j logging binding in here to see logging output, e.g. slf4j-jdk14-1.7.12.jar).
  • Edit ${jetty.home}/start.ini and add "fediz" to "OPTIONS".
  • Create a new file in ${jetty.home}/contexts called "fedizhelloworld.xml" with content as follows, and then start Jetty as normal:

3) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/  (this is not secured) 
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
With the latter URL, the browser is redirected to the IDP (select realm "A") and is prompted for a username and password. Enter "alice/ecila" or "bob/bob" or "ted/det" to test the various roles that are associated with these username/password pairs.
    Categories: Colm O hEigeartaigh

    Authorization for web services using XACML 3.0

    Colm O hEigeartaigh - Tue, 09/08/2015 - 16:29
    In a blog post last year, I covered some authentication and authorization test-cases for Apache CXF-based web services that I uploaded to github. In particular, the cxf-sts-xacml demo showed how a CXF service can use XACML to authorize a web service request, by sending a XACML request to a Policy Decision Point (PDP) and then by enforcing the authorization decision. This demo only covered XACML 2.0 (provided by OpenSAML). In this post we will give an example of how to use XACML 3.0 via Apache OpenAz to make and enforce authorization requests for Apache CXF based services.

    1) Introducing Apache OpenAz

    The XACML functionality in Apache CXF is based on OpenSAML, which provides support for XACML 2.0. However, XACML 3.0 has been an OASIS standard since January 2013. A new project in the Apache Incubator, Apache OpenAz, addresses this gap. The source code is broken down into the following modules:
    • openaz-xacml - API + common functionality.
    • openaz-xacml-rest - Some common functionality used by the RESTful API interfaces
    • openaz-xacml-pdp - A PDP implementation
    • openaz-xacml-pdp-rest - An implementation of the XACML 3.0 RESTful Interface for the PDP
    • openaz-xacml-pap-rest - An implementation of the XACML 3.0 RESTful Interface for the PAP
    • openaz-xacml-test - Some testsuites
    • openaz-pep - The PEP (Policy Enforcement Point) implementation.
    2) Integrating Apache OpenAz with Apache CXF

    The testcases are available here:
    • cxf-sts-xacml: This project contains a number of tests that show how to use XACML with CXF to authorize a client request. It contains both XACML 2.0 tests and XACML 3.0 tests.
    In both cases, the client obtains a SAML Token from the STS with the roles of the client embedded in the token. The service provider extracts the roles and creates a XACML request. For the XACML 2.0 case, OpenSAML is used to create an XML XACML 2.0 request, which is then sent to a mocked PDP JAX-RS service. However, let's focus on the XACML 3.0 case. In this test, the OpenAz API (via the openaz-xacml module covered above) is used to create a JSON XACML 3.0 request. This is evaluated by an OpenAz-based PDP which is co-located with the service. After evaluating the request, the PDP response is enforced on the service side.

    The service endpoint is configured in Spring as follows, registering a XACML3AuthorizingInterceptor (which in turn contains a reference to the co-located PDP):
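The Spring snippet itself did not survive the formatting here. A hedged sketch of the wiring (the package names, bean ids and constructor details are assumptions for illustration; XACML3AuthorizingInterceptor is the class named in the text):

```xml
<!-- Hypothetical wiring: the PDP bean class and package names are assumed -->
<bean id="pdp" class="org.example.xacml.OpenAzPDP"/>

<bean id="xacml3Interceptor" class="org.example.xacml.XACML3AuthorizingInterceptor">
    <constructor-arg ref="pdp"/>
</bean>

<jaxws:endpoint id="doubleItService" implementor="#doubleItImpl" address="/DoubleItService">
    <jaxws:inInterceptors>
        <ref bean="xacml3Interceptor"/>
    </jaxws:inInterceptors>
</jaxws:endpoint>
```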

    The XACML3AuthorizingInterceptor is configured with an implementation that creates a XACML 3.0 request using the SAML 2.0 profile of XACML 3.0, which is subsequently converted into JSON and sent to the PDP. The PDP is configured with "root" and "reference" policies, which state that a user with the role "boss" has permission to "execute" the Web Service Operation "{}DoubleItService#DoubleIt". For example:
    A sample authorization request looks like:
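The sample request did not survive the formatting here. A sketch following the JSON Profile of XACML 3.0, using the role/action/resource values mentioned above (the attribute ids shown are typical defaults, assumed for illustration):

```json
{
  "Request": {
    "AccessSubject": {
      "Attribute": [
        { "AttributeId": "urn:oasis:names:tc:xacml:2.0:subject:role", "Value": "boss" }
      ]
    },
    "Action": {
      "Attribute": [
        { "AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id", "Value": "execute" }
      ]
    },
    "Resource": {
      "Attribute": [
        { "AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id", "Value": "{}DoubleItService#DoubleIt" }
      ]
    }
  }
}
```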
    If you are interested in XACML 3.0 please get involved with the Apache OpenAz project! Once the project gets more mature, the PEP code in my project will probably make it over to Apache CXF so that users have the option of supporting XACML 2.0 or 3.0 (and XML or JSON) with their web services.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.1 and 1.1.3 released

    Colm O hEigeartaigh - Thu, 08/27/2015 - 18:50
    Apache CXF Fediz 1.2.1 and 1.1.3 have been released. Both releases contain updates to the underlying CXF dependency, as well as a number of minor bug fixes and improvements. However, the most important enhancement is a fix for a recent security advisory:
    • CVE-2015-5175: Apache CXF Fediz application plugins are vulnerable to Denial of Service (DoS) attacks
    Apache CXF Fediz is a subproject of Apache CXF which implements the WS-Federation Passive Requestor Profile for SSO specification. It provides a number of container based plugins to enable SSO for Relying Party applications. These plugins are potentially vulnerable to DoS attacks due to the fact that support for Document Type Declarations (DTDs) is not disabled when parsing the response from the Identity Provider (IdP).
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part VII

    Colm O hEigeartaigh - Tue, 08/25/2015 - 17:36
    This is the seventh and final blog post on a series of new features introduced in Apache CXF Fediz 1.2.0. The previous post looked at the new REST API of the IdP. Up to now, we have only covered the basic scenario where the application and the IdP are in the same realm. However, a more sophisticated example is when the application is in a different realm. In this case, the IdP must redirect the user to the home IdP of the application for authentication. The IdP has supported this functionality up to now using WS-Federation only. However, the 1.2.0 IdP supports the ability to redirect to a SAML SSO IdP, thus acting as an identity broker between the two protocols. We will cover this functionality in this tutorial.

    1) Setup simpleWebapp + SAML SSO IdP

    As with previous tutorials, please follow the first tutorial to deploy the Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". However, this time the "simpleWebapp" is going to be deployed in a different realm. Edit 'conf/fediz_config.xml' and add the following under the "protocol" section:
    • <homeRealm type="String">urn:org:apache:cxf:fediz:idp:realm-B</homeRealm>
    This tells the IdP that the application is to be authenticated in "realm-B".

    The next thing we are going to do is set up a SAML SSO IdP which will authenticate users who want to access "simpleWebapp". In this tutorial we will just use a mocked SAML SSO IdP from the Fediz system tests for convenience. Build the war as in the following steps and deploy it to Tomcat:
    2) Configure the Fediz IdP

    Next we need to take a look at configuring the Fediz IdP so that it knows where to find the SAML SSO IdP associated with "realm B" and how to communicate with it. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml':

    In the 'idp-realmA' bean:
    • Change the port in "idpUrl" to "8443". 
    In the 'trusted-idp-realmB' bean:
    • Change the "url" value to "https://localhost:8443/samlssoidp/samlsso".
    • Change the "protocol" value to "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser".
    • Add the following: <property name="parameters"><util:map><entry key="support.deflate.encoding" value="true" /></util:map></property>
    The "parameters" map above is a way to provide SAML SSO specific configuration options to the Fediz IdP. The following options can be configured:
    • sign.request - Whether to sign the request or not. The default is "true".
    • require.keyinfo - Whether to require a KeyInfo or not when processing a (signed) Response. The default is "true".
    • require.signed.assertions - Whether the assertions contained in the Response must be signed or not. The default is "true".
    • require.known.issuer - Whether we have to "know" the issuer of the SAML Response or not. The default is "true".
    • support.base64.encoding - Whether we BASE-64 decode the response or not. The default is "true".
    • support.deflate.encoding - Whether we support Deflate encoding or not. The default is "false".
    Redeploy the Fediz IdP + navigate to the following URL in a browser:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    You will see that the Fediz IdP will redirect the browser to the mocked SAML SSO IdP for authentication (authenticate with "ALICE/ECILA") and then back to the Fediz IdP and eventually back to the client application.

    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part VI

    Colm O hEigeartaigh - Wed, 08/19/2015 - 17:55
    This is the sixth in a series of posts on the new features of Apache CXF Fediz 1.2.0. The previous post looked at Single Sign Out support in Fediz. In this article we will briefly cover the new REST API of the Fediz IdP. Prior to the 1.2.0 release all of the IdP configuration was done in a static way using Spring. If the IdP administrator wished to change the claims for a particular application, then the change would necessitate restarting the IdP. In contrast, the Fediz 1.2.0 IdP persists the configuration to a database using JPA. In addition, it allows access to this configuration via a REST API powered by Apache CXF.

    To get started, please follow step 1 of the first tutorial to deploy the Fediz IdP to Apache Tomcat. The REST API is described by a WADL document available at the following URL:
    • https://localhost:8443/fediz-idp/services/rs?_wadl
    The WADL document describes the following resource URIs:
    • services/rs/idps - An IdP for a given realm. 
    • services/rs/claims - The claims that are available in the IdP.
    • services/rs/applications - The applications that are defined in the IdP.
    • services/rs/trusted-idps - The trusted IdPs that are defined in the IdP.
    • services/rs/roles - The roles associated with the REST API.
    By using the standard HTTP verbs in the usual way you can retrieve, store, modify and remove items from the IdP configuration. For example, to see (GET) the configuration associated with the IdP for "realm A" navigate to the following URL in a browser:
    • https://localhost:8443/fediz-idp/services/rs/idps/urn:org:apache:cxf:fediz:idp:realm-A
    The user credentials are defined in "webapps/fediz-idp/WEB-INF/classes/". You can use "admin/password" by default to access the API. Here you can see the protocols supported, the token types offered, the different ways of authenticating to the IdP, the claim types offered, the applications supported, etc. Note that by default the information returned in a GET request is in XML format. You can return it in JSON format just by appending ".json" to the URL, e.g. "https://localhost:8443/fediz-idp/services/rs/idps/urn:org:apache:cxf:fediz:idp:realm-A.json".
    For much more information on how to use the new REST API, please see Oliver Wulff's blog on this topic.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part V

    Colm O hEigeartaigh - Fri, 08/07/2015 - 18:43
    This is the fifth in a series of posts on the new features available in Apache CXF Fediz 1.2.0. The previous article described a new container-independent Relying Party (RP) plugin available in Fediz 1.2.0 based on Apache CXF. In this post we will take a look at two new features, support for Single Sign Out and the ability to publish metadata for both RP plugins and the IdP.

    1) Single Sign Out support in Fediz

    An important new feature in Fediz 1.2.0 is the ability to perform Single Sign Out both at the RP and IdP. The user can log out at either the RP or IdP by adding "?wa=wsignout1.0" to the relevant URL. Alternatively, two new configuration options are added for the RP:
    • logoutURL - The logout URL that triggers federated logout.
    • logoutRedirectTo - The URL of the landing page after a successful logout.
    To see how this works in practice, follow the first tutorial to set up the hello world demo in Tomcat, and log on via:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    After successful authentication, you will see a basic webpage detailing the User principal, roles, and the underlying SAML Assertion. Now what if you want to log out from the application? From Fediz 1.2.0 it's simple. Navigate to the following URL:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet?wa=wsignout1.0
    The browser will be redirected to the logout page for the IdP:

    Click "Logout" and you see a page confirming that Logout was successful (in both the RP + IdP). To confirm this, navigate again to the application URL, and you will see that you are redirected back to the IdP for authentication. The user can also logout directly at the IdP by navigating to:
    • https://localhost:8443/fediz-idp/federation?wa=wsignout1.0
    2) Metadata Support in Fediz

    It has been possible since Fediz 1.0.0 to publish the Metadata document associated with a Relying Party using the Tomcat plugin. This Metadata document is built dynamically using the Fediz configuration values and is published at the standard URL. Here is a screenshot of a request using the "fedizhelloworld" demo:

    This document describes the endpoint address of the service, the realm of the service, and the claims (both required and optional). The metadata document can also be signed by specifying a "signingKey" in the Fediz configuration.

    So what's new in Fediz 1.2.0? The first thing is that it was only possible previously to publish the metadata document when using the Tomcat plugin. In Fediz 1.2.0, this has been extended to cover the other plugins, i.e. Jetty, Spring, etc. In addition, the forthcoming Fediz 1.2.1 release adds support for Metadata to the IdP. The Metadata is available at the same standard URL as for the RP, e.g.:

    This signed document describes the URL of the STS, as well as that of the IdP itself, and the claims that are offered by the IdP.
    Categories: Colm O hEigeartaigh

    Karaf Tutorial Part 4 - CXF Services in OSGi

    Christian Schneider - Wed, 08/05/2015 - 10:44

    Blog post edited by Christian Schneider

    Shows how to publish and use a simple REST and SOAP service in Karaf using CXF and Blueprint.

    To run the example you need to install the http feature of Karaf. The default HTTP port is 8080 and can be configured using the
    config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base URL of the CXF servlet is by default "/cxf".
    It can be configured in the config pid "org.apache.cxf.osgi".

    Differences in Talend ESB


    If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

    PersonService Example

    The "business case" is to manage a list of persons. The service should provide the typical CRUD operations. Front ends should be a REST service, a SOAP service and a web UI.

    The example consists of four projects

    • model: Person class and PersonService interface
    • server: Service implementation and logic to publish the service using REST and SOAP
    • proxy: Accesses the SOAP service and publishes it as an OSGi service
    • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

    You can find the full source on github:

    Installation and test run

    First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

    Installing Karaf and preparing for CXF

    We start with a fresh Karaf 2.3.1.

    Installing CXF

    In Karaf Console run

    features:chooseurl cxf 2.7.4
    features:install http cxf

    Changes in commands for karaf 3

    • features:chooseurl -> feature:repo-add
    • features:install -> feature:install
    Build and Test

    Checkout the project from github and build using maven

    > mvn clean install

    Install service and ui in karaf

    Install each of the four bundles with install -s (one command per bundle).

    Test the service

    The person service should show up in the list of currently installed services that can be found here: http://localhost:8181/cxf/

    List the known persons: http://localhost:8181/cxf/person
    This should show one person "chris".

    Now using a firefox extension like Poster or Httprequester you can add a person.

    Send the following xml snippet:

    <?xml version="1.0" encoding="UTF-8"?>
    <person>
        <id>1001</id>
        <name>Christian Schneider</name>
        <url></url>
    </person>

    with Content-Type: text/xml using PUT: http://localhost:8181/cxf/person/1001
    or to this url using POST: http://localhost:8181/cxf/person

    Now the list of persons should show two persons.

    Test the proxy and web UI


    You should see the list of persons managed by the personservice and be able to add new persons.

    How it works

    Defining the model

    The model project is a simple java maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies on cxf. The service interface is just a plain java interface with the @WebService annotation.

    @WebService
    public interface PersonService {
        Person[] getAll();
        Person getPerson(String id);
        void updatePerson(String id, Person person);
        void addPerson(Person person);
    }

    The Person class is just a simple pojo with getters and setters for id, name and url and the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB what xml element to use for the Person class.
    There is also no special code for OSGi in this project. So the model works perfectly inside and outside of an OSGi container.


    The service is defined java first. SOAP and REST are used quite transparently. This is very suitable for communication between a client and server of the same application. If the service
    is to be used by other applications, the wsdl first approach is more suitable. In this case the model project should be configured to generate the data classes and service interface from
    a wsdl (see the cxf wsdl_first example pom file). For REST services the java first approach is quite common in general, as the client typically does not use proxy classes anyway.

    Service implementation (server)

    PersonServiceImpl is a java class that implements the service interface and contains some additional JAX-RS annotations. The way the class is defined allows it to implement a REST service and a SOAP service at the same time.
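    The CRUD logic behind such a dual REST/SOAP implementation can be sketched standalone. Method names mirror the tutorial's interface; the JAX-RS annotations are only noted in comments so the sketch compiles without the JAX-RS API on the classpath:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of the service implementation pattern: one class serves both SOAP and REST.
    // In the real PersonServiceImpl, JAX-RS annotations sit next to each method.
    public class PersonServiceSketch {

        public static class Person {
            public String id;
            public String name;
            public String url;
            public Person(String id, String name, String url) {
                this.id = id; this.name = name; this.url = url;
            }
        }

        // keyed by person id, insertion-ordered like a simple registry
        private final Map<String, Person> persons = new LinkedHashMap<>();

        // @GET in the real service
        public Person[] getAll() {
            return persons.values().toArray(new Person[0]);
        }

        // @GET @Path("/{id}")
        public Person getPerson(String id) {
            return persons.get(id);
        }

        // @PUT @Path("/{id}")
        public void updatePerson(String id, Person person) {
            persons.put(id, person);
        }

        // @POST
        public void addPerson(Person person) {
            persons.put(person.id, person);
        }

        public static void main(String[] args) {
            PersonServiceSketch svc = new PersonServiceSketch();
            svc.addPerson(new Person("1001", "Christian Schneider", ""));
            System.out.println(svc.getAll().length);          // prints 1
            System.out.println(svc.getPerson("1001").name);   // prints Christian Schneider
        }
    }
    ```

    Because the class carries no container dependencies, it can be unit-tested outside OSGi exactly as the tutorial recommends for the model.
    
    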

    The server project also contains a small starter class that allows the service to be published directly from eclipse. This class is not necessary for deployment in karaf.

    The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

    As the file is in the special location OSGI-INF/blueprint it is automatically processed by the blueprint implementation aries in karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from spring but apart from this the xml is very similar to a spring xml.
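    A minimal blueprint.xml along these lines publishes both endpoints. The com.example package is a placeholder; the jaxrs/jaxws namespaces and elements are the standard CXF blueprint ones:

    ```xml
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs"
               xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

        <bean id="personServiceImpl" class="com.example.PersonServiceImpl"/>

        <!-- REST endpoint, published under the cxf servlet, e.g. /cxf/person -->
        <jaxrs:server address="/person">
            <jaxrs:serviceBeans>
                <ref component-id="personServiceImpl"/>
            </jaxrs:serviceBeans>
        </jaxrs:server>

        <!-- SOAP endpoint backed by the same bean -->
        <jaxws:endpoint implementor="#personServiceImpl" address="/personService"/>
    </blueprint>
    ```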

    Service proxy

    The service proxy project only contains a blueprint xml that uses the CXF JAXWS client to consume the SOAP service and exports it as an OSGi service. Encapsulating the service client as an OSGi service (proxy project) is not strictly necessary, but it has the advantage that the webui is then completely independent of cxf, making it very easy to change the way the service is accessed. This is considered a best practice in OSGi.
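    A sketch of such a proxy blueprint, with placeholder package names and address (jaxws:client is the standard CXF blueprint client element):

    ```xml
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

        <!-- CXF JAX-WS client proxy for the SOAP endpoint -->
        <jaxws:client id="personServiceClient"
                      serviceClass="com.example.PersonService"
                      address="http://localhost:8181/cxf/personService"/>

        <!-- export the proxy as a plain OSGi service for the webui -->
        <service ref="personServiceClient" interface="com.example.PersonService"/>
    </blueprint>
    ```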

    See blueprint.xml

    Web UI (webui)

    This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The pax web whiteboard extender will then publish the servlet on the location /personui.
    The PersonServlet gets the PersonService injected and uses it to list all persons and to add new ones.

    The wiring is done using a blueprint context.

    Some further remarks

    The example uses blueprint instead of spring dm as it works much better in an OSGi environment. The bundles are created using the maven bundle plugin. One fact that shows how well blueprint works
    is that the maven bundle plugin is just used with default settings. In spring dm the imports have to be configured explicitly, as spring needs access to many implementation classes of cxf. For spring dm examples
    take a look at the Talend Service Factory examples.

    The example shows that writing OSGi applications is quite simple with aries and blueprint. It needs only 153 lines of java code (without comments) for a complete little application.
    The blueprint xml is also quite small and readable.

    Back to Karaf Tutorials

    View Online
    Categories: Christian Schneider

    Apache Karaf Tutorial Part 6 - Database Access

    Christian Schneider - Tue, 07/28/2015 - 11:13

    Blog post edited by Christian Schneider

    Shows how to access databases from OSGi applications running in Karaf and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally JDBC and JPA examples show how to use such a DataSource from user code.

    Prerequisites

    You need an installation of apache karaf 3.0.3 for this tutorial.

    Example sources

    The example projects are on github Karaf-Tutorial/db.

    Drivers and DataSources

    In plain java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work as the ClassLoader of your bundle will have no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver and publish it as an OSGi service. The user bundle should then only use the DataSource without knowing the driver specifics. This is quite similar to the best practice in application servers where the DataSource is managed by the server and published to jndi.

    So we need to learn how to create and use DataSources first.
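    Once such a DataSource service exists, consuming it from a user bundle's blueprint.xml is a one-liner (standard blueprint reference syntax; the filter assumes the dataSourceName property used later in this tutorial):

    ```xml
    <!-- consume the DataSource published as an OSGi service, selected by name -->
    <reference id="dataSource" interface="javax.sql.DataSource"
               filter="(dataSourceName=person)"/>
    ```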

    The DataSourceFactory services

    To make it easier to create DataSources in OSGi the specs define a DataSourceFactory interface. It allows you to create a DataSource from a set of properties using a specific driver. Each database driver is expected to implement this interface and publish it with service properties for the driver class name and the driver name.
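    For reference, the contract looks roughly like this, reproduced from the OSGi JDBC specification for illustration (the constant values are the spec-defined property names; the real interface lives in org.osgi.service.jdbc and has a few more factory methods):

    ```java
    import java.sql.SQLException;
    import java.util.Properties;
    import javax.sql.DataSource;

    // Sketch of the OSGi DataSourceFactory contract (org.osgi.service.jdbc).
    interface DataSourceFactory {
        // service registration properties a driver bundle publishes
        String OSGI_JDBC_DRIVER_CLASS = "osgi.jdbc.driver.class";
        String OSGI_JDBC_DRIVER_NAME  = "osgi.jdbc.driver.name";

        // keys understood by createDataSource(Properties)
        String JDBC_URL = "url";
        String JDBC_USER = "user";
        String JDBC_PASSWORD = "password";

        // build a DataSource for the driver this factory represents
        DataSource createDataSource(Properties props) throws SQLException;
    }
    ```
    
    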

    Introducing pax-jdbc

    The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

    • Implement the DataSourceFactory service for Databases that do not create this service directly
    • Implement a pooling and XA wrapper for XADataSources (This is explained at the pax jdbc docs)
    • Provide a facility to create DataSource services from config admin configurations
    • Provide karaf features for many databases as well as for the above additional functionality

    So it covers everything you need from driver installation to creation of production quality DataSources.

    Installing the driver

    The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

    For several databases pax-jdbc already provides karaf features to install a current version of the database driver.

    For H2 the following commands will work

    feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
    feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
    service:list DataSourceFactory

    Strictly speaking we would only need the pax-jdbc-h2 feature but we will need the others for the next steps.

    This will install the pax-jdbc feature repository and the h2 database driver. This driver already implements the DataSourceFactory so the last command will display this service.

    DataSourceFactory

    [org.osgi.service.jdbc.DataSourceFactory]
    -----------------------------------------
    osgi.jdbc.driver.class = org.h2.Driver
    osgi.jdbc.driver.name = H2
    osgi.jdbc.driver.version = 1.3.172
    service.id = 691
    Provided by : H2 Database Engine (68)

    The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

    pooled and XA DataSourceFactory

    [org.osgi.service.jdbc.DataSourceFactory]
    -----------------------------------------
    osgi.jdbc.driver.class = org.h2.Driver
    osgi.jdbc.driver.name = H2-pool-xa
    osgi.jdbc.driver.version = 1.3.172
    pooled = true
    service.id = 694
    xa = true
    Provided by : OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

    Technically this DataSourceFactory also creates DataSource objects but internally they manage XA support and pooling. So we want to use this one for our later code examples.

    Creating the DataSource

    Now we just need to create a configuration with the correct factory pid to create a DataSource as a service

    So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content

    config for DataSource

    osgi.jdbc.driver.name=H2-pool-xa
    url=jdbc:h2:mem:person
    dataSourceName=person

    The config will automatically trigger the pax-jdbc-config module to create a DataSource.

    • The property osgi.jdbc.driver.name=H2-pool-xa will select the H2 DataSourceFactory with pooling and XA support we previously installed.
    • The url configures H2 to create a simple in memory database named "person".
    • The dataSourceName will be reflected in a service property of the DataSource so we can find it later
    • You could also set pooling configurations in this config but we leave it at the defaults

    DataSource

    karaf@root()> service:list DataSource
    [javax.sql.DataSource]
    ----------------------
    dataSourceName = person
    osgi.jdbc.driver.name = H2-pool-xa
    osgi.jndi.service.name = person
    service.factoryPid = org.ops4j.datasource
    service.id = 696
    service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
    url = jdbc:h2:mem:person
    Provided by : OPS4J Pax JDBC Config (69)

    So when we search for services implementing the DataSource interface we find the person datasource we just created.

    When we installed the features above we also installed the aries jndi feature. This module maps OSGi services to jndi objects. So we can also use jndi to retrieve the DataSource which will be used in the persistence.xml for jpa later.

    jndi url of DataSource

    osgi:service/person

    Karaf jdbc commands

    Karaf contains some commands to manage DataSources and do queries on databases. The commands for managing DataSources in karaf 3.x still work with the older approach of using blueprint to create DataSources. So we will not use these commands but we can use the functionality to list datasources, list tables and execute queries.

    jdbc commands

    feature:install jdbc
    jdbc:datasources
    jdbc:tables person

    We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

    jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
    jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
    jdbc:query person "select * from person"

    This creates a table person, adds a row to it and shows the table.

    The output should look like this

    select * from person
    NAME                | TWITTERNAME
    --------------------------------------
    Christian Schneider | @schneider_chris

    Accessing the database using JDBC

    The project db/examplejdbc shows how to use the datasource we installed and execute jdbc commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class
    DbExample. The test method is then called as init method and shows some jdbc statements on the DataSource. The DbExample class is completely independent of OSGi and can be easily tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.

    Build and install

    Build works like always using maven

    > mvn clean install

    In Karaf we just need our own bundle as we have no special dependencies

    > install -s

    Using datasource H2, URL jdbc:h2:~/test
    Christian Schneider, @schneider_chris

    After installation the bundle should directly print the db info and the persisted person.

    Accessing the database using JPA

    For larger projects often JPA is used instead of hand crafted SQL. Using JPA has two big advantages over JDBC.

    1. You need to maintain less SQL code
    2. JPA provides dialects for the subtle differences in databases that else you would have to code yourself.

    For this example we use Hibernate as the JPA Implementation. On top of it we add Apache Aries JPA which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

    The project examplejpa shows a simple project that implements a PersonService managing Person objects.
    Person is just a java bean annotated with the JPA @Entity annotation.

    Additionally the project implements two Karaf shell commands, person:add and person:list, that allow you to easily test the project.


    Like in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings and lists the persistent classes. The datasource is referred to using the jndi name "osgi:service/person".
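    A persistence.xml following that description might look like this sketch (the class name is a placeholder; the jta-data-source value is the jndi name from above):

    ```xml
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
        <persistence-unit name="person" transaction-type="JTA">
            <!-- resolved via aries jndi from the OSGi service registry -->
            <jta-data-source>osgi:service/person</jta-data-source>
            <class>com.example.Person</class>
            <properties>
                <!-- hibernate dialect and schema settings would go here -->
            </properties>
        </persistence-unit>
    </persistence>
    ```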

    The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven bundle plugin in the pom. The Aries JPA container will scan for these attributes
    and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.
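    In the pom this is a single instruction for the maven bundle plugin (the Meta-Persistence header name comes from the OSGi JPA spec; the path assumes the persistence.xml sits in META-INF):

    ```xml
    <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <configuration>
            <instructions>
                <!-- tells the Aries JPA container where to find the persistence unit -->
                <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
            </instructions>
        </configuration>
    </plugin>
    ```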


    We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
    The following snippet is the interesting part:

    <bean id="personService" class="">
        <jpa:context property="em" unitname="person" />
        <tx:transaction method="*" value="Required"/>
    </bean>

    This makes a lookup for the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread safe EntityManager (using a ThreadLocal under the hood) into the
    PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.

    Build and Install

    Build:

    > mvn clean install

    A prerequisite is that the datasource is installed like described above. Then we have to install the bundles for hibernate, aries jpa, transaction, jndi and of course our db-examplejpa bundle.
    See ReadMe.txt for the exact commands to use.

    Test

    person:add 'Christian Schneider' @schneider_chris

    Then we list the persisted persons

    karaf@root> person:list
    Christian Schneider, @schneider_chris

    Summary

    In this tutorial we learned how to work with databases in Apache Karaf. We installed a driver for our database and created a DataSource. We were able to check and manipulate the DataSource using the jdbc:* commands. In the examplejdbc we learned how to acquire a datasource
    and work with it using plain JDBC. Last but not least we also used JPA to access our database.

    Back to Karaf Tutorials

    View Online
    Categories: Christian Schneider

    (Slightly) Faster WS-Security using MTOM in Apache CXF 3.1.2

    Colm O hEigeartaigh - Fri, 07/17/2015 - 17:31
    A recent issue was reported at Apache CXF to do with the inability to process certain WS-Security requests that were generated by Metro or .NET when MTOM was enabled. In this case, Metro and .NET avoid BASE-64 encoding bytes and inserting them directly into the message (e.g. for BinarySecurityTokens or the CipherValue data associated with EncryptedData or EncryptedKey Elements). Instead the raw bytes are stored in a message attachment, and referred to in the message via xop:Include. Support for processing these types of requests has been added for WSS4J 2.0.5 and 2.1.2.

    In addition, CXF 3.1.2 now has the ability to avoid the BASE-64 encoding step when creating requests when MTOM is enabled, something that we will look at in this post. The advantage of this is that it is marginally more efficient, due to avoiding BASE-64 encoding on the sending side and BASE-64 decoding on the receiving side.

    1) Storing message bytes in attachments in WSS4J

    A new WSS4J configuration property has been added in WSS4J 2.0.5/2.1.2 to support storing message bytes in attachments. This property is used when configuring WS-Security via the "action" based approach in CXF:
    • storeBytesInAttachment: Whether to store bytes (CipherData or BinarySecurityToken) in an attachment. The default is false, meaning that bytes are BASE-64 encoded and "inlined" in the message.
    WSS4J is stack-neutral, meaning that it has no concept of what a message attachment actually is. So for this to work, a CallbackHandler must be set on the RequestData Object that knows how to retrieve attachments, as well as write modified/new attachments out. If you are using Apache CXF then this is taken care of for you automatically.

    There is another configuration property that is of interest on the receiving side:
    • expandXOPIncludeForSignature: Whether to expand xop:Include Elements encountered when verifying a Signature. The default is true, meaning that the relevant attachment bytes are BASE-64 encoded and inserted into the Element. This ensures that the actual bytes are signed, and not just the reference.
    So for example, if an encrypted SOAP Body is signed, the default behaviour is to expand the xop:Include Element to make sure that we are verifying the signature on the SOAP Body. On the sending side, we must have a signature action *before* an encryption action, for this same reason. If we encrypt before signing, then WSS4J will turn off the "storeBytesInAttachment" property, to make sure that we are not signing a reference.
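    As a sketch, an "action"-based outbound configuration with the new property switched on might look like the following. The "action" and "storeBytesInAttachment" keys are the WSS4J ones discussed above; the action value "Signature Encrypt" assumes the usual WSS4J action names, and signing is listed before encryption for the reason just given. The map would be passed to the usual WSS4JOutInterceptor:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Sketch: outbound WS-Security properties for the "action"-based approach.
    public class WsSecurityActionConfig {
        public static Map<String, Object> outProps() {
            Map<String, Object> props = new HashMap<>();
            props.put("action", "Signature Encrypt");      // sign *before* encrypting
            props.put("storeBytesInAttachment", "true");   // new in WSS4J 2.0.5/2.1.2
            return props;
        }

        public static void main(String[] args) {
            System.out.println(outProps().get("storeBytesInAttachment")); // prints true
        }
    }
    ```
    
    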

    2) Storing message bytes in attachments with WS-SecurityPolicy

    A new security configuration property is also available in Apache CXF to control the ability to store message bytes in attachments with WS-Security when WS-SecurityPolicy is used:
    • Whether to store bytes (CipherData or BinarySecurityToken) in an attachment. The default is true if MTOM is enabled.
    This property is also available in CXF 3.0.6, but there it is "false" by default. Similar to the action case, CXF will turn off this property by default in either of the following policy cases:
    • If sp:EncryptBeforeSigning is present
    • If sp:ProtectTokens is present. In this case, the signing cert is itself signed, and again we want to avoid signing a reference rather than the certificate bytes.
    3) Tests

    To see this new functionality in action, take a look at the MTOMSecurityTest in CXF's ws-security systests module. It has three methods that test storing bytes in attachments with a symmetric binding, an asymmetric binding, and an "action based" approach to configuring WS-Security. Enable logging to see the requests and responses. The encrypted SOAP Body now contains a CipherValue that does not include the BASE-64 encoded bytes any more:

    The referenced attachment looks like:

    Finally, I wrote a blog post some time back about using Apache JMeter to load-test security-enabled CXF-based web services. I decided to modify the standard symmetric and asymmetric tests so that the CXF service was MTOM enabled, meaning the ability to store message bytes in the attachments was switched on with CXF 3.1.2. The results for both test-cases showed that throughput was around 1% higher when message bytes were stored in attachments. Bear in mind that the change just measures the service creation change; the client request was still non-MTOM aware, as it is just pasted into JMeter. So one would expect up to a 4% improvement for a fully MTOM-aware client + service invocation:

    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part IV

    Colm O hEigeartaigh - Thu, 07/16/2015 - 17:40
    This is the fourth in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The last two articles focused on how clients can authenticate to the IdP in Fediz 1.2.0 using Kerberos and TLS client authentication. In this post we will divert our attention from the IdP for the time being, and look at a new container-independent Relying Party (RP) plugin available in Fediz 1.2.0 based on Apache CXF.

    1) RP plugins in Fediz

    Apache Fediz ships with a number of RP plugins to secure your web application. These plugins are container-dependent, meaning that if your web app is deployed in say Apache Tomcat, you need to use the Tomcat plugin in Fediz. The following plugins were available prior to Fediz 1.2.0:
    The CXF plugin referred to here was not a full WS-Federation RP plugin as in the other modules. Instead, it consisted of a mechanism that allows the SSO (SAML) token retrieved as part of the WS-Federation process to be used by CXF client code, if the web application needed to obtain another token "on behalf of" the original token when making some subsequent web services call.

    2) CXF RP plugin in Fediz 1.2.0

    In Fediz 1.2.0, the CXF plugin mentioned above now contains a fully fledged WS-Federation RP implementation that can be used to secure a JAX-RS service, rather than using one of the container dependent plugins. Lets see how this works using a test-case:
    • cxf-fediz-federation-sso: This project shows how to use the new CXF plugin of Apache Fediz 1.2.0 to authenticate and authorize clients of a JAX-RS service using WS-Federation.
    The test-case consists of two modules. The first is a web application which contains a simple JAX-RS service, which has a single GET method to return a doubled number. The method is secured with a @RolesAllowed annotation, meaning that only a user in roles "User", "Admin", or "Manager" can access the service.

    This is enforced via CXF's SecureAnnotationsInterceptor. Finally WS-Federation is enabled for the service via the JAX-RS Provider called the FedizRedirectBindingFilter, available in the CXF plugin in Fediz. This takes a "configFile" parameter, which is a link to the standard Fediz plugin configuration file:
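    For illustration, registering the filter as a JAX-RS provider might look like the following sketch. The fully qualified class name and the bean-style "configFile" property are assumptions based on the description above, so check them against the Fediz 1.2.0 distribution:

    ```xml
    <jaxrs:providers>
        <!-- class name and property assumed from the description of the CXF plugin -->
        <bean class="org.apache.cxf.fediz.cxf.plugin.FedizRedirectBindingFilter">
            <property name="configFile" value="/fediz_config.xml"/>
        </bean>
    </jaxrs:providers>
    ```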

    It's as easy as this to secure your CXF JAX-RS service using WS-Federation! The remaining module in the test above deploys the IdP + STS from Fediz in Apache Tomcat. It then takes the "double-it" war above and also deploys it in Tomcat.

    Finally, it uses Htmlunit to make an invocation on the service, and checks that access is granted to the service. Alternatively, you can comment out the @Ignore annotation of the "testInBrowser" method, and copy the printed out URL into a browser to test the service directly (user credentials: "alice/ecila").
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part III

    Colm O hEigeartaigh - Wed, 07/15/2015 - 17:22
    This is the third in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The previous blog entry described how different client authentication mechanisms are supported in the IdP, and how to configure client authentication via an X.509 certificate, a new feature in Fediz 1.2.0. Another new authentication mechanism in Fediz 1.2.0 is the ability to authenticate to the IdP using Kerberos, which we will cover in this article.

    1) Kerberos client authentication in the IdP

    Recall that the Apache Fediz IdP in 1.2.0 supports different client authentication methods by default using different URL paths. In particular for Kerberos, the URL path is:
    • /federation/krb -> authentication using Kerberos
    The default value for the "wauth" parameter added by the service provider to the request to activate this URL path is:
    When the IdP receives a request at the URL path configured for Kerberos, it sends back a request for a Negotiate Authorization header if none is present. Otherwise it parses the header and BASE-64 decodes the Kerberos token and dispatches it to the configured authentication provider. Kerberos tokens are authenticated in the IdP via the STSKrbAuthenticationProvider, which is configured in the Spring security-config.xml

    2) Authenticating Kerberos tokens in the IdP

    The IdP supports two different ways of validating Kerberos tokens:
    • Passthrough Authentication. Here we do not authenticate the Kerberos token at all in the IdP, but pass it through to the STS for authentication. This is similar to what is done for the Username/Password authentication case. The default security binding of the STS for this scenario requires a KerberosToken Supporting Token. This is the default way of authenticating Kerberos tokens in the IdP.
    • Delegation. If delegation is enabled in the IdP, then the received token is validated locally in the IdP. The delegated credential is then used to get a new Kerberos Token to authenticate the STS call "on behalf of" the original user. 
    To enable the delegation scenario, simply update the STSKrbAuthenticationProvider bean in the security-config.xml,
    set the "requireDelegation" property to "true", and configure the kerberosTokenValidator property to validate the received Kerberos token:
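    A sketch of that configuration (the package of STSKrbAuthenticationProvider and the validator bean id are guesses; the two property names are the ones described above):

    ```xml
    <!-- security-config.xml: enable Kerberos delegation in the IdP (sketch) -->
    <bean id="stsKrbAuthProvider"
          class="org.apache.cxf.fediz.service.idp.STSKrbAuthenticationProvider">
        <property name="requireDelegation" value="true"/>
        <property name="kerberosTokenValidator" ref="kerberosTokenValidator"/>
    </bean>
    ```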

    Categories: Colm O hEigeartaigh

    Securing Apache CXF with Apache Camel

    Colm O hEigeartaigh - Fri, 07/10/2015 - 18:45
    In the previous post I wrote about how to integrate Apache CXF with Apache Camel. The basic test scenario involved using an Apache CXF proxy service to authenticate clients, and Apache Camel to route the authenticated requests to a backend service, which had different security requirements to the proxy. In this post, we will look at a slightly different scenario, where the duty of authenticating the clients shifts from the proxy service to Apache Camel itself. In addition, we will look at how to authorize the clients via different Apache Camel components.

    For a full description of the test scenario see the previous post. The Apache CXF based proxy service receives a WS-Security UsernameToken, which is used to authenticate the client. In the previous scenario, this was done at the proxy by supplying a CallbackHandler instance to verify the given username and password. However, this time we will just configure the proxy to pass the received credentials through to the route instead of authenticating them. This can be done by setting the JAX-WS property "ws-security.validate.token" to "false":
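    In the proxy's endpoint definition this is a single property entry. The endpoint id and implementor below are placeholders, while "ws-security.validate.token" is the CXF property named above:

    ```xml
    <jaxws:endpoint id="doubleItProxy" implementor="#doubleItImpl" address="/DoubleIt">
        <jaxws:properties>
            <!-- pass the received UsernameToken through to the Camel route unvalidated -->
            <entry key="ws-security.validate.token" value="false"/>
        </jaxws:properties>
    </jaxws:endpoint>
    ```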

    So now it is up to the Camel route to authenticate and authorize the user credentials. Here are two possibilities using Apache Shiro and Spring Security.

    1) Apache Shiro

    I've covered previously how to use Apache Shiro to authenticate and authorize web service invocations using Apache CXF. Apache Camel ships with a camel-shiro component which allows you to authenticate and authorize Camel routes. The test-case can be downloaded and run here:
    • camel-cxf-proxy-shiro-demo: Some authentication and authorization tests for an Apache CXF proxy service using the Apache Camel Shiro component.
    Username, passwords and roles are stored in a file and parsed in a ShiroSecurityPolicy object:

    The Camel route is as follows:

    Note that the shiroHeaderProcessor bean processes the result from the proxy before applying the Shiro policy. This processor retrieves the client credentials (which are stored as a JAAS Subject in a header on the exchange) and extracts the username and password, storing them in special headers that are used by the Shiro component in Camel to get the username and password for authentication. 

    The authorization use-case uses the same route, however the ShiroSecurityPolicy bean enforces that the user must have a role of "boss" to invoke on the backend service:

    2) Spring Security 

    I've also covered previously how to use Spring Security to authenticate and authorize web service invocations using Apache CXF. Apache Camel ships with a camel-spring-security component which allows you to authenticate and authorize Camel routes. The test-case can be downloaded and run here:
    Like the Shiro test-case, username, passwords and roles are stored in a file, which is used to create an authorizationPolicy bean:
    The Camel route is exactly the same as in the Shiro example above, except that a different processor implementation is used. The SpringSecurityHeaderProcessor bean used in the tests translates the user credentials into a Spring Security UsernamePasswordAuthenticationToken principal, which is added to the JAAS Subject stored under the Exchange.AUTHENTICATION header. This principal is then used by the Spring Security component to authenticate the request.

    To authorize the request, a different authorizationPolicy configuration is required:

    Categories: Colm O hEigeartaigh

    Integrating Apache CXF with Apache Camel

    Colm O hEigeartaigh - Mon, 07/06/2015 - 12:51
    Apache Camel provides support for integrating Apache CXF endpoints via the camel-cxf component. A common example of the benefits of using Apache Camel with webservices is when a proxy service is required to translate some client request into a format that is capable of being processed by some backend service. Apache Camel ships with an example where a backend service consumes SOAP over JMS, and a proxy service translates a SOAP over HTTP client request into SOAP over JMS. In this post, we will show an example of how to use this proxy pattern to secure a client invocation to a backend service via a proxy, when the backend service and proxy have different security requirements.

    The test scenario is as follows. The backend service is an Apache CXF-based JAX-WS "double-it" service that can only be called by trusted clients. However, we don't want to give the backend service the responsibility to authenticate clients. A CXF-based proxy service will be responsible for authenticating clients, and then routing the authenticated requests to the backend service via Apache Camel. The backend service is secured via TLS with client authentication, meaning that we have direct trust between the proxy service and the backend service. Clients must authenticate to the proxy service via a WS-Security UsernameToken over TLS.

    The test-case can be downloaded and run here:
     The CXF proxy is configured as follows:
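The configuration itself is missing from this excerpt; a sketch of such a proxy endpoint, with all ids, addresses and class names assumed, might look like:

```xml
<jaxws:endpoint id="doubleItProxy"
    implementor="org.example.proxy.DoubleItProxyImpl"
    address="https://localhost:9001/doubleit/services/doubleit-proxy">
    <jaxws:properties>
        <!-- Validates the UsernameToken password via a CallbackHandler -->
        <entry key="security.callback-handler" value="org.example.proxy.CommonCallbackHandler"/>
    </jaxws:properties>
</jaxws:endpoint>
```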

The CallbackHandler supplies the passwords used to authenticate clients. The Camel route is defined as:
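The route definition is missing from this excerpt; in Spring XML it would follow this shape (the endpoint ids and the filter-strategy bean name are assumptions):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <!-- Receive the authenticated request on the proxy endpoint... -->
        <from uri="cxf:bean:doubleItProxy?dataFormat=PAYLOAD&amp;headerFilterStrategy=#dropAllMessageHeadersStrategy"/>
        <!-- ...and forward it to the backend service over two-way TLS -->
        <to uri="cxf:bean:doubleItService?dataFormat=PAYLOAD&amp;headerFilterStrategy=#dropAllMessageHeadersStrategy"/>
    </route>
</camelContext>
```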

    The headerFilterStrategy reference is to a CxfHeaderFilterStrategy bean which instructs Camel to drop the message headers (we don't need the security header beyond the proxy, as the proxy is responsible for authenticating the client). Messages are routed to the "doubleItService", which is defined as follows:

    Categories: Colm O hEigeartaigh

    Karaf Tutorial Part 1 - Installation and First application

    Christian Schneider - Thu, 07/02/2015 - 18:06

    Blog post edited by Christian Schneider

Getting Started

With this post I am beginning a series of posts about Apache Karaf. So what is Karaf and why should you be interested in it? Karaf is an OSGi container based on Equinox or Felix. The main difference to these fine containers is that it brings excellent management features with it.

    Outstanding features of Karaf:

    • Extensible console with Bash-like completion features
    • SSH console
    • Deployment of bundles and features from Maven repositories
    • Easy creation of new instances from the command line

    All together these features make developing server-based OSGi applications almost as easy as regular Java applications. Deployment and management are on a level that is much better than in any application server I have seen so far. All this is combined with a small footprint, of Karaf itself as well as of the resulting applications. In my opinion this allows a lightweight development style like Java EE 6 together with the flexibility of Spring applications.

    Installation and first startup
    • Download Karaf 3.0.3 from the Karaf web site.
    • Extract and start with bin/karaf

    You should see the welcome screen:

            __ __                  ____
           / //_/____ __________ _/ __/
          / ,<  / __ `/ ___/ __ `/ /_
         / /| |/ /_/ / /  / /_/ / __/
        /_/ |_|\__,_/_/   \__,_/_/

      Apache Karaf (3.0.3)

    Hit '<tab>' for a list of available commands
    and '[cmd] --help' for help on a specific command.
    Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

    karaf@root()>

    Some handy commands:

    Command                     | Description
    la                          | Shows all installed bundles
    service:list                | Shows the active OSGi services. This list is quite long. Here it is quite handy that you can use unix pipes like "ls | grep admin"
    exports                     | Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
    feature:list                | Shows which features are installed and can be installed.
    features:install webconsole | Installs a feature (a list of bundles and other features). The above command installs the Karaf webconsole. It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.
    log:tail                    | Shows the log. Use Ctrl-c to go back to the console.
    Ctrl-d                      | Exits the console. If this is the main console Karaf will also be stopped.

    OSGi containers preserve state after restarts


    Please note that Karaf, like all OSGi containers, maintains the last state of installed and started bundles. So if something no longer works, a restart will not necessarily help. To really start fresh again, stop Karaf and delete the data directory.

    Check the logs


    Karaf is very silent. To avoid missing error messages, always keep a tail -f data/karaf.log open!

    Tasklist - A small OSGi application

    Without any useful application Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy, and Maven can help a lot. The difference from a normal Maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

    Get the source code

    Import into Eclipse

    • Start Eclipse 
    • In the Eclipse Package Explorer: Import -> Existing Maven Projects -> browse to the tasklist sub-directory of the extracted directory
    • Eclipse will show all maven projects it finds
    • Click through to import with defaults

    Eclipse will now import the projects and wire all dependencies using m2eclipse.

    The tasklist example consists of four projects:

    Module               | Description
    tasklist-model       | Service interface and Task class
    tasklist-persistence | Simple persistence implementation that offers a TaskService
    tasklist-ui          | Servlet that displays the tasklist using a TaskService
    tasklist-features    | Features descriptor for the application that makes installing in Karaf very easy

    Tasklist-persistence

    This project contains the domain model and the service implementation. The model is the Task class and a TaskService interface. The persistence implementation TaskServiceImpl manages tasks in a simple HashMap.
    The TaskService is published as an OSGi service using a blueprint context. Blueprint is an OSGi standard for dependency injection and is very similar to a spring context.

    <blueprint xmlns=""> <bean id="taskService" class="" /> <service ref="taskService" interface="" /> </blueprint>

    The bean tag creates a single instance of the TaskServiceImpl. The service tag publishes this instance as an OSGi service with the TaskService interface.
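As a rough sketch (class, package and method names are assumptions, not the tutorial's actual code), the service interface and its HashMap-based implementation could look like this:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Domain object from the model project (fields reduced to the essentials)
class Task {
    private final Integer id;
    private final String title;

    Task(Integer id, String title) {
        this.id = id;
        this.title = title;
    }

    Integer getId() { return id; }
    String getTitle() { return title; }
}

// Service interface from the model project
interface TaskService {
    Task getTask(Integer id);
    void addTask(Task task);
    void deleteTask(Integer id);
    Collection<Task> getTasks();
}

// Implementation from the persistence project: tasks live in a simple HashMap
class TaskServiceImpl implements TaskService {
    private final Map<Integer, Task> tasks = new HashMap<>();

    public Task getTask(Integer id) { return tasks.get(id); }
    public void addTask(Task task) { tasks.put(task.getId(), task); }
    public void deleteTask(Integer id) { tasks.remove(id); }
    public Collection<Task> getTasks() { return tasks.values(); }
}
```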

    The pom.xml uses packaging type bundle, and the maven-bundle-plugin creates the jar with an OSGi manifest. By default the plugin imports all packages that are imported in Java files or referenced in the blueprint context. It also exports all packages that do not contain the string "impl" or "internal". In our case we want the model package to be imported, but the persistence.impl package not to be exported. As the naming convention is followed, we need no additional configuration.
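In the pom.xml this amounts to little more than the packaging type and the plugin declaration (the version number here is an assumption):

```xml
<packaging>bundle</packaging>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.felix</groupId>
            <artifactId>maven-bundle-plugin</artifactId>
            <version>2.5.4</version>
            <extensions>true</extensions>
        </plugin>
    </plugins>
</build>
```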


    Tasklist-ui

    The ui project contains a small servlet TaskServlet to display the tasklist and individual tasks. To work with the tasks the servlet needs the TaskService.

    To inject the TaskService and to publish the servlet the following blueprint context is used:

    <blueprint xmlns=""> <reference id="taskService" availability="mandatory" interface="" /> <bean id="taskServlet" class=""> <property name="taskService" ref="taskService"></property> </bean> <service ref="taskServlet" interface="javax.servlet.http.HttpServlet"> <service-properties> <entry key="alias" value="/tasklist" /> </service-properties> </service> </blueprint>

    The reference tag makes blueprint search for, and if necessary wait for, a service that implements the TaskService interface, and creates a bean "taskService". The bean taskServlet instantiates the servlet class and injects the taskService. The service tag publishes the servlet as an OSGi service with the HttpServlet interface and sets an alias property. This way of publishing a servlet is not yet standardized but is supported by the Pax Web whiteboard extender. This extender registers each service with the interface HttpServlet with the OSGi HTTP service and uses the alias property to set the path where the servlet is available.

    See also:


    Tasklist-features

    The last project only installs a feature descriptor into the maven repository so we can install it easily in Karaf. The descriptor defines a feature named tasklist and the bundles to be installed from the maven repository.

    <!-- Bundle URLs: artifact ids from the module table above; the group id is assumed -->
    <feature name="example-tasklist-persistence" version="${pom.version}">
        <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
        <bundle>mvn:net.lr.tasklist/tasklist-persistence/${pom.version}</bundle>
    </feature>
    <feature name="example-tasklist-ui" version="${pom.version}">
        <feature>http</feature>
        <feature>http-whiteboard</feature>
        <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
        <bundle>mvn:net.lr.tasklist/tasklist-ui/${pom.version}</bundle>
    </feature>

    A feature can consist of other features that should also be installed and of bundles to be installed. The bundles typically use mvn urls. This means they are loaded from the configured maven repositories or your local maven repository in ~/.m2/repository.

    Installing the Application in Karaf

    feature:repo-add
    feature:install example-tasklist-persistence example-tasklist-ui

    Add the features descriptor to Karaf with feature:repo-add, passing the mvn URL of the descriptor, so that it is added to the available features; then install and start the tasklist features with feature:install. After these commands the tasklist application should be running.


    Check that all bundles of tasklist are active. If not, try to start them and check the log.

    http:list

    ID | Servlet         | Servlet-Name   | State    | Alias     | Url
    -------------------------------------------------------------------------------
    56 | TaskListServlet | ServletModel-2 | Deployed | /tasklist | [/tasklist/*]

    This should show the TaskListServlet. By default the example will start at http://localhost:8181/tasklist .

    You can change the port by creating a text file "etc/org.ops4j.pax.web.cfg" with the content "org.osgi.service.http.port=8080". This tells the HttpService to use port 8080. The tasklist application should then be available at http://localhost:8080/tasklist
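The resulting configuration file is just a single property line:

```properties
# etc/org.ops4j.pax.web.cfg
org.osgi.service.http.port=8080
```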


    In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

    In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

    Back to Karaf Tutorials

    Categories: Christian Schneider

