
Integrating AWS Key Management Service with Apache CXF

Colm O hEigeartaigh - Thu, 06/25/2015 - 13:20
Apache CXF supports a wide range of standards designed to help you secure a web service request, from WS-Security for SOAP requests, to XML Security and JWS/JWE for XML/JSON REST requests. All of these standards provide for using symmetric keys to encrypt requests, and then using a master key (typically a public key associated with an X.509 certificate) to encrypt the symmetric key, embedding this information somewhere in the request. The usual use-case is to generate random bytes for the symmetric key. But what if you wanted instead to manage the secret keys in some way? Or if your client did not have access to sufficient entropy to generate truly random bytes? In this article, we will look at how to use the AWS Key Management Service to perform this task for us, in the context of an encrypted SOAP request using WS-Security.

1) AWS Key Management Service

The AWS Key Management Service allows us to create master keys and data keys for users defined in the AWS Identity and Access Management service. Once we have created a user, and a corresponding master key for the user (which is stored only in AWS and cannot be exported), we can ask the Key Management Service to issue us a data key (using either AES 128 or 256) along with an encrypted form of that data key. The idea is that the data key is used to encrypt some data and is then disposed of. The encrypted data key is added to the request, and the recipient can ask the Key Management Service to decrypt it; the decrypted key can then be used to decrypt the encrypted data in the request.

The first step is to register for Amazon AWS here. Once we have registered, we need to create a user in the Identity and Access Management service. Create a new user "alice", and make a note of the access key and secret access key associated with "alice". Next we need to write some code to obtain keys for "alice" (documentation). First we must create a client:

AWSCredentials creds = new BasicAWSCredentials(<access key id>, <secret key>);
AWSKMSClient kms = new AWSKMSClient(creds);
kms.setEndpoint("https://kms.eu-west-1.amazonaws.com");

Next we must create a customer master key for "alice":

String desc = "Secret encryption key";
CreateKeyRequest req = new CreateKeyRequest().withDescription(desc);
CreateKeyResult result = kms.createKey(req);

The CreateKeyResult object returned as part of the key creation process will contain a key Id, which we will need later.
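A sketch of the data key flow described in the introduction, reusing the "kms" client and the CreateKeyResult from above (the key spec and variable names are illustrative):

String keyId = result.getKeyMetadata().getKeyId();

// Ask KMS for a plaintext data key plus its encrypted form
GenerateDataKeyRequest dataKeyReq =
    new GenerateDataKeyRequest().withKeyId(keyId).withKeySpec("AES_128");
GenerateDataKeyResult dataKey = kms.generateDataKey(dataKeyReq);
ByteBuffer plaintextKey = dataKey.getPlaintext();      // use to encrypt the request, then dispose of it
ByteBuffer encryptedKey = dataKey.getCiphertextBlob(); // embed in the request

// On the receiving side, ask KMS to decrypt the encrypted data key
DecryptRequest decryptReq = new DecryptRequest().withCiphertextBlob(encryptedKey);
ByteBuffer decryptedKey = kms.decrypt(decryptReq).getPlaintext();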

2) Using AWS Key Management Service keys with WS-Security

As mentioned above, the typical process for WS-Security when encrypting a request is to generate some random bytes to use as the symmetric encryption key, and then use a key wrap algorithm with another key (typically a public key) to encrypt the symmetric key. Instead, we will use the AWS Key Management Service to retrieve the symmetric key used to encrypt the request. We will store the encrypted form of the symmetric key in the WS-Security EncryptedKey structure, which will reference the Customer Master Key via a "KeyName" pointing to the Key Id.

I have created a project that can be used to demonstrate this integration:
  • cxf-amazon-kms: This project contains a number of tests that show how to use the AWS Key Management Service with Apache CXF.
The first task in running the test (assuming the steps in point 1 above were followed) is to edit the client configuration, entering the correct values in the CommonCallbackHandler for the access key id, secret key, endpoint, and master key id gathered above; ditto for the service configuration. The CommonCallbackHandler uses the AWS Key Management Service API to create the symmetric key on the sending side, and to decrypt it on the receiving side. Then, to run the test, simply remove the "org.junit.Ignore" annotation, and the encrypted web service request can be seen in the console:


Categories: Colm O hEigeartaigh

Using a JDBC JAAS LoginModule with Apache CXF

Colm O hEigeartaigh - Tue, 06/23/2015 - 13:20
Last year I wrote a blog entry giving an overview of the different ways that you can use JAAS with Apache CXF for authenticating and authorizing web service calls. I also covered some different login modules and linked to samples for authenticating a Username + Password to LDAP, as well as Kerberos Tokens to a KDC. This article covers how to use JAAS with Apache CXF to authenticate a Username + Password to a database via JDBC.

The test-case is available here:
  • cxf-jdbc: This project contains a number of tests that show how an Apache CXF service endpoint can authenticate and authorize a client using JDBC.
It contains two tests, one dealing with authentication (no roles required by the service) and the other with authorization (a specific role is required). Both tests involve a JAX-WS service invocation, where the service requires a WS-Security UsernameToken over TLS. In each case, the service configures Apache WSS4J's JAASUsernameTokenValidator using the context name "jetty". The JAAS configuration file contains an entry for the "jetty" context, which references the Jetty JDBCLoginModule:
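A representative "jetty" entry, reconstructed from the description below (the module class name and connection settings are assumptions):

jetty {
    org.eclipse.jetty.plus.jaas.spi.JDBCLoginModule required
    dbDriver="org.apache.derby.jdbc.EmbeddedDriver"
    dbUrl="jdbc:derby:memory:myDB"
    userTable="app.users"
    userField="name"
    credentialField="password"
    userRoleTable="app.roles"
    userRoleUserField="name"
    userRoleRoleField="role";
};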

The configuration of the JDBCLoginModule is easy to follow. The "dbUrl" refers to the JDBC connection URL (in this case an in-memory Apache Derby instance). The table containing user data is "app.users", where the fields used for usernames and passwords are "name" and "password" respectively. Similarly, the table containing role data is "app.roles", where the fields used for usernames + roles are "name" and "role" respectively.

The tests use Apache Derby as an in-memory database. It is created in code as follows:
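A minimal sketch of creating an in-memory Derby instance (the database name is made up):

// Load the embedded Derby driver and create an in-memory database
Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
Connection connection = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true");
Statement statement = connection.createStatement();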


Then the following SQL file is read in, and each statement is executed using the statement Object above:
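Given the tables and fields described above, the statements presumably look something like this (the sample user and role values are made up):

CREATE TABLE app.users (name VARCHAR(50) NOT NULL PRIMARY KEY, password VARCHAR(50));
CREATE TABLE app.roles (name VARCHAR(50) NOT NULL, role VARCHAR(50));
INSERT INTO app.users VALUES ('alice', 'security');
INSERT INTO app.roles VALUES ('alice', 'boss');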


Categories: Colm O hEigeartaigh

Apache CXF Fediz 1.2.0 tutorial - part II

Colm O hEigeartaigh - Thu, 06/11/2015 - 14:47
This is the second in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The previous blog entry gave instructions about how to deploy the Fediz IdP and a sample service application in Apache Tomcat. This article describes how different client authentication methods are supported in the IdP, and how they can be selected by the service via the "wauth" parameter. Then we will extend the previous tutorial by showing how to authenticate to the IdP using a client certificate in the browser, as opposed to entering a username + password.

1) Supporting different client authentication methods in the IdP

The Apache Fediz IdP in 1.2.0 supports different client authentication methods by default using different URL paths, as follows:
  • /federation -> the main entry point
  • /federation/up -> authentication using HTTP Basic Authentication (B/A)
  • /federation/krb -> authentication using Kerberos
  • /federation/clientcert -> authentication using a client cert
The way it works is as follows. The service provider (SP) should use the URL for the main entry point (although the SP has the option of choosing one of the more specific URLs as well). The IdP extracts the "wauth" parameter from the request ("default" is the default value), and looks for a matching key in the "authenticationURIs" section of the service configuration. For example:

<property name="authenticationURIs">
    <util:map>
        <entry key="default" value="federation/up" />
        <entry key="http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/SslAndKey" value="federation/krb" />
        <entry key="http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/default" value="federation/up" />
        <entry key="http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/Ssl" value="federation/clientcert" />
    </util:map>
</property>

If a matching key is found for the wauth value, then the browser gets redirected to the associated URL. Therefore, a service provider can specify a value for "wauth" in the plugin configuration, and select the client authentication mode as a result. The values defined for "wauth" above are taken from the specification, but can be changed if required. The service provider can specify the value for "wauth" by using the "authenticationType" configuration tag, as documented here.

2) Client authentication using a certificate

A new feature of Fediz 1.2.0 is the ability for a client to authenticate to the IdP using a certificate embedded in the browser. To see how this works in practice, please follow the steps given in the previous tutorial to set up the IdP and service web application in Apache Tomcat. To switch to use client certificate authentication, only one change is required in the service provider configuration:
  • Edit ${catalina.home}/conf/fediz_config.xml, and add the following under the "protocol" section: <authenticationType>http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/Ssl</authenticationType>
The next step is to add a client certificate to the browser that you are using. To avoid changing the IdP TLS configuration, we will just use the same certificate / private key that is used by the IdP on the client side for the purposes of this demo. First, we need to convert the IdP key from JKS to PKCS12. So take the idp-ssl-key.jks configured in the previous tutorial and run:
  • keytool -importkeystore -srckeystore idp-ssl-key.jks -destkeystore idp-ssl-key.p12 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass tompass -deststorepass tompass -srcalias mytomidpkey -destalias mytomidpkey -srckeypass tompass -destkeypass tompass -noprompt
I will use Chrome for the client browser. Under Settings, Advanced Settings, "HTTPS/SSL", click on the Manage Certificates button, and add the idp-ssl-key.p12 keystore above using the password "tompass":
Next, we need to tell the STS to trust the key used by the client (you can skip these steps if using Fediz 1.2.1):
  • First, export the certificate as follows: keytool -keystore idp-ssl-key.jks -storepass tompass -export -alias mytomidpkey -file MyTCIDP.cer
  • Take the ststrust.jks + import the cert: keytool -import -trustcacerts -keystore ststrust.jks -storepass storepass -alias idpcert -file MyTCIDP.cer -noprompt
  • Finally, copy the modified ststrust.jks into the STS: ${catalina.home}/webapps/fediz-idp-sts/WEB-INF/classes
The last configuration step is to tell the STS where to retrieve claims for the cert. We will just copy the claims for Alice:
  • Edit ${catalina.home}/webapps/fediz-idp-sts/WEB-INF/userClaims.xml
  • Add the following under "userClaimsREALMA": <entry key="CN=localhost" value-ref="REALMA_aliceClaims" />
Now restart Tomcat and navigate to the service URL:
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select the certificate that we have uploaded, and you should be able to authenticate to the IdP and be redirected back to the service, without having to enter any username/password credentials!
Categories: Colm O hEigeartaigh

Apache CXF Fediz 1.2.0 tutorial - part I

Colm O hEigeartaigh - Wed, 06/10/2015 - 17:16
The previous blog entry gave an overview of the new features in Apache CXF Fediz 1.2.0. This post first focuses on setting up and running the IdP (Identity Provider) and the sample simpleWebapp in Apache Tomcat.

1) Deploying the 1.2.0 Fediz IdP in Apache Tomcat

Download Fediz 1.2.0 and extract it to a new directory (${fediz.home}). We will use an Apache Tomcat 7 container to host the IdP. To deploy the IdP to Tomcat:
  • Create a new directory: ${catalina.home}/lib/fediz
  • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
  • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb-1.8.0.10.jar) to ${catalina.home}/lib
Now we need to set up TLS:
  • The keys that ship with Fediz 1.2.0 are 1024 bit DSA keys, which will not work with most modern browsers (this will be fixed for 1.2.1). 
  • So we need to generate a new key: keytool -genkeypair -validity 730 -alias mytomidpkey -keystore idp-ssl-key.jks -dname "cn=localhost" -keypass tompass -storepass tompass -keysize 2048 -keyalg RSA
  • Export the cert: keytool -keystore idp-ssl-key.jks -storepass tompass -export -alias mytomidpkey -file MyTCIDP.cer
  • Create a new truststore with the cert: keytool -import -trustcacerts -keystore idp-ssl-trust.jks -storepass ispass -alias mytomidpkey -file MyTCIDP.cer -noprompt
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks to ${catalina.home}.
  • Copy both jks files as well to ${catalina.home}/webapps/fediz-idp/WEB-INF/classes/ (after Tomcat is started)
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat, and check that the IdP is live by opening the STS WSDL in a web browser: 'https://localhost:8443/fediz-idp-sts/REALMA/STSServiceTransport?wsdl'

For a more thorough test, enter the following in a web browser - you should be directed to the URL for the service application (404, as we have not yet configured it):

https://localhost:8443/fediz-idp/federation?wa=wsignin1.0&wreply=http%3A%2F%2Flocalhost%3A8080%2Fwsfedhelloworld%2Fsecureservlet%2Ffed&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld

2) Deploying the simpleWebapp in Apache Tomcat

To deploy the service to Tomcat:
  • Copy ${fediz.home}/examples/samplekeys/rp-ssl-server.jks and ${fediz.home}/examples/samplekeys/ststrust.jks to ${catalina.home}.
  • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${catalina.home}/conf/
  • Edit ${catalina.home}/conf/fediz_config.xml and replace '9443' with '8443'.
  • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
  • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${catalina.home}/webapps.
3) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/  (this is not secured) 
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
With the latter URL, the browser is redirected to the IDP (select realm "A") and is prompted for a username and password. Enter "alice/ecila" or "bob/bob" or "ted/det" to test the various roles that are associated with these username/password pairs.
Categories: Colm O hEigeartaigh

Enterprise ready request logging with CXF 3.1.0 and elastic search

Christian Schneider - Mon, 06/08/2015 - 17:29

You may already know the CXF LoggingFeature. You used it like this:

Old CXF LoggingFeature

<jaxws:endpoint ...>
    <jaxws:features>
        <bean class="org.apache.cxf.feature.LoggingFeature"/>
    </jaxws:features>
</jaxws:endpoint>

It allowed you to add logging to a CXF endpoint at compile time.

While this already helped a lot, it was not really enterprise ready: the logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

Logging feature in CXF 3.1.0

In CXF 3.1 this code was moved into a separate module and gained some new features.

  • Auto logging for existing CXF endpoints
  • Uses slf4j MDC to log meta data separately
  • Adds meta data for Rest calls
  • Adds MD5 message id and exchange id for correlation
  • Simple interface for writing your own appenders
  • Karaf decanter support to log into elastic search
Auto logging for existing CXF endpoints in Apache Karaf

Simply install and enable the new logging feature:

Logging feature in karaf

feature:repo-add cxf 3.1.0
feature:install cxf-features-logging
config:property-set -p org.apache.cxf.features.logging enabled true

Then install CXF endpoints as always. For example, install the PersonService from Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still, the new logging feature will enhance the clients and endpoints and log all SOAP and REST calls using slf4j. The logging data will then be processed by pax logging and by default end up in your karaf log.

A log entry looks like this:

Sample Log entry

2015-06-08 16:35:54,068 | INFO | qtp1189348109-73 | REQ_IN | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:addPerson xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"><arg0><id>3</id><name>Test2</name><url></url></arg0></ns2:addPerson></soap:Body></soap:Envelope>

This does not look very informative: you only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information, though - you just need to configure the pax logging config to show it.

Slf4j MDC values for meta data

This is the raw logging information you get for a SOAP call:

Field               Value
@timestamp          2015-06-08T14:43:27,097Z
MDC.address         http://localhost:8181/cxf/personService
MDC.bundle.id       90
MDC.bundle.name     org.apache.cxf.cxf-rt-features-logging
MDC.bundle.version  3.1.0
MDC.content-type    text/xml; charset=UTF-8
MDC.encoding        UTF-8
MDC.exchangeId      56b037e3-d254-4fe5-8723-f442835fa128
MDC.headers         {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
MDC.httpMethod      POST
MDC.messageId       a46eebd2-60af-4975-ba42-8b8205ac884c
MDC.portName        PersonServiceImplPort
MDC.portTypeName    PersonService
MDC.serviceName     PersonServiceImplService
MDC.type            REQ_IN
level               INFO
loc.class           org.apache.cxf.ext.logging.slf4j.Slf4jEventSender
loc.file            Slf4jEventSender.java
loc.line            55
loc.method          send
loggerClass         org.ops4j.pax.logging.slf4j.Slf4jLogger
loggerName          org.apache.cxf.services.PersonService.REQ_IN
message             <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:getAll xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"/></soap:Body></soap:Envelope>
threadName          qtp80604361-78
timeStamp           1433774607097

Some things to note:

  • The logger name is <service namespace>.<ServiceName>.<type>; by default karaf cuts it down to just the type.
  • A lot of the details are in the MDC values

You need to change your pax logging config to make these visible.

You can use the logger name to fine-tune which services you want to log this way. For example, set the level to WARN for noisy services so they are not logged, or log some services to another file.

Message id and exchange id

The messageId allows you to uniquely identify messages, even if you collect them from several servers. It is also transported over the wire, so you can correlate a request sent on one machine with the request received on another machine.

The exchangeId will be the same for an incoming request and the response sent out, or, on the other side, for an outgoing request and its response. This allows you to correlate requests and responses and so follow the conversations.

Simple interface to write your own appenders

Write your own LogEventSender and set it on the LoggingFeature to do custom logging. You have access to all the meta data from the LogEvent class.

So for example you could write your logs to one file per message or to JMS.
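A minimal sketch of such an appender, assuming the LogEventSender interface and the setSender method from the cxf-rt-features-logging module:

import org.apache.cxf.ext.logging.event.LogEvent;
import org.apache.cxf.ext.logging.event.LogEventSender;

public class ConsoleEventSender implements LogEventSender {
    @Override
    public void send(LogEvent event) {
        // One line per message: event type (REQ_IN, RESP_OUT, ...), correlation ids, payload
        System.out.println(event.getType() + " " + event.getExchangeId()
            + " " + event.getMessageId() + ": " + event.getPayload());
    }
}

It would then be registered on the feature via loggingFeature.setSender(new ConsoleEventSender()).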

Karaf decanter support to write into elastic search

Many people use elastic search for their logging. Fortunately, you do not have to write a special LogEventSender for this purpose - the standard CXF logging feature will already work.

It works like this:

  • CXF sends the messages as slf4j events which are processed by pax logging
  • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
  • Karaf Decanter ElasticSearchAppender sends the log events to a configurable elastic search instance

As Decanter also provides features for a local elastic search and kibana instance, you are ready to go in just minutes.

Installing Decanter for CXF Logging

feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/3.0.0-SNAPSHOT/xml/features
feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana


After that, open a browser at http://localhost:8181/kibana. When Decanter is released, Kibana will be fully set up; at the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

Then you should see your cxf messages like this:

Kibana makes it easy to filter for specific services and correlate requests and responses.

This is just a preview of decanter. I will do a more detailed post when the first release is out.

 

Categories: Christian Schneider

Apache CXF Fediz 1.2.0 tutorial - overview

Colm O hEigeartaigh - Thu, 05/28/2015 - 17:59
Apache CXF Fediz 1.2.0 has been released. Fediz is a subproject of the Apache CXF web services stack. It is an implementation of the WS-Federation Passive Requestor Profile for SSO that supports Claims Based Access Control. In layman's terms, Fediz allows you to implement Single Sign On (SSO) for your web application, by redirecting the client browser to an Identity Provider (IdP), where the client is authenticated and redirected back to the application. Fediz consists of a number of container-specific plugins (Tomcat, Jetty, Spring Security, Websphere, etc.) as well as an IdP which bundles the CXF Security Token Service (STS) to issue SAML tokens.

This is an overview of a planned series of articles on the new features that are available in Fediz 1.2.0, which is a new major release of the project. Subsequent articles will go into more detail on the new features, which are as follows:
  • Dependency update to use CXF 3.0.x (3.0.4).
  • A new container-independent CXF-based plugin is available.
  • Logout Support has been added to the plugins and IdP
  • A new REST API is available for configuring the IdP
  • Support for authenticating to the IdP using Kerberos has been added
  • Support for authenticating to the IdP using a client certificate has been added
  • It is now possible to use the IdP as an identity broker with a SAML SSO IdP
  • Metadata support has been added for the plugins and IdP
Categories: Colm O hEigeartaigh

SAML SSO RP Metadata support in Apache CXF

Colm O hEigeartaigh - Thu, 05/28/2015 - 17:28
Apache CXF provides comprehensive support for SSO using the SAML Web SSO profile for CXF-based JAX-RS services. In Apache CXF 3.1.0 (and 3.0.5), a new Metadata service is available to allow for the publishing of SAML SSO Metadata for a given service.

The MetadataService class is available on a "metadata" path and provides a single @GET method that returns the service metadata in XML format. It has the following properties which should be configured:
  • String serviceAddress - The URL of the service
  • String assertionConsumerServiceAddress - The URL of the RACS. If it is co-located with the service, then it can be the same URL as for the serviceAddress.
  • String logoutServiceAddress - The URL of the logout service (if available).
  • boolean addEndpointAddressToContext - Whether to add the full endpoint address to the values configured above. The default is false.
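A minimal spring sketch of these properties (the package name and addresses are assumptions; see the system test linked below for the real sample configuration):

<bean id="metadataService" class="org.apache.cxf.rs.security.saml.sso.MetadataService">
    <property name="serviceAddress" value="https://localhost:8443/sso/service"/>
    <property name="assertionConsumerServiceAddress" value="https://localhost:8443/sso/racs"/>
    <property name="logoutServiceAddress" value="https://localhost:8443/sso/logout"/>
</bean>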
In addition, the MetadataService extends the AbstractSSOSpHandler, which contains various properties that are required to sign the metadata (keystore alias, crypto properties file which references the keystore, etc.). A sample spring-based configuration for the MetadataService is available in the CXF system tests here. Here is the sample output when accessed via a web browser:


Categories: Colm O hEigeartaigh

Apache CXF 3.1.0 released

Colm O hEigeartaigh - Tue, 05/26/2015 - 16:04
Apache CXF 3.1.0 has been released and is available for download. The migration guide for CXF 3.1.x is available here. The main (non-security) features of CXF 3.1.0 are as follows:
  • Java 6 is no longer supported.
  • Jetty 9 is now supported. Support for Jetty 7 has been dropped.
  • A new Metrics feature for collecting metrics about CXF services is available. 
  • A new Throttling feature is available for easily throttling CXF services.
  • A new Logging feature is available that is more powerful than the existing logging functionality.
The security-specific changes and features are as follows:
  • CXF 3.1.0 picks up a new major release of WSS4J (2.1.0) and OpenSAML (3.1.0). Please see a recent post on WSS4J 2.1.0 for some migration notes if you are using WS-Security or SAML with CXF.
  • The STS now signs issued SAML tokens by default using RSA-SHA256 (previously RSA-SHA1).
  • Some security configuration tags have been renamed from "ws-security.*" to "security.*", as they are now shared with the XML Security JAX-RS module. The old tags will, however, continue to work as before without any change. See the Security Configuration page for more information.
  • The SAML/XACML functionality previously available in the cxf-rt-security module is now in a new cxf-rt-security-saml module.
  • A new Metadata service for SAML SSO is available. More on this in a future blog post.
  • It is now possible to "plug in" custom security policy validators for WS-Security in CXF, if you want to change the default validation logic. See here for a test that shows how to do this.
Categories: Colm O hEigeartaigh

Apache WSS4J 2.0.4 released

Colm O hEigeartaigh - Fri, 05/15/2015 - 16:24
In addition to the new major release of Apache WSS4J (2.1.0), there is a new bug fix release available - Apache WSS4J 2.0.4. Here are the most important bugs that were fixed in this release:
  • We now support the InclusiveC14N policy.
  • We can enforce that a Timestamp has an "Expires" Element via configuration, if desired.
  • There is a slight modification to how we cache signed Timestamps, to allow for the scenario of two Signatures in a security header that sign the same Timestamp, but with different keys.
  • The policy layer now allows a SupportingToken policy to have more than one token.
  • A bug has been fixed in the MerlinDevice crypto provider, which is designed to work with smartcards.
  • A bug has been fixed in terms of using the correct JCE provider for encryption/decryption.
Categories: Colm O hEigeartaigh

Apache WSS4J 2.1.0 released

Colm O hEigeartaigh - Fri, 05/15/2015 - 16:23
A new major release of Apache WSS4J, 2.1.0, has been released. The previous major release of almost a year ago, Apache WSS4J 2.0.0, had a lot of substantial changes (see the migration guide), such as a new set of maven modules, a new streaming implementation, changes to configuration tags, package changes for CallbackHandlers, etc. In contrast, WSS4J 2.1.0 has a much smaller set of changes, and users should be able to upgrade with very few changes (if at all). This post briefly covers the new features and migration issues for WSS4J 2.1.0.

Here are the new features of WSS4J 2.1.0:
  • JDK7 minimum requirement: WSS4J 2.1.0 requires at least JDK7. The project source has also been updated to make use of some of the new features of JDK7, such as try-with-resources, etc.
  • OpenSAML 3.x support: WSS4J 2.1.0 upgrades from OpenSAML 2.x to 3.x (currently 3.1.0). This is an important upgrade, as OpenSAML 2.x is not currently supported.
  • Extensive code refactoring. A lot of work was done to make the retrieval of security results easier and faster.

Here are the migration issues in WSS4J 2.1.0:
  • Due to the OpenSAML 3.x upgrade, you will need to make a small change in your SAML CallbackHandlers if you are specifying the "Version". In WSS4J 2.0.x, you could specify the SAML Version by passing an "org.opensaml.common.SAMLVersion" instance to the SAMLCallback.setSamlVersion(...) method. The "org.opensaml.common" package is removed in OpenSAML 3.x. Instead, a new Version bean is provided in WSS4J 2.1.0, which can be passed to the setSamlVersion method on SAMLCallback as before (see the sketch after this list, and here for a full example).
  • The Xerces and xml-apis dependencies in the DOM code of Apache WSS4J 2.1.0 have been removed (previously they were at "provided" scope).
  • If you have a custom Processor instance to process a token in the security header in some custom way, you must add the WSSecurityEngineResult that is generated by the processing, to the WSDocInfo Object via the "addResult" method. Otherwise, it will not be available when security results are retrieved and processed.
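For instance, the version change might look like this in a CallbackHandler (a sketch; the Version bean is assumed to live in org.apache.wss4j.common.saml.bean):

SAMLCallback callback = new SAMLCallback();
// WSS4J 2.0.x: callback.setSamlVersion(org.opensaml.common.SAMLVersion.VERSION_20);
// WSS4J 2.1.0: the OpenSAML class is gone, use the new Version bean instead
callback.setSamlVersion(org.apache.wss4j.common.saml.bean.Version.SAML_20);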


Categories: Colm O hEigeartaigh

The Rise Of Apache Tika

Sergey Beryozkin - Thu, 05/14/2015 - 22:51
Apache Tika is an interesting project. It is not a very big one, but IMHO it is poised to become the project that every team serious about complex, unstructured, binary content processing will talk about and use.

The power of Apache Tika lies in the simplicity it offers for processing different types of binary and other complex data. Consider a simple situation: your project needs to support analyzing PDF files. One approach is to write a PDF-library-specific routine. This approach stops scaling as soon as you need to support Excel and ODT files too, and stops working once you have a task to support a possibly unlimited number of types of data.

Apache Tika helps with generalizing the processing of arbitrary types of data and thus offers a unique opportunity for a given project to offer a real value add-on.
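To make that concrete, here is a minimal sketch using the Tika facade (the file name is made up):

import java.io.File;
import org.apache.tika.Tika;

public class TikaDemo {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        // The same calls work for PDF, Excel, ODT, ... - no format-specific code
        System.out.println(tika.detect(new File("report.pdf")));        // e.g. application/pdf
        System.out.println(tika.parseToString(new File("report.pdf"))); // extracted text content
    }
}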

I really liked this presentation at the recent Apache Con NA. It was absolutely packed with interesting content, and Chris talked a lot about applying Tika to solving real-life problems. Andriy Redko did a brilliant talk about the CXF and Tika integration. There were more Tika presentations, and I regret I could not make it to all of them.

The future is bright for Tika. And for the projects that will use it :-)
Categories: Sergey Beryozkin

OpenId Connect Certification Strategy

Sergey Beryozkin - Fri, 05/01/2015 - 12:30
I've just read about the OpenId Connect Certification open strategy. IMHO it is brilliant, and it will no doubt guarantee a wider adoption of OIDC. Mike Jones's explanation of why it will work is a good read.

The closed (paid-only) certification model limits the adoption of a given technology by the implementors.
Categories: Sergey Beryozkin

[OT] Apache CXF is Electric !

Sergey Beryozkin - Thu, 04/30/2015 - 23:30
I remember this day as if it were yesterday. April or March of 1998. I'm in England, Stockport city centre, listening to Oasis's latest single. It was absolutely great, the energizing effect of it.

As it happens I haven't listened to Oasis for the next 17 years apart from hearing them occasionally on the local FM. But a month or so back, I finally got their disk.

"She is Electric" is one of the best songs, classical Oasis. Nearly every time I listen to it I think, well, one can definitely say "Apache CXF is Electric". Why ? Because Apache CXF is cool, active and alive !  Work with it and you will become Electric too :-)   
Categories: Sergey Beryozkin

Apache Santuario - XML Security for Java 2.0.4 released

Colm O hEigeartaigh - Wed, 04/22/2015 - 15:09
Apache Santuario - XML Security for Java 2.0.4 has been released. The issues fixed are available here. Perhaps the most significant issue fixed is an interop issue which emerged when XML Security is used with OpenSAML (see the Apache CXF JIRA where this was raised).
Categories: Colm O hEigeartaigh

Vulnerability testing of Apache CXF based web services

Colm O hEigeartaigh - Mon, 04/13/2015 - 16:21
A number of automated tools can be used to conduct vulnerability or penetration testing of web services. In this article, we will take a look at using WS-Attacker to attack Apache CXF based web service endpoints. WS-Attacker is a useful tool based on SOAP-UI and developed by the Chair of Network and Data Security, Ruhr University Bochum (http://nds.rub.de/) and 3curity GmbH (http://3curity.de/). As an indication of how useful this tool is, it has uncovered a SOAP Action Spoofing vulnerability in older versions of CXF (see here). In this testing scenario, WS-Attacker 1.4-SNAPSHOT was used to test web services using Apache CXF 3.0.4. Apache CXF 3.0.4 is immune to all of the attacks described in this article (as can be seen from the "green" status in the screenshots).

1) SOAPAction Spoofing attacks

A SOAPAction spoofing attack is where the attacker will attempt to fool the service by "spoofing" the SOAPAction header to execute another operation. To test this I created a CXF based SOAP 1.1 endpoint which uses the action "soapAction="http://doubleit/DoubleIt"". I then loaded the WSDL into WS-Attacker, and selected the SOAPAction spoofing plugin, selecting a manual SOAP Action. Here is the result:


2) WS-Addressing Spoofing attacks

A WS-Addressing Spoofing attack involves sending an address in the WS-Addressing ReplyTo/To/FaultTo header that is not understood or known by the service, but to which the service redirects the message anyway. To guard against this attack in Apache CXF, it is necessary to ensure that the WS-Addressing headers are signed. As a sanity test, I added an endpoint where WS-Addressing is not configured, and the tests fail as expected:



3) XML Signature wrapping attacks

XML Signature allows you to sign various parts of a SOAP request, where the parts in question are referenced via an "Id". An attacker can leverage various "wrapping" based attacks to try to fool the resolution of a signed Element via its "Id". So for example, an attacker could modify the signed SOAP Body of a valid request (thus causing signature validation to fail), and put the original signed SOAP Body somewhere else in the request. If the signature validation code only picks up the Element that has been moved, then signature validation will pass even though the SOAP Body has been modified.

For this test I created a number of endpoints that are secured using WS-SecurityPolicy. I captured a successful signed request from a unit test and loaded it into WS-Attacker, and ran the Signature wrapping attack plugin. Here is the result:


4) Denial of Service attacks

A Denial of Service attack is where an attacker attempts to crash or dramatically slow down a service by flooding it with data, or crafting a message in such a way as to cause parsing to crash or hang. An example might be to create an XML structure with a huge amount of Attributes or Elements, that would cause an out of memory error. WS-Attacker comes with a range of different attacks in this area. Here are the results of all of the DoS attacks against a plain CXF SOAP service (apart from the SOAP Array attack, as the service doesn't accept a SOAP array):


Categories: Colm O hEigeartaigh

Talking about CXF at Apache Con NA 2015

Sergey Beryozkin - Fri, 03/13/2015 - 18:30
Apache Con NA 2015 will be held in Austin, Texas on April 13-16, and as is usually the case, there will be several presentations there about Apache CXF. There will be interesting presentations from Hadrian and JB too, and many other great presentations as usual.

As far as CXF presentations are concerned:

Aki Yoshida will talk about combining Swagger (Web) Sockets, Apache Olingo and the CXF Web Sockets Transport - now, this is seriously cool :-) The good news is that the presentations will be available online for those who will not be able to see them live.

Andriy Redko will talk about something which equally rocks: combining a CXF Search Extension (FIQL or OData/Olingo based), Apache Tika and Lucene to show effective search support for uploaded PDF and Open Office documents.

Attending both presentations can get anyone over-excited, that is for sure :-).
It is going to be tough choosing which presentation to go to, with my other colleagues presenting on the same day.


Finally, I will do an introduction to the Apache CXF JOSE implementation, which I briefly introduced in the previous blog. I'll describe everything the CXF JOSE project has in place, and finish with a demo.

The demo deserves special attention: I haven't written this demo, Anders Rundgren did. The original demo is here. It appears to be a regular JavaScript-based demo, but it is bigger than that: it shows what WebCrypto can do, supporting generic browser-based signature applications and interoperating with target servers in a variety of formats, JOSE being one of them. So the demo will show a WebCrypto client interoperating with an Apache CXF JOSE server.


Anders has been incredibly helpful and supportive, and helped me to get his demo running in no time. Anders is working on a JSON Clear Signature (JCS) initiative that offers XML Signature-like support for signing JSON documents. JCS is easier to understand than the JOSE formats, where Base64URL content representations are used. I'd like to encourage interested users to experiment with JCS and help Anders. Hopefully something similar to JCS will be supported as part of a wider JOSE effort in the future.

As usual, I'm happy I've got a talk selected and have my employer's support to travel to Apache Con. It is always great to talk to my colleagues who work with CXF and other Apache technologies, and it is important to show others that CXF is very much alive and 'looks forward'. I regret I won't see some of my team colleagues there who haven't had a chance to submit for various important reasons, but overall I'm looking forward to the conference with great anticipation. Especially because I promised someone to beat him in chess after the presentations are over :-).

See you there !






Categories: Sergey Beryozkin

Apache CXF is getting JOSE ready

Sergey Beryozkin - Fri, 03/13/2015 - 17:42
I've already talked about JOSE on this blog. In my opinion, it is one of the key technologies, alongside OAuth2, that will deeply affect the way developers write secure HTTP REST services in the years to come.

A one-sentence summary: one can use JOSE to sign and/or encrypt data content in any format - JSON, text, binary, anything. JOSE is a key component of an advanced OAuth2 application, but it is also a good fit for securing regular HTTP web service communications.

As such, it should not be a surprise that CXF now ships its own JOSE implementation, offering support for all of the JOSE signature and encryption algorithms and representation formats, and joins the list of other frameworks/projects directly supporting JOSE.

I've done some initial documentation here. There's so much to document that I will probably need another week to complete it all - lots of interesting stuff for developers to experiment with that needs to be documented. I think it is unique in its own way, while probably repeating some of the boilerplate code that any JOSE implementation needs to do.

Apart from being keen to directly work on such an implementation, IMHO it is also good to have it supported in CXF, given how important this technology will become for web services developers in the future. It is always healthy to have multiple implementations, as the JAX-RS space has demonstrated. And if CXF users prefer to use other JOSE implementations, then that will be fine.

One such 3rd-party implementation is Jose4J. I'd like to thank Brian Campbell for creating it - it helped me to keep my sanity when I first started trying to write a test validating an RSA-OAEP output, which is random. I also looked at its source recently when I was puzzled as to why my tests involving EC keys produced wrong-size signatures even though the validation was passing - the comment in Jose4J made a rather cryptic piece of JOSE spec text obvious: JOSE EC signatures are formatted in a form more compact than DER. I still wrote my own code though :-), which one might say is questionable, but there you go. Thanks Brian. I think we can plug Jose4J into the CXF JOSE filters easily enough should users demand it.
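For illustration, a minimal JWS signing sketch with Jose4J (the payload and key are made up; this is Jose4J's API, not the CXF JOSE one):

import java.nio.charset.StandardCharsets;
import org.jose4j.jws.AlgorithmIdentifiers;
import org.jose4j.jws.JsonWebSignature;
import org.jose4j.keys.HmacKey;

public class JwsDemo {
    public static void main(String[] args) throws Exception {
        JsonWebSignature jws = new JsonWebSignature();
        jws.setPayload("{\"message\":\"Hello JOSE\"}");
        jws.setAlgorithmHeaderValue(AlgorithmIdentifiers.HMAC_SHA256);
        // a 256-bit symmetric key; in practice use a securely generated random key
        jws.setKey(new HmacKey("01234567890123456789012345678901".getBytes(StandardCharsets.UTF_8)));
        // prints the three-part Base64URL compact serialization: header.payload.signature
        System.out.println(jws.getCompactSerialization());
    }
}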



The CXF JOSE project is not completely finalized, but I think it is getting pretty close to the final API. I'd like to encourage early adopters to give it a go and provide feedback. In the meantime I'll be working on completing the documentation and tweaking the code to enforce some of the security considerations documented in the JOSE specifications, etc.

Enjoy !




Categories: Sergey Beryozkin

Camel CXFRS Improvements

Sergey Beryozkin - Wed, 03/11/2015 - 18:51
Camel CXFRS is one of the oldest Camel components. It was created by Willem Jiang, my former colleague from the IONA Technologies days, and has been maintained by Willem since its early days.

Camel is known to be a very democratic project with respect to supporting all sorts of components, and it has many components that can deal with HTTP invocations. CXFRS is indeed just one of them, but as you can guess from its name, it is dedicated to supporting HTTP endpoints and clients written on top of the Apache CXF JAX-RS implementation.

I think that over the years CXFRS has got a bit of a mixed reception from the community, maybe because it was not deemed ideal for some styles of routing that other, lighter Camel HTTP-aware components were good at.

However, CXFRS has been used by some developers, and it has been significantly improved recently with respect to its usability. I'd like, though, to touch on the latest few updates, which may be of interest.

The main CXFRS feature which initially appears quite confusing is that a CXFRS endpoint (Camel Consumer) does not actually invoke the provided JAX-RS implementation. This seems rather strange, but it is exactly what helps to integrate CXF JAX-RS into Camel. The JAX-RS runtime is only used to prepare all the data according to the JAX-RS service method signatures; it does not invoke the actual service, but makes all the required data available to custom Camel processors, which extract these data from the Camel exchanges and make the next routing decisions.

The side-effect of this is that in some cases one cannot just take an existing JAX-RS service implementation and plug it into a Camel route - unless one uses a CXFRS Bean component that can route from Jetty endpoints to a CXF JAX-RS service implementation. This approach works, but requires another Camel (Jetty-only) component with an absolute HTTP address, and has a few limitations of its own.

So the first improvement is that, starting from Camel 2.15.0, one can configure a CXFRS consumer with a 'performInvocation=true' option: it will actually invoke the service implementation, set the JAX-RS response on the Camel exchange, and route to the next custom processor as usual - except that in this case the custom processor will have all the input parameters as before, but also a response ready, which it can customize or process further. This also makes it much simpler to convert existing CXF Spring/Blueprint JAX-RS declarations with service implementations into Camel CXFRS endpoints if needed.
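A minimal route sketch of this option (the service class, port and path are made up):

import org.apache.camel.builder.RouteBuilder;

public class CxfRsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // performInvocation=true: CustomerServiceImpl is actually invoked and its
        // JAX-RS response is set on the exchange before the processor is called
        from("cxfrs://http://localhost:9000/rest"
                + "?resourceClasses=org.example.CustomerServiceImpl"
                + "&performInvocation=true")
            .process(exchange ->
                // the ready response can be customized here
                System.out.println("Response: " + exchange.getIn().getBody()));
    }
}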

Note that in the default case one typically provides a no-op CXFRS service implementation (recall, CXFRS does not invoke the service by default, it only needs the method signatures/JAX-RS metadata). Providing only interfaces is more logical given that the invocation is not done by default; in fact this was already possible for the URI-only CXFRS consumer style, which is rather limited in what it can do. So the other minor improvement is that, starting from Camel 2.15.0, one can just prepare a JAX-RS interface and use it with the CXFRS Consumer - unless the new 'performInvocation' option is set, in which case a complete implementation is needed.

The next one is the new "propagateContexts" configuration option. It allows CXFRS developers to write their custom processors against the JAX-RS Context API: they can extract one of the JAX-RS contexts, such as UriInfo, SecurityContext or HttpHeaders, as a typed Camel exchange property and work with these contexts to figure out what needs to be done next. This should be a useful option indeed, as the JAX-RS Context API is very useful.
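A hedged processor sketch; it assumes each propagated context is stored as a typed exchange property that can be looked up by its class name:

import javax.ws.rs.core.UriInfo;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class UriInfoProcessor implements Processor {
    @Override
    public void process(Exchange exchange) {
        // only populated when the endpoint is configured with propagateContexts=true
        UriInfo uriInfo = exchange.getProperty(UriInfo.class.getName(), UriInfo.class);
        if (uriInfo != null) {
            System.out.println("Request path: " + uriInfo.getPath());
        }
    }
}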

Finally, a CXF No Annotations Feature is now supported too: CXFRS users can link to a CXF model document and use it to JAX-RS-enable a given Java interface without JAX-RS annotations. In fact, starting from Camel 2.15.0, it is sufficient to have a model-only CXFRS Consumer without a specific JAX-RS service interface or implementation - in this case custom processors get the same request data as usual, with the model serving as the source binding the request URI to a set of request parameters.

We hope to build upon this latest feature going forward, with other descriptions supported, to make a model-only CXFRS consumer more capable.

Enjoy !







Categories: Sergey Beryozkin

Apache Karaf Tutorial Part 9 - Annotation based blueprint and JPA

Christian Schneider - Fri, 03/06/2015 - 09:19

Writing blueprint xml is quite verbose, and large blueprint xmls are difficult to keep in sync with code changes and especially refactorings. So many people prefer to do most declarations using annotations. Ideally these annotations should be standardized, so it is clearly defined what they do.

blueprint-maven-plugin

The aries blueprint-maven-plugin allows you to configure blueprint using annotations. It scans one or more paths for annotated classes and creates a blueprint.xml in target/generated-resources. See the aries documentation of the blueprint-maven-plugin.

Example tasklist-blueprint-cdi

This example shows how to create a small application with a model, persistence layer and UI completely without handwritten blueprint xml.

You can find the full source code on github Karaf-Tutorial/tasklist-cdi-blueprint

Structure
  • features
  • model
  • persistence
  • ui
Features

Defines the karaf features to install the example as well as all necessary dependencies.

Model

The model project defines Task as a jpa entity and the Service TaskService as an interface. As model does not do any dependency injection the blueprint-maven-plugin is not involved here.

Task JPA Entity

@Entity
public class Task {
    @Id
    Integer id;
    String title;
    String description;
    Date dueDate;
    boolean finished;
    // Getters and setters omitted
}

TaskService (CRUD operations for Tasks)

public interface TaskService {
    Task getTask(Integer id);
    void addTask(Task task);
    void updateTask(Task task);
    void deleteTask(Integer id);
    Collection<Task> getTasks();
}

persistence.xml

<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="tasklist" transaction-type="JTA">
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <jta-data-source>osgi:service/tasklist</jta-data-source>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
        </properties>
    </persistence-unit>
</persistence>

Persistence.xml defines the persistence unit name as "tasklist" and specifies JTA transactions. The jta-data-source points to the JNDI name of the DataSource service named "tasklist". So, apart from the JTA DataSource name, it is a normal hibernate 4.3 style persistence definition with automatic schema creation.

One other important thing is the configuration for the maven-bundle-plugin.

Configurations for maven bundle plugin

<Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
<Import-Package>*, org.hibernate.proxy, javassist.util.proxy</Import-Package>

The Meta-Persistence points to the persistence.xml and is the trigger for aries jpa to create an EntityManagerFactory for this bundle.
The Import-Package configurations import two packages that are needed by the runtime enhancement done by hibernate. As this enhancement is not known at compile time we need to give the maven bundle plugin these hints.

Persistence

The tasklist-cdi-persistence bundle is the first module in the example to use the blueprint-maven-plugin. In the pom we set the scanpath to "net.lr.tasklist.persistence.impl". So all classes in this package and sub packages are scanned.

In the pom we need a special configuration for the maven bundle plugin:
<Import-Package>!javax.transaction, *, javax.transaction;version="[1.1,2)"</Import-Package>
In the dependencies we use the transaction API 1.2, as it is the first spec version to include the @Transactional annotation. At runtime, though, we do not need this annotation, and karaf only provides transaction API version 1.1. So we tweak the import to accept the version karaf offers. As soon as transaction API 1.2 is available for karaf, this line will not be necessary anymore.

TaskServiceImpl

@OsgiServiceProvider(classes = {TaskService.class})
@Singleton
@Transactional
public class TaskServiceImpl implements TaskService {
    @PersistenceContext(unitName="tasklist")
    EntityManager em;

    @Override
    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }

    @Override
    public void addTask(Task task) {
        em.persist(task);
        em.flush();
    }

    // Other methods omitted
}

TaskServiceImpl uses quite a lot of annotations. The class is marked as a blueprint bean using @Singleton. It is also marked to be exported as an OSGi service with the interface TaskService.

The class is marked as @Transactional. So all methods are executed in a jta transaction of type Required. This means that if there is no transaction it will be created. If there is a transaction the method will take part in it. At the end of the transaction boundary the transaction is either committed or in case of an exception it is rolled back.

A managed EntityManager for the persistence unit "tasklist" is injected into the field em. It transparently provides one EntityManager per thread which is created on demand and closed at the end of the transaction boundary.

InitHelper

@Singleton
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);

    @Inject
    TaskService taskService;

    @PostConstruct
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }
}

The class InitHelper is not strictly necessary. It simply creates and persists a first task so the UI has something to show. Again the @Singleton is necessary to mark the class for creation as a blueprint bean.
@Inject TaskService taskService injects the first bean of type TaskService it finds in the blueprint context into the field taskService. In our case this is the implementation above.
@PostConstruct makes sure that addDemoTasks() is called after injection of all fields of this bean.

Another interesting thing in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing, to create the EntityManagerFactory without a jndi DataSource, which would be difficult to supply. It also uses RESOURCE_LOCAL transactions, so we do not need to set up a transaction manager. The test injects a plain EntityManager into the TaskServiceImpl class, so we have to begin and commit the transaction manually. This shows that you can test the JPA code with plain java, which results in very simple and fast tests.

UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over http.
In the pom this module needs the blueprint-maven-plugin with a suitable scanPath.

TasklistServlet

@OsgiServiceProvider(classes={Servlet.class})
@Properties({@Property(name="alias", value="/tasklist")})
@Singleton
public class TaskListServlet extends HttpServlet {
    @Inject
    @OsgiService
    TaskService taskService;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Actual code omitted
    }
}

The TaskListServlet is exported with the interface javax.servlet.Servlet and the service property alias="/tasklist". So it is available at the url http://localhost:8181/tasklist.

@Inject @OsgiService TaskService taskService creates a blueprint reference element to import an OSGi service with the interface TaskService. It then injects this service into the taskService field of the above class.
If there are several services of this interface the filter property can be used to select one of them.

Build

mvn clean install

Installation and test

See Readme.txt on github.

 

Categories: Christian Schneider

