Latest Activity

Using SSH/SCP/SFTP with Apache Camel

Colm O hEigeartaigh - Thu, 07/02/2015 - 15:46
Apache Camel contains a number of components to make it easy to work with SSH/SCP/SFTP. I've created a new camel-ssh test-case on GitHub to illustrate how to use these various components, continuing on from previous posts describing the security capabilities of Apache Camel:
  • SSHTest: This test-case shows how to use the Apache Camel SSH component. The test fires up an Apache MINA SSHD server, which has been configured to allow authenticated users to execute arbitrary commands (OK, not very safe...). Some files that contain unix commands are read in via a Camel route, executed using the SSH component, and the results are stored in target/ssh_results (a route sketch follows this list).
  • SCPTest: This test-case shows how to use the Apache Camel JSCH component (which supports SCP using JSCH). An Apache MINA SSHD server is configured that allows SCP. Some XML files are read in via a Camel route, and copied using SCP to a target directory on the server (which maps to target/storage for the purposes of this test).
  • SFTPTest: This test-case shows how to use the Apache Camel FTP component. An Apache MINA SSHD server is configured that allows SFTP. Some XML files are read in via a Camel route, and copied using SFTP to a target directory on the server (target/storage_sftp).
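For illustration, here is a minimal sketch of what such an SSH route might look like using the Java DSL; the directory paths, port and credentials are assumptions, not values taken from the test-case itself:

import org.apache.camel.builder.RouteBuilder;

public class SshCommandRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Read files containing unix commands, execute each command on the
        // SSHD server, and write the command output to target/ssh_results.
        from("file:src/test/resources/ssh_commands")
            .to("ssh://username:password@localhost:10222")
            .to("file:target/ssh_results");
    }
}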
Categories: Colm O hEigeartaigh

To the SNCF ticket inspectors: you do a fine job

Olivier Lamy - Wed, 07/01/2015 - 23:46
For our holidays in France, we chose to spend a week in Brittany (good timing, since Paris is in the middle of a heatwave).
Actually, nothing was improvised: the tickets were booked and PAID FOR almost two months in advance on the internet (this point is important for the rest of this post).
So the train leaves Paris Montparnasse at 10:04 this Tuesday, 30 June 2015. From the start of our holidays, the plan had been to return our rental car and then take our train to beautiful, cool Brittany.
A good idea, isn't it? We thought so, but that was without counting on a few small details we had forgotten...
Until then we had been staying in the southern Essonne suburbs. We expected to take about an hour and a half to reach the station. Big mistake!! It took us two hours.
So we finished the trip in a panic, constantly checking our watches, on edge, facing the great incivility (I would even say selfishness) of French drivers... a whole set of feelings we had forgotten existed.
So we arrive at 10:00, and the train leaves in 4 minutes!!
Returning the car to the rental agency took place in the greatest possible chaos.
Then the race began. A little context: we are a family with 4 children, including a little 3-year-old (who doesn't walk all the time, so we have a stroller), two other girls aged 7 and 12, and a 14-year-old boy. So yes, we were loaded down, and at that point we had only 3 minutes left to get from the Avis car park to the platform.
The children understood the situation and took charge of suitcases (for a little 7-year-old a suitcase can be very heavy, but she helped us a lot).
There, with my stroller and my two suitcases to pull, I understood the difficulty disabled people face in public places!!! But somehow we made it, climbing into the rear carriage just as the departure signal sounded.
A big thank you to the passengers who helped us get our suitcases, stroller and bags on board. On top of it all, it was very hot on this heatwave day!!
So we were on the train. Well, almost, because the train splits at Rennes, and we would have to make our way up through the ten carriages to get as close as possible to the locomotive, in order to change trains in less than 4 minutes. More great moments of sweat awaited us!!
In all that rush there had been no time to collect the tickets ALREADY paid for more than 2 months earlier (yes, I know, I keep insisting on this point). So one of my first concerns was to find a ticket inspector and explain to him that, stuck in traffic jams, we had not had time to collect our tickets, but that I had the booking reference, etc...
His cold reply, in a very ironic, even mocking tone: "don't worry, sir, we will take good care of your case". I remind you, readers, that we had just been running through the Paris heatwave loaded with bags, suitcases and a stroller, and this gentleman permits himself some very ironic humour....
Very naively, I thought for a very brief moment that this inspector was actually quite friendly and was going to sort out our problem.
So we began our trek up our ten carriages. I assure you that making your way up ten carriages with a stroller and the luggage of a family of six is really not easy, especially when the train is full and a good number of people don't even bother to move, even slightly, the bags they leave on the floor half-blocking the way (yes, it's a bit hard to put your bag up on the rack so as not to bother others...).
Halfway through our trek, we ran into the inspectors (the same ones we had already alerted when boarding the train), who asked us: "tickets, please".
So I tried to discuss it, explaining our situation again and giving my booking reference for verification (but apparently, in 2015, inspectors have no means of checking the tickets associated with my booking reference). Apparently I had not quite grasped the nuance between an e-ticket and tickets to be collected.
Fine, but my ticket was already paid for, and we had simply had bad luck because of the traffic jams.
The inspector explained to me that I could in fact be cheating: getting my ticket refunded while still taking the train!!!
I admit that, for a father of 4 children, it is always a bit hard to be treated as a thief in front of his children.
So I showed him my order, marked "non-exchangeable, non-refundable". So there, frankly, I do not see how I could have pulled off any such thing.
But no, these gentlemen were intransigent and gave us 5 fines of 122 euros each.
There, I admit I do not understand. Our tickets were paid for and booked more than 2 months in advance.
I calmed down and tried to make my children understand that no, we are not thieves; it was simply bad luck.
We finally managed to reach the front of the train. And yes, we still had to change trains in less than 4 minutes, transferring a stroller and the luggage of a family of six, not forgetting that France is in a heatwave...
In the end we made it....
To this day I still do not understand how those inspectors could give us such fines. The excuse that it was impossible to check the status of our booking seems a bit much. We are, after all, in 2015, in a civilized country often equipped with cutting-edge technology.
So yes, gentlemen inspectors, I find you do a very fine job indeed, fining a family with 4 children (who had already paid for their tickets!!). The target is obviously an easy one; there are so many other places in France, but those targets are perhaps more complicated and require a bit more courage....
Categories: Olivier Lamy

An STS JAAS LoginModule for Apache CXF

Colm O hEigeartaigh - Tue, 06/30/2015 - 13:27
Last year I blogged about how to use JAAS with Apache CXF, and the different LoginModules that were available. Recently, I wrote another article about using a JDBC LoginModule with CXF. This article will cover a relatively new JAAS LoginModule added to CXF for the 3.0.3 release. It allows a service to dispatch a Username and Password to an STS (Security Token Service) instance for authentication via the WS-Trust protocol, and also to retrieve the user's roles by extracting them from a SAML token returned by the STS.

1) The STS JAAS LoginModule

The new STS JAAS LoginModule is available in the CXF WS-Security runtime module. It takes a Username and Password from the CallbackHandler passed to the LoginModule, and uses them to create a WS-Security UsernameToken structure. What happens next depends on a configuration setting in the LoginModule.

If the "require.roles" property is set, then the UsernameToken is added to a WS-Trust "Issue" request to the STS, and a "TokenType" attribute is sent in the request (defaults to the standard "SAML2" URI, but can be configured). The client also adds a WS-Trust "Claim" to the request that tells the STS to add the role of the authenticated end user to the request. How the token is added to the WS-Trust request depends on whether the "disable.on.behalf.of" property is set or not. By default, the token is added as an "OnBehalfOf" token in the WS-Trust request. However, if "disable.on.behalf.of" is set to "true", then the credentials are used according to the WS-SecurityPolicy of the STS endpoint. For example, if the policy requires a UsernameToken, then the credentials are added to the security header of the WS-Trust request. If the "require.roles" property is not set, the the UsernameToken is added to a WS-Trust "Validate" request.

The STS validates the received UsernameToken credentials supplied by the end user, and then either creates a token (if the Issue binding was used), or just returns a simple response telling the client whether the validation was successful or not. In the former use-case, the token that is returned is cached, meaning that the end user does not have to re-authenticate until the token expires from the cache.

The LoginModule has the following configuration properties:
  • require.roles - If this is defined, then the WS-Trust Issue binding is used, passing the value specified for the "token.type" property as the TokenType, and the "key.type" property for the KeyType. It also adds a Claim to the request for the default "role" URI.
  • disable.on.behalf.of - Whether to disable passing Username + Password credentials via "OnBehalfOf".
  • disable.caching - Whether to disable caching of validated credentials. Default is "false". Only applies when "require.roles" is defined.
  • wsdl.location - The location of the WSDL of the STS.
  • service.name - The service QName of the STS.
  • endpoint.name - The endpoint QName of the STS.
  • key.size - The key size to use (if requesting a SymmetricKey KeyType). Defaults to 256.
  • key.type - The KeyType to use. Defaults to the standard "Bearer" URI.
  • token.type - The TokenType to use. Defaults to the standard "SAML2" URI.
  • ws.trust.namespace - The WS-Trust namespace to use. Defaults to the standard WS-Trust 1.3 namespace.
In addition, any of the standard CXF security configuration tags that start with "ws-security." can be used as documented here. Sometimes it is necessary to set some security configuration depending on the security policy of the WSDL.

Here is an example of the new JAAS LoginModule configuration:
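The original snippet is not reproduced here, so the following is a minimal illustrative sketch of such a jaas.conf entry; the context name, module class and STS endpoint values are assumptions based on the properties described above:

sts {
    org.apache.cxf.ws.security.trust.STSLoginModule required
    require.roles="true"
    wsdl.location="https://localhost:8443/sts/STSService?wsdl"
    service.name="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService"
    endpoint.name="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}Transport_Port";
};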



2) A testcase for the new LoginModule

Using an STS via WS-Trust for authentication and authorization can be quite difficult to set up and understand, but the new LoginModule makes it easy. I created a test-case and uploaded it to GitHub:
  • cxf-jaxrs-jaas-sts: This project demonstrates how to use the new STS JAAS LoginModule in CXF to authenticate and authorize a user. It contains a "double-it" module with a JAX-RS "double-it" service. The service is secured with JAAS at the container level, and requires a role of "boss" for access. The "sts" module contains an Apache CXF STS web application which can authenticate users and issue SAML tokens with embedded roles.
To run the test, download Apache Tomcat and do "mvn clean install" in the testcase above. Then copy both wars and the jaas configuration file to the Apache Tomcat install (${catalina.home}):
  • cp double-it/target/cxf-double-it.war ${catalina.home}/webapps
  • cp sts/target/cxf-sts.war ${catalina.home}/webapps
  • cp double-it/src/main/resources/jaas.conf ${catalina.home}/conf
Next set the following system property:
  • export JAVA_OPTS=-Djava.security.auth.login.config=${catalina.home}/conf/jaas.conf
Finally, start Tomcat, open a web browser and navigate to:

http://localhost:8080/cxf-double-it/doubleit/services/100

Use credentials "alice/security" when prompted. The STS JAAS LoginModule takes the username and password, and dispatches them to the STS for validation.

    Categories: Colm O hEigeartaigh

    Apache Karaf Tutorial part 10 - Declarative services

    Christian Schneider - Tue, 06/30/2015 - 11:09


    This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

You can find the full source code on GitHub: Karaf-Tutorial/tasklist-ds

    Declarative Services

    Declarative Services (DS) is the biggest contender to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

At its core DS works with XML files to define SCR components and their dependencies. They typically live in the OSGI-INF directory and are announced in the Manifest using the header "Service-Component" with the path to the component descriptor file. Luckily it is not necessary to work with this XML directly, as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin:

    <_dsannotations>*</_dsannotations>
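In a pom.xml this could look roughly as follows (a sketch; the plugin version and any other instructions are assumptions):

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- Process DS annotations into OSGI-INF component descriptors -->
            <_dsannotations>*</_dsannotations>
        </instructions>
    </configuration>
</plugin>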

    For more details see http://www.aqute.biz/Bnd/Components

    DS vs Blueprint

    Let us look into DS by comparing it to the already better known blueprint. There are some important differences:

    1. Blueprint always works on a complete blueprint context. So the context will be started when all mandatory service dependencies are present. It then publishes all offered services. As a consequence, a blueprint context cannot depend on services it offers itself. DS works on components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately, e.g. start and stop it. It is also possible that a bundle offers two components but only one is started, as the dependencies of the other are not yet there.
    2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
      Suppose you have a DS component and a blueprint bean, each offering a service A and depending on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up, blueprint will fail after a timeout and will not be able to recover from this. Once the blueprint context is up, it stays up even if the mandatory service goes away. This is called service damping and has the goal of avoiding restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
      In DS, on the other hand, a component's lifecycle is directly bound to its service dependencies. So a component will only be activated when all mandatory services are present, and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied, and calls to it should always work.
    3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other, this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
    4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces. Examples are: Aries jpa, Aries transactions, Aries authz, CXF, Camel. So using these technologies in DS can be a bit more difficult.
    5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean. This is used, for example, for security as well as transaction handling. For this reason DS traditionally did not support JPA very well, as normal usage mandates interceptors. See below for how JPA can work with DS.

    So if DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with other projects.

    JEE and JPA

The JPA spec is based on JEE, which has a very special thread and interceptor model. In JEE you use session beans with a container-managed EntityManager to manipulate JPA entities. It looks like this:

@Stateless
public class TaskServiceImpl implements TaskService {
    @PersistenceContext(unitName="tasklist")
    private EntityManager em;

    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }
}

    In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction will be committed, if there is an exception it will be rolled back.
The calls go to a pool of TaskServiceImpl instances. Each of these instances will only be used by one thread at a time, which matters because the EntityManager interface is not thread-safe!

So the advantage of this model is that it looks simple and allows pretty small code. On the other hand, it is a bit difficult to test such code outside a container, as you have to mimic the way the container works with this class. It is also difficult to access e.g. em, as it is private and there is no setter.

Blueprint supports a coding style similar to the JEE example and implements this using special jpa and tx namespaces and interceptors that handle the transaction / EntityManager management.

    DS and JPA

In DS each component is a singleton, so there is only one instance that needs to cope with multi-threaded access. Working with the plain JEE concepts for JPA is therefore not possible in DS.

    Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution here is the concept of a JpaTemplate together with support for closures in Java 8. To see what the code looks like, peek below at the Persistence section.

Instead of the EntityManager, we inject a thread-safe JpaTemplate into our code. We need to put the JPA code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JpaTemplate then guarantees a JEE-like environment inside the closure: each closure invocation runs with its own EntityManager, one per thread. The code will also participate in or create a transaction, and transaction commit/rollback works like in JEE.

    So this requires a little more code but the advantage is that there is no need for a special framework integration.
    The code can also be tested much easier. See TaskServiceImplTest in the example.

    Structure
    • features
    • model
    • persistence
    • ui
    Features

    Defines the karaf features to install the example as well as all necessary dependencies.

    Model

This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of the model see the tasklist-blueprint example. The model is exactly the same here.

Persistence

TaskServiceImpl:
@Component
public class TaskServiceImpl implements TaskService {
    private JpaTemplate jpa;

    public Task getTask(Integer id) {
        return jpa.txExpr(em -> em.find(Task.class, id));
    }

    @Reference(target = "(osgi.unit.name=tasklist)")
    public void setJpa(JpaTemplate jpa) {
        this.jpa = jpa;
    }
}

The @Reference annotation declares that we need an OSGi service with interface JpaTemplate and a property "osgi.unit.name" with the value "tasklist".

InitHelper:
@Component
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);
    TaskService taskService;

    @Activate
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of what business code that works with the TaskService can look like.
@Reference injects the TaskService into the component.
@Activate makes sure that addDemoTasks() is called once the references of this component are injected.

Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate to avoid having to install a JTA transaction manager for the test. The test code shows that the TaskServiceImpl can indeed be used as plain Java code without any special tricks.

    UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over HTTP.

TaskListServlet:
@Component(immediate = true,
           service = { Servlet.class },
           property = { "alias:String=/tasklist" })
public class TaskListServlet extends HttpServlet {
    private TaskService taskService;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Actual code omitted
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

    The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

The TaskListServlet is exported with the interface javax.servlet.Servlet and the service property alias="/tasklist". So it is available at the URL http://localhost:8181/tasklist.

    Build

    Make sure you use JDK 8 and run:

mvn clean install

Installation

    Make sure you use JDK 8.
    Download and extract Karaf 4.0.0.
    Start karaf and execute the commands below

# Create the config for the DataSource tasklist
cat https://raw.githubusercontent.com/cschneider/Karaf-Tutorial/master/tasklist-blueprint-cdi/org.ops4j.datasource-tasklist.cfg | tac -f etc/org.ops4j.datasource-tasklist.cfg

# Install the features
feature:repo-add mvn:net.lr.tasklist.ds/tasklist-features/1.0.0-SNAPSHOT/xml
feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

Validate Installation

    First we check that the JpaTemplate service is present for our persistence unit.

service:list JpaTemplate
[org.apache.aries.jpa.template.JpaTemplate]
-------------------------------------------
 osgi.unit.name = tasklist
 transaction.type = JTA
 service.id = 164
 service.bundleid = 57
 service.scope = singleton
Provided by :
 tasklist-model (57)
Used by:
 tasklist-persistence (58)

    Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

A likely problem would be that the DataSource is missing, so let's also check it:

service:list DataSource
[javax.sql.DataSource]
----------------------
 dataSourceName = tasklist
 felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = tasklist
 service.factoryPid = org.ops4j.datasource
 service.pid = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
 url = jdbc:h2:mem:test
 service.id = 156
 service.bundleid = 113
 service.scope = singleton
Provided by :
 OPS4J Pax JDBC Config (113)
Used by:
 Apache Aries JPA container (62)

This is how it should look. Pax-jdbc-config created the DataSource from the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the property "osgi.jdbc.driver.name=H2-pool-xa". So the resulting DataSource is pooled and fully ready for XA transactions.
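For reference, the configuration file itself might look roughly like this (reconstructed from the service properties shown above, so treat it as an illustrative sketch):

# etc/org.ops4j.datasource-tasklist.cfg
osgi.jdbc.driver.name=H2-pool-xa
dataSourceName=tasklist
url=jdbc:h2:mem:test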

    Next we check that the DS components started:

scr:list
 ID | State  | Component Name
--------------------------------------------------------------
  1 | ACTIVE | net.lr.tasklist.persistence.impl.InitHelper
  2 | ACTIVE | net.lr.tasklist.persistence.impl.TaskServiceImpl
  3 | ACTIVE | net.lr.tasklist.ui.TaskListServlet

    If any of the components is not active you can inspect it in detail like this:

scr:details net.lr.tasklist.persistence.impl.TaskServiceImpl
Component Details
  Name  : net.lr.tasklist.persistence.impl.TaskServiceImpl
  State : ACTIVE
  Properties :
    component.name=net.lr.tasklist.persistence.impl.TaskServiceImpl
    component.id=2
    Jpa.target=(osgi.unit.name=tasklist)
References
  Reference : Jpa
    State             : satisfied
    Multiple          : single
    Optional          : mandatory
    Policy            : static
    Service Reference : Bound Service ID 164

Test

Open the URL below in your browser:
http://localhost:8181/tasklist

You should see a list with one task. You can add another task by opening:
http://localhost:8181/tasklist?add&taskId=2&title=Another Task

     

    Categories: Christian Schneider

    Apache Karaf Tutorial Part 8 - Distributed OSGi

    Christian Schneider - Tue, 06/30/2015 - 09:59


By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows you to define services in one container and use them in another (even across machine boundaries).

For this tutorial we use the DOSGi subproject of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 in the OSGi 4.2 Enterprise Specification).

Example on GitHub

    Introducing the example

In keeping with the hands-on nature of these tutorials, we start with an example that can be tried out in a few minutes, and explain the details later.

Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, and the model and UI on container B, and we install the DOSGi runtime on both containers.

As DOSGi should not be active for all services on a system, the spec defines that the service property "osgi.remote.interfaces" determines whether DOSGi should process the service. It expects the names of the interfaces that the service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets this property, so it is exported with default settings (see the sketch below).
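For illustration, publishing a service with this property via the raw OSGi API might look like the following sketch; the activator and registration code are assumptions, only the property name comes from the tutorial:

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        // Tell DOSGi to export all interfaces this service implements
        props.put("osgi.remote.interfaces", "*");
        context.registerService(TaskService.class.getName(), new TaskServiceImpl(), props);
    }

    public void stop(BundleContext context) {
        // Services are unregistered automatically when the bundle stops
    }
}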

    Installing the service

    To keep things simple we will install container A and B on the same system.

    • Download Apache Karaf 3.0.3
    • Unpack karaf into folder container_a
    • Start bin/karaf
    • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
    • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
    • feature:repo-add cxf-dosgi 1.6.0
    • feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
    • feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
    • feature:install example-tasklist-persistence

    After these commands the tasklist persistence service should be running and be published on zookeeper.

You can check the WSDL of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client zkCli.sh from a zookeeper distribution, you can optionally check that there is a node for the service below the osgi path.

    Installing the UI
    • Unpack karaf into folder container_b
    • Start bin/karaf
    • config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
    • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
    • feature:repo-add cxf-dosgi 1.6.0
    • feature:install cxf-dosgi-discovery-distributed
    • feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
    • feature:install example-tasklist-ui

    The tasklist client ui should be in status Active/Created and the servlet should be available on http://localhost:8182/tasklist. If the ui bundle stays in status graceperiod then DOSGi did not provide a local proxy for the persistence service.

    How does it work

The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services, you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found, the service is exported either with the named interfaces or with all interfaces it implements. The way the export works can be fine-tuned using the CXF DOSGi configuration options.

    By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.

The service information is then also propagated using DOSGi discovery. In the example we use the Zookeeper discovery implementation, so the service metadata is written to a zookeeper server.

Container_b will monitor the local container for needed services. It will then check if a needed service is available in the discovery implementation (on the zookeeper server in our case). For each service it finds, it will create a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

    So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

    Categories: Christian Schneider

    A new Crypto implementation in Apache WSS4J

    Colm O hEigeartaigh - Mon, 06/29/2015 - 16:51
    Apache WSS4J uses the Crypto interface to get keys and certificates for asymmetric encryption/decryption and signature creation/verification. In addition, it also takes care of verifying trust in an X.509 certificate used to sign some portion of the message. WSS4J currently ships with three Crypto implementations:
    • Merlin: The standard implementation, based around two JDK keystores for key/cert retrieval, and trust verification.
    • CertificateStore: Holds an array of X509 Certificates. Can only be used for encryption and signature verification.
    • MerlinDevice: Based on Merlin, allows loading of keystores using a null InputStream - for example on a smart-card device.
    The next release(s) of WSS4J, 2.0.5 and 2.1.2, will contain a fourth implementation:
    • MerlinAKI: A new Merlin-based Crypto implementation that searches the truststore for the issuing certificate using the AuthorityKeyIdentifier extension bytes of the signing certificate, as opposed to the issuer DN.
    Trust verification for the standard/default Merlin implementation works as follows:
    1. Is the signing cert contained in the keystore/truststore? If yes, then trust verification succeeds. This can be combined with using regular expressions on the Subject DN as well.
    2. If not, then get the issuing cert by reading the Issuer DN from the signing cert. Then search for this cert in the keystore/truststore. 
    3. If the issuer cert is found, then form a cert path containing the signing cert, the issuing cert and any subsequent issuing cert of that cert. Then validate the cert path.
    However, the retrieval of the issuing cert in step 2 above falls down under certain rare scenarios, where there may not be a 1-to-1 link between the Subject DN of a certificate and a public key. This is where the new MerlinAKI implementation comes in. Instead of searching for the issuing cert using the issuer DN of the signing cert, it instead uses BouncyCastle to retrieve the AuthorityKeyIdentifier extension bytes (if present) from the cert. It then searches for the issuing cert by seeing which of the certs in the truststore contain a SubjectKeyIdentifier extension with a matching identifier value. You can switch to use MerlinAKI simply by changing the name of the Crypto provider in the Crypto properties file:
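A crypto properties file switching to MerlinAKI might look like the following sketch (the provider property and class names follow WSS4J 2.x conventions; the keystore details are placeholders):

org.apache.wss4j.crypto.provider=org.apache.wss4j.common.crypto.MerlinAKI
org.apache.wss4j.crypto.merlin.keystore.type=jks
org.apache.wss4j.crypto.merlin.keystore.password=security
org.apache.wss4j.crypto.merlin.keystore.file=keys/truststore.jks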


    Categories: Colm O hEigeartaigh

    Using AWS KMS with Apache CXF to secure passwords

    Colm O hEigeartaigh - Fri, 06/26/2015 - 13:16
    The previous tutorial showed how the AWS Key Management Service (KMS) can be used to generate symmetric encryption keys that can be used with WS-Security to encrypt and decrypt a service request using Apache CXF. It is also possible to use the KMS to secure keystore passwords for asymmetric encryption and signature, that are typically stored in properties files when using WS-Security with Apache CXF.

    1) Encrypting passwords in a Crypto properties file

    Apache CXF uses the WSS4J Crypto interface to get keys and certificates for asymmetric encryption/decryption and for signature creation/verification. Merlin is the standard implementation, based around two JDK keystores for key/cert retrieval, and trust verification. Typically, a Crypto implementation is loaded and configured via a Crypto properties file. For example:
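A typical Merlin crypto properties file looks roughly like this (an illustrative sketch using WSS4J 2.x property names; the alias, password and file name are placeholders):

org.apache.wss4j.crypto.provider=org.apache.wss4j.common.crypto.Merlin
org.apache.wss4j.crypto.merlin.keystore.type=jks
org.apache.wss4j.crypto.merlin.keystore.password=security
org.apache.wss4j.crypto.merlin.keystore.alias=myalias
org.apache.wss4j.crypto.merlin.keystore.file=keys/clientstore.jks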


However, one issue with this style of configuration is that the keystore password is stored in plaintext in the file. Apache WSS4J 2.0.0 introduced the ability to store encrypted passwords in the crypto properties file instead. A PasswordEncryptor interface was defined to allow for the encryption/decryption of passwords, and a default implementation based on Jasypt was made available in the release. In this case, the master password used to decrypt the encrypted keystore password was retrieved from a CallbackHandler implementation.

    2) Using KMS to encrypt keystore passwords

Instead of using the Jasypt PasswordEncryptor implementation provided by default in Apache WSS4J, it is possible to use AWS KMS instead to decrypt encrypted keystore passwords stored in crypto properties files. I've updated the test-case introduced in the previous tutorial with an asymmetric encryption test-case, where a SOAP service invocation is encrypted using a WS-SecurityPolicy AsymmetricBinding policy.

The first step in running the test-case is to follow the previous tutorial in terms of registering for AWS, creating a user "alice" and a corresponding customer master key. Once you have this, run the "testEncryptedPasswords" test available here, which outputs the encrypted passwords for the client and service keystores ("cspass" and "sspass"). Copy the output and paste it into clientEncKeystore.properties and serviceEncKeystore.properties inside the "ENC()" tags. For example:
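The result might look like this (the encrypted value shown is a made-up placeholder):

org.apache.wss4j.crypto.merlin.keystore.password=ENC(Base64EncodedKmsCiphertext)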


    The client and service configure a custom PasswordEncryptor implementation designed to decrypt the encrypted keystore password using KMS. The KMSPasswordEncryptor is spring-loaded in the client and service configuration, and must be updated with the access key id, secret key, master key id, etc. as defined earlier. Of course this means that the secret key is in plaintext in a spring configuration file in this example. However, it could be obtained via a system property or some other means, and is more secure than storing a plaintext keystore password in a properties file. Once the KMSPasswordEncryptor is properly configured, then the AsymmetricTest can be run, and you will see the secured service request and response in the console window.


    Categories: Colm O hEigeartaigh

    Integrating AWS Key Management Service with Apache CXF

    Colm O hEigeartaigh - Thu, 06/25/2015 - 13:20
    Apache CXF supports a wide range of standards designed to help you secure a web service request, from WS-Security for SOAP requests, to XML Security and JWS/JWE for XML/JSON REST requests. All of these standards provide for using symmetric keys to encrypt requests, and then using a master key (typically a public key associated with an X.509 certificate) to encrypt the symmetric key, embedding this information somewhere in the request. The usual use-case is to generate random bytes for the symmetric key. But what if you wanted instead to manage the secret keys in some way? Or if your client did not have access to sufficient entropy to generate truly random bytes? In this article, we will look at how to use the AWS Key Management Service to perform this task for us, in the context of an encrypted SOAP request using WS-Security.

    1) AWS Key Management Service

The AWS Key Management Service allows us to create master keys and data keys for users defined in the AWS Identity and Access Management service. Once we have created a user, and a corresponding master key for the user (which is only stored in AWS and cannot be exported), we can ask the Key Management Service to issue us a data key (using either AES 128 or 256) and an encrypted data key. The idea is that the data key is used to encrypt some data and is then disposed of. The encrypted data key is added to the request, where the recipient can ask the Key Management Service to decrypt the key, which can then be used to decrypt the encrypted data in the request.

    The first step is to register for Amazon AWS here. Once we have registered, we need to create a user in the Identity and Access Management service. Create a new user "alice", and make a note of the access key and secret access key associated with "alice". Next we need to write some code to obtain keys for "alice" (documentation). First we must create a client:

    AWSCredentials creds = new BasicAWSCredentials(<access key id>, <secret key>);
    AWSKMSClient kms = new AWSKMSClient(creds);
    kms.setEndpoint("https://kms.eu-west-1.amazonaws.com");

    Next we must create a customer master key for "alice":

    String desc = "Secret encryption key";
    CreateKeyRequest req = new CreateKeyRequest().withDescription(desc);
    CreateKeyResult result = kms.createKey(req);

    The CreateKeyResult object returned as part of the key creation process will contain a key Id, which we will need later.
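Requesting a data key plus its encrypted form (as described above) might then look like the following sketch against the AWS SDK for Java KMS client; the AES_128 key spec is an assumption:

GenerateDataKeyRequest dataKeyReq = new GenerateDataKeyRequest()
    .withKeyId(result.getKeyMetadata().getKeyId())
    .withKeySpec("AES_128");
GenerateDataKeyResult dataKey = kms.generateDataKey(dataKeyReq);
ByteBuffer plaintextKey = dataKey.getPlaintext();      // used to encrypt the request, then discarded
ByteBuffer encryptedKey = dataKey.getCiphertextBlob(); // embedded in the request for the recipient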

    2) Using AWS Key Management Service keys with WS-Security

    As mentioned above, the typical process for WS-Security when encrypting a request, is to generate some random bytes to use as the symmetric encryption key, and then use a key wrap algorithm with another key (typically a public key) to encrypt the symmetric key. Instead, we will use the AWS Key Management Service to retrieve the symmetric key to encrypt the request. We will store the encrypted form of the symmetric key in the WS-Security EncryptedKey structure, which will reference the Customer Master Key via a "KeyName" pointing to the Key Id.

    I have created a project that can be used to demonstrate this integration:
    • cxf-amazon-kms: This project contains a number of tests that show how to use the AWS Key Management Service with Apache CXF.
The first task in running the test (assuming the steps in point 1 above were followed) is to edit the client configuration, entering the correct values in the CommonCallbackHandler for the access key id, secret key, endpoint, and master key id as gathered above; the same applies to the service configuration. The CommonCallbackHandler uses the AWS Key Management Service API to create the symmetric key on the sending side, and to decrypt it on the receiving side. Then, to run the test, simply remove the "org.junit.Ignore" annotation, and the encrypted web service request can be seen in the console.


    Categories: Colm O hEigeartaigh

    Using a JDBC JAAS LoginModule with Apache CXF

    Colm O hEigeartaigh - Tue, 06/23/2015 - 13:20
    Last year I wrote a blog entry giving an overview of the different ways that you can use JAAS with Apache CXF for authenticating and authorizing web service calls. I also covered some different login modules and linked to samples for authenticating a Username + Password to LDAP, as well as Kerberos Tokens to a KDC. This article covers how to use JAAS with Apache CXF to authenticate a Username + Password to a database via JDBC.

    The test-case is available here:
    • cxf-jdbc: This project contains a number of tests that show how an Apache CXF service endpoint can authenticate and authorize a client using JDBC.
    It contains two tests, one dealing with authentication (no roles required by the service) and the other with authorization (a specific role is required). Both tests involve a JAX-WS service invocation, where the service requires a WS-Security UsernameToken over TLS. In each case, the service configures Apache WSS4J's JAASUsernameTokenValidator using the context name "jetty". The JAAS configuration file contains an entry for the "jetty" context, which references the Jetty JDBCLoginModule:
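Such a jaas.conf entry might look like the following sketch (the Jetty module class and option names should be checked against the test-case; the database credentials are placeholders):

jetty {
    org.eclipse.jetty.jaas.spi.JDBCLoginModule required
    dbUrl="jdbc:derby:memory:usersdb"
    dbUserName="user"
    dbPassword="pass"
    dbDriver="org.apache.derby.jdbc.EmbeddedDriver"
    userTable="app.users"
    userField="name"
    credentialField="password"
    userRoleTable="app.roles"
    userRoleUserField="name"
    userRoleRoleField="role";
};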

    The configuration of the JDBCLoginModule is easy to follow. The "dbUrl" refers to the JDBC connection URL (in this case an in-memory Apache Derby instance). The table containing user data is "app.users", where the fields used for usernames and passwords are "name" and "password" respectively. Similarly, the table containing role data is "app.roles", where the fields used for usernames + roles are "name" and "role" respectively.

    The tests use Apache Derby as an in-memory database. It is created in code as follows:
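A minimal sketch of how such an in-memory Derby instance can be created (the database name is an assumption):

// Load the embedded Derby driver and create an in-memory database
Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
Connection connection = DriverManager.getConnection("jdbc:derby:memory:usersdb;create=true");
Statement statement = connection.createStatement();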


    Then the following SQL file is read in, and each statement is executed using the statement Object above:
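The SQL presumably creates and populates the user and role tables described above; an illustrative sketch, where the actual usernames, passwords and roles are placeholders:

CREATE TABLE app.users (name VARCHAR(50) NOT NULL, password VARCHAR(50));
CREATE TABLE app.roles (name VARCHAR(50) NOT NULL, role VARCHAR(50));
INSERT INTO app.users VALUES ('alice', 'security');
INSERT INTO app.roles VALUES ('alice', 'boss');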


    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part II

    Colm O hEigeartaigh - Thu, 06/11/2015 - 14:47
    This is the second in a series of blog posts on the new features and changes in Apache CXF Fediz 1.2.0. The previous blog entry gave instructions about how to deploy the Fediz IdP and a sample service application in Apache Tomcat. This article describes how different client authentication methods are supported in the IdP, and how they can be selected by the service via the "wauth" parameter. Then we will extend the previous tutorial by showing how to authenticate to the IdP using a client certificate in the browser, as opposed to entering a username + password.

    1) Supporting different client authentication methods in the IdP

    The Apache Fediz IdP in 1.2.0 supports different client authentication methods by default using different URL paths, as follows:
    • /federation -> the main entry point
    • /federation/up -> authentication using HTTP B/A
    • /federation/krb -> authentication using Kerberos
    • /federation/clientcert -> authentication using a client cert
The way it works is as follows. The service provider (SP) should use the URL for the main entry point (although the SP has the option of choosing one of the more specific URLs as well). The IdP extracts the "wauth" parameter from the request ("default" is the default value), and looks for a matching key in the "authenticationURIs" section of the service configuration. For example:

    <property name="authenticationURIs">
        <util:map>
            <entry key="default" value="federation/up" />
            <entry key="http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/SslAndKey" value="federation/krb" />
            <entry key="http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/default" value="federation/up" />
            <entry key="http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/Ssl" value="federation/clientcert" />
        </util:map>
    </property>

    If a matching key is found for the wauth value, then the browser gets redirected to the associated URL. Therefore, a service provider can specify a value for "wauth" in the plugin configuration, and select the client authentication mode as a result. The values defined for "wauth" above are taken from the specification, but can be changed if required. The service provider can specify the value for "wauth" by using the "authenticationType" configuration tag, as documented here.

    2) Client authentication using a certificate

    A new feature of Fediz 1.2.0 is the ability for a client to authenticate to the IdP using a certificate embedded in the browser. To see how this works in practice, please follow the steps given in the previous tutorial to set up the IdP and service web application in Apache Tomcat. To switch to use client certificate authentication, only one change is required in the service provider configuration:
    • Edit ${catalina.home}/conf/fediz_config.xml, and add the following under the "protocol" section: <authenticationType>http://docs.oasis-open.org/wsfed/authorization/200706/authntypes/Ssl</authenticationType>
    The next step is to add a client certificate to the browser that you are using. To avoid changing the IdP TLS configuration, we will just use the same certificate / private key that is used by the IdP on the client side for the purposes of this demo. First, we need to convert the IdP key from JKS to PKCS12. So take the idp-ssl-key.jks configured in the previous tutorial and run:
    • keytool -importkeystore -srckeystore idp-ssl-key.jks -destkeystore idp-ssl-key.p12 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass tompass -deststorepass tompass -srcalias mytomidpkey -destalias mytomidpkey -srckeypass tompass -destkeypass tompass -noprompt
I will use Chrome as the client browser. Under Settings, Advanced Settings, "HTTPS/SSL", click on the Manage Certificates button, and add the idp-ssl-key.p12 keystore above using the password "tompass".
    Next, we need to tell the STS to trust the key used by the client (you can skip these steps if using Fediz 1.2.1):
    • First, export the certificate as follows: keytool -keystore idp-ssl-key.jks -storepass tompass -export -alias mytomidpkey -file MyTCIDP.cer
    • Take the ststrust.jks + import the cert: keytool -import -trustcacerts -keystore ststrust.jks -storepass storepass -alias idpcert -file MyTCIDP.cer -noprompt
    • Finally, copy the modified ststrust.jks into the STS: ${catalina.home}/webapps/fediz-idp-sts/WEB-INF/classes
    The last configuration step is to tell the STS where to retrieve claims for the cert. We will just copy the claims for Alice:
    • Edit ${catalina.home}/webapps/fediz-idp-sts/WEB-INF/userClaims.xml
    • Add the following under "userClaimsREALMA": <entry key="CN=localhost" value-ref="REALMA_aliceClaims" />
    Now restart Tomcat and navigate to the service URL:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    Select the certificate that we have uploaded, and you should be able to authenticate to the IdP and be redirected back to the service, without having to enter any username/password credentials!
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.0 tutorial - part I

    Colm O hEigeartaigh - Wed, 06/10/2015 - 17:16
    The previous blog entry gave an overview of the new features in Apache CXF Fediz 1.2.0. This post first focuses on setting up and running the IdP (Identity Provider) and the sample simpleWebapp in Apache Tomcat.

    1) Deploying the 1.2.0 Fediz IdP in Apache Tomcat

Download Fediz 1.2.0 and extract it to a new directory (${fediz.home}). We will use an Apache Tomcat 7 container to host the IdP. To deploy the IdP to Tomcat:
    • Create a new directory: ${catalina.home}/lib/fediz
    • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
    • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
    • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
    • Download and copy the hsqldb jar (e.g. hsqldb-1.8.0.10.jar) to ${catalina.home}/lib
    Now we need to set up TLS:
    • The keys that ship with Fediz 1.2.0 are 1024 bit DSA keys, which will not work with most modern browsers (this will be fixed for 1.2.1). 
    • So we need to generate a new key: keytool -genkeypair -validity 730 -alias mytomidpkey -keystore idp-ssl-key.jks -dname "cn=localhost" -keypass tompass -storepass tompass -keysize 2048 -keyalg RSA
    • Export the cert: keytool -keystore idp-ssl-key.jks -storepass tompass -export -alias mytomidpkey -file MyTCIDP.cer
    • Create a new truststore with the cert: keytool -import -trustcacerts -keystore idp-ssl-trust.jks -storepass ispass -alias mytomidpkey -file MyTCIDP.cer -noprompt
    • Copy idp-ssl-key.jks and idp-ssl-trust.jks to ${catalina.home}.
    • Copy both jks files as well to ${catalina.home}/webapps/fediz-idp/WEB-INF/classes/ (after Tomcat is started)
    • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
    Now start Tomcat, and check that the IdP is live by opening the STS WSDL in a web browser: 'https://localhost:8443/fediz-idp-sts/REALMA/STSServiceTransport?wsdl'

    For a more thorough test, enter the following in a web browser - you should be directed to the URL for the service application (404, as we have not yet configured it):

https://localhost:8443/fediz-idp/federation?wa=wsignin1.0&wreply=http%3A%2F%2Flocalhost%3A8080%2Fwsfedhelloworld%2Fsecureservlet%2Ffed&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld

    2) Deploying the simpleWebapp in Apache Tomcat

    To deploy the service to Tomcat:
    • Copy ${fediz.home}/examples/samplekeys/rp-ssl-server.jks and ${fediz.home}/examples/samplekeys/ststrust.jks to ${catalina.home}.
    • Copy ${fediz.home}/examples/simpleWebapp/src/main/config/fediz_config.xml to ${catalina.home}/conf/
    • Edit ${catalina.home}/conf/fediz_config.xml and replace '9443' with '8443'.
    • Do a "mvn clean install" in ${fediz.home}/examples/simpleWebapp
    • Copy ${fediz.home}/examples/simpleWebapp/target/fedizhelloworld.war to ${catalina.home}/webapps.
    3) Testing the service

    To test the service navigate to:
    • https://localhost:8443/fedizhelloworld/  (this is not secured) 
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    With the latter URL, the browser is redirected to the IDP (select realm "A") and is prompted for a username and password. Enter "alice/ecila" or "bob/bob" or "ted/det" to test the various roles that are associated with these username/password pairs.
    Categories: Colm O hEigeartaigh

    Enterprise ready request logging with CXF 3.1.0 and elastic search

    Christian Schneider - Mon, 06/08/2015 - 17:29


    You may already know the CXF LoggingFeature. You used it like this:

Old CXF LoggingFeature:
<jaxws:endpoint ...>
    <jaxws:features>
        <bean class="org.apache.cxf.ext.logging.LoggingFeature"/>
    </jaxws:features>
</jaxws:endpoint>

It allowed you to add logging to a CXF endpoint at compile time.

While this already helped a lot, it was not really enterprise ready. The logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

    Logging feature in CXF 3.1.0

    In CXF 3.1 this code was moved into a separate module and gathered some new features.

    • Auto logging for existing CXF endpoints
    • Uses slf4j MDC to log meta data separately
    • Adds meta data for Rest calls
    • Adds MD5 message id and exchange id for correlation
    • Simple interface for writing your own appenders
    • Karaf decanter support to log into elastic search
    Auto logging for existing CXF endpoints in Apache Karaf

    Simply install and enable the new logging feature:

Logging feature in karaf:
feature:repo-add cxf 3.1.0
feature:install cxf-features-logging
config:property-set -p org.apache.cxf.features.logging enabled true

    Then install CXF endpoints like always. For example install the PersonService from the Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still the new logging feature will enhance the clients and endpoints and log all SOAP and Rest calls using slf4j. So the logging data will be processed by pax logging and by default end up in your karaf log.

    A log entry looks like this:

Sample log entry:
2015-06-08 16:35:54,068 | INFO | qtp1189348109-73 | REQ_IN | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:addPerson xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"><arg0><id>3</id><name>Test2</name><url></url></arg0></ns2:addPerson></soap:Body></soap:Envelope>

    This does not look very informative. You only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information though. You just need to configure the pax logging config to show it.

    Slf4j MDC values for meta data

    This is the raw logging information you get for a SOAP call:

Field               Value
@timestamp          2015-06-08T14:43:27,097Z
MDC.address         http://localhost:8181/cxf/personService
MDC.bundle.id       90
MDC.bundle.name     org.apache.cxf.cxf-rt-features-logging
MDC.bundle.version  3.1.0
MDC.content-type    text/xml; charset=UTF-8
MDC.encoding        UTF-8
MDC.exchangeId      56b037e3-d254-4fe5-8723-f442835fa128
MDC.headers         {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
MDC.httpMethod      POST
MDC.messageId       a46eebd2-60af-4975-ba42-8b8205ac884c
MDC.portName        PersonServiceImplPort
MDC.portTypeName    PersonService
MDC.serviceName     PersonServiceImplService
MDC.type            REQ_IN
level               INFO
loc.class           org.apache.cxf.ext.logging.slf4j.Slf4jEventSender
loc.file            Slf4jEventSender.java
loc.line            55
loc.method          send
loggerClass         org.ops4j.pax.logging.slf4j.Slf4jLogger
loggerName          org.apache.cxf.services.PersonService.REQ_IN
message             <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:getAll xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"/></soap:Body></soap:Envelope>
threadName          qtp80604361-78
timeStamp           1433774607097

    Some things to note:

    • The logger name is <service namespace>.<ServiceName>.<type>; the default Karaf log layout cuts it down to just the type.
    • A lot of the details are in the MDC values

    You need to change your pax logging config to make these visible.

You can use the logger name to fine-tune which services you want to log this way. For example, set the log level to WARN for noisy services so they are not logged, or route some services to another file.
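For example, the appender pattern in Karaf's etc/org.ops4j.pax.logging.cfg can include selected MDC values via the log4j %X conversion (a sketch; the chosen keys are illustrative):

log4j.appender.out.layout.ConversionPattern=%d{ISO8601} | %-5.5p | %X{type} | %X{messageId} | %X{address} | %m%n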

Message id and exchange id

The messageId allows you to uniquely identify messages even if you collect them from several servers. It is also transported over the wire, so you can correlate a request sent on one machine with the request received on another machine.

The exchangeId will be the same for an incoming request and the response sent out, or, on the other side, for an outgoing request and the response to it. This allows you to correlate requests and responses and so follow the conversations.

    Simple interface to write your own appenders

    Write your own LogSender and set it on the LoggingFeature to do custom logging. You have access to all meta data from the class LogEvent.

    So for example you could write your logs to one file per message or to JMS.
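A custom sender might look roughly like the following sketch (assuming the LogEventSender interface from the cxf-rt-features-logging module; the exact getter names should be checked against CXF 3.1):

import org.apache.cxf.ext.logging.event.LogEvent;
import org.apache.cxf.ext.logging.event.LogEventSender;

public class StdoutLogSender implements LogEventSender {
    @Override
    public void send(LogEvent event) {
        // One line per message; this could just as easily write to a file or JMS
        System.out.println(event.getType() + " " + event.getMessageId()
                + ": " + event.getPayload());
    }
}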

    Karaf Decanter support for writing into Elasticsearch

    Many people use Elasticsearch for their logging. Fortunately, you do not have to write a special LogSender for this purpose; the standard CXF logging feature already works.

    It works like this:

    • CXF sends the messages as slf4j events which are processed by pax logging
    • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
    • Karaf Decanter ElasticSearchAppender sends the log events to a configurable Elasticsearch instance

    As Decanter also provides features for a local Elasticsearch and Kibana instance, you are ready to go in just minutes.

    Installing Decanter for CXF logging:

        feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/3.0.0-SNAPSHOT/xml/features
        feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana


    After that, open a browser at http://localhost:8181/kibana. Once Decanter is released, Kibana will be fully set up out of the box; at the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

    Then you should see your CXF messages in the Kibana dashboard.

    Kibana makes it easy to filter for specific services and to correlate requests and responses.

    This is just a preview of Decanter; I will do a more detailed post when the first release is out.


    Categories: Christian Schneider

    Apache CXF Fediz 1.2.0 tutorial - overview

    Colm O hEigeartaigh - Thu, 05/28/2015 - 17:59
    Apache CXF Fediz 1.2.0 has been released. Fediz is a subproject of the Apache CXF web services stack. It is an implementation of the WS-Federation Passive Requestor Profile for SSO that supports Claims Based Access Control. In layman's terms, Fediz allows you to implement Single Sign On (SSO) for your web application, by redirecting the client browser to an Identity Provider (IdP), where the client is authenticated and redirected back to the application. Fediz consists of a number of container-specific plugins (Tomcat, Jetty, Spring Security, WebSphere, etc.) as well as an IdP which bundles the CXF Security Token Service (STS) to issue SAML Tokens.

    This is an overview of a planned series of articles on the new features that are available in Fediz 1.2.0, which is a new major release of the project. Subsequent articles will go into more detail on the new features, which are as follows:
    • Dependency update to use CXF 3.0.x (3.0.4).
    • A new container-independent CXF-based plugin is available.
    • Logout Support has been added to the plugins and IdP.
    • A new REST API is available for configuring the IdP.
    • Support for authenticating to the IdP using Kerberos has been added.
    • Support for authenticating to the IdP using a client certificate has been added.
    • It is now possible to use the IdP as an identity broker with a SAML SSO IdP.
    • Metadata support has been added for the plugins and IdP.
    Categories: Colm O hEigeartaigh

    SAML SSO RP Metadata support in Apache CXF

    Colm O hEigeartaigh - Thu, 05/28/2015 - 17:28
    Apache CXF provides comprehensive support for SSO using the SAML Web SSO profile for CXF-based JAX-RS services. In Apache CXF 3.1.0 (and 3.0.5), a new Metadata service is available to allow for the publishing of SAML SSO Metadata for a given service.

    The MetadataService class is available on a "metadata" path and provides a single @GET method that returns the service metadata in XML format. It has the following properties which should be configured:
    • String serviceAddress - The URL of the service
    • String assertionConsumerServiceAddress - The URL of the RACS. If it is co-located with the service, then it can be the same URL as for the serviceAddress.
    • String logoutServiceAddress - The URL of the logout service (if available).
    • boolean addEndpointAddressToContext - Whether to add the full endpoint address to the values configured above. The default is false.
    In addition, the MetadataService extends the AbstractSSOSpHandler, which contains various properties that are required to sign the metadata (keystore alias, crypto properties file which references the keystore, etc.). A sample spring-based configuration for the MetadataService is available in the CXF system tests here. When accessed via a web browser, the service returns the signed service metadata as XML.
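
    A minimal sketch of such a spring bean definition, using only the properties listed above (the package name and the addresses are assumptions; see the linked system test for the real configuration, including the signature-related properties inherited from AbstractSSOSpHandler):

        <bean id="metadataService"
              class="org.apache.cxf.rs.security.saml.sso.MetadataService">
            <property name="serviceAddress"
                      value="https://localhost:8443/sso/service"/>
            <property name="assertionConsumerServiceAddress"
                      value="https://localhost:8443/sso/racs"/>
            <property name="logoutServiceAddress"
                      value="https://localhost:8443/sso/logout"/>
        </bean>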


    Categories: Colm O hEigeartaigh

    Apache CXF 3.1.0 released

    Colm O hEigeartaigh - Tue, 05/26/2015 - 16:04
    Apache CXF 3.1.0 has been released and is available for download. The migration guide for CXF 3.1.x is available here. The main (non-security) features of CXF 3.1.0 are as follows:
    • Java 6 is no longer supported.
    • Jetty 9 is now supported. Support for Jetty 7 has been dropped.
    • A new Metrics feature for collecting metrics about CXF services is available. 
    • A new Throttling feature is available for easily throttling CXF services.
    • A new Logging feature is available that is more powerful than the existing logging functionality.
    The security-specific changes and features are as follows:
    • CXF 3.1.0 picks up a new major release of WSS4J (2.1.0) and OpenSAML (3.1.0). Please see a recent post on WSS4J 2.1.0 for some migration notes if you are using WS-Security or SAML with CXF.
    • The STS now signs issued SAML tokens by default using RSA-SHA256 (previously RSA-SHA1).
    • Some security configuration tags have been renamed from "ws-security.*" to "security.*", as they are now shared with the XML Security JAX-RS module. The old tags will continue to work as before, however, without requiring any change. See the Security Configuration page for more information.
    • The SAML/XACML functionality previously available in the cxf-rt-security module is now in a new cxf-rt-security-saml module.
    • A new Metadata service for SAML SSO is available. More on this in a future blog post.
    • It is now possible to "plug in" custom security policy validators for WS-Security in CXF, if you want to change the default validation logic. See here for a test that shows how to do this.
    Categories: Colm O hEigeartaigh

    Apache WSS4J 2.0.4 released

    Colm O hEigeartaigh - Fri, 05/15/2015 - 16:24
    In addition to the new major release of Apache WSS4J (2.1.0), there is a new bug fix release available - Apache WSS4J 2.0.4. Here are the most important bugs that were fixed in this release:
    • We now support the InclusiveC14N policy.
    • We can enforce that a Timestamp has an "Expires" Element via configuration, if desired.
    • There is a slight modification to how we cache signed Timestamps, to allow for the scenario of two Signatures in a security header that sign the same Timestamp, but with different keys.
    • The policy layer now allows a SupportingToken policy to have more than one token.
    • A bug has been fixed in the MerlinDevice crypto provider, which is designed to work with smartcards.
    • A bug has been fixed so that the correct JCE provider is used for encryption/decryption.
    Categories: Colm O hEigeartaigh

    Apache WSS4J 2.1.0 released

    Colm O hEigeartaigh - Fri, 05/15/2015 - 16:23
    A new major release of Apache WSS4J, 2.1.0, has been released. The previous major release of almost a year ago, Apache WSS4J 2.0.0, had a lot of substantial changes (see the migration guide), such as a new set of maven modules, a new streaming implementation, changes to configuration tags, package changes for CallbackHandlers, etc. In contrast, WSS4J 2.1.0 has a much smaller set of changes, and users should be able to upgrade with very few changes (if at all). This post briefly covers the new features and migration issues for WSS4J 2.1.0.

    Here are the new features of WSS4J 2.1.0:
    • JDK7 minimum requirement: WSS4J 2.1.0 requires at least JDK7. The project source has also been updated to make use of some of the new features of JDK7, such as try-with-resources, etc.
    • OpenSAML 3.x support: WSS4J 2.1.0 upgrades from OpenSAML 2.x to 3.x (currently 3.1.0). This is an important upgrade, as OpenSAML 2.x is not currently supported.
    • Extensive code refactoring. A lot of work was done to make the retrieval of security results easier and faster.

    Here are the migration issues in WSS4J 2.1.0:
    • Due to the OpenSAML 3.x upgrade, you will need to make a small change to your SAML CallbackHandlers if you are specifying the "Version". In WSS4J 2.0.x, you could specify the SAML Version by passing an "org.opensaml.common.SAMLVersion" instance through to the SAMLCallback.setSamlVersion(...) method. The "org.opensaml.common" package is removed in OpenSAML 3.x. Instead, a new Version bean is provided in WSS4J 2.1.0 that can be passed to the setSamlVersion method on SAMLCallback as before (see the sketch after this list). See here for an example.
    • The Xerces and xml-apis dependencies in the DOM code of Apache WSS4J 2.1.0 have been removed (previously they were at "provided" scope).
    • If you have a custom Processor instance to process a token in the security header in some custom way, you must add the WSSecurityEngineResult that is generated by the processing, to the WSDocInfo Object via the "addResult" method. Otherwise, it will not be available when security results are retrieved and processed.
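
    For the first point, a minimal sketch of the change in a SAML CallbackHandler (the exact constant name on the new WSS4J Version bean is an assumption; check the linked example):

        import org.apache.wss4j.common.saml.SAMLCallback;
        import org.apache.wss4j.common.saml.bean.Version;

        SAMLCallback callback = new SAMLCallback();
        // WSS4J 2.0.x (removed with OpenSAML 3.x):
        // callback.setSamlVersion(org.opensaml.common.SAMLVersion.VERSION_20);
        // WSS4J 2.1.0 (constant name assumed):
        callback.setSamlVersion(Version.SAML_20);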


    Categories: Colm O hEigeartaigh

    The Rise Of Apache Tika

    Sergey Beryozkin - Thu, 05/14/2015 - 22:51
    Apache Tika is an interesting project. It is not a very big one, but IMHO it is poised to become the project that every team serious about processing complex, unstructured, binary content will talk about and use.

    The power of Apache Tika lies in the simplicity it offers for processing different types of binary and other complex data. Consider a simple situation: your project needs to support analyzing PDF files. One approach is to write a routine specific to a PDF library. This approach stops scaling as soon as you need to support Excel and ODT files too, and stops working once you have to support a possibly unlimited number of data types.

    Apache Tika generalizes the processing of arbitrary types of data and thus offers a given project a unique opportunity to provide a real value add-on.
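
    For illustration, a minimal sketch using Tika's AutoDetectParser (the file name is a placeholder): the same code path handles PDF, Excel, ODT and many other formats.

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        import org.apache.tika.metadata.Metadata;
        import org.apache.tika.parser.AutoDetectParser;
        import org.apache.tika.parser.ParseContext;
        import org.apache.tika.sax.BodyContentHandler;

        public class TikaSketch {
            public static void main(String[] args) throws Exception {
                AutoDetectParser parser = new AutoDetectParser();
                BodyContentHandler handler = new BodyContentHandler();
                Metadata metadata = new Metadata();
                // Tika detects the content type and delegates to the matching parser
                try (InputStream in = Files.newInputStream(Paths.get("sample.pdf"))) {
                    parser.parse(in, handler, metadata, new ParseContext());
                }
                System.out.println(metadata.get(Metadata.CONTENT_TYPE));
                System.out.println(handler.toString());
            }
        }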

    I really liked this presentation at the recent ApacheCon NA. It was absolutely packed with interesting content, and Chris talked a lot about applying Tika to solving real-life problems. Andriy Redko did a brilliant talk about the CXF and Tika integration. There were more Tika presentations, and I regret I could not make it to all of them.

    The future is bright for Tika. And for the projects that will use it :-)
    Categories: Sergey Beryozkin

    OpenId Connect Certification Strategy

    Sergey Beryozkin - Fri, 05/01/2015 - 12:30
    I've just read about the open OpenId Connect certification strategy. IMHO it is brilliant and will no doubt guarantee a wider adoption of OIDC. Mike Jones's explanation of why it will work is a good read.

    The closed (paid-only) certification model limits the adoption of a given technology by implementers.
    Categories: Sergey Beryozkin

    [OT] Apache CXF is Electric !

    Sergey Beryozkin - Thu, 04/30/2015 - 23:30
    I remember this day as if it were yesterday. April or March of 1998: I'm in England, in Stockport city centre, listening to Oasis's latest single. It was absolutely great, the energizing effect of it.

    As it happens, I didn't listen to Oasis for the next 17 years, apart from hearing them occasionally on local FM. But a month or so back, I finally got their disc.

    "She is Electric" is one of the best songs, classical Oasis. Nearly every time I listen to it I think, well, one can definitely say "Apache CXF is Electric". Why ? Because Apache CXF is cool, active and alive !  Work with it and you will become Electric too :-)   
    Categories: Sergey Beryozkin
