Latest Activity

An interop demo between Apache CXF Fediz and Google OpenID Connect

Colm O hEigeartaigh - Fri, 04/29/2016 - 18:19
The previous post introduced some of the new features in Apache CXF Fediz 1.3.0. One of the new enhancements is that the Fediz IdP can now delegate WS-Federation (and SAML SSO) authentication requests to a third party IdP via OpenID Connect. An article published in February showed how it is possible to interoperate between Fediz and the Keycloak OpenID Connect provider. In this post, we will show how to configure the Fediz IdP to interoperate with the Google OpenID Connect provider.

1) Create a new client in the Google developer console

The first step is to create a new client in the Google developer console to represent the Apache CXF Fediz IdP. Login to the Google developer console and create a new project. Click on "Enable and Manage APIs" and then select "Credentials". Click on "Create Credentials" and select "OAuth client id". Configure the OAuth consent screen and select "web application" as the application type. Specify "https://localhost:8443/fediz-idp/federation" as the authorized redirect URI. After creating the client, a pop-up window will specify the newly created client id and secret. Save both values locally. The Google documentation is available here.

2) Testing the newly created Client

It's possible to see the Google OpenId Connect configuration by navigating to:
  • https://accounts.google.com/.well-known/openid-configuration
This tells us what the authorization and token endpoints are, both of which we will need when configuring the Fediz IdP. To test that everything is working correctly, open a web browser and navigate to the following URL, substituting the client id saved in step 1 above:
  • https://accounts.google.com/o/oauth2/v2/auth?response_type=code&client_id=<client-id>&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=openid
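For reference, the same authorization URL can be assembled programmatically. A minimal Java sketch (the client id is a placeholder for the value saved in step 1; note that the redirect_uri is URL-encoded here, which browsers tolerate being skipped in the raw URL above):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class BuildAuthUrl {
    // Builds the Google authorization request URL for the Fediz IdP redirect URI.
    static String buildUrl(String clientId) throws UnsupportedEncodingException {
        String redirectUri = "https://localhost:8443/fediz-idp/federation";
        return "https://accounts.google.com/o/oauth2/v2/auth"
                + "?response_type=code"
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
                + "&scope=openid";
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        // "<client-id>" is a placeholder, not a real client id.
        System.out.println(buildUrl("<client-id>"));
    }
}
```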
Login using your Google credentials and grant permission on the OAuth consent screen. The browser will then attempt a redirect to the given redirect_uri. Copy the URL and extract the "code" query parameter. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
  • curl -u <client-id>:<secret> --data "client_id=<client-id>&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" https://www.googleapis.com/oauth2/v4/token
You should see a successful response containing (amongst other things) the OAuth 2.0 Access Token and the OpenId Connect IdToken, which contains the user identity.
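The IdToken is a JWT whose payload segment is plain base64url-encoded JSON, so it can be inspected without any library. A small sketch (this does NOT verify the signature; Fediz verifies it against the keys published at the "jwks.uri" configured later):

```java
import java.util.Base64;

public class DecodeIdToken {
    // Decode the middle (payload) segment of a JWT: header.payload.signature.
    // No signature verification is performed here.
    static String decodePayload(String jwt) {
        String payload = jwt.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(payload));
    }

    public static void main(String[] args) {
        // A toy token with payload {"sub":"alice"}; a real Google IdToken is much longer.
        String jwt = "header.eyJzdWIiOiJhbGljZSJ9.sig";
        System.out.println(decodePayload(jwt)); // {"sub":"alice"}
    }
}
```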

3) Install and configure the Apache CXF Fediz IdP and sample Webapp

Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use Fediz 1.3.0 here for OpenId Connect support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
  • https://localhost:8443/fedizhelloworld/secure/fedservlet 
3.1) Configure the Fediz IdP to communicate with the Google IdP

Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the OpenID Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
  • Change the port in "idpUrl" to "8443". 
In the 'trusted-idp-realmB' bean:
  • Change the "url" value to "https://accounts.google.com/o/oauth2/v2/auth".
  • Change the "protocol" value to "openid-connect-1.0".
  • Delete the "certificate" and "trustType" properties.
  • Add the following parameters Map, filling in values for the client id and secret extracted above:

        <property name="parameters">
            <util:map>
                <entry key="client.id" value="<client-id>" />
                <entry key="client.secret" value="<secret>" />
                <entry key="token.endpoint" value="https://accounts.google.com/o/oauth2/token" />
                <entry key="scope" value="openid profile email"/>
                <entry key="jwks.uri" value="https://www.googleapis.com/oauth2/v3/certs" />
                <entry key="subject.claim" value="email"/>
            </util:map>
        </property>
There are a few additional properties configured here compared to the previous tutorial. It is possible to specify custom scopes via the "scope" parameter. In this case we are requesting the "profile" and "email" scopes. The default value for this parameter is "openid". In addition, rather than validating the signed IdToken using a local certificate, here we are specifying a value for "jwks.uri", which is the location of the signing key. The "subject.claim" property specifies the claim name from which to obtain the Subject name, which is inserted into a SAML Token that is sent to the STS.

3.2) Update the TLS configuration

By default, the Fediz IdP is configured with a truststore required to access the Fediz STS. However, this means that requests to the Google IdP over TLS will not be trusted. To change this, edit 'webapps/fediz-idp/WEB-INF/classes/cxf-tls.xml' and change the HTTP conduit name from "*.http-conduit" to "https://localhost.*". This configuration will then only get picked up when communicating with the STS (deployed on "localhost"), and the default JDK truststore will get used when communicating with the Google IdP.
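After the change, the conduit element should look roughly like this (a sketch: the namespace prefix and the TLS client parameters inside the conduit are whatever the original file already uses, only the name attribute changes):

```xml
<http:conduit name="https://localhost.*">
    <!-- the existing tlsClientParameters / truststore configuration
         from the original file stays unchanged -->
</http:conduit>
```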

3.3) Update Fediz STS claim mapping

The STS will receive a SAML Token created by the IdP representing the authenticated user. The Subject name will be the email address of the user as configured above. Therefore, we need to add a claims mapping in the STS to map the principal received to some claims. Edit 'webapps/fediz-idp-sts/WEB-INF/userClaims.xml' and just copy the entry for "alice" in "userClaimsREALMA", changing "alice" to your Google email address.

Finally, restart Fediz to pick up the changes (you may need to remove the persistent storage first).

4) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Google authentication page. Enter your Google credentials. You will then be redirected back to Fediz, which converts the received JWT token into a token in its own realm (realm A) and redirects to the web application.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.3.0 released

    Colm O hEigeartaigh - Mon, 04/25/2016 - 18:06
    Version 1.3.0, a new major release of Apache CXF Fediz, went out a few weeks ago. There are some major dependency updates as part of this release:
    • The core Apache CXF dependency is updated from the 3.0.x branch to the 3.1.x branch (3.1.6 to be precise)
    • The Spring dependency of the IdP is updated from the 3.2.x branch to the 4.1.x branch.
    Fediz contains a number of container plugins to support the Passive Requestor Profile of WS-Federation. The 1.3.0 release now supports container plugins for:
    • Websphere
    • Jetty 8 and 9 (new)
    • Apache Tomcat 7 and 8 (new)
    • Spring Security 2 and 3
    • Apache CXF.
    The Identity Provider (IdP) service has the following new features:
    • The IdP now supports protocol bridging with OpenId Connect IdPs (see previous article on an interop demo with Keycloak).
    • The IdP is now capable of supporting the SAML SSO protocol natively, in addition to the Passive Requestor Profile of WS-Federation.
    • A new IdP service is now available which supports OpenId Connect by leveraging Apache CXF. By default it delegates authentication to the existing Fediz IdP using WS-Federation.
    In a nutshell, the Fediz 1.3.0 IdP supports user authentication via the WS-Federation, SAML SSO and OpenId Connect protocols, and it can also bridge between all of these different protocols. This is a compelling selling point of Fediz, and one I will explore more in some forthcoming articles.
    Categories: Colm O hEigeartaigh

    Talking about Fediz OIDC at Apache Con NA 2016

    Sergey Beryozkin - Sun, 04/24/2016 - 19:01
    Colm and myself are going to talk about Fediz OpenId Connect at Apache Con NA 2016. The session is on Friday 13th May.

    Be there if you can, you can then tell your grandchildren you were at the 1st public presentation about Fediz OIDC :-)

    I do look forward to being at Apache Con again. Seeing and talking to the colleagues from Apache CXF and other projects is always super great.
    Categories: Sergey Beryozkin

    [OT] U2 Innocence And Experience or Understand HTTP services with CXF

    Sergey Beryozkin - Sun, 04/24/2016 - 18:47
    I've already told all of my colleagues who would listen how lucky I was to get a chance to see U2 live, playing several concerts in Dublin as part of their Innocence and Experience tour.

    I've already explained why I like U2. But seeing them play live is really special. The voice is so good it is shocking at first. They are hard working and innovative, despite not being that young any more; the latter part is something I can definitely associate with :-).

    In all of the [OT] entries on my blog I'm trying to look for a 'connection' to Apache CXF. No exception this time:

    Apache CXF is not only a place where one can have a Web/HTTP service created, but also a place to go from Novice to Expert in building such services. CXF may not offer a way to have a Hello World application created for you without doing anything at all, but it has been known to deliver in supporting the most demanding services. By the time developers have those services up and running, they have become the experts who know what it takes to write a service that works well. They have moved from the 'Innocence' of Hello World services to the 'Experience' required to support Real World services.
    Categories: Sergey Beryozkin

    CXF Master JAX-RS 2.1 Branch is Opened

    Sergey Beryozkin - Sun, 04/24/2016 - 18:17
    Good news for CXF JAX-RS users: Andriy Redko has opened a CXF Master JAX-RS 2.1 branch. Server-Sent Events is the first 2.1 API feature supported on this branch. Having this 2.1 API snapshot available is handy.

    The development of JAX-RS 2.1 has been frustratingly slow, but there is some progress nonetheless, with Jersey (the reference implementation) expected to be ready as soon as realistically possible, given that all the major features proposed for JAX-RS 2.1 have already been implemented in Jersey.

    JAX-RS is easily the best API for building REST clients and servers. Despite the process difficulties it will continue evolving. Use it and believe more is to come in the JAX-RS space.
    Categories: Sergey Beryozkin

    Apache Karaf Tutorial part 10 - Declarative services

    Christian Schneider - Fri, 04/22/2016 - 17:16

    Blog post edited by Christian Schneider

    This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

    You can find the full source code on github Karaf-Tutorial/tasklist-ds

    Declarative Services

    Declarative Services (DS) is the biggest contender to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

    At the core DS works with XML files to define SCR components and their dependencies. They typically live in the OSGI-INF directory and are announced in the Manifest using the header "Service-Component" with the path to the component descriptor file. Luckily it is not necessary to work with this XML directly, as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin.

    <_dsannotations>*</_dsannotations>

    For more details see http://www.aqute.biz/Bnd/Components
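    In the pom this typically looks like the following sketch of a maven-bundle-plugin configuration (plugin version omitted):

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- enable processing of DS annotations at build time -->
            <_dsannotations>*</_dsannotations>
        </instructions>
    </configuration>
</plugin>
```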

    DS vs Blueprint

    Let us look into DS by comparing it to the already better known blueprint. There are some important differences:

    1. Blueprint always works on a complete blueprint context. So the context will be started when all mandatory service dependencies are present. It then publishes all offered services. As a consequence, a blueprint context cannot depend on services it offers itself. DS works on components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately, e.g. start and stop it. It is also possible that a bundle offers two components but only one is started, as the dependencies of the other are not yet there.
    2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
      You have a DS component and a blueprint bean, each offering a service A and depending on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up it will fail after a timeout and will not be able to recover from this. Once the blueprint context is up, it stays up even if the mandatory service goes away. This is called service damping and has the goal of avoiding restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
      In DS, on the other hand, a component's lifecycle is directly bound to its dependent services. So a component will only be activated when all mandatory services are present, and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied, and calls to it should always work.
    3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
    4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces. Examples are: Aries jpa, Aries transactions, Aries authz, CXF, Camel. So using these technologies in DS can be a bit more difficult.
    5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean. This is for example used for security as well as transaction handling. For this reason DS traditionally did not support JPA very well, as normal JPA usage mandates interceptors. See below how JPA can work with DS.

    So if DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with other projects.
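The dynamic proxy behaviour described in point 2 can be illustrated with a plain JDK proxy. This is a simplified sketch, not the actual Aries blueprint code: blueprint would block until a timeout before failing, while this sketch fails immediately when the backing service is gone.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicReference;

public class ServiceDamping {
    interface TaskService { String ping(); }

    // Wraps the backing service in a JDK dynamic proxy, similar in spirit
    // to what blueprint does internally for injected services.
    static TaskService proxyFor(AtomicReference<TaskService> backing) {
        InvocationHandler handler = (proxy, method, args) -> {
            TaskService target = backing.get();
            if (target == null) {
                // Blueprint would block until a timeout first; we fail immediately.
                throw new IllegalStateException("service unavailable");
            }
            return method.invoke(target, args);
        };
        return (TaskService) Proxy.newProxyInstance(
                TaskService.class.getClassLoader(),
                new Class<?>[] { TaskService.class }, handler);
    }

    public static void main(String[] args) {
        AtomicReference<TaskService> backing = new AtomicReference<>(() -> "pong");
        TaskService proxied = proxyFor(backing);

        System.out.println(proxied.ping()); // forwards to the real service
        backing.set(null);                  // simulate the service going away
        try {
            proxied.ping();
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```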

    JEE and JPA

    The JPA spec is based on JEE, which has a very special thread and interceptor model. In JEE you use session beans with a container managed EntityManager
    to manipulate JPA entities. It looks like this:

    @Stateless
    class TaskServiceImpl implements TaskService {
        @PersistenceContext(unitName = "tasklist")
        private EntityManager em;

        public Task getTask(Integer id) {
            return em.find(Task.class, id);
        }
    }

    In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction will be committed, if there is an exception it will be rolled back.
    The calls go to a pool of TaskServiceImpl instances. Each of these instances will only be used by one thread at a time. As a result of this the EntityManager interface is not thread safe!

    So the advantage of this model is that it looks simple and allows pretty small code. On the other hand it is a bit difficult to test such code outside a container, as you have to mimic the way the container works with this class. It is also difficult to access e.g. em, as it is private and there is no setter.
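For example, a plain unit test can only reach such a private field via reflection. A small illustration with a stand-in class (the class and field here are hypothetical, used only to demonstrate the problem):

```java
import java.lang.reflect.Field;

public class ReflectionInjection {
    // Stand-in for the JEE bean: a private dependency and no setter.
    static class TaskServiceImpl {
        private String em; // would be an EntityManager in the real code
        String describe() { return "em=" + em; }
    }

    public static void main(String[] args) throws Exception {
        TaskServiceImpl service = new TaskServiceImpl();
        // The test has to break encapsulation to inject the dependency:
        Field field = TaskServiceImpl.class.getDeclaredField("em");
        field.setAccessible(true);
        field.set(service, "mockEntityManager");
        System.out.println(service.describe()); // em=mockEntityManager
    }
}
```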

    Blueprint supports a coding style similar to the JEE example and implements this using a special jpa and tx namespace and
    interceptors that handle the transaction / em management.

    DS and JPA

    In DS each component is a singleton. So there is only one instance of it that needs to cope with multi threaded access. So working with the plain JEE concepts for JPA is not possible in DS.

    Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

    Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution here is the concept of a JpaTemplate together with support for closures in Java 8. To see how the code looks, peek at the Persistence chapter below.

    Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We need to put the JPA code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JpaTemplate will then guarantee the same environment as JEE inside the closure. As each closure runs as its own
    instance, there is one EntityManager per thread. The code will also participate in or create a transaction, and the transaction commit/rollback also works like in JEE.

    So this requires a little more code but the advantage is that there is no need for a special framework integration.
    The code can also be tested much easier. See TaskServiceImplTest in the example.

    Structure
    • features
    • model
    • persistence
    • ui
    Features

    Defines the karaf features to install the example as well as all necessary dependencies.

    Model

    This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of model see the tasklist-blueprint example. The model is exactly the same here.

    Persistence

    TaskServiceImpl

    @Component
    public class TaskServiceImpl implements TaskService {
        private JpaTemplate jpa;

        public Task getTask(Integer id) {
            return jpa.txExpr(em -> em.find(Task.class, id));
        }

        @Reference(target = "(osgi.unit.name=tasklist)")
        public void setJpa(JpaTemplate jpa) {
            this.jpa = jpa;
        }
    }

    We define that we need an OSGi service with interface JpaTemplate and a property "osgi.unit.name" with the value "tasklist".

    InitHelper

    @Component
    public class InitHelper {
        Logger LOG = LoggerFactory.getLogger(InitHelper.class);
        TaskService taskService;

        @Activate
        public void addDemoTasks() {
            try {
                Task task = new Task(1, "Just a sample task", "Some more info");
                taskService.addTask(task);
            } catch (Exception e) {
                LOG.warn(e.getMessage(), e);
            }
        }

        @Reference
        public void setTaskService(TaskService taskService) {
            this.taskService = taskService;
        }
    }

    The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of how business code that works with the TaskService can look.
    The @Reference annotation on setTaskService injects the TaskService into the component.
    @Activate makes sure that addDemoTasks() is called after injection of this component.

    Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special
    persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate
    to avoid having to install a JTA transaction manager for the test. The test code shows that indeed the TaskServiceImpl can
    be used as plain java code without any special tricks.

    UI

    The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService so it is available on http.

    TaskListServlet

    @Component(immediate = true, service = { Servlet.class },
               property = { "alias:String=/tasklist" })
    public class TaskListServlet extends HttpServlet {
        private TaskService taskService;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Actual code omitted
        }

        @Reference
        public void setTaskService(TaskService taskService) {
            this.taskService = taskService;
        }
    }

    The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

    The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist".
    So it is available on the url http://localhost:8181/tasklist.

    Build

    Make sure you use JDK 8 and run:

    mvn clean install

    Installation

    Make sure you use JDK 8.
    Download and extract Karaf 4.0.0.
    Start karaf and execute the commands below

    Create DataSource config and Install Example

    cat https://raw.githubusercontent.com/cschneider/Karaf-Tutorial/master/tasklist-blueprint-cdi/org.ops4j.datasource-tasklist.cfg | tac -f etc/org.ops4j.datasource-tasklist.cfg
    feature:repo-add mvn:net.lr.tasklist.ds/tasklist/1.0.0-SNAPSHOT/xml/features
    feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

    Validate Installation

    First we check that the JpaTemplate service is present for our persistence unit.

    service:list JpaTemplate

    [org.apache.aries.jpa.template.JpaTemplate]
    -------------------------------------------
     osgi.unit.name = tasklist
     transaction.type = JTA
     service.id = 164
     service.bundleid = 57
     service.scope = singleton
    Provided by :
     tasklist-model (57)
    Used by:
     tasklist-persistence (58)

    Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

    A likely problem would be that the DataSource is missing, so let's also check it:

    service:list DataSource

    [javax.sql.DataSource]
    ----------------------
     dataSourceName = tasklist
     felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
     osgi.jdbc.driver.name = H2-pool-xa
     osgi.jndi.service.name = tasklist
     service.factoryPid = org.ops4j.datasource
     service.pid = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
     url = jdbc:h2:mem:test
     service.id = 156
     service.bundleid = 113
     service.scope = singleton
    Provided by :
     OPS4J Pax JDBC Config (113)
    Used by:
     Apache Aries JPA container (62)

    This is how it should look. Pax-jdbc-config created the DataSource from the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the property "osgi.jdbc.driver.name=H2-pool-xa". So the resulting DataSource is pooled and fully ready for XA transactions.

    Next we check that the DS components started:

    scr:list

    ID | State  | Component Name
    --------------------------------------------------------------
     1 | ACTIVE | net.lr.tasklist.persistence.impl.InitHelper
     2 | ACTIVE | net.lr.tasklist.persistence.impl.TaskServiceImpl
     3 | ACTIVE | net.lr.tasklist.ui.TaskListServlet

    If any of the components is not active you can inspect it in detail like this:

    scr:details net.lr.tasklist.persistence.impl.TaskServiceImpl

    Component Details
      Name : net.lr.tasklist.persistence.impl.TaskServiceImpl
      State : ACTIVE
      Properties :
        component.name=net.lr.tasklist.persistence.impl.TaskServiceImpl
        component.id=2
        Jpa.target=(osgi.unit.name=tasklist)
    References
      Reference : Jpa
        State : satisfied
        Multiple : single
        Optional : mandatory
        Policy : static
        Service Reference : Bound Service ID 164

    Test

    Open the url below in your browser.
    http://localhost:8181/tasklist

    You should see a list with one task.

    To add another task, open:
    http://localhost:8181/tasklist?add&taskId=2&title=Another Task

     

    Categories: Christian Schneider

    Karaf Tutorial Part 4 - CXF Services in OSGi

    Christian Schneider - Tue, 04/05/2016 - 09:03

    Blog post edited by Christian Schneider

    Shows how to publish and use a simple REST and SOAP service in karaf using cxf and blueprint.

    To run the example you need to install the http feature of karaf. The default http port is 8080 and can be configured using the
    config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base url of the cxf servlet is by default "/cxf".
    It can be configured in the config pid "org.apache.cxf.osgi".

    Differences in Talend ESB


    If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

    PersonService Example

    The "business case" is to manage a list of persons. The service should provide the typical CRUD operations. Front ends should be a REST service, a SOAP service and a web UI.

    The example consists of four projects

    • model: Person class and PersonService interface
    • server: Service implementation and logic to publish the service using jax-ws (SOAP)
    • proxy: Accesses the SOAP service and publishes it as an OSGi service
    • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

    You can find the full source on github: https://github.com/cschneider/Karaf-Tutorial/tree/master/cxf/personservice/

    Installation and test run

    First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

    Installing Karaf and preparing for CXF

    We start with a fresh Karaf 4.0.4.

    Build and Test

    Checkout the project from github and build using maven

    mvn clean install

    Install service and ui in karaf

    feature:repo-add cxf 3.1.5
    feature:install http cxf-jaxws http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-proxy/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-webui/1.0-SNAPSHOT

    Test the service

    The person service should show up in the list of currently installed services that can be found here http://localhost:8181/cxf/

    Test the proxy and web UI

    http://localhost:8181/personui

    You should see the list of persons managed by the personservice and be able to add new persons.

    How it works

    Defining the model

    The model project is a simple java maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies on cxf. The service interface is just a plain java interface with the @WebService annotation.

    @WebService
    public interface PersonService {
        public abstract Person[] getAll();
        public abstract Person getPerson(String id);
        public abstract void updatePerson(String id, Person person);
        public abstract void addPerson(Person person);
    }

    The Person class is just a simple pojo with getters and setters for id, name and url and the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB what xml element to use for the Person class.
    There is also no special code for OSGi in this project. So the model works perfectly inside and outside of an OSGi container.


    The service is defined java first. SOAP and REST are used quite transparently. This is very suitable for communication between a client and server of the same application. If the service
    is to be used by other applications, the wsdl-first approach is more suitable. In this case the model project should be configured to generate the data classes and service interface from
    a wsdl (see the cxf wsdl_first example pom file). For REST services the java-first approach is quite common in general, as the client typically does not use proxy classes anyway.

    Service implementation (server)

    PersonServiceImpl is a java class that implements the service interface. The server project also contains a small starter class that allows the service to be published directly from eclipse. This class is not necessary for deployment in karaf.

    The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

    As the file is in the special location OSGI-INF/blueprint, it is automatically processed by Aries, the blueprint implementation in karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from spring, but apart from this the xml is very similar to a spring xml.
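    A sketch of what such a blueprint.xml can look like (the implementation class name and the addresses are assumptions based on the project layout, not copied from the example source):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws"
           xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs">

    <bean id="personServiceImpl"
          class="net.lr.tutorial.karaf.cxf.personservice.impl.PersonServiceImpl"/>

    <!-- SOAP endpoint, available under the cxf servlet, e.g. /cxf/personService -->
    <jaxws:endpoint implementor="#personServiceImpl" address="/personService"/>

    <!-- REST endpoint, e.g. /cxf/person -->
    <jaxrs:server address="/person">
        <jaxrs:serviceBeans>
            <ref component-id="personServiceImpl"/>
        </jaxrs:serviceBeans>
    </jaxrs:server>
</blueprint>
```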

    Service proxy

    The service proxy project only contains a blueprint xml that uses the CXF JAXWS client to consume the SOAP service and exports it as an OSGi service. Encapsulating the service client as an OSGi service (the proxy project) is not strictly necessary, but it has the advantage that the webui is then completely independent of cxf. This makes it very easy to change the way the service is accessed, and is considered a best practice in OSGi.

    See blueprint.xml

    Web UI (webui)

    This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The pax web whiteboard extender will then publish the servlet on the location /personui.
    The PersonServlet gets the PersonService injected and uses it to list all persons and also to add persons.

    The wiring is done using a blueprint context.

     

    PersonService REST

    The personservice REST example is very similar to the SOAP one, but it uses jaxrs to expose a REST service instead.

    The example can be found in github Karaf-Tutorial cxf personservice-rest. It contains these modules:

    • personservice-model: Interface PersonService and Person dto
    • personservice-server: Implements the service and publishes it using blueprint
    • personservice-webui: Simple servlet UI to show and add persons
    Build

    mvn clean install

    Install

    feature:repo-add cxf 3.1.5
    feature:install cxf-jaxrs http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-webui/1.0-SNAPSHOT

    How it works

    The interface of the service must contain jaxrs annotations to tell CXF how to map rest requests to the methods.

    @Produces(MediaType.APPLICATION_XML)
    public interface PersonService {
        @GET
        @Path("/")
        public Person[] getAll();

        @GET
        @Path("/{id}")
        public Person getPerson(@PathParam("id") String id);

        @PUT
        @Path("/{id}")
        public void updatePerson(@PathParam("id") String id, Person person);

        @POST
        @Path("/")
        public void addPerson(Person person);
    }

    In blueprint the implementation of the rest service needs to be published as a REST resource:

    <bean id="personServiceImpl" class="net.lr.tutorial.karaf.cxf.personrest.impl.PersonServiceImpl"/>

    <jaxrs:server address="/person" id="personService">
        <jaxrs:serviceBeans>
            <ref component-id="personServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
    </jaxrs:server>

     

    Test the service

    The person service should show up in the list of currently installed services that can be found here
    http://localhost:8181/cxf/

    List the known persons

    http://localhost:8181/cxf/person
    This should show one person, "chris".

    Add a person

    Now using a firefox extension like Poster or Httprequester you can add a person.

    Send the following xml snippet:

    <?xml version="1.0" encoding="UTF-8"?>
    <person>
        <id>1001</id>
        <name>Christian Schneider</name>
        <url>http://www.liquid-reality.de</url>
    </person>

    with Content-Type: text/xml using PUT to http://localhost:8181/cxf/person/1001,
    or using POST to http://localhost:8181/cxf/person.

    Now the list of persons should show two persons.

    Now using a firefox extension like Poster or Httprequester you can add a person.
    Send the content of server/src/test/resources/person1.xml to the following url using PUT:
    http://localhost:8181/cxf/person/1001

    Or to this url using POST:
    http://localhost:8181/cxf/person

    Now the list of persons should show two persons

    Test the web UI

    http://localhost:8181/personuirest

    You should see the list of persons managed by the personservice and be able to add new persons.

    Some further remarks

    The example uses blueprint instead of spring dm as it works much better in an OSGi environment. The bundles are created using the maven bundle plugin. A fact that shows how well blueprint works
    is that the maven bundle plugin is just used with default settings. In spring dm the imports have to be configured as spring needs access to many implementation classes of cxf. For spring dm examples
    take a look at the Talend Service Factory examples (https://github.com/Talend/tsf/tree/master/examples).

    The example shows that writing OSGi applications is quite simple with aries and blueprint. It needs only 153 lines of java code (without comments) for a complete little application.
    The blueprint xml is also quite small and readable.

    Back to Karaf Tutorials

    View Online
    Categories: Christian Schneider

    Karaf Tutorial Part 4 - CXF Services in OSGi

    Christian Schneider - Thu, 03/31/2016 - 14:41

    Blog post edited by Christian Schneider

    Shows how to publish and use a simple REST and SOAP service in karaf using cxf and blueprint.

    To run the example you need to install the http feature of karaf. The default http port is 8080 and can be configured using the
    config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base url of the cxf servlet is by default "/cxf".
    It can be configured in the config pid "org.apache.cxf.osgi".

    Differences in Talend ESB


    If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

    PersonService Example

    The "business case" is to manage a list of persons. The service should provide the typical CRUD operations. Front ends are a REST service, a SOAP service and a web UI.

    The example consists of four projects

    • model: Person class and PersonService interface
    • server: Service implementation and logic to publish the service using jax-ws (SOAP)
    • proxy: Accesses the SOAP service and publishes it as an OSGi service
    • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

    You can find the full source on github: https://github.com/cschneider/Karaf-Tutorial/tree/master/cxf/personservice/

    Installation and test run

    First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

    Installing Karaf and preparing for CXF

    We start with a fresh Karaf 4.0.4

    Build and Test

    Checkout the project from github and build using maven

    mvn clean install

    Install service and ui in karaf:

    feature:repo-add cxf 3.1.5
    feature:install http cxf-jaxws http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-proxy/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-webui/1.0-SNAPSHOT

    Test the service

    The person service should show up in the list of currently installed services that can be found here http://localhost:8181/cxf/

    Test the proxy and web UI

    http://localhost:8181/personui

    You should see the list of persons managed by the personservice and be able to add new persons.

    How it works

    Defining the model

    The model project is a simple java maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies to cxf. The service interface is just a plain java interface with the @WebService annotation.

    @WebService
    public interface PersonService {
        Person[] getAll();
        Person getPerson(String id);
        void updatePerson(String id, Person person);
        void addPerson(Person person);
    }

    The Person class is just a simple pojo with getters and setters for id, name and url and the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB what xml element to use for the Person class.
    There is also no special code for OSGi in this project. So the model works perfectly inside and outside of an OSGi container.


    The service is defined java first. SOAP and rest are used quite transparently. This is very suitable for communication between a client and server of the same application. If the service
    is to be used by other applications the wsdl first approach is more suitable. In this case the model project should be configured to generate the data classes and service interface from
    a wsdl (see the cxf wsdl_first example pom file). For rest services the java first approach is quite common in general as the client typically does not use proxy classes anyway.

    Service implementation (server)

    PersonServiceImpl is a java class that implements the service interface. The server project also contains a small starter class that allows the service to be published directly from eclipse. This class is not necessary for deployment in karaf.

    The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

    As the file is in the special location OSGI-INF/blueprint it is automatically processed by Aries, the blueprint implementation in karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from spring, but apart from this the xml is very similar to a spring xml.
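    The blueprint file itself is not reproduced here; based on the description, a sketch of the SOAP part could look like the following (the implementation class name and the endpoint address are assumptions, see the server project on github for the real file):

    ```xml
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

      <bean id="personServiceImpl"
            class="net.lr.tutorial.karaf.cxf.personservice.impl.PersonServiceImpl"/>

      <!-- publishes the SOAP service below the cxf servlet, e.g. /cxf/personService -->
      <jaxws:endpoint implementor="#personServiceImpl" address="/personService"/>

    </blueprint>
    ```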

    Service proxy

    The service proxy project only contains a blueprint xml that uses the CXF JAX-WS client to consume the SOAP service and exports it as an OSGi service. Encapsulating the service client as an OSGi service is not strictly necessary, but it makes the webui completely independent of cxf, so it is very easy to change the way the service is accessed. This is considered a best practice in OSGi.

    See blueprint.xml
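    A sketch of what such a proxy blueprint could look like (the serviceClass and the address are assumptions based on the project layout; the real file is in the github repo):

    ```xml
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

      <!-- CXF client proxy for the remote SOAP service -->
      <jaxws:client id="personServiceClient"
                    serviceClass="net.lr.tutorial.karaf.cxf.personservice.PersonService"
                    address="http://localhost:8181/cxf/personService"/>

      <!-- export the proxy as an OSGi service so the webui stays independent of cxf -->
      <service ref="personServiceClient"
               interface="net.lr.tutorial.karaf.cxf.personservice.PersonService"/>

    </blueprint>
    ```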

    Web UI (webui)

    This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The pax web whiteboard extender will then publish the servlet on the location /personui.
    The PersonServlet gets the PersonService injected and uses it to list all persons and to add new persons.

    The wiring is done using a blueprint context.

     

    PersonService REST

    The personservice REST example is very similar to the SOAP one but it uses jaxrs to expose a REST service instead.

    The example can be found in github Karaf-Tutorial cxf personservice-rest. It contains these modules:

    • personservice-model: Interface PersonService and Person dto
    • personservice-server: Implements the service and publishes it using blueprint
    • personservice-webui: Simple servlet UI to show and add persons
    Build:

    mvn clean install

    Install:

    feature:repo-add cxf 3.1.5
    feature:install cxf-jaxrs http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-webui/1.0-SNAPSHOT

    How it works

    The interface of the service must contain jaxrs annotations to tell CXF how to map rest requests to the methods.

    @Produces(MediaType.APPLICATION_XML)
    public interface PersonService {
        @GET
        @Path("/")
        Person[] getAll();

        @GET
        @Path("/{id}")
        Person getPerson(@PathParam("id") String id);

        @PUT
        @Path("/{id}")
        void updatePerson(@PathParam("id") String id, Person person);

        @POST
        @Path("/")
        void addPerson(Person person);
    }

    In blueprint the implementation of the rest service needs to be published as a REST resource:

    <bean id="personServiceImpl" class="net.lr.tutorial.karaf.cxf.personrest.impl.PersonServiceImpl"/>

    <jaxrs:server address="/person" id="personService">
      <jaxrs:serviceBeans>
        <ref component-id="personServiceImpl"/>
      </jaxrs:serviceBeans>
      <jaxrs:features>
        <cxf:logging/>
      </jaxrs:features>
    </jaxrs:server>

     

    Test the service

    The person service should show up in the list of currently installed services that can be found here
    http://localhost:8181/cxf/

    List the known persons

    http://localhost:8181/cxf/person
    This should show one person "chris"

    Add a person

    Using a REST client, for example a firefox extension like Poster or HttpRequester, you can add a person.

    Send the following xml snippet (also available as server/src/test/resources/person1.xml):

    <?xml version="1.0" encoding="UTF-8"?>
    <person>
      <id>1001</id>
      <name>Christian Schneider</name>
      <url>http://www.liquid-reality.de</url>
    </person>

    with Content-Type: text/xml, either using PUT to http://localhost:8181/cxf/person/1001
    or using POST to http://localhost:8181/cxf/person

    Now the list of persons should show two persons.

    Test the web UI

    http://localhost:8181/personuirest

    You should see the list of persons managed by the personservice and be able to add new persons.

    Some further remarks

    The example uses blueprint instead of spring dm as it works much better in an OSGi environment. The bundles are created using the maven bundle plugin. One fact that shows how well blueprint works
    is that the maven bundle plugin is used with just its default settings. In spring dm the imports have to be configured explicitly as spring needs access to many implementation classes of cxf. For spring dm examples
    take a look at the Talend Service Factory examples (https://github.com/Talend/tsf/tree/master/examples).

    The example shows that writing OSGi applications is quite simple with aries and blueprint. It needs only 153 lines of java code (without comments) for a complete little application.
    The blueprint xml is also quite small and readable.

    Back to Karaf Tutorials

    Categories: Christian Schneider

    Karaf Tutorial Part 5 - Running Apache Camel integrations in OSGi

    Christian Schneider - Mon, 03/28/2016 - 10:33

    Blog post edited by Christian Schneider

    Shows how to run your camel routes in the OSGi server Apache Karaf. As for CXF, blueprint is used to boot up camel. The tutorial shows three examples - a simple blueprint route, a jms2rest adapter and an order processing example.

    Installing Karaf and making Camel features available
    • Download Karaf 4.0.4 and unpack to the file system
    • Start bin\karaf.bat or bin/karaf for unix

    In Karaf type:

    feature:repo-add camel 2.16.2
    feature:list

    You should see the camel features that are now ready to be installed.

    Getting and building the examples

    You can find the examples for this tutorial on github Karaf Tutorial - camel.

    So either clone the git repo or just download and unpack the zip of it. To build the code do:

    cd camel
    mvn clean install

    Starting simple with a pure blueprint deployment

    Our first example does not even require a java project. In Karaf it is possible to deploy pure blueprint xml files. As camel is well integrated with blueprint you can define a complete camel context with routes in a simple blueprint file.

    simple-camel-blueprint.xml

    The blueprint xml for a camel context is very similar to the spring equivalent. Mainly the namespaces are different. Blueprint discovers the dependency on camel, so it requires that at least the camel-blueprint feature is installed. The camel components used in routes are discovered as OSGi services, so as soon as a camel component is installed using the respective feature it is automatically available for use in routes.

    So to install the above blueprint based camel integration you only have to do the following steps:

    feature:install camel-blueprint camel-stream

    Copy simple-camel-blueprint.xml to the deploy folder of karaf. You should now see "Hello Camel" written to the console every 5 seconds.

    The blueprint file will be automatically monitored for changes so any changes we do are directly reflected in Karaf. To try this open the simple-camel-blueprint.xml file from the deploy folder in an editor, change "stream:out" to "log:test" and save. Now the messages on the console should stop and instead you should be able to see "Hello Camel" in the Karaf log file formatted as a normal log line.
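    The simple-camel-blueprint.xml described above is in the tutorial's github repo; a minimal blueprint of this kind might look roughly like the following sketch:

    ```xml
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
      <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route>
          <!-- fire every 5 seconds -->
          <from uri="timer:tick?period=5000"/>
          <setBody>
            <constant>Hello Camel</constant>
          </setBody>
          <!-- write the body to the console; change to log:test to log instead -->
          <to uri="stream:out"/>
        </route>
      </camelContext>
    </blueprint>
    ```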

    JMS to REST Adapter (jms2rest)


    This example is not completely standalone. As a prerequisite install the person service example like described in Karaf Tutorial 4.

    The example shows how to create a bridge from the messaging world to a REST service. It is simple enough that it could be done in a pure blueprint file like the example above. As any bigger integration needs some java code I opted to use a java project for that case.

    Like most times we mainly use the maven bundle plugin with defaults and the packaging type bundle to make the project OSGi ready. The camel context is booted up using a blueprint file blueprint.xml and the routes are defined in the java class Jms2RestRoute.

    Routes

    The first route watches the directory "in" and writes the content of any file placed there to the jms queue "person". It is not strictly necessary but makes it much simpler to test the example by hand.

    The second route is the real jms2rest adapter. It listens on the jms queue "person" and expects xml person content like that used by the PersonService. The route extracts the id of the person from the xml and stores it in a camel message header. This header is then used to build the rest uri. As a last step the content of the message is sent to the rest uri with a PUT request, which tells the service to store the person with the given id and data.
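    The extraction step described above can be sketched in plain Java using the JDK's built-in XPath support (the XPath expression matches the person xml used throughout the tutorial; the service uri value is an assumption, in the real route it comes from the Properties component):

    ```java
    import java.io.StringReader;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathExpressionException;
    import javax.xml.xpath.XPathFactory;
    import org.xml.sax.InputSource;

    public class PersonIdExtractor {

        // Extract /person/id from the message body, as the route does before
        // storing it in a header, and append it to the service base uri.
        static String buildRestUri(String personXml, String personServiceUri) {
            try {
                XPath xpath = XPathFactory.newInstance().newXPath();
                String id = xpath.evaluate("/person/id",
                        new InputSource(new StringReader(personXml)));
                return personServiceUri + "/" + id;
            } catch (XPathExpressionException e) {
                throw new RuntimeException(e);
            }
        }

        public static void main(String[] args) {
            String xml = "<person><id>1001</id><name>Christian Schneider</name></person>";
            System.out.println(buildRestUri(xml, "http://localhost:8181/cxf/person"));
        }
    }
    ```
    
    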

    Use of Properties

    Besides the pure route the example shows some more typical things you need in camel. It is good practice to externalize the urls of services we access. Camel uses the Properties component for this task.

    This enables us to write {{personServiceUri}} in endpoints or ${properties:personServiceUri} in the simple language.

    In a blueprint context the Properties component is automatically aware of injected properties from the config admin service. We use a cm:property-placeholder definition to inject the attributes of the config admin pid "net.lr.tutorial.karaf.cxf.personservice". As there might be no such pid we also define a default value for the personServiceUri so the integration can be deployed without further configuration.
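    Such a cm:property-placeholder definition might look like the following sketch (the default uri value is an assumption for illustration):

    ```xml
    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">

      <cm:property-placeholder persistent-id="net.lr.tutorial.karaf.cxf.personservice"
                               update-strategy="reload">
        <cm:default-properties>
          <!-- default so the bundle deploys without any config admin pid present -->
          <cm:property name="personServiceUri"
                       value="http://localhost:8181/cxf/person"/>
        </cm:default-properties>
      </cm:property-placeholder>

    </blueprint>
    ```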

    JMS Component

    We are using the camel jms component in our routes. This is one of the few components that need further configuration to work. We also do this in the blueprint context by defining a JmsComponent and injecting a connection factory into it. In OSGi it is good practice not to define connection factories or data sources directly in the bundle; instead we simply refer to them using an OSGi service reference.
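    In blueprint this wiring could be sketched as follows (bean ids are assumptions; the broker created below publishes the referenced javax.jms.ConnectionFactory service):

    ```xml
    <!-- look up the broker's connection factory as an OSGi service -->
    <reference id="connectionFactory" interface="javax.jms.ConnectionFactory"/>

    <!-- configure the camel jms component with the injected connection factory -->
    <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
      <property name="connectionFactory" ref="connectionFactory"/>
    </bean>
    ```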

    Deploying and testing the jms2rest Adapter

    Just type the following in Karaf:

    feature:repo-add activemq 5.12.2
    feature:repo-add camel 2.16.2
    feature:install camel-blueprint camel-jms camel-http camel-saxon activemq-broker jms
    jms:create -t activemq localhost
    install -s mvn:net.lr.tutorial.karaf.camel/example-jms2rest/1.0-SNAPSHOT

    This installs the activemq and camel feature repositories and features in karaf. The jms:create command creates a broker definition in the deploy folder. This broker is then automatically started. The broker definition also publishes an OSGi service for a suitable connection factory, which is referenced later by our bundle.

    As a last step we install our own bundle with the camel route.

    Now the route should be visible when typing:

    > camel:route-list
     Route Id          Context Name   Status
    [file2jms       ] [jms2rest    ] [Started ]
    [personJms2Rest ] [jms2rest    ] [Started ]

    Now copy the file src/test/resources/person1.xml to the folder "in" below the karaf directory. The file should be sent to the queue person by the first route and then sent to the rest service by the second route.

    In case the personservice is installed you should now see a message like "Update request received for ...". In case it is not installed you should see a 404 in the karaf log when accessing the rest service.

    Order processing example

    The business case in this example is a shop that partly works with external vendors.

    We receive an order as an xml file (See: order1.xml). The order contains a customer element and several item elements. Each item specifies a vendor. This can be either "direct", when we deliver the item ourselves, or an external vendor name. If the item vendor is "direct" then the item should be exported to a file in a directory with the customer name. All other items are sent out by mail. The mail content should be customizable. The mail address has to be fetched from a service that maps vendor names to mail addresses.

    How it works

    This example again uses maven to build, a blueprint.xml context to boot up camel and a java class OrderRouteBuilder for the camel routes. So from an OSGi perspective it works almost the same as the jms2rest example.

    The routes are defined in net.lr.tutorial.karaf.camel.order.OrderRouteBuilder. The "order" route listens on the directory "orderin" and expects xml order files to be placed there. The route uses xpath to extract several attributes of the order into message headers. A splitter is used to handle each item (/order/item) separately. Then a content based router is used to handle "direct" items differently from others.

    In the case of a direct item the recipientlist pattern is used to build the destination folder dynamically using a simple language expression.

    recipientList(simple("file:ordersout/${header.customer}"))

    If the vendor is not "direct" then the route "mailtovendor" is called to create and send a mail to the vendor. The subject and to address are set using special header names that the mail component understands. The content of the mail is expected in the message body. As the body should also be configurable, the velocity component is used to fill the mailtemplate.txt with values from the headers that were extracted before.
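    The routing decision described above can be illustrated with a small plain-Java sketch (the endpoint names are illustrative only; in the real route the file endpoint is produced by the recipientList expression shown earlier):

    ```java
    public class OrderRouting {

        // "direct" items go to a per-customer folder, mirroring the
        // recipientList expression file:ordersout/${header.customer};
        // everything else is handled by the mailtovendor route.
        static String targetEndpoint(String vendor, String customer) {
            if ("direct".equals(vendor)) {
                return "file:ordersout/" + customer;
            }
            return "direct:mailtovendor";
        }

        public static void main(String[] args) {
            System.out.println(targetEndpoint("direct", "Christian Schneider"));
            System.out.println(targetEndpoint("somevendor", "Christian Schneider"));
        }
    }
    ```
    
    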

    Deploy into karaf

    The deployment is also very similar to the previous example but a little simpler as we do not need jms. Type the following in karaf

    feature:repo-add camel 2.16.2
    feature:install camel-blueprint camel-mail camel-velocity camel-stream
    install -s mvn:net.lr.tutorial.karaf.camel/example-order/1.0-SNAPSHOT

    To be able to receive the mail you have to edit the configuration pid. You can either do this by placing a properties file
    into etc/net.lr.tutorial.karaf.cxf.personservice.cfg or editing the config pid using the karaf webconsole. (See part 2 and part 3 of the Karaf Tutorial series).

    Basically you have to set these two properties according to your own mail environment.

    mailserver=yourmailserver.com
    testVendorEmail=youmail@yourdomain.com

    Test the order example

    Copy the file order1.xml into the folder "ordersin" below the karaf dir.

    The Karaf console will show:

    Order from Christian Schneider Count: 1, Article: Flatscreen TV

    The same should be in a mail in your inbox. At the same time a file should be created in ordersout/Christian Schneider/order1.xml that contains the book item.

    Wrapping it up and outlook

    The examples show that fairly sophisticated integrations can be done using camel and be nicely deployed in an Apache Karaf container. The examples also show some best practices around configuration management, jms connection factories and templates for customization. They should also provide a good starting point for your own integration projects. Many people are a bit hesitant about using OSGi in production. I hope these simple examples can show how easy this is in practice. Still, problems can arise of course. For that case it is advisable to think about getting a support contract from a vendor like Talend. The whole Talend Integration portfolio is based on Apache Karaf so we are quite experienced in this area.

    I have left out one big use case for Apache Camel in this tutorial - Database integrations. This is a big area and warrants a separate tutorial that will soon follow. There I will also explain how to handle DataSources and Connection Factories with drivers that are not already OSGi compliant.

    Back to Karaf Tutorials

    Categories: Christian Schneider

    New Kid On The Block: Fediz OpenId Connect

    Sergey Beryozkin - Wed, 03/16/2016 - 19:02
    Apache Fediz, an Identity Provider for the Web, was created by Oliver Wulff and, during the last few years with major support from Colm and Jan, has become quite a popular provider for supporting SSO with the help of the WS-Federation Profile.

    Before I continue, I'd like to clarify that even though WS-Federation is obviously related to SOAP, the important thing is that as far as the user experience is concerned, it is pure SSO. For example, AFAIK, a Microsoft Office Outlook login process is currently WS-Fed aware.

    But OpenId Connect (OIDC) is a new SSO star for the WEB, with all of the software industry players with SSO-related interests supporting it, as far as I can see it.

    OIDC really shines. I was saying something similar before in the context of the JOSE work: it has really been designed by some of the best security and web experts in the industry. And OIDC is still a very bleeding edge development as far as mainstream adoption by the software industry is concerned. Google, Microsoft, and other top companies have created OIDC servers, but what if you want your own OIDC server?

    Fediz OpenId Connect (Fediz OIDC) is the new project that Colm, Jan and myself started working on back in November 2015. It joins a family of OIDC-focused projects that are appearing probably every month in various developer communities.

    As you can imagine we are at the start of a rather long road. OIDC is great but is undoubtedly complex to implement right. We've made good progress so far and most of OIDC Core is supported OOB, something that you can try right now.

    Apache CXF OAuth2 and OIDC authorization modules are linked to a flexible Fediz IDP (Authentication System) with the minimum amount of code. We will be working on making it all more feature complete, robust, configurable, customizable, production ready.

    We are planning to talk about Fediz OIDC a lot more going forward.

    Stay tuned !

    Categories: Sergey Beryozkin

    Using the CXF failover feature to authenticate to multiple Apache Syncope instances

    Colm O hEigeartaigh - Wed, 03/02/2016 - 21:35
    A couple of years ago, I described a testcase that showed how an Apache CXF web service endpoint could send a username/password received via WS-Security to Apache Syncope for authentication. In this article, I'll extend that testcase to make use of the CXF failover feature. The failover feature allows the client to use a set of alternative addresses when the primary endpoint address is unavailable. For the purposes of the demo, instead of a single Apache Syncope instance, we will set up two instances which share the same internal storage. When the first/primary instance goes down, the failover feature will automatically switch to use the second instance.

    1) Set up the Apache Syncope instances

    1.a) Set up a database for Internal Storage

    Apache Syncope persists internal storage to a database via Apache OpenJPA. For the purposes of this demo, we will set up MySQL. Install MySQL in $SQL_HOME and create a new user for Apache Syncope. We will create a new user "syncope_user" with password "syncope_pass". Start MySQL and create a new Syncope database:
    • Start: sudo $SQL_HOME/bin/mysqld_safe --user=mysql
    • Log on: $SQL_HOME/bin/mysql -u syncope_user -p
    • Create a Syncope database: create database syncope; 
    1.b) Set up containers to host Apache Syncope

    We will deploy Syncope to Apache Tomcat. Download Tomcat and extract it twice (calling the copies first-instance and second-instance). In both instances, edit 'conf/context.xml' and uncomment the "<Manager pathname="" />" configuration. Also in 'conf/context.xml', add a datasource for internal storage:

    <Resource name="jdbc/syncopeDataSource" auth="Container"
        type="javax.sql.DataSource"
        factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
        testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
        validationQuery="SELECT 1" validationInterval="30000"
        maxActive="50" minIdle="2" maxWait="10000" initialSize="2"
        removeAbandonedTimeout="20000" removeAbandoned="true"
        logAbandoned="true" suspectTimeout="20000"
        timeBetweenEvictionRunsMillis="5000" minEvictableIdleTimeMillis="5000"
        jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;
        org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer"
        username="syncope_user" password="syncope_pass"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://localhost:3306/syncope?characterEncoding=UTF-8"/>

    The next step is to enable a way to deploy applications to Tomcat using the Manager app. Edit 'conf/tomcat-users.xml' in both instances and add the following:

    <role rolename="manager-script"/>
    <user username="manager" password="s3cret" roles="manager-script"/>

    Next, download the JDBC driver jar for MySQL and put it in Tomcat's 'lib' directory in both instances. Edit 'conf/server.xml' of the second instance, and change the port to "9080", and change the other ports to avoid conflict with the first Tomcat instance. Now start both Tomcat instances.

    1.c) Install Syncope to the containers

    Download and run the Apache Syncope installer. Install it to Tomcat using MySQL as the database. For more info on this, see a previous tutorial. Run this twice to install Syncope in both Apache Tomcat instances.

    1.d) Configure the container to share the same database

    Next we need to configure both containers to share the same database. Edit 'webapps/syncope/WEB-INF/classes/persistenceContextEMFactory.xml' in the first instance, and change the 'openjpa.RemoteCommitProvider' to:
    • <entry key="openjpa.RemoteCommitProvider" value="tcp(Port=12345, Addresses=127.0.0.1:12345;127.0.0.1:12346)"/>
    Similarly, change the value in the second instance to:
    • <entry key="openjpa.RemoteCommitProvider" value="tcp(Port=12346, Addresses=127.0.0.1:12345;127.0.0.1:12346)"/>
    This is necessary to ensure data consistency across the two Syncope instances. Please see the Syncope cluster page for more information.

    1.e) Add users

    In the first Tomcat instance running on port 8080, go to http://localhost:8080/syncope-console, and add two new roles "employee" and "boss". Add two new users, "alice" and "bob" both with password "security". "alice" has both roles, but "bob" is only an "employee". Now logout, and login to the second instance running on port 9080. Check that the newly created users are available.

    2) The CXF testcase

    The CXF testcase is available in github:
    • cxf-syncope-failover: This project contains a number of tests that show how an Apache CXF service endpoint can use the CXF Failover feature, to authenticate to different Apache Syncope instances.
    A CXF client sends a SOAP UsernameToken to a CXF Endpoint. The CXF Endpoint has been configured (see cxf-service.xml) to validate the UsernameToken via the SyncopeUTValidator, which dispatches the username/passwords to Syncope for authentication via Syncope's REST API.

    The SyncopeUTValidator is configured to use the CXF failover feature with the address of the primary Syncope instance ("first-instance" above running on 8080). It is also configured with a list of alternative addresses to try if the first instance is down (in this case the "second-instance" running on 9080).
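    The validator authenticates the received username/password against Syncope's REST API using HTTP Basic authentication. This small sketch only shows how such an Authorization header value is built; it is purely illustrative, the real validator uses a CXF client to make the call:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class BasicAuth {

        // Build the HTTP Basic Authorization header value for the
        // username/password received in the WS-Security UsernameToken.
        static String basicAuthHeader(String user, String password) {
            String credentials = user + ":" + password;
            return "Basic " + Base64.getEncoder()
                    .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) {
            System.out.println(basicAuthHeader("alice", "security"));
        }
    }
    ```
    
    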

    The test makes two invocations. The first should successfully authenticate to the first Syncope instance. The test then prompts you to kill the first Syncope instance and sleeps for 10 seconds. The second invocation should successfully fail over to the second Syncope instance.
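    Conceptually, the failover feature tries the primary address first and moves on to the next alternative on failure. The real feature is configured declaratively on the CXF client; this standalone loop (with a simulated outage of the first instance) just illustrates the idea:

    ```java
    import java.util.Arrays;
    import java.util.List;
    import java.util.function.Function;

    public class FailoverSketch {

        // Try each address in order until one invocation succeeds;
        // rethrow the last failure if all addresses are down.
        static <T> T invokeWithFailover(List<String> addresses, Function<String, T> invoke) {
            RuntimeException last = null;
            for (String address : addresses) {
                try {
                    return invoke.apply(address);
                } catch (RuntimeException e) {
                    last = e; // endpoint unavailable, try the next address
                }
            }
            throw last;
        }

        public static void main(String[] args) {
            List<String> addresses = Arrays.asList(
                "http://localhost:8080/syncope/rest",   // first instance (down)
                "http://localhost:9080/syncope/rest");  // second instance
            String used = invokeWithFailover(addresses, addr -> {
                if (addr.contains("8080")) {
                    throw new RuntimeException("connection refused"); // simulate outage
                }
                return addr; // authentication succeeded against this instance
            });
            System.out.println("Authenticated via " + used);
        }
    }
    ```
    
    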
    Categories: Colm O hEigeartaigh

    Support for OpenId Connect protocol bridging in Apache CXF Fediz 1.3.0

    Colm O hEigeartaigh - Fri, 02/26/2016 - 15:49
    Apache CXF Fediz 1.3.0 will be released in the near future. One of the new features of Fediz 1.2.0 (released last year) was the ability to act as an identity broker with a SAML SSO IdP. In the 1.3.0 release, Apache CXF Fediz will have the ability to act as an identity broker with an OpenId Connect IdP. In other words, the Fediz IdP can act as a protocol bridge between the WS-Federation and OpenId Connect protocols. In this article, we will look at an interop test case with Keycloak.

    1) Install and configure Keycloak

    Download and install the latest Keycloak distribution (tested with 1.8.0). Start keycloak in standalone mode by running 'sh bin/standalone.sh'.

    1.1) Create users in Keycloak

    First we need to create an admin user by navigating to the following URL, and entering a password:
    • http://localhost:8080/auth/
    Click on the "Administration Console" link, logging on using the admin user credentials. You will see the configuration details of the "Master" realm. For the purposes of this demo, we will create a new realm. Hover the mouse pointer over "Master" in the top left-hand corner, and click on "Add realm". Create a new realm called "realmb". Now we will create a new user in this realm. Click on "Users" and select "Add User", specifying "alice" as the username. Click "Save", then go to the "Credentials" tab for "alice", specify a password, unselect the "Temporary" checkbox, and reset the password.

    1.2) Create a new client application in Keycloak

    Now we will create a new client application for the Fediz IdP in Keycloak. Select "Clients" in the left-hand menu, and click on "Create". Specify the following values:
    • Client ID: realma-client
    • Client protocol: openid-connect
    • Root URL: https://localhost:8443/fediz-idp/federation
    Once the client is created you will see more configuration options:
    • Select "Access Type" to be "confidential".
    Now go to the "Credentials" tab of the newly created client and copy the "Secret" value. This will be required in the Fediz IdP to authenticate to the token endpoint in Keycloak.

    1.3) Export the Keycloak signing certificate

    Finally, we need to export the Keycloak signing certificate so that the Fediz IdP can validate the signed JWT Token from Keycloak. Select "Realm Settings" (for "realmb") and click on the "Keys" tab. Copy and save the value specified in the "Certificate" textfield.

    1.4) Testing the Keycloak configuration

    It's possible to see the Keycloak OpenId Connect configuration by navigating to:
    • http://localhost:8080/auth/realms/realmb/.well-known/openid-configuration
    This tells us what the authorization and token endpoints are, both of which we will need to configure the Fediz IdP. To test that everything is working correctly, open a web browser and navigate to:
    • http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth?response_type=code&client_id=realma-client&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=openid
    Login using the credentials you have created for "alice". Keycloak will then attempt to redirect to the given "redirect_uri", and so the browser will show a connection error message. However, copy the URL and extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
    • curl -u realma-client:<secret> --data "client_id=realma-client&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" http://localhost:8080/auth/realms/realmb/protocol/openid-connect/token
    You should see a successful response containing (amongst other things) the OAuth 2.0 Access Token and the OpenId Connect IdToken, containing the user identity.
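    Once you have the JSON response, it can be handy to peek at the claims inside the IdToken. As a rough sketch (the token below is a hypothetical stand-in, not a real Keycloak token), the claims can be inspected by base64url-decoding the middle segment of the JWT:

```python
import base64
import json

def decode_jwt_payload(token):
    """Return the claims of a JWT as a dict, WITHOUT verifying the signature.

    Only for local inspection of a token; signature validation still has
    to be done against the Keycloak signing certificate.
    """
    payload_b64 = token.split(".")[1]
    # base64url encoding strips the trailing '=' padding; restore it
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical token for illustration -- a real IdToken comes from the
# "id_token" field of the token endpoint response.
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(b'{"sub":"alice","iss":"realmb"}').rstrip(b"=").decode()
token = header + "." + claims + ".signature"

print(decode_jwt_payload(token))  # {'sub': 'alice', 'iss': 'realmb'}
```

    Note that this only decodes the claims; the Fediz IdP still validates the JWS signature using the Keycloak signing certificate configured below.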

    2) Install and configure the Apache CXF Fediz IdP and sample Webapp

    Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use Fediz 1.3.0 here (or the latest SNAPSHOT version) for OpenId Connect support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    2.1) Configure the Fediz IdP to communicate with Keycloak

    Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the OpenId Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
    • Change the port in "idpUrl" to "8443". 
    In the 'trusted-idp-realmB' bean:
    • Change the "url" value to "http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth".
    • Change the "protocol" value to "openid-connect-1.0".
    • Change the "certificate" value to "keycloak.cert". 
    • Add the following parameters Map, filling in a value for the client secret extracted above:
    <property name="parameters">
        <util:map>
            <entry key="client.id" value="realma-client"/>
            <entry key="client.secret" value="<secret>"/>
            <entry key="token.endpoint" value="http://localhost:8080/auth/realms/realmb/protocol/openid-connect/token"/>
        </util:map>
    </property>
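    Taken together, the edits above mean the 'trusted-idp-realmB' bean ends up with property values along these lines (a sketch of just the changed properties, not the complete bean definition):

```xml
<!-- inside the 'trusted-idp-realmB' bean in entities-realma.xml -->
<property name="url"
          value="http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth"/>
<property name="protocol" value="openid-connect-1.0"/>
<property name="certificate" value="keycloak.cert"/>
```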
       
    2.2) Configure Fediz to use the Keycloak signing certificate

    Copy 'webapps/fediz-idp/WEB-INF/classes/realmb.cert' to a new file called 'webapps/fediz-idp/WEB-INF/classes/keycloak.cert'. Edit this file, delete the content between the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" markers, and paste in the Keycloak signing certificate retrieved in step "1.3" above.

    Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

    3) Testing the service

    To test the service navigate to:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    Select "realm B". You should be redirected to the Keycloak authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it converts the received JWT token to a token in the realm of Fediz (realm A) and redirects to the web application.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.2.2 released

    Colm O hEigeartaigh - Wed, 02/17/2016 - 13:12
    Apache CXF Fediz 1.2.2 has been released. The issues fixed can be seen here. Highlights include:
    • The core Apache CXF dependency is updated to the recent 3.0.8 release.
    • A new HomeRealm Discovery Service based on Spring EL is available in the IdP.
    • Support for configurable token expiration validation in the plugins has been added.
    • Various fixes for the websphere container plugin have been added.
    A new feature in 1.2.2 is the ability to specify a constraint in the IdP on the acceptable 'wreply' value for a given service. When the IdP successfully authenticates the end user, it will issue the WS-Federation response to the value specified in the initial request in the 'wreply' parameter. However, this could be exploited by a malicious third party to redirect the end user to a custom address, where the issued token could be retrieved. In 1.2.2, there is a new property associated with the Application in the IdP called 'passiveRequestorEndpointConstraint'. This is a regular expression on the acceptable value for the 'wreply' endpoint associated with this Application. If this property is not specified, a warning is logged in the IdP. For example:
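    A hedged sketch of what this property might look like in the Application configuration (the regular expression below is purely illustrative):

```xml
<!-- illustrative: only allow 'wreply' values pointing at the fedizhelloworld application -->
<property name="passiveRequestorEndpointConstraint"
          value="https://localhost:(\d)*/fedizhelloworld/.*"/>
```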

    Categories: Colm O hEigeartaigh

    Javascript Object Signing and Encryption (JOSE) support in Apache CXF - part IV

    Colm O hEigeartaigh - Mon, 02/15/2016 - 17:36
    This is the fourth and final article in a series of posts on support for Javascript Object Signing and Encryption (JOSE) in Apache CXF. The first article covered how to sign content using JWS, while the second article showed how to encrypt content using JWE. The third article described how to construct JWT Tokens, how to sign and encrypt them, and how they can be used for authentication and authorization in Apache CXF. In this post, we will show how the CXF Security Token Service (STS) can be leveraged to issue and validate JWT Tokens.

    1) The CXF STS

    Apache CXF ships with a powerful and widely deployed STS implementation that has been covered extensively on this blog before. Clients interact with the STS via the SOAP WS-Trust interface, typically asking the STS to issue a (SAML) token by passing some parameters. The STS offers the following functionality with respect to tokens:
    • It can issue SAML (1.1 + 2.0) and SecurityContextTokens.
    • It can validate SAML, UsernameToken and BinarySecurityTokens.
    • It can renew SAML Tokens
    • It can cancel SecurityContextTokens.
    • It can transform tokens from one type to another, and from one realm to another.
    Wouldn't it be cool if you could ask the STS to issue and validate JWT tokens as well? Well that's exactly what you can do from the new CXF 3.1.5 release! If you already have an STS instance deployed to issue SAML tokens, then you can also issue JWT tokens to different clients with some simple configuration changes to your existing deployment.

    2) Issuing JWT Tokens from the STS

    Let's look at the most common use-case first, that of issuing JWT tokens from the STS. The client specifies a TokenType String in the request to indicate the type of the desired token. There is no standard WS-Trust Token Type for JWT tokens as there is for SAML Tokens. The default implementation that ships with the STS uses the token type "urn:ietf:params:oauth:token-type:jwt" (see here).

    The STS maintains a list of TokenProvider implementations, which it queries in turn to see whether it is capable of issuing a token of the given type. A new implementation is available to issue JWT Tokens - the JWTTokenProvider. By default tokens are signed via JWS using the STS master keystore (this is controlled via a "signToken" property of the JWTTokenProvider). The keystore configuration is exactly the same as for the SAML case. Tokens can also be encrypted via JWE if desired. Realms are also supported in the same way as for SAML Tokens.
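    As a sketch, registering the provider in an existing Spring-configured STS deployment might look like this (the surrounding list name and the SAML provider bean reference are assumptions for illustration):

```xml
<util:list id="transportTokenProviders">
    <ref bean="transportSamlTokenProvider"/>
    <!-- issues JWT tokens; signToken=true signs them via JWS with the STS keystore -->
    <bean class="org.apache.cxf.sts.token.provider.jwt.JWTTokenProvider">
        <property name="signToken" value="true"/>
    </bean>
</util:list>
```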

    The claims inserted into the issued token are obtained via a JWTClaimsProvider Object configured in the JWTTokenProvider. The default implementation adds the following claims:
    • The current principal is added as the Subject claim.
    • The issuer name of the STS is added as the Issuer claim.
    • Any claims that were requested by the client via the WS-Trust "Claims" parameter (that can be handled by the ClaimManager of the STS).
    • The various "time" constraints, such as Expiry, NotBefore, IssuedAt, etc.
    • Finally, it adds in the audience claim obtained from an AppliesTo address and the wst:Participants, if either were specified by the client.
    The token that is generated by the JWTTokenProvider is in the form of a String. However, as the token will be included in the WS-Trust Response, the String must be "wrapped" somehow to form a valid XML Element. A TokenWrapper interface is defined to do precisely this. The default implementation simply inserts the JWT Token into the WS-Trust Response as the text content of a "TokenWrapper" Element.
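    The wrapped token in the WS-Trust response might then look roughly like this (a sketch; the JWT value is truncated and the surrounding response elements are elided):

```xml
<wst:RequestedSecurityToken>
    <!-- the JWT String as the text content of a "TokenWrapper" Element -->
    <TokenWrapper>eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9...</TokenWrapper>
</wst:RequestedSecurityToken>
```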
      3) Validating JWT Tokens in the STS

      As well as issuing JWT Tokens, the STS can also validate them via the WS-Trust Validate operation. A new TokenValidator implementation is available to validate JWT tokens called the JWTTokenValidator. The signature of the token is first validated against the STS truststore. Then the time related claims of the token are checked, e.g. is the token expired, or is the current time before the NotBefore time of the token, etc.

      A useful feature of the WS-Trust validate operation is the ability to transform tokens from one type to another. Normally, a client just wants to know if a token is valid or not, and hence receives a "yes/no" response from the STS. However, if the client specifies a TokenType that doesn't correspond to the standard "Status" TokenType, but instead corresponds to a different token, the STS will validate the received token and then generate a new token of the desired type using the principal associated with the validated token.

      This "token transformation" functionality is also supported with the new JWT implementation. It is possible to transform a SAML Token into a JWT Token, and vice versa, something that could be quite useful in a deployment where you need to support both REST and SOAP services for example. Using a JWT Token as a WS-Trust OnBehalfOf/ActAs token is also supported.
      Categories: Colm O hEigeartaigh

      Apache Fediz installation in a productive environment

      Jan Bernhardt - Fri, 02/05/2016 - 21:19
      In this article I'll explain what to do, and what to be aware of, when you want to use the Fediz IDP in production.

      Basically you need to change all default passwords and certificates.

      If you use Tomcat as your Servlet container, I'll also give you some tips on how best to secure Tomcat, so that an attacker will have a hard time breaking into your system.

      IDP Changes
      Remove Files
      rm -f services\idp\src\main\resources\entities-realmb.xml
      rm -f services\idp\src\main\resources\mystskey.cer
      rm -f services\idp\src\main\resources\realm.properties
      rm -f services\idp\src\main\resources\realma.cert
      rm -f services\idp\src\main\resources\realmb.cert
      rm -f services\idp\src\main\resources\stsKeystoreB.properties
      rm -f services\idp\src\main\resources\stsrealm_a.jks
      rm -f services\idp\src\main\resources\stsrealm_b.jks
      rm -f services\idp\src\main\webapp\WEB-INF\idp-config-realma.xml
      rm -f services\idp\src\main\webapp\WEB-INF\idp-config-realmb.xml
      Rename Files
      mv services\idp\src\main\resources\entities-realmA.xml services\idp\src\main\resources\entities-realm-myCompany.xml
      mv services\idp\src\main\resources\stsKeystoreA.properties services\idp\src\main\resources\stsKeystoreMyCompany.properties
      Modify Files
      • Change fediz-idp to idp as finalName in services\idp\pom.xml. This hides which IDP product you use in your URLs and makes it easier if you ever want to switch to a different product.
      • Apply the following changes to your entities-realm-myCompany.xml
        • Rename all realmA settings to realmYourCompany
        • Change your realm identifier 
        • Use http:// or urn: at the beginning of your realm identifier to ensure interoperability with Microsoft ADFS
        • Change Keystore settings, especially the certificatePassword
        • Update stsUrl and idpUrl to reflect your installation
        • Remove fedizhelloworld from your applications
        • Remove oidc application if not used, or update passiveRequestorEndpointConstraint if oidc will be used
        • Remove or update all trustedIdps
      • Regenerate your own IDP SSL keys for services\idp\src\main\resources\idp-ssl-key.jks and store new certificate in services\idp\src\main\resources\idp-ssl-trust.jks. Remove all other certificates in idp-ssl-trust.jks
      • Update settings for your database in services\idp\src\main\resources\persistence.properties
      • Change passwords in services\idp\src\main\resources\stsKeystoreMyCompany.properties
      • Change usernames and passwords in services\idp\src\main\resources\users.properties. Use Bcrypt passwords instead of plaintext passwords.
      • Update wsdlLocation to reflect your STS URL in services\idp\src\main\webapp\WEB-INF\idp-servlet.xml
      • Ensure correct realm value within your services\idp\src\main\webapp\WEB-INF\web.xml
      • Apply the following changes within services\idp\src\main\webapp\WEB-INF\security-config.xml
        • Change realm identifier in federationEntryPoint
        • Enable bCryptPasswordEncoder
        • Enable form-login if desired and provide custom login screen in services/idp/src/main/webapp/WEB-INF/views/signinform.jsp
        • Remove all authentication alternatives, which you don't need
        • Change username and password in securityProperties if you need certificate based authentication
        • Remove all stsUPPortFilter settings, because they are only useful for demo setups where your STS runs within the same Tomcat as your IDP.
        • Update all wsdlLocation to match your STS URL
      Create Files
      • Provide your own keystore at services\idp\src\main\resources\stsrealm_myCompany.jks. This one should be the same as the one used later for the STS.
      STS Changes
      Remove Files
      rm -f services\sts\src\main\resources\stsrealm_a.jks
      rm -f services\sts\src\main\resources\stsrealm_b.jks
      rm -f services\sts\src\main\resources\realma.cert
      rm -f services\sts\src\main\resources\realmb.cert
      rm -f services\sts\src\main\webapp\WEB-INF\file.xml
      rm -f services\sts\src\main\webapp\WEB-INF\passwords.xml
      rm -f services\sts\src\main\webapp\WEB-INF\userClaims.xml
      Rename Files
      mv services\sts\src\main\resources\stsKeystoreA.properties services\sts\src\main\resources\stsKeystore.properties
      Modify Files
      • Add dependency to services\sts\pom.xml (only needed if you want to use JEXL for claim mappings)
      <dependency>
          <groupId>org.apache.commons</groupId>
          <artifactId>commons-jexl</artifactId>
          <version>2.1.1</version>
          <scope>runtime</scope>
      </dependency>
      • Change the private key password in the keystore and in the CallbackHandler: services\sts\src\main\java\org\apache\cxf\fediz\service\sts\PasswordCallbackHandler.java
      • Replace  with your own keystore.properties file.
      • Change passwords in services\sts\src\main\resources\stsTruststore.properties 
      • Change log level in services\sts\src\main\resources\log4j.properties 
      • Remove all certificates in services\sts\src\main\resources\ststrust.jks and add your own.
      • Change user accounts in services\sts\src\main\webapp\WEB-INF\passwords.xml 
      • Do the following changes within: services\sts\src\main\webapp\WEB-INF\cxf-transport.xml
        • Import file with user realm configuration, like ldap.xml
        • Change Relationship settings
        • Add Claim Handler (if needed)
        • Rename all realmA in text to realmYourCompany
        • Remove all realmB settings / beans / endpoints
      Create Files
      • Add ClaimMapping Scripts (if needed)
        services\sts\src\main\resources\claimMapping-trusted-realm.script 
      • Add you own keystore services\sts\src\main\resources\stsrealm_myCompany.jks
      Tomcat Installation
      Tomcat Home
      Only download and install Tomcat manually, if your distribution does not provide a tomcat installation. System based installation is usually better, because you will receive Tomcat (security) updates automatically with your other system updates!
      1. Download latest Tomcat Version:
      https://tomcat.apache.org/download-70.cgi

      2. Extract Tomcat to /usr/share/

      3. Create a symbolic link pointing to your latest tomcat download:
      ln -s /usr/share/apache-tomcat-7.0.67 /usr/share/tomcat
      Using symbolic links will make it easier to switch to newer versions later on.
      4. Restrict tomcat installation
      # Create tomcat group
      groupadd tomcat

      # Set ownership of all files to root and provide tomcat access via group ownership
      chown -R root:tomcat /usr/share/tomcat/

      # Remove redundant files and folders
      rm -f /usr/share/tomcat/bin/*.bat
      rm -rf /usr/share/tomcat/temp
      rm -rf /usr/share/tomcat/work
      rm -rf /usr/share/tomcat/logs
      rm -rf /usr/share/tomcat/webapps

      # Make all normal files readonly
      find /usr/share/tomcat/ -type f -exec chmod 640 {} +
      chmod 750 /usr/share/tomcat/bin/*.sh

      # Allow tomcat to access all tomcat folders
      find /usr/share/tomcat/ -type d -exec chmod 770 {} +
      IDP Tomcat Setup
      Set up a tomcat base environment for your IDP
      # Create folders
      mkdir /usr/share/tomcat-idp
      mkdir /usr/share/tomcat-idp/conf
      mkdir /usr/share/tomcat-idp/logs
      mkdir /usr/share/tomcat-idp/temp
      mkdir /usr/share/tomcat-idp/webapps
      mkdir /usr/share/tomcat-idp/work

      # Copy conf files
      cp /usr/share/tomcat/conf/* /usr/share/tomcat-idp/conf/

      # Copy your war file to webapps
      cp ~/idp.war /usr/share/tomcat-idp/webapps/
      Create a system startup script /etc/init.d/tomcat-idp file, with the following content:

      #!/bin/bash
      #
      # tomcat7 This shell script takes care of starting and stopping Tomcat-IDP
      # Forked from: https://gist.github.com/valotas/1000094
      #
      # chkconfig: - 80 20
      #
      ### BEGIN INIT INFO
      # Provides: tomcat-idp
      # Required-Start: $network $syslog
      # Required-Stop: $network $syslog
      # Default-Start:
      # Default-Stop:
      # Description: Release implementation for Servlet 2.5 and JSP 2.1
      # Short-Description: start and stop tomcat-idp
      ### END INIT INFO

      ## Source function library.
      #. /etc/rc.d/init.d/functions
      export CATALINA_HOME=/usr/share/tomcat
      export CATALINA_BASE=/usr/share/tomcat-idp
      export JAVA_HOME=/usr/java/default
      export JAVA_OPTS="-Dfile.encoding=UTF-8 \
      -Djava.net.preferIPv4Stack=true \
      -Djava.net.preferIPv4Addresses=true \
      -Dnet.sf.ehcache.skipUpdateCheck=true \
      -XX:+DoEscapeAnalysis \
      -XX:+UseConcMarkSweepGC \
      -XX:+CMSClassUnloadingEnabled \
      -XX:+UseParNewGC \
      -XX:MaxPermSize=128m \
      -Xms512m -Xmx512m"
      export PATH=$JAVA_HOME/bin:$PATH
      SHUTDOWN_WAIT=20
      USER=tomcat-idp
      tomcat_pid() {
      echo `ps aux | grep org.apache.catalina.startup.Bootstrap | grep -v grep | awk '{ print $2 }'`
      }

      start() {
      pid=$(tomcat_pid)
      if [ -n "$pid" ]
      then
      echo "Tomcat is already running (pid: $pid)"
      else
      # Start tomcat
      echo "Starting $USER"
      ulimit -n 100000
      umask 007
      /bin/su -p -s /bin/sh $USER $CATALINA_HOME/bin/startup.sh
      fi


      return 0
      }

      stop() {
      pid=$(tomcat_pid)
      if [ -n "$pid" ]
      then
      echo "Stoping $USER"
      /bin/su -p -s /bin/sh $USER $CATALINA_HOME/bin/shutdown.sh

      let kwait=$SHUTDOWN_WAIT
      count=0;
      until [ `ps -p $pid | grep -c $pid` = '0' ] || [ $count -gt $kwait ]
      do
      echo -n -e "\nwaiting for processes to exit";
      sleep 1
      let count=$count+1;
      done

      if [ $count -gt $kwait ]; then
      echo -n -e "\nkilling processes which didn't stop after $SHUTDOWN_WAIT seconds"
      kill -9 $pid
      fi
      else
      echo "$USER is not running"
      fi

      return 0
      }

      case $1 in
      start)
      start
      ;;
      stop)
      stop
      ;;
      restart)
      stop
      start
      ;;
      status)
      pid=$(tomcat_pid)
      if [ -n "$pid" ]
      then
      echo "$USER is running with pid: $pid"
      else
      echo "$USER is not running"
      fi
      ;;
      esac
      exit 0
      Register Tomcat for autostart:
      chkconfig tomcat-idp on
      Start, Wait and Stop Tomcat
      /etc/init.d/tomcat-idp start
      tail -f /usr/share/tomcat-idp/logs/catalina.out
      /etc/init.d/tomcat-idp stop
      Your idp.war file should now be extracted.
      Copy the idp keystore to your tomcat-idp root folder

      cp /usr/share/tomcat-idp/webapps/idp/WEB-INF/classes/idp-ssl-key.jks /usr/share/tomcat-idp/
      Adjust settings of /usr/share/tomcat-idp/conf/server.xml
      • Remove all commented-out blocks to improve readability.
      • Change shutdown password to something more complex:

      <server port="8005" shutdown="ComPlexWord">
      • Enable SSL Support
      <connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol">
                     maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
                     keystoreFile="idp-ssl-key.jks"
                     keystorePass="complexPassword"
                     sslProtocol="TLS" />
      • Disable autoDeploy

      <Host appbase="webapps" name="localhost"
                  unpackWARs="true" autoDeploy="false">
      . . .
      </host>
      Update file permissions
      # Create tomcat-idp user
      useradd -d /usr/share/tomcat-idp -G tomcat tomcat-idp

      # Set ownership of all files to root and provide tomcat-idp access via group ownership
      chown -R root:tomcat-idp /usr/share/tomcat-idp/

      # Make all normal files readonly
      find /usr/share/tomcat-idp/ -type f -exec chmod 640 {} +

      # Allow tomcat-idp to change all files in temp and work
      find /usr/share/tomcat-idp/temp/ -type f -exec chmod 660 {} + 
      find /usr/share/tomcat-idp/work/ -type f -exec chmod 660 {} +

      # Allow tomcat-idp to access all tomcat folders
      find /usr/share/tomcat-idp/ -type d -exec chmod 770 {} +

      # Log files can only be appended by tomcat-idp but not read
      chmod 730 /usr/share/tomcat-idp/logs

      # Tomcat-IDP will not be able to deploy further applications by its own
      chmod 750 /usr/share/tomcat-idp/webapps
      Start Tomcat-IDP again and check if startup was successful
      /etc/init.d/tomcat-idp start
      tail -f /usr/share/tomcat-idp/logs/catalina.out
      STS Tomcat Setup
      It is recommended to install the STS on a different / dedicated server. In this blog post I will assume that you install the IDP and STS on the same machine and therefore need to change the port configuration for your tomcat instance.
      The STS Tomcat setup is almost the same as for the IDP.
      Setup a tomcat base environment for your STS
      # Create folders
      mkdir /usr/share/tomcat-sts
      mkdir /usr/share/tomcat-sts/conf
      mkdir /usr/share/tomcat-sts/logs
      mkdir /usr/share/tomcat-sts/temp
      mkdir /usr/share/tomcat-sts/webapps
      mkdir /usr/share/tomcat-sts/work

      # Copy conf files
      cp /usr/share/tomcat/conf/* /usr/share/tomcat-sts/conf/

      # Copy your war file to webapps
      cp ~/sts.war /usr/share/tomcat-sts/webapps/
      Create a system startup script /etc/init.d/tomcat-sts file, with the following content:

      #!/bin/bash
      #
      # tomcat7 This shell script takes care of starting and stopping Tomcat-STS
      # Forked from: https://gist.github.com/valotas/1000094
      #
      # chkconfig: - 80 20
      #
      ### BEGIN INIT INFO
      # Provides: tomcat-sts
      # Required-Start: $network $syslog
      # Required-Stop: $network $syslog
      # Default-Start:
      # Default-Stop:
      # Description: Release implementation for Servlet 2.5 and JSP 2.1
      # Short-Description: start and stop tomcat-sts
      ### END INIT INFO

      ## Source function library.
      #. /etc/rc.d/init.d/functions
      export CATALINA_HOME=/usr/share/tomcat
      export CATALINA_BASE=/usr/share/tomcat-sts
      export JAVA_HOME=/usr/java/default
      export JAVA_OPTS="-Dfile.encoding=UTF-8 \
      -Djava.net.preferIPv4Stack=true \
      -Djava.net.preferIPv4Addresses=true \
      -Dnet.sf.ehcache.skipUpdateCheck=true \
      -XX:+DoEscapeAnalysis \
      -XX:+UseConcMarkSweepGC \
      -XX:+CMSClassUnloadingEnabled \
      -XX:+UseParNewGC \
      -XX:MaxPermSize=128m \
      -Xms512m -Xmx512m"
      export PATH=$JAVA_HOME/bin:$PATH
      SHUTDOWN_WAIT=20
      USER=tomcat-sts

      tomcat_pid() {
      echo `ps aux | grep org.apache.catalina.startup.Bootstrap | grep -v grep | awk '{ print $2 }'`
      }

      start() {
      pid=$(tomcat_pid)
      if [ -n "$pid" ]
      then
      echo "Tomcat is already running (pid: $pid)"
      else
      # Start tomcat
      echo "Starting $USER"
      ulimit -n 100000
      umask 007
      /bin/su -p -s /bin/sh $USER $CATALINA_HOME/bin/startup.sh
      fi


      return 0
      }

      stop() {
      pid=$(tomcat_pid)
      if [ -n "$pid" ]
      then
      echo "Stoping $USER"
      /bin/su -p -s /bin/sh $USER $CATALINA_HOME/bin/shutdown.sh

      let kwait=$SHUTDOWN_WAIT
      count=0;
      until [ `ps -p $pid | grep -c $pid` = '0' ] || [ $count -gt $kwait ]
      do
      echo -n -e "\nwaiting for processes to exit";
      sleep 1
      let count=$count+1;
      done

      if [ $count -gt $kwait ]; then
      echo -n -e "\nkilling processes which didn't stop after $SHUTDOWN_WAIT seconds"
      kill -9 $pid
      fi
      else
      echo "$USER is not running"
      fi

      return 0
      }

      case $1 in
      start)
      start
      ;;
      stop)
      stop
      ;;
      restart)
      stop
      start
      ;;
      status)
      pid=$(tomcat_pid)
      if [ -n "$pid" ]
      then
      echo "$USER is running with pid: $pid"
      else
      echo "$USER is not running"
      fi
      ;;
      esac
      exit 0
      Register Tomcat for autostart:
      chkconfig tomcat-sts on
      Start, Wait and Stop Tomcat
      /etc/init.d/tomcat-sts start
      tail -f /usr/share/tomcat-sts/logs/catalina.out
      /etc/init.d/tomcat-sts stop
      Your sts.war file should now be extracted.
      Copy the sts keystore to your tomcat-sts root folder:
      cp /usr/share/tomcat-sts/webapps/sts/WEB-INF/classes/idp-ssl-key.jks /usr/share/tomcat-sts/
      cp /usr/share/tomcat-sts/webapps/sts/WEB-INF/classes/idp-ssl-trust.jks /usr/share/tomcat-sts/
      Adjust settings of /usr/share/tomcat-sts/conf/server.xml
      • Remove all commented-out blocks to improve readability.
      • Change shutdown password to something more complex:

      <server port="9005" shutdown="ComPlexWord">
      • Enable SSL Support
      <connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
                     maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
                     keystoreFile="idp-ssl-key.jks"
                     keystorePass="complexpassword"
                     truststoreFile="idp-ssl-trust.jks"
                     truststorePass="anotherComplexWord"
                     truststoreType="JKS"
                     clientAuth="want"
                     sslProtocol="TLS" />
      • Disable autoDeploy
      <host appbase="webapps" name="localhost"
                  unpackWARs="true" autoDeploy="false">
      . . .
      </host>Update file permissions
      # Create tomcat-sts user
      useradd -d /usr/share/tomcat-sts -G tomcat tomcat-sts

      # Set ownership of all files to root and provide tomcat-sts access via group ownership
      chown -R root:tomcat-sts /usr/share/tomcat-sts/

      # Make all normal files readonly
      find /usr/share/tomcat-sts/ -type f -exec chmod 640 {} +

      # Allow tomcat-sts to change all files in temp and work
      find /usr/share/tomcat-sts/temp/ -type f -exec chmod 660 {} +
      find /usr/share/tomcat-sts/work/ -type f -exec chmod 660 {} +

      # Allow tomcat-sts to access all tomcat folders
      find /usr/share/tomcat-sts/ -type d -exec chmod 770 {} +

      # Log files can only be appended by tomcat-sts but not read
      chmod 730 /usr/share/tomcat-sts/logs

      # Tomcat-STS will not be able to deploy further applications by its own
      chmod 750 /usr/share/tomcat-sts/webapps
      Start Tomcat-STS again and check if startup was successful
      /etc/init.d/tomcat-sts start
      tail -f /usr/share/tomcat-sts/logs/catalina.out
      Categories: Jan Bernhardt

      Apache Karaf Tutorial Part 8 - Distributed OSGi

      Christian Schneider - Tue, 02/02/2016 - 08:54

      Blog post edited by Christian Schneider - "Updated to karaf 4"

      By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows you to define services in one container and use them in another (even across machine boundaries).

      For this tutorial we use the DOSGi sub project of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 in the OSGi 4.2 Enterprise Specification).

      Example on github

      Introducing the example

      Following the hands-on nature of these tutorials, we start with an example that can be tried in a few minutes, and explain the details later.

      Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A and the model and UI on container B, and we install the DOSGi runtime on both containers.

      As DOSGi should not be active for all services on a system, the spec defines that the service property "osgi.remote.interfaces" controls whether DOSGi should process the service. It expects the names of the interfaces that this service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets the property, so the service is exported with defaults.
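      For illustration, here is roughly how that property could be set when publishing the service from a Blueprint descriptor (the bean class and interface names follow the tasklist example but are assumptions):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="persistenceImpl" class="net.lr.tasklist.persistence.impl.TaskServiceImpl"/>
    <!-- "*" tells DOSGi to export all interfaces the service implements -->
    <service ref="persistenceImpl" interface="net.lr.tasklist.model.TaskService">
        <service-properties>
            <entry key="osgi.remote.interfaces" value="*"/>
        </service-properties>
    </service>
</blueprint>
```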

      Installing the service

      To keep things simple we will install container A and B on the same system.

      Install Service
      config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
      config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
      feature:repo-add cxf-dosgi 1.7.0
      feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
      feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
      feature:install example-tasklist-persistence

      After these commands the tasklist persistence service should be running and be published on zookeeper.

      You can check the WSDL of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl
      By starting the zookeeper client zkCli.sh from a zookeeper distro you can optionally check that there is a node for the service below the osgi path.

      Installing the UI
      • Unpack into folder container_b
      • Start bin/karaf

       

      Install Client
      config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
      config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
      feature:repo-add cxf-dosgi 1.7.0
      feature:install cxf-dosgi-discovery-distributed
      feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
      feature:install example-tasklist-ui

       

The tasklist client UI should be in status Active/Created and the servlet should be available at http://localhost:8182/tasklist. If the UI bundle stays in status GracePeriod, then DOSGi did not provide a local proxy for the persistence service.

      How does it work

The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services, you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container, but only processes services that have the "osgi.remote.interfaces" property. If the property is found, the service is exported either with the named interfaces or with all interfaces it implements. The way the export works can be fine-tuned using the CXF DOSGi configuration options.

By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults: "cxf", the IP address of the host, and port 8181. All of these options can be defined using a Config Admin configuration (see the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding; if the service interface is annotated with the JAX-WS @WebService annotation, then the defaults are the JAX-WS frontend and JAXB databinding.
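As a quick illustration of this default URL scheme (a sketch; host and port follow the Karaf defaults above, and the interface name is taken from the tasklist example):

```shell
# Default CXF DOSGi endpoint path: the servlet prefix "cxf" plus the
# service interface's fully-qualified name with dots turned into slashes
iface="net.lr.tasklist.model.TaskService"
path="/cxf/$(echo "$iface" | tr '.' '/')"
echo "http://localhost:8181${path}?wsdl"
# → http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl
```

This is the same URL used above to check the WSDL of the exported persistence service.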

The service information is then also propagated using DOSGi discovery. In the example we use the Zookeeper discovery implementation, so the service metadata is written to a Zookeeper server.

Container B will monitor the local container for needed services. It will then check whether a needed service is available in the discovery implementation (the Zookeeper server in our case). For each service it finds, it creates a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

      So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

      Categories: Christian Schneider

      An interop demo between Apache CXF Fediz and Keycloak

      Colm O hEigeartaigh - Mon, 02/01/2016 - 17:15
      Last week, I showed how to use the Apache CXF Fediz IdP as an identity broker with a real-world SAML SSO IdP based on the Shibboleth IdP (as opposed to an earlier article which used a mocked SAML SSO IdP). In this post, I will give similar instructions to configure the Fediz IdP to act as an identity broker with Keycloak.

      1) Install and configure Keycloak

      Download and install the latest Keycloak distribution (tested with 1.8.0). Start keycloak in standalone mode by running 'sh bin/standalone.sh'.

      1.1) Create users in Keycloak

      First we need to create an admin user by navigating to the following URL, and entering a password:
      • http://localhost:8080/auth/
Click on the "Administration Console" link and log in using the admin user credentials. You will see the configuration details of the "Master" realm. For the purposes of this demo, we will create a new realm. Hover the mouse pointer over "Master" in the top left-hand corner, and click on "Add realm". Create a new realm called "realmb". Now we will create a new user in this realm. Click on "Users" and select "Add User", specifying "alice" as the username. Click "save", then go to the "Credentials" tab for "alice", specify a password, unselect the "Temporary" checkbox, and reset the password.

      1.2) Create a new client application in Keycloak

      Now we will create a new client application for the Fediz IdP in Keycloak. Select "Clients" in the left-hand menu, and click on "Create". Specify the following values:
      • Client ID: urn:org:apache:cxf:fediz:idp:realm-A
      • Client protocol: saml
      • Client SAML Endpoint: https://localhost:8443/fediz-idp/federation
      Once the client is created you will see more configuration options:
      • Select "Sign Assertions"
      • Select "Force Name ID Format".
      • Valid Redirect URIs: https://localhost:8443/*
      Now go to the "SAML Keys" tab of the newly created client. Here we will have to import the certificate of the Fediz IdP so that Keycloak can validate the signed SAML requests. Click "Import" and specify:
      • Archive Format: JKS
      • Key Alias: realma
      • Store password: storepass
      • Import file: stsrealm_a.jks
      1.3) Export the Keycloak signing certificate

      Finally, we need to export the Keycloak signing certificate so that the Fediz IdP can validate the signed SAML Response from Keycloak. Select "Realm Settings" (for "realmb") and click on the "Keys" tab. Copy and save the value specified in the "Certificate" textfield.

      2) Install and configure the Apache CXF Fediz IdP and sample Webapp

      Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
      2.1) Configure the Fediz IdP to communicate with Keycloak

      Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the SAML SSO protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
      • Change the port in "idpUrl" to "8443". 
      In the 'trusted-idp-realmB' bean:
      • Change the "url" value to "http://localhost:8080/auth/realms/realmb/protocol/saml".
      • Change the "protocol" value to "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser".
      • Change the "certificate" value to "keycloak.cert".
      2.2) Configure Fediz to use the Keycloak signing certificate

Copy 'webapps/fediz-idp/WEB-INF/classes/realmb.cert' to a new file called 'webapps/fediz-idp/WEB-INF/classes/keycloak.cert'. Edit this file + delete the content between the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" tags, pasting in instead the Keycloak signing certificate retrieved in step "1.3" above.
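The manual edit above can also be scripted. The sketch below wraps the raw base64 string copied from Keycloak's "Certificate" textfield into PEM form; the CERT_B64 value is a placeholder, not a real certificate:

```shell
# Placeholder for the base64 value copied from Keycloak in step "1.3"
CERT_B64="MIICmzCCAYMCBgFSDemoPlaceholderValueOnly0000"

# A PEM certificate is the base64 payload wrapped at 64 columns
# between BEGIN/END markers
{
  echo "-----BEGIN CERTIFICATE-----"
  printf '%s\n' "$CERT_B64" | fold -w 64
  echo "-----END CERTIFICATE-----"
} > keycloak.cert
```

The resulting keycloak.cert then goes into 'webapps/fediz-idp/WEB-INF/classes'.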

The STS also needs to trust the Keycloak signing certificate. Copy keycloak.cert into 'webapps/fediz-idp-sts/WEB-INF/classes'. In this directory, import keycloak.cert into the STS truststore via:
      • keytool -keystore ststrust.jks -import -file keycloak.cert -storepass storepass -alias keycloak
      Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

      3) Testing the service

      To test the service navigate to:
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Keycloak authentication page. Enter the user credentials you created earlier. You will then be redirected back to Fediz, which converts the received SAML token into a token in its own realm (realm A) and redirects to the web application.


      Categories: Colm O hEigeartaigh

      An interop demo between Apache CXF Fediz and Shibboleth

      Colm O hEigeartaigh - Tue, 01/26/2016 - 12:45
      Apache CXF Fediz is an open source implementation of the WS-Federation Passive Requestor Profile for SSO. It allows you to configure SSO for your web application via a container plugin, which redirects the user to authenticate at an IdP (Fediz or another WS-Federation based IdP). The Fediz IdP also supports the ability to act as an identity broker to a remote IdP, if the user is to be authenticated in a different realm. Last year, this blog covered a new feature of Apache CXF Fediz 1.2.0, which was the ability to act as an identity broker with a remote SAML SSO IdP. In this post, we will look at extending the demo to work with the Shibboleth IdP.

      1) Install and configure the Apache CXF Fediz IdP and sample Webapp

      Firstly, follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
      Now we will configure the Fediz IdP to authenticate the user in "realm B", by using the SAML SSO protocol with a Shibboleth IdP instance. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml':

      In the 'idp-realmA' bean:
      • Change the port in "idpUrl" to "8443". 
      In the 'trusted-idp-realmB' bean:
      • Change the "url" value to "http://localhost:9080/idp/profile/SAML2/Redirect/SSO".
      • Change the "protocol" value to "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser".
      • Add the following property: <property name="parameters"><util:map><entry key="require.known.issuer" value="false" /></util:map></property>
      Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

      2) Install and configure Shibboleth

This is a reasonably complex task, so let's break it down into various sections.

      2.1) Install Shibboleth and deploy to Tomcat

      Download and extract the latest Shibboleth Identity Provider (tested with 3.2.1). Install Shibboleth by running the "install.sh" script in the bin directory. Install Shibboleth to "$SHIB_HOME" and use the default values that are prompted as part of the installation process, entering a random password for the keystore (we won't be using it). Download and extract an Apache Tomcat 7 instance, and follow these steps:
      • Copy '$SHIB_HOME/war/idp.war' into the Tomcat webapps directory.
      • Configure Shibboleth to find the IDP by defining: export JAVA_OPTS="-Xmx512M -Didp.home=$SHIB_HOME".
      • Next you need to download the jstl jar (https://repo1.maven.org/maven2/jstl/jstl/1.2/) + put it in the lib directory of Tomcat.
      • Edit conf/server.xml + change the ports to avoid conflict with the Tomcat instance that the Fediz IdP is running in. (e.g. use 9080 instead of 8080, etc.).
      • Now start Tomcat + check that everything is working by navigating to: http://localhost:9080/idp/profile/status
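The port edit in conf/server.xml can be done with a few sed substitutions. The fragment below fabricates a minimal server.xml just to make the sketch self-contained; in practice you would run the sed command against Tomcat's real conf/server.xml (and back it up first):

```shell
# Minimal stand-in for Tomcat's conf/server.xml (illustration only)
cat > server.xml <<'EOF'
<Server port="8005" shutdown="SHUTDOWN">
  <Connector port="8080" protocol="HTTP/1.1" redirectPort="8443"/>
  <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
</Server>
EOF

# Shift the default ports by +1000 so this instance cannot clash
# with the Tomcat that hosts the Fediz IdP
sed -i -e 's/port="8005"/port="9005"/' \
       -e 's/port="8080"/port="9080"/' \
       -e 's/port="8443"/port="9443"/' \
       -e 's/port="8009"/port="9009"/' server.xml
```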
      2.2) Configure the RP provider in Shibboleth

      Next we need to configure the Fediz IdP as a RP (relying party) in Shibboleth. The Fediz IdP has the ability to generate metadata (either WS-Federation or SAML SSO) for a trusted IdP. Navigate to the following URL in a browser and save the metadata to a local file "fediz-metadata.xml":
      • https://localhost:8443/fediz-idp/metadata/urn:org:apache:cxf:fediz:idp:realm-B
      Edit '$SHIB_HOME/conf/metadata-providers.xml' and add the following configuration to pick up this metadata file:

      <MetadataProvider id="FedizMetadata"  xsi:type="FilesystemMetadataProvider" metadataFile="$METADATA_PATH/fediz-metadata.xml"/>

      As we won't be encrypting the SAML response in this demo, edit the '$SHIB_HOME/idp.properties' file and uncomment the "idp.encryption.optional = false" line, changing "false" to "true".

      2.3) Configuring the Signing Keys in Shibboleth

Next we need to copy the keys that Fediz has defined for "realm B" to Shibboleth. Copy the signing certificate "realmb.cert" from the Fediz source and rename it to '$SHIB_HOME/credentials/idp-signing.crt'. Next we need to extract the private key from the keystore and save it as a plaintext private key. Download the realmb keystore. Extract the private key via:
      • keytool -importkeystore -srckeystore stsrealm_b.jks -destkeystore stsrealmb.p12 -deststoretype PKCS12 -srcalias realmb -deststorepass storepass -srcstorepass storepass -srckeypass realmb -destkeypass realmb
      • openssl pkcs12 -in stsrealmb.p12  -nodes -nocerts -out idp-signing.key
      • Edit idp-signing.key to remove any additional information before the "BEGIN PRIVATE KEY" part.
      • cp idp-signing.key $SHIB_HOME/credentials
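The manual cleanup of idp-signing.key can also be done with sed. The input below is a fabricated stand-in for what openssl pkcs12 typically emits (a "Bag Attributes" preamble before the PEM block); it is not a real key:

```shell
# Fabricated example of openssl pkcs12 output (not a real key)
cat > idp-signing.key <<'EOF'
Bag Attributes
    friendlyName: realmb
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
Tm90QVJlYWxLZXlKdXN0QVBsYWNlaG9sZGVy
-----END PRIVATE KEY-----
EOF

# Keep only the PEM block, dropping everything before "BEGIN PRIVATE KEY"
sed -n '/-----BEGIN PRIVATE KEY-----/,/-----END PRIVATE KEY-----/p' \
    idp-signing.key > idp-signing.key.tmp
mv idp-signing.key.tmp idp-signing.key
```

After this, the file starts directly at the "-----BEGIN PRIVATE KEY-----" line and can be copied to $SHIB_HOME/credentials.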
      2.4) Configure Shibboleth to authenticate users

Next we need to configure Shibboleth to authenticate users. If you have a configured KDC or LDAP installation on your machine, then it is easiest to use that, provided you configure some test users there. If not, in this section we will configure Shibboleth to use JAAS to authenticate users via Kerberos. Edit '$SHIB_HOME/conf/authn/password-authn-config.xml' + comment out the ldap import, instead uncommenting the "jaas-authn-config.xml" import. Next edit '$SHIB_HOME/conf/authn/jaas.config' and replace the contents with:

       ShibUserPassAuth {
          com.sun.security.auth.module.Krb5LoginModule required refreshKrb5Config=true useKeyTab=false;
      };

      Now we will set up and configure a test KDC. Assuming you are running this demo on linux, edit '/etc/krb5.conf' and change the "default_realm" to "service.ws.apache.org" + add the following to the realms section:

              service.ws.apache.org = {
                      kdc = localhost:12345
              }

Now we will look at altering a test-case I wrote that uses Apache Kerby as a test KDC. Clone my testcases github repo and go to the "cxf-kerberos-kerby" test-case, which is part of the top-level "cxf" projects. Edit the AuthenticationTest and change the values for KDC_PORT + KDC_UDP_PORT to "12345". Next remove the @org.junit.Ignore annotation from the "launchKDCTest()" method to just launch the KDC + sleep for a few minutes. Launch the test on the command line via "mvn test -Dtest=AuthenticationTest".

      2.5) Configure Shibboleth to include the authenticated principal in the SAML Subject

      For the purposes of our demo scenario, the Fediz IdP expects the authenticated principal in the SAML Subject it gets back from Shibboleth. Edit '$SHIB_HOME/conf/attribute-filter.xml' and add the following:

          <AttributeFilterPolicy id="releasePersistentIdToAnyone">
              <PolicyRequirementRule xsi:type="ANY"/>

              <AttributeRule attributeID="persistentId">
                  <PermitValueRule xsi:type="ANY"/>
              </AttributeRule>
          </AttributeFilterPolicy>

      Edit '$SHIB_HOME/conf/attribute-resolver.xml' and add the following:

      <resolver:AttributeDefinition id="persistentId" xsi:type="ad:PrincipalName">
            <resolver:AttributeEncoder xsi:type="enc:SAML1StringNameIdentifier" nameFormat="urn:mace:shibboleth:1.0:nameIdentifier"/>
            <resolver:AttributeEncoder xsi:type="enc:SAML2StringNameID" nameFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"/>
      </resolver:AttributeDefinition>

      Edit '$SHIB_HOME/conf/saml-nameid.xml' and comment out any beans listed in the "shibboleth.SAML2NameIDGenerators" list. Finally, edit '$SHIB_HOME/conf/saml-nameid.properties' and uncomment the legacy Generator:

      idp.nameid.saml2.legacyGenerator = shibboleth.LegacySAML2NameIDGenerator

That will enable Shibboleth to process the "persistent" attribute using the Principal Name. (Re)start the Tomcat instance and we should be ready to go.

      3) Testing the service

      To test the service navigate to:
      • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Shibboleth authentication page. Enter "alice/alice" as the username/password. You will then be redirected back to Fediz, which converts the received SAML token into a token in its own realm (realm A) and redirects to the web application.




      Categories: Colm O hEigeartaigh

      Amazon EC2 plugin – 1.30 Released

      Francis Upton's blog - Sun, 01/24/2016 - 01:46

A significant bug fix release of the plugin was made; thanks to all who helped test it and provided contributions.

      The next release will probably be in the next month or so and will include the acceptance of enhancement PRs that have been filed.

      Version 1.30 (Jan 23, 2016)
      • Add config to prefer the public IP to private IP when ssh-ing into slave
      • Added common method to compute tag value and also created constants for demand and spot
      • JENKINS-27601 instance caps incorrectly calculated
      • JENKINS-23787 EC2-plugin not spooling up stopped nodes
      • Depend on the aws-java-sdk plugin to limit AWS SDK duplication
      • Upgrade AWS SDK to 1.10.26
      • Terminate instance even if ec2 node deletion failed
      • JENKINS-27260 SPNEGO for Windows in the EC2 Plugin
      • JENKINS-26493 Use new EC2 API endpoint hostnames
      • JCIFS first tries to resolve a dfs path would timeout causing a long startup delay
      • JENKINS-28754 Jenkins EC2 Plugin should show timestamp in slave logs
      • JENKINS-30284 EC2 plugin too aggressive in timing in contacting new AWS instance over SSH
      • Use AWS4SignerType instead of QueryStringSignerType
      • Add minimum timeout for windows launching
      • Better exception handling in uptime check
      • JENKINS-29851 Global instance cap not calculated for spot instances
• JENKINS-32439 Incorrect slave template (AMI) found when launching slave
      • Improve logging to be less verbose

      Categories: Francis Upton

      Two Bays trail run 2016

      Olivier Lamy - Sun, 01/24/2016 - 00:12
So I just started the year with a mid-distance trail run.
Last year I did the 28km but decided to go for the 56km this year.
Training during the summer and xmas period is a bit complicated and needs some real motivation. But I finally managed to do a decent training.
The race was a bit complicated with the weather. I had a hard time with the heat (33° in some country bush parts!!)
Bad eating strategy at the start (ate too much...)
But I did it. I expected a better time but the temperature really killed me on the way back. So I must be happy with this 7h for 56km and +1600m/-1600m.
I managed to build a video. So enjoy what it's like to run in Australia :-)
Strava activity here (some GPS issues but I really did 56km :-) )
Next race is the rollercoaster 44km on 27th February.
Then the big one of the year: Ultra Trail Australia 100km in the Blue Mountains.
And maybe the Surf Coast Century for a second time with a sub-12h goal (depends how the body will be in June :-) )
      Categories: Olivier Lamy
