Christian Schneider

Apache Karaf Tutorial part 10 - Declarative services

Christian Schneider - Fri, 04/22/2016 - 17:16


This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

You can find the full source code on github Karaf-Tutorial/tasklist-ds

Declarative Services

Declarative Services (DS) is the main alternative to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

At its core DS works with XML files that define SCR components and their dependencies. They typically live in the OSGI-INF directory and are announced in the manifest using the header "Service-Component" with the path to the component descriptor file. Luckily it is not necessary to work with this XML directly, as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they are enabled by a setting in the configuration instructions of the plugin:

<_dsannotations>*</_dsannotations>

For more details see http://www.aqute.biz/Bnd/Components
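In the pom this instruction goes into the configuration of the maven-bundle-plugin. A minimal sketch (plugin version omitted; your project layout may differ):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- process DS annotations in all classes of the bundle -->
      <_dsannotations>*</_dsannotations>
    </instructions>
  </configuration>
</plugin>
```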

DS vs Blueprint

Let us look at DS by comparing it to the better known blueprint. There are some important differences:

  1. Blueprint always works on a complete blueprint context. The context is started when all mandatory service dependencies are present, and it then publishes all offered services. As a consequence a blueprint context cannot depend on services it offers itself. DS works on components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately, e.g. start and stop it. It is also possible that a bundle offers two components but only one is started, because the dependencies of the other are not yet present.
  2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
    You have a DS component and a blueprint component that each offer a service A and depend on a mandatory service B. Blueprint will wait on the first start for the mandatory service to become available. If it does not come up, blueprint will fail after a timeout and will not be able to recover from this. Once the blueprint context is up, it stays up even if the mandatory service goes away. This is called service damping and its goal is to avoid restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that a call to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
    In DS, on the other hand, the component lifecycle is directly bound to the services it depends on. So a component will only be activated when all mandatory services are present, and it is deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied and calls to it should always work.
  3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes together, this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
  4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces. Examples are: Aries jpa, Aries transactions, Aries authz, CXF, Camel. So using these technologies in DS can be a bit more difficult.
  5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean method. This is used, for example, for security as well as transaction handling. For this reason DS did not support JPA very well so far, as typical usage relies on interceptors. See below how JPA can work with DS.

So whether DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with the other projects you use.

JEE and JPA

The JPA spec is closely tied to JEE, which has a very special thread and interceptor model. In JEE you use session beans with a container managed EntityManager
to manipulate JPA entities. It looks like this:

@Stateless
class TaskServiceImpl implements TaskService {
    @PersistenceContext(unitName="tasklist")
    private EntityManager em;

    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }
}

In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction is committed; if there is an exception it is rolled back.
The calls go to a pool of TaskServiceImpl instances. Each of these instances is only used by one thread at a time. This is important because the EntityManager interface is not thread safe!

So the advantage of this model is that it looks simple and allows pretty compact code. On the other hand it is a bit difficult to test such code outside a container, as you have to mimic the way the container works with this class. It is also difficult to access e.g. em in a test, as it is private and there is no setter.

Blueprint supports a coding style similar to the JEE example and implements this using special jpa and tx namespaces and
interceptors that handle the transaction / EntityManager management.

DS and JPA

In DS each component is a singleton. So there is only one instance that has to cope with multi-threaded access. Working with the plain JEE concepts for JPA is therefore not possible in DS.

Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution is the concept of a JpaTemplate together with support for closures in Java 8. To see how the code looks, peek at the persistence chapter below.

Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We put the JPA code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JpaTemplate then guarantees the same environment as JEE inside the closure. As each closure invocation runs with its own EntityManager, there is effectively one EntityManager per thread. The code will also participate in or create a transaction, and transaction commit/rollback works like in JEE.
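The control flow of such a closure-based template can be illustrated in plain Java. The sketch below is not the Aries API, just a toy analogue with invented names that shows the commit-on-success / rollback-on-exception contract:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy analogue of a closure-based template like JpaTemplate (all names invented).
// The template owns the resource and transaction lifecycle; the caller only
// supplies the work as a closure.
class TxTemplate {
    final List<String> log = new ArrayList<>();

    <R> R txExpr(Function<String, R> work) {
        String em = "em";          // stands in for a per-invocation EntityManager
        log.add("begin");          // begin or join a transaction
        try {
            R result = work.apply(em);
            log.add("commit");     // closure returned normally -> commit
            return result;
        } catch (RuntimeException e) {
            log.add("rollback");   // closure threw -> rollback
            throw e;
        }
    }
}
```

The real JpaTemplate additionally creates the EntityManager per invocation and integrates with JTA; the point here is only the control flow around the closure.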

So this requires a little more code but the advantage is that there is no need for a special framework integration.
The code can also be tested much easier. See TaskServiceImplTest in the example.

Structure
  • features
  • model
  • persistence
  • ui

Features

Defines the karaf features to install the example as well as all necessary dependencies.

Model

This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of model see the tasklist-blueprint example. The model is exactly the same here.

Persistence

TaskServiceImpl

@Component
public class TaskServiceImpl implements TaskService {
    private JpaTemplate jpa;

    public Task getTask(Integer id) {
        return jpa.txExpr(em -> em.find(Task.class, id));
    }

    @Reference(target = "(osgi.unit.name=tasklist)")
    public void setJpa(JpaTemplate jpa) {
        this.jpa = jpa;
    }
}

With the @Reference annotation we declare that we need an OSGi service with interface JpaTemplate and the property "osgi.unit.name" set to "tasklist".

InitHelper

@Component
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);
    TaskService taskService;

    @Activate
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of how business code that works with the TaskService can look.
The @Reference annotation on setTaskService injects the TaskService into the component.
@Activate makes sure that addDemoTasks() is called once the component is activated, after all references are injected.

Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special
persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate
to avoid having to install a JTA transaction manager for the test. The test code shows that indeed the TaskServiceImpl can
be used as plain java code without any special tricks.

UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService so it is available over HTTP.

TaskListServlet

@Component(immediate = true,
    service = { Servlet.class },
    property = { "alias:String=/tasklist" })
public class TaskListServlet extends HttpServlet {
    private TaskService taskService;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Actual code omitted
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist".
So it is available on the url http://localhost:8181/tasklist.

Build

Make sure you use JDK 8 and run:

mvn clean install

Installation

Make sure you use JDK 8.
Download and extract Karaf 4.0.0.
Start karaf and execute the commands below

Create DataSource config and Install Example

cat https://raw.githubusercontent.com/cschneider/Karaf-Tutorial/master/tasklist-blueprint-cdi/org.ops4j.datasource-tasklist.cfg | tac -f etc/org.ops4j.datasource-tasklist.cfg
feature:repo-add mvn:net.lr.tasklist.ds/tasklist/1.0.0-SNAPSHOT/xml/features
feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

Validate Installation

First we check that the JpaTemplate service is present for our persistence unit.

service:list JpaTemplate

[org.apache.aries.jpa.template.JpaTemplate]
-------------------------------------------
 osgi.unit.name = tasklist
 transaction.type = JTA
 service.id = 164
 service.bundleid = 57
 service.scope = singleton
Provided by :
 tasklist-model (57)
Used by:
 tasklist-persistence (58)

Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

A likely problem would be that the DataSource is missing, so let's also check it:

service:list DataSource

[javax.sql.DataSource]
----------------------
 dataSourceName = tasklist
 felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = tasklist
 service.factoryPid = org.ops4j.datasource
 service.pid = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
 url = jdbc:h2:mem:test
 service.id = 156
 service.bundleid = 113
 service.scope = singleton
Provided by :
 OPS4J Pax JDBC Config (113)
Used by:
 Apache Aries JPA container (62)

This is how it should look. Pax-jdbc-config created the DataSource from the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the property "osgi.jdbc.driver.name=H2-pool-xa". So the resulting DataSource is pooled and fully ready for XA transactions.

Next we check that the DS components started:

scr:list

 ID | State  | Component Name
--------------------------------------------------------------
 1  | ACTIVE | net.lr.tasklist.persistence.impl.InitHelper
 2  | ACTIVE | net.lr.tasklist.persistence.impl.TaskServiceImpl
 3  | ACTIVE | net.lr.tasklist.ui.TaskListServlet

If any of the components is not active you can inspect it in detail like this:

scr:details net.lr.tasklist.persistence.impl.TaskServiceImpl

Component Details
  Name                : net.lr.tasklist.persistence.impl.TaskServiceImpl
  State               : ACTIVE
  Properties          :
      component.name=net.lr.tasklist.persistence.impl.TaskServiceImpl
      component.id=2
      Jpa.target=(osgi.unit.name=tasklist)
References
  Reference           : Jpa
    State             : satisfied
    Multiple          : single
    Optional          : mandatory
    Policy            : static
    Service Reference : Bound Service ID 164

Test

Open the url below in your browser.
http://localhost:8181/tasklist

You should see a list with one task. Another task can be added by opening:

http://localhost:8181/tasklist?add&taskId=2&title=Another Task

 


Karaf Tutorial Part 4 - CXF Services in OSGi

Christian Schneider - Tue, 04/05/2016 - 09:03


Shows how to publish and use a simple REST and SOAP service in karaf using cxf and blueprint.

To run the example you need to install the http feature of karaf. The default http port is 8080 and can be configured using the
config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base url of the cxf servlet is by default "/cxf".
It can be configured in the config pid "org.apache.cxf.osgi".

Differences in Talend ESB

If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

PersonService Example

The "business case" is to manage a list of persons. The service should provide the typical CRUD operations. Front ends should be a REST service, a SOAP service and a web UI.

The example consists of four projects

  • model: Person class and PersonService interface
  • server: Service implementation and logic to publish the service using jax-ws (SOAP)
  • proxy: Accesses the SOAP service and publishes it as an OSGi service
  • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

You can find the full source on github: https://github.com/cschneider/Karaf-Tutorial/tree/master/cxf/personservice/

Installation and test run

First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

Installing Karaf and preparing for CXF

We start with a fresh Karaf 4.0.4

Build and Test

Check out the project from github and build it using maven:

mvn clean install

Install service and ui in karaf

feature:repo-add cxf 3.1.5
feature:install http cxf-jaxws http-whiteboard
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-model/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-server/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-proxy/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-webui/1.0-SNAPSHOT

Test the service

The person service should show up in the list of currently installed services that can be found here http://localhost:8181/cxf/

Test the proxy and web UI

http://localhost:8181/personui

You should see the list of persons managed by the personservice and be able to add new persons.

How it works

Defining the model

The model project is a simple java maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies on cxf. The service interface is just a plain java interface with the @WebService annotation.

@WebService
public interface PersonService {
    public abstract Person[] getAll();
    public abstract Person getPerson(String id);
    public abstract void updatePerson(String id, Person person);
    public abstract void addPerson(Person person);
}

The Person class is just a simple pojo with getters and setters for id, name and url and the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB what xml element to use for the Person class.
There is also no special code for OSGi in this project. So the model works perfectly inside and outside of an OSGi container.
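As a sketch, a data class as described could look like this (JAXB annotations and the ObjectFactory are omitted here; the field names are taken from the text, the rest is an assumption):

```java
// Simple pojo for the person data with getters and setters for id, name and url.
// In the real model project this class additionally carries JAXB annotations.
class Person {
    private String id;
    private String name;
    private String url;

    public Person() {
    }

    public Person(String id, String name, String url) {
        this.id = id;
        this.name = name;
        this.url = url;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
}
```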

The service is defined java first. SOAP and rest are used quite transparently. This is very suitable to communicate between a client and server of the same application. If the service
is to be used by other applications the wsdl first approach is more suitable. In this case the model project should be configured to generate the data classes and service interface from
a wsdl (see cxf wsdl_first example pom file). For rest services the java first approach is quite common in general as the client typically does not use proxy classes anyway.

Service implementation (server)

PersonServiceImpl is a java class that implements the service interface. The server project also contains a small starter class that allows the service to be published directly from eclipse. This class is not necessary for deployment in karaf.

The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

As the file is in the special location OSGI-INF/blueprint, it is automatically processed by the blueprint implementation Aries in karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from spring, but apart from this the xml is very similar to a spring xml.
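The SOAP part of such a blueprint.xml can be sketched roughly like this (the class name and address are assumptions; the actual file in the project may differ):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

    <!-- service implementation as a blueprint bean -->
    <bean id="personServiceImpl"
          class="net.lr.tutorial.karaf.cxf.personservice.impl.PersonServiceImpl"/>

    <!-- publish the bean as a SOAP endpoint below the cxf servlet -->
    <jaxws:endpoint implementor="#personServiceImpl"
                    address="/personService"/>
</blueprint>
```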

Service proxy

The service proxy project only contains a blueprint xml that uses the CXF JAXWS client to consume the SOAP service and exports it as an OSGi service. Encapsulating the service client as an OSGi service (proxy project) is not strictly necessary, but it has the advantage that the webui is then completely independent of cxf, so it is very easy to change the way the service is accessed. This is considered a best practice in OSGi.

See blueprint.xml
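The essential content of that blueprint.xml can be sketched like this (the interface package and the address are assumptions, not copied from the project):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

    <!-- CXF client proxy for the remote SOAP service -->
    <jaxws:client id="personServiceClient"
                  serviceClass="net.lr.tutorial.karaf.cxf.personservice.PersonService"
                  address="http://localhost:8181/cxf/personService"/>

    <!-- export the client proxy as an OSGi service -->
    <service ref="personServiceClient"
             interface="net.lr.tutorial.karaf.cxf.personservice.PersonService"/>
</blueprint>
```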

Web UI (webui)

This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The pax web whiteboard extender will then publish the servlet on the location /personui.
The PersonServlet gets the PersonService injected and uses it to get all persons and also to add persons.

The wiring is done using a blueprint context.

 

PersonService REST

The personservice REST example is very similar to the SOAP one but it uses jaxrs to expose a REST service instead.

The example can be found in github Karaf-Tutorial cxf personservice-rest. It contains these modules:

  • personservice-model: Interface PersonService and Person dto
  • personservice-server: Implements the service and publishes it using blueprint
  • personservice-webui: Simple servlet UI to show and add persons
Build

mvn clean install

Install

feature:repo-add cxf 3.1.5
feature:install cxf-jaxrs http-whiteboard
install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-model/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-server/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-webui/1.0-SNAPSHOT

How it works

The interface of the service must contain jaxrs annotations to tell CXF how to map rest requests to the methods.

@Produces(MediaType.APPLICATION_XML)
public interface PersonService {
    @GET
    @Path("/")
    public Person[] getAll();

    @GET
    @Path("/{id}")
    public Person getPerson(@PathParam("id") String id);

    @PUT
    @Path("/{id}")
    public void updatePerson(@PathParam("id") String id, Person person);

    @POST
    @Path("/")
    public void addPerson(Person person);
}

In blueprint the implementation of the rest service needs to be published as a REST resource:

<bean id="personServiceImpl" class="net.lr.tutorial.karaf.cxf.personrest.impl.PersonServiceImpl"/>

<jaxrs:server address="/person" id="personService">
    <jaxrs:serviceBeans>
        <ref component-id="personServiceImpl" />
    </jaxrs:serviceBeans>
    <jaxrs:features>
        <cxf:logging />
    </jaxrs:features>
</jaxrs:server>

 

Test the service

The person service should show up in the list of currently installed services that can be found here
http://localhost:8181/cxf/

List the known persons

http://localhost:8181/cxf/person
This should show one person "chris"

Add a person

Now using a firefox extension like Poster or Httprequester you can add a person.

Send the following xml snippet:

<?xml version="1.0" encoding="UTF-8"?>
<person>
    <id>1001</id>
    <name>Christian Schneider</name>
    <url>http://www.liquid-reality.de</url>
</person>

with Content-Type: text/xml using PUT to http://localhost:8181/cxf/person/1001
or using POST to http://localhost:8181/cxf/person

Now the list of persons should show two persons.

Test the web UI

http://localhost:8181/personuirest

You should see the list of persons managed by the personservice and be able to add new persons.

Some further remarks

The example uses blueprint instead of spring dm as it works much better in an OSGi environment. The bundles are created using the maven bundle plugin. A fact that shows how well blueprint works
is that the maven bundle plugin is just used with default settings. In spring dm the imports have to be configured as spring needs access to many implementation classes of cxf. For spring dm examples
take a look at the Talend Service Factory examples (https://github.com/Talend/tsf/tree/master/examples).

The example shows that writing OSGi applications is quite simple with aries and blueprint. It needs only 153 lines of java code (without comments) for a complete little application.
The blueprint xml is also quite small and readable.

Back to Karaf Tutorials



Karaf Tutorial Part 5 - Running Apache Camel integrations in OSGi

Christian Schneider - Mon, 03/28/2016 - 10:33


Shows how to run your camel routes in the OSGi server Apache Karaf. Like for CXF, blueprint is used to boot up camel. The tutorial shows three examples - a simple blueprint route, a jms2rest adapter and an order processing example.

Installing Karaf and making Camel features available
  • Download Karaf 4.0.4 and unpack to the file system
  • Start bin\karaf.bat or bin/karaf for unix

In Karaf type:

feature:repo-add camel 2.16.2
feature:list

You should see the camel features that are now ready to be installed.

Getting and building the examples

You can find the examples for this tutorial on github Karaf Tutorial - camel.

So either clone the git repo or just download and unpack the zip of it. To build the code do:

cd camel
mvn clean install

Starting simple with a pure blueprint deployment

Our first example does not even require a java project. In Karaf it is possible to deploy pure blueprint xml files. As camel is well integrated with blueprint you can define a complete camel context with routes in a simple blueprint file.

simple-camel-blueprint.xml

The blueprint xml for a camel context is very similar to the same in spring. Mainly the namespaces are different. Blueprint discovers the dependency on camel, so it requires that at least the camel-blueprint feature is installed. The camel components in routes are discovered as OSGi services. So as soon as a camel component is installed using the respective feature it is automatically available for usage in routes.
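The content of simple-camel-blueprint.xml itself is not reproduced here. A minimal version could look like this sketch (reconstructed; the route id is illustrative, the timer period matches the 5 second interval described below):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route id="hello">
      <from uri="timer:hello?period=5000"/>
      <setBody>
        <constant>Hello Camel</constant>
      </setBody>
      <to uri="stream:out"/>
    </route>
  </camelContext>
</blueprint>
```

Dropping a file of this shape into the deploy folder is all that is needed.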

So to install the above blueprint based camel integration you only have to do the following steps:

feature:install camel-blueprint camel-stream

Copy simple-camel-blueprint.xml to the deploy folder of karaf. You should now see "Hello Camel" written to the console every 5 seconds.

The blueprint file will be automatically monitored for changes so any changes we do are directly reflected in Karaf. To try this open the simple-camel-blueprint.xml file from the deploy folder in an editor, change "stream:out" to "log:test" and save. Now the messages on the console should stop and instead you should be able to see "Hello Camel" in the Karaf log file formatted as a normal log line.

JMS to REST Adapter (jms2rest)


This example is not completely standalone. As a prerequisite install the person service example as described in Karaf Tutorial 4.

The example shows how to create a bridge from the messaging world to a REST service. It is simple enough that it could be done in a pure blueprint file like the example above. As any bigger integration needs some java code I opted to use a java project for that case.

Like most times we mainly use the maven bundle plugin with defaults and the packaging type bundle to make the project OSGi ready. The camel context is booted up using a blueprint file blueprint.xml and the routes are defined in the java class Jms2RestRoute.

Routes

The first route watches the directory "in" and writes the content of any file placed there to the jms queue "person". It is not strictly necessary but makes it much simpler to test the example by hand.

The second route is the real jms2rest adapter. It listens on the jms queue person and expects xml content with persons like the one used in the PersonService. In the route the id of the person is extracted from the xml and stored in a camel message header. This header is then used to build the rest uri. As a last step the content of the message is sent to the rest uri with a PUT request. So this tells the service to store the person with the given id and data.
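The xpath extraction step can be illustrated stand-alone with plain JAXP; the helper name and the xml shape (taken from the person xml used in tutorial 4) are assumptions of this sketch, not the real route code:

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class ExtractIdDemo {

    // Pull the person id out of the xml payload, as the route does with an xpath expression
    static String extractId(String personXml) {
        try {
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate("/person/id", new InputSource(new StringReader(personXml)));
        } catch (XPathExpressionException e) {
            throw new IllegalArgumentException("Invalid person xml", e);
        }
    }

    public static void main(String[] args) {
        String xml = "<person><id>1001</id><name>Christian Schneider</name></person>";
        // The id would then be used to build the rest uri for the PUT request
        System.out.println("PUT http://localhost:8181/cxf/person/" + extractId(xml));
    }
}
```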

Use of Properties

Besides the pure route the example shows some more typical things you need in camel. For example it is good practice to externalize the url of services we access. Camel uses the Properties component for this task.

This enables us to write {{personServiceUri}} in endpoints or ${properties:personServiceUri} in the simple language.
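Conceptually the placeholder substitution works like this stand-alone sketch (the resolve method is illustrative, not Camel API; Camel's Properties component does this internally):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{([^}]+)\\}\\}");

    // Replace every {{key}} in the input with the value from the given properties
    static String resolve(String input, Map<String, String> props) {
        Matcher m = PLACEHOLDER.matcher(input);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(props.get(m.group(1))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String uri = resolve("{{personServiceUri}}/1001",
                Map.of("personServiceUri", "http://localhost:8181/cxf/person"));
        System.out.println(uri); // http://localhost:8181/cxf/person/1001
    }
}
```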

In a blueprint context the Properties component is automatically aware of injected properties from the config admin service. We use a cm:property-placeholder definition to inject the attributes of the config admin pid "net.lr.tutorial.karaf.cxf.personservice". As there might be no such pid we also define a default value for the personServiceUri so the integration can be deployed without further configuration.

JMS Component

We are using the camel jms component in our routes. This is one of the few components that need further configuration to work. We also do this in the blueprint context by defining a JmsComponent and injecting a connection factory into it. In OSGi it is good practice not to define connection factories or data sources directly in the bundle; instead we simply refer to them using an OSGi service reference.

Deploying and testing the jms2rest Adapter

Just type the following in Karaf:

feature:repo-add activemq 5.12.2
feature:repo-add camel 2.16.2
feature:install camel-blueprint camel-jms camel-http camel-saxon activemq-broker jms
jms:create -t activemq localhost
install -s mvn:net.lr.tutorial.karaf.camel/example-jms2rest/1.0-SNAPSHOT

This installs the activemq and camel feature repositories and features in karaf. The jms:create command creates a broker definition in the deploy folder. This broker is then automatically started. The broker definition also publishes an OSGi service for a suitable connection factory. This is then referenced later by our bundle.

As a last step we install our own bundle with the camel route.

Now the route should be visible when typing:

> camel:route-list
 Route Id          Context Name   Status
[file2jms       ] [jms2rest     ] [Started]
[personJms2Rest ] [jms2rest     ] [Started]

Now copy the file src/test/resources/person1.xml to the folder "in" below the karaf directory. The file should be sent to the queue person by the first route and then sent to the rest service by the second route.

In case the personservice is installed you should now see a message like "Update request received for ...". In case it is not installed you should see a 404 in the karaf log when accessing the rest service.

Order processing example

The business case in this example is a shop that partly works with external vendors.

We receive an order as an xml file (See: order1.xml). The order contains a customer element and several item elements. Each item specifies a vendor. This can either be "direct", when we deliver the item ourselves, or an external vendor name. If the item vendor is "direct" then the item should be exported to a file in a directory with the customer name. All other items are sent out by mail. The mail content should be customizable. The mail address has to be fetched from a service that maps vendor name to mail address.

How it works

This example again uses maven to build, a blueprint.xml context to boot up camel and a java class OrderRouteBuilder for the camel routes. So from an OSGi perspective it works almost the same as the jms2rest example.

The routes are defined in net.lr.tutorial.karaf.camel.order.OrderRouteBuilder. The "order" route listens on the directory "orderin" and expects xml order files to be placed there. The route uses xpath to extract several attributes of the order into message headers. A splitter is used to handle each item (/order/item) separately. Then a content based router is used to handle "direct" items differently from others.

In the case of a direct item the recipientlist pattern is used to build the destination folder dynamically using a simple language expression.

recipientList(simple("file:ordersout/${header.customer}"))

If the vendor is not "direct" then the route "mailtovendor" is called to create and send a mail to the vendor. The subject and to address are set using special header names that the mail component understands. The content of the mail is expected in the message body. As the body also should be configurable the velocity component is used to fill the mailtemplate.txt with values from the headers that were extracted before.
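The splitter and content based router logic can be sketched stand-alone with plain DOM parsing; the xml shape (vendor as an attribute on item) and the destination strings are assumptions of this sketch, not the real route code:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class OrderRoutingDemo {

    // Split the order into items and pick a destination per item,
    // mirroring the splitter plus content based router of the camel route
    static List<String> route(String orderXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new InputSource(new StringReader(orderXml)));
            String customer = doc.getElementsByTagName("customer").item(0).getTextContent();
            List<String> destinations = new ArrayList<>();
            NodeList items = doc.getElementsByTagName("item");
            for (int i = 0; i < items.getLength(); i++) {
                Element item = (Element) items.item(i);
                if ("direct".equals(item.getAttribute("vendor"))) {
                    // like recipientList(simple("file:ordersout/${header.customer}"))
                    destinations.add("file:ordersout/" + customer);
                } else {
                    destinations.add("mailtovendor:" + item.getAttribute("vendor"));
                }
            }
            return destinations;
        } catch (Exception e) {
            throw new IllegalArgumentException("Invalid order xml", e);
        }
    }
}
```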

Deploy into karaf

The deployment is also very similar to the previous example but a little simpler as we do not need jms. Type the following in karaf

feature:repo-add camel 2.16.2
feature:install camel-blueprint camel-mail camel-velocity camel-stream
install -s mvn:net.lr.tutorial.karaf.camel/example-order/1.0-SNAPSHOT

To be able to receive the mail you have to edit the configuration pid. You can either do this by placing a properties file
into etc/net.lr.tutorial.karaf.cxf.personservice.cfg or editing the config pid using the karaf webconsole. (See part 2 and part 3 of the Karaf Tutorial series).

Basically you have to set these two properties according to your own mail environment.

mailserver=yourmailserver.com
testVendorEmail=youmail@yourdomain.com

Test the order example

Copy the file order1.xml into the folder "ordersin" below the karaf dir.

The Karaf console will show:

Order from Christian Schneider Count: 1, Article: Flatscreen TV

The same should be in a mail in your inbox. At the same time a file should be created in ordersout/Christian Schneider/order1.xml that contains the book item.

Wrapping it up and outlook

The examples show that fairly sophisticated integrations can be done using camel and be nicely deployed in an Apache Karaf container. The examples also show some best practices around configuration management, jms connection factories and templates for customization. They should also provide a good starting point for your own integration projects. Many people are a bit hesitant about using OSGi in production. I hope these simple examples can show how easy this is in practice. Still, problems can arise of course. For that case it is advisable to think about getting a support contract from a vendor like Talend. The whole Talend Integration portfolio is based on Apache Karaf so we are quite experienced in this area.

I have left out one big use case for Apache Camel in this tutorial - Database integrations. This is a big area and warrants a separate tutorial that will soon follow. There I will also explain how to handle DataSources and Connection Factories with drivers that are not already OSGi compliant.

Back to Karaf Tutorials


Apache Karaf Tutorial Part 8 - Distributed OSGi

Christian Schneider - Tue, 02/02/2016 - 08:54

Blog post edited by Christian Schneider - "Updated to karaf 4"

By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows you to define services in one container and use them in another (even across machine boundaries).

For this tutorial we use the DOSGi sub project of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 in the OSGi 4.2 Enterprise Specification).

Example on github

Introducing the example

Following the hands on nature of these tutorials we start with an example that can be tried in a few minutes; the details are explained later.

Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, the model and UI on container B, and install the dosgi runtime on both containers.

As DOSGi should not be active for all services on a system, the spec defines that the service property "osgi.remote.interfaces" controls whether DOSGi processes the service. It expects the interface names that this service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets the property so the service is exported with defaults.
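As a sketch, the property can be set when registering the service programmatically. The registerService call itself needs a running OSGi framework and is therefore only shown as a comment (TaskService/TaskServiceImpl are the names of the tasklist example):

```java
import java.util.Hashtable;

public class RemoteServiceProps {

    // Build the service properties that mark a service for remote export.
    // "*" means: export all interfaces the service implements.
    static Hashtable<String, Object> remoteProps(String interfaces) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("osgi.remote.interfaces", interfaces);
        return props;
    }

    public static void main(String[] args) {
        Hashtable<String, Object> props = remoteProps("*");
        // In a real bundle activator this would be (OSGi API, sketched only):
        // context.registerService(TaskService.class.getName(), new TaskServiceImpl(), props);
        System.out.println(props);
    }
}
```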

Installing the service

To keep things simple we will install container A and B on the same system.

Install Service

config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
feature:repo-add cxf-dosgi 1.7.0
feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
feature:install example-tasklist-persistence

After these commands the tasklist persistence service should be running and be published on zookeeper.

You can check the wsdl of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl

By starting the zookeeper client zkCli.sh from a zookeeper distro you can optionally check that there is a node for the service below the osgi path.

Installing the UI
  • Unpack a fresh Karaf into folder container_b
  • Start bin/karaf

 

Install Client

config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
feature:repo-add cxf-dosgi 1.7.0
feature:install cxf-dosgi-discovery-distributed
feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
feature:install example-tasklist-ui

 

The tasklist client ui should be in status Active/Created and the servlet should be available on http://localhost:8182/tasklist. If the ui bundle stays in status graceperiod then DOSGi did not provide a local proxy for the persistence service.

How does it work

The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found then the service is either exported with the named interfaces or with all interfaces it implements. The way the export works can be fine tuned using the CXF DOSGi configuration options.

By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.

The service information is then also propagated using the DOSGi discovery. In the example we use the Zookeeper discovery implementation, so the service metadata is written to a zookeeper server.

The container_b will monitor the local container for needed services. It will then check if a needed service is available on the discovery impl (on the zookeeper server in our case). For each service it finds it will create a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.


Enterprise ready request logging with CXF 3.1.0 and elastic search

Christian Schneider - Mon, 11/30/2015 - 15:24

Blog post edited by Christian Schneider

You may already know the old CXF LoggingFeature (org.apache.cxf.feature.LoggingFeature). You added it to a JAXWS endpoint to enable logging for a CXF endpoint at compile time.

While this already helped a lot it was not really enterprise ready. The logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

Logging feature in CXF 3.1.0

In CXF 3.1 this code was moved into a separate module and gathered some new features.

  • Auto logging for existing CXF endpoints
  • Uses slf4j MDC to log meta data separately
  • Adds meta data for Rest calls
  • Adds MD5 message id and exchange id for correlation
  • Simple interface for writing your own appenders
  • Karaf decanter support to log into elastic search
Manual Usage

CXF LoggingFeature

<jaxws:endpoint ...>
  <jaxws:features>
    <bean class="org.apache.cxf.ext.logging.LoggingFeature"/>
  </jaxws:features>
</jaxws:endpoint>

Auto logging for existing CXF endpoints in Apache Karaf

Simply install and enable the new logging feature:

Logging feature in karaf

feature:repo-add cxf 3.1.0
feature:install cxf-features-logging
config:property-set -p org.apache.cxf.features.logging enabled true

Then install CXF endpoints like always. For example install the PersonService from the Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still the new logging feature will enhance the clients and endpoints and log all SOAP and Rest calls using slf4j. So the logging data will be processed by pax logging and by default end up in your karaf log.

A log entry looks like this:

Sample Log entry

2015-06-08 16:35:54,068 | INFO | qtp1189348109-73 | REQ_IN | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:addPerson xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"><arg0><id>3</id><name>Test2</name><url></url></arg0></ns2:addPerson></soap:Body></soap:Envelope>

This does not look very informative. You only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information though. You just need to configure the pax logging config to show it.

Slf4j MDC values for meta data

This is the raw logging information you get for a SOAP call:

Field | Value
@timestamp | 2015-06-08T14:43:27,097Z
MDC.address | http://localhost:8181/cxf/personService
MDC.bundle.id | 90
MDC.bundle.name | org.apache.cxf.cxf-rt-features-logging
MDC.bundle.version | 3.1.0
MDC.content-type | text/xml; charset=UTF-8
MDC.encoding | UTF-8
MDC.exchangeId | 56b037e3-d254-4fe5-8723-f442835fa128
MDC.headers | {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
MDC.httpMethod | POST
MDC.messageId | a46eebd2-60af-4975-ba42-8b8205ac884c
MDC.portName | PersonServiceImplPort
MDC.portTypeName | PersonService
MDC.serviceName | PersonServiceImplService
MDC.type | REQ_IN
level | INFO
loc.class | org.apache.cxf.ext.logging.slf4j.Slf4jEventSender
loc.file | Slf4jEventSender.java
loc.line | 55
loc.method | send
loggerClass | org.ops4j.pax.logging.slf4j.Slf4jLogger
loggerName | org.apache.cxf.services.PersonService.REQ_IN
message | <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:getAll xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"/></soap:Body></soap:Envelope>
threadName | qtp80604361-78
timeStamp | 1433774607097

Some things to note:

  • The logger name is <service namespace>.<ServiceName>.<type>. Karaf's default log layout cuts it down to just the type.
  • A lot of the details are in the MDC values

You need to change your pax logging config to make these visible.

You can use the logger name to fine tune which services you want to log this way. For example set the log level to WARN for noisy services to avoid that they are logged, or log some services to another file.

Message id and exchange id

The messageId allows you to uniquely identify messages even if you collect them from several servers. It is also transported over the wire so you can correlate a request sent on one machine with the request received on another machine.

The exchangeId will be the same for an incoming request and the response sent out, or on the other side for an outgoing request and the response for it. This allows you to correlate requests and responses and so follow the conversations.
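Conceptually, correlating collected log events by exchange id is a simple grouping step; the sketch below is stand-alone illustration code, not CXF API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CorrelationDemo {

    // Minimal stand-in for a collected log event; field names follow the MDC keys above
    static class Event {
        final String exchangeId;
        final String type; // REQ_IN, RESP_OUT, ...

        Event(String exchangeId, String type) {
            this.exchangeId = exchangeId;
            this.type = type;
        }
    }

    // Group events by exchange id so a request and its response end up together
    static Map<String, List<String>> correlate(List<Event> events) {
        Map<String, List<String>> byExchange = new LinkedHashMap<>();
        for (Event e : events) {
            byExchange.computeIfAbsent(e.exchangeId, k -> new ArrayList<>()).add(e.type);
        }
        return byExchange;
    }
}
```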

Simple interface to write your own appenders

Write your own LogSender and set it on the LoggingFeature to do custom logging. You have access to all meta data from the class LogEvent.

So for example you could write your logs to one file per message or to JMS.
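A minimal sketch of such an appender; the LogEvent and LogEventSender types are re-declared stand-ins for the real CXF classes in org.apache.cxf.ext.logging.event, so the field names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins for CXF's LogEvent / LogEventSender (the real types live in
// org.apache.cxf.ext.logging.event); re-declared so the sketch compiles on its own
class LogEvent {
    String type;      // e.g. REQ_IN, RESP_OUT
    String messageId;
    String payload;
}

interface LogEventSender {
    void send(LogEvent event);
}

// A custom sender that turns every event into one formatted line;
// a real implementation could write a file per message or send to JMS instead
class ListLogSender implements LogEventSender {
    final List<String> lines = new ArrayList<>();

    @Override
    public void send(LogEvent event) {
        lines.add(event.type + " " + event.messageId + " " + event.payload);
    }
}
```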

Karaf decanter support to write into elastic search

Many people use elastic search for their logging. Fortunately you do not have to write a special LogSender for this purpose. The standard CXF logging feature will already work.

It works like this:

  • CXF sends the messages as slf4j events which are processed by pax logging
  • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
  • Karaf Decanter ElasticSearchAppender sends the log events to a configurable elastic search instance

As Decanter also provides features for a local elastic search and kibana instance you are ready to go in just minutes.

Installing Decanter for CXF Logging

feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/3.0.0-SNAPSHOT/xml/features
feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana


After that open a browser at http://localhost:8181/kibana. When decanter is released kibana will be fully set up. At the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

Then you should see your cxf messages like this:

Kibana easily allows you to filter for specific services and correlate requests and responses.

This is just a preview of decanter. I will do a more detailed post when the first release is out.

 


Karaf Tutorial Part 4 - CXF Services in OSGi

Christian Schneider - Wed, 08/05/2015 - 10:44

Blog post edited by Christian Schneider

Shows how to publish and use a simple REST and SOAP service in karaf using cxf and blueprint.

To run the example you need to install the http feature of karaf. The default http port is 8181 and can be configured using the
config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base url of the cxf servlet is by default "/cxf".
It can be configured in the config pid "org.apache.cxf.osgi".

Differences in Talend ESB


If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

PersonService Example

The "business case" is to manage a list of persons. As service should provide the typical CRUD operations. Front ends should be a REST service, a SOAP service and a web UI.

The example consists of four projects

  • model: Person class and PersonService interface
  • server: Service implementation and logic to publish the service using REST and SOAP
  • proxy: Accesses the SOAP service and publishes it as an OSGi service
  • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

You can find the full source on github: https://github.com/cschneider/Karaf-Tutorial/tree/master/cxf/personservice

Installation and test run

First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

Installing Karaf and preparing for CXF

We start with a fresh Karaf 2.3.1.

Installing CXF

In Karaf Console run

features:chooseurl cxf 2.7.4
features:install http cxf

Changes in commands for karaf 3

  • features:chooseurl -> feature:repo-add
  • features:install -> feature:install
Build and Test

Checkout the project from github and build using maven

> mvn clean install

Install service and ui in karaf

install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-model/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-server/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-proxy/1.0-SNAPSHOT
install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-webui/1.0-SNAPSHOT

Test the service

The person service should show up in the list of currently installed services that can be found here: http://localhost:8181/cxf/

List the known persons: http://localhost:8181/cxf/person
This should show one person "chris".

Now using a firefox extension like Poster or Httprequester you can add a person.

Send the following xml snippet:

<?xml version="1.0" encoding="UTF-8"?>
<person>
  <id>1001</id>
  <name>Christian Schneider</name>
  <url>http://www.liquid-reality.de</url>
</person>

with Content-Type: text/xml using PUT: http://localhost:8181/cxf/person/1001
or to this url using POST: http://localhost:8181/cxf/person

Now the list of persons should show two persons.

Test the proxy and web UI

http://localhost:8181/personui

You should see the list of persons managed by the personservice and be able to add new persons.

How it works

Defining the model

The model project is a simple java maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies to cxf. The service interface is just a plain java interface with the @WebService annotation.

@WebService
public interface PersonService {
    Person[] getAll();
    Person getPerson(String id);
    void updatePerson(String id, Person person);
    void addPerson(Person person);
}

The Person class is just a simple pojo with getters and setters for id, name and url and the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB what xml element to use for the Person class.
There is also no special code for OSGi in this project. So the model works perfectly inside and outside of an OSGi container.


The service is defined java first. SOAP and rest are used quite transparently. This is very suitable for communication between a client and server of the same application. If the service
is to be used by other applications the wsdl first approach is more suitable. In this case the model project should be configured to generate the data classes and service interface from
a wsdl (see the cxf wsdl_first example pom file). For rest services the java first approach is quite common in general as the client typically does not use proxy classes anyway.

Service implementation (server)

PersonServiceImpl is a java class that implements the service interface and contains some additional JAX-RS annotations. The way the class is defined allows it to implement a REST service and a SOAP service at the same time.

The server project also contains a small starter class that allows the service to be published directly from eclipse. This class is not necessary for deployment in karaf.

The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

As the file is in the special location OSGI-INF/blueprint it is automatically processed by the blueprint implementation aries in karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from spring but apart from this the xml is very similar to a spring xml.
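A sketch of what this blueprint.xml roughly looks like (the bean class and the addresses are assumptions based on the urls used in this tutorial; see the server project for the real file):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs"
           xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

  <bean id="personServiceImpl"
        class="net.lr.tutorial.karaf.cxf.personservice.impl.PersonServiceImpl"/>

  <jaxrs:server address="/person">
    <jaxrs:serviceBeans>
      <ref component-id="personServiceImpl"/>
    </jaxrs:serviceBeans>
  </jaxrs:server>

  <jaxws:endpoint address="/personService" implementor="#personServiceImpl"/>
</blueprint>
```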

Service proxy

The service proxy project only contains a blueprint xml that uses the CXF JAXWS client to consume the SOAP service and exports it as an OSGi Service. Encapsulating the service client as an OSGi service (proxy project) is not strictly necessary but it has the advantage that the webui is then completely independent of cxf, so it is very easy to change the way the service is accessed. This is considered a best practice in OSGi.

See blueprint.xml

Web UI (webui)

This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The pax web whiteboard extender will then publish the servlet on the location /personui.
The PersonServlet gets the PersonService injected and uses it to get all persons and also to add persons.

The wiring is done using a blueprint context.

Some further remarks

The example uses blueprint instead of spring dm as it works much better in an OSGi environment. The bundles are created using the maven bundle plugin. One sign of how well blueprint works
is that the maven bundle plugin is used with its default settings. In spring dm the imports have to be configured as spring needs access to many implementation classes of cxf. For spring dm examples
take a look at the Talend Service Factory examples (https://github.com/Talend/tsf/tree/master/examples).

The example shows that writing OSGi applications is quite simple with aries and blueprint. It needs only 153 lines of java code (without comments) for a complete little application.
The blueprint xml is also quite small and readable.

Back to Karaf Tutorials


Apache Karaf Tutorial Part 6 - Database Access

Christian Schneider - Tue, 07/28/2015 - 11:13

Blog post edited by Christian Schneider

Shows how to access databases from OSGi applications running in Karaf and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally JDBC and JPA examples show how to use such a DataSource from user code.

Prerequisites

You need an installation of apache karaf 3.0.3 for this tutorial.

Example sources

The example projects are on github Karaf-Tutorial/db.

Drivers and DataSources

In plain java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work as the ClassLoader of your bundle will have no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver and publish it as an OSGi service. The user bundle should then only use the DataSource without knowing the driver specifics. This is quite similar to the best practice in application servers where the DataSource is managed by the server and published to jndi.

So we need to learn how to create and use DataSources first.

The DataSourceFactory services

To make it easier to create DataSources in OSGi the specs define a DataSourceFactory interface. It allows you to create a DataSource for a specific driver from properties. Each database driver is expected to implement this interface and publish it with properties for the driver class name and the driver name.
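As a sketch, the shape of this interface looks roughly as follows. This is a simplified re-declaration for illustration, not the real org.osgi.service.jdbc import; check the OSGi spec for the complete contract:

```java
import java.sql.Driver;
import java.sql.SQLException;
import java.util.Properties;
import javax.sql.DataSource;
import javax.sql.XADataSource;

// Simplified re-declaration of the shape of org.osgi.service.jdbc.DataSourceFactory;
// drivers publish an implementation of this as an OSGi service
public interface DataSourceFactory {
    String OSGI_JDBC_DRIVER_CLASS = "osgi.jdbc.driver.class";
    String OSGI_JDBC_DRIVER_NAME = "osgi.jdbc.driver.name";
    String JDBC_URL = "url";
    String JDBC_USER = "user";
    String JDBC_PASSWORD = "password";

    DataSource createDataSource(Properties props) throws SQLException;
    XADataSource createXADataSource(Properties props) throws SQLException;
    Driver createDriver(Properties props) throws SQLException;
}
```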

Introducing pax-jdbc

The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

  • Implement the DataSourceFactory service for Databases that do not create this service directly
  • Implement a pooling and XA wrapper for XADataSources (This is explained at the pax jdbc docs)
  • Provide a facility to create DataSource services from config admin configurations
  • Provide karaf features for many databases as well as for the above additional functionality

So it covers everything you need from driver installation to creation of production quality DataSources.

Installing the driver

The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

For several databases pax-jdbc already provides karaf features to install a current version of the database driver.

For H2 the following commands will work

feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
service:list DataSourceFactory

Strictly speaking we would only need the pax-jdbc-h2 feature but we will need the others for the next steps.

This will install the pax-jdbc feature repository and the h2 database driver. This driver already implements the DataSourceFactory so the last command will display this service.

DataSourceFactory

[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
 osgi.jdbc.driver.class = org.h2.Driver
 osgi.jdbc.driver.name = H2
 osgi.jdbc.driver.version = 1.3.172
 service.id = 691
Provided by :
 H2 Database Engine (68)

The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

pooled and XA DataSourceFactory

[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
 osgi.jdbc.driver.class = org.h2.Driver
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jdbc.driver.version = 1.3.172
 pooled = true
 service.id = 694
 xa = true
Provided by :
 OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

Technically this DataSourceFactory also creates DataSource objects but internally they manage XA support and pooling. So we want to use this one for our later code examples.

Creating the DataSource

Now we just need to create a configuration with the correct factory pid to create a DataSource as a service

So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content

config for DataSource
osgi.jdbc.driver.name=H2-pool-xa
url=jdbc:h2:mem:person
dataSourceName=person

The config will automatically trigger the pax-jdbc-config module to create a DataSource.

  • The property osgi.jdbc.driver.name=H2-pool-xa selects the H2 DataSourceFactory with pooling and XA support we previously installed.
  • The url configures H2 to create a simple in memory database named person.
  • The dataSourceName will be reflected in a service property of the DataSource so we can find it later
  • You could also set pooling configurations in this config but we leave it at the defaults
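For illustration, pooling settings could be added to the same file. The property names below are assumptions based on the pax-jdbc docs and the commons-dbcp2 pool options, so verify them against the pax-jdbc version you use:

```
# hypothetical pooling tuning (key names assumed from pax-jdbc / dbcp2 docs)
pool.maxTotal=8
pool.minIdle=2
```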

DataSource
karaf@root()> service:list DataSource
[javax.sql.DataSource]
----------------------
 dataSourceName = person
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = person
 service.factoryPid = org.ops4j.datasource
 service.id = 696
 service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
 url = jdbc:h2:mem:person
Provided by :
 OPS4J Pax JDBC Config (69)

So when we search for services implementing the DataSource interface we find the person datasource we just created.

When we installed the features above we also installed the aries jndi feature. This module maps OSGi services to jndi objects. So we can also use jndi to retrieve the DataSource which will be used in the persistence.xml for jpa later.
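As an illustration, a persistence.xml could later reference this DataSource through its jndi url. This is a minimal sketch assuming a JTA persistence unit; the unit name is chosen to match the example:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="person" transaction-type="JTA">
        <jta-data-source>osgi:service/person</jta-data-source>
    </persistence-unit>
</persistence>
```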

jndi url of DataSource
osgi:service/person

Karaf jdbc commands

Karaf contains some commands to manage DataSources and do queries on databases. The commands for managing DataSources in karaf 3.x still work with the older approach of using blueprint to create DataSources. So we will not use these commands but we can use the functionality to list datasources, list tables and execute queries.

jdbc commands
feature:install jdbc
jdbc:datasources
jdbc:tables person

We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
jdbc:query person "select * from person"

This creates a table person, adds a row to it and shows the table.

The output should look like this

select * from person
NAME                | TWITTERNAME
--------------------------------------
Christian Schneider | @schneider_chris

Accessing the database using JDBC

The project db/examplejdbc shows how to use the datasource we installed and execute jdbc commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class
DbExample. The test method is then called as init method and runs some jdbc statements on the DataSource. The DbExample class is completely independent of OSGi and can be easily tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.

Build and install

Build works like always using maven

> mvn clean install

In Karaf we just need our own bundle as we have no special dependencies

> install -s mvn:net.lr.tutorial.karaf.db/db-examplejdbc/1.0-SNAPSHOT
Using datasource H2, URL jdbc:h2:~/test
Christian Schneider, @schneider_chris,

After installation the bundle should directly print the db info and the persisted person.

Accessing the database using JPA

For larger projects often JPA is used instead of hand crafted SQL. Using JPA has two big advantages over JDBC.

  1. You need to maintain less SQL code
  2. JPA provides dialects for the subtle differences between databases that you would otherwise have to handle yourself.

For this example we use Hibernate as the JPA Implementation. On top of it we add Apache Aries JPA which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

The project examplejpa shows a simple project that implements a PersonService managing Person objects.
Person is just a java bean annotated with JPA @Entity.

Additionally the project implements two Karaf shell commands person:add and person:list that allow you to easily test the project.

persistence.xml

Like in a typical JPA project the persistence.xml defines the DataSource lookup, database settings and lists the persistent classes. The datasource is referred to using the jndi name "osgi:service/person".

The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven bundle plugin in the pom. The Aries JPA container will scan for these attributes
and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.
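A minimal sketch of how such an instruction could look in the maven-bundle-plugin configuration; the path shown is the usual default and may differ in the actual pom:

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
        </instructions>
    </configuration>
</plugin>
```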

blueprint.xml

We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
The following snippet is the interesting part:

<bean id="personService" class="net.lr.tutorial.karaf.db.examplejpa.impl.PersonServiceImpl">
    <jpa:context property="em" unitname="person" />
    <tx:transaction method="*" value="Required"/>
</bean>

This makes a lookup for the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread safe EntityManager (using a ThreadLocal under the hood) into the
PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.
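The ThreadLocal trick mentioned above can be sketched in plain Java. This is an illustrative stand-alone demo where a plain Integer stands in for the per-thread EntityManager; it is not the blueprint implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Each thread that touches the "session" lazily gets its own instance,
// just like the injected EntityManager delegates to one EM per thread.
public class ThreadLocalDemo {
    static final AtomicInteger counter = new AtomicInteger();
    static final ThreadLocal<Integer> session =
            ThreadLocal.withInitial(counter::incrementAndGet);

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> System.out.println("session " + session.get());
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

Both threads see a distinct value, while the single `session` field can safely be shared.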

Build and Install

Build

mvn clean install

A prerequisite is that the datasource is installed like described above. Then we have to install the bundles for hibernate, aries jpa, transaction, jndi and of course our db-examplejpa bundle.
See ReadMe.txt for the exact commands to use.

Test

person:add 'Christian Schneider' @schneider_chris

Then we list the persisted persons

karaf@root> person:list
Christian Schneider, @schneider_chris

Summary

In this tutorial we learned how to work with databases in Apache Karaf. We installed drivers for our database and a DataSource. We were able to check and manipulate the DataSource using the jdbc:* commands. In the examplejdbc we learned how to acquire a datasource
and work with it using plain jdbc. Last but not least we also used jpa to access our database.

Back to Karaf Tutorials

View Online
Categories: Christian Schneider

Karaf Tutorial Part 1 - Installation and First application

Christian Schneider - Thu, 07/02/2015 - 18:06

Blog post edited by Christian Schneider

Getting Started

With this post I am beginning a series of posts about Apache Karaf. So what is Karaf and why should you be interested in it? Karaf is an OSGi container based on Equinox or Felix. The main difference to these fine containers is that it brings excellent management features with it.

Outstanding features of Karaf:

  • Extensible Console with Bash like completion features
  • ssh console
  • deployment of bundles and features from maven repositories
  • easy creation of new instances from command line

All together these features make developing server based OSGi applications almost as easy as regular java applications. Deployment and management is on a level that is much better than any application server I have seen so far. All this is combined with a small footprint of both Karaf itself and the resulting applications. In my opinion this allows a light weight development style like JEE 6 together with the flexibility of spring applications.

Installation and first startup
  • Download Karaf 3.0.3 from the Karaf web site.
  • Extract and start with bin/karaf

You should see the welcome screen:

        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (3.0.3)

Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

karaf@root()>

Some handy commands

Command | Description
la | Shows all installed bundles
service:list | Shows the active OSGi services. This list is quite long. Here it is quite handy that you can use unix pipes like "ls | grep admin"
exports | Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
feature:list | Shows which features are installed and can be installed.
feature:install webconsole | Installs features (a list of bundles and other features). Using this command we install the Karaf webconsole. It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.
log:tail | Shows the log. Use ctrl-c to go back to the console.
Ctrl-d | Exits the console. If this is the main console karaf will also be stopped.

OSGi containers preserve state after restarts


Please note that Karaf, like all osgi containers, maintains its last state of installed and started bundles. So if something does not work anymore a restart is not sure to help. To really start fresh again stop karaf and delete the data directory.

Check the logs


Karaf is very silent. To not miss error messages always keep a tail -f data/karaf.log open!

Tasklist - A small osgi application

Without any useful application Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy and
maven can help a lot. The difference to a normal maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

Get the source code

Import into Eclipse

  • Start Eclipse 
  • In Eclipse Package explorer: Import -> Existing maven project -> Browse to the extracted directory into the tasklist sub dir
  • Eclipse will show all maven projects it finds
  • Click through to import with defaults

Eclipse will now import the projects and wire all dependencies using m2eclipse.

The tasklist example consists of three projects

Module | Description
tasklist-model | Service interface and Task class
tasklist-persistence | Simple persistence implementation that offers a TaskService
tasklist-ui | Servlet that displays the tasklist using a TaskService
tasklist-features | Features descriptor for the application that makes installing in Karaf very easy

Tasklist-persistence

This project contains the domain model and the service implementation. The model is the Task class and a TaskService interface. The persistence implementation TaskServiceImpl manages tasks in a simple HashMap.
The TaskService is published as an OSGi service using a blueprint context. Blueprint is an OSGi standard for dependency injection and is very similar to a spring context.
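Stripped of OSGi, such a HashMap based service can be sketched like this. The Task class is reduced to a plain title string here, and the names are illustrative rather than the tutorial code:

```java
import java.util.HashMap;
import java.util.Map;

public class TaskServiceDemo {
    // minimal stand-in for TaskServiceImpl: tasks live in a simple HashMap
    static class TaskService {
        private final Map<Integer, String> tasks = new HashMap<>();
        void addTask(int id, String title) { tasks.put(id, title); }
        String getTask(int id) { return tasks.get(id); }
    }

    public static void main(String[] args) {
        TaskService service = new TaskService();
        service.addTask(1, "Write tutorial");
        System.out.println(service.getTask(1));
    }
}
```

The blueprint context below then only has to instantiate such a class and publish it as a service.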

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="taskService" class="net.lr.tasklist.persistence.impl.TaskServiceImpl" />
    <service ref="taskService" interface="net.lr.tasklist.model.TaskService" />
</blueprint>

The bean tag creates a single instance of the TaskServiceImpl. The service tag publishes this instance as an OSGi service with the TaskService interface.

The pom.xml is of packaging bundle and the maven-bundle-plugin creates the jar with an OSGi Manifest. By default the plugin imports all packages that are imported in java files or referenced in the blueprint context.
It also exports all packages that do not contain the string impl or internal. In our case we want the model package to be imported but not the persistence.impl package. As the naming convention is used
we need no additional configuration.

Tasklist-ui

The ui project contains a small servlet TaskServlet to display the tasklist and individual tasks. To work with the tasks the servlet needs the TaskService.

To inject the TaskService and to publish the servlet the following blueprint context is used:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <reference id="taskService" availability="mandatory" interface="net.lr.tasklist.model.TaskService" />
    <bean id="taskServlet" class="net.lr.tasklist.ui.TaskListServlet">
        <property name="taskService" ref="taskService"></property>
    </bean>
    <service ref="taskServlet" interface="javax.servlet.http.HttpServlet">
        <service-properties>
            <entry key="alias" value="/tasklist" />
        </service-properties>
    </service>
</blueprint>

The reference tag makes blueprint search and eventually wait for a service that implements the TaskService interface and creates a bean "taskService".
The bean taskServlet instantiates the servlet class and injects the taskService. The service tag publishes the servlet as an OSGi service with the HttpServlet interface and sets a property alias.
This way of publishing a servlet is not yet standardized but is supported by the pax web whiteboard extender. This extender registers each service with interface HttpServlet with the OSGi http service. It uses the alias
property to set the path where the servlet is available.

See also: http://wiki.ops4j.org/display/paxweb/Whiteboard+Extender

Tasklist-features

The last project only installs a feature descriptor to the maven repository so we can install it easily in Karaf. The descriptor defines a feature named tasklist and the bundles to be installed from
the maven repository.

<feature name="example-tasklist-persistence" version="${pom.version}">
    <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
    <bundle>mvn:net.lr.tasklist/tasklist-persistence/${pom.version}</bundle>
</feature>
<feature name="example-tasklist-ui" version="${pom.version}">
    <feature>http</feature>
    <feature>http-whiteboard</feature>
    <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
    <bundle>mvn:net.lr.tasklist/tasklist-ui/${pom.version}</bundle>
</feature>

A feature can consist of other features that also should be installed and bundles to be installed. The bundles typically use mvn urls. This means they are loaded from the configured maven repositories or your local maven repository in ~/.m2/repository.

Installing the Application in Karaf

feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
feature:install example-tasklist-persistence example-tasklist-ui

Add the features descriptor to Karaf so it is added to the available features, then Install and start the tasklist feature. After this command the tasklist application should run

list

Check that all bundles of tasklist are active. If not try to start them and check the log.

http:list
ID | Servlet         | Servlet-Name   | State    | Alias     | Url
-------------------------------------------------------------------------------
56 | TaskListServlet | ServletModel-2 | Deployed | /tasklist | [/tasklist/*]

Should show the TaskListServlet. By default the example will start at http://localhost:8181/tasklist .

You can change the port by creating a text file "etc/org.ops4j.pax.web.cfg" with the content "org.osgi.service.http.port=8080". This will tell the HttpService to use port 8080. Now the tasklist application should be available at http://localhost:8080/tasklist

Summary

In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

Back to Karaf Tutorials

View Online
Categories: Christian Schneider

Apache Karaf Tutorial part 10 - Declarative services

Christian Schneider - Tue, 06/30/2015 - 11:09

Blog post edited by Christian Schneider

This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

You can find the full source code on github Karaf-Tutorial/tasklist-ds

Declarative Services

Declarative Services (DS) is the biggest contender to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

At the core DS works with xml files to define scr components and their dependencies. They typically live in the OSGI-INF directory and are announced in the Manifest using the header "Service-Component" with the path to the component descriptor file.  Luckily it is not necessary to directly work with this xml as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin.

<_dsannotations>*</_dsannotations>

For more details see http://www.aqute.biz/Bnd/Components

DS vs Blueprint

Let us look into DS by comparing it to the already better known blueprint. There are some important differences:

  1. Blueprint always works on a complete blueprint context. So the context will be started when all mandatory service deps are present. It then publishes all offered services. As a consequence a blueprint context can not depend on services it offers itself. DS works on Components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately like start and stop it. It is also possible that a bundle offers two components but only one is started as the dependencies of the other are not yet there.
  2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
    You have a DS component and a blueprint bean that each offer a service A and depend on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up it will fail after a timeout and will not be able to recover from this. Once the blueprint context is up it stays up even if the mandatory service goes away. This is called service damping and has the goal to avoid restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
    In DS on the other hand a component lifecycle is directly bound to dependent services. So a component will only be activated when all mandatory services are present and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied and calls to it should always work.
  3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
  4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces. Examples are: Aries jpa, Aries transactions, Aries authz, CXF, Camel. So using these technologies in DS can be a bit more difficult.
  5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean method. This is for example used for security as well as transaction handling. For this reason DS traditionally did not support JPA very well as normal usage mandates interceptors. See below how jpa can work with DS.

So if DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with other projects.

JEE and JPA

The JPA spec is based on JEE which has a very special thread and interceptor model. In JEE you use session beans with a container managed EntityManager
to manipulate JPA Entities. It looks like this:

JPA
@Stateless
class TaskServiceImpl implements TaskService {
    @PersistenceContext(unitName="tasklist")
    private EntityManager em;

    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }
}

In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction will be committed, if there is an exception it will be rolled back.
The calls go to a pool of TaskServiceImpl instances. Each of these instances will only be used by one thread at a time. As a result of this the EntityManager interface is not thread safe!

So the advantage of this model is that it looks simple and allows pretty small code. On the other hand it is a bit difficult to test such code outside a container as you have to mimic the way the container works with this class. It is also difficult to access e.g. em as it is private and there is no setter.

Blueprint supports a coding style similar to the JEE example and implements this using a special jpa and tx namespace and
interceptors that handle the transaction / em management.

DS and JPA

In DS each component is a singleton. So there is only one instance of it that needs to cope with multi threaded access. So working with the plain JEE concepts for JPA is not possible in DS.

Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution here is the concept of a JPATemplate together with support for closures in Java 8. To see how the code looks, peek below at the chapter persistence.

Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We need to put the jpa code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JPATemplate will then guarantee the same environment as JEE inside the closure. As each closure runs as its own
instance there is one em per thread. The code will also participate in or create a transaction and the transaction commit/rollback also works like in JEE.

So this requires a little more code but the advantage is that there is no need for a special framework integration.
The code can also be tested much easier. See TaskServiceImplTest in the example.
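The template-with-closure idea can be sketched in plain Java. Session and TxTemplate below are invented stand-ins for EntityManager and JpaTemplate (this is not the Aries API), but the control flow is the same: the template creates the resource, runs the closure, and cleans up:

```java
import java.util.function.Function;

public class TemplateDemo {
    // stand-in for the EntityManager
    static class Session {
        String find(int id) { return "Task-" + id; }
    }

    // stand-in for JpaTemplate: owns the session/transaction lifecycle
    static class TxTemplate {
        <R> R txExpr(Function<Session, R> work) {
            Session em = new Session();   // "begin transaction, open em"
            try {
                return work.apply(em);    // user code only sees the session
            } finally {
                // here the real template would commit on success,
                // roll back on exception, and close the em
            }
        }
    }

    public static void main(String[] args) {
        TxTemplate jpa = new TxTemplate();
        System.out.println(jpa.txExpr(em -> em.find(1)));
    }
}
```

Because the caller never holds the Session directly, the template can hand each thread a fresh one.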

Structure
  • features
  • model
  • persistence
  • ui
Features

Defines the karaf features to install the example as well as all necessary dependencies.

Model

This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of model see the tasklist-blueprint example. The model is exactly the same here.

Persistence

TaskServiceImpl

@Component
public class TaskServiceImpl implements TaskService {
    private JpaTemplate jpa;

    public Task getTask(Integer id) {
        return jpa.txExpr(em -> em.find(Task.class, id));
    }

    @Reference(target = "(osgi.unit.name=tasklist)")
    public void setJpa(JpaTemplate jpa) {
        this.jpa = jpa;
    }
}

With the @Reference we define that we need an OSGi service with the interface JpaTemplate and a property "osgi.unit.name" with the value "tasklist".

InitHelper

@Component
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);
    TaskService taskService;

    @Activate
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of how business code that works with the task service can look.
The @Reference setter injects the TaskService into the field taskService.
@Activate makes sure that addDemoTasks() is called after injection of this component.

Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special
persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate
to avoid having to install a JTA transaction manager for the test. The test code shows that indeed the TaskServiceImpl can
be used as plain java code without any special tricks.

UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService so it is available on http.

TaskListServlet

@Component(immediate = true, service = { Servlet.class }, property = { "alias:String=/tasklist" })
public class TaskListServlet extends HttpServlet {
    private TaskService taskService;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Actual code omitted
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist".
So it is available on the url http://localhost:8181/tasklist.

Build

Make sure you use JDK 8 and run:

mvn clean install

Installation

Make sure you use JDK 8.
Download and extract Karaf 4.0.0.
Start karaf and execute the commands below

# create config for DataSource tasklist
cat https://raw.githubusercontent.com/cschneider/Karaf-Tutorial/master/tasklist-blueprint-cdi/org.ops4j.datasource-tasklist.cfg | tac -f etc/org.ops4j.datasource-tasklist.cfg

# install the features
feature:repo-add mvn:net.lr.tasklist.ds/tasklist-features/1.0.0-SNAPSHOT/xml
feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

Validate Installation

First we check that the JpaTemplate service is present for our persistence unit.

service:list JpaTemplate
[org.apache.aries.jpa.template.JpaTemplate]
-------------------------------------------
 osgi.unit.name = tasklist
 transaction.type = JTA
 service.id = 164
 service.bundleid = 57
 service.scope = singleton
Provided by :
 tasklist-model (57)
Used by:
 tasklist-persistence (58)

Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

A likely problem would be that the DataSource is missing so lets also check it:

service:list DataSource
[javax.sql.DataSource]
----------------------
 dataSourceName = tasklist
 felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = tasklist
 service.factoryPid = org.ops4j.datasource
 service.pid = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
 url = jdbc:h2:mem:test
 service.id = 156
 service.bundleid = 113
 service.scope = singleton
Provided by :
 OPS4J Pax JDBC Config (113)
Used by:
 Apache Aries JPA container (62)

This is how it should look. Pax-jdbc-config created the DataSource out of the configuration in "etc/org.ops4j.datasource-tasklist.cfg" by using a DataSourceFactory with the property "osgi.jdbc.driver.name=H2-pool-xa". So the resulting DataSource is pooled and fully ready for XA transactions.

Next we check that the DS components started:

scr:list
 ID | State  | Component Name
--------------------------------------------------------------
 1  | ACTIVE | net.lr.tasklist.persistence.impl.InitHelper
 2  | ACTIVE | net.lr.tasklist.persistence.impl.TaskServiceImpl
 3  | ACTIVE | net.lr.tasklist.ui.TaskListServlet

If any of the components is not active you can inspect it in detail like this:

scr:details net.lr.tasklist.persistence.impl.TaskServiceImpl
Component Details
  Name                : net.lr.tasklist.persistence.impl.TaskServiceImpl
  State               : ACTIVE
  Properties          :
    component.name=net.lr.tasklist.persistence.impl.TaskServiceImpl
    component.id=2
    Jpa.target=(osgi.unit.name=tasklist)
References
  Reference           : Jpa
    State             : satisfied
    Multiple          : single
    Optional          : mandatory
    Policy            : static
    Service Reference : Bound Service ID 164

Test

Open the url below in your browser.
http://localhost:8181/tasklist

You should see a list with one task.

You can add another task by opening http://localhost:8181/tasklist?add&taskId=2&title=Another Task in the browser.

 

View Online
Categories: Christian Schneider

Apache Karaf Tutorial Part 8 - Distributed OSGi

Christian Schneider - Tue, 06/30/2015 - 09:59

Blog post edited by Christian Schneider - "Updated to karaf 3.0.3 and cxf dosgi 1.6.0"

By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows to define services in one container and use them in some other (even over machine boundaries).

For this tutorial we use the DOSGi sub project of CXF which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 of the OSGi 4.2 Enterprise Specification).

Example on github

Introducing the example

Following the hands on nature of these tutorials we start with an example that can be tried in some minutes and explain the details later.

Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, the model and UI on container B, and we install the dosgi runtime on both containers.

As DOSGi should not be active for all services on a system the spec defines that the service property "osgi.remote.interfaces" triggers if DOSGi should process the service. It expects the interface names that this service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets the property so the service is exported with defaults.
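As a sketch, setting this property on the service from part 1 could look like the following blueprint fragment; the property key follows the text above, but check it against the CXF DOSGi version you use:

```xml
<service ref="taskService" interface="net.lr.tasklist.model.TaskService">
    <service-properties>
        <entry key="osgi.remote.interfaces" value="*" />
    </service-properties>
</service>
```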

Installing the service

To keep things simple we will install container A and B on the same system.

  • Download Apache Karaf 3.0.3
  • Unpack karaf into folder container_a
  • Start bin/karaf
  • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
  • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
  • feature:repo-add cxf-dosgi 1.6.0
  • feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
  • feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
  • feature:install example-tasklist-persistence

After these commands the tasklist persistence service should be running and be published on zookeeper.

You can check the wsdl of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client zkCli.sh from a zookeeper distro you can optionally check that there is a node for the service below the osgi path.

Installing the UI
  • Unpack into folder container_b
  • Start bin/karaf
  • config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
  • config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
  • feature:repo-add cxf-dosgi 1.6.0
  • feature:install cxf-dosgi-discovery-distributed
  • feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
  • feature:install example-tasklist-ui

The tasklist client ui should be in status Active/Created and the servlet should be available on http://localhost:8182/tasklist. If the ui bundle stays in status graceperiod then DOSGi did not provide a local proxy for the persistence service.

How does it work

The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found then the service is either exported with the named interfaces or with all interfaces it implements. The way the export works can be fine tuned using the CXF DOSGi configuration options.

By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.

The service information is then also propagated using the DOSGi discovery. In the example we use the Zookeeper discovery implementation. So the service metadata is written to a zookeeper server.

Container_b monitors the local container for needed services. It then checks whether a needed service is available on the discovery implementation (the zookeeper server in our case). For each service it finds it creates a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

Categories: Christian Schneider

Enterprise ready request logging with CXF 3.1.0 and elastic search

Christian Schneider - Mon, 06/08/2015 - 17:29

Blog post added by Christian Schneider

You may already know the CXF LoggingFeature. You used it like this:

Old CXF LoggingFeature

    <jaxws:endpoint ...>
      <jaxws:features>
        <bean class="org.apache.cxf.feature.LoggingFeature"/>
      </jaxws:features>
    </jaxws:endpoint>

It allowed you to add logging to a CXF endpoint at compile time.

While this already helped a lot it was not really enterprise ready. The logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

Logging feature in CXF 3.1.0

In CXF 3.1 this code was moved into a separate module and gathered some new features.

  • Auto logging for existing CXF endpoints
  • Uses slf4j MDC to log meta data separately
  • Adds meta data for Rest calls
  • Adds MD5 message id and exchange id for correlation
  • Simple interface for writing your own appenders
  • Karaf decanter support to log into elastic search
Auto logging for existing CXF endpoints in Apache Karaf

Simply install and enable the new logging feature:

Logging feature in karaf

    feature:repo-add cxf 3.1.0
    feature:install cxf-features-logging
    config:property-set -p org.apache.cxf.features.logging enabled true

Then install CXF endpoints like always. For example install the PersonService from the Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still the new logging feature will enhance the clients and endpoints and log all SOAP and Rest calls using slf4j. So the logging data will be processed by pax logging and by default end up in your karaf log.

A log entry looks like this:

Sample Log entry

    2015-06-08 16:35:54,068 | INFO | qtp1189348109-73 | REQ_IN | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:addPerson xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"><arg0><id>3</id><name>Test2</name><url></url></arg0></ns2:addPerson></soap:Body></soap:Envelope>

This does not look very informative. You only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information though. You just need to configure the pax logging config to show it.

Slf4j MDC values for meta data

This is the raw logging information you get for a SOAP call:

    @timestamp = 2015-06-08T14:43:27,097Z
    MDC.address = http://localhost:8181/cxf/personService
    MDC.bundle.id = 90
    MDC.bundle.name = org.apache.cxf.cxf-rt-features-logging
    MDC.bundle.version = 3.1.0
    MDC.content-type = text/xml; charset=UTF-8
    MDC.encoding = UTF-8
    MDC.exchangeId = 56b037e3-d254-4fe5-8723-f442835fa128
    MDC.headers = {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
    MDC.httpMethod = POST
    MDC.messageId = a46eebd2-60af-4975-ba42-8b8205ac884c
    MDC.portName = PersonServiceImplPort
    MDC.portTypeName = PersonService
    MDC.serviceName = PersonServiceImplService
    MDC.type = REQ_IN
    level = INFO
    loc.class = org.apache.cxf.ext.logging.slf4j.Slf4jEventSender
    loc.file = Slf4jEventSender.java
    loc.line = 55
    loc.method = send
    loggerClass = org.ops4j.pax.logging.slf4j.Slf4jLogger
    loggerName = org.apache.cxf.services.PersonService.REQ_IN
    message = <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:Body><ns2:getAll xmlns:ns2="http://model.personservice.cxf.karaf.tutorial.lr.net/" xmlns:ns3="http://person.jms2rest.camel.karaf.tutorial.lr.net"/></soap:Body></soap:Envelope>
    threadName = qtp80604361-78
    timeStamp = 1433774607097

Some things to note:

  • The logger name is <service namespace>.<ServiceName>.<type>. By default karaf cuts it down to just the type.
  • A lot of the details are in the MDC values

You need to change your pax logging config to make these visible.

You can use the logger name to fine tune which services you want to log this way. For example set the log level to WARN for noisy services to avoid logging them, or log some services to a separate file.
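As a sketch, the corresponding entries in etc/org.ops4j.pax.logging.cfg could look like this. The appender name "out" matches the karaf default config; the pattern, the chosen MDC keys and the NoisyService logger name are illustrative assumptions:

```properties
# show selected MDC values next to the log message
log4j.appender.out.layout.ConversionPattern=%d | %-5.5p | %X{type} | %X{messageId} | %m%n

# silence a noisy service by raising its level (hypothetical service name)
log4j.logger.org.apache.cxf.services.NoisyService=WARN
```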

Message id and exchange id

The messageId allows you to uniquely identify messages even if you collect them from several servers. It is also transported over the wire so you can correlate a request sent on one machine with the request received on another machine.

The exchangeId will be the same for an incoming request and the response sent out, or on the other side for an outgoing request and the response for it. This allows you to correlate requests and responses and so follow the conversations.

Simple interface to write your own appenders

Write your own LogSender and set it on the LoggingFeature to do custom logging. You have access to all meta data from the class LogEvent.

So for example you could write your logs to one file per message or to JMS.
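A rough sketch of the one-file-per-message idea. LogEvent and LogEventSender here are simplified stand-ins for the CXF classes in org.apache.cxf.ext.logging.event, reduced to the two fields this example needs; the real LogEvent carries all the meta data described above:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Simplified stand-in for the CXF sender interface
interface LogEventSender {
    void send(LogEvent event);
}

// Simplified stand-in for the CXF LogEvent
class LogEvent {
    String messageId;
    String payload;
}

// Writes every message into its own file, named by the unique message id
public class FilePerMessageSender implements LogEventSender {
    private final Path dir;

    public FilePerMessageSender(Path dir) {
        this.dir = dir;
    }

    @Override
    public void send(LogEvent event) {
        try {
            Files.write(dir.resolve(event.messageId + ".log"),
                    event.payload.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The real sender would be set on the LoggingFeature; a JMS variant would only replace the body of send.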

Karaf decanter support to write into elastic search

Many people use elastic search for their logging. Fortunately you do not have to write a special LogSender for this purpose. The standard CXF logging feature will already work.

It works like this:

  • CXF sends the messages as slf4j events which are processed by pax logging
  • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
  • Karaf Decanter ElasticSearchAppender sends the log events to a configurable elastic search instance

As Decanter also provides features for a local elastic search and kibana instance you are ready to go in just minutes.

Installing Decanter for CXF Logging

    feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/3.0.0-SNAPSHOT/xml/features
    feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana


After that open a browser at http://localhost:8181/kibana. When decanter is released kibana will be fully set up. At the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

Then you should see your cxf messages like this:

Kibana easily allows to filter for specific services and correlate requests and responses.

This is just a preview of decanter. I will do a more detailed post when the first release is out.

 

Categories: Christian Schneider

Apache Karaf Tutorial Part 9 - Annotation based blueprint and JPA

Christian Schneider - Fri, 03/06/2015 - 09:19

Blog post edited by Christian Schneider

Writing blueprint xml is quite verbose and large blueprint xmls are difficult to keep in sync with code changes and especially refactorings. So many people prefer to do most declarations using annotations. Ideally these annotations should be standardized so it is clearly defined what they do.

blueprint-maven-plugin

The aries blueprint-maven-plugin allows you to configure blueprint using annotations. It scans one or more paths for annotated classes and creates a blueprint.xml in target/generated-resources. See the aries documentation of the blueprint-maven-plugin.

Example tasklist-blueprint-cdi

This example shows how to create a small application with a model, persistence layer and UI completely without handwritten blueprint xml.

You can find the full source code on github Karaf-Tutorial/tasklist-cdi-blueprint

Structure
  • features
  • model
  • persistence
  • ui
Features

Defines the karaf features to install the example as well as all necessary dependencies.

Model

The model project defines Task as a jpa entity and the Service TaskService as an interface. As model does not do any dependency injection the blueprint-maven-plugin is not involved here.

Task JPA Entity

    @Entity
    public class Task {
        @Id
        Integer id;
        String title;
        String description;
        Date dueDate;
        boolean finished;
        // Getters and setters omitted
    }

TaskService (CRUD operations for Tasks)

    public interface TaskService {
        Task getTask(Integer id);
        void addTask(Task task);
        void updateTask(Task task);
        void deleteTask(Integer id);
        Collection<Task> getTasks();
    }

persistence.xml

    <persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
        <persistence-unit name="tasklist" transaction-type="JTA">
            <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
            <jta-data-source>osgi:service/tasklist</jta-data-source>
            <properties>
                <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
                <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
            </properties>
        </persistence-unit>
    </persistence>

Persistence.xml defines the persistence unit name as "tasklist" and to use JTA transactions. The jta-data-source points to the jndi name of the DataSource service named "tasklist". So apart from the JTA DataSource name it is a normal hibernate 4.3 style persistence definition with automatic schema creation.

One other important thing is the configuration for the maven-bundle-plugin.

Configurations for maven bundle plugin

    <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
    <Import-Package>*, org.hibernate.proxy, javassist.util.proxy</Import-Package>

The Meta-Persistence points to the persistence.xml and is the trigger for aries jpa to create an EntityManagerFactory for this bundle.
The Import-Package configurations import two packages that are needed by the runtime enhancement done by hibernate. As this enhancement is not known at compile time we need to give the maven bundle plugin these hints.

Persistence

The tasklist-cdi-persistence bundle is the first module in the example to use the blueprint-maven-plugin. In the pom we set the scanpath to "net.lr.tasklist.persistence.impl". So all classes in this package and sub packages are scanned.

In the pom we need a special configuration for the maven bundle plugin:
<Import-Package>!javax.transaction, *, javax.transaction;version="[1.1,2)"</Import-Package>
In the dependencies we use the transaction API 1.2 as it is the first spec version to include the @Transactional annotation. At runtime though we do not need this annotation and karaf only provides the transaction API version 1.1. So we tweak the import to be ok with the version karaf offers. As soon as the transaction API 1.2 is available for karaf this line will not be necessary anymore.

TaskServiceImpl

    @OsgiServiceProvider(classes = {TaskService.class})
    @Singleton
    @Transactional
    public class TaskServiceImpl implements TaskService {
        @PersistenceContext(unitName="tasklist")
        EntityManager em;

        @Override
        public Task getTask(Integer id) {
            return em.find(Task.class, id);
        }

        @Override
        public void addTask(Task task) {
            em.persist(task);
            em.flush();
        }

        // Other methods omitted
    }

TaskServiceImpl uses quite a lot of the annotations. The class is marked as a blueprint bean using @Singleton. It is also marked to be exported as an OSGi Service with the interface TaskService.

The class is marked as @Transactional. So all methods are executed in a jta transaction of type Required. This means that if there is no transaction it will be created. If there is a transaction the method will take part in it. At the end of the transaction boundary the transaction is either committed or in case of an exception it is rolled back.

A managed EntityManager for the persistence unit "tasklist" is injected into the field em. It transparently provides one EntityManager per thread which is created on demand and closed at the end of the transaction boundary.
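The created-on-demand, one-per-thread behaviour can be illustrated with a few lines of plain Java. This is a generic stand-in for the pattern, not the Aries JPA code:

```java
import java.util.function.Supplier;

// Sketch of the pattern: each thread lazily gets its own instance,
// which is discarded at the end of the transaction boundary.
public class PerThreadHolder<T> {
    private final Supplier<T> factory;
    private final ThreadLocal<T> current = new ThreadLocal<>();

    public PerThreadHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    // created on demand for the calling thread
    public T get() {
        T value = current.get();
        if (value == null) {
            value = factory.get();
            current.set(value);
        }
        return value;
    }

    // called when the transaction boundary ends
    public void release() {
        current.remove();
    }
}
```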

InitHelper

    @Singleton
    public class InitHelper {
        Logger LOG = LoggerFactory.getLogger(InitHelper.class);

        @Inject
        TaskService taskService;

        @PostConstruct
        public void addDemoTasks() {
            try {
                Task task = new Task(1, "Just a sample task", "Some more info");
                taskService.addTask(task);
            } catch (Exception e) {
                LOG.warn(e.getMessage(), e);
            }
        }
    }

The class InitHelper is not strictly necessary. It simply creates and persists a first task so the UI has something to show. Again the @Singleton is necessary to mark the class for creation as a blueprint bean.
@Inject TaskService taskService injects the first bean of type TaskService it finds in the blueprint context into the field taskService. In our case this is the implementation above.
@PostConstruct makes sure that addDemoTasks() is called after injection of all fields of this bean.

Another interesting thing in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory without a jndi DataSource which would be difficult to supply. It also uses RESOURCE_LOCAL transactions so we do not need to set up a transaction manager. The test injects a plain EntityManager into the TaskServiceImpl class, so we have to begin and commit the transaction manually. This shows that you can test the JPA code with plain java which results in very simple and fast tests.

UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService so it is available on http.
In the pom this module needs the blueprint-maven-plugin with a suitable scanPath.

TasklistServlet

    @OsgiServiceProvider(classes={Servlet.class})
    @Properties({@Property(name="alias", value="/tasklist")})
    @Singleton
    public class TaskListServlet extends HttpServlet {
        @Inject
        @OsgiService
        TaskService taskService;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Actual code omitted
        }
    }

The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist". So it is available on the url http://localhost:8181/tasklist.

@Inject @OsgiService TaskService taskService creates a blueprint reference element to import an OSGi service with the interface TaskService. It then injects this service into the taskService field of the above class.
If there are several services of this interface the filter property can be used to select one of them.

Build

mvn clean install

Installation and test

See Readme.txt on github.

 

Categories: Christian Schneider


Apache Karaf Tutorial Part 6 - Database Access

Christian Schneider - Tue, 03/03/2015 - 23:06

Blog post edited by Christian Schneider - "Corrections"

Shows how to access databases from OSGi applications running in Karaf and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally JDBC and JPA examples show how to use such a DataSource from user code.

Prerequisites

You need an installation of apache karaf 3.0.3 for this tutorial.

Example sources

The example projects are on github Karaf-Tutorial/db.

Drivers and DataSources

In plain java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work as the ClassLoader of your bundle will have no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver and publish it as an OSGi service. The user bundle should then only use the DataSource without knowing the driver specifics. This is quite similar to the best practice in application servers where the DataSource is managed by the server and published to jndi.

So we need to learn how to create and use DataSources first.

The DataSourceFactory services

To make it easier to create DataSources in OSGi the specs define a DataSourceFactory interface. It allows you to create a DataSource for a specific driver from properties. Each database driver is expected to implement this interface and publish it with properties for the driver class name and the driver name.

Introducing pax-jdbc

The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

  • Implement the DataSourceFactory service for Databases that do not create this service directly
  • Implement a pooling and XA wrapper for XADataSources (This is explained at the pax jdbc docs)
  • Provide a facility to create DataSource services from config admin configurations
  • Provide karaf features for many databases as well as for the above additional functionality

So it covers everything you need from driver installation to creation of production quality DataSources.

Installing the driver

The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

For several databases pax-jdbc already provides karaf features to install a current version of the database driver.

For H2 the following commands will work

    feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
    feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
    service:list DataSourceFactory

Strictly speaking we would only need the pax-jdbc-h2 feature but we will need the others for the next steps.

This will install the pax-jdbc feature repository and the h2 database driver. This driver already implements the DataSourceFactory so the last command will display this service.

DataSourceFactory

    [org.osgi.service.jdbc.DataSourceFactory]
    -----------------------------------------
     osgi.jdbc.driver.class = org.h2.Driver
     osgi.jdbc.driver.name = H2
     osgi.jdbc.driver.version = 1.3.172
     service.id = 691
    Provided by :
     H2 Database Engine (68)

The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

pooled and XA DataSourceFactory

    [org.osgi.service.jdbc.DataSourceFactory]
    -----------------------------------------
     osgi.jdbc.driver.class = org.h2.Driver
     osgi.jdbc.driver.name = H2-pool-xa
     osgi.jdbc.driver.version = 1.3.172
     pooled = true
     service.id = 694
     xa = true
    Provided by :
     OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

Technically this DataSourceFactory also creates DataSource objects but internally they manage XA support and pooling. So we want to use this one for our later code examples.

Creating the DataSource

Now we just need to create a configuration with the correct factory pid to create a DataSource as a service

So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content

config for DataSource

    osgi.jdbc.driver.name=H2-pool-xa
    url=jdbc:h2:mem:person
    dataSourceName=person

The config will automatically trigger the pax-jdbc-config module to create a DataSource.

  • The setting osgi.jdbc.driver.name=H2-pool-xa selects the H2 DataSourceFactory with pooling and XA support we previously installed.
  • The url configures H2 to create a simple in memory database named person.
  • The dataSourceName will be reflected in a service property of the DataSource so we can find it later
  • You could also set pooling configurations in this config but we leave it at the defaults

DataSource

    karaf@root()> service:list DataSource
    [javax.sql.DataSource]
    ----------------------
     dataSourceName = person
     osgi.jdbc.driver.name = H2-pool-xa
     osgi.jndi.service.name = person
     service.factoryPid = org.ops4j.datasource
     service.id = 696
     service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
     url = jdbc:h2:mem:person
    Provided by :
     OPS4J Pax JDBC Config (69)

So when we search for services implementing the DataSource interface we find the person datasource we just created.

When we installed the features above we also installed the aries jndi feature. This module maps OSGi services to jndi objects. So we can also use jndi to retrieve the DataSource which will be used in the persistence.xml for jpa later.

jndi url of DataSource

    osgi:service/person

Karaf jdbc commands

Karaf contains some commands to manage DataSources and do queries on databases. The commands for managing DataSources in karaf 3.x still work with the older approach of using blueprint to create DataSources. So we will not use these commands but we can use the functionality to list datasources, list tables and execute queries.

jdbc commands

    feature:install jdbc
    jdbc:datasources
    jdbc:tables person

We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

    jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
    jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
    jdbc:query person "select * from person"

This creates a table person, adds a row to it and shows the table.

The output should look like this

    select * from person
    NAME                | TWITTERNAME
    --------------------------------------
    Christian Schneider | @schneider_chris

Accessing the database using JDBC

The project db/examplejdbc shows how to use the datasource we installed and execute jdbc commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class DbExample. The test method is then called as init method and shows some jdbc statements on the DataSource. The DbExample class is completely independent of OSGi and can be easily tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.
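A sketch of how such a blueprint.xml could look. The reference filter on dataSourceName matches the service property we configured earlier; the full package of DbExample and the property name are assumptions for illustration:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- import the DataSource OSGi service created by pax-jdbc-config -->
    <reference id="dataSource" interface="javax.sql.DataSource"
               filter="(dataSourceName=person)"/>
    <!-- inject it into the example bean and run test() on startup
         (package and property name are illustrative) -->
    <bean id="dbExample" class="net.lr.tutorial.karaf.db.examplejdbc.DbExample"
          init-method="test">
        <property name="dataSource" ref="dataSource"/>
    </bean>
</blueprint>
```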

Build and install

Build works like always using maven

> mvn clean install

In Karaf we just need our own bundle as we have no special dependencies

    > install -s mvn:net.lr.tutorial.karaf.db/db-examplejdbc/1.0-SNAPSHOT
    Using datasource H2, URL jdbc:h2:~/test
    Christian Schneider, @schneider_chris,

After installation the bundle should directly print the db info and the persisted person.

Accessing the database using JPA

For larger projects often JPA is used instead of hand crafted SQL. Using JPA has two big advantages over JDBC.

  1. You need to maintain less SQL code
  2. JPA provides dialects for the subtle differences in databases that you would otherwise have to code yourself.

For this example we use Hibernate as the JPA Implementation. On top of it we add Apache Aries JPA which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

The project examplejpa shows a simple project that implements a PersonService managing Person objects.
Person is just a java bean annotated with the JPA @Entity annotation.

Additionally the project implements two Karaf shell commands person:add and person:list that allow you to easily test the project.

persistence.xml

Like in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings and lists the persistent classes. The datasource is referred to using the JNDI name "osgi:service/person".
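A minimal persistence.xml matching this description might look like the sketch below. The unit name "person" and the JNDI name "osgi:service/person" come from the text; the entity class name (derived from the PersonServiceImpl package shown further down) and the namespace version are assumptions.

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="person" transaction-type="JTA">
    <!-- DataSource resolved as an OSGi service through JNDI -->
    <jta-data-source>osgi:service/person</jta-data-source>
    <!-- assumed package, following the impl package used below -->
    <class>net.lr.tutorial.karaf.db.examplejpa.Person</class>
  </persistence-unit>
</persistence>
```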

The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven-bundle-plugin in the pom. The Aries JPA container will scan for these attributes
and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.
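Declaring this header with the maven-bundle-plugin could look roughly like the following sketch. The header name comes from the OSGi JPA Service Specification; the META-INF path is the conventional default and an assumption here.

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- points the Aries JPA container at the persistence descriptor -->
      <Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
    </instructions>
  </configuration>
</plugin>
```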

blueprint.xml

We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
The following snippet is the interesting part:

<bean id="personService" class="net.lr.tutorial.karaf.db.examplejpa.impl.PersonServiceImpl">
    <jpa:context property="em" unitname="person" />
    <tx:transaction method="*" value="Required"/>
</bean>

This looks up the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread-safe EntityManager (using a ThreadLocal under the hood) into the
PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.

Build and Install

> mvn clean install

A prerequisite is that the derby datasource is installed like described above. Then we have to install the bundles for hibernate, aries jpa, transaction, jndi and of course our db-examplejpa bundle.
See ReadMe.txt for the exact commands to use.

Test

person:add 'Christian Schneider' @schneider_chris

Then we list the persisted persons

karaf@root> person:list
Christian Schneider, @schneider_chris

Summary

In this tutorial we learned how to work with databases in Apache Karaf. We installed drivers for our database and a DataSource. We were able to check and manipulate the DataSource using the jdbc:* commands. In the examplejdbc project we learned how to acquire a datasource
and work with it using plain JDBC. Last but not least we also used JPA to access our database.

Back to Karaf Tutorials

View Online
Categories: Christian Schneider

How fast is CXF ? - Measuring CXF performance on http, https and jms

Christian Schneider - Fri, 01/16/2015 - 09:13

Blog post edited by Christian Schneider

Note: The performance numbers in this article are a bit out of date. For a more current JMS performance measurement see Revisiting JMS performance. Improvements in CXF 3.0.0.

On a 2014 system http performance should be around 10k - 20k messages/s for small messages.

 

From time to time people ask: how fast is CXF? Of course this is a difficult question, as the measured speed depends very much on the hardware of the test setup and on the whole definition of the test.
So I am trying to explain how you can do your own tests and what to do to make sure you get clean results.

What should you keep in mind when doing performance tests with Java?

  • Performance is very much influenced by thread count and request size, so it is a good idea to scale each of them separately
  • As long as you have not maxed out at least one resource you can improve the results. Typical resources to check are processor load, memory and network
  • Increase the thread count until you max out a resource, but do not go much higher
  • Always use a warmup phase (~1-2 minutes). Java needs to load classes the first time, and on the Sun VM the HotSpot compiler will additionally kick in after some time
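The advice above can be condensed into a minimal, self-contained harness. This is only a sketch: doRequest is a stub standing in for a real SOAP call, and the message and thread counts are arbitrary placeholders, not values from the article.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ThroughputSketch {

    // Stub standing in for a real request/response round trip (e.g. a CustomerService call).
    static void doRequest() { }

    /** Runs `messages` calls on `threads` threads and returns transactions per second. */
    public static double measure(int messages, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong done = new AtomicLong();
        long start = System.nanoTime();
        for (int i = 0; i < messages; i++) {
            pool.submit(() -> { doRequest(); done.incrementAndGet(); });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        double seconds = (System.nanoTime() - start) / 1e9;
        return done.get() / seconds;
    }

    public static void main(String[] args) throws InterruptedException {
        measure(10_000, 4);               // warmup phase: let class loading and the JIT settle
        double tps = measure(10_000, 4);  // measured run
        System.out.println("tps = " + (long) tps);
    }
}
```

Running measure once for warmup and then again for the measured run mirrors the two-phase approach described in the bullets above.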
Prerequisites

The test project can be found on my github account. You can either download a zip or clone the project with git:
https://github.com/cschneider/performance-tests

As a load generator and measurement tool we use soapUI. Download the free version from the link below:
http://www.soapui.org/

The test plan

We test SOAP/HTTP, SOAP/HTTPS and SOAP/JMS performance using a small but non-trivial service. For this case the CustomerService from the wsdl_first example will be used.
Two variables will be changed for the test series: the SOAP message size and the number of sender/listener threads.
The SOAP message size will be tuned by using a String of variable length. It will be set so the complete SOAP message reaches the desired size.

The payload size can be adjusted by the number of customer records the server sends:

Size   | Payload size
Small  | 500
Medium | 10 KB
Large  | 1 MB

The second variable is the number of sender and listener Threads. We will test with 5, 10 and 20 Threads. The optimal number of threads
correlates with the number of processor cores. In this case there are two cores. With bigger machines the maximum number of threads should be
higher.
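As a starting point, the thread count can be derived from the core count. The factor 2 below reflects the rule of thumb of using about double the number of cores; it is a heuristic, not a hard rule.

```java
public class ThreadCountHint {
    // Heuristic: start with about twice the number of (virtual) cores and
    // increase only while no resource (CPU, memory, network) is maxed out.
    public static int suggestedThreads() {
        return 2 * Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("suggested thread count: " + suggestedThreads());
    }
}
```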

Customerservice SOAP/HTTP performance

For the server side I have prepared a maven project which starts the CustomerService implementation from the wsdl_first example on an embedded Jetty. We could
also use an external server, but in my tests the results were similar and the embedded version can be started very easily.

The number of listener threads can be adjusted in the file src/main/resources/server-applicationContext.xml :

<httpj:threadingParameters minThreads="5" maxThreads="5" />

Start the server:

cd cxf
mvn -Pserver

Start soapUI and load the soapUI project from the file cxf/cxf-performance-soapui-project.xml. The project was built using the wsdl of the CustomerService and contains
test requests and a load test definition. Alternatively a client class is provided that also will give the performance results. But SOAP UI is the more neutral environment.

Now navigate to the Loadtest 1 like shown in the screenshot and start the loadtest by clicking on the green arrow. The interesting result is tps (transactions per second). It measures how many requests/responses are processed per second.
At first the number will be quite low but it will increase steadily. That is because of class loading and optimizations in Java. Let the test run 60 seconds. This was the warmup phase. Now start the test again.

Customerservice SOAP/JMS performance

Testing JMS is much harder than HTTP. soapUI supports JMS tests but it needs more configuration than in the HTTP case and did not work so well for me. So
I used the Java client for the JMS tests.

Additionally there are many tuning possibilities that affect the speed tremendously. For example, I was not able to send more than
700 messages per second at the start, as my ActiveMQ config was not correctly optimized. When I used the throughput-optimized config
the speed was much higher.

Beware though when using the default "activemq-throughput.xml". It limits the size of queues to 1MB and stops the sender when the size is reached.
In my case that meant that my sender was hanging mysteriously. After I set the limit to 100MB my tests worked. See activemq.xml for my configs.
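In ActiveMQ's destination policies this limit is a per-destination memoryLimit. The sketch below shows the kind of change described; the 100mb value follows the text, while the surrounding element structure is standard activemq.xml and the queue wildcard is an assumption.

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- raise the per-queue limit so producer flow control does not
           block the sender at the small default limit -->
      <policyEntry queue=">" producerFlowControl="true" memoryLimit="100mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```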

Many more performance tuning tips can be found on the ActiveMQ website: http://activemq.apache.org/performance-tuning.html

Environment

It is always important to describe exactly on which configuration the test was run.
All the tests below were run on an Intel Core i5 / 8GB system. Client and server were on the same machine.

SOAP/HTTP Results

Threads are listener and client threads. CPU load is read from the Windows Task Manager. Transactions per second is the highest number from soapUI.

Threads | Size   | CPU Load | Transactions per Second
5       | Small  | 55%      | 2580
10      | Small  | 100%     | 3810
20      | Small  | 100%     | 4072
5       | Medium | 75%      | 2360
10      | Medium | 100%     | 2840
20      | Medium | 100%     | 2820
5       | Large  | 90%      | 94
10      | Large  | 92%      | 94
20      | Large  | 95%      | 84

So it looks like 10 threads is ideal for the test machine with 2 cores and 4 virtual cores. This is quite near the rule of thumb to use double the amount of cores as optimal thread number.
When scaling up the payload size performance drops with the same factor.

SOAP/HTTPS results

Cipher: AES-128 128 Bit key

Threads | Size   | CPU Load | Transactions per Second
5       | Small  | 60%      | 2408
10      | Small  | 100%     | 3310
20      | Small  | 100%     | 3430
5       | Medium | 80%      | 1620
10      | Medium | 100%     | 1750
20      | Medium | 100%     | 1800
5       | Large  | 100%     | 34
10      | Large  | 100%     | 34
20      | Large  | 100%     | 34

So it looks like 10 threads is ideal for the test machine with 2 cores and 4 virtual cores. This is quite near the rule of thumb to use double the amount of cores as optimal thread number.
When scaling up the payload size performance drops with the same factor.

SOAP/JMS results

The JMS tests additionally need a broker. I used ActiveMQ 5.5.0 with the activemq.xml that can be found in github repo above.

Using request / reply with a fixed reply queue.

Threads | Size   | CPU Load | Transactions per Second
5       | Small  | 100%     | 1670
10      | Small  | 100%     | 1650
20      | Small  | 100%     | 1710
5       | Medium | 100%     | 1120
10      | Medium | 100%     | 1120
20      | Medium | 100%     | 1140
3       | Large  | 75%      | 30
5       | Large  | 75%      | 28

Using one way calls

Threads | Size  | CPU Load | Transactions per Second (only client) | Transactions per Second (client and server)
5       | Small | 100%     | 3930                                  | 3205
10      | Small | 100%     | 3900                                  | 3167
20      | Small | 100%     | 4200                                  | 3166
30      | Small | 100%     | 4090                                  | 2818

When testing one-way calls, at first only the client was running. So it can be expected that the performance is more than double the performance of
request/response, as we do not have to send back a message and there is no server consuming processor power.

Next the server was also running. This case is, as expected, about double the performance of request/reply, as only half the messages have to be sent/received.


Revisiting JMS performance. Improvements in CXF 3.0.0

Christian Schneider - Fri, 03/28/2014 - 17:21

Blog post edited by Christian Schneider

Some time ago I did some CXF performance measurements. See How fast is CXF ? - Measuring CXF performance on http, https and jms.

For CXF 3.0.0 I made some massive changes to the JMS transport. So I thought it is a good time to compare CXF 2.x and 3 in JMS performance. My goal was to reach at least the original performance. As my test system is different now, I am also measuring the CXF 2.x performance again to have a good comparison.

Test System

Dell Precision with Intel Core i7, 16 GB RAM, 256 GB SSD running Ubuntu Linux 13.10.

Test Setup

I am using a new version of my performance-tests project on github.

The test runs on one machine using one activemq Server, one test server and one test client.

The test calls the example cxf CustomerService.

The following call types are supported:

oneway:
  customerService.updateCustomer(customer);
  Asynchronous one-way call. Sends one SOAP message to the server.

requestReply:
  List<Customer> customers = customerService.getCustomersByName("test2");
  Synchronous request/reply. Sends one SOAP message to the server and waits for the reply.

requestReplyAsync:
  Future<GetCustomersByNameResponse> resp = customerService.getCustomersByNameAsync("test2");
  GetCustomersByNameResponse res1 = resp.get();
  Asynchronous request/reply. Sends one SOAP message to the server and returns without waiting.
  In this test we wait directly after the call for simplicity.

The requests above are sent using an executor with a fixed number of threads.

For the test you can specify the total number of messages, the number of threads and the call type.
First the number of requests are sent for warmup and then for the real measured test.
To run the test with cxf 3.0.0-SNAPSHOT you have to compile cxf from source.
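The sending loop described above might look roughly like the following sketch. CustomerService here is a hand-written stand-in for the generated CXF proxy, and the call-type dispatch is simplified to two cases; none of this is the actual test client code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class JmsClientSketch {

    public enum CallType { ONEWAY, REQUEST_REPLY }

    // Stand-in for the generated CustomerService proxy from the wsdl_first example.
    public interface CustomerService {
        void updateCustomer(String customer);
        String getCustomersByName(String name);
    }

    /** Submits `messages` calls to a fixed pool of `threads` threads; returns how many completed. */
    public static int send(CustomerService service, CallType type, int messages, int threads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < messages; i++) {
            pool.submit(() -> {
                if (type == CallType.ONEWAY) {
                    service.updateCustomer("test");       // fire and forget
                } else {
                    service.getCustomersByName("test2");  // blocks until the reply arrives
                }
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return completed.get();
    }
}
```

The fixed thread pool corresponds to the -Dthreads parameter and the loop count to -Dmessages; running the whole thing twice gives the warmup and measured phases.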

Test execution

1. Run a standalone activemq 5.9.0 server with the activemq.xml from the github sources above.

bin/activemq console

2. Start the jms server in a new console from the project source using:

mvn -Pserver test

3. Start the jms client using:

mvn -Pclient test -Dmessages=40000 -Dthreads=20 -DcallType=oneway

Test results

The test is executed with several combinations of the parameters. Using the pom property cxf.version we also switch between cxf 2.7.10 and cxf 3.0.0-SNAPSHOT.

CXF 2.7.10 (columns = threads)

call type         | 1     | 20    | 40
oneway            | 10541 | 12143 | 11737
requestReply      | 610   | 661   | 691
requestReplyAsync | 1561  | 3448  | 3859

CXF 3.0.0-SNAPSHOT (columns = threads)

call type         | 1     | 20    | 40
oneway            | 11170 | 11632 | 12010
requestReply      | 1524  | 3248  | 3671
requestReplyAsync | 1590  | 3569  | 3909

Observations

The first interesting fact here is that one-way messaging does not profit from the number of threads. One thread already seems to achieve the same performance as 40 threads. This is quite intuitive, as ActiveMQ needs to synchronize the calls on the one thread holding the JMS connection. On the other hand, using more processes also does not seem to improve the performance, so we seem to be quite at the limit of ActiveMQ here, which is good.

For request/reply the performance seems to scale with the number of threads. This can be explained as we have to wait for the response and can use this time to send some more requests.

One really astonishing thing here is that CXF 2.7.10 seems to be really bad when using synchronous request/reply. This is because it uses consumer.receive in this case, while it uses a JMS message listener for async calls. So the JMS message listener seems to perform much better than consumer.receive. For CXF 2.7.10 this means we can speed up our calls by using the asynchronous interface, even if it is more inconvenient.

The most important observation here is that CXF 3 performs a lot better for the synchronous request reply case. It is as fast as for the asynchronous case. The reason is that we now also use a message listener for synchronous calls as long as our correlation id is based on the conduit id prefix. This is the default so this case is vastly improved. CXF 3 is up to 5 times faster than CXF 2.7.10.

There is one down side still. If you use message id as correlation id or a user correlation id set on the cxf message then cxf 3 will switch back to consumer.receive and will be as slow as CXF 2 again.



How to hack into any default apache karaf installation

Christian Schneider - Wed, 01/08/2014 - 11:00

Blog post added by Christian Schneider

Apache Karaf is an open source OSGi server developed by the Apache Software Foundation. It provides very convenient management functionality on top of existing OSGi frameworks. Karaf is used in several open source and commercial solutions.

As so often, convenience and security do not go well together. In the case of Karaf there is one known security hole in default installations that was introduced to make the initial experience with Karaf very convenient. Karaf by default starts an SSH server. It also delivers a bin/client command that is mainly meant to connect to the local Karaf server without a password.

Is your karaf server vulnerable?

Some simple steps to check if your Karaf installation is open:

  • Check "etc/org.apache.karaf.shell.cfg" for the attribute sshPort and note this port number. By default it is 8101
  • Do "ssh -p 8101 karaf@localhost". As expected, it will ask for a password. This can also be dangerous if you did not change the default password, but that risk is quite obvious
  • Now just do bin/client -a 8101. You will get a shell without supplying a password. If this works, your server is vulnerable
How does it work

The client command has a built-in SSH private key which is used when connecting to Karaf. The config file "etc/keys.properties" in Karaf defines the public keys that are allowed to connect to Karaf.

Why is this dangerous?

The private key inside the client command is fixed and publicly available. See karaf.key. As the mechanism also works with remote connections ("bin/client -a 8101 -h hostname"), anyone with access to your server IP can remotely control your Karaf server. As the Karaf shell also allows executing external programs (exec command), this even allows further access to your machine.

How to secure your server ?

Simply remove the public key of the karaf user in the "etc/keys.properties". Unfortunately this will stop the bin/client command from working.
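In etc/keys.properties the entry to remove or comment out looks roughly like the sketch below. The key data is truncated and the role list varies between Karaf versions, so treat the exact format as an assumption.

```properties
# Comment out (or delete) this line to disable key-based login for the karaf user:
# karaf=AAAAB3NzaC1kc3MAAACBAP...(public key data),admin
```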

Also make sure you change the password of the karaf user in "etc/users.properties".


10 reasons to switch to Apache Karaf 3

Christian Schneider - Tue, 01/07/2014 - 10:39

Blog post edited by Christian Schneider

Nicely timed as a Christmas present, Apache Karaf 3.0.0 was released on the 24th of December. As a user of Karaf 2.x you might ask yourself why to switch to the new major version. Here are 10 reasons why the switch is worth the effort.

External dependencies are cached locally now

One of the coolest features of Karaf is that it can load features and bundles from a maven repository. In Karaf 2.x the drawback was that external dependencies that are not already in the system dir or the local maven repo were always loaded from the external repo. Karaf 3 now uses real maven artifact resolution, so it automatically caches downloaded artifacts in the local maven repo and artifacts only have to be downloaded the first time.

Delay shell start till all bundles are up and running

A typical problem in Karaf 2.x, and also in Karaf 3 with default settings, is that the shell comes up before all bundles are started. So if you enter a command you might get an error that the command is unknown - simply because the respective bundle is not yet loaded. In Karaf 3 you can set the property "karaf.delay.console=true". Karaf will then show a progress bar on startup and start the console when all bundles are up and running. If you are in a hurry you can still press enter to start the shell faster.
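Assuming the property is set in etc/config.properties (the usual place for karaf.* startup properties; this location is an assumption), the change is a one-liner:

```properties
# etc/config.properties
karaf.delay.console=true
```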

Create kar archives from existing features

If you need some features for offline deployment then kar files are a nice alternative to setting up a maven repo or copying everything to the system dir. Most features are not available as kar files though. In Karaf 3 the kar:create command allows you to create a kar file from any installed feature repository. Kar files can now also be defined as pure repositories, so they can be installed without installing all contained features.

Example:

feature:repo-add camel 2.12.2
kar:create camel-2.12.2

A kar file with all camel features will be created below data/kar. You can also select which features to include.

More consistent commands

In karaf 2.x the command naming was not very consistent. For karaf 3 we have the common scheme of <subject>:<command> or <subject>:<secondary-subject>-<command>. For example adding feature repos now is:

feature:repo-add <url or short name> ?<version>

Instead of features:chooseurl and features:addurl.

The various dev commands are now moved to the subjects they affect. Like bundle:watch instead of dev:watch or system:property instead of dev:system-property.

JDBC commands

Karaf 3 allows you to interact directly with JDBC databases from the shell: creating a datasource, executing a SQL command, showing the results of a SQL query. For more details see the blog article from JB: New enterprise JDBC feature.

JMS commands

Similar to JDBC, Karaf 3 now contains commands for JMS interactions from the shell. You can create connection factories, send and consume messages. See the blog article from JB: New enterprise JMS feature.

Role based access control for commands and services

In Karaf 2.x every user with shell access can use every command, and OSGi services are not protected at all. Karaf 3 contains role-based access control for commands and services. So for example you can define a group of users that can only list bundles and do other non-admin tasks, simply by changing some configuration files. Similarly you can protect any OSGi service so it can only be called from a process with a successful JAAS login and the correct roles. More details about this feature can be found at http://coderthoughts.blogspot.de/2013/10/role-based-access-control-for-karaf.html.
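Command ACLs are plain config files named after the command scope. The sketch below follows Karaf 3's etc/org.apache.karaf.command.acl.<scope>.cfg convention; the role names are made up for illustration.

```properties
# etc/org.apache.karaf.command.acl.bundle.cfg
# read-only command available to a non-admin role
list = viewer
# lifecycle commands restricted to administrators
start = admin
stop = admin
```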

Diagnostics for blueprint and spring dm

In Karaf 2.x it was difficult to diagnose problems with bundles using blueprint and spring dm. Karaf 3 now has the simple bundle:diag command that lists diagnostics for all bundles that did not start. For example you can see that a blueprint bundle is waiting for a namespace or that a blueprint file has a syntax error. Simply try this the next time your bundles do not work as expected.

Features for persistence frameworks

Karaf 3 now has features for openjpa and hibernate. So along with the already present jpa and jta features this makes it easy to install everything you need to do jpa based persistence.

Features for CDI and EJB

The cdi feature installs pax cdi. This allows using the full set of CDI annotations, including portable extensions, in Apache Karaf. The openejb feature even allows installing OpenEJB for full EJB support on Apache Karaf.

This only lists some of the most notable features of Karaf 3. There is a lot more to discover. Take your time and dig around the features and commands.

