Latest Activity

Apache Karaf Tutorial Part 6 - Database Access

Christian Schneider - Tue, 03/03/2015 - 23:06

Blog post edited by Christian Schneider - "Corrections"

Shows how to access databases from OSGi applications running in Karaf and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally, JDBC and JPA examples show how to use such a DataSource from user code.

Prerequisites

You need an installation of Apache Karaf 3.0.3 for this tutorial.

Example sources

The example projects are on github Karaf-Tutorial/db.

Drivers and DataSources

In plain Java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work, as the ClassLoader of your bundle has no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver and publish it as an OSGi service. The user bundle should then only use the DataSource without knowing the driver specifics. This is quite similar to the best practice in application servers where the DataSource is managed by the server and published to JNDI.
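As an illustration, here is a minimal sketch (not part of the tutorial code) of a bundle looking up such a DataSource service directly via the OSGi API; in practice you would normally let Blueprint or Declarative Services inject it:

import java.sql.Connection;
import javax.sql.DataSource;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class DataSourceConsumer {
    // The bundle only sees the javax.sql.DataSource interface, never the driver class
    public void readData(BundleContext context) throws Exception {
        ServiceReference<DataSource> ref = context.getServiceReference(DataSource.class);
        DataSource ds = context.getService(ref);
        try (Connection con = ds.getConnection()) {
            // use plain JDBC here
        } finally {
            context.ungetService(ref);
        }
    }
}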

So we need to learn how to create and use DataSources first.

The DataSourceFactory services

To make it easier to create DataSources in OSGi, the specs define a DataSourceFactory interface. It allows you to create a DataSource for a specific driver from properties. Each database driver is expected to implement this interface and publish it with properties for the driver class name and the driver name.
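As a rough sketch, assuming a DataSourceFactory service has already been obtained (for example injected by Blueprint with a filter on osgi.jdbc.driver.name), creating a DataSource from it looks like this:

import java.util.Properties;
import javax.sql.DataSource;
import org.osgi.service.jdbc.DataSourceFactory;

public class DataSourceCreator {
    // dsf would typically be injected with a filter on osgi.jdbc.driver.name
    public DataSource create(DataSourceFactory dsf) throws Exception {
        Properties props = new Properties();
        props.setProperty(DataSourceFactory.JDBC_URL, "jdbc:h2:mem:person");
        props.setProperty(DataSourceFactory.JDBC_USER, "sa");
        return dsf.createDataSource(props);
    }
}

This is essentially what the pax-jdbc-config module described below does for you, driven by a Config Admin configuration instead of code.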

Introducing pax-jdbc

The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

  • Implement the DataSourceFactory service for databases whose drivers do not provide this service directly
  • Implement a pooling and XA wrapper for XADataSources (this is explained in the pax-jdbc docs)
  • Provide a facility to create DataSource services from Config Admin configurations
  • Provide Karaf features for many databases as well as for the above additional functionality

So it covers everything you need from driver installation to creation of production quality DataSources.

Installing the driver

The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

For several databases pax-jdbc already provides Karaf features to install a current version of the database driver.

For H2 the following commands will work

feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
service:list DataSourceFactory

Strictly speaking we would only need the pax-jdbc-h2 feature but we will need the others for the next steps.

This will install the pax-jdbc feature repository and the h2 database driver. This driver already implements the DataSourceFactory so the last command will display this service.

DataSourceFactory

[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
 osgi.jdbc.driver.class = org.h2.Driver
 osgi.jdbc.driver.name = H2
 osgi.jdbc.driver.version = 1.3.172
 service.id = 691
Provided by :
 H2 Database Engine (68)

The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

pooled and XA DataSourceFactory

[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
 osgi.jdbc.driver.class = org.h2.Driver
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jdbc.driver.version = 1.3.172
 pooled = true
 service.id = 694
 xa = true
Provided by :
 OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

Technically this DataSourceFactory also creates DataSource objects but internally they manage XA support and pooling. So we want to use this one for our later code examples.

Creating the DataSource

Now we just need to create a configuration with the correct factory pid to create a DataSource as a service.

So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content:

config for DataSource

osgi.jdbc.driver.name=H2-pool-xa
url=jdbc:h2:mem:person
dataSourceName=person

The config will automatically trigger the pax-jdbc-config module to create a DataSource.

  • The property osgi.jdbc.driver.name=H2-pool-xa selects the H2 DataSourceFactory with pooling and XA support we previously installed.
  • The url configures H2 to create a simple in-memory database named person.
  • The dataSourceName will be reflected in a service property of the DataSource so we can find it later.
  • You could also set pooling configurations in this config but we leave it at the defaults.

DataSource

karaf@root()> service:list DataSource
[javax.sql.DataSource]
----------------------
 dataSourceName = person
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = person
 service.factoryPid = org.ops4j.datasource
 service.id = 696
 service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
 url = jdbc:h2:mem:person
Provided by :
 OPS4J Pax JDBC Config (69)

So when we search for services implementing the DataSource interface we find the person datasource we just created.

When we installed the features above we also installed the Aries JNDI feature. This module maps OSGi services to JNDI objects. So we can also use JNDI to retrieve the DataSource, which will be used in the persistence.xml for JPA later.
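For example, a lookup through JNDI could look roughly like this (a sketch; the osgi:service URL scheme is provided by Aries JNDI):

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class JndiLookupExample {
    public DataSource lookupPersonDataSource() throws Exception {
        InitialContext ctx = new InitialContext();
        // same JNDI name that the persistence.xml will use later
        return (DataSource) ctx.lookup("osgi:service/person");
    }
}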

jndi url of DataSource

osgi:service/person

Karaf jdbc commands

Karaf contains some commands to manage DataSources and do queries on databases. The commands for managing DataSources in karaf 3.x still work with the older approach of using blueprint to create DataSources. So we will not use these commands but we can use the functionality to list datasources, list tables and execute queries.

jdbc commands

feature:install jdbc
jdbc:datasources
jdbc:tables person

We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
jdbc:query person "select * from person"

This creates a table person, adds a row to it and shows the table.

The output should look like this

select * from person

NAME                | TWITTERNAME
--------------------------------------
Christian Schneider | @schneider_chris

Accessing the database using JDBC

The project db/examplejdbc shows how to use the DataSource we installed and execute JDBC statements on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class DbExample. The test method is then called as init method and runs some JDBC statements on the DataSource. The DbExample class is completely independent of OSGi and can easily be tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.
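The JDBC part of DbExample then boils down to something like the following sketch (simplified; the actual class in the example repo may differ in details):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class DbExampleSketch {
    private DataSource dataSource;

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource; // injected by blueprint
    }

    // called as init-method from blueprint
    public void test() throws Exception {
        try (Connection con = dataSource.getConnection(); Statement st = con.createStatement()) {
            st.executeUpdate("create table person (name varchar(100), twittername varchar(100))");
            st.executeUpdate("insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')");
            try (ResultSet rs = st.executeQuery("select name, twittername from person")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + ", " + rs.getString("twittername"));
                }
            }
        }
    }
}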

Build and install

Build works like always using maven

> mvn clean install

In Karaf we just need to install our own bundle as we have no special dependencies:

> install -s mvn:net.lr.tutorial.karaf.db/db-examplejdbc/1.0-SNAPSHOT
Using datasource H2, URL jdbc:h2:~/test
Christian Schneider, @schneider_chris,

After installation the bundle should directly print the db info and the persisted person.

Accessing the database using JPA

For larger projects JPA is often used instead of hand-crafted SQL. Using JPA has two big advantages over JDBC:

  1. You need to maintain less SQL code
  2. JPA provides dialects for the subtle differences between databases that you would otherwise have to code yourself.

For this example we use Hibernate as the JPA Implementation. On top of it we add Apache Aries JPA which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

The project examplejpa shows a simple project that implements a PersonService managing Person objects.
Person is just a Java bean annotated with the JPA @Entity annotation.

Additionally the project implements two Karaf shell commands, person:add and person:list, that allow you to easily test the project.

persistence.xml

As in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings and lists the persistent classes. The DataSource is referenced using the JNDI name "osgi:service/person".

The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven-bundle-plugin in the pom. The Aries JPA container will scan for these attributes
and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.

blueprint.xml

We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
The following snippet is the interesting part:

<bean id="personService" class="net.lr.tutorial.karaf.db.examplejpa.impl.PersonServiceImpl"> <jpa:context property="em" unitname="person" /> <tx:transaction method="*" value="Required"/> </bean>

This looks up the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread-safe EntityManager (using a ThreadLocal under the hood) into the
PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.
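A PersonServiceImpl written against this setup is then plain JPA code. A rough sketch (method and field names are illustrative, not necessarily identical to the example project):

import java.util.List;
import javax.persistence.EntityManager;

public class PersonServiceImplSketch {
    private EntityManager em; // injected by Aries JPA via <jpa:context>

    public void setEm(EntityManager em) {
        this.em = em;
    }

    public void add(Person person) {
        // runs inside the container-managed transaction declared in blueprint.xml
        em.persist(person);
    }

    public List<Person> list() {
        // Person is the @Entity bean described above
        return em.createQuery("select p from Person p", Person.class).getResultList();
    }
}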

Build and Install

> mvn clean install

A prerequisite is that the H2 DataSource is installed as described above. Then we have to install the bundles for Hibernate, Aries JPA, transaction, JNDI and of course our db-examplejpa bundle.
See ReadMe.txt for the exact commands to use.

Test

person:add 'Christian Schneider' @schneider_chris

Then we list the persisted persons

karaf@root> person:list
Christian Schneider, @schneider_chris

Summary

In this tutorial we learned how to work with databases in Apache Karaf. We installed drivers for our database and created a DataSource. We were able to check and manipulate the database using the jdbc:* commands. In the examplejdbc we learned how to acquire a DataSource
and work with it using plain JDBC. Last but not least we also used JPA to access our database.

Back to Karaf Tutorials

Categories: Christian Schneider

New Apache WSS4J and CXF releases

Colm O hEigeartaigh - Fri, 02/20/2015 - 17:37
Apache WSS4J 2.0.3 and 1.6.18 have been released. Both releases contain a number of fixes in relation to validating SAML tokens, as covered earlier. In addition, Apache WSS4J 2.0.3 has unified security error messages to prevent some attacks (see here for more information). Apache CXF 3.0.4 and 2.7.15 have also been released, both of which pick up the recent WSS4J releases.
Categories: Colm O hEigeartaigh

Unified security error messages in Apache WSS4J and CXF

Colm O hEigeartaigh - Mon, 02/16/2015 - 17:59
When Apache WSS4J encounters an error when processing a secured SOAP message it throws an exception. This could be a configuration error, an invalid Signature, incorrect UsernameToken credentials, etc. The SOAP stack in question, Apache CXF for the purposes of this post, then converts the exception into a SOAP Fault and returns it to the client. However, the SOAP stack must take care not to leak information (e.g. internal configuration details) to an attacker. This post looks at some changes that are coming in WSS4J and CXF in this area.

The later releases of Apache CXF 2.7.x map the WSS4J exception message to one of the standard error QNames defined in the SOAP Message Security Profile 1.1 specification. One exception is if a "replay" error occurred, such as if a UsernameToken nonce is re-used. This type of error is commonly seen in testing scenarios, when messages are replayed, and returning the original error aids in figuring out what is going wrong. Apache CXF 3.0.0 -> 3.0.3 extends this functionality a bit by adding a new configuration option:
  • ws-security.return.security.error - Whether to return the security error message to the client, and not one of the default error QNames. Default is "false".
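For reference, here is a rough sketch of how such a property could be set programmatically when publishing a CXF endpoint (in Spring or Blueprint configurations it would go into the endpoint's properties map instead); the GreeterImpl service class is made up for the example:

import java.util.HashMap;
import java.util.Map;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

public class ErrorPolicyExample {
    public void publish() {
        JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
        factory.setServiceClass(GreeterImpl.class);        // hypothetical service implementation
        factory.setAddress("http://localhost:9000/greeter");
        Map<String, Object> props = new HashMap<>();
        // return the real WSS4J error to clients (useful for testing, off by default)
        props.put("ws-security.return.security.error", "true");
        factory.setProperties(props);
        factory.create();
    }
}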
However, even returning one of the standard security error QNames can provide an "oracle" for certain types of attacks. For example, Apache WSS4J recently released a security advisory for an attack that works if an attacker can distinguish whether the decryption of an EncryptedKey or EncryptedData structure failed. There are also attacks on data encrypted via a cipher block chaining (CBC) mode that only require knowledge of whether a specific decryption failed.

Therefore from Apache WSS4J 2.0.3 onwards (and CXF 3.0.4 onwards) a single error fault message ("A security error was encountered when verifying the message") and code ("http://ws.apache.org/wss4j", "SecurityError") is returned on a security processing error. It is still possible to set "ws-security.return.security.error" to "true" to return the underlying security error to aid in testing etc.
Categories: Colm O hEigeartaigh

Two new security advisories released for Apache WSS4J

Colm O hEigeartaigh - Tue, 02/10/2015 - 12:47
Two new security advisories have been released for Apache WSS4J, both of which were fixed in Apache WSS4J 2.0.2 and 1.6.17.
  • CVE-2015-0226: Apache WSS4J is (still) vulnerable to Bleichenbacher's attack
  • CVE-2015-0227: Apache WSS4J doesn't correctly enforce the requireSignedEncryptedDataElements property
Please see the Apache WSS4J security advisories page for more information.
Categories: Colm O hEigeartaigh

New SAML validation changes in Apache WSS4J and CXF

Colm O hEigeartaigh - Tue, 02/03/2015 - 18:27
Two new Apache WSS4J releases are currently under vote (1.6.18 and 2.0.3). These releases contain a number of changes in relation to validating SAML tokens. Apache CXF 2.7.15 and 3.0.4 will pick up these changes in WSS4J and enforce some additional constraints. This post will briefly cover what these new changes are.

1) Security constraints are now enforced on SAML Authn (Authentication) Statements

From the 1.6.18 and 2.0.3 WSS4J releases, security constraints are now enforced on SAML 2.0 AuthnStatements and SAML 1.1 AuthenticationStatements by default. What this means is that we check that:
  • The AuthnInstant/AuthenticationInstant is not "in the future", subject to a configured future TTL value (60 seconds by default).
  • The SessionNotOnOrAfter value for SAML 2.0 tokens is not stale / expired.
  • The Subject Locality (IP) address is either a valid IPv4 or IPv6 address.
2) Enforce constraints on SAML Assertion "IssueInstant" values

We now enforce that a SAML Assertion "IssueInstant" value is not "in the future", subject to the configured future TTL value (60 seconds by default). In addition, if there is no "NotOnOrAfter" Condition in the Assertion, we now enforce a TTL constraint on the IssueInstant of the Assertion. The default value for this is 30 minutes.

3) Add AudienceRestriction validation by default

The new WSS4J releases allow the ability to pass a list of Strings through to the SAML validation code, against which any AudienceRestriction addresses of the assertion are compared. If the list that is passed through is not empty, then at least one of the AudienceRestriction addresses in the assertion must be contained in the list. Apache CXF 3.0.4 and 2.7.15 will pass through the endpoint address and the service QName by default for validation (for JAX-WS endpoints). This is controlled by a new JAX-WS security property:
  • ws-security.validate.audience-restriction: If this is set to "true", then IF the SAML Token contains Audience Restriction URIs, one of them must match either the request URL or the Service QName. The default is "true" for CXF 3.0.x, and "false" for 2.7.x.
Categories: Colm O hEigeartaigh

Single Logout with Fediz - WS-Federation

Jan Bernhardt - Fri, 01/30/2015 - 15:25
WS-Federation is primarily used to achieve Single Sign-On (SSO). This raises the challenge of how to securely log out from multiple applications once the user is done with his work. Navigating to each previously used application to hit the logout button would be quite inconvenient. Fortunately the WS-Federation standard does not only define how to do single sign-on, but also how to do single logout.

In this blog I'll explain how to set up a demonstrator to show single sign-on as well as single sign-off. Since single sign-off is implemented in CXF Fediz version 1.2, I'm going to use a snapshot build, as 1.2 is not yet released.
First of all we need to download Tomcat 7, since we will deploy our IDP/STS as well as our two demo applications to a Tomcat container each. I extracted three copies of the Tomcat zip and renamed the folders to:
  • Fediz-IDP
  • Fediz-RP1
  • Fediz-RP2
Next I opened a terminal within the cxf-fediz source code which I downloaded from github and ran Maven to build Fediz:
mvn clean install

Setup IDP

After my build was successful I copied the fediz-idp-sts.war file from cxf-fediz/services/sts/target/ into my Fediz-IDP/webapps/ deployment folder. I also did the same with the fediz-idp.war file from cxf-fediz/services/idp/target/.
Since the default https Fediz port for the IDP and STS is 9443, and to avoid port collisions with my two other Tomcat instances, I need to update the port configuration in my Tomcat Fediz-IDP/conf/server.xml. Here I update all ports starting with '8' to start with a '9'.
<Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
maxHttpHeaderSize="65536"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
keystoreFile="idp-ssl-key.jks"
keystorePass="tompass"
truststoreFile="idp-ssl-trust.jks"
truststorePass="ispass"
truststoreType="JKS"
clientAuth="want"
sslProtocol="TLS" />
To enable SSL for my Fediz-IDP tomcat I need to provide a keystore as well as a truststore. For demo purposes I will simply copy the Java keystores from my Fediz build: in cxf-fediz/services/idp/target/classes/ I find the files idp-ssl-key.jks and idp-ssl-trust.jks, which I copy to my Fediz-IDP root folder.
Before you can start Fediz-IDP you also need to get the expected JDBC driver, which by default is the HyperSQL JDBC driver. You need to download the zip file and then extract all jar files from /hsqldb-2.3.2/hsqldb/lib/ to Fediz-IDP/lib/.
Now you can start the Fediz-IDP tomcat server via Fediz-IDP/bin/startup.sh.
To avoid OutOfMemory errors you should add the following settings to your CATALINA_OPTS system environment variable: -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=128M
By default the Fediz IDP has only basic authentication activated for user login. This is done to make it easier to run some system tests. However, for single logout HTTP Basic authentication is not recommended, because the browser will cache your user credentials and automatically send them to the IDP. You would then have to close all your current browser windows to actually see the login popup again after logout. If you enable form-based authentication in your webapps/fediz-idp/WEB-INF/security-config.xml instead, you will actually see a login form again after your logout action. Here is a sample configuration showing how to enable form-based authentication:
<security:http use-expressions="true">
<security:custom-filter after="CHANNEL_FILTER" ref="stsPortFilter" />
<security:custom-filter after="SERVLET_API_SUPPORT_FILTER" ref="entitlementsEnricher" />
<security:intercept-url pattern="/FederationMetadata/2007-06/FederationMetadata.xml" access="isAnonymous() or isAuthenticated()" />

<!-- MUST be http-basic thus systests run fine -->
<security:form-login />
<security:http-basic />
<security:logout delete-cookies="FEDIZ_HOME_REALM,JSESSIONID" invalidate-session="true" />
</security:http>

You can also disable HTTP Basic authentication if you want to, or just leave it enabled. In that case you can use both authentication styles: you will see an HTML authentication form when you are requested to log in, but you could also provide an HTTP Basic authentication header to log in.
After you updated the IDP configuration you need to restart the IDP tomcat server to apply your changes.
Setup 1. Demo App

First of all we must provide the Fediz plugin dependencies to our RP tomcat container. For this purpose we need to create a fediz subfolder in Fediz-RP1/lib/. Next we extract the content of the tomcat plugin dependencies zip file (cxf-fediz\plugins\tomcat\target\fediz-tomcat-1.2.0-SNAPSHOT-zip-with-dependencies.zip) to the fediz subfolder.
To make sure that tomcat loads these additional dependencies we must also update the catalina.properties in Fediz-RP1/conf:
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/lib/fediz/*.jar

For Fediz-RP1 we will keep all port settings as they are. To keep things simple with the SSL connection we will reuse the idp-ssl-key.jks keystore from the Fediz-IDP and copy this keystore also to the Fediz-RP1 root folder. The server.xml file needs to have the following SSL connector configured for Fediz-RP1:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
keystoreFile="idp-ssl-key.jks"
keystorePass="tompass"
clientAuth="false"
sslProtocol="TLS" />
Before we start the demo app container, we need to copy the demo app into the webapps folder; it can be found at cxf-fediz/examples/simpleWebapp/target/fedizhelloworld.war.
Finally we must provide a correct fediz configuration file to the config folder of the demo app container. For this purpose we can copy the demo config file from cxf-fediz/examples/simpleWebapp/src/main/config/fediz_config.xml to Fediz-RP1/conf/.
To make sure that the SAML tokens issued by the STS can be validated at the RP we must also install the correct STS truststore. This we can do by copying cxf-fediz/services/sts/target/classes/ststrust.jks to Fediz-RP1 root folder.
Now everything should be in place so that we can start Fediz-RP1.

We should see no exceptions in the logfiles and we should see the metadata document from the RP at the following URL: https://localhost:8443/fedizhelloworld/FederationMetadata/2007-06/FederationMetadata.xml
Setup 2. Demo App

The second demo app will be quite similar to the first. Therefore we can simply copy the Fediz-RP1 folder and rename it to Fediz-RP2. To avoid port collisions, we also need to update some server ports.
Therefore we will update all ports beginning with a leading '8' and replace it with a leading '7' in the Fediz-RP2/conf/server.xml file.

Since we are going to start both tomcat containers on the same machine (localhost), we must also change the context path of the second demo app. Otherwise both apps would use the same cookies. Thus we need to rename the fedizhelloworld.war file within the Fediz-RP2/webapps/ folder to fedizhelloworld2.war.

To also make this application known to the IDP, you need to register it via the IDP REST interface. You can use SoapUI, for example, or simply curl from your command line.

POST https://localhost:9443/fediz-idp/services/rs/applications
<ns2:application xmlns:ns2="http://org.apache.cxf.fediz/">
     <realm>urn:org:apache:cxf:fediz:fedizhelloworld2</realm>
     <role>ApplicationServiceType</role>
     <serviceDisplayName>Fedizhelloworld</serviceDisplayName>
     <serviceDescription>Web Application to illustrate WS-Federation</serviceDescription>
     <protocol>http://docs.oasis-open.org/wsfed/federation/200706</protocol>
     <tokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</tokenType>
     <lifeTime>3600</lifeTime>
</ns2:application>
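If you prefer Java over curl, a minimal sketch using the standard JAX-RS 2.0 client API could look like this (authentication and SSL trust setup for the self-signed IDP certificate are omitted and left as assumptions):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class RegisterApplication {
    public static void main(String[] args) {
        String payload =
            "<ns2:application xmlns:ns2=\"http://org.apache.cxf.fediz/\">"
            + "<realm>urn:org:apache:cxf:fediz:fedizhelloworld2</realm>"
            + "<role>ApplicationServiceType</role>"
            + "<serviceDisplayName>Fedizhelloworld</serviceDisplayName>"
            + "<serviceDescription>Web Application to illustrate WS-Federation</serviceDescription>"
            + "<protocol>http://docs.oasis-open.org/wsfed/federation/200706</protocol>"
            + "<tokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</tokenType>"
            + "<lifeTime>3600</lifeTime>"
            + "</ns2:application>";
        Client client = ClientBuilder.newClient();
        Response response = client
            .target("https://localhost:9443/fediz-idp/services/rs/applications")
            .request(MediaType.APPLICATION_XML)
            .post(Entity.entity(payload, MediaType.APPLICATION_XML));
        System.out.println("HTTP status: " + response.getStatus());
        client.close();
    }
}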
Next you need to add all claims required for the helloworld application. Since the claim types are already known from the default fedizhelloworld application you only need to add a link between the application and the claims:

POST https://localhost:9443/fediz-idp/services/rs/applications/urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld2/claims 
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role</claimType>
<optional>false</optional>
</ns2:requestClaim>
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname</claimType>
<optional>true</optional>
</ns2:requestClaim>
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname</claimType>
<optional>true</optional>
</ns2:requestClaim>
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress</claimType>
<optional>true</optional>
</ns2:requestClaim>
Next you need to register this application to a given IDP realm.
POST https://localhost:9443/fediz-idp/services/rs/idps/urn%3Aorg%3Aapache%3Acxf%3Afediz%3Aidp%3Arealm-A/applications
<ns2:application xmlns:ns2="http://org.apache.cxf.fediz/">
<realm>urn:org:apache:cxf:fediz:fedizhelloworld2</realm>
</ns2:application>
You can check if your application was registered correctly via GET https://localhost:9443/fediz-idp/services/rs/idps.
Now the IDP will be able to provide SAML tokens for the second demo application.

Test Single Sign-On

To test whether single sign-on is working as expected you can open the following URL in your browser: https://localhost:8443/fedizhelloworld/secure/fedservlet. You should get redirected to the IDP and need to choose Realm-A as your home realm. Next you need to enter your credentials bob:bob.
You should be redirected back to the fedservlet URL and should see your username, assigned roles as well as other claims.

If you now enter https://localhost:7443/fedizhelloworld2/secure/fedservlet in your browser you should get redirected to the IDP, and then, without the need to enter your credentials again, the IDP should redirect you back to the demo application.

Congratulations. Single Sign-On is working!

Test Single Sign-Off

The goal of this blog post was not to achieve single sign-on but rather single sign-off. You have two options to trigger single logout:
  1. You can invoke a logout request starting at the demo application:
    https://localhost:8443/fedizhelloworld/secure/logout
  2. You can invoke a logout request directly at the IDP:
    https://localhost:9443/fediz-idp/federation?wa=wsignout1.0
After you have triggered the logout process you will be redirected to a page listing all applications for which the IDP had previously issued security tokens. You will also be asked if you really want to log out from all these applications. After you confirm the logout request, you should see a confirmation page. This page contains the same list of applications as before, but this time with a green check marker at the end of each line.

These images are the key to performing the actual logout at all the remote applications. Each image resource URL points to the logout URL of one of these applications, so by resolving the image resources your browser also invokes the logout URLs of all these applications.

If you now invoke either of the two applications you should again be redirected to the login page of the IDP.
Congratulations. Single Logout is working!
Limitations

The WS-Federation standard does not require applications to provide a "logout image" at the logout URL; this has simply proven to be best practice. However, if the logout URL of an application does not provide an image, the confirmation page will show a broken image, even though the logout was most likely successful.

The Single Logout implementation for Fediz is currently not able to delegate a logout request to the requestor's IDP. So, for example, if the user is authenticated not at realm-a but at realm-b instead, the IDP does not forward the wsignout action to realm-b. Thus the user will only be logged out of applications in realm-a but still has an active session in realm-b.

Hopefully a global logout will be supported by Fediz in the future as well.
Categories: Jan Bernhardt

LDAP support in Apache Camel

Colm O hEigeartaigh - Wed, 01/28/2015 - 17:29
Apache Camel allows you to add LDAP queries to your Camel routes via the camel-ldap and camel-spring-ldap components. The camel-ldap component allows you to perform an LDAP query using a filter as the message payload. The spring-ldap component is a wrapper for Spring LDAP, and is a bit more advanced than the camel-ldap component, in that it also supports the "bind" and "unbind" operations, in addition to "search".

I've created two test-cases that show how to use each of these components. Both test-cases use the Camel file component to read in files that contain LDAP queries. These queries are then dispatched to an Apache DS server that is configured via annotations in the test code, using an LDIF file containing some test data. The results are then processed and written out in the target directory. The test-cases are available here
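To give a flavour of the camel-ldap style, here is a hedged sketch (not the actual test code; the connection settings and search base are placeholders). The message body carries the LDAP filter, and the endpoint references a DirContext registered under a bean name:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class LdapRouteSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:10389"); // placeholder Apache DS port
        SimpleRegistry registry = new SimpleRegistry();
        registry.put("myLdap", new InitialDirContext(env));

        DefaultCamelContext camel = new DefaultCamelContext(registry);
        camel.addRoutes(new RouteBuilder() {
            public void configure() {
                // the file content is used as the LDAP search filter, e.g. (uid=someone)
                from("file:target/queries")
                    .to("ldap:myLdap?base=ou=users,ou=system")
                    .to("log:ldap-results");
            }
        });
        camel.start();
    }
}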
Categories: Colm O hEigeartaigh

Apache Santuario - XML Security for Java 2.0.3 and 1.5.8 released

Colm O hEigeartaigh - Mon, 01/19/2015 - 16:53
Versions 2.0.3 and 1.5.8 of Apache Santuario - XML Security for Java have been released. Version 2.0.3 contains a critical security advisory (CVE-2014-8152) in relation to the new streaming XML Signature support introduced in version 2.0.0:
For certain XML documents, it is possible to modify the document and the streaming XML Signature verification code will not report an error when trying to validate the signature.

Please note that the "in-memory" (DOM) API for XML Signature is not affected by this issue, nor is the JSR-105 API. Also, web service stacks that use the streaming functionality of Apache Santuario (such as Apache CXF/WSS4J) are also not affected by this vulnerability.Apart from this issue, version 2.0.3 contains a significant performance improvement, and both releases contain minor bug fixes and dependency upgrades.
Categories: Colm O hEigeartaigh

How fast is CXF ? - Measuring CXF performance on http, https and jms

Christian Schneider - Fri, 01/16/2015 - 09:13

Blog post edited by Christian Schneider

The performance numbers in this article are a bit out of date


For a more current JMS performance measurement see Revisiting JMS performance. Improvements in CXF 3.0.0.

On a 2014 system http performance should be around 10k - 20k messages/s for small messages.

 

From time to time people ask: how fast is CXF? Of course this is a difficult question, as the measured speed depends very much on the hardware of the test setup and on the whole definition of the test.
So I am trying to explain how you can do your own tests and what to do to make sure you get clean results.

What should you keep in mind when doing performance tests with Java?

  • Performance is very much influenced by thread count and request size, so it is a good idea to vary each of them
  • As long as you have not maxed out at least one resource you can improve the results. Typical resources to check are processor load, memory and network
  • Increase the thread count until you max out a resource, but do not go much higher
  • Always use a warmup phase (~1-2 minutes). Java needs to load classes the first time, and on the Sun VM the HotSpot compiler will additionally kick in after some time; a simple measurement harness that respects this is sketched after this list
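Here is a minimal, framework-agnostic sketch of such a measurement loop; callService() is a placeholder for the real SOAP client call and the numbers are arbitrary:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadTestSketch {
    static final int THREADS = 10;
    static final long WARMUP_MS = 60_000;
    static final long MEASURE_MS = 60_000;

    public static void main(String[] args) throws Exception {
        AtomicLong counter = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        long end = System.currentTimeMillis() + WARMUP_MS + MEASURE_MS;
        for (int i = 0; i < THREADS; i++) {
            pool.execute(() -> {
                while (System.currentTimeMillis() < end) {
                    callService();               // placeholder for the real SOAP call
                    counter.incrementAndGet();
                }
            });
        }
        Thread.sleep(WARMUP_MS);
        counter.set(0);                          // discard results from the warmup phase
        Thread.sleep(MEASURE_MS);
        pool.shutdownNow();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("TPS: " + counter.get() * 1000.0 / MEASURE_MS);
    }

    static void callService() {
        // invoke the CustomerService client here
    }
}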
Prerequisites

The test project can be found on my github account. You can either download a zip or clone the project with git:
https://github.com/cschneider/performance-tests

As a load generator and measurement tool we use soapUI. Download the free version from the link below:
http://www.soapui.org/

The test plan

We test SOAP/HTTP, SOAP/HTTPS and SOAP/JMS performance using a small but non-trivial service. For this the CustomerService from the wsdl_first example will be used.
Two variables will be changed for the test series: the SOAP message size and the number of sender/listener threads.
The SOAP message size will be tuned by using a String of variable length. It will be set so the complete SOAP message reaches the desired size.

The payload size can be adjusted by the number of customer records the server sends:

Size   | payload size
-------+-------------
Small  | 500
Medium | 10 KB
Large  | 1 MB

The second variable is the number of sender and listener Threads. We will test with 5, 10 and 20 Threads. The optimal number of threads
correlates with the number of processor cores. In this case there are two cores. With bigger machines the maximum number of threads should be
higher.

Customerservice SOAP/HTTP performance

For the server side I have prepared a maven project which starts the CustomerService implementation from the wsdl_first example on an embedded Jetty. We could
also use an external server, but in my tests the results were similar and the embedded version can be started very easily.

The number of listener threads can be adjusted in the file src/main/resources/server-applicationContext.xml :

<httpj:threadingParameters minThreads="5" maxThreads="5" />

Start the server:

cd cxf
mvn -Pserver

Start soapUI and load the soapUI project from the file cxf/cxf-performance-soapui-project.xml. The project was built using the WSDL of the CustomerService and contains
test requests and a load test definition. Alternatively a client class is provided that will also give the performance results, but soapUI is the more neutral environment.

Now navigate to Loadtest 1 as shown in the screenshot and start the load test by clicking on the green arrow. The interesting result is tps (transactions per second). It measures how many requests/responses are processed per second.
At first the number will be quite low but it will increase steadily. That is because of class loading and optimizations in Java. Let the test run for 60 seconds. This was the warmup phase. Now start the test again.

Customerservice SOAP/JMS performance

Testing JMS is much harder than HTTP. soapUI supports JMS tests, but it needs some more configuration than in the HTTP case and did not work so well for me. So
I used the Java client for the JMS tests.

Additionally there are many tuning possibilities that affect the speed tremendously. For example I was not able to send more than
700 messages per second at the start, as my ActiveMQ config was not correctly optimized. When I used the throughput-optimized config
the speed was much higher.

Beware though when using the default "activemq-throughput.xml": it limits the size of queues to 1MB and stops the sender when the limit is reached.
In my case that meant that my sender was hanging mysteriously. After I set the limit to 100MB my tests worked. See activemq.xml for my configs.

Many more performance tuning tips can be found on the ActiveMQ website: http://activemq.apache.org/performance-tuning.html

Environment

It is always important to describe exactly on which configuration the test was run.
All the tests below were run on an Intel Core i5 / 8GB system. Client and server were on the same machine.

SOAP/HTTP Results

Threads are listener and client threads. CPU load is read from the Windows Task Manager. Transactions per second is the highest number reported by soapUI.

Threads | Size   | CPU Load | Transactions per Second
--------+--------+----------+------------------------
5       | Small  | 55%      | 2580
10      | Small  | 100%     | 3810
20      | Small  | 100%     | 4072
5       | Medium | 75%      | 2360
10      | Medium | 100%     | 2840
20      | Medium | 100%     | 2820
5       | Large  | 90%      | 94
10      | Large  | 92%      | 94
20      | Large  | 95%      | 84

So it looks like 10 threads is ideal for the test machine with 2 cores and 4 virtual cores. This is quite close to the rule of thumb of using double the number of cores as the optimal thread count.
When scaling up the payload size, performance drops by roughly the same factor.

SOAP/HTTPS results

Cipher: AES-128 128 Bit key


Threads | Size   | CPU Load | Transactions per Second
--------+--------+----------+------------------------
5       | Small  | 60%      | 2408
10      | Small  | 100%     | 3310
20      | Small  | 100%     | 3430
5       | Medium | 80%      | 1620
10      | Medium | 100%     | 1750
20      | Medium | 100%     | 1800
5       | Large  | 100%     | 34
10      | Large  | 100%     | 34
20      | Large  | 100%     | 34

The HTTPS results show the same pattern: about 10 threads is optimal, and throughput drops roughly in proportion to payload size. Compared to plain HTTP the absolute numbers are lower, most noticeably for the large messages.

SOAP/JMS results

The JMS tests additionally need a broker. I used ActiveMQ 5.5.0 with the activemq.xml that can be found in the github repo above.

Using request / reply with a fixed reply queue.

Threads | Size   | CPU Load | Transactions per Second
--------+--------+----------+------------------------
5       | Small  | 100%     | 1670
10      | Small  | 100%     | 1650
20      | Small  | 100%     | 1710
5       | Medium | 100%     | 1120
10      | Medium | 100%     | 1120
20      | Medium | 100%     | 1140
3       | Large  | 75%      | 30
5       | Large  | 75%      | 28

Using one way calls

Threads | Size  | CPU Load | Transactions per Second (only client) | Transactions per Second (client and server)
--------+-------+----------+---------------------------------------+---------------------------------------------
5       | Small | 100%     | 3930                                  | 3205
10      | Small | 100%     | 3900                                  | 3167
20      | Small | 100%     | 4200                                  | 3166
30      | Small | 100%     | 4090                                  | 2818

When testing one way calls, at first only the client was running. So it can be expected that the performance is more than double the performance of
request/response, as we do not have to send back a message and there is no server consuming processor power.

Next the server was also running. This case is, as expected, about double the performance of request/reply, as only half the messages have to be sent/received.

Categories: Christian Schneider

XML Advanced Electronic Signature (XAdES) support in Apache Camel

Colm O hEigeartaigh - Wed, 01/14/2015 - 12:43
I have previously covered some XML Signature and Encryption testcases in Apache Camel. Camel 2.15 will feature some new limited support for XML Advanced Electronic Signatures (XAdES) in the XML Security component. This post will briefly cover what XML Advanced Electronic Signatures are, and show how they can be produced in Camel. No support exists yet for validating XAdES Signatures in Camel. Note that as Camel 2.15 is not yet released, some of the details are subject to change.

XML Signature has a number of shortcomings in terms of conveying meta-data describing the signing process to the recipient. It does not include the signing certificate/key in the signature itself. It does not tell the recipient when or where the signature was created, which role the signer had at the time of signing, what format the signed data is in, what the signature policy was, etc. XAdES attempts to solve these problems by introducing standard properties that are inserted into the "Object" part of an XML Signature. Some of these properties are then included in the message signature.

Camel 2.15 will support XAdES in the XML Security component by a new "properties" configuration option, which must reference a XAdESSignatureProperties implementation. I added a new test to the camel-xmlsecurity project in github that illustrates how to do this. The spring configuration for the test is here. The xmlsecurity route links to a DefaultXAdESSignatureProperties implementation, which is configured with the signing key (and alias), an "Implied" Signature policy, and a role of "employee". The resulting ds:Object in the XML Signature looks like:

<ds:Object>
  <etsi:QualifyingProperties xmlns:etsi="..." Target="#...">
    <etsi:SignedProperties Id="_1c03790b-8e46-4837-85bc-d6562e4c713c"> 
      <etsi:SignedSignatureProperties>
        <etsi:SigningTime>2015-01-14T11:19:49Z</etsi:SigningTime>
        <etsi:SigningCertificate>
          <etsi:Cert>
            <etsi:CertDigest>
              <ds:DigestMethod Algorithm="...#sha256"/>
              <ds:DigestValue>KsquBA...=</ds:DigestValue>
            </etsi:CertDigest>
            <etsi:IssuerSerial>
              <ds:X509IssuerName>...,C=US</ds:X509IssuerName>
              <ds:X509SerialNumber>1063337...</ds:X509SerialNumber>
            </etsi:IssuerSerial>
          </etsi:Cert>
        </etsi:SigningCertificate>
        <etsi:SignaturePolicyIdentifier>
          <etsi:SignaturePolicyImplied/>
        </etsi:SignaturePolicyIdentifier>
        <etsi:SignerRole>
          <etsi:ClaimedRoles>
            <etsi:ClaimedRole>employee</etsi:ClaimedRole>
          </etsi:ClaimedRoles>
        </etsi:SignerRole>
      </etsi:SignedSignatureProperties>
    </etsi:SignedProperties>
  </etsi:QualifyingProperties>
</ds:Object>
Categories: Colm O hEigeartaigh

Signing and encrypting Apache Camel routes

Colm O hEigeartaigh - Mon, 01/12/2015 - 16:02
A recent blog post looked at using the XML Security component and dataformat in Apache Camel to sign and encrypt XML documents. However, what if you wish to secure non-XML data? An alternative is to use the Apache Camel Crypto component and dataformat. The Crypto component provides the ability to sign (and verify) messages (using the JCE). Similarly, the Crypto dataformat allows you to encrypt (and decrypt) messages (again using the JCE). Another alternative is to use the PGPDataFormat, which allows you to use PGP to sign/encrypt Camel messages.

I have created a github project called "camel-crypto" with some samples about how to use these features. It contains the following tests:
The tests follow a similar pattern, where they take some (XML) data, sign/encrypt it, and copy it to a particular directory. Another route then takes the secured data, and verifies/decrypts it, and copies it to another directory. The tests also show how to use the Camel Jasypt component to avoid hard-coding plaintext passwords in the spring configuration files. The tests rely on a SNAPSHOT version of Camel (2.15-SNAPSHOT) at the time of writing this post, due to some fixes that were required (particularly in terms of adding new (Spring) configuration options to the PGPDataFormat).
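To give a flavour of the Crypto dataformat in the Java DSL, here is a hedged sketch (the real samples use Spring XML and Jasypt-encrypted passwords; key handling is simplified here):

import java.security.Key;
import javax.crypto.KeyGenerator;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.converter.crypto.CryptoDataFormat;

public class CryptoRoutesSketch extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // in the real samples the key comes from a keystore, not a freshly generated one
        Key key = KeyGenerator.getInstance("AES").generateKey();
        CryptoDataFormat crypto = new CryptoDataFormat("AES", key);

        from("file:src/data")                   // pick up the plain files
            .marshal(crypto)                    // encrypt the payload
            .to("file:target/encrypted-data");

        from("file:target/encrypted-data")
            .unmarshal(crypto)                  // decrypt it again
            .to("file:target/decrypted-data");
    }
}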
Categories: Colm O hEigeartaigh

[OT] U2: "We were pilgrims on our way"

Sergey Beryozkin - Wed, 12/24/2014 - 11:57


"The Miracle (of Joey Ramone)" from the last U2 "Songs of Innocence" album is a refreshing song. The actual album's content is strong. Not necessarily easy to listen though but it is been played in my car's CD player more or less every time I go driving for the last few weeks. The trick is, after listening to it for the first time, do a few days pause, and then listen again with a volume much higher than last time. It's a blast.

I still do like U2 even though I've learned not all in Ireland are fans of them, for various reasons. I was surprised, the same as I was when I was working in Manchester many years back, loving Manchester United and hearing people mention some other team, Manchester City :-).
 
The reason I still like U2 is that they are a team. These are people in their 50s who still talk to each other :-), continue to support each other, and still have the drive and ability to create something as strong and relevant as "Songs of Innocence". I disagree that it is entirely down to the financial aspect.

It is an off-topic post but as usual a link to CXF is about to be explored :-). It is in the "The Miracle (of Joey Ramone)" text.

Some CXF users might recognize they were "pilgrims on their way" before they settled on working with CXF :-). If you read it and say, yeah, this is relevant to me, then you know where CXF is. And as U2 conclude, "your voices will be heard".

Finally, here is a link to a New Year song you won't hear in a local shopping centre starting from early September: New Year's Day from U2.  

Happy Christmas and New Year !

 

Categories: Sergey Beryozkin

No Data No Fun !

Sergey Beryozkin - Tue, 12/23/2014 - 23:17
Continuing with the theme of T-shirts, I'd like to let you know that "No Data No Fun" is a cool line printed on a T-shirt I got at a Talend R&D summit, organized at a second-to-none level back in early October. I guess having a collection of good T-shirts is one of the real perks of developers involved in open source development :-)

"No Data No Fun" is also one of the themes behind Talend's continued investment into the tooling which facilitates the interaction with Big Data ecosystems. Getting such a tooling done right is hard. I'm impressed seeing companies like Lenovo liking it.

From my point of view, I'm interested to see how the apparent gap between the world of a typical HTTP service application and that of a Big Data one can be bridged. Ultimately web applications are about exploring the data and feeding it back to the users. We've done the first baby step: we provided a FIQL-to-HBase query client that can be used to query massive amounts of data from HBase databases. JAX-RS StreamingOutput would fit in there very neatly.

However, it is also interesting to see how CXF services can be run natively in Hadoop, to save on the data delivery from HBase or another Hadoop-bound database to a query client running in the scope of the CXF server; it is much cheaper to get the data straight from Hadoop and send it back immediately. This is something I'm hoping to find some time to investigate next year. Propagating Kerberos or OAuth2 tokens into Hadoop etc. is also of interest.

I hope CXF will help you get a lot of data from Hadoop and have a lot of fun along the way :-) 

 
Categories: Sergey Beryozkin

Get into OAuth2 with Client Credentials Grant

Sergey Beryozkin - Tue, 12/23/2014 - 22:42
One of the possible barriers to OAuth2 going completely mainstream is the likely association of OAuth2 with what big social media providers do, and the assumption that OAuth2 is only suitable for their business and for the way their users interact with these providers.

In fact, OAuth2 is more embracing than that. The Client Credentials grant, one of several standard OAuth2 grants, provides an easy path for traditional clients toward starting to work with security tokens.

The client, instead of authenticating with a name and a password (or some other client credentials) against the target service endpoint on every request (and thus having to keep these secrets for a long time), does it only once, against the OAuth2 AccessTokenService, which accepts various grants and returns manageable tokens with a restricted lifetime. Such tokens can be obtained out-of-band, with the client applications initialized with the tokens. The client will use the token only when authenticating against the endpoint. It is still a secret in its own way, but it is a transient one that can be revoked by the administrator or by the client itself.
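In protocol terms the grant is just one form POST to the token endpoint. A minimal sketch with the standard JAX-RS 2.0 client API (the endpoint URL and credentials are placeholders; CXF also ships dedicated OAuth2 client helpers):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Form;
import javax.ws.rs.core.MediaType;

public class ClientCredentialsExample {
    public static void main(String[] args) {
        Form form = new Form()
            .param("grant_type", "client_credentials")
            .param("client_id", "my-client")           // placeholder
            .param("client_secret", "my-secret");      // placeholder
        Client client = ClientBuilder.newClient();
        String tokenResponse = client
            .target("https://localhost:8443/services/oauth2/token")   // placeholder AccessTokenService URL
            .request(MediaType.APPLICATION_JSON)
            .post(Entity.form(form), String.class);
        // the JSON response contains access_token, token_type and expires_in
        System.out.println(tokenResponse);
        client.close();
    }
}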

The Client Credentials grant provides an easy and fast way into the OAuth2 ecosystem. Consider experimenting with it sooner rather than waiting for another 5 years :-), discover the OAuth2 world along the way, find out how OAuth2 can positively affect your applications, and never look back again!
Categories: Sergey Beryozkin

New SSL/TLS vulnerabilities in Apache CXF

Colm O hEigeartaigh - Mon, 12/22/2014 - 13:01
Apache CXF 3.0.3 and 2.7.14 have been released. Both of these releases contain fixes for two new SSL/TLS security advisories:
  • Note on CVE-2014-3566: This is not an advisory per se, but rather a note on an advisory. CVE-2014-3566 (aka "POODLE") is a well publicised attack which forces a TLS connection to downgrade to use SSL 3.0, which in turn is vulnerable to a padding oracle attack. Apache CXF 3.0.3 and 2.7.14 disable SSL 3.0 support by default for both clients, as well as servers configured using CXF's special support for Jetty. In addition, it is now possible to explicitly exclude protocols, see here for more information.
  • CVE-2014-3577: Apache CXF is vulnerable to a possible SSL hostname verification bypass, due to a flaw in comparing the server hostname to the domain name in the Subject's DN field. A Man In The Middle attack can exploit this vulnerability by using a specially crafted Subject DN to spoof a valid certificate.
If you are using TLS with Apache CXF then please upgrade to the latest releases.
Categories: Colm O hEigeartaigh

Apache Karaf Christmas gifts: docker.io, profiles, and decanter

Jean-Baptiste Onofré - Mon, 12/15/2014 - 14:12
We are heading into Christmas time, and the Karaf team wanted to prepare some gifts for you. Of course, we are working hard on the preparation of the new Karaf releases. A bunch of bug fixes and improvements will be available in the coming releases: Karaf 2.4.1, Karaf 3.0.3, and Karaf 4.0.0.M2. Some sub-project releases […]
Categories: Jean-Baptiste Onofré

Understanding WS-Federation - Passive Requestor Profile

Jan Bernhardt - Thu, 12/11/2014 - 10:45
WS-Federation is an identity federation specification which makes it possible to set up an SSO federation including multiple security realms. A realm (sometimes also called a domain) represents a single unit under security administration or a part in a trust relationship.

Entities

Within the WS-Federation standard the following entities are defined:
  • Relying Party (RP)
    The relying party is a resource (web application or service) which consumes security tokens issued by the Security Token Service.
  • Requestor
    A requestor is a user who wants to access a resource (relying party).
  • Identity Provider (IDP)
    An Identity Provider can act as an authentication service to a requestor (in this case it is also called “Requestor IDP” or “Home-Realm IDP”) as well as an authentication service to a service provider (also called “Relying Party IDP”). If a user tries to access a relying party within his own security domain, the “Requestor IDP” and the “RP-IDP” can be the same IDP instance. An IDP can also be seen as an Web-Frontend (Extension) of an STS.
  • Security Token Service (STS)
    A Security Token Service is a web service that validates user credentials and issues security tokens which can include user attributes (also called claims). The security token can be used at the Relying Party to authenticate the requestor’s identity.
Passive Requestor Profile

The "Passive Requestor Profile" of the WS-Federation standard deals with web-browser based access to a resource like a web portal or a web application.

The following figure shows a standard scenario of a web application (Relying Party) which delegates the user authentication to an Identity Provider (IDP) according to the WS-Federation standard. This way the web application does not need to implement multiple authentication styles for a user, and it can interact with users not known within the local security domain. Another benefit of delegating the authentication process is that the IDP can retain a session with the user, so that when a user accesses another web application and is redirected to the IDP again, the IDP does not need to request user credentials again, thus providing an SSO experience for the user.


The above figure shows a sequence diagram of a user (requestor) accessing a web application with his browser. Since the user is not authenticated via a recent session, the application redirects the user to the IDP for a user login (1). The IDP collects the credentials from the user and uses a Security Token Service (STS) to validate the credentials and to get a SAML token from the STS (2). The STS itself is connected to an LDAP data store to validate the user credentials and also to retrieve additional information (claims) about the user, e.g. roles. On successful authentication (3) the IDP returns the SAML token issued by the STS (4) to the user and advises (via an auto-submitting form) the user to send this SAML token to the originally requested web application (5). The IDP takes care of providing a web user interface and handling URL redirects, whereas the STS is responsible for generating SAML tokens and validating user credentials. The web application validates the SAML token (6) and on success returns the desired web page (7).

The above sample was designed to show a simple use case scenario where the Requestor IDP is equal to the Relying Party IDP. In a more sophisticated scenario the Requestor IDP will not be equal to the Relying Party IDP. In addition, there is also a Reverse Proxy added in front of the web application, ensuring that the home realm discovery (also see section 2.3.3) works correctly. The resulting access sequence can be seen in the following figure.


The user enters the public WebApp URL in his browser which leads him to the Reverse Proxy (0). The WebApp has no recent session with the user and therefore does not know the identity of the user. Thus the WebApp redirects the user to its Relying Party IDP (1). The Reverse Proxy detects the redirect to the RP-IDP and adds a home realm parameter for the user (2). The IDP uses this home realm parameter to perform the home realm discovery (3) and thus knows to which IDP the user can be redirected for authentication (4). The WS-Federation standard does not define how the home realm discovery should be performed. Multiple options are usually available:
  • User Selection
    A list of known and trusted IDPs is shown to the user. The user selects the IDP at which he wants to be authenticated and is then redirected to that IDP.
  • IP Discovery
    The user will be redirected automatically to another IDP based on his IP address.
  • whr Parameter
    The URL to invoke the RP-IDP contains an additional 'whr' parameter to define the IDP name to which the user should be redirected for authentication. The 'whr' parameter must be known at the RP-IDP and must either be mapped to a URL or can already be the URL of another IDP. The 'whr' parameter is usually set by a Reverse Proxy or was added (by the user or a provided link) to the URL when initially calling the web application.
  • Custom Discovery
    Any custom logic can be added to the IDP to perform the home realm discovery. The standard is not limited to any predefined behaviour.
After being redirected to the user's home IDP (5), the IDP also has no recent session with the user and thus shows a login form to the user to enter his credentials (6). The user sends his username/password to the IDP, which itself creates an issue request to the STS with the received username/password embedded (7). The STS validates the user credentials using the LDAP. Upon successful authentication the STS retrieves the requested user claims (e.g. roles) from the LDAP (8) and creates a SAML token (9) targeted for the RP-IDP. The Requestor IDP embeds this SAML token inside an auto-submitting (JavaScript) web form (10) which is then posted to the RP-IDP (11a). The RP-IDP is now able to use this SAML token to authenticate on behalf of the user against the RP-STS (11b) to request a SAML token for the previously requested web application. The RP-STS needs to perform an identity or claim mapping (12) to issue a second SAML token, this time applicable to the requested web application (13). The RP-IDP puts this application-specific SAML token again into a self-submitting HTTP form (14) which is then automatically submitted to the web application via the reverse proxy (15). The Relying Party (the web application) validates the received SAML token by verifying that the issuer certificate of the SAML token is trusted. This should be the case, since the SAML token was issued by its own Relying Party IDP. Additional claims like the user roles can be added to the security context of the web application, thus allowing authorization in addition to authentication.
Categories: Jan Bernhardt

XML Security using Apache Camel

Colm O hEigeartaigh - Tue, 12/02/2014 - 15:48
I have previously covered how to use Apache Santuario to sign and encrypt XML, using both the DOM and StAX based APIs available in the 2.0.x releases. An alternative to using Apache Santuario directly to sign/encrypt XML, is to use the XML Security component or data format of Apache Camel. There are two obvious reasons to use Camel that immediately spring to mind. Firstly it allows you to configure XML Signature/Encryption without writing any code (e.g. by configuring the components in Spring). Secondly it allows you to take advantage of the power and flexibility of Apache Camel to integrate with a wide variety of components.

I have created a github project with two (almost identical) tests to show how to use XML Signature and Encryption in Apache Camel:
Both tests start routes which read in XML documents stored in src/test/resources/data using the Camel File component. The part of the documents which contain credit card information is then signed/encrypted, and the resulting file placed in the target/(encrypted/signed)-data folder. A second route reads files in from this folder, decrypts/verifies the file and then places it in the target/(decrypted/verified)-data folder.

The encryption configuration file is available here, and the signature configuration file is here. One difference you may notice is that encryption is configured using a "marshal/unmarshal" tag and then "secureXML", whereas for signature you can use a standard Camel "to" statement, e.g. <to uri="xmlsecurity:sign://enveloped?keyAccessor...">. This is due to the fact that XML Encryption is implemented in Camel as a data format, whereas XML Signature is implemented as a component.

Both tests also use the Camel Jasypt component to avoid hard-coding plaintext passwords in the spring configuration files. The keystore and private key passwords and stored encrypted in a special passwords file. The master secret used to decrypt the passwords is retrieved via a system property (set in the pom.xml as part of the tests).

The testcase relies on a SNAPSHOT version of Apache Camel for now (2.15-SNAPSHOT) due to a number of fixes I added. Firstly, the DefaultKeySelector used to retrieve keys for signature did not previously support taking a Camel
keyStoreParameters Object. Secondly, the DefaultKeySelector did not support working with the Camel Jasypt component to encrypt the keystore password.  Thirdly, it wasn't possible to load a Public Key from a PrivateKeyEntry in a Keystore for XML Signature. Fourthly, the XML Encryption data format did not support embedding the KeyValue of the Public Key used to encrypt the session key in the EncryptedKey structure.
Categories: Colm O hEigeartaigh

Observations about ApacheCon EU 2014

Sergey Beryozkin - Mon, 11/24/2014 - 00:03
You may be thinking now, after reading my previous post, that all I was doing at ApacheCon EU 2014 was looking at the T-shirts people were wearing :-). This post is an attempt to convince you that was not the case.

First of all, ApacheCon EU 2014, as is usually the case with Apache conferences, was a great opportunity to meet fellow open source developers.
Chatting to the guys I work with at Apache CXF and other projects, and sharing a joke or two along the way :-), was really great.

Some people there are great advocates of doing software for the good of the world. You do see people there who spend their own free time to make Apache and the various projects it hosts succeed and to help others.

It was nice to see Talend, my employer, being mentioned as one of the Apache sponsors. Even though Apache has great sponsors which contribute much more, it was good to see Talend being recognized. Every contribution counts. The companies involved in open source have a positive vibe about them; the more they are involved, the more recognized and respected in the community at large they become. The world is a small place. Customers would be positive about working with such companies and doing business with them, as this post from a while back suggested.



Those of us who did presentations about CXF were lucky to do them on the very first day in a beautiful Corinthia Hotel ballroom. I kept thinking: there were times when people were dancing there, accompanied by the music of Franz Liszt, and here we are talking about cryptic CXF things. Times change. But the beauty of the room is still there today.

The other thing I noticed was the visibility of Hortonworks. They had a strong team presenting a number of interesting talks. To be fair to them, their T-shirts are also not bad at all :-); maybe they should have some sort of competition with Tomitribe.

Overall, it was a well organized, great event! I'm feeling positive and energized after attending it.


Categories: Sergey Beryozkin

[OT] The best T-shirt I've seen at Apache Con EU 2014

Sergey Beryozkin - Sun, 11/23/2014 - 23:07
This is the first post about ApacheCon EU 2014, held in beautiful Budapest, which I've been lucky to attend.

One of the nice things about being an ApacheCon visitor is that one can see lots of cool T-shirts: the official T-shirts (I do treasure them) and other T-shirts with great lines or digits printed on them. T-shirts that many software geeks would be happy to wear. And indeed the visitors at ApacheCon EU 2014 had a lot of different T-shirts to show.

It was at the presentation about TomEE that I realized that, while the rest of the room was glued to the presentation screen and impressed by what TomEE could do, I was looking at the T-shirts of the TomEE experts doing the presentation and thinking how unfair it was that I did not have a T-shirt like that too.

You can see Romain wearing it here.

Tomitribe, the company which did it right once again :-) !






Categories: Sergey Beryozkin
