Latest Activity

SAML SSO RP Metadata support in Apache CXF

Colm O hEigeartaigh - Thu, 05/28/2015 - 17:28
Apache CXF provides comprehensive support for SSO using the SAML Web SSO profile for CXF-based JAX-RS services. In Apache CXF 3.1.0 (and 3.0.5), a new Metadata service is available to allow for the publishing of SAML SSO Metadata for a given service.

The MetadataService class is available on a "metadata" path and provides a single @GET method that returns the service metadata in XML format. It has the following properties which should be configured:
  • String serviceAddress - The URL of the service
  • String assertionConsumerServiceAddress - The URL of the RACS (Request Assertion Consumer Service). If it is co-located with the service, then it can be the same URL as the serviceAddress.
  • String logoutServiceAddress - The URL of the logout service (if available).
  • boolean addEndpointAddressToContext - Whether to add the full endpoint address to the values configured above. The default is false.
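As an illustration, the service can also be wired up programmatically; a minimal sketch (the package name and setter names are assumed to mirror the class and property names above, and the addresses are placeholders):

import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.rs.security.saml.sso.MetadataService;

public class MetadataServer {
    public static void main(String[] args) {
        MetadataService metadataService = new MetadataService();
        metadataService.setServiceAddress("https://localhost:8443/sso");
        // The RACS is co-located with the service in this sketch
        metadataService.setAssertionConsumerServiceAddress("https://localhost:8443/sso");
        // The signing properties inherited from AbstractSSOSpHandler are omitted here

        JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean();
        sf.setServiceBean(metadataService);
        sf.setAddress("https://localhost:8443/sso");
        sf.create();
    }
}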
In addition, the MetadataService extends the AbstractSSOSpHandler, which contains various properties that are required to sign the metadata (keystore alias, crypto properties file which references the keystore, etc.). A sample spring-based configuration for the MetadataService is available in the CXF system tests here. Here is the sample output when accessed via a web browser:


Categories: Colm O hEigeartaigh

Apache CXF 3.1.0 released

Colm O hEigeartaigh - Tue, 05/26/2015 - 16:04
Apache CXF 3.1.0 has been released and is available for download. The migration guide for CXF 3.1.x is available here. The main (non-security) features of CXF 3.1.0 are as follows:
  • Java 6 is no longer supported.
  • Jetty 9 is now supported. Support for Jetty 7 has been dropped.
  • A new Metrics feature for collecting metrics about CXF services is available. 
  • A new Throttling feature is available for easily throttling CXF services.
  • A new Logging feature is available that is more powerful than the existing logging functionality.
The security-specific changes and features are as follows:
  • CXF 3.1.0 picks up a new major release of WSS4J (2.1.0) and OpenSAML (3.1.0). Please see a recent post on WSS4J 2.1.0 for some migration notes if you are using WS-Security or SAML with CXF.
  • The STS now signs issued SAML tokens by default using RSA-SHA256 (previously RSA-SHA1).
  • Some security configuration tags have been renamed from "ws-security.*" to "security.*", as they are now shared with the XML Security JAX-RS module. The old tags will, however, continue to work as before without any change. See the Security Configuration page for more information.
  • The SAML/XACML functionality previously available in the cxf-rt-security module is now in a new cxf-rt-security-saml module.
  • A new Metadata service for SAML SSO is available. More on this in a future blog post.
  • It is now possible to "plug in" custom security policy validators for WS-Security in CXF, if you want to change the default validation logic. See here for a test that shows how to do this.
Categories: Colm O hEigeartaigh

Apache WSS4J 2.0.4 released

Colm O hEigeartaigh - Fri, 05/15/2015 - 16:24
In addition to the new major release of Apache WSS4J (2.1.0), there is a new bug fix release available - Apache WSS4J 2.0.4. Here are the most important bugs that were fixed in this release:
  • We now support the InclusiveC14N policy.
  • We can enforce that a Timestamp has an "Expires" Element via configuration, if desired.
  • There is a slight modification to how we cache signed Timestamps, to allow for the scenario of two Signatures in a security header that sign the same Timestamp, but with different keys.
  • The policy layer now allows a SupportingToken policy to have more than one token.
  • A bug has been fixed in the MerlinDevice crypto provider, which is designed to work with smartcards.
  • A bug has been fixed in terms of using the correct JCE provider for encryption/decryption.
Categories: Colm O hEigeartaigh

Apache WSS4J 2.1.0 released

Colm O hEigeartaigh - Fri, 05/15/2015 - 16:23
A new major release of Apache WSS4J, 2.1.0, has been released. The previous major release of almost a year ago, Apache WSS4J 2.0.0, had a lot of substantial changes (see the migration guide), such as a new set of maven modules, a new streaming implementation, changes to configuration tags, package changes for CallbackHandlers, etc. In contrast, WSS4J 2.1.0 has a much smaller set of changes, and users should be able to upgrade with very few changes (if at all). This post briefly covers the new features and migration issues for WSS4J 2.1.0.

Here are the new features of WSS4J 2.1.0:
  • JDK7 minimum requirement: WSS4J 2.1.0 requires at least JDK7. The project source has also been updated to make use of some of the new features of JDK7, such as try-with-resources, etc.
  • OpenSAML 3.x support: WSS4J 2.1.0 upgrades from OpenSAML 2.x to 3.x (currently 3.1.0). This is an important upgrade, as OpenSAML 2.x is no longer supported.
  • Extensive code refactoring. A lot of work was done to make the retrieval of security results easier and faster.

Here are the migration issues in WSS4J 2.1.0:
  • Due to the OpenSAML 3.x upgrade, you will need to make a small change in your SAML CallbackHandlers if you are specifying the "Version". In WSS4J 2.0.x, you could specify the SAML Version by passing an "org.opensaml.common.SAMLVersion" instance through to the SAMLCallback.setSamlVersion(...) method. The "org.opensaml.common" package is removed in OpenSAML 3.x. Instead, a new Version bean is provided in WSS4J 2.1.0 that can be passed to the setSamlVersion method on SAMLCallback as before (see the sketch after this list). See here for an example.
  • The Xerces and xml-apis dependencies in the DOM code of Apache WSS4J 2.1.0 have been removed (previously they were at "provided" scope).
  • If you have a custom Processor instance to process a token in the security header in some custom way, you must add the WSSecurityEngineResult that is generated by the processing, to the WSDocInfo Object via the "addResult" method. Otherwise, it will not be available when security results are retrieved and processed.
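To make the first migration point concrete, here is a minimal sketch of the relevant part of a migrated SAML CallbackHandler (assuming the new Version bean lives in the org.apache.wss4j.common.saml.bean package, alongside the other SAML beans):

import java.io.IOException;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

import org.apache.wss4j.common.saml.SAMLCallback;
import org.apache.wss4j.common.saml.bean.Version;

public class SamlCallbackHandler implements CallbackHandler {
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof SAMLCallback) {
                SAMLCallback samlCallback = (SAMLCallback) callback;
                // WSS4J 2.0.x: samlCallback.setSamlVersion(SAMLVersion.VERSION_20);
                samlCallback.setSamlVersion(Version.SAML_20);
                // ... populate issuer, subject and statements as before
            }
        }
    }
}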


Categories: Colm O hEigeartaigh

The Rise Of Apache Tika

Sergey Beryozkin - Thu, 05/14/2015 - 22:51
Apache Tika is an interesting project. It is not a very big one, but IMHO it is poised to become the project that every team serious about processing complex, unstructured, binary content will talk about and use.

The power of Apache Tika lies in the simplicity it offers for processing different types of binary and other complex data. Consider a simple situation: your project needs to support analyzing PDF files. One approach is to write a PDF-library-specific routine. This approach stops scaling as soon as you need to support Excel and ODT files too, and stops working once you are asked to support a possibly unlimited number of data types.

Apache Tika helps with generalizing the processing of arbitrary types of data, and thus offers a given project a unique opportunity to provide a real value add-on.
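To illustrate that simplicity, here is a minimal sketch using Tika's facade API; the same few lines handle PDF, Excel, ODT and hundreds of other formats:

import java.io.File;

import org.apache.tika.Tika;

public class TextExtractor {
    public static void main(String[] args) throws Exception {
        Tika tika = new Tika();
        for (String path : args) {
            File file = new File(path);
            // Detect the media type, e.g. application/pdf
            System.out.println(tika.detect(file));
            // Extract the plain text, whatever the underlying format is
            System.out.println(tika.parseToString(file));
        }
    }
}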

I really liked this presentation at the recent Apache Con NA. It was absolutely packed with interesting content, and Chris talked a lot about applying Tika to solving real-life problems. Andriy Redko did a brilliant talk about the CXF and Tika integration. There were more Tika presentations, and I regret I could not make it to all of them.

The future is bright for Tika. And for the projects that will use it :-)
Categories: Sergey Beryozkin

OpenID Connect Certification Strategy

Sergey Beryozkin - Fri, 05/01/2015 - 12:30
I've just read about the OpenID Connect Certification open strategy. IMHO it is brilliant, and it will no doubt guarantee wider adoption of OIDC. Mike Jones's explanation of why it will work is a good read.

The closed (paid-only) certification model limits the adoption of a given technology by implementors.
Categories: Sergey Beryozkin

[OT] Apache CXF is Electric !

Sergey Beryozkin - Thu, 04/30/2015 - 23:30
I remember this day as if it were yesterday. April or March of 1998: I'm in England, in Stockport city centre, listening to Oasis's latest single. The energizing effect of it was absolutely great.

As it happens, I didn't listen to Oasis for the next 17 years, apart from hearing them occasionally on local FM. But a month or so back, I finally got their disc.

"She is Electric" is one of the best songs, classical Oasis. Nearly every time I listen to it I think, well, one can definitely say "Apache CXF is Electric". Why ? Because Apache CXF is cool, active and alive !  Work with it and you will become Electric too :-)   
Categories: Sergey Beryozkin

Apache Santuario - XML Security for Java 2.0.4 released

Colm O hEigeartaigh - Wed, 04/22/2015 - 15:09
Apache Santuario - XML Security for Java 2.0.4 has been released. The issues fixed are available here. Perhaps the most significant issue fixed is an interop issue which emerged when XML Security is used with OpenSAML (see the Apache CXF JIRA where this was raised).
Categories: Colm O hEigeartaigh

Vulnerability testing of Apache CXF based web services

Colm O hEigeartaigh - Mon, 04/13/2015 - 16:21
A number of automated tools can be used to conduct vulnerability or penetration testing of web services. In this article, we will take a look at using WS-Attacker to attack Apache CXF based web service endpoints. WS-Attacker is a useful tool based on SOAP-UI and developed by the Chair of Network and Data Security, Ruhr University Bochum (http://nds.rub.de/) and 3curity GmbH (http://3curity.de/). As an indication of how useful this tool is, it has uncovered a SOAP Action Spoofing vulnerability in older versions of CXF (see here). In this testing scenario, WS-Attacker 1.4-SNAPSHOT was used to test web services using Apache CXF 3.0.4. Apache CXF 3.0.4 is immune to all of the attacks described in this article (as can be seen from the "green" status in the screenshots).

1) SOAPAction Spoofing attacks

A SOAPAction spoofing attack is where the attacker attempts to fool the service by "spoofing" the SOAPAction header to execute another operation. To test this I created a CXF based SOAP 1.1 endpoint which uses the SOAPAction "http://doubleit/DoubleIt". I then loaded the WSDL into WS-Attacker, selected the SOAPAction spoofing plugin, and chose a manual SOAP Action. Here is the result:


2) WS-Addressing Spoofing attacks

A WS-Addressing Spoofing attack involves sending an address in the WS-Addressing ReplyTo/To/FaultTo header that is not understood or known by the service, but to which the service redirects the message anyway. To guard against this attack in Apache CXF, you must ensure that the WS-Addressing headers are signed. As a sanity test, I added an endpoint where WS-Addressing is not configured, and the tests fail as expected:



3) XML Signature wrapping attacks

XML Signature allows you to sign various parts of a SOAP request, where the parts in question are referenced via an "Id". An attacker can leverage various "wrapping" based attacks to try to fool the resolution of a signed Element via its "Id". So for example, an attacker could modify the signed SOAP Body of a valid request (thus causing signature validation to fail), and put the original signed SOAP Body somewhere else in the request. If the signature validation code only picks up the Element that has been moved, then signature validation will pass even though the SOAP Body has been modified.

For this test I created a number of endpoints that are secured using WS-SecurityPolicy. I captured a successful signed request from a unit test and loaded it into WS-Attacker, and ran the Signature wrapping attack plugin. Here is the result:


4) Denial of Service attacks

A Denial of Service attack is where an attacker attempts to crash or dramatically slow down a service by flooding it with data, or crafting a message in such a way as to cause parsing to crash or hang. An example might be to create an XML structure with a huge amount of Attributes or Elements, that would cause an out of memory error. WS-Attacker comes with a range of different attacks in this area. Here are the results of all of the DoS attacks against a plain CXF SOAP service (apart from the SOAP Array attack, as the service doesn't accept a SOAP array):


Categories: Colm O hEigeartaigh

Talking about CXF at Apache Con NA 2015

Sergey Beryozkin - Fri, 03/13/2015 - 18:30
Apache Con NA 2015 will be held in Austin, Texas on April 13-16, and as is usually the case there will be several presentations about Apache CXF. There will be interesting presentations from Hadrian and JB, and many other great presentations as usual.

As far as CXF presentations are concerned:

Aki Yoshida will talk about combining Swagger (Web) Sockets, Apache Olingo and the CXF Web Sockets Transport - now, this is seriously cool :-) The good news is that the presentations will be available online for those who are not able to see them live.

Andriy Redko will talk about something which equally rocks: combining a CXF Search Extension (FIQL or OData/Olingo based), Apache Tika and Lucene to show effective search support for uploaded PDF and Open Office documents.

Attending both presentations can get anyone over-excited, that is for sure :-).
It is going to be tough choosing which presentation to go to, with my colleagues presenting on the same day.


Finally, I will introduce the Apache CXF JOSE implementation, which I briefly covered in the previous blog. I'll describe everything the CXF JOSE project has in place, and finish with a demo.

The demo deserves special attention: I haven't written this demo, Anders Rundgren did. The original demo is here. It appears to be a regular JavaScript-based demo, but it is bigger than that: it shows what WebCrypto can do, supporting generic browser-based signature applications and interoperating with target servers in a variety of formats, JOSE being one of them. So the demo will show a WebCrypto client interoperating with an Apache CXF JOSE server.


Anders has been incredibly helpful and supportive, and helped me get his demo running in no time. Anders is working on a JSON Clear Signature (JCS) initiative that offers XML Signature-like support for signing JSON documents. JCS is easier to understand than the JOSE formats, where Base64URL content representations are used. I'd like to encourage interested users to experiment with JCS and help Anders. Hopefully something similar to JCS will be supported as part of a wider JOSE effort in the future.

I'm happy, as usual, that I've got a talk selected and that I have my employer's support to travel to Apache Con. It is always great to talk to colleagues who work with CXF and other Apache technologies; it is important to show others that CXF is very much alive and looking forward. I regret I won't see some of my team colleagues there who didn't have a chance to submit for various important reasons, but overall I'm looking forward to the conference with great anticipation. Especially because I promised someone to beat him in chess after the presentations are over :-).

See you there !
Categories: Sergey Beryozkin

Apache CXF is getting JOSE ready

Sergey Beryozkin - Fri, 03/13/2015 - 17:42
I've already talked about JOSE on this blog. In my opinion, it is one of the key technologies, alongside OAuth2, that will deeply affect the way developers write secure HTTP REST services in the years to come.

A one sentence summary: one can use JOSE to sign and/or encrypt content in any format - JSON, text, binary, anything. JOSE is a key component of advanced OAuth2 applications, but it is also a good fit for securing regular HTTP web service communications.

As such, it should not be a surprise that CXF now ships its own JOSE implementation, offering support for all of the JOSE signature and encryption algorithms and representation formats, and joins the list of other frameworks/projects directly supporting JOSE.

I've written some initial documentation here. There's so much to document that I will probably need another week to complete it all - lots of interesting stuff for developers to experiment with that needs to be documented. I think the implementation is unique in its own way, while probably repeating some of the boilerplate code that any JOSE implementation needs to have.

Apart from being keen to work on such an implementation directly, IMHO it is also good to have one in CXF, given how important this technology will become for web services developers in the future. It is always healthy to have multiple implementations, as the JAX-RS space has demonstrated. And if CXF users prefer to use other JOSE implementations, that will be fine.

One such 3rd party implementation is Jose4J. I'd like to thank Brian Campbell for creating it - it helped me to keep my sanity when I first started trying to write a test validating RSA-OAEP output, which is random. I also looked at its source recently, when I was puzzled as to why my tests involving EC keys produced wrong-size signatures even though the validation was passing - a comment in Jose4J made a rather cryptic piece of JOSE spec text obvious: JOSE EC signatures use a format more compact than DER. I still wrote my own code though :-), which one might say is questionable, but there you go. Thanks Brian. I think we can plug Jose4J into the CXF JOSE filters easily enough, should users demand it.
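As a taste of what JOSE signing looks like in code, here is a minimal standalone Jose4J sketch (not CXF-specific) that signs a small payload and prints the compact serialization:

import org.jose4j.jwk.RsaJsonWebKey;
import org.jose4j.jwk.RsaJwkGenerator;
import org.jose4j.jws.AlgorithmIdentifiers;
import org.jose4j.jws.JsonWebSignature;

public class JwsExample {
    public static void main(String[] args) throws Exception {
        // Generate a throwaway RSA key for the demo
        RsaJsonWebKey jwk = RsaJwkGenerator.generateJwk(2048);

        JsonWebSignature jws = new JsonWebSignature();
        jws.setPayload("{\"hello\":\"jose\"}");
        jws.setAlgorithmHeaderValue(AlgorithmIdentifiers.RSA_USING_SHA256);
        jws.setKey(jwk.getPrivateKey());

        // Prints the three Base64URL segments: header.payload.signature
        System.out.println(jws.getCompactSerialization());
    }
}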

The CXF JOSE project is not completely finalized, but I think it is getting pretty close to the final API. I'd like to encourage early adopters to give it a go and provide feedback. In the meantime I'll be working on completing the documentation and tweaking the code to enforce some of the security considerations documented in the JOSE specifications, etc.

Enjoy !
Categories: Sergey Beryozkin

Camel CXFRS Improvements

Sergey Beryozkin - Wed, 03/11/2015 - 18:51
Camel CXFRS is one of the oldest Camel components. It was created by Willem Jiang, my former colleague from the IONA Technologies days, and has been maintained by Willem since its early days.

Camel is known to be a very democratic project with respect to supporting all sorts of components, and it has many components that can deal with HTTP invocations. CXFRS is indeed just one of them, but as you can guess from its name it is dedicated to supporting HTTP endpoints and clients written on top of the Apache CXF JAX-RS implementation.

I think that over the years CXFRS has had a somewhat mixed reception from the community, maybe because it was not deemed ideal for some styles of routing that other, lighter HTTP-aware Camel components were good at.

However, CXFRS has been used by some developers, and it has been significantly improved recently with respect to its usability. I'd like to touch on the latest few updates, which may be of interest.

The main CXFRS feature which appears quite confusing initially is that a CXFRS endpoint (Camel Consumer) does not actually invoke the provided JAX-RS implementation. This appears rather strange, but it is exactly what helps to integrate CXF JAX-RS into Camel: the JAX-RS runtime is only used to prepare all the data according to the JAX-RS service method signatures, not to invoke the actual service, and all that data is made available to custom Camel processors, which extract it from Camel exchanges and make the next routing decisions.

The side-effect is that in some cases one cannot just take an existing JAX-RS service implementation and plug it into a Camel route, unless one uses the CXFRS Bean component, which can route from Jetty endpoints to a CXF JAX-RS service implementation. That approach works, but it requires another Camel (Jetty only) component with an absolute HTTP address and has a few limitations of its own.

So the first improvement is that, starting from Camel 2.15.0, one can configure a CXFRS consumer with a 'performInvocation=true' option: the consumer will actually invoke the service implementation, set the JAX-RS response on the Camel exchange, and route to the next custom processor as usual, except that in this case the custom processor will have all the input parameters as before plus a ready response - processors can now customize the response or do whatever else they need to do. This also makes it much simpler to convert existing CXF Spring/Blueprint JAX-RS declarations with service implementations into Camel CXFRS endpoints if needed.
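A minimal route sketch showing the new option (the endpoint address and resource class are illustrative):

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class CustomerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // With performInvocation=true the consumer invokes the service
        // implementation and sets the JAX-RS response on the exchange
        from("cxfrs://http://localhost:9000/rest"
                + "?resourceClasses=com.example.CustomerServiceImpl"
                + "&performInvocation=true")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    // The response is already prepared; customize it here if needed
                    Object response = exchange.getIn().getBody();
                }
            });
    }
}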

Note that in the default case one typically provides a no-op CXFRS service implementation (recall, CXFRS does not invoke the service by default; it only needs the method signatures/JAX-RS metadata). Given that the invocation is not done by default, providing only interfaces is more logical. So the other minor improvement is that, starting from Camel 2.15.0, one can just prepare a JAX-RS interface and use it with a CXFRS consumer, unless the new 'performInvocation' option is set, in which case a complete implementation is needed. (A URI-only CXFRS consumer style is also possible, but it is rather limited in what it can do.)

The next one is the new "propagateContexts" configuration option. It allows CXFRS developers to write their custom processors against the JAX-RS Context API: they can extract one of the JAX-RS contexts such as UriInfo, SecurityContext or HttpHeaders as a typed Camel exchange property and work with these contexts to figure out what needs to be done next. This should be a useful option, as the JAX-RS Context API is very convenient.
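And a sketch of a processor working against the JAX-RS Context API; note that the exchange property key used here (the context class name) is an assumption for illustration:

import javax.ws.rs.core.UriInfo;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class ContextAwareProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        // Extract a typed JAX-RS context from the exchange (key is assumed)
        UriInfo uriInfo = exchange.getProperty(UriInfo.class.getName(), UriInfo.class);
        if (uriInfo != null) {
            // Use the context to decide what to do next
            exchange.getIn().setHeader("requestPath", uriInfo.getPath());
        }
    }
}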

Finally, the CXF No Annotations feature is now supported too: CXFRS users can link to a CXF model document and use it to JAX-RS enable a given Java interface without JAX-RS annotations. In fact, starting from Camel 2.15.0, it is sufficient to have a model-only CXFRS consumer without a specific JAX-RS service interface or implementation - in this case custom processors get the same request data as usual, with the model serving as the source binding the request URI to a set of request parameters.

We hope to build upon this latest feature going forward, with other description formats supported, to make the model-only CXFRS consumer more capable.

Enjoy !
Categories: Sergey Beryozkin

Apache Karaf Tutorial Part 9 - Annotation based blueprint and JPA

Christian Schneider - Fri, 03/06/2015 - 09:19

Blog post edited by Christian Schneider

Writing blueprint xml is quite verbose, and large blueprint xmls are difficult to keep in sync with code changes and especially refactorings. So many people prefer to do most declarations using annotations. Ideally these annotations should be standardized, so it is clearly defined what they do.

blueprint-maven-plugin

The aries blueprint-maven-plugin allows you to configure blueprint using annotations. It scans one or more paths for annotated classes and creates a blueprint.xml in target/generated-resources. See the aries documentation of the blueprint-maven-plugin.

Example tasklist-blueprint-cdi

This example shows how to create a small application with a model, persistence layer and UI completely without handwritten blueprint xml.

You can find the full source code on github Karaf-Tutorial/tasklist-cdi-blueprint

Structure
  • features
  • model
  • persistence
  • ui
Features

Defines the karaf features to install the example as well as all necessary dependencies.

Model

The model project defines Task as a jpa entity and the service TaskService as an interface. As the model does not do any dependency injection, the blueprint-maven-plugin is not involved here.

Task JPA Entity

@Entity
public class Task {
    @Id
    Integer id;
    String title;
    String description;
    Date dueDate;
    boolean finished;
    // Getters and setters omitted
}

TaskService (CRUD operations for Tasks)

public interface TaskService {
    Task getTask(Integer id);
    void addTask(Task task);
    void updateTask(Task task);
    void deleteTask(Integer id);
    Collection<Task> getTasks();
}

persistence.xml

<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="tasklist" transaction-type="JTA">
        <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
        <jta-data-source>osgi:service/tasklist</jta-data-source>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
        </properties>
    </persistence-unit>
</persistence>

Persistence.xml defines the persistence unit name as "tasklist" and specifies JTA transactions. The jta-data-source points to the jndi name of the DataSource service named "tasklist". So apart from the JTA DataSource name it is a normal hibernate 4.3 style persistence definition with automatic schema creation.

One other important thing is the configuration for the maven-bundle-plugin.

Configurations for maven bundle plugin

<Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
<Import-Package>*, org.hibernate.proxy, javassist.util.proxy</Import-Package>

The Meta-Persistence header points to the persistence.xml and is the trigger for aries jpa to create an EntityManagerFactory for this bundle.
The Import-Package configuration imports two packages that are needed by the runtime enhancement done by hibernate. As this enhancement is not known at compile time, we need to give the maven bundle plugin these hints.

Persistence

The tasklist-cdi-persistence bundle is the first module in the example to use the blueprint-maven-plugin. In the pom we set the scanpath to "net.lr.tasklist.persistence.impl". So all classes in this package and sub packages are scanned.

In the pom we need a special configuration for the maven bundle plugin:
<Import-Package>!javax.transaction, *, javax.transaction;version="[1.1,2)"</Import-Package>
In the dependencies we use the transaction API 1.2, as it is the first spec version to include the @Transactional annotation. At runtime, though, we do not need this annotation, and karaf only provides the transaction API version 1.1. So we tweak the import to accept the version karaf offers. As soon as the transaction API 1.2 is available for karaf, this line will no longer be necessary.

TaskServiceImpl

@OsgiServiceProvider(classes = {TaskService.class})
@Singleton
@Transactional
public class TaskServiceImpl implements TaskService {
    @PersistenceContext(unitName="tasklist")
    EntityManager em;

    @Override
    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }

    @Override
    public void addTask(Task task) {
        em.persist(task);
        em.flush();
    }

    // Other methods omitted
}

TaskServiceImpl uses quite a lot of the annotations. The class is marked as a blueprint bean using @Singleton. It is also marked to be exported as an OSGi Service with the interface TaskService.

The class is marked as @Transactional, so all methods are executed in a jta transaction of type Required. This means that if there is no transaction it will be created, and if there is a transaction the method will take part in it. At the end of the transaction boundary the transaction is either committed or, in case of an exception, rolled back.

A managed EntityManager for the persistence unit "tasklist" is injected into the field em. It transparently provides one EntityManager per thread, which is created on demand and closed at the end of the transaction boundary.

InitHelper

@Singleton
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);

    @Inject
    TaskService taskService;

    @PostConstruct
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }
}

The class InitHelper is not strictly necessary. It simply creates and persists a first task so the UI has something to show. Again, the @Singleton is necessary to mark the class for creation as a blueprint bean.
@Inject TaskService taskService injects the first bean of type TaskService found in the blueprint context into the field taskService. In our case this is the implementation above.
@PostConstruct makes sure that addDemoTasks() is called after all fields of this bean have been injected.

Another interesting thing in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory without a jndi DataSource, which would be difficult to supply. It also uses RESOURCE_LOCAL transactions, so we do not need to set up a transaction manager. The test injects a plain EntityManager into the TaskServiceImpl class, so we have to manually begin and commit the transaction. This shows that you can test the JPA code with plain java, which results in very simple and fast tests.
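A sketch of what such a standalone test looks like (the test persistence unit name "tasklist" is an assumption here; the Task constructor matches the one used in InitHelper above):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class TaskServiceImplTest {

    public void testPersistence() {
        // Uses the test persistence.xml with RESOURCE_LOCAL transactions
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("tasklist");
        EntityManager em = emf.createEntityManager();

        TaskServiceImpl taskService = new TaskServiceImpl();
        taskService.em = em; // inject the plain EntityManager by hand (same package assumed)

        em.getTransaction().begin(); // manual transaction handling
        taskService.addTask(new Task(1, "Just a sample task", "Some more info"));
        em.getTransaction().commit();

        System.out.println(taskService.getTask(1).getTitle());
        em.close();
        emf.close();
    }
}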

UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over http.
In the pom this module needs the blueprint-maven-plugin with a suitable scanPath.

TasklistServlet

@OsgiServiceProvider(classes={Servlet.class})
@Properties({@Property(name="alias", value="/tasklist")})
@Singleton
public class TaskListServlet extends HttpServlet {
    @Inject
    @OsgiService
    TaskService taskService;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        // Actual code omitted
    }
}

The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist". So it is available on the url http://localhost:8181/tasklist.

@Inject @OsgiService TaskService taskService creates a blueprint reference element to import an OSGi service with the interface TaskService, and injects this service into the taskService field of the above class.
If there are several services with this interface, the filter property can be used to select one of them.

Build

mvn clean install

Installation and test

See Readme.txt on github.

Categories: Christian Schneider

Apache Karaf Tutorial Part 6 - Database Access

Christian Schneider - Tue, 03/03/2015 - 23:06

Blog post edited by Christian Schneider - "Corrections"

Shows how to access databases from OSGi applications running in Karaf, and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally, JDBC and JPA examples show how to use such a DataSource from user code.

Prerequisites

You need an installation of apache karaf 3.0.3 for this tutorial.

Example sources

The example projects are on github Karaf-Tutorial/db.

Drivers and DataSources

In plain java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work, as the ClassLoader of your bundle has no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver, and to publish it as an OSGi service. The user bundle should then only use the DataSource, without knowing the driver specifics. This is quite similar to the best practice in application servers, where the DataSource is managed by the server and published to jndi.

So we need to learn how to create and use DataSources first.

The DataSourceFactory services

To make it easier to create DataSources in OSGi, the specs define a DataSourceFactory interface. It allows you to create a DataSource for a specific driver from properties. Each database driver is expected to implement this interface and publish it with properties for the driver class name and the driver name.
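As a sketch, this is what creating a DataSource from such a service looks like (the DataSourceFactory interface and the property constants are defined by the OSGi JDBC spec; how the service reference is obtained is omitted here):

import java.sql.SQLException;
import java.util.Properties;

import javax.sql.DataSource;

import org.osgi.service.jdbc.DataSourceFactory;

public class DataSourceCreator {
    public DataSource create(DataSourceFactory dsf) throws SQLException {
        Properties props = new Properties();
        // Only generic JDBC properties; no driver class names in user code
        props.setProperty(DataSourceFactory.JDBC_URL, "jdbc:h2:mem:person");
        return dsf.createDataSource(props);
    }
}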

Introducing pax-jdbc

The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

  • Implement the DataSourceFactory service for Databases that do not create this service directly
  • Implement a pooling and XA wrapper for XADataSources (This is explained at the pax jdbc docs)
  • Provide a facility to create DataSource services from config admin configurations
  • Provide karaf features for many databases as well as for the above additional functionality

So it covers everything you need from driver installation to creation of production quality DataSources.

Installing the driver

The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

For several databases pax-jdbc already provides karaf features to install a current version of the database driver.

For H2 the following commands will work

feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.5.0/xml/features
feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
service:list DataSourceFactory

Strictly speaking we would only need the pax-jdbc-h2 feature but we will need the others for the next steps.

This will install the pax-jdbc feature repository and the h2 database driver. This driver already implements the DataSourceFactory so the last command will display this service.

DataSourceFactory

[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
osgi.jdbc.driver.class = org.h2.Driver
osgi.jdbc.driver.name = H2
osgi.jdbc.driver.version = 1.3.172
service.id = 691
Provided by : H2 Database Engine (68)

The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

pooled and XA DataSourceFactory

[org.osgi.service.jdbc.DataSourceFactory]
-----------------------------------------
osgi.jdbc.driver.class = org.h2.Driver
osgi.jdbc.driver.name = H2-pool-xa
osgi.jdbc.driver.version = 1.3.172
pooled = true
service.id = 694
xa = true
Provided by : OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

Technically this DataSourceFactory also creates DataSource objects but internally they manage XA support and pooling. So we want to use this one for our later code examples.

Creating the DataSource

Now we just need to create a configuration with the correct factory pid to create a DataSource as a service

So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content

config for DataSource

osgi.jdbc.driver.name=H2-pool-xa
url=jdbc:h2:mem:person
dataSourceName=person

The config will automatically trigger the pax-jdbc-config module to create a DataSource.

  • The name osgi.jdbc.driver.name=H2-pool-xa will select the H2 DataSourceFactory with pooling and XA support that we previously installed.
  • The url configures H2 to create a simple in-memory database named person.
  • The dataSourceName will be reflected in a service property of the DataSource so we can find it later
  • You could also set pooling configurations in this config but we leave it at the defaults

DataSource

karaf@root()> service:list DataSource
[javax.sql.DataSource]
----------------------
dataSourceName = person
osgi.jdbc.driver.name = H2-pool-xa
osgi.jndi.service.name = person
service.factoryPid = org.ops4j.datasource
service.id = 696
service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
url = jdbc:h2:mem:person
Provided by : OPS4J Pax JDBC Config (69)

So when we search for services implementing the DataSource interface we find the person datasource we just created.

When we installed the features above we also installed the aries jndi feature. This module maps OSGi services to jndi objects. So we can also use jndi to retrieve the DataSource which will be used in the persistence.xml for jpa later.

jndi url of DataSource

osgi:service/person

Karaf jdbc commands

Karaf contains some commands to manage DataSources and run queries against databases. The commands for managing DataSources in karaf 3.x still work with the older approach of using blueprint to create DataSources, so we will not use those commands, but we can use the functionality to list datasources, list tables and execute queries.

jdbc commands

feature:install jdbc
jdbc:datasources
jdbc:tables person

We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
jdbc:query person "select * from person"

This creates a table person, adds a row to it and shows the table.

The output should look like this

select * from person

NAME                | TWITTERNAME
--------------------------------------
Christian Schneider | @schneider_chris

Accessing the database using JDBC

The project db/examplejdbc shows how to use the datasource we installed and execute jdbc commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class DbExample. The test method is then called as the init method and runs some jdbc statements on the DataSource. The DbExample class is completely independent of OSGi and can easily be tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.
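In essence this is plain JDBC against the injected DataSource; a minimal sketch of such a class (the table and column names follow the person example above):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.sql.DataSource;

public class DbExample {
    private DataSource dataSource;

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Called as the blueprint init method
    public void test() throws Exception {
        Connection con = dataSource.getConnection();
        try {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("select name, twittername from person");
            while (rs.next()) {
                System.out.println(rs.getString(1) + ", " + rs.getString(2));
            }
        } finally {
            con.close();
        }
    }
}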

Build and install

Build works like always using maven

> mvn clean install

In Karaf we just need our own bundle as we have no special dependencies

> install -s mvn:net.lr.tutorial.karaf.db/db-examplejdbc/1.0-SNAPSHOT
Using datasource H2, URL jdbc:h2:~/test
Christian Schneider, @schneider_chris,

After installation the bundle should directly print the db info and the persisted person.

Accessing the database using JPA

For larger projects often JPA is used instead of hand crafted SQL. Using JPA has two big advantages over JDBC.

  1. You need to maintain less SQL code
  2. JPA provides dialects for the subtle differences in databases that you would otherwise have to code yourself.

For this example we use Hibernate as the JPA Implementation. On top of it we add Apache Aries JPA which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

The project examplejpa shows a simple project that implements a PersonService managing Person objects.
Person is just a java bean annotated with the JPA @Entity annotation.

Additionally the project implements two Karaf shell commands, person:add and person:list, that allow you to easily test the project.

persistence.xml

Like in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings and lists the persistent classes. The datasource is referred to using the jndi name "osgi:service/person".

The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven bundle plugin in the pom. The Aries JPA container will scan for this attribute and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.

blueprint.xml

We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
The following snippet is the interesting part:

<bean id="personService" class="net.lr.tutorial.karaf.db.examplejpa.impl.PersonServiceImpl"> <jpa:context property="em" unitname="person" /> <tx:transaction method="*" value="Required"/> </bean>

This looks up the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread safe EntityManager (using a ThreadLocal under the hood) into the PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success, or rolls back on any exception thrown.

Build and Install

Build

mvn clean install

A prerequisite is that the datasource is installed as described above. Then we have to install the bundles for hibernate, aries jpa, transaction, jndi and of course our db-examplejpa bundle.
See ReadMe.txt for the exact commands to use.

Test

person:add 'Christian Schneider' @schneider_chris

Then we list the persisted persons

karaf@root> person:list
Christian Schneider, @schneider_chris

Summary

In this tutorial we learned how to work with databases in Apache Karaf. We installed a driver for our database and created a DataSource. We were able to check and manipulate the DataSource using the jdbc:* commands. In the examplejdbc project we learned how to acquire a datasource and work with it using plain jdbc4. Last but not least, we also used jpa to access our database.

Back to Karaf Tutorials

Categories: Christian Schneider

New Apache WSS4J and CXF releases

Colm O hEigeartaigh - Fri, 02/20/2015 - 17:37
Apache WSS4J 2.0.3 and 1.6.18 have been released. Both releases contain a number of fixes in relation to validating SAML tokens, as covered earlier. In addition, Apache WSS4J 2.0.3 has unified security error messages to prevent some attacks (see here for more information). Apache CXF 3.0.4 and 2.7.15 have also been released, both of which pick up the recent WSS4J releases.
Categories: Colm O hEigeartaigh

Unified security error messages in Apache WSS4J and CXF

Colm O hEigeartaigh - Mon, 02/16/2015 - 17:59
When Apache WSS4J encounters an error on processing a secured SOAP message it throws an exception. This could be a configuration error, an invalid Signature, incorrect UsernameToken credentials, etc. The SOAP stack in question, Apache CXF for the purposes of this post, then converts the exception into a SOAP Fault and returns it to the client. However, the SOAP stack must take care not to leak information (e.g. internal configuration details) to an attacker. This post looks at some changes that are coming in WSS4J and CXF in this area.

The later releases of Apache CXF 2.7.x map the WSS4J exception message to one of the standard error QNames defined in the SOAP Message Security Profile 1.1 specification. One exception is when a "replay" error occurs, such as when a UsernameToken nonce is re-used. This type of error is commonly seen in testing scenarios, when messages are replayed, and returning the original error aids in figuring out what is going wrong. Apache CXF 3.0.0 -> 3.0.3 extends this functionality a bit by adding a new configuration option:
  • ws-security.return.security.error - Whether to return the security error message to the client, and not one of the default error QNames. Default is "false".
However, even returning one of the standard security error QNames can provide an "oracle" for certain types of attacks. For example, Apache WSS4J recently released a security advisory for an attack that works if an attacker can distinguish whether the decryption of an EncryptedKey or EncryptedData structure failed. There are also attacks on data encrypted via a cipher block chaining (CBC) mode, that only require the knowledge about whether the specific decryption failed.

Therefore from Apache WSS4J 2.0.3 onwards (and CXF 3.0.4 onwards) a single error fault message ("A security error was encountered when verifying the message") and code ("http://ws.apache.org/wss4j", "SecurityError") is returned on a security processing error. It is still possible to set "ws-security.return.security.error" to "true" to return the underlying security error to aid in testing etc.
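For testing, the property can be set like any other CXF contextual property; a minimal sketch using the standard JAX-WS Endpoint API (the service implementation class and address here are placeholders):

import java.util.HashMap;
import java.util.Map;

import javax.xml.ws.Endpoint;

public class Server {
    public static void main(String[] args) {
        // DoubleItPortTypeImpl is a placeholder @WebService implementation
        Endpoint endpoint = Endpoint.publish("http://localhost:9000/doubleit", new DoubleItPortTypeImpl());
        Map<String, Object> props = new HashMap<String, Object>();
        // Return the underlying WSS4J error message instead of the unified fault
        props.put("ws-security.return.security.error", "true");
        endpoint.setProperties(props);
    }
}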
Categories: Colm O hEigeartaigh

Two new security advisories released for Apache WSS4J

Colm O hEigeartaigh - Tue, 02/10/2015 - 12:47
Two new security advisories have been released for Apache WSS4J, both of which were fixed in Apache WSS4J 2.0.2 and 1.6.17.
  • CVE-2015-0226: Apache WSS4J is (still) vulnerable to Bleichenbacher's attack
  • CVE-2015-0227: Apache WSS4J doesn't correctly enforce the requireSignedEncryptedDataElements property
Please see the Apache WSS4J security advisories page for more information.
Categories: Colm O hEigeartaigh

New SAML validation changes in Apache WSS4J and CXF

Colm O hEigeartaigh - Tue, 02/03/2015 - 18:27
Two new Apache WSS4J releases are currently under vote (1.6.18 and 2.0.3). These releases contain a number of changes in relation to validating SAML tokens. Apache CXF 2.7.15 and 3.0.4 will pick up these changes in WSS4J and enforce some additional constraints. This post will briefly cover what these new changes are.

1) Security constraints are now enforced on SAML Authn (Authentication) Statements

From the 1.6.18 and 2.0.3 WSS4J releases, security constraints are now enforced on SAML 2.0 AuthnStatements and SAML 1.1 AuthenticationStatements by default. What this means is that we check that:
  • The AuthnInstant/AuthenticationInstant is not "in the future", subject to a configured future TTL value (60 seconds by default).
  • The SessionNotOnOrAfter value for SAML 2.0 tokens is not stale / expired.
  • The Subject Locality (IP) address is either a valid IPv4 or IPv6 address.
2) Enforce constraints on SAML Assertion "IssueInstant" values

We now enforce that a SAML Assertion "IssueInstant" value is not "in the future", subject to the configured future TTL value (60 seconds by default). In addition, if there is no "NotOnOrAfter" Condition in the Assertion, we now enforce a TTL constraint on the IssueInstant of the Assertion. The default value for this is 30 minutes.

3) Add AudienceRestriction validation by default

The new WSS4J releases add the ability to pass a list of Strings through to the SAML validation code, against which any AudienceRestriction addresses of the assertion are compared. If the list that is passed through is not empty, then at least one of the AudienceRestriction addresses in the assertion must be contained in the list. Apache CXF 3.0.4 and 2.7.15 will pass through the endpoint address and the service QName by default for validation (for JAX-WS endpoints). This is controlled by a new JAX-WS security property:
  • ws-security.validate.audience-restriction: If this is set to "true", then IF the SAML Token contains Audience Restriction URIs, one of them must match either the request URL or the Service QName. The default is "true" for CXF 3.0.x, and "false" for 2.7.x.
Categories: Colm O hEigeartaigh

Single Logout with Fediz - WS-Federation

Jan Bernhardt - Fri, 01/30/2015 - 15:25
WS-Federation is primarily used to achieve Single Sign On (SSO). This raises the challenge of how to securely log out from multiple applications once the user is done with his work. Navigating to each previously used application to hit the logout button would be quite inconvenient. Fortunately the WS-Federation standard does not only define how to do single sign-on, but also how to do single logout.

In this blog I'll explain how to set up a demonstrator to show single sign-on as well as single sign-off. Since single sign-off is implemented in CXF Fediz version 1.2, which is not yet released, I'm going to use a snapshot build.
First of all we need to download Tomcat 7, since we will deploy our IDP/STS as well as our two demo applications to a tomcat container each. I renamed the tomcat folders of my extracted tomcat zips to:
  • Fediz-IDP
  • Fediz-RP1
  • Fediz-RP2
Next I opened a terminal within the cxf-fediz source code, which I downloaded from github, and ran maven to build Fediz:

mvn clean install

Setup IDP

After my build was successful I copied the fediz-idp-sts.war file from cxf-fediz/services/sts/target/ into my Fediz-IDP/webapps/ deployment folder. I did the same with the fediz-idp.war file from cxf-fediz/services/idp/target/.
Since the default https fediz port for the IDP and STS is 9443, and to avoid port collisions with my two other tomcat instances, I need to update the port configuration in my tomcat Fediz-IDP/conf/server.xml. Here I update all ports starting with '8' to start with '9'.
<Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
maxHttpHeaderSize="65536"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
keystoreFile="idp-ssl-key.jks"
keystorePass="tompass"
truststoreFile="idp-ssl-trust.jks"
truststorePass="ispass"
truststoreType="JKS"
clientAuth="want"
sslProtocol="TLS" />
To enable SSL for my Fediz-IDP tomcat I need to provide a keystore as well as a truststore. For demo purposes I will simply copy the java keystores from my fediz build: cxf-fediz/services/idp/target/classes/ contains the files idp-ssl-key.jks and idp-ssl-trust.jks, which I copy to my Fediz-IDP root folder.
Before you can start Fediz-IDP you also need to get the expected JDBC driver, which is by default the HyperSQL JDBC driver. You need to download the zip file and then extract all jar files from hsqldb-2.3.2/hsqldb/lib/ to Fediz-IDP/lib/.
Now you can start the Fediz-IDP tomcat server via Fediz-IDP/bin/startup.sh.
To avoid OutOfMemory errors you should add the following settings to your CATALINA_OPTS environment variable: -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=128M
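On Linux/macOS this could be done, for example, like this before starting Tomcat:

export CATALINA_OPTS="-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=128M"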
By default the Fediz IDP has only basic authentication activated for user login. This is done to make it easier to run some system tests. However, for single logout HTTP Basic authentication is not recommended, because the browser caches your user credentials and automatically sends them to the IDP. So you would have to close all your current browser windows to actually see the login popup again after logout. If you instead enable form-based authentication in your webapps/fediz-idp/WEB-INF/security-config.xml, you will actually see a login form again after your logout action. Here is a sample configuration showing how to enable form-based authentication:
<security:http use-expressions="true">
<security:custom-filter after="CHANNEL_FILTER" ref="stsPortFilter" />
<security:custom-filter after="SERVLET_API_SUPPORT_FILTER" ref="entitlementsEnricher" />
<security:intercept-url pattern="/FederationMetadata/2007-06/FederationMetadata.xml" access="isAnonymous() or isAuthenticated()" />

<!-- MUST be http-basic so that systests run fine -->
<security:form-login />
<security:http-basic />
<security:logout delete-cookies="FEDIZ_HOME_REALM,JSESSIONID" invalidate-session="true" />
</security:http>
You can also disable HTTP Basic authentication if you want to, or just leave it enabled. In that case you can use both authentication styles: you will see an HTML login form when you are asked to log in, but you could also provide an HTTP Basic authentication header instead.
After you have updated the IDP configuration, you need to restart the IDP Tomcat server to apply your changes.
Setup 1. Demo App
First of all we must provide the Fediz plugin dependencies to our RP Tomcat container. For this purpose we need to create a fediz subfolder in Fediz-RP1/lib/. Next we extract the content of the Tomcat plugin dependencies zip file (cxf-fediz/plugins/tomcat/target/fediz-tomcat-1.2.0-SNAPSHOT-zip-with-dependencies.zip) into this fediz subfolder.
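As a sketch, these two steps could look like this on the command line (all paths as given above):

# Create the plugin folder and unpack the Fediz Tomcat plugin into it
mkdir Fediz-RP1/lib/fediz
unzip cxf-fediz/plugins/tomcat/target/fediz-tomcat-1.2.0-SNAPSHOT-zip-with-dependencies.zip -d Fediz-RP1/lib/fediz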
To make sure that Tomcat loads these additional dependencies we must also update catalina.properties in Fediz-RP1/conf/:
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/lib/fediz/*.jar
For Fediz-RP1 we will keep all port settings as they are. To keep things simple with the SSL connection we will reuse the idp-ssl-key.jks keystore from the Fediz-IDP and copy this keystore to the Fediz-RP1 root folder as well. The server.xml file of Fediz-RP1 needs the following SSL connector:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
keystoreFile="idp-ssl-key.jks"
keystorePass="tompass"
clientAuth="false"
sslProtocol="TLS" />
Before we start the demo app container, we need to copy the demo app, which can be found at cxf-fediz/examples/simpleWebapp/target/fedizhelloworld.war, to the Fediz-RP1/webapps/ folder.
Finally we must provide a correct Fediz configuration file to the config folder of the demo app container. For this purpose we can copy the demo config file from cxf-fediz/examples/simpleWebapp/src/main/config/fediz_config.xml to Fediz-RP1/conf/.
To make sure that the SAML tokens issued by the STS can be validated at the RP, we must also install the correct STS truststore. We can do this by copying cxf-fediz/services/sts/target/classes/ststrust.jks to the Fediz-RP1 root folder. The three copy steps are sketched below.
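Summarized as shell commands (all paths taken from the steps above):

# Deploy the demo app, its Fediz configuration, and the STS truststore
cp cxf-fediz/examples/simpleWebapp/target/fedizhelloworld.war Fediz-RP1/webapps/
cp cxf-fediz/examples/simpleWebapp/src/main/config/fediz_config.xml Fediz-RP1/conf/
cp cxf-fediz/services/sts/target/classes/ststrust.jks Fediz-RP1/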
Now everything should be in place so that we can start Fediz-RP1.

We should see no exceptions in the logfiles and we should see the metadata document from the RP at the following URL: https://localhost:8443/fedizhelloworld/FederationMetadata/2007-06/FederationMetadata.xml
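A quick way to check this from the command line (the -k flag skips certificate validation, which is needed here because of the self-signed demo certificate):

curl -k https://localhost:8443/fedizhelloworld/FederationMetadata/2007-06/FederationMetadata.xml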
Setup 2. Demo App
The second demo app will be quite similar to the first. Therefore we can simply copy the Fediz-RP1 folder and rename it to Fediz-RP2. To avoid port collisions, we also need to update some server ports: in Fediz-RP2/conf/server.xml we replace the leading '8' in all port numbers with a leading '7'.

Since we are going to start both Tomcat containers on the same machine (localhost), we must also change the context path of the second demo app; otherwise both apps would use the same cookies. Thus we need to rename the fedizhelloworld.war file within the Fediz-RP2/webapps/ folder to fedizhelloworld2.war. These steps are sketched below.
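As a command-line sketch (the blind sed replacement is a shortcut of mine and assumes GNU sed; double-check afterwards that only port numbers were changed):

# Clone the first RP container, shift its ports from 8xxx to 7xxx, rename the webapp
cp -r Fediz-RP1 Fediz-RP2
sed -i 's/port="8/port="7/g' Fediz-RP2/conf/server.xml
mv Fediz-RP2/webapps/fedizhelloworld.war Fediz-RP2/webapps/fedizhelloworld2.war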

To also make this application known to the IDP, you need to register it via the IDP REST interface. You can use SoapUI, for example, or simply curl from your command line; a curl sketch follows the payload below.

POST https://localhost:9443/fediz-idp/services/rs/applications
<ns2:application xmlns:ns2="http://org.apache.cxf.fediz/">
     <realm>urn:org:apache:cxf:fediz:fedizhelloworld2</realm>
     <role>ApplicationServiceType</role>
     <serviceDisplayName>Fedizhelloworld</serviceDisplayName>
     <serviceDescription>Web Application to illustrate WS-Federation</serviceDescription>
     <protocol>http://docs.oasis-open.org/wsfed/federation/200706</protocol>
     <tokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</tokenType>
     <lifeTime>3600</lifeTime>
</ns2:application>
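As a curl sketch, assuming the payload above is saved as application.xml and that the REST interface accepts HTTP Basic authentication (the admin:password credentials here are a placeholder, not taken from this post):

curl -k -u admin:password -X POST \
     -H "Content-Type: application/xml" \
     -d @application.xml \
     https://localhost:9443/fediz-idp/services/rs/applications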
Next you need to add all claims required for the helloworld application. Since the claim types are already known to the default fedizhelloworld application, you only need to add a link between the application and the claims (a curl sketch follows the payloads):

POST https://localhost:9443/fediz-idp/services/rs/applications/urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld2/claims 
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role</claimType>
<optional>false</optional>
</ns2:requestClaim>
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname</claimType>
<optional>true</optional>
</ns2:requestClaim>
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname</claimType>
<optional>true</optional>
</ns2:requestClaim>
<ns2:requestClaim xmlns:ns2="http://org.apache.cxf.fediz/">
<claimType>http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress</claimType>
<optional>true</optional>
</ns2:requestClaim>
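Assuming each requestClaim element above is posted as a separate request body (my reading of the interface; the original post does not spell this out), the same curl pattern can be repeated, e.g. with the first payload saved as role-claim.xml (a hypothetical file name):

curl -k -u admin:password -X POST \
     -H "Content-Type: application/xml" \
     -d @role-claim.xml \
     https://localhost:9443/fediz-idp/services/rs/applications/urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld2/claims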
Next you need to register this application with a given IDP realm.
POST https://localhost:9443/fediz-idp/services/rs/idps/urn%3Aorg%3Aapache%3Acxf%3Afediz%3Aidp%3Arealm-A/applications
<ns2:application xmlns:ns2="http://org.apache.cxf.fediz/">
<realm>urn:org:apache:cxf:fediz:fedizhelloworld2</realm>
</ns2:application>
You can check if your application was registered correctly via GET https://localhost:9443/fediz-idp/services/rs/idps.
Now the IDP will be able to provide SAML tokens for the second demo application.
Test Single Sign-On
To test whether single sign-on is working as expected, open the following URL in your browser: https://localhost:8443/fedizhelloworld/secure/fedservlet. You should get redirected to the IDP and need to choose Realm-A as your home realm. Next you need to enter the credentials bob:bob.
You should be redirected back to the fedservlet URL and should see your username, your assigned roles, as well as other claims.

If you now enter https://localhost:7443/fedizhelloworld2/secure/fedservlet in your browser, you should get redirected to the IDP, and then, without the need to enter your credentials again, the IDP should redirect you back to the demo application.

Congratulations! Single Sign-On is working!
Test Single Sign-Off
The goal of this blog post was not to achieve single sign-on but rather single sign-off. You have two options to trigger single logout:
  1. You can invoke a logout request starting at the demo application:
    https://localhost:8443/fedizhelloworld/secure/logout
  2. You can invoke a logout request directly at the IDP:
    https://localhost:9443/fediz-idp/federation?wa=wsignout1.0
After you have triggered the logout process you will be redirected to a page listing all applications for which the IDP had previously issued security tokens. You will also be asked if you really want to log out from all these applications. After you have confirmed the logout request, you should see a confirmation page. This page contains the same list of applications as before, but this time with a green check marker at the end of each line.

This image is the key to performing the actual logout for all the remote applications. The image resource URL points to the logout URL of each application, and by resolving the image resource in your browser you also invoke the logout URL of all these applications.

If you now invoke either of the two applications, you should again be redirected to the login page of the IDP.
Congratulations! Single Logout is working!
Limitations
The WS-Federation standard does not require any application to provide a "logout image" at the logout URL; this has just proven to be best practice. However, if the logout URL of an application does not provide an image, the confirmation page will show a broken image, even though the logout was most likely successful.

The Single Logout implementation in Fediz is currently not able to delegate a logout request to the requestor's IDP. So, for example, if the user is authenticated not at realm-A but at realm-B instead, the IDP does not forward the wsignout action to realm-B. Thus the user will only be logged out of applications in realm-A, while still having an active session in realm-B.

Hopefully a global logout will be supported by Fediz in the future as well.
Categories: Jan Bernhardt
