Latest Activity

Apache CXF in OSGi

Daniel Kulp - Tue, 11/01/2011 - 15:41
I’ve had a bunch of people asking me lately about getting Apache CXF up and running in OSGi. A lot of people have run into issues trying to find the right third-party bundles, configuring things like JAX-WS and JAXB, the usual “class loader” issues, and so on. Thus, I decided I needed to write a blog entry. [...]
Categories: Daniel Kulp

CXF Transform Feature and Redirection

Sergey Beryozkin - Tue, 11/01/2011 - 12:40
The Transform feature is proving useful to CXF users, given that dynamically changing or dropping incoming or outbound namespaces, as well as renaming elements, is often required when the changes have not been propagated to all consumers or are simply not possible. The fact that all the modifications are done at the StAX level is critical as far as performance is concerned.

The feature has been enhanced recently, thanks to Aki Yoshida. Incoming payloads can now get new elements added in a number of ways, which can be very useful when services validating the data against closed-content schemas are in operation. Multiple updates (for example, adding some elements, dropping another, and changing the name and/or namespace of yet another, all on the same payload) are better supported too.

There will be more enhancements coming in time, but I've actually been planning to highlight one of the lesser-known tricks one can use with the Transform feature, something we demonstrate in our Talend distributions.

Consider the case where you have an endpoint deployed with hundreds or even thousands of clients consuming it. Those could be long-running clients executing code, possibly embedded in browsers, and that code is aware of the way this endpoint can be consumed. Now the time has come to deploy an updated endpoint. The more open the environment, the fewer options there are to get the clients updated at the same time the old endpoint goes down and the new one gets deployed.

This is not a new problem per se, but the Transform feature, in combination with servlet redirection, offers its own simple way to tackle the cost of upgrading all the clients:

Keep the servlet that served the now-retired endpoint around, but have it redirect all the requests from the old clients to the new servlet serving the updated endpoint. This will keep the old clients happy for a while, and the process of upgrading them can become less 'stressful'.

So now we have a new endpoint serving the new clients, but it will also get the requests redirected to it from the older clients. How will the new endpoint figure out how to handle a given request without resorting to low-level XML or JSON manipulation?

Yes, you are right - the Transform feature will help. It will ensure the old requests are recognized by the new endpoint, and that the responses from this new endpoint are recognized by the old clients still unaware of the fact that they are talking to the new endpoint.

Here is how the relevant part of web.xml may look:

<!-- Old Servlet -->

<!-- New Servlet -->



CXFServletV1 will redirect all the requests to the servlet listening on /v2 which is CXFServletV2.
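A sketch of what that web.xml fragment might contain, assuming the redirect init-parameters ("redirects-list", "redirect-servlet-name") supported by the CXF servlet transport; the URL patterns and redirect expression are illustrative:

```xml
<!-- Old Servlet: keeps the /v1 address alive and redirects everything to CXFServletV2 -->
<servlet>
    <servlet-name>CXFServletV1</servlet-name>
    <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
    <init-param>
        <param-name>redirects-list</param-name>
        <param-value>/.*</param-value>
    </init-param>
    <init-param>
        <param-name>redirect-servlet-name</param-name>
        <param-value>CXFServletV2</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>CXFServletV1</servlet-name>
    <url-pattern>/v1/*</url-pattern>
</servlet-mapping>

<!-- New Servlet: serves the updated endpoint -->
<servlet>
    <servlet-name>CXFServletV2</servlet-name>
    <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>CXFServletV2</servlet-name>
    <url-pattern>/v2/*</url-pattern>
</servlet-mapping>
```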

Next we configure the Transform feature like this:

<bean id="transform"
<property name="contextPropertyName"
<!-- the rest of the feature config -->

The feature is configured to do the transformations only if a boolean property identified by the "contextPropertyName" is set on the current message. In this case, if the request has been redirected, the message will have an "http.service.redirection" property set to true. That won't be the case for requests coming from the new clients, and thus the feature won't affect them.
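A fuller sketch of the feature configuration, assuming CXF's StaxTransformFeature class; the namespace mappings are illustrative:

```xml
<bean id="transform" class="org.apache.cxf.feature.StaxTransformFeature">
    <!-- only transform messages that were redirected from the old servlet -->
    <property name="contextPropertyName" value="http.service.redirection"/>
    <!-- map the old namespace to the new one on incoming requests -->
    <property name="inTransformElements">
        <map>
            <entry key="{http://books.v1}*" value="{http://books.v2}*"/>
        </map>
    </property>
    <!-- and map it back on outgoing responses so old clients still understand them -->
    <property name="outTransformElements">
        <map>
            <entry key="{http://books.v2}*" value="{http://books.v1}*"/>
        </map>
    </property>
</bean>
```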
Categories: Sergey Beryozkin

Vote early, vote often: Fixing WSIT-1590

Glen Mazza - Mon, 10/31/2011 - 12:00

Just after demonstrating how CXF's new STS works, I next wanted to blog about how Metro WSCs and WSPs can use a CXF STS (just as I had done in the opposite direction with a Metro STS). But I found a blocking issue in getting that to occur: Metro does not presently accept encoded email addresses within certificates, so I typed up a bug report for it. Please log into the JIRA and vote for it! Getting this fixed will greatly increase interoperability between CXF and Metro, which is helpful for everyone who works with SOAP web services.


Apache CXF STS documentation - part IV

Colm O hEigeartaigh - Thu, 10/27/2011 - 13:45
In the previous post I covered the TokenProvider interface, which is used to generate tokens in the STS, and an implementation that ships with the STS to generate SecurityContextTokens. In this post, I will cover the other TokenProvider implementation that ships with the STS, which issues SAML Tokens (both 1.1 and 2.0).

1) The SAMLTokenProvider

The SAMLTokenProvider can issue SAML 1.1 and SAML 2.0 tokens. To request a SAML 1.1 token, the client must use one of the following Token Types:
  • urn:oasis:names:tc:SAML:1.0:assertion
To request a SAML 2.0 token, the client must use one of the following Token Types:
  • urn:oasis:names:tc:SAML:2.0:assertion
The following properties can be configured on the SAMLTokenProvider directly:
  • List<AttributeStatementProvider> attributeStatementProviders - A list of objects that can add attribute statements to the token.
  • List<AuthenticationStatementProvider> authenticationStatementProviders - A list of objects that can add authentication statements to the token.
  • List<AuthDecisionStatementProvider> authDecisionStatementProviders - A list of objects that can add authorization decision statements to the token.
  • SubjectProvider subjectProvider - An object used to add a Subject to the token.
  • ConditionsProvider conditionsProvider - An object used to add a Conditions statement to the token.
  • boolean signToken - Whether to sign the token or not. The default is true.
  • Map<String, SAMLRealm> realmMap - A map of realms to SAMLRealm objects.
We will explain each of these properties in more detail in the next few sections.

2) Realms in the TokenProviders

As explained in the previous post, the TokenProvider interface has a method that takes a realm parameter:
  • boolean canHandleToken(String tokenType, String realm) - Whether this TokenProvider implementation can provide a token of the given type, in the given realm
In other words, the TokenProvider implementation is being asked whether it can supply a token corresponding to the Token Type in a particular realm. How the STS knows what the desired realm is will be covered in a future post. However, we will explain how the realm is handled by the TokenProviders here. The SCTProvider ignores the realm in the canHandleToken method. In other words, the SCTProvider can issue a SecurityContextToken in *any* realm. If a realm is passed through via the TokenProviderParameters when creating the token, the SCTProvider will cache the token with the associated realm as a property (this was explained in the previous post).

Unlike the SCTProvider, the SAMLTokenProvider does not ignore the realm parameter to the "canHandleToken" method. Recall that the SAMLTokenProvider has a property "Map<String, SAMLRealm> realmMap". The canHandleToken method checks to see if the given realm is null, and if it is not null then the realmMap *must* contain a key which matches the given realm. So if the STS implementation is designed to issue tokens in different realms, then the realmMap of the SAMLTokenProvider must contain the corresponding realms in the key-set of the map.

The realmMap property maps realm Strings to SAMLRealm objects. The SAMLRealm class contains the following properties:
  • String issuer - the Issuer String to use in this realm
  • String signatureAlias - the keystore alias to use to retrieve the private key the SAMLTokenProvider uses to sign the generated token
In other words, if the SAMLTokenProvider is "realm aware", then it can issue tokens with an issuer name and signing key specific to a given realm. If no realm is passed to the SAMLTokenProvider, then these properties are obtained from the "system wide" properties defined in the STSPropertiesMBean object passed as part of the TokenProviderParameters, which can be set via the following methods:
  • void setSignatureUsername(String signatureUsername)
  • void setIssuer(String issuer)
Two additional properties are required when signing SAML Tokens. A password is required to access the private key in the keystore, which is supplied by a CallbackHandler instance. A WSS4J "Crypto" instance is also required which controls access to the keystore. These are both set on the STSPropertiesMBean object via:
  • void setCallbackHandler(CallbackHandler callbackHandler)
  • void setSignatureCrypto(Crypto signatureCrypto)
Note that the signature of generated SAML Tokens can be disabled, by setting the "signToken" property of the SAMLTokenProvider to "false". As per the SCTProvider, the generated SAML tokens are stored in the cache with the associated realm stored as a property.
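Wiring the realm support up in Spring might look like the following sketch (bean classes and package names assumed from the CXF 2.5 STS module; realm names, issuers, and aliases are illustrative):

```xml
<bean id="samlTokenProvider" class="org.apache.cxf.sts.token.provider.SAMLTokenProvider">
    <property name="realmMap">
        <map>
            <entry key="realmA" value-ref="realmA"/>
            <entry key="realmB" value-ref="realmB"/>
        </map>
    </property>
</bean>

<!-- each realm gets its own issuer name and signing key alias -->
<bean id="realmA" class="org.apache.cxf.sts.token.realm.SAMLRealm">
    <property name="issuer" value="A-Issuer"/>
    <property name="signatureAlias" value="realmA-key"/>
</bean>
<bean id="realmB" class="org.apache.cxf.sts.token.realm.SAMLRealm">
    <property name="issuer" value="B-Issuer"/>
    <property name="signatureAlias" value="realmB-key"/>
</bean>
```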

3) Populating SAML Tokens

In the previous section we covered how a generated SAML token is signed, how to configure the key used to sign the assertion, and how to set the Issuer of the Assertion. In this section we will describe how to populate the SAML Token itself. The SAMLTokenProvider is designed to be able to issue a wide range of SAML Tokens. It does this by re-using the SAML abstraction library that ships with Apache WSS4J, which defines a collection of beans that are configured and then assembled in a CallbackHandler to create a SAML assertion.

3.1) Configure a Conditions statement

The SAMLTokenProvider has a "ConditionsProvider conditionsProvider" property, which can be used to configure the generated Conditions statement which is added to the SAML Assertion. The ConditionsProvider has a method to return a ConditionsBean object, and a method to return a lifetime in seconds. The ConditionsBean holds properties such as the not-before and not-after dates, etc. The SAMLTokenProvider ships with a default ConditionsProvider implementation that is used to insert a Conditions statement in every SAML token that is generated. This implementation uses a default lifetime of 5 minutes, and sets the Audience Restriction URI of the Conditions statement to the received "AppliesTo" address, which is obtained from the TokenProviderParameters object.

The DefaultConditionsProvider can be configured to change the lifetime of the issued token. If you want to remove the Conditions statement altogether from the generated assertion, or implement a custom Conditions statement, then you must implement the ConditionsProvider interface and set your implementation on the SAMLTokenProvider.
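Changing the lifetime might look like this sketch (assuming the DefaultConditionsProvider class from the CXF 2.5 STS module and a lifetime property expressed in seconds):

```xml
<bean id="conditionsProvider" class="org.apache.cxf.sts.token.provider.DefaultConditionsProvider">
    <!-- lifetime in seconds: 600 = 10 minutes instead of the default 5 -->
    <property name="lifetime" value="600"/>
</bean>
```

The bean would then be set on the SAMLTokenProvider via its "conditionsProvider" property.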

3.2) Configure a Subject

The SAMLTokenProvider has a "SubjectProvider subjectProvider" property, which can be used to configure the Subject of the generated token, regardless of the version of the token. The SubjectProvider interface defines a single method to return a SubjectBean, given the token provider parameters, the parent Document of the assertion, and a secret key to use (if any). The SubjectBean contains the Subject name, name-qualifier, confirmation method, and KeyInfo element, amongst other properties. The SAMLTokenProvider ships with a default SubjectProvider implementation that is used to insert a Subject into every SAML Token that is generated.

The DefaultSubjectProvider has a single configuration method to set the subject name qualifier. It creates a subject confirmation method by checking the received key type. The subject name is the name of the principal obtained from TokenProviderParameters. Finally, a KeyInfo element is set on the SubjectBean under the following conditions:
  • If a "SymmetricKey" Key Type algorithm is specified by the client, then the secret key passed through to the SubjectProvider is encrypted with the X509Certificate of the recipient, and added to the KeyInfo element. How the provider knows the public key of the recipient will be covered later.
  • If a "PublicKey" KeyType algorithm is specified by the client, the X509Certificate that is received as part of the "UseKey" request is inserted into the KeyInfo element of the Subject.
  • If a "Bearer" KeyType algorithm is specified by the client, then no KeyInfo element is added to the Subject.

For the "SymmetricKey" Key Type case, the SAMLTokenProvider creates a secret key using a SymmetricKeyHandler instance. The SymmetricKeyHandler first checks that the key size supplied as part of the KeyRequirements object fits between a configurable minimum and maximum key size. It also checks any client entropy that is supplied, as well as the computed key algorithm. It then creates some entropy and a secret key.

To add a custom Subject element to an assertion, you must create your own SubjectProvider implementation, and set it on the SAMLTokenProvider.

3.3) Adding Attribute Statements

The SAMLTokenProvider has a "List<AttributeStatementProvider> attributeStatementProviders" property, which can be used to add AttributeStatements to the generated assertion. Each object in the list adds a single Attribute statement. The AttributeStatementProvider contains a single method to return an AttributeStatementBean given the TokenProviderParameters object. The AttributeStatementBean contains a SubjectBean (for SAML 1.1 assertions) and a list of AttributeBeans. The AttributeBean object holds the attribute name/qualified-name/name-format, and a list of attribute values, amongst other properties.

If no statement provider is configured in the SAMLTokenProvider, then the DefaultAttributeStatementProvider is invoked to create an Attribute statement to add to the assertion. It creates a default "authenticated" attribute, and also creates separate Attributes for any "OnBehalfOf" or "ActAs" elements that were received in the request. If the received OnBehalfOf/ActAs element was a UsernameToken, then the username is added as an Attribute. If the received element was a SAML Assertion, then the subject name is added as an Attribute. 

3.4) Adding Authentication Statements

The SAMLTokenProvider has a "List<AuthenticationStatementProvider> authenticationStatementProviders" property, which can be used to add AuthenticationStatements to the generated assertion. Each object in the list adds a single Authentication statement. The AuthenticationStatementProvider contains a single method to return an AuthenticationStatementBean given the TokenProviderParameters object. This contains a SubjectBean (for SAML 1.1 assertions), an authentication instant, authentication method, and other properties. No default implementation of the AuthenticationStatementProvider interface is provided in the STS, so if you want to issue Authentication Statements you will have to write your own.
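Plugging a custom provider into the list might look like this sketch (the com.example.MyAuthenticationStatementProvider class is hypothetical, standing in for your own implementation of the interface):

```xml
<bean id="samlTokenProvider" class="org.apache.cxf.sts.token.provider.SAMLTokenProvider">
    <property name="authenticationStatementProviders">
        <list>
            <!-- hypothetical custom implementation of AuthenticationStatementProvider -->
            <bean class="com.example.MyAuthenticationStatementProvider"/>
        </list>
    </property>
</bean>
```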

3.5) Adding Authorization Decision Statements

The SAMLTokenProvider has a "List<AuthDecisionStatementProvider> authDecisionStatementProviders" property, which can be used to add AuthzDecisionStatements to the generated assertion. Each object in the list adds a single statement. The AuthDecisionStatementProvider contains a single method to return an AuthDecisionStatementBean given the TokenProviderParameters object. This contains a SubjectBean (for SAML 1.1 assertions), the decision (permit/indeterminate/deny), the resource URI, and a list of ActionBeans, amongst other properties. No default implementation of the AuthDecisionStatementProvider interface is provided in the STS.

Note that for SAML 1.1 tokens, the Subject is embedded in one of the Statements. When creating a SAML 1.1 Assertion, if a given Authentication/Attribute/AuthzDecision statement does not have a subject, then the standalone Subject is inserted into the statement. Finally, once a SAML token has been created, it is stored in the cache (if one is configured), with a lifetime corresponding to that of the Conditions statement. A TokenProviderResponse object is created with the DOM representation of the SAML Token, the SAML Token ID, lifetime, entropy bytes, references, etc.
    Categories: Colm O hEigeartaigh

    Activating GNOME Terminal with preconfigured tabs

    Glen Mazza - Thu, 10/27/2011 - 13:00

    I use Ubuntu Linux, and for most types of tasks I normally need a set of terminal (console) tabs to be open. Which tabs, pointing to which directories, and which applications should be running in them is of course task-dependent. To speed task startup I researched how to start up several preconfigured tabs at once, which I can activate using a command-line script. In turn, I can create a command-line script for each type of "need lots of tabs" task that I have.

    For a specific task, say running Camel samples and supplying patches for any problems/enhancements, I might need several tabs to be open:

    I found the following script would create the above for me, so all I need to do is run it with sh for the preconfigured multi-tabbed terminal to activate:

    gnome-terminal --tab-with-profile=HasTitle --title "Karaf Dir" --working-directory="dataExt3/underthehood/apache-karaf-2.2.4" \
      --tab-with-profile=HasTitle --title "Karaf Container" --working-directory="dataExt3/underthehood/apache-karaf-2.2.4/bin" -e "bash -c \"./karaf; exec bash\"" \
      --tab-with-profile=HasTitle --title "Camel Dist Examples" --working-directory="dataExt3/underthehood/apache-camel-2.8.2/examples" \
      --tab-with-profile=HasTitle --title "Eclipse Dir" --working-directory="/home/gmazza" \
      --tab-with-profile=HasTitle --title "Camel Trunk Examples" --working-directory="dataExt3/opensource/camel/examples"

    Notes on the above script:

    1. In order for the above tab "title" fields to properly display, you'll need to create a new Terminal profile, which I've called "HasTitle" above. Select Edit | Profiles... from the GNOME Terminal Menu, and select New Profile with the name of HasTitle and based on the Default profile. Then on the subsequent configuration popup window select the "Title and Command" tab, and for the option "When terminal commands set their own titles:", choose "Keep Initial Title".
    2. To activate startup commands in a tab, the -e "echo foo" setting described in the gnome-terminal man page automatically closes the tab after the command executes, which is not helpful if you wish to see the results and/or enter additional commands in the tab afterwards. As explained here, using -e "bash -c..." gets around that problem, and is what I'm doing in the second tab above to activate the Karaf OSGi shell on startup.
    3. Make sure you have no whitespace characters after the line-ending backslashes, else the shell will interpret the script as multiple commands and report errors.


    Archiva 1.4-M1 released

    Olivier Lamy - Wed, 10/26/2011 - 13:32
    Apache Archiva 1.4-M1 has been released.
    Some nice features have been added:

    • It is now possible to create a staging repository for any managed repository and later merge the results.

    • You can now use REST services to control Archiva or search for artifacts. See REST Services for more information.

    • Database storage for repository metadata has been replaced with a JCR repository based on Apache Jackrabbit by default (other options such as a flat-file storage may be made available in the future).

    • The search interface now provides the ability to search on OSGi metadata (based on the update of the Apache Maven Indexer library).

    • You can now download Maven index content from remote repositories to include artifacts which are not present locally in your search results.

    Full release notes available here:

    Download page:

    Have fun, and some nice new features will come soon :-)

    Apache Archiva, Archiva, Apache Maven, Maven, Apache are trademarks of The Apache Software Foundation.

    Categories: Olivier Lamy

    Apache CXF STS documentation - part III

    Colm O hEigeartaigh - Tue, 10/25/2011 - 18:08
    In the next couple of blog posts I will describe how to generate tokens in the new STS implementation shipped as part of Apache CXF 2.5. In this post I will detail the interface that is used for generating tokens, as well as an implementation to generate SecurityContextTokens that ships with the STS. In the next post, I will describe how to generate SAML tokens.

    1) The TokenProvider interface

    Security tokens are created in the STS via the TokenProvider interface. It has three methods:
    • boolean canHandleToken(String tokenType) - Whether this TokenProvider implementation can provide a token of the given type
    • boolean canHandleToken(String tokenType, String realm) - Whether this TokenProvider implementation can provide a token of the given type, in the given realm
    • TokenProviderResponse createToken(TokenProviderParameters tokenParameters) - Create a token using the given parameters
    A client can request a security token from the STS by either invoking the "issue" operation and supplying a desired token type, or else calling the "validate" operation and passing a (different) token type (token transformation). Assuming that the client request is authenticated and well-formed, the STS will iterate through a list of TokenProvider implementations to see if they can "handle" the received token type. If they can, then the implementation is used to create a security token, which is returned to the client. The second "canHandleToken" method which also takes a "realm" parameter will be covered in a future post.
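The dispatch described above can be sketched in plain Java. This is a simplified model of the pattern; SimpleTokenProvider is an illustrative stand-in, not the real org.apache.cxf.sts interface:

```java
import java.util.List;

// Simplified model of the STS TokenProvider dispatch. The types here are
// illustrative; the real createToken returns a TokenProviderResponse.
interface SimpleTokenProvider {
    boolean canHandleToken(String tokenType);
    String createToken(String tokenType);
}

public class TokenDispatch {
    static String issue(List<SimpleTokenProvider> providers, String tokenType) {
        // The STS iterates through its configured providers and uses the
        // first one that claims to handle the requested token type.
        for (SimpleTokenProvider p : providers) {
            if (p.canHandleToken(tokenType)) {
                return p.createToken(tokenType);
            }
        }
        throw new IllegalArgumentException("No TokenProvider for " + tokenType);
    }

    public static void main(String[] args) {
        SimpleTokenProvider sctProvider = new SimpleTokenProvider() {
            public boolean canHandleToken(String t) { return t.endsWith("/sct"); }
            public String createToken(String t) { return "SecurityContextToken"; }
        };
        System.out.println(issue(List.of(sctProvider),
                "http://schemas.xmlsoap.org/ws/2005/02/sc/sct"));
    }
}
```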

    So to support the issuing of a particular token type in an STS deployment, it is necessary to specify a TokenProvider implementation that can handle that token type. The STS currently ships with two TokenProvider implementations, one for generating SecurityContextTokens, and one for generating SAML Assertions. Before we look at these two implementations, let's take a look at the "createToken" operation in more detail. This method takes a TokenProviderParameters instance.

    2) TokenProviderParameters

    The TokenProviderParameters class is nothing more than a collection of configuration properties to use in creating the token, which are populated by the STS operations using information collated from the request, or static configuration, etc. The properties of the TokenProviderParameters are:
    • STSPropertiesMBean stsProperties - A configuration MBean that holds the configuration for the STS as a whole, such as information about the private key to use to sign issued tokens, etc. This will be covered later.
    • EncryptionProperties encryptionProperties - A properties object that holds encryption information relevant to the intended recipient of the token. This will be covered later.
    • Principal principal - The current client Principal object. This can be used as the "subject" of the generated token.
    • WebServiceContext webServiceContext - The current web service context object. This allows access to the client request.
    • RequestClaimCollection requestedClaims - The requested claims in the token. This will be covered later.
    • KeyRequirements keyRequirements - A set of configuration properties relating to keys. This will be covered later.
    • TokenRequirements tokenRequirements - A set of configuration properties relating to the token. This will be covered later.
    • String appliesToAddress - The URL that corresponds to the intended recipient of the token
    • ClaimsManager claimsManager - An object that can manage claims. This will be covered later.
    • Map<String, Object> additionalProperties - Any additional (custom) properties that might be used by a TokenProvider implementation.
    • STSTokenStore tokenStore - A cache used to store tokens.
    • String realm - The realm to create the token in (this should be the same as the realm passed to "canHandleToken"). This will be covered later.
    If this looks complicated then remember that the STS will take care of populating all of these properties from the request and some additional configuration. You only need to worry about the TokenProviderParameters object if you are creating your own TokenProvider implementation.

    3) TokenProviderResponse

    The "createToken" method returns an object of type TokenProviderResponse. Similar to the TokenProviderParameters object, this just holds a collection of objects that the STS operation parses to construct a response to the client. The properties are:
    • Element token - The (DOM) token that was created by the TokenProvider.
    • String tokenId - The ID of the token
    • long lifetime - The lifetime of the token
    • byte[] entropy - Any entropy associated with the token
    • long keySize - The key size of a secret key associated with the token.
    • boolean computedKey - Whether a computed key algorithm was used in generating a secret key.
    • TokenReference attachedReference - An object which gives information on how to refer to the token when it is "attached".
    • TokenReference unAttachedReference - An object which gives information on how to refer to the token when it is "unattached".
    Most of these properties are optional as far as the STS operation is concerned, apart from the token and token ID. The TokenReference object contains information about how to refer to the token (direct reference vs. Key Identifier, etc.), that is used by the STS to generate the appropriate reference to return to the client. 

    4) The SCTProvider

    Now that we've covered the TokenProvider interface, let's look at an implementation that is shipped with the STS. The SCTProvider is used to provide a token known as a SecurityContextToken, which is defined in the WS-SecureConversation specification. A SecurityContextToken essentially consists of a String Identifier which is associated with a particular secret key. If a service provider receives a SOAP message with a digital signature which refers to a SecurityContextToken in the KeyInfo of the signature, then the service provider knows that it must somehow obtain the secret key associated with that particular Identifier to verify the signature. How this is done is "out of band" (more on this later).

    To request a SecurityContextToken, the client must use one of the following Token Types:
    Two properties can be configured on the SCTProvider directly:
    • long lifetime - The lifetime of the generated SCT. The default is 5 minutes.
    • boolean returnEntropy - Whether to return any entropy bytes to the client or not. The default is true.
    The SCTProvider generates a secret key using the KeyRequirements object that was supplied, and constructs a SecurityContextToken with a random Identifier. It creates a CXF SecurityToken object that wraps this information, and stores it in the supplied cache using the given lifetime. The SecurityContextToken element is then returned, along with the appropriate references, lifetime element, entropy, etc.

    When requesting a token from an STS, the client will typically present some entropy along with a computed key algorithm. The STS will generate some entropy of its own, and combine it with the client entropy using the computed key algorithm to generate the secret key. Alternatively, the client will present no entropy, and the STS will supply all of the entropy. Any entropy the STS generates is then returned to the client, who can recreate the secret key using its own entropy, the STS entropy, and the computed key algorithm.
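The entropy-combining step can be sketched in plain Java by implementing the P_SHA-1 function (the TLS pseudo-random function) used by WS-Trust's CK/PSHA1 computed-key algorithm. This mirrors the algorithm itself, not CXF's own classes; per the algorithm, the client entropy acts as the HMAC secret and the STS entropy as the seed:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the WS-Trust computed-key derivation: both parties run
// P_SHA1(clientEntropy, stsEntropy) to arrive at the same secret key.
public class ComputedKey {
    static byte[] pSha1(byte[] secret, byte[] seed, int length) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] result = new byte[length];
        byte[] a = seed;                      // A(0) = seed
        int offset = 0;
        while (offset < length) {
            a = mac.doFinal(a);               // A(i) = HMAC(secret, A(i-1))
            mac.update(a);
            byte[] chunk = mac.doFinal(seed); // HMAC(secret, A(i) + seed)
            int n = Math.min(chunk.length, length - offset);
            System.arraycopy(chunk, 0, result, offset, n);
            offset += n;
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        byte[] clientEntropy = new byte[16];  // nonce supplied by the client
        byte[] stsEntropy = new byte[16];     // nonce generated by the STS
        byte[] key = pSha1(clientEntropy, stsEntropy, 32); // derive a 256-bit key
        System.out.println(key.length);
    }
}
```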

    This secret key is then used for the SCT use-case to encrypt/sign some part of a message. The SecurityContextToken is placed in the security header of the message, and referred to in the KeyInfo element of the signed/encrypted structure. As noted earlier, the service provider must somehow obtain the secret key corresponding to the SecurityContextToken identifier. Perhaps the service provider shares a (secured) distributed cache with an STS instance. Or perhaps the service provider sends the SCT to an STS instance to "validate" it, and receives a SAML token in response with the embedded (encrypted) secret key.

    5) Token caching in the TokenProvider

    Finally, we will cover token caching in a TokenProvider implementation. The SCTProvider is essentially useless without a cache, as otherwise there is no way for a third-party to know the secret key corresponding to a SecurityContextToken. Any TokenProvider implementation can cache a generated token in the STSTokenStore object supplied as part of the TokenProviderParameters. This object simply wraps the TokenStore interface in the CXF WS-Security runtime, which itself contains basic methods for adding/removing/querying CXF SecurityToken objects.

    The SCTProvider creates a SecurityToken with the ID of the SCT, the secret key associated with the SCT, and the client principal. If a "realm" is passed through, then this is recorded as a property of the SecurityToken (keyed via STSConstants.TOKEN_REALM). Finally, the STS ships with two STSTokenStore implementations: an in-memory implementation based on Ehcache, and an implementation that uses Hazelcast.

      Categories: Colm O hEigeartaigh

      This Week's Links (23 October 2011)

      Glen Mazza - Sun, 10/23/2011 - 13:00

      Apache CXF STS documentation - part II

      Colm O hEigeartaigh - Fri, 10/21/2011 - 18:22
      In part I of the series of posts on the new Apache CXF STS implementation, I talked about what a Security Token Service can do, as well as the STS provider framework in CXF since the 2.4.0 release. In this part, I will leave the STS implementation to one side for the moment, and instead focus on how a client interacts with the STS in CXF.

      A simple example of how a CXF client can obtain a security token from the STS is shown in the "basic" STS system test "IssueUnitTest". This test starts an instance of the new CXF STS and obtains a number of different security tokens, all done completely programmatically, i.e. with no spring configuration. The STS instance that is used for the test-cases is configured with a number of different endpoints that use different security bindings (defined in the wsdl of the STS). For the purposes of this test, the Transport binding is used:

      <wsp:Policy wsu:Id="Transport_policy">
        <sp:HttpsToken RequireClientCertificate="false"/>
        <sp:Basic128 />
        <sp:Lax />
        <sp:IncludeTimestamp />
        <sp:WssUsernameToken10 />

      In other words, this security policy requires that a one-way TLS connection must be used to communicate with the STS, and that authentication is performed via a Username Token in the SOAP header.

      The object that communicates with an STS in CXF is the STSClient. Typically, the user constructs an STSClient instance (normally via Spring), sets certain properties on it such as the WSDL location of the STS, which service/port to use, various crypto properties, etc., and then stores this object on the message context using the SecurityConstants tag "ws-security.sts.client". This object is then controlled by the IssuedTokenInterceptorProvider in the ws-security runtime in CXF. This interceptor provider is triggered by the "IssuedToken" policy assertion, which is typically in the WSDL of the service provider. This policy assertion informs the client that it must obtain a particular security token from an STS and include it in the service request. The IssuedTokenInterceptorProvider takes care of using the STSClient to get a security token from the STS, and handles how long the security token should be cached, etc.

      An example of a simple IssuedToken policy that might appear in the WSDL of a service provider is as follows:

      <sp:IssuedToken sp:IncludeToken=".../AlwaysToRecipient">

      This policy states that the client should include a SAML 2.0 Assertion of subject confirmation method "Bearer" in the request. The client must know how to communicate with an STS to obtain such a token. This is done by providing the STSClient object with the appropriate information.

      We will come back to the IssuedTokenInterceptorProvider at a later date. The IssueUnitTest referred to above uses the STSClient programmatically to obtain a security token. Let's look at the "requestSecurityToken" method called by the tests. An STSClient is instantiated via the CXF bus, and the WSDL location of the STS, plus the service and port names, are configured:

      STSClient stsClient = new STSClient(bus);

      A map is then populated with various properties and set on the STSClient. It is keyed by various SecurityConstants tags. A username is supplied for use as the "user" in the UsernameToken, and a CallbackHandler class is supplied to get the password to use in the UsernameToken. Compliance with the Basic Security Profile 1.1 is turned off; this prevents CXF from throwing an exception when receiving a non-spec-compliant response from a non-CXF STS:

      Map<String, Object> properties = new HashMap<String, Object>();
      properties.put(SecurityConstants.USERNAME, "alice");
      properties.put(SecurityConstants.IS_BSP_COMPLIANT, "false");
      If the KeyType is a "PublicKey", then an X.509 Certificate is presented to the STS in the request to embed in the generated SAML Assertion. The X.509 Certificate is obtained from the keystore defined in "", with the alias "myclientkey". Finally, the "useCertificateForConfirmationKeyInfo" property of the STSClient means that the entire certificate is to be included in the request, instead of a KeyValue (which is the default):

      if (PUBLIC_KEY_KEYTYPE.equals(keyType)) {
          properties.put(SecurityConstants.STS_TOKEN_USERNAME, "myclientkey");
          properties.put(SecurityConstants.STS_TOKEN_PROPERTIES, "");
      }

      Finally, the token type is set on the STSClient (the type of token that is being requested), as well as the KeyType (specific to a SAML Assertion), and a security token is requested, passing the endpoint address which is sent to the STS in the "AppliesTo" element:

              return stsClient.requestSecurityToken(endpointAddress);

      The returned SecurityToken object contains the received token as a DOM element, the ID of the received token, any reference elements that were returned (which show how to reference the token), any secret associated with the token, and the lifetime of the token.
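
      Pulling the fragments above together, the programmatic flow looks roughly like the following sketch. This is illustrative only: the WSDL location, service/port names, and the callback handler class are placeholder values, not the actual values used in the test.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.Bus;
import org.apache.cxf.BusFactory;
import org.apache.cxf.ws.security.SecurityConstants;
import org.apache.cxf.ws.security.tokenstore.SecurityToken;
import org.apache.cxf.ws.security.trust.STSClient;

public class StsClientSketch {

    public static SecurityToken requestToken(String endpointAddress) throws Exception {
        Bus bus = BusFactory.getDefaultBus();
        STSClient stsClient = new STSClient(bus);

        // Placeholder WSDL location and service/port names
        stsClient.setWsdlLocation("https://localhost:8081/SecurityTokenService/Transport?wsdl");
        stsClient.setServiceName("{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService");
        stsClient.setEndpointName("{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}Transport_Port");

        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(SecurityConstants.USERNAME, "alice");
        properties.put(SecurityConstants.CALLBACK_HANDLER, "demo.CommonCallbackHandler"); // hypothetical class
        properties.put(SecurityConstants.IS_BSP_COMPLIANT, "false");
        stsClient.setProperties(properties);

        // The type of token being requested, and the KeyType (specific to SAML)
        stsClient.setTokenType("http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0");
        stsClient.setKeyType("http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer");

        // endpointAddress ends up in the "AppliesTo" element of the request
        return stsClient.requestSecurityToken(endpointAddress);
    }
}
```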
      Categories: Colm O hEigeartaigh

      Apache Tomcat Maven Plugin Features

      Olivier Lamy - Fri, 10/21/2011 - 09:31
      Recently I posted some information regarding the move of the Tomcat Maven plugin from Codehaus to the ASF and about the support of Tomcat 7 in the trunk code.

      So now in this post, I'd like to talk about my favourite features.

      Run goal in multi-module projects with Maven 3
      Usually with Apache Maven, your application code is split into several modules to respect the Separation of Concerns principle.
      Something like:
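
      As an illustration (the module names here are hypothetical), such a build typically looks like:

```
parent/pom.xml            (packaging: pom, lists the modules below)
├── myapp-domain/         (jar: domain classes)
├── myapp-services/       (jar: business services)
└── myapp-webapp/         (war: the web application)
```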


      So to test your webapp module you have to install all the other modules first, which is time- and I/O-consuming.
      Now with Apache Maven 3 and the Tomcat Maven Plugin (the Codehaus version 1.1 or the 2.0-SNAPSHOT from Apache), you can simply use the run goal from the root directory and the plugin will detect the build output of the various modules and automatically include it in the webapp class loader of the embedded Tomcat.

      Build a standalone executable war/jar
      You can now build a standalone jar which contains the classes Apache Tomcat needs plus your wars.
      See documentation.
      This will produce a jar similar to the Jenkins distribution's.
      At the end you will be able to run the produced jar with a simple:

      java -jar yourjar

      And that will start Apache Tomcat without needing any installation!

      NOTE: it's a very recent feature based on my own needs :-)
      So issues, feedback, and RFEs are really welcome!

      Have Fun!

      Apache Maven, Maven, Apache Tomcat, Tomcat, Apache are trademarks of The Apache Software Foundation.

      Categories: Olivier Lamy

      How to enable an interceptor without configuration in CXF

      Oliver Wulff - Thu, 10/20/2011 - 21:31
      Imagine you develop an interceptor within your company which is for general use and not for a specific business project. There are different ways to configure an interceptor as described in more detail here. In summary:
      • by programming
      • configure the interceptor on the bus, client or endpoint level
      • configure a feature which registers interceptor(s)
      • configure a policy which registers interceptors
      There might be use cases where you want to enable an interceptor just because it is available on the classpath. This means you develop an interceptor, build a JAR, and publish it to the Maven repository. Another project then only has to add a dependency on your JAR in its POM to enable the interceptor automatically. Done.

      This blog will explain how this works and an example can be downloaded here.

      CXF 2.4 introduced a new feature called bus extensions, which are loaded automatically during the initialization phase of a bus. Thus, your interceptor registration code must get notified whenever a new client or server is created so that it can add the interceptor. CXF provides three lifecycle listeners:
      This example shows the usage of the ServerLifeCycleListener and ClientLifeCycleListener. A lifecycle listener is registered with the corresponding lifecycle manager, which can be resolved from the CXF bus as illustrated in the following code snippet:

      public class DemoListener implements ClientLifeCycleListener, ServerLifeCycleListener {

          public DemoListener(Bus bus) {
              ServerLifeCycleManager slm = bus.getExtension(ServerLifeCycleManager.class);
              slm.registerListener(this);

      This class implements the lifecycle listener methods and registers itself with the lifecycle managers. The interceptor can then be registered as illustrated in the following code snippet:

      public void startServer(Server server) {
          System.out.println("--------- startServer");
          server.getEndpoint().getInInterceptors().add(new DemoInterceptor());
          server.getEndpoint().getOutInterceptors().add(new DemoInterceptor());
      }

      public void clientCreated(Client client) {
          System.out.println("--------- clientCreated");
          client.getOutInterceptors().add(new DemoInterceptor());
          client.getInInterceptors().add(new DemoInterceptor());
      }
      So far so good, but there must be a way to trigger the instantiation of the DemoListener class. One option is to use Spring and make this class a Spring-managed component; another is to register the bean as a CXF bus extension.
      I'll explain how to register this bean as a bus extension so that it gets instantiated when the bus is created, which means the lifecycle listeners are registered during the startup phase too.
      Your Jar file must contain a file called bus-extensions.xml at the following location in the Jar:

      The content of this text file is very simple. You just list the classname:

      The last parameter is false, which tells the bus whether the instantiation of the bean should be deferred or not. If deferred is true (the default), the bus doesn't create the beans during startup. However, if something specifically asks for one of those beans, it will be created and loaded.

      If the class provides a constructor with a Bus argument, CXF will pass the bus in during initialization.

      Where else is this feature used? In CXF itself.

      This was a very simple example of how to instantiate beans during bus startup without requiring Spring. A lot of the advanced and WS-* features in CXF 2.3 and earlier required that you explicitly import resources like:

      <import resource="classpath:META-INF/cxf/cxf.xml" />
      <import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" />
      <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />
      <import resource="classpath:META-INF/cxf/cxf-extension-jms.xml" />
      These imports pulled in Spring configuration files containing bean definitions that usually implemented lifecycle listeners or policy support. This is no longer required in CXF 2.4 and above. You only have to import the following resource now:

      <import resource="classpath:META-INF/cxf/cxf.xml" />

      With CXF 2.4, these features are not plugged into the bus using a Spring mechanism but via the bus extension mechanism illustrated in the simple example above.

      If you want to see more advanced usage of bus extensions, have a look at the sources of some of these CXF modules:

      An interesting interceptor example can be found in the Talend Service Factory.
      Categories: Oliver Wulff

      Using Camel to do light weight messaging over any protocol

      Christian Schneider - Wed, 10/19/2011 - 18:15

      Blog post added by Christian Schneider

      At least for some time the whole world seemed to talk only about ESBs and web services. These technologies have their place in integration but they are quite complex, and starting with them means you have to invest a lot of time and/or money. Recently, around the release of Java EE 6, the idea of simplicity came back to the Enterprise Java world. In this mindset I will look into some ways to do really lightweight messaging with Apache Camel.

      So basically we have the problem of moving some data from Application A to Application B. Let's assume we have Java objects representing this data and want to transport it in a simple yet open format. One obvious choice is to use XML and do the transformation using JAXB. In the following examples I will use code first, but it will work the same if the JAXB-annotated classes were created from an XSD using a code generator.

      The Payload

      As an example for our payload I will use the following Java bean class:

      @XmlRootElement
      @XmlType
      public class Customer {
          String name;
          int age;

          public Customer() {
          }

          public Customer(String name, int age) {
              this.name = name;
              this.age = age;
          }

          public String getName() { return name; }
          public void setName(String name) { this.name = name; }
          public int getAge() { return age; }
          public void setAge(int age) { this.age = age; }
      }

      Sending

      So how can we send this class as XML to a JMS queue?

      Customer customer = new Customer("Christian Schneider", 38);
      context.createProducerTemplate().sendBody("jms://test", customer);

      This almost looks a bit too easy. So what is Camel doing behind the scenes? The ProducerTemplate allows us to send any Java object to any Camel endpoint; the endpoint is set up on the fly from the endpoint URI. In our case it is a JMS endpoint which sends the data to the queue "test". As the object needs to be serialized before it can be transferred over JMS, Camel uses a TypeConverter to do this. In our case the camel-jaxb component is on the classpath and the Customer class has JAXB annotations, so a TypeConverter kicks in that serializes the object using JAXB.

      So the message on the queue will look like this:

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <customer>
          <age>38</age>
          <name>Christian Schneider</name>
      </customer>

      Receiving

      Now that we sent the message how can we receive it on the other side? For this we can use a little route and a simple java class:

      from("jms://test").to("log:testlog").bean(new CustomerReceiver());

      public class CustomerReceiver {
          public void receive(Customer customer) {
              System.out.println("Received a customer named " + customer.getName());
          }
      }

      The route listens on the JMS queue and is triggered for each message received. The to("log:testlog") step is not strictly necessary but shows that we indeed transport XML. In the end the message is passed to the CustomerReceiver via the Camel bean component. The bean component uses a lot of convention over configuration to determine how to process the message. What happens in our case is the following:

      • We only have one public method, so Camel knows it has to use the receive method
      • The receive method has only one parameter, so Camel will bind the body of the message to this parameter
      • As we get a String or byte[] from JMS, and the Customer class again has JAXB annotations, the data is automatically deserialized into a Customer object
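
      These conventions can be illustrated with a small JDK-only sketch (GreetingReceiver and the dispatch helper are hypothetical stand-ins, not Camel API): pick the bean's single public method and invoke it with the message body, which is roughly what the bean component does before type conversion kicks in.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Hypothetical receiver bean, analogous to the CustomerReceiver above.
class GreetingReceiver {
    public String receive(String body) {
        return "Received: " + body;
    }
}

public class ConventionDispatch {

    // Simplified model of camel-bean's method selection: exactly one
    // public method on the bean, which gets the message body as argument.
    static Object dispatch(Object bean, Object body) throws Exception {
        Method target = null;
        for (Method m : bean.getClass().getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                if (target != null) {
                    throw new IllegalStateException("More than one public method - ambiguous");
                }
                target = m;
            }
        }
        return target.invoke(bean, body);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dispatch(new GreetingReceiver(), "Christian"));  // prints "Received: Christian"
    }
}
```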
      Using another protocol

      By simply changing "jms://test" to e.g. "file://target/test" we can switch the transport to files. So when calling sendBody, a file with the XML is placed in the directory target/test. The route on the receiver side will detect that file and send its content into the route. The result is the same as for JMS: we transport our data as nice XML and can work with it as a Java object on both sides.

      You can try the same for http using "http://localhost:8080/test" for the sender and "jetty://localhost:8080/test" for the receiver.

      What about request / reply?

      What we saw was nice asynchronous one-way messaging. Camel also allows a similar approach for request/reply.

      So for sending a Customer object to the other side and receiving a changed Customer object synchronously you can use:

      Customer changedCustomer = (Customer) producer.requestBody("jms://test", customer);

      This sends the customer to the queue "test" as XML and creates a temporary queue where it waits for a reply. When the reply is received, the data is deserialized and returned as the changedCustomer object.

      On the receiver side the route can be kept as is, and the CustomerReceiver.receive method should simply look like:

      Customer receive(Customer customer);

      It is really just as easy as this.

      So what is good and what is bad about this approach?

      Pros:

      • very easy to use
      • very few lines of code
      • the CustomerReceiver does not contain any Camel specifics

      Cons:

      • The sending side with the ProducerTemplate is very Camel-technical. You typically do not want this in your business code
      How can I keep my business code clean from Camel stuff?

      A simple way to keep Camel classes out of your business code on the sender side is to use an interface that contains just business logic:

      public interface CustomerSender {
          public void send(Customer customer);
      }

      And the implementation for sending using Camel:

      public class CustomerSenderImpl implements CustomerSender {
          private ProducerTemplate producer;
          private String endpointUri;

          public CustomerSenderImpl(ProducerTemplate producer, String endpointUri) {
              this.producer = producer;
              this.endpointUri = endpointUri;
          }

          @Override
          public void send(Customer customer) {
              producer.sendBody(endpointUri, customer);
          }
      }

      So you can inject the CustomerSenderImpl into your business code, which only needs to know about the CustomerSender interface. This way your business code is nicely separated from Camel.

      Can I have this even simpler?

      Camel has some annotations which allow you to do this with even less code. You still need the CustomerSender interface but you can get rid of the route and the CustomerSenderImpl.

      On the sender side you use:

      @Produce(uri = "jms://test")
      CustomerSender customerSender;
      ...
      customerSender.send(customer);

      Camel will inject a dynamic proxy for customerSender which sends the customer to the queue. The drawback is that it currently will always send a BeanInvocation. So while this works for remoting, it is not the nice XML document we want to have on the route. Besides that, the annotations only work in Spring beans and in the Camel test framework.
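
      What Camel does here can be pictured with a plain java.lang.reflect.Proxy sketch. The Sender interface, the createSender helper, and the recording list are hypothetical stand-ins, not Camel classes: the proxy intercepts the interface call and forwards the argument to an "endpoint".

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Business-facing interface, like CustomerSender in the text.
interface Sender {
    void send(String payload);
}

public class ProxySketch {

    // Stand-in for the target endpoint: just records what was "sent".
    static final List<String> queue = new ArrayList<String>();

    static Sender createSender(final String endpointUri) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) {
                // A real Camel proxy wraps the call in a BeanInvocation and
                // sends it to endpointUri; here we only record the payload.
                queue.add(endpointUri + ":" + args[0]);
                return null;
            }
        };
        return (Sender) Proxy.newProxyInstance(
                Sender.class.getClassLoader(), new Class<?>[] {Sender.class}, handler);
    }

    public static void main(String[] args) {
        createSender("jms://test").send("customer-42");
        System.out.println(queue);  // prints "[jms://test:customer-42]"
    }
}
```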

      On the receiver side you only need the CustomerReceiver enhanced with the @Consume annotation.

      public class CustomerReceiver {
          @Consume(uri = "jms://test")
          public void receive(Customer customer) {
              System.out.println("Received a customer named " + customer.getName());
          }
      }

      As before, the receiver works even a little more nicely than the sender. It can receive a BeanInvocation but will also work with the XML from our first example.

      In general I am not so sure about the annotation approach, as this way your business code still contains Camel-specific stuff like the annotation and the endpoint URI.

      Some TODO for me

      I plan to improve the annotation-based POJO messaging in Camel a bit by supporting sending plain objects instead of BeanInvocations on the client side. With this little enhancement you will be able to send and receive POJOs using just annotations and still have nice XML on the wire.

      Categories: Christian Schneider

      Apache CXF STS documentation - part I

      Colm O hEigeartaigh - Wed, 10/19/2011 - 18:00
      The forthcoming Apache CXF 2.5 release will have an STS (Security Token Service) implementation which has been donated by Talend. This is the first in a series of blog posts where I will be going into the STS implementation in detail. In this post I will be explaining what an STS is and talking about the STS provider framework in CXF.

      1) What is a Security Token Service?

      An informal description of a Security Token Service is that it is a web service that offers some or all of the following services (amongst others):
      • It can issue a Security Token of some sort based on presented or configured credentials.
      • It can say whether a given Security Token is valid or not.
      • It can renew (extend the validity of) a given Security Token.
      • It can cancel (remove the validity of) a given Security Token.
      • It can transform a given Security Token into a Security Token of a different sort.
      Offloading this functionality to another service greatly simplifies client and service provider functionality, as they can simply call the STS appropriately rather than have to figure out the security requirements themselves. For example, the WSDL of a service provider might state that a particular type of security token is required to access the service. A client of the service can ask an STS for a Security Token of that particular type, which is then sent to the service provider. The service provider could choose to validate the received token locally, or dispatch the token to an STS for validation. These are the two most common use-cases of an STS.

      A client can communicate with the STS via a protocol defined in the WS-Trust specification. The SOAP Body of the request contains a "RequestSecurityToken" element that looks like:

      <wst:RequestSecurityToken Context="..." xmlns:wst="...">

      The Apache CXF STS implementation supports a wide range of parameters that are passed in the RequestSecurityToken element. The SOAP Body of the response from the STS will contain a "RequestSecurityTokenResponse(Collection)" element, e.g.:

      <wst:RequestSecurityTokenResponseCollection xmlns:wst="...">

      1.1 A sample request/response for issuing a Security Token

      A sample client request is given here, where the client wants the STS to issue a SAML 2.0 token for the "" service:

      <wst:RequestSecurityToken Context="..." xmlns:wst="...">
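
      For illustration, a fuller sketch of such an Issue request, using the standard WS-Trust 1.3 namespace and URIs (the AppliesTo address is a placeholder, not a value from the sample):

```xml
<wst:RequestSecurityToken Context="..."
    xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
  <wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
  <wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
  <wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</wst:KeyType>
  <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
    <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <wsa:Address><!-- address of the service the token is for --></wsa:Address>
    </wsa:EndpointReference>
  </wsp:AppliesTo>
</wst:RequestSecurityToken>
```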

      The STS responds with:

      <wst:RequestSecurityTokenResponseCollection xmlns:wst="...">
                   <saml2:Assertion xmlns:saml2="..." ... />

      2) The STS provider framework in Apache CXF

      The first support for an STS in Apache CXF appeared in the 2.4.0 release with the addition of an STS provider framework in the WS-Security module. This is essentially an API that can be used to create your own STS implementation. As the STS implementation shipped in CXF 2.5 is based on this provider framework, it makes sense to examine it in more detail.

      The SEI (Service Endpoint Interface) is available here. It contains the following methods that are relevant to the STS features discussed above:
      • RequestSecurityTokenResponseCollectionType issue(RequestSecurityTokenType request) - to issue a security token
      • RequestSecurityTokenResponseType issueSingle(RequestSecurityTokenType request) - to issue a security token that is not contained in a "Collection" wrapper (for legacy applications).
      • RequestSecurityTokenResponseType cancel(RequestSecurityTokenType request) - to cancel a security token
      • RequestSecurityTokenResponseType validate(RequestSecurityTokenType request) - to validate a security token
      • RequestSecurityTokenResponseType renew(RequestSecurityTokenType request) - to renew a security token
      The SEI implementation handles each request by delegating it to a particular operation, which is just an interface that must be implemented by the provider framework implementation. Finally, a JAX-WS provider is available, which dispatches a request to the appropriate operation.

      Significant updates to the STS provider framework after the CXF 2.4.0 release include support for SOAP 1.2, a major bug fix to support operations other than issue, better exception propagation, and support for the WS-Trust 1.4 schema. These features are all available from the Apache CXF 2.4.3 release onwards.
      Categories: Colm O hEigeartaigh

      Camel presentation from the SOA and BPM days in Düsseldorf

      Christian Schneider - Wed, 10/19/2011 - 16:45

      Blog post added by Christian Schneider

      Last week I was at the SOA / BPM days in Düsseldorf and talked about Camel Architecture and showed some practical examples. Some of the highlights were:

      • Getting started with Apache Camel. Download, First project, run in Eclipse
      • Make your project fit for OSGi
      • Deployment on Karaf and showing the new Camel commands for the Karaf shell
      • How to test and debug Camel Routes?
      • Using Camel to do very light weight XML Messaging over several transports like file and jms (will describe this in a separate blog post)

      You can find my (partly) German slides here: workshop_messagebasierte_webservices_camel.pdf

      The examples are on GitHub:

      Categories: Christian Schneider

      Adding X.509 security headers to Metro SOAP calls

      Glen Mazza - Tue, 10/18/2011 - 13:00

      This tutorial shows how to modify the Metro version of the WSDL-first DoubleIt Example to include WS-Security (WSS) with X.509 public key certificates. The source code for this tutorial is located here. Of course, don't use the included keys or sample passwords in production; also note this tutorial may have errors, so make sure all work is carefully tested before production deployment.

      1. Create key pairs for the client and the web service provider. Follow the OpenSSL and Metro tutorial for this step. Note this will generate self-signed keys, sufficient for tutorial purposes, but for production you'll most probably want third-party CA-signed keys instead. Place the client and server keys in the root directory of the DoubleIt project.

      2. Modify the WSDL to require encryption and signatures from both directions. To do this, we will create a temporary project in the NetBeans IDE and use it to modify the DoubleIt.wsdl file. Follow Steps #2-#3 in the Metro/UsernameToken tutorial for this process, except for Substep #3 under Step #3. Here, under the "Edit Web Service Attributes" section, select "Mutual Certificates Security", and for the Configure options, check the Metro guide to see what best fits your needs. For the purposes of this tutorial, I chose Basic 128 bit encryption and encrypted signature as shown in the illustration below.

        For the key information, unselect "Use Development Defaults", select the Keystore button and supply the JKS service keystore file name, store password, key alias and key password of the service key you created in Step #1 above. The Alias Selector class can be left blank. Next, press the truststore button and provide the truststore and truststore password. The Alias field can be left blank because we're accepting any client-side certificates in the truststore, as well as the Certificate Selector field, unused in this tutorial. If you followed the OpenSSL tutorial in Step #1 in creating your keys, the truststore will be the same as the keystore (servicestore.jks).

        The modified WSDL generated by NetBeans is shown below. The bolded elements show the keystore information you just entered--note this information can be manually edited without needing to use the above NetBeans wizard.

        <?xml version="1.0" encoding="UTF-8"?> <wsdl:definitions name="DoubleIt" xmlns:xsd="" xmlns:wsdl="" xmlns:soap="" xmlns:di="" xmlns:tns="" targetNamespace="" xmlns:wsp="" xmlns:wsu="" xmlns:fi="" xmlns:tcp="" xmlns:wsam="" xmlns:sp="" xmlns:sc="" xmlns:wspp=""> <wsdl:types> <xsd:schema targetNamespace=""> <xsd:element name="DoubleIt"> <xsd:complexType> <xsd:sequence> <xsd:element name="numberToDouble" type="xsd:int"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="DoubleItResponse"> <xsd:complexType> <xsd:sequence> <xsd:element name="doubledNumber" type="xsd:int" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> </wsdl:types> <wsdl:message name="DoubleItRequest"> <wsdl:part element="di:DoubleIt" name="parameters" /> </wsdl:message> <wsdl:message name="DoubleItResponse"> <wsdl:part element="di:DoubleItResponse" name="parameters" /> </wsdl:message> <wsdl:portType name="DoubleItPortType"> <wsdl:operation name="DoubleIt"> <wsdl:input message="tns:DoubleItRequest" /> <wsdl:output message="tns:DoubleItResponse" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="DoubleItBinding" type="tns:DoubleItPortType"> <wsp:PolicyReference URI="#DoubleItBindingPolicy"/> <soap:binding style="document" transport="" /> <wsdl:operation name="DoubleIt"> <soap:operation soapAction=""/> <wsdl:input><soap:body use="literal"/> <wsp:PolicyReference URI="#DoubleItBinding_DoubleIt_Input_Policy"/> </wsdl:input> <wsdl:output><soap:body use="literal"/> <wsp:PolicyReference URI="#DoubleItBinding_DoubleIt_Output_Policy"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="DoubleItService"> <wsdl:port name="DoubleItPort" binding="tns:DoubleItBinding"> <soap:address location="http://localhost:8080/doubleit/services/doubleit"/> </wsdl:port> </wsdl:service> <wsp:Policy wsu:Id="DoubleItBindingPolicy"> <wsp:ExactlyOne> <wsp:All> <wsam:Addressing wsp:Optional="false"/> <sp:AsymmetricBinding> <wsp:Policy> <sp:InitiatorToken> <wsp:Policy> 
<sp:X509Token sp:IncludeToken=""> <wsp:Policy> <sp:WssX509V3Token10/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:InitiatorToken> <sp:RecipientToken> <wsp:Policy> <sp:X509Token sp:IncludeToken=""> <wsp:Policy> <sp:WssX509V3Token10/> <sp:RequireIssuerSerialReference/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:RecipientToken> <sp:Layout> <wsp:Policy> <sp:Strict/> </wsp:Policy> </sp:Layout> <sp:IncludeTimestamp/> <sp:OnlySignEntireHeadersAndBody/> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128/> </wsp:Policy> </sp:AlgorithmSuite> <sp:EncryptSignature/> </wsp:Policy> </sp:AsymmetricBinding> <sp:Wss10> <wsp:Policy> <sp:MustSupportRefIssuerSerial/> </wsp:Policy> </sp:Wss10> <sc:KeyStore wspp:visibility="private" location="/home/gmazza/dataExt3/talendwork/DoubleItX509Metro/servicestore.jks" type="JKS" storepass="sspass" alias="myservicekey" keypass="skpass"/> <sc:TrustStore wspp:visibility="private" storepass="sspass" type="JKS" location="/home/gmazza/dataExt3/talendwork/DoubleItX509Metro/servicestore.jks"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <wsp:Policy wsu:Id="DoubleItBinding_DoubleIt_Input_Policy"> <wsp:ExactlyOne> <wsp:All> <sp:EncryptedParts> <sp:Body/> </sp:EncryptedParts> <sp:SignedParts> <sp:Body/> <sp:Header Name="To" Namespace=""/> <sp:Header Name="From" Namespace=""/> <sp:Header Name="FaultTo" Namespace=""/> <sp:Header Name="ReplyTo" Namespace=""/> <sp:Header Name="MessageID" Namespace=""/> <sp:Header Name="RelatesTo" Namespace=""/> <sp:Header Name="Action" Namespace=""/> <sp:Header Name="AckRequested" Namespace=""/> <sp:Header Name="SequenceAcknowledgement" Namespace=""/> <sp:Header Name="Sequence" Namespace=""/> <sp:Header Name="CreateSequence" Namespace=""/> </sp:SignedParts> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <wsp:Policy wsu:Id="DoubleItBinding_DoubleIt_Output_Policy"> <wsp:ExactlyOne> <wsp:All> <sp:EncryptedParts> <sp:Body/> </sp:EncryptedParts> <sp:SignedParts> <sp:Body/> <sp:Header Name="To" Namespace=""/> <sp:Header 
Name="From" Namespace=""/> <sp:Header Name="FaultTo" Namespace=""/> <sp:Header Name="ReplyTo" Namespace=""/> <sp:Header Name="MessageID" Namespace=""/> <sp:Header Name="RelatesTo" Namespace=""/> <sp:Header Name="Action" Namespace=""/> <sp:Header Name="AckRequested" Namespace=""/> <sp:Header Name="SequenceAcknowledgement" Namespace=""/> <sp:Header Name="Sequence" Namespace=""/> <sp:Header Name="CreateSequence" Namespace=""/> </sp:SignedParts> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> </wsdl:definitions>

        As you can see, the WSDL has been modified with new Policy elements added at the end, as well as references to those elements within the wsdl:binding section. The elements are defined in OASIS' WS-SecurityPolicy specification. The wsdl:binding section has two types of policy references: a single wsdl:binding-level policy which specifies the need for encryption and/or signatures, and two wsdl:operation-level policies which specify the elements of the SOAP request and response that need to be encrypted and/or signed.

      3. Configure the client certificate information. Follow Step #5 of the UsernameToken tutorial for this process, except replace Substep #3 with the below:

        3. Within the Project View of the project, go to Web Service References -> DoubleIt, and right-click on the latter. Choose "Edit Web Service Attributes", and go to the Quality Of Service Tab | Security section. To use the keys you created in Step #2 above, select the Keystore button and supply the JKS client keystore, client store password, alias for the client key and client key password. Then select Truststore and supply the same JKS keystore and store password, but the service key for the alias, as that is the only key the SOAP client is to trust. (The certificate selector can be again left blank.) Click OK to close the wizard.

        The resulting client configuration files will be located in the Source Packages -> META-INF folder of the Project View. They are shown below:


        <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="" xmlns:wsdl="" xmlns:xsd="" xmlns:soap="" name="mainclientconfig" > <import location="DoubleIt.xml" namespace=""/> </definitions>

        DoubleIt.xml - This is really just the service WSDL, except the Policy section at the bottom has changed from providing service-side requirements to client-side configuration information. Bolded below is the certificate information for the client you had entered earlier.

        <?xml version="1.0" encoding="UTF-8"?> <wsdl:definitions name="DoubleIt" xmlns:xsd="" xmlns:wsdl="" xmlns:soap="" xmlns:di="" xmlns:tns="" targetNamespace="" xmlns:wsp="" xmlns:wsu="" xmlns:fi="" xmlns:tcp="" xmlns:wsam="" xmlns:sp="" xmlns:sc="" xmlns:wspp="" xmlns:sc1=""> <wsdl:types> <xsd:schema targetNamespace=""> <xsd:element name="DoubleIt"> <xsd:complexType> <xsd:sequence> <xsd:element name="numberToDouble" type="xsd:int"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="DoubleItResponse"> <xsd:complexType> <xsd:sequence> <xsd:element name="doubledNumber" type="xsd:int" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> </wsdl:types> <wsdl:message name="DoubleItRequest"> <wsdl:part element="di:DoubleIt" name="parameters" /> </wsdl:message> <wsdl:message name="DoubleItResponse"> <wsdl:part element="di:DoubleItResponse" name="parameters" /> </wsdl:message> <wsdl:portType name="DoubleItPortType"> <wsdl:operation name="DoubleIt"> <wsdl:input message="tns:DoubleItRequest" /> <wsdl:output message="tns:DoubleItResponse" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="DoubleItBinding" type="tns:DoubleItPortType"> <wsp:PolicyReference URI="#DoubleItBindingPolicy"/> <soap:binding style="document" transport="" /> <wsdl:operation name="DoubleIt"> <soap:operation soapAction=""/> <wsdl:input><soap:body use="literal"/> </wsdl:input> <wsdl:output><soap:body use="literal"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="DoubleItService"> <wsdl:port name="DoubleItPort" binding="tns:DoubleItBinding"> <soap:address location="http://localhost:8080/doubleit/services/doubleit"/> </wsdl:port> </wsdl:service> <wsp:Policy wsu:Id="DoubleItBindingPolicy"> <wsp:ExactlyOne> <wsp:All> <sc1:KeyStore wspp:visibility="private" alias="myclientkey" keypass="ckpass" storepass="cspass" type="JKS" location="/home/gmazza/dataExt3/talendwork/DoubleItX509Metro/clientstore.jks"/> <sc1:TrustStore wspp:visibility="private" 
peeralias="myservicekey" storepass="cspass" type="JKS" location="/home/gmazza/dataExt3/talendwork/DoubleItX509Metro/clientstore.jks"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> </wsdl:definitions>
      4. Bring the NetBeans-generated configuration files back to your Maven project. Be sure that you copied over the service wsdl and client-side configuration files to your Maven project as explained at the end of Steps #3 and #5 of the Metro/UsernameToken tutorial.

      5. Test the client. After deploying the servlet as described in Step #8 of the WSDL-first tutorial, run the client as shown in Step #10 of that tutorial and make sure you see the doubled-number responses. You can view the SOAP requests and responses by activating the debug output as shown in the comments for the exec-maven-plugin in the client/pom.xml file. My results for the first client call:

        SOAP Request: ---[HTTP request - http://localhost:8080/doubleit/services/doubleit]--- Accept: text/xml, multipart/related Content-Type: text/xml; charset=utf-8 SOAPAction: "" User-Agent: Metro/2.1 (branches/2.1-6728; 2011-02-03T14:14:58+0000) JAXWS-RI/2.2.3 JAXWS/2.2 <?xml version='1.0' encoding='UTF-8'?> <S:Envelope xmlns:S="" xmlns:wsse="" xmlns:wsu="" xmlns:xs="" xmlns:ds="" xmlns:exc14n="" xmlns:xenc=""> <S:Header> <To xmlns="" wsu:Id="_5005">http://localhost:8080/doubleit/services/doubleit </To> <Action xmlns="" xmlns:S="" S:mustUnderstand="1" wsu:Id="_5004"> </Action> <ReplyTo xmlns="" wsu:Id="_5003"> <Address> </Address> </ReplyTo> <MessageID xmlns="" wsu:Id="_5002">uuid:6d3c7cbc-1ae6-4112-89b3-3b2ef9dc7fa1</MessageID> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="" xmlns:ns16="" wsu:Id="_3"> <wsu:Created>2011-10-18T01:55:35Z</wsu:Created> <wsu:Expires>2011-10-18T02:00:35Z</wsu:Expires> </wsu:Timestamp> <wsse:BinarySecurityToken xmlns:ns17="" xmlns:ns16="" ValueType="" EncodingType="" wsu:Id="uuid_bd95d424-e86d-4d5c-8fb1-7fcae868bf44">MIIDzjCCAzeg...truncated...khfwn5DgquTnQsQStP </wsse:BinarySecurityToken> <xenc:EncryptedKey xmlns:ns17="" xmlns:ns16="" Id="_5007"> <xenc:EncryptionMethod Algorithm="" /> <ds:KeyInfo xmlns:xsi="" xsi:type="KeyInfoType"> <wsse:SecurityTokenReference> <ds:X509Data> <ds:X509IssuerSerial> <ds:X509IssuerName>, CN=Server, O=Sample Service Key - NOT FOR PRODUCTION USE, L=Niagara Falls, ST=New York, C=US </ds:X509IssuerName> <ds:X509SerialNumber>9762824747457659410 </ds:X509SerialNumber> </ds:X509IssuerSerial> </ds:X509Data> </wsse:SecurityTokenReference> </ds:KeyInfo> <xenc:CipherData> <xenc:CipherValue>LywTT/TxQbuZf/a+VnZJ...truncated...vrXbdBAP+ZfM38HggKbwlEPrgw= </xenc:CipherValue> </xenc:CipherData> <xenc:ReferenceList> <xenc:DataReference URI="#_5008" /> <xenc:DataReference URI="#_5009" /> </xenc:ReferenceList> </xenc:EncryptedKey> <xenc:EncryptedData xmlns:ns17="" xmlns:ns16="" Id="_5009" Type=""> 
<xenc:EncryptionMethod Algorithm="" /> <xenc:CipherData> <xenc:CipherValue>vgZu17v9KRAodnd7O...truncated...z6jPvjctlxjOGHGn2iGI0w== </xenc:CipherValue> </xenc:CipherData> </xenc:EncryptedData> </wsse:Security> </S:Header> <S:Body wsu:Id="_5006"> <xenc:EncryptedData xmlns:ns17="" xmlns:ns16="" Id="_5008" Type=""> <xenc:EncryptionMethod Algorithm="" /> <xenc:CipherData> <xenc:CipherValue>jc+U3gkWZUX4W9heZ/og...truncated...p4qBPco1iOXXVTZunZjpm </xenc:CipherValue> </xenc:CipherData> </xenc:EncryptedData> </S:Body> </S:Envelope> SOAP Response: http://localhost:8080/doubleit/services/doubleit - 200]--- null: HTTP/1.1 200 OK Content-Type: text/xml;charset=utf-8 Date: Tue, 18 Oct 2011 01:55:36 GMT Server: Apache-Coyote/1.1 Transfer-Encoding: chunked <?xml version='1.0' encoding='UTF-8'?> <S:Envelope xmlns:S="" xmlns:wsse="" xmlns:wsu="" xmlns:xs="" xmlns:ds="" xmlns:exc14n="" xmlns:xenc=""> <S:Header> <To xmlns="" wsu:Id="_5005"> </To> <Action xmlns="" xmlns:S="" S:mustUnderstand="1" wsu:Id="_5003"> </Action> <MessageID xmlns="" wsu:Id="_5002">uuid:43bfe620-bda9-4400-ba3e-ad2511a0a7dc</MessageID> <RelatesTo xmlns="" wsu:Id="_5004">uuid:6d3c7cbc-1ae6-4112-89b3-3b2ef9dc7fa1</RelatesTo> <wsse:Security S:mustUnderstand="1"> <wsu:Timestamp xmlns:ns17="" xmlns:ns16="" wsu:Id="_3"> <wsu:Created>2011-10-18T01:55:36Z</wsu:Created> <wsu:Expires>2011-10-18T02:00:36Z</wsu:Expires> </wsu:Timestamp> <xenc:EncryptedKey xmlns:ns17="" xmlns:ns16="" Id="_5007"> <xenc:EncryptionMethod Algorithm="" /> <ds:KeyInfo xmlns:xsi="" xsi:type="KeyInfoType"> <wsse:SecurityTokenReference> <ds:X509Data> <ds:X509IssuerSerial> <ds:X509IssuerName>, CN=Bob Client, O=Sample Client Key - NOT FOR PRODUCTION USE, L=Buffalo, ST=New York, C=US </ds:X509IssuerName> <ds:X509SerialNumber>16785336678514363577 </ds:X509SerialNumber> </ds:X509IssuerSerial> </ds:X509Data> </wsse:SecurityTokenReference> </ds:KeyInfo> <xenc:CipherData> <xenc:CipherValue>QHA5mJrSkmLVHDunj/e5YCFy...truncated...1NENqYs4xOHXKcRPF01gx9GU= 
</xenc:CipherValue> </xenc:CipherData> <xenc:ReferenceList> <xenc:DataReference URI="#_5008" /> <xenc:DataReference URI="#_5009" /> </xenc:ReferenceList> </xenc:EncryptedKey> <xenc:EncryptedData xmlns:ns17="" xmlns:ns16="" Id="_5009" Type=""> <xenc:EncryptionMethod Algorithm="" /> <xenc:CipherData> <xenc:CipherValue>IWSl4Hrxe0UsnKCQxnVlit...truncated...21oLUvP+aVa7L1PlstRt2U= </xenc:CipherValue> </xenc:CipherData> </xenc:EncryptedData> </wsse:Security> </S:Header> <S:Body wsu:Id="_5006"> <xenc:EncryptedData xmlns:ns17="" xmlns:ns16="" Id="_5008" Type=""> <xenc:EncryptionMethod Algorithm="" /> <xenc:CipherData> <xenc:CipherValue>wbM0ndRU+Q/9EqW9Af2bUXpJaON1...truncated...ne4DBocqTuY1XCxEy4Vq5Q== </xenc:CipherValue> </xenc:CipherData> </xenc:EncryptedData> </S:Body> </S:Envelope> -------------------- The number 10 doubled is 20
      1. As can be seen from the client- and service-side keystore configuration listed above, Metro uses an absolute file path when specifying keystores, making samples nonportable from machine to machine. To work around this issue, the source code download has been modified a bit to take advantage of Maven resource filtering, which rewrites the absolute paths at build time depending on where the sample has been extracted.
      2. For more information on WS-SecurityPolicy, see this WSO2 article as well as the OASIS WS-SecurityPolicy Examples guide.
      3. When running the SOAP client you'll probably see warning messages like those below, indicating that the client can see the service-side keystore configuration information but of course cannot use it: WARNING: WSP0075: Policy assertion "{}KeyStore" was evaluated as "UNSUPPORTED". Oct 17, 2011 9:55:34 PM [] selectAlternatives WARNING: WSP0075: Policy assertion "{}TrustStore" was evaluated as "UNSUPPORTED". Oct 17, 2011 9:55:34 PM [] selectAlternatives WARNING: WSP0019: Suboptimal policy alternative selected on the client side with fitness "PARTIALLY_SUPPORTED". These messages can be ignored for the purposes of this tutorial; they are just a consequence of the DoubleIt project's use of the same WSDL file for both the client and the web service provider--see the Metro/UsernameToken Notes for more information.

      Configure LDAP directory for CXF STS

      Oliver Wulff - Tue, 10/18/2011 - 07:15
      I explained in my previous blog how to set up the CXF STS where you manage your users and claims in a file. This blog explains the changes required to integrate the CXF STS with an LDAP directory.

      You can attach an LDAP directory for username/password validation, for retrieving claims data, or both.

      1. Username and password authentication

      WSS4J supports username/password authentication against a JAAS-based backend since version 1.6.3.

      The JDK provides a JAAS LoginModule for LDAP, which can be configured as illustrated in this sample JAAS configuration (jaas.config); the LDAP URL and search base below are placeholders that you must adjust for your directory:

      myldap {
          com.sun.security.auth.module.LdapLoginModule REQUIRED
          userProvider="ldap://ldap.mycompany.org/OU=Users,DC=mycompany,DC=org"
          authIdentity="{USERNAME}";
      };

      You can get more information about this LoginModule here.
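To make the moving parts concrete, here is a minimal, hypothetical sketch of the JAAS login that the validator performs under the hood when checking a UsernameToken against the "myldap" context: a CallbackHandler supplies the received username and password, and LoginContext.login() delegates to the configured LdapLoginModule. The class name and hard-coded credentials below are illustrative only.

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;

public class JaasLdapLoginSketch {

    // CallbackHandler that supplies the username/password taken from the UsernameToken
    public static CallbackHandler handlerFor(String user, char[] password) {
        return callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(user);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password);
                } else {
                    throw new UnsupportedCallbackException(cb);
                }
            }
        };
    }

    public static void main(String[] args) throws Exception {
        // Requires -Djava.security.auth.login.config=jaas.config and a reachable
        // LDAP server; shown here for illustration only.
        if (System.getProperty("java.security.auth.login.config") != null) {
            LoginContext lc = new LoginContext("myldap",
                    handlerFor("alice", "password".toCharArray()));
            lc.login();                       // authenticates against the LdapLoginModule
            System.out.println(lc.getSubject());
            lc.logout();
        }
    }
}
```

The JVM must be started with -Djava.security.auth.login.config pointing at the JAAS file for the lookup of "myldap" to succeed.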

      In this example, all the users are stored in the organization unit Users of the directory. The configuration filename can be freely chosen, e.g. jaas.config, and must be passed to the JVM as a system property. I recommend setting JVM-related configuration for Tomcat in the setenv.sh file in the tomcat/bin directory. This script is called by catalina.bat/sh implicitly and might look like this for UNIX:

      #!/bin/sh
      # adjust the path to point at your jaas.config
      JAVA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.config"
      export JAVA_OPTS

      Now the LDAP LoginModule is configured. Next, we have to configure the JAASUsernameTokenValidator for the STS endpoint:

      <!-- the bean id is chosen freely; the class is WSS4J's JAAS validator -->
      <bean id="jaasUTValidator"
            class="org.apache.ws.security.validate.JAASUsernameTokenValidator">
          <property name="contextName" value="myldap"/>
      </bean>

      <jaxws:endpoint id="transportSTSUT" ...>
          <jaxws:properties>
              <entry key="ws-security.ut.validator" value-ref="jaasUTValidator"/>
          </jaxws:properties>
      </jaxws:endpoint>
      The property contextName must match with the context name defined in the JAAS configuration file which is "myldap" in this example.
       2. Claims management

      When an STS client requests a claim, the ClaimsManager in the STS checks each registered ClaimsHandler to see which one can provide the data for the requested claim. The CXF STS provides a claims handler implementation that can add claims stored as user attributes in an LDAP directory. You can configure which claim URI maps to which LDAP user attribute. The implementation uses Spring LDAP (LdapTemplate).

      <util:list id="claimHandlerList">
        <ref bean="ldapClaimsHandler" />
      </util:list>

      <bean id="contextSource"
            class="org.springframework.ldap.core.support.LdapContextSource">
        <property name="url" value="ldap://" />
        <property name="userDn"
          value="CN=techUser,OU=Users,DC=mycompany,DC=org" />
        <property name="password" value="mypassword" />
      </bean>

      <bean id="ldapTemplate" class="org.springframework.ldap.core.LdapTemplate">
        <constructor-arg ref="contextSource" />
      </bean>

      <util:map id="claimsToLdapAttributeMapping">
        <entry key="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"
          value="givenName" />
        <entry key="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"
          value="sn" />
        <entry key="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
          value="mail" />
        <entry key="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country"
          value="c" />
      </util:map>

      <bean id="ldapClaimsHandler"
            class="org.apache.cxf.sts.claims.LdapClaimsHandler">
        <property name="ldapTemplate" ref="ldapTemplate" />
        <property name="claimsLdapAttributeMapping"
                  ref="claimsToLdapAttributeMapping" />
        <property name="userBaseDN"
            value="OU=Users,DC=mycompany,DC=org" />
      </bean>

      The claim IDs are configured according to chapter 7.5 of the Identity Metasystem Interoperability specification. You can add as many entries to the claimsToLdapAttributeMapping map as you want, and thus include any user attribute from your LDAP directory in the issued SAML token.
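As a sketch of the lookup the claims handler performs, the mapping above can be modeled as a simple map from claim URI to LDAP attribute name. The ClaimsMapping class below is illustrative, not CXF code; the claim URIs are the standard ones from the Identity Metasystem Interoperability specification.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Models the claim-URI-to-LDAP-attribute mapping configured in the
// claimsToLdapAttributeMapping bean above.
public class ClaimsMapping {

    static final Map<String, String> CLAIM_TO_ATTRIBUTE = new LinkedHashMap<>();
    static {
        String ns = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/";
        CLAIM_TO_ATTRIBUTE.put(ns + "givenname", "givenName");
        CLAIM_TO_ATTRIBUTE.put(ns + "surname", "sn");
        CLAIM_TO_ATTRIBUTE.put(ns + "emailaddress", "mail");
        CLAIM_TO_ATTRIBUTE.put(ns + "country", "c");
    }

    // Resolve a requested claim URI to the LDAP attribute whose value
    // ends up in the issued SAML token
    public static String ldapAttributeFor(String claimUri) {
        String attr = CLAIM_TO_ATTRIBUTE.get(claimUri);
        if (attr == null) {
            throw new IllegalArgumentException("No mapping for claim " + claimUri);
        }
        return attr;
    }

    public static void main(String[] args) {
        System.out.println(ldapAttributeFor(
            "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname")); // prints "sn"
    }
}
```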

      Categories: Oliver Wulff

      What's new in Karaf 2.2.4?

      Jean-Baptiste Onofré - Mon, 10/17/2011 - 10:45
      Apache Karaf 2.2.4 was released this weekend. You can take a look at the release notes. More than a bug-fix release, this version includes several new features and enhancements. Command aliases Previously, the osgi:* commands gathered different kinds of usage: osgi:start, osgi:stop, etc. are related to bundles, while osgi:shutdown is related to the container [...]
      Categories: Jean-Baptiste Onofré

      Web services links (16 October 2011)

      Glen Mazza - Mon, 10/17/2011 - 04:28

      Web Service related links of interest this week:

      Karaf related links:


      Configure and deploy Identity Provider (IdP) - Part II

      Oliver Wulff - Thu, 10/13/2011 - 14:30
      In my previous blog I talked about setting up an STS that supports username/password authentication and the issuance of a SAML 2.0 token containing additional claims information.
      We need this claims information to provide the application (called the Relying Party in the WS-Federation specification) with information such as application roles for role-based access control. Claims-based authorization goes one step further and provides other claims data of the authenticated entity (the SAML subject).


      This blog is about the Identity Provider (IDP) implementation that is referenced in the WS-Federation specification, so I'm giving a short introduction first. This blog series looks only at the passive requestor profile of WS-Federation.

      The following picture is used by Microsoft, which supports WS-Federation in its Windows Identity Foundation framework.

      (C) Microsoft

      The key principles of WS-Federation are:
      • Externalize authentication process from the application container
      • Provide the claims/attributes of the authenticated identity to the application for role based and fine grained authorization
      WS-Federation gives you the following benefits:
      • Applications can benefit from stronger security mechanisms without changes
      • Identities/users don't have to be provisioned in every security domain to propagate identities across security domains
      • B2B partners can be integrated without changing the application (includes programming and configuration)
      • Audit-Trail end-to-end

        Deploy Identity Provider (IdP)

        This sample IDP will support the following functionality:
        • Authentication based on username/password (Basic Authentication)
        • The authentication store is configured in the STS and can be file based (part of this example) or LDAP
        • Following federation parameters are supported:
          • wtrealm
          • wreply
          • wctx
          • wresult
        • Required claims can be configured per Relying Party (based on wtrealm value)
        Jürg Portmann and I have put together this Maven-based IDP, which can be downloaded here.

        1. Claims configuration per relying party

        The required claims per relying party are configured in WEB-INF/RPClaims.xml. The XML file has the following structure; the key of each map entry must match the wtrealm parameter in the redirect triggered by the relying party (the setup of the relying party will be covered in the next blog).

            <util:map id="realm2ClaimsMap">
                <entry key="http://localhost:8080/wsfedhelloworldother/"
                    value-ref="claimsWsfedhelloworld" />
                <entry key="http://localhost:8080/wsfedhelloworld/"
                    value-ref="claimsWsfedhelloworldother" />
            </util:map>

            <util:list id="claimsWsfedhelloworld">
                <!-- required claim URIs for this relying party -->
            </util:list>

            <util:list id="claimsWsfedhelloworldother">
                <!-- required claim URIs for this relying party -->
            </util:list>

        You group the required claims into beans that are lists of Strings, as illustrated by claimsWsfedhelloworld and claimsWsfedhelloworldother.

        The map bean must be named realm2ClaimsMap; it maps the different relying parties (applications) to one of the claim lists.
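The resolution the IDP performs can be sketched as a plain map lookup keyed by the wtrealm value. The RealmClaimsResolver class below is illustrative, not part of the IDP code; the realm URL and claim URIs are examples.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of how the IDP resolves the wtrealm parameter sent by the relying
// party to the list of claims it must request from the STS, mirroring the
// realm2ClaimsMap bean.
public class RealmClaimsResolver {

    private final Map<String, List<String>> realmToClaims = new HashMap<>();

    public RealmClaimsResolver() {
        realmToClaims.put("http://localhost:8080/wsfedhelloworld/",
            Arrays.asList(
                "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
                "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"));
    }

    // The key must match wtrealm exactly, including a fully qualified host name
    public List<String> claimsFor(String wtrealm) {
        List<String> claims = realmToClaims.get(wtrealm);
        if (claims == null) {
            throw new IllegalArgumentException("Unknown realm: " + wtrealm);
        }
        return claims;
    }
}
```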

        In a future release, this information will be pulled from a WS-Federation Metadata document published by the relying party.

        2. Project dependencies
        The IDP reuses a number of existing projects, such as Apache CXF to communicate with the STS, and provides an adaptation of the WS-Trust interface to the plain HTTP functionality supported by a browser. The IDP has the following dependencies in the Maven project:

           <dependencies>        <dependency>

        3. IDP web application configuration

        Setting up the IDP involves only a few steps. If you don't deploy the IDP in the same servlet container as the STS, you must first download Tomcat 7 and update server.xml. If you deploy the IDP in the same servlet container, you can skip section 3.1.

        3.1 Configure HTTP/S connector in Tomcat

        The HTTP connector should be configured with port 9080.

        The HTTPS connector in Tomcat is configured in conf/server.xml. Deploy the tomcatkeystore.jks of the example project to the Tomcat root directory if the Connector is configured as illustrated:

            <Connector port="9443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       keystoreFile="tomcatkeystore.jks"
                       keystorePass="tompass" sslProtocol="TLS" />
        This connector configures a self-signed certificate which is used for simplification only. You should get a certificate signed by a Certificate Authority for production usage.

        3.2 Configure Username/password authentication

        As described in section 1, the requested claims per relying party are managed in the file WEB-INF/RPClaims.xml.

        3.3 IDP and STS distributed
        If the IDP and STS are not deployed on the same machine (as is likely in production), you have to update the following configuration:

        1) The remote WSDL location of the STS:



        2) the transport conduit to enable the truststore as described in more detail here:

          <http:conduit name="https://localhost:9443/.*">
            <http:tlsClientParameters disableCNCheck="true">
                <sec:trustManagers>
                    <sec:keyStore type="jks" password="cspass" resource="clientstore.jks"/>
                </sec:trustManagers>
            </http:tlsClientParameters>
          </http:conduit>

        3) fully qualified realm in realm2ClaimsMap

        As described in 1), the key of an entry in the realm2ClaimsMap map must match the wtrealm parameter in the redirect triggered by the relying party. If you access the relying party using a fully qualified URL, you must use the fully qualified URL in the IDP too.


            <util:map id="realm2ClaimsMap">
                <entry key="http://localhost:8080/wsfedhelloworldother/"
                    value-ref="claimsWsfedhelloworld" />
                <entry key="http://localhost:8080/wsfedhelloworld/"
                    value-ref="claimsWsfedhelloworld2" />
            </util:map>

        4. Deploy the IDP to Tomcat

        To deploy the IDP using Maven you have to follow these steps:
        • Configure the following Maven plugin    <plugin>
        • Add the server with username and password to your settings.xml
        • Ensure the user has the role "manager-script" as described here
        • Run mvn tomcat:redeploy
          (I recommend using redeploy, as deploy works only the first time)
        If you use Tomcat 6, you must change the url of the tomcat maven plugin:

        5. Test the IDP

        As long as you don't have a relying party in place, you can't easily test the IDP. My next post will explain the setup of the relying party using Tomcat 7. Stay tuned.

        If you like, you can test the IDP with an HTTP client by passing the following request parameters (URLs must be encoded):

        wa       wsignin1.0
        wreply   http://localhost:8080/wsfedhelloworld/secureservlet/fed
        wtrealm  http://localhost:8080/wsfedhelloworld/

        The browser will get an HTML form back (auto-submit). The action of the form is equal to the value of wreply, which doesn't exist yet. You can see the response of the STS, escaped, in the form parameter wresult.
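A sketch of how a client can assemble such a request, with the parameter values URL-encoded as required. The IDP address passed to the method is an assumption for illustration; only the parameter handling matters here.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds the wsignin1.0 request a relying party redirects the browser to.
public class WsFedSigninUrl {

    public static String signinUrl(String idpAddress, String wtrealm, String wreply)
            throws UnsupportedEncodingException {
        // every parameter value must be URL-encoded
        return idpAddress
            + "?wa=" + URLEncoder.encode("wsignin1.0", "UTF-8")
            + "&wtrealm=" + URLEncoder.encode(wtrealm, "UTF-8")
            + "&wreply=" + URLEncoder.encode(wreply, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // the IDP address is a hypothetical example
        System.out.println(signinUrl("https://localhost:9443/idp/",
            "http://localhost:8080/wsfedhelloworld/",
            "http://localhost:8080/wsfedhelloworld/secureservlet/fed"));
    }
}
```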
        Categories: Oliver Wulff

        Using Kerberos with Web Services - part II

        Colm O hEigeartaigh - Tue, 10/11/2011 - 18:03
        This is the second of a two-part series on using Kerberos with Web Services, with Apache WSS4J and CXF. Part I showed how to set up a KDC distribution, and how to generate client and service principals to use in some CXF system tests. The system tests showed how to obtain a Kerberos token from a KDC, package it in a BinarySecurityToken, and send it to a service endpoint for validation. In other words, part I illustrated how to use Kerberos for client authentication in a web service setting.

        This article builds on part I by showing how to use the secret key associated with a Kerberos Token to secure (sign and encrypt) the request. This functionality was added as part of WSS4J 1.6.3, and the related WS-SecurityPolicy functionality was released as part of CXF 2.4.3.

        1) Setting up the Kerberos system tests in Apache CXF 

        If you have not done so already, follow the instructions in part I to install a Kerberos distribution and to generate client and service principals to run the CXF Kerberos system tests. The KerberosTokenTest in Apache CXF contains a number of different Kerberos tests. In this article we will examine the tests that involve obtaining a Kerberos Token, and using the associated secret key to secure some part of the request.

        Firstly, make sure that the JDK has unlimited security policies installed, and then checkout the CXF WS-Security system tests via:
        svn co

        1.1) Installing a custom KerberosTokenDecoder

        Once the client obtains an AP-REQ token from the KDC, it also has easy access to the session key, which can be used to secure the request in some way. Unfortunately, there appears to be no easy way to obtain the session key on the receiving side: WSS4J does not support extracting a Kerberos session key on the receiving side to decrypt/verify a secured request out of the box. Instead, a KerberosTokenDecoder interface is provided, which defines methods for setting the AP-REQ token and the current Subject, and a method to then get the session key. An implementation must be set on the KerberosTokenValidator to obtain a session key to decrypt the request or verify a signature.
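To make the contract concrete, here is a hedged paraphrase of what a decoder implementation has to provide. The interface and method names below are illustrative, not WSS4J's exact signatures, and the stub obviously returns a dummy key instead of extracting one from the ticket.

```java
import javax.security.auth.Subject;

// A paraphrase of the decoder contract described above: the receiving side
// hands the decoder the AP-REQ bytes and the service's JAAS Subject, then
// asks it for the session key. Names are illustrative, not WSS4J's.
interface SessionKeyDecoder {
    void setToken(byte[] apReqToken);   // the AP-REQ from the BinarySecurityToken
    void setSubject(Subject subject);   // service credentials from the JAAS login
    byte[] getSessionKey();             // key used to verify/decrypt the request
}

// Stub standing in for the sun-API-based implementation from the article
public class StubDecoder implements SessionKeyDecoder {
    private byte[] token;
    private Subject subject;

    public void setToken(byte[] apReqToken) { this.token = apReqToken; }
    public void setSubject(Subject subject) { this.subject = subject; }

    public byte[] getSessionKey() {
        if (token == null || subject == null) {
            throw new IllegalStateException("token and subject must be set first");
        }
        return new byte[16]; // a real decoder extracts this from the ticket
    }
}
```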

        To run the Kerberos system tests that require a secret key on the receiving side, download an implementation of the KerberosTokenDecoder interface here, and copy it to "systests/ws-security/src/test/java/org/apache/cxf/systest/ws/kerberos/server". The implementation is based on code written by the Java Monkey and uses internal Sun APIs, and so can't be shipped in Apache CXF/WSS4J. Once this implementation has been copied into the ws-security module, you must compile or run any tests with the "-Pnochecks" profile enabled, as otherwise the code fails checkstyle.

        Open the server configuration file ("src/test/resources/org/apache/cxf/systest/ws/kerberos/server/server.xml") and uncomment the "kerberosTicketDecoderImpl" bean, along with the property of the "kerberosValidator" bean that refers to it:

        <bean id="kerberosTicketDecoderImpl" class=""/>
        <bean id="kerberosValidator">
                <property name="contextName" value="bob"/>
                <property name="serviceName" value=""/>
                <property name="kerberosTokenDecoder" ref="kerberosTicketDecoderImpl"/>
        </bean>

        1.2) Running the tests

        Open the KerberosTokenTest and comment out the "@org.junit.Ignore" entries for the last four tests: "testKerberosOverTransportEndorsing", "testKerberosOverAsymmetricEndorsing", "testKerberosOverSymmetricProtection" and "testKerberosOverSymmetricDerivedProtection". Finally, run the tests via:

        mvn -Pnochecks test -Dtest=KerberosTokenTest

        2) The tests in more detail

        In this section, we'll look at the tests in more detail. 

        2.1) WS-SecurityPolicy configuration

        The WSDL that defines the service endpoints contains WS-SecurityPolicy expressions defining the security requirements of the endpoints. The following security policies are used for the four tests listed above:
        • testKerberosOverTransportEndorsing: A (one-way) transport binding is defined, with a KerberosToken required as an EndorsingSupportingToken. 
        • testKerberosOverAsymmetricEndorsing: An asymmetric binding is used, where a KerberosToken is required as an EndorsingSupportingToken.
        • testKerberosOverSymmetricProtection: A symmetric binding is used, where a KerberosToken is specified as a ProtectionToken of the binding.
        • testKerberosOverSymmetricDerivedProtection: The same as the previous test-case, except that any secret keys that are used must be derived.
        The first two test cases use an EndorsingSupportingToken, which means that the secret key associated with the KerberosToken is used to sign (endorse) some message part (the Timestamp, for the Transport binding). This illustrates proof of possession. For the latter two test cases, the KerberosToken is defined as a ProtectionToken, meaning that the secret key is used to sign/encrypt the request (e.g., instead of using an X.509 token to encrypt a session key).
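The proof-of-possession idea behind the endorsing tests can be illustrated with plain symmetric signing: both sides hold the Kerberos session key, so the client can sign a message part and the service can verify it by recomputing the signature. The key bytes and the use of HmacSHA1 below are stand-ins for illustration, not the actual WS-Security algorithm suite.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustration only: symmetric signing of a message part with a shared key,
// the core idea behind endorsing a Timestamp with the Kerberos session key.
public class SessionKeySigning {

    public static String sign(byte[] sessionKey, String part) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sessionKey, "HmacSHA1"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(part.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        byte[] sessionKey = new byte[16]; // stand-in for the ticket's session key
        String timestamp = "<wsu:Timestamp>...</wsu:Timestamp>";
        String clientSig = sign(sessionKey, timestamp);
        // the service recomputes the signature with its copy of the key
        System.out.println(clientSig.equals(sign(sessionKey, timestamp))); // true
    }
}
```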

        2.2) Kerberos LoginModule configuration

        Both the CXF client and the service endpoint use JAAS to authenticate to the KDC. The JAAS file used in the system tests is passed to them via the system property "java.security.auth.login.config". The client (alice) uses the following login module:

        alice {
            com.sun.security.auth.module.Krb5LoginModule required
            refreshKrb5Config=true useKeyTab=true keyTab="/etc/alice.keytab"
        };

        and the service endpoint (bob) uses:

        bob {
            com.sun.security.auth.module.Krb5LoginModule required
            refreshKrb5Config=true useKeyTab=true storeKey=true
            keyTab="/etc/bob.keytab" principal="bob/";
        };

        2.3) Service endpoint configuration

        The service endpoints are Spring-loaded. Each endpoint definition contains the JAX-WS property "ws-security.bst.validator", which is defined in SecurityConstants. WSS4J uses Validator implementations to validate received security tokens; this particular property means that BinarySecurityTokens are to be validated by the given reference, e.g.:

        <jaxws:endpoint ...>
            <jaxws:properties>
                <entry key="ws-security.bst.validator" value-ref="kerberosValidator"/>
            </jaxws:properties>
        </jaxws:endpoint>

        "kerberosValidator" is the KerberosTokenValidator instance given above. It requires a "contextName" property, which corresponds to the JAAS context name, as well as an optional "serviceName" property and an optional "kerberosTokenDecoder" property used to obtain a secret key. Combined with the JAAS properties file, this is all that is required for the service endpoint to validate a received Kerberos token.

        2.4) Client configuration

        Finally, the client must contact a KDC and obtain a Kerberos token once it sees that the service endpoint has a security policy requiring a KerberosToken. The client configuration is available here. A sample configuration for the Kerberos test case is as follows:

        <jaxws:client name="{...}DoubleItKerberosTransportPort" ...>
            <jaxws:properties>
                <entry key="ws-security.kerberos.client">
                    <bean class="org.apache.cxf.ws.security.kerberos.KerberosClient">
                        <constructor-arg ref="cxf"/>
                        <property name="contextName" value="alice"/>
                        <property name="serviceName" value=""/>
                    </bean>
                </entry>
            </jaxws:properties>
        </jaxws:client>

        The JAX-WS property "ws-security.kerberos.client" (again, defined in SecurityConstants) corresponds to a KerberosClient object. Like the KerberosTokenValidator on the receiving side, it is configured with a JAAS context name and a service name.
        Categories: Colm O hEigeartaigh


        Subscribe to Talend Community Coders aggregator