Latest Activity

Asynchronous JAX-RS Proxies in CXF

Sergey Beryozkin - Tue, 06/21/2016 - 23:35
Dan had an idea the other day to enhance CXF JAX-RS proxies a bit so that they can support asynchronous calls. After all, the HTTP-centric JAX-RS 2.0 and CXF WebClient clients already support such calls with AsyncInvoker.

So here is what we have started with. Simply register an InvocationCallback with the proxy request context as shown in the examples and make the asynchronous call. The proxy method will return immediately and the callback will be notified in due time once the typed response is available. As the examples show, one can register a single callback or a collection of callbacks bound to specific response types.
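Here is a minimal sketch of the idea, assuming a hypothetical BookStore proxy interface and Book class (the real examples live in the CXF tests):

import javax.ws.rs.client.InvocationCallback;
import org.apache.cxf.jaxrs.client.JAXRSClientFactory;
import org.apache.cxf.jaxrs.client.WebClient;

public class AsyncProxyDemo {
    public static void main(String[] args) {
        BookStore proxy = JAXRSClientFactory.create("http://localhost:8080/store", BookStore.class);

        InvocationCallback<Book> callback = new InvocationCallback<Book>() {
            @Override
            public void completed(Book book) {
                System.out.println("Got the book: " + book.getName());
            }
            @Override
            public void failed(Throwable t) {
                t.printStackTrace();
            }
        };

        // Registering the callback in the proxy request context makes the next call asynchronous:
        // the proxy method returns immediately and the callback is notified later.
        WebClient.getConfig(proxy).getRequestContext()
                 .put(InvocationCallback.class.getName(), callback);

        proxy.getBook(123L);
    }
}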

I suppose we can consider generating typed asynchronous proxy methods from the service descriptions such as WADL going forward.

This feature will be available in CXF 3.1.7. Please give it a try, refresh your JAX-RS proxy code a bit, and enjoy.

Categories: Sergey Beryozkin

A new REST interface for the Apache CXF Security Token Service - part II

Colm O hEigeartaigh - Fri, 06/17/2016 - 12:38
The previous blog entry introduced the new REST interface of the Apache CXF Security Token Service. It covered issuing, renewing and validating tokens via HTTP GET and POST with a focus on SAML tokens. In this post, we'll cover the new support in the STS to issue and validate JWT tokens via the REST interface, as well as how we can transform SAML tokens into JWT tokens, and vice versa. For information on how to configure the STS to support JWT tokens (via WS-Trust), please look at a previous blog entry.

1) Issuing JWT Tokens

We can retrieve a JWT token from the REST interface of the STS simply by making an HTTP GET request to "/token/jwt". Specify the "appliesTo" query parameter to insert the service address as the audience of the generated token. You can also include various claims in the token via the "claim" query parameter.

If an "application/xml" accept type is specified, or if multiple accept types are specified (as in a browser), the JWT token is wrapped in a "TokenWrapper" XML Element by the STS:

If "application/json" is specified, the JWT token is wrapped in a simple JSON Object as follows:

If "text/plain" is specified, the raw JWT token is returned. We can also get a JWT token using HTTP POST by constructing a WS-Trust RequestSecurityToken XML fragment, specifying "urn:ietf:params:oauth:token-type:jwt" as the WS-Trust TokenType.

2) Validating JWT Tokens and token transformation

We can also validate JWT Tokens by POSTing a WS-Trust RequestSecurityToken to the STS. The raw (String) JWT Token must be wrapped in a "TokenWrapper" XML fragment, which in turn is specified as the value of "ValidateTarget". Also, an "action" query parameter must be added with value "validate".

A powerful feature of the STS is the ability to transform tokens from one type to another. This is done by making a validate request for a given token and specifying a value for "TokenType" that corresponds to the desired token type. In this way we can validate a SAML token and issue a JWT token, and vice versa.
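A rough sketch of such a transformation call with the CXF WebClient follows; the endpoint address is a placeholder, the SAML assertion is assumed to have been obtained already, and TLS trust configuration is omitted:

import org.apache.cxf.jaxrs.client.WebClient;

public class SamlToJwtTransformation {
    public static void main(String[] args) {
        // A previously issued SAML assertion (placeholder)
        String samlAssertion =
            "<saml2:Assertion xmlns:saml2=\"urn:oasis:names:tc:SAML:2.0:assertion\">...</saml2:Assertion>";

        // Validate the SAML token and ask for a JWT token back
        String rst =
            "<RequestSecurityToken xmlns=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512\">"
            + "<RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Validate</RequestType>"
            + "<TokenType>urn:ietf:params:oauth:token-type:jwt</TokenType>"
            + "<ValidateTarget>" + samlAssertion + "</ValidateTarget>"
            + "</RequestSecurityToken>";

        String response = WebClient.create("https://localhost:8443/SecurityTokenService/token")
                                   .query("action", "validate")
                                   .type("application/xml")
                                   .accept("application/xml")
                                   .post(rst, String.class);
        System.out.println(response);
    }
}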

To see some examples of how to do token validation as well as token transformation, please take a look at the following tests, which use the CXF WebClient to invoke on the REST interface of the STS.
Categories: Colm O hEigeartaigh

A new REST interface for the Apache CXF Security Token Service - part I

Colm O hEigeartaigh - Thu, 06/16/2016 - 17:46
Apache CXF ships a Security Token Service (STS) that can issue/validate/renew/cancel tokens via the (SOAP based) WS-Trust interface. The principal focus of the STS is to deal with SAML tokens, although other token types are supported as well. JAX-RS clients can easily obtain tokens using helper classes in CXF (see Andrei Shakirin's blog entry for more information on this).

However, wouldn't it be cool if a JAX-RS client could dispense with SOAP altogether and obtain tokens via a REST interface? Starting from the 3.1.6 release, the Apache CXF STS now has a powerful and flexible REST API which I'll describe in this post. The next post will cover how the STS can now issue and validate JWT tokens, and how it can transform JWT tokens into SAML tokens and vice versa.

One caveat - this REST interface is obviously not a standard, as compared to the WS-Trust interface, and so is specific to CXF.

1) Configuring the JAX-RS endpoint of the CXF STS

Firstly, let's look at how to set up the JAX-RS endpoint needed to support the REST interface of the STS. The STS is configured in exactly the same way as for the standard WS-Trust based interface, the only difference being that we are setting up a JAX-RS endpoint instead of a JAX-WS endpoint. Example configuration can be seen here in a system test.


2) Issuing Tokens

The new JAX-RS interface supports a number of different methods to obtain tokens. In this post, we will just focus on the methods used to obtain XML based tokens (such as SAML).

2.1) Issue tokens via HTTP GET

The easiest way to get a token is by doing an HTTP GET on "/token/{tokenType}", which can be tried out directly in a web browser.

 The supported tokenType values are:
  • saml
  • saml2.0
  • saml1.1
  • jwt
  • sct
A number of optional query parameters are supported:
  • keyType - The standard WS-Trust based URIs to indicate whether a Bearer, HolderOfKey, etc. token is required. The default is to issue a Bearer token.
  • claim - A list of requested claims to include in the token. See below for the list of acceptable values.
  • appliesTo - The AppliesTo value to use
  • wstrustResponse - A boolean parameter to indicate whether to return a WS-Trust Response or just the desired token. The default is false. This parameter only applies to the XML case.
Note that for the "Holder of Key" keytype, the STS will try to get the client key via the TLS client certificate, if it is available. The (default) supported claim values are:
  • emailaddress
  • role
  • surname
  • givenname
  • name
  • upn
  • nameidentifier
The format of the returned token depends on the HTTP application type:
  • application/xml - Returns a token in XML format
  • application/json - JSON format for JWT tokens only.
  • text/plain - The "plain" token is returned. For JWT tokens it is just the raw token. For XML-based tokens, the token is BASE-64 encoded and returned.
The default is to return XML if multiple types are specified (e.g. in a browser).
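As a rough illustration of the GET interface described above, here is a CXF WebClient call requesting a SAML 2.0 token; the STS endpoint address and the appliesTo value are placeholders, and TLS trust configuration is omitted:

import org.apache.cxf.jaxrs.client.WebClient;

public class RestStsSamlClient {
    public static void main(String[] args) {
        // Requests a Bearer SAML 2.0 token (the default keyType) with a "role" claim
        String token = WebClient.create("https://localhost:8443/SecurityTokenService/token")
                                .path("saml2.0")
                                .query("appliesTo", "https://localhost:8443/doubleit/services/doubleittransport")
                                .query("claim", "role")
                                .accept("application/xml")
                                .get(String.class);
        System.out.println(token);
    }
}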

2.2) Issue tokens via HTTP POST

While the GET method above is very convenient to use, it may be that you want to pass other parameters to the STS in order to issue a token. In this case, you can construct a WS-Trust RequestSecurityToken XML fragment and POST it to the STS. The response will be the standard WS-Trust Response, and not the raw token which you have the option of receiving via the GET method.

3) Validating/Renewing/Cancelling tokens

It is only possible to renew, validate or cancel a token using the POST method detailed in section 2.2 above. You construct the RequestSecurityToken XML fragment in the same way as for Issue, except that you include the token in question under "ValidateTarget", "RenewTarget", etc. For the non-issue use cases, you must specify an "action" query parameter, which can be "issue" (default), "validate", "renew" or "cancel". A rough sketch of a renew call is shown below.
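The following sketch renews a previously issued SAML token; the endpoint address and the assertion itself are placeholders, and TLS trust configuration is omitted:

import org.apache.cxf.jaxrs.client.WebClient;

public class RestStsRenewClient {
    public static void main(String[] args) {
        // A previously issued (renewable) SAML assertion (placeholder)
        String samlAssertion =
            "<saml2:Assertion xmlns:saml2=\"urn:oasis:names:tc:SAML:2.0:assertion\">...</saml2:Assertion>";

        String rst =
            "<RequestSecurityToken xmlns=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512\">"
            + "<RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Renew</RequestType>"
            + "<RenewTarget>" + samlAssertion + "</RenewTarget>"
            + "</RequestSecurityToken>";

        String response = WebClient.create("https://localhost:8443/SecurityTokenService/token")
                                   .query("action", "renew")
                                   .type("application/xml")
                                   .accept("application/xml")
                                   .post(rst, String.class);
        System.out.println(response);
    }
}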
Categories: Colm O hEigeartaigh

Apache CXF JAX-RS and SAML Assertions

Sergey Beryozkin - Thu, 06/02/2016 - 16:39
While the part of the software industry interested in web security is enthusiastically embracing the latest and coolest technologies such as OpenId Connect and JOSE, with JSON Web Tokens being the stars of the advanced security flows, the less 'glamorous' SAML security tokens continue to help secure existing services.

CXF JAX-RS has been providing comprehensive support for SAML assertions for a while now, and it is relied upon in a number of production deployments. I'd also like to encourage developers who work with SAML to give this access control feature a try.

The question which is often asked is how a JAX-RS client gets these assertions. Please read this informative blog post explaining how CXF JAX-RS clients can seamlessly get a SAML assertion from a WS-Trust STS service and use it, with the server validating it against the STS or locally.

Please also check this section if you are curious how to reuse SAML assertions in OAuth2 flows.

  
Categories: Sergey Beryozkin

Practical Cryptography with Apache CXF JOSE

Sergey Beryozkin - Tue, 05/31/2016 - 17:20
It has been a year since I had a chance to talk about Practical JOSE in Apache CXF at Apache Con NA 2015.

We have significantly improved the CXF JOSE implementation since then, with Colm helping a lot with the code, tests and documentation. The code has become more thoroughly tested, the configuration has improved, and the documentation has been updated recently.

The production-quality CXF STS service can now issue JOSE-protected JWT assertions, and the Fediz OpenId Connect project directly depends on JOSE in order to secure OIDC IdTokens.

But it is important to realize that doing JOSE does not mean you need to do OAuth2 in general or OpenId Connect in particular, though it is definitely true that understanding JOSE will help when you decide to work with OAuth2/OIDC.
As such, a web service developer can experiment with JOSE in a number of ways.

One approach is to use the JWS Signature or JWE Encryption helpers to sign and/or encrypt arbitrary data.

For example, have your service receive a confidential String over 2-way HTTPS, then JWE-encrypt it and save it to the database to ensure the data is safe, or JWS-sign it only and forward it on, assured that the data won't be modified, choosing between the JWS Compact or JSON representations.

Have you already heard that JOSE sequences have their data Base64-URL encoded? Try the JWS JSON unencoded payload option.

Another approach is to let CXF do JOSE for you. Use the CXF JOSE filters and secure the service data by typing a few lines of text in the configuration properties.
These filters will make a best effort at streaming the outbound data while preparing JOSE sequences.

Would you like to link client JWT assertions obtained from progressive services such as CXF STS to the data being protected? Add a couple of filters.

I honestly think that JOSE is the best technology to help many of us better understand what cryptography is.

Start with selecting a signature algorithm. You most likely have a Java JKS key store somewhere around, so go for 'RS256'. Get the private key out and sign, then get a public key and validate as shown here.
Next try to encrypt: select RSA-OAEP to get going quickly, given that you already have this JKS store. Use a public key to secure a content encryption key generated by CXF for you and then do A128GCM content encryption. Finish by decrypting the content with the private key.
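If you'd like to see what 'RS256' does under the hood before reaching for the CXF JOSE helpers, here is a plain-JDK sketch of the underlying sign/verify primitive (SHA-256 with RSA); the keystore file, passwords and alias are placeholders, and this is deliberately not the CXF JOSE API:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class Rs256Primer {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream fis = new FileInputStream("mykeystore.jks")) {
            ks.load(fis, "kspassword".toCharArray());
        }
        PrivateKey privateKey = (PrivateKey) ks.getKey("mykey", "keypassword".toCharArray());
        PublicKey publicKey = ks.getCertificate("mykey").getPublicKey();

        byte[] data = "jws.signing.input".getBytes("UTF-8");

        // Sign with the private key: this is the primitive behind the JWS 'RS256' algorithm
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(privateKey);
        signer.update(data);
        byte[] signature = signer.sign();

        // Validate with the public key
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(data);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}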

Works? Interested in trying different key sizes or combinations of JOSE algorithms? No problem, try them quickly. Learn more about these algorithms next. See how it all works when the CXF JOSE filters do the work.

We've thought a lot about how to help developers start experimenting with JOSE as quickly and easily as possible, and I hope those of you who start working with the CXF JOSE code will help us make it even better.

Would you like to use some other quality JOSE libraries such as these ones? No problem, use them inside your custom JAX-RS filters or directly in the service code.

You may say, I'm not really seeing others use JOSE in regular HTTP services work. Let me finish with this advice: please do not worry about it, be a pioneer, experiment and find new interesting ways to secure your services and prepare them to work in the world of JOSE-protected tokens and data flowing everywhere.

Do JOSE today, convince your boss your team needs it :-), become a cryptography expert. Enjoy !




Categories: Sergey Beryozkin

SAML SSO support in the Fediz 1.3.0 IdP

Colm O hEigeartaigh - Fri, 05/27/2016 - 17:12
The Apache CXF Fediz Identity Provider (IdP) has had the ability to talk to third party IdPs using SAML SSO since the 1.2.0 release. However, one of the new features of the 1.3.0 release is the ability to configure the Fediz IdP to use the SAML SSO protocol directly, instead of WS-Federation. This means that Fediz can be used as a fully functioning SAML SSO IdP.

I added a new test-case to github to show how this works:
  • cxf-fediz-saml-sso: This project shows how to use the SAML SSO interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service. 
The test-case consists of two modules. The first is a web application which contains a simple JAX-RS service, which has a single GET method to return a doubled number. The method is secured with a @RolesAllowed annotation, meaning that only a user in roles "User", "Admin", or "Manager" can access the service. The service is configured with the SamlRedirectBindingFilter, which redirects unauthenticated users to a SAML SSO IdP for authentication (in this case Fediz). The service configuration also defines an AssertionConsumerService which validates the response from the IdP, and sets up the session for the user + populates the CXF security context with the roles from the SAML Assertion.
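A rough sketch of what such a secured JAX-RS service might look like follows; the class, path and method names are hypothetical, and the real code is in the linked test-case:

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/doubleit")
public class DoubleItService {

    @GET
    @Path("{number}")
    @RolesAllowed({ "User", "Admin", "Manager" })
    public int doubleIt(@PathParam("number") int number) {
        // Only users in one of the allowed roles (taken from the SAML Assertion) reach this point
        return number * 2;
    }
}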

The second module deploys the Fediz IdP and STS in Apache Tomcat, as well as the "double-it" war above. It uses HtmlUnit to make an invocation on the service and check that access is granted to the service. Alternatively, you can comment out the @Ignore annotation of the "testInBrowser" method, and copy the printed-out URL into a browser to test the service directly (user credentials: "alice/ecila").

The IdP configuration is defined in entities-realma.xml. Note that "supportedProtocols" for the "idp-realmA" configuration contains the value "urn:oasis:names:tc:SAML:2.0:profiles:SSO:browser". In addition, the default authentication URI is "saml/up". These are the only changes that are required to switch the IdP for "realm A" from WS-Federation to SAML SSO.
Categories: Colm O hEigeartaigh

Observations about Apache Con NA 2016

Sergey Beryozkin - Wed, 05/25/2016 - 00:10
This year Apache Con NA was held in Vancouver BC.

As usual, being at Apache Con gives you a chance to talk to your fellow Open Source developers, and this year it was as great as ever - meeting my old and new Talend colleagues, talking to those I had already met before, and getting to know other people attending the conference was nice.
The conference hotel was a few hundred meters away from the waterfront, where one could walk or run to the green Stanley Park.


Now let me talk about the actual conference. The Big Data and Core conferences are no longer run at the same time, with only a single day of overlap. I guess I preferred the more compact 'mix-in' format, as I could attend either Big Data or Core presentations over fewer days. But organizing a successful conference is very difficult - at the end of the day, whatever format works best for Apache Con is the winning format.

I liked and learned something new from all the keynotes I listened to, but I particularly enjoyed 'Open Source is a Positive Sum Game' by Sam Ramji. Have you ever looked at the schedule, not sure what to expect from the listed talks, and then someone starts speaking and you realize you are listening to a visionary? This is what I felt when listening to Sam.

A number of other talks were interesting. My colleague JB's presentations were both interesting and entertaining, and I was also happy to see Hadrian and Jamie, both my former colleagues :-), co-presenting.

I think Colm and I had a good audience during our presentation. It must've been difficult for those who attended to listen to a lot of security-related information presented on Friday after lunch :-), and we are grateful to all who were there. I did overrun by a minute though, and we had no chance to talk to the audience afterwards, but we did convey a lot of information during our talk.

And then finally, with the last few presentations to choose from, we made it to Shawn McKinney's presentation. Now imagine it is 16:00, late Friday afternoon, and you are about to listen to yet another security-related talk :-). I think Shawn did remarkably well. Shawn's down-to-earth, likeable presentation style made the real difference. And while I did learn a few things about Role Based Access Control (such as the temporal restrictions), what really got to me was Shawn's advice to "test and re-use". You may say it is all quite obvious, but sometimes one can get lax on either of those fundamentals, myself included. I'd like to talk about some of the thoughts I've had on 're-use vs implement yourself' later on.

It was great to be there :-)




   
Categories: Sergey Beryozkin

An interop demo between Apache CXF Fediz and Facebook

Colm O hEigeartaigh - Fri, 05/06/2016 - 17:49
The previous post showed how to configure the Fediz IdP to interoperate with the Google OpenID Connect provider. In addition to supporting WS-Federation, SAML SSO and OpenID Connect, from the forthcoming 1.3.1 release the Fediz IdP will also support authenticating users via Facebook Connect. Facebook Connect is based on OAuth 2.0 and hence the implementation in Fediz shares quite a lot of common code with the OpenID Connect implementation. In this post, we will show how to configure the Fediz IdP to interoperate with Facebook.

1) Create a new app in the Facebook developer console

The first step is to create a new application in the Facebook developer console. Create a Facebook developer account. Login and register as a Facebook developer. Click on "My Apps" and "Add a New App". Select the "website" platform and create a new app with the name "Fediz IdP". Enter your email, select a Category and click on "Create App ID". Enter "https://localhost:8443/fediz-idp/federation" for the Site URL. Now go to the dashboard for the application and copy the App ID + App Secret and save for later.

2) Testing the newly created Client

To test that everything is working correctly, open a web browser and navigate to the following URL, substituting in the App ID saved in step "1" above:
  • https://www.facebook.com/dialog/oauth?response_type=code&client_id=<app-id>&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=email
Login using your Facebook credentials and grant permission on the OAuth consent screen. The browser will then attempt a redirect to the given redirect_uri. Copy the URL + extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
  • curl --data "client_id=<app-id>&client_secret=<app-secret>&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" https://graph.facebook.com/v2.6/oauth/access_token
You should see a successful response containing the OAuth 2.0 Access Token.

3) Install and configure the Apache CXF Fediz IdP and sample Webapp

Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use at least Fediz 1.3.1-SNAPSHOT here for Facebook support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
  • https://localhost:8443/fedizhelloworld/secure/fedservlet 
3.1) Configure the Fediz IdP to communicate with the Facebook IdP

Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the Facebook Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
  • Change the port in "idpUrl" to "8443". 
In the 'trusted-idp-realmB' bean:
  • Change the "url" value to "https://www.facebook.com/dialog/oauth".
  • Change the "protocol" value to "facebook-connect".
  • Delete the "certificate" and "trustType" properties.
  • Add the following parameters Map, filling in a value for the client secret extracted above:
    <property name="parameters">
        <util:map>
            <entry key="client.id" value="<app-id>" />
            <entry key="client.secret" value="<app-secret>" />
        </util:map>
    </property>
It is possible to specify custom scopes via a "scope" parameter. The default value for this parameter is "email". The token endpoint can be configured via a "token.endpoint" parameter (defaults to "https://graph.facebook.com/v2.6/oauth/access_token"). Similarly, the API endpoint used to retrieve the user's email address can be configured via "api.endpoint", defaulting to "https://graph.facebook.com/v2.6". The Subject claim to be used to insert into a SAML Assertion is defined via "subject.claim", which defaults to "email".

3.2) Update the TLS configuration

By default, the Fediz IdP is configured with a truststore required to access the Fediz STS. However, this would mean that the requests to the Facebook IdP over TLS will not be trusted. To change this, edit 'webapps/fediz-idp/WEB-INF/classes/cxf-tls.xml', and change the HTTP conduit name from "*.http-conduit" to "https://localhost.*". This means that this configuration will only get picked up when communicating with the STS (deployed on "localhost"), and the default JDK truststore will get used when communicating with the Facebook IdP.

3.3) Update Fediz STS claim mapping

The STS will receive a SAML Token created by the IdP representing the authenticated user. The Subject name will be the email address of the user as configured above. Therefore, we need to add a claims mapping in the STS to map the principal received to some claims. Edit 'webapps/fediz-idp-sts/WEB-INF/userClaims.xml' and just copy the entry for "alice" in "userClaimsREALMA", changing "alice" to your email address used to log into Facebook.

Finally, restart Fediz to pick up the changes (you may need to remove the persistent storage first).

4) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Facebook authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it creates a SAML token representing the act of authentication via the trusted third party IdP,  and redirects to the web application.
Categories: Colm O hEigeartaigh

An interop demo between Apache CXF Fediz and Google OpenID Connect

Colm O hEigeartaigh - Fri, 04/29/2016 - 18:19
The previous post introduced some of the new features in Apache CXF Fediz 1.3.0. One of the new enhancements is that the Fediz IdP can now delegate WS-Federation (and SAML SSO) authentication requests to a third party IdP via OpenID Connect. An article published in February showed how it is possible to interoperate between Fediz and the Keycloak OpenID Connect provider. In this post, we will show how to configure the Fediz IdP to interoperate with the Google OpenID Connect provider.

1) Create a new client in the Google developer console

The first step is to create a new client in the Google developer console to represent the Apache CXF Fediz IdP. Login to the Google developer console and create a new project. Click on "Enable and Manage APIs" and then select "Credentials". Click on "Create Credentials" and select "OAuth client id". Configure the OAuth consent screen and select "web application" as the application type. Specify "https://localhost:8443/fediz-idp/federation" as the authorized redirect URI. After creating the client, a pop-up window will specify the newly created client id and secret. Save both values locally. The Google documentation is available here.

2) Testing the newly created Client

It's possible to see the Google OpenId Connect configuration by navigating to:
  • https://accounts.google.com/.well-known/openid-configuration
This tells us what the authorization and token endpoints are, both of which we will need to configure the Fediz IdP. To test that everything is working correctly, open a web browser and navigate to the following URL, substituting in the client id saved in step "1" above:
  • https://accounts.google.com/o/oauth2/v2/auth?response_type=code&client_id=<client-id>&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=openid
Login using your google credentials and grant permission on the OAuth consent screen. The browser will then attempt a redirect to the given redirect_uri. Copy the URL + extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
  • curl -u <client-id>:<secret> --data "client_id=<client-id>&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" https://www.googleapis.com/oauth2/v4/token
You should see a successful response containing (amongst other things) the OAuth 2.0 Access Token and the OpenId Connect IdToken, containing the user identity.

3) Install and configure the Apache CXF Fediz IdP and sample Webapp

Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use Fediz 1.3.0 here for OpenId Connect support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
  • https://localhost:8443/fedizhelloworld/secure/fedservlet 
3.1) Configure the Fediz IdP to communicate with the Google IdP

Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the OpenID Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
  • Change the port in "idpUrl" to "8443". 
In the 'trusted-idp-realmB' bean:
  • Change the "url" value to "https://accounts.google.com/o/oauth2/v2/auth".
  • Change the "protocol" value to "openid-connect-1.0".
  • Delete the "certificate" and "trustType" properties.
  • Add the following parameters Map, filling in a value for the client secret extracted above:
    <property name="parameters">
        <util:map>
            <entry key="client.id" value="<client-id>" />
            <entry key="client.secret" value="<secret>" />
            <entry key="token.endpoint" value="https://accounts.google.com/o/oauth2/token" />
            <entry key="scope" value="openid profile email"/>
            <entry key="jwks.uri" value="https://www.googleapis.com/oauth2/v3/certs" />
            <entry key="subject.claim" value="email"/>
        </util:map>
    </property>
There are a few additional properties configured here compared to the previous tutorial. It is possible to specify custom scopes via the "scope" parameter. In this case we are requesting the "profile" and "email" scopes. The default value for this parameter is "openid". In addition, rather than validating the signed IdToken using a local certificate, here we are specifying a value for "jwks.uri", which is the location of the signing key. The "subject.claim" property specifies the claim name from which to obtain the Subject name, which is inserted into a SAML Token that is sent to the STS.

3.2) Update the TLS configuration

By default, the Fediz IdP is configured with a truststore required to access the Fediz STS. However, this would mean that the requests to the Google IdP over TLS will not be trusted. To change this, edit 'webapps/fediz-idp/WEB-INF/classes/cxf-tls.xml', and change the HTTP conduit name from "*.http-conduit" to "https://localhost.*". This means that this configuration will only get picked up when communicating with the STS (deployed on "localhost"), and the default JDK truststore will get used when communicating with the Google IdP.

3.3) Update Fediz STS claim mapping

The STS will receive a SAML Token created by the IdP representing the authenticated user. The Subject name will be the email address of the user as configured above. Therefore, we need to add a claims mapping in the STS to map the principal received to some claims. Edit 'webapps/fediz-idp-sts/WEB-INF/userClaims.xml' and just copy the entry for "alice" in "userClaimsREALMA", changing "alice" to your Google email address.

Finally, restart Fediz to pick up the changes (you may need to remove the persistent storage first).

4) Testing the service

To test the service navigate to:
  • https://localhost:8443/fedizhelloworld/secure/fedservlet
Select "realm B". You should be redirected to the Google authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it converts the received JWT token to a token in the realm of Fediz (realm A) and redirects to the web application.
    Categories: Colm O hEigeartaigh

    Apache CXF Fediz 1.3.0 released

    Colm O hEigeartaigh - Mon, 04/25/2016 - 18:06
    A new major version (1.3.0) of Apache CXF Fediz was released a few weeks ago. There are some major dependency updates as part of this release:
    • The core Apache CXF dependency is updated from the 3.0.x branch to the 3.1.x branch (3.1.6 to be precise)
    • The Spring dependency of the IdP is updated from the 3.2.x branch to the 4.1.x branch.
    Fediz contains a number of container plugins to support the Passive Requestor Profile of WS-Federation. The 1.3.0 release now supports container plugins for:
    • Websphere
    • Jetty 8 and 9 (new)
    • Apache Tomcat 7 and 8 (new)
    • Spring Security 2 and 3
    • Apache CXF.
    The Identity Provider (IdP) service has the following new features:
    • The IdP now supports protocol bridging with OpenId Connect IdPs (see previous article on an interop demo with Keycloak).
    • The IdP is now capable of supporting the SAML SSO protocol natively, in addition to the Passive Requestor Profile of WS-Federation.
    • A new IdP service is now available which supports OpenId Connect by leveraging Apache CXF. By default it delegates authentication to the existing Fediz IdP using WS-Federation.
    In a nutshell, the Fediz 1.3.0 IdP supports user authentication via the WS-Federation, SAML SSO and OpenId Connect protocols, and it can also bridge between all of these different protocols. This is a compelling selling point of Fediz, and one I will explore more in some forthcoming articles.
    Categories: Colm O hEigeartaigh

    Talking about Fediz OIDC at Apache Con NA 2016

    Sergey Beryozkin - Sun, 04/24/2016 - 19:01
    Colm and I are going to talk about Fediz OpenId Connect at Apache Con NA 2016. The session is on Friday, 13th May.

    Be there if you can, you can then tell your grandchildren you were at the 1st public presentation about Fediz OIDC :-)

    I do look forward to being at Apache Con again. Seeing and talking to the colleagues from Apache CXF and other projects is always super great.
    Categories: Sergey Beryozkin

    [OT] U2 Innocence And Experience or Understand HTTP services with CXF

    Sergey Beryozkin - Sun, 04/24/2016 - 18:47
    I've already told all of my colleagues who would listen how lucky I was to get a chance to hear U2 live when they played several concerts in Dublin as part of their Innocence and Experience tour.

    I've already explained why I like U2. But seeing them play live is really special. The voice is so good it is shocking at first. They are hard-working and innovative, despite not being that young any more - the latter part is something I can definitely identify with :-).

    In all of the [OT] entries on my blog I'm trying to look for a 'connection' to Apache CXF. No exception this time:

    Apache CXF is not only a place where one can have a Web/HTTP service created, but also a place to go from a novice to an expert in building such services. CXF may not offer a way to create a Hello World application for you without doing anything at all, but it has been known to deliver in supporting the most demanding services. By the time developers have those services up and running, they have become the experts who know what it takes to write a service that works well. They have moved from the 'Innocence' of Hello World services to the 'Experience' required to support Real World services.

     

     
    Categories: Sergey Beryozkin

    CXF Master JAX-RS 2.1 Branch is Opened

    Sergey Beryozkin - Sun, 04/24/2016 - 18:17
    Good news for CXF JAX-RS users: Andriy Redko has opened a CXF master JAX-RS 2.1 branch. Server-Sent Events is the first 2.1 API feature supported on this branch. Having this 2.1 API snapshot is handy.
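    For the curious, here is a rough sketch of a Server-Sent Events resource using the SSE API as it was eventually finalized in JAX-RS 2.1; the early snapshot API on this branch may well have differed, and the resource class is hypothetical:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Context;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.sse.Sse;
    import javax.ws.rs.sse.SseEventSink;

    @Path("/events")
    public class EventsResource {

        @GET
        @Produces(MediaType.SERVER_SENT_EVENTS)
        public void stream(@Context SseEventSink sink, @Context Sse sse) {
            // Push a few events to the client and then close the sink
            try (SseEventSink s = sink) {
                for (int i = 0; i < 3; i++) {
                    s.send(sse.newEvent("event " + i));
                }
            }
        }
    }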

    The development of JAX-RS 2.1 has been frustratingly slow, but there is some progress nonetheless, with Jersey (the RI) expected to be ready as soon as realistically possible, given that all the major features proposed for JAX-RS 2.1 have already been implemented in Jersey.

    JAX-RS is easily the best API for building REST clients and servers. Despite the process difficulties it will continue evolving. Use it and believe more is to come in the JAX-RS space.
    Categories: Sergey Beryozkin

    Apache Karaf Tutorial part 10 - Declarative services

    Christian Schneider - Fri, 04/22/2016 - 17:16

    Blog post edited by Christian Schneider

    This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0.

    You can find the full source code on github Karaf-Tutorial/tasklist-ds

    Declarative Services

    Declarative Services (DS) is the biggest contender to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

    At the core, DS works with XML files to define SCR components and their dependencies. They typically live in the OSGI-INF directory and are announced in the manifest using the header "Service-Component" with the path to the component descriptor file. Luckily it is not necessary to work with this XML directly, as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin.

    <_dsannotations>*</_dsannotations>
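    For reference, that instruction sits in the maven-bundle-plugin configuration roughly like this (a sketch; the plugin version is omitted):

    <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <extensions>true</extensions>
        <configuration>
            <instructions>
                <_dsannotations>*</_dsannotations>
            </instructions>
        </configuration>
    </plugin>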

    For more details see http://www.aqute.biz/Bnd/Components

    DS vs Blueprint

    Let us look into DS by comparing it to the better-known blueprint. There are some important differences:

    1. Blueprint always works on a complete blueprint context. The context will be started when all mandatory service dependencies are present, and it then publishes all offered services. As a consequence a blueprint context cannot depend on services it offers itself. DS works on components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately, e.g. start and stop it. It is also possible that a bundle offers two components but only one is started, as the dependencies of the other are not yet there.
    2. DS supports the OSGi service dynamics better than blueprint. Let's look at a simple example:
      You have a DS component and a blueprint bean that each offer a service A and depend on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up it will fail after a timeout and will not be able to recover from this. Once the blueprint context is up it stays up even if the mandatory service goes away. This is called service damping and its goal is to avoid restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
      In DS, on the other hand, a component's lifecycle is directly bound to its dependent services. A component will only be activated when all mandatory services are present and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied and calls to it should always work.
    3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other, this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
    4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces. Examples are: Aries JPA, Aries transactions, Aries authz, CXF, Camel. So using these technologies in DS can be a bit more difficult.
    5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean. This is for example used for security as well as transaction handling. For this reason DS did not support JPA very well, as normal usage mandates interceptors. See below how JPA can work with DS.

    So whether DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with the other projects you use.

    JEE and JPA

    The JPA spec is based on JEE, which has a very special thread and interceptor model. In JEE you use session beans with a container-managed EntityManager
    to manipulate JPA entities. It looks like this:

    @Stateless
    class TaskServiceImpl implements TaskService {
        @PersistenceContext(unitName = "tasklist")
        private EntityManager em;

        public Task getTask(Integer id) {
            return em.find(Task.class, id);
        }
    }

    In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction will be committed, if there is an exception it will be rolled back.
    The calls go to a pool of TaskServiceImpl instances. Each of these instances will only be used by one thread at a time, which is why it is fine that the EntityManager interface is not thread safe.

    So the advantage of this model is that it looks simple and allows pretty small code. On the other hand it is a bit difficult to test such code outside a container, as you have to mimic the way the container works with this class. It is also difficult to access e.g. the em field,
    as it is private and there is no setter.

    Blueprint supports a coding style similar to the JEE example and implements this using a special jpa and tx namespace and
    interceptors that handle the transaction / em management.

    DS and JPA

    In DS each component is a singleton. So there is only one instance of it that needs to cope with multi threaded access. So working with the plain JEE concepts for JPA is not possible in DS.

    Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

    Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution here is the concept of a JpaTemplate together with support for closures in Java 8. To see what the code looks like, peek below at the Persistence chapter.

    Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We need to put the JPA code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JpaTemplate will then guarantee the same environment as JEE inside the closure. As each closure runs as its own
    instance there is one em per thread. The code will also participate in or create a transaction, and the transaction commit/rollback also works like in JEE.

    So this requires a little more code, but the advantage is that there is no need for a special framework integration.
    The code can also be tested much more easily. See TaskServiceImplTest in the example.

    Structure
    • features
    • model
    • persistence
    • ui
    Features

    Defines the karaf features to install the example as well as all necessary dependencies.

    Model

    This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of model see the tasklist-blueprint example. The model is exactly the same here.

    Persistence

    TaskServiceImpl

    @Component
    public class TaskServiceImpl implements TaskService {
        private JpaTemplate jpa;

        public Task getTask(Integer id) {
            return jpa.txExpr(em -> em.find(Task.class, id));
        }

        @Reference(target = "(osgi.unit.name=tasklist)")
        public void setJpa(JpaTemplate jpa) {
            this.jpa = jpa;
        }
    }

    We define a component that offers an OSGi service with the interface TaskService and that requires a JpaTemplate service with the property "osgi.unit.name" set to "tasklist".

    InitHelper

    @Component
    public class InitHelper {
        Logger LOG = LoggerFactory.getLogger(InitHelper.class);
        TaskService taskService;

        @Activate
        public void addDemoTasks() {
            try {
                Task task = new Task(1, "Just a sample task", "Some more info");
                taskService.addTask(task);
            } catch (Exception e) {
                LOG.warn(e.getMessage(), e);
            }
        }

        @Reference
        public void setTaskService(TaskService taskService) {
            this.taskService = taskService;
        }
    }

    The class InitHelper creates and persists a first task so the UI has something to show. It is also an example of what business code that works with the task service can look like.
    The @Reference annotation injects the TaskService into the taskService field (here via the setter).
    @Activate makes sure that addDemoTasks() is called once the component's dependencies have been injected.

    Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special
    persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate
    to avoid having to install a JTA transaction manager for the test. The test code shows that indeed the TaskServiceImpl can
    be used as plain java code without any special tricks.

    UI

    The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax Web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService, so it is available over HTTP.

    TaskListServlet

    @Component(immediate = true, service = { Servlet.class }, property = { "alias:String=/tasklist" })
    public class TaskListServlet extends HttpServlet {
        private TaskService taskService;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Actual code omitted
        }

        @Reference
        public void setTaskService(TaskService taskService) {
            this.taskService = taskService;
        }
    }

    The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

    The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist".
    So it is available on the url http://localhost:8181/tasklist.

    Build

    Make sure you use JDK 8 and run:

    mvn clean install

    Installation

    Make sure you use JDK 8.
    Download and extract Karaf 4.0.0.
    Start karaf and execute the commands below

    Create DataSource config and Install Example

    cat https://raw.githubusercontent.com/cschneider/Karaf-Tutorial/master/tasklist-blueprint-cdi/org.ops4j.datasource-tasklist.cfg | tac -f etc/org.ops4j.datasource-tasklist.cfg
    feature:repo-add mvn:net.lr.tasklist.ds/tasklist/1.0.0-SNAPSHOT/xml/features
    feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

    Validate Installation

    First we check that the JpaTemplate service is present for our persistence unit.

    service:list JpaTemplate

    [org.apache.aries.jpa.template.JpaTemplate]
    -------------------------------------------
     osgi.unit.name = tasklist
     transaction.type = JTA
     service.id = 164
     service.bundleid = 57
     service.scope = singleton
    Provided by : tasklist-model (57)
    Used by: tasklist-persistence (58)

    Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

    A likely problem would be that the DataSource is missing, so let's also check it:

    service:list DataSource

    [javax.sql.DataSource]
    ----------------------
     dataSourceName = tasklist
     felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
     osgi.jdbc.driver.name = H2-pool-xa
     osgi.jndi.service.name = tasklist
     service.factoryPid = org.ops4j.datasource
     service.pid = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
     url = jdbc:h2:mem:test
     service.id = 156
     service.bundleid = 113
     service.scope = singleton
    Provided by : OPS4J Pax JDBC Config (113)
    Used by: Apache Aries JPA container (62)

    This is how it should look. Pax-jdbc-config created the DataSource out of the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the property "osgi.jdbc.driver.name=H2-pool-xa". So the resulting DataSource should be pooled and fully ready for XA transactions.

    Next we check that the DS components started:

    scr:list

    ID | State  | Component Name
    --------------------------------------------------------------
    1  | ACTIVE | net.lr.tasklist.persistence.impl.InitHelper
    2  | ACTIVE | net.lr.tasklist.persistence.impl.TaskServiceImpl
    3  | ACTIVE | net.lr.tasklist.ui.TaskListServlet

    If any of the components is not active you can inspect it in detail like this:

    scr:details net.lr.tasklist.persistence.impl.TaskServiceImpl

    Component Details
      Name                : net.lr.tasklist.persistence.impl.TaskServiceImpl
      State               : ACTIVE
      Properties          :
        component.name=net.lr.tasklist.persistence.impl.TaskServiceImpl
        component.id=2
        Jpa.target=(osgi.unit.name=tasklist)
    References
      Reference           : Jpa
        State             : satisfied
        Multiple          : single
        Optional          : mandatory
        Policy            : static
        Service Reference : Bound Service ID 164

    Test

    Open the url below in your browser.
    http://localhost:8181/tasklist

    You should see a list with one task. You can add another task by opening the following url in your browser:

     http://localhost:8181/tasklist?add&taskId=2&title=Another Task

     

    Categories: Christian Schneider

    Karaf Tutorial Part 4 - CXF Services in OSGi

    Christian Schneider - Tue, 04/05/2016 - 09:03

    Blog post edited by Christian Schneider

    Shows how to publish and use a simple REST and SOAP service in karaf using cxf and blueprint.

    To run the example you need to install the http feature of karaf. The default http port is 8181 and can be configured using the
    config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base url of the cxf servlet is by default "/cxf".
    It can be configured in the config pid "org.apache.cxf.osgi".

    Differences in Talend ESB


    If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

    PersonService Example

    The "business case" is to manage a list of persons. As service should provide the typical CRUD operations. Front ends should be a REST service, a SOAP service and a web UI.

    The example consists of four projects

    • model: Person class and PersonService interface
    • server: Service implementation and logic to publish the service using jax-ws (SOAP)
    • proxy: Accesses the SOAP service and publishes it as an OSGi service
    • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

    You can find the full source on github: https://github.com/cschneider/Karaf-Tutorial/tree/master/cxf/personservice/

    Installation and test run

    First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

    Installing Karaf and preparing for CXF

    We start with a fresh Karaf 4.0.4

    Build and Test

    Checkout the project from github and build using maven

    mvn clean install

    Install service and ui in karaf

    feature:repo-add cxf 3.1.5
    feature:install http cxf-jaxws http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-proxy/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-webui/1.0-SNAPSHOT

    Test the service

    The person service should show up in the list of currently installed services that can be found here http://localhost:8181/cxf/

    Test the proxy and web UI

    http://localhost:8181/personui

    You should see the list of persons managed by the personservice and be able to add new persons.

    How it works

    Defining the model

    The model project is a simple java maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies on cxf. The service interface is just a plain java interface with the @WebService annotation.

    @WebService
    public interface PersonService {
        public abstract Person[] getAll();
        public abstract Person getPerson(String id);
        public abstract void updatePerson(String id, Person person);
        public abstract void addPerson(Person person);
    }

    The Person class is just a simple pojo with getters and setters for id, name and url and the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB what xml element to use for the Person class.
    There is also no special code for OSGi in this project. So the model works perfectly inside and outside of an OSGi container.
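    A minimal sketch of such an ObjectFactory might look like this (the element name and namespace are illustrative; the real one is in the example project):

    import javax.xml.bind.JAXBElement;
    import javax.xml.bind.annotation.XmlElementDecl;
    import javax.xml.bind.annotation.XmlRegistry;
    import javax.xml.namespace.QName;

    @XmlRegistry
    public class ObjectFactory {
        // Tells JAXB which xml element to use when marshalling a Person
        @XmlElementDecl(name = "person")
        public JAXBElement<Person> createPerson(Person value) {
            return new JAXBElement<Person>(new QName("person"), Person.class, value);
        }
    }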


    The service is defined java first. SOAP and REST are used quite transparently. This is very suitable for communication between a client and server of the same application. If the service
    is to be used by other applications the wsdl-first approach is more suitable. In this case the model project should be configured to generate the data classes and service interface from
    a wsdl (see the cxf wsdl_first example pom file). For REST services the java-first approach is quite common in general, as the client typically does not use proxy classes anyway.

    Service implementation (server)

    PersonServiceImpl is a java class that implements the service interface. The server project also contains a small starter class that allows the service to be published directly from eclipse. This class is not necessary for deployment in karaf.

    The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

    As the file is in the special location OSGI-INF/blueprint it is automatically processed by the blueprint implementation aries in karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from spring but apart from this the xml is very similar to a spring xml.

    Service proxy

    The service proxy project only contains a blueprint xml that uses the CXF JAX-WS client to consume the SOAP service and exports it as an OSGi service. Encapsulating the service client as an OSGi service (proxy project) is not strictly necessary, but it has the advantage that the webui is then completely independent of cxf, so it is very easy to change the way the service is accessed. This is considered a best practice in OSGi.

    See blueprint.xml

    Web UI (webui)

    This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The Pax Web whiteboard extender will then publish the servlet on the location /personui.
    The PersonServlet gets the PersonService injected and uses it to get all persons and also to add persons.

    The wiring is done using a blueprint context.

     

    PersonService REST

    The personservice REST example is very similar to the SOAP one, but it uses jaxrs to expose a REST service instead.

    The example can be found in github Karaf-Tutorial cxf personservice-rest. It contains these modules:

    • personservice-model: Interface PersonService and Person dto
    • personservice-server: Implements the service and publishes it using blueprint
    • personservice-webui: Simple servlet UI to show and add persons
    Build

    mvn clean install

    Install

    feature:repo-add cxf 3.1.5
    feature:install cxf-jaxrs http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-webui/1.0-SNAPSHOT

    How it works

    The interface of the service must contain jaxrs annotations to tell CXF how to map rest requests to the methods.

    @Produces(MediaType.APPLICATION_XML)
    public interface PersonService {
        @GET
        @Path("/")
        public Person[] getAll();

        @GET
        @Path("/{id}")
        public Person getPerson(@PathParam("id") String id);

        @PUT
        @Path("/{id}")
        public void updatePerson(@PathParam("id") String id, Person person);

        @POST
        @Path("/")
        public void addPerson(Person person);
    }

    In blueprint the implementation of the rest service needs to be published as a REST resource:

    <bean id="personServiceImpl" class="net.lr.tutorial.karaf.cxf.personrest.impl.PersonServiceImpl"/>

    <jaxrs:server address="/person" id="personService">
        <jaxrs:serviceBeans>
            <ref component-id="personServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:features>
            <cxf:logging />
        </jaxrs:features>
    </jaxrs:server>

     

    Test the service

    The person service should show up in the list of currently installed services that can be found here
    http://localhost:8181/cxf/

    List the known persons

    http://localhost:8181/cxf/person
    This should show one person "chris"

    Add a person

    Now, using a Firefox extension like Poster or HttpRequester, you can add a person.

    Send the following xml snippet:

    <?xml version="1.0" encoding="UTF-8"?>
    <person>
        <id>1001</id>
        <name>Christian Schneider</name>
        <url>http://www.liquid-reality.de</url>
    </person>

    with Content-Type:text/xml using PUT:http://localhost:8181/cxf/person/1001
    or to this url using POST:http://localhost:8181/cxf/person

    Now the list of persons should show two persons.

    Alternatively, send the content of server/src/test/resources/person1.xml to the same urls using PUT or POST.

    Test the web UI

    http://localhost:8181/personuirest

    You should see the list of persons managed by the personservice and be able to add new persons.

    Some further remarks

    The example uses blueprint instead of spring dm as it works much better in an OSGi environment. The bundles are created using the maven bundle plugin. A fact that shows how well blueprint works
    is that the maven bundle plugin is just used with default settings. In spring dm the imports have to be configured as spring needs access to many implementation classes of cxf. For spring dm examples
    take a look at the Talend Service Factory examples (https://github.com/Talend/tsf/tree/master/examples).

    The example shows that writing OSGi applications is quite simple with aries and blueprint. It needs only 153 lines of java code (without comments) for a complete little application.
    The blueprint xml is also quite small and readable.

    Back to Karaf Tutorials

    Categories: Christian Schneider

    Karaf Tutorial Part 4 - CXF Services in OSGi

    Christian Schneider - Thu, 03/31/2016 - 14:41

    Blog post edited by Christian Schneider

    Shows how to publish and use a simple REST and SOAP service in karaf using cxf and blueprint.

    To run the example you need to install the http feature of karaf. The default http port is 8080 and can be configured using the
    config admin pid "org.ops4j.pax.web". You also need to install the cxf feature. The base url of the cxf servlet is by default "/cxf".
    It can be configured in the config pid "org.apache.cxf.osgi".

    Differences in Talend ESB

    Icon

    If you use Talend ESB instead of plain karaf then the default http port is 8044 and the default cxf servlet name is "/services".

    PersonService Example

    The "business case" is to manage a list of persons. As service should provide the typical CRUD operations. Front ends should be a REST service, a SOAP service and a web UI.

    The example consists of four projects

    • model: Person class and PersonService interface
    • server: Service implementation and logic to publish the service using jax-ws (SOAP)
    • proxy: Accesses the SOAP service and publishes it as an OSGi service
    • webui: Provides a simple servlet based web ui to list and add persons. Uses the OSGi service

    You can find the full source on github: https://github.com/cschneider/Karaf-Tutorial/tree/master/cxf/personservice/

    Installation and test run

    First we build, install and run the example to give an overview of what it does. The following main chapter then explains in detail how it works.

    Installing Karaf and preparing for CXF

    We start with a fresh Karaf 4.0.4

    Build and Test

    Check out the project from GitHub and build it using Maven:

    mvn clean install

    Install service and ui in karaf

    feature:repo-add cxf 3.1.5
    feature:install http cxf-jaxws http-whiteboard
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-proxy/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personservice/personservice-webui/1.0-SNAPSHOT

    Test the service

    The person service should show up in the list of currently installed services that can be found here http://localhost:8181/cxf/

    Test the proxy and web UI

    http://localhost:8181/personui

    You should see the list of persons managed by the personservice and be able to add new persons.

    How it works

    Defining the model

    The model project is a simple Java Maven project that defines a JAX-WS service and a JAXB data class. It has no dependencies on CXF. The service interface is just a plain Java interface with the @WebService annotation.

    @WebService
    public interface PersonService {

        public abstract Person[] getAll();

        public abstract Person getPerson(String id);

        public abstract void updatePerson(String id, Person person);

        public abstract void addPerson(Person person);
    }

    The Person class is just a simple POJO with getters and setters for id, name and url plus the necessary JAXB annotations. Additionally you need an ObjectFactory to tell JAXB which XML element to use for the Person class.
    There is no OSGi specific code in this project, so the model works perfectly inside and outside of an OSGi container.
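    As a rough sketch (the field names follow the tutorial, while the annotations and everything else are typical JAXB boilerplate rather than the actual project code), the two classes could look like this:

    // Person.java - a simple POJO with JAXB annotations
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement(name = "person")
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Person {
        private String id;
        private String name;
        private String url;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getUrl() { return url; }
        public void setUrl(String url) { this.url = url; }
    }

    // ObjectFactory.java - registered factory that JAXB uses to create model objects
    import javax.xml.bind.annotation.XmlRegistry;

    @XmlRegistry
    public class ObjectFactory {
        public Person createPerson() {
            return new Person();
        }
    }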


    The service is defined Java first. SOAP and REST are used quite transparently. This is very suitable for communication between a client and a server of the same application. If the service
    is to be used by other applications the WSDL first approach is more suitable. In this case the model project should be configured to generate the data classes and the service interface from
    a WSDL (see the CXF wsdl_first example pom file). For REST services the Java first approach is quite common in general, as the client typically does not use proxy classes anyway.

    Service implementation (server)

    PersonServiceImpl is a Java class that implements the service interface. The server project also contains a small starter class that allows the service to be published directly from Eclipse; this class is not necessary for deployment in Karaf.
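    Such a starter class is essentially a one-liner around javax.xml.ws.Endpoint. The class name and address below are assumptions for illustration, not taken from the project:

    import javax.xml.ws.Endpoint;

    // Hypothetical starter for running the service outside OSGi, e.g. from Eclipse
    public class PersonServiceStarter {
        public static void main(String[] args) {
            Endpoint.publish("http://localhost:8080/personservice", new PersonServiceImpl());
            System.out.println("PersonService published, press Ctrl+C to stop");
        }
    }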

    The production deployment of the service is done in src/main/resources/OSGI-INF/blueprint/blueprint.xml.

    As the file is in the special location OSGI-INF/blueprint it is automatically processed by Aries, the blueprint implementation used in Karaf. The REST service is published using the jaxrs:server element and the SOAP service is published using the jaxws:endpoint element. The blueprint namespaces are different from Spring, but apart from that the XML is very similar to a Spring XML.
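    A sketch of what such a blueprint.xml could look like is shown below; the implementation class name, ids and addresses are assumptions and may differ from the file in the repository:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws"
               xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs">

        <bean id="personServiceImpl"
              class="net.lr.tutorial.karaf.cxf.personservice.impl.PersonServiceImpl"/>

        <!-- SOAP endpoint, published below the CXF servlet, e.g. /cxf/personServiceCxf -->
        <jaxws:endpoint implementor="#personServiceImpl" address="/personServiceCxf"/>

        <!-- REST endpoint -->
        <jaxrs:server id="personServiceRest" address="/person">
            <jaxrs:serviceBeans>
                <ref component-id="personServiceImpl"/>
            </jaxrs:serviceBeans>
        </jaxrs:server>
    </blueprint>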

    Service proxy

    The service proxy project only contains a blueprint XML that uses the CXF JAX-WS client to consume the SOAP service and exports it as an OSGi service. Encapsulating the service client as an OSGi service (proxy project) is not strictly necessary, but it has the advantage that the webui is then completely independent of CXF, so it is very easy to change the way the service is accessed. This is considered a best practice in OSGi.

    See blueprint.xml
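    Roughly, that blueprint combines a jaxws:client definition with an OSGi service export, along these lines (the interface name and address are assumptions):

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws">

        <jaxws:client id="personServiceClient"
                      serviceClass="net.lr.tutorial.karaf.cxf.personservice.model.PersonService"
                      address="http://localhost:8181/cxf/personServiceCxf"/>

        <service ref="personServiceClient"
                 interface="net.lr.tutorial.karaf.cxf.personservice.model.PersonService"/>
    </blueprint>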

    Web UI (webui)

    This project consumes the PersonService OSGi service and exports the PersonServlet as an OSGi service. The pax-web whiteboard extender will then publish the servlet at the location /personui.
    The PersonServlet gets the PersonService injected and uses it to list all persons and to add persons.

    The wiring is done using a blueprint context.
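    The wiring could look roughly like the following sketch; the class names and the alias service property are illustrative (the pax-web whiteboard picks up servlets registered as OSGi services):

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <reference id="personService"
                   interface="net.lr.tutorial.karaf.cxf.personservice.model.PersonService"/>

        <bean id="personServlet"
              class="net.lr.tutorial.karaf.cxf.personservice.webui.PersonServlet">
            <property name="personService" ref="personService"/>
        </bean>

        <service ref="personServlet" interface="javax.servlet.Servlet">
            <service-properties>
                <entry key="alias" value="/personui"/>
            </service-properties>
        </service>
    </blueprint>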

     

    PersonService REST

    The personservice REST example is very similar to the SOAP one, but it uses JAX-RS to expose a REST service instead.

    The example can be found in github Karaf-Tutorial cxf personservice-rest. It contains these modules:

    • personservice-model: Interface PersonService and Person dto
    • personservice-server: Implements the service and publishes it using blueprint
    • personservice-webui: Simple servlet UI to show and add persons
    Build

    mvn clean install

    Install

    feature:repo-add cxf 3.1.5
    feature:install cxf-jaxrs http
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-model/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-server/1.0-SNAPSHOT
    install -s mvn:net.lr.tutorial.karaf.cxf.personrest/personrest-webui/1.0-SNAPSHOT

    How it works

    The interface of the service must contain JAX-RS annotations to tell CXF how to map REST requests to the methods.

    @Produces(MediaType.APPLICATION_XML)
    public interface PersonService {

        @GET
        @Path("/")
        public Person[] getAll();

        @GET
        @Path("/{id}")
        public Person getPerson(@PathParam("id") String id);

        @PUT
        @Path("/{id}")
        public void updatePerson(@PathParam("id") String id, Person person);

        @POST
        @Path("/")
        public void addPerson(Person person);
    }

    In the blueprint XML the implementation of the service needs to be published as a REST resource:

    <bean id="personServiceImpl" class="net.lr.tutorial.karaf.cxf.personrest.impl.PersonServiceImpl"/> <jaxrs:server address="/person" id="personService"> <jaxrs:serviceBeans> <ref component-id="personServiceImpl" /> </jaxrs:serviceBeans> <jaxrs:features> <cxf:logging /> </jaxrs:features> </jaxrs:server>

     

    Test the service

    The person service should show up in the list of currently installed services that can be found here
    http://localhost:8181/cxf/

    List the known persons

    http://localhost:8181/cxf/person
    This should show one person "chris"

    Add a person

    Now you can add a person using a Firefox extension such as Poster or HttpRequester, or any other HTTP client.

    Send the following XML snippet:

    <?xml version="1.0" encoding="UTF-8"?> <person> <id>1001</id> <name>Christian Schneider</name> <url>http://www.liquid-reality.de</url> </person>

    with Content-Type: text/xml as a PUT to http://localhost:8181/cxf/person/1001
    or as a POST to http://localhost:8181/cxf/person.
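    If you prefer doing this from code rather than a browser extension, a small CXF WebClient snippet along these lines should work (the class name is made up; the URL and payload are the ones shown above):

    import javax.ws.rs.core.Response;

    import org.apache.cxf.jaxrs.client.WebClient;

    public class AddPersonClient {
        public static void main(String[] args) {
            String person = "<person><id>1001</id><name>Christian Schneider</name>"
                + "<url>http://www.liquid-reality.de</url></person>";
            // PUT the person XML to the REST resource for id 1001
            Response response = WebClient.create("http://localhost:8181/cxf/person/1001")
                .type("text/xml")
                .put(person);
            System.out.println("Status: " + response.getStatus());
        }
    }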

    Now the list of persons should show two persons.

    Alternatively, send the content of server/src/test/resources/person1.xml to one of the URLs above; again the list of persons should then show the new entry.

    Test the web UI

    http://localhost:8181/personuirest

    You should see the list of persons managed by the personservice and be able to add new persons.

    Some further remarks

    The example uses blueprint instead of Spring DM as it works much better in an OSGi environment. The bundles are created using the maven-bundle-plugin. One fact that shows how well blueprint works
    is that the maven-bundle-plugin is used with its default settings only. With Spring DM the package imports have to be configured by hand, as Spring needs access to many CXF implementation classes. For Spring DM examples
    take a look at the Talend Service Factory examples (https://github.com/Talend/tsf/tree/master/examples).

    The example shows that writing OSGi applications is quite simple with Aries and blueprint. Only 153 lines of Java code (without comments) are needed for a complete little application.
    The blueprint XML is also quite small and readable.

    Back to Karaf Tutorials

    View Online
    Categories: Christian Schneider

    Karaf Tutorial Part 5 - Running Apache Camel integrations in OSGi

    Christian Schneider - Mon, 03/28/2016 - 10:33

    Blog post edited by Christian Schneider

    Shows how to run your Camel routes in the OSGi server Apache Karaf. Like for CXF, blueprint is used to boot up Camel. The tutorial shows three examples - a simple blueprint route, a jms2rest adapter and an order processing example.

    Installing Karaf and making Camel features available
    • Download Karaf 4.0.4 and unpack to the file system
    • Start bin\karaf.bat or bin/karaf for unix

    In Karaf type:

    feature:repo-add camel 2.16.2
    feature:list

    You should see the camel features that are now ready to be installed.

    Getting and building the examples

    You can find the examples for this tutorial on github Karaf Tutorial - camel.

    So either clone the git repo or just download and unpack the zip of it. To build the code do:

    cd camel
    mvn clean install

    Starting simple with a pure blueprint deployment

    Our first example does not even require a Java project. In Karaf it is possible to deploy pure blueprint XML files. As Camel is well integrated with blueprint you can define a complete Camel context with routes in a simple blueprint file.

    simple-camel-blueprint.xml

    The blueprint XML for a Camel context is very similar to the same in Spring; mainly the namespaces are different. Blueprint discovers the dependency on Camel, so it will automatically require that at least the camel-blueprint feature is installed. The Camel components used in routes are discovered as OSGi services, so as soon as a Camel component is installed via the respective feature it is automatically available for use in routes.
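    Such a file boils down to something like the following sketch; the real simple-camel-blueprint.xml in the repository may differ in details such as the timer configuration:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

        <camelContext xmlns="http://camel.apache.org/schema/blueprint">
            <route id="simple">
                <!-- fire every 5 seconds and print "Hello Camel" to the console -->
                <from uri="timer:hello?period=5000"/>
                <setBody>
                    <constant>Hello Camel</constant>
                </setBody>
                <to uri="stream:out"/>
            </route>
        </camelContext>
    </blueprint>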

    So to install the above blueprint based camel integration you only have to do the following steps:

    feature:install camel-blueprint camel-stream

    Copy simple-camel-blueprint.xml to the deploy folder of karaf. You should now see "Hello Camel" written to the console every 5 seconds.

    The blueprint file will be automatically monitored for changes so any changes we do are directly reflected in Karaf. To try this open the simple-camel-blueprint.xml file from the deploy folder in an editor, change "stream:out" to "log:test" and save. Now the messages on the console should stop and instead you should be able to see "Hello Camel" in the Karaf log file formatted as a normal log line.

    JMS to REST Adapter (jms2rest)


    This example is not completely standalone. As a prerequisite, install the person service example as described in Karaf Tutorial 4.

    The example shows how to create a bridge from the messaging world to a REST service. It is simple enough that it could be done in a pure blueprint file like the example above, but as any bigger integration needs some Java code I opted to use a Java project in this case.

    As usual we mainly use the maven-bundle-plugin with default settings and the packaging type bundle to make the project OSGi ready. The Camel context is booted up using a blueprint file blueprint.xml and the routes are defined in the Java class Jms2RestRoute.

    Routes

    The first route watches the directory "in" and writes the content of any file placed there to the jms queue "person". It is not strictly necessary but makes it much simpler to test the example by hand.

    The second route is the real jms2rest adapter. It listens on the JMS queue "person" and expects XML content with persons like that used by the PersonService. In the route the id of the person is extracted from the XML and stored in a Camel message header. This header is then used to build the REST URI. As a last step the content of the message is sent to that REST URI with a PUT request, which tells the service to store the person with the given id and data.
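    A rough sketch of what such a RouteBuilder could look like is shown below. The route ids match the camel:route-list output shown later, while the XPath expression and the endpoint URIs are assumptions rather than the actual project code:

    import org.apache.camel.Exchange;
    import org.apache.camel.builder.RouteBuilder;

    public class Jms2RestRoute extends RouteBuilder {

        @Override
        public void configure() throws Exception {
            // route 1: push any file dropped into the "in" folder onto the person queue
            from("file:in").routeId("file2jms")
                .to("jms:person");

            // route 2: read person XML from the queue, extract the id into a header,
            // build the REST URI from the configured personServiceUri and PUT the payload
            from("jms:person").routeId("personJms2Rest")
                .setHeader("id").xpath("/person/id/text()", String.class)
                .setHeader(Exchange.HTTP_METHOD, constant("PUT"))
                .recipientList(simple("${properties:personServiceUri}/${header.id}"));
        }
    }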

    Use of Properties

    Besides the pure routes, the example shows some more typical things you need in Camel. It is good practice to externalize the URLs of the services we access; Camel uses the Properties component for this task.

    This enables us to write {{personServiceUri}} in endpoints or ${properties:personServiceUri} in the simple language.

    In a blueprint context the Properties component is automatically aware of properties injected from the config admin service. We use a cm:property-placeholder definition to inject the attributes of the config admin pid "net.lr.tutorial.karaf.cxf.personservice". As there might be no such pid, we also define a default value for the personServiceUri so the integration can be deployed without further configuration.
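    In the blueprint file this looks roughly like the sketch below; the default URI value is an assumption pointing at the person service from tutorial 4:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
               xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">

        <cm:property-placeholder persistent-id="net.lr.tutorial.karaf.cxf.personservice"
                                 update-strategy="reload">
            <cm:default-properties>
                <cm:property name="personServiceUri"
                             value="http://localhost:8181/cxf/person"/>
            </cm:default-properties>
        </cm:property-placeholder>

        <!-- camelContext and further beans omitted -->
    </blueprint>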

    JMS Component

    We are using the camel-jms component in our routes. This is one of the few components that need further configuration to work. We do this in the blueprint context as well, by defining a JmsComponent and injecting a connection factory into it. In OSGi it is good practice not to define connection factories or data sources directly in the bundle; instead we simply refer to one using an OSGi service reference.
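    Inside the blueprint context this boils down to two elements, roughly as sketched here (the ids are illustrative):

    <!-- fragment of the blueprint.xml -->
    <reference id="connectionFactory" interface="javax.jms.ConnectionFactory"/>

    <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
        <property name="connectionFactory" ref="connectionFactory"/>
    </bean>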

    Deploying and testing the jms2rest Adapter

    Just type the following in Karaf:

    feature:repo-add activemq 5.12.2
    feature:repo-add camel 2.16.2
    feature:install camel-blueprint camel-jms camel-http camel-saxon activemq-broker jms
    jms:create -t activemq localhost
    install -s mvn:net.lr.tutorial.karaf.camel/example-jms2rest/1.0-SNAPSHOT

    This installs the ActiveMQ and Camel feature repositories and the required features in Karaf. The jms:create command creates a broker definition in the deploy folder; this broker is then automatically started. The broker definition also publishes an OSGi service for a suitable connection factory, which is then referenced by our bundle.

    As a last step we install our own bundle with the camel route.

    Now the route should be visible when typing:

    > camel:route-list
    Route Id           Context Name   Status
    [file2jms        ] [jms2rest    ] [Started]
    [personJms2Rest  ] [jms2rest    ] [Started]

    Now copy the file src/test/resources/person1.xml to the folder "in" below the Karaf directory. The file should be sent to the queue "person" by the first route and then sent to the REST service by the second route.

    In case the personservice is installed you should now see a message like "Update request received for ...". In case it is not installed you should see a 404 error in the Karaf log when the REST service is accessed.

    Order processing example

    The business case in this example is a shop that partly works with external vendors.

    We receive an order as an XML file (see order1.xml). The order contains a customer element and several item elements. Each item specifies a vendor; this can either be "direct", when we deliver the item ourselves, or the name of an external vendor. If the item vendor is "direct" then the item should be exported to a file in a directory named after the customer. All other items are sent out by mail. The mail content should be customizable, and the mail address has to be fetched from a service that maps vendor names to mail addresses.

    How it works

    This example again uses Maven to build, a blueprint.xml context to boot up Camel and a Java class OrderRouteBuilder for the Camel routes. So from an OSGi perspective it works almost the same as the jms2rest example.

    The routes are defined in net.lr.tutorial.karaf.camel.order.OrderRouteBuilder. The "order" route listens on the directory "orderin" and expects XML order files to be placed there. The route uses XPath to extract several attributes of the order into message headers. A splitter is used to handle each item (/order/item) separately. Then a content based router is used to handle "direct" items differently from the others.

    In the case of a direct item the recipient list pattern is used to build the destination folder dynamically using a simple language expression.

    recipientList(simple("file:ordersout/${header.customer}"))

    If the vendor is not "direct" then the route "mailtovendor" is called to create and send a mail to the vendor. Some subject and to address are set using special header names that the mail component understands. The content of the mail is expected in the message body. As the body also should be comfigureable the velocity component is used to fill the mailtemplate.txt with values from the headers that were extracted before.

    Deploy into karaf

    The deployment is also very similar to the previous example but a little simpler, as we do not need JMS. Type the following in Karaf:

    feature:repo-add camel 2.16.2
    feature:install camel-blueprint camel-mail camel-velocity camel-stream
    install -s mvn:net.lr.tutorial.karaf.camel/example-order/1.0-SNAPSHOT

    To be able to receive the mail you have to edit the configuration pid. You can do this either by placing a properties file
    into etc/net.lr.tutorial.karaf.cxf.personservice.cfg or by editing the config pid using the Karaf webconsole (see part 2 and part 3 of the Karaf Tutorial series).

    Basically you have to set these two properties according to your own mail environment.

    mailserver=yourmailserver.com
    testVendorEmail=youmail@yourdomain.com

    Test the order example

    Copy the file order1.xml into the folder "ordersin" below the karaf dir.

    The Karaf console will show:

    Order from Christian Schneider
    Count: 1, Article: Flatscreen TV

    The same should be in a mail in your inbox. At the same time a file should be created in ordersout/Christian Schneider/order1.xml that contains the book item.

    Wrapping it up and outlook

    The examples show that fairly sophisticated integrations can be done using Camel and nicely deployed in an Apache Karaf container. The examples also show some best practices around configuration management, JMS connection factories and templates for customization, and they should provide a good starting point for your own integration projects. Many people are a bit hesitant about using OSGi in production; I hope these simple examples show how easy it is in practice. Still, problems can arise of course. For that case it is advisable to think about getting a support contract from a vendor like Talend. The whole Talend Integration portfolio is based on Apache Karaf, so we are quite experienced in this area.

    I have left out one big use case for Apache Camel in this tutorial - Database integrations. This is a big area and warrants a separate tutorial that will soon follow. There I will also explain how to handle DataSources and Connection Factories with drivers that are not already OSGi compliant.

    Back to Karaf Tutorials

    View Online
    Categories: Christian Schneider

    New Kid On The Block: Fediz OpenId Connect

    Sergey Beryozkin - Wed, 03/16/2016 - 19:02
    Apache Fediz, Identity Provider for the WEB, was created by Oliver Wulff and during the last few years, with the major support from Colm and Jan, has become quite a popular provider for supporting SSO with the help of the WS-Federation Profile.

    Before I continue, I'd like to clarify that even though WS-Federation is obviously related to SOAP, the important thing is that as far as the user experience is concerned, it is pure SSO. For example, AFAIK, a Microsoft Office Outlook login process is currently WS-Fed aware.

    But OpenId Connect (OIDC) is a new SSO star for the WEB, with all of the software industry players with SSO-related interests supporting it, as far as I can see.

    OIDC really shines. I was talking about something similar before in the context of the JOSE work: it has really been designed by some of the best security and web experts in the industry. And OIDC is still a very bleeding edge development as far as mainstream adoption by the software industry is concerned. Google, Microsoft, and other top companies have created OIDC servers, but what if you want your own OIDC server?

    Fediz OpenId Connect (Fediz OIDC) is the new project that Colm, Jan and myself started working on back in November 2015, and it joins a family of OIDC-focused projects that are appearing probably every month in various developer communities.

    As you can imagine we are at the start of a rather long road. OIDC is great but is undoubtedly complex to implement right. We've made good progress so far and most of OIDC Core is supported OOB, something that you can try right now.

    Apache CXF OAuth2 and OIDC authorization modules are linked to a flexible Fediz IDP (Authentication System) with the minimum amount of code. We will be working on making it all more feature complete, robust, configurable, customizable, production ready.

    We are planning to talk about Fediz OIDC a lot more going forward.

    Stay tuned !

    Categories: Sergey Beryozkin

    Using the CXF failover feature to authenticate to multiple Apache Syncope instances

    Colm O hEigeartaigh - Wed, 03/02/2016 - 21:35
    A couple of years ago, I described a testcase that showed how an Apache CXF web service endpoint could send a username/password received via WS-Security to Apache Syncope for authentication. In this article, I'll extend that testcase to make use of the CXF failover feature. The failover feature allows the client to use a set of alternative addresses when the primary endpoint address is unavailable. For the purposes of the demo, instead of a single Apache Syncope instance, we will set up two instances which share the same internal storage. When the first/primary instance goes down, the failover feature will automatically switch to use the second instance.

    1) Set up the Apache Syncope instances

    1.a) Set up a database for Internal Storage

    Apache Syncope persists internal storage to a database via Apache OpenJPA. For the purposes of this demo, we will set up MySQL. Install MySQL in $SQL_HOME and create a new user for Apache Syncope. We will create a new user "syncope_user" with password "syncope_pass". Start MySQL and create a new Syncope database:
    • Start: sudo $SQL_HOME/bin/mysqld_safe --user=mysql
    • Log on: $SQL_HOME/bin/mysql -u syncope_user -p
    • Create a Syncope database: create database syncope; 
    1.b) Set up containers to host Apache Syncope

    We will deploy Syncope to Apache Tomcat. Download Tomcat + extract it twice (calling it first-instance + second-instance). In both instances, edit 'conf/context.xml', and uncomment the "<Manager pathname="" />" configuration. Also in 'conf/context.xml', add a datasource for internal storage:

    <Resource name="jdbc/syncopeDataSource" auth="Container"
        type="javax.sql.DataSource"
        factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
        testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
        validationQuery="SELECT 1" validationInterval="30000"
        maxActive="50" minIdle="2" maxWait="10000" initialSize="2"
        removeAbandonedTimeout="20000" removeAbandoned="true"
        logAbandoned="true" suspectTimeout="20000"
        timeBetweenEvictionRunsMillis="5000" minEvictableIdleTimeMillis="5000"
        jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;
        org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer"
        username="syncope_user" password="syncope_pass"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://localhost:3306/syncope?characterEncoding=UTF-8"/>

    The next step is to enable a way to deploy applications to Tomcat using the Manager app. Edit 'conf/tomcat-users.xml' in both instances and add the following:

    <role rolename="manager-script"/>
    <user username="manager" password="s3cret" roles="manager-script"/>

    Next, download the JDBC driver jar for MySQL and put it in Tomcat's 'lib' directory in both instances. Edit 'conf/server.xml' of the second instance, and change the port to "9080", and change the other ports to avoid conflict with the first Tomcat instance. Now start both Tomcat instances.

    1.c) Install Syncope to the containers

    Download and run the Apache Syncope installer. Install it to Tomcat using MySQL as the database. For more info on this, see a previous tutorial. Run this twice to install Syncope in both Apache Tomcat instances.

    1.d) Configure the container to share the same database

    Next we need to configure both containers to share the same database. Edit 'webapps/syncope/WEB-INF/classes/persistenceContextEMFactory.xml' in the first instance, and change the 'openjpa.RemoteCommitProvider' to:
    • <entry key="openjpa.RemoteCommitProvider" value="tcp(Port=12345, Addresses=127.0.0.1:12345;127.0.0.1:12346)"/>
    Similarly, change the value in the second instance to:
    • <entry key="openjpa.RemoteCommitProvider" value="tcp(Port=12346, Addresses=127.0.0.1:12345;127.0.0.1:12346)"/>
    This is necessary to ensure data consistency across the two Syncope instances. Please see the Syncope cluster page for more information.

    1.e) Add users

    In the first Tomcat instance running on port 8080, go to http://localhost:8080/syncope-console, and add two new roles "employee" and "boss". Add two new users, "alice" and "bob" both with password "security". "alice" has both roles, but "bob" is only an "employee". Now logout, and login to the second instance running on port 9080. Check that the newly created users are available.

    2) The CXF testcase

    The CXF testcase is available in github:
    • cxf-syncope-failover: This project contains a number of tests that show how an Apache CXF service endpoint can use the CXF Failover feature, to authenticate to different Apache Syncope instances.
    A CXF client sends a SOAP UsernameToken to a CXF Endpoint. The CXF Endpoint has been configured (see cxf-service.xml) to validate the UsernameToken via the SyncopeUTValidator, which dispatches the username/passwords to Syncope for authentication via Syncope's REST API.

    The SyncopeUTValidator is configured to use the CXF failover feature with the address of the primary Syncope instance ("first-instance" above running on 8080). It is also configured with a list of alternative addresses to try if the first instance is down (in this case the "second-instance" running on 9080).
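    For reference, wiring the CXF failover feature into a JAX-RS client programmatically looks roughly like the sketch below; the Syncope REST addresses correspond to the two local instances from this setup, but the exact addresses and client setup used by the testcase may differ:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.cxf.clustering.FailoverFeature;
    import org.apache.cxf.clustering.SequentialStrategy;
    import org.apache.cxf.feature.Feature;
    import org.apache.cxf.jaxrs.client.JAXRSClientFactoryBean;
    import org.apache.cxf.jaxrs.client.WebClient;

    public class SyncopeClientFactory {

        public WebClient createClient() {
            // try the primary instance first, then fail over to the alternate address
            SequentialStrategy strategy = new SequentialStrategy();
            strategy.setAlternateAddresses(
                Arrays.asList("http://localhost:9080/syncope/rest"));

            FailoverFeature failover = new FailoverFeature();
            failover.setStrategy(strategy);

            List<Feature> features = new ArrayList<>();
            features.add(failover);

            JAXRSClientFactoryBean bean = new JAXRSClientFactoryBean();
            bean.setAddress("http://localhost:8080/syncope/rest");
            bean.setFeatures(features);
            return bean.createWebClient();
        }
    }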

    The test makes two invocations. The first should successfully authenticate to
    the first Syncope instance. Now the test sleeps for 10 seconds after prompting
    you to kill the first Syncope instance. It should successfully failover to the
    second Syncope instance on the second invocation.
    Categories: Colm O hEigeartaigh

    Support for OpenId Connect protocol bridging in Apache CXF Fediz 1.3.0

    Colm O hEigeartaigh - Fri, 02/26/2016 - 15:49
    Apache CXF Fediz 1.3.0 will be released in the near future. One of the new features of Fediz 1.2.0 (released last year) was the ability to act as an identity broker with a SAML SSO IdP. In the 1.3.0 release, Apache CXF Fediz will have the ability to act as an identity broker with an OpenId Connect IdP. In other words, the Fediz IdP can act as a protocol bridge between the WS-Federation and OpenId Connect protocols. In this article, we will look at an interop test case with Keycloak.

    1) Install and configure Keycloak

    Download and install the latest Keycloak distribution (tested with 1.8.0). Start keycloak in standalone mode by running 'sh bin/standalone.sh'.

    1.1) Create users in Keycloak

    First we need to create an admin user by navigating to the following URL, and entering a password:
    • http://localhost:8080/auth/
    Click on the "Administration Console" link, logging on using the admin user credentials. You will see the configuration details of the "Master" realm. For the purposes of this demo, we will create a new realm. Hover the mouse pointer over "Master" in the top left-hand corner, and click on "Add realm". Create a new realm called "realmb". Now we will create a new user in this realm. Click on "Users" and select "Add User", specifying "alice" as the username. Click "save" and then go to the "Credentials" tab for "alice", and specify a password, unselecting the "Temporary" checkbox, and reset the password.

    1.2) Create a new client application in Keycloak

    Now we will create a new client application for the Fediz IdP in Keycloak. Select "Clients" in the left-hand menu, and click on "Create". Specify the following values:
    • Client ID: realma-client
    • Client protocol: openid-connect
    • Root URL: https://localhost:8443/fediz-idp/federation
    Once the client is created you will see more configuration options:
    • Select "Access Type" to be "confidential".
    Now go to the "Credentials" tab of the newly created client and copy the "Secret" value. This will be required in the Fediz IdP to authenticate to the token endpoint in Keycloak.

    1.3) Export the Keycloak signing certificate

    Finally, we need to export the Keycloak signing certificate so that the Fediz IdP can validate the signed JWT Token from Keycloak. Select "Realm Settings" (for "realmb") and click on the "Keys" tab. Copy and save the value specified in the "Certificate" textfield.

    1.4) Testing the Keycloak configuration

    It's possible to see the Keycloak OpenId Connect configuration by navigating to:
    • http://localhost:8080/auth/realms/realmb/.well-known/openid-configuration
    This tells us what the authorization and token endpoints are, both of which we will need to configure the Fediz IdP. To test that everything is working correctly, open a web browser and navigate to:
    • http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth?response_type=code&client_id=realma-client&redirect_uri=https://localhost:8443/fediz-idp/federation&scope=openid
    Login using the credentials you have created for "alice". Keycloak will then attempt to redirect to the given "redirect_uri" and so the browser will show a connection error message. However, copy the URL + extract the "code" query String. Open a terminal and invoke the following command, substituting in the secret and code extracted above:
    • curl -u realma-client:<secret> --data "client_id=realma-client&grant_type=authorization_code&code=<code>&redirect_uri=https://localhost:8443/fediz-idp/federation" http://localhost:8080/auth/realms/realmb/protocol/openid-connect/token
    You should see a successful response containing (amongst other things) the OAuth 2.0 Access Token and the OpenId Connect IdToken, containing the user identity.

    2) Install and configure the Apache CXF Fediz IdP and sample Webapp

    Follow a previous tutorial to deploy the latest Fediz IdP + STS to Apache Tomcat, as well as the "simpleWebapp". Note that you will need to use Fediz 1.3.0 here (or the latest SNAPSHOT version) for OpenId Connect support. Test that the "simpleWebapp" is working correctly by navigating to the following URL (selecting "realm A" at the IdP, and authenticating as "alice/ecila"):
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    2.1) Configure the Fediz IdP to communicate with Keycloak

    Now we will configure the Fediz IdP to authenticate the user in "realm B" by using the OpenId Connect protocol. Edit 'webapps/fediz-idp/WEB-INF/classes/entities-realma.xml'. In the 'idp-realmA' bean:
    • Change the port in "idpUrl" to "8443". 
    In the 'trusted-idp-realmB' bean:
    • Change the "url" value to "http://localhost:8080/auth/realms/realmb/protocol/openid-connect/auth".
    • Change the "protocol" value to "openid-connect-1.0".
    • Change the "certificate" value to "keycloak.cert". 
    • Add the following parameters Map, filling in a value for the client secret extracted above:

      <property name="parameters">
          <util:map>
              <entry key="client.id" value="realma-client"/>
              <entry key="client.secret" value="<secret>"/>
              <entry key="token.endpoint" value="http://localhost:8080/auth/realms/realmb/protocol/openid-connect/token"/>
          </util:map>
      </property>
    2.2) Configure Fediz to use the Keycloak signing certificate

    Copy 'webapps/fediz-idp/WEB-INF/classes/realmb.cert' to a new file called 'webapps/fediz-idp/WEB-INF/classes/keycloak.cert'. Edit this file + delete the content between the "-----BEGIN CERTIFICATE----- / -----END CERTIFICATE-----" tags, pasting instead the Keycloak signing certificate as retrieved in step "1.3" above.

    Restart Fediz to pick up the changes (you may need to remove the persistent storage first).

    3) Testing the service

    To test the service navigate to:
    • https://localhost:8443/fedizhelloworld/secure/fedservlet
    Select "realm B". You should be redirected to the Keycloak authentication page. Enter the user credentials you have created. You will be redirected to Fediz, where it converts the received JWT token to a token in the realm of Fediz (realm A) and redirects to the web application.
    Categories: Colm O hEigeartaigh
