Latest Activity

Bean Validation for CXF JAX-RS Proxies

Sergey Beryozkin - Tue, 11/29/2016 - 14:01
If you work with CXF JAX-RS Client proxies and have been thinking for a while that it would be good to have the proxy method parameters validated with Bean Validation annotations before the remote invocation is made, then the good news is that, starting from CXF 3.1.9 (which is due soon), it is easy to do - please have a look at this test.
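
As a rough sketch (the BookStore interface, Book bean and address below are hypothetical, and the client-side validation feature itself is registered as shown in the referenced test), a validated proxy could look like this:

import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

// Hypothetical proxy interface: the parameters carry standard Bean Validation constraints
@Path("store")
public interface BookStore {

    @POST
    @Path("books")
    void addBook(@NotNull @Valid Book book);
}

// (in a separate file) A hypothetical entity whose fields are validated via @Valid above
public class Book {
    @NotNull
    private String title;
    // getters and setters omitted
}

The proxy itself is created in the usual way, e.g. JAXRSClientFactory.create("http://localhost:8080/services", BookStore.class), and with the client-side validation enabled a null or invalid Book is rejected before any HTTP request is sent to the remote server.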

If you have code which is considered safe with respect to how it initializes the entities that will be posted to the remote targets, then client-side bean validation might not be needed.

It can be more useful for cases where the proxies collect the data from databases or some other external sources - using a bean validation check to minimize the risk of posting poorly initialized data can indeed help.

I'd like to thank Johannes Fiala for encouraging us to have this feature implemented.

Categories: Sergey Beryozkin

Home Realm Discovery in the Apache CXF Fediz IdP

Colm O hEigeartaigh - Fri, 11/11/2016 - 17:42
When a client application (secured via either WS-Federation or SAML SSO) redirects a user to the Apache CXF Fediz IdP, the IdP must figure out what the home realm of the user is. If the home realm of the user corresponds to the realm of the IdP, then the IdP can authenticate the user. However, if the home realm does not match that of the IdP, then the IdP has the option to forward the authentication request to a third party IdP for authentication, if it is configured to do this. In this post, we will look at the different options available in the IdP to figure out what the home realm of the user is.

1) The 'whr' query parameter

When using the WS-Federation protocol, the application can specify the home realm of the user by adding the 'whr' query parameter to the URI that the browser is redirected to. Alternatively, the 'whr' query parameter could be added by a reverse proxy sitting in front of the IdP. Here is an example of such a URI including a 'whr' query parameter:
  • https://localhost:45753/fediz-idp-realmb/federation?wa=wsignin1.0&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Aidp%3Arealm-A&wreply=https%3A%2F%2Flocalhost%3A43618%2Ffediz-idp%2Ffederation&whr=urn:org:apache:cxf:fediz:idp:realm-B&wfresh=10&wctx=c07a5b9a-e270-4855-9201-fc1801851cc9
2) The 'hrds' configuration option in the IdP

If no 'whr' query parameter is available (this will always be the case for SAML SSO), then the IdP attempts to find out the home realm of the user by querying the "hrds" property of the IdP. This is a Spring Expression Language expression that is evaluated on the Spring WebFlow RequestContext.

For an example of how this can be used, let's look at the tests in Fediz for the SAML SSO IdP when redirecting to a trusted third party IdP. As there is no 'whr' query parameter for SAML SSO, we instead define a class with a static method that maps application realms to home realms (a sketch of such a class is shown after the configuration below). The application realm is available in the IdP, as the SAML SSO AuthnRequest has already been parsed at this point (it corresponds to the "Issuer" of the AuthnRequest). So we can specify the hrds configuration option in the IdP as follows:
  • <property name="hrds" value="T(org.apache.cxf.fediz.integrationtests.RealmMapper).realms().get(getFlowScope().get('saml_authn_request').issuer)" />
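
A minimal sketch of what such a mapper class might look like (the realm values below are illustrative; the class name and static realms() method match the expression above):

package org.apache.cxf.fediz.integrationtests;

import java.util.HashMap;
import java.util.Map;

public class RealmMapper {

    // Maps an application realm (the Issuer of the SAML SSO AuthnRequest)
    // to the home realm of its users
    public static Map<String, String> realms() {
        Map<String, String> realms = new HashMap<>();
        realms.put("urn:org:apache:cxf:fediz:fedizhelloworld",   // application realm (illustrative)
                   "urn:org:apache:cxf:fediz:idp:realm-B");      // home realm (illustrative)
        return realms;
    }
}
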
3) Via a form

If no 'whr' query parameter is available, and no 'hrds' configuration option is specified, then the IdP will display a form where the user can select the home realm. The IdP only does this if the configuration option "provideIdpList" is set to true. If it is set to false, then the current IdP is assumed to be the home realm IdP, unless the configuration option "useCurrentIdp" is also set to "false", in which case an error is displayed. The form lists the known trusted IdP realms of this IdP, from which the user can select the home realm.


Categories: Colm O hEigeartaigh

Support for IdP-initiated SAML SSO in Apache CXF

Colm O hEigeartaigh - Fri, 11/04/2016 - 18:26
Previous blog posts have covered how to secure your JAX-RS web applications in Apache CXF using SAML SSO. Since the 3.1.8 release, Apache CXF also supports IdP-initiated SAML SSO. The typical use-case for SAML SSO involves the browser invoking on a JAX-RS application, and then being redirected to an IdP for authentication, which subsequently redirects the browser back to the application. However, sometimes a user will log on first to the IdP and then want to invoke on a web application. In this post we will show how to configure SAML SSO for a CXF-based web application to support the IdP-initiated flow, by demonstrating an interop test-case with Okta.

1) Configuring a SAML application in Okta

The first step is to create an account at Okta and configure a SAML application. This process is mapped out at the following link. Follow the steps listed on this page with the following additional changes:
  • Specify the following for the Single Sign On URL and audience URI: http://localhost:8080/fedizdoubleit/racs/sso
  • Specify the following for the default RelayState: http://localhost:8080/fedizdoubleit/app1/services/25
  • Add an additional attribute with name "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" and value "Manager".
The RequestAssertionConsumerService will process the SAML Response from Okta. However, it doesn't know where to subsequently send the browser. Therefore, we are configuring the RelayState parameter to encode the URL of the end application. In addition, our test application requires that the user has a specific role to invoke upon it, hence we add a "Manager" attribute with the URI corresponding to a role.

When the application is configured, you will see an option to "View Setup Instructions". Open this link in a new tab and set it aside for the moment - it contains information required when setting up the web application. Now click on the "People" tab and assign the application to the username that you have created at Okta.

2) Setting up the SAML SSO-enabled web application

We will use a trivial "double it" web application which I wrote previously to demonstrate the SAML SSO capabilities of Apache CXF Fediz. The web application is available here. Build the web application and deploy it in Apache Tomcat. You will need to edit 'webapps/fedizdoubleit/WEB-INF/cxf-service.xml'.

a) SamlRedirectBindingFilter configuration changes

First let's look at the changes which are required to the 'SamlRedirectBindingFilter':
  • Remove "idpServiceAddress" and "assertionConsumerServiceAddress". These aren't required as we are only supporting the IdP-initiated flow.
  • Also remove the "signRequest", "signaturePropertiesFile", "callbackHandler", "signatureUsername" and "issuerId" properties.
  • Add <property name="addWebAppContext" value="false"/>
  • Add <property name="supportUnsolicited" value="true"/>

b) RequestAssertionConsumerService (RACS) configuration changes

Now add the following properties to the "RequestAssertionConsumerService":
  • <property name="supportUnsolicited" value="true"/>
  • <property name="idpServiceAddress" value="..."/>
  • <property name="issuerId" value="http://localhost:8080/fedizdoubleit/racs/sso"/>
  • <property name="parseApplicationURLFromRelayState" value="true"/>
Paste in the value for "idpServiceAddress" from the "Identity Provider Single Sign-On URL" given in the "View Setup Instructions" page referenced above.
c) Adding Okta cert into the RACS truststore

As things stand, the SAML Response from Okta to the RequestAssertionConsumerService will fail, as the RACS will not trust the certificate Okta uses to sign the SAML Response. Therefore we need to insert the Okta cert into the truststore of the RACS. Copy the "X.509 Certificate" value from the "View Setup Instructions" page referenced earlier. Create a file called 'webapps/fedizdoubleit/WEB-INF/classes/okta.cert' and paste the certificate contents into this file. Import it into the truststore via:
  • keytool -keystore stsrealm_a.jks -storepass storepass -importcert -file okta.cert
At this point we should be all done. Click on the box for the application you have created in Okta. You should be redirected to the CXF RACS, which validates the SAML Response, and in turn redirects to the application.


Categories: Colm O hEigeartaigh

Client Credentials grant support in the Apache CXF Fediz OIDC service

Colm O hEigeartaigh - Wed, 11/02/2016 - 18:59
Apache CXF Fediz ships with a powerful and flexible OpenId Connect (OIDC) service that can be used to implement SSO for your organisation. All of the core OIDC flows are supported - Authorization Code flow, Implicit and Hybrid flows. As OIDC is just an identity layer over OAuth 2.0, it's possible to use Fediz as a purely OAuth 2.0 service as well, and all of the authorization grants defined in the spec are also fully supported. In this post we will look at support for one of these authorization grants in Fediz 1.3.1 - the client credentials grant.

1) The OAuth 2.0 client credentials grant

The client credentials grant is used when the client is requesting access to a resource that is owned or controlled by that client. There is no end user in this scenario, unlike, say, the authorization code flow or implicit flow. The client simply calls the token endpoint of the authorization service using "client_credentials" for the "grant_type" parameter. In addition, the client must authenticate (e.g. by supplying the client_id and client_secret parameters). The authorization service authenticates the client and then returns an access token.
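
For illustration only (not Fediz-specific), here is how a client might call such a token endpoint using the standard JAX-RS 2.0 client API; the endpoint URL and credentials are placeholders:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Form;
import javax.ws.rs.core.Response;

public class ClientCredentialsExample {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        Form form = new Form()
            .param("grant_type", "client_credentials")
            .param("client_id", "my-client-id")           // placeholder
            .param("client_secret", "my-client-secret");  // placeholder

        // POST the form to the token endpoint of the authorization service (placeholder URL)
        Response response = client.target("https://localhost:8443/fediz-oidc/oauth2/token")
            .request("application/json")
            .post(Entity.form(form));

        // On success the JSON response contains the access token,
        // e.g. {"access_token":"...","token_type":"Bearer","expires_in":3600}
        System.out.println(response.readEntity(String.class));
    }
}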

2) Supporting the client credentials grant in Fediz OIDC

It's easy to support the client credentials grant in the Fediz OIDC service.

a) Add the ClientCredentialsGrantHandler

Firstly, the ClientCredentialsGrantHandler must be added to the list of grant handlers supported by the token service as follows:

b) Add a way of authenticating the client

The next step is to add a way of authenticating the client credentials. Fediz uses JAAS to make it easy for the deployer to plug in different JAAS LoginModules if required. To configure JAAS, you must specify the name of the JAAS LoginModule in the configuration of the OAuthDataProviderImpl:

c) Example JAAS configuration

For the normal OIDC flows, the Fediz OIDC service uses a WS-Federation filter to redirect the browser to the Fediz IdP, where the end user is then ultimately authenticated by the STS that is bundled with Fediz. Therefore it seems like a natural fit to re-use the STS to authenticate the client in the Fediz OIDC. Follow steps (a) and (b) above. Start the Fediz STS, but before starting the OIDC service, specify the "java.security.auth.login.config" system property to point to the following JAAS configuration file:

You must substitute the correct port for "${idp.https.port}". The STSLoginModule takes the given username and password supplied by the client and uses them to authenticate to the STS.
Categories: Colm O hEigeartaigh

Ultra Trail Australia 2016

Olivier Lamy - Mon, 10/31/2016 - 08:47
So on 14th May 2016 I managed to finish the Ultra Trail Australia 2016 in the Blue Mountains (NSW). It's 100km with 4500m of elevation gain and descent.
Well, a few months later, I won't really write a race report but will only post a link to the video I made.

Ultra Trail Australia 2016 100km from olamy on Vimeo.


I really enjoyed it!! Hard training made it "easy", except for a few kilometers with stomach pain (from the 35km mark to the 56km mark).
And I'm crazy enough to register again for 2017.
Goal: improve my time!!
Categories: Olivier Lamy

Switching authentication mechanisms in the Apache CXF Fediz STS

Colm O hEigeartaigh - Wed, 10/26/2016 - 17:56
Apache CXF Fediz ships with an Identity Provider (IdP) that can authenticate users via either the WS-Federation or SAML SSO protocols. The IdP delegates user authentication to a Security Token Service (STS) web application using the WS-Trust protocol. The STS implementation in Fediz ships with some sample user data for use in the tests. For a real-world scenario, deployers will have to swap the sample data out for an identity backend (such as Active Directory or LDAP). This post will explain how this can be done, with a particular focus on some recent changes to the STS web application in Fediz to make the process easier.

1) The default STS that ships with Fediz

First let's explain a bit about how the STS is configured by default in Fediz to cater for the testcases.

a) Endpoints and user authentication

The STS must define two distinct sets of endpoints to work with the IdP. Firstly, the STS must be able to authenticate the user credentials that are presented to the IdP. Typically this is a Username + Password combination. However, X.509 client certificates and Kerberos tokens are also supported. Note that by default, the STS authenticates usernames and passwords via a simple file local to the STS.

After successful user authentication, a SAML token is returned to the IdP. The IdP then gets another SAML token "on behalf of" the authenticated user for a given realm, authenticating using its own credentials. So we need a second endpoint in the STS to issue this token. By default, the STS requires that the IdP authenticate using TLS client authentication. The security policies are defined in the WSDLs available here.

b) Realms

The Fediz IdP and STS support the concept of authenticating users in different realms. By default, the IdP is configured to authenticate users in "Realm A". This corresponds to a specific endpoint address in the STS. The STS also defines user authentication endpoints in "Realm B" for use in test scenarios involving identity federation between two IdPs.

In addition, the STS defines some configuration to map user identities between realms. In other words, how a principal in one realm should map to another realm, and how the claims in one realm map to those in another realm.

2) Changing the STS in Fediz 1.3.2 to use LDAP

From the forthcoming 1.3.2 release onwards, the Fediz STS web application is a bit easier to customize for your specific deployment needs. Let's see how easy it is to switch the STS to use LDAP.

a) Deploy the vanilla IdP and STS to Apache Tomcat

To start with, we will deploy the STS and IdP containing the sample data to Apache Tomcat.
  • Create a new directory: ${catalina.home}/lib/fediz
  • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
  • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb-2.3.4.jar) to ${catalina.home}/lib 
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks from ${fediz.home}/examples/sampleKeys to ${catalina.home}
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat and then enter the following in a web browser, authenticating with "alice/ecila" in "realm A". You should be directed to the URL for the default service application (a 404, as we have not configured it):

https://localhost:8443/fediz-idp/federation?wa=wsignin1.0&wreply=https%3A%2F%2Flocalhost%3A8443%2Ffedizhelloworld%2Fsecure%2Ffedservlet&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld

b) Change the STS authentication mechanism to Active Directory

To simulate an Active Directory instance for demonstration purposes, we will modify some LDAP system tests in the Fediz source that use Apache Directory. Check out the Fediz source and build it via "mvn install -DskipTests". Now go into "systests/ldap" and edit the LDAPTest: "@Ignore" the existing test and uncomment the test which just "sleeps". Also change the "@CreateTransport" annotation to start the LDAP server on port "12345" instead of a random port.

Next we'll configure the Fediz STS to use this LDAP instance for authentication. Edit 'webapps/fediz-idp-sts/WEB-INF/cxf-transport.xml'. Change "endpoints/file.xml" to "endpoints/ldap.xml". Next edit 'webapps/fediz-idp-sts/WEB-INF/endpoints/ldap.xml' and just change the port from "389" to "12345".

Now we need to configure a JAAS configuration file, which the STS uses to validate the received Username + Password to LDAP. Copy this file to the "conf" directory of Tomcat, substituting "12345" for "portno". Now restart Tomcat, this time specifying the location of the JAAS configuration file, e.g.:
  • export JAVA_OPTS="-Xmx2048M -Djava.security.auth.login.config=/opt/fediz-apache-tomcat-8.0.37/conf/ldap.jaas"
These are all the changes required to swap over to using an LDAP instance for authentication.
    Categories: Colm O hEigeartaigh

    Fediz OIDC Story will continue at Apache Con EU 2016

    Sergey Beryozkin - Fri, 10/21/2016 - 13:52
    ApacheCon Europe 2016 will be held in Seville, Spain, Nov 16-18, with Apache Big Data starting on Monday Nov 14.

    Colm and I will continue talking about Fediz OpenId Connect, following our presentation earlier this year.

    Would you like to hear about our continuing effort to make the development of OpenId Connect applications with the help of the Apache CXF OIDC, OAuth2 and JOSE code nearly as easy as writing a simple JAX-RS server, and to contribute to the idea of making OIDC go mainstream?

    Interested in making your own application server go the OIDC way but concerned about the development costs? See how the Fediz IdP became OIDC-ready fast.

    Are you interested in web security and thinking about where to start contributing?

    Join us :-). At the very least join all of us and listen to many interesting talks from my Talend, CXF and Apache SF colleagues. See you there!
    Categories: Sergey Beryozkin

    CXF JAX-RS 2.0 - Perfect HTTP Spark Streaming Connector

    Sergey Beryozkin - Thu, 09/29/2016 - 15:21
    Even the most conservative among us, the web services developers, will be better off admitting sooner rather than later that Big Data is not something that can be ignored. It has become a major technology in the software industry and will become even more 'influential' with the Internet of Things wave coming in.

    Where will that leave your typical HTTP service, which GETs some data for users from a data store or accepts POSTs with new data?

    While I'm somewhat concerned to see Big Data consumers collecting the source data via a variety of custom optimized protocols and low-level transports like TCP, I firmly believe HTTP connectors should and will play a big role in connecting web users with Big Data processing chains.

    HTTP, being so widely used, is a perfect frontend to the local networks where the nodes process the data, and while HTTP is synchronous for a typical interaction, JAX-RS 2.0 REST services can be quite smart. A variety of typical REST patterns can be employed; for example, a POST request handler accepting data to be run through a Big Data chain can let an application thread deal with it while responding to the user immediately, offering a link with a job id where the status can be monitored or the results returned from. Or the handler can rely on suspended HTTP invocations and start streaming the results back as soon as they become available.

    I have created a Spark Streaming demo showing some of the possible approaches. This demo is a work in progress and I will appreciate feedback from Spark experts on how the demo can be improved.

    The demo relies completely on JAX-RS 2.0 AsyncResponse - the typical pattern is to resume it when some response data are available - and what is good is that it can be suspended multiple times, allowing a fine-grained optimization of the way the service code returns the data. StreamingOutput is another piece - it allows writing the data back to the user as soon as they become available. FYI, CXF ships a typed analog called StreamingResponse; you can see how it is indirectly used in this RxJava Observable test code.
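
    To make the pattern concrete, here is a minimal, self-contained sketch (not code from the demo) of a resource method that suspends the invocation and then streams results back once they become available:

    import java.util.concurrent.Executors;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;
    import javax.ws.rs.core.StreamingOutput;

    @Path("analysis")
    public class AnalysisResource {

        @GET
        public void getResults(@Suspended final AsyncResponse async) {
            // Hand the work off to another thread; the container thread is released immediately
            Executors.newSingleThreadExecutor().submit(() -> {
                // Resume with a StreamingOutput once (some of) the results are ready;
                // the data is written back to the client as it becomes available
                StreamingOutput stream = output -> {
                    output.write("first chunk of results\n".getBytes());
                    output.flush();
                    output.write("second chunk of results\n".getBytes());
                };
                async.resume(stream);
            });
        }
    }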

    But let me get back to the demo. It shows two types of Receivers in action. This demo service shows how an HTTP InputStream can be converted to a List of Strings, with a custom Receiver making them available to Spark. The service currently creates a streaming context for every request, which I imagine may not be quite perfect, but my tests showed the service performing quite well when the input set is parallelized - less than a second for a half-megabyte PDF file.

    Speaking of PDFs and other binary files: one of the service methods uses the beautiful Apache Tika Parser API, which is wrapped in this CXF extension. These few lines of code are all it takes to enable a service to push the content of either PDF or OpenOffice documents to the Spark pipeline (I have only added the PDF and OpenOffice Tika Parser dependencies to the demo so far). I'm sure you are now starting to wonder why the Tika API is still not used in your JAX-RS services which parse PDFs only with a PDF-specific API :-)

    I keep getting distracted. Back to the demo again. This demo service is a bit closer to a real deployment scenario. It uses a default Spark Socket receiver - the JAX-RS 2.0 service, being a good HTTP frontend, forwards the HTTP stream data to an internal Spark Streaming TCP server, which processes the data and makes it available to a JAX-RS AsyncResponse handler which is also acting as a socket server. The correlation between a given HTTP request and the Spark output data is achieved with a custom protocol extension. I imagine it will be easier with an internal Kafka receiver, which is something the demo will be enhanced with later on.

    In both cases, the demo streams the response data pieces back to the user as soon as they become available to the JAX-RS AsyncResponse handler.

    Additionally, the demo shows a CXF JAX-RS Oneway extension in action. The HTTP client will get a 202 status back immediately, while the service continues processing the request data.

    I'm sure the demo will need more work, but I also hope there's enough material there for you to start experimenting. Please give it a try and watch for the updates. I think it will be very interesting to see how this demo can also be written with the Apache Beam API; check this blog entry for a good introduction.

    Enjoy ! 
    Categories: Sergey Beryozkin

    Progress In The JAX-RS 2.1 space

    Sergey Beryozkin - Thu, 09/29/2016 - 13:38
    For those of you wondering what is going to happen to JAX-RS, the good news is that JAX-RS 2.1 will live on - surely the only possible outcome given the quality and the popularity of JAX-RS 2.0.

    Check out this JavaOne 2016 keynote, and I will continue to stand by my assertion that even more is to come from JAX-RS.


    As far as Apache CXF is concerned, Andriy has been working hard on a JAX-RS 2.1 branch where he implemented the 2.1 Server Sent Events API. And after Marek released the very first JAX-RS 2.1 API artifact to Central, Andriy merged his work to the CXF 3.2.0 master branch.

    This is very good because we can now start working toward releasing CXF 3.2.0 with this early JAX-RS 2.1 API implemented, for CXF users to experiment with - JAX-RS 2.0 users will be able to migrate to CXF 3.2.0 without changing anything in their service code.

    The new JAX-RS 2.1 features (SSE, NIO and Reactive Invokers, with early API improvements likely to happen during the coming specification work) are cool, and it is worth taking our hats off to the engineering minds of the Jersey team for the top work they did.
    Categories: Sergey Beryozkin

    Apache Karaf Tutorial Part 6 - Database Access

    Christian Schneider - Thu, 09/29/2016 - 08:08


    Shows how to access databases from OSGi applications running in Karaf and how to abstract from the DB product by installing DataSources as OSGi services. Some new Karaf shell commands can be used to work with the database from the command line. Finally JDBC and JPA examples show how to use such a DataSource from user code.

    Prerequisites

    You need an installation of apache karaf 3.0.3 for this tutorial.

    Example sources

    The example projects are on github Karaf-Tutorial/db.

    Drivers and DataSources

    In plain Java it is quite popular to use the DriverManager to create a database connection (see this tutorial). In OSGi this does not work, as the ClassLoader of your bundle has no visibility of the database driver. So in OSGi the best practice is to create a DataSource at some place that knows about the driver and publish it as an OSGi service. The user bundle should then only use the DataSource without knowing the driver specifics. This is quite similar to the best practice in application servers where the DataSource is managed by the server and published to JNDI.

    So we need to learn how to create and use DataSources first.

    The DataSourceFactory services

    To make it easier to create DataSources in OSGi the specs define a DataSourceFactory interface. It allows you to create a DataSource for a specific driver from a set of properties. Each database driver is expected to implement this interface and publish it with properties for the driver class name and the driver name.
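
    As a small illustration of that interface (a sketch, assuming a DataSourceFactory published by the driver bundle has been injected):

    import java.sql.SQLException;
    import java.util.Properties;

    import javax.sql.DataSource;

    import org.osgi.service.jdbc.DataSourceFactory;

    public class DataSourceCreator {

        // Injected DataSourceFactory service, e.g. the one published by the H2 driver bundle
        private DataSourceFactory dsf;

        public void setDsf(DataSourceFactory dsf) {
            this.dsf = dsf;
        }

        public DataSource createPersonDataSource() throws SQLException {
            Properties props = new Properties();
            props.put(DataSourceFactory.JDBC_URL, "jdbc:h2:mem:person");
            // The factory creates the DataSource; the user bundle never touches the driver class
            return dsf.createDataSource(props);
        }
    }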

    Introducing pax-jdbc

    The pax-jdbc project aims at making it a lot easier to use databases in an OSGi environment. It does the following things:

    • Implement the DataSourceFactory service for Databases that do not create this service directly
    • Implement a pooling and XA wrapper for XADataSources (This is explained at the pax jdbc docs)
    • Provide a facility to create DataSource services from config admin configurations
    • Provide karaf features for many databases as well as for the above additional functionality

    So it covers everything you need from driver installation to creation of production quality DataSources.

    Installing the driver

    The first step is to install the driver bundles for your database system into Karaf. Most drivers are already valid bundles and available in the maven repo.

    For several databases pax-jdbc already provides karaf features to install a current version of the database driver.

    For H2 the following commands will work

    feature:repo-add mvn:org.ops4j.pax.jdbc/pax-jdbc-features/0.8.0/xml/features
    feature:install transaction jndi pax-jdbc-h2 pax-jdbc-pool-dbcp2 pax-jdbc-config
    service:list DataSourceFactory

    Strictly speaking we would only need the pax-jdbc-h2 feature but we will need the others for the next steps.

    This will install the pax-jdbc feature repository and the h2 database driver. This driver already implements the DataSourceFactory so the last command will display this service.

    DataSourceFactory

    [org.osgi.service.jdbc.DataSourceFactory]
    -----------------------------------------
     osgi.jdbc.driver.class = org.h2.Driver
     osgi.jdbc.driver.name = H2
     osgi.jdbc.driver.version = 1.3.172
     service.id = 691
    Provided by :
     H2 Database Engine (68)

    The pax-jdbc-pool-dbcp2 feature wraps this DataSourceFactory to provide pooling and XA support.

    pooled and XA DataSourceFactory

    [org.osgi.service.jdbc.DataSourceFactory]
    -----------------------------------------
     osgi.jdbc.driver.class = org.h2.Driver
     osgi.jdbc.driver.name = H2-pool-xa
     osgi.jdbc.driver.version = 1.3.172
     pooled = true
     service.id = 694
     xa = true
    Provided by :
     OPS4J Pax JDBC Pooling support using Commons-DBCP2 (73)

    Technically this DataSourceFactory also creates DataSource objects but internally they manage XA support and pooling. So we want to use this one for our later code examples.

    Creating the DataSource

    Now we just need to create a configuration with the correct factory pid to create a DataSource as a service

    So create the file etc/org.ops4j.datasource-tasklist.cfg with the following content

    config for DataSource

    osgi.jdbc.driver.name=H2-pool-xa
    url=jdbc:h2:mem:person
    dataSourceName=person

    The config will automatically trigger the pax-jdbc-config module to create a DataSource.

    • The property osgi.jdbc.driver.name=H2-pool-xa will select the H2 DataSourceFactory with pooling and XA support we previously installed.
    • The url configures H2 to create a simple in-memory database named person.
    • The dataSourceName will be reflected in a service property of the DataSource so we can find it later
    • You could also set pooling configurations in this config but we leave it at the defaults

    karaf@root()> service:list DataSource
    [javax.sql.DataSource]
    ----------------------
     dataSourceName = person
     osgi.jdbc.driver.name = H2-pool-xa
     osgi.jndi.service.name = person
     service.factoryPid = org.ops4j.datasource
     service.id = 696
     service.pid = org.ops4j.datasource.83139141-24c6-4eb3-a6f4-82325942d36a
     url = jdbc:h2:mem:person
    Provided by :
     OPS4J Pax JDBC Config (69)

    So when we search for services implementing the DataSource interface we find the person datasource we just created.

    When we installed the features above we also installed the aries jndi feature. This module maps OSGi services to jndi objects. So we can also use jndi to retrieve the DataSource which will be used in the persistence.xml for jpa later.

    jndi url of DataSource: osgi:service/person

    Karaf jdbc commands

    Karaf contains some commands to manage DataSources and do queries on databases. The commands for managing DataSources in karaf 3.x still work with the older approach of using blueprint to create DataSources. So we will not use these commands but we can use the functionality to list datasources, list tables and execute queries.

    feature:install jdbc
    jdbc:datasources
    jdbc:tables person

    We first install the karaf jdbc feature which provides the jdbc commands. Then we list the DataSources and show the tables of the database accessed by the person DataSource.

    jdbc:execute person "create table person (name varchar(100), twittername varchar(100))"
    jdbc:execute person "insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')"
    jdbc:query person "select * from person"

    This creates a table person, adds a row to it and shows the table.

    The output should look like this

    select * from person

    NAME                | TWITTERNAME
    --------------------------------------
    Christian Schneider | @schneider_chris

    Accessing the database using JDBC

    The project db/examplejdbc shows how to use the datasource we installed and execute jdbc commands on it. The example uses a blueprint.xml to refer to the OSGi service for the DataSource and injects it into the class DbExample. The test method is then called as the init method and shows some jdbc statements on the DataSource. The DbExample class is completely independent of OSGi and can be easily tested standalone using the DbExampleTest. This test shows how to manually set up the DataSource outside of OSGi.
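
    The details are in the example project, but the core of such a class boils down to plain JDBC along these lines (a sketch, not the actual DbExample code):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    import javax.sql.DataSource;

    public class DbExampleSketch {

        private DataSource dataSource;  // injected via blueprint

        public void setDataSource(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Called as the blueprint init method
        public void test() throws SQLException {
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement()) {
                st.execute("create table person (name varchar(100), twittername varchar(100))");
                st.execute("insert into person (name, twittername) values ('Christian Schneider', '@schneider_chris')");
                try (ResultSet rs = st.executeQuery("select * from person")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + ", " + rs.getString("twittername"));
                    }
                }
            }
        }
    }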

    Build and install

    Build works like always using maven

    > mvn clean install

    In Karaf we just need our own bundle as we have no special dependencies

    > install -s mvn:net.lr.tutorial.karaf.db/db-examplejdbc/1.0-SNAPSHOT
    Using datasource H2, URL jdbc:h2:~/test
    Christian Schneider, @schneider_chris,

    After installation the bundle should directly print the db info and the persisted person.

    Accessing the database using JPA

    For larger projects often JPA is used instead of hand crafted SQL. Using JPA has two big advantages over JDBC.

    1. You need to maintain less SQL code
    2. JPA provides dialects for the subtle differences between databases that you would otherwise have to code yourself.

    For this example we use Hibernate as the JPA Implementation. On top of it we add Apache Aries JPA which supplies an implementation of the OSGi JPA Service Specification and blueprint integration for JPA.

    The project examplejpa shows a simple project that implements a PersonService managing Person objects.
    Person is just a java bean annotated with the JPA @Entity annotation.
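
    For orientation, the model boils down to something like the following sketch (field and method names are illustrative, not the exact project code):

    import java.util.List;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Person {
        @Id
        @GeneratedValue
        Integer id;
        String name;
        String twitterName;
    }

    // (in a separate file) The service interface used by the persistence implementation and the shell commands
    public interface PersonService {
        void add(Person person);
        List<Person> getAll();
    }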

    Additionally the project implements two Karaf shell commands person:add and person:list that allow to easily test the project.

    persistence.xml

    Like in a typical JPA project, the persistence.xml defines the DataSource lookup, database settings and lists the persistent classes. The datasource is referenced using the jndi name "osgi:service/person".

    The OSGi JPA Service Specification defines that the Manifest should contain an attribute "Meta-Persistence" that points to the persistence.xml. So this needs to be defined in the config of the maven bundle plugin in the pom. The Aries JPA container will scan for these attributes
    and register an initialized EntityManagerFactory as an OSGi service on behalf of the user bundle.

    blueprint.xml

    We use a blueprint.xml context to inject an EntityManager into our service implementation and to provide automatic transaction support.
    The following snippet is the interesting part:

    <bean id="personService" class="net.lr.tutorial.karaf.db.examplejpa.impl.PersonServiceImpl"> <jpa:context property="em" unitname="person" /> <tx:transaction method="*" value="Required"/> </bean>

    This makes a lookup for the EntityManagerFactory OSGi service that is suitable for the persistence unit person and injects a thread-safe EntityManager (using a ThreadLocal under the hood) into the
    PersonServiceImpl. Additionally it wraps each call to a method of PersonServiceImpl with code that opens a transaction before the method and commits on success or rolls back on any exception thrown.

    Build and Install

    mvn clean install

    A prerequisite is that the datasource is installed as described above. Then we have to install the bundles for hibernate, aries jpa, transaction, jndi and of course our db-examplejpa bundle.
    See ReadMe.txt for the exact commands to use.

    Test

    person:add 'Christian Schneider' @schneider_chris

    Then we list the persisted persons

    karaf@root> person:list
    Christian Schneider, @schneider_chris

    Summary

    In this tutorial we learned how to work with databases in Apache Karaf. We installed drivers for our database and a DataSource. We were able to check and manipulate the DataSource using the jdbc:* commands. In the examplejdbc we learned how to acquire a datasource
    and work with it using plain jdbc4.  Last but not least we also used jpa to access our database.

    Back to Karaf Tutorials

    Categories: Christian Schneider

    [OT] Become Most Enigmatic Person in The Office - Discover CXF

    Sergey Beryozkin - Wed, 09/28/2016 - 18:43
    Going the winding Apache CXF path is not that scary for a web services developer - some features may not be there just yet, but you may discover something new instead, while helping to drive CXF forward along the way.

    Have no fear, answer the call, help your team discover what Apache CXF really is. And become the most popular and enigmatic person in your office :-)

    Categories: Sergey Beryozkin

    Securing an Apache Kafka broker - part IV

    Colm O hEigeartaigh - Wed, 09/28/2016 - 13:24
    This is the fourth in a series of articles on securing an Apache Kafka broker. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. The third post showed how Apache Ranger could be used instead to create and enforce authorization policies for Apache Kafka. In this post we will look at an alternative authorization solution called Apache Sentry.

    1) Build the Apache Sentry distribution

    First we will build and install the Apache Sentry distribution. Download Apache Sentry (1.7.0 was used for the purposes of this tutorial). Verify that the signature is valid and that the message digests match. Now extract and build the source and copy the distribution to a location where you wish to install it:
    • tar zxvf apache-sentry-1.7.0-src.tar.gz
    • cd apache-sentry-1.7.0-src
    • mvn clean install -DskipTests
    • cp -r sentry-dist/target/apache-sentry-1.7.0-bin ${sentry.home}
    Apache Sentry has an authorization plugin for Apache Kafka, amongst other big data projects. In addition it comes with an RPC service which stores authorization privileges in a database. For the purposes of this tutorial we will just configure the authorization privileges in a configuration file locally to the broker. Therefore we don't need to do any further configuration to the distribution at this point.

    2) Configure authorization in the broker

    Configure Apache Kafka as per the first tutorial. To enable authorization using Apache Sentry we also need to follow these steps. First edit 'config/server.properties' and add:
    • authorizer.class.name=org.apache.sentry.kafka.authorizer.SentryKafkaAuthorizer
    • sentry.kafka.site.url=file:./config/sentry-site.xml
    Next copy the jars from the "lib" directory of the Sentry distribution to the Kafka "libs" directory. Then create a new file in the config directory called "sentry-site.xml" with the following content:

    This is the configuration file for the Sentry plugin for Kafka. It essentially says that the authorization privileges are stored in a local file, and that the groups for authenticated users should be retrieved from this file. Finally, we need to specify the authorization privileges. Create a new file in the config directory called "sentry.ini" with the following content:

    This configuration file contains three separate sections. The "[users]" section maps the authenticated principals to local groups. The "[groups]" section maps the groups to roles, and the "[roles]" section lists the actual privileges. Now we can start the broker as in the first tutorial:
    • bin/kafka-server-start.sh config/server.properties 
    3) Test authorization

    Now let's test the authorization logic. Start the producer:
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to check that the producer is authorized correctly. Now start the consumer:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    If everything is configured correctly then it should work as in the first tutorial. 
    Categories: Colm O hEigeartaigh

    Karaf Tutorial Part 1 - Installation and First application

    Christian Schneider - Wed, 09/28/2016 - 11:58


    Getting Started

    With this post I am beginning a series of posts about Apache Karaf, an OSGi container based on Equinox or Felix. The main difference to these frameworks is that it brings excellent management features with it.

    Outstanding features of Karaf:

    • Extensible Console with Bash like completion features
    • ssh console
    • deployment of bundles and features from maven repositories
    • easy creation of new instances from command line

    All together these features make developing server-based OSGi applications almost as easy as regular Java applications. Deployment and management are on a level that is much better than any application server I have seen so far. All this is combined with a small footprint, of Karaf itself as well as of the resulting applications. In my opinion this allows a lightweight development style like JEE 6 together with the flexibility of Spring applications.

    Installation and first startup
    • Download Karaf 4.0.7 from the Karaf web site.
    • Extract and start with bin/karaf

    You should see the welcome screen:

            __ __                  ____
           / //_/____ __________ _/ __/
          / ,<  / __ `/ ___/ __ `/ /_
         / /| |/ /_/ / / / /_/ / __/
        /_/ |_|\__,_/_/  \__,_/_/

      Apache Karaf (4.0.7)

    Hit '<tab>' for a list of available commands
    and '[cmd] --help' for help on a specific command.
    Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

    karaf@root()>

    Some handy commands:

    Command                      Description
    la                           Shows all installed bundles
    list                         Shows user bundles
    service:list                 Shows the active OSGi services. This list is quite long. Here it is quite handy that you can use unix pipes like "ls | grep admin"
    exports                      Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
    feature:list                 Shows which features are installed and can be installed.
    feature:install webconsole   Installs a feature (a list of bundles and other features). Using this command we install the Karaf webconsole. It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.
    diag                         Shows diagnostic information for bundles that could not be started
    log:tail                     Shows the log. Use ctrl-c to go back to the console
    Ctrl-d                       Exits the console. If this is the main console Karaf will also be stopped.

    OSGi containers preserve state after restarts


    Please note that Karaf, like all OSGi containers, maintains its last state of installed and started bundles. So if something does not work anymore, a restart is not guaranteed to help. To really start fresh again, stop Karaf and delete the data directory, or start with bin/karaf clean.

    Check the logs


    Karaf is very silent. To not miss error messages, always keep a tail -f data/karaf.log open!

    Tasklist - A small osgi application

    Without any useful application Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy and
    Maven can help a lot. The difference to a normal Maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

    Get the source code from the Karaf-Tutorial repo at github.

    git clone https://github.com/cschneider/Karaf-Tutorial.git

    or download the sample project from https://github.com/cschneider/Karaf-Tutorial/zipball/master and extract to a directory.

    Import into Eclipse

    • Start Eclipse Neon or newer
    • In Eclipse Package explorer: Import -> Existing maven project -> Browse to the extracted directory into the tasklist sub dir
    • Eclipse will show all maven projects it finds
    • Click through to import all projects with defaults

    Eclipse will now import the projects and wire all dependencies using m2eclipse.

    The tasklist example consists of these projects

    Module                 Description
    tasklist-model         Service interface and Task class
    tasklist-persistence   Simple persistence implementation that offers a TaskService
    tasklist-ui            Servlet that displays the tasklist using a TaskService
    tasklist-features      Features descriptor for the application that makes installing in Karaf very easy

    Parent pom and general project setup

    The pom.xml is of packaging bundle and the maven-bundle-plugin creates the jar with an OSGi Manifest. By default the plugin imports all packages that are imported in java files or referenced in the blueprint context.

    It also exports all packages that do not contain the string impl or internal. In our case we want the model package to be exported but not the persistence.impl package. As the naming convention is followed,
    we need no additional configuration.

    Tasklist-model

    This project contains the domain model; in our case it is the Task class and a TaskService interface. The model is used by both the persistence implementation and the user interface. Any user of the TaskService will only need the model, so it is never directly bound to our current implementation.

    Tasklist-persistence

    The very simple persistence implementation TaskServiceImpl manages tasks in a simple HashMap. The class uses the @Singleton annotation to expose the class as a blueprint bean.

    The @OsgiServiceProvider annotation will expose the bean as an OSGi service, and the @Properties annotation allows adding service properties. In our case the property service.exported.interfaces we set can be used by CXF-DOSGi, which we present in a later tutorial. For this tutorial the properties could also be removed.

    @OsgiServiceProvider
    @Properties(@Property(name = "service.exported.interfaces", value = "*"))
    @Singleton
    public class TaskServiceImpl implements TaskService {
        ...
    }

    The blueprint-maven-plugin will process the class above and automatically create the suitable blueprint xml. So this saves us from writing blueprint xml by hand.

    Automatically created blueprint xml can be found in target/generated-resources:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
        <bean id="taskService" class="net.lr.tasklist.persistence.impl.TaskServiceImpl" />
        <service ref="taskService" interface="net.lr.tasklist.model.TaskService" />
    </blueprint>

    Tasklist-ui

    The ui project contains a small servlet TaskListServlet to display the tasklist and individual tasks. To work with the tasks the servlet needs the TaskService. We inject the TaskService by using the annotation @Inject, which is able to inject any bean by type, and the annotation @OsgiService, which creates a blueprint reference to an OSGi service of the given type.

    The whole class is exposed as an OSGi service of the interface javax.servlet.Servlet with a special property alias=/tasklist. This triggers the whiteboard extender of pax web, which picks up the service and exports it as a servlet at the relative url /tasklist.

    Snippet of the relevant code:

    @OsgiServiceProvider(classes = Servlet.class)
    @Properties(@Property(name = "alias", value = "/tasklist"))
    @Singleton
    public class TaskListServlet extends HttpServlet {

        @Inject
        @OsgiService
        TaskService taskService;
    }

    Automatically created blueprint xml can be found in target/generated-resources:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
        <reference id="taskService" availability="mandatory" interface="net.lr.tasklist.model.TaskService" />
        <bean id="taskServlet" class="net.lr.tasklist.ui.TaskListServlet">
            <property name="taskService" ref="taskService"></property>
        </bean>
        <service ref="taskServlet" interface="javax.servlet.http.HttpServlet">
            <service-properties>
                <entry key="alias" value="/tasklist" />
            </service-properties>
        </service>
    </blueprint>

    See also: http://wiki.ops4j.org/display/paxweb/Whiteboard+Extender

    Tasklist-features

    The last project only installs a feature descriptor to the maven repository so we can install it easily in Karaf. The descriptor defines a feature named tasklist and the bundles to be installed from
    the maven repository.

    <feature name="example-tasklist-persistence" version="${pom.version}"> <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle> <bundle>mvn:net.lr.tasklist/tasklist-persistence/${pom.version}</bundle> </feature> <feature name="example-tasklist-ui" version="${pom.version}"> <feature>http</feature> <feature>http-whiteboard</feature> <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle> <bundle>mvn:net.lr.tasklist/tasklist-ui/${pom.version}</bundle> </feature>

    A feature can consist of other features that also should be installed and bundles to be installed. The bundles typically use mvn urls. This means they are loaded from the configured maven repositories or your local maven repository in ~/.m2/repository.

    Installing the Application in Karaf

    feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
    feature:install example-tasklist-persistence example-tasklist-ui

    Add the features descriptor to Karaf so it is added to the available features, then install and start the tasklist feature. After this command the tasklist application should run.

    list

    Check that all bundles of tasklist are active. If not try to start them and check the log.

    http:list
    ID | Servlet         | Servlet-Name   | State    | Alias     | Url
    -------------------------------------------------------------------------------
    56 | TaskListServlet | ServletModel-2 | Deployed | /tasklist | [/tasklist/*]

    Should show the TaskListServlet. By default the example will start at http://localhost:8181/tasklist .

    You can change the port by creating a text file "etc/org.ops4j.pax.web.cfg" with the content "org.osgi.service.http.port=8080". This will tell the HttpService to use port 8080. Now the tasklist application should be available at http://localhost:8080/tasklist

    Summary

    In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

    In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

    Back to Karaf Tutorials

    Categories: Christian Schneider

    Some hints to boost your productivity with declarative services

    Christian Schneider - Tue, 09/27/2016 - 09:34


     The declarative services (DS) spec has some hidden gems that really help to make the most out of your application.

    Use the DS spec annotations to define your component

    Some older articles about DS define the components using xml. While this is still possible it is much simpler to use annotations for this purpose.
    There are 3 sets of annotations available: bnd style, felix style and OSGi DS spec style. While the first two sets can still be seen in the wild, you
    should only use the OSGi spec annotations for new code, as the other sets are deprecated.

    At runtime DS only works with the xml, so make sure your build creates xml descriptors from your annotated components. Recent versions of bnd, maven-bundle-plugin
    and bnd-maven-plugin all handle the spec DS annotations by default, so no additional settings are required.

    Activate component by configuration

    @Component(
        name = "mycomponent",
        immediate = true,
        configurationPolicy = ConfigurationPolicy.REQUIRE
    )

    In some cases it makes sense to always install a bundle but to be able to activate and deactivate a service it provides.
    By using configurationPolicy = REQUIRE the component is only activated if a configuration with the pid "mycomponent" exists.
    Do not forget immediate = true, as by default the component would be lazy and thus not activate unless someone requires it.

    Override service properties using config

    By default a DS component is published as a service with all properties that are set in the @Component annotation.
    Every component is also configurable using a config pid that matches the component name. It is less well known that the
    configuration properties also show up as service properties and override the settings in the annotation.

    One use case for this is to publish a component using Remote Service Admin even though it was not marked for export by the developer.
    Another use case is to override the topic an EventAdmin EventHandler listens on (a small sketch follows below). See https://github.com/apache/karaf-decanter/blob/master/appender/kafka/src/main/java/org/apache/karaf/decanter/appender/kafka/KafkaAppender.java#L43
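
    For example (a sketch; the component name, topic and handler are illustrative), the handler below listens on the topic set in the annotation, and a configuration with the pid "my.handler" containing event.topics=some/other/topic would override it without touching the code:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.event.Event;
    import org.osgi.service.event.EventHandler;

    @Component(
        name = "my.handler",
        property = "event.topics=some/default/topic"  // overridable via the config pid "my.handler"
    )
    public class MyEventHandler implements EventHandler {

        @Override
        public void handleEvent(Event event) {
            System.out.println("Received event on topic " + event.getTopic());
        }
    }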

    Override injected services of a component using config

    If a component is injected with a service using @Reference then the service is normally statically filtered using the target property of the annotation, in the
    form of an LDAP filter.
    This filter can be overridden using a config property refname.target, where refname is the name of the reference the service is injected into.
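
    A small sketch (the names and the TranslationService interface are illustrative): the reference below is initially bound using the annotation's target filter, and a configuration property named translationService.target can replace that filter:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Hypothetical service interface
    interface TranslationService {
        String translate(String text);
    }

    @Component(name = "translator")
    public class Translator {

        // The reference name defaults to the field name "translationService",
        // so the filter below can be overridden with the config property "translationService.target"
        @Reference(target = "(language=en)")
        TranslationService translationService;
    }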

    Create multiple instances of a component using config

    Another not so well known fact is that a DS component not only reacts to a single configuration pid but also to factory configs. If the pid of your component config is "myconfig" then in Apache Karaf you can create configs named myconfig-1.cfg and myconfig-2.cfg and DS will create two instances of your component.

    Typesafe configuration and Metatype information

    Starting with DS 1.3 you can define type safe configs and also have them available as meta type information for config UIs.

    @ObjectClassDefinition(name = "Server Configuration")
    @interface ServerConfig {
        String host() default "0.0.0.0";
        int port() default 8080;
        boolean enableSSL() default false;
    }

    @Component
    @Designate(ocd = ServerConfig.class)
    public class ServerComponent {

        @Activate
        public void activate(ServerConfig cfg) throws IOException {
            ServerSocket sock = new ServerSocket();
            sock.bind(new InetSocketAddress(cfg.host(), cfg.port()));
            // ...
        }
    }

    See Neil Bartletts post for the details.

    Internal wiring

    In DS every component publishes a service. So compared to blueprint DS seems to miss a feature for creating internal components / beans that are only visible inside the bundle.
    This can be achieved by putting a component into a private package and setting the service property to the class of the component. The component is still exported as a service
    but the service will not be visible to the outside as the package is private. Still the service can be injected into other classes of the bundle using the component class.
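
    A sketch of that trick (package and class names are illustrative): the component lives in a non-exported package and is registered under its own implementation class, so only code inside the bundle can use it:

    package net.example.internal;  // a private (non-exported) package

    import org.osgi.service.component.annotations.Component;

    // Registered as a service under its own class rather than an interface;
    // bundles that cannot see this package cannot use the service
    @Component(service = CacheManager.class)
    public class CacheManager {

        public void evictAll() {
            // ...
        }
    }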

    Field injection and constructor injection

    Since DS 1.3 (part of the OSGi 6 specs) you can also inject services directly into a field like:

    @Reference EventAdmin eventAdmin;

    You can even inject into a private field but remember this will make it very difficult to write a unit test for your component. I personally always use package visibility for
    fields I inject stuff into. I then put the unit test into the same package and can set the field inside the test without doing any special magic.

    Constructor injection is not possible at the time of writing this article, but it is part of DS 1.4 (part of the OSGi spec 7). The implementation of this spec is currently under way in Felix SCR.

    Injecting multiple matching services into a List<MyService>

    Since DS 1.3 it is possible to inject all services matching the interface and an optional filter into a List

    @Reference List<MyService> myservices;

    By default DS assumes the static policy. This means that whenever the list of services changes, the component is deactivated and activated again. While this is the safest way, it might be too slow for your use case.
    So injecting services dynamically can make sense.

    Injecting services dynamically

    By default DS will restart your component on reference changes. If this is too slow in your case you can allow DS to dynamically change the injected service(s).

     

    @Reference
    volatile MyService myService;
    Categories: Christian Schneider

    Securing an Apache Kafka broker - part III

    Colm O hEigeartaigh - Mon, 09/26/2016 - 18:13
    This is the third in a series of blog posts about securing Apache Kafka. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. However, this approach is not recommended for non-trivial deployments. In this post we will look at how we can create flexible authorization policies for Apache Kafka using the Apache Ranger admin UI. Then we will show how to enforce these policies at the broker.

    1) Install the Apache Ranger Kafka plugin

    The first step is to download Apache Ranger (0.6.1-incubating was used in this post). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
    • tar zxvf apache-ranger-incubating-0.6.1.tar.gz
    • cd apache-ranger-incubating-0.6.1
    • mvn clean package assembly:assembly -DskipTests
    • tar zxvf target/ranger-0.6.1-kafka-plugin.tar.gz
    • mv ranger-0.6.1-kafka-plugin ${ranger.kafka.home}
    Now go to ${ranger.kafka.home} and edit "install.properties". You need to specify the following properties:
    • COMPONENT_INSTALL_DIR_NAME: The location of your Kafka installation
    • POLICY_MGR_URL: Set this to "http://localhost:6080"
    • REPOSITORY_NAME: Set this to "KafkaTest".
    Save "install.properties" and install the plugin as root via "sudo ./enable-kafka-plugin.sh". The Apache Ranger Kafka plugin should now be successfully installed (although not yet configured properly) in the broker.

    2) Configure authorization in the broker

    Configure Apache Kafka as per the first tutorial. There are a number of steps we need to follow to configure the Ranger Kafka plugin before it is operational:
    • Edit 'config/server.properties' and add the following: authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
    • Add the Kafka "config" directory to the classpath, so that we can pick up the Ranger configuration files: export CLASSPATH=$KAFKA_HOME/config
    • Copy the Apache Commons Logging jar into $KAFKA_HOME/libs. 
    • The ranger plugin will try to store policies by default in "/etc/ranger/KafkaTest/policycache". As we installed the plugin as "root" make sure that this directory is accessible to the user that is running the broker.
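    As a rough command-line sketch of the preparation steps above (the commons-logging version, paths and broker user name are illustrative):

    export KAFKA_HOME=/home/user/kafka_2.11-0.10.0.1
    echo "authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer" >> $KAFKA_HOME/config/server.properties
    export CLASSPATH=$KAFKA_HOME/config
    cp /path/to/commons-logging-1.2.jar $KAFKA_HOME/libs/
    sudo chown -R kafka-broker-user /etc/ranger/KafkaTest/policycache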
    Now we can start the broker as in the first tutorial:
    • bin/kafka-server-start.sh config/server.properties
    3) Configure authorization policies in the Apache Ranger Admin UI 

    At this point we should have configured the broker so that the Apache Ranger plugin is used to communicate with the Apache Ranger admin service to download authorization policies. So we need to install and configure the Apache Ranger admin service. Please refer to this blog post for how to do this. Assuming the admin service is already installed, start it via "sudo ranger-admin start". Open a browser and log on to "localhost:6080" with the credentials "admin/admin".

    First let's add some new users that match the SSL principals we have created in the first tutorial. Click on "Settings" and "Users/Groups". Add new users for the principals:
    • CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE
    • CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE
    • CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE
    Now go back to the Service Manager screen and click on the "+" button next to "KAFKA". Create a new service called "KafkaTest". Click "Test Connection" to make sure it can communicate with the Apache Kafka broker. Then click "add" to save the new service. Click on the new service. There should be an "admin" policy already created. Edit the policy and give the "broker" principal above the rights to perform any operation and save the policy. Now create a new policy called "TestPolicy" for the topic "test". Give the service principal the rights to "Consume, Describe and Publish". Give the client principal the rights to "Consume and Describe" only.


    4) Test authorization

    Now let's test the authorization logic. Bear in mind that by default the Kafka plugin reloads policies from the admin service every 30 seconds, so you may need to wait that long or restart the broker to download the newly created policies. Start the producer:
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to check that the producer is authorized correctly. Now start the consumer:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    If everything is configured correctly then it should work as in the first tutorial.
    Categories: Colm O hEigeartaigh

    Karaf Tutorial Part 1 - Installation and First application

    Christian Schneider - Mon, 09/26/2016 - 15:23

    Blog post edited by Christian Schneider

    Getting Started

    With this post I am beginning a series of posts about Apache Karaf, an OSGi container based on Equinox or Felix. The main difference to these frameworks is that it brings excellent management features with it.

    Outstanding features of Karaf:

    • Extensible Console with Bash like completion features
    • ssh console
    • deployment of bundles and features from maven repositories
    • easy creation of new instances from command line

    All together these features make developing server-based OSGi applications almost as easy as regular Java applications. Deployment and management are on a level much better than in any application server I have seen so far. All this is combined with a small footprint, both of Karaf itself and of the resulting applications. In my opinion this allows a lightweight development style like JEE 6 together with the flexibility of Spring applications.

    Installation and first startup
    • Download Karaf 4.0.7 from the Karaf web site.
    • Extract and start with bin/karaf

    You should see the welcome screen:

            __ __                  ____
           / //_/____ __________ _/ __/
          / ,<  / __ `/ ___/ __ `/ /_
         / /| |/ /_/ / /  / /_/ / __/
        /_/ |_|\__,_/_/   \__,_/_/

      Apache Karaf (4.0.7)

    Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command.
    Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

    karaf@root()>

    Some handy commands:

    • la - Shows all installed bundles
    • list - Shows user bundles
    • service:list - Shows the active OSGi services. This list is quite long. Here it is quite handy that you can use unix pipes like "ls | grep admin"
    • exports - Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
    • feature:list - Shows which features are installed and can be installed.
    • feature:install webconsole - Installs a feature (a list of bundles and other features). With this command we install the Karaf webconsole. It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.
    • diag - Shows diagnostic information for bundles that could not be started
    • log:tail - Shows the log. Use ctrl-c to go back to the console
    • Ctrl-d - Exits the console. If this is the main console, Karaf will also be stopped.

    OSGi containers preserve state after restarts

    Please note that Karaf, like all OSGi containers, maintains the last state of installed and started bundles. So if something no longer works, a restart is not guaranteed to help. To really start fresh again, stop Karaf and delete the data directory, or start with bin/karaf clean.

    Check the logs

    Karaf is very silent. To not miss error messages, always keep a "tail -f data/karaf.log" open!

    Tasklist - A small osgi application

    Without any useful application Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy, and
    Maven can help a lot. The difference to a normal Maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

    Get the source code from the Karaf-Tutorial repo at github.

    git clone git@github.com:cschneider/Karaf-Tutorial.git

    or download the sample project from https://github.com/cschneider/Karaf-Tutorial/zipball/master and extract to a directory.

    Import into Eclipse

    • Start Eclipse Neon or newer
    • In Eclipse Package explorer: Import -> Existing maven project -> Browse to the extracted directory into the tasklist sub dir
    • Eclipse will show all maven projects it finds
    • Click through to import all projects with defaults

    Eclipse will now import the projects and wire all dependencies using m2eclipse.

    The tasklist example consists of these projects

    • tasklist-model - Service interface and Task class
    • tasklist-persistence - Simple persistence implementation that offers a TaskService
    • tasklist-ui - Servlet that displays the tasklist using a TaskService
    • tasklist-features - Features descriptor for the application that makes installing in Karaf very easy

    Parent pom and general project setup

    The pom.xml is of packaging bundle and the maven-bundle-plugin creates the jar with an OSGi Manifest. By default the plugin imports all packages that are imported in java files or referenced in the blueprint context.

    It also exports all packages that do not contain the string impl or internal. In our case we want the model package to be exported but not the persistence.impl package. As we follow this naming convention,
    we need no additional configuration.

    Tasklist-model

    This project contains the domain model in our case it is the Task class and a TaskService interface. The model is used by both the persistence implementation and the user interface.  Any user of the TaskService will only need the model. So it is never directly bound to our current implementation.
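    As a rough sketch, the model might look like this (the method names are only illustrative; the real code is in the Karaf-Tutorial repository):

    import java.util.Collection;

    public interface TaskService {
        Collection<Task> getTasks();
        Task getTask(Integer id);
        void addTask(Task task);
        void updateTask(Task task);
        void deleteTask(Integer id);
    }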

    Tasklist-persistence

    The very simple persistence implementation TaskServiceImpl manages tasks in a simple HashMap. The class uses the @Singleton annotation to expose the class as a blueprint bean.

    The annotation @OsgiServiceProvider will expose the bean as an OSGi service, and the @Properties annotation allows adding service properties. In our case we set the property service.exported.interfaces, which can be used by CXF-DOSGi (presented in a later tutorial). For this tutorial the properties could also be removed.

    @OsgiServiceProvider
    @Properties(@Property(name = "service.exported.interfaces", value = "*"))
    @Singleton
    public class TaskServiceImpl implements TaskService {
        ...
    }

    The blueprint-maven-plugin will process the class above and automatically create the suitable blueprint xml. So this saves us from writing blueprint xml by hand.

    Automatically created blueprint xml can be found in target/generated-resources:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
        <bean id="taskService" class="net.lr.tasklist.persistence.impl.TaskServiceImpl" />
        <service ref="taskService" interface="net.lr.tasklist.model.TaskService" />
    </blueprint>

    Tasklist-ui

    The ui project contains a small servlet TaskServlet to display the tasklist and individual tasks. To work with the tasks the servlet needs the TaskService. We inject the TaskService by using the annotation @Inject, which is able to inject any bean by type, and the annotation @OsgiService, which creates a blueprint reference to an OSGi service of the given type.

    The whole class is exposed as an OSGi service of the interface javax.servlet.Servlet with a special property alias=/tasklist. This triggers the whiteboard extender of Pax Web, which picks up the service and exports it as a servlet at the relative url /tasklist.

    Snippet of the relevant code:

    @OsgiServiceProvider(classes = Servlet.class)
    @Properties(@Property(name = "alias", value = "/tasklist"))
    @Singleton
    public class TaskListServlet extends HttpServlet {

        @Inject
        @OsgiService
        TaskService taskService;
    }

    Automatically created blueprint xml can be found in target/generated-resources:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
        <reference id="taskService" availability="mandatory" interface="net.lr.tasklist.model.TaskService" />
        <bean id="taskServlet" class="net.lr.tasklist.ui.TaskListServlet">
            <property name="taskService" ref="taskService"></property>
        </bean>
        <service ref="taskServlet" interface="javax.servlet.http.HttpServlet">
            <service-properties>
                <entry key="alias" value="/tasklist" />
            </service-properties>
        </service>
    </blueprint>

    See also: http://wiki.ops4j.org/display/paxweb/Whiteboard+Extender

    Tasklist-features

    The last project only installs a feature descriptor to the maven repository so we can install it easily in Karaf. The descriptor defines a feature named tasklist and the bundles to be installed from
    the maven repository.

    <feature name="example-tasklist-persistence" version="${pom.version}">
        <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
        <bundle>mvn:net.lr.tasklist/tasklist-persistence/${pom.version}</bundle>
    </feature>

    <feature name="example-tasklist-ui" version="${pom.version}">
        <feature>http</feature>
        <feature>http-whiteboard</feature>
        <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
        <bundle>mvn:net.lr.tasklist/tasklist-ui/${pom.version}</bundle>
    </feature>

    A feature can consist of other features that should also be installed and bundles to be installed. The bundles typically use mvn URLs. This means they are loaded from the configured maven repositories or your local maven repository in ~/.m2/repository.
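    For example, the URL mvn:net.lr.tasklist/tasklist-persistence/1.0.0-SNAPSHOT resolves to ~/.m2/repository/net/lr/tasklist/tasklist-persistence/1.0.0-SNAPSHOT/tasklist-persistence-1.0.0-SNAPSHOT.jar in the local repository.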

    Installing the Application in Karaf

    feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
    feature:install example-tasklist-persistence example-tasklist-ui

    Add the features descriptor to Karaf so it is added to the available features, then install and start the tasklist feature. After these commands the tasklist application should be running.

    list

    Check that all bundles of tasklist are active. If not, try to start them and check the log.

    http:list

    ID | Servlet         | Servlet-Name   | State    | Alias     | Url
    -------------------------------------------------------------------------------
    56 | TaskListServlet | ServletModel-2 | Deployed | /tasklist | [/tasklist/*]

    Should show the TaskListServlet. By default the example will start at http://localhost:8181/tasklist .

    You can change the port by creating a text file "etc/org.ops4j.pax.web.cfg" with the content "org.osgi.service.http.port=8080". This will tell the HttpService to use port 8080. Now the tasklist application should be available at http://localhost:8080/tasklist

    Summary

    In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

    In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

    Back to Karaf Tutorials

    Categories: Christian Schneider

    Integrating Apache Camel with Apache Syncope - part III

    Colm O hEigeartaigh - Fri, 09/23/2016 - 18:00
    This is the third in a series of blog posts about integrating Apache Camel with Apache Syncope. The first post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0, and gave an example of how we can modify the default behaviour to send an email to an administrator when a user was created. The second post showed how an administrator can keep track of user password changes for auditing purposes. In this post we will show how to integrate Syncope with Apache ActiveMQ using Camel.

    1) The use-case

    The use-case is that Apache Syncope is used for Identity Management in a large organisation. When users are created we would like to be able to gather certain information about the new users and process it dynamically in some way. In particular, we are interested in the age of the new users and the country in which they are based. Perhaps at the reception desk of the company HQ we display a map with the number of employees in each country highlighted. To decouple whatever applications are processing the data from Syncope itself, we will use a messaging solution, namely Apache ActiveMQ. When new users are created, we will modify the default Camel route to send a message to two topics corresponding to the age and location of the user.

    2) Download and configure Apache ActiveMQ

    The first step is to download Apache ActiveMQ (currently 5.14.0). Unzip it and start it via:
    • bin/activemq start 
    Now go to the web interface of ActiveMQ - 'http://localhost:8161/admin/', logging in with credentials 'admin/admin'. Click on the "Queues" tab and create two new queues called 'age' and 'country'.

    3) Download and configure Apache Syncope

    Download and extract the standalone version of Apache Syncope 2.0.0. Before we start it we will copy the jars we need to get Camel working with ActiveMQ in Syncope. In the "webapps/syncope/WEB-INF/lib" directory of the Apache Tomcat instance bundled with Syncope, copy the following jars:
    • From $ACTIVEMQ_HOME/lib: activemq-client-5.14.0.jar + activemq-spring-5.14.0.jar + hawtbuf-1.11.jar + geronimo-j2ee-management_1.1_spec-1.0.1.jar
    • From $ACTIVEMQ_HOME/lib/camel: activemq-camel-5.14.0.jar + camel-jms-2.16.3.jar
    • From $ACTIVEMQ_HOME/lib/optional: activemq-pool-5.14.0.jar + activemq-jms-pool-5.14.0.jar + spring-jms-4.1.9.RELEASE.jar
    Next we need to create a Camel spring configuration file containing a bean with the address of the broker. Add the following file to the Tomcat lib directory (called "camelRoutesContext.xml"):
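    The file content is shown as an image in the original post; a minimal sketch of such a file, assuming the standard ActiveMQ Camel component and the default broker URL (the bean id "activemq" matches the activemq: endpoints used in the routes below), could look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans.xsd">

        <!-- Camel component for talking to the local ActiveMQ broker -->
        <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
            <property name="brokerURL" value="tcp://localhost:61616" />
        </bean>

    </beans>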

    Now we can start the embedded Apache Tomcat instance. Open a browser and navigate to 'http://localhost:9080/syncope-console' logging in with 'admin/password'. The first thing we need to do is to configure user attributes for "age" and "country". Go to "Configuration/Types" in the left-hand menu, and click on the "Schemas" tab. Create two plain (mandatory) schema types: "age" of type "Long" and "country" of type "String". Now click on the "AnyTypeClasses" tab and create a new AnyTypeClass selecting the two plain schema types we just created. Finally, click on the "AnyType" tab and edit the "USER". Add the new AnyTypeClass you created and hit "save".

    Now we will modify the Camel route invoked when a user is created. Click on "Extensions/Camel Routes" in the left-hand configuration menu. Edit the "createUser" route and add the following above the "bean method" part:
    • <setBody><simple>${body.plainAttrMap[age].values[0]}</simple></setBody>
    • <to uri="activemq:age"/>
    • <setBody><simple>${exchangeProperty.actual.plainAttrMap[country].values[0]}</simple></setBody>
    • <to uri="activemq:country"/>
    This should be fairly straightforward to follow. We are setting the message body to be the age of the newly created User, and dispatching that message to the "age" queue. We then follow the same process for the "country". We also need to change "body" in the "bean method" line to "exchangeProperty.actual"; this is because we have redefined what the body is in the route additions above.


    Now let's create some new users. Click on the "Realms" menu and select the "USER" tab. Create new users "alice" in country "usa" of age "25" and "bob" in country "canada" of age "27". Now let's look at the ActiveMQ console again. We should see two new messages in each of the queues, ready to be consumed.



    Categories: Colm O hEigeartaigh

    Using SHA-512 with Apache CXF SOAP web services

    Colm O hEigeartaigh - Thu, 09/22/2016 - 15:49
    XML Signature is used extensively in SOAP web services to guarantee message integrity, non-repudiation, as well as client authentication via PKI. A digest algorithm crops up in XML Signature both as part of the Signature Method (rsa-sha1 for example), as well as in the digests of the data that are signed. As recent weaknesses have emerged with the use of SHA-1, it makes sense to use the SHA-2 digest algorithm instead. In this post we will look at how to configure Apache CXF to use SHA-512 (i.e. SHA-2 with 512 bits) as the digest algorithm.

    1) Configuring the STS to use SHA-512

    Apache CXF ships with a SecurityTokenService (STS) that is widely deployed. The principal function of the STS is to issue signed SAML tokens, although it supports a wide range of other functionalities and token types. The STS (for more recent versions of CXF) uses RSA-SHA256 for the signature method when signing SAML tokens, and uses SHA-256 for the digest algorithm. In this section we'll look at how to configure the STS to use SHA-512 instead.

    You can specify signature and digest algorithms via the SignatureProperties class in the STS. To specify SHA-512 for signature and digest algorithms for generated tokens in the STS add the following bean to your spring configuration:
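    The bean itself appears as an image in the original post; a sketch of what it might look like, assuming CXF's org.apache.cxf.sts.SignatureProperties class and the standard RSA-SHA512 and SHA-512 algorithm URIs:

    <bean id="sigProps" class="org.apache.cxf.sts.SignatureProperties">
        <property name="signatureAlgorithm" value="http://www.w3.org/2001/04/xmldsig-more#rsa-sha512" />
        <property name="digestAlgorithm" value="http://www.w3.org/2001/04/xmlenc#sha512" />
    </bean>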

    Next you need to reference this bean in the StaticSTSProperties bean for your STS:
    • <property name="signatureProperties" ref="sigProps" />
    2) Configuring WS-SecurityPolicy to use SHA-512

    Service requests are typically secured at a message level using WS-SecurityPolicy. It is possible to specify the algorithms used to secure the request, as well as the key sizes, by configuring an AlgorithmSuite policy. Unfortunately the last WS-SecurityPolicy spec is quite dated at this point, and lacks support for more modern algorithms as part of the default AlgorithmSuite policies that are defined in the spec. The spec only supports using RSA-SHA1 for signature, and only SHA-1 and SHA-256 for digest algorithms.

    Luckily, Apache CXF users can avail of a few different ways to use stronger algorithms with web service requests. In CXF there is a JAX-WS property called 'ws-security.asymmetric.signature.algorithm' for AsymmetricBinding policies (similarly 'ws-security.symmetric.signature.algorithm' for SymmetricBinding policies). This overrides the default signature algorithm of the policy. So for example, to switch to use RSA-SHA512 instead of RSA-SHA1 simply set the following property on your client/endpoint:
    • <entry key="ws-security.asymmetric.signature.algorithm" value="http://www.w3.org/2001/04/xmldsig-more#rsa-sha512"/>
    There is no corresponding property to explicitly configure the digest algorithm, as the default AlgorithmSuite policies already support SHA-256 (although one could be added if there was enough demand). If you really need to support SHA-512 here, an option is to use a custom AlgorithmSuite (which will obviously not be portable), or to override one of the existing ones.

    It's pretty straightforward to do this. First you need to create an AlgorithmSuiteLoader implementation to handle the policy. Here is one used in the tests that creates a custom AlgorithmSuite policy called 'Basic128RsaSha512', which extends the 'Basic128' policy to use RSA-SHA512 for the signature method, and SHA-512 for the digest method. This AlgorithmSuiteLoader can be referenced in Spring via:


    The policy in question looks like:
    • <cxf:Basic128RsaSha512 xmlns:cxf="http://cxf.apache.org/custom/security-policy"/>
    Categories: Colm O hEigeartaigh

    How to enable Fediz Plugin Logging

    Jan Bernhardt - Thu, 09/22/2016 - 14:43
    If you are using the Apache Fediz plugin to enable WS-Federation Support for your Tomcat container, you will not see any log statements from the Fediz Plugin by default. Especially when testing or analyzing issues with the plugin you will be interested in actually seeing some log statements from the plugin.

    In this blog post I'll explain what needs to be done to get all DEBUG level log statements from the Apache Fediz Tomcat Plugin using Log4J.
    Apache Tomcat tells you how to enable logging on the container level.
    1. Adding Dependencies

    First you need to ensure that the required libraries are available within your classpath. This can be done in one of two ways:

    a) Adding Maven Dependencies to the Fediz Tomcat Plugin

    Add the following dependency to cxf-fediz/plugins/tomcat7/pom.xml:
    <project . . .>
        . . .
        <dependencies>
            <dependency>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
                <version>${slf4j.version}</version>
                <scope>runtime</scope>
            </dependency>
        </dependencies>
        . . .
    </project>
    Now build the plugin again with mvn clean package and deploy the content of cxf-fediz/plugins/tomcat7/target/fediz-tomcat7-1.3.0-zip-with-dependencies.zip into your tomcat/lib/fediz folder.
    b) Adding lib files directly to your lib folder

    Add slf4j and log4j libs (in the desired version) to your fediz plugin dependencies:
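    For example (the version numbers are only illustrative), dropping slf4j-api-1.7.21.jar, slf4j-log4j12-1.7.21.jar and log4j-1.2.17.jar next to the other Fediz plugin jars in tomcat/lib/fediz gives you the same Log4J binding as option a).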

    2. Adding Log4J configuration file

    Once your dependencies are added to your Tomcat installation, you need to add a log4j.properties file to your tomcat/lib folder. Here is an example content for this file:
    # Loggers
    log4j.rootLogger = WARN, CATALINA, CONSOLE
    log4j.logger.org.apache.cxf.fediz = DEBUG, CONSOLE, FEDIZ
    log4j.additivity.org.apache.cxf.fediz = false

    # Appenders
    log4j.appender.CATALINA = org.apache.log4j.DailyRollingFileAppender
    log4j.appender.CATALINA.File = ${catalina.base}/logs/catalina.out
    log4j.appender.CATALINA.Append = true
    log4j.appender.CATALINA.Encoding = UTF-8
    log4j.appender.CATALINA.DatePattern = '.'yyyy-MM-dd
    log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
    log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

    log4j.appender.FEDIZ = org.apache.log4j.DailyRollingFileAppender
    log4j.appender.FEDIZ.File = ${catalina.base}/logs/fediz-plugin.log
    log4j.appender.FEDIZ.Append = true
    log4j.appender.FEDIZ.Encoding = UTF-8
    log4j.appender.FEDIZ.Threshold = DEBUG
    log4j.appender.FEDIZ.DatePattern = '.'yyyy-MM-dd
    log4j.appender.FEDIZ.layout = org.apache.log4j.PatternLayout
    log4j.appender.FEDIZ.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

    log4j.appender.CONSOLE = org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.Encoding = UTF-8
    log4j.appender.CONSOLE.Threshold = INFO
    log4j.appender.CONSOLE.layout = org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

    Now restart your tomcat container and you will see Fediz INFO logs on your console and DEBUG messages within tomcat/logs/fediz-plugin.log.
    Categories: Jan Bernhardt

    Invoking on the Talend ESB STS using SoapUI

    Colm O hEigeartaigh - Wed, 09/21/2016 - 17:11
    Talend ESB ships with a powerful SecurityTokenService (STS) based on the STS that ships with Apache CXF. The Talend Open Studio for ESB contains UI support for creating web service clients that use the STS to obtain SAML tokens for authentication (and also authorization via roles embedded in the tokens). However, it is sometimes useful to be able to obtain tokens with a third party client. In this post we will show how SoapUI can be used to obtain SAML Tokens from the Talend ESB STS.

    1) Download and run Talend Open Studio for ESB

    The first step is to download Talend Open Studio for ESB (the current version at the time of writing this post is 6.2.1). Unzip it and start the container via:
    • Runtime_ESBSE/container/bin/trun
    The next step is to start the STS itself:
    • tesb:start-sts
    2) Download and run SoapUI

    Download SoapUI and run the installation script. Create a new SOAP Project called "STS" using the WSDL:
    • http://localhost:8040/services/SecurityTokenService/UT?wsdl
    The WSDL of the STS defines a number of different services. The one we are interested in is the "UT_Binding", which requires a WS-Security UsernameToken to authenticate the client. Click on "UT_Binding/Issue/Request 1" in the left-hand menu to see a sample request for the service. Now we need to do some editing of the request. Remove the 'Context="?"' attribute from RequestSecurityToken. Then paste the following into the Body of the RequestSecurityToken:
    • <t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
    • <t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
    • <t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
    Now we need to configure a username and password to use when authenticating the client request. In the "Request Properties" box in the lower left corner, add "tesb" for the "username" and "password" properties. Now right click in the request pane, and select "Add WSS Username Token" (Password Text). Now send the request and you should receive a SAML Token in response.

    Bear in mind that if you wish to re-use the SAML Token retrieved from the STS in a subsequent request, you must copy it from the "Raw" tab and not the "XML" tab of the response. The latter adds in whitespace that breaks the signature on the token. Another thing to watch out for is that the STS maintains a cache of the Username Token nonce values, so you will need to recreate the UsernameToken each time you want to get a new token.

    3) Requesting a "PublicKey" KeyType

    The example above uses a "Bearer" KeyType. Another common use-case, as is the case with the security-enabled services developed using the Talend Studio, is when the token must have the PublicKey/Certificate of the client embedded in it. To request such a token from the STS, change the "Bearer" KeyType as above to "PublicKey". However, we also need to present a certificate to the STS to include in the token.

    As we are just using the test credentials used by the Talend STS, go to the Runtime_ESBSE/container/etc/keystores and extract the client key with:
    • keytool -exportcert -rfc -keystore clientstore.jks -alias myclientkey -file client.cer -storepass cspass
    Edit client.cer and remove the first and last lines (the lines containing BEGIN/END CERTIFICATE). Now go back to SoapUI and add the following to the RequestSecurityToken Body:
    • <t:UseKey xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512"><ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:X509Data><ds:X509Certificate>...</ds:X509Certificate></ds:X509Data></ds:KeyInfo></t:UseKey>
    where the content of the X.509 Certificate is the content in client.cer. This time, the token issued by the STS will contain the public key of the client embedded in the SAML Subject.

    Categories: Colm O hEigeartaigh
