Latest Activity

Some hints to boost your productivity with declarative services

Christian Schneider - Tue, 09/27/2016 - 09:34

Blog post added by Christian Schneider

 The declarative services (DS) spec has some hidden gems that really help to make the most out of your application.

Use the DS spec annotations to define your component

Some older articles about DS define the components using XML. While this is still possible, it is much simpler to use annotations for this purpose.
There are three sets of annotations available: bnd style, Felix style and OSGi DS spec style. While the first two can still be seen in the wild, you
should only use the OSGi spec annotations for new code, as the other sets are deprecated.

At runtime DS only works with the XML descriptors, so make sure your build creates them from your annotated components. Recent versions of bnd, maven-bundle-plugin
and bnd-maven-plugin all handle the spec DS annotations by default, so no additional settings are required.
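
For illustration, here is a minimal component written with the spec annotations (package and class names are invented for this sketch); the build tooling mentioned above turns it into the XML descriptor at build time:

package org.example.hello;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

// Minimal DS component using the OSGi spec annotations (org.osgi.service.component.annotations)
@Component
public class HelloComponent {

    @Activate
    void activate() {
        System.out.println("HelloComponent activated");
    }

    @Deactivate
    void deactivate() {
        System.out.println("HelloComponent deactivated");
    }
}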

Activate component by configuration

@Component(
    name = "mycomponent",
    immediate = true,
    configurationPolicy = ConfigurationPolicy.REQUIRE
)

In some cases it makes sense to always install a bundle but to activate and deactivate a service it provides on demand.
By using configurationPolicy = REQUIRE the component is only activated if a configuration with the PID "mycomponent" exists.
Do not forget immediate = true, as by default the component would be lazy and thus not activate unless someone requires it.
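
As a sketch of how such a component can consume its configuration (names are illustrative), the properties can simply be taken as a map in the activate method. In Apache Karaf, dropping a file etc/mycomponent.cfg into the container creates the required configuration:

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

@Component(
    name = "mycomponent",
    immediate = true,
    configurationPolicy = ConfigurationPolicy.REQUIRE
)
public class MyComponent {

    @Activate
    void activate(Map<String, Object> config) {
        // only called once a configuration with PID "mycomponent" exists
        System.out.println("Activated with config: " + config);
    }
}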

Override service properties using config

By default a DS component is published as a service with all properties that are set in the @Component annotation.
Every component is also configurable using a configuration PID that matches the component name. It is less well known that the
configuration properties also show up as service properties and override the settings in the annotation.

One use case for this is to publish a component using Remote Service Admin that was not marked for export by the developer.
Another use case is to override the topic an EventAdmin EventHandler listens on. See https://github.com/apache/karaf-decanter/blob/master/appender/kafka/src/main/java/org/apache/karaf/decanter/appender/kafka/KafkaAppender.java#L43
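
To make the EventAdmin case concrete, here is a hedged sketch (handler class, package and topic are invented): the component publishes a default event.topics service property, and a configuration with a PID matching the component name (by default the fully qualified class name) can override it without touching the code:

package org.example.events;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

// The default topic is set as a service property. A configuration for PID
// "org.example.events.LoggingHandler" containing "event.topics=some/other/topic"
// would override it at runtime.
@Component(property = EventConstants.EVENT_TOPIC + "=decanter/collect/*")
public class LoggingHandler implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Received event on topic " + event.getTopic());
    }
}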

Override injected services of a component using config

If a service is injected into a component using @Reference, it is normally statically filtered using the target property of the annotation, in the
form of an LDAP filter.
This filter can be overridden using a configuration property refname.target, where refname is the name of the reference the service is injected into.
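
A small hypothetical example: the reference below is named "sender" (field references default to the field name), so a configuration property sender.target containing a new LDAP filter would replace the filter given in the annotation:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// MessageSender is an invented service interface for this illustration.
interface MessageSender {
    void send(String message);
}

@Component
public class Notifier {

    // Statically bound to services registered with transport=smtp.
    // A configuration property "sender.target=(transport=sms)" would override this filter.
    @Reference(target = "(transport=smtp)")
    MessageSender sender;

    public void sendNotification(String message) {
        sender.send(message);
    }
}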

Create multiple instances of a component using config

Another not so well known fact is that a DS component not only reacts to a single configuration PID but also to factory configurations. If the PID of your component configuration is "myconfig", then in Apache Karaf you can create configuration files named myconfig-1.cfg and myconfig-2.cfg and DS will create two instances of your component.
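
A short sketch (PID and class name assumed): binding the component explicitly to the PID "myconfig"; with etc/myconfig-1.cfg and etc/myconfig-2.cfg present in Karaf, DS activates two independent instances, each with its own configuration map:

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

@Component(configurationPid = "myconfig", configurationPolicy = ConfigurationPolicy.REQUIRE)
public class ConnectionComponent {

    @Activate
    void activate(Map<String, Object> config) {
        // called once per factory configuration, e.g. for myconfig-1.cfg and myconfig-2.cfg
        System.out.println("New instance with config: " + config);
    }
}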

Typesafe configuration and Metatype information

Starting with DS 1.3 you can define type-safe configurations and also have them available as metatype information for configuration UIs.

@ObjectClassDefinition(name = "Server Configuration")
@interface ServerConfig {
    String host() default "0.0.0.0";
    int port() default 8080;
    boolean enableSSL() default false;
}

@Component
@Designate(ocd = ServerConfig.class)
public class ServerComponent {

    @Activate
    public void activate(ServerConfig cfg) throws IOException {
        ServerSocket sock = new ServerSocket();
        sock.bind(new InetSocketAddress(cfg.host(), cfg.port()));
        // ...
    }
}

See Neil Bartlett's post for the details.

Internal wiring

In DS every component publishes a service, so compared to Blueprint, DS seems to lack a feature for creating internal components / beans that are only visible inside the bundle.
This can be achieved by putting a component into a private package and setting the service property of @Component to the class of the component. The component is still registered as a service,
but the service will not be visible to the outside as the package is private. Still, the service can be injected into other classes of the bundle using the component class.
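
A hedged sketch of this pattern (package and class names invented): the helper component registers itself under its implementation class, which lives in a non-exported package, so only components inside the same bundle can use it:

package org.example.internal; // private, non-exported package

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Registered under the implementation class itself instead of an interface
@Component(service = TaskCache.class)
public class TaskCache {
    public void put(String key, Object value) { /* ... */ }
}

// Another component in the same bundle can still get it injected by type
@Component
class TaskProcessor {

    @Reference
    TaskCache cache;
}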

Field injection and constructor injection

Since DS 1.3 (part of the OSGi R6 specs) you can also inject services directly into a field like:

@Reference EventAdmin eventAdmin;

You can even inject into a private field, but remember this will make it very difficult to write a unit test for your component. I personally always use package visibility for
fields I inject into. I then put the unit test into the same package and can set the field inside the test without any special magic.

Constructor injection is not possible at the time of writing this article, but it is part of DS 1.4 (part of the OSGi R7 spec). The implementation of this spec is currently underway in Felix SCR.

Injecting multiple matching services into a List<MyService>

Since DS 1.3 it is possible to inject all services matching the interface and an optional filter into a List:

@Reference List<MyService> myservices;

By default DS assumes the static policy. This means that whenever the list of services changes, the component is deactivated and activated again. While this is the safest approach, it might be too slow for your use case,
so injecting services dynamically can make sense.

Injecting services dynamically

By default DS will restart your component on reference changes. If this is too slow in your case you can allow DS to dynamically change the injected service(s).

 

@Reference volatile MyService myService;
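
A slightly fuller sketch (component and topic names invented): with a dynamic, optional reference the field can change or become null at any time, so copy it into a local variable before use:

import java.util.Collections;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

@Component(service = AuditService.class)
public class AuditService {

    // Dynamic + optional: EventAdmin may come and go without restarting this component
    @Reference(policy = ReferencePolicy.DYNAMIC, cardinality = ReferenceCardinality.OPTIONAL)
    volatile EventAdmin eventAdmin;

    public void audit(String message) {
        EventAdmin local = eventAdmin; // read the volatile field once
        if (local != null) {
            local.postEvent(new Event("example/audit", Collections.singletonMap("message", message)));
        }
    }
}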
Categories: Christian Schneider

Securing an Apache Kafka broker - part III

Colm O hEigeartaigh - Mon, 09/26/2016 - 18:13
This is the third in a series of blog posts about securing Apache Kafka. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. However, this approach is not recommended for non-trivial deployments. In this post we will show how we can create flexible authorization policies for Apache Kafka using the Apache Ranger admin UI. Then we will show how to enforce these policies at the broker.

1) Install the Apache Ranger Kafka plugin

The first step is to download Apache Ranger (0.6.1-incubating was used in this post). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • tar zxvf apache-ranger-incubating-0.6.1.tar.gz
  • cd apache-ranger-incubating-0.6.1
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-0.6.1-kafka-plugin.tar.gz
  • mv ranger-0.6.1-kafka-plugin ${ranger.kafka.home}
Now go to ${ranger.kafka.home} and edit "install.properties". You need to specify the following properties:
  • COMPONENT_INSTALL_DIR_NAME: The location of your Kafka installation
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "KafkaTest".
Save "install.properties" and install the plugin as root via "sudo ./enable-kafka-plugin.sh". The Apache Ranger Kafka plugin should now be successfully installed (although not yet configured properly) in the broker.

2) Configure authorization in the broker

Configure Apache Kafka as per the first tutorial. There are a number of steps we need to follow to configure the Ranger Kafka plugin before it is operational:
  • Edit 'config/server.properties' and add the following: authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
  • Add the Kafka "config" directory to the classpath, so that we can pick up the Ranger configuration files: export CLASSPATH=$KAFKA_HOME/config
  • Copy the Apache Commons Logging jar into $KAFKA_HOME/libs. 
  • The ranger plugin will try to store policies by default in "/etc/ranger/KafkaTest/policycache". As we installed the plugin as "root" make sure that this directory is accessible to the user that is running the broker.
Now we can start the broker as in the first tutorial:
  • bin/kafka-server-start.sh config/server.properties
3) Configure authorization policies in the Apache Ranger Admin UI 

At this point we should have configured the broker so that the Apache Ranger plugin is used to communicate with the Apache Ranger admin service to download authorization policies. So we need to install and configure the Apache Ranger admin service. Please refer to this blog post for how to do this. Assuming the admin service is already installed, start it via "sudo ranger-admin start". Open a browser and log on to "localhost:6080" with the credentials "admin/admin".

First let's add some new users that match the SSL principals we created in the first tutorial. Click on "Settings" and "Users/Groups". Add new users for the principals:
  • CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE
  • CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE
  • CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE
Now go back to the Service Manager screen and click on the "+" button next to "KAFKA". Create a new service called "KafkaTest". Click "Test Connection" to make sure it can communicate with the Apache Kafka broker. Then click "add" to save the new service. Click on the new service. There should be an "admin" policy already created. Edit the policy and give the "broker" principal above the rights to perform any operation and save the policy. Now create a new policy called "TestPolicy" for the topic "test". Give the service principal the rights to "Consume, Describe and Publish". Give the client principal the rights to "Consume and Describe" only.


4) Test authorization

Now let's test the authorization logic. Bear in mind that by default the Kafka plugin reloads policies from the admin service every 30 seconds, so you may need to wait that long or restart the broker to download the newly created policies. Start the producer:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial.
Categories: Colm O hEigeartaigh

Karaf Tutorial Part 1 - Installation and First application

Christian Schneider - Mon, 09/26/2016 - 15:23

Blog post edited by Christian Schneider

Getting Started

With this post I am beginning a series of posts about Apache Karaf, an OSGi container based on Equinox or Felix. The main difference to these frameworks is that it brings excellent management features with it.

Outstanding features of Karaf:

  • Extensible Console with Bash like completion features
  • ssh console
  • deployment of bundles and features from maven repositories
  • easy creation of new instances from command line

Together these features make developing server-based OSGi applications almost as easy as regular Java applications. Deployment and management are on a level much better than in any application server I have seen so far. All this is combined with a small footprint, of Karaf itself as well as of the resulting applications. In my opinion this allows a lightweight development style like Java EE 6 together with the flexibility of Spring applications.

Installation and first startup
  • Download Karaf 4.0.7 from the Karaf web site.
  • Extract and start with bin/karaf

You should see the welcome screen:

        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.0.7)

Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

karaf@root()>

Some handy commands:

  • la - Shows all installed bundles
  • list - Shows user bundles
  • service:list - Shows the active OSGi services. This list is quite long; here it is quite handy that you can use unix pipes like "ls | grep admin"
  • exports - Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
  • feature:list - Shows which features are installed and can be installed.
  • feature:install webconsole - Installs a feature (a list of bundles and other features). Using this command we install the Karaf webconsole. It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.
  • diag - Shows diagnostic information for bundles that could not be started
  • log:tail - Shows the log. Use ctrl-c to go back to the console
  • Ctrl-d - Exits the console. If this is the main console, Karaf will also be stopped.

OSGi containers preserve state after restarts

Please note that Karaf, like all OSGi containers, maintains its last state of installed and started bundles. So if something does not work anymore, a restart is not guaranteed to help. To really start fresh again, stop Karaf and delete the data directory, or start with bin/karaf clean.

Check the logs

Karaf is very silent. To not miss error messages, always keep a tail -f data/karaf.log open!

Tasklist - A small osgi application

Without any useful application Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy and
Maven can help a lot. The difference to a normal Maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

Get the source code from the Karaf-Tutorial repo at github.

git clone git@github.com:cschneider/Karaf-Tutorial.git

or download the sample project from https://github.com/cschneider/Karaf-Tutorial/zipball/master and extract to a directory.

Import into Eclipse

  • Start Eclipse Neon or newer
  • In Eclipse Package Explorer: Import -> Existing Maven Project -> browse to the tasklist sub directory of the extracted directory
  • Eclipse will show all maven projects it finds
  • Click through to import all projects with defaults

Eclipse will now import the projects and wire all dependencies using m2eclipse.

The tasklist example consists of these projects:

  • tasklist-model - Service interface and Task class
  • tasklist-persistence - Simple persistence implementation that offers a TaskService
  • tasklist-ui - Servlet that displays the tasklist using a TaskService
  • tasklist-features - Features descriptor for the application that makes installing in Karaf very easy

Parent pom and general project setup

The pom.xml uses packaging bundle and the maven-bundle-plugin creates the jar with an OSGi manifest. By default the plugin imports all packages that are imported in Java files or referenced in the blueprint context.

It also exports all packages that do not contain the string impl or internal. In our case we want the model package to be exported but not the persistence.impl package. As the naming convention is followed,
we need no additional configuration.

Tasklist-model

This project contains the domain model, in our case the Task class and a TaskService interface. The model is used by both the persistence implementation and the user interface. Any user of the TaskService will only need the model, so it is never directly bound to our current implementation.
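
For orientation, here is a rough sketch of what the model might look like; the exact fields and methods in the Karaf-Tutorial repository may differ:

package net.lr.tasklist.model;

import java.util.Collection;

// Sketch only: a Task bean plus the TaskService interface that is implemented
// by the persistence bundle and consumed by the UI bundle
class Task {
    Integer id;
    String title;
    boolean finished;
}

public interface TaskService {
    Collection<Task> getTasks();
    Task getTask(Integer id);
    void addTask(Task task);
    void deleteTask(Integer id);
}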

Tasklist-persistence

The very simple persistence implementation TaskServiceImpl manages tasks in a simple HashMap. The class uses the @Singleton annotation to expose the class as a blueprint bean.

The annotation @OsgiServiceProvider will expose the bean as an OSGi service and the @Properties annotation allows adding service properties. In our case the property service.exported.interfaces we set can be used by CXF-DOSGi, which we present in a later tutorial. For this tutorial the properties could also be removed.

@OsgiServiceProvider
@Properties(@Property(name = "service.exported.interfaces", value = "*"))
@Singleton
public class TaskServiceImpl implements TaskService {
    ...
}

The blueprint-maven-plugin will process the class above and automatically create the suitable blueprint XML. This saves us from writing blueprint XML by hand.

The automatically created blueprint XML can be found in target/generated-resources:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="taskService" class="net.lr.tasklist.persistence.impl.TaskServiceImpl" />
    <service ref="taskService" interface="net.lr.tasklist.model.TaskService" />
</blueprint>

Tasklist-ui

The ui project contains a small servlet TaskServlet to display the tasklist and individual tasks. To work with the tasks the servlet needs the TaskService. We inject the TaskService by using the annotation @Inject, which is able to inject any bean by type, and the annotation @OsgiService, which creates a blueprint reference to an OSGi service of the given type.

The whole class is exposed as an OSGi service with the interface javax.servlet.http.HttpServlet and a special property alias=/tasklist. This triggers the whiteboard extender of Pax Web, which picks up the service and exposes it as a servlet at the relative URL /tasklist.

Snippet of the relevant code:

@OsgiServiceProvider(classes = Servlet.class)
@Properties(@Property(name = "alias", value = "/tasklist"))
@Singleton
public class TaskListServlet extends HttpServlet {

    @Inject
    @OsgiService
    TaskService taskService;
}

The automatically created blueprint XML can be found in target/generated-resources:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <reference id="taskService" availability="mandatory" interface="net.lr.tasklist.model.TaskService" />
    <bean id="taskServlet" class="net.lr.tasklist.ui.TaskListServlet">
        <property name="taskService" ref="taskService"></property>
    </bean>
    <service ref="taskServlet" interface="javax.servlet.http.HttpServlet">
        <service-properties>
            <entry key="alias" value="/tasklist" />
        </service-properties>
    </service>
</blueprint>

See also: http://wiki.ops4j.org/display/paxweb/Whiteboard+Extender

Tasklist-features

The last project only installs a feature descriptor to the maven repository so we can install it easily in Karaf. The descriptor defines a feature named tasklist and the bundles to be installed from
the maven repository.

<feature name="example-tasklist-persistence" version="${pom.version}">
    <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
    <bundle>mvn:net.lr.tasklist/tasklist-persistence/${pom.version}</bundle>
</feature>

<feature name="example-tasklist-ui" version="${pom.version}">
    <feature>http</feature>
    <feature>http-whiteboard</feature>
    <bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
    <bundle>mvn:net.lr.tasklist/tasklist-ui/${pom.version}</bundle>
</feature>

A feature can consist of other features that should also be installed and bundles to be installed. The bundles typically use mvn URLs. This means they are loaded from the configured Maven repositories or your local Maven repository in ~/.m2/repository.

Installing the application in Karaf

feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
feature:install example-tasklist-persistence example-tasklist-ui

The first command adds the features descriptor to Karaf so it is added to the available features; the second installs and starts the tasklist features. After these commands the tasklist application should run.

list

Check that all bundles of tasklist are active. If not try to start them and check the log.

http:list

ID | Servlet         | Servlet-Name   | State    | Alias     | Url
-------------------------------------------------------------------------------
56 | TaskListServlet | ServletModel-2 | Deployed | /tasklist | [/tasklist/*]

This should show the TaskListServlet. By default the example will start at http://localhost:8181/tasklist .

You can change the port by creating a text file "etc/org.ops4j.pax.web.cfg" with the content "org.osgi.service.http.port=8080". This will tell the HttpService to use port 8080. Now the tasklist application should be available at http://localhost:8080/tasklist .

Summary

In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

Back to Karaf Tutorials

Categories: Christian Schneider

Integrating Apache Camel with Apache Syncope - part III

Colm O hEigeartaigh - Fri, 09/23/2016 - 18:00
This is the third in a series of blog posts about integrating Apache Camel with Apache Syncope. The first post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0, and gave an example of how we can modify the default behaviour to send an email to an administrator when a user was created. The second post showed how an administrator can keep track of user password changes for auditing purposes. In this post we will show how to integrate Syncope with Apache ActiveMQ using Camel.

1) The use-case

The use-case is that Apache Syncope is used for Identity Management in a large organisation. When users are created we would like to be able to gather certain information about the new users and process it dynamically in some way. In particular, we are interested in the age of the new users and the country in which they are based. Perhaps at the reception desk of the company HQ we display a map with the number of employees in each country highlighted. To decouple whatever applications are processing the data from Syncope itself, we will use a messaging solution, namely Apache ActiveMQ. When new users are created, we will modify the default Camel route to send a message to two topics corresponding to the age and location of the user.

2) Download and configure Apache ActiveMQ

The first step is to download Apache ActiveMQ (currently 5.14.0). Unzip it and start it via:
  • bin/activemq start 
Now go to the web interface of ActiveMQ - 'http://localhost:8161/admin/', logging in with credentials 'admin/admin'. Click on the "Queues" tab and create two new queues called 'age' and 'country'.

3) Download and configure Apache Syncope

Download and extract the standalone version of Apache Syncope 2.0.0. Before we start it we will copy the jars we need to get Camel working with ActiveMQ in Syncope. In the "webapps/syncope/WEB-INF/lib" directory of the Apache Tomcat instance bundled with Syncope, copy the following jars:
  • From $ACTIVEMQ_HOME/lib: activemq-client-5.14.0.jar + activemq-spring-5.14.0.jar + hawtbuf-1.11.jar + geronimo-j2ee-management_1.1_spec-1.0.1.jar
  • From $ACTIVEMQ_HOME/lib/camel: activemq-camel-5.14.0.jar + camel-jms-2.16.3.jar
  • From $ACTIVEMQ_HOME/lib/optional: activemq-pool-5.14.0.jar + activemq-jms-pool-5.14.0.jar + spring-jms-4.1.9.RELEASE.jar
Next we need to create a Camel spring configuration file containing a bean with the address of the broker. Add the following file to the Tomcat lib directory (called "camelRoutesContext.xml"):

Now we can start the embedded Apache Tomcat instance. Open a browser and navigate to 'http://localhost:9080/syncope-console', logging in with 'admin/password'. The first thing we need to do is to configure user attributes for "age" and "country". Go to "Configuration/Types" in the left-hand menu, and click on the "Schemas" tab. Create two plain (mandatory) schema types: "age" of type "Long" and "country" of type "String". Now click on the "AnyTypeClasses" tab and create a new AnyTypeClass selecting the two plain schema types we just created. Finally, click on the "AnyType" tab and edit the "USER". Add the new AnyTypeClass you created and hit "save".

Now we will modify the Camel route invoked when a user is created. Click on "Extensions/Camel Routes" in the left-hand configuration menu. Edit the "createUser" route and add the following above the "bean method" part:
  • <setBody><simple>${body.plainAttrMap[age].values[0]}</simple></setBody>
  • <to uri="activemq:age"/>
  • <setBody><simple>${exchangeProperty.actual.plainAttrMap[country].values[0]}</simple></setBody>
  • <to uri="activemq:country"/>
This should be fairly straightforward to follow. We are setting the message body to be the age of the newly created User, and dispatching that message to the "age" queue. We then follow the same process for the "country". We also need to change "body" in the "bean method" line to "exchangeProperty.actual", because we have redefined what the body is in the Camel statements above.


Now let's create some new users. Click on the "Realms" menu and select the "USER" tab. Create new users "alice" in country "usa" of age "25" and "bob" in country "canada" of age "27". Now let's look at the ActiveMQ console again. We should see two new messages in each of the queues as follows, ready to be consumed:



Categories: Colm O hEigeartaigh

Using SHA-512 with Apache CXF SOAP web services

Colm O hEigeartaigh - Thu, 09/22/2016 - 15:49
XML Signature is used extensively in SOAP web services to guarantee message integrity and non-repudiation, as well as client authentication via PKI. A digest algorithm crops up in XML Signature both as part of the Signature Method (rsa-sha1 for example), as well as in the digests of the data that are signed. As recent weaknesses have emerged with the use of SHA-1, it makes sense to use the SHA-2 digest algorithm instead. In this post we will look at how to configure Apache CXF to use SHA-512 (i.e. SHA-2 with 512 bits) as the digest algorithm.

1) Configuring the STS to use SHA-512

Apache CXF ships with a SecurityTokenService (STS) that is widely deployed. The principal function of the STS is to issue signed SAML tokens, although it supports a wide range of other functionalities and token types. The STS (for more recent versions of CXF) uses RSA-SHA256 for the signature method when signing SAML tokens, and uses SHA-256 for the digest algorithm. In this section we'll look at how to configure the STS to use SHA-512 instead.

You can specify signature and digest algorithms via the SignatureProperties class in the STS. To specify SHA-512 for signature and digest algorithms for generated tokens in the STS add the following bean to your spring configuration:
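
As an illustration of the same settings in plain Java (a sketch only; it assumes the SignatureProperties class exposes signatureAlgorithm and digestAlgorithm setters, as the Spring bean configuration implies):

import org.apache.cxf.sts.SignatureProperties;

public class StsAlgorithmConfig {

    public static SignatureProperties sha512SignatureProperties() {
        SignatureProperties sigProps = new SignatureProperties();
        // RSA-SHA512 for the signature method, SHA-512 for digests
        sigProps.setSignatureAlgorithm("http://www.w3.org/2001/04/xmldsig-more#rsa-sha512");
        sigProps.setDigestAlgorithm("http://www.w3.org/2001/04/xmlenc#sha512");
        return sigProps;
    }
}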

Next you need to reference this bean in the StaticSTSProperties bean for your STS:
  • <property name="signatureProperties" ref="sigProps" />
2) Configuring WS-SecurityPolicy to use SHA-512

Service requests are typically secured at a message level using WS-SecurityPolicy. It is possible to specify the algorithms used to secure the request, as well as the key sizes, by configuring an AlgorithmSuite policy. Unfortunately the last WS-SecurityPolicy spec is quite dated at this point, and lacks support for more modern algorithms as part of the default AlgorithmSuite policies defined in the spec. The spec only supports RSA-SHA1 for signature, and only SHA-1 and SHA-256 for digest algorithms.

Luckily, Apache CXF users can avail of a few different ways to use stronger algorithms with web service requests. In CXF there is a JAX-WS property called 'ws-security.asymmetric.signature.algorithm' for AsymmetricBinding policies (similarly 'ws-security.symmetric.signature.algorithm' for SymmetricBinding policies). This overrides the default signature algorithm of the policy. So for example, to switch to use RSA-SHA512 instead of RSA-SHA1 simply set the following property on your client/endpoint:
  • <entry key="ws-security.asymmetric.signature.algorithm" value="http://www.w3.org/2001/04/xmldsig-more#rsa-sha512"/>
There is no corresponding property to explicitly configure the digest algorithm, as the default AlgorithmSuite policies already support SHA-256 (although one could be added if there was enough demand). If you really need to support SHA-512 here, an option is to use a custom AlgorithmSuite (which will obviously not be portable), or to override one of the existing ones.

It's pretty straightforward to do this. First you need to create an AlgorithmSuiteLoader implementation to handle the policy. Here is one used in the tests that creates a custom AlgorithmSuite policy called 'Basic128RsaSha512', which extends the 'Basic128' policy to use RSA-SHA512 for the signature method, and SHA-512 for the digest method. This AlgorithmSuiteLoader can be referenced in Spring via:


The policy in question looks like:
  • <cxf:Basic128RsaSha512 xmlns:cxf="http://cxf.apache.org/custom/security-policy"/>
Categories: Colm O hEigeartaigh

How to enable Fediz Plugin Logging

Jan Bernhardt - Thu, 09/22/2016 - 14:43
If you are using the Apache Fediz plugin to enable WS-Federation support for your Tomcat container, you will not see any log statements from the Fediz plugin by default. Especially when testing or analyzing issues with the plugin, you will be interested in actually seeing some log statements from it.

In this blog post I'll explain what needs to be done to get all DEBUG-level log statements from the Apache Fediz Tomcat plugin using Log4J.
The Apache Tomcat documentation explains how to enable logging at the container level.
1. Adding Dependencies

First you need to ensure that the required libraries are available within your classpath. This can be done in one of two ways:
a) Adding Maven Dependencies to the Fediz Tomcat Plugin

Add the following dependency to cxf-fediz/plugins/tomcat7/pom.xml:
<project . . .>
  . . .
  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>${slf4j.version}</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
  . . .
</project>
Now build the plugin again with mvn clean package and deploy the content of cxf-fediz/plugins/tomcat7/target/fediz-tomcat7-1.3.0-zip-with-dependencies.zip into your tomcat/lib/fediz folder.
b) Adding lib files directly to your lib folder

Add the slf4j and log4j libs (in the desired version) to your fediz plugin dependencies:

2. Adding the Log4J configuration file

Once your dependencies are added to your Tomcat installation, you need to add a log4j.properties file to your tomcat/lib folder. Here is an example content for this file:
# Loggers
log4j.rootLogger = WARN, CATALINA, CONSOLE
log4j.logger.org.apache.cxf.fediz = DEBUG, CONSOLE, FEDIZ
log4j.additivity.org.apache.cxf.fediz = false

# Appenders
log4j.appender.CATALINA = org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File = ${catalina.base}/logs/catalina.out
log4j.appender.CATALINA.Append = true
log4j.appender.CATALINA.Encoding = UTF-8
log4j.appender.CATALINA.DatePattern = '.'yyyy-MM-dd
log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

log4j.appender.FEDIZ = org.apache.log4j.DailyRollingFileAppender
log4j.appender.FEDIZ.File = ${catalina.base}/logs/fediz-plugin.log
log4j.appender.FEDIZ.Append = true
log4j.appender.FEDIZ.Encoding = UTF-8
log4j.appender.FEDIZ.Threshold = DEBUG
log4j.appender.FEDIZ.DatePattern = '.'yyyy-MM-dd
log4j.appender.FEDIZ.layout = org.apache.log4j.PatternLayout
log4j.appender.FEDIZ.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

log4j.appender.CONSOLE = org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Encoding = UTF-8
log4j.appender.CONSOLE.Threshold = INFO
log4j.appender.CONSOLE.layout = org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern = %d [%t] %-5p %c %x - %m%n

Now restart your Tomcat container and you will see Fediz INFO logs on your console and DEBUG messages within tomcat/logs/fediz-plugin.log.
Categories: Jan Bernhardt

Invoking on the Talend ESB STS using SoapUI

Colm O hEigeartaigh - Wed, 09/21/2016 - 17:11
Talend ESB ships with a powerful SecurityTokenService (STS) based on the STS that ships with Apache CXF. The Talend Open Studio for ESB contains UI support for creating web service clients that use the STS to obtain SAML tokens for authentication (and also authorization via roles embedded in the tokens). However, it is sometimes useful to be able to obtain tokens with a third party client. In this post we will show how SoapUI can be used to obtain SAML Tokens from the Talend ESB STS.

1) Download and run Talend Open Studio for ESB

The first step is to download Talend Open Studio for ESB (the current version at the time of writing this post is 6.2.1). Unzip it and start the container via:
  • Runtime_ESBSE/container/bin/trun
The next step is to start the STS itself:
  • tesb:start-sts
2) Download and run SoapUI

Download SoapUI and run the installation script. Create a new SOAP Project called "STS" using the WSDL:
  • http://localhost:8040/services/SecurityTokenService/UT?wsdl
The WSDL of the STS defines a number of different services. The one we are interested in is the "UT_Binding", which requires a WS-Security UsernameToken to authenticate the client. Click on "UT_Binding/Issue/Request 1" in the left-hand menu to see a sample request for the service. Now we need to do some editing of the request. Remove the 'Context="?"' attribute from RequestSecurityToken. Then paste the following into the Body of the RequestSecurityToken:
  • <t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
  • <t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
  • <t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
Now we need to configure a username and password to use when authenticating the client request. In the "Request Properties" box in the lower left corner, add "tesb" for the "username" and "password" properties. Now right click in the request pane, and select "Add WSS Username Token" (Password Text). Now send the request and you should receive a SAML Token in response.

Bear in mind that if you wish to re-use the SAML Token retrieved from the STS in a subsequent request, you must copy it from the "Raw" tab and not the "XML" tab of the response. The latter adds in whitespace that breaks the signature on the token. Another thing to watch out for is that the STS maintains a cache of the Username Token nonce values, so you will need to recreate the UsernameToken each time you want to get a new token.

3) Requesting a "PublicKey" KeyType

The example above uses a "Bearer" KeyType. Another common use-case, as is the case with the security-enabled services developed using the Talend Studio, is when the token must have the PublicKey/Certificate of the client embedded in it. To request such a token from the STS, change the "Bearer" KeyType as above to "PublicKey". However, we also need to present a certificate to the STS to include in the token.

As we are just using the test credentials used by the Talend STS, go to the Runtime_ESBSE/container/etc/keystores directory and extract the client key with:
  • keytool -exportcert -rfc -keystore clientstore.jks -alias myclientkey -file client.cer -storepass cspass
Edit client.cer and remove the first and last lines (that contain BEGIN/END CERTIFICATE). Now go back to SoapUI and add the following to the RequestSecurityToken Body:
  • <t:UseKey xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512"><ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:X509Data><ds:X509Certificate>...</ds:X509Certificate></ds:X509Data></ds:KeyInfo></t:UseKey>
where the content of the X.509 Certificate is the content in client.cer. This time, the token issued by the STS will contain the public key of the client embedded in the SAML Subject.

Categories: Colm O hEigeartaigh

Securing an Apache Kafka broker - part II

Colm O hEigeartaigh - Mon, 09/19/2016 - 15:49
In the previous post, we looked at how to configure an Apache Kafka broker to require SSL client authentication. In this post we will add authorization to the example, making sure that only authorized producers can send messages to the broker. In addition, we will show how to enforce authorization rules per-topic for consumers.

1) Configure authorization in the broker

Configure Apache Kafka as per the previous tutorial. To enforce some custom authorization rules in Kafka, we will need to implement the Kafka Authorizer interface. This interface contains an "authorize" method, which supplies a Session object, from which you can obtain the current principal, as well as the Operation and Resource upon which to enforce an authorization decision.

In terms of the example detailed in the previous post, we created broker, service (producer) and client (consumer) principals. We want to enforce authorization decisions as follows:
  • Let the broker principal do anything
  • Let the producer principal read/write on all topics
  • Let the consumer principal read/describe only on topics starting with "test".
There is a sample Authorizer implementation, CustomAuthorizer, available in some Kafka unit tests I wrote on GitHub that can be used in this example:

Next we need to package up the CustomAuthorizer in a jar so that it can be used in the broker. You can do this by checking out the testcases github repo, and invoking "mvn clean package jar:test-jar -DskipTests" in the "apache/bigdata/kafka" directory. Now copy the resulting test jar in "target" to the "libs" directory in your Kafka installation. Finally, edit the "config/server.properties" file and add the following configuration item:
  • authorizer.class.name=org.apache.coheigea.bigdata.kafka.CustomAuthorizer
2) Test authorization

Now let's test the authorization logic. Restart the broker and the producer:
  • bin/kafka-server-start.sh config/server.properties
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial. Now we will create a new topic called "messages":
  • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic messages
Restart the producer to send messages to "messages" instead of "test". This should work correctly. Now try to consume from "messages" instead of "test". This should result in an authorization failure, as the "client" principal can only consume from the "test" topic according to the authorization rules.
Categories: Colm O hEigeartaigh

Securing an Apache Kafka broker - part I

Colm O hEigeartaigh - Fri, 09/16/2016 - 18:19
Apache Kafka is a messaging system for the age of big data, with a strong focus on reliability, scalability and message throughput. This is the first part of a short series of posts on how to secure an Apache Kafka broker. In this post, we will focus on authenticating message producers and consumers using SSL. Future posts will look at how to authorize message producers and consumers.

1) Create SSL keys

As we will be securing the broker using SSL client authentication, the first step is to create some keys for testing purposes. Download the OpenSSL ca.config file used by the WSS4J project. Change the "certificate" value to "ca.pem", and the "private_key" value to "cakey.pem". You will also need to create a directory called "ca.db.certs", and make an empty file called "ca.db.index". Now create a new CA key and cert via:
  • openssl req -x509 -newkey rsa:1024 -keyout cakey.pem -out ca.pem -config ca.config -days 3650
Just accept the default options. Now we need to convert the CA cert into jks format:
  • openssl x509 -outform DER -in ca.pem -out ca.crt
  • keytool -import -file ca.crt -alias ca -keystore truststore.jks -storepass security
Now we will create the client key, sign it with the CA key, and put the signed client cert and CA cert into a keystore:
  • keytool -genkey -validity 3650 -alias myclientkey -keyalg RSA -keystore clientstore.jks -dname "CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE" -storepass cspass -keypass ckpass
  • keytool -certreq -alias myclientkey -keystore clientstore.jks -file myclientkey.cer -storepass cspass -keypass ckpass
  • echo 20 > ca.db.serial
  • openssl ca -config ca.config -policy policy_anything -days 3650 -out myclientkey.pem -infiles myclientkey.cer
  • openssl x509 -outform DER -in myclientkey.pem -out myclientkey.crt
  • keytool -import -file ca.crt -alias ca -keystore clientstore.jks -storepass cspass
  • keytool -import -file myclientkey.crt -alias myclientkey -keystore clientstore.jks -storepass cspass -keypass ckpass
Now follow the same template to create a "service" key in servicestore.jks, with store password "sspass" and key password "skpass". In addition, we will create a "broker" key in brokerstore.jks, with store password "bspass" and key password "bkpass".

2) Configure the broker

Download Apache Kafka and extract it (0.10.0.1 was used for the purposes of this tutorial). Copy the keys created in section "1" into $KAFKA_HOME/config. Start Zookeeper with:
  • bin/zookeeper-server-start.sh config/zookeeper.properties
Now edit 'config/server.properties' and add the following:
  • ssl.keystore.location=./config/brokerstore.jks
  • ssl.keystore.password=bspass
  • ssl.key.password=bkpass
  • ssl.truststore.location=./config/truststore.jks
  • ssl.truststore.password=security
  • security.inter.broker.protocol=SSL
  • ssl.client.auth=required
  • listeners=SSL://localhost:9092
and start the broker and then create a "test" topic with:
  • bin/kafka-server-start.sh config/server.properties
  • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
3) Configure the producer

Now we will configure the message producer. Edit 'config/producer.properties' and add the following:
  • security.protocol=SSL
  • ssl.keystore.location=./config/servicestore.jks
  • ssl.keystore.password=sspass
  • ssl.key.password=skpass
  • ssl.truststore.location=./config/truststore.jks
  • ssl.truststore.password=security
and start the producer with:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to the topic to make sure that everything is working ok.

4) Configure the consumer

Finally we will configure the message consumer. Edit 'config/consumer.properties' and add the following:
  • security.protocol=SSL
  • ssl.keystore.location=./config/clientstore.jks
  • ssl.keystore.password=cspass
  • ssl.key.password=ckpass
  • ssl.truststore.location=./config/truststore.jks
  • ssl.truststore.password=security
and start the consumer with:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
The messages sent by the producer should appear in the console window of the consumer. So there it is, we have configured the Kafka broker to require SSL client authentication. In the next post we will look at adding simple authorization support to this scenario.
Categories: Colm O hEigeartaigh

Integrating Apache Camel with Apache Syncope - part II

Colm O hEigeartaigh - Fri, 09/09/2016 - 16:58
A recent blog post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0. It also covered a simple use-case for the new functionality, where the "createUser" Camel route is modified to send an email to an administrator when a User is created, with some details about the created User in the email. In this post, we will look at a different use-case, where the Camel provisioning manager is used to extend the functionality offered by Syncope.

1) The use-case

Apache Syncope stores users in internal storage in a table called "SyncopeUser". This table contains information such as the User Id, name, password, creation date, last password change date, etc. In addition, if there is an applicable password policy associated with the User realm, a list of the previous passwords associated with the User is stored in a table called "SyncopeUser_passwordHistory":

As can be seen from this screenshot, the table stores a list of Syncope User Ids with a corresponding password value. The main function of this table is to enforce the password policy. So for example, if the password policy states that a user can't change back to a password for at least 10 subsequent password changes, this table provides the raw information that is needed to enforce the policy.

Now, what if the administrator wants a stronger audit trail for User password changes other than what is provided by default in the Syncope internal storage? In particular, the administrator would like a record of when the User changes a password. The "SyncopeUser" table only stores the last password change date. There is no way of seeing when each password stored in the "SyncopeUser_passwordHistory" table was changed. Enter the Camel provisioning manager...

2) Configure Apache Syncope

Download and install  Apache Syncope (I used the "standalone" download for the purposes of this demo). Start Apache Syncope and log on to the admin console. First we will create a password policy. Click on "Configuration" and then "Policies". Click on the "Password" tab and then select the "+" button to create a new policy. Give a description for the policy and then select a history length of "10".

Now let's create a new user in the default realm. Click on "realms" and hit the "edit" button. Select the password policy you have just created and click "Finish". This means that all users created in this realm will have the applicable policy applied to them. Now click on the "User" tab and the "+" button to add a new user called "alice". Click on "Password Management" and enter a password for "alice". Now we have a new user created, we want to be able to see when she updates her password from now on.

Click on "Extensions" and "Camel Routes". Find the "updateUser" route (might not be on the first page) and edit it. Create a new Camel "filter" (as per the screenshot below) just above the "bean method=" line with the following content:
  • <simple>${body.password} != null</simple>
  • <setHeader headerName="CamelFileName"><simple>${body.key}.passwords</simple></setHeader>
  • <setBody><simple>New password '${body.password.value}' changed at time '${date:now:yyyy-MM-dd'T'HH:mm:ss.SSSZ}'\n</simple></setBody>
  • <to uri="file:./?fileExist=Append"/>
So what we are doing here is to audit the password changes for a particular user, by writing out the password + Timestamp to a file associated with that user. Let's examine what each of these statements do in turn. The first statement is the filter condition. It states that we should execute the following statements if the password is not null. The password will only be non null if it is being changed. So for example, if the user just changes a given attribute and not the password, the filter will not be invoked.

The second statement sets the Camel Header "CamelFileName" to the format of "<user.id>.passwords". This header is used by the Camel File component as the file name to write out to. The third statement sets the exchange Body (the file content) to show the password value along with a Timestamp. Finally, the fourth statement is an invocation of the Camel File component, which appends the exchange Body to the given file name. As we have overridden the message Body in the second statement above, we need to change the ${body} in the create call to ${exchangeProperty.actual}, which is the saved Body. Click on "save" to save the modified route.


Now let's update the user "alice" and set a new password a couple of times. There should be a new file in the directory where you launched Tomcat containing the audit log for password changes for that particular user. With the power of Apache Camel, we could audit to a database, to Apache Kafka, etc etc.


Categories: Colm O hEigeartaigh

Apache CXF Fediz 1.2.3 and 1.3.1 released

Colm O hEigeartaigh - Thu, 09/08/2016 - 19:45
Apache CXF Fediz 1.2.3 and 1.3.1 have been released. The 1.3.1 release contains the following significant features/fixes:
  • An update to use Apache CXF 3.1.7 
  • Support for Facebook Login as a Trusted IdP.
  • A fix for SAML SSO redirection on ForceAuthn or token expiry.
  • A bug fix to support multiple realms in the IdP.
  • A fix to enforce that mandatory claims are present in the received token.
In addition, both 1.2.3 and 1.3.1 contain a fix for a new security advisory - CVE-2016-4464:
Apache CXF Fediz is a subproject of Apache CXF which implements the WS-Federation Passive Requestor Profile for SSO specification. It provides a number of container based plugins to enable SSO for Relying Party applications. It is possible to configure a list of audience URIs for the plugins, against which the AudienceRestriction values of the received SAML tokens are supposed to be matched. However, this matching does not actually take place.

This means that a token could be accepted by the application plugin (assuming that the signature is trusted) that is targeted for another service, something that could potentially be exploited by an attacker.
    Categories: Colm O hEigeartaigh

    Syncope User Synchronisation with a Database

    Jan Bernhardt - Thu, 09/01/2016 - 13:12
    In a previous post I explained how to setup a datasource for an embedded H2 database and how to use it with the Karaf DB JAAS plugin.

In this post, I'll explain how to set up Syncope to synchronize users from that database into Syncope. Of course you can also use any other database with a matching JDBC driver.
Install Syncope

In this post I'll refer to the Syncope installation which comes with the Talend 6.1.1 installer. If you need to set up Syncope manually, please take a look at some posts from Colm.
Setup DB Connection Module

Syncope uses connid to connect to other backend systems like LDAP.
You need to download the DB connid bundle and follow the installation instructions.
    1. Open webapps/syncope/WEB-INF/classes/connid.properties and define your connid bundle location:
  Windows style: connid.locations=file:/C:/Talend/6.1.1/apache-tomcat/webapps/syncope/WEB-INF/connid/
  Linux style: connid.locations=file:/opt/Talend-6.1.1/apache-tomcat/webapps/syncope/WEB-INF/connid/
    2. Create the defined folder and copy your downloaded connid bundle (jar) into it
    3. Download and copy your required JDBC driver to your tomcat/lib folder
    4. Restart Syncope / Tomcat
    5. Login to Syncope Console: http://localhost:8080/syncope-console/
      Default-Username: admin
      Default-Password: password
Setup DB Connector

Next you need to set up a connection to your database before you can define any synchronization pattern.
    1. Switch to Resources -> Connectors and click Create
    2. Enter your connection name and select your connid bundle:
    3. Configure your connection settings:
Since Syncope expects SHA1 hashes to be uppercase, you must set this checkbox; otherwise your users will not be able to authenticate against Syncope with their synchronized password.

      In Syncope 2.x and newer it will also be possible to avoid user password synchronization, but instead to do a "pass-through authentication". This will be especially helpful if your passwords are not just hashed but also salted and encrypted.
    4. Perform a connection test by clicking on the top right world icon of the configuration tab

      If you are experiencing connection problems, take a look into the  tomcat/logs/core-connid.log file for detailed information.
    5. Select all checkboxes on the capabilities tab:
    6. Save your connection
Define DB Resource

Now you can set up a new resource to define the attribute mapping between the Syncope internal DB and the external DB.
      1. Click on Resources -> Resources -> Create
      2.  Switch to user mapping tab
      3. Click Save
Add Synchronization Task

To import users from your database you need to set up a synchronization task.
      1. Click on Task ->  Synchronization Tasks -> Create
      2. Click Save
      3. Execute your new synchronization task
        If your run was successful you will see alice as a new user under Users.
Create a new User

To test user propagation, you must create a new user and add this user to the H2-users resource.
        1. Click Users -> List -> Create
        2. Select Resource
        3. Save
        You will now find Bob in your H2 database.

        I was not able to do a role synchronization with my DB backend, due to missing support in the UI / connid handler.
        Categories: Jan Bernhardt

        Integrating Apache Camel with Apache Syncope - part I

        Colm O hEigeartaigh - Wed, 08/31/2016 - 12:29
Apache Syncope is an open-source Identity Management solution. A key feature of Apache Syncope is the ability to pull Users, Groups and Any Objects from multiple backend resources (such as LDAP, RDBMS, etc.) into Syncope's internal storage, where they can then be assigned roles, pushed to other backend resources, exposed via Syncope's REST API, etc.

        However, what if you wanted to easily perform some custom task as part of the Identity Management process? Wouldn't it be cool to be able to plug in a powerful integration framework such as Apache Camel, so that you could exploit Camel's huge list of messaging components and routing + mediation rules? Well with Syncope 2.0.0 you can do just this with the new Apache Camel provisioning manager. This is a unique and very powerful selling point of Apache Syncope in my opinion. In this article, we will introduce the new Camel provisioning manager, and show a simple example of how to use it.

        1) The new Apache Camel provisioning manager

        As stated above, a new provisioning manager is available in Apache Syncope 2.0.0 based on Apache Camel. A set of Camel routes are available by default which are invoked when the User, Groups and Any Objects in question are changed in some way. So for example, if a new User is created, then the corresponding Camel route is invoked at the same time. This allows the administrator to plug in custom logic on any of these state changes. The routes can be viewed and edited in the Admin Console by clicking on "Extensions" and then "Camel Routes".

        Each of the Camel routes uses a new "propagate" Camel component available in Syncope 2.0.0. This component encapsulates some common logic involved in using the Syncope PropagationManager to create some tasks, and to execute them via the PropagationTaskExecutor. All of the routes invoke this propagate component via something like:
        • <to uri="propagate:<propagateType>?anyTypeKind=<anyTypeKind>&options"/>
        Where propagateType is one of:
        • create
        • update
        • delete
        • provision
        • deprovision
        • status
        • suspend
        • confirmPasswordReset
        and anyTypeKind is one of:
        • USER
        • GROUP
        • ANY
        2) The use-case

        In this post, we will look at a simple use-case of sending an email to an administrator when a User is created, with some details about the created User in the email. Of course, this could be handled by a Notification Task, but we'll discuss some more advanced scenarios in future blog posts. Also note that a video available on the Tirasa blog details more or less the same use-case. For the purposes of the demo, we will set up a mailtrap account where we will receive the emails sent by Camel.

        3) Configure Apache Syncope

        Download and install  Apache Syncope (I used the "standalone" download for the purposes of this demo). Before starting Apache Syncope, we need to copy a few jars that are required by Apache Camel to actually send emails. Copy the following jars to $SYNCOPE/webapps/syncope/WEB-INF/lib:
        • http://repo1.maven.org/maven2/org/apache/camel/camel-mail/2.17.3/camel-mail-2.17.3.jar
        • http://repo1.maven.org/maven2/com/sun/mail/javax.mail/1.5.5/javax.mail-1.5.5.jar
        Now start Apache Syncope and log on to the admin console. Click on "Extensions" and then "Camel Routes". As we want to change the default route when users are created, click on the "edit" image for the "createUser" route. Add the following information just above the "bean method=" line:
        • <setHeader headerName="subject"><simple>New user ${body.username} created in realm ${body.realm}</simple></setHeader> 
        • <setBody><simple>User full name: ${body.plainAttrMap[fullname].values[0]}</simple></setBody>
        • <to uri="smtp://mailtrap.io?username=<username>&amp;password=<password>&amp;contentType=text/html&amp;to=dummy@apache-recipient.org"/>
        Let's examine what each of these statements does. The first statement sets the Camel header "subject", which corresponds to the Subject of the email. It simply states that a new user with a given name was created in a given realm. The second statement sets the message body, which Camel uses as the content of the email. It shows the User's full name, extracted from the "fullname" attribute, as an example of how to access User attributes in the route.

        The third statement invokes the Camel SMTP component. You'll need to substitute in the username and password you configured when setting up the mailtrap account. The recipient is configured using the "to" part of the URI. One more change is required to the existing route: as we have overridden the message body in the second statement above, we need to change the ${body} in the create call to ${exchangeProperty.actual}, which is the saved body. Click on "save" to save the modified route.
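        Putting the pieces together, the relevant part of the edited route could look roughly like the sketch below. The surrounding route elements and the exact arguments of the create call depend on your Syncope version, and the mailtrap username and password are placeholders.

        <setHeader headerName="subject">
          <simple>New user ${body.username} created in realm ${body.realm}</simple>
        </setHeader>
        <setBody>
          <simple>User full name: ${body.plainAttrMap[fullname].values[0]}</simple>
        </setBody>
        <to uri="smtp://mailtrap.io?username=MY_USER&amp;password=MY_PASS&amp;contentType=text/html&amp;to=dummy@apache-recipient.org"/>
        <!-- the original create call now uses the saved body -->
        <bean ref="uwfAdapter" method="create(${exchangeProperty.actual})"/>
        <to uri="propagate:create?anyTypeKind=USER"/>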


        Before creating a User, we need to add a "fullname" User attribute as the route expects. Go to "Configuration" and "Types", and click on the "Schemas" tab. Click on the "+" button under "PLAIN" and add a new attribute called "fullname". Then click on "AnyTypeClasses", and add the "fullname" attribute to the BaseUser AnyTypeClass.

        Finally, go to the "/" realm and create a new user, specifying a fullname attribute. A new email should be available in the mailtrap account as follows:

        Categories: Colm O hEigeartaigh

        Custom JSSE Truststore to enable XKMS Certificate Validation

        Jan Bernhardt - Mon, 08/29/2016 - 08:56
        Recently I was involved in a project which uses a central XKMS server for certificate and trust management. This was all working fine within the Talend runtime with a custom WSS4J crypto provider. However, the need arose to perform client certificate validation (mutual SSL) with Apache Fediz running inside an Apache Tomcat server.


        Usually I would use a JKS truststore for Tomcat to add trusted certificates (CAs). However, this was not possible for this project, because all certificates are managed inside an LDAP directory accessible via an XKMS service. Searching for a solution to extend Tomcat to support XKMS-based certificate validation, I came across the JSSE standard.

        Reading through the documentation was not entirely straightforward, but searching the internet finally helped me achieve my goal. In this blog post, I'll show you what I had to do to enable XKMS-based SSL certificate validation in Tomcat. To manage your SSL truststore settings you can use standard System or Tomcat properties:

        System property / Tomcat connector property — purpose:
        • javax.net.ssl.trustStore / truststoreFile — file system location of the JKS truststore file
        • javax.net.ssl.trustStorePassword / truststorePass — password for the JKS truststore
        • javax.net.ssl.trustStoreProvider / truststoreProvider — provider for a truststore factory implementation
        • javax.net.ssl.trustStoreType / truststoreType — type of your truststore (default is "JKS")
        • n/a / trustManagerClassName — custom trust manager class to use to validate client certificates
        Settings are considered in the following order:
        1. Tomcat truststore properties
        2. System Properties
        3. Tomcat keystore properties
        4. Default Values
        If trustManagerClassName is set, this implementation will be used and all other truststore settings will be ignored. If a truststore provider is defined, any Java standard provider will be ignored.

        You can review this behavior in the Tomcat JSSESocketFactory init method.

        The easiest way to achieve my goal was to implement my own XKMSTrustManager implementing the javax.net.ssl.X509TrustManager interface.
        import java.net.MalformedURLException;
        import java.net.URI;
        import java.security.cert.CertificateException;
        import java.security.cert.X509Certificate;

        import javax.net.ssl.X509TrustManager;

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        // XKMSService and XKMSInvoker come from the CXF XKMS client API; their imports
        // depend on your CXF/Talend version and are omitted here.
        public class XKMSTrustManager implements X509TrustManager {

            private static final Logger LOG = LoggerFactory.getLogger(XKMSTrustManager.class);

            private XKMSInvoker xkms;

            public XKMSTrustManager() throws MalformedURLException {
                // Location of the XKMS service WSDL, overridable via a system property
                XKMSService xkmsService = new XKMSService(
                    URI.create(System.getProperty("xkms.wsdl.location", "http://localhost:8040/services/XKMS/?wsdl"))
                        .toURL());
                xkms = new XKMSInvoker(xkmsService.getXKMSPort());
            }

            @Override
            public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
                LOG.debug("Check client trust for: {}", chain);
                validateTrust(chain);
            }

            @Override
            public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
                LOG.debug("Check server trust for: {}", chain);
                validateTrust(chain);
            }

            @Override
            public X509Certificate[] getAcceptedIssuers() {
                return new X509Certificate[] {};
            }

            protected void validateTrust(X509Certificate[] chain) throws CertificateException {
                if (chain == null) {
                    throw new CertificateException("Certificate chain is null");
                }

                // Delegate the actual trust decision to the XKMS service
                if (!xkms.validateCertificate(chain)) {
                    LOG.error("Certificate chain is not trusted: {}", chain);
                    throw new CertificateException("Certificate chain is not trusted");
                }
            }
        }
        Tomcat\conf\server.xml
        <Server port="9005" shutdown="SHUTDOWN">

          <Service name="Catalina">

            <Connector port="9443" protocol="org.apache.coyote.http11.Http11Protocol"
                       maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
                       keystoreFile="idp-ssl-key.jks"
                       keystorePass="tompass"
                       trustManagerClassName="org.talend.ws.security.jsse.XKMSTrustManager"
                       clientAuth="true"
                       sslProtocol="TLS" />

          </Service>

        </Server>


        However, setting a custom trust manager this way is only possible if the option is exposed by your application or if you have access to the source code of the SSL socket factory. In all other cases you will have to implement your own security provider that supplies its own truststore factory. This task is much more challenging. During my internet research on this topic I found several pages which should be a good reference for you if you have to go this way:

        JCA Reference Guide - Crypto Provider

        Howto Implement a JCA Provider

        JSSE Reference Guide - Customized Certificate Storage

        Custom CA Truststore in addition to System CA Truststore
        HowTo: register a global security provider
        <java-home>/lib/security/java.security:
        security.provider.2=sun.security.provider.Sun
        security.provider.3=sun.security.rsa.SunRsaSign
        security.provider.4=sun.security.provider.SunJCE
        Advantage: multiple providers can coexist; you add just the "missing piece".
        Disadvantage: system-wide configuration.

        Override security provider settings with system properties
        Change security settings via code:
        Security.insertProviderAt(new FooBarProvider(), 1);
        Register a trust manager factory (inside your provider):
        put("TrustManagerFactory.SunX509", "sun.security.ssl.TrustManagerFactoryImpl$SimpleFactory");
        put("TrustManagerFactory.PKIX", "sun.security.ssl.TrustManagerFactoryImpl$PKIXFactory");
        Using a Custom Certificate Trust Store
        Sun JSSE Provider Implementation
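        To illustrate how these pieces fit together, here is a minimal sketch of a custom provider that registers an XKMS-backed trust manager factory. The class names XKMSProvider and XKMSTrustManagerFactorySpi are hypothetical; the factory SPI itself would still have to be implemented against javax.net.ssl.TrustManagerFactorySpi and return the XKMSTrustManager shown above.

        import java.security.Provider;
        import java.security.Security;

        // Hypothetical provider contributing an XKMS based TrustManagerFactory
        public class XKMSProvider extends Provider {

            public XKMSProvider() {
                super("XKMS", 1.0, "XKMS based trust management provider");
                // Map the algorithm name to a (hypothetical) TrustManagerFactorySpi implementation
                put("TrustManagerFactory.XKMS", "org.talend.ws.security.jsse.XKMSTrustManagerFactorySpi");
            }

            public static void register() {
                // Programmatic alternative to editing java.security
                Security.insertProviderAt(new XKMSProvider(), 1);
            }
        }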
        Categories: Jan Bernhardt

        Pulling users and groups from LDAP into Apache Syncope 2.0.0

        Colm O hEigeartaigh - Fri, 08/26/2016 - 17:54
        A previous tutorial showed how to synchronize (pull) users and roles into Apache Syncope 1.2.x from an LDAP backend (Apache Directory). Interacting with an LDAP backend appears to be a common use-case for Apache Syncope users. For this reason, in this tutorial we will cover how to pull users and groups (previously roles) into Apache Syncope 2.0.0 from an LDAP backend via the Admin Console, as it is a little different from the previous 1.2.x releases.

        1) Apache DS

        The basic scenario is that we have a directory that stores user and group information that we would like to import into Apache Syncope 2.0.0. For the purposes of this tutorial, we will work with Apache DS. The first step is to download and launch Apache DS. I recommend installing Apache Directory Studio for an easy way to create and view the data stored in your directory.

        Create two new groups (groupOfNames) in the default domain ("dc=example,dc=com") called "cn=employee,ou=groups,ou=system" and "cn=boss,ou=groups,ou=system". Create two new users (inetOrgPerson) "cn=alice,ou=users,ou=system" and "cn=bob,ou=users,ou=system". Now edit the groups you created such that both alice and bob are employees, but only alice is a boss. Specify "sn" (surname) and "userPassword" attributes for both users.
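        If you prefer to script the directory content rather than clicking through Apache Directory Studio, the entries described above could be expressed roughly as the following LDIF sketch. The sn and userPassword values are illustrative placeholders.

        dn: cn=alice,ou=users,ou=system
        objectClass: top
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        cn: alice
        sn: Alice
        userPassword: password

        dn: cn=bob,ou=users,ou=system
        objectClass: top
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        cn: bob
        sn: Bob
        userPassword: password

        dn: cn=employee,ou=groups,ou=system
        objectClass: top
        objectClass: groupOfNames
        cn: employee
        member: cn=alice,ou=users,ou=system
        member: cn=bob,ou=users,ou=system

        dn: cn=boss,ou=groups,ou=system
        objectClass: top
        objectClass: groupOfNames
        cn: boss
        member: cn=alice,ou=users,ou=system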

        2) Pull data into Apache Syncope

        The next task is to import (pull) the user data from Apache DS into Apache Syncope. Download and launch an Apache Syncope 2.0.x instance. Make sure that an LDAP Connector bundle is available (see here).

        a) Define a 'surname' User attribute

        The inetOrgPerson instances we created in Apache DS have a "sn" (surname) attribute. We will map this into an internal User attribute in Apache Syncope. The Schema configuration is quite different in the Admin Console compared to Syncope 1.2.x. Select "Configuration" and then "Types" in the left hand menu. Click on the "Schemas" tab and then the "+" button associated with "PLAIN". Add "surname" for the Key and click "save". Now go into the "AnyTypeClasses" tab and edit the "BaseUser" item. Select "surname" from the list of available plain Schema attributes. Now the users we create in Syncope can have a "surname" attribute.




        b) Define a Connector

        The next thing to do is to define a Connector to enable Syncope to talk to the Apache DS backend. Click on "Topology" in the left-hand menu, and on the ConnId instance on the map. Click "Add new connector" and create a new Connector of type "net.tirasa.connid.bundles.ldap". On the next tab select:
        • Host: localhost
        • TCP Port: 10389
        • Principal: uid=admin,ou=system
        • Password: <password>
        • Base Contexts: ou=users,ou=system and ou=groups,ou=system
        • LDAP Filter for retrieving accounts: cn=*
        • Group Object Classes: groupOfNames
        • Group member attribute: member
        • Click on "Maintain LDAP Group Membership".
        • Uid attribute: cn
        • Base Context to Synchronize: ou=users,ou=system and ou=groups,ou=system
        • Object Classes to Synchronize: inetOrgPerson and groupOfNames
        • Status Management Class: net.tirasa.connid.bundles.ldap.commons.AttributeStatusManagement
        • Tick "Retrieve passwords with search".
        Click on the "heart" icon at the top of tab to check to see whether Syncope is able to connect to the backend resource. If you don't see a green "Successful Connection" message, then consult the logs. On the next tab select all of the available capabilities and click on "Finish".

        c) Define a Resource

        Next we need to define a Resource that uses the LDAP Connector.  The Resource essentially defines how we use the Connector to map information from the backend into Syncope Users and Groups. Click on the Connector that was created in the Topology map and select "Add new resource". Just select the defaults and finish creating the new resource. When the new resource is created, click on it and add some provisioning rules via "Edit provision rules".

        Click the "+" button and select the "USER" type to create the mapping rules for users. Click "next" until you come to the mapping tab and create the following mappings:


        Click "next" and enable "Use Object Link" and enter "'cn=' + username + ',ou=users,ou=system'". Click "Finish" and "save". Repeat the process above for the "GROUP" type to create a mapping rule for groups as follows:
        Similar to creating the user mappings, we also need to enable "Use Object Link" and enter "'cn=' + name + ',ou=groups,ou=system'". Click "Finish" and "save".

        d) Create a pull task

        Having defined a Connector and a Resource to use that Connector, with mappings to map User/Group information to and from the backend, it's time to import the backend information into Syncope.  Click on the resource and select "Pull Tasks". Create a new Pull Task via the "+" button. Select "/" as the destination realm to create the users and groups in. Choose "FULL_RECONCILIATION" as the pull mode. Select "LDAPMembershipPullActions"  (this will preserve the fact that users are members of a group in Syncope) and "LDAPPasswordPullActions" from the list of available actions. Select "Allow create/update/delete". When the task is created,  click on the "execute" button (it looks like a cogged wheel). Now switch to the "Realms" tab in the left-hand menu and look at the users and groups that have been imported in the "/" realm from Apache DS.


        Categories: Colm O hEigeartaigh

        SwaggerUI in CXF or what Child's Play really means

        Sergey Beryozkin - Tue, 08/23/2016 - 14:03
        For a while we have had an extensive demonstration of how to enable Swagger UI for CXF endpoints that return Swagger documents, but the only 'problem' was that our demos only showed how to unpack a Swagger UI module into a local folder with the help of a Maven plugin and make these unpacked resources available to browsers.
        It was not immediately obvious to users how to activate Swagger UI, and with the news coming from SpringBoot land that it is apparently really easy to do over there, it was time to look at making it easier for CXF users.
        So Aki, Andriy and I talked, and this is all that CXF 3.1.7 users have to do:

        1. Have Swagger2Feature activated to get Swagger JSON returned
        2. Add a swagger-ui dependency to the runtime classpath (see the dependency sketch after this list)
        3. Access Swagger UI
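        For Maven users, one common way to get the Swagger UI resources onto the classpath is the swagger-ui webjar. The coordinates and version below are only an example; check the CXF documentation for the version matching your CXF release.

        <dependency>
          <groupId>org.webjars</groupId>
          <artifactId>swagger-ui</artifactId>
          <version>2.1.4</version>
        </dependency>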

        For example, run a description_swagger2 demo. After starting a server go to the CXF Services page and you will see:


        Click on the link and see a familiar Swagger UI page showing your endpoint's API.

        Have you ever wondered what some developers mean when they say it is child's play to try whatever they have done? You'll find it hard to find a better example after trying Swagger UI with CXF 3.1.7 :-)

        Note that in CXF 3.1.8-SNAPSHOT we have already fixed it to work for Blueprint endpoints in OSGi (with help from Łukasz Dywicki). The Swagger UI auto-linking code has also been improved to support some older browsers better.

        Besides, CXF 3.1.8 will also offer proper support for Swagger correctly representing multiple JAX-RS endpoints, based on a fix contributed by Andriy and available in Swagger 1.5.10, including the case where the API interface and implementations live in separate (OSGi) bundles (Łukasz figured out how to make that work).

        Before I finish let me return to the description_swagger2 demo. Add a cxf-rt-rs-service-description dependency to pom.xml. Start the server and check the services page:


        Of course some users do and will continue working with XML-based services, and WADL is the best language available to describe such services. If you click on a WADL link you will see an XML document returned. WADLGenerator can be configured with an XSLT template reference, and with a good template you can get a UI as good as this Apache Syncope document.

        Whatever your data representation preferences are, CXF will get you supported.

         




        Categories: Sergey Beryozkin

        OpenId Connect in Apache CXF Fediz 1.3.0

        Colm O hEigeartaigh - Fri, 08/12/2016 - 18:02
        Previous blog posts have described support for OpenId Connect protocol bridging in the Apache CXF Fediz IdP. What this means is that the Apache CXF Fediz IdP can bridge between the WS-Federation protocol and OpenId Connect third party IdPs, when the user must be authenticated in a different security domain. However, the 1.3.0 release of Apache CXF Fediz also sees the introduction of a new OpenId Connect IdP which is independent of the existing (WS-Federation and SAML-SSO based) IdP, and based on Apache CXF. This post will introduce the new IdP via an example.

        The example code is available on github:
        • cxf-fediz-oidc: This project shows how to use interceptors of Apache CXF to authenticate and authorize clients of a JAX-RS service using OpenId Connect.
        1) The secured service

        The first module available in the example contains a trivial JAX-RS service based on Apache CXF which "doubles" a number passed as a path parameter via HTTP GET. The service declares, via a @RolesAllowed annotation, that only users in the roles "User", "Admin" or "Manager" can access it.
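        As a rough illustration (the class and path names below are made up, not taken from the sample project), such a service could look like this:

        import javax.annotation.security.RolesAllowed;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;

        @Path("/doubleit")
        public class DoubleItService {

            // Only users in one of these roles may invoke the operation
            @GET
            @Path("/{number}")
            @Produces("text/plain")
            @RolesAllowed({"User", "Admin", "Manager"})
            public int doubleIt(@PathParam("number") int number) {
                return number * 2;
            }
        }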

        The service is configured via Spring. The endpoint configuration references the service bean above, as well as the CXF SecureAnnotationsInterceptor, which enforces the @RolesAllowed annotation on the service bean. In addition, the service is configured with the CXF OidcRpAuthenticationFilter, which ensures that only users authenticated via OpenId Connect can access the service. The filter is configured with a URL to redirect the user to, and it explicitly requires a role claim in order to enforce authorization.

        The OidcRpAuthenticationFilter redirects the browser to a separate authentication endpoint, defined in the same spring file for convenience. This endpoint has a filter called OidcClientCodeRequestFilter, which initiates the OpenId Connect authorization code flow to a remote OpenId Connect IdP (in this case, the new Fediz IdP). It is also responsible for getting an IdToken after successfully getting an authorization code from the IdP.

        2) The Fediz OpenId Connect IdP

        The second module contains an integration test which deploys a number of wars into an Apache Tomcat container:
        • The "double-it" service as described above
        • The Apache CXF Fediz IdP which authenticates users via WS-Federation
        • The Apache CXF Fediz STS which performs the underlying authentication of users
        • The Apache CXF Fediz OpenId Connect IdP
        The way the Apache CXF Fediz OpenId Connect IdP works (at least for 1.3.x) is that user authentication is actually delegated to the WS-Federation based IdP via a Fediz plugin. So when the user is redirected to the Fediz IdP, (s)he gets redirected to the WS-Federation based IdP for authentication, and then gets redirected back to the OpenId Connect IdP with a WS-Federation Response. The OpenId Connect IdP parses this (SAML) Response and converts it into a JWT IdToken. Future releases will enable authentication directly at the OpenId Connect service.

        After deploying all of the services, the test code makes a series of REST calls to create a client in the OpenId Connect IdP, so that we can run the test without having to manually enter information in the client UI of the Fediz IdP. To run the test, simply remove the @org.junit.Ignore annotation on the "testInBrowser" method. The test code will create the clients in Fediz and then print out a URL in the console before sleeping. Copy the URL and paste it into a browser, then authenticate using the credentials "alice/ecila".
        Categories: Colm O hEigeartaigh

        Introducing Apache Syncope 2.0.0

        Colm O hEigeartaigh - Thu, 08/11/2016 - 17:17
        Apache Syncope is a powerful and flexible open-source Identity Management system that has been developed at the Apache Software Foundation for several years now. The Apache Syncope team has been busy developing a ton of new features for the forthcoming new major release (2.0.0), which will really help to cement Apache Syncope's position as a first class Identity Management solution. If you wish to experiment with these new features, a 2.0.0-M4 release is available. In this post we will briefly cover some of the new features and changes. For a more comprehensive overview please refer to the reference guide.

        1) Domains

        Perhaps the first new concept you will be introduced to in Syncope 2.0.0 after starting the (Admin) console is that of a domain. When logging in, as well as specifying a username, password, and language, you can also specify a configured domain. Domains are a new concept in Syncope 2.0.0 that facilitate multi-tenancy. Domains allow the physical separation of all data stored in Syncope (by storing the data in different database instances). Therefore, Syncope can facilitate users, groups etc. that are in different domains in a single Syncope instance.

        2) New Console layout

        After logging in, it becomes quickly apparent that the Syncope Console is quite different compared to the 1.2.x console. It has been completely rewritten and looks great. Connectors and Resources are now managed under "Topology" in the menu on the left-hand side. Users and Groups (formerly Roles) are managed under "Realms" in the menu. The Schema types are configured under "Configuration". A video overview of the new Console can be seen here.


        3) AnyType Objects

        With Syncope 1.2.x, it was possible to define plain/derived/virtual Schema Types for users, roles and memberships, but no other entities. In Syncope 2.0.0, the Schema Types are decoupled from the entity that uses them. Instead, a new concept called an AnyType class is available which is a collection of schema types. In turn, an AnyType object can be created which consists of any number of AnyType classes. AnyType objects represent the type of things that Apache Syncope can model. Besides the predefined Users and Groups, it can also represent physical things such as printers, workstations, etc. With this new concept, Apache Syncope 2.0.0 can model many different types of identities.

        4) Realms

        Another new concept in Apache Syncope 2.0.0 is that of a realm. A realm encapsulates a number of Users, Groups and Any Objects. It is possible to specify account and password policies per realm (see here for a blog entry on custom policies in Syncope 2.0.0). Each realm has a parent realm (apart from the pre-defined root realm, identified as "/"). The realm tree is hierarchical, meaning that Users, Groups etc. defined in a sub-realm are also defined on a parent realm. Combined with Roles (see below), realms facilitate some powerful access management scenarios.

        5) Groups/Roles

        In Syncope 2.0.0, what were referred to as "roles" in Syncope 1.2.x are now called "groups". In addition, "roles" in Syncope 2.0.0 are a new concept which associates a number of entitlements with a number of realms. Users assigned to a role can exercise the defined entitlements on any of the objects in the given realms (and any sub-realms).

        Syncope 2.0.0 also has the powerful concept of dynamic membership, which means that users can be assigned to groups or roles via a conditional expression (e.g. if an attribute matches a given value).

        6) Apache Camel Provisioning

        An exciting new feature of Apache Syncope 2.0.0 is the new Apache Camel provisioning engine, which is available under "Extensions/Camel Routes" in the Console. Apache Syncope comes pre-loaded with some Camel routes that are executed as part of the provisioning implementation for Users, Groups and Any Objects. The real power of this new engine lies in the ability to modify the routes to perform some custom provisioning rules. For example, on creating a new user, you may wish to send an email to an administrator. Or if a user is reactivated, you may wish to reactivate the user's home page on a web server. All these things and more are possible using the myriad of components available for use in Apache Camel routes. I'll explore this feature some more in future blog posts.

        7) End-User UI

        As well as the Admin console (available via /syncope-console), Apache Syncope 2.0.0 also ships with an Enduser console (available via /syncope-enduser). This allows users to edit only details pertaining to themselves, such as editing user attributes, changing the password, etc. See the following blog entry for more information on the new End-User UI.


        8) Command Line Interface (CLI) client

        Another new feature of Apache Syncope 2.0.0 is the CLI client. It is available as a separate download. Once downloaded, extract it and run (on Linux): ./syncopeadm.sh install --setup. Answer the questions about where Syncope is deployed and the credentials required to access it. After installation, you can run queries such as: ./syncopeadm.sh user --list.

        9) Apache CXF-based testcases

        I updated the testcases that I wrote previously to use Apache Syncope 2.0.0 to authenticate and authorize web service calls using Apache CXF. The new test cases are available here.
        Categories: Colm O hEigeartaigh

        CXF Spring Boot Starters Unveiled

        Sergey Beryozkin - Mon, 08/08/2016 - 23:51
        The very first check some new users may do these days, while evaluating your JAX-RS implementation, can be: how well is it integrated into Spring Boot?

        And the good news is that Apache CXF 3.1.7 users can start working with SpringBoot real fast.
         
        We have left it somewhat late; it is hard to prioritize sometimes among various new requirements, and we saw some users moving away. In such cases the community support is paramount, and the power of Open Source collaboration came to the rescue once again when it was really needed.

        I'd like to start with thanking James for providing an initial set of links to various SpringBoot documentation pages and reacting positively to the initial code we had. But you know yourself - sometimes we all value some little 'starters' - the initial code contributions :-)

        And then we had a Spring Boot expert coming in and getting the process moving. Vedran Pavic helped me to create the auto-configuration and starter modules for JAX-RS and JAX-WS, patiently explained how his initial contribution works, how these modules have to be designed, and helped with the advice throughout the process. I felt like I passed some SpringBoot qualification exam once we were finished which let me continue enhancing the JAX-RS starter independently before CXF 3.1.7 was released.

        CXF Spring Boot starters are now documented at this page which is also linked to from a Spring Boot README listing the community contributions.

        If you are working with CXF JAX-RS then do check this section. See the demos and get excited about the ease with which you can enable JAX-RS endpoints, their Swagger API docs (and auto-link Swagger UI - the topic of the next post).
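        As a taste of what the starter enables, here is a minimal sketch of a Spring Boot application exposing a CXF JAX-RS endpoint. It assumes the cxf-spring-boot-starter-jaxrs dependency is on the classpath and that component scanning of JAX-RS resources is switched on (e.g. cxf.jaxrs.component-scan=true in application.properties); check the CXF Spring Boot documentation page for the exact artifact and property names for your version.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;

        import org.springframework.boot.SpringApplication;
        import org.springframework.boot.autoconfigure.SpringBootApplication;
        import org.springframework.stereotype.Component;

        @SpringBootApplication
        public class SampleRestApplication {
            public static void main(String[] args) {
                SpringApplication.run(SampleRestApplication.class, args);
            }
        }

        // Picked up as a JAX-RS root resource when component scanning is enabled
        @Component
        @Path("/hello")
        class HelloResource {
            @GET
            @Produces("text/plain")
            public String hello() {
                return "Hello from CXF running on Spring Boot";
            }
        }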

        See how you can run your CXF WebClient or proxy clients in Spring Boot, initialized if needed from the metadata found in a Netflix Eureka registry. The demo code on master uses a CXF CircuitBreakerFailoverFeature written by a legendary DevMind - a sound, simple and light-weight Apache Zest based implementation.
        Not all users may realize how flexible CXF Failover Feature is. 

        While the most effort went into a JAX-RS starter I'm sure we will add more support for JAX-WS users too.

        We'll need to do a bit more work: link CXF statistics to the actuator endpoints, support scanning JAX-RS Applications, and a few other things.

        If you prefer working with Spring Boot, be certain that second-to-none support for running CXF services in Spring Boot will be there. Enjoy!



        Categories: Sergey Beryozkin

        Installing the Apache Ranger Key Management Server (KMS)

        Colm O hEigeartaigh - Mon, 08/08/2016 - 13:40
        The previous couple of blog entries have looked at how to install the Apache Ranger Admin Service as well as the Usersync Service. In this post we will look at how to install the Apache Ranger Key Management Server (KMS). KMS is a component of Apache Hadoop to manage cryptographic keys. Apache Ranger ships with its own KMS implementation, which allows you to store the (encrypted) keys in a database. The Apache Ranger KMS is also secured via policies defined in the Apache Ranger Admin Service.

        1) Build the source code

        The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting KMS archive to a location where you wish to install it:
        • tar zxvf apache-ranger-incubating-0.6.0.tar.gz
        • cd apache-ranger-incubating-0.6.0
        • mvn clean package assembly:assembly 
        • tar zxvf target/ranger-0.6.0-kms.tar.gz
        • mv ranger-0.6.0-kms ${rangerkms.home}
        2) Install the Apache Ranger KMS Service

        As the Apache Ranger KMS Service stores the cryptographic keys in a database, we will need to setup and configure a database. We will also configure the KMS Service to store audit logs in the database. Follow the steps given in section 2 of the tutorial on the Apache Ranger Admin Service to set up MySQL. We will also need to create a new user 'rangerkms':
        • CREATE USER 'rangerkms'@'localhost' IDENTIFIED BY 'password';
        • FLUSH PRIVILEGES; 
        You will need to install the Apache Ranger KMS Service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${rangerkms.home}/setup.sh and add it in, e.g.:
        • export JAVA_HOME=/opt/jdk1.8.0_91
        Next edit ${rangerkms.home}/install.properties and make the following changes (a consolidated sketch of the resulting entries follows the list):
        • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar (see previous tutorial).
        • Set (db_root_user/db_root_password) to (admin/password)
        • Set (db_user/db_password) to (rangerkms/password)
        • Change KMS_MASTER_KEY_PASSWD to a secure password value.
        • Set POLICY_MGR_URL=http://localhost:6080
        • Set XAAUDIT.DB.IS_ENABLED=true
        • Set XAAUDIT.DB.FLAVOUR=MYSQL 
        • Set XAAUDIT.DB.HOSTNAME=localhost 
        • Set XAAUDIT.DB.DATABASE_NAME=ranger_audit 
        • Set XAAUDIT.DB.USER_NAME=rangerlogger
        • Set XAAUDIT.DB.PASSWORD=password
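        For reference, the edited install.properties entries described above could end up looking roughly like this. The MySQL connector jar path and the master key password are placeholders, and the file contains many more settings that are left at their defaults.

        SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
        db_root_user=admin
        db_root_password=password
        db_user=rangerkms
        db_password=password
        KMS_MASTER_KEY_PASSWD=ChooseAStrongPassword
        POLICY_MGR_URL=http://localhost:6080
        XAAUDIT.DB.IS_ENABLED=true
        XAAUDIT.DB.FLAVOUR=MYSQL
        XAAUDIT.DB.HOSTNAME=localhost
        XAAUDIT.DB.DATABASE_NAME=ranger_audit
        XAAUDIT.DB.USER_NAME=rangerlogger
        XAAUDIT.DB.PASSWORD=password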
        Now you can run the setup script via "sudo ./setup.sh".

        3) Starting the Apache Ranger KMS service

        After a successful installation, first start the Apache Ranger admin service with "sudo ranger-admin start". Then start the Apache Ranger KMS Service via "sudo ranger-kms start". Now open a browser and go to "http://localhost:6080/". Log on with "keyadmin/keyadmin". Note that these are different credentials to those used to log onto the Apache Ranger Admin UI in the previous tutorial. Click on the "+" button on the "KMS" tab to create a new KMS Service. Specify the following values:
        • Service Name: kmsdev
        • KMS URL: kms://http@localhost:9292/kms
        • Username: keyadmin
        • Password: keyadmin
        Click on "Test Connection" to make sure that the KMS Service is up and running. If it is showing a connection failure, log out and log into the Admin UI using credentials "admin/admin". Go to the "Audit" section and click on "Plugins". You should see a successful message indicating that the KMS plugin can successfully download policies from the Admin Service:


        After logging back in to the UI as "keyadmin" you can start to create keys. Click on the "Encryption/Key Manager" tab. Select the "kmsdev" service in the dropdown list and click on "Add New Key". You can create, delete and rollover keys in the UI:



        Categories: Colm O hEigeartaigh
