Colm O hEigeartaigh

Securing an Apache Kafka broker using Apache Ranger and Apache Atlas

Colm O hEigeartaigh - Fri, 02/17/2017 - 17:28
Last year, I wrote a series of articles on securing Apache Kafka. In particular, the third article looked at how to use Apache Ranger to create authorization policies for Apache Kafka in the Ranger security admin UI, and how to install the Ranger plugin for Kafka so that it picks up and enforces the authorization policies. In this article, we will cover an alternative way of creating and enforcing authorization policies in Apache Ranger for Apache Kafka using Apache Atlas.

The Apache Ranger security admin UI allows you to assign users or groups a particular permission associated with a given Kafka topic. This is what is called a "Resource Based Policy" in Apache Ranger. However, an alternative is also available called a "Tag Based Policy". Instead of explicitly associating the user/group + permission with a resource (such as a Kafka topic), we can associate the user/group + permission with a "tag" (we can also create "deny" based policies associated with a "tag"). The "tag" itself contains the information about the resource that is being secured.

How does Apache Ranger obtain the relevant tags and associated entities? This is where Apache Atlas comes in. The previous post described how to secure access to Apache Atlas using Apache Ranger. Apache Atlas allows you to associate "tags" with entities such as Kafka topics, Hive tables, etc. Apache Ranger provides a "tagsync" service which runs periodically and obtains the tags from Apache Atlas and uploads them to Apache Ranger. The Ranger authorization plugin for Kafka downloads the authorization policies, including tags, from the Ranger admin service and evaluates whether access is allowed or not based on the policy evaluation. Let's look at an example...

1) Start Apache Atlas and create entities/tags for Kafka

The first step is to start Apache Atlas as per the previous tutorial. Note that we are not using the Apache Ranger authorization plugin for Atlas, so there is no need to follow step 2). Next we need to upload the Kafka entity of type "kafka_topic" that we are interested in securing. That can be done via the following command:
  • curl -v -H 'Accept: application/json, text/plain, */*' -H 'Content-Type: application/json;  charset=UTF-8' -u admin:admin -d @kafka-create.json http://localhost:21000/api/atlas/entities
where "kafka-create.json" is defined as:
Once this is done, log in to the admin console at http://localhost:21000 using credentials "admin/admin". Click on "Tags" and then "Create Tag" to create a tag called "KafkaTag". Next go to "Search" and search for the entity we have uploaded ("KafkaTest"). Click on the "+" button under "Tags" and associate the entity with the tag we have created.


2) Start Apache Ranger and create resource-based authorization policies for Kafka

Next we will follow the first tutorial to install Apache Kafka and to get a simple test-case working with SSL authentication, but no authorization (there is no need to start Zookeeper, as we already have Apache Atlas running, which starts a Zookeeper instance). Next follow the third tutorial to install the Apache Ranger admin service, as well as the Ranger plugin for Kafka. Create ("resource-based") authorization policies for the Kafka "test" topic in Apache Ranger. There is just one thing we need to change: call the Ranger service "cl1_kafka" instead of "KafkaTest" (this change needs to be made both in Ranger and in the "install.properties" file when installing the Ranger plugin for Kafka).

Now verify that the producer has permission to publish to the topic, and the consumer has permission to consume from the topic. Once this is working, then remove the resource-based policy for the consumer, and verify that the consumer no longer has permission to consume from the topic.

3) Use the Apache Ranger TagSync service to import tags from Atlas into Ranger

To create tag-based policies in Apache Ranger, we have to import the entity + tag we have created in Apache Atlas into Ranger via the Ranger TagSync service. After building Apache Ranger, extract the file called "target/ranger-<version>-tagsync.tar.gz". The Ranger TagSync service can obtain tag information from three sources: from Apache Atlas via a Kafka topic, from Apache Atlas via the REST API, or from a file. We will use the REST API of Atlas here. Edit 'install.properties' as follows:
  • Set TAG_SOURCE_ATLAS_ENABLED to "false"
  • Set TAG_SOURCE_ATLASREST_ENABLED to "true"
  • Set TAG_SOURCE_ATLASREST_DOWNLOAD_INTERVAL_IN_MILLIS to "60000" (just for testing purposes)
  • Specify "admin" for both TAG_SOURCE_ATLASREST_USERNAME and TAG_SOURCE_ATLASREST_PASSWORD
Save 'install.properties' and install the tagsync service via "sudo ./setup.sh". It can now be started via "sudo ranger-tagsync-services.sh start".

4) Create Tag-based authorization policies in Apache Ranger

Now we can create tag-based authorization policies in Apache Ranger. Earlier we used the name "cl1_kafka" for the service name instead of "KafkaTest" as in the previous tutorial. The reason for this is that the service name must match the qualified name attribute of the Kafka entity that we are syncing into Ranger.

In Ranger, click on "Access Manager" and "Tag Based Policies". Create a new "TAG" service called "KafkaTagService". When this is done, go into the new service and click on "Add New Policy". Hit (upper-case) "K" in the "TAG" field and "KafkaTag" should pop up automatically (showing that the import of tags from Atlas was successful). Add an "allow condition" for the client user with permissions to "consume" and "describe" for "kafka" as shown in the following picture:


Finally, edit the "cl1_kafka" service we created, select "KafkaTagService" for "Select Tag Service" and save. Then wait some time for the Ranger plugin to download the new policies and tags and try the consumer again. This time it should work! So we have shown how Ranger can create authorization policies based on tags as well as resources.

Securing Apache Atlas using Apache Ranger

Colm O hEigeartaigh - Mon, 02/06/2017 - 13:41
Apache Atlas, currently in the Apache Incubator, is a data governance and metadata framework for Apache Hadoop. It allows you to import data from a backend such as Apache Hive or Apache Falcon, and to classify and tag the data according to a set of business rules. In this tutorial we will show how to use Apache Ranger to create authorization policies to secure access to Apache Atlas.

1) Set up Apache Atlas

First let's look at setting up Apache Atlas. Download the latest released version (0.7.1-incubating) and extract it. Build the distribution that contains an embedded HBase and Solr instance via:
  • mvn clean package -Pdist,embedded-hbase-solr -DskipTests
The distribution will then be available in 'distro/target/apache-atlas-0.7.1-incubating-bin'. To launch Atlas, we need to set some variables to tell it to use the local HBase and Solr instances:
  • export MANAGE_LOCAL_HBASE=true
  • export MANAGE_LOCAL_SOLR=true
Before starting Atlas, for testing purposes let's add a new user called 'alice' in the group 'DATA_SCIENTIST' with password 'password'. Edit 'conf/users-credentials.properties' and add:
  • alice=DATA_SCIENTIST::5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
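The value after the "::" separator is simply the SHA-256 hash of the password ("password" in this case); it can be regenerated on a typical Linux machine via:
  • echo -n "password" | sha256sum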
Now let's start Apache Atlas with 'bin/atlas_start.py'. The Apache Atlas web service can be explored via 'http://localhost:21000/'. To populate some sample data in Apache Atlas, run the command 'bin/quick_start.py' (using credentials admin/admin). To see all traits/tags that have been created, use Curl as follows:
  • curl -u alice:password http://localhost:21000/api/atlas/types?type=TRAIT
2) Install the Apache Ranger Atlas plugin

To use Apache Ranger to secure Apache Atlas, the next step is to configure and install the Apache Ranger Atlas plugin. Follow the steps in an earlier tutorial to build Apache Ranger and to set up and start the Apache Ranger Admin service. I recommend using the latest SNAPSHOT of Ranger (0.7.0-SNAPSHOT at this time), as some bugs relating to Atlas support have been fixed since the 0.6.x release. Once this is done, go back to the Apache Ranger distribution that you have built and extract the atlas plugin:
  • tar zxvf target/ranger-0.7.0-SNAPSHOT-atlas-plugin.tar.gz
 Edit 'install.properties' with the following changes:
  • POLICY_MGR_URL=http://localhost:6080
  • Specify location for SQL_CONNECTOR_JAR 
  • Specify REPOSITORY_NAME (AtlasTest)
  • COMPONENT_INSTALL_DIR_NAME pointing to your Atlas install
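For reference, the edited lines in 'install.properties' end up looking something like the following (the SQL connector and Atlas installation paths are placeholders for a local setup):
  • POLICY_MGR_URL=http://localhost:6080
  • SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
  • REPOSITORY_NAME=AtlasTest
  • COMPONENT_INSTALL_DIR_NAME=/opt/apache-atlas-0.7.1-incubating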
Now install the plugin via 'sudo ./enable-atlas-plugin.sh'. If you see an error about "libext", then create a new empty directory called "libext" in the Atlas distribution and try again. Note that the Ranger plugin will try to store policies by default in "/etc/ranger/AtlasTest/policycache". As we installed the plugin as "root", make sure that this directory is accessible to the user that is running Apache Atlas. Now restart Apache Atlas to enable the Ranger plugin.

3) Creating authorization policies for Atlas in the Ranger Admin Service

Now that we have set up Apache Atlas to use Apache Ranger for authorization, what remains is to start the Apache Ranger Admin Service and to create some authorization policies. Start Apache Ranger ('sudo ranger-admin start'). Log in to 'http://localhost:6080/' (credentials admin/admin). Click on the "+" button for Atlas, and specify the following fields:
  • Service Name: AtlasTest
  • Username: admin
  • Password: admin
  • atlas.rest.address: http://localhost:21000
Click on "Test Connection" to make sure that we can communicate successfully with Apache Atlas and then "Add". Click on the new link for "AtlasTest". Let's see if our new user "alice" is authorized to read the tags in Atlas. Execute the Curl command defined above (allowing 30 seconds for the Ranger plugin to pull the policies from the Ranger Admin Service). You should see a 403 Forbidden message from Atlas.

Now let's update the authorization policies to allow "alice" access to reading the tags. Back in Apache Ranger, click on "Settings" and then "Users/Groups" and "Groups". Click on "Add new group" and enter "DATA_SCIENTIST" for the name. Now go back into "AtlasTest", and edit the policy called "all - type". Create a new "Allow Condition" for the group "DATA_SCIENTIST" with permission "read" and click "Save". After waiting some time for the policies to sync, try again with the "Curl" command and it should work.



Authenticating users in the Apache Ranger Admin Service via PAM

Colm O hEigeartaigh - Thu, 02/02/2017 - 13:08
Over the past few months, I've written various tutorials about different ways you can authenticate to the Apache Ranger Admin Service. In summary, here are the options that have been covered so far:
The remaining option is to authenticate users directly to the local UNIX machine. There is a legacy way of doing this that supports authentication using shadow files. However, a much better approach is to support user authentication using Pluggable Authentication Modules (PAM). This means we can delegate user authentication to various PAM modules, giving us a wide range of user authentication options. In this post we will show how to configure the Ranger Admin Service to authenticate users on a local Linux machine using PAM. There is also an excellent in-depth tutorial that covers PAM and Ranger available here.

1) Configuring the Apache Ranger Admin Service to use PAM for authentication

Follow the steps in a previous tutorial to build Apache Ranger and to setup and install the Apache Ranger Admin service. Edit 'conf/ranger-admin-site.xml' and change the following configuration value:
  • ranger.authentication.method: PAM
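As 'ranger-admin-site.xml' uses the standard Hadoop-style XML configuration format, the change takes the following form (a sketch; the property should already be present in the file with a different value):

  <property>
    <name>ranger.authentication.method</name>
    <value>PAM</value>
  </property>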
2) Add a PAM configuration file for Apache Ranger

The next step is to add a PAM configuration file for Apache Ranger. Create a file called '/etc/pam.d/ranger-admin' with the content:
  • auth    required    pam_unix.so
  • account    required    pam_unix.so
Essentially this means that we are delegating authentication to the local unix machine. Now start the Apache Ranger Admin service. You should be able to log on to http://localhost:6080/login.jsp using a local user credential.

Syncing Users and Groups from UNIX into Apache Ranger

Colm O hEigeartaigh - Tue, 01/17/2017 - 13:40
The previous blog post showed how to authenticate users logging in to the Apache Ranger admin service via LDAP. An older blog post covered how to sync users and groups from LDAP into Apache Ranger so that they can be used both for authentication and to construct authorization policies. Another option is to sync users and groups from the local UNIX machine into Apache Ranger, something we will cover in this post.

1) Build the Apache Ranger usersync module

Follow the steps in the following tutorial to build Apache Ranger and to setup and start the Apache Ranger Admin service. Once this is done, go back to the Apache Ranger distribution that you have built and copy the usersync module:
  • tar zxvf target/ranger-0.6.0-usersync.tar.gz
  • mv ranger-0.6.0-usersync ${usersync.home}
2) Configure and build the Apache Ranger usersync service 

You will need to install the Apache Ranger Usersync service using "sudo". If the root user does not have a JAVA_HOME property defined, then edit ${usersync.home}/setup.sh + add in, e.g.:
  • export JAVA_HOME=/opt/jdk1.8.0_112
Next edit ${usersync.home}/install.properties and make the following changes:
  • POLICY_MGR_URL = http://localhost:6080
  • SYNC_SOURCE = unix
  • SYNC_INTERVAL = 1 (just for testing purposes....)
Now you can run the setup script via "sudo ./setup.sh". 

3) Start the Usersync service

The Apache Ranger Usersync service can be started via "sudo ./ranger-usersync-services.sh start". After 1 minute (see SYNC_INTERVAL above), it should successfully copy the users/groups from the local UNIX machine into the Apache Ranger Admin. Open a browser and go to "http://localhost:6080", and click on "Settings" and then "Users/Groups". You should see the users and groups synced successfully.
 


Authenticating users in the Apache Ranger Admin Service via LDAP

Colm O hEigeartaigh - Wed, 12/21/2016 - 18:17
In the summer, I wrote a couple of blog posts covering how to configure and install the Apache Ranger Admin Service, and then how to use the Apache Ranger Usersync Service to import users and groups from LDAP into the Apache Ranger Admin Service. The Usersync service periodically imports all users and groups that match the configured search base into the Apache Ranger Admin service. However, this bulk import of users and groups might be unnecessary for your particular requirements. In this post we will show instead how to authenticate users logging in to the Apache Ranger Admin Service UI using LDAP.

1) The OpenDS backend

As with the tutorial on the Apache Ranger usersync service, we will use OpenDS as the LDAP server. It contains a domain called "dc=example,dc=com", and 5 users (alice/bob/dave/oscar/victor) and 2 groups (employee/manager). Victor, Oscar and Bob are employees, Alice and Dave are managers. Here is a screenshot using Apache Directory Studio:


2) Configuring the Apache Ranger Admin Service to use LDAP for authentication

Follow the steps in the following tutorial to build Apache Ranger and to setup and install the Apache Ranger Admin service. Edit 'conf/ranger-admin-site.xml' and change/specify the following configuration values:
  • ranger.authentication.method: LDAP
  • ranger.ldap.url: ldap://localhost:2389
  • ranger.ldap.user.dnpattern: cn={0},ou=users,dc=example,dc=com
  • ranger.ldap.group.searchbase: ou=groups,dc=example,dc=com
  • ranger.ldap.group.searchfilter: (member=cn={1},ou=users,dc=example,dc=com)
  • ranger.ldap.group.roleattribute: cn
  • ranger.ldap.base.dn: dc=example,dc=com
  • ranger.ldap.bind.dn: cn=Directory Manager,dc=example,dc=com
  • ranger.ldap.bind.password: test
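In XML form, each of these is a standard property element in 'conf/ranger-admin-site.xml', for example (a sketch for the first two values):

  <property>
    <name>ranger.authentication.method</name>
    <value>LDAP</value>
  </property>
  <property>
    <name>ranger.ldap.url</name>
    <value>ldap://localhost:2389</value>
  </property>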
Note that the "group" configuration attributes must be specified, even though the group information is not actually used. I've submitted a patch for this which should be fixed for the next Ranger release. Now simply save the changes to 'conf/ranger-admin-site.xml' and start the Apache Ranger Admin service. You should be able to log on to http://localhost:6080/login.jsp using the LDAP credentials store in OpenDS.

Home Realm Discovery in the Apache CXF Fediz IdP

Colm O hEigeartaigh - Fri, 11/11/2016 - 17:42
When a client application (secured via either WS-Federation or SAML SSO) redirects a user to the Apache CXF Fediz IdP, the IdP must figure out what the home realm of the user is. If the home realm of the user corresponds to the realm of the IdP, then the IdP can authenticate the user. However, if the home realm does not match that of the IdP, then the IdP has the option to forward the authentication request to a third party IdP for authentication, if it is configured to do this. In this post, we will look at the different options available in the IdP to figure out what the home realm of the user is.

1) The 'whr' query parameter

When using the WS-Federation protocol, the application can specify the home realm of the user by adding the 'whr' query parameter to the URI that the browser is redirected to. Alternatively, the 'whr' query parameter could be added by a reverse proxy sitting in front of the IdP. Here is an example of such a URI including a 'whr' query parameter:
  • https://localhost:45753/fediz-idp-realmb/federation?wa=wsignin1.0&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Aidp%3Arealm-A&wreply=https%3A%2F%2Flocalhost%3A43618%2Ffediz-idp%2Ffederation&whr=urn:org:apache:cxf:fediz:idp:realm-B&wfresh=10&wctx=c07a5b9a-e270-4855-9201-fc1801851cc9
2) The 'hrds' configuration option in the IdP

If no 'whr' query parameter is available (this will always be the case for SAML SSO), then the IdP attempts to find out the home realm of the user by querying the "hrds" property of the IdP. This is a Spring Expression Language expression that is evaluated on the Spring WebFlow RequestContext.

For an example of how this can be used, let's look at the tests in Fediz for the SAML SSO IdP when redirecting to a trusted third party IdP. As there is no 'whr' query parameter for SAML SSO, instead we will define a class with a static method that maps application realms to home realms. The application realm is available in the IdP, as the SAML SSO AuthnRequest is already parsed at this point (it corresponds to the "Issuer" of the AuthnRequest). So we can specify the hrds configuration options in the IdP as follows:
  • <property name="hrds" value="T(org.apache.cxf.fediz.integrationtests.RealmMapper).realms()                                   .get(getFlowScope().get('saml_authn_request').issuer)" />
3) Via a form

If no 'whr' query parameter is available, and no 'hrds' configuration option is specified, then the IdP will display a form where the user can select the home realm. The IdP only does this if the configuration option "provideIdpList" is set to true. If it is set to false, then the current IdP is assumed to be the home realm IdP, unless the configuration option "useCurrentIdp" is also set to "false", in which case an error is displayed. The user can select the home realm in the form corresponding to the known trusted IdP realms of this IdP:



Support for IdP-initiated SAML SSO in Apache CXF

Colm O hEigeartaigh - Fri, 11/04/2016 - 18:26
Previous blog posts have covered how to secure your JAX-RS web applications in Apache CXF using SAML SSO. Since the 3.1.8 release, Apache CXF also supports IdP-initiated SAML SSO. The typical use-case for SAML SSO involves the browser invoking on a JAX-RS application, and then being redirected to an IdP for authentication, which subsequently redirects the browser back to the application. However, sometimes a user will log on first to the IdP and then want to invoke on a web application. In this post we will show how to configure SAML SSO for a CXF-based web application to support the IdP-initiated flow, by demonstrating an interop test-case with Okta.

1) Configuring a SAML application in Okta

The first step is to create an account at Okta and configure a SAML application. This process is mapped out at the following link. Follow the steps listed on this page with the following additional changes:
  • Specify the following for the Single Sign On URL and audience URI: http://localhost:8080/fedizdoubleit/racs/sso
  • Specify the following for the default RelayState: http://localhost:8080/fedizdoubleit/app1/services/25
  • Add an additional attribute with name "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" and value "Manager".
The RequestAssertionConsumerService will process the SAML Response from Okta. However, it doesn't know where to subsequently send the browser. Therefore, we configure the RelayState parameter to encode the URL of the end application. In addition, our test application requires the user to have a specific role to invoke on it, hence we add an attribute whose name is the role claim URI with the value "Manager".

When the application is configured, you will see an option to "View Setup Instructions". Open this link in a new tab and set it aside for the moment - it contains information required when setting up the web application. Now click on the "People" tab and assign the application to the username that you have created at Okta.

2) Setting up the SAML SSO-enabled web application

We will use a trivial "double it" web application which I wrote previously to demonstrate the SAML SSO capabilities of Apache CXF Fediz. The web application is available here. Build the web application and deploy it in Apache Tomcat. You will need to edit 'webapps/fedizdoubleit/WEB-INF/cxf-service.xml'.

a) SamlRedirectBindingFilter configuration changes

First let's look at the changes which are required to the 'SamlRedirectBindingFilter':
  • Remove "idpServiceAddress" and "assertionConsumerServiceAddress". These aren't required as we are only supporting the IdP-initiated flow.
  • Also remove the "signRequest", "signaturePropertiesFile", "callbackHandler", "signatureUsername" and "issuerId" properties.
  • Add <property name="addWebAppContext" value="false"/>
  • Add <property name="supportUnsolicited" value="true"/>

b) RequestAssertionConsumerService (RACS) configuration changes

Now add the following properties to the "RequestAssertionConsumerService":
  • <property name="supportUnsolicited" value="true"/>
  • <property name="idpServiceAddress" value="..."/>
  • <property name="issuerId" value="http://localhost:8080/fedizdoubleit/racs/sso"/>
  • <property name="parseApplicationURLFromRelayState" value="true"/>
Paste in the value for "idpServiceAddress" from the "Identity Provider Single Sign-On URL" given in the "View Setup Instructions" page referenced above.

c) Adding Okta cert into the RACS truststore

As things stand, the SAML Response from Okta to the RequestAssertionConsumerService will fail, as the RACS will not trust the certificate Okta uses to sign the SAML Response. Therefore we need to insert the Okta cert into the truststore of the RACS. Copy the "X.509 Certificate" value from the "View Setup Instructions" page referenced earlier. Create a file called 'webapps/fedizdoubleit/WEB-INF/classes/okta.cert' and paste the certificate contents into this file. Import it into the truststore via:
  • keytool -keystore stsrealm_a.jks -storepass storepass -importcert -file okta.cert
At this point we should be all done. Click on the box for the application you have created in Okta. You should be redirected to the CXF RACS, which validates the SAML Response, and in turn redirects to the application.



Client Credentials grant support in the Apache CXF Fediz OIDC service

Colm O hEigeartaigh - Wed, 11/02/2016 - 18:59
Apache CXF Fediz ships with a powerful and flexible OpenID Connect (OIDC) service that can be used to implement SSO for your organisation. All of the core OIDC flows are supported - Authorization Code flow, Implicit and Hybrid flows. As OIDC is just an identity layer over OAuth 2.0, it's possible to use Fediz purely as an OAuth 2.0 service as well, and all of the authorization grants defined in the spec are also fully supported. In this post we will look at support for one of these authorization grants in Fediz 1.3.1 - the client credentials grant.

1) The OAuth 2.0 client credentials grant

The client credentials grant is used when the client is requesting access to a resource that is owned or controlled by that client. There is no end user in this scenario, unlike, say, the authorization code flow or implicit flow. The client simply calls the token endpoint of the authorization service using "client_credentials" for the "grant_type" parameter. In addition, the client must authenticate (e.g. by supplying client_id and client_secret parameters). The authorization service authenticates the client and then returns an access token.
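As a sketch, such a request can be made with curl along the following lines (the client id/secret and the token endpoint URL are placeholders for your own deployment, and '-k' is only there to skip certificate validation against a test certificate):
  • curl -k -u client_id:client_secret -d "grant_type=client_credentials" https://localhost:8443/fediz-oidc/oauth2/token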

2) Supporting the client credentials grant in Fediz OIDC

It's easy to support the client credentials grant in the Fediz OIDC service.

a) Add the ClientCredentialsGrantHandler

Firstly, the ClientCredentialsGrantHandler must be added to the list of grant handlers supported by the token service as follows:
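A sketch of what this involves (the bean ids are assumptions; in the Fediz OIDC Spring configuration the AccessTokenService bean already exists, so the change amounts to defining the handler and adding it to the service's "grantHandlers" list):

  <bean id="clientCredsHandler"
        class="org.apache.cxf.rs.security.oauth2.grants.clientcred.ClientCredentialsGrantHandler">
      <property name="dataProvider" ref="oauthProvider"/>
  </bean>

  <bean id="accessTokenService" class="org.apache.cxf.rs.security.oauth2.services.AccessTokenService">
      <property name="dataProvider" ref="oauthProvider"/>
      <property name="grantHandlers">
          <list>
              <ref bean="clientCredsHandler"/>
          </list>
      </property>
  </bean>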

b) Add a way of authenticating the client

The next step is to add a way of authenticating the client credentials. Fediz uses JAAS to make it easy for the deployer to plug in different JAAS LoginModules if required. To configure JAAS, you must specify the name of the JAAS LoginModule in the configuration of the OAuthDataProviderImpl:
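As a sketch only (the package name and the property used to reference the JAAS context are assumptions about the Fediz OAuthDataProviderImpl), the data provider configuration is along these lines:

  <bean id="oauthProvider" class="org.apache.cxf.fediz.service.oidc.OAuthDataProviderImpl">
      <!-- name of the JAAS context (login entry) used to authenticate clients; an assumption -->
      <property name="contextName" value="sts"/>
      <!-- other properties as in the default Fediz OIDC configuration -->
  </bean>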

c) Example JAAS configuration

For the normal OIDC flows, the Fediz OIDC uses a WS-Federation filter to redirect the browser to the Fediz IdP, where the end user is then ultimately authenticated by the STS that bundles with Fediz. Therefore it seems like a natural fit to re-use the STS to authenticate the client in the Fediz OIDC. Follow steps (a) and (b) above. Start the Fediz STS, but before starting the OIDC service, specify the "java.security.auth.login.config" system property to point to the following JAAS configuration file:
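A heavily hedged sketch of such a file (the context name, option key and STS WSDL path are assumptions for a default Fediz install):

  sts {
      org.apache.cxf.ws.security.trust.STSLoginModule required
      sts.wsdl.location="https://localhost:${idp.https.port}/fediz-idp-sts/REALMA/STSServiceTransportUT?wsdl";
  };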

You must substitute the correct port for "${idp.https.port}". The STSLoginModule takes the given username and password supplied by the client and uses them to authenticate to the STS.

Switching authentication mechanisms in the Apache CXF Fediz STS

Colm O hEigeartaigh - Wed, 10/26/2016 - 17:56
Apache CXF Fediz ships with an Identity Provider (IdP) that can authenticate users via either the WS-Federation or SAML SSO protocols. The IdP delegates user authentication to a Security Token Service (STS) web application using the WS-Trust protocol. The STS implementation in Fediz ships with some sample user data for use in the tests. For a real-world scenario, deployers will have to swap the sample data out for an identity backend (such as Active Directory or LDAP). This post will explain how this can be done, with a particular focus on some recent changes to the STS web application in Fediz to make the process easier.

1) The default STS that ships with Fediz

First let's explain a bit about how the STS is configured by default in Fediz to cater for the testcases.

a) Endpoints and user authentication

The STS must define two distinct sets of endpoints to work with the IdP. Firstly, the STS must be able to authenticate the user credentials that are presented to the IdP. Typically this is a Username + Password combination. However, X.509 client certificates and Kerberos tokens are also supported. Note that by default, the STS authenticates usernames and passwords via a simple file local to the STS.

After successful user authentication, a SAML token is returned to the IdP. The IdP then gets another SAML token "on behalf of" the authenticated user for a given realm, authenticating using its own credentials. So we need a second endpoint in the STS to issue this token. By default, the STS requires that the IdP authenticate using TLS client authentication. The security policies are defined in the WSDLs available here.

b) Realms

The Fediz IdP and STS support the concept of authenticating users in different realms. By default, the IdP is configured to authenticate users in "Realm A". This corresponds to a specific endpoint address in the STS. The STS also defines user authentication endpoints in "Realm B" for use in test scenarios involving identity federation between two IdPs.

In addition, the STS defines some configuration to map user identities between realms. In other words, how a principal in one realm should map to another realm, and how the claims in one realm map to those in another realm.

2) Changing the STS in Fediz 1.3.2 to use LDAP

From the forthcoming 1.3.2 release onwards, the Fediz STS web application is a bit easier to customize for your specific deployment needs. Let's see how easy it is to switch the STS to use LDAP.

a) Deploy the vanilla IdP and STS to Apache Tomcat

To start with, we will deploy the STS and IdP containing the sample data to Apache Tomcat.
  • Create a new directory: ${catalina.home}/lib/fediz
  • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
  • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb-2.3.4.jar) to ${catalina.home}/lib 
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks from ${fediz.home}/examples/sampleKeys to ${catalina.home}
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat and enter the following URL in a web browser, authenticating with "alice/ecila" in "realm A". You should be directed to the URL for the default service application (a 404, as we have not configured it):

https://localhost:8443/fediz-idp/federation?wa=wsignin1.0&wreply=https%3A%2F%2Flocalhost%3A8443%2Ffedizhelloworld%2Fsecure%2Ffedservlet&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld

b) Change the STS authentication mechanism to Active Directory

To simulate an Active Directory instance for demonstration purposes, we will modify some LDAP system tests in the Fediz source that use Apache Directory. Check out the Fediz source and build it via "mvn install -DskipTests". Now go into "systests/ldap" and edit the LDAPTest. "@Ignore" the existing test + uncomment the test which just "sleeps". Also change the "@CreateTransport" annotation to start the LDAP port on "12345" instead of a random port.

Next we'll configure the Fediz STS to use this LDAP instance for authentication. Edit 'webapps/fediz-idp-sts/WEB-INF/cxf-transport.xml'. Change "endpoints/file.xml" to "endpoints/ldap.xml". Next edit 'webapps/fediz-idp-sts/WEB-INF/endpoints/ldap.xml" and just change the port from "389" to "12345".

Now we need to configure a JAAS configuration file, which the STS uses to validate the received Username + Password to LDAP. Copy this file to the "conf" directory of Tomcat, substituting "12345" for "portno". Now restart Tomcat, this time specifying the location of the JAAS configuration file, e.g.:
  • export JAVA_OPTS="-Xmx2048M -Djava.security.auth.login.config=/opt/fediz-apache-tomcat-8.0.37/conf/ldap.jaas"
This is all the changes that are required to swap over to use an LDAP instance for authentication.

    Securing an Apache Kafka broker - part IV

    Colm O hEigeartaigh - Wed, 09/28/2016 - 13:24
    This is the fourth in a series of articles on securing an Apache Kafka broker. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. The third post showed how Apache Ranger could be used instead to create and enforce authorization policies for Apache Kafka. In this post we will look at an alternative authorization solution called Apache Sentry.

    1) Build the Apache Sentry distribution

    First we will build and install the Apache Sentry distribution. Download Apache Sentry (1.7.0 was used for the purposes of this tutorial). Verify that the signature is valid and that the message digests match. Now extract and build the source and copy the distribution to a location where you wish to install it:
    • tar zxvf apache-sentry-1.7.0-src.tar.gz
    • cd apache-sentry-1.7.0-src
    • mvn clean install -DskipTests
    • cp -r sentry-dist/target/apache-sentry-1.7.0-bin ${sentry.home}
    Apache Sentry has an authorization plugin for Apache Kafka, amongst other big data projects. In addition it comes with an RPC service which stores authorization privileges in a database. For the purposes of this tutorial we will just configure the authorization privileges in a configuration file locally to the broker. Therefore we don't need to do any further configuration to the distribution at this point.

    2) Configure authorization in the broker

    Configure Apache Kafka as per the first tutorial. To enable authorization using Apache Sentry we also need to follow these steps. First edit 'config/server.properties' and add:
    • authorizer.class.name=org.apache.sentry.kafka.authorizer.SentryKafkaAuthorizer
    • sentry.kafka.site.url=file:./config/sentry-site.xml
    Next copy the jars from the "lib" directory of the Sentry distribution to the Kafka "libs" directory. Then create a new file in the config directory called "sentry-site.xml" with the following content:
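    A sketch of the sort of configuration involved (the property names are assumptions and should be checked against the Sentry Kafka binding documentation for the version you are using):

        <configuration>
            <property>
                <name>sentry.kafka.provider</name>
                <value>org.apache.sentry.provider.file.LocalGroupResourceAuthorizationProvider</value>
            </property>
            <property>
                <name>sentry.kafka.provider.resource</name>
                <value>./config/sentry.ini</value>
            </property>
        </configuration>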

    This is the configuration file for the Sentry plugin for Kafka. It essentially says that the authorization privileges are stored in a local file, and that the groups for authenticated users should be retrieved from this file. Finally, we need to specify the authorization privileges. Create a new file in the config directory called "sentry.ini" with the following content:
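    As a sketch (the group and role names are arbitrary, the use of the full SSL DNs as user names is an assumption, and the privilege strings follow the Sentry "host->resource->action" format), sentry.ini might look like:

        [users]
        CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE = admin_group
        CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE = service_group
        CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE = client_group

        [groups]
        admin_group = admin_role
        service_group = service_role
        client_group = client_role

        [roles]
        admin_role = host=*->cluster=kafka-cluster->action=all
        service_role = host=*->topic=test->action=all
        client_role = host=*->topic=test->action=read, host=*->topic=test->action=describe, host=*->consumergroup=*->action=read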

    This configuration file contains three separate sections. The "[users]" section maps the authenticated principals to local groups. The "[groups]" section maps the groups to roles, and the "[roles]" section lists the actual privileges. Now we can start the broker as in the first tutorial:
    • bin/kafka-server-start.sh config/server.properties 
    3) Test authorization

    Now let's test the authorization logic. Start the producer:
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to check that the producer is authorized correctly. Now start the consumer:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    If everything is configured correctly then it should work as in the first tutorial. 

    Securing an Apache Kafka broker - part III

    Colm O hEigeartaigh - Mon, 09/26/2016 - 18:13
    This is the third in a series of blog posts about securing Apache Kafka. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. However, this approach is not recommended for non-trivial deployments. In this post we will show how we can create flexible authorization policies for Apache Kafka using the Apache Ranger admin UI. Then we will show how to enforce these policies at the broker.

    1) Install the Apache Ranger Kafka plugin

    The first step is to download Apache Ranger (0.6.1-incubating was used in this post). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
    • tar zxvf apache-ranger-incubating-0.6.1.tar.gz
    • cd apache-ranger-incubating-0.6.1
    • mvn clean package assembly:assembly -DskipTests
    • tar zxvf target/ranger-0.6.1-kafka-plugin.tar.gz
    • mv ranger-0.6.1-kafka-plugin ${ranger.kafka.home}
    Now go to ${ranger.kafka.home} and edit "install.properties". You need to specify the following properties:
    • COMPONENT_INSTALL_DIR_NAME: The location of your Kafka installation
    • POLICY_MGR_URL: Set this to "http://localhost:6080"
    • REPOSITORY_NAME: Set this to "KafkaTest".
    Save "install.properties" and install the plugin as root via "sudo ./enable-kafka-plugin.sh". The Apache Ranger Kafka plugin should now be successfully installed (although not yet configured properly) in the broker.

    2) Configure authorization in the broker

    Configure Apache Kafka as per the first tutorial. There are a number of steps we need to follow to configure the Ranger Kafka plugin before it is operational:
    • Edit 'config/server.properties' and add the following: authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
    • Add the Kafka "config" directory to the classpath, so that we can pick up the Ranger configuration files: export CLASSPATH=$KAFKA_HOME/config
    • Copy the Apache Commons Logging jar into $KAFKA_HOME/libs. 
    • The ranger plugin will try to store policies by default in "/etc/ranger/KafkaTest/policycache". As we installed the plugin as "root" make sure that this directory is accessible to the user that is running the broker.
    Now we can start the broker as in the first tutorial:
    • bin/kafka-server-start.sh config/server.properties
    3) Configure authorization policies in the Apache Ranger Admin UI 

    At this point we should have configured the broker so that the Apache Ranger plugin is used to communicate with the Apache Ranger admin service to download authorization policies. So we need to install and configure the Apache Ranger admin service. Please refer to this blog post for how to do this. Assuming the admin service is already installed, start it via "sudo ranger-admin start". Open a browser and log on to "localhost:6080" with the credentials "admin/admin".

    First let's add some new users that match the SSL principals we have created in the first tutorial. Click on "Settings" and "Users/Groups". Add new users for the principals:
    • CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE
    • CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE
    • CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE
    Now go back to the Service Manager screen and click on the "+" button next to "KAFKA". Create a new service called "KafkaTest". Click "Test Connection" to make sure it can communicate with the Apache Kafka broker. Then click "add" to save the new service. Click on the new service. There should be an "admin" policy already created. Edit the policy and give the "broker" principal above the rights to perform any operation, and save the policy. Now create a new policy called "TestPolicy" for the topic "test". Give the service principal the rights to "Consume, Describe and Publish". Give the client principal the rights to "Consume and Describe" only.


    4) Test authorization

    Now let's test the authorization logic. Bear in mind that by default the Kafka plugin reloads policies from the admin service every 30 seconds, so you may need to wait that long or restart the broker to download the newly created policies. Start the producer:
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to check that the producer is authorized correctly. Now start the consumer:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    If everything is configured correctly then it should work as in the first tutorial.

    Integrating Apache Camel with Apache Syncope - part III

    Colm O hEigeartaigh - Fri, 09/23/2016 - 18:00
    This is the third in a series of blog posts about integrating Apache Camel with Apache Syncope. The first post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0, and gave an example of how we can modify the default behaviour to send an email to an administrator when a user was created. The second post showed how an administrator can keep track of user password changes for auditing purposes. In this post we will show how to integrate Syncope with Apache ActiveMQ using Camel.

    1) The use-case

    The use-case is that Apache Syncope is used for Identity Management in a large organisation. When users are created we would like to be able to gather certain information about the new users and process it dynamically in some way. In particular, we are interested in the age of the new users and the country in which they are based. Perhaps at the reception desk of the company HQ we display a map with the number of employees in each country highlighted. To decouple whatever applications are processing the data from Syncope itself, we will use a messaging solution, namely Apache ActiveMQ. When new users are created, we will modify the default Camel route to send a message to two topics corresponding to the age and location of the user.

    2) Download and configure Apache ActiveMQ

    The first step is to download Apache ActiveMQ (currently 5.14.0). Unzip it and start it via:
    • bin/activemq start 
    Now go to the web interface of ActiveMQ - 'http://localhost:8161/admin/', logging in with credentials 'admin/admin'. Click on the "Queues" tab and create two new queues called 'age' and 'country'.

    3) Download and configure Apache Syncope

    Download and extract the standalone version of Apache Syncope 2.0.0. Before we start it, we will copy the jars we need to get Camel working with ActiveMQ in Syncope. Copy the following jars into the "webapps/syncope/WEB-INF/lib" directory of the Apache Tomcat instance bundled with Syncope:
    • From $ACTIVEMQ_HOME/lib: activemq-client-5.14.0.jar + activemq-spring-5.14.0.jar + hawtbuf-1.11.jar + geronimo-j2ee-management_1.1_spec-1.0.1.jar
    • From $ACTIVEMQ_HOME/lib/camel: activemq-camel-5.14.0.jar + camel-jms-2.16.3.jar
    • From $ACTIVEMQ_HOME/lib/optional: activemq-pool-5.14.0.jar + activemq-jms-pool-5.14.0.jar + spring-jms-4.1.9.RELEASE.jar
    Next we need to create a Camel spring configuration file containing a bean with the address of the broker. Add the following file to the Tomcat lib directory (called "camelRoutesContext.xml"):
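    A minimal sketch of such a file (assuming the default ActiveMQ OpenWire port of 61616 on localhost) is:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans.xsd">

            <!-- Camel component pointing at the local ActiveMQ broker -->
            <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
                <property name="brokerURL" value="tcp://localhost:61616"/>
            </bean>

        </beans>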

    Now we can start the embedded Apache Tomcat instance. Open a browser and navigate to 'http://localhost:9080/syncope-console', logging in with 'admin/password'. The first thing we need to do is to configure user attributes for "age" and "country". Go to "Configuration/Types" in the left-hand menu, and click on the "Schemas" tab. Create two plain (mandatory) schema types: "age" of type "Long" and "country" of type "String". Now click on the "AnyTypeClasses" tab and create a new AnyTypeClass selecting the two plain schema types we just created. Finally, click on the "AnyType" tab and edit the "USER". Add the new AnyTypeClass you created and hit "save".

    Now we will modify the Camel route invoked when a user is created. Click on "Extensions/Camel Routes" in the left-hand configuration menu. Edit the "createUser" route and add the following above the "bean method" part:
    • <setBody><simple>${body.plainAttrMap[age].values[0]}</simple></setBody>
    • <to uri="activemq:age"/>
    • <setBody><simple>${exchangeProperty.actual.plainAttrMap[country].values[0]}</simple></setBody>
    • <to uri="activemq:country"/>
    This should be fairly straightforward to follow. We are setting the message body to be the age of the newly created user, and dispatching that message to the "age" queue. We then follow the same process for the "country". We also need to change "body" in the "bean method" line to "exchangeProperty.actual", because we have redefined what the body is in the Camel routes above.


    Now let's create some new users. Click on the "Realms" menu and select the "USER" tab. Create new users "alice" in country "usa" of age "25" and "bob" in country "canada" of age "27". Now let's look at the ActiveMQ console again. We should see two new messages in each of the queues as follows, ready to be consumed:




    Using SHA-512 with Apache CXF SOAP web services

    Colm O hEigeartaigh - Thu, 09/22/2016 - 15:49
    XML Signature is used extensively in SOAP web services to guarantee message integrity, non-repudiation, as well as client authentication via PKI. A digest algorithm crops up in XML Signature both as part of the Signature Method (rsa-sha1 for example), as well as in the digests of the data that are signed. As recent weaknesses have emerged with the use of SHA-1, it makes sense to use the SHA-2 digest algorithm instead. In this post we will look at how to configure Apache CXF to use SHA-512 (i.e. SHA-2 with 512 bits) as the digest algorithm.

    1) Configuring the STS to use SHA-512

    Apache CXF ships with a SecurityTokenService (STS) that is widely deployed. The principal function of the STS is to issue signed SAML tokens, although it supports a wide range of other functionalities and token types. The STS (for more recent versions of CXF) uses RSA-SHA256 for the signature method when signing SAML tokens, and uses SHA-256 for the digest algorithm. In this section we'll look at how to configure the STS to use SHA-512 instead.

    You can specify signature and digest algorithms via the SignatureProperties class in the STS. To specify SHA-512 for the signature and digest algorithms for generated tokens in the STS, add the following bean to your Spring configuration:
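    A sketch of such a bean (the property names on the SignatureProperties class are assumptions; the algorithm URIs are the standard XML Signature identifiers for RSA-SHA512 and SHA-512):

        <bean id="sigProps" class="org.apache.cxf.sts.SignatureProperties">
            <property name="signatureAlgorithm" value="http://www.w3.org/2001/04/xmldsig-more#rsa-sha512"/>
            <property name="digestAlgorithm" value="http://www.w3.org/2001/04/xmlenc#sha512"/>
        </bean>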

    Next you need to reference this bean in the StaticSTSProperties bean for your STS:
    • <property name="signatureProperties" ref="sigProps" />
    2) Configuring WS-SecurityPolicy to use SHA-512

    Service requests are typically secured at a message level using WS-SecurityPolicy. It is possible to specify the algorithms used to secure the request, as well as the key sizes, by configuring an AlgorithmSuite policy. Unfortunately the last WS-SecurityPolicy spec is quite dated at this point, and lacks support for more modern algorithms as part of the default AlgorithmSuite policies that are defined in the spec. The spec only supports using RSA-SHA1 for signature, and only SHA-1 and SHA-256 for digest algorithms.

    Luckily, Apache CXF users can avail of a few different ways to use stronger algorithms with web service requests. In CXF there is a JAX-WS property called 'ws-security.asymmetric.signature.algorithm' for AsymmetricBinding policies (similarly 'ws-security.symmetric.signature.algorithm' for SymmetricBinding policies). This overrides the default signature algorithm of the policy. So for example, to switch to use RSA-SHA512 instead of RSA-SHA1 simply set the following property on your client/endpoint:
    • <entry key="ws-security.asymmetric.signature.algorithm" value="http://www.w3.org/2001/04/xmldsig-more#rsa-sha512"/>
    There is no corresponding property to explicitly configure the digest algorithm, as the default AlgorithmSuite policies already support SHA-256 (although one could be added if there was enough demand). If you really need to support SHA-512 here, an option is to use a custom AlgorithmSuite (which will obviously not be portable), or to override one of the existing ones.

    It's pretty straightforward to do this. First you need to create an AlgorithmSuiteLoader implementation to handle the policy. Here is one used in the tests that creates a custom AlgorithmSuite policy called 'Basic128RsaSha512', which extends the 'Basic128' policy to use RSA-SHA512 for the signature method, and SHA-512 for the digest method. This AlgorithmSuiteLoader can be referenced in Spring via:


    The policy in question looks like:
    • <cxf:Basic128RsaSha512 xmlns:cxf="http://cxf.apache.org/custom/security-policy"/>

    Invoking on the Talend ESB STS using SoapUI

    Colm O hEigeartaigh - Wed, 09/21/2016 - 17:11
    Talend ESB ships with a powerful SecurityTokenService (STS) based on the STS that ships with Apache CXF. The Talend Open Studio for ESB contains UI support for creating web service clients that use the STS to obtain SAML tokens for authentication (and also authorization via roles embedded in the tokens). However, it is sometimes useful to be able to obtain tokens with a third party client. In this post we will show how SoapUI can be used to obtain SAML Tokens from the Talend ESB STS.

    1) Download and run Talend Open Studio for ESB

    The first step is to download Talend Open Studio for ESB (the current version at the time of writing this post is 6.2.1). Unzip it and start the container via:
    • Runtime_ESBSE/container/bin/trun
    The next step is to start the STS itself:
    • tesb:start-sts
    2) Download and run SoapUI

    Download SoapUI and run the installation script. Create a new SOAP Project called "STS" using the WSDL:
    • http://localhost:8040/services/SecurityTokenService/UT?wsdl
    The WSDL of the STS defines a number of different services. The one we are interested in is the "UT_Binding", which requires a WS-Security UsernameToken to authenticate the client. Click on "UT_Binding/Issue/Request 1" in the left-hand menu to see a sample request for the service. Now we need to do some editing of the request. Remove the 'Context="?"' attribute from RequestSecurityToken. Then paste the following into the Body of the RequestSecurityToken:
    • <t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
    • <t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
    • <t:RequestType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</t:RequestType>
    Now we need to configure a username and password to use when authenticating the client request. In the "Request Properties" box in the lower left corner, add "tesb" for the "username" and "password" properties. Now right click in the request pane, and select "Add WSS Username Token" (Password Text). Now send the request and you should receive a SAML Token in response.

    Bear in mind that if you wish to re-use the SAML Token retrieved from the STS in a subsequent request, you must copy it from the "Raw" tab and not the "XML" tab of the response. The latter adds in whitespace that breaks the signature on the token. Another thing to watch out for is that the STS maintains a cache of the Username Token nonce values, so you will need to recreate the UsernameToken each time you want to get a new token.

    3) Requesting a "PublicKey" KeyType

    The example above uses a "Bearer" KeyType. Another common use-case, as is the case with the security-enabled services developed using the Talend Studio, is when the token must have the PublicKey/Certificate of the client embedded in it. To request such a token from the STS, change the "Bearer" KeyType as above to "PublicKey". However, we also need to present a certificate to the STS to include in the token.

    As we are just using the test credentials used by the Talend STS, go to the Runtime_ESBSE/container/etc/keystores and extract the client key with:
    • keytool -exportcert -rfc -keystore clientstore.jks -alias myclientkey -file client.cer -storepass cspass
    Edit client.cer + remove the first and last lines (that contain BEGIN/END CERTIFICATE). Now go back to SOAP-UI and add the following to the RequestSecurityToken Body:
    • <t:UseKey xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512"><ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:X509Data><ds:X509Certificate>...</ds:X509Certificate></ds:X509Data></ds:KeyInfo></t:UseKey>
    where the content of the X.509 Certificate is the content in client.cer. This time, the token issued by the STS will contain the public key of the client embedded in the SAML Subject.


    Securing an Apache Kafka broker - part II

    Colm O hEigeartaigh - Mon, 09/19/2016 - 15:49
    In the previous post, we looked at how to configure an Apache Kafka broker to require SSL client authentication. In this post we will add authorization to the example, making sure that only authorized producers can send messages to the broker. In addition, we will show how to enforce authorization rules per-topic for consumers.

    1) Configure authorization in the broker

    Configure Apache Kafka as per the previous tutorial. To enforce some custom authorization rules in Kafka, we will need to implement the Kafka Authorizer interface. This interface contains an "authorize" method, which supplies a Session Object, where you can obtain the current principal, as well as the Operation and Resource upon which to enforce an authorization decision.

    In terms of the example detailed in the previous post, we created broker, service (producer) and client (consumer) principals. We want to enforce authorization decisions as follows:
    • Let the broker principal do anything
    • Let the producer principal read/write on all topics
    • Let the consumer principal read/describe only on topics starting with "test".
    There is a sample Authorizer implementation available in some Kafka unit test I wrote in github that can be used in this example - CustomAuthorizer:

    Next we need to package up the CustomAuthorizer in a jar so that it can be used in the broker. You can do this by checking out the testcases github repo, and invoking "mvn clean package jar:test-jar -DskipTests" in the "apache/bigdata/kafka" directory. Now copy the resulting test jar in "target" to the "libs" directory in your Kafka installation. Finally, edit the "config/server.properties" file and add the following configuration item:
    • authorizer.class.name=org.apache.coheigea.bigdata.kafka.CustomAuthorizer
    2) Test authorization

    Now let's test the authorization logic. Restart the broker and the producer:
    • bin/kafka-server-start.sh config/server.properties
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to check that the producer is authorized correctly. Now start the consumer:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    If everything is configured correctly then it should work as in the first tutorial. Now we will create a new topic called "messages":
    • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic messages
    Restart the producer to send messages to "messages" instead of "test". This should work correctly. Now try to consume from "messages" instead of "test". This should result in an authorization failure, as the "client" principal can only consume from the "test" topic according to the authorization rules.

    Securing an Apache Kafka broker - part I

    Colm O hEigeartaigh - Fri, 09/16/2016 - 18:19
    Apache Kafka is a messaging system for the age of big data, with a strong focus on reliability, scalability and message throughput. This is the first part of a short series of posts on how to secure an Apache Kafka broker. In this post, we will focus on authenticating message producers and consumers using SSL. Future posts will look at how to authorize message producers and consumers.

    1) Create SSL keys

    As we will be securing the broker using SSL client authentication, the first step is to create some keys for testing purposes. Download the OpenSSL ca.config file used by the WSS4J project. Change the "certificate" value to "ca.pem", and the "private_key" value to "cakey.pem". You will also need to create a directory called "ca.db.certs", and make an empty file called "ca.db.index". Now create a new CA key and cert via:
    • openssl req -x509 -newkey rsa:1024 -keyout cakey.pem -out ca.pem -config ca.config -days 3650
    Just accept the default options. Now we need to convert the CA cert to DER format and import it into a JKS truststore:
    • openssl x509 -outform DER -in ca.pem -out ca.crt
    • keytool -import -file ca.crt -alias ca -keystore truststore.jks -storepass security
    Now we will create the client key, sign it with the CA key, and put the signed client cert and CA cert into a keystore:
    • keytool -genkey -validity 3650 -alias myclientkey -keyalg RSA -keystore clientstore.jks -dname "CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE" -storepass cspass -keypass ckpass
    • keytool -certreq -alias myclientkey -keystore clientstore.jks -file myclientkey.cer -storepass cspass -keypass ckpass
    • echo 20 > ca.db.serial
    • openssl ca -config ca.config -policy policy_anything -days 3650 -out myclientkey.pem -infiles myclientkey.cer
    • openssl x509 -outform DER -in myclientkey.pem -out myclientkey.crt
    • keytool -import -file ca.crt -alias ca -keystore clientstore.jks -storepass cspass
    • keytool -import -file myclientkey.crt -alias myclientkey -keystore clientstore.jks -storepass cspass -keypass ckpass
    Now follow the same template to create a "service" key in servicestore.jks, with store password "sspass" and key password "skpass" (a sketch of the corresponding commands is shown below). In addition, we will create a "broker" key in brokerstore.jks, with store password "bspass" and key password "bkpass".
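    For example, the "service" key could be created with the following commands (the alias "myservicekey" and the DN "CN=Service,..." are assumptions - any values will do as long as you can identify the principal later; the ca.db.serial file already exists from the client steps above):
    • keytool -genkey -validity 3650 -alias myservicekey -keyalg RSA -keystore servicestore.jks -dname "CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE" -storepass sspass -keypass skpass
    • keytool -certreq -alias myservicekey -keystore servicestore.jks -file myservicekey.cer -storepass sspass -keypass skpass
    • openssl ca -config ca.config -policy policy_anything -days 3650 -out myservicekey.pem -infiles myservicekey.cer
    • openssl x509 -outform DER -in myservicekey.pem -out myservicekey.crt
    • keytool -import -file ca.crt -alias ca -keystore servicestore.jks -storepass sspass
    • keytool -import -file myservicekey.crt -alias myservicekey -keystore servicestore.jks -storepass sspass -keypass skpass
    The "broker" key in brokerstore.jks can be created in exactly the same way, substituting the broker alias, DN and passwords.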

    2) Configure the broker

    Download Apache Kafka and extract it (0.10.0.1 was used for the purposes of this tutorial). Copy the keys created in section "1" into $KAFKA_HOME/config. Start Zookeeper with:
    • bin/zookeeper-server-start.sh config/zookeeper.properties
    Now edit 'config/server.properties' and add the following:
    • ssl.keystore.location=./config/brokerstore.jks
    • ssl.keystore.password=bspass
    • ssl.key.password=bkpass
    • ssl.truststore.location=./config/truststore.jks
    • ssl.truststore.password=security
    • security.inter.broker.protocol=SSL
    • ssl.client.auth=required
    • listeners=SSL://localhost:9092
    Then start the broker and create a "test" topic with:
    • bin/kafka-server-start.sh config/server.properties
    • bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
    3) Configure the producer

    Now we will configure the message producer. Edit 'config/producer.properties' and add the following:
    • security.protocol=SSL
    • ssl.keystore.location=./config/servicestore.jks
    • ssl.keystore.password=sspass
    • ssl.key.password=skpass
    • ssl.truststore.location=./config/truststore.jks
    • ssl.truststore.password=security
    and start the producer with:
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to the topic to make sure that everything is working ok.

    4) Configure the consumer

    Finally we will configure the message consumer. Edit 'config/consumer.properties' and add the following:
    • security.protocol=SSL
    • ssl.keystore.location=./config/clientstore.jks
    • ssl.keystore.password=cspass
    • ssl.key.password=ckpass
    • ssl.truststore.location=./config/truststore.jks
    • ssl.truststore.password=security
    and start the consumer with:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    The messages sent by the producer should appear in the console window of the consumer. So there it is, we have configured the Kafka broker to require SSL client authentication. In the next post we will look at adding simple authorization support to this scenario.

    Integrating Apache Camel with Apache Syncope - part II

    Colm O hEigeartaigh - Fri, 09/09/2016 - 16:58
    A recent blog post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0. It also covered a simple use-case for the new functionality, where the "createUser" Camel route is modified to send an email to an administrator when a User is created, with some details about the created User in the email. In this post, we will look at a different use-case, where the Camel provisioning manager is used to extend the functionality offered by Syncope.

    1) The use-case

    Apache Syncope stores users in internal storage in a table called "SyncopeUser". This table contains information such as the User Id, name, password, creation date, last password change date, etc. In addition, if there is an applicable password policy associated with the User realm, a list of the previous passwords associated with the User is stored in a table called "SyncopeUser_passwordHistory":

    The table stores a list of Syncope User Ids, each with a corresponding password value. The main function of this table is to enforce the password policy: for example, if the policy states that a user can't reuse any of their previous 10 passwords, this table provides the raw information needed to enforce that rule.

    Now, what if the administrator wants a stronger audit trail for User password changes other than what is provided by default in the Syncope internal storage? In particular, the administrator would like a record of when the User changes a password. The "SyncopeUser" table only stores the last password change date. There is no way of seeing when each password stored in the "SyncopeUser_passwordHistory" table was changed. Enter the Camel provisioning manager...

    2) Configure Apache Syncope

    Download and install Apache Syncope (I used the "standalone" download for the purposes of this demo). Start Apache Syncope and log on to the admin console. First we will create a password policy. Click on "Configuration" and then "Policies". Click on the "Password" tab and then select the "+" button to create a new policy. Give a description for the policy and then select a history length of "10".

    Now let's create a new user in the default realm. Click on "realms" and hit the "edit" button. Select the password policy you have just created and click "Finish". This means that all users created in this realm will have the applicable policy applied to them. Now click on the "User" tab and the "+" button to add a new user called "alice". Click on "Password Management" and enter a password for "alice". Now we have a new user created, we want to be able to see when she updates her password from now on.

    Click on "Extensions" and "Camel Routes". Find the "updateUser" route (might not be on the first page) and edit it. Create a new Camel "filter" (as per the screenshot below) just above the "bean method=" line with the following content:
    • <simple>${body.password} != null</simple>
    • <setHeader headerName="CamelFileName"><simple>${body.key}.passwords</simple></setHeader>
    • <setBody><simple>New password '${body.password.value}' changed at time '${date:now:yyyy-MM-dd'T'HH:mm:ss.SSSZ}'\n</simple></setBody>
    • <to uri="file:./?fileExist=Append"/>
    What we are doing here is auditing the password changes for a particular user, by writing out the password and a timestamp to a file associated with that user. Let's examine what each of these statements does in turn. The first statement is the filter condition: the statements that follow are only executed if the password is not null. The password will only be non-null if it is being changed, so if the user just changes some other attribute and not the password, the filter will not be invoked.

    The second statement sets the Camel header "CamelFileName" to the format "<user.id>.passwords". This header is used by the Camel File component as the file name to write out to. The third statement sets the exchange Body (the file content) to show the password value along with a timestamp. Finally, the fourth statement is an invocation of the Camel File component, which appends the exchange Body to the given file name. As we have overridden the message Body in the third statement above, we need to change the ${body} in the subsequent update call to ${exchangeProperty.actual}, which is the saved Body. Click on "save" to save the modified route.


    Now let's update the user "alice" and set a new password a couple of times. There should be a new file in the directory from which you launched Tomcat, containing the audit log of password changes for that particular user. With the power of Apache Camel, we could just as easily audit to a database, to Apache Kafka, and so on.



    Apache CXF Fediz 1.2.3 and 1.3.1 released

    Colm O hEigeartaigh - Thu, 09/08/2016 - 19:45
    Apache CXF Fediz 1.2.3 and 1.3.1 have been released. The 1.3.1 release contains the following significant features/fixes:
    • An update to use Apache CXF 3.1.7 
    • Support for Facebook Login as a Trusted IdP.
    • A fix for SAML SSO redirection on ForceAuthn or token expiry.
    • A bug fix to support multiple realms in the IdP.
    • A fix to enforce that mandatory claims are present in the received token.
    In addition, both 1.2.3 and 1.3.1 contain a fix for a new security advisory - CVE-2016-4464:
    Apache CXF Fediz is a subproject of Apache CXF which implements the WS-Federation Passive Requestor Profile for SSO specification. It provides a number of container based plugins to enable SSO for Relying Party applications. It is possible to configure a list of audience URIs for the plugins, against which the AudienceRestriction values of the received SAML tokens are supposed to be matched. However, this matching does not actually take place.

    This means that a token could be accepted by the application plugin (assuming that the signature is trusted) that is targeted for another service, something that could potentially be exploited by an attacker.

      Integrating Apache Camel with Apache Syncope - part I

      Colm O hEigeartaigh - Wed, 08/31/2016 - 12:29
      Apache Syncope is an open-source Identity Management solution. A key feature of Apache Syncope is the ability to pull Users, Groups and Any Objects from multiple backend resources (such as LDAP, RDBMS, etc.) into Syncope's internal storage, where they can then be assigned roles, pushed to other backend resources, exposed via Syncope's REST API, etc.

      However, what if you wanted to easily perform some custom task as part of the Identity Management process? Wouldn't it be cool to be able to plug in a powerful integration framework such as Apache Camel, so that you could exploit Camel's huge list of messaging components and routing and mediation rules? Well, with Syncope 2.0.0 you can do just that with the new Apache Camel provisioning manager. This is a unique and very powerful selling point of Apache Syncope in my opinion. In this article, we will introduce the new Camel provisioning manager, and show a simple example of how to use it.

      1) The new Apache Camel provisioning manager

      As stated above, a new provisioning manager is available in Apache Syncope 2.0.0 based on Apache Camel. A set of Camel routes are available by default which are invoked when the User, Groups and Any Objects in question are changed in some way. So for example, if a new User is created, then the corresponding Camel route is invoked at the same time. This allows the administrator to plug in custom logic on any of these state changes. The routes can be viewed and edited in the Admin Console by clicking on "Extensions" and then "Camel Routes".

      Each of the Camel routes uses a new "propagate" Camel component available in Syncope 2.0.0. This component encapsulates some common logic involved in using the Syncope PropagationManager to create tasks, and to execute them via the PropagationTaskExecutor. All of the routes invoke this propagate component via something like the following template (a concrete example is shown after the two lists below):
      • <to uri="propagate:<propagateType>?anyTypeKind=<anyTypeKind>&options"/>
      Where propagateType is one of:
      • create
      • update
      • delete
      • provision
      • deprovision
      • status
      • suspend
      • confirmPasswordReset
      and anyTypeKind is one of:
      • USER
      • GROUP
      • ANY
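      For example, a route that handles User creation invokes the component along the lines of the following (a representative instantiation of the template above, not necessarily the literal content of the default route):
      • <to uri="propagate:create?anyTypeKind=USER"/>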
      2) The use-case

      In this post, we will look at a simple use-case of sending an email to an administrator when a User is created, with some details about the created User in the email. Of course, this could be handled by a Notification Task, but we'll discuss some more advanced scenarios in future blog posts. Also note that a video available on the Tirasa blog details more or less the same use-case. For the purposes of the demo, we will set up a mailtrap account where we will receive the emails sent by Camel.

      3) Configure Apache Syncope

      Download and install Apache Syncope (I used the "standalone" download for the purposes of this demo). Before starting Apache Syncope, we need to copy a few jars that are required by Apache Camel to actually send emails. Copy the following jars to $SYNCOPE/webapps/syncope/WEB-INF/lib:
      • http://repo1.maven.org/maven2/org/apache/camel/camel-mail/2.17.3/camel-mail-2.17.3.jar
      • http://repo1.maven.org/maven2/com/sun/mail/javax.mail/1.5.5/javax.mail-1.5.5.jar
      Now start Apache Syncope and log on to the admin console. Click on "Extensions" and then "Camel Routes". As we want to change the default route when users are created, click on the "edit" image for the "createUser" route. Add the following information just above the "bean method=" line:
      • <setHeader headerName="subject"><simple>New user ${body.username} created in realm ${body.realm}</simple></setHeader> 
      • <setBody><simple>User full name: ${body.plainAttrMap[fullname].values[0]}</simple></setBody>
      • <to uri="smtp://mailtrap.io?username=<username>&amp;password=<password>&amp;contentType=text/html&amp;to=dummy@apache-recipient.org"/>
      Let's examine what each of these statements do. The first statement is setting the Camel header "Subject" which corresponds to the Subject of the Email. It simply states that a new user with a given name is created in a given realm. The second statement sets the message Body, which is used as the content of the message by Camel. It just shows the User's full name, extracted from the "fullname" attribute, as an example of how to access User attributes in the route.

      The third statement invokes the Camel SMTP component. You'll need to substitute in the username and password you configured when setting up the mailtrap account. The recipient is configured using the "to" part of the URI. One more change is required to the existing route: as we have overridden the message Body in the second statement above, we need to change the ${body} in the create call to ${exchangeProperty.actual}, which is the saved Body. Click on "save" to save the modified route.


      Before creating a User, we need to add a "fullname" User attribute as the route expects. Go to "Configuration" and "Types", and click on the "Schemas" tab. Click on the "+" button under "PLAIN" and add a new attribute called "fullname". Then click on "AnyTypeClasses", and add the "fullname" attribute to the BaseUser AnyTypeClass.

      Finally, go to the "/" realm and create a new user, specifying a fullname attribute. A new email should be available in the mailtrap account as follows:


      Pulling users and groups from LDAP into Apache Syncope 2.0.0

      Colm O hEigeartaigh - Fri, 08/26/2016 - 17:54
      A previous tutorial showed how to synchronize (pull) users and roles into Apache Syncope 1.2.x from an LDAP backend (Apache Directory). Interacting with an LDAP backend appears to be a common use-case for Apache Syncope users. For this reason, in this tutorial we will cover how to pull users and groups (previously roles) into Apache Syncope 2.0.0 from an LDAP backend via the Admin Console, as it is a little different from the previous 1.2.x releases.

      1) Apache DS

      The basic scenario is that we have a directory that stores user and group information that we would like to import into Apache Syncope 2.0.0. For the purposes of this tutorial, we will work with Apache DS. The first step is to download and launch Apache DS. I recommend installing Apache Directory Studio for an easy way to create and view the data stored in your directory.

      Create two new groups (groupOfNames) in the default domain ("dc=example,dc=com") called "cn=employee,ou=groups,ou=system" and "cn=boss,ou=groups,ou=system". Create two new users (inetOrgPerson) "cn=alice,ou=users,ou=system" and "cn=bob,ou=users,ou=system". Now edit the groups you created such that both alice and bob are employees, but only alice is a boss. Specify "sn" (surname) and "userPassword" attributes for both users.

      2) Pull data into Apache Syncope

      The next task is to import (pull) the user data from Apache DS into Apache Syncope. Download and launch an Apache Syncope 2.0.x instance. Make sure that an LDAP Connector bundle is available (see here).

      a) Define a 'surname' User attribute

      The inetOrgPerson instances we created in Apache DS have a "sn" (surname) attribute. We will map this into an internal User attribute in Apache Syncope. The Schema configuration is quite different in the Admin Console compared to Syncope 1.2.x. Select "Configuration" and then "Types" in the left hand menu. Click on the "Schemas" tab and then the "+" button associated with "PLAIN". Add "surname" for the Key and click "save". Now go into the "AnyTypeClasses" tab and edit the "BaseUser" item. Select "surname" from the list of available plain Schema attributes. Now the users we create in Syncope can have a "surname" attribute.




      b) Define a Connector

      The next thing to do is to define a Connector to enable Syncope to talk to the Apache DS backend. Click on "Topology" in the left-hand menu, and on the ConnId instance on the map. Click "Add new connector" and create a new Connector of type "net.tirasa.connid.bundles.ldap". On the next tab select:
      • Host: localhost
      • TCP Port: 10389
      • Principal: uid=admin,ou=system
      • Password: <password>
      • Base Contexts: ou=users,ou=system and ou=groups,ou=system
      • LDAP Filter for retrieving accounts: cn=*
      • Group Object Classes: groupOfNames
      • Group member attribute: member
      • Click on "Maintain LDAP Group Membership".
      • Uid attribute: cn
      • Base Context to Synchronize: ou=users,ou=system and ou=groups,ou=system
      • Object Classes to Synchronize: inetOrgPerson and groupOfNames
      • Status Management Class: net.tirasa.connid.bundles.ldap.commons.AttributeStatusManagement
      • Tick "Retrieve passwords with search".
      Click on the "heart" icon at the top of tab to check to see whether Syncope is able to connect to the backend resource. If you don't see a green "Successful Connection" message, then consult the logs. On the next tab select all of the available capabilities and click on "Finish".

      c) Define a Resource

      Next we need to define a Resource that uses the LDAP Connector.  The Resource essentially defines how we use the Connector to map information from the backend into Syncope Users and Groups. Click on the Connector that was created in the Topology map and select "Add new resource". Just select the defaults and finish creating the new resource. When the new resource is created, click on it and add some provisioning rules via "Edit provision rules".

      Click the "+" button and select the "USER" type to create the mapping rules for users. Click "next" until you come to the mapping tab and create the following mappings:


      Click "next" and enable "Use Object Link" and enter "'cn=' + username + ',ou=users,ou=system'". Click "Finish" and "save". Repeat the process above for the "GROUP" type to create a mapping rule for groups as follows:
      Similar to creating the user mappings, we also need to enable "Use Object Link" and enter "'cn=' + name + ',ou=groups,ou=system'". Click "Finish" and "save".

      d) Create a pull task

      Having defined a Connector and a Resource to use that Connector, with mappings to map User/Group information to and from the backend, it's time to import the backend information into Syncope.  Click on the resource and select "Pull Tasks". Create a new Pull Task via the "+" button. Select "/" as the destination realm to create the users and groups in. Choose "FULL_RECONCILIATION" as the pull mode. Select "LDAPMembershipPullActions"  (this will preserve the fact that users are members of a group in Syncope) and "LDAPPasswordPullActions" from the list of available actions. Select "Allow create/update/delete". When the task is created,  click on the "execute" button (it looks like a cogged wheel). Now switch to the "Realms" tab in the left-hand menu and look at the users and groups that have been imported in the "/" realm from Apache DS.


