Latest Activity

Open Source song :-)

Olivier Lamy - Mon, 11/01/2010 - 15:46
The famous French singer jcfrog has made a nice song about Open Source :-)
Have a look here :
Check out the other one on twitter :
I hope we will have one for the ASF.
Have Fun !

Categories: Olivier Lamy

Apache Camel 2.5.0 Released

Hadrian Zbarcea - Mon, 11/01/2010 - 04:34
The 2.5.0 release of Apache Camel is finally out. It is the result of some two and a half months of improvements since the last release in July. You can check the release notes for more details and give it a try.

The PMC is now planning to focus on releasing the 3.0 version of Camel in Q1 of next year. The 2.x line will continue to be actively improved for the foreseeable future (at least through 2011), and we plan to have another Camel 2.x release this year.

To help plan the new 3.0 release, the Camel PMC conducted a survey during the month of October. Many thanks to all who participated in the survey and made their voices heard. The results of the survey will be made public this week during ApacheCon. If you want to see the goodies coming in the 3.0 version, keep an eye on the roadmap.
Categories: Hadrian Zbarcea

Give us your headaches

Sergey Beryozkin - Wed, 10/27/2010 - 18:24
Have I already mentioned that I liked reading Richard Branson's autobiography ?

I honestly think it is a gem of a book, which I picked up in Gatwick airport on the way to Dublin from Minsk. It is the story of a life journey, told with a lot of humor, filled with some absolutely hilarious accounts of various events and with some very serious thoughts about doing business and contributing to saving humanity.

You might want to ask: what does all this have to do with web services or software development, given that this post is not marked with [OT] ?

During his student days, Richard set up the Student Advisory Centre, which advised students on any issues they might have, free of charge. One of its slogans was "Give us your headaches". The centre still operates today.

I thought, while reading about it, that to some extent this is what the OS business model is partly about. In OS we work on projects such as CXF, and we are determined to make it all work really well, without any headaches, in the most demanding environments.

The OS business model is partially about providing the insurance that customers will experience no headaches when running such OS-driven projects in production. In fact this model has proven very successful, for top companies such as Red Hat.

Are you considering starting to use CXF, but a bit concerned about what will happen further down the road, with all the OS development going on ?

Give us your headaches :-)
Categories: Sergey Beryozkin

CXF : Here we come !

Sergey Beryozkin - Wed, 10/27/2010 - 17:59
I'm happy to confirm that after a short break I'm going to have a chance to resume working on CXF and its JAXRS project in particular.

I'll join Dan and the rest of the Sopera OS team, and we will be as determined as ever to bring you the best production-quality web services framework around.

And with your help we will make the CXF users and dev lists the "hippiest place to be" (a phrase borrowed from Richard Branson's autobiography).

CXF won't be the only effort we focus on at Sopera, but it is going to be one of the main ones.

Stay tuned.
Categories: Sergey Beryozkin

Goodbye JBoss

Sergey Beryozkin - Wed, 10/27/2010 - 16:57
So, less than 6 months after starting to work for JBoss, I have decided to quit and accept an offer from Sopera. In hindsight, I'm thinking that talking about one's career publicly is not always the best idea - spending such a short period of time at one of the leading companies in the industry is not something that will improve my CV.

Back in April/May I was indeed looking forward to that great opportunity, and it was not a stop-gap measure by any means. But CXF and CXF JAXRS are calling me back - it is next to impossible to resist. I'll blog about that next, but this post is about JBoss and I'd like to talk about it a bit.

Just 6 months ago I had a fairly vague idea of what JBoss was doing, apart from knowing that it was part of RedHat, that it was an application server, and that its JBossWS project provided a CXF JAXWS integration option. And of course I knew about RestEasy.

In reality, JBoss is a very lively, active and progressive project, with a lot of very clever and innovative people working on it who are really passionate about JBoss. And they have a very open and democratic environment, with everyone able to express their thoughts aloud.

I promised in my previous post that I'd link to various JBoss subprojects. Here are links to some of them :

JBossWS : Web Service Framework for JBoss AS
RestEasy : JAX-RS implementation with various features
PicketLink : STS, etc
JBossTS : While many say that WS transactions support is next to impossible to provide, this team just does it
HornetQ : Putting the buzz in messaging

As far as I'm concerned, it is obvious I've given up a real opportunity with a great company. That said, life is about many opportunities, and I'm about to pursue a new and exciting one, with a young and innovative company.

Goodbye JBoss, Hello Sopera !
Categories: Sergey Beryozkin

Apache Maven Site Plugin 3.0-beta-3 for Maven 3

Olivier Lamy - Thu, 10/21/2010 - 12:13
The Maven team is pleased to announce the release of the Maven Site Plugin, version 3.0-beta-3, for Maven 3. This version is intended to be the version of the Maven Site Plugin for Maven 3.
The Site Plugin is used to generate a site for the project.
You should specify the version in your project's POM:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-site-plugin</artifactId>
  <version>3.0-beta-3</version>
</plugin>
Release Notes - Maven 3.x Site Plugin - Version 3.0-beta-3

  • [MSITE-500] - Warning message about missing report plugin version shows null instead of plugin groupId:artifactId
  • [MSITE-504] - Maven site fails to run due to non-report goals
  • [MSITE-505] - Unable to use SVN SCM wagon to upload a site
  • [MSITE-506] - Maven3 conflict with plexus-archiver
  • [MSITE-507] - report plugin doesn't have dependencies section coming from build section configuration
  • [MSITE-508] - attach-descriptor goals leaks file handlers, causing sporadic build failures when gpg tries to sign descriptor during release
  • [MSITE-512] - [Regression] Configuration of m-javadoc-p at reportSet level is not taken into account

  • [MSITE-513] - use release final maven 3.0 artifacts

Have Fun! -- Olivier Lamy, on behalf of the Maven Team

Categories: Olivier Lamy

Maven Selenium Plugin 1.1 (Maven 3 compatible)

Olivier Lamy - Mon, 10/18/2010 - 10:30
Hi, the Maven Selenium Plugin version 1.1 has been released. The most important fix is Maven 3 compatibility !
Don't forget to update the version in your pom:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>selenium-maven-plugin</artifactId>
  <version>1.1</version>
</plugin>
Have Fun !

Categories: Olivier Lamy

Tomcat Maven Plugin 1.1 Released

Olivier Lamy - Sat, 10/09/2010 - 13:16
Tomcat Maven Plugin 1.1 has been released.
Site documentation :

Here are the release notes :

  • [MTOMCAT-55] - Can't shutdown tomcat started with tomcat:run with fork=true
  • [MTOMCAT-57] - Secondary war files deployed with incorrect context path
  • [MTOMCAT-63] - Make isWar() check consistent with MTOMCAT-23
  • [MTOMCAT-64] - Document how to change the Embedded Tomcat Version
  • [MTOMCAT-65] - Upgrade Tomcat to 6.0.29

Have Fun !

Categories: Olivier Lamy

Should you getIn or getOut?

Hadrian Zbarcea - Thu, 10/07/2010 - 03:37
From recent posts on the Camel users@ mailing list there seems to be some confusion related to the use of getIn() and getOut() on a Camel Exchange. The question is relevant to developers who want to implement a processor and need to understand what should happen with the result of their processing. There is a wiki page that attempts to explain the usage of the two methods, but sadly it mostly fuels the confusion rather than clarifying anything. The page will be corrected soon, so you may see a newer, corrected version, but at the time of my writing the current one is version 4 (which was recently improved and is a bit more accurate than version 2).

So the idea (deeply rooted in some older discussion about what the Exchange api should be) is that you should use getIn(), because, uhm, "it's often easier just to adjust the IN message directly", and that has something to do with MEPs and a poor relative of Camel called JBI. Before we get into more details, the reality is much simpler: using getIn() or getOut() does not have much to do with MEPs, and you should do what's right, which is to process the in message (most likely), generate an out message (if needed), or update the in (sometimes).

Camel is one of the best frameworks in the integration space (I'd say the best, but then you'll say I'm biased), and it is true that it was influenced by JBI. Some of its roots are in Apache ServiceMix, and three Camel PMC members have at some point been involved in drafting the JBI specs: James with JBI 1.0, and Guillaume and Bruce with JBI 2.0. One of the main goals of Camel is to make integration simpler and more intuitive than JBI (and to complement JBI in some cases).

Now, just because JBI 2.0 was inactive for a good while doesn't mean that the Camel Exchange API is broken, as some may say. In particular, the JBI 1.0 spec says that "The message exchange model is based on the web services description language 2.0 [WSDL 2.0] or 1.1 [WSDL 1.1]" (jbi-1.0-fr-spec, pg 5), and WSDL is pretty much alive and kicking. Then there is the discussion about MEPs, which is 100% correct and 100% irrelevant. The reason is that the MEP refers to the route, not to what happens within a particular Processor. You could use the exact same Processor on routes that are in-only or in-out.

The concept, though, is much older than WSDL. When implementing a function, one passes arguments and returns a value. Think of a process() function that takes a Message as an argument and returns another Message.

Message process(Message in) throws Exception;

So the question then becomes: should I return the same in Message or another one? Well, that's entirely up to you and what the process() function is supposed to do. The only thing that happens in Camel is that we wrap the two messages in an Exchange (plus the exception and some other metadata, because we need an execution context), and that's what we pass to process(), so the function signature becomes:

void process(Exchange exchange) throws Exception;

... which is pretty much the definition of the Processor interface. Or you can think of the exchange as being the stack frame prepared for the function execution in the first case (not much of a stretch).

One important thing to note, which may clear up some of the confusion, is that getOut() will not really return the out message: rather, it will lazily create an empty DefaultMessage for you on the first call (and return the existing out message on subsequent calls). Therefore there is no out message until you create it by calling getOut(), the assumption being that you called getOut() with the intention of returning the new, distinct Message resulting from your processing. As a consequence, the assertion getOut() == null will always fail, because the getOut() call will create a message. To address that, there is a hasOut() method on Exchange you can use to tell whether the out message has been created (in case you don't know that already).
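The lazy-creation behavior can be sketched with a toy model. These classes are an illustrative stand-in I am making up for this post, not the real org.apache.camel.Exchange API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-ins (not the real Camel classes) illustrating the lazy
// getOut() semantics described above.
class ToyMessage {
    final Map<String, Object> headers = new HashMap<String, Object>();
    Object body;
}

class ToyExchange {
    private final ToyMessage in = new ToyMessage();
    private ToyMessage out; // stays null until getOut() is first called

    ToyMessage getIn() {
        return in;
    }

    // Lazily creates an empty out message on the first call, so a check
    // like "getOut() == null" can never succeed.
    ToyMessage getOut() {
        if (out == null) {
            out = new ToyMessage();
        }
        return out;
    }

    // The reliable way to ask whether an out message was produced.
    boolean hasOut() {
        return out != null;
    }
}
```

Testing hasOut() before touching getOut() is the only safe check, since calling getOut() just to look at it would itself create the message.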

As you probably know, what Camel does is pass an Exchange along the processor chain in a route, passing the out message of the previous processor as the in message of the next. If the previous processor did not produce an out (there are such processors; a logger, for example, logs the in but does not produce an out), the previous in is passed as the in message of the next processor. During its lifetime, an Exchange starts with an in and goes down the processor chain; outs are created, they replace the ins, and the old ins are lost to history. And here's where the MEP comes into the picture: if the route is in-out, at the end of the processing chain the last message in the exchange is passed back by the consumer to the caller; otherwise the last message is simply ignored (with an in-only MEP, the caller is not waiting for an answer).
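That lifetime can be sketched as a small loop. Again, this is a toy simulation under made-up names, not Camel's actual Pipeline code:

```java
import java.util.Arrays;
import java.util.List;

// Toy simulation (not Camel's real Pipeline): the out of one processor
// becomes the in of the next; when no out is produced, the same in is
// carried forward.
interface ToyProcessor {
    // Returns an out message, or null when the processor produces none.
    String process(String in);
}

class ToyPipeline {
    static String run(String initialIn, List<ToyProcessor> chain) {
        String in = initialIn;
        for (ToyProcessor p : chain) {
            String out = p.process(in);
            if (out != null) {
                in = out; // the out replaces the in; the old in is lost
            }
            // no out: the unchanged in goes to the next processor
        }
        // handed back to the caller only when the MEP is in-out
        return in;
    }
}
```

A logger-style processor returns null here and leaves the message untouched, while a transformer replaces it.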

A problem faced by users is that if they call getOut() and produce a new message, the headers from the in message get lost (well, the whole in message gets lost). One thing to know is that if you need to access some value at different points during processing, the Exchange has a property map. Objects in that map stay with the Exchange, unlike headers, which are tied to messages (and probably have no semantics outside the message). As we saw, messages come and go during processing; Exchange properties don't. Now, if you want headers to be preserved, you probably have a very specific processor, one that can only exist in particular places in a route and makes heavy assumptions about the type of in message it receives. If that's the case, yes, you probably only need to slightly modify the in message, and since you did not create an out, the modified in will be passed further down the route. If you need to update a header you set in a previous processor (such as a timestamp) but don't really rely on the in message per se, you might consider using exchange properties instead of headers. Consider also that even if you modify the in message to preserve the headers, other processors down the route may produce an out (and hence replace the current in) anyway, so headers could still get lost down the route.
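The header-versus-property distinction can be sketched with another toy model (made-up classes, not the Camel API): headers ride on a message and disappear when the message is replaced, while properties ride on the exchange.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration (not the real Camel classes): headers belong to a
// message, properties belong to the exchange.
class HeaderedMessage {
    final Map<String, Object> headers = new HashMap<String, Object>();
}

class PropertiedExchange {
    final Map<String, Object> properties = new HashMap<String, Object>();
    HeaderedMessage current = new HeaderedMessage();

    // A processor produced a new out message: it replaces the in, and
    // the old message's headers are lost with it.
    void replaceMessage() {
        current = new HeaderedMessage();
    }
}
```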

That's pretty much it. Let's take a look at a few examples. A throttler processor will most likely not care about the in nor generate an out; it will only introduce a delay in processing (which may be adaptive and may depend on a header or property). A profiler processor will probably update timestamps in exchange properties, again ignoring the in, not producing an out, and couldn't care less what the exchange pattern of the exchange is. A logger processor will log the in and won't produce an out; the MEP is again irrelevant. A crypto processor (partial message protection) will use credentials from the exchange properties to alter parts of the in message, sign/verify/encrypt/decrypt (either headers or parts of the body), and will likely not produce an out. A proxy processor that allows you to integrate some legacy technology into a Camel route will use the input, format it according to the needs of the technology, use the proprietary API, get the result, and put it as a new out on the exchange, possibly along with specific headers. A processor that fetches records from a database will most likely put the result set into a new out message as well. So again, should you use getIn() or getOut()? It depends very much on what your processor needs to do.

Don't do what's easy, do what's right!
Categories: Hadrian Zbarcea

Take the Apache Camel Survey!

Hadrian Zbarcea - Wed, 10/06/2010 - 05:18
A new Apache Camel release is just around the corner (version 2.5.0), and recently we announced taking the 1.x branch off life support. Therefore the race is on to deliver a new major version, Camel 3.0, early in Q1 2011 (see roadmap).

To that end, the Camel PMC worked on a survey last week intended to get a better feel for how the community is using Apache Camel, what the major issues are (documentation, I know), and what improvements and new features you want to see in the next releases. The survey, sponsored by Sopera, is anonymous, and its results will be made public and used by the committers to create a remarkable experience for you.

We care about you, our users, and, from the feedback we got so far, I am impressed by how much you care about us in return. Your voice is important, and it's one sure way to move the features you care most about higher on our priority lists and busy schedules. Needless to say, feel free to reach out to us on the mailing lists and IRC channel with questions and suggestions.

So please take the survey, if you didn't already, and receive our gratitude for your continued support.
Categories: Hadrian Zbarcea

Hudson Maven Dependency Update Trigger : First Hudson plugin with Maven 3 APIs

Olivier Lamy - Sun, 10/03/2010 - 09:22
I have made the first release of the Maven Dependency Update Trigger Hudson plugin.
Wiki page here :
This plugin checks, according to a cron expression, whether your SNAPSHOT dependencies have been updated and (optionally) whether your SNAPSHOT plugins have been updated too, and triggers a build if necessary.
This is useful if you have project dependencies that are not in the same Hudson instance, or that are built by another continuous integration tool.
Note this plugin uses Maven 3 APIs to build Maven projects and check dependencies.
Hudson 1.379 is required to use this plugin.

So have fun and give me feedback.

Categories: Olivier Lamy

Use Maven 3 APIs in a Hudson plugin

Olivier Lamy - Wed, 09/22/2010 - 22:56
Using Maven 3 APIs in a Hudson plugin requires changing the classLoader hierarchy, because Hudson loads some old Maven artifacts in the parent classLoader.
So the solution is to use a child-first classLoader.
This new classLoader was introduced in Hudson 1.378 (see HUDSON-5360).
So you can now use it to make your plugin dependencies win over those in the Hudson core classLoader.
A few steps are needed.
Configuring your hpi plugin


So now your pluginWrapper.classLoader will be of type PluginFirstClassLoader.

// pluginId is project.artifactId from your plugin pom
PluginWrapper pluginWrapper = Hudson.getInstance().getPluginManager().getPlugin( pluginId );
PluginFirstClassLoader pluginFirstClassLoader = (PluginFirstClassLoader) pluginWrapper.classLoader;

Now everything is ready, but some other tricks are needed :-)

Create a PlexusContainer (note you must use the new "plexus-guice" bridge).
This means you need the following dependency :


And, very importantly, you must exclude dependencies coming from the original plexus-container:


To prevent this you can add an enforcer rule which will check for plexus-container-default inclusion.

The ensure-no-plexus-container rule doesn't work anymore with Maven 3 libraries; you have to add some exclusions.

How to create the PlexusContainer from the PluginFirstClassLoader :

private PlexusContainer getPlexusContainer( PluginFirstClassLoader pluginFirstClassLoader )
    throws PlexusContainerException
{
    DefaultContainerConfiguration conf = new DefaultContainerConfiguration();
    ClassWorld world = new ClassWorld();
    ClassRealm classRealm = new ClassRealm( world, "project-building", pluginFirstClassLoader );
    // olamy: yup, hackish, but it's needed for plexus-shim, which needs a
    // URLClassLoader, and PluginFirstClassLoader is not one
    for ( URL url : pluginFirstClassLoader.getURLs() )
    {
        classRealm.addURL( url );
        LOGGER.fine( "add url " + url.toExternalForm() );
    }
    conf.setRealm( classRealm );

    return new DefaultPlexusContainer( conf );
}

Then set the current thread's context classLoader (don't forget to restore the original one in a finally block):

PlexusContainer plexusContainer = getPlexusContainer( pluginFirstClassLoader );

Thread.currentThread().setContextClassLoader( plexusContainer.getContainerRealm() );
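The save/swap/restore pattern mentioned above can be sketched generically; the helper and variable names here are my own, with a plain Runnable standing in for the work done against the container realm:

```java
// Generic save/swap/restore of the thread context classLoader: swap in
// a replacement loader, run the work, and always restore the original
// in a finally block, even if the work throws.
class ContextClassLoaderUtil
{
    static void withContextClassLoader( ClassLoader replacement, Runnable work )
    {
        ClassLoader original = Thread.currentThread().getContextClassLoader();
        Thread.currentThread().setContextClassLoader( replacement );
        try
        {
            work.run();
        }
        finally
        {
            Thread.currentThread().setContextClassLoader( original );
        }
    }
}
```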

Now you can play with the Maven 3 APIs:

ProjectBuilder projectBuilder = plexusContainer.lookup( ProjectBuilder.class );

A first sample is the plugin called maven-dependency-update-trigger : sources and Wiki Page (under construction :-) ).
You should have a look at the dependencies used.

The plugin will check your project according to a cron expression and schedule a build if a SNAPSHOT dependency has changed (dependencies and plugins).

It will be released as soon as Hudson 1.378 is released.

Note : hpi:run doesn't work yet for this plugin, as it needs some changes in the hpi mojo to handle the new PluginFirstClassLoader.

Some thoughts on the Java u21 problem

Francis Upton's blog - Thu, 07/15/2010 - 20:43

I have seen the Platform UI bug about this, it caught my attention, and I have made several comments on it. I thought I would share my observations and suggestions at this point, as a member of the Eclipse community.


  1. This problem happens when you run Helios on Windows (and possibly other platforms) with the current version of Java (Java 6 update 21). The problem also manifests in the worst possible way: it shows up as a hang (or possibly an infinite loop). As I write this, I’m sure that hundreds of hours are being spent by people trying to use Eclipse and trying to figure out what’s going on.
  2. If there is such a thing as an emergency about things not working right, I would say this would be it. Downloading the current stuff is broken.
  3. This is an Eclipse bug. We are using the internals of software that we should not be; it changed under us and broke our stuff. Also, we did not certify against this software. We need to fix this problem. Complaining that Oracle changed it, and depending on them to fix it, is a disservice to our users. There are many, many examples of this sort of thing happening (mainly in SWT), and in most cases Eclipse does the right thing by working around the problem and fixing Eclipse so that the people using Eclipse have the best experience possible.
  4. The bug happens with many previous versions of Eclipse due to the Oracle VM change, and some solution needs to be provided for them as well.
  1. Fix Helios right now. Today if possible. Yes, this will require a 3.6a or whatever, but we have to make this configuration work seamlessly.
  2. I know a lot of work has been done to develop workarounds and solutions for the previous releases. These need to be widely communicated, including on the front page. We need to get the word out to help people. And we have to keep in mind that people will be looking for a solution to a hang.
  3. Improve our release certification process. There is a Java release process where they test and provide releases for testing and certification precisely so that these problems don’t happen. We are one of the leading IDEs for Java; we should be totally on top of this process (including having relationships with Oracle so that mechanisms are in place to keep stuff from getting broken). We had the means to see this coming, and we should have prepared for it by certifying Helios on the u21 version of Java. I know we are pretty formal about talking about our reference platforms; we need to do better here.
  4. Have some policies about accessing the internals of dependent products. Of course it’s necessary to do this in a lot of areas (particularly SWT and Equinox), but where it is done there should be some extra review, and also some more centralized tracking, so that we understand these dependencies and can consider them in our certification. The extra review will also minimize the temptation to quickly add an internal dependency. People should think long and hard before doing this, to see if there is another way.
This is a very serious problem, and we should do what they do when they investigate airplane crashes: look at this from every angle, get as much learning out of it as possible, and implement improvements to the process at all levels to make sure it does not happen again.
Categories: Francis Upton

MercurialEclipse and Andrei the “handle it” dude

Francis Upton's blog - Mon, 05/17/2010 - 04:07

As I posted previously I was having real trouble with hg and MercurialEclipse.

It turns out a lot of my problems with slowness in decorator updating and startup with Eclipse were related to the number of projects (and maybe size) of my repository. The repo contains 114 projects, a total of 3059 folders and about 98000 files. One project is about 500M, two are about 250M and the rest are pretty small, about half of the projects are only a few K (Eclipse feature projects). About 125 of the files exceed 2MB, 4 exceed 5MB and 1 exceeds 10MB.

I reported a bug on this yesterday, and immediately Ekke (an hg/MercurialEclipse fan) made some tests and posted them there, and Andrei (the MercurialEclipse maintainer) asked several questions. Now, less than 48 hours after filing the bug report, I have received two updates of MercurialEclipse, the second of which reduced the startup time from close to 4 minutes to 30 seconds, which is in the acceptable realm.

A while ago I had a similar response from Andrei with a showstopper issue for me which was resolved in less than 24 hours.

I am very happy I made the decision to go with hg, even though it’s a relatively new path from a number of perspectives. The main reason I’m happy about it is that I know that any real problems will be taken care of immediately, and that’s something that’s very rare in this business, no matter how much money you are willing to pay.

Thanks Andrei!

Categories: Francis Upton

Sadly, converting from Subversion to Mercurial *almost* failed (revised)

Francis Upton's blog - Sat, 05/15/2010 - 07:45

(Since posting this I’m more encouraged and I am keeping going, in fact I have now committed to the switch — there are some serious performance problems for my large repo, but given the responsiveness of the HgEclipse folks I expect it will be resolved soon, and I am able to get work done — Thanks Ekke for your comments and encouragement about this)

I really loved the idea of using a DVCS, and I settled on Mercurial (hg) because I use a very good bug tracking system called FogBugz, and they offer a product called Kiln which provides hosted hg repositories (and good integration between the source control and the bug tracking system). I have really begun to hate Subversion, which is what I have been using for the last few years. One of the main annoying things about SVN is the issues around copying folders or projects. Eclipse is not smart enough to *not* copy the .svn folders, so you quickly end up with a mess if you forget to delete them in the destination. And there are other problems, so I was hoping that hg would be better.

It took tens of hours and many tries to convert my 130 or so projects, containing about 1.5G of source and binaries, from SVN to hg. This was time mainly spent pushing the newly created repositories up to the Kiln server. Sometimes it would run for several hours and die, or time out, giving no indication as to what was wrong. And hg push does not seem to push one changeset at a time unless you tell it to; it’s pretty much all or nothing. So if you are pushing 100 changesets and 90 get completed before it dies, you still get nothing in the target repository. There is also no status indication as to which changesets have been processed. The Kiln folks have provided some useful extensions that try to batch the pushes into groups, but even this does not work in all situations, as the heuristics for guessing the batch size are not good in all cases. Sometimes a single changeset can be huge (I have several big things checked in). I thought this would all get better once the initial work was done.

But that was not the case. I decided to reorganize some large projects by moving them from subfolders to the top-level folder. Pushing to the hosted repo took several hours. And the Eclipse tooling for hg (version 1.6.0 of Hg/MercurialEclipse) did not seem to figure this out correctly. Even hg itself seemed confused about this (I did an hg mv and then hg commit, but it left behind the old folders).

Another annoying thing was that empty folders from svn were simply dropped in the conversion.

Finally, the Eclipse tooling is really not there. When I start up Eclipse with my 130 projects, it takes nearly 4 minutes of the CPU being pegged for the hg status to be updated in the project explorer. Compare this with about 10 seconds with svn (yes, I have filed a bug about this with the HgEclipse folks — they seem to be very responsive [I have filed other issues which have been fixed quickly]). And it seems to be very slow in responding to changes like ignoring certain files. And this was only after a few minutes of using it with all of my projects. It seems to want to refresh resources quite a lot, which is not workable with my amount of code.

And Kiln itself had troubles. I would try to open certain directories using their web support and got a cute message about the Kiln being overheated and they were working on it.

I’m sure all of these issues will be worked out over time, but I have given up after a week of fighting this stuff and am going back to svn. I look forward to trying again in a few months, when hopefully these issues will be resolved. I am using version 1.5.2 of hg on an Ubuntu system.

Categories: Francis Upton

CNF Testing for 3.5.2 – Please test your projects

Francis Upton's blog - Wed, 01/20/2010 - 00:17

If your project uses the Common Navigator/Project Explorer, please be sure to test over the next week with the 3.5.2 RC2 build.  A few critical fixes have gone into the CNF, and we need to know sooner rather than later if there is a problem:

  • Bug 296253 (major, P3, resolved/fixed): [CommonNavigator] An empty label is not properly shown when it is the only contributed label
  • Bug 295803 (major, P3, resolved/fixed): [CommonNavigator] Source of Contribution set to lowest priority NCE, not the NCE providing the children
  • Bug 296728 (normal, P3, assigned): [CommonNavigator] Problem with enablement on navigatorContent extension point
  • Bug 299438 (normal, P3, resolved/fixed): [CommonNavigator] CNF viewer state not properly reset when NCEs are activated or deactivated
Categories: Francis Upton

An incompatibility between Ganymede and Galileo

Francis Upton's blog - Sun, 11/22/2009 - 19:17

In working on my company’s product I saw that the correct icons were appearing when the product was run in Ganymede, but not in Galileo.  The problem is captured in bug 295803 and has to do with the selection of the Navigator Content Extension (NCE) that provides the label when there are two NCEs that operate on the same content.  In my application’s case, it was my app’s NCE and the NCE that takes care of resources.  In Ganymede, the higher priority of these NCEs has the first opportunity to provide the labels, but in Galileo it’s the opposite: the lowest priority provides the labels, which is certainly not what you want.

I think we should address the above bug in 3.5.2, as it’s a bad incompatible change.

Here is the (amazingly pretty and elegant) code that I used to work around the problem:

/*
 * Yum. Due to Eclipse bug 295803 we need to change the priority of the
 * NCE for our product. This only needs to happen on a 3.5.0 or 3.5.1
 * system. We sense this by the org.eclipse.ui.navigator bundle version
 * (in which the minor version lags one behind the main Eclipse
 * version.) Note that if this bug is fixed in 3.5.2, then the version
 * check needs to consider only CNF versions 3.4.0 and 3.4.1.
 */
Bundle bundle = Platform.getBundle("org.eclipse.ui.navigator"); //$NON-NLS-1$
Dictionary headers = bundle.getHeaders();
String version = (String)headers.get("Bundle-Version"); //$NON-NLS-1$
if (version.startsWith("3.4")) //$NON-NLS-1$
{
    INavigatorContentService ncs = getNavigator()
        .getNavigatorContentService();
    NavigatorContentDescriptor desc = (NavigatorContentDescriptor)ncs
        .getContentDescriptorById("com.oaklandsw.transform.navigatorContent"); //$NON-NLS-1$
    try
    {
        Field[] fields = desc.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++)
        {
            if (fields[i].getName().equals("priority")) //$NON-NLS-1$
            {
                fields[i].setAccessible(true);
                fields[i].set(desc, new Integer(Priority.LOWEST_PRIORITY_VALUE));
                break;
            }
        }
    }
    catch (Exception e)
    {
        Util.impossible(e);
    }
}
Categories: Francis Upton

RCP, p2, Vista and VirtualStore

Francis Upton's blog - Sun, 05/24/2009 - 22:06

I have an RCP application that uses p2 and needs to run on all of our platforms (this uses 3.4.2).  Everything was great until Vista came along.  On Vista, when going to the Software Updates… dialog, I get the “not properly configured” dialog and it won’t let me do the fun p2 stuff.

The problem is that Vista does not (quite) allow you to write things in the Program Files directory.  You can write as much as you like during the install process, but at run time Vista silently redirects writes in this area to a “virtual store”, and p2 will not work with this.

The solution that I have found (my app is called “osdt”, which is also the name of my launcher executable) is to:

  1. Add
    • -Dosgi.configuration.area=@user.home/osdt/configuration

    to my Launching Arguments in my product file in the VM Arguments section for all platforms.  This tells it to look for the configuration area in a location that’s relative to the Java “user.home” system property (c:/Users/<User name> on Vista and c:/Documents and Settings/<User name> on XP.)

  2. Add the following into my config.ini file:
    • osgi.instance.area=@user.home/osdt/workspace

    This makes sure that the p2 data area is relative to the configuration and that my workspace is also created relative to the user’s home directory.

  3. Teach my installer to put the configuration and p2 directories in the expected location. I use install4j, and the key here is to have it compute the install location for these directories at install time using the Java “user.home” system property, so that the Eclipse @user.home and the location the kit is installed to are always in sync.
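Put together, the launcher .ini and config.ini end up looking roughly like this (a sketch based on the settings above; the exact files your build produces may differ):

```ini
; osdt.ini (launcher arguments; everything after -vmargs goes to the JVM)
-vmargs
-Dosgi.configuration.area=@user.home/osdt/configuration

; config.ini (inside the configuration area)
osgi.instance.area=@user.home/osdt/workspace
```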

All of this was great, but it still did not work; I was finding that no matter what I did to my osdt.ini file, it had no effect. The problem was that the Vista “virtual store” mechanism had captured my files from previous installs, and those were silently overriding what was in the Program Files folder. In general, if you write something to the Program Files folder, it will really be written to c:/Users/<User name>/AppData/Local/VirtualStore/Program Files, and whatever is at this virtual store location silently takes precedence over what is in the Program Files folder. Once I deleted the stuff in the virtual store location, everything was fine.
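The shadowing rule described above is mechanical enough to check for. Here is a minimal sketch of a helper that computes the VirtualStore path for a file under Program Files (the class, method, and user name are my own illustration, not code from the app):

```java
import java.io.File;

public class VirtualStoreCheck {
    // Hypothetical helper: given a file under Program Files, compute the
    // Vista VirtualStore shadow path that may silently override it.
    static File virtualStorePath(File programFilesFile, String userHome) {
        String path = programFilesFile.getPath().replace('\\', '/');
        int idx = path.indexOf("Program Files");
        if (idx < 0) {
            return null; // not under Program Files, so no shadow copy
        }
        return new File(userHome + "/AppData/Local/VirtualStore/"
                + path.substring(idx));
    }

    public static void main(String[] args) {
        File shadow = virtualStorePath(
                new File("c:/Program Files/osdt/osdt.ini"),
                "c:/Users/someuser");
        // If this file exists, it is what the app actually sees.
        System.out.println(shadow.getPath().replace('\\', '/'));
    }
}
```

If the computed shadow file exists, deleting it (as described above) restores the copy in Program Files.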

Categories: Francis Upton

Application “…” could not be found in the registry.

Francis Upton's blog - Wed, 05/20/2009 - 08:04

Just spent a couple of hours debugging one of these and thought I would write it up in hopes that it makes things better.

My use case is that I’m calling my application from an API which means I’m dynamically starting it using the EclipseStarter class (with reflection since I can’t reference Eclipse stuff from the JAR file that my customers use).

Everything used to work in this particular set of tests (don’t they all say that?), and then when I ran them I got the error below.

It turns out the problem was that the plugin (bundle) containing the application definition (in its plugin.xml) did not get resolved.  It did not get resolved because it depended on a plugin that did not exist.  However, there was nothing in the logs indicating a problem loading this plugin; it was just silently not loaded.  I only discovered this by debugging the bundle resolution code and wondering why it was not resolved.

So if you get this error and you can’t think of any other reason for it, make sure all of your dependent bundles are present, and resolution:=optional could be your friend.
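For reference, an optional dependency is declared in the bundle’s MANIFEST.MF roughly like this (the bundle name here is made up for illustration):

```
Require-Bundle: com.example.maybe.missing;resolution:=optional
```

With the directive, the bundle still resolves when the dependency is absent; without it, the bundle is silently dropped as described above.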

A bug has been filed about this: (this happened in 3.4.2, so maybe it’s fixed in 3.5).

Caused by: java.lang.RuntimeException: Application “com.oaklandsw.transform.runtime.engine” could not be found in the registry. The applications available are:, org.eclipse.update.core.standaloneUpdate, org.eclipse.updat
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at com.oaklandsw.transform.runtime.RuntimeFactory.createRuntime(
… 76 more

Categories: Francis Upton
