Wednesday, 14 July 2010

Implementing CMIS

Since the first of May, the first version of the CMIS spec has been finalized. This new standard is ideally suited for Repository-to-Repository (R2R) and Application-to-Repository (A2R) CMS integration. In this post I’ll try to give you a brief overview of the possibilities and points of attention concerning a CMIS integration.

The specification

CMIS allows you to communicate with a CMIS-compliant CMS in a standard way. The communication can be based on Web Services or on RESTful AtomPub. If you can freely choose between these two technologies, I would definitely go for AtomPub because it’s the most completely specified one (the spec covers AtomPub in almost 100 pages, while Web Services takes only 3 pages).

So does a new standard for integrating a CMS solve all your problems and needs? The answer -of course- is no. After reading the spec, this is my major concern: optional capabilities. “...Thus, a repository implementation may not necessarily be able to support all CMIS capabilities. A few CMIS capabilities are therefore “optional” for a repository to be compliant…”

It clearly states that a compliant CMS will not necessarily support the whole spec. Especially if you plan to write an implementation that can handle multiple Content Management Systems at once, this could be a pitfall.
Luckily CMIS has a service available to retrieve the capabilities of a repository, and it should be clear that you first have to investigate those capabilities (to figure out how much of the spec is implemented) before integrating a CMS via CMIS. You can find the relevant information in the spec under getRepositoryInfo:
Description: Returns information about the CMIS repository, the optional capabilities it supports and its Access Control information if applicable.
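To make this concrete, here is a minimal, hedged sketch of how an integration layer could gate its features on the reported capabilities. The helper class and method names are my own invention; in OpenCMIS the actual value comes from session.getRepositoryInfo().getCapabilities().getQueryCapability(), and it is modelled here as a plain string so the example is self-contained:

```java
// Hypothetical helper that decides which features an integration layer can
// enable, based on the query capability reported by getRepositoryInfo.
// In OpenCMIS the real value comes from
// session.getRepositoryInfo().getCapabilities().getQueryCapability();
// here it is modelled as a plain string ("none", "metadataonly",
// "fulltextonly", "bothseparate" or "bothcombined").
public class RepositoryFeatureGate {

    public static boolean supportsQuery(String queryCapability) {
        return queryCapability != null && !"none".equals(queryCapability);
    }

    public static boolean supportsFullTextSearch(String queryCapability) {
        return "fulltextonly".equals(queryCapability)
                || "bothseparate".equals(queryCapability)
                || "bothcombined".equals(queryCapability);
    }

    public static void main(String[] args) {
        // a repository that only supports metadata queries
        System.out.println(supportsQuery("metadataonly"));          // prints true
        System.out.println(supportsFullTextSearch("metadataonly")); // prints false
    }
}
```

An application that targets multiple repositories at once would run checks like these once per repository and disable (or emulate) the features a particular repository does not report.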

Apache Chemistry

Before going into detail on the architecture I would like to introduce another framework to you: Apache Chemistry. If you are a Java, Python, PHP or JavaScript developer, you have the option to make your application CMIS-enabled by adding an extra layer based on this framework.
Apache Chemistry provides open source implementations of the Content Management Interoperability Services (CMIS) specification. There is an API available for different languages, but since I’m a Java guy, I will only focus on the Java API.

The Apache Chemistry Java API is named OpenCMIS.


OpenCMIS is a collection of Java libraries, frameworks and tools around the CMIS specification, and it is very complete. The main thing that is lacking for the moment is -of course- documentation. There are three main parts in the OpenCMIS collection: the CMIS Client (consisting of the Client API and the Client Bindings API), the OpenCMIS Server Framework and the CMIS Browser. In this blog post I will only focus on the CMIS Client.

OpenCMIS Client

As said before, the OpenCMIS Client contains two separate APIs: the Client API and the Client Bindings API.
The OpenCMIS Client Bindings API hides the CMIS AtomPub and Web Services bindings and provides an interface that is very similar to the CMIS domain model. The services, operations, parameters, and structures are named after the CMIS domain model and behave as described in the CMIS specification.

The primary objective of the Client Bindings API is to be complete, covering all CMIS operations and extension points. The result is a somewhat clunky interface. The Client API sits on top of the Binding API and exposes a nicer and simpler to use interface. It is the better choice for most applications.


A2R (Application-to-Repository)


A2R (with OpenCMIS)


R2R (Repository-to-Repository)


Put it all in practice

Since there’s almost no documentation available at this point, I’ll share some code with you. It’s not too hard to get something working, but to get there you have to dive into the source code of OpenCMIS from time to time.
In contrast to the OpenCMIS documentation, the CMIS spec is very complete, and it contains a lot of useful examples based on AtomPub.

Another big help is the Alfresco website; there is even a test site which you can use to test your implementation.

The code
Java imports (including the ones needed by the snippets further down):
import java.io.File;
import java.io.FileInputStream;
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

import javax.activation.MimetypesFileTypeMap;

import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.ObjectId;
import org.apache.chemistry.opencmis.client.api.ObjectType;
import org.apache.chemistry.opencmis.client.api.QueryResult;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.BindingType;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;
import org.apache.chemistry.opencmis.commons.impl.dataobjects.ContentStreamImpl;

Connecting to a repository:

SessionFactory f = SessionFactoryImpl.newInstance();
Map<String, String> parameter = new HashMap<String, String>();

//user credentials
parameter.put(SessionParameter.USER, "admin");
parameter.put(SessionParameter.PASSWORD, "admin");

//binding: AtomPub, plus the AtomPub service document URL of your repository
//(the URL below is just a placeholder)
parameter.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
parameter.put(SessionParameter.ATOMPUB_URL, "http://<server>:<port>/<cmis-atompub-url>");

//session locale
parameter.put(SessionParameter.LOCALE_ISO3166_COUNTRY, "BE");
parameter.put(SessionParameter.LOCALE_ISO639_LANGUAGE, "nl");

// create session; this works if SessionParameter.REPOSITORY_ID is set or the
// server exposes exactly one repository, otherwise pick one via
// f.getRepositories(parameter) first
Session s = f.createSession(parameter);

Retrieve a folder:

//Retrieve a folder (replace the placeholder with a real, repository-specific object id)
ObjectId objectID = s.createObjectId("<folder-object-id>");
CmisObject CO = s.getObject(objectID);

Query the repository:

ItemIterable<QueryResult> q =
s.query("SELECT * FROM cmis:document WHERE CONTAINS('RS232') AND " +
"cmis:createdBy = 'admin' AND IN_FOLDER(" +
"'workspace://SpacesStore/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx')", true);

int i = 0;
for (QueryResult qr : q) {
    i++;
    //print the object id of every hit
    System.out.println(i + ": " + qr.getPropertyValueById(PropertyIds.OBJECT_ID));
}

Insert a new document:

//Insert a new document
try {
    File file = new File("C:\\afile.pdf");
    Map<String, String> properties = new HashMap<String, String>();
    properties.put(PropertyIds.NAME, file.getName());
    properties.put(PropertyIds.CHECKIN_COMMENT, "");
    properties.put(PropertyIds.OBJECT_TYPE_ID, ObjectType.DOCUMENT_BASETYPE_ID);
    ContentStream contentStream = new ContentStreamImpl(file.getName(),
            BigInteger.valueOf(file.length()),
            new MimetypesFileTypeMap().getContentType(file),
            new FileInputStream(file));
    //folder is the target Folder, e.g. (Folder) s.getObject(objectID)
    folder.createDocument(properties, contentStream, VersioningState.MAJOR);
} catch (Exception e) {
    e.printStackTrace();
}
That’s all for the moment folks.


CMIS Specification:
Apache Chemistry:

Tuesday, 22 June 2010

jBPM Custom Mail Producer

As stated in the jBPM developers guide (13.4.1. Custom Producers) it is possible to plug in your own custom mail producer.

However, the topic in the developers guide is very brief, and just implementing the interface or extending MailProducerImpl won’t make your code work.

Option one: “Google”…. No luck this time - it makes me wonder how many people really use jBPM 4.

Second option: diving into the source code myself and completing the puzzle on my own. After a couple of hours I came to a solution that does the trick.

As you can see in the source code instantiating a mail provider is done in org.jbpm.jpdl.internal.xml.JpdlParser:


So to enable a process to use a custom mail producer, you extend the notification node with a class attribute, along these lines (the producer class name is just an example):

<notification class="org.example.CustomMailProducer" />

So far, so good.
Final step: writing your custom producer.
I started simply by creating an empty class which extends MailProducerImpl:


Running my code however resulted in the following error: Exception in thread "main" java.lang.NullPointerException


It taught me that when a custom mail producer is specified, a mail template is not set.


To solve this error, adjust the constructor of the mail producer so that it sets a template. A minimal sketch (the exact template setup may differ per jBPM 4 version):

public CustomMailProducer() {
    //a template is not set when a custom producer is specified, hence the NPE
    setTemplate(new MailTemplate());
}

In the end, this is what my mail producer looks like (it still extends the default mail producer, but at this point just implementing the interface would probably do the trick).

The last thing I want to mention is the override of the produce method. In that method I retrieve the current task from the TaskContext, which makes it possible to use it in the other methods of the class. (Below is also a code snippet from org.jbpm.jpdl.internal.activity.MailListener, which is responsible for maintaining the TaskContext.)



Now you can spam yourself while testing and fine tuning your code :-)

Wednesday, 9 June 2010

5 things you should know when developing products with ADF11g

I’m trying to get this talk into OOW via Oracle Mix. So if you want to see my talk, ‘5 things you should know when developing products with ADF11g’, take a minute or two and vote for it.

Thursday, 4 March 2010

Java Memory Management

When writing programs it’s always good to know what’s happening in the core of the framework you’re using. This allows you to get a better idea of why things happen in a certain way, or of what to do in case you receive an error.

So if you’re a Java programmer, a good thing to read is this paper on memory management in the Java HotSpot virtual machine. It starts with a short introduction and a brief overview of the concepts, and later on it dives a little deeper with an overview of the different garbage collectors and a chapter on ergonomics. At the end you get some tips and recommendations.

Chapter 6 also contains the paragraph ‘What to Do about OutOfMemoryError’, helpful for us all I think :-) (especially if you’ve ever used BufferedImage).
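If you want to see the numbers the garbage collector is working with on your own JVM, a quick sketch like the one below prints the current heap figures (the exact values depend on your JVM and on the -Xms/-Xmx flags, so no fixed output is shown):

```java
// Prints the heap limits and usage the garbage collector currently works with.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // upper bound the heap may grow to (-Xmx)
        long total = rt.totalMemory(); // heap currently reserved by the JVM
        long free = rt.freeMemory();   // unused part of the reserved heap
        System.out.println("max heap:   " + (max / (1024 * 1024)) + " MB");
        System.out.println("total heap: " + (total / (1024 * 1024)) + " MB");
        System.out.println("used heap:  " + ((total - free) / (1024 * 1024)) + " MB");
    }
}
```

A heap OutOfMemoryError roughly means “used” has reached “max” and a full collection could not free enough space - exactly the situation that chapter discusses.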

Here you can find more information on Java HotSpot Garbage Collection.

Monday, 22 February 2010

ADF automated deployment: Hudson and WLST

As I mentioned in my previous post on Hudson, I had the wish to extend my Hudson build script so that it would fully automate the deployment of the ear file to a WebLogic server.

So I gave it a try…

With the help of a blog post by Jay SenSharma it actually was quite an easy job.

Step 1:
I copied the MIDDLEWARE_HOME\wlserver_10.3\server\bin\setWLSEnv.cmd to C:\Hudson_Slave_Node\workspace, and renamed it to deployWLS.cmd

Step 2
I created a Python script file and added it to the same directory.

This is the content of the Python file (the credentials, URL and ear location are placeholders you have to adjust to your own environment):

connect('weblogic', 'password', 't3://localhost:7001')
deploy('dummy_application1', 'C:/Hudson_Slave_Node/workspace/dummy_application1.ear')
startApplication('dummy_application1')
disconnect()

Step 3:
I changed the bottom of deployWLS.cmd to look like:

@echo Your environment has been set.

@echo Deploy to WLS
java weblogic.WLST C:\Hudson_Slave_Node\workspace\
@echo Deploy to WLS Finished


Step 4:
I added a new ‘Windows batch command’ Build step in Hudson that executes the deployWLS.cmd file.

Step 5:
Grab a pint of (Belgian) beer, sit down, relax and see how the job is now done for you... :-)


If you receive the following error:

Caused by: weblogic.common.ResourceException: No credential mapper entry found for password indirection user=cm for data source CM
        at weblogic.jdbc.common.internal.DataSourceConnectionPoolConfig.getPoolProperties
        at weblogic.jdbc.module.JDBCModule.prepare
        at weblogic.application.internal.flow.ModuleListenerInvoker.prepare

You have to adjust your application properties so that the weblogic-jdbc.xml files are not included. It’s not possible to automatically deploy your ear file with this option enabled. So uncheck this option and add a DataSource with the correct JNDI name via the WLS console.

Jaysensharma's Blog
Forum: WebLogic Server - Upgrade / Install / Environment / Migration
Thread: WLST to deploy ear file
Using the WebLogic Scripting Tool
WLST Command and Variable Reference

Sunday, 14 February 2010

ADF Methodology: Customization

I started a new thread in the ADF Enterprise Methodology Group about customization.

I’ll share my question also here on my blog. If you would like to comment on this subject I suggest you do it in the thread at the ADF Enterprise Methodology Group.

This is the question...

Hi All,
We’re currently developing a product with ADF 11g technology (Rich Faces/BC). We already finished our first version and are busy with our first customer implementations. Since it’s a product, the idea is to have the same basic product (core) installed for all the customers (for a specific version) and to do individual customizations (if necessary) for each one of them.

For the customizations there are two important requirements:
-The implementation has to be maintainable (we don’t want to do bug fixing for each individual implementation, but only in our shared product core). So the implementation of the customizations has to guarantee not to pollute the product core.
-Secondly, our product has to remain upgradeable. If our product core evolves to a new version, it has to be possible to do an upgrade without rewriting the customizations.

To achieve this we already used the following strategy:
-The use of flexible fields and flexible tab pages (which don’t require any coding)
-The use of a Business rules engine
-The use of a flexible BPM engine

This makes it possible to reduce the amount of necessary customizations to a minimum, but as always, extra customization is sometimes needed (for example the integration with another system, an extra application, ...). For these we still have two ideas/options:
-Modularization (this subject probably needs a thread for its own)
-MDS: the first consideration here is that every site customization is implemented in the same code base, so with every customization our core would grow bigger and bigger… and therefore MDS seems not to be an option.

So the main question is how to approach customization? Are we making the wrong considerations? Are there other options?

The ideal world would give us the possibility to have our core packaged in a separate install (ear file), and to let us do our customizations in a separate workspace with the possibility to override most of the core functionality… Is this possible within ADF?

Note: for clarity we are not speaking about personalization here (and the MDS possibilities for this issue).


Friday, 12 February 2010

JDeveloper & Hudson

For a while I had been looking for a description of how to do continuous integration with Hudson based on a JDeveloper project, without any luck.

Last Wednesday, however, I read the blog post of Geoff Waymark, ‘Ultra quick Hudson setup to build a JDeveloper ADF application’, and in no time I finished the job.
Geoff’s blog post is very complete and detailed, but since I was not familiar with Hudson and scripting at all, I’ll go into a bit more detail on Hudson’s workspace variable and ojdeploy.

The %WORKSPACE% variable just represents the directory specified under ‘Remote FS root’ completed with your ‘Project name’.

So in my case C:\Hudson_Slave_Node\dummy-trunk

Secondly to get the Windows batch command right I read Steve Muench blog post ‘Online Help for ojdeploy Utility’ which brought me to the JDeveloper Help page ‘About Deploying from the Command Line’.

There you can find all the info you need to know about the ojdeploy utility.
You can find the profile parameter under ‘Application Properties’ > ‘Deploy’.

This is my batch code:
rem remove the ear from the previous build
del *.ear
rem build the ear with ojdeploy, using the dummy_application1 deployment profile
%OJDEPLOY% -profile dummy_application1 -workspace %WORKSPACE%\trunk\dummy.jws
rem keep the new ear in the workspace root and clean up the checkout
copy %WORKSPACE%\trunk\deploy\*.ear %WORKSPACE%
rmdir /S /Q %WORKSPACE%\trunk

Job’s done.

Next thing on my wish list now is automatic deployment of my ear file to a WLS via the usage of WLST.

Monday, 8 February 2010


Oh yeah another thing, since I’m also a steering member of BeJUG (Belgian Java User Group) allow me to do some advertising…

Since 2009, BeJUG has organized bi-weekly evening sessions (in Belgium, of course).

The first talks for our 2010 schedule are defined and can be found here
As you can see, the first one (on Google App Engine) is just two days away….

ADF Enterprise Methodology Group

Last December I visited the UKOUG conference in Birmingham. I focused my schedule on the ADF sessions and talks and saw some interesting stuff, but the highlight for me was the discovery of the ‘ADF Enterprise Methodology Group’ via a round table session I attended at the conference.

So what is the aim of this group? Let me try to situate them a bit for you….

On the one hand you have the JDeveloper and ADF forum, the blogs, OTN, and a lot of other websites which all help you to solve your technical questions and issues.

But my colleagues and I had some questions (more related to methodology and best practices) for which we didn’t find the answer, and for which we didn’t find a forum to ask them. And that’s where the ‘ADF Enterprise Methodology Group’ comes into the picture. There you can find very nice threads which try to solve all your other questions. Some examples:
  • Best Practice for Organizing Business Logic code for large applications
  • Back to the basic question: How big/ how small is my AM should be?
  • AM pools, connections pools & ADF scalability
  • Design for Performance or Tune After development?
  • ADF Coding Standards 2010
  • How many ADF library versions on one WLS installation?
  • ADF Applications Hardware Sizing?
  • ...

So if you’re dealing with the same ‘ADF methodology’ questions, the ADF Methodology Google Group is the place to look for answers…

Monday, 1 February 2010

First Oracle ADF 11g based product release

Since the beginning of last year we have been working on a brand new ADF-based product for ‘Case Management’, called (surprisingly :-) ) ‘Axi Case Management’.
On the 18th of January we finished our first release and installed it for the first time at a client.
The deadline for this release is mainly responsible for the small number of blog posts I made over the last months.
The great news is that our product runs very well in every respect (performance, reliability, ...).
I hope I can post the customer case of this product development on the Oracle ADF website soon. In the meantime you can find a whitepaper here.


Currently I’m also trying to run an ADF application on a JBoss server. No luck yet, but once I get this job done I promise to publish an overview of my work here.