Shared Talend Open Studio Job Repository with Subversion

Talend Open Studio is an open source Data Integration/ETL tool that lets you create complex jobs through an easy-to-use graphical interface built on the Eclipse platform. Its hundreds of prebuilt components for source and target systems allow for rapid development.

However, as great as Talend is, there is one caveat: the FREE version does not come with version control or a shared job repository integrated into the product.


This means that if one developer created a job design and a second developer needs to modify it in the future to handle new requirements, that second developer would not be able to, because the job lives on the other developer’s workstation.


  1. You will need a Subversion repository (SVN server).
  2. You will need a Subversion client, TortoiseSVN. This will allow you to commit code (or any other type of file you want to version) to the Subversion server and check code out of it. I recommend that everybody be on the same version; I use the latest version at the moment, which is 1.7.5.
  3. In order to develop new Talend jobs or maintain existing ones you will need Talend Open Studio. You can also download it from (no questions asked). We are using version 5.0.2. I would also recommend that everybody be on the same version.
  4. In order to run jobs from the command line (the way they are going to run once deployed to the server) you will need the Java JDK 1.5 or greater on your machine. Instructions on how to set up Java on your machine can be found here.


Here, I will show you how to integrate Talend with SVN so that an entire team can work on the same jobs code base.

  1. Checking Out a Talend Project from a Shared Job Repository
  2. I Cannot View all my Jobs, Contexts, and Metadata in my TOS!!!
  3. Creating a Shared Job Repository
  4. Checking In New Changes to a Shared Job Repository
  5. Updating your local workspace
  6. Resolving File Conflicts

Checking Out a Talend Project from a Shared Job Repository

  1. If you have TOS open, close it.
  2. Go to your Talend workspace. In my case: C:\Talend5.0.1\TOS_DI-Win32-r74687-V5.0.1\workspace
  3. Right click with your mouse->SVN Checkout…
  4. Put the SVN Server URL of your Repository:
  5. The Talend Project from the server should be in your workspace now:

  6. Open TOS. You should see the project listed as follows:
  7. All the Jobs associated with that Project should be visible. If not, read the next section of the tutorial.

I cannot view my Jobs, Contexts, and Metadata in my TOS!!!

After checking out an entire Project from Subversion or doing an SVN Update on an existing Project in your workspace it is possible that your GUI has not picked up the changes. You can do two things to take care of that:

  1. Do a Refresh of the Repository left panel
  2. Do an Import Items. From TOS, Right click on Job Designs->Import Items

Creating a Shared Job Repository

  1. Create a Talend Project. This Talend project will house multiple Jobs. (Remember not to confuse a Project with a Job.)
  2. Navigate to your Talend workspace. Right click on the project->TortoiseSVN->Import…

  3. The following prompt will come up. Enter your repository location and a message
  4. You should see the following message:

  5. Type your SVN repo URL in your browser and verify that the contents are there

Checking In New Changes to a Shared Job Repo

Rule of Thumb: ALWAYS, ALWAYS, ALWAYS before checking in anything, do an SVN Update first and resolve any existing conflicts. Read the next section for how to resolve conflicts.

If you do not have any conflicts to resolve, you are ready to commit your changes. Do as follows:

  1. Right click on the project folder on your C: drive->TortoiseSVN->Check For Modifications
    Note:  You can skip this step and go to Step 2.

    This will look for modifications locally.

  2. Right click on your project in your workspace->SVN Commit…

  3. Click ‘OK’. Now other developers should be able to do an SVN Update and work off of the latest changes.

Updating your local Workspace with latest stuff from the Job Repo

  1. Right click your Project folder in C:\ drive->SVN Update

    In this case there was nothing new on the server, so nothing was updated. If you got conflicts, please read the next section.

Resolving File Conflicts

When doing shared development with Version Control Systems (Subversion, Git, CVS, etc.) it can happen that another person has edited a common file that you have also edited. What we want here is to keep the other developer’s changes as well as ours, so we have to resolve the conflict. This can be done in 5 easy steps.

  1. Go to your Project folder (C:/path_to_your_talend_workspace_project_folder) and do Right Click/SVN Update. This will update any local files with newer ones from the server
  2. If it encounters a file that you have also edited, it will raise a conflict. YOU MUST RESOLVE IT!!
  3. Right Click on the Conflicted file and select Edit Conflicts.  The TortoiseMerge editor will come up:
  4. To merge files do as follows:
  5. Finally, you must mark the file as resolved

What if I want to accept their file and override mine?

Right click on the conflicted file->Resolve conflict using ‘theirs’

What if I want to keep my file and override theirs?

Right click on the conflicted file->Resolve conflict using ‘mine’

It is a little bit cumbersome, but once you get the hang of it, it is not that bad. It is unfortunate that they do not release a Talend Open Studio version with hooks to Subversion and Git right out of the box. I doubt that organizations buy their commercial product, Talend Integration Studio, based mostly on the Shared Repository.

How to Root your Kindle Fire and Install the Android Market

Here I document the steps I followed to root my Kindle Fire and to install the Android Market on it. This is not my own work, but a compilation of others’ work that has worked out well for me.

The high level steps are as follows:

  1. Get a Micro-USB to USB cable.
  2. Download and Install the JDK (Java 6 or later recommended).
  3. Download and Install the Android SDK.
  4. Install Kindle USB Drivers.
  5. Root your Kindle Fire with SuperOneClick.
  6. Install Android Market.
  7. Install OTA RootKeeper app from Android Market.

Get a Micro-USB to USB cable

Amazon has designed the Kindle Fire to be a media consumption device only, so they did not include a Micro-USB to USB cable in the box; they just included a power adapter. This means they do not want you hooking the device up to your PC. They are selling this mighty tablet at a loss, hoping to make the money back on all the paid content available in the Amazon Cloud Store.

Anyways, check the link below for the cable that worked for me as the first one mentioned there DID NOT WORK.

Download and Install the JDK

To install the JDK (Java Development Kit) you can follow the top portion of this other post here

Download and Install the Android SDK

To install the Android SDK, please follow this tutorial

Install Kindle Fire’s adb USB driver

This step is necessary so that you can plug your Kindle Fire into your PC through a Micro-USB to USB cable and have your PC recognize it. This way you will be able to drag and drop files onto your Kindle Fire. Note: this has nothing to do with rooting your device. All this does is give you the ability to add content to your device, such as music, photos, or PDFs.

Please follow the instructions here

If you do not wish to root your device, stop here.

Root your Kindle Fire with SuperOneClick

  1. Download SuperOneClick from Shortfuse. I used version 2.3.1.
  2. Plug your Kindle Fire into your computer. You should see a message stating that you are ready to transfer files to the device.
  3. Run SuperOneClick executable and click the Root button.
  4. Let SuperOneClick do the work. Your device should be rooted.

Install the Android Market

I have found many threads online on how to install the Android Market, but the only one that worked for me was the one found in this YouTube video. After you are done with that, you should be able to download apps from the Android Market.

Install OTA RootKeeper app from Android Market

Once you have rooted your device and you are logged in as the root user, you will not be able to download movies from the Amazon store. The “Play Now” button will be grayed out. There is a workaround.

  1. Download OTA Rootkeeper app from Android Market.
  2. Temporarily un-root your device.
  3. Force stop your Amazon Video app if running.
  4. Access an Amazon video again; you should be able to watch it now.

It is explained in this Youtube Video

Have Fun!!!

Add Oracle JDBC drivers to your local maven repository

The steps to add the Oracle JDBC Drivers to your local Maven repository can be found here:

Here I am recording them for me just in case that entry goes away:

Step 1

Find the Oracle JDBC jar in your Oracle client software. If you do not have the Oracle Client, install it.

Step 2

cd to the directory where the oracle client jar is located. In my case: C:\ORA10gClient\jdbc\lib

Step 3

Install the jar by typing the following Maven command (the groupId and version shown are placeholders of your choosing; they just have to match the dependency you declare in your pom.xml):

mvn install:install-file -Dfile=ojdbc14_g.jar -DgroupId=com.oracle \
-DartifactId=oracle -Dversion=10.2.0 -Dpackaging=jar -DgeneratePom=true

Step 4

Add the following dependency to your project’s pom.xml:
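The exact coordinates below are an assumption; use whatever groupId, artifactId, and version you passed to install:install-file:

```xml
<!-- groupId/artifactId/version must match the install:install-file command -->
<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>oracle</artifactId>
    <version>10.2.0</version>
</dependency>
```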

Achieving Transparency with JNDI

Enterprise Web Applications are distributed systems. Typically, enterprise solutions are composed of many computers with different operating systems, one or more data sources (often from different vendors), and even different architectures (relational, hierarchical, dimensional, ISAM, etc.). One important thing to remember is that to the end user it looks like one system.

The advantages of distributed systems are the different types of Transparency:

  1. How enterprise resources are accessed should be hidden from the user (Access Transparency).
  2. The communication protocols the web application uses (HTTP, SOAP, JMS, JNDI, etc.) to interact with other services or applications should be hidden from the user.
  3. The user should not be able to tell that different resources are scattered across the network, nor where they are located (Location Transparency).
  4. The user should not hold any critical information that might compromise the security/integrity of a subsystem.
  5. Distributed applications should be easy to extend and repair, and it should even be possible to move components around (Migration Transparency).

In this post I am going to talk about how to achieve Access, Location and Migration Transparency with JNDI (Java Naming Directory Interface).

Problem: Have you ever seen this at your Enterprise?

Oracle Forms & Reports Login

This is a typical login screen for enterprise applications that use Oracle Forms & Reports technology. Even though this is old technology, you would be surprised how many enterprises run business-critical applications on this platform; there is nothing really wrong with that, it is just that there are better enterprise application development platforms on the market than Oracle Forms and its successor APEX, for example J2EE (nowadays most commonly known as Java EE) and .NET.

The main disadvantages are as follows:

  1. Oracle Forms only works with the Oracle RDBMS (with PL/SQL as the language).
  2. Migrating to a new database would require rewriting your entire business logic in the platform of choice (Java, .NET, T-SQL, SQL PL, Postgres PL, etc.).
  3. It cannot benefit from the current agile software trends of Test Driven Development and Continuous Integration (at least to my knowledge).

However, the biggest problem (the one we are going after here, per the topic of this post and the screenshot above) is as follows:

  1. The database text field.
  2. The Database Administrator (DBA) will have to maintain as many database user accounts as there are users of the application.

What problems do you see?

  • Since when should an end user know the database name to connect to? Access Transparency is violated here.
  • If the application has 100 users, for example, will the DBAs have to maintain 100 user accounts in the QA and PROD environments? That sounds like a lot of tedious work and an opportunity for mistakes.
  • What happens if you encounter a tech-savvy user who manages to create an ODBC connection to your production database? Now you have a big security problem.
  • Did you enforce that all UPDATE and DELETE logic goes through stored procedures? Have you given the database user accounts rights to execute stored procedures only?
  • If not, the application’s user accounts now have direct rights to execute UPDATEs and DELETEs on certain tables. Again, a big security problem.

A lot of these problems can be easily solved in JavaEE by using JNDI.

Solution: Setup a JNDI Connection Pool with GlassfishV3 App Server

To illustrate the solution I have installed the following components:

  1. GlassFish Server Open Source Edition 3.1: A full blown Java EE 6 compliant application server.
  2. PostgreSQL Database Server 9.0.4: The world’s best open source database.

Note: I have chosen these two components but any other RDBMS (MySQL, MSSQLServer, Oracle, DB2, etc..) and any of these app servers (JBoss, Oracle WebLogic, Websphere) would have a similar way of setting up a JNDI Connection Pool.

Do as follows:

  1. Download the appropriate PostgreSQL JDBC driver jar and place it in C:\glassfish3\glassfish\lib.
  2. Start the GlassFish application server by executing startserv.bat, located at C:\glassfish3\glassfish\bin (or wherever you decided to install it).
  3. Open up a browser tab and go to the GlassFish Administration Console. http://localhost:4848/
  4. In the left navigation menu click on the New… button under Resources/JDBC/JDBC Connection Pools .
  5. Enter a Pool Name and select the appropriate database drivers as follows (Click on the image to enlarge):

    Create a Connection Pool
  6. Click Next and fill Step two as appropriate based on your settings as follows (Click on the images to enlarge):

    Insert Your Database Vendor Settings

    Insert Your Database Vendor Settings

  7. Click the Ping button. You should see a message as follows:
  8. Click the Finish button.
  9. We have now created the Connection Pool successfully. Our next step is to make it available through a logical name with JNDI.
  10. In the left navigation menu click on the New… button under Resources/JDBC/JDBC Resources.
  11. Enter a JNDI Name and select the Connection Pool that we have just created and Click OK (Click on the image to enlarge):
    Bind JNDI name to DataSource Connection Pool


With this new setup we have the advantages as follows:

  • Access Transparency: We give no clue as to how the resource is accessed. Now nobody outside knows the database name.
  • Location Transparency: Only GlassFish administrators and the DBA(s) know where the resource is actually located. Even the developers need not know where the database really is (in practice they usually do), but nothing in the application code or configuration files will unveil it.
  • Migration Transparency: The database can now be moved from one server to another, the DBA might change the default port, or the user name and password might need to be reset. None of these changes affects the application at all, because they can be managed by an administrator through the GlassFish Admin Console.
  • LDAP Authentication: With this setup we can now delegate the authentication piece of the application to an LDAP service, which allows for Single Sign-On across the enterprise.
  • One Database User Account per Connection Pool: We have reduced all the database user accounts to one and only one.
  • Same JNDI name, multiple environments: In an enterprise you will most likely have a GlassFish QA instance and a GlassFish PROD instance. If you set up your JNDI name to be the same in both (jdbc/grailsRocksApp in our example) you can deploy interchangeably without having to change anything anywhere.



Configuring your Spring JDBC application to use JNDI

If you are using Spring JDBC for persistence, this is what you have to do to obtain a DataSource through JNDI. Usually, in Spring web applications, you split your context XML files into several files as follows:

  • appName-servlet.xml
  • appName-datasource.xml
  • appName-services.xml
  • appName-security.xml

In the appName-datasource.xml put the following snippet:
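A sketch of such a snippet, assuming the Spring jee namespace is declared in the file and using the JNDI name bound in the GlassFish console above:

```xml
<!-- Assumes the jee namespace is declared on the beans element:
     xmlns:jee="http://www.springframework.org/schema/jee" -->
<!-- resource-ref="true" prepends java:comp/env/ to the lookup -->
<jee:jndi-lookup id="dataSource"
                 jndi-name="jdbc/grailsRocksApp"
                 resource-ref="true"/>
```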

Configuring your Spring + Hibernate app to use JNDI

If you are using Hibernate in your Spring application to handle persistence, you would do as follows:


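A sketch of the wiring, hedged: the session factory class shown is the Hibernate 3 variant, and it simply references the JNDI-obtained DataSource, so no database specifics leak into the application:

```xml
<jee:jndi-lookup id="dataSource"
                 jndi-name="jdbc/grailsRocksApp"
                 resource-ref="true"/>

<!-- Hibernate 3 flavor; the class name differs for other Hibernate versions -->
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <!-- mappingResources / hibernateProperties go here -->
</bean>
```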

Configuring your Grails application to use JNDI

If you are developing in Grails and want to use JNDI for connection pooling, you would do as follows in the DataSource.groovy:

dataSource {
    pooled = true
}

hibernate {
    // hibernate settings (cache configuration, etc.) go here
}

// environment specific settings
environments {

    production {
        dataSource {
            // Use the JNDI name bound in GlassFish instead of a driver/url/username/password
            jndiName = "jdbc/grailsRocksApp"
        }
    }
}

Aside from the advantages mentioned above, we also get the following:

  • Connection pooling management and distributed transactions: A plain, non-pooled DataSource (such as Spring’s DriverManagerDataSource) is acceptable for very small applications, but it hands out a new connection for every request, which hurts performance under concurrent load. In a concurrent environment you are better off using JNDI, as the application server manages the connection pool (and can participate in distributed transactions) for you.

How to use “Factory Method”

The Factory Method design pattern is a creational design pattern. This means that its main objective is to create objects. The official definition by GOF is as follows:

Define an interface for creating an object, but let the subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

Note 1: By “decide” in the definition above it does not mean that the subclass magically decides which object to create (at least not in a parameterized version of the Factory Method). The decision actually happens in the client; however, the work of creating the appropriate object that the client is going to work with is done by the subclass that overrides the Factory Method.

Note 2: This pattern relies on inheritance for creating objects. The current trend is to use the Abstract Factory Pattern, which relies on composition.

Note 3: By “client” I mean the calling code, the final user of the API that makes up the subsystem.

The Problem

Consider a simple application that deals with vehicles with the following hierarchy in the mind map below:

Factory Method

Question: What would happen if we want to create a particular type of vehicle and we do not use any kind of Factory Pattern?

We would be giving the client the responsibility of having to know how to instantiate a particular type of Vehicle. As far as the client is concerned, it just wants a Vehicle; it is then going to do whatever it wants with it.

Question: What would happen if we add many more types (Car, Truck, Bike) and many more subtypes (Van, Sedan …)?

In this example it may not be obvious, as we only have three types of Vehicles (Car, Truck, and Bike) and two specific implementations under each. But if we had ten different cars, trucks, and bikes, the amount of if/else logic you would need in the client in order to instantiate the right type would be quite tedious. This is considered boilerplate code, and at the same time it is difficult to maintain.

Question: What if you want to add a new type of Vehicle (e.g a Boat)?

If you do not use the Factory Method or another abstraction layer, you would have to change the client to support that.
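To make that concrete, here is a hypothetical sketch (the class and stub names are mine, not from the mind map) of the creation logic a client ends up with when no factory is used:

```java
// Hypothetical sketch: what the CLIENT must do without a factory.
// Every new subtype (Van, Pickup, Boat, ...) forces another branch here,
// repeated in every place that creates vehicles.
abstract class Vehicle {}              // stubs for illustration only
class Sedan extends Vehicle {}
class Van extends Vehicle {}
class Pickup extends Vehicle {}

public class NaiveClient {

    static Vehicle create(String type) {
        if ("Sedan".equals(type)) {
            return new Sedan();
        } else if ("Van".equals(type)) {
            return new Van();
        } else if ("Pickup".equals(type)) {
            return new Pickup();
        }
        // ...and one more branch for every future type, e.g. Boat
        throw new IllegalArgumentException("Unknown type: " + type);
    }

    public static void main(String[] args) {
        Vehicle vehicle = create("Sedan");
        System.out.println(vehicle.getClass().getSimpleName()); // prints "Sedan"
    }
}
```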


The Factory Method

Since the definition of the Factory Method states that the purpose is to defer instantiation of an object in the family to a subclass, we have to make the class that will contain the factory method abstract. Also, to ensure that subclasses really are the ones responsible for the creation of a particular object (in our case, a Vehicle), we have to provide an abstract method; this way subclasses must provide the implementation. This method is what is popularly known as the factory method. It usually takes a name like createXYZ or newXYZ (that seems to be the industry standard, from what I have seen). It looks as follows:

package factorymethod;

public abstract class VehicleFactory {

	//Factory method
	abstract Vehicle newVehicle(String type);
}

The Common Interface

The client should work with a known interface; this will allow us to change the subsystem transparently. It will look as follows:

package factorymethod;

import java.util.List;

public abstract class Vehicle {

    public abstract List<String> getFeatures();

    public abstract List<String> getSpecs();
}

The Factory

A CarFactory is responsible for knowing all the types that fall under the category of Car. Do you see how much cleaner it is to have the branching logic in this factory rather than in a big method somewhere with all the types in it? It would look as follows:

package factorymethod;

public class CarFactory extends VehicleFactory {

	Vehicle newVehicle(String type) {
		if ("Sedan".equals(type)) {
			return new Sedan();
		} else if ("Van".equals(type)) {
			return new Van();
		}
		throw new IllegalArgumentException("Unknown car type: " + type);
	}
}

The Product

This is what the client asked for:

package factorymethod;

import java.util.List;

public class Sedan extends Vehicle {

    //Put some specific properties of a SEDAN here

    //Put some other specific methods of a SEDAN here (to differentiate from Van, for example)

    public List<String> getFeatures() {
        //Put code to query your Car database here
        return null;
    }

    public List<String> getSpecs() {
        //Put code to query your Car database here
        return null;
    }
}

The Client

Notice how the client uses the appropriate factory to get the specific type of product. If the client wanted a Bike, for example, the Car and Truck factories would be ignored. Also notice that, since the client gets a generic Vehicle product, you can add many more specific products transparently without affecting the client code. It looks as follows (you could improve the client using reflection, as done at the end of this other post):

package factorymethod;

public class Starter {

	public static void main(String[] args) {

		VehicleFactory vehicleFactory = new CarFactory();
		Vehicle vehicle = vehicleFactory.newVehicle("Sedan");
	}
}

Now you could repeat what I have done above for the BikeFactory and TruckFactory and their corresponding products, as follows (Click on the image to enlarge):

Factory Method UML

Important: Your Specific “Products” should really be different (algorithm or logic wise)

While studying this pattern carefully in Head First Design Patterns (p. 129), something bothered me about their Pizzas example. If you look at all their specific Pizzas, there isn’t really any difference between the products, only String values!!! Yes, that is different, but what good is that in real life? If all of that were read from a database, for example, you could get away with one class only. There was no need for 8 different Pizza types.

So my point is to make sure that you really have a different algorithm or logic in each specific product. Otherwise you will have violated DRY (Don’t Repeat Yourself) in your application. (Unless I am totally missing something here :) )

In our example we could say that we originally started our application supporting Cars only (Sedans and Vans), but then our application became popular and we decided to buy two existing databases that had support for Trucks (Pickup and Commercial) and Bikes (Scooter and DirtBike). Then it is safe to say that we need all these concrete objects, as each getFeatures() and getSpecs() would contain a specific query targeting that particular data model and RDBMS.


Similar to the Adapter pattern, where the main goal was to decouple the client from the actual implementation by giving it a known interface, the Factory Method pattern gives us that kind of flexibility too. You should use a factory method:

  1. When you want to hide object creation knowledge from the client.
  2. When you want the client to always get a known interface. In this case, it will be an abstract object.
  3. When you want to add multiple implementations transparently in the future without affecting the client’s API.
  4. To avoid branching “spaghetti code” when you have multiple variations of an object and would otherwise solve it with a lot of “if/elses” in a single location.
  5. When you have multiple variations of a particular object and you do not have a default one.
  6. It is also very good for illustrating how polymorphism works.


The disadvantages are as follows:

  1. It depends on inheritance, so changes to the superclass can break existing code in subclasses. Inheritance usually requires thorough testing, which is why Joshua Bloch in Effective Java recommends favoring composition over inheritance.
  2. Joshua Bloch also recommends using inheritance only when classes are designed for inheritance. (The factory method is designed for inheritance, so this is not really a disadvantage.)
  3. This pattern is only good for a hierarchy of about three levels. I have tried solving deeper hierarchies with it and ran into many more inconveniences than advantages (I had to create intermediate empty classes that added no information). For such situations, you should look into the Builder pattern.

Binding jQuery UI Datepicker to Grails Domain

In this post I am going to show you how you can write a custom tag to use the jQuery UI datepicker instead of the <g:datePicker> tag provided by Grails out of the box.

How <g:datePicker> tag works

When you have a Date field in your Grails domain class and have generated the default views for that domain, you will notice that the date field(s) default to three drop-down boxes (year, month, day). Upon submit, four request parameters will be sent as part of the <g:datePicker> tag. Say that our date field in our domain is declared as orderDate. The request parameters associated with that request follow this naming convention: orderDate (a marker field with the value "struct"), orderDate_day, orderDate_month, and orderDate_year.

All these parameters are necessary for Grails to do the appropriate binding to the orderDate field and to successfully save to the database.

What if I do not want to use Grails default date picker?

While this is good, since Grails provides it for free without you writing a single line of code, it is not very convenient for the end user to have to select three drop-downs. I use the jQuery UI framework a lot in my Grails and Spring apps for widgets and DOM scripting. Do as follows:

  1. Download your favorite jQuery UI theme.
  2. Unpack it and look at the demos (especially the path structure for js, css, and themes).
  3. Bring that into your Grails web-app folder:
    1. Copy the themes folder from the download straight into your web-app folder.
    2. Create a jquery folder under js.
    3. Put jquery-1.4.4.min.js under jquery folder created in previous step.
    4. Copy the ui folder from the download to the jquery folder created in step 2.

That should give you the proper setup for you to start adding jQuery and jQuery UI components to your application.

Next I will show how to bind a jQuery date picker to a Grails date field in a domain class in two different ways. They are as follows:

  1. Without writing a Grails custom tag
  2. Writing a Grails custom tag

Using the jQuery date picker without Grails custom tags

Step 1: Enter the following in the head of your create.gsp:

<head> code
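The head section above is shown as a screenshot in the original. A sketch of what it might contain, assuming the folder layout from the setup steps above (the theme path and file names are assumptions):

```html
<link rel="stylesheet" href="${resource(dir: 'themes/base', file: 'jquery.ui.all.css')}"/>
<g:javascript src="jquery/jquery-1.4.4.min.js"/>
<g:javascript src="jquery/ui/jquery-ui.min.js"/>
<g:javascript src="coche.js"/>
```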

Step 2: Replace the <g:datePicker> tag with:
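The replacement markup is shown as a screenshot in the original. A sketch of what it likely contains, matching the element ids used by coche.js in Step 3 and the hidden-field naming convention Grails expects (the "struct" marker tells Grails to assemble orderDate from the _day/_month/_year parameters):

```html
<!-- Visible text field that the jQuery datepicker attaches to -->
<input type="text" id="orderDate"/>
<!-- Hidden fields that Grails binds to the orderDate Date property -->
<input type="hidden" name="orderDate" value="struct"/>
<input type="hidden" id="orderDate_day" name="orderDate_day"/>
<input type="hidden" id="orderDate_month" name="orderDate_month"/>
<input type="hidden" id="orderDate_year" name="orderDate_year"/>
```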


Step 3: Create the following JavaScript file, coche.js. It is responsible for populating the hidden fields once a date has been selected. The code is as follows:

$(document).ready(function() {

  $("#orderDate").datepicker({
    onClose: function(dateText, inst) {
      $("#orderDate_month").attr("value", new Date(dateText).getMonth() + 1);
      $("#orderDate_day").attr("value", new Date(dateText).getDate());
      $("#orderDate_year").attr("value", new Date(dateText).getFullYear());
    }
  });
});

Step 4: Run your Grails application, go to create.gsp, select a date through the calendar, and click Create. Your custom orderDate field should be bound to the domain’s orderDate field just as if you had used the default <g:datePicker> Grails tag.

A jQuery Date Picker Grails Custom Tag

There is nothing wrong with the first approach, but it has a couple of disadvantages:

  1. For every jQuery date picker that I want to add to the form I have to remember to add three extra hidden input fields with the naming convention dateDomainFieldName_year, dateDomainFieldName_month, dateDomainFieldName_day.
  2. For every jQuery date picker field that I have in a single page I will have to add about 8 lines of JavaScript to populate the hidden fields once a date has been selected.

So my goal is to write a Grails custom tag that is responsible for doing those two things. Additionally, it will work for as many date fields as you want in your form.

Step 1: Create a Grails tag lib as follows:

Grails tag for jQuery date picker ui widget

Step 2: Replace your orderDate input and the associated hidden fields with this:

Step 3: Delete the coche.js

Step 4: Remove the coche.js reference from the head tag in your create.gsp

You should be able to add many jQuery calendars without having to do DOM scripting to populate hidden fields, as the custom tag is already doing it for you.


The Grails tag for the jQuery UI date picker widget has the following advantages:

  1. It creates three hidden input fields: dateField_day, dateField_month, dateField_year.
  2. It is responsible for populating these hidden input fields when a date has been selected from the calendar.
  3. It supports having multiple date fields in the same form without any conflict.

The Code

The code can be found at this GitHub repository.

How to use the Adapter pattern

The definition of the Adapter design pattern according to GOF (Gang Of Four) in Design Patterns: Elements of Reusable Object-Oriented Software is as follows:

Convert the interface of a class into another interface clients expect. Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces. (p. 139)

To illustrate it we are going to start with a code base and then try to integrate it to an “external” code base using the Adapter design pattern.

The adapter pattern comes in two flavors:

  1. Object Adapter: uses composition and interfaces to benefit from polymorphism; this is the flavor to use if your language of implementation is Java.
  2. Class Adapter: relies on multiple inheritance, so it can only be used in languages that support it; this technique can be used if your language of implementation is C++.

In this example we are going to look at the Object adapter implementation.

Note 1: I interpret that “clients” in the definition by Gang of Four above means the calling code. In some instances “client” might also refer to the starting code base.

Note 2: “Interface” in the definition by Gang of Four above means properties and methods of a class. But it can also be a proper Java interface.


Conceptually both systems that you are trying to integrate must have a coherent conceptual correspondence.


A real-life example of adapters I have faced is when I purchased Europe-based electronic devices that I later had to use in the US. In order to power them up, or charge them, depending on the device, I had to get a Targus Travel AC Power Adapter. Recall that the adapter worked in that case because, even though the interfaces were different, the problem domain was the same.

Let’s say that we have a program that works with US based heating devices; its reputation is so good that it has become a market leader, so now we want to go global and sell it in Europe and other parts of the world.

Starting Code Base

Because we are good OO coders, we code to interfaces. Here is our interface for US heating devices:

package adapter;

public interface USHeatingDevice {

     public int voltage();
     public int frequency();
}

Here is an implementation of our interface:

package adapter;

public class TowerHeater implements USHeatingDevice {

    public int voltage() {
        return 120;
    }

    public int frequency() {
        return 60;
    }
}
Here is what I interpret as the client per Note 1 above:

package adapter;

public class Starter {

       public static void main(String[] args){

          USHeatingDevice usHeatingDevice = new TowerHeater();
          System.out.println(usHeatingDevice.voltage() + " V");
          System.out.println(usHeatingDevice.frequency() + " Hz");
       }
}

Running the program produces the following output:
120 V
60 Hz

Now we want/need to expand our program to work with foreign heating devices manufactured in Europe, South America, Asia etc. The problem is that their voltage and frequency are not the same, so our program will not work. Imagine that there is a vendor that has already developed an interface that can be hooked into our program so that it can work internationally. The code is illustrated below.

Vendor Code Base

Because the vendor also uses good OO techniques, here is their interface for non US heating devices:

package adapter.vendor;

public interface NonUSHeatingDevice {

    public int tension();

    public int hertz();
}

Here is their European implementation:

package adapter.vendor;

public class EUTowerHeater implements NonUSHeatingDevice {

    public int tension() {
        return 230;
    }

    public int hertz() {
        return 50;
    }
}

Notice two things here:

  1. The voltage and frequency have different values.
  2. The interface methods developed by the vendor have different names.


In order to run our program with the non-US implementation we would have to find all the places where we call voltage() and frequency() and add some sort of if/else logic to call the tension() and hertz() methods when using the program in Europe.


Well, that could be quite tedious, couldn’t it? Here is where the design pattern comes into place; the goal is to write an adapter that will implement our existing interface (USHeatingDevice) but in reality will be executing an implementation of the NonUSHeatingDevice interface. This technique will allow us to keep the voltage() and frequency() calls throughout our code, and we will only have to make a small change in the calling code, the client.

Note: Some authors state that the adapter pattern allows you to avoid any code changes in the “client” once you have created the adapter class; see (p. 237) of Head First Design Patterns. I completely disagree with that, and you will see why in the following snippets.

First let’s create the adapter:

package adapter;

import adapter.vendor.NonUSHeatingDevice;

public class HeaterAdapter implements USHeatingDevice {

     private NonUSHeatingDevice europeanHeatingDevice;

     public HeaterAdapter(NonUSHeatingDevice europeanHeatingDevice){
         this.europeanHeatingDevice = europeanHeatingDevice;
     }

     public int voltage() {
         return europeanHeatingDevice.tension();
     }

     public int frequency() {
         return europeanHeatingDevice.hertz();
     }
}

We can observe the following:

  1. The Adapter class implements our known interface (USHeatingDevice), the one we are used to working with (because it is the one that the client will continue to work with).
  2. The Adapter class has a data member of the type that we are adapting to (NonUSHeatingDevice).
  3. The implemented methods call the matching methods on the non-US heating device interface.

As you can see, we are dressing up the HeaterAdapter by making it look like a US heating device while it really behaves like a non-US heating device.

Now let’s make the changes to the client to work with the HeaterAdapter that we have just created:

package adapter;

import adapter.vendor.EUTowerHeater;
import adapter.vendor.NonUSHeatingDevice;

public class Starter {

       public static void main(String[] args){

          USHeatingDevice usHeatingDevice = null;
          NonUSHeatingDevice euTowerHeater = null;

          if(args[0].equals("US")){

            usHeatingDevice = new TowerHeater();

          }else if(args[0].equals("EU")){

            euTowerHeater = new EUTowerHeater();
            usHeatingDevice = new HeaterAdapter(euTowerHeater);
          }

          System.out.println(usHeatingDevice.voltage() + " V");
          System.out.println(usHeatingDevice.frequency() + " Hz");
       }
}


Notice the changes in the client from the original: inside the EU branch we go through the adapter to instantiate a USHeatingDevice, which will behave as a non-US heating device. As the final two println calls show, the rest of the program is still working with the same interface it was using before the vendor code was integrated into our application.

If we were to run this program with the EU command line argument we would get the result as follows:
230 V
50 Hz

Adapter Pattern as an abstraction layer

Another advantage of this pattern is that many implementations of the adaptee (the vendor interface) could be added by the vendor and we would not have to touch a line of code in our adapter, thanks to polymorphism.

Let’s say that the vendor adds implementations of their non-US heating device interface for countries such as Jamaica, Libya, and Colombia, which have different electric power specifications.

package adapter.vendor;

public class JamaicaTowerHeater implements NonUSHeatingDevice {

     public int tension() {
         return 110;
     }

     public int hertz() {
         return 50;
     }
}

The Colombian implementation…

package adapter.vendor;

public class ColombianTowerHeater implements NonUSHeatingDevice {

    public int tension() {
        return 110;
    }

    public int hertz() {
        return 60;
    }
}

…and the Libya implementation

package adapter.vendor;

public class LibyaTowerHeater implements NonUSHeatingDevice {

    public int tension() {
        return 127;
    }

    public int hertz() {
        return 50;
    }
}

Let’s see the changes we have to make to the client, without touching the adapter at all.

package adapter;

import adapter.vendor.*;

public class Starter {

     public static void main(String[] args) {

        if (args.length > 0) {

            USHeatingDevice usHeatingDevice = null;

            if (args[0].equals("US")) {

                usHeatingDevice = new TowerHeater();

            } else if (args[0].equals("EU")) {

                NonUSHeatingDevice euTowerHeater = new EUTowerHeater();
                usHeatingDevice = new HeaterAdapter(euTowerHeater);

            } else if (args[0].equals("JM")) {

                NonUSHeatingDevice jamaicaTowerHeater = new JamaicaTowerHeater();
                usHeatingDevice = new HeaterAdapter(jamaicaTowerHeater);

            } else if (args[0].equals("CO")) {

                NonUSHeatingDevice colombianTowerHeater = new ColombianTowerHeater();
                usHeatingDevice = new HeaterAdapter(colombianTowerHeater);

            } else if (args[0].equals("LY")) {

                NonUSHeatingDevice libyaTowerHeater = new LibyaTowerHeater();
                usHeatingDevice = new HeaterAdapter(libyaTowerHeater);

            } else {
                System.out.println("You must pass an appropriate region parameter");
                return;
            }

            System.out.println(usHeatingDevice.voltage() + " V");
            System.out.println(usHeatingDevice.frequency() + " Hz");
        }
     }
}



As you can see, supporting a new implementation is just a matter of adding another else-if branch like the ones above.

The UML diagram below summarizes our example:

Adapter Pattern

Adding flexibility to the Adapter Pattern

Thanks to Lariza Saenz, a colleague from the JavaHispano user group, for pointing this out in the post comments. Let’s suppose that you are not interested in implementing all the methods of the interface. You could do it as follows:

  1. Implement the method in the HeaterAdapter class and throw an UnsupportedOperationException.
  2. Or create an abstract class AbstractHeaterAdapter.

Let’s illustrate step #2. The AbstractHeaterAdapter class would look as follows:

package adapter;

public abstract class AbstractHeaterAdapter implements USHeatingDevice {

	public int voltage() {
		// Default stub; override only if needed
		return 0;
	}

	public int frequency() {
		// Default stub; override only if needed
		return 0;
	}
}


The implementing client would look as follows:

package adapter;

public class Starter2 {

	public static void main(String[] args) {

		USHeatingDevice usHeatingDevice = new AbstractHeaterAdapter(){

			public int voltage() {
				return 125;
			}
		};

		System.out.println(usHeatingDevice.voltage() + " V");
	}
}



A real-world example of this technique can be found in the java.awt.event package. Every listener interface (e.g. MouseListener) has its corresponding adapter (MouseAdapter). MouseAdapter is an abstract class that implements the MouseListener interface. So when you want to attach a mouseClicked event to a button, you can do it either through a MouseListener or through a MouseAdapter. If you do it through the MouseListener you will have to implement all the methods defined in the interface (mouseClicked, mousePressed, mouseReleased, mouseEntered, mouseExited). But what happens if you are only interested in one of the methods? Then you would use a MouseAdapter and provide implementations only for the method(s) you are interested in, similar to the example above.
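As a small sketch of that idea (the class name and the click counter are made up for illustration), overriding only mouseClicked through a MouseAdapter looks like this:

```java
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

public class ClickOnly {

    public static int clicks = 0;

    // MouseAdapter supplies empty implementations of the whole MouseListener
    // contract; we override only the one method we care about.
    public static final MouseAdapter listener = new MouseAdapter() {
        @Override
        public void mouseClicked(MouseEvent e) {
            clicks++;
        }
    };

    public static void main(String[] args) {
        // In real code you would do: button.addMouseListener(listener);
        // here we invoke the callback directly just to show it in action.
        listener.mouseClicked(null);
        System.out.println("clicks = " + clicks);
    }
}
```

In a real GUI the toolkit would invoke mouseClicked for you; the point is that the other four listener methods never had to be written.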

Bonus: Improving the client

Bladimir Rondon, another colleague from the JavaHispano user group, has suggested using a ResourceBundle and reflection to clean up the ugly if/else code in the client. The main advantages of using a resource bundle are as follows:

  1. You will be able to add implementations without any client code changes (except for the first time, when you add support for the NonUSHeatingDevice).
  2. You will only have to add an entry to the properties file for each new implementation.

The implementations.properties file (the base name matches the getBundle call below) maps each region code to a fully qualified class name and will look as follows:

US=adapter.TowerHeater
EU=adapter.vendor.EUTowerHeater
JM=adapter.vendor.JamaicaTowerHeater
CO=adapter.vendor.ColombianTowerHeater
LY=adapter.vendor.LibyaTowerHeater

The updated Starter class will look as follows:

package adapter;

import java.util.ResourceBundle;

import adapter.vendor.NonUSHeatingDevice;

public class Starter {

	public static void main(String[] args) {

		if (args.length > 0) {

			ResourceBundle resBun = ResourceBundle.getBundle("implementations");

			USHeatingDevice heatingDevice = null;

			//If creation of a USHeatingDevice fails, then try to create a NonUSHeatingDevice
			try {
				heatingDevice = (USHeatingDevice) Class.forName(resBun.getString(args[0])).newInstance();

			} catch (Exception e){

				NonUSHeatingDevice foreignDevice = null;
				try {
					foreignDevice = (NonUSHeatingDevice) Class.forName(resBun.getString(args[0])).newInstance();
					heatingDevice = new HeaterAdapter(foreignDevice);
				} catch (Exception z){
					System.out.println("Check your properties file");
					return;
				}
			}

			System.out.println(heatingDevice.voltage() + " V");
			System.out.println(heatingDevice.frequency() + " Hz");

		} else {
			System.out.println("You must pass an appropriate region parameter");
		}
	}
}
Now you could apply the Facade pattern to move all the ugly try/catch out of the main class and into a static method in another class, to improve this further.
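As a sketch of that facade idea (the class and method names are my own, and a Map stands in for the ResourceBundle lookup so the example is self-contained), the client would simply ask for a device by region and never see the instantiation details:

```java
import java.util.Map;

public class DeviceFacadeDemo {

    interface USHeatingDevice { int voltage(); int frequency(); }
    interface NonUSHeatingDevice { int tension(); int hertz(); }

    static class TowerHeater implements USHeatingDevice {
        public int voltage() { return 120; }
        public int frequency() { return 60; }
    }

    static class EUTowerHeater implements NonUSHeatingDevice {
        public int tension() { return 230; }
        public int hertz() { return 50; }
    }

    static class HeaterAdapter implements USHeatingDevice {
        private final NonUSHeatingDevice device;
        HeaterAdapter(NonUSHeatingDevice device) { this.device = device; }
        public int voltage() { return device.tension(); }
        public int frequency() { return device.hertz(); }
    }

    // Facade: hides the instantiation, adapter wrapping, and error handling
    // that would otherwise clutter the client's main method.
    public static USHeatingDevice create(String region) {
        Map<String, Object> registry = Map.of(
                "US", new TowerHeater(),
                "EU", new EUTowerHeater());
        Object impl = registry.get(region);
        if (impl == null) {
            throw new IllegalArgumentException("You must pass an appropriate region parameter");
        }
        return (impl instanceof USHeatingDevice)
                ? (USHeatingDevice) impl
                : new HeaterAdapter((NonUSHeatingDevice) impl);
    }

    public static void main(String[] args) {
        USHeatingDevice device = create("EU");
        System.out.println(device.voltage() + " V");
        System.out.println(device.frequency() + " Hz");
    }
}
```

In the reflection-based version, the body of create would hold the ResourceBundle lookup and the nested try/catch instead of the Map.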


The adapter pattern can be used in situations as follows:

  1. To match methods of a new interface that has similar behavior but different method names.
  2. To allow one and only one adapter to work with many different implementations of the Adaptee.
  3. To provide custom behavior for a desired method(s) of the interface without having to implement all methods of the contract.

It is also good for:

  1. Illustrating the power of polymorphism.
  2. Promoting the good OO technique of coding to interfaces.

Database Inheritance : Subtyping

If you are familiar with any object-oriented programming language, the term inheritance may not be new to you. In OO languages such as Java or C++ you basically put all the common attributes and behavior in a parent class and then provide specific attribute and behavior implementations in each subclass. This feature is also supported by all major DBMSs; it is called subtyping.

Some examples of subtyping are as follows:

  • A School can be broken down into: regular school, charter school, administrative school, etc…
  • A person can be broken down into: man or woman.
  • A customer can be broken down into: an internal customer or an external customer.
  • A product can be broken down into several types: books, music, movies.
  • A book can be of different types: Hard Cover, PDF format, kindle format etc…

Let’s take a look at it in action:

Supertypes and Subtypes

Let’s depict the diagram above:

Book Entity

  1. Is the parent entity, aka the Supertype.
  2. The Supertype is at the one-and-only-one side of the relationship.
  3. Has all the information that is common among all of its subtypes.
  4. Its PK is going to be the PK of each Subtype as well, so that we can navigate between the detail and the generic information back and forth.

Paperback Book and Digital Book Entities

  1. Are the Child entities, aka Subtypes.
  2. The Subtypes are at the zero or one side of the relationship.
  3. They contain information specific to them only.
  4. Their PK is the Parent entity primary key so that you can easily get to the common information.

Alert: Some people put a char column in the parent entity to be able to determine the subtype of a record without having to look at the subtype tables. Be very careful with this approach, because if a parent entity can be of both types the solution would not be consistent; if you do this, make sure that the subtypes are mutually exclusive.
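A minimal SQL sketch of this supertype/subtype model (table and column names are my own; adjust types and syntax to your DBMS) could look as follows:

```sql
create table book (
    book_id   int primary key,
    title     varchar(200) not null,
    -- optional discriminator; only safe if subtypes are mutually exclusive
    book_type char(1) check (book_type in ('P', 'D'))
);

create table paperback_book (
    book_id    int primary key,   -- same PK as the supertype
    page_count int,
    foreign key (book_id) references book (book_id)
);

create table digital_book (
    book_id     int primary key,  -- same PK as the supertype
    file_format varchar(10),
    foreign key (book_id) references book (book_id)
);
```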

Self-Referencing relationships

Also known as recursive relationships, these are unary relationships. These types of relationships occur when an entity can reference itself.

Recursive relationships occur often in hierarchies, and also in objects that are part of a whole. Below are some scenarios that could be modeled using recursive relationships:

  1. A course at a school or university can have 1 or more prerequisites.
  2. Organizations have people hierarchies (CEO,VP, Director, Managers, Employees).
  3. A finished product can be made of many sub products or parts.
  4. Biological hierarchies (dad, mom, siblings, son, etc.).

Let’s look at it in action; grab a mug of java, just a piece of advice :) We are going to solve this with the four steps as follows:

  1. Get a visual of the hierarchy (Extremely important).
  2. Create the database model.
  3. Fill it with some data.
  4. Write some queries.

Step #1: Get a visual of the hierarchy

The University of Illinois has one of the most reputable computer science programs in the world, so I have decided to model a very tiny section of their curriculum to illustrate self-referencing relationships. The following mind map represents two master-level computer science classes and their prerequisites:

MSCS Illinois mini curriculum

Step #2: Create the database model

The database table that will support this solution looks as follows:

self-referencing relationship

Alert: Notice how the relationship line is optional at each end; you must do this, otherwise the recursion would go into an infinite loop.
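A minimal DDL sketch of such a table (the column names come from the queries below; the PK name is assumed) could be:

```sql
create table course (
    course_id       int primary key,
    course_code     varchar(20)  not null,
    course_name     varchar(100) not null,
    course_category varchar(50),
    -- nullable self-referencing FK: the top of each chain has no next course
    next_course     int null,
    foreign key (next_course) references course (course_id)
);
```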

Step #3: Fill it with some data

The following screen shot shows all the data in the COURSE table that I have manually entered.

select * from course

Step #4: Write some queries

You can write the queries in two styles:

  1. By doing a query for each level of the hierarchy and combining them together with a union all.
  2. By writing a more generic query and providing an in clause. This style is not recommended if you have many rows in the table, as it will drive you even more insane than the first style.

Style #1: Query by “union all”:

We are going to do the first query technique to demonstrate what the data looks like for the master level course CS 414 (Refer to the course on the left in the mind map).

select c.course_id, c.course_code, c.course_name, c.course_category, c.next_course
from course c
inner join course d
   on c.next_course = d.course_id
where d.course_id in (1)

union all

select c.course_id, c.course_code, c.course_name, c.course_category, c.next_course
from course c
inner join course d
   on c.next_course = d.course_id
where d.course_id in (2)

union all

select c.course_id, c.course_code, c.course_name, c.course_category, c.next_course
from course c
inner join course d
   on c.next_course = d.course_id
where d.course_id in (3)

union all

select c.course_id, c.course_code, c.course_name, c.course_category, c.next_course
from course c
inner join course d
   on c.next_course = d.course_id
where d.course_id in (4)

The query above produces the resultset as follows:

CS 414 Hierarchy

Style #2: Query by “in” clause

Now we are going to write a shorter query that will demonstrate what the data looks like for the master-level course CS 421 (refer to the courses hanging off of CS 421 in the mind map).

To obtain the dependencies hanging to the left of CS 421 we could write the following query:

select c.course_id, c.course_code, c.course_name, c.course_category, c.next_course
from course c
inner join course d
   on c.next_course = d.course_id
where d.course_id in (5,6,8,9)

This query will produce the resultset as follows:

CS 421 Let tree

To obtain the dependencies hanging to the right of CS 421 we could write the following query:

select c.course_id, c.course_code, c.course_name, c.course_category, c.next_course
from course c
inner join course d
   on c.next_course = d.course_id
where d.course_id in (5,7,10,11,12,13)

This query will produce the resultset as follows:

CS 421 right side tree


Now that I have finished writing the post, I realize that I could improve the queries by replacing the next_course column with a next_course_name column. If you want to do that, comment out the c.next_course column from the select clause and replace it with d.course_name as next_course_name.
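That improvement could be captured once in a view (view and column names assumed), so nobody has to rewrite the join:

```sql
create view course_prerequisites as
select c.course_id, c.course_code, c.course_name, c.course_category,
       d.course_name as next_course_name
from course c
inner join course d
   on c.next_course = d.course_id;
```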

Best Practices

The best practice is to have a collection of views that give you all the data representations that you need for each hierarchy. That way you only have to pull your hair out once and hope that no reorganization is going to happen (EVIL LAUGH HERE). If it does, then you will have to go through a similar painful exercise again.

Conclusion: Recursive relationships can be very tedious and difficult to implement. Hence, the bullet points below:

  • Documentation to understand the recursion is very important.
  • A Mind Map is the best way to document a recursive relationship.
  • A business group should be responsible for owning that document and passing it along to IT staff. This document should be revised at least once a year (or more) to ensure the hierarchy has not changed.

Without the mind map I would not have had the right state of mind to solve this problem. The tool I use for mind mapping is called NovaMind (it is not free, though); it is very good for learning, brainstorming, and other purposes.

Many-to-Many Relationships

This topic deserves special attention because many people from database designers to application developers (sometimes one person does both) do not get it right. A Many-to-Many relationship occurs when each instance from the entity on the left can have many instances from the entity on the right, and vice versa.

Some examples of many-to-many relationships are as follows:

  1. Each order can contain many items. Each item can also belong to many orders as it can be purchased by different customers.
  2. Each student can sign up for many courses. Each course can be assigned to multiple students.

Incorrect implementation of many-to-many relationships

We see this problem when people try to implement the relationship with a plain foreign key. Let’s illustrate it with the second example listed above:

Unresolved many-to-many

What we are trying to say here is that each student can take many courses and that each course can be taken by many students. To implement this, a FK has been put in the student table pointing to the course table.

Do you see any problems with this?

  • Yes!!! We will have repeated student rows for each different course a student is taking.
  • The moment you do this your database is no longer normalized, because we would have repeating groups.
  • This issue can potentially lead to update anomalies.
  • This relationship is said to be unresolved.

Correct implementation of many-to-many relationships

In order to correct the many-to-many implementation above we have to resolve the relationship. This is done as follows:

  1. By adding another table between the two entities; this table is called an associative entity or intersection table.
  2. By making the PK of the associative entity a composite key that consists of each parent’s PK (we are using an identifying relationship).
  3. By naming the intersection table as follows (not required by the DBMS, but suggested): ENTITYA_ENTITYB. In our case, STUDENT_COURSE or STUDENT_COURSE_REL.
  4. By changing the cardinality of the parent tables to be one-to-many to the associative entity.

Let’s see it in action:

Resolved many-to-many

What improvements do you see?

  • The student is defined once, so we avoid duplicate data.
  • The courses are defined once, and we avoid the mirror-image problem we would have had if we had put the FK in the course table in scenario #1.
  • We do not run the risk of update anomalies.
  • We can add meaningful fields to the associative entity. Those are called the Fixed Intersection Data (FID).
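The resolved model above can be sketched in SQL as follows (table and column names are my own):

```sql
create table student (
    student_id   int primary key,
    student_name varchar(100) not null
);

create table course (
    course_id   int primary key,
    course_name varchar(100) not null
);

-- Associative entity: its PK is a composite of both parents' PKs
create table student_course (
    student_id  int not null,
    course_id   int not null,
    enrolled_on date,            -- example of intersection data
    primary key (student_id, course_id),
    foreign key (student_id) references student (student_id),
    foreign key (course_id)  references course (course_id)
);
```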

Alert: The associative entities (with their FID) will become your FACT tables and the parent entities will become your DIMENSION tables in OLAP database structures. Dimensional modeling is a different modeling technique which uses a lot of denormalization so that you get better query performance, as data is at most one join away.