Java “Losing its Mojo?” I Think Not!

Wired has an article titled Is Java Losing Its Mojo? While the article seems to contradict itself in some ways, I have to take issue with the general theme. As someone who pays attention, I simply haven’t seen this happening. On the contrary, it seems to me that Java continues to grow. Just peruse the job boards.

Everything about Java 8 – TechEmpower Blog

Java 8/Lambas

ZeroTurnaround has a post about lambda expressions in Java 8 with some examples. Really cool. The last time I even heard of a lambda expression was in college while briefly learning Lisp. Lambda expressions can be tried out now for anyone wishing to take a look. I’ll be trying them out when I have a free moment. I know this is old news to many software geeks…
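For anyone else who hasn’t touched lambdas since that Lisp class, here is a minimal sketch of the Java 8 syntax (the class and variable names are my own, purely for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> langs = Arrays.asList("Lisp", "Java", "C");

        // Before Java 8 this required an anonymous Comparator class;
        // a lambda expression says the same thing in one line:
        langs.sort((a, b) -> a.compareTo(b));

        // forEach accepts a lambda directly:
        langs.forEach(lang -> System.out.println(lang)); // prints C, Java, Lisp
    }
}
```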

C Overtakes Java in 2012

Interesting. I was surprised to see many of the entries on the list, especially MATLAB.

Java to Objective C Translator

[Google Open Sources Java To Objective-C Translator]

The Diamond Problem

“Fun” with multiple inheritance:
Multiple inheritance – Wikipedia, the free encyclopedia
Programmer Interview: What is the diamond problem? Does it exist in Java? (nope)
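A quick illustration of why the diamond problem doesn’t bite in (pre-default-method) Java: two interfaces may declare the same method, but a class can extend only one class, so there is exactly one implementation and nothing to be ambiguous about. The names below are mine, purely for illustration:

```java
interface Scanner {
    String powerOn();
}

interface Printer {
    String powerOn();
}

// A class may implement both interfaces. Since interfaces carry no
// implementation (in Java 7 and earlier), there is exactly one powerOn()
// body here -- no diamond ambiguity is possible.
public class Copier implements Scanner, Printer {
    @Override
    public String powerOn() {
        return "Copier powering on";
    }

    public static void main(String[] args) {
        System.out.println(new Copier().powerOn()); // prints "Copier powering on"
    }
}
```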

Apache Commons Configuration with JBoss 5

Here’s a problem that frustrated me for a bit: When using Apache Commons Configuration under JBoss 5, I kept running into the following error when attempting to save to my configuration file (which was a resource under the deployed /classes path):

ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/xxxx].[xxxx]] (http- Servlet.service() for servlet flint threw exception protocol doesn't support output

What the heck? This error happened every time I attempted to save to my configuration file. It worked fine in Tomcat 6.x, but any time I tested on JBoss, while I could read from the configuration file, the above error was thrown every time I attempted to write to it.

JBoss 5.x uses a VFS (virtual file system abstraction) for the files that it deploys, and this causes problems with Commons Configuration’s default FileChangedReloadingStrategy. So the fix is to do something like this instead:

VFSFileChangedReloadingStrategy f = new VFSFileChangedReloadingStrategy();
((FileConfiguration) config).setReloadingStrategy(f);

It turns out that we really want to use VFSFileChangedReloadingStrategy (which means using Apache Commons Config 1.7). This also requires that the Apache Commons VFS API be on your classpath. The good news is that VFSFileChangedReloadingStrategy works well even with non-VFS deployments (i.e., plain old Tomcat and Jetty). Problem solved!

Programming Trends to Watch

This was an interesting article:

[JavaWorld: 11 Programming Trends to Watch]

JBoss Deployment Gotchas

The other day I was attempting to deploy a web app in JBoss (5.1) that had deployed just fine in Tomcat. In JBoss I kept seeing the following error:

java.lang.ClassCastException: org.hibernate.ejb.HibernatePersistence cannot be cast to javax.persistence.spi.PersistenceProvider

It turns out that the problem was because JBoss was first loading its own Hibernate classes. When my application started to deploy there was a Hibernate jar conflict (my Hibernate version was newer than that which JBoss shipped with).

To solve this problem it is necessary to direct JBoss to load class files for the application first. To achieve this, create the following jboss-classloading.xml file in WEB-INF:

<classloading xmlns="urn:jboss:classloading:1.0"
              domain="MyApp"
              export-all="NON_EMPTY"
              import-all="true"
              parent-first="false"/>

(The parent-first="false" attribute is what tells JBoss to consult the application’s own classes before its bundled ones; the domain name is arbitrary.)

I had been testing my application using Tomcat 6.x, and, knowing that JBoss uses Tomcat as its web container, I assumed there would be no problems when deploying to JBoss. I never saw this issue in Tomcat because it does not package its own Hibernate jars.

With that issue resolved, I was still perplexed as to why my application would not launch. In my JBoss server log files I continued to get errors that the deployment of my application would not complete. This is hardly a new issue to those with JBoss experience (I found this post from 2003). The problem, it turns out, was that my application included a newer version of xalan.jar than that which JBoss ships with. This problem is similar to the Hibernate issue above; however, it cannot be resolved in the same way, since the xalan classes are used during JBoss’s initial deployment of the web application (i.e., it relies on the xalan jar file prior to reading jboss-classloading.xml).

To resolve this issue I decided that it would be easiest to simply make a separate distribution for my JBoss deployment. I would like to be able to deploy on Glassfish, Jetty, Tomcat and JBoss, but since JBoss has this conflict, one solution is to make a build target similar to this (I am using ant):

<target name="jboss-dist" depends="compile">
 <war destfile="jboss-dist/MyProject.war">
   <fileset dir="WebContent">
     <exclude name="**/xalan*"/>
   </fileset>
   <classes dir="build/classes"/>
 </war>
</target>

Now I have a JBoss deployment which does not include xalan, and it deploys without error!

Why I Dislike the @author Annotation

Here’s why I LOVE the @author annotation (in Javadoc comments): It makes me look really good having my name on tons of code. It shows that I am a high-output engineer.

Here’s why I HATE the @author annotation: It implies some sort of ownership of a method, class or package to the rest of the team.

When it comes to writing software, my experience has always been that smaller teams with good developers can get significantly more done than large teams with mediocre developers. (On this note, it may be a good idea to hire a single great developer at $150k rather than 3 junior programmers at $60k, but that’s a separate post).

The @author annotation can be useful in your Javadoc comments so that others know whom to turn to with questions about the code. I’m not opposed to its use (in fact, I use it all the time). Its use, however, should not imply that others on the team are prohibited from modifying the code written by another programmer. On the contrary: Code is the responsibility of the entire team. All code!

There have been many occasions throughout my career wherein another developer said to me, “Hey Matt, can you write such-and-such a method so that I can get such-and-such?” An obvious example is in DAO classes. Another developer may be writing some controller code that requires some DAO method downstream. That developer should not find it necessary to ask the guy or gal who wrote the DAO class to implement a method. Write it yourself! Okay, there are certainly times when it may be appropriate to do so, but the main point is that we should make it clear that all code is the responsibility of everyone on the team.

Another example is when a defect is found. We’ve all made them… Writing code, to some extent, means writing defects. When a defect is found it is never appropriate to allow the rest of the team to be hung up because of it. The person discovering the defect, whether he or she wrote the defect or not, is free to correct it.

I’ll use myself as an example. One time I wrote a POJO class with a method like this:
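It was something along these lines (the class and field names here are hypothetical stand-ins for the same flavor of mistake):

```java
public class Employee {
    private String name;

    // Oops: that capital T breaks the getter naming convention --
    // this should read getName()
    public String geTName() {
        return name;
    }
}
```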


Oops! This is a pretty straightforward problem: That T should be lowercase. Because this was code that I checked in and my name appeared as the @author, another team member pointed the typo out to me and asked me to correct the problem. This is certainly fair to do, but in the amount of time it took to call me over, show me the typo and send me to fix it, the other team member could have simply checked in a fix. The @author tag does not indicate that the @author is the only one allowed to modify the code.

A Bidirectional Add-To-List Mistake

Here’s a somewhat real-world example of a bad coding practice. When setting up bidirectional relationships, it’s important to remember that the methods for adding to a list backing a one-to-many/many-to-one relationship must set the parent/child relationship in both directions. To make life simple, I like to provide a couple of different ways of adding to a one-to-many list. For example, say a database contains a “PEOPLE” table made up of Person objects, and each person has many Phones:

public class Person {
    @Id
    private int id;

    private String name;
    private int age;

    // mappedBy: the Phone side owns the PERSON_ID join column
    @OneToMany(mappedBy = "person")
    private List<Phone> phones;
    // ...
}

public class Phone {
    @Id
    private int id;

    private String number;
    private String type;
    private boolean preferred;

    // @ManyToOne has no mappedBy attribute; this side holds the FK
    @ManyToOne
    @JoinColumn(name = "PERSON_ID", nullable = false)
    private Person person;

    public Person getPerson() {
        return person;
    }
    // ...
}
Naturally, we need to enforce the bidirectional relationship, so that adding a phone requires us to set its person (the PERSON_ID foreign key cannot be null!), and adding a phone via a person (person.addPhone(…)) requires both that the phone is added to the phones list and that the phone object added to that list is assigned its parent person.

So in our Person class we need a method like this:

public void addPhone(Phone phone) {
    if (!phones.contains(phone)) {
        phones.add(phone);
        phone.setPerson(this);
    }
}

And in the Phone class we need:

public void setPerson(Person person) {
    if (this.person != person) {
        this.person = person;
        person.addPhone(this);
    }
}
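Putting the two halves together, here is a self-contained sketch (JPA annotations and unrelated fields stripped out) showing that setting either side wires up both directions without infinite recursion:

```java
import java.util.ArrayList;
import java.util.List;

class Person {
    private final List<Phone> phones = new ArrayList<>();

    public void addPhone(Phone phone) {
        if (!phones.contains(phone)) {
            phones.add(phone);
            phone.setPerson(this); // keep the other side in sync
        }
    }

    public List<Phone> getPhones() { return phones; }
}

class Phone {
    private Person person;

    public void setPerson(Person person) {
        if (this.person != person) {
            this.person = person;
            person.addPhone(this); // keep the other side in sync
        }
    }

    public Person getPerson() { return person; }
}

public class BidiDemo {
    public static void main(String[] args) {
        Person p = new Person();
        Phone ph = new Phone();
        ph.setPerson(p); // set one side only...
        // ...and both directions are wired up:
        System.out.println(p.getPhones().contains(ph)); // true
        System.out.println(ph.getPerson() == p);        // true
    }
}
```

The guard conditions (contains / reference inequality) are what stop the two methods from calling each other forever.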

This is all fine and nice, but sometimes it is much easier to have a method that allows us to set all the child relationships at once. I like to offer both methods (i.e., my Person class would have both an addPhone and addPhoneList method). This isn’t at all uncommon. The problem is that where the phone owner is set may be confusing. Is it inside or outside the POJO?

It is tempting to leave the calling logic to do the work. For example:

List<Phone> phones = new ArrayList<Phone>();
for (some loop) {
    Phone phone = new Phone();
    phone.setPerson(person); // the caller must remember to set the owner
    phones.add(phone);
}


In our Person class we would then need a method like this:

public void addPhoneList(List<Phone> phones) {
    this.phones = phones;
}

This assumes that whoever is writing the calling code understands the need to set the phone owner prior to persisting the person object. It also goes against our previous logic for adding a single phone to the list of phones for a person object. (I realize that I am ignoring the fact that my example method may leave some existing phone numbers orphaned or uni-directional, a separate issue).

A better approach to the addPhoneList method is this:

public void addPhoneList(List<Phone> phones) {
    for (Phone phone : phones) {
        addPhone(phone);
    }
}

By doing things this way we keep the work properly encapsulated in the Person class, and we don’t have to assume that the calling logic (and whoever writes it) set the owner for each phone in the list. It’s just cleaner… As with nearly everything, there is a tradeoff between efficiency and bulletproof code. We are forced to loop through the list of phones twice in this scenario: once when the phones are created and again when adding the list to the Person. Assuming we don’t have a massive list and are not making this call with extreme frequency, the overhead is really negligible.

This seems like a basic problem with an obvious solution, but I have seen code written this way a number of times (where the add-single-phone method covers the bidirectional relationship but the add-multiple method does not). Hibernate/JPA cannot magically figure out the parent/child relationship, so it is very important to enforce it and properly encapsulate responsibility within the persisted POJOs.

[Level Up: One-To-Many Associations]
[Level Up: Many-To-One Associations]

Is there ever a reason NOT to use an Artificial Primary Key?

I found this post on the subject of choosing a primary key. While Java Persistence Annotations allow us to use any field we want as a primary key (as long as it is naturally unique), is there a good reason to use anything that is not a surrogate/artificial primary key?

There are plenty of fine examples for natural primary keys: SKUs, usernames, email addresses, and so on. While these may work fine as a primary key insofar as they satisfy uniqueness requirements, there are some drawbacks, the biggest being the fact that uniqueness may not be guaranteed.

This post lists the reasons against using natural primary keys with 10 very good points:

  • Con 1: Primary key size – Surrogate keys generally don’t have problems with index size since they’re usually a single column of type int. That’s about as small as it gets.
  • Con 2: Foreign key size – They don’t have foreign key or foreign index size problems either for the same reason as Con 1.
  • Con 3: Aesthetics – Well, it’s an eye of the beholder type thing, but they certainly don’t involve writing as much code as with compound natural keys.
  • Con 4 & 5: Optionality & Applicability – Surrogate keys have no problems with people or things not wanting to or not being able to provide the data.
  • Con 6: Uniqueness – They are 100% guaranteed to be unique. That’s a relief.
  • Con 7: Privacy – They have no privacy concerns should an unscrupulous person obtain them.
  • Con 8: Accidental Denormalization – You can’t accidentally denormalize non-business data.
  • Con 9: Cascading Updates – Surrogate keys don’t change, so no worries about how to cascade them on update.
  • Con 10: Varchar join speed – They’re generally integers, so they’re generally as fast to join over as you can get.

So while on the surface it may seem simple to use a seemingly unique field for a primary key (a username on a domain, for example), it can be disastrous later on. Con 6 above is the big one, but Con 7 is something people don’t seem to think about as much. We can enforce uniqueness on any field we want, be it a key field or not… That said, I really cannot think of a good reason to use a natural key (other than developer laziness, which is in fact the key reason why bad code tends to be written in the first place).
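Con 9 (cascading updates) in miniature, with plain maps standing in for tables (names and emails here are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyDemo {
    public static void main(String[] args) {
        // Natural key: the user's email is also the identifier.
        Map<String, String> byEmail = new HashMap<>();
        byEmail.put("ann@example.com", "Ann");

        // Surrogate key: an int that never changes.
        Map<Integer, String> byId = new HashMap<>();
        byId.put(1, "Ann");

        // Ann changes her email address. Every reference keyed by the old
        // email (here: one map; in a real schema: every foreign key that
        // points at her row) must be rewritten...
        String name = byEmail.remove("ann@example.com");
        byEmail.put("ann@new-example.com", name);

        // ...while the surrogate key requires no cascading change at all.
        System.out.println(byId.get(1)); // Ann
    }
}
```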

[Stack Overflow: Deciding between an artificial primary key and a natural key for a Products table]
[Rapid Application Development: Surrogate vs Natural Primary Keys – Data Modeling Mistake 2 of 10]
[Wikipedia: Surrogate Key]
[Wikipedia: Natural Key]

Valuable Unit Tests in a Software Medical Device, Part 9

I thought I was done, but here is yet another good reason to incorporate complex function automated testing: Validation of multiple Java runtime environments. Fabrizio Giudici proposes this as a solution for testing with Java 7, but we can always take it a step further, verifying multiple OS environments as well. Of course, this requires that we have those build environments available (easy enough, using virtual machines).

Where Do Hibernate Transactions Fit In?

I recently got into a discussion on where Hibernate transactions should be placed in the scope of DAO logic. Some people have a desire to begin and end a transaction inside a DAO method. This is really a question of unit of work, and not scope of the DAO.

Let’s say we have a single unit of work that involves updates to several tables. Within the scope of this unit of work we have to update table 1, table 2 and table 3. During the sequence of events table 1 updates just fine and then table 2 fails to update. Now, are we to move on and update table 3? What are we to rollback? Do we want to rollback both changes (the partial change to table 2 and the complete change to table 1 that is within the same unit of work)? (We probably do.)

But with our transactions being committed within every single DAO method we cannot; the previous transaction was already committed before we discovered an error. So the ultimate answer is this: placing transaction begin and end statements within each DAO method makes things messy. While it is important to keep the length of a transaction as short as possible, it CANNOT be done at the expense of the integrity of the unit of work!

In the example that I scribbled below (and it’s not a great example), a single unit of work (adding a person with an address and an employer) allows for partial completion. This isn’t what was desired in the first place.
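To make the partial-commit problem concrete, here is a toy simulation in plain Java (deliberately not Hibernate’s API; the “tables” and failure are made up):

```java
import java.util.ArrayList;
import java.util.List;

// A toy "database" that only mimics commit/rollback semantics.
class ToyDb {
    final List<String> committed = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();

    void update(String table) {
        if (table.equals("table2")) throw new RuntimeException("update failed");
        pending.add(table);
    }
    void commit()   { committed.addAll(pending); pending.clear(); }
    void rollback() { pending.clear(); }
}

public class UnitOfWorkDemo {
    public static void main(String[] args) {
        // Anti-pattern: commit inside each "DAO method".
        ToyDb perMethod = new ToyDb();
        try {
            perMethod.update("table1"); perMethod.commit(); // already durable
            perMethod.update("table2"); perMethod.commit(); // throws
        } catch (RuntimeException e) { perMethod.rollback(); }
        System.out.println(perMethod.committed); // [table1] -- partial work survives

        // One transaction per unit of work: all-or-nothing.
        ToyDb unitOfWork = new ToyDb();
        try {
            unitOfWork.update("table1");
            unitOfWork.update("table2"); // throws before anything commits
            unitOfWork.update("table3");
            unitOfWork.commit();
        } catch (RuntimeException e) { unitOfWork.rollback(); }
        System.out.println(unitOfWork.committed); // [] -- nothing committed
    }
}
```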

[Stack Overflow: DAO Pattern - Where Do Transactions Fit In?]
[Java Boutique: Managing DAO Transactions in Java]

Hibernate: All Entities Must Have an Id–And All Tables Should Be Entities!

Sometimes the relational database designer in me wants to create a table with multiple columns and no primary key (i.e., only a foreign key lookup). An example of when this might be appropriate is when some parent data type has many attributes that don’t necessarily need to be persisted as unique entities (an entity being any persisted POJO) on their own. Think of a “customer” table in a database. A customer may have many phone numbers, so a table relationship like this may seem to make sense:

customer:   id (PK), name, …
cust_phone: customer_id (FK to customer.id), number, type (no primary key)

In the above example the cust_phone field has no primary key. It has a customer_id field, but this is a foreign key to the customer table. This seems to be a perfectly fine relational database design, but when it comes to Java persistence annotations we run into a few problems. We could go with a join table, but that isn’t really necessary here (we don’t have a many-to-many relationship).

Most importantly, if we want to do an insert or update on cust_phone, we’re going to need to have an annotated class to do it on, and we’re going to have to access that class with an Id (perhaps a composite Id in the case of a join table, but, again, there is little overhead in having a unique artificial primary key on a join or lookup table, and it makes the life of the Java developer easier). When a join table is implemented on the Java side as a POJO with its own Id, we can handle bidirectional updates using those POJOs. If we don’t have this ability we are stuck with navigating a collection of some sort and updating it all at once (at persistence time).

The above can be done using native SQL, but why bother (it would be complicated anyway)? We could also use the parent’s primary key as the child’s primary key, but that isn’t good database design. Ultimately, what makes the most sense is to accept that all Java persisted entities must have a unique @Id. There is little overhead and this prevents a number of problems at insert and update time.

To put it another way, we can think of something like this:

A student has many courses. A course has many students. Between these two entities we may have a student_course table like so:

student_id | course_id
505        |        11
505        |        12
505        |        13
506        |        12
507        |        15

From a pure relational database point of view, this is fine. Heck, it’s common!

But to make life easier in the annotated class with this bidirectional relationship, let’s add an artificial primary key to this join table, like this:

id | student_id | course_id
 1 |        505 |        11
 2 |        505 |        12
 3 |        505 |        13
 4 |        506 |        12
 5 |        507 |        15

Now instead of traversing a HashMap or some other collection of student-to-course relationships inside of my Student POJO, I can navigate each relationship as its own persisted entity class. I can even change an individual student/course enrollment. From a pure database point of view this may be bad practice; we probably instead want some other column, such as “ACTIVE” or “DROPPED” or “REMOVED”. Herein lies yet an additional benefit: we have more details about the relationship. Not only do we know that the relationship exists, but we know some specifics about it, and we can access these specifics through a nice Java class with all sorts of helpful methods. Sure, the table above doesn’t have the appearance of a traditional join/lookup table in a relational database, but we’re thinking in terms of database design as well as Java persistence annotations and Hibernate needs.
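The enrollment-as-entity idea, sketched in plain Java (JPA annotations omitted; the class name, fields and status values are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical enrollment entity: the join row is a first-class object
// with its own id and extra state about the relationship itself.
class Enrollment {
    final int id;
    final int studentId;
    final int courseId;
    String status = "ACTIVE";

    Enrollment(int id, int studentId, int courseId) {
        this.id = id;
        this.studentId = studentId;
        this.courseId = courseId;
    }
}

public class EnrollmentDemo {
    public static void main(String[] args) {
        List<Enrollment> enrollments = new ArrayList<>();
        enrollments.add(new Enrollment(1, 505, 11));
        enrollments.add(new Enrollment(2, 505, 12));

        // Because each row has its own id, a single relationship can be
        // addressed and modified on its own:
        for (Enrollment e : enrollments) {
            if (e.id == 2) e.status = "DROPPED";
        }
        System.out.println(enrollments.get(1).status); // DROPPED
    }
}
```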

Here’s another major benefit: As long as my getters and setters account for the bidirectional nature of the object insertions (and they should!), I can add a student to a class or add a class to a list of student enrollments very easily in my Java code:

Class myClass = <some class>;
Student myStudent = <some student>;
myClass.addStudent(myStudent);

Or I can do it the other way:

Class myClass = <some class>;
Student myStudent = <some student>;
myStudent.addClass(myClass);

Sure, I could have done the exact same thing with a set of classes using @JoinTable, but when it comes time to manage insertion, modification and removal we end up writing more complicated code and potentially issuing more transactions than necessary on the database side. Another benefit is the ability to query against the join table (in this case student_course) and get back a list of unique objects (StudentCourseEnrollment), the type of which we know to be a specific POJO.

Now I see the obvious issue above: we have introduced the potential for enrolling a single student into a class twice. This is easy enough to take care of with a unique constraint across multiple columns (student_id and course_id), as well as on the DAO/insert side (in any case, we would want to check for such potential unique constraint violations at both the front end and the DAO end).

Of course, it is very important to remember to create appropriate getter and setter methods so that both sides of this bi-directional relationship (@ManyToOne and @OneToMany) have the necessary key fields. Hibernate will not magically handle this step. The big “duh” moment is when you think about how on earth Hibernate would handle an update on a table with no @Id field on the POJO. It couldn’t, because there is every reason to expect duplicate data on the columns outside of the FK (assuming the FK is not used as a unique ID). Adding to the confusion, there is always the chance (even if you think you have designed against it) that the PK->FK relationship becomes broken, and we are left with orphaned data.

What this all boils down to is designing a database with simplicity of JPA and Hibernate concerns in mind.

[JavaRanch: Hibernate Mapping with no primary key in existing DB]
[Java Persistence: Many To One]

[Thoughts On: Hibernate Many-To-Many Revisited] – The guy has some other thoughts about how to achieve many-to-many mappings and handle the PK. He somewhat agrees with me, but offers a hybrid approach.