Monday, August 13, 2012

Improving Code Quality - Scheduling Technical Debt and the Bucket Parable

Overview

Please read the previous post on improving code quality using LCOM4 and Cyclomatic Complexity before reading this entry. Below I present two ways of thinking about technical debt in your organization. The first is the old-school approach, and the second is the approach that took hold once the scale of cyber-fraud became apparent.

Old way of thinking

Before cyber-security became a major concern, software development companies thought about their applications very differently than they do now. In those days, if you ran Sonar against your codebase, identified some areas where your source code was in rough shape, and wanted to fix them, you had a problem. The prevailing view was that acknowledging flaws in your code was also acknowledging legal liability. So, companies would fix bugs in "maintenance releases" but would not acknowledge the specific security failures they were fixing. This culture of quiet fixes shielded application developers, but exposed end-users to the security holes those bugs created.

The Reality

Currently, massive cyber attacks against companies around the world are making those companies far more appreciative of an increased level of due diligence with their applications. The mark of a successful company is not that it pretends problems don't exist in its code; it is that it acknowledges them, fixes them, and communicates about them. To understand how to plan for technical debt resolution, you must first understand the Bucket Parable.

The Bucket Parable.. a digression (bear with me, it pertains)

Back in the olden days, there was a small village that had a very unique annual contest: the rocks-in-a-bucket contest. The goal was to see who among the competitors could get the most rocks into their bucket. Now, the buckets were all the same size; so, the only thing that differed was the size of the rocks. In the first year's contest, a strong man named "Hugo" won the prize because he had managed to stuff 2 massive rocks into his bucket. The next year, a strapping gentleman on a horse won with 3 rocks, each weighing five pounds. And the third year? Well, the next legendary player was Mike Van himself, and he was able to stuff over 400 rocks into his bucket. But how, you say? Well, while all the other folks were worrying about the big rocks, Mike Van paid attention to the little rocks too. So, whenever there was a gap between the big rocks, Mike Van filled it with little rocks. As he filled up his bucket, he noticed that the massive number of little rocks also gave the bucket extra weight, making it more formidable. Well, Mike Van was proclaimed the winner, after which he retired from rocks-in-a-bucket and moved into a cave to knit "delicates" for poodles. He apparently feels it is a growth investment zone.

The Bucket Parable and Scheduling Technical Debt

If you've ever been on a good development team, you know about feast and famine. There are times when you have so much work to do that you have to work stupid long hours, and there are other times when you play Call of Duty with your co-workers because there ain't squat to do. It is in those quiet times that the Bucket Parable comes into play.

If you are delivering product, meaning software, on a schedule, you have a specific period of time to deliver a specific set of functionality. Think of your timeline as the bucket in the Bucket Parable. Now, each of your major tasks is going to take up some of the time of one or more of your developers. These major tasks are your big rocks. However, what will your developers do when those tasks are complete?

After you've completed your Sonar analysis and noted the weaker areas in your codebase, create a task for each item to fix. Then, in your tracking tool, allow your developers to choose those "little rocks" to pick up alongside their big rocks. At the end of the development cycle, not only have you completed all of the intended features, you've also completed a large set of smaller tasks, including paying down your technical debt. Customers like this. It's good. Do it, then drink beer with friends.

Summary

Using Sonar together with the "bucket of rocks" mental model, we can identify the most significant areas of risk in our technical debt and manage it while creating a negligible impact on our customer's bottom line.

Improving Code Quality - LCOM4 and Cyclomatic Complexity

Overview

One of the reasons that open-source software is so solid is that we use some of the most cutting-edge open-source code-analysis tools available to ensure our software does what we intend in a bug-free manner. In this post, I will talk about one tool we use, Sonar, and two specific metrics I've found useful in focusing the resolution of our technical debt.

Technical debt is defined as all the stuff you should have done that you didn't have time to do. For example, you may have left off a couple of unit tests. Or, perhaps you decided not to refactor that 6,000-line Java class because it worked as a prototype and "if it ain't broke...". As your applications grow, the amount of technical debt will also grow. In many commercial and consulting settings, it may not be realistic to take a few months off from implementing new features to resolve the technical debt. In that same spirit of realism, the only time a team will realistically focus on resolving technical debt is when there is absolutely nothing else to do. You know, after all tasks are completed, the Kanban board's WIP column is clear, and your team is tired of playing Call of Duty.

This lack of priority and time to resolve technical debt creates a problem. Too much technical debt, and your codebase becomes unmaintainable. Your team is one production outage away from working 18-hour days, seven days a week, until the bug is found. You need a way to focus the resolution of your technical debt in order to reduce the risk of a production outage.

Sonar is a source-code analysis tool that has proven very useful in doing this. Specifically, there are two metrics that are very useful for targeting the work: Cyclomatic Complexity and LCOM4. Together, these two metrics provide a very easy and inexpensive way to target your technical debt resolution.

Cyclomatic Complexity

The measure of the number of unique pathways through a class, method, or application is called "cyclomatic complexity". This metric was originally proposed by Thomas McCabe in 1976. It already has a write-up on Wikipedia that goes into great technical detail about what it is, how it is calculated, and even has some pretty pictures. Rather than completely describe it here, I'll hope you clicked the link and skimmed the wiki before continuing to read further.

Another way to think about cyclomatic complexity is as a measure of the difficulty a new developer will have understanding your source code. Usually, a cyclomatic complexity of 5 or less is considered good. Anything between 6 and 10 is considered moderately risky. And any source code with a complexity over 10 is considered poor.

In my experience, you should also take into consideration the difference between classes encapsulating your business algorithms and those containing a large number of utility methods. A utility class may have 100 methods, each doing something very small. Your total complexity for the utility class may be over 100, but when you look at the average complexity per method, it will be close to 1 because your methods will be very discrete. Now, compare that with a class containing methods which implement business logic. In the prototype phase of development, these will likely consist of multiple if statements, each with for loops and switches. This kind of class will have a complexity which grows with each "if" statement in your methods. While you can usually ignore large utility classes, you should absolutely refactor classes containing business logic with high complexity.
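
To make that concrete, here is a minimal, hypothetical Java sketch (the names and logic are invented purely for illustration). Roughly speaking, every decision point (if, for, while, case) adds one to the score, so the tiny utility methods stay at 1 while the prototype-style business method climbs quickly:

public final class PriceUtils {

    // Utility-style methods: one straight path each, so cyclomatic complexity is 1.
    public static double toDollars(long cents) { return cents / 100.0; }
    public static long toCents(double dollars) { return Math.round(dollars * 100); }

    // Prototype-style business method: each if/for adds a path, giving this
    // single method a cyclomatic complexity of 5 (1 + 4 decision points).
    public static double discount(double total, int items, boolean loyalCustomer) {
        double result = total;
        if (items > 10) {                         // +1
            for (int i = 0; i < items; i++) {     // +1
                if (i % 2 == 0) {                 // +1
                    result -= 0.01;
                }
            }
        }
        if (loyalCustomer) {                      // +1
            result *= 0.95;
        }
        return result;
    }
}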

When using cyclomatic complexity to target technical debt resolution, identify those classes with the highest complexity that are not utility classes. Sonar provides this information in an easy format, and it is fairly easy to set up and use!
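
As a rough sketch of how little setup is involved: assuming your project already builds with Maven and you have a Sonar server running locally on its default port, a single extra goal runs the analysis and publishes the metrics to the dashboard.

mvn clean install sonar:sonar -Dsonar.host.url=http://localhost:9000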

LCOM4

LCOM stands for "Lack of Cohesion of Methods" and generically refers to a family of metrics that measure how the methods in a class interact with each other. This metric was revised a number of times until LCOM4 was introduced by Hitz & Montazeri. LCOM4 measures the connected components within a class. The term "connected components" refers to groups of related methods and class-scope attributes. LCOM4 suggests that only methods and attributes that rely on each other should be in the same class.

If you think about it, from a maintainability standpoint, it is a lot easier to understand a class if all of the components of the class refer to each other. Think of this as a single unit of algorithmic activity. Consider if your hello-world class contained methods and attributes that convert between Celsius and Fahrenheit in addition to methods that print out the words "Hello World". How much easier would it be for a new developer to maintain that code if the Celsius-to-Fahrenheit conversion code were in a different class than the hello-world code?
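
Here is a minimal, made-up Java sketch of exactly that situation. The greeting members only touch each other, and the conversion method touches none of them, so the class contains two connected components and LCOM4 scores it as 2, a strong hint that it wants to be two classes:

public class HelloWorldAndWeather {

    private String greeting = "Hello World";

    // Connected component 1: these members only reference each other.
    public void printGreeting() { System.out.println(greeting); }
    public void setGreeting(String greeting) { this.greeting = greeting; }

    // Connected component 2: this method shares nothing with the members
    // above, pushing the class's LCOM4 score from 1 to 2.
    public double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

Splitting the conversion method into its own class would bring each resulting class back down to an LCOM4 of 1.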

This may seem like a pretty simple example, but imagine a prototype composed of 500 classes, each with 1,000 lines or more and an average LCOM4 score over 5. Can you imagine being handed this codebase to maintain? Better yet, can you imagine being asked, 4 years after you wrote the code, to come back and "upgrade" it to use the latest-and-greatest architecture?

Just as with complexity, you should also consider the difference between utility classes and classes containing business logic.  The LCOM4 score of a utility class may be in the 100's.  While this is completely unacceptable for classes containing the implementation of business algorithms, it is completely acceptable for utility classes.  When you use LCOM4 to identify classes to refactor, make sure that you don't focus on your utility classes.

Using Them Together

LCOM4 and cyclomatic complexity are related to each other, and by taking them both into account, you will be able to determine where to focus your technical debt resolution. The list below should help when you compare a given class's LCOM4 against its cyclomatic complexity. I rate the refactoring priority on a scale of one to four, where one is the first priority for refactoring and four is the lowest:
  • Complexity is high and LCOM4 is high: This class has a refactor priority of one. This class implements a number of business algorithms and the methods are very complex. Refactor each algorithm into its own class, then simplify the methods.
  • Complexity is high and LCOM4 is low: This class has a refactor priority of two. This class contains a low number of distinct business algorithms, but its methods are very complex.  First simplify the methods. Then, refactor the algorithms into their own classes. This order works here because the problem isn't the mixture of the algorithms but rather the complexity of the methods. By simplifying the methods, you will see that some of the methods were written to apply to both algorithms. Refactoring the methods first will result in an easier refactoring of algorithms.
  • Complexity is low and LCOM4 is high: This class has a refactor priority of three. This is a utility class. There are a large number of simple methods that don't interact with each other. There is no need to bother with this class.
  • Complexity is low and LCOM4 is low: This class has a refactor priority of four.  This is a well-written class. Don't touch it.  Consider giving the developer accolades, bonuses, or perhaps not stealing their lunch from the fridge on brown-bag Fridays, Marvin!!

Summary

Technical debt represents the skeletons in a software development project's closet. Tools like Sonar give you access to metrics like LCOM4 and cyclomatic complexity. Don't be afraid, though: the good thing about being able to see your skeletons is that you can deal with them, fixing potential problems as they arise.

Friday, February 3, 2012

A History: Integration solutions, from ESBs to Camel + Karaf!

This was based largely from a response I made to an OSGi question on stackoverflow.com. I thought it would be good to share it with you folks.

First and foremost, ESBs were a really good idea 8 years ago when they were proposed. And they solved an important problem: how do you define a business problem in a way that those pesky coders will understand? The goal was to develop a system that would allow a business person to create a software solution with little or no pesky developer interaction, interaction that would otherwise suck up money better spent on management bonuses.

To answer this, the good folks at many organizations came up with JBI, BPMN, and a host of other solutions that let business folks model the business processes they wanted to "digitize". But really, they were all flawed at a very critical level: they addressed business issues, but not integration issues. As such, many of these implementations were unsuccessful unless done by some high-priced consultant, and even then your prospects were sketchy.

At the same time, some really smart folks published a book called "Enterprise Integration Patterns" (Hohpe and Woolf, 2003), which identified over 60 design patterns used to solve common integration problems. Many of the folks doing ESB work realized that their problem wasn't one of business modelling. Rather, the problem was how to integrate their existing applications. To help solve this, James Strachan and some really smart guys started the Apache Software Foundation project "Camel".

Camel is a good implementation of the basic Enterprise Integration Patterns, plus a huge number of components designed to allow folks like you and me to hook stuff together.

So, if you think of your business process as simply a need to send data from one application to another to another, with the appropriate data transformations in between, then Camel (riding in a container such as Karaf) is your answer. In fact, there is a large movement towards replacing ESBs with Karaf + Camel.

Now, what if you want to base the "route" (a specified series of application endpoints you want to send data through) off of a set of configurable rules in a database? Well, Camel can do that too! There's an endpoint for that!
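
To give a flavor of what a route looks like, here is a tiny sketch in Camel's Java DSL. The endpoint URIs are invented for illustration, and the JMS component would need to be configured in your container before the last step would work:

import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll an inbox directory, convert each file's contents to a String,
        // and hand the result to a JMS queue for the next application.
        from("file:data/inbox?noop=true")
            .convertBodyTo(String.class)
            .to("jms:queue:orders");
    }
}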

Thursday, February 2, 2012

Creating a Virtual Appliance with Karaf - Part 2

The last few months have been rife with decisions and hard work, and they ultimately led to a number of good things, including a new Virtual Appliance containing a fully pre-configured software development environment consisting of applications that are fully consistent with the Apache Software License 2.0! Completely open-source, not stripped down, fully functional, and best yet, fully configured.

First, a couple of decisions had to be made. Instead of working inside of a full-blown cloud as I originally proposed, I decided that it would save time to target a specific virtualization technology: VMware Player. This technology was chosen because it is free to use, lowering cost barriers for new developers. Second, for an IDE, I spoke with a number of my open-source colleagues and chose IntelliJ's Community Edition. Next, I had to decide what operating system to use, and I chose CentOS. How should I distribute this new VM? That's the sticky part. To help manage the creation and distribution of this VM, I created a small open-source company called Atraxia Technologies. Unfortunately, this really doesn't solve the problem of how to distribute it. For reasons that I'll explain later, I still haven't gotten an answer for that.

Lastly, I needed to decide what the purpose of the VM should be. Sure, creating a virtual appliance is a fun thing to do. But if it doesn't have a clear purpose, nobody is going to use it. So, after talking to my fellow open-source developers, I decided that my first Virtual Appliance would be a fully-configured software development environment. Many of my friends and I have a number of new software developers we mentor. Unfortunately, a large amount of time is needed to get these developers' environments configured, reducing the amount of time we can spend helping improve their software development skills. Having a VM they can install by themselves will greatly reduce configuration time.

The virtual appliance was pretty easy to set up.

First, I downloaded and installed VMware Workstation. This is a $200.00 product, but it made creating the VM pretty easy, so it was worth the cost. Once Workstation was installed, I downloaded and installed CentOS 6 into it. Again, this was pretty easy. The login and password are both "blue". Next I installed the IDE, the 1.6_22-compatible version of OpenJDK, git, subversion, maven 2.2.1 and 3.0, and Nexus.

Why install Nexus? Well, from my experience there are cases when a build is halted prematurely, which results in certain Maven metadata files becoming corrupted. In those cases, most developers will simply delete their /home/blue/.m2/repository directory instead of attempting to find the corrupted file. However, in a wireless environment, blowing away your repository can result in a very long build time because Maven will have to re-download all of the libraries over the wireless connection and then rebuild the repository directory. To fix this, each VM comes with its own preconfigured Nexus repository, and the .m2/settings.xml file is written to only pull files from the local Nexus. The local Nexus, in turn, points to Maven Central, Codehaus, and a couple of other public repos.
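
For the curious, the mirror section of that settings.xml looks roughly like the sketch below. The id and URL are illustrative; the URL just needs to point at the "public" repository group your Nexus instance exposes:

<settings>
  <mirrors>
    <!-- Route every repository request through the local Nexus instance. -->
    <mirror>
      <id>local-nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>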

The Nexus repository will cache all of the files that Maven needs to build. The first time you build an application, it may take some time to populate the Nexus repository. However, after that, even if you have to blow away your Maven repo, it will take very little time to rebuild your application and your Maven repo.

Now on to the JDK. This is where things got sticky. Despite all of the hard work of the OpenJDK team, there are still some applications that won't build with it. And, due to licensing restrictions from Oracle, I can't bundle the Oracle JDK with each VM. So, to test the VM, I had to install the Oracle JDK, compile my test application (Apache Karaf), and then uninstall the JDK and point the PATH and JAVA_HOME environment variables back to OpenJDK. OpenJDK is a perfectly fine JVM for most developers. But for folks developing applications like Hadoop, Accumulo, etc., the Oracle JDK is really necessary. If you need it, it is free to download and use, but not free to distribute.

So, back to distributing the VM. Currently, the VM is completed. However, I'd have to pay $$ in order to get the 3-gigabyte file hosted. As such, it is sitting on my hard drive awaiting some generous donor of bandwidth to host it. The VM is called "Atraxia Blue" and it is the first of three planned virtual machine offerings. This one is intended to be a desktop development environment. The next one will take the place of the software hub used by most development teams to host their ticketing system, Sonar, and central Nexus repository. I'm still researching whether I should also include a central Git repository. This offering will be called "Atraxia Sienna". The last VM I will produce will include open-source office automation software and some open-source back-end business tools that are being developed by the Apache Software Foundation. This one will be called "Atraxia Pointy-Hair". I'm open to a different name though, if someone wants to suggest one.

The final goal in all of this is to have a suite of completely open-source virtual appliances a small team or company can use to stand up a complete business, fully configured out-of-the-box. 

Oh and thanks to my daughter for coming up with a new motto for Atraxia Technologies: "The Cloud, only Fluffier". 

Until next time!

Wednesday, September 21, 2011

Creating a Virtual Appliance with Karaf: Part 1 (Finding the Right Cloud)

Something I like to do to keep my understanding of technologies up-to-date is to identify a technology on the cusp of widespread acceptance, and to figure it out. This is why I decided to start working on Karaf and OSGi a year ago, and it has served me very well.

The current wave is the cloud and virtualization. Like OSGi, it's been around for a while and is being widely accepted. Many organizations are familiar with "virtual machines", not Java VMs, but the ability to start up an instance of an operating system within another operating system.

Anyhow, as a newcomer to cloud computing, my first small chore was to create a virtual appliance containing Karaf and Cellar. Karaf is a small, lightweight OSGi container that allows you to do enterprise stuff. :-) To accomplish this, I used an older Toshiba Satellite computer as my server and a newer Dell laptop as my client. The client is planned to be used for development of the virtual appliance.

VMWare: VMware has a great web-centric application called "Go" that small-to-medium businesses can use to download their Hypervisor version of the software. Using this, businesses can get access to thousands of virtual appliances and leverage them (for a cost) to handle their business needs. However, there are some drawbacks. Specifically, my laptop needed a gigabit controller. Because my slow computer only had a little ol' NIC, the driver couldn't be found and I was unable to install the hypervisor to test it. Also, remember that installing this app successfully on your system will completely wipe out the previously installed operating system.

Susestudio: SUSE Studio has an awesome web-based interface for creating virtual appliances. The only drawback I saw was the fact that RPMs needed to be created for any applications I wanted to install into my VApp. I'm still looking to see if this is the standard. In any case, this was a show-stopper for me.

XenServer: This, like VMWare, was a download that wiped out my hard drive. They were nice enough to let me know that during installation. The good thing is that this is the first cloud system I've been able to install on my test system, so this is the server I'll use to build and install my virtual appliance!

Next time: How to find the XenServer UI!!

Monday, September 19, 2011

Why OSGi?

Recently I answered a question on stackoverflow.com about the difference between component-based and modular architectures. However, what it really does is show one of the larger benefits of OSGi: fixing the classpath.

Here is the question and my answer:

Q: OSGi is a modular architecture, JavaBeans is a component architecture. What's the diff?

A: The primary difference between OSGi and JavaBeans is in how the classloader works. In a standard .jar file or EJB, the rt.jar file or EJB equivalent maintains the classpath. Additionally, if you are using a container to deploy your application into, you may have multiple classpath maintenance mechanisms, which can cause problems. As a result, when you make a .war file, for example, you usually create a lib directory with all of your .war's .jar dependencies inside the .war. If you only have one .war or .jar in your application, this isn't so bad. But imagine a large enterprise deployment with 100 EJBs, all containing apache-commons! You end up with 100 instances of apache-commons all running inside the same container, sucking up resources.

In OSGi, you deploy each .jar file (we'll call them bundles cuz this is OSGi now) into the OSGi container. Each bundle exposes (exports) the packages it wants other bundles to use, and also identifies its own version. Additionally, each bundle expressly states (imports) the packages it needs from other bundles in order to work. The OSGi container then manages all of these exports and matches them up to the appropriate imports. Now you have one apache-commons available to every EJB that needs it. You've done away with your /lib directory, and your application takes up fewer resources.
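
As a rough illustration (the bundle name and version ranges below are invented), these export and import declarations live in each bundle's MANIFEST.MF:

Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.2.0
Export-Package: com.example.orders.api;version="1.2.0"
Import-Package: org.apache.commons.lang;version="[2.4,3.0)"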

In your question you asked the difference between a component architecture and a modular architecture. Modularity refers to this process of making each bundle its own deployment unit and allowing it to talk to other bundles, instead of balling them all up into one massive .jar file.

Wednesday, July 20, 2011

Hibernate Antlr and OSGi Fragments

From time to time, you'll run across instances where a bundle of yours in OSGi needs to have access to a class, but can't. The error you'll get is "ClassNotFoundException: foo.Bar" where foo.Bar is the package and class your bundle is trying to access. Then, after reviewing your OSGi environment, you see that the package of that class is available, but for some reason your bundle can't see it. What the heck? Why can't it see it?

This usually happens when you use a pre-bundled .jar file that calls Class.forName("foo.Bar"). Calling Class.forName() isn't a good practice inside of OSGi, because it requires your bundle to be able to perform "wiring" after it has become active and started. But, seriously, you can't change over a decade of programming practices overnight, so folks still do this. Seeing that this would be an issue, the good folks at OSGi created a neat way of fixing it, the bundle fragment! (applause, Woo HOO!)
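
Here's a minimal, hypothetical sketch of the pattern that causes the trouble. The class name only arrives at runtime, so nothing in the calling bundle's manifest ever imports the target package, and under OSGi the reflective lookup fails:

public class ReflectiveLoadExample {
    public static void main(String[] args) throws Exception {
        // Typical pre-OSGi style: the class name comes from configuration at
        // runtime. Outside OSGi this works fine; inside OSGi the call below
        // throws ClassNotFoundException because the package was never imported.
        String className = System.getProperty("token.class", "foo.Bar");
        Object token = Class.forName(className).newInstance();
        System.out.println("Loaded " + token.getClass().getName());
    }
}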

What a bundle fragment does is add functionality to an existing bundle. In the OSGi reference, the purpose is usually listed as providing localization support. However, it is also a very powerful mechanism for adding to and changing the contents of a bundle's MANIFEST.MF file. For those of you unaware of what this file does, it contains a set of directions that tell the OSGi environment how to treat a bundle.

In the above example, you would create a fragment adding the foo package to the Import-Package section of the MANIFEST.MF file.
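
In manifest terms, the fragment would end up carrying headers something like these (the symbolic names here are made up; Fragment-Host must match the Bundle-SymbolicName of the bundle being patched):

Bundle-SymbolicName: com.example.foo-fragment
Fragment-Host: com.example.legacy-library
Import-Package: foo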

Most bloggers would stop there. I identified an issue and then told you how to fix it. But not me, nope, I want to show you a real-life example of this using Hibernate and Antlr.

Hibernate uses a parsing service called AST. This, in turn, uses Antlr to help with its parsing. Unfortunately, Antlr needs to use a Hibernate class called "org.hibernate.hql.ast.HqlToken". And, of course, Antlr does a Class.forName() at runtime to get an instance of it. This kind of makes sense; Antlr wasn't written just for use with Hibernate. As such, it needs to be told at runtime what token class it should use for parsing.

To fix this issue, you create a fragment that adds an Import-Package entry for the package HqlToken is in, "org.hibernate.hql.ast". Doing this is pretty simple. First, you create a normal java project. It doesn't matter what tool you use, as long as you have the following basic structure.

- antlr-hibernate-fragment
   pom.xml
   - src
     - main
       - java
       - resources
         - META-INF

Because we're only adding something to the MANIFEST.MF file, the only file in this project that will have anything in it will be the Maven pom.xml. There are a ton of places folks can go to get smart on Maven, so I'm not going to review how the pom.xml file should look other than the maven-bundle-plugin.

In your build section of your pom.xml, add the following

<plugins>
   <plugin>
     <!-- The maven-bundle-plugin lives under the org.apache.felix groupId. -->
     <groupId>org.apache.felix</groupId>
     <artifactId>maven-bundle-plugin</artifactId>
     <extensions>true</extensions>
     <configuration>
       <instructions>
         <!-- Fragment-Host must match the Bundle-SymbolicName of the antlr bundle. -->
         <Fragment-Host>com.springsource.antlr</Fragment-Host>
         <Import-Package>org.hibernate.hql.ast</Import-Package>
       </instructions>
     </configuration>
   </plugin>
</plugins>

Then, run the following from your console, or compile it with your IDE.

mvn clean install

This will create a bundle fragment ready for use with OSGi. In Karaf, after you deploy the original antlr bundle and the fragment, you can run the following console command to see the new import-package directive added to the rest of antlr's Import-Package section. The bundleId referred to in this command is the bundleId of the original antlr bundle, not the new fragment.

headers (bundleId)


I added the new antlr fragment to my Hibernate features.xml document (from a previous blog entry) right after my antlr bundle, and then deployed it as part of my Hibernate feature.
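
For reference, the relevant slice of that features.xml ends up looking something like the sketch below. The coordinates and version are illustrative; the important part is that the fragment is listed right after its host bundle:

<feature name="hibernate" version="3.5.6">
  <!-- host bundle first, then the fragment that patches its manifest -->
  <bundle>mvn:org.antlr/com.springsource.antlr/2.7.7</bundle>
  <bundle>mvn:com.example/antlr-hibernate-fragment/1.0.0</bundle>
  <!-- remaining hibernate bundles follow -->
</feature>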

Please let me know if that helps!