On Writing Tickets (Part 1)

Image(Right-Sizing Tickets)

Spoiler alert: I’m going to tell you right up front what my conclusion is: Tickets should be large in scope. Also, tickets should be medium in scope. Finally, tickets should be very small in scope. Tickets, tickets, tickets! Tickets for everything!

Imagine a ticket with the instructions “create user login.” Cringing? Me too. But most of us are familiar with tickets of huge scope that lack any kind of breakdown.

Back in the day, way back long, long ago (maybe not that long ago) we sent emails. “Hey Matt,” the email might read, “don’t forget to add the flim-flam to the doohickey. But don’t do this until the thingamabob layer is complete.”

We had defect tracking systems, but generally these were utilized for one of two purposes: customer reporting of software issues, or quality assurance reporting of pre-release defects. Requirements were implemented by tracing through a document and checking items off a list. One of the earlier products for requirements tracking and traceability was Rational DOORS. There were a number of other tools we used, none of which integrated very well. Later I was introduced to Rational ClearCase, Rational ClearQuest, Microsoft SourceSafe, Bugzilla, Trac, CVS, BugTracker, Subversion, Redmine, Jira, Confluence, Team Foundation Server… And on and on.

An early entry in the Rational suite of products, DOORS was rather expensive in the way of licensing. If my recollection is correct, licenses were based on user seats, and when each seat is expensive, a company tends to limit the number of product users on a team. I recall project managers working on Gantt charts, hosting meetings in which all the developers sat around a table gazing at a projected screen, being asked for status updates on each line item. Talk about a yawn fest! Whether it was DOORS, Microsoft Project, some other tool, or a long list of line items with names attached to a number of tasks, the status meetings were always the same.

Some reading the list of products I just threw out will recognize that it isn’t very sensible. Some of the items in the list are used for version control. Some for ticket management. Some for a bit of both. Some of the products are no longer in use. Some are very old, but still in use. (And some, such as Redmine, Jira, and Confluence, aren’t all that old.) This was a problem then (and for many, it remains a problem now): The solutions to our development needs were viewed as separate entities. In addition, we just didn’t seem able to settle on how to use these separate items (which in the past, didn’t integrate well, if at all) in a way that actually helped rather than hindered.

In the early days of my career I almost always had an email inbox full of hundreds of items, some flagged, some marked as unread (so I wouldn’t forget), and some in the trash folder—either placed there deliberately or by accident. My monitor was littered with yellow sticky notes. A pad of paper next to my keyboard was packed with doodles, stars, asterisks, double and triple-underlines. If something was REALLY important, I may have drawn a box around it. For the REALLY REALLY important items, I drew two boxes. And for the REALLY REALLY REALLY important ones… You get the picture.

What a nightmare!

And let’s not forget the weekly status reports! I’m getting flustered just thinking about them! Well-meaning managers asked for status reports every Friday afternoon. When Friday rolled around, I often found myself digging through emails and flipping the pages of my notebook in an attempt to recall what I had worked on. Who has time to fill in a status report in increments each day when distracted by getting actual work done?  I loathed these reports. And I say this without hesitation, as I know I am not alone.

Inevitably, some line item would appear on the list with my name next to it. “Matt,” the project manager asked, “how are you coming with the doodad that implements the whatchamacallit?”

The other day someone asked me this question: “What are your thoughts on the difference between task-oriented and linear-oriented?” It was an open-ended question, so I wasn’t sure if it was with regard to software functionality or design and development in general. I’m still not sure what the question means, but I assume it has something to do with the difference between viewing processes as a long line of things to be done in a sequence (with gates between each) and doing things as standalone tasks. Of course, even if something is ‘task-oriented’ there are still dependencies—things that require linear completion. This is why we write tickets that allow us to create a hierarchy of dependencies.

“Uh…” It seemed to happen every time! I was caught off guard by some entirely surprising task. My face felt a little warm, as I struggled to recall just what the hell the project manager was talking about. “Are you talking about the gizmo that ties in with the canootinator?”

“No! Not that… You remember, I sent you an email about the whatchamacallit and the doodad last week. It’s in my sent items. I’ll pull it up right now.”

Sure, I felt stupid. Maybe I should have. The truth of the matter, however, is that reliance on ineffective communication mechanisms is what led to this (not just for me, but for others as well). Sidenote: being a software developer requires that one feel stupid every so often. It’s part of the job.

As I write this, these recollections seem like the distant past—but the reality is that it wasn’t all that long ago.

(DOORS lives on as an IBM product. I have not used it in years, so I cannot speak to its current state of being.)

We’ve come a long way since then. Mostly. Maybe. Not all of us.

The subject I wish to write about today is effective utilization of tickets. It doesn’t matter what your development process is: Agile, Scrum, Kanban, some other iterative approach. Whatever the design and development approach, it’s high time to scrap the emails, notebooks, and sticky pads. Need I even mention that it is time to abandon the weekly status reports? (I’m not talking about sprint planning or standups when I say this.) Whatever the process happens to be, granularity of tasks, detail, and communication remain absolutely necessary (and email does not qualify as effective communication).

Should someone—from your boss to your boss’s boss to a well-meaning coworker—come to the door of your office, cubicle, desk, or couch with the expectation that a conversation is locked in as a done deal upon leaving, shame on all involved!

The same goes for IM and text messages. None of the above forms of communication should ever be considered a final lock-in of a task. There is one and only one way that work items—any work item—should be recorded: the ticket management system.

There are many to choose from, some better than others. The poorly-named Bugzilla is a fine choice, and Team Foundation Server, Jira, Redmine, and Trac are all great options. Redmine and Trac are entirely free, and they are great tools with plugins for everything under the sun. My personal preference is Redmine, but this could be a bias simply because I have used it so much. Before I continue, let me make one thing clear: let’s never again refer to a ticket management system as a ‘bug tracker.’ NAY! To call it a defect tracker leads us back to square one. A defect is just one category of ticket. It may be something of high priority—but all tickets should have an associated priority.

There are a few integrations that I consider absolutely necessary. To make a ticket management system effective, it must integrate with:

  • The CI (continuous integration) build

CI should integrate with version control. It should pester the team when a build breaks. It should make it clear which changeset broke the build, and the changeset should point us to the ticket that prompted it.
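To make that linkage concrete, here is a minimal sketch (in Python, and purely hypothetical) of how a CI server might map a breaking changeset back to its ticket. It assumes the team convention of referencing tickets in commit messages as “refs #123”, “fixes #123”, or “closes #123”:

```python
import re

# Assumed team convention: commit messages reference tickets as
# "refs #123", "fixes #123", or "closes #123".
TICKET_RE = re.compile(r"\b(?:refs|fixes|closes)\s+#(\d+)", re.IGNORECASE)

def tickets_for_changeset(commit_message):
    """Return the ticket numbers referenced by a commit message."""
    return [int(num) for num in TICKET_RE.findall(commit_message)]

def build_break_report(commit_hash, commit_message):
    """Compose the nag CI would send to the team when a build breaks."""
    tickets = tickets_for_changeset(commit_message)
    if tickets:
        refs = ", ".join("#%d" % t for t in tickets)
        return "Build broken by changeset %s (see ticket(s) %s)" % (commit_hash, refs)
    return "Build broken by changeset %s (no ticket referenced!)" % commit_hash
```

Notice that the report itself exposes a missing ticket reference, which tends to shame the habit out of a team rather quickly.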

  • Email

Wait—didn’t I just say we need to scrap email? I did. Email is a good prompt, however, for team members to see new and changed tickets. For a small team, I like to see all of those emails, even if the task is not related to me. It’s good to know what others are doing.

  • Version Control

Changesets must be concise, and everything we check in or commit to version control should tie back to a ticket (and by everything I mean anything—not just code: configuration files, documents, spreadsheets, etc.).
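If you want to enforce the tie between commits and tickets rather than merely encourage it, a hook is one way. Here is a minimal sketch of a Git commit-msg hook; the “#123” pattern is an assumed team convention, not anything Git itself requires:

```python
#!/usr/bin/env python
# A minimal sketch of a Git commit-msg hook that rejects commits
# lacking a ticket reference. Install as .git/hooks/commit-msg and
# make it executable. The "#123" pattern is an assumed convention.
import re
import sys

TICKET_RE = re.compile(r"#\d+")

def check_message(message):
    """Return True when the commit message references a ticket."""
    return bool(TICKET_RE.search(message))

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path of the file containing the commit message.
    with open(sys.argv[1]) as handle:
        if not check_message(handle.read()):
            sys.stderr.write("Commit rejected: reference a ticket, e.g. 'refs #123'.\n")
            sys.exit(1)
```

Subversion and Mercurial offer equivalent pre-commit hook points, so the same idea ports to whatever version control you use.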

  • Existing workflow (and if we have to shoehorn the concept of tickets into the workflow, perhaps it is time to rethink things.)

I’ll keep this on-point, as much as I can, as I see a great deal of overlap among the necessities (and for this blog I’ll be writing separate posts, lest my faithful readers become distracted while reading too many words).


The other day my daughter wanted to heat up some soup in the microwave. She insisted on doing it herself. The lid of the Campbell’s Soup can was the type with a tab that can be opened without a can opener. She stood in front of her mother as she attempted to open the can, wrestling with it a little. “Lift the tab up, and then pull on it,” her mother instructed. She added, “I really think you should open it in the kitchen over the sink.”

My daughter struggled with the lid, but still didn’t want help–she wanted to prove to us and to herself that she was capable. Soon enough Campbell’s Double Noodles were spilled all over the floor and her mother. Oops.

Did we get mad? No way! How could we? We knew the possible outcomes, but we also knew that we had to allow our daughter to figure this out. My daughter learned a few things in this situation:

  1. How to open a can of soup.
  2. What can go wrong if you tip the can sideways while opening.
  3. Why she should have done it over the sink.


Boss vs. Leader

I’m not sure where this image originated from, but I like it.

Do Not Flounder (Stay Un-bored)

The following is an article that I am working on for a yet-to-be-determined publication. Having done this before, I will say that getting an article published in a journal/magazine isn’t as difficult as one may think (as long as you have something to say). This hasn’t been proofed, so please forgive any typos or errors. This article is not about the role of management. I would like to follow up on that subject, because I do think management can and should help to produce great software engineers. I’ve seen otherwise good programmers flounder under poor management. It happens. This article is about the role of the developer, the individual contributor, in making sure that his or her career starts off right and continues to grow.

One more note: the original version of this post was written in about 2 hours and was full of errors. I’ve applied a number of corrections, but it will likely be further edited before publication.

I have no idea whether or not most developers using Agile have actually read the “Agile Manifesto.” Here it is:

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on
the right, we value the items on the left more.

This post is more about career growth than Agile, but somehow I think the Agile approach is relevant here. Agile, as we know it, is an approach to software design. It can also be an approach to managing one’s own career.


A friend of mine recently said this: “Your attitude determines your altitude.” Although he was speaking in a general sense, I couldn’t help but think of the application of this motivational advice in relation to software engineering. It is relevant because as software engineers, our career growth is in our hands—perhaps to a greater degree than in any other field.


Success as a Technical Lead – Article

I stumbled upon this article today. It’s a little dated (from 2008), but still relevant. I don’t think the list is comprehensive, and I certainly think other technical leads would have varying opinions on things. All of the points listed are good, but a few really stand out:

6. Be part in the design of everything
This does not mean do the whole design. You want to empower team members. But your job is to understand and influence each significant subsystem in order to maintain architectural integrity.

7. Get your hands dirty and code
Yes you should take parts of the code and implement them. Even the least glamorous parts. This will help you not getting stuck alone between management and the team. It will also help you gain respect in the team.

20. Don’t blame anybody publicly for anything
In fact as a tech lead you cannot blame anybody but yourself for anything. The moment you blame a team member in public is the moment when the team starts to die. Internal problems have to be solved internally and preferably privately.

24. Mentor people
It is your job to raise the education level of your team. By doing this you can build relationships at a more personal level and this will gel the team faster and harder. It is very effective with more junior people in the team but there are ways to approach even more senior members, just try not to behave like a teacher.

25. Listen to and learn from people
Even if you are the most experienced developer on the team you can always find new things and learn form others. Nobody is a specialist in everything and some of your team members can be domain specialists who can teach you a lot.

28. Be sure you get requirements and not architecture/design masked as requirements
Sometimes business people fancy themselves architects, sometimes they just speak in examples. They can ask for technology XYZ to be used when what they really mean is they want some degree of scalability. Be sure to avoid hurting feelings but be firm and re-word everything that seems like implied architecture. Get real requirements. To be able to do this you have to understand the business.

36. React to surprises with calm and with documented answers
Never get carried away with refuses or promises when confronted with surprises. Ask for time to think and document/justify your answers. It will make you look better and it will get you out of ugly situations.

A theme throughout the list, and throughout a number of similar books and articles with such advice, is that a good technical lead appreciates and values the various talent and particular skills of the team. A great technical leader isn’t necessarily the “know it all” of the group. He or she should certainly be skilled and eager to maintain that skill–and even be a great developer. But smartest person in the room? Maybe. Maybe not. Personally, I like working around people who are smarter than me. This is the best way to learn.

And there’s a flip side to number 20: don’t blame people publicly for problems, but be quick to praise people for successes, major and minor. A sense of recognition for one’s diligence is a tremendous motivator. I don’t know a single person who doesn’t appreciate kudos. Most parents realize that their children respond better to positive reinforcement than negative… This doesn’t change when one reaches a certain age. I’m not suggesting that a team member never be confronted about problems. Of course he or she should be (and must).

Very recently a company-wide email spoke of a major success of mine (the successful deployment of a year-long project), and mentioned me by name. It felt great, and it made me want to continue with even more success (and it was a great confidence boost). Simply put, it’s good to know that the folks at the top of the organization are aware and appreciative of the work of those in the trenches!

This all may sound like a lot of feel-good fluff. It isn’t.

Little Tutorials: 36 Steps to Success as a Technical Lead

Version Control/Wiki Control

I FIRMLY believe that documents related to a project should be managed in the same version repository as the source code. This gives us a snapshot in time of all items related to a project. The problem with this comes when using a wiki (and I love using wikis, so don’t get me wrong). There is no way, if we are using a wiki page for specs, requirements, etc., to link a wiki instance in time to a Subversion (Git, Mercurial, whatever) instance in time.

And I don’t think we would want such a feature anyway. A wiki covers many projects and many team needs, not just the needs of a small group of programmers on a single project. I can’t imagine “rolling back” an entire wiki to a given snapshot.

I wonder if there are any clever ideas out there for handling a need such as this. I can see the possibility of pre-commit hooks being used to label wiki pages, but this seems cumbersome (if not entirely unmanageable). The other solution is to rely on the wiki only for collaboration and not for official documentation of any kind. This approach, unfortunately, cripples much of the power of using a wiki.

I am open to ideas.

6 Developers, One Room

Under an extremely tight deadline one team member decided that it would be best if the developers took over a conference room. On a long conference room table there are 6 computers, and 6 extremely talented developers chat, joke, brainstorm and work away. The manager of the team is there too, explaining requirements and helping to clarify definitions and functionality.

There is pizza, too.

It’s like that scene in Apollo 13 where the engineers have to figure out how to get the crew back to Earth. Ideas bounce freely and communication is immediate. I don’t have to wait for a response to an email or a response in a chat window (which may or may not come). And there’s something about sharing a space with a common goal: the team seems to gel. There is little or no arguing or passive-aggressive commentary as I have seen in meetings throughout my career. We’re all in this together, after all.

I’ve never seen software written with such efficiency.

Appropriate Checkin Comments

I read a post today with a list of funny checkin comments. Some of them are funny simply because of the lacking description. Here are some comments I’ve seen in my personal experience:

  • many small changes
  • Microsoft IE sucks!
  • cleanup
  • oops
  • fix the bug

Worse, I’ve seen entirely empty changeset comments.

The above list, along with those found on the funny checkin comments page, provides some examples of inappropriate commit comments. Why? They are unprofessional and lacking in detail and meaning. Some projects are audited and reviewed by external third parties. As a project manager or architect, would you be embarrassed for an auditor to see the comment “fix sucky code”? I would. Even worse than the embarrassment, there is a productivity problem that can arise from poor checkin comments.

What is an appropriate comment? An appropriate comment must (minimally) have a few things:

1. Appropriate level of detail about the change, including why the change was made, what impact there may be, etc.
2. Scope appropriate to the changeset. Along with this, a single changeset should, as much as possible, reflect a single ticket or change. Many lazy developers check in a large set of code with a number of intertwined, unrelated changes. When it comes time to revert a particular change or track a defect, this creates problems, and ultimately it defeats one major purpose of version control.
3. Details about the completeness of the change. Generally a changeset should complete a ticket or work item, but this is not always the case. If there is remaining work to be done, “TODO” items or further functional requirements that impact the changeset, this should be noted.
4. Finally, and perhaps most important, the checkin comment should refer to a ticket. Not all changesets have tickets written, sure, but in general, if the change is a defect fix, enhancement, or requirement implementation, there should be one or more related tickets. Any modern version control and ticket system will be able to tie these together.
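On the ticket-system side, point 4 can be sketched as a small parser. The keywords here (“refs” merely links a changeset to a ticket, while “fixes” or “closes” also resolves it) are an assumed convention, similar in spirit to what Trac and Redmine integrations support:

```python
import re

# Assumed convention: "refs #N" links a changeset to ticket N,
# while "fixes #N" or "closes #N" also resolves the ticket.
ACTION_RE = re.compile(r"\b(refs|fixes|closes)\s+#(\d+)", re.IGNORECASE)

def parse_ticket_actions(comment):
    """Map each referenced ticket number to the requested action."""
    actions = {}
    for keyword, number in ACTION_RE.findall(comment):
        verb = "resolve" if keyword.lower() in ("fixes", "closes") else "link"
        actions[int(number)] = verb
    return actions
```

A comment like “Fixes #10, refs #11” would resolve ticket 10 and simply annotate ticket 11 with the changeset, which is exactly the kind of automatic traceability that makes the ticket system worth the discipline.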

One developer writes:

Many developers are sloppy about commenting their changes, and some may feel that commit messages are not needed. Either they consider the changes trivial, or they argue that you can just inspect the revision history to see what was changed. However, the revision history only shows what was actually changed, not what the programmer intended to do, or why the change was made. This can be even more problematic when people don’t do fine-grained commits, but rather submit a week’s worth of changes to multiple modules in one large pile. With a fine-grained revision history, comments can be useful to distinguish trivial from non-trivial changes in the repository. In my opinion, if the changes you made are not important enough to comment on, they probably are not worth committing either.

Getting developers to write good checkin comments seems to be an ongoing battle. In the business of writing software, it’s easy to convince oneself that checkin comments are a waste of time. The real waste of time comes later, when trying to track the introduction of a defect or trace requirement implementation to code. There is simply no good excuse for lacking checkin comments.

Is the checkin comment “cleanup” appropriate? Yes, in some cases, as long as it’s true. If I am cleaning up the formatting of code (indents, spacing, correcting whitespace), then yes, “cleanup” is an appropriate changeset comment. Generally, however, real comments are required.

[Vistamix: The Humor of Code Checkin Comments]
[Loop Label: Best Practices for Version Control]

Test-Parallel Development

Here’s a post (albeit dated) where a developer lists a few problems with test driven development. There are plenty more where that came from. What I’ve found works better is a hybrid approach, where we write tests at the same time as code (or just after). The idea behind pure TDD is one of those that sounds good on paper but is difficult to implement practically. Developing to the test means that we abandon some of the best parts of Agile by again tying our hands to strict requirements (this time the requirements are automated tests that don’t work until the code required is implemented). While I am a big supporter of functional automated tests and their inclusion in CI, I don’t think pure TDD is practical. A much better approach is to write functional code and tests together.

The biggest problem I have with TDD is included on the Wikipedia entry on the subject:

Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.

Fakes and mocks are fine, but I prefer to spend more time implementing tests that run against real-world conditions. Also, almost all applications that I work on include a UI and/or database. Often, database and UI design occurs alongside all other development.

Taking HTMLUnit as an example, how often do we know what form and input names will appear on a page before we implement it? The same is generally true of database design. In an ideal world, pure TDD would be a great approach. In the real world, where I work, I need more flexibility. This being said, I think most software teams aren’t anywhere near this being a problem. Most have yet to spend appropriate time on automated tests.
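For illustration, here is the test-parallel idea in miniature: a function and its test written side by side, evolving together rather than the test being written (and failing) long before the code exists. The example itself is hypothetical, and in Python rather than Java, just to keep it self-contained:

```python
import unittest

# The function and its test live in step with each other: as the
# implementation grows a new behavior, a test for it appears alongside.
def parse_version(text):
    """Parse a 'major.minor.patch' version string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError("expected major.minor.patch, got %r" % text)
    return tuple(int(p) for p in parts)

class ParseVersionTest(unittest.TestCase):
    def test_parses_three_components(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_version("1.2")
```

The point is not the parser; it is that the test documents the behavior the moment the behavior exists, without forcing you to guess at form names, schemas, or signatures before they are designed.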

Ticket System as a Trigger for Peer Reviews

Somewhat recently I was thinking about what ticket status might be appropriate when using issue tracking for all tasks from functional requirements to documentation to defect tracking. It got me thinking about the need for peer reviews of code and how tedious these reviews can be. It turns out there is at least one plugin for Trac that includes hooks for annotation of code for the sake of peer review. It does not, however, appear to include any kind of formal sign-off capability.

I started thinking that it would be nice to have a plugin for peer reviews (for Trac or Redmine or whatever). If we define our workflow in a manner that makes the peer review process an integral part of it, however, we can probably simplify things. Do we really need a plugin, or can we simply use an “In Review” status to achieve the same thing? I suppose the answer to this depends on how strict you want to be.

Here’s what I’m thinking with regard to the history of a ticket (or issue or task or work item or whatever we choose to call it):


  • New
  • In-Progress
  • Resolved (or, if we determine that a ticket should not be completed, we have alternatives, such as deferred, rejected, duplicate, etc.)
  • In-Review
  • Closed


With a setup such as this, we can use the Resolved status as an indicator that an issue has been completed but is not yet ready to be closed. Tickets are only closed when appropriate peer review actions have been taken. Who determines what these actions are? That is up to the project manager (or the team lead), and it is enforced by proper routing of the ticket. Each individual responsible for peer review is assigned the ticket. Seeing the “In-Review” status, this colleague reviews the code changes (observing the changeset that is attached to the ticket) and makes comments (in the ticket notes).
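The workflow above can be sketched as an explicit state machine. The statuses and allowed transitions are the ones proposed in this post; any real tracker (Redmine, Trac, etc.) lets you configure something equivalent, often per role:

```python
# Allowed status transitions for the proposed ticket workflow.
TRANSITIONS = {
    "New": {"In-Progress", "Rejected", "Duplicate", "Deferred"},
    "In-Progress": {"Resolved", "Deferred"},
    "Resolved": {"In-Review"},
    "In-Review": {"Closed", "In-Progress"},  # back to In-Progress if review fails
    "Closed": set(),
}

def advance(current_status, new_status):
    """Validate a status change; raise if the workflow forbids it."""
    if new_status not in TRANSITIONS.get(current_status, set()):
        raise ValueError("cannot move %s -> %s" % (current_status, new_status))
    return new_status
```

Note that the only path into Closed runs through In-Review, which is precisely how the workflow itself enforces that no ticket is finished without a review.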

I know this sounds like a bit of legwork, but I see a few major benefits of an approach like this:

  • Tracing – We now have an audit log of all peer review comments. Using our ticket system with configuration management integration, tickets, changesets and review comments are linked together and not lost in some email thread or document somewhere.
  • Time Savings – Anyone who has ever sat through a peer review (and I’m guessing most project managers and developers have) knows how insanely time consuming they can be. Because nobody ever seems to have time, we attempt to save time by doing a large review of code; we wait for a long time, and then we are faced with a peer review involving an overwhelming amount of code. This leads to the next benefit…
  • Better Focus of Reviews – I don’t know about you, but I find that I am much better at reviewing a smaller amount of code or a single functional area than attempting to review thousands of lines of code all at once. We’re all busy, and this isn’t going to change. What happens when you find out that you have a peer review at the end of the week and you have to read through and mark up 5 class files? Do you set aside everything you are working on and do it? You try, but time is short, and so you hurry.
  • Communication – When I take the time to review a changeset, it benefits both the team and the individual performing the review. Now I am better informed about what others are working on, where it is implemented, how it is implemented, etc. I don’t have to go bug Joe the Developer to ask him if he finished such-and-such. I already know that he did because I reviewed his code.

This all assumes that our team follows good project management when it comes to the handling of issue tracking and version control. It means that we have to have well organized tickets and we have to commit changesets in some meaningful fashion. This should be a no-brainer.

Is the Software Medical Device World Ready for Agile?

To begin with, I don’t see any real reason why software medical device manufacturers should fear Agile. I do, however, see some stipulations that need to be made. Here is a rather dated article on the subject (from 2007): Agile Development in an FDA Regulated Setting.

The author of the blog post concludes:

It seems to me that Agile methodologies have a long way to go before we see them commonly used in medical device software development. I’ve searched around and have found nothing to make me think that there is even a trend in this direction. Maybe it’s that Agile processes are just too new. They seem popular as a presentation topic (I’ve been to several), but I wonder how prevalent Agile is even in mainstream software development?

Since the article was written (4 years ago), Agile has clearly gained a solid foothold in mainstream software development. With companies bound by FDA medical device guidelines, however (or even IEEE, ISO 9001, ISO 13485), there may be some understandable fear of new approaches. What seems to happen is that the known process becomes the only trusted process, and adoption of anything new leads to so many questions that it is simply pushed aside (regardless of the potential benefit to the company).

The “twelve principles” of the Agile Manifesto include:

  • Customer satisfaction by rapid delivery of useful software
  • Welcome changing requirements, even late in development
  • Working software is delivered frequently (weeks rather than months)
  • Working software is the principal measure of progress
  • Sustainable development, able to maintain a constant pace
  • Close, daily co-operation between business people and developers
  • Face-to-face conversation is the best form of communication (co-location)
  • Projects are built around motivated individuals, who should be trusted
  • Continuous attention to technical excellence and good design
  • Simplicity
  • Self-organizing teams
  • Regular adaptation to changing circumstances

Uh oh. A few of these principles are very likely to send upper management, at least those that are used to their traditional waterfall SOPs, running for the door. But who says we can’t make modifications where we need to?

I suspect that much of the resistance to Agile methodologies is closely tied to a fear of change. Upper management trusts that which they know, despite some of the obvious shortfalls.

Valuable Unit Tests in a Software Medical Device, Part 9

I thought I was done, but here is yet another good reason to incorporate complex function automated testing: Validation of multiple Java runtime environments. Fabrizio Giudici proposes this as a solution for testing with Java 7, but we can always take it a step further, verifying multiple OS environments as well. Of course, this requires that we have those build environments available (easy enough, using virtual machines).
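One way to keep such a matrix manageable is to declare the environments in one place and let the CI loop stay mechanical. The sketch below is hypothetical (the JDK paths and VM labels are invented); the point is only the shape of the idea:

```python
import itertools

# Hypothetical install paths and VM labels; a real setup would list
# whatever runtimes and OS images the product must be validated on.
JDKS = ["/opt/jdk6", "/opt/jdk7"]
OSES = ["ubuntu-vm", "windows-vm"]

def build_matrix(jdks, oses):
    """Return one (jdk, os, command) entry per environment combination."""
    matrix = []
    for jdk, os_label in itertools.product(jdks, oses):
        command = "ssh %s JAVA_HOME=%s ant test" % (os_label, jdk)
        matrix.append((jdk, os_label, command))
    return matrix
```

Adding a new runtime or OS then means adding one line to the matrix, and every functional test automatically runs against it on the next build.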

The V-Model Approach: Just a Fancy Waterfall

I read an article at MEDS Magazine about Stryker’s software development process. Stryker is a large and very successful company, so I was a bit surprised to learn that they have had success with the V-Model in software development. In my humble opinion, the V-Model is simply a glorified waterfall approach, and we’ve seen time and again that the waterfall model is not a good method to attempt.

I don’t believe that most medical device software can thrive in an XP or purely lightweight Agile environment either. I believe that we need Agile/Scrum methodologies with some rules and more heavily weighted up-front requirements and design. I’d like to see those of us in the software medical device community come up with our own Scrum, and this may be something that a number of people are eager to collaborate on.

We need to recognize that fixed requirements scope is simply not a reality, in medical devices or otherwise. In the book Agile Software Requirements, Dean Leffingwell points out:

This “fixed requirements scope” assumption has indeed been found to be a root cause of project failure. For example, one key study of 1,027 IT projects in the United Kingdom [Thomas 2001] reported this: “Scope management related to attempting waterfall practices was the single largest contributing factor for failure.”

(i.e., There is no such thing as a fixed requirements scope!) Need we further reason to abandon the waterfall model? Leffingwell offers even more. According to the oft-cited Standish Group “Chaos report” survey [Standish, 1994]:

  • 31% of software projects [using waterfall approach] will be canceled before they are completed.
  • 53% of the projects will cost more than 189% of their estimates.
  • Only 16% of projects were completed on time and on budget.
  • For the largest companies, completed projects delivered only 42% of the original features and functions.


I’ve never observed a true waterfall in practice. I’ve only seen it attempted. Inevitably, there are modifications to the process, as the team and management realize that there is a need to revisit requirements and design. I don’t care how much time is spent (wasted) on up-front design… It will never be enough, and there WILL be a need to revisit those stages that the waterfall model insists should have been locked down.

[MEDS Magazine: Developing Safe and Effective Medical Device Software]
[Amazon: Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise. Dean Leffingwell]

Brain Rules: Why a Daily Standup Should Be in the Afternoon

I’m reading a book right now called Brain Rules. In it, the author discusses how, after a number of hours sitting in front of a computer, our brains literally start to call it quits for the day. This happens long before our typical 8-hour workday is up. If your job involves something that requires intense focus, such as writing software, you’re already well aware of this.

I wasn’t thinking of this when I planned my team’s daily standup meetings at 2 in the afternoon. Perhaps a bit selfishly, I was just thinking of when I like to have a break in my work; it turns out this is the time of day when I feel the need for a change of pace.

I’ve found that when I am head-down in programming, the worst thing to deal with is an interruption. I’ve also found that I can only be hyper-focused on programming for 4-5 hours at a time (at best). After a while I just start to get tired, and it’s time for a break. The book Brain Rules offers some interesting insight into why this is.

Back to the stand up meeting: Long ago, when I first heard of daily “stand ups” I was alarmed. The last thing I needed was yet another meeting to interrupt an already busy day. What I didn’t understand was the fact that a daily stand up meeting achieves a few important things:

1. It actually reduces interruptions. People who may interrupt otherwise are encouraged to put the interruption off until the meeting.

2. It encourages work. I, and others, always feel like we want to have something to say at the stand up meeting.

3. It discourages long meetings. If the owner of the meeting (the team lead) is wise, he or she will insist that the meeting last no longer than 20 minutes.

4. It provides a much needed break in the afternoon, and an opportunity to refocus. Sometimes, if you live somewhere beautiful like North Carolina, it’s even a good idea to make the stand up meeting a walk outside. A little exercise and fresh air works wonders after sitting focused on a computer screen for hours.

[Amazon: Brain Rules]

Continuous Integration on Software Medical Device Projects, Part 9

Using a CI Environment to Replace the Traditional DHF

Naturally, an important part of continuous integration is having a CI build that can be checked regularly for continued build success. This is probably what is most commonly thought of as the key benefit, but there is much more to be gained. Any continuous integration environment that is worth using will allow the team to incorporate packaging of key project items with each build: important documents, tests (both manual and automated test outcomes can be packaged), requirements, design specifications and build results (deployment packages, libraries, executables, installers, etc.). The important thing to note here is that, used wisely, the CI environment can provide a snapshot of all project outputs at any given point in time. Hopefully it is becoming clear that this gives us the possibility of automated DHF creation. Not only that, we have a much more detailed DHF throughout the life of a project, and not merely at the point in time at which a particular freeze was performed.


The continuous integration server should include unit tests (and by unit tests, I mean functional level automated tests) that provide a level of self-testing code such that any build that fails to pass these tests at build time is considered a failed build.


Continuous integration output need not (nor should it) package only built objects. We can leverage CI build integration with our version control system to package everything required per our design outputs (21 CFR 820.30(d)), design review (21 CFR 820.30(e)), design verification (21 CFR 820.30(f)), design validation (21 CFR 820.30(g)), design transfer (21 CFR 820.30(h)), design changes (21 CFR 820.30(i)) and even our design history file (21 CFR 820.30(j)).

Read it all:

[CI on Software Medical Devices, Part 1]
[CI on Software Medical Devices, Part 2]
[CI on Software Medical Devices, Part 3]
[CI on Software Medical Devices, Part 4]
[CI on Software Medical Devices, Part 5]
[CI on Software Medical Devices, Part 6]
[CI on Software Medical Devices, Part 7]
[CI on Software Medical Devices, Part 8]
[CI on Software Medical Devices, Part 9]

Continuous Integration on Software Medical Device Projects, Part 8

Build Script Creation

Ant should automatically determine which files will be affected by the next build. Programmers should not have to figure this out manually. While we will use an IDE for most development, we must not rely on the build scripts that are generated by the IDE. There are a few reasons for this:

  • IDE-generated build scripts are not as flexible as we need them to be (it is difficult to add, remove and modify build targets).
  • IDE-generated build scripts often contain properties that are specific to the environment in which they were generated. As a result, something that builds fine in one work environment may not build when committed and used by the CI build, or when pulled into another environment.
  • IDE-generated build scripts very likely lack some of the necessary build targets.
  • IDE-generated build scripts may rely on the IDE being installed on the build machine. We cannot assume that this will be the case.

The Ant buildfile (build.xml) should define correct target dependencies so that programmers do not have to invoke targets in a particular order to get a good build.

Triggering Builds

As noted, we will use Jenkins-CI to automatically perform a CI build every hour if there is a change in the repository. The system will be configured to send emails to the development team if there is a problem with the build (i.e., if a changeset breaks the CI build). It is anticipated that the CI build will break from time to time; however, a broken build should not be left unattended. A broken CI build indicates a number of possible problems:

  • A changeset didn’t include a necessary library or path.
  • A changeset caused a problem with a previous changeset, and the merge of the changes must be addressed.
  • A unit test failed.
  • The CI build server has a problem.
  • The build script failed to build the new changeset (missing library or required inclusion).

In my experience, the most common cause of a broken CI build is a lack of attention to the build script. Each developer is responsible for making certain that the Ant build scripts are up to date with all required changes. We cannot rely on the build scripts that are generated by an IDE. There are certainly more possible causes that could be added to the above list. It is a good idea for each developer to trigger a CI build immediately following any Subversion commit to ensure that the CI build has not been broken. If a CI build remains broken without being addressed, the team leader and/or project manager may revert the offending changeset and re-open any related issue.

Continuous Integration on Software Medical Device Projects, Part 7

Build Labeling

A build is labeled with a predetermined version number (e.g., “2.0”) and with a Subversion changeset number. The beauty of this is that we have a build that is associated with a particular changeset and, by association, an entire set of project documents and sources (as long as we put everything in a single version control system). Once again it should be clear how beneficial such a setup is when thinking in terms of a DHF. No longer do we have to assign a particular team member to fumble through documentation, ensuring that the proper documents are copied to some folder. In fact, we have very little overhead; our CI server did all the heavy lifting for us!

Mixed Revisions are BAD!

The changeset number is placed in a property file at build time (by the Ant build task). If the changeset number has the letter M at the end (e.g., 3001M), the currently checked out fileset is a “mixed” revision.

The current working copy changeset number is easily viewable in TortoiseSVN or at the command line with the svnversion command. It is expected that during development and testing a mixed revision will be used at almost all times. However, any final build must not have the letter M in the build number.

If the letter M does appear in the build number of a formal build, it indicates that there are items in that build that are not up-to-date in the repository, and therefore the build cannot be duplicated using only a changeset or tag. To avoid this, the continuous integration server should be used to create formal builds, and it should be configured to use only a current changeset and no locally modified files.
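A minimal sketch of such a guard follows (illustrative only; in practice the check could live in the Ant build or the CI job itself). It treats anything other than a single clean revision number as non-releasable:

```java
// Illustrative guard for formal builds: accept only a clean, single-revision
// svnversion string. A trailing "M" marks local modifications; a colon
// (e.g., "2999:3001") marks mixed revisions. Either makes the build
// impossible to reproduce from a changeset or tag alone.
public class RevisionCheck {

    /** Returns true only for a clean, single revision such as "3001". */
    static boolean isReleasable(String svnVersion) {
        return svnVersion.matches("\\d+");
    }

    public static void main(String[] args) {
        System.out.println(isReleasable("3001"));      // true: clean checkout
        System.out.println(isReleasable("3001M"));     // false: local edits
        System.out.println(isReleasable("2999:3001")); // false: mixed revisions
    }
}
```

Failing the formal build outright when this check fails is the simplest way to guarantee that every release can be rebuilt from a changeset or tag.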

Ant Build Targets

Build.xml is the Ant build file. It is located at the root level of the project source tree. The following build targets are necessary:

  • build – Compile all code, creating .class files
  • dist – Call the build target and package all code and necessary configuration files, creating a .war file (web application archive).
  • test – Execute automated tests
  • clean – Clean an existing build and dist (remove previously built items)
  • javadoc – Generate Javadoc
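A minimal build.xml along these lines might look as follows. This is only a sketch: the project name, paths and target wiring are assumptions, with dist depending on test, which in turn depends on build:

```xml
<project name="example" default="dist" basedir=".">
    <property name="src" location="src"/>
    <property name="build" location="build"/>
    <property name="dist" location="dist"/>

    <target name="clean" description="Remove previously built items">
        <delete dir="${build}"/>
        <delete dir="${dist}"/>
    </target>

    <target name="build" description="Compile all code, creating .class files">
        <mkdir dir="${build}"/>
        <javac srcdir="${src}" destdir="${build}" includeantruntime="false"/>
    </target>

    <target name="test" depends="build" description="Execute automated tests">
        <!-- haltonfailure ensures a failed test means a failed build -->
        <junit haltonfailure="true">
            <classpath path="${build}"/>
            <batchtest>
                <fileset dir="${build}" includes="**/*Test.class"/>
            </batchtest>
        </junit>
    </target>

    <target name="dist" depends="test" description="Package code and configuration">
        <mkdir dir="${dist}"/>
        <war destfile="${dist}/example.war" needxmlfile="false">
            <classes dir="${build}"/>
        </war>
    </target>

    <target name="javadoc" description="Generate Javadoc">
        <javadoc sourcepath="${src}" destdir="${dist}/javadoc"/>
    </target>
</project>
```

Because the dependencies are declared on the targets themselves, `ant dist` alone produces a clean, tested, packaged build no matter which state the tree was in.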

Continuous Integration on Software Medical Device Projects, Part 6

Build Scheduling
Jenkins-CI allows teams to set up a project so that a build is performed whenever a change is committed through a version control system. The “Poll Version Control System” option is selected to do this. From there, the team must set up a schedule so that Jenkins will know how often to poll the version control system. Jenkins can be scheduled to poll monthly, daily, hourly, or even every minute. However, I would recommend building hourly (if and only if there has been a change committed).
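For reference, the polling schedule field takes a cron-style expression; hourly polling might look like this (a sketch; the field name and accepted syntax have varied somewhat across Jenkins versions):

```
# Poll the version control system at the top of every hour.
# A build runs only when a new changeset has been committed.
0 * * * *
```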
In the project view, there is a list of the build history. I’ve set up build scripts and artifact archiving so that you can get the build at any point. The Java archive files will often be set up to be self-contained. This just means that all the “stuff” that is needed to run the application (classes, libs, properties, etc.) is archived in that single file, and you can always grab it from the CI environment.

To get a current jar (Java executable), go to the build history. A build with a red circle next to it failed. A build with a blue circle is a good build. Click on the most recent successful build and you will see the build number, date and build artifacts. You can download the jar files from here.
This is how we will handle projects in general when someone wants to get a build. There will be more details on tags and so on going forward. We can modify the CI environment to include any artifacts that we want, such as Javadoc or documentation (as long as it is something that we can pull out of Subversion).

Builds are created locally by developers for a number of reasons; however, such builds must always be considered informal. Any build released for formal testing, at the end of a Sprint or iteration, or upon project completion must be done in the controlled build environment. Using self-documenting features of code (such as Javadoc), it can be wise to incorporate the output of this extra documentation into the DHF. Why not? There is little overhead in doing so and the benefits are substantial.

Continuous Integration on Software Medical Device Projects, Part 5

What I am proposing in this article is something that I, personally, have never done. In my positions as a software lead, architect, developer and software quality analyst, I have worked only with a DHF as a particular folder with specific subsets of documentation within. This approach has always resulted in a documentation nightmare. I’ve used many version control, issue tracking and continuous integration tools, but I have yet to take the leap to reliance on CI as the source for DHF creation. So while this proposal makes sense, it takes a bit of a leap from the traditional DHF mindset to attempt.

The screenshot here shows what a (simple) project setup may provide in the way of such packaging. It is up to the team to determine how much or how little the CI handles, but it makes the most sense to allow it to do what computers do very well and what humans tend not to do as well: align things.

The CI Environment
The continuous integration build server should closely mimic the environment in which the final product will be deployed. By doing this, a level of confidence is achieved with regard to system compatibility prior to user acceptance and integration testing. It must also be able (through the version control and/or ECM system) to access all the design controls and documents necessary to build a complete DHF. To this end, I propose using a single version control system for everything. It doesn’t make sense, for example, to store source code in one version control system and documents in another. To do so makes importing of all necessary items difficult, if not impossible. There are a number of benefits to utilization of a continuous integration server during project design and development. Do not think of CI as a tool only for software builds. Integrated with the project version control system, it can serve as much more.

  • Changesets tied to builds
  • “Changeset” is really a Subversion term. For the purposes of this chapter, a “changeset” is what happens any time a user commits a change (be it source code, documentation, graphics, etc.) to the version control system. The CI build should be configured to execute (i.e., build and package) the project when it detects that there is a new changeset.
  • Frequent Builds, Status Update and Rapid Feedback
  • The CI build gives the development team prompt feedback on the build status. If compilation fails, tests fail or some requirement element cannot be packaged, the entire team is flagged immediately. To this end, the entire team will know that a particular check-in has broken something. This feedback will eliminate the fear that an unknown break could be so extensive that progress will come to a screeching halt. The near real-time feedback of the CI build saves valuable time (and stress!) throughout development (and even design).
  • Project Progress Tracking (tickets, tests, etc.)
  • Improved communication
  • Feedback (peer review) is triggered by every changeset, each of which is easily viewable
  • Less overhead for communication
  • Improved team understanding of others’ work
  • Jenkins-CI is used for continuous integration builds. The CI build server runs Ant build scripts and reports results. It is expected that software developers follow the CI build server to make sure that any code commits do not break the CI build.
  • While the use of an IDE is expected, a development team must not rely on usage of the IDE-generated build scripts for any project.
  • If using Apache Ant for a project build, the development team should create (minimally) these build targets: clean, build, dist, test
  • Ant build scripts are no different than other project code in that they must be written clearly, follow standards, and be commented. Developers are expected to maintain build scripts. Any code commit that requires a change to the build must include those changes upon commit. This includes addition of a class, library, package, path, etc.

Continuous Integration on Software Medical Device Projects, Part 4

21 CFR Part 820 – DHF Requirements

820.3(e) Design History File (DHF) means a compilation of records which describes the design history of a finished device.
–Device Advice: Regulation and Guidance, Software Validation Guidelines, http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance

Medical device software is audited and controlled by standards defined by FDA, specifically 21 CFR parts 11 and 820. Many of the requirements laid out in this somewhat difficult-to-understand guidance can be made very easy, second nature even, when we use a continuous integration environment throughout the course of project design and development. Looking specifically at the quality system requirements laid out by 21 CFR Part 820.30, Subpart C – Design Controls, it becomes apparent that a good continuous integration environment can help us to address each. A major consideration, perhaps the major consideration, is the completeness of the Design History File.

820.30(j) Design History File. Each manufacturer shall establish and maintain a DHF for each type of device. The DHF shall contain or reference the records necessary to demonstrate that the design was developed in accordance with the approved design plan and the requirements of this part.
–CFR – Code of Federal Regulations Title 21. Subpart C – Design Controls, Section 820.30 Design Controls

The “or reference” part of this statement stands out. Traditionally, medical device manufacturers have thought of the DHF as a physical, self-contained item. But with a project of any complexity, it isn’t difficult to imagine how quickly a DHF may grow into an unruly mess of difficult-to-wade-through “stuff.” Why not simply leverage software tools to make the process seamless? Using a continuous integration build environment, development teams can pull together a baseline of all the elements of a DHF as frequently as they wish to; furthermore, they can do so with a degree of accuracy that cannot be achieved through the diligent (yet distractible) legwork of a pre-occupied team.

I propose that the DHF need not be a single physical or soft folder with duplicate copies of items. Leveraging the CI environment along with the version control system, it is a much better idea to think of the DHF as a snapshot of all relevant design outputs at a given point in time. To that end, the development team can have many snapshots of the DHF throughout the project lifecycle. To achieve this, they need simply define this process in their standard operating procedures and work instructions.

Continuous Integration on Software Medical Device Projects, Part 3

Jenkins CI

For the purposes of this article, the focus will be on one specific continuous integration build tool, Jenkins CI. This is one of the more popular (open source) tools available. Jenkins CI (the continuation of a product formerly called Hudson) allows continuous integration builds in the following ways:

1. It integrates with popular build tools (Ant, Maven, make) so that it can run the appropriate build scripts to compile, test and package within an environment that closely matches what will be the production environment.
2. It integrates with version control tools, including Subversion, so that different projects can be set up depending on project location within the trunk.
3. It can be configured to trigger builds automatically by time and/or changeset (i.e., if a new changeset is detected in the Subversion repository for the project, a new build is triggered).
4. It reports on build status. If the build is broken, it can be configured to alert individuals by email.

The above screenshot gives an example of what a main page for Jenkins CI (or any CI tool) may look like. It can be configured to allow logins at various levels and on a per-project basis. This main page lists all the projects that are currently active, along with a status (a few details about the build) and some configuration links on the side. These links may not be available to a general user.

Clicking any project (“job”) links to further details on the build history and status. This image provides us details on what the overview screen in the CI environment might look like, but it is at the detailed project level that we see the real benefit of packaging that can be performed by a well-set-up CI environment.

Continuous Integration on Software Medical Device Projects, Part 2

Continuous Integration refers both to the continuous compiling and building of a project tree and to continuous testing, releasing and quality control. This means that throughout the project, at every stage, the development team will have a build available with at least partial documentation and testing included. The CI build is used to perform certain tasks at certain times. In general, this simply means that builds are performed in an environment that closely matches the actual production environment of the system. In addition, a CI environment should be used to provide statistical feedback on build performance and tests, and to integrate with the version control and ticketing systems. In a development environment, the team may use a version control tool (e.g., Subversion) to link to tickets. In this way, any CI build will be linked to a specific changeset, thereby providing linkage to Issues, Requirements and, ultimately, the Trace Matrix.

A development team should attempt to perform continuous integration builds frequently enough that no window of additional version control updates occurs between commit and build, and such that no errors can arise without developers noticing and correcting them immediately. This means that for a project in active development, the system should be configured so that a check-in triggers a build in a timely manner. Likewise, it is generally a good practice for the developer committing a changeset to verify that his or her own changeset does not break the continuous integration build.

Allow me to address the word “build.” Most software engineers think of a build as the output of compiling and linking. I suggest moving away from this narrow definition and expanding it. A “build” is a completion (in both the compiler sense and beyond) of all things necessary for a successful product delivery. A CI tool runs whatever scripts the development team tells it to run. As such, the team has the freedom to use the CI tool as a build manager (sorry build managers, I don’t mean to threaten your job). It can compile code, create an installer, bundle any and all documents, create release notes, run tests and alert team members about its progress.

Continuous Integration on Software Medical Device Projects, Part 1

I’m currently working on an article about continuous integration on software medical device projects, and how the CI environment can actually be used to solve many of the design and tracing requirements that must be dealt with on such a project. I’m not finished, but I wanted to post a little bit here. Here goes.

A continuous integration (CI) tool is no longer simply something that is “nice to have” during project development. As someone who has spent more time than I care to discuss wading through documents and making sure references, traceability, document versions and design outputs are properly documented in a Design History File (DHF), I hope to make the value of using CI to automate such tedious and error prone manual labor clear. CI shouldn’t be thought of as a “nice-to-have.” On the contrary: It is an absolute necessity!

What is Continuous Integration?
In software engineering, continuous integration (CI) implements continuous processes of applying quality control — small pieces of effort, applied frequently. Continuous integration aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development.
– Wikipedia: Continuous Integration

When I say that continuous integration is an absolute necessity, I mean that both the CI tool and the process are needed. A CI tool takes the attempts—sometimes feeble attempts—of humans to make large amounts of documentation consistently traceable, and hands that work to the computer system, which does it best. The use of a CI tool is not simply an esoteric practice for those who are fond of its incorporation. As you will learn in this chapter, continuous integration is something that good development teams have always attempted, but they have too often failed to utilize software tools to ease the process. Going a step further, development teams can use a CI tool to simplify steps that they may never have dreamed of before!

What is a Software Medical Device?

When writing software for medical purposes, that software may or may not be subject to FDA scrutiny. We may or may not be required to submit for a 510(k). What does this mean? How do we know? It’s all a little confusing.

As I considered a series of articles on the subject, I wanted to navigate through 21 CFR 820.30 (the design controls section of the Quality System Regulation) and explain implementation of a quality system for each item in subpart C—Design Controls. The first item, however, deals with medical device classification. This is something that should be left to regulatory and the FDA, not to a design team working on the quality system. We are not fully qualified to determine whether or not we are working on a medical device, or what classification it is. Left on our own, we can likely come up with many great excuses as to why we think our product is not a medical device!

In subpart C—Design Controls of 21 CFR part 820, we are presented with the following:

(a) General. (1) Each manufacturer of any class III or class II device, and the class I devices listed in paragraph (a)(2) of this section, shall establish and maintain procedures to control the design of the device in order to ensure that specified design requirements are met. (2) The following class I devices are subject to design controls:
(i) Devices automated with computer software; and
(ii) The devices listed in the following chart.

Section      Device
868.6810     Catheter, Tracheobronchial Suction.
878.4460     Glove, Surgeon’s.
880.6760     Restraint, Protective.
892.5650     System, Applicator, Radionuclide, Manual.
892.5740     Source, Radionuclide Teletherapy.

Thomas H. Farris, in Safe and Sound Software – Creating an Efficient and Effective Quality System for Software Medical Device Organizations, offers us a definition of a medical device:


Medical Device

Any equipment, instrument, apparatus, or other tool that is used to perform or assist with prevention, diagnosis, or treatment of a disease or injury. As an industrial term of art, a “medical device” typically relates to a product that the FDA or other regulatory authority identifies as a regulated device for medical use. [2]

I’ve been contemplating a detailed writeup on this subject, but I haven’t had a good idea of where to begin, especially since my own regulatory experience is, at best, limited. Today I stumbled upon this article on the subject over at MEDS Magazine. Bruce Swope (the author) offers a little bit of insight into the three medical device classifications:

Generally, these three classes are determined by the patient risk associated with your device. Typically, low-risk products like tongue depressors are defined as Class I devices, and high-risk items like implantable defibrillators are defined as Class III devices. The marketing approval process is usually determined based on a combination of the class of the device and whether the product is substantially equivalent to an existing FDA-approved product. If the device is a Class I or a subset of Class II and is equivalent to a device marketed before May 28, 1976, then it may be classified as an Exempt Device. A 510(k) is a pre-marketing submission made to the FDA that demonstrates that the device is as safe and effective (substantially equivalent) to a legally marketed device that is not subject to Pre-market Approval (PMA). For the purpose of 510(k) decision-making, the term “pre-amendment device” refers to products legally marketed in the U.S. before May 28, 1976 and which have not been:

  • significantly changed or modified since then; and
  • for which a regulation requiring a PMA application has not been published by the FDA.

PMA requirements apply to Class III pre-amendment devices, “transitional devices” and “post-amendment” devices. PMA is the most stringent type of product marketing application required by the FDA. The device maker must receive FDA approval of its PMA application prior to marketing the device. PMA approval is based on the FDA’s determination that the application contains sufficient valid scientific evidence to ensure that the device is safe and effective for its intended use(s). For some 510(k) submissions and most PMA applications, clinical performance data is required to obtain clearance to market the device. In these cases, trials must be conducted in accordance with the FDA’s Investigational Device Exemption (IDE) regulation.

Ultimately, however, we cannot classify our own software medical device. That is the job of the FDA.

[MEDS Magazine: Executive Overview of FDA Medical Device Approval Requirements]

[1] 21 CFR Part 820—Quality System Regulation

[2] Safe and Sound Software – Creating an Efficient and Effective Quality System for Software Medical Device Organizations, Thomas H. Farris. ASQ Quality Press, Milwaukee, Wisconsin, 2006, pg. 118

Valuable Unit Tests in a Software Medical Device, Part 8

The Traceability Matrix

A critical factor in making unit tests usable in an auditable manner is incorporating them into the traceability matrix. As with any test, requirements, design elements and hazards must be traced to one another through use of the traceability matrix.

The project team must document traceability of requirements through specification and testing to ensure that all requirements have been tested and correctly implemented (product requirements traceability matrix).

Thomas H. Farris, Safe and Sound Software

Our SOPs and work instructions will require that we prove traceability of our tests and test results, whether manual or automated unit tests. Just as has always been done with the manual tests that we are familiar with, tests must be traced to software requirements, design specifications, hazards and risks. The goal is simply to prove that we have tested that which we have designed and implemented, and in the case of automated tests this is all very easy to achieve!

Do We Still Need Manual Tests?

Yes! Absolutely! There are a number of reasons why manual tests are still, and always will be, required; Installation Qualification and environmental tests are two examples. Both manual and automated tests are valid and valuable, and neither should be considered a replacement for the other.

Manual tests allow for a certain amount of “creative” testing that may not be considered during unit test development. Manual tests also lead to greater insight related to usability and user interaction issues.

To this end, any defect that is discovered during manual testing should result in an automated test.
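As a sketch of this practice (the validator and the defect scenario here are hypothetical, invented purely for illustration):

```java
// Hypothetical example of turning a manually found defect into an automated
// regression test. The validator and the defect are invented for illustration
// and are not taken from any real project.
public class DoseRangeRegression {

    // Hypothetical function under test: validates a dose in milligrams.
    // The (hypothetical) defect: an earlier version accepted negative doses.
    static boolean isValidDose(double mg) {
        return mg > 0.0 && mg <= 500.0;
    }

    public static void main(String[] args) {
        // Regression check for the hypothetical defect report:
        // a negative dose must be rejected.
        if (isValidDose(-1.0)) {
            throw new AssertionError("regression: negative dose accepted");
        }
        // Sanity checks around the boundaries.
        if (!isValidDose(0.1) || !isValidDose(500.0) || isValidDose(500.1)) {
            throw new AssertionError("dose boundary check failed");
        }
        System.out.println("regression tests passed");
    }
}
```

Once committed, the test runs with every CI build, so the original defect cannot silently reappear in a later release.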


  • Device Advice: Regulation and Guidance, Software Validation Guidelines, http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance
  • Safe and Sound Software – Creating an Efficient and Effective Quality System for Software Medical Device Organizations, Thomas H. Farris. ASQ Quality Press, Milwaukee, Wisconsin, 2006
  • CFR – Code of Federal Regulations Title 21. Subpart C – Design Controls, Section 820.30 Design Controls
  • Agile Software Requirements, Dean Leffingwell. Addison-Wesley. Copyright © 2011, Pearson Education, Inc. Boston, MA
  • Continuous Delivery, Jez Humble, David Farley. Addison-Wesley, Copyright © 2011, Pearson Education, Inc. Boston, MA

Valuable Unit Tests in a Software Medical Device, Part 7

Regulated Environment Needs Per 21 CFR Part 820 (Subpart C—Design Controls):

(f) Design verification. Each manufacturer shall establish and maintain procedures for verifying the device design. Design verification shall confirm that the design output meets the design input requirements. The results of the design verification, including identification of the design, method(s), the date, and the individual(s) performing the verification, shall be documented in the DHF.

Simply put, our functional unit tests must be a part of our DHF, and we must document each test, each test result (success or failure) and tie tests and outcomes to specific software releases. This is made extremely easy with a continuous integration environment in which builds and build outcomes (including test results) are stored on a server, labeled and linked to from our DHF. Indeed, what is sometimes a tedious task when it comes to manual execution and documentation of test results becomes quite convenient.

The same is true of Design validation:

(g) Design validation. Each manufacturer shall establish and maintain procedures for validating the device design. Design validation shall be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents. Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions. Design validation shall include software validation and risk analysis, where appropriate. The results of the design validation, including identification of the design, method(s), the date, and the individual(s) performing the validation, shall be documented in the DHF.

Because our CI environment packages build and test conditions at a given point in time, we can successfully satisfy the requirements laid out by 21 CFR 820 Subpart C, Section 820.30 (f) and (g) with very little effort. We simply allow our CI environment to do that which it does best, and that which a human tester may spend many hours attempting to do with accuracy.
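To make the point concrete, here is a sketch of the kind of build summary record a CI job could emit for each build, suitable for labeling and linking from the DHF. The field names, values and format are illustrative assumptions, not the output of any particular CI tool.

```java
import java.time.LocalDate;

// Sketch: a one-line CI build summary that a DHF index could link to.
// Field names and format are illustrative, not from any specific CI tool.
public class BuildRecord {
    public static String summarize(int buildNumber, String vcsRevision,
                                   int testsPassed, int testsFailed, LocalDate date) {
        String status = (testsFailed == 0) ? "PASS" : "FAIL";
        return String.format("build=%d rev=%s tests=%d/%d status=%s date=%s",
                buildNumber, vcsRevision, testsPassed, testsPassed + testsFailed,
                status, date);
    }

    public static void main(String[] args) {
        // A clean build: every test passed, so the record is marked PASS.
        System.out.println(summarize(512, "r2041", 318, 0, LocalDate.of(2012, 6, 1)));
        // build=512 rev=r2041 tests=318/318 status=PASS date=2012-06-01
    }
}
```

Because the record carries the version control revision, the exact build and test conditions can be recreated later, which is precisely what 820.30 (f) and (g) ask us to be able to demonstrate.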

Document the Approach

As discussed, all these tests are indeed very helpful to the creation of good software. However, without a wise approach to incorporation of such tests in our FDA regulated environment, they are of little use in any auditable capacity. It is necessary to document our approach to unit test usage and documentation within our Standard Operating Procedures and work instructions, and this is to be documented in much the same way that we would document any manual verification and validation test activities.

To this end, it is necessary to make our unit tests and their outputs an integral part of our Design History File (DHF). Each test must be traceable, and this means that unit tests are given unique identifiers. These unique identifiers are very easily assigned using an approach in which we organize tests in logical units (for example, by functional area) and label tests sequentially.

Label and Trace Tests

An approach that I have taken in the past is to assign some high-level numeric identifier and a secondary sub-identifier that is used for the specific test. For example, we may have the following functional areas: user session, audit log, data input, data output and web user interface tests (these are very generic examples of functional areas, granted). Given such functional areas I would label each test, using test naming annotations, with the following high level identifiers:

1000: user session tests
2000: audit log tests
3000: data input tests
4000: data output tests
5000: web user interface tests

Within each test it is then necessary to go a step further, applying some sequential identifier to each test. For example, the user test package may include tests for functional requirements such as user login, user logout, session expiration and a multiple-user-login concurrency test.

In such a scenario, we would label the tests as follows:

1000_010: user login
1000_020: user logout
1000_030: session expiration
1000_040: multiple concurrent user login
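The numbering scheme above is easy to enforce mechanically. The following sketch validates an ID against the pattern and maps its prefix back to a functional area; the pattern and area names simply mirror the examples in this post and would differ per project.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the numbering scheme described above: a functional-area prefix
// ("1000".."5000") plus a three-digit sequential suffix, e.g. "1000_010".
public class TestId {
    private static final Pattern ID = Pattern.compile("(\\d)000_(\\d{3})");
    private static final Map<String, String> AREAS = Map.of(
            "1", "user session", "2", "audit log", "3", "data input",
            "4", "data output", "5", "web user interface");

    // Returns the functional area for a well-formed test ID,
    // or throws if the ID does not follow the documented scheme.
    public static String areaOf(String testId) {
        Matcher m = ID.matcher(testId);
        if (!m.matches()) throw new IllegalArgumentException("bad test id: " + testId);
        return AREAS.get(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println(areaOf("1000_010")); // user session
        System.out.println(areaOf("2000_030")); // audit log
    }
}
```

A check like this can run inside the CI build itself, so a mislabeled test fails the build just as a failing assertion would.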

Using TestNG syntax, along with proper Javadoc comments, it is very easy to label and describe a test such that inclusion in our DHF is indeed very simple:

/**
 * Test basic user login and session creation with a valid user.
 * @throws Exception
 */
@Test(dependsOnMethods = {"testActivePatientIntegrationDisabled"},
      groups = {"TS0005_AUTO_VE1023"})
public void testActivePatientIntegrationEnabled() throws Exception {
    Fixture myApp = new Fixture();
    UserSession mySession = myApp.login("test_user", "test_password");
    // ... assertions against mySession ...
}


Any numbering we choose to use for these tests is fine, as long as we document our approach to test labeling in some project level document, for example a validation plan or master test plan. Such decisions are left to those who design and apply a quality system for the FDA regulated project. As most of us know by now, the FDA doesn’t tell us exactly how we are to do things; rather, we are simply told that we must create a good quality system, trace our requirements through design, incorporate the history in our DHF and be able to recreate build and test conditions.

If I make this all sound a little too easy it is because I believe it is easy. Too often we view cGMP guidance as a terrible hindrance to productivity. But we are in control of making things as efficient as we can.

Device Advice: Regulation and Guidance, Software Validation Guidelines, http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance

Valuable Unit Tests in a Software Medical Device, Part 6

Many of the benefits of functional unit testing listed above are gained only when unit tests are written alongside design and development (test-driven methodologies aside). It is imperative that the development team develop and observe test results while design and development activities take place. This is of benefit to the quality assurance team as well, as Dean Leffingwell notes:

A comprehensive unit test strategy prevents QA and test personnel from spending most of their time finding and reporting on code-level bugs and allows the team to move its focus to more system-level testing challenges. Indeed, for many agile teams, the addition of a comprehensive unit test strategy is a key pivot point in their move toward true agility—and one that delivers “best bang for the buck” in determining overall system quality.

Also, it is probably becoming clear that a key benefit of functional unit tests is the real-time feedback offered to the development team. One author refers to the unit tests that are executed with each software change as “commit tests” [1].

Commit tests that run against every check-in provide us with timely feedback on problems with the latest build and on bugs in our application in the small. [1]

-Jez Humble, David Farley, Continuous Delivery

Project unit tests, which should offer a significant amount of coverage (at least 80%), provide the team with built-in software change-commit acceptance criteria. If a developer causes the CI build to fail because of a code change, it is immediately known that the change involved does not meet minimum acceptance criteria, and it requires urgent attention.

Humble and Farley continue,

Crucially, the development team must respond immediately to acceptance test breakages that occur as part of the normal development process. They must decide if the breakage is a result of a regression that has been introduced, an intentional change in the behavior of the application, or a problem with the test. Then they must take appropriate action to get the automated acceptance test suite passing again. [1]

-Jez Humble, David Farley, Continuous Delivery
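The commit-acceptance criteria described above reduce to a very small rule: any failing test, or coverage below the agreed threshold, blocks the change. A minimal sketch of that gate, with an assumed 80% threshold and inputs that a real CI job would read from its test and coverage reports:

```java
// Sketch of a commit gate: a changeset is acceptable only if every unit test
// passed and line coverage meets the (assumed) 80% project threshold.
public class CommitGate {
    static final double MIN_COVERAGE = 0.80;

    public static boolean accept(int failedTests, double lineCoverage) {
        return failedTests == 0 && lineCoverage >= MIN_COVERAGE;
    }

    public static void main(String[] args) {
        System.out.println(accept(0, 0.85)); // true: clean build, adequate coverage
        System.out.println(accept(1, 0.95)); // false: one failing test blocks the commit
        System.out.println(accept(0, 0.60)); // false: coverage below threshold
    }
}
```

The point is not the arithmetic but the policy: making the criteria executable means nobody has to argue about whether a broken build "counts."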

Continuous Delivery, Jez Humble, David Farley. Addison-Wesley, Copyright © 2011, Pearson Education, Inc. Boston, MA
Agile Software Requirements, Dean Leffingwell. Addison-Wesley. Copyright © 2011, Pearson Education, Inc. Boston, MA

Valuable Unit Tests in a Software Medical Device, Part 5

What is the value of unit testing?

Immediate Feedback within Continuous Integration: Developer Confidence

Too often we view testing as an activity that occurs only at specific times during software development. At worst, software testing takes place upon completion of development (which is when it is learned that development is nowhere near complete). In other more zealous environments, it may take place at the end of each iteration. We can do better! Why not have complex unit tests perform validation continuously, with each code change? It is possible to perform a full regression test with every single code change. It sounds like a significant amount of overhead, but it is not. The real cost to a project is not the attention given to complex functional unit tests; the danger is that we put off testing until it is too late to react to a critical issue discovered during some predetermined testing phase.

The most effective way of killing a project is to organize it so that testing, however critical to its success, is never given the chance to do what it is supposed to do: discover a defect prior to go-live.

At its most basic level, a continuous integration build environment does just one thing: It runs whatever scripts we tell it to. To that end, it is important that the CI build execute unit tests, and that a failure of any single unit test is considered a failure of the continuous integration build. The power of a tool such as Hudson or Jenkins-CI is that we can tell it to run whatever we want, log the outcome, keep build artifacts, run third party evaluation tools and report on results. With integration of our software version control system (e.g., Subversion, Git, Mercurial, CVS, etc.), we know the changeset relevant to a particular build. It can be configured to generate a build at whatever interval we want (nightly, hourly, every time there is a code commit, etc.). When a test fails we know immediately what changeset was involved.

Personally, every time I do any code commit of significance, one of the first things I do is check the CI build for success. If I’ve broken the build I get to work on correcting the problem (and if I cannot correct the problem quickly, I roll my changeset out so that the CI build continues to work until I’ve fixed the issue).

Easy Refactoring

As a developer, refactoring can be a scary thing. Refactoring is perhaps the most effective way of introducing a serious defect while doing something that seems innocuous. With thorough unit tests performing a full regression test with each and every committed software changeset, however, a developer can have confidence that his or her simple code changes have not introduced a defect. We have continuous integration builds running our tests for many reasons, not the least of which is to alert developers to the possibility that their changes have broken the build.

As a developer I strive to avoid breaking the continuous integration build. When I do break it, however, I am very pleased to know that what was done to cause a problem has been discovered immediately. Correction of a defect becomes much more costly when its discovery is not noticed until the end of a development phase!

Regression Tests with Every Code Change

By “repeated” I mean something different from repeatable. The fundamental benefit of repeated tests is that a test can be executed many more times by automation than by a human tester. Sometimes, even without a related code change, and much to our surprise, we see a test suddenly fail where it succeeded numerous times before. What happened?

The most difficult software defects to fix (much less, find) are the ones that do not happen consistently. Database locking issues, memory issues, deadlock bugs, memory leaks and race conditions can result in such defects. These defects are serious, but if we never detect them how can we fix them?

As stated previously, it is imperative that we have unit tests that go above and beyond what we traditionally think of as “unit tests,” going several steps further to automate functional testing. This is another one of those areas where team members often (incorrectly) feel that there is not sufficient time to deal with the creation of unit tests. Given a proper framework, however, creation of unit tests need not be overwhelming.

Another occasional issue has to do with misuse of the software version control system. Many developers know the frustration that can come with an accidental code change resulting from one developer stepping over the modifications of another. While this is a rare issue in a properly used version control environment, it does still happen, and unit tests can quickly reveal such a problem at build time.

Concurrency Tests

Concurrency tests are tricky, and it is in concurrency testing that the repeated and rapid nature of functional unit tests can shine where human testers cannot. I personally have witnessed many occasions in which a CI build suddenly fails for no obvious reason. There was no code commit related to the particular point-of-failure, and yet a unit test that once succeeded suddenly fails. Why?

This can happen (and it does happen) because concurrency problems, by their very nature, are hit or miss. Sometimes they are so unlikely to occur that we never witness them during the course of normal testing. When a continuous integration environment runs concurrency tests dozens of times a day, however, we increase the likelihood of finding a hidden and menacing problem. Additionally, unit tests can simulate many concurrent users and processes in a way that even a team of human testers cannot.
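A concurrency test of this kind can be sketched with nothing more than the standard library. The `SessionRegistry` below is a hypothetical stand-in for real application code; the test simulates hundreds of simultaneous logins and checks that none are lost.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a repeatable concurrency test: many simulated users log in at
// once, and the test verifies every session was registered exactly once.
public class ConcurrentLoginTest {
    static class SessionRegistry {            // hypothetical application code
        private final AtomicInteger active = new AtomicInteger();
        void login() { active.incrementAndGet(); }
        int activeSessions() { return active.get(); }
    }

    public static void main(String[] args) throws Exception {
        SessionRegistry registry = new SessionRegistry();
        ExecutorService pool = Executors.newFixedThreadPool(16);
        int users = 500;
        CountDownLatch done = new CountDownLatch(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> { registry.login(); done.countDown(); });
        }
        done.await();                          // wait for all simulated users
        pool.shutdown();
        // With a thread-safe registry this always prints 500. Replacing the
        // AtomicInteger with a plain int would make it fail intermittently --
        // exactly the class of defect that repeated CI runs can surface.
        System.out.println(registry.activeSessions());
    }
}
```

Run dozens of times a day by the CI server, even a rarely-triggered race condition eventually shows itself, which is the whole point of the argument above.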

Repeatable and Traceable Test Results

This is the key to making our unit tests adhere to the standards we have set forth in our quality system so that we may use them as a part of our submission (see the following section on Regulated Environment Needs). If we are going to put forth the effort, and since we already know that unit tests result in a quality improvement to our software, why wouldn’t we want to include these test results?

Our continuous integration server can and should be used to store our unit test results right alongside each and every build that it performs.

This is a benefit not only in the world of an FDA-regulated environment, of course. In any software project it can be difficult to recreate conditions under which a defect was discovered. With a CI build executing our build and test scripts under a known environment with a known set of files (the CI build tool pulls from the version control system), it is possible to execute the tests under exact and specific circumstances.


Valuable Unit Tests in a Software Medical Device, Part 4

“The hardware system, software program, and general quality assurance system controls discussed below are essential in the automated manufacture of medical devices. The systematic validation of software and associated equipment will assure compliance with the QS regulation; and reduce confusion, increase employee morale, reduce costs, and improve quality. Further, proper validation will smooth the integration of automated production and quality assurance equipment into manufacturing operations.

Medical devices and the manufacturing processes used to produce them vary from the simple to the very complex. Thus, the QS regulation needs to be and is a flexible quality system. This flexibility is valuable as more device manufacturers move to automated production, test/inspection, and record-keeping systems.”

-Device Advice: Regulation and Guidance, Software Validation Guidelines

What is a GOOD Unit Test?

In his book Safe and Sound Software, Thomas H. Farris describes unit tests:

Software testing may occur on software modules or units as they are completed. Unit testing is effective for testing units as they are completed, when other units or components have not yet been completed. Testing still remains to be completed to ensure that the application will work as intended when all software units or components are executed together.

This is a start, but unit tests can achieve so much more! Farris goes on to break out a number of different categories of software testing:

    Black box test
    Unit test
    Integration test
    System test
    Load test
    Regression test
    Requirements-based test
    Code-based test
    Risk-based test
    Clinical test

Traditionally this may be a fair breakout. Used wisely, and with the proper framework, however, we can perform black box, integration, system, load, regression, requirements-based, code-based, risk-based and clinical tests with efficient unit tests that simulate a true production environment. The purpose of this article is not to go into the technical details of how (to explain unit test frameworks, fixtures, mock objects and simulations would require much more space). Rather, I simply want to point out the benefits that result. To achieve these benefits, your software team will need to develop a deep understanding of unit tests. It will take some time, but it will be time very well spent.

It’s a good idea to have unit tests that go above and beyond what we traditionally think of as “unit tests,” going several steps further to automate functional testing. This is another one of those areas where team members often (incorrectly) feel that there is not sufficient time to do all the work.

As Farris goes on to state:

Software testing and defect resolution are very time-consuming, often draining more than one-half of all effort undertaken by a software organization [3].

Testing need not wait until the entire product is completed; iteratively designed and developed code may be tested as each iteration of code is completed. Prior to beginning of verification or validation, the project plan or other test plan document should discuss the overall strategy, including types of tests to be performed, specific functional tests to be performed, and a designation of test objectives to determine when the product is sufficiently prepared for release and distribution.

Farris is touching on something that is very important in our FDA-regulated environment: the fact that we must document and describe our tests. For our unit tests to be useful we must provide documentation of what each test does (that is, what specifically it is testing) and what the results are. The beauty of unit tests and the tools available (incorporation into our continuous integration environment) is that this process is streamlined in a way that makes the traceability and re-creation of test conditions required for our 510(k) extremely easy!

To achieve all of this we will need to have a testing framework capable of application launch, simulations, mock objects, mock interfaces and temporary data persistence. This all sounds like much more overhead than it actually is, so fear not: The benefits far outweigh the costs.

Valuable Unit Tests in a Software Medical Device, Part 3

Automating Functional Tests Using Unit Test Framework

Most software projects, especially in any kind of Agile environment, undergo frequent changes and refactoring. If the traditional single-flow waterfall model worked, recorded test scripts such as those noted previously would probably work just fine as well, albeit with little benefit.

But it should be well known by now that the traditional single-flow waterfall model has failed, and we live in an iterative/Agile world. As such, our automated tests must be equally equipped for ongoing change. And because the functional unit tests are closely related to requirements at both a white-box and black-box level, developers, not testers, have an integral role in the creation of automated tests.

To achieve this level of unit testing, a test framework must be in place. This requires a bit of up-front effort, and the details of creating such a framework go well beyond the scope of this article. Additionally, the needs of a test framework will vary depending on the project.

Test fixtures become an important part of complex functional unit testing. A test fixture is a class that incorporates all of the setup necessary for running such unit tests. It provides methods that can create common objects (for example, test servers and mock interfaces). The details included in a test fixture are specific to each project, but some common methods include test setup, simulation and mock object creation and destruction, and declaration of any common functionality to be used across unit tests. To provide further detail on test fixture creation would require more space than is available here.
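The shape of such a fixture can still be sketched in a few lines. Everything here is a hypothetical stand-in: the "server" flag and the temporary record list represent whatever mock servers, mock interfaces and temporary persistence a real project would wire up.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal fixture sketch: one class owns environment setup and teardown,
// and every unit test goes through it. The fields are stand-ins for
// project-specific mock objects and temporary persistence.
public class TestFixture {
    private boolean serverUp;
    private List<String> tempRecords;

    public void setUp() {              // run before each test
        serverUp = true;               // e.g., start an in-memory test server
        tempRecords = new ArrayList<>();
    }

    public void tearDown() {           // run after each test
        tempRecords.clear();           // destroy temporary data
        serverUp = false;
    }

    public boolean ready() { return serverUp; }

    public static void main(String[] args) {
        TestFixture fixture = new TestFixture();
        fixture.setUp();
        System.out.println(fixture.ready()); // true while a test is running
        fixture.tearDown();
        System.out.println(fixture.ready()); // false after cleanup
    }
}
```

Because every test starts from the same setUp() and ends with the same tearDown(), tests stay independent of one another, which is what makes them safe to run hundreds of times a day in CI.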

Given what may seem like extreme overhead in the creation of complex unit tests, we may begin to question the value. There is, no doubt, a significant up-front cost to the creation of a versatile and useful unit test framework (including a test fixture, which provides all the necessary objects and setup needed to simulate a running environment for the sake of testing). And given the fact that manual functional and user acceptance testing remains a project necessity, it seems like there may be an overlap of effort.

But this is not the case!

With a little up-front creation of a solid unit test framework, we can make efforts to create unit tests simple. We can even go as far as requiring a unit test for any functional requirement implementation prior to allowing that requirement (or ticket) to be considered complete. Furthermore, as we discover potential functionality problems, we have the opportunity to introduce a new test right then and there.

“The hardware system, software program, and general quality assurance system controls discussed below are essential in the automated manufacture of medical devices. The systematic validation of software and associated equipment will assure compliance with the QS regulation; and reduce confusion, increase employee morale, reduce costs, and improve quality. Further, proper validation will smooth the integration of automated production and quality assurance equipment into manufacturing operations.

Medical devices and the manufacturing processes used to produce them vary from the simple to the very complex. Thus, the QS regulation needs to be and is a flexible quality system. This flexibility is valuable as more device manufacturers move to automated production, test/inspection, and record-keeping systems.”

-Device Advice: Regulation and Guidance, Software Validation Guidelines
(http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance) [1]