Category Archives: Writing Software

What’s The Value of College?

CNN: Surging College Tuition
CNN: Surging college costs price out middle class

Not long ago I found myself working alongside a brilliant college dropout: a young junior programmer who was just plain gifted when it came to software development. I was very surprised that he hadn’t completed a degree of any kind. It made me wonder why I had, without much consideration, put such high value on a four-year degree.

A recent InfoWorld article, 15 hot programming trends — and 15 going cold, touches on the issue of rising tuition costs and the questionable value that they bring.

I attended Ball State University — a place hardly known for being an engineering college. It was a nearby school with a Computer Science program that did not cost as much as IU or Purdue. For someone like me, it was attainable. While I enjoyed my time at Ball State, and I learned much, very little of my Computer Science education turned out to be directly applicable to my career. Sure, I learned formal concepts, design practices and perhaps a little about requirements gathering and QA (very little). Some of what I thought I knew had to be unlearned, as I came to realize that things operate differently in the “real world.”

Ultimately, as someone with an inherent interest in writing software, I suspect that everything I really needed to know could have been learned in a year of dedicated study. The rest comes from workplace experience.

The problem, of course, is that if I hadn’t gone the college route, spending 4+ years working on a Bachelor’s degree, I would never have landed my first job interview. And it was at that first job that I learned how everything I knew came together in a real business environment on a project of significant size.

Through the years, I’ve met great, good and awful software engineers with varying backgrounds and educations. Many of the best software developers attended college, but graduated with a degree in something unrelated (History, Art, New Testament Studies, English, to name a few). These people gravitated to Software Engineering and Development through various means, some of them going on to pursue certifications and other training.

My experience hardly reflects a comprehensive analysis, but I don’t hesitate to say that most of the software engineers I’ve known with undergraduate degrees in non-CS fields are among those I consider excellent.

There was a time, not all that long ago, when droves of students gravitated to Computer Science because they heard that it was a great career to pursue. While I happen to agree that it is a great career, I don’t think it is a career for just anyone. It requires a certain type of interest and motivation. Perhaps it is because some folks enter Computer Science undergraduate programs for the wrong reasons, but I have observed all ranges of skill level from those with CS backgrounds. I’ve found myself shocked (more than a few times) by the poor quality of code created by developers with formal CS educations. I was once asked to help debug a colleague’s code that had compilation problems. It didn’t take long to find the problem: a 2,000+ line function that caused the compiler to choke.

Doctors, teachers, lawyers, accountants: these are all people who require specific degrees and certifications. I know that I don’t want my eyes checked by a self-trained optometrist. In software fields it is different. After a software engineer has some experience, it seems that his or her degree becomes an afterthought. Unless the subject of college comes up during a lunch conversation, I rarely know the formal education or degree of a colleague. What I do know is that person’s quality and volume of work. Don’t get me wrong: there are things that may be taught in a Computer Science department that are absolutely necessary. Knowledge of algorithms and design patterns is important. It should be noted, however, that knowledge and application are different beasts.

I wonder: if college costs keep rising at such a staggering rate, at what point does the return on investment disappear? With companies hard pressed to find good software engineers, and with a greater percentage of the population unable to afford a four-year degree at even a semi-respected university, when will the traditional model change? There are so many options, from certifications to local technical schools, available at a fraction of the cost. At some point a college degree becomes more of a social status symbol than a true reflection of one’s talent or ability.

We’ll have to begin to ask ourselves: Which candidate is right for the job? Is it the one fresh out of college with a CS degree and a 3.8 GPA who lacks experience working with others on a project of scale, or is it the non-college-route self-taught programmer who has proven talent that can be seen by way of open-source contributions?

Occasionally I have seen job postings for software engineers which claim to require a Master’s Degree in Computer Science. I have to wonder: what does the hiring manager believe he or she might get from the engineer with a Master’s Degree that differs from the engineer with a lowly Bachelor’s Degree? In my experience, most Master’s programs are little more than the same programs that undergraduates complete… The only difference is that the students in the program have already completed a four-year degree (and that degree could be anything).

This isn’t to demean formal education. If I had it to do over, I wouldn’t change my time at Ball State University. No way! I was fortunate, however. When I went to Ball State, college was merely ridiculously expensive. Today it is insanely expensive. In 10 years, it will be unattainably expensive. When that happens, where will the software engineers come from?

Coding Horror/The Software Career

I don’t like to just post links to another blog or article. Anyone can do that, and there are far too many blogs out there that create no original content. So I try to write original thoughts and articles. That said, sometimes this is a rule worth breaking. Jeff Atwood has a great post over at Coding Horror titled So You Don’t Want to be a Programmer After All.

Atwood asks the question, “What career options are available to programmers who no longer want to program?” This is the converse of a subject I’d like to write about soon (I’m still gathering my thoughts): What career options are there for programmers who wish to move up in their career, perhaps into management, while never losing the ability to actually write code?

Unfortunately, it seems to me that in this field the general career path goes something like this:

Junior Programmer->Senior Programmer->Super Senior Programmer->Awesome Amazing Programmer->Manager (stop writing software)

I know of at least one person who got into management, didn’t like it, and gave it up to move back into a full-time developer role. What about the programmer who wishes to do both? And why do we assume that software management means an end to coding in the role? Sure, this isn’t always the case, but in general I think it is. It strikes me that many of the best developers move into management, thereby eventually losing their hands-on skills. That seems unfortunate.

Mistakes

The other day my daughter wanted to heat up some soup in the microwave. She insisted on doing it herself. The lid of the Campbell’s Soup can was the type with a tab that can be opened without a can opener. She stood in front of her mother as she attempted to open the can, wrestling with it a little. “Lift the tab up, and then pull on it,” her mother instructed. She added, “I really think you should open it in the kitchen over the sink.”

My daughter struggled with the lid, but still didn’t want help–she wanted to prove to us and to herself that she was capable. Soon enough Campbell’s Double Noodles were spilled all over the floor and her mother. Oops.

Did we get mad? No way! How could we? We knew the possible outcomes, but we also knew that we had to allow our daughter to figure this out. My daughter learned a few things in this situation:

  1. How to open a can of soup.
  2. What can go wrong if you tip the can sideways while opening.
  3. Why she should have done it over the sink.


Do Not Flounder (Stay Un-bored)

The following is an article that I am working on for a yet-to-be-determined publication. Having done this before, I will say that getting an article published in a journal/magazine isn’t as difficult as one may think (as long as you have something to say). This hasn’t been proofed, so please forgive any typos or errors. This article is not about the role of management. I would like to follow up on that subject, because I do think management can and should help to produce great software engineers. I’ve seen otherwise good programmers flounder under poor management. It happens. This article is about the role of the developer, the individual contributor, in making sure that his or her career starts off right and continues to grow.
One more note: the original version of this post was written in about 2 hours and was full of errors. I’ve applied a number of corrections, but it will likely be further edited before publication.

I have no idea whether or not most developers using Agile have actually read the “Agile Manifesto.” Here it is:

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on
the right, we value the items on the left more.

This post is more about career growth than Agile, but somehow I think the Agile approach is relevant here. Agile, as we know it, is an approach to software design. It can also be an approach to managing one’s own career.

Floundering

A friend of mine recently said this: “Your attitude determines your altitude.” Although he was speaking in a general sense, I couldn’t help but think of the application of this motivational advice in relation to software engineering. It is relevant because as software engineers, our career growth is in our hands, perhaps to a greater degree than in any other field.

Success as a Technical Lead – Article

I stumbled upon this article today. It’s a little dated (from 2008), but still relevant. I don’t think the list is comprehensive, and I certainly think other technical leads would have varying opinions on things. All of the points listed are good, but there are some points that really stand out:

6. Be part in the design of everything
This does not mean do the whole design. You want to empower team members. But your job is to understand and influence each significant subsystem in order to maintain architectural integrity.

7. Get your hands dirty and code
Yes, you should take parts of the code and implement them. Even the least glamorous parts. This will help you not get stuck alone between management and the team. It will also help you gain respect in the team.

20. Don’t blame anybody publicly for anything
In fact as a tech lead you cannot blame anybody but yourself for anything. The moment you blame a team member in public is the moment when the team starts to die. Internal problems have to be solved internally and preferably privately.

24. Mentor people
It is your job to raise the education level of your team. By doing this you can build relationships at a more personal level and this will gel the team faster and harder. It is very effective with more junior people in the team but there are ways to approach even more senior members, just try not to behave like a teacher.

25. Listen to and learn from people
Even if you are the most experienced developer on the team you can always find new things and learn from others. Nobody is a specialist in everything and some of your team members can be domain specialists who can teach you a lot.

28. Be sure you get requirements and not architecture/design masked as requirements
Sometimes business people fancy themselves architects, sometimes they just speak in examples. They can ask for technology XYZ to be used when what they really mean is they want some degree of scalability. Be sure to avoid hurting feelings but be firm and re-word everything that seems like implied architecture. Get real requirements. To be able to do this you have to understand the business.

36. React to surprises with calm and with documented answers
Never get carried away with refusals or promises when confronted with surprises. Ask for time to think and document/justify your answers. It will make you look better and it will get you out of ugly situations.

A theme throughout the list, and throughout a number of similar books and articles with such advice, is that a good technical lead appreciates and values the varied talents and particular skills of the team. A great technical leader isn’t necessarily the “know it all” of the group. He or she should certainly be skilled and eager to maintain that skill, and may even be a great developer. But the smartest person in the room? Maybe. Maybe not. Personally, I like working around people who are smarter than I am. This is the best way to learn.

And there’s a flip side to number 20: don’t blame people publicly for problems, but be quick to praise people for successes, major and minor. A sense of recognition for one’s diligence is a tremendous motivator. I don’t know a single person who doesn’t appreciate kudos. Most parents realize that their children respond better to positive reinforcement than negative… This doesn’t change when one reaches a certain age. I’m not suggesting that a team member never be confronted about problems. Of course he or she should be (and must).

Very recently a company-wide email spoke of a major success of mine (the successful deployment of a year-long project), and mentioned me by name. It felt great, and it made me want to continue with even more success (and it was a great confidence boost). Simply put, it’s good to know that the folks at the top of the organization are aware and appreciative of the work of those in the trenches!

This all may sound like a lot of feel-good fluff. It isn’t.

Little Tutorials: 36 Steps to Success as a Technical Lead

Version Control/Wiki Control

I FIRMLY believe that documents related to a project should be managed in the same version repository as the source code. This gives us a snapshot in time of all items related to a project. The problem with this comes when using a wiki (and I love using wikis, so don’t get me wrong). There is no way, if we are using a wiki page for specs, requirements, etc., to link a wiki instance in time to a Subversion (Git, Mercurial, whatever) instance in time.

And I don’t think we would want such a feature anyway. A wiki covers many projects and many team needs, not just the needs of a small group of programmers on a single project. I can’t imagine “rolling back” an entire wiki to a given snapshot.

I wonder if there are any clever ideas out there for handling a need such as this. I can see the possibility of pre-commit hooks being used to label wiki pages, but this seems cumbersome (if not entirely unmanageable). The other solution is to rely on the wiki only for collaboration and not for official documentation of any kind. This approach, unfortunately, cripples much of the power of using a wiki.

I am open to ideas.

6 Developers, One Room

Under an extremely tight deadline one team member decided that it would be best if the developers took over a conference room. On a long conference room table there are 6 computers, and 6 extremely talented developers chat, joke, brainstorm and work away. The manager of the team is there too, explaining requirements and helping to clarify definitions and functionality.

There is pizza, too.

It’s like that scene in Apollo 13 where the engineers have to figure out how to get the Apollo back to Earth. Ideas bounce freely and communication is immediate. I don’t have to wait for a response to an email or a response in a chat window (which may or may not come). And there’s something about sharing a space with a common goal: the team seems to gel. There is little or no arguing or passive-aggressive commentary as I have seen in meetings throughout my career. We’re all in this together, after all.

I’ve never seen software written with such efficiency.

Staying Current

During nearly every job interview I’ve ever had, on the phone or face-to-face, I’ve been asked some form of the question, “How do you keep your experience current?”

Sometimes (emphasis on sometimes) this is asked by someone who seems impressed that I have such knowledge on a fair amount of “new stuff.” More often this is a genuine question (and a very good one) asked of an interviewee in an effort to gauge the type of software engineer that this candidate may be. Does this candidate have a desire to keep current with emerging trends? In many ways this is a unique necessity in the world of software engineering as a career.

Software engineering is a discipline that requires a real love of the work. It’s not something that anyone hoping for easy employment stability and a solid paycheck should pursue (a warning to those considering Computer Science).

So the question remains: How do you “keep current?”

No matter how much I love software, the fact remains that I have a life outside of work: family, friends, hobbies… It’s not easy, but it’s necessary. One guy suggests learning a language every year. I like to pick up books that look interesting, read Stack Overflow and listen to podcasts. Woe to the “software engineer” who dismisses all emerging technologies as gimmicks or buzzwords… Such engineers will quickly find themselves (if lucky) performing maintenance work on antiquated legacy code. As long as software is something of a hobby, something that is personally rewarding, all of the above should be easy enough.

While all of the above is good, and it’s even better to have a pet project, I have found that nothing compares to working around extremely intelligent people for a company that is serious about software.

Appropriate Checkin Comments

I read a post today with a list of funny checkin comments. Some of them are funny simply because of their lack of description. Here are some comments I’ve seen in my personal experience:

  • many small changes
  • Microsoft IE sucks!
  • cleanup
  • oops
  • fix the bug

Worse, I’ve seen entirely empty changeset comments.

The above list, along with those found on the funny checkin comments page, provides some examples of inappropriate commit comments. Why? They are unprofessional and lacking in detail and meaning. Some projects are audited and reviewed by external third parties. As a project manager or architect, would you be embarrassed for an auditor to see the comment “fix sucky code”? I would. Even worse than being embarrassed, there is a productivity problem that can arise from poor checkin comments.

What is an appropriate comment? An appropriate comment must (minimally) have a few things:

1. Appropriate level of detail about the change, including why the change was made, what impact there may be, etc.
2. Appropriate to the changeset. Along with this, a single changeset should, as much as possible, reflect a single ticket or change. Many lazy developers check in a large set of code with a number of intertwined, unrelated changes. When it comes time to revert a particular change or track a defect, this creates problems, and ultimately it defeats one major purpose of version control.
3. Details about the completeness of the change. Generally a changeset should complete a ticket or work item, but this is not always the case. If there is remaining work to be done, “TODO” items or further functional requirements that impact the changeset, this should be noted.
4. Finally, perhaps the most important part: the checkin comment should refer to a ticket. Not every changeset has a ticket, sure, but in general, if the change is a defect fix, enhancement or requirement implementation, there should be one or more related tickets. Any modern version control and ticket system will be able to tie these together. An example follows below.
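Here is a hypothetical comment that satisfies all four points (the ticket numbers, class name and defect are invented for illustration):

    Fix NullPointerException when saving a record with no middle name (ticket #1482).
    Added a null check in RecordValidator.validate() plus a regression test.
    Impact: limited to the record-save path. Remaining work: ticket #1483 tracks the
    same defect in the import module.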

One developer writes:

Many developers are sloppy about commenting their changes, and some may feel that commit messages are not needed. Either they consider the changes trivial, or they argue that you can just inspect the revision history to see what was changed. However, the revision history only shows what was actually changed, not what the programmer intended to do, or why the change was made. This can be even more problematic when people don’t do fine-grained commits, but rather submit a week’s worth of changes to multiple modules in one large pile. With a fine-grained revision history, comments can be useful to distinguish trivial from non-trivial changes in the repository. In my opinion, if the changes you made are not important enough to comment on, they probably are not worth committing either.

Getting developers to write good checkin comments seems to be an ongoing battle. In the business of writing software, it’s easy to convince oneself that checkin comments are a waste of time. The real waste of time comes later, when trying to track the introduction of a defect or trace requirement implementation to code. There is simply no good excuse for lacking checkin comments.

Is the checkin comment “cleanup” appropriate? Yes, in some cases, as long as it’s true. If I am cleaning up the formatting of code, correcting things like indentation and whitespace, then yes, “cleanup” is an appropriate changeset comment. Generally, however, real comments are required.

[Vistamix: The Humor of Code Checkin Comments]
[Loop Label: Best Practices for Version Control]

What Every *GOOD* Developer Should Know: Quality Assurance

I’m currently reading “The Clean Coder,” in which Robert Martin makes the case for software engineers incorporating QA practices into their regular work far better than I can. Here are a few of his quotes on the subject:

“Software is too complex to create without bugs. Unfortunately that doesn’t let you off the hook. The human body is too complex to understand in its entirety, but doctors still take an oath to do no harm. If they don’t take themselves off a hook like that, how can we?”

“Some folks use QA as the bug catchers. They send them code that they haven’t thoroughly checked. They depend on QA to find the bugs and report them back to the developers.  Indeed, some companies reward QA based on the number of bugs they find. The more bugs, the greater the reward.”

“…So automate your tests. Write unit tests that you can execute on a moment’s notice, and run those tests as often as you can.”
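In that spirit, even a minimal JUnit 4 test costs little to write and can run with every build. A sketch (the dose calculator and its 2 mg/kg rule are my own inventions for illustration):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DoseCalculatorTest {

        // Minimal class under test, inlined so the sketch is self-contained.
        static class DoseCalculator {
            double doseFor(double weightKg) {
                if (weightKg <= 0) {
                    throw new IllegalArgumentException("weight must be positive");
                }
                return weightKg * 2.0; // hypothetical 2 mg/kg dosing rule
            }
        }

        @Test
        public void computesWeightBasedDose() {
            assertEquals(150.0, new DoseCalculator().doseFor(75.0), 0.001);
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNonPositiveWeight() {
            new DoseCalculator().doseFor(-1.0);
        }
    }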

I am really enjoying this book. It’s a fun and easy read.

[Amazon: The Clean Coder]

What Every *Good* Developer Should Know

I came across this guy’s blog post titled “10 Things Every Good Web Developer Should Know.” The post is geared toward web developers, but it did get me thinking a bit about the more general question. I’ve noticed shortcomings among developers (myself included) for many years. What are some of the things that all GOOD developers SHOULD be expected to know? This list is hardly comprehensive, but I can think of a few things right away:

1. Linux/Unix

If you can’t do basic editing in vi, you may find yourself in a world of hurt at some point. I’ve known many programmers who attempt to write software in the safety of their IDE running on Windows, only to find severe problems when it comes time to deploy on the server (and the server is typically some flavor of Linux). I recall a fellow software engineering student in my college days saying of his code, “It all works, it just won’t compile!” It sounds silly, right? Assuming that software written, built and deployed on Windows will build and deploy just fine on another OS is equally silly (yes, this goes for Java as well).
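A bare-minimum vi survival kit, enough to fix a config file on a server, looks something like this:

    i         enter insert mode (press Esc to return to command mode)
    :wq       write the file and quit (:q! quits without saving)
    dd        delete the current line
    /text     search forward for "text" (n repeats the search)
    u         undo the last change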

2. How to debug

Duh, right? Not so. I have helped many, many engineers with basic debugging. I don’t know if it is that I am particularly good at debugging (I’d like to think so), or that (some) others are particularly poor at it, but for most of my career I’ve heard, “Matt, this isn’t working, can you help?”

Generally this question is asked by an engineer who has spent a fair amount of time staring at the screen hoping to gain some divine inspiration and fix a bug. It never works this way. You have to be willing to dig into the code and actually find where the error is. Look through that stack trace! Run the debugger! When all else fails, start sticking print lines all over your code! Staring at the screen will rarely reveal a complex bug. A compile error, sure, but a bug, no.
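To make the stack-trace point concrete, here is a contrived Java example (the class is an invention for illustration):

    public class TraceDemo {
        static String firstWord(String s) {
            return s.split(" ")[0]; // throws NullPointerException when s is null
        }

        public static void main(String[] args) {
            System.out.println(firstWord(null));
        }
    }

Running it prints something like:

    Exception in thread "main" java.lang.NullPointerException
        at TraceDemo.firstWord(TraceDemo.java:3)
        at TraceDemo.main(TraceDemo.java:7)

The trace names the exact method and line. Start there, not at the screen.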

3. Basic knowledge of C/C++ and/or Assembly

In the age of virtual machines, powerful IDEs, scripting languages, OOAD and encapsulation on top of encapsulation on top of encapsulation, it can be too easy to write code and never understand exactly how much has to happen for that code to work its magic. I have not written anything in assembly since college, and I have not written C code for 10 years, but I rely on my knowledge of the low-level “stuff” every time I write code. It helps to understand the fundamentals of computer science, optimization, memory handling and what exactly makes all the magic of a 4GL come together. Many people get by without knowledge of assembly language, sure, but these people will not be “superstar” engineers… They’ll be programmers.

4. Version Control

There’s no excuse for not using version control. I would say it borders on negligent not to.

5. HTML, CSS, Javascript
This one may seem like another no-brainer, but I have run into many developers over the course of my career who simply do not have anything more than the most basic understanding of HTML.

6. System Administration

Just the other day sendmail quit working for me. I use sendmail to alert the team about project activity in Redmine, Subversion and Jenkins CI. I run project management software that is served using Ruby and Rails, Apache and Tomcat. I have written perl scripts for handling batch jobs and specialized email alerts. I have written bash scripts that tie in to various subversion triggers. I have installed Ant, Maven, Git, Subversion, Tomcat, Apache, GTK, GCC… You name it… All with NO help from a Linux administrator. Like it or not, these activities become the responsibility of the lead software engineer. If you embrace it and enjoy it, life will be easy. If you are lost, and waiting for the help of a system administrator, you may be in for a very long wait!

7. Database Design

EVERY good programmer MUST understand things like normalization, joins, foreign keys, natural keys, sequences, race conditions, locking, and on and on. We cannot rely on a database designer. Even at the largest companies I have worked for, the ones with database administrators, the DBAs have little if anything to say about database design. Database design is the responsibility of the software engineer. A poor design can cripple what may otherwise be good software.

8. Quality Assurance

Our goal as engineers is to deliver a high quality product with no defects. We all know that there will be defects, but this fact does not change the goal.

9. Communication, Documentation, Technical Writing

Even if your company does have the means to hire a dedicated technical writer, that employee will have no idea what your code is doing. Strong documentation is on the engineer (us). I never had to take a technical writing class in college. Fortunately, writing is something I enjoy. The engineer who hopes never to have to write a document is likely to be very annoyed in this career.

Best Software Developers

From Kawseq Consulting:

An average developer can produce software 10 times as fast as the worst developers. The best software developers can produce software 10 times as fast as average developers. That means the best software developers can produce software as much as 100 times as fast as the worst developers.

1. Hiring cheaper developers actually costs much more up front than hiring the best developers, because you have to hire many more of the cheaper developers to get the same job done.

2. Hiring cheaper developers actually costs more later because you have to spend a lot more developer time fixing the higher number of bugs they put in.

3. Hiring cheaper developers means waiting longer to get working software because of the additional build-test-fix cycles to fix the larger number of bugs.

No matter how much time you give them or how many you throw at a project, cheaper software developers cannot produce code of the same quality as high quality software developers. You cannot expect a large number of cheaper software developers to produce a high quality result, just as you would not expect to hire 10 house painters and get them to produce the Mona Lisa. The lower quality produced by large teams of average or poor software developers inevitably leads to software that is more expensive to maintain and develop down the track.

That last bit hits the nail on the head!

Also, cheaper developers do not necessarily supplement more experienced and qualified developers. It just means that the better software developer has to rewrite the mistakes made by the less skilled team members.

Kawseq Consulting: Why Quality is More Important Than Price

Test-Parallel Development

Here’s a post (albeit dated) where a developer lists a few problems with test driven development. There are plenty more where that came from. What I’ve found works better is a hybrid approach, where we write tests at the same time as code (or just after). The idea behind pure TDD is one of those that sounds good on paper but is difficult to implement practically. Developing to the test means that we abandon some of the best parts of Agile by again tying our hands to strict requirements (this time the requirements are automated tests that don’t work until the code required is implemented). While I am a big supporter of functional automated tests and their inclusion in CI, I don’t think pure TDD is practical. A much better approach is to write functional code and tests together.

The biggest problem I have with TDD is included on the Wikipedia entry on the subject:

Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.

Fakes and mocks are fine, but I prefer to spend more time implementing tests that run against real world conditions. Also, almost all applications that I work on include a UI and/or database. Often, database and UI design occurs alongside all other development.

Taking HTMLUnit as an example, how often do we know what form and input names will appear on a page before we implement it? The same is generally true of database design. In an ideal world, pure TDD would be a great approach. In the real world, where I work, I need more flexibility. That said, I think most software teams aren’t anywhere near this being a problem. Most have yet to spend appropriate time on automated tests.
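For reference, here is roughly what such a test looks like with HtmlUnit (a sketch against the classic HtmlUnit API; the URL, form name and input names are placeholders, which is precisely the problem: they don’t exist until the page is designed):

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlForm;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;

    public class LoginPageCheck {
        public static void main(String[] args) throws Exception {
            WebClient client = new WebClient();
            try {
                HtmlPage page = client.getPage("http://localhost:8080/app/login");
                // Every name below must match the rendered page exactly,
                // so the page design has to exist before the test can.
                HtmlForm form = page.getFormByName("loginForm");
                form.getInputByName("username").setValueAttribute("testuser");
                form.getInputByName("password").setValueAttribute("secret");
                HtmlPage result = form.getInputByName("submit").click();
                System.out.println(result.getTitleText());
            } finally {
                client.closeAllWindows();
            }
        }
    }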

Is the Software Medical Device World Ready for Agile?

To begin with, I don’t see any real reason why software medical device manufacturers should fear Agile. I do, however, see some stipulations that need to be made. Here is a rather dated article on the subject (from 2007): Agile Development in an FDA Regulated Setting.

The author of the blog post concludes:

It seems to me that Agile methodologies have a long way to go before we see them commonly used in medical device software development. I’ve searched around and have found nothing to make me think that there is even a trend in this direction. Maybe it’s that Agile processes are just too new. They seem popular as a presentation topic (I’ve been to several), but I wonder how prevalent Agile is even in mainstream software development?

Since the article was written (4 years ago), Agile has clearly gained a solid foothold in mainstream software development. With companies bound by medical device FDA guidelines, however (or even IEEE, ISO 9001, ISO 13485), there may be some understandable fear of new approaches. What seems to happen is that the known process becomes the only trusted process, and adoption of anything new leads to so many questions that it is simply pushed aside (regardless of the potential benefit to the company).

The “twelve principles” of the Agile Manifesto include:

  • Customer satisfaction by rapid delivery of useful software
  • Welcome changing requirements, even late in development
  • Working software is delivered frequently (weeks rather than months)
  • Working software is the principal measure of progress
  • Sustainable development, able to maintain a constant pace
  • Close, daily co-operation between business people and developers
  • Face-to-face conversation is the best form of communication (co-location)
  • Projects are built around motivated individuals, who should be trusted
  • Continuous attention to technical excellence and good design
  • Simplicity
  • Self-organizing teams
  • Regular adaptation to changing circumstances

Uh oh. A few of these principles are very likely to send upper management, at least those that are used to their traditional waterfall SOPs, running for the door. But who says we can’t make modifications where we need to?

I suspect that much of the resistance to Agile methodologies is closely tied to a fear of change. Upper management trusts that which they know, despite some of the obvious shortfalls.

Is there ever a reason NOT to use an Artificial Primary Key?

I found this post on the subject of choosing a primary key. While Java Persistence Annotations allow us to use any field we want as a primary key (as long as it is naturally unique), is there a good reason to use anything that is not a surrogate/artificial primary key?

There are plenty of fine examples for natural primary keys: SKUs, usernames, email addresses, and so on. While these may work fine as a primary key insofar as they satisfy uniqueness requirements, there are some drawbacks, the biggest being the fact that uniqueness may not be guaranteed.

This post lists the reasons against using natural primary keys with 10 very good points:

  • Con 1: Primary key size – Surrogate keys generally don’t have problems with index size since they’re usually a single column of type int. That’s about as small as it gets.
  • Con 2: Foreign key size – They don’t have foreign key or foreign index size problems either for the same reason as Con 1.
  • Con 3: Aesthetics – Well, it’s an eye of the beholder type thing, but they certainly don’t involve writing as much code as with compound natural keys.
  • Con 4 & 5: Optionality & Applicability – Surrogate keys have no problems with people or things not wanting to or not being able to provide the data.
  • Con 6: Uniqueness – They are 100% guaranteed to be unique. That’s a relief.
  • Con 7: Privacy – They have no privacy concerns should an unscrupulous person obtain them.
  • Con 8: Accidental Denormalization – You can’t accidentally denormalize non-business data.
  • Con 9: Cascading Updates – Surrogate keys don’t change, so no worries about how to cascade them on update.
  • Con 10: Varchar join speed – They’re generally integers, so they’re generally as fast to join over as you can get.

So while on the surface it may seem simple to use a seemingly unique field for a primary key (a username on a domain, for example), it can be disastrous later on. Con 6 above is the big one, but Con 7 is something people don’t seem to think about as much. We can enforce uniqueness on any field we want, be it a key field or not… That said, I really cannot think of a good reason to use a natural key (other than developer laziness, which is in fact the key reason why bad code tends to be written in the first place). A sketch of the surrogate-key pattern follows below.
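In JPA terms, the safe pattern is a generated surrogate key, with uniqueness still enforced on the natural candidate. A minimal sketch (the entity and field names are illustrative):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Account {

        // Surrogate primary key: small, meaningless, guaranteed unique, never changes.
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        // Natural candidate (e.g., username): still unique, but free to change
        // without cascading through every table that references this one.
        @Column(unique = true, nullable = false)
        private String username;

        protected Account() { } // no-arg constructor required by JPA

        public Account(String username) { this.username = username; }

        public Long getId() { return id; }
        public String getUsername() { return username; }
    }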

[Stack Overflow: Deciding between an artificial primary key and a natural key for a Products table]
[Rapid Application Development: Surrogate vs Natural Primary Keys – Data Modeling Mistake 2 of 10]
[Wikipedia: Surrogate Key]
[Wikipedia: Natural Key]

Valuable Unit Tests in a Software Medical Device, Part 9

I thought I was done, but here is yet another good reason to incorporate comprehensive automated functional testing: validation of multiple Java runtime environments. Fabrizio Giudici proposes this as a solution for testing with Java 7, but we can always take it a step further, verifying multiple OS environments as well. Of course, this requires that we have those build environments available (easy enough, using virtual machines).

Brain Rules: Why a Daily Standup Should Be in the Afternoon

I’m reading a book right now called Brain Rules. In it, the author discusses how after a number of hours sitting in front of a computer our brains literally start to call it quits for the day. This happens long before our typical 8 hour workday is up. If your job involves something that requires intense focus, such as writing software, you’re already well aware of this.

I wasn’t thinking of this when I planned my team’s daily standup meetings at 2 in the afternoon. Perhaps a bit selfishly, I was just thinking of when I like to have a break in my work. It turns out that I was probably motivated by the fact that this is a time of day when I feel the need for a change of pace.

I’ve found that when I am head-down into programming, the worst thing to deal with is an interruption. I’ve also found that I can only be hyper-focused on programming for 4-5 hours at a time (at best). After a while I just start to get tired, and it’s time for a break. The book Brain Rules offers some interesting insight into why this is.

Back to the stand up meeting: Long ago, when I first heard of daily “stand ups” I was alarmed. The last thing I needed was yet another meeting to interrupt an already busy day. What I didn’t understand was the fact that a daily stand up meeting achieves a few important things:

1. It actually reduces interruptions. People who may interrupt otherwise are encouraged to put the interruption off until the meeting.

2. It encourages work. I, and others, always feel like we want to have something to say at the stand up meeting.

3. It discourages long meetings. If the owner of the meeting (the team lead) is wise, he or she will insist that the meeting last no longer than 20 minutes.

4. It provides a much needed break in the afternoon, and an opportunity to refocus. Sometimes, if you live somewhere beautiful like North Carolina, it’s even a good idea to make the stand up meeting a walk outside. A little exercise and fresh air works wonders after sitting focused on a computer screen for hours.

[Amazon: Brain Rules]

Continuous Integration on Software Medical Device Projects, Part 9

Using a CI Environment to Replace the Traditional DHF

Naturally, an important part of continuous integration is having a CI build that can be checked regularly for continued build success. This is probably what is commonly thought of as the key benefit, but there is much more to be gained. Any continuous integration environment that is worth using will allow the team to incorporate packaging of key project items with each build. This includes important documents, tests (both manual and automated test outcomes can be packaged), requirements, design specifications and build results (deployment packages, libraries, executables, installers, etc.). The important thing to note here is that, used wisely, the CI environment can provide a snapshot of all project outputs at any given point in time. Hopefully it is becoming clear that this gives us the possibility of automated DHF creation. Not only that, we have a much more detailed DHF throughout the life of a project, not merely at the point in time at which a particular freeze was performed.

Tests

The continuous integration server should include unit tests (and by unit tests, I mean functional level automated tests) that provide a level of self-testing code such that any build that fails to pass these tests at build time is considered a failed build.

Packaging

Continuous integration output need not (nor should it) package only built objects. We can leverage CI build integration with our version control system to package everything required per our design outputs (21 CFR 820.30(d)), design review (21 CFR 820.30(e)), design verification (21 CFR 820.30(f)), design validation (21 CFR 820.30(g)), design transfer (21 CFR 820.30(h)), design changes (21 CFR 820.30(i)) and even our design history file (21 CFR 820.30(j)).

Read it all:

[CI on Software Medical Devices, Part 1]
[CI on Software Medical Devices, Part 2]
[CI on Software Medical Devices, Part 3]
[CI on Software Medical Devices, Part 4]
[CI on Software Medical Devices, Part 5]
[CI on Software Medical Devices, Part 6]
[CI on Software Medical Devices, Part 7]
[CI on Software Medical Devices, Part 8]
[CI on Software Medical Devices, Part 9]

Continuous Integration on Software Medical Device Projects, Part 8

Build Script Creation

Ant should automatically determine which files will be affected by the next build. Programmers should not have to figure this out manually. While we will use an IDE for most development, we must not rely on the build scripts that are generated by the IDE. There are a few reasons for this:

  • IDE generated build scripts are not as flexible as we need them to be (it is difficult to add, remove and modify build targets).
  • IDE generated build scripts often contain properties that are specific to the environment in which they were generated. Along with this, something that builds okay in one work environment may not build when committed and used by the CI build, or when pulled into another environment.
  • IDE generated build scripts very likely lack all the build targets necessary.
  • IDE generated build scripts may rely on the IDE being installed on the build machine. We cannot assume that this will be the case.

The Ant buildfile (build.xml) should define correct target dependencies so that programmers do not have to invoke targets in a particular order to get a good build.

Triggering Builds

As noted, we will use Jenkins-CI to automatically perform a CI build every hour if there is a change in the repository. The system will be configured to send emails to the development team if there is a problem with the build (i.e., if a changeset breaks the CI build). It is anticipated that the CI build will break from time to time; however, a broken build should not be left unattended. A broken CI build indicates a number of possible problems:

  • A changeset didn’t include a necessary library or path.
  • A changeset caused a problem with a previous changeset, and a merge of the changes must be addressed.
  • A unit test failed.
  • The CI build server has a problem.
  • The build script failed to build the new changeset (missing library or required inclusion).

In my experience, the most common cause of a broken CI build is a lack of attention to the build script. Each developer is responsible for making certain that the Ant build scripts are up to date with all required changes. We cannot rely on the build scripts that are generated by an IDE. There are certainly more possible causes that could be added to the above list. It is a good idea for each developer to trigger a CI build immediately following any Subversion commit to ensure that the CI build has not been broken. If a CI build remains broken without being addressed, the team leader and/or project manager may revert the offending changeset and re-open any related issue.

Continuous Integration on Software Medical Device Projects, Part 7

Build Labelling

A build is labeled with a predetermined version number (e.g., “2.0”) and with a Subversion changeset number. The beauty of this is that we have a build that is associated with a particular changeset and, by association, an entire set of project documents and sources (as long as we put everything in a single version control system). Once again it should be clear how beneficial such a setup is when thinking in terms of a DHF. No longer do we have to assign a particular team member to fumble through documentation, ensuring that the proper documents are copied to some folder. In fact, we have very little overhead; our CI server did all the heavy lifting for us!

Mixed Revisions are BAD!

The changeset number is placed in a property file at build time (by the Ant build task). If the changeset number has the letter M at the end (e.g., 3001M), the currently checked out fileset contains local modifications (a colon-separated range, e.g. 2999:3001, indicates a mixed revision).

The current working copy changeset number is easily viewable in TortoiseSVN or at the command line with the svnversion command. It is expected that during development and testing a mixed or modified revision will be used at almost all times. However, any final build must not have the letter M in the build number.

If the letter M does appear in the build number of a formal build, it indicates that there are items in that build that are not up-to-date in the repository, and therefore the build cannot be duplicated using only a changeset or tag. To avoid this, the continuous integration server should be used to create formal builds, and it should be configured to use only a current changeset and no locally modified files.
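For illustration, these are the forms svnversion can print (revision numbers invented; the bracketed notes are mine, not program output):

    3001         [clean working copy at a single revision]
    2999:3001    [mixed-revision working copy]
    3001M        [local modifications present; not suitable for a formal build]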

Ant Build Targets

Build.xml is the Ant build file. It is located at the root level of the project source tree. The following build targets are necessary (a sketch of such a buildfile follows the list):

  • build – Compile all code, creating .class files
  • dist – Call the build target and package all code and necessary configuration files, creating a .war file (web application archive).
  • test – Execute automated tests
  • clean – Clean an existing build and dist (remove previously built items)
  • javadoc – Generate Javadoc
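A skeletal build.xml along these lines might look like the following (a sketch; directory names and the .war packaging are illustrative, and the test target is wired to fail the build on any failing test, which is exactly what the CI server needs):

    <project name="myproject" default="dist" basedir=".">
        <property name="src.dir"   value="src"/>
        <property name="build.dir" value="build"/>
        <property name="dist.dir"  value="dist"/>

        <!-- Remove previously built items -->
        <target name="clean">
            <delete dir="${build.dir}"/>
            <delete dir="${dist.dir}"/>
        </target>

        <!-- Compile all code, creating .class files -->
        <target name="build">
            <mkdir dir="${build.dir}"/>
            <javac srcdir="${src.dir}" destdir="${build.dir}" includeantruntime="false"/>
        </target>

        <!-- Execute automated tests; haltonfailure fails the build on any failing test -->
        <target name="test" depends="build">
            <junit haltonfailure="true">
                <classpath path="${build.dir}"/>
                <batchtest>
                    <fileset dir="${build.dir}" includes="**/*Test.class"/>
                </batchtest>
            </junit>
        </target>

        <!-- Package code and configuration, creating a .war file -->
        <target name="dist" depends="test">
            <mkdir dir="${dist.dir}"/>
            <war destfile="${dist.dir}/myproject.war" webxml="web/WEB-INF/web.xml">
                <classes dir="${build.dir}"/>
            </war>
        </target>

        <!-- Generate Javadoc -->
        <target name="javadoc">
            <javadoc sourcepath="${src.dir}" destdir="${dist.dir}/javadoc"/>
        </target>
    </project>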

Continuous Integration on Software Medical Device Projects, Part 6

Build Scheduling
Jenkins-CI allows teams to set up a project so that a build is performed whenever a change is committed through a version control system. The “Poll Version Control System” option is selected to do this. From there, the team must set up a schedule so that Jenkins will know how often to poll the version control system. Jenkins can be scheduled to poll monthly, daily, hourly, or even every minute. However, I would recommend building hourly (if and only if there has been a change committed).
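The schedule field takes cron-style syntax. An hourly poll might be entered as shown below (Jenkins also accepts an H token in place of the fixed minute to spread load across the hour):

    # Poll Subversion at the top of every hour; a build runs only when a change is found
    0 * * * *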
In the project view, there are lists of the build history. I’ve set up build scripts and artifact archiving so that you can get the build at any point. The Java archive files will often be set up to be self-contained. This just means that all the “stuff” that is needed to run the application (classes, libs, properties, etc.) is archived in that single file, and you can always grab it from the CI environment.

To get a current jar (Java executable), go to the build history. A build with a red circle next to it failed. A build with a blue circle is a good build. Click on the most recent successful build and you will see the build number, date and build artifacts. You can download the jar files from here.
This is how we will handle projects in general when someone wants to get a build. There will be more details on tags and so on going forward. We can modify the CI environment to include any artifacts that we want, such as Javadoc or documentation (as long as it is something that we can pull out of Subversion).

Builds are created locally by developers for a number of reasons; however, such builds must always be considered informal. Any build released for formal testing, at the end of a Sprint or iteration or upon project completion, must be done in the controlled build environment. Using self-documenting features of code (such as Javadoc), it can be wise to incorporate the output of this extra documentation into the DHF. Why not? There is little overhead in doing so and the benefits are substantial.

Continuous Integration on Software Medical Device Projects, Part 5

What I am proposing in this article is something that I, personally, have never done. In my positions as a software lead, architect, developer and software quality analyst, I have worked only with a DHF as a particular folder with specific subsets of documentation within. This approach has always resulted in a documentation nightmare. I’ve used many version control, issue tracking and continuous integration tools, but I have yet to take the leap to reliance on CI as the source for DHF creation. So while this proposal makes sense, it takes a bit of a leap from the traditional DHF mindset to attempt.

The screenshot here shows what a (simple) project setup may provide in the way of such packaging. It is up to the team to determine how much or how little the CI handles, but it makes the most sense to allow it to do what computers do very well and what humans tend not to do as well: align things.

The CI Environment
The continuous integration build server should closely mimic the environment in which the final product will be deployed. By doing this, a level of confidence is achieved with regard to system compatibility prior to user acceptance and integration testing. It must also be able (through the version control and/or ECM system) to access all the design controls and documents necessary to build a complete DHF. To this end, I propose using a single version control system for everything. It doesn’t make sense, for example, to store source code in one version control system and documents in another. To do so makes importing of all necessary items difficult, if not impossible. There are a number of benefits to utilizing a continuous integration server during project design and development. Do not think of CI as a tool only for software builds. Integrated with the project version control system, it can serve as much more.

  • Changesets tied to builds. “Changeset” is really a Subversion term. For the purposes of this chapter, a “changeset” is what happens any time a user commits a change (be it source code, documentation, graphics, etc.) to the version control system. The CI build should be configured to execute (i.e., build and package) the project when it detects a new changeset.
  • Frequent builds, status updates and rapid feedback. The CI build gives the development team prompt feedback on the build status. If compilation fails, tests fail or some required element cannot be packaged, the entire team is flagged immediately, so everyone knows that a particular check-in has broken something. This feedback eliminates the fear that an unknown break could be so extensive that progress comes to a screeching halt. The near real-time feedback of the CI build saves valuable time (and stress!) throughout development (and even design).
  • Project progress tracking (tickets, tests, etc.).
  • Improved communication. Feedback (peer review) is triggered by every changeset, each of which is easily viewable. There is less overhead for communication and improved team understanding of others’ work.

A few working rules follow from these benefits:

  • Jenkins-CI is used for continuous integration builds. The CI build server runs Ant build scripts and reports results. Software developers are expected to follow the CI build server to make sure that their code commits do not break the CI build.
  • While the use of an IDE is expected, a development team must not rely on IDE-generated build scripts for any project.
  • If using Apache Ant for a project build, the development team should create (minimally) these build targets: clean, build, dist, test.
  • Ant build scripts are no different from other project code: they must be written clearly, follow standards and be commented. Developers are expected to maintain the build scripts; any code commit that requires a change to the build must include those changes upon commit. This includes the addition of a class, library, package, path, etc.

Continuous Integration on Software Medical Device Projects, Part 4

21 CFR Part 820 – DHF Requirements

820.3(e) Design History File (DHF) means a compilation of records which describes the design history of a finished device.
–Device Advice: Regulation and Guidance, Software Validation Guidelines, http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance

Medical device software is audited and controlled by standards defined by FDA, specifically 21 CFR parts 11 and 820. Many of the requirements laid out in this somewhat difficult-to-understand guidance can be made very easy, second nature even, when we use a continuous integration environment throughout the course of project design and development. Looking specifically at the quality system requirements laid out by 21 CFR Part 820.30, Subpart C – Design Controls, it becomes apparent that a good continuous integration environment can help us to address each. A major consideration, perhaps the major consideration, is the completeness of the Design History File.

820.30(j) Design History File. Each manufacturer shall establish and maintain a DHF for each type of device. The DHF shall contain or reference the records necessary to demonstrate that the design was developed in accordance with the approved design plan and the requirements of this part.
–CFR – Code of Federal Regulations Title 21. Subpart C – Design Controls, Section 820.30 Design Controls

The “or reference” part of this statement stands out. Traditionally, medical device manufacturers have thought of the DHF as a physical, self-contained item. But with a project of any complexity, it isn’t difficult to imagine how quickly a DHF may grow into an unruly mess of difficult-to-wade-through “stuff.” Why not simply leverage software tools to make the process seamless? Using a continuous integration build environment, development teams can pull together a baseline of all the elements of a DHF as frequently as they wish; furthermore, they can do so with a degree of accuracy that cannot be achieved through the diligent (yet distractible) legwork of a preoccupied team.

I propose that the DHF need not be a single physical or soft folder with duplicate copies of items. Leveraging the CI environment along with the version control system, it is a much better idea to think of the DHF as a snapshot of all relevant design outputs at a given point in time. To that end, the development team can have many snapshots of the DHF throughout the project lifecycle. To achieve this, they need simply define this process in their standard operating procedures and work instructions.

Continuous Integration on Software Medical Device Projects, Part 3

Jenkins CI

For the purposes of this article, the focus will be on one specific continuous integration build tool, Jenkins CI. This is one of the more popular (open source) tools available. Jenkins CI (the continuation of a product formerly called Hudson) allows continuous integration builds in the following ways:

1.    It integrates with popular build tools (ant, maven, make) so that it can run the appropriate build scripts to compile, test and package within an environment that closely matches what will be the production environment.
2.    It integrates with version control tools, including Subversion, so that different projects can be set up depending on project location within the trunk.
3.    It can be configured to trigger builds automatically by time and/or changeset. (i.e., if a new changeset is detected in the Subversion repository for the project, a new build is triggered.)
4.    It reports on build status. If the build is broken, it can be configured to alert individuals by email.

The above screenshot gives an example of what a main page for Jenkins CI (or any CI tool) may look like. It can be configured to allow logins at various levels and on a per-project basis. This main page lists all the projects that are currently active, along with a status (a few details about the build) and some configuration links on the side. These links may not be available to a general user.

Clicking any project (“job”) leads to further details on its build history and status. The overview screen gives a feel for the CI environment, but it is at the detailed project level that we see the real benefit of the packaging that a well-configured CI environment can perform.
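The dashboard is not the only window into build status. Jenkins also exposes build details through its remote access API, which a team can script against, for example to pull the latest build outcome into a DHF summary. The sketch below assumes a hypothetical server and job name; the .../lastBuild/api/json endpoint is standard Jenkins.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class BuildStatusCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical Jenkins server and job name.
        URL api = new URL(
                "http://jenkins.example.com/job/myproject/lastBuild/api/json");

        // Read the JSON description of the most recent build. The response
        // includes a "result" field (SUCCESS, UNSTABLE, FAILURE) along with
        // the build number and timestamp.
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(api.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        }
        System.out.println(json);
    }
}

A scheduled script along these lines can feed a DHF index that links each baseline to the exact build that produced it.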

Continuous Integration on Software Medical Device Projects, Part 2

Continuous Integration refers to both the continuous compiling and building of a project tree and the continuous testing, releasing and quality control. This means that throughout the project, at every stage, the development team has a build available with at least partial documentation and testing included. In general, builds are performed in an environment that closely matches the actual production environment of the system. In addition, a CI environment should provide statistical feedback on build performance and tests, and should incorporate the version control and ticketing systems. In a development environment, the team may use a version control tool (e.g., Subversion) to link changesets to tickets. In this way, any CI build is linked to a specific changeset, thereby providing linkage to Issues, Requirements and, ultimately, the Trace Matrix.

A development team should perform continuous integration builds frequently enough that no additional version control updates slip in between commit and build, and that no errors can arise without developers noticing and correcting them immediately. For a project in active development, this means configuring the CI tool so that a check-in triggers a build in a timely manner. Likewise, it is generally good practice for the developer committing a changeset to verify that his or her changeset does not break the continuous integration build.

Allow me to address the word “build.” Most software engineers think of a build as the output of compiling and linking. I suggest moving away from this narrow definition and expanding it: a “build” is the completion (in the compiler sense and beyond) of all things necessary for a successful product delivery. A CI tool runs whatever scripts the development team tells it to run. As such, the team is free to use the CI tool as a build manager (sorry, build managers, I don’t mean to threaten your jobs). It can compile code, create an installer, bundle any and all documents, create release notes, run tests and alert team members about its progress.

Continuous Integration on Software Medical Device Projects, Part 1

I’m currently working on an article about continuous integration on software medical device projects, and how the CI environment can actually be used to solve many of the design and tracing requirements that must be dealt with on such a project. I’m not finished, but I wanted to post a little bit here. Here goes.

A continuous integration (CI) tool is no longer simply something that is “nice to have” during project development. As someone who has spent more time than I care to discuss wading through documents, making sure references, traceability, document versions and design outputs are properly recorded in a Design History File (DHF), I hope to make clear the value of using CI to automate such tedious and error-prone manual labor. CI shouldn’t be thought of as a “nice-to-have.” On the contrary: it is an absolute necessity!

What is Continuous Integration?
In software engineering, continuous integration (CI) implements continuous processes of applying quality control — small pieces of effort, applied frequently. Continuous integration aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development.
–Wikipedia: Continuous Integration

When I say that continuous integration is an absolute necessity, I mean that both the CI tool and the process are needed. A CI tool takes the attempts, sometimes feeble attempts, of humans to keep large amounts of documentation consistently traceable, and hands that work to the computer system, which does this sort of bookkeeping best. The use of a CI tool is not simply an esoteric practice for those who are fond of it. As you will learn in this article, continuous integration is something that good development teams have always attempted, but they have too often failed to use software tools to ease the process. Going a step further, development teams can use a CI tool to simplify steps that they may never have dreamed of automating before!

Valuable Unit Tests in a Software Medical Device, Part 8

The Traceability Matrix

A critical factor in making unit tests usable in an auditable manner is incorporating them into the traceability matrix. As with any test, requirements, design elements and hazards must be traced to one another through use of the traceability matrix.

The project team must document traceability of requirements through specification and testing to ensure that all requirements have been tested and correctly implemented (product requirements traceability matrix).

Thomas H. Farris, Safe and Sound Software

Our SOPs and work instructions will require that we prove traceability of our tests and test results, whether they are manual tests or automated unit tests. Just as has always been done with the manual tests we are familiar with, tests must be traced to software requirements, design specifications, hazards and risks. The goal is simply to prove that we have tested what we have designed and implemented, and in the case of automated tests this is all very easy to achieve!
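To make “easy to achieve” concrete: if each TestNG test carries its trace identifiers in its groups annotation (the labeling approach described in Part 7 below), a small helper can emit trace matrix rows directly from the test code via reflection. What follows is a minimal sketch; TraceMatrixExtractor is a hypothetical name, not part of TestNG.

import java.lang.reflect.Method;

import org.testng.annotations.Test;

public class TraceMatrixExtractor {

    // Print one trace matrix row per test method: the method name
    // followed by the trace identifiers found in its groups.
    public static void printTraceRows(Class<?> testClass) {
        for (Method method : testClass.getDeclaredMethods()) {
            Test annotation = method.getAnnotation(Test.class);
            if (annotation != null && annotation.groups().length > 0) {
                System.out.println(method.getName() + " -> "
                        + String.join(", ", annotation.groups()));
            }
        }
    }
}

Run as part of the CI build, a report like this can be diffed against the requirements list to flag untraced tests or untested requirements.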

Do We Still Need Manual Tests?

Yes! Absolutely! There are a number of reasons why manual tests are still, and always will be, required; Installation Qualification and environmental tests are two obvious examples. Both manual and automated tests are valid and valuable, and neither should be considered a replacement for the other.

Manual tests allow for a certain amount of “creative” testing that may not be considered during unit test development. Manual tests also lead to greater insight related to usability and user interaction issues.

To this end, any defect discovered during manual testing should result in an automated test, as sketched below.
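As an illustration of that conversion, suppose a manual tester discovers that a session created with a zero time-to-live is still reported as active. The resulting automated regression test might carry both a sequential identifier and a defect identifier in its groups (all class names and IDs here are hypothetical):

import org.testng.annotations.Test;

import static org.testng.Assert.assertTrue;

public class SessionExpirationRegressionTest {

    // Minimal stand-in for the application's session object.
    static class UserSession {
        private final long expiresAtMillis;

        UserSession(long ttlMillis) {
            this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
        }

        boolean expired() {
            return System.currentTimeMillis() >= expiresAtMillis;
        }
    }

    /**
     * 1000_050 / DEFECT_0042: a session created with a zero time-to-live
     * must immediately report itself as expired.
     */
    @Test(groups = {"1000_050", "DEFECT_0042"})
    public void testZeroTtlSessionIsExpired() {
        assertTrue(new UserSession(0).expired());
    }
}

Labeling the test with the defect identifier keeps the trace matrix connected to the defect history, so an auditor can follow the path from complaint to corrective test.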

References

  • Device Advice: Regulation and Guidance, Software Validation Guidelines. FDA. http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance
  • Farris, Thomas H. Safe and Sound Software: Creating an Efficient and Effective Quality System for Software Medical Device Organizations. ASQ Quality Press, Milwaukee, Wisconsin, 2006.
  • CFR – Code of Federal Regulations Title 21, Part 820, Subpart C – Design Controls, Section 820.30 Design Controls.
  • Leffingwell, Dean. Agile Software Requirements. Addison-Wesley, Boston, MA, 2011.
  • Humble, Jez, and David Farley. Continuous Delivery. Addison-Wesley, Boston, MA, 2011.

Valuable Unit Tests in a Software Medical Device, Part 7

Regulated Environment Needs Per 21 CFR Part 820 (Subpart C—Design Controls):

(f) Design verification. Each manufacturer shall establish and maintain procedures for verifying the device design. Design verification shall confirm that the design output meets the design input requirements. The results of the design verification, including identification of the design, method(s), the date, and the individual(s) performing the verification, shall be documented in the DHF.

Simply put, our functional unit tests must be a part of our DHF, and we must document each test, each test result (success or failure) and tie tests and outcomes to specific software releases. This is made extremely easy with a continuous integration environment in which builds and build outcomes (including test results) are stored on a server, labeled and linked to from our DHF. Indeed, what is a tedious task when tests are executed and documented manually becomes quite convenient.

The same is true of Design validation:

(g) Design validation. Each manufacturer shall establish and maintain procedures for validating the device design. Design validation shall be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents. Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions. Design validation shall include software validation and risk analysis, where appropriate. The results of the design validation, including identification of the design, method(s), the date, and the individual(s) performing the validation, shall be documented in the DHF.

Because our CI environment packages build and test conditions at a given point in time, we can successfully satisfy the requirements laid out by 21 CFR 820 Subpart C, Section 820.30 (f) and (g) with very little effort. We simply allow our CI environment to do what it does best, and what a human tester may spend many hours attempting to do with accuracy.

Document the Approach

As discussed, all these tests are indeed very helpful to the creation of good software. However, without a wise approach to their incorporation in our FDA regulated environment, they are of little use in any auditable capacity. It is necessary to describe our approach to unit test usage and documentation in our Standard Operating Procedures and work instructions, in much the same way that we would document any manual verification and validation test activities.

To this end, it is necessary to make our unit tests and their outputs an integral part of our Design History File (DHF). Each test must be traceable, and this means that unit tests are given unique identifiers. These unique identifiers are very easily assigned using an approach in which we organize tests in logical units (for example, by functional area) and label tests sequentially.

Label and Trace Tests

An approach that I have taken in the past is to assign a high-level numeric identifier to each functional area and a secondary sub-identifier to each specific test. For example, we may have the following functional areas: user session, audit log, data input, data output and web user interface tests (these are very generic examples of functional areas, granted). Given such functional areas, I would label each test, using test naming annotations, with the following high-level identifiers:

1000: user session tests
2000: audit log tests
3000: data input tests
4000: data output tests
5000: web user interface tests

Within each functional area it is then necessary to go a step further, applying a sequential identifier to each test. For example, the user session package may include tests for functional requirements such as user login, user logout, session expiration and multiple concurrent user logins.

In such a scenario, we would label the tests as follows:

1000_010: user login
1000_020: user logout
1000_030: session expiration
1000_040: multiple concurrent user login

Using TestNG annotations along with proper Javadoc comments, it is very easy to label and describe a test so that inclusion in our DHF is simple. Here is what the user login test (1000_010) might look like; Fixture and UserSession stand in for project-specific test helpers:

import org.testng.annotations.Test;

import static org.testng.Assert.assertNotNull;
import static org.testng.Assert.assertTrue;

public class UserSessionTest {

    /**
     * 1000_010: Test basic user login and session creation with a valid user.
     */
    @Test(groups = {"1000_010"})
    public void testUserLogin() throws Exception {
        // Fixture and UserSession are project-specific test helpers.
        Fixture fixture = new Fixture();
        UserSession session = fixture.login("test_user", "test_password");

        assertNotNull(session);
        assertTrue(session.active());
    }
}

Any numbering we choose to use for these tests is fine, as long as we document our approach to test labeling in some project-level document, for example a validation plan or master test plan. Such decisions are left to those who design and apply a quality system for the FDA regulated project. As most of us know by now, the FDA doesn’t tell us exactly how we are to do things; rather, we are simply told that we must create a good quality system, trace our requirements through design, incorporate the history in our DHF and be able to recreate build and test conditions.

If I make this all sound a little too easy, it is because I believe it is easy. Too often we view cGMP guidance as a terrible hindrance to productivity. But we are in control of making things as efficient as we can.

Device Advice: Regulation and Guidance, Software Validation Guidelines, http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance