The following is an article that I am working on for a yet-to-be-determined publication. Having done this before, I will say that getting an article published in a journal or magazine isn’t as difficult as one may think (as long as you have something to say). This hasn’t been proofed, so please forgive any typos or errors. This article is not about the role of management. I would like to follow up on that subject, because I do think management can and should help to produce great software engineers. I’ve seen otherwise good programmers flounder under poor management. It happens. This article is about the role of the developer, the individual contributor, in making sure that his or her career starts off right and continues to grow.
One more note: The original version of this post was written in about two hours and was full of errors. I’ve applied a number of corrections, but it will likely be edited further before publication.
I have no idea whether or not most developers using Agile have actually read the “Agile Manifesto.” Here it is:
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
This post is more about career growth than Agile, but I think the Agile approach is relevant here. Agile, as we know it, is an approach to software development. It can also be an approach to managing one’s own career.
A friend of mine recently said this: “Your attitude determines your altitude.” Although he was speaking in a general sense, I couldn’t help but think of the application of this motivational advice in relation to software engineering. It is relevant because as software engineers, our career growth is in our hands—perhaps to a greater degree than in any other field.
I stumbled upon this article today. It’s a little dated (from 2008), but still relevant. I don’t think the list is comprehensive, and I certainly think other technical leads would have varying opinions. All of the points listed are good, but some really stand out:
6. Be part of the design of everything
This does not mean do the whole design. You want to empower team members. But your job is to understand and influence each significant subsystem in order to maintain architectural integrity.
7. Get your hands dirty and code
Yes, you should take parts of the code and implement them. Even the least glamorous parts. This will help you avoid getting stuck alone between management and the team. It will also help you gain respect in the team.
20. Don’t blame anybody publicly for anything
In fact as a tech lead you cannot blame anybody but yourself for anything. The moment you blame a team member in public is the moment when the team starts to die. Internal problems have to be solved internally and preferably privately.
24. Mentor people
It is your job to raise the education level of your team. By doing this you can build relationships at a more personal level and this will gel the team faster and harder. It is very effective with more junior people in the team but there are ways to approach even more senior members, just try not to behave like a teacher.
25. Listen to and learn from people
Even if you are the most experienced developer on the team you can always find new things and learn from others. Nobody is a specialist in everything and some of your team members can be domain specialists who can teach you a lot.
28. Be sure you get requirements and not architecture/design masked as requirements
Sometimes business people fancy themselves architects, sometimes they just speak in examples. They can ask for technology XYZ to be used when what they really mean is they want some degree of scalability. Be sure to avoid hurting feelings but be firm and re-word everything that seems like implied architecture. Get real requirements. To be able to do this you have to understand the business.
36. React to surprises with calm and with documented answers
Never get carried away with refusals or promises when confronted with surprises. Ask for time to think and document/justify your answers. It will make you look better and it will get you out of ugly situations.
A theme throughout the list, and throughout a number of similar books and articles, is that a good technical lead appreciates and values the varied talents and particular skills of the team. A great technical leader isn’t necessarily the “know it all” of the group. He or she should certainly be skilled and eager to maintain that skill–and even be a great developer. But the smartest person in the room? Maybe. Maybe not. Personally, I like working around people who are smarter than I am. This is the best way to learn.
And there’s a flip side to number 20: Don’t blame people publicly for problems, but be quick to praise people for successes, major and minor. A sense of recognition for one’s diligence is a tremendous motivator. I don’t know a single person who doesn’t appreciate kudos. Most parents realize that their children respond better to positive reinforcement than negative… This doesn’t change when one reaches a certain age. I’m not suggesting that a team member should never be confronted about problems. Of course he or she should (and must) be.
Very recently a company-wide email spoke of a major success of mine (the successful deployment of a year-long project) and mentioned me by name. It felt great, and it made me want to pursue even more success (and it was a great confidence boost). Simply put, it’s good to know that the folks at the top of the organization are aware of, and appreciate, the work of those in the trenches!
This all may sound like a lot of feel-good fluff. It isn’t.
I FIRMLY believe that documents related to a project should be managed in the same version control repository as the source code. This gives us a snapshot in time of all items related to a project. The problem comes when using a wiki (and I love using wikis, so don’t get me wrong). If we use wiki pages for specs, requirements, and so on, there is no way to link a wiki instance in time to a Subversion (Git, Mercurial, whatever) instance in time.
And I don’t think we would want such a feature anyway. A wiki covers many projects and many team needs, not just the needs of a small group of programmers on a single project. I can’t imagine “rolling back” an entire wiki to a given snapshot.
I wonder if there are any clever ideas out there for handling a need such as this. I can see the possibility of pre-commit hooks being used to label wiki pages, but this seems cumbersome (if not entirely unmanageable). The other solution is to rely on the wiki only for collaboration and not for official documentation of any kind. This approach, unfortunately, cripples much of the power of using a wiki.
I am open to ideas.
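One idea worth sketching: rather than ever rolling the wiki back, periodically export the relevant wiki pages into the project’s documentation folder and commit them alongside the source, so each changeset carries the documentation it was built against. A rough sketch in shell, assuming a Trac-style wiki that serves raw page text via a ?format=txt URL (the wiki URL and page names below are placeholders, not real values):

```shell
#!/bin/sh
# Export named wiki pages into the versioned docs/ folder so that a
# documentation snapshot travels with the code in version control.
# Assumes a Trac-style wiki that exposes raw page text via ?format=txt;
# other wiki engines expose raw text differently.
snapshot_pages() {
    wiki_url="$1"
    shift
    for page in "$@"; do
        curl -s "$wiki_url/$page?format=txt" > "docs/$page.txt" || return 1
    done
}

# Example invocation (URL and page names are placeholders):
# snapshot_pages http://wiki.example.com/wiki Requirements DesignSpec
# svn commit docs -m "Snapshot wiki documentation pages"
```

Run before tagging a release, and the documentation snapshot is tied to the same changeset (or tag) as the code, while the wiki itself stays free for day-to-day collaboration.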
Under an extremely tight deadline, one team member decided that it would be best if the developers took over a conference room. On a long conference room table there are 6 computers, and 6 extremely talented developers chat, joke, brainstorm and work away. The manager of the team is there too, explaining requirements and helping to clarify definitions and functionality.
There is pizza, too.
It’s like that scene in Apollo 13 where the engineers have to figure out how to get the Apollo crew back to Earth. Ideas bounce freely and communication is immediate. I don’t have to wait for a response to an email or a response in a chat window (which may or may not come). And there’s something about sharing a space with a common goal: The team seems to gel. There is little or no arguing or passive-aggressive commentary as I have seen in meetings throughout my career. We’re all in this together, after all.
I’ve never seen software written with such efficiency.
I read a post today with a list of funny checkin comments. Some of them are funny simply because of their lack of description. Here are some comments I’ve seen in my personal experience:
- many small changes
- Microsoft IE sucks!
- fix the bug
Worse, I’ve seen entirely empty changeset comments.
The above list, along with those found on the funny checkin comments page, provides some examples of inappropriate commit comments. Why? They are unprofessional and lacking in detail and meaning. Some projects are audited and reviewed by external third parties. As a project manager or architect, would you be embarrassed for an auditor to see the comment “fix sucky code”? I would. Even worse than the embarrassment, there is a productivity problem that can arise from poor checkin comments.
What is an appropriate comment? An appropriate comment must (minimally) have a few things:
1. Appropriate level of detail about the change, including why the change was made, what impact there may be, etc.
2. Appropriate to the changeset. Along with this, a single changeset should, as much as possible, reflect a single ticket or change. Many lazy developers check in a large set of code with a number of intertwined changes that are unrelated. When it comes time to revert a particular change or track a defect this creates problems, and ultimately it defeats one major purpose of version control.
3. Details about the completeness of the change. Generally a changeset should complete a ticket or work item, but this is not always the case. If there is remaining work to be done, “TODO” items or further functional requirements that impact the changeset, this should be noted.
4. Finally, perhaps the most important part, the checkin comment should refer to a ticket. Not all changesets have tickets written, sure, but in general, if the ticket is a defect, enhancement or requirement implementation, there should be one or more tickets that are related. Any modern version control and ticket system will be able to tie these together.
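Putting those four points together, a reasonable checkin comment might look something like this (the ticket numbers and class name here are hypothetical):

```
Fix NullPointerException in report export (ticket #482)

ReportExportServlet assumed a non-null date range; an empty range now
defaults to the current month. No impact on other report types.
TODO: the PDF exporter has the same flaw; tracked separately as #483.
```

It says what changed, why, what the impact is, what remains to be done, and which tickets it relates to, all in a handful of lines.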
Many developers are sloppy about commenting their changes, and some may feel that commit messages are not needed. Either they consider the changes trivial, or they argue that you can just inspect the revision history to see what was changed. However, the revision history only shows what was actually changed, not what the programmer intended to do, or why the change was made. This can be even more problematic when people don’t do fine-grained commits, but rather submit a week’s worth of changes to multiple modules in one large pile. With a fine-grained revision history, comments can be useful to distinguish trivial from non-trivial changes in the repository. In my opinion, if the changes you made are not important enough to comment on, they probably are not worth committing either.
Getting developers to write good checkin comments seems to be an ongoing battle. In the business of writing software, it’s easy to convince oneself that checkin comments are a waste of time. The real waste of time comes later, when trying to track the introduction of a defect or trace a requirement implementation to code. There is simply no good excuse for missing checkin comments.
Is the checkin comment “cleanup” appropriate? Yes, in some cases, as long as it’s true. If I am cleaning up the formatting of code (indentation, spacing, whitespace), then yes, “cleanup” is an appropriate changeset comment. Generally, however, real comments are required.
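One way to enforce a minimum standard is at the server. Here is a rough sketch of a Subversion pre-commit hook that rejects empty or trivially short log messages; the length threshold is arbitrary, and the script is an illustration rather than a drop-in hook:

```shell
#!/bin/sh
# Sketch of a Subversion pre-commit hook (hooks/pre-commit) that rejects
# commits whose log message is empty or shorter than an arbitrary minimum.

check_msg() {
    # succeed only when the message is at least 10 characters long
    msg="$1"
    [ "${#msg}" -ge 10 ]
}

# When invoked by Subversion, $1 is the repository path and $2 is the
# transaction id; svnlook retrieves the pending log message.
if [ "$#" -ge 2 ]; then
    MSG=$(svnlook log -t "$2" "$1")
    if ! check_msg "$MSG"; then
        echo "Commit rejected: please write a meaningful log message." >&2
        exit 1
    fi
fi
```

A length check obviously cannot judge quality (“many small changes” would pass), but it does stamp out the empty comment entirely, and the threshold or logic can be tightened to taste (for example, requiring a ticket reference).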
Here’s a post (albeit dated) where a developer lists a few problems with test-driven development. There are plenty more where that came from. What I’ve found works better is a hybrid approach, where we write tests at the same time as the code (or just after). Pure TDD is one of those ideas that sounds good on paper but is difficult to implement practically. Developing to the test means that we abandon some of the best parts of Agile by again tying our hands to strict requirements (this time the requirements are automated tests that don’t work until the required code is implemented). While I am a big supporter of functional automated tests and their inclusion in CI, I don’t think pure TDD is practical. A much better approach is to write functional code and tests together.
The biggest problem I have with TDD is included on the Wikipedia entry on the subject:
Test-driven development is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.
Fakes and mocks are fine, but I prefer to spend more time implementing tests that run against real-world conditions. Also, almost all applications that I work on include a UI and/or database. Often, database and UI design occurs alongside all other development.
Taking HTMLUnit as an example, how often do we know what form and input names will appear on a page before we implement it? The same is generally true of database design. In an ideal world, pure TDD would be a great approach. In the real world, where I work, I need more flexibility. This being said, I think most software teams aren’t anywhere near this being a problem. Most have yet to spend appropriate time on automated tests.
Somewhat recently I was thinking about what ticket status might be appropriate when using issue tracking for all tasks from functional requirements to documentation to defect tracking. It got me thinking about the need for peer reviews of code and how tedious these reviews can be. It turns out there is at least one plugin for Trac that includes hooks for annotation of code for the sake of peer review. It does not, however, appear to include any kind of formal sign-off capability.
I started thinking that it would be nice to have a plugin for peer reviews (for Trac or Redmine or whatever). If we define our workflow wisely, however, making the peer review process an integral part of it, we can probably simplify things. Do we really need a plugin, or can we simply use an “In Review” status to achieve the same thing? I suppose the answer depends on how strict you want to be.
Here’s what I’m thinking with regard to the history of a ticket (or issue or task or work item or whatever we choose to call it):
- New
- Assigned (work in progress)
- Resolved (or, if we determine that a ticket should not be completed, we have alternatives, such as deferred, rejected, duplicate, etc.)
- In Review
- Closed
With a setup such as this, we can use the Resolved status as an indicator that an issue has been completed but is not yet ready to be closed. Tickets are only closed when appropriate peer review actions have been taken. Who determines what these actions are? That is up to the project manager (or the team lead), and it is enforced by proper routing of the ticket. Each individual responsible for peer review is assigned the ticket. Seeing the “In Review” status, this colleague reviews the code changes (observing the changeset that is attached to the ticket) and makes comments (in the ticket notes).
I know this sounds like a bit of legwork, but I see a few major benefits of an approach like this:
- Tracing – We now have an audit log of all peer review comments. Using our ticket system with configuration management integration, tickets, changesets and review comments are linked together and not lost in some email thread or document somewhere.
- Time Savings – Anyone who has ever sat through a peer review (and I’m guessing most project managers and developers have) knows how insanely time consuming they can be. Because nobody ever seems to have time, we attempt to save time by doing a large review of code; we wait for a long time, and then we are faced with a peer review involving an overwhelming amount of code. This leads to the next benefit…
- Better Focus of Reviews – I don’t know about you, but I find that I am much better at reviewing a smaller amount of code or a single functional area than attempting to review thousands of lines of code all at once. We’re all busy, and this isn’t going to change. What happens when you find out that you have a peer review at the end of the week and you have to read through and mark up 5 class files? Do you set aside everything you are working on and do it? You try, but time is short, and so you hurry.
- Communication – When I take the time to review a changeset, it benefits both the team and the individual performing the review. Now I am better informed about what others are working on, where it is implemented, how it is implemented, etc. I don’t have to go bug Joe the Developer to ask him if he finished such-and-such. I already know that he did because I reviewed his code.
This all assumes that our team follows good project management practices when it comes to issue tracking and version control. It means that we have to have well-organized tickets and we have to commit changesets in some meaningful fashion. This should be a no-brainer.
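For what it’s worth, Trac’s configurable ticket workflow can model this routing without any plugin at all. A sketch of the relevant trac.ini additions (the state and action names here are my own invention, layered on top of the default workflow):

```ini
[ticket-workflow]
; Submit a resolved ticket for peer review
review = resolved -> in_review
review.name = submit for review
review.permissions = TICKET_MODIFY

; Reviewer accepts: the ticket may now be closed
verify = in_review -> closed
verify.name = review passed
verify.permissions = TICKET_MODIFY

; Reviewer rejects: send it back to the developer
rework = in_review -> assigned
rework.name = review failed
rework.permissions = TICKET_MODIFY
```

With something like this in place, the “In Review” state is not just a convention; the tool itself refuses to close a ticket that has never passed through review.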
To begin with, I don’t see any real reason why software medical device manufacturers should fear Agile. I do, however, see some stipulations that need to be made. Here is a rather dated article on the subject (from 2007): Agile Development in an FDA Regulated Setting.
The author of the blog post concludes:
It seems to me that Agile methodologies have a long way to go before we see them commonly used in medical device software development. I’ve searched around and have found nothing to make me think that there is even a trend in this direction. Maybe it’s that Agile processes are just too new. They seem popular as a presentation topic (I’ve been to several), but I wonder how prevalent Agile is even in mainstream software development?
Since the article was written (four years ago), Agile has clearly gained a solid foothold in mainstream software development. For companies bound by FDA medical device guidelines, however (or even IEEE, ISO 9001, or ISO 13485), there may be some understandable fear of new approaches. What seems to happen is that the known process becomes the only trusted process, and adoption of anything new leads to so many questions that it is simply pushed aside (regardless of the potential benefit to the company).
Consider the principles behind the Agile Manifesto:
- Customer satisfaction by rapid delivery of useful software
- Welcome changing requirements, even late in development
- Working software is delivered frequently (weeks rather than months)
- Working software is the principal measure of progress
- Sustainable development, able to maintain a constant pace
- Close, daily co-operation between business people and developers
- Face-to-face conversation is the best form of communication (co-location)
- Projects are built around motivated individuals, who should be trusted
- Continuous attention to technical excellence and good design
- Self-organizing teams
- Regular adaptation to changing circumstances
Uh oh. A few of these principles are very likely to send upper management, at least those who are used to their traditional waterfall SOPs, running for the door. But who says we can’t make modifications where we need to?
I suspect that much of the resistance to Agile methodologies is closely tied to a fear of change. Upper management trusts that which they know, despite some of the obvious shortfalls.
I thought I was done, but here is yet another good reason to incorporate complex functional automated testing: validation of multiple Java runtime environments. Fabrizio Giudici proposes this as a solution for testing with Java 7, but we can always take it a step further, verifying multiple OS environments as well. Of course, this requires that we have those build environments available (easy enough, using virtual machines).
I read this article at MEDS Magazine about Stryker’s software development process. Stryker is a large and very successful company, so I was a bit surprised to learn that they have had success with the V-Model in software development. In my humble opinion, the V-Model is simply a glorified waterfall approach, and we’ve seen time and again that the waterfall model is not a good method to attempt.
I don’t believe that most medical device software can thrive in an XP or purely lightweight Agile environment either. I believe that we need Agile/Scrum methodologies with some rules and more heavily weighted up-front requirements and design. I’d like to see those of us in the software medical device community come up with our own Scrum, and this may be something that a number of people are eager to collaborate on.
We need to recognize that fixed requirements scope is simply not a reality, in medical devices or otherwise. In the book Agile Software Requirements, Dean Leffingwell points out:
This “fixed requirements scope” assumption has indeed been found to be a root cause of project failure. For example, one key study of 1,027 IT projects in the United Kingdom [Thomas 2001] reported this: “Scope management related to attempting waterfall practices was the single largest contributing factor for failure.”
(In other words, there is no such thing as a fixed requirements scope!) Need we further reason to abandon the waterfall model? Leffingwell offers even more, citing the oft-quoted Standish Group “Chaos” report survey [Standish, 1994]:
- 31% of software projects [using waterfall approach] will be canceled before they are completed.
- 53% of the projects will cost more than 189% of their estimates.
- Only 16% of projects were completed on time and on budget.
- For the largest companies, completed projects delivered only 42% of the original features and functions.
I’ve never observed a true waterfall in practice. I’ve only seen it attempted. Inevitably, there are modifications to the process, as the team and management realize that there is a need to revisit requirements and design. I don’t care how much time is spent (wasted) on up-front design… It will never be enough, and there WILL be a need to revisit those stages that the waterfall model insists should have been locked down.
I’m reading a book right now called Brain Rules. In it, the author discusses how after a number of hours sitting in front of a computer our brains literally start to call it quits for the day. This happens long before our typical 8 hour workday is up. If your job involves something that requires intense focus, such as writing software, you’re already well aware of this.
I wasn’t thinking of this when I planned my team’s daily standup meetings at 2 in the afternoon. Perhaps a bit selfishly, I was just thinking of when I like to have a break in my work. It turns out that I was probably motivated by the fact that this is a time of day when I feel the need for a change of pace.
I’ve found that when I am head-down into programming, the worst thing to deal with is an interruption. I’ve also found that I can only be hyper-focused on programming for 4-5 hours at a time (at best). After a while I just start to get tired, and it’s time for a break. The book Brain Rules offers some interesting insight into why this is.
Back to the stand up meeting: Long ago, when I first heard of daily “stand ups” I was alarmed. The last thing I needed was yet another meeting to interrupt an already busy day. What I didn’t understand was the fact that a daily stand up meeting achieves a few important things:
1. It actually reduces interruptions. People who may interrupt otherwise are encouraged to put the interruption off until the meeting.
2. It encourages work. I, and others, always feel like we want to have something to say at the stand up meeting.
3. It discourages long meetings. If the owner of the meeting (the team lead) is wise, he or she will insist that the meeting last no longer than 20 minutes.
4. It provides a much needed break in the afternoon, and an opportunity to refocus. Sometimes, if you live somewhere beautiful like North Carolina, it’s even a good idea to make the stand up meeting a walk outside. A little exercise and fresh air works wonders after sitting focused on a computer screen for hours.
Using a CI Environment to Replace the Traditional DHF
Naturally, an important part of continuous integration is having a CI build that can be checked regularly for continued build success. This is probably what is commonly thought of as the key benefit, but there is much more to be gained. Any continuous integration environment that is worth using will allow the team to incorporate packaging of key project items with each build. This includes important documents, tests (both manual and automated test outcomes can be packaged), requirements, design specifications and build results (deployment packages, libraries, executables, installers, etc.). The important thing to note here is the fact that, used wisely, the CI environment can provide a snapshot of all project outputs at any given point in time. Hopefully it is becoming clear that this gives us the possibility of automated DHF (Design History File) creation. Not only that, we have a much more detailed DHF throughout the life of a project and not merely at the point in time at which a particular freeze was performed.
The continuous integration server should include unit tests (and by unit tests, I mean functional level automated tests) that provide a level of self-testing code such that any build that fails to pass these tests at build time is considered a failed build.
Continuous integration output need not (nor should it) package only built objects. We can leverage CI build integration with our version control system to package everything required per our design outputs (21 CFR Part 820.30 subpart C (d)), design review (21 CFR Part 820.30 subpart C (e)), design verification (21 CFR Part 820.30 subpart C (f)), design validation (21 CFR Part 820.30 subpart C (g)), design transfer (21 CFR Part 820.30 subpart C (h)), design changes (21 CFR Part 820.30 subpart C (i)) and even our design history file (21 CFR Part 820.30 subpart C (j)).
Read it all:
[CI on Software Medical Devices, Part 1]
[CI on Software Medical Devices, Part 2]
[CI on Software Medical Devices, Part 3]
[CI on Software Medical Devices, Part 4]
[CI on Software Medical Devices, Part 5]
[CI on Software Medical Devices, Part 6]
[CI on Software Medical Devices, Part 7]
[CI on Software Medical Devices, Part 8]
[CI on Software Medical Devices, Part 9]
Ant should automatically determine which files will be affected by the next build. Programmers should not have to figure this out manually. While we will use an IDE for most development, we must not rely on the build scripts that are generated by the IDE. There are a few reasons for this:
- IDE generated build scripts are not as flexible as we need them to be (it is difficult to add, remove and modify build targets).
- IDE generated build scripts often contain properties that are specific to the environment in which they were generated. Along with this, something that builds okay in one work environment may not build when committed and used by the CI build or when pulled into another environment.
- IDE generated build scripts very likely lack some of the necessary build targets.
- IDE generated build scripts may rely on the IDE being installed on the build machine. We cannot assume that this will be the case.
The Ant buildfile (build.xml) should define correct target dependencies so that programmers do not have to invoke targets in a particular order to get a good build.
As noted, we will use Jenkins-CI to automatically perform a CI build every hour if there is a change in the repository. The system will be configured to send emails to the development team if there is a problem with the build (i.e., if a changeset breaks the CI build). It is anticipated that the CI build will break from time to time; however, a broken build should not be left unattended. A broken CI build indicates a number of possible problems:
- A changeset didn’t include a necessary library or path.
- A changeset conflicted with a previous changeset, and a merge of the changes must be addressed.
- A unit test failed.
- The CI build server has a problem.
- The build script failed to build the new changeset (missing library or required inclusion).
In my experience, the most common cause of a broken CI build is a lack of attention to the build script. Each developer is responsible for making certain that the Ant build scripts are up to date with all required changes. We cannot rely on the build scripts that are generated by an IDE. There are certainly more possible causes that could be added to the above list. It is a good idea for each developer to trigger a CI build immediately following any Subversion commit to ensure that the CI build has not been broken. If a CI build remains broken without being addressed, the team leader and/or project manager may revert the offending changeset and re-open any related issue.
A build is labeled with a predetermined version number (e.g., “2.0”) and with a Subversion changeset number. The beauty of this is that we have a build that is associated with a particular changeset and, by association, an entire set of project documents and sources (as long as we put everything in a single version control system). Once again it should be clear how beneficial such a setup is when thinking in terms of a DHF. No longer do we have to assign a particular team member to fumble through documentation, ensuring that the proper documents are copied to some folder. In fact, we have very little overhead; our CI server did all the heavy lifting for us!
Mixed Revisions are BAD!
The changeset number is placed in a property file at build time (by the Ant build task). If the version string has the letter M at the end (e.g., 3001M), the currently checked out fileset contains local modifications (and a colon, e.g. 2999:3001, indicates a “mixed” revision range).
The current working copy changeset number is easily viewable in TortoiseSVN or at the command line with the svnversion command. It is expected that during development and testing a mixed revision will be used at almost all times. However, any final build must not have the letter M in the build number.
If the letter M does appear in the build number of a formal build, it indicates that there are items in that build that are not up-to-date in the repository, and therefore the build cannot be duplicated using only a changeset or tag. To avoid this, the continuous integration server should be used to create formal builds, and it should be configured to use only a current changeset and no locally modified files.
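A small guard can make this check automatic. The sketch below wraps svnversion’s documented output conventions (an M suffix for local modifications, a colon for a mixed revision range, S for a switched working copy); a build script could refuse to produce a formal build whenever the function succeeds:

```shell
#!/bin/sh
# Succeed (exit 0) when an svnversion string denotes anything other than
# a clean, single-revision working copy, e.g. "3001M" or "2999:3001".
is_mixed_or_modified() {
    case "$1" in
        *M*|*:*|*S*) return 0 ;;   # modified, mixed range, or switched
        *)           return 1 ;;
    esac
}

# Example guard for a formal build (only meaningful inside a working copy):
# if is_mixed_or_modified "$(svnversion .)"; then
#     echo "Refusing formal build: working copy is not clean" >&2
#     exit 1
# fi
```

On the CI server, which checks out a pristine copy of a single revision, this guard should never fire; on a developer workstation it almost always will, which is exactly the point.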
Ant Build Targets
Build.xml is the Ant build file. It is located at the root level of the project source tree. The following build targets are necessary:
- build – Compile all code, creating .class files
- dist – Call the build target and package all code and necessary configuration files, creating a .war file (web application archive).
- test – Execute automated tests
- clean – Clean an existing build and dist (remove previously built items)
- javadoc – Generate Javadoc
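A minimal build.xml skeleton showing how these targets and their dependencies might be wired up (the directory names, project name and the omitted junit configuration are placeholders to adapt):

```xml
<project name="myapp" default="dist" basedir=".">
    <property name="src.dir"   value="src"/>
    <property name="build.dir" value="build"/>
    <property name="dist.dir"  value="dist"/>

    <target name="clean" description="Remove previously built items">
        <delete dir="${build.dir}"/>
        <delete dir="${dist.dir}"/>
    </target>

    <target name="build" description="Compile all code">
        <mkdir dir="${build.dir}"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}"
               includeantruntime="false"/>
    </target>

    <target name="test" depends="build" description="Execute automated tests">
        <!-- junit task configuration omitted for brevity -->
    </target>

    <target name="dist" depends="build"
            description="Package code and configuration into a .war">
        <mkdir dir="${dist.dir}"/>
        <war destfile="${dist.dir}/myapp.war" webxml="web/WEB-INF/web.xml">
            <classes dir="${build.dir}"/>
        </war>
    </target>

    <target name="javadoc" description="Generate Javadoc">
        <javadoc sourcepath="${src.dir}" destdir="${dist.dir}/javadoc"/>
    </target>
</project>
```

Because dist declares depends="build", invoking ant dist alone compiles first; programmers never need to remember an invocation order, which is exactly the dependency behavior described above.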
Jenkins-CI allows teams to set up a project so that a build is performed whenever a change is committed through a version control system. The “Poll SCM” option is selected to do this. From there, the team must set up a schedule so that Jenkins will know how often to poll the version control system. Jenkins can be scheduled to poll monthly, daily, hourly, or even every minute. I would recommend polling hourly; Jenkins then builds if and only if there has been a change committed.
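The schedule field uses cron syntax. An hourly poll, for example, looks like this (Jenkins then triggers a build only when polling actually finds a new commit):

```
# Jenkins "Poll SCM" schedule (cron syntax): check the repository hourly
0 * * * *
```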
In the project view, there is a list of the build history. I’ve set up build scripts and artifact archiving so that you can get the build at any point. The Java archive files will often be set up to be self-contained. This just means that all the “stuff” that is needed to run the application (classes, libs, properties, etc.) is archived in that single file, and you can always grab it from the CI environment.
To get a current jar (Java executable), go to the build history. A build with a red circle next to it failed; a build with a blue circle is a good build. Click on the most recent successful build and you will see the build number, date and build artifacts. You can download the jar files from here.
This is how we will handle projects in general when someone wants to get a build. There will be more details on tags and so on going forward. We can modify the CI environment to include any artifacts that we want, such as Javadoc or documentation (as long as it is something that we can pull out of Subversion).
Builds are created locally by developers for a number of reasons; however, such builds must always be considered informal. Any build released for formal testing, whether at the end of a Sprint or iteration or upon project completion, must be created in the controlled build environment. Where the code is self-documenting (through Javadoc, for example), it is wise to incorporate that generated documentation into the DHF. Why not? There is little overhead in doing so, and the benefits are substantial.
What I am proposing in this article is something that I, personally, have never done. In my positions as software lead, architect, developer and software quality analyst, I have worked only with the DHF as a particular folder with specific subsets of documentation within, and this approach has always resulted in a documentation nightmare. I’ve used many version control, issue tracking and continuous integration tools, but I have yet to take the leap to relying on CI as the source for DHF creation. So while this proposal makes sense, it requires a bit of a leap from the traditional DHF mindset.
The screenshot here shows what a (simple) project setup may provide in the way of such packaging. It is up to the team to determine how much or how little the CI environment handles, but it makes the most sense to let it do what computers do very well and what humans tend not to do as well: keep things consistently assembled and aligned.
The CI Environment
The continuous integration build server should closely mimic the environment in which the final product will be deployed. By doing this, a level of confidence is achieved with regard to system compatibility prior to user acceptance and integration testing. It must also have access (through the version control and/or ECM system) to all the design controls and documents necessary to build a complete DHF. To this end, I propose using a single version control system for everything. It doesn’t make sense, for example, to store source code in one version control system and documents in another. To do so makes importing all the necessary items difficult, if not impossible. There are a number of benefits to utilization of a continuous integration server during project design and development. Do not think of CI as a tool only for software builds. Integrated with the project version control system, it can serve as much more.
- Changesets tied to builds
- “Changeset” is really a Subversion term. For the purposes of this article, a “changeset” is what happens any time a user commits a change (be it source code, documentation, graphics, etc.) to the version control system. The CI build should be configured to execute (i.e., build and package) the project when it detects that there is a new changeset.
- Frequent Builds, Status Update and Rapid Feedback
- The CI build gives the development team prompt feedback on the build status. If compilation fails, tests fail or some required element cannot be packaged, the entire team is flagged immediately. To this end, the entire team will know that a particular check-in has broken something. This feedback eliminates the fear that an unknown break could grow so extensive that progress comes to a screeching halt. The near real-time feedback of the CI build saves valuable time (and stress!) throughout development (and even design).
- Project Progress Tracking (tickets, tests, etc.)
- Improved communication
- Feedback (peer review) is triggered by every changeset, each of which is easily viewable
- Less overhead for communication
- Improved team understanding of others’ work
- Jenkins-CI is used for continuous integration builds. The CI build server runs Ant build scripts and reports results. Software developers are expected to follow the CI build server to make sure that their code commits do not break the CI build.
- While the use of an IDE is expected, a development team must not rely on usage of the IDE-generated build scripts for any project.
- If using Apache Ant for a project build, the development team should create (minimally) these build targets: clean, build, dist, test
- Ant build scripts are no different from other project code in that they must be written clearly, follow standards and be commented. Developers are expected to maintain build scripts. Any code commit that requires a change to the build must include those changes upon commit. This includes the addition of a class, library, package, path, etc.
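As an illustration, if a commit introduces a new third-party library, the same commit should carry the corresponding build-script change. The hypothetical Ant fragment below assumes jars live in a `lib` directory picked up by a `compile.classpath` path element; names are placeholders.

```xml
<!-- new-library.jar was added to lib/ in this commit; the classpath
     definition is updated in the same commit so the CI build
     never compiles against a stale dependency list. -->
<path id="compile.classpath">
    <fileset dir="lib" includes="*.jar"/>
</path>
```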
820.30(e) Design History File (DHF) means a compilation of records which describes the design history of a finished device.
–Device Advice: Regulation and Guidance, Software Validation Guidelines, http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance
Medical device software is audited and controlled by standards defined by the FDA, specifically 21 CFR Parts 11 and 820. Many of the requirements laid out in this somewhat difficult-to-understand guidance can be made very easy, second nature even, when a continuous integration environment is used throughout the course of project design and development. Looking specifically at the quality system requirements laid out by 21 CFR Part 820.30, Subpart C – Design Controls, it becomes apparent that a good continuous integration environment can help us to address each one. A major consideration, perhaps the major consideration, is the completeness of the Design History File.
820.30(j) Design History File. Each manufacturer shall establish and maintain a DHF for each type of device. The DHF shall contain or reference the records necessary to demonstrate that the design was developed in accordance with the approved design plan and the requirements of this part.
–CFR – Code of Federal Regulations Title 21. Subpart C – Design Controls, Section 820.30 Design Controls
The “or reference” part of this statement stands out. Traditionally, medical device manufacturers have thought of the DHF as a physical, self-contained item. But with a project of any complexity, it isn’t difficult to imagine how quickly a DHF may grow into an unruly mess of difficult-to-wade-through “stuff.” Why not simply leverage software tools to make the process seamless? Using a continuous integration build environment, development teams can pull together a baseline of all the elements of a DHF as frequently as they wish. Furthermore, they can do so with a degree of accuracy that cannot be achieved through the diligent (yet distractible) legwork of a preoccupied team.
I propose that the DHF need not be a single physical or soft folder with duplicate copies of items. Leveraging the CI environment along with the version control system, it is a much better idea to think of the DHF as a snapshot of all relevant design outputs at a given point in time. To that end, the development team can have many snapshots of the DHF throughout the project lifecycle. To achieve this, they need simply define this process in their standard operating procedures and work instructions.
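In Subversion terms, such a snapshot is nothing more than a cheap server-side tag taken at a point in time. The sketch below is illustrative only: the repository URL, the date-stamped tag-naming convention and the `dhf_tag_url` helper are all hypothetical, and a team's SOPs would define the real convention.

```shell
# dhf_tag_url builds the repository path for a date-stamped DHF snapshot tag.
REPO_ROOT="https://svn.example.com/repo"

dhf_tag_url() {
  echo "${REPO_ROOT}/tags/DHF-$1"
}

# The snapshot itself is a cheap server-side copy:
#   svn copy "${REPO_ROOT}/trunk" "$(dhf_tag_url 2012-06-01)" \
#       -m "DHF snapshot at end of Sprint 4"
dhf_tag_url 2012-06-01
```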
For the purposes of this article, the focus will be on one specific continuous integration build tool, Jenkins CI. This is one of the more popular (open source) tools available. Jenkins CI (the continuation of a product formerly called Hudson) allows continuous integration builds in the following ways:
1. It integrates with popular build tools (Ant, Maven, Make) so that it can run the appropriate build scripts to compile, test and package within an environment that closely matches what will be the production environment.
2. It integrates with version control tools, including Subversion, so that different projects can be set up depending on project location within the trunk.
3. It can be configured to trigger builds automatically by time and/or changeset. (i.e., if a new changeset is detected in the Subversion repository for the project, a new build is triggered.)
4. It reports on build status. If the build is broken, it can be configured to alert individuals by email.
The above screenshot gives an example of what a main page for Jenkins CI (or any CI tool) may look like. It can be configured to allow logins at various levels and on a per-project basis. This main page lists all the projects that are currently active, along with a status (a few details about the build) and some configuration links on the side. These links may not be available to a general user.
Clicking any project (“job”) links to further details on the build history and status. This image shows what the overview screen in the CI environment might look like, but it is at the detailed project level that we see the real benefit of the packaging that can be performed by a well-set-up CI environment.
Continuous Integration refers to both the continuous compiling and building of a project tree and the continuous testing, releasing and quality control. This means that throughout the project, at every stage, the development team will have a build available with at least partial documentation and testing included. The CI build is used to perform certain tasks at certain times. In general, this simply means that builds are performed in an environment that closely matches the actual production environment of the system. In addition, a CI environment should be used to provide statistical feedback on build performance and tests, and to incorporate the version control and ticketing systems. In a development environment, the team may use a version control tool (e.g., Subversion) linked to tickets. In this way, any CI build will be linked to a specific changeset, thereby providing linkage to Issues, Requirements and, ultimately, the Trace Matrix.
A development team should perform continuous integration builds frequently enough that no additional commits slip in between a commit and its build, and that no error can arise without developers noticing and correcting it immediately. This means that for a project in active development, the project should be configured so that a check-in triggers a build in a timely manner. Likewise, it is generally good practice for the developer committing a changeset to verify that his or her own changeset does not break the continuous integration build.
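Rather than relying on polling alone, a Subversion post-commit hook can notify the CI server so every check-in triggers a build promptly. A minimal sketch follows; the Jenkins URL, job name and token are placeholders, and it assumes the job's remote build trigger has been enabled.

```shell
# build_trigger_url constructs the remote build-trigger URL for a
# Jenkins job; the hook curls this URL after each commit.
JENKINS_URL="http://jenkins.example.com"
BUILD_TOKEN="changeme"

build_trigger_url() {
  echo "${JENKINS_URL}/job/$1/build?token=${BUILD_TOKEN}"
}

# In the repository's hooks/post-commit script:
#   curl -s "$(build_trigger_url MyProject)" >/dev/null
build_trigger_url MyProject
```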
Allow me to address the word “build.” Most software engineers think of a build as the output of compiling and linking. I suggest moving away from this narrow definition and expanding it. A “build” is a completion (in both the compiler sense and beyond) of all things necessary for a successful product delivery. A CI tool runs whatever scripts the development team tells it to run. As such, the team has the freedom to use the CI tool as a build manager (sorry build managers, I don’t mean to threaten your job). It can compile code, create an installer, bundle any and all documents, create release notes, run tests and alert team members about its progress.