Many of us have seen them: The job posts claiming to be seeking a “Ninja Programmer.”
I presume that these are companies looking for:
- A well-versed candidate with diverse skills and the ability to tackle any project.
- A candidate who finds more value in the way he/she is perceived than in salary. (Reading between the lines: “We can’t pay you much, but we will appreciate you a lot!”) This may not always be the case, but there often seems to be a hint of it in “Ninja” job descriptions.
The second point is based on other verbiage I have seen alongside such job posts, things such as “Do you find more value in what you get to do each day than anything else?” Sure, I find value in the more exciting aspects of a role–the opportunity to learn new things, set direction, and get things done. Of course! I also find value in money. Let’s be honest here.
Sometimes the word Ninja is replaced by other crafty (or not-so-crafty) buzzwords: Rock Star, Guru, Genius, Superstar. It doesn’t take much insight to recognize the aim of such verbiage: Flattery.
I’m sure that any company using such lingo in a job description is sincere in the desire to find a candidate who is very good–one who will be able to complete sizable, complex tasks. Naturally! I also think that a single superb programmer can often achieve the work of three, perhaps four or even five, average programmers. I’m fascinated by some of the legendary programmers out there: People like Linus Torvalds and James Gosling. But even the most famous programmers rely on a tremendous and ever-growing amount of community insight and preexisting work. (By the way, there is a video from a Google I/O Conference, The Myth of the Genius Programmer, that addresses this subject very well.)
I’ve worked with a few “Ninja Programmers” over the years. The term is highly relative. I’ve had positions where I may have been considered the Ninja. I’ve had other positions where any Ninja-like self-satisfaction was as elusive as the stealth and cunning of a Ninja portrayed in a 1980s movie.
How did this lingo come about? Those of us in the business of writing software often have a few other desires. I know I do. Anyone who grew up in the 80s dreams of being a Rock Star, Ninja, or at least Frank Dux. The buzzword job titles are a way of making a job that might be very difficult, taxing, and demanding of time and talent sound appealing. I may have to work 100 hours a week, but at least I’ll finally be a Ninja!
It’s no different than job descriptions that contain the infamous words, “We work hard and play hard!” What does play hard even mean? It sounds like something that might involve torn ligaments.
The point of this post isn’t to seem cynical (although it might). The point is this: Software Developers, Architects, Engineers, whatever you call them, aren’t some strange group of people who have to be wooed or tricked into accepting a position. We’re grown adults. There are certainly great Software Engineers out there. But they aren’t stealthy, and they don’t hide in trees or karate chop bad guys.
I’ve worked with some brilliant software folks over the years. I’ve worked with some very poor ones as well. The times I’ve found myself the lone “Ninja” of the team have been among the most floundering of my career. It is difficult to teach oneself new things in a vacuum. I’ve found that it is best to be on a team with lots of other “smart folks”–people from whom you can learn, and people who will add checks and balances. That so-called Ninja–the lone genius a company relies on for all its software needs–is going to cause a few problems.
A few that I can think of right away:
- A lone programmer–the company “genius”–will soon face burnout. No matter how much the individual loves writing software, one can only be stretched so far. This highly talented individual has all sorts of opportunities coming his or her way. It won’t be long before such a talented person is offered a job making more money and working fewer hours. What happens when the single guru leaves the company?
- The lone programmer may not play nice as the company grows. It can be difficult to let others touch your baby. When you’ve written thousands of lines of code and a new team member comes along and starts mucking with it, there can be problems. I’ve been the new guy, pestering the old guy, and messing around with legacy code, much of it poorly documented. I’ve also been the guy on the other side, a bit perturbed when someone dares say that my code might be better if… Be gone, you and your new design pattern!
- Along with the previous point, any programmer with enough of an ego to allow himself or herself to be labelled the company’s Ninja is likely to have an ego that does not lend itself well to “playing nice with others.” I have to confess once again to having been on both sides of this. It feels great to be in a position where you are thought of as being “the smart guy.” Although burdensome, it feels good to be trusted with the complexities of software that nobody else understands. It also leads to a certain feeling of ownership of code, and heavy reliance on a single individual.
- When trusting that lone smart guy/gal with all of the code, a determination has been made: There will be no collaboration–no merging of ideas–no team to challenge each other, from within, to do better. It’s the sharing of backgrounds and experience that leads to the best software design, and I believe this is true no matter how talented one programmer happens to be.
I’m sure there is more that could be added to this list. These are just a few quick thoughts on the matter. While being a Rock Star might not be all that bad, I don’t want to be a Ninja. Sometimes Ninjas get blow-darts stuck in their necks. Sometimes they get beat up by Bruce Lee.
RTP is cool and all, but, honestly, there’s lots of space around here, and we don’t all need to be driving the same direction. I’d love to see more companies build in downtown Raleigh. Hopefully the RedHat move gives it a kick-start.
According to this recent article (Canadian HR Reporter), the highest starting salaries are still in software and other fields of engineering.
The careers with the highest starting salaries for graduates with a bachelor’s degree in the United States are software engineering ($71,666) (all numbers US$), industrial engineering ($62,245), chemical engineering ($57,500), electrical/electronic engineering ($57,145), and computer science ($55,664), according to the Employers Resource Association (ERA).
If you’re going to spend outrageous sums of money on college, make it count. (But don’t go into software because you think you’ll make a lot of money. If you don’t love it, you’ll hate it.)
It is a thrill and an honor to have an article published in a journal, especially Software Developer’s Journal. This is the second time I have had such an honor. My article, Do Not Flounder, appears in the June, 2013 issue.
“Do not flounder!” by Matthew Rupert will surely leave all software engineers entertained and intrigued.
The print edition is soon to come, and I’ll scan and include my article here. I just received a PDF copy of the issue today. It is a subscription-only publication, so unfortunately there is no link to share. Many thanks to the folks at SDJ for making this happen!
I recently started a new position with a new employer, and in so doing I went from being the token go-to guy for everything software-development related to working with several of the best software engineers I have ever known. At first this can seem intimidating, and there is certainly a degree of comfort in feeling like the “best” software engineer at the company. However, the complacency that can come from being the “lead guru” is not good for one’s career path.
The greatest leaps in learning and growing in a career come from being challenged by others with more experience and greater skills. So it’s good to be working with really, really smart people–ones who push me to learn things that I may not be pushed to learn otherwise.
I wrote before about unit tests and one benefit being increased developer confidence. On the flip side of this, unit tests can lead to developer overconfidence. There is no perfect replacement for thorough integration tests in a realistic environment. Most of us (developers) at some point have probably scratched our heads and said, “Well, the unit tests passed.”
Unit test success in such a case is still beneficial. It helps us to identify where a problem is not. (Or, worse, it helps us to realize that we need better unit tests.)
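A sketch of how this overconfidence plays out (class names and scenario hypothetical, in Java): a unit test passes against a mock, but the mock encodes a wrong assumption–here, that a lookup never returns null–so the same code still fails in a real environment.

```java
// Hypothetical example: the unit test below passes, yet the code breaks
// in integration, because the mock never exercises the null case that
// the real lookup service can produce.
interface UserLookup {
    String emailFor(String userId); // real implementations may return null
}

class Notifier {
    private final UserLookup lookup;

    Notifier(UserLookup lookup) {
        this.lookup = lookup;
    }

    // Throws NullPointerException in production when a user has no email,
    // a case the mocked unit test never sees.
    String recipient(String userId) {
        return lookup.emailFor(userId).toLowerCase();
    }
}

public class NotifierTest {
    public static boolean unitTestPasses() {
        // The mock always returns a value -- a happy path the real
        // system does not guarantee.
        UserLookup mock = id -> "User@Example.com";
        return new Notifier(mock).recipient("42").equals("user@example.com");
    }

    public static void main(String[] args) {
        System.out.println(unitTestPasses() ? "unit tests passed" : "unit tests failed");
    }
}
```

The green bar here tells us the logic works *given the mock’s assumptions*, which is exactly why “well, the unit tests passed” is cold comfort without integration tests.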
Just a couple of thoughts.
I heard the term “Bikeshed Conversation” the other day and had to Google it…
I’m reading a book right now titled The Clean Coder. Here’s a quote from chapter 1:
Your career is your responsibility. It is not your employer’s responsibility to make sure you are marketable. It is not your employer’s responsibility to train you, or to send you to conferences, or to buy you books. These things are your responsibility. Woe to the software engineer who entrusts his career to his employer.
-Robert C. Martin, The Clean Coder
Here’s a good question/answer that I came across today over at Stackoverflow. The answer is that it is NOT a good idea to lazily initialize a collection inside a getter or setter method because:
I would strongly discourage lazy initialization for properties in an ORM entity.
We had a serious issue when doing lazy initialization of an entity property using Hibernate, so I would strongly discourage it. The problem we saw was manifest by a save taking place when we were issuing a search request. This is because when the object was loaded the property was null, but when the getter was called it would return the lazy initialized object so hibernate (rightly) considered the object dirty and would save it before issuing the search.
I’ve been burned by this as well.
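A rough sketch of the anti-pattern (class and field names hypothetical): a JPA provider snapshots the entity’s state at load time, so a getter that lazily creates the collection makes the in-memory state differ from that snapshot, and the dirty check then triggers an unintended save. The dirty comparison below is simulated in plain Java rather than real Hibernate:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Hypothetical entity: imagine this annotated with @Entity and the
// collection mapped as a lazy association.
class Order {
    private List<String> items; // Hibernate loads this as null

    // Anti-pattern: lazy initialization inside the getter.
    public List<String> getItems() {
        if (items == null) {
            items = new ArrayList<>(); // entity now differs from loaded state
        }
        return items;
    }
}

public class DirtyCheckDemo {
    // Simulates Hibernate's dirty check: compare the state snapshotted
    // at load time against the state the getter reports now.
    public static boolean isDirtyAfterGet() {
        Order order = new Order();          // freshly "loaded" entity
        Object loadedState = null;          // what was snapshotted at load

        // Merely *reading* the property mutates the entity...
        Object currentState = order.getItems();

        // ...so the entity looks modified, and Hibernate (rightly) saves it.
        return !Objects.equals(loadedState, currentState);
    }

    public static void main(String[] args) {
        System.out.println(isDirtyAfterGet() ? "dirty" : "clean");
    }
}
```

The safer alternatives are to initialize the collection in the field declaration or constructor, or to let the ORM manage it entirely, so that reading a property never changes the entity’s state.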
Ant should automatically determine which files will be affected by the next build. Programmers should not have to figure this out manually. While we will use an IDE for most development, we must not rely on the build scripts that are generated by the IDE. There are a few reasons for this:
- IDE-generated build scripts are not as flexible as we need them to be (it is difficult to add, remove, and modify build targets).
- IDE-generated build scripts often contain properties that are specific to the environment in which they were generated. Along with this, something that builds okay in one work environment may not build when committed and used by the CI build or when pulled into another environment.
- IDE-generated build scripts are unlikely to include all the necessary build targets.
- IDE-generated build scripts may rely on the IDE being installed on the build machine. We cannot assume that this will be the case.
The Ant buildfile (build.xml) should define correct target dependencies so that programmers do not have to invoke targets in a particular order to get a good build.
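A minimal sketch of what this looks like in a buildfile (target names and paths hypothetical): with `depends` declared on each target, a programmer can invoke `dist` alone and Ant will run `clean`, `compile`, and `test` first, in the right order.

```xml
<!-- Hypothetical build.xml: target ordering is expressed via "depends",
     so "ant dist" always produces a clean, tested jar. -->
<project name="example" default="dist">
    <target name="clean">
        <delete dir="build"/>
    </target>
    <target name="compile" depends="clean">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
    </target>
    <target name="test" depends="compile">
        <!-- junit task would run the unit tests here -->
    </target>
    <target name="dist" depends="test">
        <jar destfile="build/example.jar" basedir="build/classes"/>
    </target>
</project>
```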
As noted, we will use Jenkins-CI to automatically perform a CI build every hour if there is a change in the repository. The system will be configured to send emails to the development team if there is a problem with the build (i.e., if a changeset breaks the CI build). It is anticipated that the CI build will break from time to time; however, a broken build should not be left unattended. A broken CI build indicates a number of possible problems:
- A changeset didn’t include a necessary library or path.
- A changeset caused a problem with a previous changeset, and a merge of the changes must be addressed.
- A unit test failed.
- The CI build server has a problem.
- The build script failed to build the new changeset (missing library or required inclusion).
In my experience, the most common cause of a broken CI build is a lack of attention to the build script. Each developer is responsible for making certain that the Ant build scripts are up to date with all required changes. We cannot rely on the build scripts that are generated by an IDE. There are certainly more possible causes that could be added to the above list. It is a good idea for each developer to trigger a CI build immediately following any Subversion commit to ensure that the CI build has not been broken. If a CI build remains broken without being addressed, the team leader and/or project manager may revert the offending changeset and re-open any related issue.
When writing software for medical purposes, that software may or may not be subject to FDA scrutiny. We may or may not be required to submit for a 510(k). What does this mean? How do we know? It’s all a little confusing.
As I considered a series of articles on the subject, I wanted to navigate through 21 CFR 820.30 — Quality System Regulation, and explain implementation of a quality system for each item in subpart C–Design Controls. The first item, however, deals with medical device classification. This is something that should be left to regulatory experts and the FDA, not to a design team working on the quality system. We are not fully qualified to determine whether or not we are working on a medical device, or what its classification is. Left on our own, we could likely come up with many great excuses as to why we think our product is not a medical device!
In subpart C—Design Controls of 21 CFR part 820, we are presented with the following:
(a) General.
(1) Each manufacturer of any class III or class II device, and the class I devices listed in paragraph (a)(2) of this section, shall establish and maintain procedures to control the design of the device in order to ensure that specified design requirements are met.
(2) The following class I devices are subject to design controls:
(i) Devices automated with computer software; and
(ii) The devices listed in the following chart.
868.6810 Catheter, Tracheobronchial Suction.
878.4460 Glove, Surgeon’s.
880.6760 Restraint, Protective.
892.5650 System, Applicator, Radionuclide, Manual.
892.5740 Source, Radionuclide Teletherapy.
Thomas H. Farris, in Safe and Sound Software – Creating an Efficient and Effective Quality System for Software Medical Device Organizations, offers us a definition of a medical device:
Any equipment, instrument, apparatus, or other tool that is used to perform or assist with prevention, diagnosis, or treatment of a disease or injury. As an industrial term of art, a “medical device” typically relates to a product that the FDA or other regulatory authority identifies as a regulated device for medical use. 
I’ve been contemplating a detailed writeup on this subject, but I haven’t had a good idea of where to begin, especially since my own regulatory experience is, at best, limited. Today I stumbled upon this article on the subject over at MEDS Magazine. Bruce Swope (the author) offers a little insight on the three medical device classifications:
Generally, these three classes are determined by the patient risk associated with your device. Typically, low-risk products like tongue depressors are defined as Class I devices, and high-risk items like implantable defibrillators are defined as Class III devices. The marketing approval process is usually determined based on a combination of the class of the device and whether the product is substantially equivalent to an existing FDA-approved product. If the device is a Class I or a subset of Class II and is equivalent to a device marketed before May 28, 1976, then it may be classified as an Exempt Device. A 510(k) is a pre-marketing submission made to the FDA that demonstrates that the device is as safe and effective (substantially equivalent) to a legally marketed device that is not subject to Pre-market Approval (PMA). For the purpose of 510(k) decision-making, the term “pre-amendment device” refers to products legally marketed in the U.S. before May 28, 1976 and which have not been:
- significantly changed or modified since then; and
- for which a regulation requiring a PMA application has not been published by the FDA.
PMA requirements apply to Class III pre-amendment devices, “transitional devices” and “post-amendment” devices. PMA is the most stringent type of product marketing application required by the FDA. The device maker must receive FDA approval of its PMA application prior to marketing the device. PMA approval is based on the FDA’s determination that the application contains sufficient valid scientific evidence to ensure that the device is safe and effective for its intended use(s). For some 510(k) submissions and most PMA applications, clinical performance data is required to obtain clearance to market the device. In these cases, trials must be conducted in accordance with the FDA’s Investigational Device Exemption (IDE) regulation.
Ultimately, however, we cannot classify our own software medical device. That is the job of the FDA.
21 CFR Part 820—Quality System Regulation
Safe and Sound Software – Creating an Efficient and Effective Quality System for Software Medical Device Organizations, Thomas H. Farris. ASQ Quality Press, Milwaukee, Wisconsin, 2006, pg. 118
A brief article over at Medical Electronics Design Magazine points out that doctors, while they are loving iPads, have yet to fully embrace them for use with electronic health record systems. It seems simple enough, given the fact that many EHRs are now web-based, or at least have some sort of web-based UI.
The article ends with this question:
If fancy tablets aren’t doing the trick, what will it take to get doctors to embrace EHRs?
I suspect it just has to do with the fact that iPads are new and many doctors still have expensive computers (expensive because of support and installation agreements) in each room. It’s difficult to give up on something that cost a lot of money. Remember the pain you felt throwing your old Gateway 486 computer in the trash?
I have yet to see a doctor walk into the room with an iPad in hand. I was very impressed that my new dentist had iMacs and had gone digital with his x-rays. I was far less impressed by the old cracked filling that I had to have replaced.
I’ve worked in a few environments where IT/IS maintains very strict control over what software is installed on employee computers. As a developer this can be a real annoyance, especially when people from the IT/IS department question the needs of software developers to install, update, remove and re-install software on their computers. I understand that an IS department may find it necessary to control corporate machines, but engineering needs to be a clear exception to this rule.
That rant aside, I’ve found myself working on a few small desktop applications for users that require an application be placed on a computer. It’s very easy to create a simple Java application without an installer that can be run from wherever. What isn’t so easy is instructing a user how to use the “java -jar …” command, or, worse, installing the JRE on a computer that doesn’t already have it (or has an outdated version). There are tools out there to take care of this problem, JSmooth and Launch4j being two freebies among them.
These tools make it very convenient to package a .jar as a native Windows executable file, and bundle a JRE along with it (the bundled JRE being placed in some path relative to the .exe file).
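As a sketch, a minimal Launch4j configuration along these lines points the wrapper at the jar and a bundled JRE directory (file names and paths here are hypothetical, and the exact element names should be taken from Launch4j’s own configuration format):

```xml
<!-- Hypothetical Launch4j configuration: wraps myapp.jar as MyApp.exe and
     looks for a bundled JRE in a "jre" directory next to the executable. -->
<launch4jConfig>
    <headerType>gui</headerType>
    <jar>myapp.jar</jar>
    <outfile>MyApp.exe</outfile>
    <jre>
        <path>jre</path>
        <minVersion>1.6.0</minVersion>
    </jre>
</launch4jConfig>
```

The end result is that the user double-clicks a single .exe and never sees a command line or a JRE installer.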
The CEO of Expensify wrote a post on the company’s blog about why he considers .NET experience on a resume to be a general liability. Wow. I can’t say that I agree, but there are some interesting points. I have been known to think of Windows developers as somehow having less experience, but this is probably more of a personal bias than anything. Many great developers work in .NET because they have no choice.
Last night I finally made it to a Ruby Hack Night in Raleigh… And it was good. For a “techie,” the single MOST important thing we can do for our career is to stay relevant (and any techie who does this stuff because he/she likes it should never have a problem in this department). The folks who attend a “hack night” on a Thursday evening are the types who do this stuff because they enjoy it.
Last night a lady showed up who was not a programmer or a Ruby enthusiast. She was there because she wanted to know more about how to hire a good programmer for her company. Having spoken to countless recruiters over the years who had no idea what they were talking about, I immediately had a great deal of respect for the approach this lady was taking. Hers is not the easy approach–find a programmer and get a butt in a chair. Rather, she is looking for someone who is good at what he/she does and will be a real asset. (And, frankly, if I weren’t committed to my new role, I may have thrown my resume her way!)
So this got me thinking… I know what I dislike when I am approached by recruiters, but what is it that I like? At this point in my career it is about much more than having a job and programming. I asked this lady if she was looking for a “code monkey” or a leader, and her response was something that was indeed very honest: “I’m not sure yet.”
I think the fact that she wasn’t sure points to the answer: They need a geek with solid programming skills, organizational and management skills, current knowledge and business sense. This is VERY DIFFICULT to come by (and I’ve alluded to the reason why in past posts).
Anyway, when I have more time, I’m going to put my thoughts about what I think high-level techies want to see in a position.
I posted this issue to the Redmine group. It seems that Redmine doesn’t play nice with Oracle without some enhancements. This one relates to the nasty 30-character limit on Oracle table names. I’m anxious to see if they pick it up. I also added #7826.
Will we ever be past the point where management, in an almost predictable manner, rejects open source solutions on the basis of “lacking support?”
Just the other day a friend of mine spent a good portion of his day troubleshooting an open source package with, get this, THE DEVELOPER who wrote the code! I cannot recall ever having direct contact with a developer from Microsoft or Oracle. I’m not suggesting that open source is always perfect, but concern about support should not be an issue. When it comes down to it, I can look directly at the source code to troubleshoot an issue (if I even need to). In practice, however, I have never had to take it to that level, since all the widely used open source libraries/applications/packages have a community of support that contributes to user forums. I can find all the support I need with Google.
Anyway, this argument is nothing new. When it comes to deciding between Microsoft Team Foundation Server or Subversion/Redmine, I think it is better to consider the needs of the company/project/team and not let concern about “support” be a factor. (And if concern about support is still a factor, perhaps it is the team that is lacking, not the software vendor. Good developers need to realize that they must also be able to understand and support their own tools.)
I don’t say any of this to be idealistic (some open source supporters can get downright religious about it), but my experiences in supporting and using open source software have almost always been positive.
I got a laugh out of a link I saw on the Jenkins CI page today: “Upgrading from Hudson?” Of course, I’ll be using Jenkins going forward, as it appears that most people in the community are already heading this direction (because Jenkins IS the real Hudson). Comparing the Jenkins CI change log to the Hudson CI change log, already we see that there is more activity on Jenkins.
I read somewhere, and I can’t remember where (so I can’t give a link), that Hudson CI is (was) the CI tool of choice for the Java community, with over 80% of Java projects using it. That’s a pretty large following. It will be interesting to see how this plays out, but my money is on Jenkins.
Of course, when creating software solutions in a corporate environment, most of us don’t intend to update our build server frequently at all. It’s simply not feasible to do so.