Why Software Is So Bad
Charles C. Mann wrote this great Technology Review piece describing the dramatically diminishing quality of software, its dangers, and some work towards solutions. This piece has been passed around my department; I hope it will have some positive effects.
Mann uses the word "incredibly" to qualify several items. Incredible or not, they're all too familiar to software developers:
Incredibly ... the design for large software projects is sometimes "nothing but a couple [of] bubbles on the back of an envelope."
Incredibly ... software projects often devote 80 percent of their budgets to repairing flaws they themselves produced--a figure that does not include the even more costly process of furnishing product support and developing patches for problems found after release.
Incredibly ... the purpose of new software is often not clearly spelled out before programmers begin writing it. Indeed, it often changes in midstream as marketers come up with wish lists, with predictably bad results.
Also described is the way contradictory user demands place unreasonable constraints on coders. This has a very painful resonance for me: I am regularly asked to add functionality (and with it, significant new complexity) to a particular body of code that I have written, while at the same time being asked to dramatically improve its performance.
Are these items really so shocking to those who use software but do not write it? They continue to disturb me, but they have long since ceased to surprise me.
The cost of developing software is very high--daily billing rates for consultants commonly exceed US$2000, and it's not unusual for a consulting team to have a dozen or more members, so a single day of such a team's time can run US$24,000 or more--and the cost only climbs when additional time is taken for quality assurance testing and bug fixing. This leads to another shocker, which may seem even more "incredible": companies regularly fail to staff project teams (and even entire IT departments) with professional software quality assurance personnel, opting for short-term savings ... and tremendous risks.
In the absence of professional testers, coders are often asked to be fully responsible for testing their code. Sounds like a good idea, right? Not really. While coders must test their code for expected behavior, they carry with them a set of assumptions about how the users will behave, and as several of Mann's examples demonstrate, these assumptions tend not to be so good.
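To make the point concrete, here is a minimal sketch in Python (the function name parse_age and the sample inputs are hypothetical, not drawn from Mann's article): the developer's own test exercises only the input they expect, so it passes even though inputs a real user might plausibly supply either blow up or slip through unchecked.

```python
# A hypothetical sketch of the assumption gap: the developer's own test covers
# only the input they expect, so it passes while realistic user input fails.
# The name parse_age and the sample values are invented for illustration.

def parse_age(text):
    """Convert a form field to an integer age (assumes clean, numeric input)."""
    return int(text)

def test_parse_age_expected_input():
    # The developer's assumption: users type plain digits.
    assert parse_age("42") == 42

if __name__ == "__main__":
    test_parse_age_expected_input()
    print("developer's test passes")

    # Inputs a dedicated tester (or a real user) might try instead:
    for value in ["", "forty-two", "42.5", "-3", None]:
        try:
            result = parse_age(value)
            print(f"accepted {value!r} -> {result}")  # e.g. -3 sails through
        except (TypeError, ValueError) as exc:
            print(f"unhandled input {value!r}: {exc}")
```

An independent tester's job is precisely to bring that second list of inputs--the ones the developer's assumptions never anticipated.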
Mann and others call for new standards and practices among software engineers and their instructors, and these are indeed needed. But it will take many years to refine them and start seeing benefits, and even then, getting closer to "bug-free" software is likely to take more time and money than we're used to.
And as Jim McCarthy points out, the usefulness of software is high enough that users put up with its bugs. Would you be willing to pay twice as much for increased stability? Would you wait twice as long?