A friend was telling me the other day that there is a "pyramid" describing the cost of fixing a problem at each stage of the software development life cycle. Where could I find this?
He was referring to how the cost of fixing a problem grows the later it is found.
For example,
To fix a problem at the requirements stage costs 1.
To fix a problem at the development stage costs 10.
To fix a problem at the testing stage costs 100.
To fix a problem at the production stage costs 1000.
(These numbers are just examples)
I would be interested in seeing more about this if anyone has references.
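To make the example concrete, the hypothetical multipliers above could be tabulated like this (the figures are purely illustrative, as noted, not taken from any study):

```python
# Hypothetical relative cost of fixing the same defect, keyed by the
# stage at which it is found. The 1/10/100/1000 factors are the
# illustrative numbers from the example above, not measured data.
stage_cost = {
    "requirements": 1,
    "development": 10,
    "testing": 100,
    "production": 1000,
}

for stage, cost in stage_cost.items():
    print(f"{stage:>12}: {cost:>4}x")
```

Under these numbers, a defect that slips from requirements all the way to production is a thousand times more expensive to fix, which is the shape of the claim the question is asking about.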
This is a well-known result in empirical software engineering that has been replicated and verified over and over again in countless studies. That kind of replication is very rare in software engineering, unfortunately: most software engineering 'results' are basically hearsay, anecdotes, guesses, opinions, wishful thinking or just plain lies. In fact, most software engineering probably doesn't deserve the 'engineering' brand.
Unfortunately, despite being one of the most solid, most scientifically and statistically sound, most heavily researched, most widely verified, most often replicated results of software engineering, it is also wrong.
The problem is that all of those studies do not control their variables properly. If you want to measure the effect of a variable, you have to be very careful to change only that one variable and that the other variables don't change at all. Not 'change a few variables', not 'minimize changes to other variables'. 'Only one' and the others 'not at all'.
Or, in the brilliant Zed Shaw's words: if you want to measure shit, don't measure other shit.
In this particular case, they did not just measure in which phase (requirements, analysis, architecture, design, implementation, testing, maintenance) the bug was found, they also measured how long it stayed in the system. And it turns out that the phase is pretty much irrelevant, all that matters is the time. It's important that bugs be found fast, not in which phase.
This has some interesting ramifications: if it is important to find bugs fast, then why postpone the phase that is most likely to find bugs, namely testing? Why not put testing at the beginning?
The problem with the 'traditional' interpretation is that it leads to inefficient decisions. Because you assume you need to find all bugs during the requirements phase, you drag out the requirements phase unnecessarily long: you can't run requirements (or architectures, or designs), so finding a bug in something that you cannot even execute is freaking hard! Basically, while fixing bugs in the requirements phase is cheap, finding them is expensive.
If, however, you realize that it's not about finding the bugs in the earliest possible phase, but rather about finding the bugs at the earliest possible time, then you can make adjustments to your process, so that you move the phase in which finding bugs is cheapest (testing) to the point in time where fixing them is cheapest (the very beginning).
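The idea of moving the cheapest bug-finding activity to the earliest point in time is essentially test-first development. A minimal sketch (my illustration, not from the answer; the discount rule and function names are made up for the example):

```python
# Test-first sketch: the test encoding the requirement is written before
# the code it exercises, so a misunderstanding of the requirement
# "orders over 100 units get 10% off" surfaces the first time the code
# runs, not months later in a separate testing phase.

def test_bulk_discount():
    assert price(quantity=101, unit_price=2.0) == 181.8   # 10% off applies
    assert price(quantity=100, unit_price=2.0) == 200.0   # no discount at exactly 100

def price(quantity, unit_price):
    # Implementation written after (and driven by) the test above.
    total = quantity * unit_price
    if quantity > 100:
        total *= 0.9
    return round(total, 2)

test_bulk_discount()  # a failing assert would surface the bug immediately
```

The point is not the tooling but the ordering: the executable check exists while the requirement is still fresh, which is exactly the "find it at the earliest possible time" adjustment the answer describes.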
Note: I am well aware of the irony of ending a rant about not properly applying statistics with a completely unsubstantiated claim. Unfortunately, I lost the link where I read this. Glenn Vanderburg also mentioned this in his 'Real Software Engineering' talk at the Lone Star Ruby Conference 2010, but AFAICR, he didn't cite any sources, either.
If anybody knows any sources, please let me know or edit my answer, or even just steal my answer. (If you can find a source, you deserve all the rep!)
Unfortunately the situation is as Jörg depicts, and in fact somewhat worse: most of the references cited in this document strike me as bogus. In each case the paper cited either is not original research, or does not contain words supporting the claim being made, or, in the case of the 1998 paper about Hughes (p. 54), contains measurements that in fact contradict what is implied by the curve on p. 42 of the presentation: the curve has a different shape, and shows only a modest 5x to 10x cost-to-fix factor between the requirements phase and the functional test phase (with the cost actually decreasing in system test and maintenance).
Morendil
Never heard of it being called a pyramid before, and that shape seems a bit upside-down to me! Still, the central thesis is widely considered to be correct. Just think about it: the cost of fixing a bug in the alpha stage is often trivial. By the beta stage it might take a bit more debugging and some user reports. After shipping, it could be very expensive: a whole new version has to be created, you have to worry about breaking in-production code and data, and there may also be lost sales due to the bug.
Try this article. It uses the 'cost pyramid' argument (without naming it), among others.
Raúl C.
Earlier this year a man lost a $57 million jackpot when a casino alleged a 'software glitch' on the slot machine. Well, that's nothing compared to the backlog of $9 billion in unprocessed payments that happened in Japan in March.
Here are the top five worst, most expensive computer glitches of 2011, according to SQS, a UK company specializing in software quality assurance:
1. Financial services firm AXA Rosenberg lost $217 million of its investors' money because of a software glitch in its investment model. The company hid the bug from its clients, so it had to pay back that amount, plus a $25 million fine, to the US Securities and Exchange Commission. Oh you cheeky 1% bastards you.
2. Car manufacturer Honda had to recall 2.5 million cars because of a bug that allowed vehicles to shift out of park or simply stall out. That's a lot of dope for some bad lines of code.
3. Japanese bank Mizuho Financial Group's clients experienced a software glitch that collapsed its ATM network and internet banking systems. The result was $1.5 billion in salary payment delays and $9 billion in unprocessed payments. Nine billion. With B.
4. A $2.7 billion US Army cloud computing network failed miserably, leaving troops unable to perform simple operations like sharing data with other users, which, incidentally, is one of the network's main intended functions. You have to wonder how much time and money was ultimately lost—not to mention the number of lives endangered. Not surprisingly, nobody will say; maybe their computers are down.
5. Here's a good one—for those who were able to enjoy the glitch. A Commonwealth Bank ATM network bug caused the machines to dispense large amounts of money to random people. Police actually arrested two people who took the mistakenly spit-out money, saying that it was a crime. No word about the hundreds who took the money and ran—and got away.