Here is another interesting blog I visited...
How did we miss THAT?
Filed under: Thinking Like a Tester, Ruminations — Elisabeth Hendrickson @ 11:12 am
“Oh goodness. How did I miss THAT bug?”
Over the years, I’ve asked myself that question numerous times.
I asked that question when another tester found a blazingly obvious, critical bug that I completely missed. (The answer: I spent too much time tinkering with an ineffective automated script I’d written, and too little time observing the actual behavior of the system. That’s the project where I learned a lot about how NOT to do test automation.)
I asked myself that question, repeatedly, when I participated on a project some years ago now where we shipped software that crashed left and right. (I’m still sorting out the answers to that last one. Catastrophic failures are almost never the result of a single, simple error. And this particular catastrophic failure represented failures at all levels in an organization that had, um, issues. But I digress.)
I asked the question again when I learned that a web site I tested had back-button problems. After all, I was sure I’d tested for that. And I had. But I hadn’t re-tested for it after a particular set of code changes that changed some operations from HTTP GETs to HTTP POSTs. Oops.
And I asked myself that question more recently when I learned that a system I worked on earlier this year failed to save a change, and also failed to report an error, when it encountered data misformatted in a particular way in one specific field. Badly formatted data is one of my specialties, and I couldn’t believe I forgot to test the particular case that resulted in the problem. But it turned out that I did, indeed, fail to test what would happen if you entered “www.testobsessed.com” into a URL field instead of “http://www.testobsessed.com”. In hindsight, it’s an obvious test. Another lesson learned.
I reflected on those missed bugs when a colleague, Sandeep, a test manager, recently wrote to say that he’s been asking himself “how did my testing team miss that?”
He decided to seek out patterns of testing problems by categorizing escaped bugs according to the hole(s) in testing that allowed the bug to slip through. The idea is to improve the test effort by figuring out the common causes behind escaped bugs.
My initial reaction was, “that makes sense.” If you can identify the top 20% of testing holes that let 80% of the bugs through, you can make some serious improvements to the test effort.
And my next reaction was, “but be careful.” Sandeep’s intent is good: use lessons learned from escaped bugs to improve testing. However, asking “How did we miss that?” is perilously close to heading down the slippery slope to “How did JoeBob miss that?” to “It’s JoeBob’s fault.” Having talked to Sandeep, I know he’s not trying to play “pin the blame on the tester.”
So I suggested a small reframe.
Instead of categorizing escaped bugs by asking the question, “How did testing miss that?”, categorize them by asking the question, “How can we improve the probability that testing will find bugs like that in the future?”
It’s a subtle difference.
But the result of reframing the question is that instead of identifying categories as noun phrases like “insufficient test data,” we end up with imperative statements like, “add test data.” Those two categories may look almost identical, but only one is actionable. I can “add test data.” The statement prompts me to do something different next time. But “insufficient test data” only gives me something to regret. And regret won’t help me ship better software.
So how can you categorize escaped bugs to improve the test effort without falling into the blaming trap? Try an Affinity Exercise with the question, “What could we do differently next time to increase the probability that if we have another bug like this we’ll catch it in test?”
To prepare:
Choose a team to participate in the activity. Affinity exercises can work with any number of people, but for this particular activity, I find a smaller group - say 3 to 5 people - works best. It’s a good idea to include people with diverse roles and skill sets.
Set up a meeting time and place. Plan for the whole activity to take 2 hours. And arrange to meet in a place with plenty of table and/or wall space.
Gather (or shop for) office supplies. You’ll need:
Index cards or sticky notes. Bigger is better. I like 5×8 cards or the SuperSticky 5×8 Post Its.
Felt-tip markers. I like Sharpies because they make consistently dark, readable marks. (Beware: Sharpies are permanent. Do NOT confuse your Sharpies and your White Board markers in the conference rooms. Facilities people get tetchy about such mix-ups.)
Gather a list of escaped bugs you want to analyze. If you have a lot of escaped bugs, prioritize them and time box the exercise. (You probably won’t be able to analyze more than 50 in an hour, possibly less, so don’t print out a list of 500.)
In the Meeting:
Review each bug with the team, asking: “What could we do differently next time to increase the probability that if we have another bug like this we’ll catch it in test?”
Have participants write their suggestions on the cards/stickies, one idea per card, in the form of an actionable statement. The suggestions should complete the sentence, “In the next release/iteration/sprint, we can ______.” Tips:
Also ask the participants to make their suggestions as concrete and specific as possible. For example, instead of writing “add test data,” write “add titles with ampersands (&) to the test data.”
And ask the participants to stick to test-related actions, and avoid blaming individuals. “Revoke NancySue’s checkin privileges” is not an acceptable suggestion.
When you’ve reviewed all the bugs, or when an hour has passed, stop reviewing bugs. (If you still have lots of bugs to go and want to continue analyzing after an hour, stop anyway. Finish the rest of the exercise - the grouping. When you’ve worked through the whole process, if you still think more analysis would help, you can always do the exercise again.)
Gather all the cards/stickies, and lay them out on a large work surface: a table, the walls, or even the floor can all work well.
Sort through the cards/stickies as a team. The cards are now owned by the team, and everyone should take a hand in organizing them. Encourage participants to move cards that seem alike together so they are stacked together. Continue until the team agrees that it’s satisfied with the stacks of cards/stickies.
Ask the team to give meaningful names to the stacks of cards/stickies. This is the part of the activity where you will generate the more abstract categories like “add test data.”
The result of this exercise is a list of categories for improving the testing effort that we can then use to determine which kinds of improvements will have the biggest bang for the buck. And the best part is that the list emerged from the actual problems your software has had in the field rather than being some arbitrary list of theoretical “improvements” based on someone else’s unrelated experience.
But wait, you’re not done. Now that you’ve created a first draft list of categories, test it. (Did I mention I’m Test Obsessed?) Choose a different set of escaped bugs, and assign each to one or more categories from the list. Notice how easy or hard it seems to find a category for each bug. This will give you a lot of feedback about how well the category list will work in practice. You may find the team needs to spend some additional time iterating on the list.
Once you’re satisfied with your list of categories, you can run the numbers to see how many bugs are in each category. Then you can create a Pareto diagram of the results to see what 20% of the improvement opportunities on your list will result in an 80% improvement. Now you can truly leverage escaped bug information into concrete actions that will make the test effort more effective.
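For the tallying step, here is a minimal sketch in Python; the bug IDs and category names below are made up purely for illustration, and it simply counts bugs per category and prints the cumulative percentages that a Pareto chart would visualize.

from collections import Counter

# Hypothetical input: each escaped bug paired with the improvement category
# the team assigned to it during the affinity exercise.
categorized_bugs = [
    ("BUG-101", "add test data"),
    ("BUG-102", "add test data"),
    ("BUG-103", "re-test after GET/POST changes"),
    ("BUG-104", "add test data"),
    ("BUG-105", "observe actual system behavior"),
]

counts = Counter(category for _, category in categorized_bugs)
total = sum(counts.values())

# List categories from most to least frequent with a running percentage;
# the categories that push the cumulative share past roughly 80% are the
# ones worth investing in first.
cumulative = 0
for category, count in counts.most_common():
    cumulative += count
    print(f"{category:40s} {count:3d} bugs  {100 * cumulative / total:5.1f}% cumulative")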
Over time, as you try to use that original list to categorize reports of new bugs in the field, you will probably find that the list becomes less and less relevant. I hope so, anyway. It indicates that the improvement efforts are working, that the team has improved the test effort.
That’s when you know it’s time to do the process all over again to classify the next generation of escaped bugs.
Monday, August 20, 2007
Ten Worst Software Bugs
Before going into the worst bugs, do you know why computer program errors are called “bugs” and why developers like me “debug” programs? It all goes back to September 1945. Here is the story:
Moth found trapped between points at Relay # 70, Panel F, of the Mark II Aiken Relay Calculator while it was being tested at Harvard University, 9 September 1945. The operators affixed the moth to the computer log, with the entry: “First actual case of bug being found”. They put out the word that they had “debugged” the machine, thus introducing the term “debugging a computer program”.
A photograph of the log page, with the moth still taped by the entry, accompanied the original article.
Now on to the worst moths… Oops, I mean bugs. Wired News published an article about history’s worst software bugs. Here is a summary of the 10 worst software bugs in chronological order:
1962 — Mariner I space probe. A bug in the flight software for the Mariner 1 causes the rocket to divert from its intended path on launch.
1982 — Soviet gas pipeline. Operatives working for the CIA allegedly plant a bug in a Canadian computer system purchased to control the trans-Siberian gas pipeline. The resulting event is reportedly the largest non-nuclear explosion in the planet’s history.
1985-1987 — Therac-25 medical accelerator. A radiation therapy device malfunctions and delivers lethal radiation doses at several medical facilities.
1988-1996 — Kerberos Random Number Generator. The authors of the Kerberos security system neglect to properly “seed” the program’s random number generator with a truly random seed.
1990 — AT&T Network Outage. A bug in a new release of the software that controls AT&T’s #4ESS long distance switches causes these mammoth computers to crash.
1993 — Intel Pentium floating point divide. A silicon error causes Intel’s highly promoted Pentium chip to make mistakes when dividing floating-point numbers that occur within a specific range.
1995-1996 — The Ping of Death. A lack of sanity checks and error handling in the IP fragmentation reassembly code makes it possible to crash a wide variety of operating systems by sending a malformed “ping” packet from anywhere on the internet.
1996 — Ariane 5 Flight 501. Working code for the Ariane 4 rocket is reused in the Ariane 5, but the Ariane 5’s faster engines trigger a bug in an arithmetic routine inside the rocket’s flight computer.
2000 — National Cancer Institute, Panama City. In a series of accidents, therapy planning software created by Multidata Systems International, a U.S. firm, miscalculates the proper dosage of radiation for patients undergoing radiation therapy.
I noticed that “Y2K” did not make the list. I suppose it either wasn’t among the worst, or it wasn’t really a bug. Or was it?
---Thanks to awards.net
Friday, August 17, 2007
A wonderful learning experience on testing
A true story about learning about software testing… A long while ago I had a boss named Bob. At the time I was fairly new to software testing. Bob had knowledge on many topics. He had an approachable way about him that made working with him and for him one of the most positive experiences of my life. I realize now the primary reasons for this: I love to learn and enjoy the role of the student. Bob was a great teacher by nature. Together, for a long stretch, we made a great pair.
I used to ask Bob time and time again, what should I learn now? He would sit back at his desk and think about this. Then he would take a book from his bookshelf. Sometimes he would suggest reading a whole book and sometimes just a section. The concept of not reading a book front to back like a novel gave me permission to take what I wanted and leave the rest. He’d answer in ways like this: learn a little more SQL, learn more about data types, or go read more about user interface standards. Sometimes depth, sometimes breadth. But here’s one point: he knew me. He knew what I knew, he knew pretty well what I didn’t know, and he knew clearly what my work objectives and challenges were. Sometimes he would ask me, what are your current frustrations or stopping points? That frame of reference made asking Bob a good question. This is an important point: sometimes people email me this same question, so please consider that I’m going to be limited in how I answer it because I don’t know you, I don’t know your background, and I don’t know what you might be trying to accomplish. But you do.
I remember one day, after working with him for years, I asked the same age-old question. Bob, what should I learn? He looked up at his bookshelf, turned to me, and said: it may be time for you to teach. I was actually sad that day; I wanted to remain the student. But with a deep sigh, I accepted that it might be time to pass on some knowledge and that it was also time for me to continue on my own. I had to resolve this continuing question for myself.
I think I learned a bit or more about learning from Bob. I look for my educational gaps. I know where they are. I have a mix of topics that interest me, topics I feel or know I need to learn more about, and sometimes topics I need to learn to resolve new technical challenges at work. Sometimes I catch up on pure software testing materials from the work of other people in the field.
Bob hasn’t been my boss for a decade or more now. But I was fortunate to have had the time with him. Years later, I find myself hiring testers. I share every book I own. I highlight sections, I point out websites. I suppose the cycle repeats. It feels good to help other people. Though a large part of me, I suspect, remains a grasshopper.
-Thanks Karen N. Johnson for sharing the great time with your teacher.
Friday, August 10, 2007
Intelligent search engine?
Today I tested the basic search functions of the project I'm working on, trying the various search options available. The application performs a default search when nothing is specified (a default specification exists). I removed all of the default search input and performed the search without any input, but it returned the same default results. I actually expected an error message prompting the end user to give some input (intelligent programming… :-), but there was none. (The spec explicitly says an empty search should return everything in the DB.)
Then I tried an empty-search test on some famous search engines: open the search engine and perform the search without entering any search key. Aha, they all just return to their original page. This happened with most of the search engines I tried, like Google, Yahoo, AltaVista, etc.
What I expect from a search engine is one of these:
The search engine could show a message (or error) telling the user to enter some input, or at least lead them to the web search help page.
The search engine could provide some search results for the empty input (preferably for white space or a blank query).
I really don’t know what is right in this case. Anyone want to comment on this… please shoot.
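As a rough illustration of the behavior I'd prefer, here is a minimal sketch in Python against a hypothetical search() function (not the actual application or any real search engine): a blank or whitespace-only query is rejected with a clear message instead of silently falling back to the default results.

def search(query, records):
    """Return records matching the query; treat blank input explicitly."""
    if query is None or not query.strip():
        # Deliberate decision: tell the user to supply input rather than
        # silently running the default "return everything" search.
        raise ValueError("Please enter a search term.")
    return [record for record in records if query.lower() in record.lower()]

def test_blank_queries_are_rejected():
    records = ["apple pie", "banana bread", "cherry tart"]
    for blank in ["", "   ", None]:
        try:
            search(blank, records)
        except ValueError:
            pass  # expected: the user is asked for input
        else:
            raise AssertionError(f"blank query {blank!r} was silently accepted")

test_blank_queries_are_rejected()

Whether rejecting or defaulting is “right” depends on the spec; the point of the sketch is only that the blank-input case deserves an explicit, tested decision.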
The Web Testing Checklist
The Checklist
The links in the checklist lead to a discussion of each issue.
Validation
Validate the HTML
Validate the CSS
Check for broken links
Flexibility
Try varying window sizes
Try varying font sizes
Speed
Access the site via a modem
Check image size specifications
Accessibility
Test accessibility
View in text browser
Browser independence
Try different browsers
Check printed pages
Switch Javascript off
Switch plug-ins off
Switch images off
Other checks
Check non-reliance on mailto
Check no orphan pages
Check sensible page titles
Want full details? Visit http://www.xs4all.nl/~sbpoley/webmatters/checklist.html
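As one concrete illustration of the “Check for broken links” item above, here is a minimal sketch in Python. It assumes the third-party requests and beautifulsoup4 packages are installed and that the starting page is reachable; a production link checker would handle many more cases (servers that reject HEAD, rate limits, robots.txt), so treat it as a starting point only.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def find_broken_links(page_url):
    """Return (url, status) pairs for links on page_url that do not respond cleanly."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])
        if not link.startswith("http"):
            continue  # skip mailto:, javascript:, and other non-HTTP links
        try:
            # HEAD keeps traffic low; some servers reject HEAD requests,
            # which would show up here as a false positive.
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            broken.append((link, "no response"))
            continue
        if status >= 400:
            broken.append((link, status))
    return broken

if __name__ == "__main__":
    for url, status in find_broken_links("http://www.example.com/"):
        print(status, url)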
Friday, August 3, 2007
THINK OUT OF THE BOX AND FIND MORE BUGS
Hi everyone,
For a good software tester, out-of-the-box thinking is essential. It helps you find good bugs in the application and make the application foolproof.
Let me give an example. If a programmer is coding for a building lift, he won't code for every floor separately. He may code for a single floor (say, the first floor), then slightly change the code and reuse it for the other floors.
Here is where out-of-the-box thinking comes in. If you test the application on every floor you may find some bugs, but a smart tester will specifically test the ground (lowest) floor and the top (highest) floor.
The ground floor doesn't need an option for going down, and the top floor doesn't need an option for going up.
This is how out-of-the-box thinking works…
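To make those boundary cases concrete, here is a minimal sketch in Python using a hypothetical Lift class (not taken from any real project): the checks focus on the ground and top floors, which behave differently from every floor in between.

class Lift:
    """Hypothetical lift controller: floors run from lowest to highest."""
    def __init__(self, lowest=0, highest=10):
        self.lowest = lowest
        self.highest = highest

    def can_go_up(self, floor):
        return floor < self.highest

    def can_go_down(self, floor):
        return floor > self.lowest

def test_boundary_floors():
    lift = Lift(lowest=0, highest=10)
    # Ground floor: no "down" option, but "up" must work.
    assert not lift.can_go_down(0) and lift.can_go_up(0)
    # Top floor: no "up" option, but "down" must work.
    assert not lift.can_go_up(10) and lift.can_go_down(10)
    # Any middle floor offers both directions.
    assert lift.can_go_up(5) and lift.can_go_down(5)

test_boundary_floors()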
Regards,
-Jerry-