Search For Knowledge



Friday, December 21, 2007

How to write a good Defect Report

Here are the keys (a sample report follows the list):

  1. Be very specific when describing the bug; leave no room for interpretation. The more precise the description, the less ambiguity, and the less clarification will be needed later on.
  2. Call windows by their correct names (the name displayed in the title bar); this eliminates ambiguity.
  3. Don’t be repetitive. Don’t repeat yourself. Also, don’t say things twice or three times.
  4. Try to limit the number of steps needed to recreate the problem. A bug report with seven or more steps is usually hard to read, and it is usually possible to shorten the list.
  5. Start the description where the bug begins, not before. For example, you don't have to describe how to load and launch the application if the application crashes on exit.
  6. Proofread the bug report; this is very important. Run it through a spell checker before submitting it.
  7. Make sure that all step numbers are sequenced. (No missing step numbers and no duplicates.)
  8. Please make sure that you use sentences. This is a sentence. This not sentence.
  9. Don’t use a condescending or negative tone in your bug reports. Don’t say things like "It's still broken", or “It is completely wrong”.
  10. Don’t use vague terms like “It doesn’t work” or “not working properly”.
  11. If there is an error message involved, be sure to include the exact wording of the text in the bug report. If there is a GPF (General Protection Fault) be sure to include the name of the module and address of the crash.
  12. Once the text of the report is entered, you don’t know whose eyes will see it. You might think that it will go to your manager and the developer and that’s it, but it could show up in other documents you are not aware of: reports to senior management or clients, the company intranet, or future test scripts and test plans. The point is that the bug report is your work product, and you should take pride in your work.
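For illustration, here is a minimal sample report written to follow these guidelines (the window and field names are hypothetical):

Summary: "User Settings" window saves a profile with an empty Profile Name and shows no error
Steps to reproduce:
  1. Open the "User Settings" window.
  2. Clear the "Profile Name" field and click "Save".
Actual result: The window closes and a profile with a blank name is created.
Expected result: An error message such as "Profile Name is required." is displayed and the save is rejected.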

Hope this helps you write a proper bug report.

Thanks to Pavan for his help on this.

Cheers,

-jerry-

DETAILS ON DEFECT TRACKING

Here are a few fields to include in the Defect tab.
(M) - Mandatory
(D) - Default Appearance


  1. Use Case Package (M)
  2. Test Case (M)
  3. Detected By (M) (D)
  4. Detected On Date (M) (D)
  5. Defect Type
  6. Status (M) (D)
  7. Detected in Version (M)
  8. Build Number (M)
  9. Assigned To (M) (D)
  10. Business Impact / Severity (M)
  11. Priority
  12. Reproducible (M) (D) 'Y'
  13. Testing Impact (M)
  14. Module / Page
  15. Environment
  16. Defect Summary (M)
  17. Defect Description (M)
  18. Comments
  19. Attachments

Hope this will help a little.

Cheers,

-jerry-


Tuesday, December 18, 2007

AST's Official Magazine December Edition

Here we GO,

AST (the Association for Software Testing) has released the December 2007 issue of Sapient Testing.
Another good piece of work from them. Keep it up, friends.

http://www.sapienttesting.com/files/December.Final.pdf

Cheers,
-jerry-

Instant Performance Test

Hi All,

Would you like to run an instant performance test on your site at no cost? Here we go:
http://www.gomez.com/info_center/instant_test.php
Try your site with this and get its instant performance details.

Thanks to Sapient Testing and Scott Barber for the wonderful work on the December edition.

Cheers,
-jerry-

Wednesday, December 12, 2007

The 10 Commandments of Load Testing

Here are the 10 Commandments of Load Testing.

  1. Thou shalt know how thy test tool works.
  2. Thou shalt gather realistic usage data.
  3. Thou shalt have testable requirements.
  4. Thou shalt write a test plan.
  5. Thou shalt test for the worst case.
  6. Thou shalt monitor thy test environment infrastructure.
  7. Thou shalt enforce change control on thy environment.
  8. Thou shalt use a defect tracking tool.
  9. Thou shalt rule out thine own errors before raising a defect.
  10. Thou shalt pass on thy knowledge.

Visit http://www.myloadtest.com/ten-commandments-of-load-testing/ for full details.

Thanks to Stuart Moncrieff for his wonderful blog and his great service to our community.

Don't forget to visit http://www.mypentest.com/ - his security testing blog -

and http://www.myloadtest.com/ - his load testing blog.

Cheers,

-jerry-

Automation Testing Framework for Distributed Systems

BizUnit
The adoption of an automated testing strategy is fundamental to reducing the risk associated with software development projects; it is key to ensuring that you deliver high-quality software. Often, the overhead associated with developing automated tests is seen as excessive and a reason not to adopt automated testing.

BizUnit is a framework, and as such it has no dependency on either NUnit or VS Unit Testing; either of these makes a great way to drive BizUnit test cases, though equally you could write custom code to do the same.


For further details, visit http://www.codeplex.com/bizunit
Thanks to kevinsmi for his great work.

Cheers,
-jerry-

Mother of All Checklists for Software Testing

Hi All,
Here is the Mother of All Checklists for software testing.
Hope this will be very helpful for most of our testing.
http://www.sqaforums.com/download.php?Number=437212
or
http://www.4shared.com/file/31856547/84194eaf/Mother_of_All_CheckListdoc.html

Thanks to Shreya for his wonderful work.

Cheers,
-jerry-

Wednesday, December 5, 2007

Project Testing Folder Structure


Here is a good Testing Folder Structure. (just a sample)
Cheers,
-Jerry-

Entry and exit criteria discussion in a forum - of course, me too...

More_Enjoy--
I am testing a web application. Can anyone tell me what the entry and exit criteria for it would be?

Joe--
Entry Criteria: A set of decision-making guidelines used to determine whether a system under test is ready to move into, or enter, a particular phase of testing. Entry criteria tend to become more rigorous as the test phases progress. [R. Black]
Exit Criteria: A set of decision-making guidelines used to determine whether a system under test is ready to exit a particular phase of testing. When exit criteria are met, either the system under test moves on to the next test phase or the test project is considered complete. Exit criteria tend to become more rigorous as the test phases progress. [R. Black]

Jerryrajan (Me)--
Here comes the biggest problem of software testing. In my view, specific definitions will not work in these cases, because most people understand testing terms in their own way, and the important thing is whether or not they are on the right path. In my view, entry criteria are a collection of conditions/objects that are essential to start testing; if you start your testing without them, it will be a failure. Exit criteria are likewise a collection of conditions/objects that decide whether you can stop the testing or not.

Richard--
Which part of the definitions that Joe posted does not fulfill what you are saying? As I read them, they match up.

Jerryrajan (Me)--
That is my own view, Richard, not aimed specifically at Joe. In simple words, "a set of decision-making guidelines" is not enough if someone has all the guidelines but no resources to follow them. If you are going to prepare a tour plan for your family using your car, what are the entry criteria? You, the car, the family, and then the set of guidelines... Correct me if I'm wrong.

Richard--
The guidelines will be specific per project, so the guidelines for deciding whether you are going on your road trip would be: Have the design documents been completed? (Tour plan written.) Are the resources available? (Car, people, luggage, holiday booked.) If you can say yes, then you can decide to go live (go on the trip). So the entry and exit criteria should include everything you need to go to the next stage of development, not just some of it.

Jerryrajan (Me)--
Agreed.

Wednesday, November 28, 2007

Portable code editors

As a test analyst, one must be aware of code behaviour, and working with code occasionally calls for a portable code editor. Here are two good ones.
Notepad++ supports a lot of languages.
VBSEdit supports VBScript only (specifically useful for QTP scripting).

Both are unregistered versions (freeware anyway) and also portable,
so just extract them and run them. (No installation required.)
Please purchase them if you find them useful.

http://www.4shared.com/file/30592543/d30fe88a/Notepad_Portable.html
http://www.4shared.com/file/30592566/91537e87/vbsedit_Portable.html

Cheers,
-Jerry-

Real-time HTML Editor

Here is a Real-time HTML Editor.
If you are working with a web application and would like to view your developer's code in another HTML editor to locate defects,
just copy the page source and paste it into the Real-time HTML Editor. It will render the page from your markup.

http://htmledit.squarefree.com/

Cheers
-jerry-

Tuesday, November 27, 2007

Package Execution Status Sheet

Here is a Package Execution Status Sheet for use cases or test cases.
It supports different cycles and different modules.
The sheet is fully automated.
Type your project name in the "Main Sheet" and it will be updated in the sub-sheets.
Update the data in the sub-cycle sheets and the status will be calculated in the Main Sheet (see the formula examples below the link).

http://www.4shared.com/file/30577726/7e5521be/Package_Execution_Status_123.html
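For anyone building a similar sheet, these are the kinds of worksheet formulas it can rely on (the sheet and column names here are hypothetical placeholders, not necessarily those in the download):

='Main Sheet'!B1 - placed in a sub-cycle sheet, echoes the project name typed in the Main Sheet
=COUNTIF(Cycle1!D:D, "Pass") - placed in the Main Sheet, counts the passed cases in the Cycle1 sheet
=COUNTIF(Cycle1!D:D, "Fail") - counts the failed cases the same way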

Post your comments to improve the sheet.

Cheers,
-Jerry-

Here are the comments and discussion on my previous question in a famous QA forum

Jerry-
At last, is a software test analyst's effectiveness just based on his number of defects? This may look funny, but it is the truth: in a project, a software tester's effectiveness is ultimately measured by his number of defects. This happened in a project near mine. The team has to release a resource according to the plan, and the team manager is racking his brains to make a decision. At last he has two names from which to pick one. Member A is very good in overall application knowledge and in designing test cases. Member B is very good at finding defects (even outside TC execution), with a lot of sense and logical aptitude. The TC design stage is almost over and they are in the execution stage. What should the team manager do? Whom will he release? Please post your comments.

Muthu-
Hi Rajan, I would recommend retaining Member B, since you're about to start the execution cycle. Test case updating would be minimal during this stage. As a test manager, you need a resource with good logical and analytical skills to find more valid defects.

Richard-
Member B does sound like the logical choice; however, the test manager must ensure that they do not lose any domain knowledge that cannot be gained elsewhere. So they need to check that any information Member A has can also be gained from other team members, or get them to document it.

Jerry-
Thanks, Muthu and Richard; both views sound good. If he releases Member A: 1. The team manager will lose a domain-knowledge resource. 2. No one executes test cases as accurately as the person who designed them (I hope we have all experienced executing someone else's cases). 3. In India we use the term "Karuveapili" (an ingredient we add to food for flavour and throw away once cooked); Member A must not be treated that way. The manager has to give that member some acknowledgement and retain him in the project. 4. I personally know that even when Member B finds a good, critical, logical defect outside the use cases, he consults Member A to clarify it. (I'm very confused.)

Richard-
Sounds like there is not going to be a perfect answer; the project is going to lose out some way, and somebody is going to be upset. (Assuming they are not leaving the company, I would do the following.) I would explain to Member A that the manager has to lose one person. Make it clear that they have done tremendous work on the project, and the project would not be in such a good state going into the testing phase without them; however, he is letting them go because they are much more valued for their test case planning and use case reviewing (etc.) and therefore will be of more value to another project in that capacity. To Member B I would explain that they have been kept on due to their excellent eye for detail and for finding really good incidents. However, as Member A is moving on to other projects, this is a really good opportunity for them to grow in their role, take responsibility and ownership of the issues, and have the confidence to speak to developers/designers etc. to clarify issues. Hopefully this will mean Member A understands that they are not Karuveapili (love that phrase, thanks!) and Member B can grow in their role, and it will affect the project in a good way. Does that help?

Jerry-
Wow, Richard, a really wonderful decision. I'm feeling very happy reading this reply. (Small fish with big humanity and humane administration...!) Many thanks for your effort.

Joe-
I completely disagree. Defect count by itself is a poor measure of effectiveness. Consider: Tester 1 finds 100 defects. All of them are surface-level defects in code that has many, many defects present, and the existing critical defects have been completely missed. Tester 2 finds 10 defects. All of them are critical, and there are no other critical-level defects remaining in the code. I know which of the two I would choose.

Jerry-
Yes, Joe, you are right. In your situation, everyone would give first priority to Tester 2 and second priority to Tester 1. (Anyway, a defect is a defect, big or small.) But here the case is different: either you keep the one who is good at finding defects, or the one with strong domain knowledge and a talent for test design.


Hope you all enjoyed.
-Jerry

Thursday, November 22, 2007

At last a Software Test Analyst's effectiveness is just based on his number of Defects….?

This may look funny, but it is the truth. In a project, a software tester's effectiveness is ultimately measured by his number of defects.

This happened in a project near mine. The team has to release a resource according to the plan, and the team manager is racking his brains to make a decision. At last he has two names from which to pick one. Member A is very good in overall application knowledge and in designing test cases. Member B is very good at finding defects (even outside TC execution), with a lot of sense and logical aptitude. The TC design stage is almost over and they are in the execution stage. What should the team manager do? Whom will he release?

Pls post your comments.

Wednesday, November 21, 2007

A good function to get your output in XLS

Here is a good VBScript function to write your application's output to an XLS file.


Function create_excel()

    Dim xl, St
    ' Start Excel and open the target workbook
    Set xl = CreateObject("Excel.Application")
    xl.Workbooks.Open "C:\Project-Main\Project-Sub\Output.xls"
    ' Get the "Global" worksheet and write a value into row 2, column 13
    Set St = xl.ActiveWorkbook.Worksheets("Global")
    St.Cells(2, 13) = "123"
    ' Save the workbook, then close it and quit Excel
    xl.ActiveWorkbook.Save
    xl.ActiveWorkbook.Close
    xl.Quit
    ' Release the object references
    Set St = Nothing
    Set xl = Nothing

End Function

This function opens Output.xls and writes a value into its "Global" worksheet. Adapt the path, sheet name, and cell references to export your own application's data.

-jerry (Thanks to RD)

Tuesday, November 20, 2007

Security Testing Publication

Here is a very good publication on security testing from the
Computer Security Division of the National Institute of Standards and Technology (NIST), USA.
It may be an old one, but it is very interesting and useful.

http://csrc.nist.gov/publications/nistpubs/800-42/NIST-SP800-42.pdf

Cheers,
-Jerry-

Thursday, November 15, 2007

SOA Testing

SOA (service-oriented architecture) is a very hot topic in the market now; SOA testing is even hotter than automation. Visit this site to update your knowledge of SOA.

The blog is all about SOA testing techniques used for testing IT assets that are part of a service-oriented architecture. As SOA begins to tie together the fabric of IT infrastructure, actively and aggressively testing web services has become crucial. Comprehensive functional, performance, interoperability and vulnerability testing form the pillars of SOA testing. Only by adopting a comprehensive testing stance can enterprises ensure that their SOA is robust, scalable, interoperable, and secure.

http://soa-testing.blogspot.com/
Thanks to Mamoon Yunus for his good work. Go on, friend.

25 Photographs Taken at the Exact Right Time


Wow, this is not any photographic gimmick - just true snaps caught with ultra-fast frame shots.


Tuesday, November 13, 2007

Another good Estimation article

Here is another good estimation article...
Thanks to Paul for this.

Why are our estimates always too low?

At last week's Test Management Forum, Susan Windsor introduced a lively session on estimation – from the top down. All good stuff. But during the discussion, I was reminded of a funny story (well I thought it was funny at the time).
Maybe twenty years ago (my memory isn’t as good as it used to be), I was working at a telecoms company as a development team leader. Around 7pm one evening, I was sitting opposite my old friend Hugh. The office was quiet; we were the only people still there. He was tidying up some documentation, and I was trying to get some stubborn bug fixed (I’m guessing here). Anyway, along came the IT director. He was going home, and he paused at our desks to say hello, how’s it going, etc.
Hugh gave him a brief review of progress and said in closing, “we go live a week on Friday – two weeks early”. Our IT director was pleased but then highly perplexed. His response was, “this project is seriously ahead of schedule”. Off he went scratching his head. As the lift doors closed, Hugh and I burst out laughing. This situation had never arisen before. What a problem to dump on him! How would he deal with this challenge? What could he possibly tell the business? It could be the end of his career! Delivering early? Unheard of!
It’s a true story, honestly. But it also reminded me that if estimation is an approximate process, our errors in estimation in the long run (over- or under-estimation, expressed as a percentage) should balance statistically around a mean value of zero, and that mean would represent the average actual time or cost it took our projects to deliver.
Statistically, if we are dealing with projects that are delayed (or advanced!) by unpredictable, unplanned events, we should be overestimating as much as we underestimate, shouldn’t we? But clearly this isn’t the case. Overestimating, and delivering early, is a situation so rare it’s almost unheard of. Why is this? Here's a stab at a few reasons why we consistently 'underestimate'.
First (and possibly foremost): we don't underestimate at all. Our estimates are reasonably accurate, but consistently we get squeezed to fit pre-defined timescales or budgets. We ask for six people for eight weeks, but we get four people for four weeks. How does this happen? If we've been honest in our estimates, surely we should negotiate a scope reduction if our bid for resources or time is rejected? Whether we descope a selection of tests or not, when the time comes to deliver, our testing is unfinished. Of course, go-live is a bumpy period - production is where the remaining bugs are encountered and fixed in a desperate phase of recovery. To achieve a reasonable level of stability takes as long as we predicted. We just delivered too early.
Secondly, we are forced to estimate optimistically. Breakthroughs, which are few and far between, are assumed to be certainties. Of course, the last project, which was so troublesome, was an anomaly, and it will always be better next time. Of course, this is nonsense. One definition of madness is to expect a different outcome from the same situation and inputs.
Thirdly, our estimates are irrelevant. Unless the project can deliver within some mysterious, predetermined time and cost constraints, it won't happen at all. Where the vested interests of individuals dominate, it could conceivably be better for a supplier to overcommit and live with a loss-making, troublesome post-go-live situation. In the same vein, the customer may actually decide to proceed with a no-hoper project because certain individuals' reputations, credibility and perhaps jobs depend on the go-live dates. Remarkable as it may seem, individuals within customer and supplier companies may actually collude to stage a doomed project that doesn't benefit the customer and loses the supplier money. Just call me cynical.
Assuming project teams aren't actually incompetent, it's reasonable to assume that project execution is never 'wrong' - execution just takes as long as it takes. There are only errors in estimation. Unfortunately, estimators are suppressed, overruled, pressured into aligning their activities with imposed budgets and timescales, and they appear to have been wrong.
Visit his site...

Monday, November 12, 2007

A good magazine for testing

Here is a good magazine for software testing.
http://www.stpmag.com/

Cheers,
-jerry

Wanna update yourself....

Ya, if you want to update yourself in testing (black box or white box), visit Microsoft's new Test Blog.
http://msdn2.microsoft.com/en-us/testing/default.aspx
You can find great articles here soon...

Equivalence class partitioning By Testy

Equivalence class partitioning (ECP) is a functional testing technique useful in either black box or white box test design. A technique is a systematic approach to help solve a complex problem. Techniques are not silver bullets, but they are a logical and analytical approach to problem solving that draws heavily upon the tester's cognitive abilities (having a basis in, or reducible to, empirical factual knowledge) as opposed to random guessing (or little men running about inside one's head triggering turbid thoughts). Contrary to popular misconceptions, the application of ECP is not a rote, brain-dead activity. The ECP technique requires in-depth knowledge of the data set (data type, encoding method, etc.), the programming language used in the implementation, the algorithm structure, the operating environment, protocols, and even the hardware platform, all of which may impact how the data for a particular parameter might be decomposed. The effectiveness of the technique lies solely in the tester's ability to adequately decompose the data set for a given parameter into subsets in which any element from a specific subset would produce the same result as any other element from that subset.
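As a small illustration of the idea (my own sketch, not from the article), consider a hypothetical age field that accepts whole numbers from 18 to 65. The data decomposes into four classes - below range, in range, above range, and non-numeric - and one representative value per class covers that class:

' Validation routine under test: accepts ages 18 to 65 (hypothetical)
Function IsValidAge(inputValue)
    If Not IsNumeric(inputValue) Then
        IsValidAge = False
    ElseIf CDbl(inputValue) >= 18 And CDbl(inputValue) <= 65 Then
        IsValidAge = True
    Else
        IsValidAge = False
    End If
End Function

' One representative value from each equivalence class:
WScript.Echo IsValidAge("17")   ' below range  -> False
WScript.Echo IsValidAge("40")   ' within range -> True
WScript.Echo IsValidAge("66")   ' above range  -> False
WScript.Echo IsValidAge("abc")  ' non-numeric  -> False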

To view the full article, visit http://blogs.msdn.com/imtesty/archive/2007/10/31/equivalence-class-partitioning-part-1.aspx

Thanks to Mr. Testy for his wonderful work.

Here are a few funny testing-related cartoons.

Thanks to Leo Rex for these pictures...

Tuesday, November 6, 2007

Cross-browser compatibility


A browser compatibility test is essential for every web application. The picture above shows the usage share of different browsers worldwide; from it, we can judge how much compatibility testing we must do on each browser.




For further detail about cross-browser compatibility testing, visit



-Good Day



Monday, November 5, 2007

Generate Data - the test data generator

Today I found a test data generator: http://www.generatedata.com/#generator
This really rocks. It may be simple, but it works fine. Forgive them if you find some page errors in between.

It is of course free, but you can donate something to them if you want.

Monday, October 29, 2007

QA Time Estimation based on Dev's Time Estimation

Actually, for a QA time estimate we have to work through and calculate a lot of things, as the previous post says. But all of those things will never be available in real life. If the project manager asks for a QA time estimate without any statistical data, just a development time estimate, then we can follow the general rule of thumb: two-thirds of the development estimate.

Remember, this will not be very accurate. But truthfully, nothing in estimation is accurate; even for satellite launching they may not have an accurate plan.

If dev estimated 66 days, two-thirds gives 44 days; add 4 days as a backup and you can suggest 48 days for QA, and hopefully it won't cross that. But still, this estimate is just a lollipop to show, not the real cake to eat...!
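A minimal VBScript sketch of that arithmetic (the numbers are the ones used above):

devDays = 66
qaDays = devDays * 2 / 3                   ' two-thirds rule of thumb: 44 days
bufferDays = 4                             ' backup days
WScript.Echo "Suggested QA estimate: " & (qaDays + bufferDays) & " days"   ' 48 days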

Software Testing Cycle Diagram


Here is a good software testing cycle diagram from research.ibm.com

Estimating Testing Projects

Hi all, today I found a good article on "Estimating Testing Projects".
It is really easy and good to understand...
Here it is for you all...

Estimating Testing Projects
Walkthrough on How-To

The online content I have found on developing sound estimates for testing projects is wanting in a lot of ways:


# Articles that start off promising and end up with “Software project estimation - seat of the pants approach”.
# Articles packed with a lot of know-how but absolutely no how-to.
# Articles that tell me how to keep doing what we are already doing.


Current Affairs

The current situation of software project estimation can best be described as CMM Level 1 heroics. The nearest we conventionally come to it is a WAG (Wild-Ass Guess). How this has become an acceptable practice in the software industry defies my comprehension!!?$!?!^@?!!! Sometimes people even go to the extent of categorizing this “exercise in futility” as a SWAG (Scientific Wild-Ass Guess).


Bottom line: doing a WAG or a SWAG (whatever you call it) is as good as doing no estimation at all.


What is an estimate?

An estimate serves as the master plan for a software project, covering all aspects like costing, staffing, and timing. Hence basing it on pure guesswork is a definite NO.


OK, history apart, let's get started……


Since this is a walkthrough on how to do an estimate, we will start from scratch and move to the end... Please note that a lot of the conventional things we do can remain as they are. Let's concentrate on the flaws that haunt the practice.

1. Collectibles to start an estimate

Starting an estimate is back-breaking work. The following elusive documents have to be procured before sitting down to a sensible estimate:

1. Customer Requirements Specification

2. Request for proposal,

3. System specification/ Architecture.

4. Software Requirements Specification

I have often heard the following complaint: “Owwww, but these don't exist at that point.” The reply is that, as per the SDLC models of software development (waterfall, iterative, Jumbo Circus, whatever), these are the primary documents that need to be in place before we estimate a project. The point of having these documents in hand is to keep the wild ass from guessing about our project.

2. Approaches to preparing QA estimates for a project

There are a lot of sound practices that people have been using to prepare development estimates, like source lines of code (SLOC), use-case-based analysis, function point analysis, object hierarchy trees, etc. In the following section I have proposed some techniques that can be used to prepare QA estimates for projects.

1. Begin the estimate by conducting a comprehensive study of the system architecture and the scope of work, and by analyzing the complexity of the work.

2. Determine what style of testing the strategy should use (you could choose from many options: use-case-based testing, scenario-based testing, and module hierarchy trees, to name a few). This is a very important step, which most QA personnel skip.

3. If you are adopting module-based testing, prepare a module hierarchy tree to visualize the inter-dependencies between modules.

4. Analyze and assign complexity to each node of this tree. Estimate the number of test cases in each module by analyzing the # of functionalities bundled in each module.

5. Ascertain, from past experience or analysis, a realistic projection of QA productivity (number of test cases per person-day). This is a metric which varies and WILL NOT FOLLOW ANY PRE-DEFINED ORGANIZATIONAL NORM.

6. Analyze each module to arrive at a preliminary idea of the extent of automation that is possible in each of these modules.

7. Estimate the automation strategy: how you will automate the testing, what the coverage of automation will be, and what the complexity of developing any automation scripts will be. (A POC might be required for this at a later stage, and it has to go into the estimate. Doing POCs for software test automation is a severely neglected, critical component of delivering QA processes.)

The above seven steps necessarily have to be completed and documented before you can start on the estimate.
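As a hedged illustration of steps 4 and 5, the effort for one execution cycle falls out of the per-module test case counts and the productivity figure (all numbers here are hypothetical):

' Test cases estimated per module from the module hierarchy tree (step 4)
moduleTestCases = Array(120, 80, 40)
totalTestCases = 0
For Each tc In moduleTestCases
    totalTestCases = totalTestCases + tc
Next
' Realistic QA productivity projection (step 5): test cases per person-day
productivity = 15
WScript.Echo "Execution effort: " & totalTestCases / productivity & " person-days"   ' 16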

3. Preparing the estimate

Identify the risks involved. These can range from technology risks to risks introduced by the delivery model that has been adopted, among many other factors. Each risk identified has the potential to throw your estimate haywire; hence de-risking the project is an important part of estimation. In situations where the number of risks identified is high, it is good practice to prepare two separate estimates:

# Conservative estimate: factors in all the overruns in effort, time and cost that would transpire if the risks realize themselves.
# Optimistic estimate: envisions the delivery of the project if the risks identified do not materialize.

The benefit of having two versions of the estimate is that they provide a clear-cut picture of how bad things can go when risks materialize and what losses would be incurred if they are left unaddressed. Conservative estimates can be projectional in nature, where the increase in time, effort and cost is shown as a projection in relation to the risk.

The process of preparing the estimate starts only once the above steps have been completed. All factors in the estimate have to be traceable to the documentation prepared as part of section 2 of this document. The actual work of doing the estimate, like assigning timelines and resource loading, will be guided by the above section and help you arrive at a sound estimate.

The Golden rule of all estimations “DO NOT DO SOFTWARE PROJECT ESTIMATIONS WITHOUT ALL NECESSARY SUPPORT DOCUMENTATION IN PLACE.”

I have outlined only the measures needed to curb the flawed practices in estimates. These could be incorporated to modify the existing process (if any) of preparing estimates within your organization. A complete walkthrough would expand the scope of this article exponentially.

With thanks, from http://www.testinglounge.com/2007/06/12/qa-estimation/

By :Robin Thomas

Wednesday, October 24, 2007

Testing Measurement - a nice high-level article

Someone has rightly said that if something cannot be measured, it cannot be managed or improved. There is immense value in measurement, but you should always make sure that you get some value out of any measurement you are doing. You should be able to answer the following questions:

# What is the purpose of this measurement program?
# What data items are you collecting, and how are you reporting them?
# What is the correlation between the data and the conclusion?

Value addition
Any measurement program can be divided into two parts. The first part is to collect data; the second is to prepare metrics/charts and analyse them for the valuable insight that might help in decision making. Information collected during any measurement program can help in:

# Finding the relation between data points,
# Correlating cause and effect,
# Providing input for future planning.

Normally, any metrics program involves certain steps which are repeated over a period of time. It starts with identifying what to measure. After the purpose is known, data can be collected and converted into metrics. Based on the analysis of these metrics, appropriate action can be taken and, if necessary, the metrics can be refined and the measurement goals adjusted for the better.

The data presented by the testing team, together with their opinion, normally decides whether a product will go to market or not. So it becomes very important for test teams to present data and opinions in such a way that the data looks meaningful to everyone and decisions can be made based on what is presented.

Every testing project should be measured for its schedule and the quality requirements of its release. There are lots of charts and metrics that we can use to track progress and measure the quality requirements of the release. We will discuss here some of the charts and the value they add to our product (a small calculation sketch follows the list).

# Defect Finding Rate
This chart gives information on how many defects are found across a given period. This can be tracked on a daily or weekly basis.

# Defect Fixing Rate
This chart gives information on how many defects are being fixed on a daily/weekly basis.

# Defect distribution across components
This chart gives information on how defects are distributed across the various components of the system.

# Defect cause distribution chart
This chart gives information on the cause of defects.

# Closed defect distribution
This chart gives information on how defects with closed status are distributed.

# Test case execution

# Traceability Metrics

# Functional Coverage

# Platform Metrics
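For example, a defect finding rate reduces to a simple tally per period. A minimal VBScript sketch with made-up daily counts:

' Defects logged each day, Monday to Friday (hypothetical data)
dailyFinds = Array(5, 8, 3, 6, 2)
total = 0
For Each n In dailyFinds
    total = total + n
Next
WScript.Echo "Defects found this week: " & total                                       ' 24
WScript.Echo "Average finding rate: " & total / (UBound(dailyFinds) + 1) & " per day"  ' 4.8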


-----------------With thanks from http://testinggeek.com/
Cheers,
-jerry

Browser Tests, Services and Compatibility Test Suites

Here is a wonderful article about browser compatibility and cross-browser testing for web applications.

Worth visiting...

http://www.smashingmagazine.com/2007/10/02/browser-tests-services-and-compatibility-test-suites/

Cheers,
-Jerry

Tuesday, October 23, 2007

Better Folder Structure

Here is a better folder structure for a project drive.

Cheers,
-jerry

Monday, October 22, 2007

A wonderful GUI Checklist from Rathish

Description
All screens should include blue face and white borders
Look and feel of all buttons should be fixed according to windows buttons
“Century” application selected by default on all screens
Hyperlink on century logo, help, sign out
Refreshing the page
Hyperlink on welcome user information
Links to the two tabs-User management, Reports
On mouse over of the tab drop down should occur
Equal borders on the right pane
Alignment of the Frame
Equal spacing between search tab and table header
Color consistency between left margin and table header
Color gradient on the frame
Font consistency
Representation of hierarchy with folders
Change of icon in the tree
Checking for GIF
Vertical scroll bar on the left pane
No scrolls on right pane
Scroll bars should be as XP color
Frame consistency in showing the details
Selection of links on mouse over
Link reflection (tree expansion)
Visited link color
Unvisited link color
Links on tree list allows the user to traverse to the particular page
Navigation link with records on all screens
10 records are listed in a page
Display of total number of records at the bottom of the page
No display of records on empty listing page
“First” and “Prev” is disabled if the user is on the first page
“Next” and “Last” is disabled if the user is on the last page
Basic search tab enabled by default
Criteria combo enabled on loading
Condition combo disabled on loading
Size of the combo
“Add Condition” with hyperlink on advanced search tab
In advanced search, “match any” selected by default
Delete button is added for each row newly inserted in advanced search
No text box for “assign to” search option
All icons are placed at top right corner
All icons should be vertically centered
Pop up messages for all icons
Check for button name
Hyperlink on tabs with user privilege
Search tab visibility based on privileges
Title - population of hierarchy
Equal spacing between Assign to, Start and Target date
Check box on the table header
Hyperlinks on each hierarchy listing
All listing screens are read only
Check box for all records listed
Proper alignment of text in the listing page
All records listed in alternative color
Display of description value with tool tips
Ellipsis is appended for values whose length is greater than the specified value
Text area should have hairline border
Gray scroll bar in the description
Tool tip range
Tool tip on mouse over
Display of dates in dd-mon-yy format on all listing screens
Focus set on specific fields - on loading
Calendar popup
Text box for display of dates is read only
Space consistency between description field and buttons while creating hierarchy
Hyperlink on buttons on creation and edition of hierarchy
Inner borders for create / edit hierarchy
Success / confirmation messages
Alert / failure messages
Lowercase, upper case and check for sentences in all messages
After message no tab key functionality should be allowed
Attachment symbol on the header
Attachment symbol inclusion for each test case
Attachment symbol width consistency
Assign and deassign buttons are placed next to target date field
No link to view tab
Links on the test cases allows the user to move to view tab
All the fields in view tab are non editable
Order transaction type selected by default
Color consistency between attachment button sub menus and screen controls
Hyperlink on attachment tab
Attachment tab with table structure
Check box on attachment table
Hyper link on all buttons at the bottom of the attachment tab
Reset to clear values
Moving back and forth in explorer window
Back button to traverse
All screens under user management should have frame borders
All loading screens should be frame centered
Double borders not allowed in privileges
Application name can be darker in assign privileges
“Assign to” combo is not present in project and cycle management
Table headers should have 5 pixel spacing
Hyperlink on the value in the attachment type
Listing tabs are enabled always
Adjustable divider is required
Navigation as link as blue/underline non-link gray
Movement of the screen (Click and drag)
Horizontal line in the message window
Text Alignments
Window Alignment/positioning
Attachment file - size constrain

Here are Team SST-MM photos. Enjoy!

Planning to post an article on SST-MM soon...
Cheers,
-jerry and Team SST

Friday, October 19, 2007

30 Usability Issues to be aware of

Today I found a wonderful article about web usability issues.
Hope this will be very helpful in designing a good web page, and in testing too...

http://www.smashingmagazine.com/2007/10/09/30-usability-issues-to-be-aware-of/

The next one is about the same web usability topic, but covers some nightmares to escape from:


http://www.smashingmagazine.com/2007/09/27/10-usability-nightmares-you-should-be-aware-of/


Cheers,
-jerry

Friday, October 12, 2007

The final question to KAREN N. JOHNSON by QA Zone

Q8- Thank you very much for your time, Karen. Let’s end with a quick recommendation…For QAZone members that are looking to grow their team, what do you think are the most important skills or personality characteristics that they should be looking for in new team members during their hiring process?

KNJ- Curiosity - I can't stress this enough! Testers need to be curious people, constantly wondering things such as "what if I do this?" or "what if fill-in-the-blank here?". Technical skills can be taught if someone has the aptitude and motivation, but behind every good tester there is a burning curiosity. I think this is why testers enjoy TV shows such as the CSI series: we like to solve mysteries, we like the investigations, the fact-finding and the details. Coincidentally, I think having a curious mind keeps us young and engaged with life as well.

Four important things to work with before the Test Plan

It's so simple...
1. Your Project.
2. Your Resources.
3. Your Budget.
4. Time Available.

Keep all these things in mind and prepare the test plan.
Don't write the test plan with a mindset of satisfying your customer or doing 100% testing.
That's not possible, so work with what is possible...

-jerry .. :-)

Friday, October 5, 2007

Good explanation on test-driven development

There seems to be a lot of confusion and misinformation out there about what exactly we mean by test-driven development.

I sometimes use this little analogy to help explain it: when I moved into my first house, my girlfriend at the time (who was a total cow, I should hasten to add - not that I'm bitter or anything) and I went shopping for kitchenware.
Our method was simply to sit down and draw up a list of "things that a kitchen should have" - crockery, cutlery, mugs, glasses, scales, an egg whisk, bowls, and so on. We spent hours in IKEA, British Home Stores and various other purveyors of culinary paraphernalia, and then we dragged everything back to the house and unpacked it into our new kitchen.
When we'd finished, we sat at the breakfast bar with a cup of tea and surveyed our handiwork. It wasn't until the next day, when I tried to make a mushroom omelette, that I realised we'd forgotten to buy a frying pan. And when my girlfriend tried to make herself a Jack Daniels & Coke on ice, we realised we'd forgotten to buy any ice trays for the freezer. Or suitably sized glasses for a whiskey and coke, come to that. So I had cornflakes for breakfast, and in the evening she enjoyed a warm whiskey and coke served in a small wine glass.

What we should have done, of course, was make a list of examples of things we'd be using our kitchen to do, and then figure out what we'd need in our kitchen in order to successfully execute each of those scenarios. That way, I would most certainly have remembered the frying pan, and Sarah would have had a nice, cold drink out of a more appropriate glass.
TDD is specification by example. Each test case describes a way in which the end product will be used - whether that end product is a kitchen, a software application, or just a single method in a single class in a software application. It's not really a testing discipline at all; it's a design discipline. I think maybe it's the "t" word that confuses people, though. Some teams write seven billion lines of code, then write a handful of unit tests to cover it, and then come to me and say "we're doing test-driven development". No. Not if you write the tests after you've written the code, you're not. In TDD, tests drive development - the clue's in the name. You only write code needed to pass tests, so your tests are a specification for the code you need to write.

The fact that afterwards you end up with a suite of executable regression tests is a positive side effect of doing TDD. It is not the primary goal of TDD, though.
The primary goal of TDD is to use tests to drive the design and implementation of the software. And that, my fine feathered friends, is test-driven development.

- Thanks to Jason Gorman... wonderfully explained. :-)
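To make the "tests drive development" point concrete, here is a minimal test-first sketch in VBScript (my own illustration, not Jason's; VBScript has no standard unit test framework, so a plain assert routine stands in for one):

' Step 1: write the test first. Run it before Add exists and it fails,
' which is exactly what tells us what code we need to write next.
Sub AssertEqual(expected, actual, description)
    If expected = actual Then
        WScript.Echo "PASS: " & description
    Else
        WScript.Echo "FAIL: " & description & " (expected " & expected & ", got " & actual & ")"
    End If
End Sub

' Step 2: write only the code needed to make the test pass.
Function Add(a, b)
    Add = a + b
End Function

AssertEqual 5, Add(2, 3), "Add should sum two numbers"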

Tuesday, September 4, 2007

10 Software Development No-Brainers

Here is a cool post by Jason Gorman...

I spend a considerable amount of my life locked in meeting rooms listening to protracted discussions about stuff that I could have sworn we'd sorted out years ago.
I'm thinking maybe, to save time and heartache, I should make a list of these no-brainers and their screamingly obvious solutions so we can all get on and apply our feeble minds to more important problems - like where to get the cheapest liquor.
1. What should be our Coding Standards? - Google "coding standards" for your chosen programming language, find a set that makes sense to you and then use them.
2. Should we do iterative development? - No. You should stick your head up an elephant's arse instead. It has about the same effectiveness as waterfall delivery, but is slightly less unpleasant for the parties involved.
3. Model-View-Controller or Model-View-Presenter? - seriously, who gives a sh*t? Pick one and get on with it.
4. .NET or Java? - ditto.
5. Are we doing too much unit testing? - No.
6. Do we need a business/systems analyst? - nobody needs a business/systems analyst. They're like nipples for men - they serve no practical purpose, but they can still be used to hurt you.
7. Should we adopt a Service-Oriented Architecture? - Sure. Just let the rest of us know when you've figured out what the hell that means, okay?
8. Are we doing too much refactoring? - No.
9. The guy we want is just outside our budget. What do we do? - Increase your budget. Find a way.
10. How can we make more time for quality? - Just do it. The extra time will make itself.
--Thanks Jason...this is cool

Friday, August 24, 2007

How did we miss THAT?

Here is another interesting blog I visited...


How did we miss THAT?
Filed under: Thinking Like a Tester, Ruminations — Elisabeth Hendrickson @ 11:12 am
“Oh goodness. How did I miss THAT bug?”
Over the years, I’ve asked myself that question numerous times.
I asked that question when another tester found a blazingly obvious, critical bug that I completely missed. (The answer: I spent too much time tinkering with an ineffective automated script I’d written, and too little time observing the actual behavior of the system. That’s the project where I learned a lot about how NOT to do test automation.)
I asked myself that question, repeatedly, when I participated on a project some years ago now where we shipped software that crashed left and right. (I’m still sorting out the answers to that last one. Catastrophic failures are almost never the result of a single, simple error. And this particular catastrophic failure represented failures at all levels in an organization that had, um, issues. But I digress.)
I asked the question again when I learned that a web site I tested had back-button problems. After all, I was sure I’d tested for that. And I had. But I hadn’t re-tested for it after a particular set of code changes that changed some operations from HTTP GETs to HTTP POSTs. Oops.
And I asked myself that question more recently when I learned that a system I worked on earlier this year failed to save a change, and also failed to report an error, when it encountered data misformatted in a particular way in one specific field. Badly formatted data is one of my specialties, and I couldn’t believe I forgot to test the particular case that resulted in the problem. But it turned out that I did, indeed, fail to test what would happen if you entered “www.testobsessed.com” into a URL field instead of “http://www.testobsessed.com”. In hindsight, it’s an obvious test. Another lesson learned.
I reflected on those missed bugs when a colleague, Sandeep, a test manager, recently wrote to say that he’s been asking himself “how did my testing team miss that?”
He decided to seek out patterns of testing problems by categorizing escaped bugs according to the hole(s) in testing that allowed the bug to slip through. The idea is to improve the test effort by figuring out the common causes behind escaped bugs.
My initial reaction was, “that makes sense.” If you can identify the top 20% of testing holes that let 80% of the bugs through, you can make some serious improvements to the test effort.
And my next reaction was, “but be careful.” Sandeep’s intent is good: use lessons learned from escaped bugs to improve testing. However, asking “How did we miss that?” is perilously close to heading down the slippery slope to “How did JoeBob miss that?” to “It’s JoeBob’s fault.” Having talked to Sandeep, I know he’s not trying to play “pin the blame on the tester.”
So I suggested a small reframe.
Instead of categorizing escaped bugs by asking the question, “How did testing miss that?”, categorize them by asking the question, “How can we improve the probability that testing will find bugs like that in the future?”
It’s a subtle difference.
But the result of reframing the question is that instead of identifying categories as noun phrases like “insufficient test data,” we end up with imperative statements like, “add test data.” Those two categories may look almost identical, but only one is actionable. I can “add test data.” The statement prompts me to do something different next time. But “insufficient test data” only gives me something to regret. And regret won’t help me ship better software.
So how can you categorize escaped bugs to improve the test effort without falling into the blaming trap? Try an Affinity Exercise with the question, “What could we do differently next time to increase the probability that if we have another bug like this we’ll catch it in test?”
To prepare:
Choose a team to participate in the activity. Affinity exercises can work with any number of people, but for this particular activity, I find a smaller group - say 3 to 5 people - works best. It’s a good idea to include people with diverse roles and skill sets.
Set up a meeting time and place. Plan for the whole activity to take 2 hours. And arrange to meet in a place with plenty of table and/or wall space.
Gather (or shop for) office supplies. You’ll need:
Index cards or sticky notes. Bigger is better. I like 5×8 cards or the SuperSticky 5×8 Post Its.
Felt-tip markers. I like Sharpies because they make consistently dark, readable marks. (Beware: Sharpies are permanent. Do NOT confuse your Sharpies and your White Board markers in the conference rooms. Facilities people get tetchy about such mix-ups.)
Gather a list of escaped bugs you want to analyze. If you have a lot of escaped bugs, prioritize them and time box the exercise. (You probably won’t be able to analyze more than 50 in an hour, possibly less, so don’t print out a list of 500.)
In the Meeting:
Review each bug with the team, asking: “What could we do differently next time to increase the probability that if we have another bug like this we’ll catch it in test?”
Have participants write their suggestions on the cards/stickies, one idea per card, in the form of an actionable statement. The suggestions should complete the sentence, “In the next release/iteration/sprint, we can ______.” Tips:
Also ask the participants to make their suggestions as concrete and specific as possible. For example, instead of writing “add test data,” write “add titles with ampersands (&) to the test data.”
And ask the participants to stick to test-related actions and avoid blaming individuals. “Revoke NancySue’s check-in privileges” is not an acceptable suggestion.
When you’ve reviewed all the bugs, or when an hour has passed, stop reviewing bugs. (If you still have lots of bugs to go and want to continue analyzing after an hour, stop anyway. Finish the rest of the exercise - the grouping. When you’ve worked through the whole process, if you still think more analysis would help, you can always do the exercise again.)
Gather all the cards/stickies, and lay them out on a large work surface: a table, the walls, or even the floor can all work well.
Sort through the cards/stickies as a team. The cards are now owned by the team, and everyone should take a hand in organizing them. Encourage participants to move cards that seem alike together so they are stacked together. Continue until the team agrees that it’s satisfied with the stacks of cards/stickies.
Ask the team to give meaningful names to the stacks of cards/stickies. This is the part of the activity where you will generate the more abstract categories like “add test data.”
The result of this exercise is a list of categories for improving the testing effort that we can then use to determine which kinds of improvements will have the biggest bang for the buck. And the best part is that the list emerged from the actual problems your software has had in the field rather than being some arbitrary list of theoretical “improvements” based on someone else’s unrelated experience.
But wait; you’re not done. Now that you’ve created a first draft list of categories, test it. (Did I mention I’m Test Obsessed?) Choose a different set of escaped bugs, and assign each to one or more categories from the list. Notice how easy or hard it seems to find a category for each bug. This will give you a lot of feedback about how well the category list will work in practice. You may find the team needs to spend some additional time iterating on the list.
Once you’re satisfied with your list of categories, you can run the numbers to see how many bugs are in each category. Then you can create a Pareto diagram of the results to see what 20% of the improvement opportunities on your list will result in an 80% improvement. Now you can truly leverage escaped bug information into concrete actions that will make the test effort more effective.
Over time, as you try to use that original list to categorize reports of new bugs in the field, you will probably find that the list becomes less and less relevant. I hope so, anyway. It indicates that the improvement efforts are working, that the team has improved the test effort.
That’s when you know it’s time to do the process all over again to classify the next generation of escaped bugs.

Monday, August 20, 2007

Ten Worst Software Bugs

Before going into the worst bugs, do you know why computer program errors are called “bugs”, and why developers like myself “debug” programs? It all goes back to September 1945. Here is the story:
Moth found trapped between points at Relay # 70, Panel F, of the Mark II Aiken Relay Calculator while it was being tested at Harvard University, 9 September 1945. The operators affixed the moth to the computer log, with the entry: “First actual case of bug being found”. They put out the word that they had “debugged” the machine, thus introducing the term “debugging a computer program”.

Here is the log, with the moth still taped by the entry:

Now on to the worst moths… oops, I mean bugs. Wired News published an article about history’s worst software bugs. Here is a summary of the worst software bugs, in chronological order:
1962 — Mariner I space probe. A bug in the flight software for the Mariner 1 causes the rocket to divert from its intended path on launch.
1982 — Soviet gas pipeline. Operatives working for the CIA allegedly plant a bug in a Canadian computer system purchased to control the trans-Siberian gas pipeline. The resulting event is reportedly the largest non-nuclear explosion in the planet’s history.
1985-1987 — Therac-25 medical accelerator. A radiation therapy device malfunctions and delivers lethal radiation doses at several medical facilities.
1988-1996 — Kerberos Random Number Generator. The authors of the Kerberos security system neglect to properly “seed” the program’s random number generator with a truly random seed.
1990 — AT&T Network Outage. A bug in a new release of the software that controls AT&T’s #4ESS long distance switches causes these mammoth computers to crash.
1993 — Intel Pentium floating point divide. A silicon error causes Intel’s highly promoted Pentium chip to make mistakes when dividing floating-point numbers that occur within a specific range.
1995-1996 — The Ping of Death. A lack of sanity checks and error handling in the IP fragmentation reassembly code makes it possible to crash a wide variety of operating systems by sending a malformed “ping” packet from anywhere on the internet.
1996 — Ariane 5 Flight 501. Working code for the Ariane 4 rocket is reused in the Ariane 5, but the Ariane 5’s faster engines trigger a bug in an arithmetic routine inside the rocket’s flight computer.
2000 — National Cancer Institute, Panama City. In a series of accidents, therapy planning software created by Multidata Systems International, a U.S. firm, miscalculates the proper dosage of radiation for patients undergoing radiation therapy.

I noticed that “Y2K” did not make the list. I guess it was not among the worst, or it was not a bug. Or is it?

---Thanks to awards.net

Friday, August 17, 2007

A wonderful learning experience on testing

A true story about learning about software testing... A long while ago I had a boss named Bob. At the time I was fairly new to software testing. Bob had knowledge on many topics, and he had an approachable way about him that made working with him and for him one of the most positive experiences of my life. I realize now the primary reasons for this: I love to learn and enjoy the role of the student, and Bob was a great teacher by nature. Together, for a long stretch, we made a great pair.

I used to ask Bob time and time again, what should I learn now? He would sit back at his desk and think about this. Then he would take a book from his bookshelf. Sometimes he would suggest reading a whole book, and sometimes just a section. The concept of not reading a book front to back like a novel gave me permission to take what I wanted and leave the rest. He'd answer in ways like this: learn a little more SQL, learn more about data types, or go read more about user interface standards. Sometimes depth, sometimes breadth. But here's one point: he knew me. He knew what I knew, he knew pretty well what I didn't know, and he knew clearly what my work objectives and challenges were - and sometimes he would ask me, what are your current frustrations or stopping points? This made my asking Bob a good question, because he had a frame of reference. This is an important point - sometimes people email me this same question, so please consider that I'm going to be limited in how I answer, because I don't know you, I don't know your background, and I don't know what you might be trying to accomplish. But you do.

I remember one day, after working with him for years, I asked the same age-old question. Bob, what should I learn? He looked up at his bookshelf, turned to me and said: it may be time for you to teach. I was actually sad that day; I wanted to remain the student. But with a deep sigh, I accepted that it might be time to pass on some knowledge and that it was also time for me to continue on my own. I had to resolve this continuing question for myself.

I think I learned a bit or more about learning from Bob. I look for my educational gaps; I know where they are. I have a mix of topics that interest me, topics I feel or know I need to learn more about, and sometimes topics I need to learn to resolve new technical challenges at work. Sometimes I catch up on pure software testing materials from the work of other people in the field.

Bob hasn't been my boss for a decade or more now, but I was fortunate to have had the time with him. Years later, I find myself hiring testers. I share every book I own; I highlight sections; I point out websites. I suppose the cycle repeats. It feels good to help other people, though a large part of me, I suspect, remains a grasshopper.


- Thanks to Karen N. Johnson for sharing this great time with her teacher.

Friday, August 10, 2007

Intelligent search engine?

Today I tested the basic search functions of the project I am working on and tried out the search options it offers. The application performs a default search if you do not specify anything (a default specification exists for this). I removed all the default search input and performed the search without any input, but it returned the same default result. I actually expected an error message insisting that the end user provide some input (intelligent programming… :-), but there was none. (The application has a special specification to return everything in the database in that case.)

Then I tried the same empty-search test in some famous search engines: open the search page and, without entering any search key, perform the search. Aha! They all simply return to their original page. This happened with most of the search engines I tried, like Google, Yahoo, AltaVista, etc.

Here is what I would expect from a search engine:

The search engine could show a message (an error) asking the user to enter some input, or at least lead them to the web search help page.
The search engine could return some search result for the empty input (a white space or blank search).

I really don't know what is right in this case. If anyone wants to comment on this, please shoot.
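If you want to automate this kind of empty-input check, here is a minimal sketch in Python. The site URL, the /search path and the "q" parameter are all hypothetical placeholders, not facts about any particular engine.

code:
# Probe how a (hypothetical) site reacts to an empty search.
import requests

BASE_URL = "http://www.example.com"  # placeholder site under test

def probe_empty_search():
    # Send a search request with an empty query string.
    response = requests.get(BASE_URL + "/search",
                            params={"q": ""},
                            allow_redirects=False)
    if response.status_code in (301, 302):
        print("Empty search redirects to:", response.headers.get("Location"))
    elif "enter" in response.text.lower():
        print("Page seems to prompt the user for input")
    else:
        print("Empty search returned status", response.status_code)

probe_empty_search()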

The Web Testing CheckList

The Checklist
The links in the checklist lead to a discussion of each issue.

Validation
- Validate the HTML
- Validate the CSS
- Check for broken links
Flexibility
- Try varying window sizes
- Try varying font sizes
Speed
- Access the site via a modem
- Check image size specifications
Accessibility
- Test accessibility
- View in text browser
Browser independence
- Try different browsers
- Check printed pages
- Switch JavaScript off
- Switch plug-ins off
- Switch images off
Other checks
- Check non-reliance on mailto
- Check no orphan pages
- Check sensible page titles

Want full details? Visit http://www.xs4all.nl/~sbpoley/webmatters/checklist.html
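To give one item on that checklist, the broken-link check, a concrete shape, here is a rough sketch in Python. It is only an illustration: the start page is a placeholder, and a real checker would use a proper HTML parser rather than a regular expression.

code:
# Rough broken-link check for a single page.
import re
import requests

PAGE_URL = "http://www.example.com/"  # placeholder start page

def check_links(page_url):
    html = requests.get(page_url).text
    # Crude href extraction; fine for a sketch, not for production.
    links = re.findall(r'href="(http[^"]+)"', html)
    for link in links:
        try:
            status = requests.head(link, allow_redirects=True).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print("BROKEN:", link, status)

check_links(PAGE_URL)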

Friday, August 3, 2007

THINK OUT OF THE BOX AND FIND MORE BUGS

Hi everyone,
For a good software tester, out-of-the-box thinking is essential. It helps him find good bugs in the application and make the application a foolproof one.

Let me give an example. If a programmer is coding for a building lift, he won't write separate code for every floor. He may code for a single floor (say, the first floor), then slightly change that code and reuse it for the other floors.
Here the out-of-the-box thinking comes in. If you test the application on every floor you may find some bugs, but a smart tester will specifically test the ground (lowest) floor and the top (highest) floor.

The ground floor does not need the option to go down, and the top floor does not need the option to go up.
This is how out-of-the-box thinking works; a small sketch of these boundary checks follows.
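To make the lift example concrete, here is a minimal sketch in Python. The Lift class and its floor rules are invented purely for illustration, not taken from any real system.

code:
# Boundary checks for an imaginary lift controller.
class Lift:
    def __init__(self, lowest, highest):
        self.lowest = lowest
        self.highest = highest
        self.floor = lowest

    def can_go_up(self):
        return self.floor < self.highest

    def can_go_down(self):
        return self.floor > self.lowest

lift = Lift(lowest=0, highest=10)

# A smart tester heads straight for the boundaries:
lift.floor = 0
assert not lift.can_go_down(), "ground floor must not offer 'down'"
assert lift.can_go_up()

lift.floor = 10
assert not lift.can_go_up(), "top floor must not offer 'up'"
assert lift.can_go_down()
print("Boundary checks passed")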


Regards,
-Jerry-

Friday, July 27, 2007

A Review of Error Messages

What Does a Good Error Message Look Like?
A well-constructed error message

should identify the program that is posting the error message
should alert the customer to the specific problem
should provide some specific indication as to how the problem may be solved
should suggest where the customer may obtain further help
should provide extra information to the person who is helping the customer
should not suggest an action that will fail to solve the problem and thus waste the customer’s time
should not contain information that is unhelpful, redundant, incomplete, or inaccurate
should provide an identifying code to distinguish it from other, similar messages
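As a rough illustration of those qualities, here is a small sketch in Python of an error-message builder. The field names, wording and error code are my own inventions, not any standard.

code:
# Sketch of an error message aiming at the qualities listed above.
def build_error_message(program, problem, suggestion, help_source,
                        support_info, code):
    return (program + " - Error " + code + ": " + problem + "\n"
            "What you can try: " + suggestion + "\n"
            "For further help: " + help_source + "\n"
            "Details for support staff: " + support_info)

print(build_error_message(
    program="OrderEntry",
    problem="The order file could not be saved.",
    suggestion="Check that the disk is not full, then retry.",
    help_source="press F1 or contact the help desk",
    support_info="save failed in orders.dat (disk full)",
    code="OE-1042",
))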


Want to read more about error messages? Visit
http://www.developsense.com/essays/AReviewOfErrorMessages.html

Thanks to Michael Bolton

No User Would Ever Do That

"No user would ever do that!"
"No user would ever try that!"
"No user would ever need that feature!"
"That's a cool idea, but no user would ever want it."

When developers say, "No user would ever do that," what they really mean is "No user that I've thought of, and that I like, would do that on purpose." In the Rapid Software Testing course, James and I have been encouraging testers to probe that statement for users that the developer didn't think of, for users that the developer doesn't like (like hackers or inexperienced users), or for things that legitimate, likable users might do by accident.

It recently occurred to me, though, that developers often say this after a tester has done something that has surprised the developer. "No user would ever do that!" "Well, I'm a user, and I just did it." "Yeah, but... you're not a real user."

One implication from this exchange is that testers aren't real users. Another is that testers' questions, actions, requirements, needs, and tactics don't matter. Fair enough--but let's keep that idea in mind, and maybe revisit it, when we hear another common software development question: "Why did it take you so long to find that bug?"


Posted by Michael Bolton on his blog. Thanks to him.

Monday, July 23, 2007

Ten Commandments of Software Testing

Living has the Ten Commandments; testing is harder, so there are fifteen:

1. Stay alert
2. Run the tests as written
3. Keep an activity log
4. Document any problems encountered
5. Re-run the tests as they should have been written
6. Understand the system
7. Understand the people
8. Understand the tests
9. Understand the requirement
10. Understand the dependencies
11. Hope for the best
12. Prepare for the worst
13. Expect the unexpected
14. Don't fake any results
15. Agitate for improvement


Thanks to David Smiles

Tips on Writing Bug Reports

How to Write a Bug Report

1. Be very specific when describing the bug. Don’t let there be any room
for interpretation. More concise means less ambiguous, so less clarification
will be needed later on.

2. Calling windows by their correct names (by the name displayed on the
title bar) will eliminate some ambiguity.

3. Don’t be repetitive. Don’t repeat yourself. Also, don’t say things
twice or three times.

4. Try to limit the number of steps to recreate the problem. A bug that is
written with 7 or more steps can usually become hard to read. It is usually
possible to shorten that list.

5. Start describing with where the bug begins, not before. For example,
you don't have to describe how to load and launch the application if the
application crashes on exit.

6. Proofreading the bug report is very important. Send it through a spell
checker before submitting it.

7. Make sure that all step numbers are sequenced. (No missing step numbers
and no duplicates.)

8. Please make sure that you use sentences. This is a sentence. This not
sentence.

9. Don’t use a condescending or negative tone in your bug reports. Don’t
say things like "It's still broken", or “It is completely wrong”.

10. Don’t use vague terms like “It doesn’t work” or “not working properly”.

11. If there is an error message involved, be sure to include the exact wording
of the text in the bug report. If there is a GPF (General Protection Fault) be
sure to include the name of the module and address of the crash.

12. Once the text of the report is entered, you don’t know whose eyes will see
it. You might think that it will go to your manager and the developer and
that’s it, but it could show up in other documents that you are not aware of,
such as reports to senior management or clients, to the company intranet, to
future test scripts or test plans. The point is that the bug report is your
work product, and you should take pride in your work.

Thanks to Bernie Berger

Extreme Programming or Extreme Testing

At STAREast 2007, James Bach told a story of a project he once worked on. The project’s methodology was somewhat traditional, in the sense that developers wrote code and testers then performed inspective testing of the application. James found numerous bugs, reporting them back. This seemed to cause the programmer some frustration. James told of the programmer complaining… “why don’t you just tell me what tests you’re going to do and I’ll make sure that the code passes them before I give it to you?”

I liked the story… I now use it to explain something to the teams I work with.

After telling the story, I ask “What if we could tell the developer what tests we’re going to do in advance”?

This is exactly the opportunity presented in the development element of extreme programming where implementation is driven by Acceptance Tests. The key difference is that the process of ‘telling the programmer what tests the code needs to pass’ is a collaborative activity. The Acceptance Tests are arrived at in collaboration between tester, programmer and the customer.

This way, when we get the code, we know that all ‘reasonable’ tests that we can think of are accounted for in advance and (ideally) automated.

Because we are human and time is limited, there will always be tests that we simply can't be expected to think of in advance. This is where Exploratory Testing comes in handy. By interacting with the application, we discover what previously unaccounted-for behaviours have been enabled by our development efforts.

Sometimes the outcome of these behaviours is highly undesirable… i.e. bugs. I simply think of these bugs as ‘gaps in our collective thinking’… gaps that we identify as additional tests that reveal a new User Story - that we implement soon after in a subsequent iteration. If it is urgent enough, ‘the team’ (programmers, testers etc) and ‘the customer’ may decide to implement it immediately, instead of work we’ve already planned in the current iteration. With short 1-2 week iterations, it’s gotta be pretty darn urgent before that happens.
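As a sketch of what "telling the programmer the tests in advance" can look like, here is a tiny acceptance test in Python. The user story, the function and the figures are all hypothetical, invented just to show the shape of the idea.

code:
# Hypothetical acceptance tests, agreed on before coding begins.
# Story: "A customer's basket shows a running total including tax."

def total_with_tax(prices, tax_rate):
    # Stand-in for the implementation the programmer will write.
    return round(sum(prices) * (1 + tax_rate), 2)

def test_basket_total_includes_tax():
    assert total_with_tax([10.00, 5.50], tax_rate=0.10) == 17.05

def test_empty_basket_totals_zero():
    assert total_with_tax([], tax_rate=0.10) == 0.0

test_basket_total_includes_tax()
test_empty_basket_totals_zero()
print("Acceptance tests passed")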

Monday, July 16, 2007

A wonderful article on Manual and Automation Testing

Thoughts on the Coexistence of Full Test Automation and Manual Testing

Published: April 3, 2007

by Colin Armitage

http://www.itjungle.com/fhs/fhs040307-story03.html

Monday, July 9, 2007

New kind of web testing - Web Service Testing

Dear Friends,

This is the very latest thing in web testing:
Web Service Testing. To do this you need very good
web service knowledge and a good tool. Here is a rare
tool of its kind...

"http://www.stylusstudio.com/ws_tester.html"

Surf and Enjoy

- jerry

Web Page Stress test using Microsoft WAS

Hi Friends,

Here is the link to learn how to do web (stress) testing using
Microsoft's Web Application Stress tool...

"http://www.west-wind.com/presentations/webstress/webstress.htm"

Thanks to west-wind.
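To show the core idea in miniature, here is a toy load generator in Python. The target URL and the load figures are placeholders, and it is of course no substitute for a real stress tool.

code:
# Toy load generator: a miniature of what a stress tool does.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://www.example.com/"  # placeholder page under test
USERS = 10                          # concurrent virtual users
REQUESTS_PER_USER = 5

def one_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        requests.get(TARGET)
        timings.append(time.time() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = pool.map(one_user, range(USERS))
    all_timings = [t for user in results for t in user]

print("requests sent:", len(all_timings))
print("average response time: %.3f s" % (sum(all_timings) / len(all_timings)))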

Enjoy and Explore...

- jerry

Sunday, July 8, 2007

Basic Guidelines and Checklist for Website Testing

Hi Friends,
Here is an interesting PPT on good web testing...

http://www.authorstream.com/presentation.aspx?pun=pawan-357-basic-guidelines-and-checklist-for-website-testing-PowerPoint

Enjoy and Learn

Monday, June 25, 2007

The words I like the most, in Tamil

உணர்ந்துகொள்
நீ தோல்வியுற்றது
வாழ்க்கையிலல்ல
வாழ்க்கையை புரிதலில்...

(Translation: Realize this: you failed, not in life, but in understanding life...)

Monday, June 18, 2007

Code to check if a parameter exists in DataTable or not.

code:
On Error Resume Next
val = DataTable("ParamName", dtGlobalSheet)
If Err.Number <> 0 Then
    ' Parameter does not exist
Else
    ' Parameter exists
End If

Here is the code for the Pass/Fail status of a check point.

' Check returns True when the checkpoint passes, False otherwise.
chk_PassFail = Browser(…).Page(…).WebEdit(…).Check(Checkpoint("Check1"))
If chk_PassFail Then
    MsgBox "Check Point passed"
Else
    MsgBox "Check Point failed"
End If

Wednesday, June 13, 2007

story of a LITTLE bird

A little bird was flying south for the winter. It was so cold; the bird froze and fell to the ground in a large field. While it was lying there, a cow came by and dropped some dung on it. As the frozen bird lay there in the pile of cow dung, it began to realize how warm it was. The dung was actually thawing him out! He lay there all warm and happy, and soon began to sing for joy. A passing cat heard the bird singing and came to investigate. Following the sound, the cat discovered the bird under the pile of cow dung, and promptly dug him out and ate him!
Management Lessons:
1) Not everyone who drops shit on you is your enemy.
2) Not everyone who gets you out of shit is your friend.
3) And when you're in deep shit, keep your mouth shut!

That's what they are thinking

1) A Project Manager is a person who thinks nine women can
deliver a baby in one month.
2) A Developer is a person who thinks it will take 18 months
to deliver a baby.
3) An Onsite Coordinator is one who thinks a single woman can
deliver nine babies in one month.
4) A Client is the one who doesn't know why he wants a baby.
5) A Marketing Manager is a person who thinks he can deliver
a baby even if no man and no woman are available.
6) The Resource Optimization Team thinks they don't need a man
or a woman; they'll produce a child with zero resources.
7) The Documentation Team doesn't care whether the
child is delivered; they'll just document nine months.
8) A Quality Auditor is the person who is never happy with the
PROCESS used to produce a baby.
9) A Tester is a person who always tells his wife that this is
not the right baby.

Top 24 replies by programmers when their programs don't work:

24. "It works fine on MY computer"
23. "Who did you login as ?"
22. "It's a feature"
21. "It's WAD (Working As Designed)"
20. "That's weird..."
19. "It's never done that before."
18. "It worked yesterday."
17. "How is that possible?"
16. "It must be a hardware problem."
15. "What did you type in wrong to get it to crash?"
14. "There is something funky in your data."
13. "I haven't touched that module in weeks!"
12. "You must have the wrong version."
11. "It's just some unlucky coincidence."
10. "I can't test everything!"
9. "THIS can't be the source of THAT."
8. "It works, but it's not been tested."
7. "Somebody must have changed my code."
6. "Did you check for a virus on your system?"
5. "Even though it doesn't work, how does it feel?"
4. "You can't use that version on your system."
3. "Why do you want to do it that way?"
2. "Where were you when the program blew up?"
1. "I thought I fixed that."

Why we need reviews.


In an ancient monastery in a faraway place, a new monk arrived to join his brothers in copying books and scrolls in the monastery's scriptorium. He was assigned to work as a rubricator on copies of books that had already been copied by hand.

One day, while working on the monks' Book of Vows, he asked old Father Florian, the Armarius of the Scriptorium, 'Does not the copying by hand of other copies allow for chances of error? How do we know we are not copying the mistakes of someone else? Are the copies ever checked against the original?'

Fr. Florian was taken aback a bit by the obviously logical observation of this youthful monk. 'A very good point, my son. I will take one of the latest copies of the Book of Vows down to the vault and compare it against the original.' Fr. Florian went down to the secured vault and began his verification.

A day passed and the monks began to worry, so they went down looking for the old priest. They were sure something must have happened. As they approached the vault they heard sobbing and wailing... they opened the door and found Fr. Florian crying over the new copy and the original, ancient Book of Vows, both open before him on the table. It was obvious to all that the poor man had been crying his old heart out for a long time.

'What is the problem, Reverend Father???' asked one of the monks.

'Oh, my Lord,' sobbed the priest, 'The word is 'CELEBRATE'!!!'

And this is why we need reviews.