Search For Knowledge
Monday, October 29, 2007
QA Time Estimation based on Dev's Time Estimation
Remember, this will not be very accurate. But truthfully, nothing in estimation is accurate; even for a satellite launch they may not have an exact plan.
If dev estimates 66 days, you can suggest 48 days for QA (4 of those days being buffer), and hopefully the work won't cross that. But still, this estimate is just a lollypop to show, not the real cake to eat…!!
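One way to read the numbers above is QA at roughly two thirds of the dev estimate plus a few buffer days; both the ratio and the buffer here are illustrative assumptions, not a rule from the post:

```python
import math

def qa_estimate(dev_days, ratio=2/3, buffer_days=4):
    """Rough QA estimate derived from the dev estimate.

    The 2/3 ratio and 4-day buffer are hypothetical starting points;
    tune them from your own project history.
    """
    return math.ceil(dev_days * ratio) + buffer_days

print(qa_estimate(66))  # a 66-day dev estimate maps to 48 QA days
```

As the post says, treat the result as a talking point, not a commitment.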
Estimating Testing Projects
This is really easy and good to understand ...
here it is for you all...
Walkthrough on How-To
The online content I have found regarding developing sound estimates for testing projects is wanting in a lot of ways:
# Articles that start off promising and end up with “Software project estimation - the seat-of-the-pants approach”.
# Articles packed with a lot of know-how but absolutely no how-to.
# Articles that tell me how to keep doing what we are already doing.
Current Affairs
The current situation of software project estimation can best be described as CMM Level 1 heroics. The nearest we conventionally come to an estimate is a WAG (Wild Ass Guess). How this has become an acceptable practice in the software industry defies my comprehension!!?$!?!^@?!!! Sometimes people even go to the extent of categorizing this “exercise in futility” as a SWAG (Scientific Wild Ass Guess).
Bottom line: doing a WAG or a SWAG (whatever you call it) is as good as doing no estimation at all.
What is an estimate?
An estimate serves as the master plan for a software project, covering all aspects like costing, staffing and timing. Hence basing it on pure guesswork is a definite NO.
OK, history apart, let's get started……
1. Collectibles to start an estimate
Starting an estimate is back-breaking work. The following elusive documents have to be procured before sitting down to a sensible estimate:
1. Customer Requirements Specification
2. Request for Proposal
3. System Specification / Architecture
4. Software Requirements Specification
I have often heard the complaint, “Owwww, but these don't exist at that point”. The reply is that, as per the SDLC models of software development (Waterfall, Iterative, Jumbo Circus, whatever), these are the primary documents that need to be in place before we estimate for a project. The point of having these documents in hand is to prevent the wild ass from guessing about our project.
2. Approaches to preparing QA estimates for a project
There are a lot of sound practices that people have been using to prepare development estimates, like source lines of code (SLOC), use-case based analysis, function point analysis, object hierarchy trees, etc. In the following section I propose some techniques that can be used to prepare QA estimates for projects.
1. Begin the estimate by conducting a comprehensive study of the system architecture and the scope of work, and by analyzing the complexity of the work.
2. Determine what style of testing the strategy should use (you could choose from many options: use-case based testing, scenario based testing, module hierarchy trees, to name a few). This is a very important step which most QA personnel skip.
3. If you are adopting module based testing, prepare a module hierarchy tree to visualize the inter-dependencies between modules.
4. Analyze and assign a complexity to each node of this tree. Estimate the number of test cases in each module by analyzing the number of functionalities bundled into it.
5. Ascertain, from past experience or analysis, a realistic projection of QA productivity (number of test cases per person-day). This is a metric which varies and WILL NOT FOLLOW ANY PRE-DEFINED ORGANIZATIONAL NORM.
6. Analyze each module to arrive at a preliminary idea of the extent of automation that is possible in it.
7. Estimate the automation strategy: how you will automate the testing, what the automation coverage will be, and what the complexity of developing the automation scripts, if any, will be. (A POC might be required for this at a later stage; that has to go into the estimate. Doing POCs for software test automation is a severely neglected, critical component of delivering QA processes.)
The above 7 steps have to be completed and documented before you can start with the estimate.
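Steps 3 to 5 above can be sketched as a small calculation; the module names, test-case counts, complexity weights, and productivity figure below are all hypothetical placeholders:

```python
import math

# Hypothetical module hierarchy leaves: (module, estimated test cases, complexity weight)
modules = [
    ("login",    30, 1.0),  # simple module
    ("reports",  80, 1.5),  # moderately complex
    ("billing", 120, 2.0),  # complex
]

# QA productivity: test cases per person-day. Measure this from past
# projects -- it varies and will not follow any pre-defined norm.
productivity = 12

weighted_cases = sum(cases * weight for _, cases, weight in modules)
person_days = math.ceil(weighted_cases / productivity)
print(person_days)  # 390 weighted cases / 12 per day -> 33 person-days
```

The point is traceability: every number feeding the estimate comes from the documented tree, not from a guess.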
3. Preparing the estimate
Identify the risks involved. These can range from technology risks to risks introduced by the delivery model that has been adopted, and many other factors. Each identified risk has the potential of throwing your estimate haywire, hence de-risking the project is an important part of estimation. In situations where the number of identified risks is high, it is good practice to prepare 2 separate estimates:
# Conservative estimate: this factors in all the overruns in effort, time and cost that would transpire if the risks materialize.
# Optimistic estimate: this envisions the delivery of the project if the identified risks do not materialize.
The benefit of having 2 versions of the estimate is that they provide a clear-cut picture of how bad things can get when risks materialize, and of the losses incurred if they are left unaddressed. Conservative estimates can be projectional in nature, where the increase in time, effort and cost is shown as a projection in relation to each risk.
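As a sketch, the two versions differ only in whether the overrun attached to each risk is counted; the risk names and day figures below are invented for illustration:

```python
base_days = 40  # optimistic estimate: delivery if no identified risk materializes

# Hypothetical risks and the overrun (in days) each would add if it materialized
risks = {
    "test environment delivered late": 5,
    "unfamiliar delivery model": 8,
    "automation POC fails": 6,
}

optimistic = base_days
conservative = base_days + sum(risks.values())  # factors in every overrun

# A projectional view: cumulative impact as each risk is added
projection = []
running = base_days
for name, overrun in risks.items():
    running += overrun
    projection.append((name, running))
```

The `projection` list is the “projectional” conservative view: it shows how much the schedule grows as each risk is allowed to materialize.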
The process of preparing the estimate starts only once the above steps have been completed. All factors in the estimate have to be traceable to the documentation prepared as part of section 2 of this document. The actual work of the estimate, such as assigning timelines and resource loading, will be guided by the above sections and help you arrive at a sound estimate.
The Golden rule of all estimations “DO NOT DO SOFTWARE PROJECT ESTIMATIONS WITHOUT ALL NECESSARY SUPPORT DOCUMENTATION IN PLACE.”
I have outlined only the measures to curb flawed practices in estimates. These could be incorporated into the existing process (if any) of preparing estimates within your organization. A complete walkthrough would expand the scope of this article exponentially.
With thanks, from: http://www.testinglounge.com/2007/06/12/qa-estimation/
By :Robin Thomas
Wednesday, October 24, 2007
Testing Measurement - a nice high-level article
# What is the purpose of this measurement program?
# What data items are you collecting, and how are you reporting them?
# What is the correlation between the data and the conclusion?
Value addition
Any measurement program can be divided into two parts. The first part is to collect data, and the second is to prepare metrics/charts and analyse them to gain the valuable insight that might help in decision making. Information collected during any measurement program can help in:
# Finding the relation between data points,
# Correlating cause and effect,
# Providing input for future planning.
Normally, any metrics program involves certain steps which are repeated over a period of time. It starts with identifying what to measure. Once the purpose is known, data can be collected and converted into metrics. Based on the analysis of these metrics, appropriate action can be taken; if necessary, the metrics can be refined and the measurement goals adjusted for the better.
The data presented by the testing team, together with its opinion, normally decides whether a product will go to market or not. So it becomes very important for test teams to present data and opinions in such a way that the data looks meaningful to everyone, and decisions can be taken based on it.
Every testing project should be measured for its schedule and the quality requirements of its release. There are lots of charts and metrics that we can use to track progress and measure the quality requirements of the release. We will discuss here some of these charts and the value addition that they bring to our product.
# Defect Finding Rate
This chart gives information on how many defects are found across a given period. This can be tracked on a daily or weekly basis.
# Defect Fixing Rate
This chart gives information on how many defects are being fixed on a daily/weekly basis.
# Defect distribution across components
This chart gives information on how defects are distributed across the various components of the system.
# Defect cause distribution chart
This chart gives information on the cause of defects.
# Closed defect distribution
This chart gives information on how defects with closed status are distributed.
# Test case execution
# Traceability Matrix
# Functional Coverage
# Platform Matrix
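The first two charts above boil down to counting defects per period. A minimal sketch, with an invented defect log, that buckets finding and fixing rates by ISO week:

```python
from collections import Counter
from datetime import date

# Hypothetical defect log: (date found, date fixed or None if still open)
defects = [
    (date(2007, 10, 1), date(2007, 10, 3)),
    (date(2007, 10, 1), None),
    (date(2007, 10, 2), date(2007, 10, 2)),
    (date(2007, 10, 8), None),
]

# Defect finding rate: ISO week number -> defects found that week
finding_rate = Counter(found.isocalendar()[1] for found, _ in defects)

# Defect fixing rate: ISO week number -> defects fixed that week
fixing_rate = Counter(fixed.isocalendar()[1] for _, fixed in defects if fixed)

print(finding_rate[40], fixing_rate[40])  # 3 found, 2 fixed in week 40
```

Plotting the two counters against each other gives the classic convergence picture: a release looks safe when the fixing curve catches up with the finding curve.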
-----------------With thanks from http://testinggeek.com/
Cheers,
-jerry
Browser Tests, Services and Compatibility Test Suites
Worth visiting...
http://www.smashingmagazine.com/2007/10/02/browser-tests-services-and-compatibility-test-suites/
Cheers,
-Jerry
Tuesday, October 23, 2007
Monday, October 22, 2007
A wonderful GUI Checklist from Rathish
All screens should include blue face and white borders
Look and feel of all buttons should be fixed according to windows buttons
“Century” application selected by default on all screens
Hyperlink on century logo, help, sign out
Refreshing the page
Hyperlink on welcome user information
Links to the two tabs-User management, Reports
On mouse over of the tab drop down should occur
Equal borders on the right pane
Alignment of the Frame
Equal spacing between search tab and table header
Color consistency between left margin and table header
Color gradient on the frame
Font consistency
Representation of hierarchy with folders
Change of icon in the tree
Checking for GIF
Vertical scroll bar on the left pane
No scrolls on right pane
Scroll bars should use the XP color scheme
Frame consistency in showing the details
Selection of links on mouse over
Link reflection (tree expansion)
Visited link color
Unvisited link color
Links on tree list allows the user to traverse to the particular page
Navigation link with records on all screens
10 records are listed in a page
Display of total number of records at the bottom of the page
No display of records on empty listing page
“First” and “Prev” are disabled if the user is on the first page
“Next” and “Last” are disabled if the user is on the last page
Basic search tab enabled by default
Criteria combo enabled on loading
Condition combo disabled on loading
Size of the combo
“Add Condition” with hyperlink on advanced search tab
In advanced search, “match any” selected by default
Delete button is added for each row newly inserted in advanced search
No text box for “assign to” search option
All icons are placed at top right corner
All icons should be vertically centered
Pop up messages for all icons
Check for button name
Hyperlink on tabs with user privilege
Search tab visibility based on privileges
Title - population of hierarchy
Equal spacing between Assign to, Start and Target date
Check box on the table header
Hyperlinks on each hierarchy listing
All listing screens are read only
Check box for all records listed
Proper alignment of text in the listing page
All records listed in alternative color
Display of description value with tool tips
An ellipsis is appended for values whose length is greater than the specified value
Text area should have hairline border
Gray scroll bar in the description
Tool tip range
Tool tip on mouse over
Display of dates in dd-mon-yy format on all listing screens
Focus set on specific fields - on loading
Calendar popup
Text box for display of dates is read only
Space consistency between description field and buttons while creating hierarchy
Hyperlink on buttons on creation and editing of hierarchy
Inner borders for create / edit hierarchy
Success / confirmation messages
Alert / failure messages
Lowercase, upper case and check for sentences in all messages
After message no tab key functionality should be allowed
Attachment symbol on the header
Attachment symbol inclusion for each test case
Attachment symbol width consistency
Assign and deassign buttons are placed next to target date field
No link to view tab
Links on the test cases allows the user to move to view tab
All the fields in view tab are non editable
Order transaction type selected by default
Color consistency between attachment button sub menus and screen controls
Hyperlink on attachment tab
Attachment tab with table structure
Check box on attachment table
Hyper link on all buttons at the bottom of the attachment tab
Reset to clear values
Moving back and forth in explorer window
Back button to traverse
All screens under user management should have frame borders
All loading screens should be frame centered
Double borders not allowed in privileges
Application name can be darker in assign privileges
“Assign to” combo is not present in project and cycle management
Table headers should have 5 pixel spacing
Hyperlink on the value in the attachment type
Listing tabs are enabled always
Adjustable divider is required
Navigation as link as blue/underline non-link gray
Movement of the screen (Click and drag)
Horizontal line in the message window
Text Alignments
Window Alignment/positioning
Attachment file - size constraint
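Several pagination items in the checklist (“First”/“Prev” disabled on the first page, “Next”/“Last” disabled on the last, 10 records per page, nothing shown on an empty listing) reduce to state logic that any UI check can assert against. A framework-independent sketch; the function and key names are my own:

```python
import math

PAGE_SIZE = 10  # 10 records are listed per page

def pagination_state(total_records, current_page):
    """Expected enabled/disabled state of the navigation links (1-based pages)."""
    total_pages = max(1, math.ceil(total_records / PAGE_SIZE))
    return {
        "first_prev_enabled": current_page > 1,           # disabled on the first page
        "next_last_enabled": current_page < total_pages,  # disabled on the last page
        "shows_records": total_records > 0,               # empty listing shows nothing
    }
```

A GUI test (manual or automated) can compare what the screen actually shows against this expected state for a few boundary cases: page 1, the last page, and an empty listing.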
Friday, October 19, 2007
30 Usability Issues to be aware of
Hope this will be very helpful in designing a good web page and in testing too...
http://www.smashingmagazine.com/2007/10/09/30-usability-issues-to-be-aware-of/
The next one is about the same web usability topic, but covers some nightmares to steer clear of:
http://www.smashingmagazine.com/2007/09/27/10-usability-nightmares-you-should-be-aware-of/
Cheers,
-jerry
Friday, October 12, 2007
The final question to KAREN N. JOHNSON by QA Zone
KNJ- Curiosity, I can't stress this enough! Testers need to be curious people, constantly wondering things such as “what if I do this?” or “what if fill-in-the-blank here?”. Technical skills can be taught if someone has the aptitude and motivation, but behind all good testers there is a burning curiosity. I think this is why testers enjoy TV shows such as the CSI series. We like to solve mysteries; we like the investigations, the fact-finding and the details. Coincidentally, I think having a curious mind keeps us young and engaged with life as well.
Five important things to consider before the Test Plan
1. Your Project.
2. Your Resources.
3. Your Budget.
4. Time Available.
Keep all these things in mind and prepare the Test Plan.
Don't write the test plan with a mindset of satisfying your customer or of doing 100% testing.
That's not possible. So work with what is possible..
-jerry .. :-)
Friday, October 5, 2007
Good explanation on test-driven development
I sometimes use this little analogy to help explain it: When I moved into my first house, my girlfriend at the time (who was a total cow, I should hasten to add - not that I'm bitter or anything) and I went shopping for kitchenware.
Our method was simply to sit down and draw up a list of “things that a kitchen should have” - like crockery, cutlery, mugs, glasses, scales, an egg whisk, bowls, and so on. We spent hours in IKEA, British Home Stores and various other purveyors of culinary paraphernalia, and then we dragged everything back to the house and unpacked it into our new kitchen.
When we'd finished, we sat at the breakfast bar with a cup of tea and surveyed our handiwork. It wasn't until the next day, when I tried to make a mushroom omelette, that I realised we'd forgotten to buy a frying pan. And when my girlfriend tried to make herself a Jack Daniels & Coke on ice, we realised we'd forgotten to buy any ice trays for the freezer. Or suitably sized glasses for a whiskey and coke, come to that. So I had cornflakes for breakfast, and in the evening she enjoyed a warm whiskey and coke served in a small wine glass.
What we should have done, of course, was make a list of examples of things we'd be using our kitchen to do, and then figure out what we'd need in our kitchen in order to successfully execute each of those scenarios. That way, I would most certainly have remembered the frying pan, and Sarah would have had a nice, cold drink out of a more appropriate glass.
TDD is specification by example. Each test case describes a way in which the end product will be used - whether that end product is a kitchen, a software application, or just a single method in a single class in a software application. It's not really a testing discipline at all. It's a design discipline. I think maybe it's the “t” word that confuses people, though. Some teams write seven billion lines of code, then write a handful of unit tests to cover it, and then come to me and say “we're doing test-driven development”. No. Not if you write the tests after you've written the code, you're not. In TDD, tests drive development - the clue's in the name. You only write code needed to pass tests. So your tests are a specification for the code you need to write.
The fact that afterwards you end up with a suite of executable regression tests is a positive side effect of doing TDD. It is not the primary goal of TDD, though.
The primary goal of TDD is to use tests to drive the design and implementation of the software.And that, my fine feathered friends, is test-driven development.
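The kitchen analogy maps directly onto code: the test below is the “scenario list”, written first, and the class contains only what that test forced into existence. A minimal sketch using Python's unittest; the Kitchen class and item names are invented for illustration:

```python
import unittest

class Kitchen:
    """Contains only what the test below demanded -- nothing speculative."""
    def __init__(self):
        self.items = set()

    def buy(self, item):
        self.items.add(item)

    def can_make_omelette(self):
        return "frying pan" in self.items

class TestKitchenScenarios(unittest.TestCase):
    # Written BEFORE the code above existed: it specifies, by example,
    # what the kitchen must support.
    def test_omelette_needs_a_frying_pan(self):
        kitchen = Kitchen()
        self.assertFalse(kitchen.can_make_omelette())
        kitchen.buy("frying pan")
        self.assertTrue(kitchen.can_make_omelette())
```

Run it with `python -m unittest`. The point is the order of events: the test existed first and failed, and `can_make_omelette` was written only to make it pass.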
-Thanks to Jason Gorman....wonderfully explained.. :-)