Friday, October 31, 2008

Testing Methodologies

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance. Acceptance testing, which is a form of black box testing, gives the client the opportunity to verify the system functionality and usability prior to the system being moved to production. The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team. The Test Team will work with the client to develop the acceptance criteria.

Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Automated Testing
Software testing that uses a variety of tools to automate the testing process, reducing the need for a person to execute tests manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.

Beta Testing
Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.

Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

End-to-End Testing
Similar to system testing, the 'macro' end of the test scale involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Functional Testing
Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.

Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.

Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs. Testing full, partial, or upgrade install/uninstall processes. The installation test for a release will be conducted with the objective of demonstrating production readiness. This test is conducted after the application has been migrated to the client's site. It will encompass the inventory of configuration items (performed by the application's System Administration) and evaluation of data readiness, as well as dynamic tests focused on basic system functionality. When necessary, a sanity test will be performed following the installation testing.

Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. Integration testing is often completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)

Load Testing
Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Parallel/Audit Testing
Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

Pilot Testing
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Often is considered a Move-to-Production activity for ERP releases or a beta test for commercial products. Typically involves many users, is conducted over a short period of time and is tightly controlled. (see beta testing)

Recovery/Error Testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.

Sanity Testing
Sanity testing will be performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It will normally include a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Security Testing
Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)

Stress Testing
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off the shelf) system or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.

Unit Testing
Unit Testing is the first level of dynamic testing and is first the responsibility of the developers and then of the testers. Unit testing is considered complete when the expected test results are achieved or any differences are explainable and acceptable.
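A minimal sketch of what a unit test can look like, here in Python with the standard unittest module; the add function is a hypothetical stand-in for a real unit under test:

```python
import unittest

# Hypothetical unit under test; in practice this would be a function or class
# from the module being developed.
def add(a: int, b: int) -> int:
    return a + b

class AddTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```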

Usability Testing
Testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

Languages

• It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. (Nathaniel S Borenstein)

• There are only two kinds of programming languages: those people always bitch about and those nobody uses. (Bjarne Stroustrup)

• Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration. (Stan Kelly-Bootle)

• Voodoo Programming: Things programmers do that they know shouldn't work but they try anyway, and which sometimes actually work, such as recompiling everything. (Karl Lehenbauer)

• Please don't fall into the trap of believing that I am terribly dogmatical about [the goto statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline! (Edsger Dijkstra)

• Computer language design is just like a stroll in the park. Jurassic Park, that is. (Larry Wall)

• XML is not a language in the sense of a programming language any more than sketches on a napkin are a language. (Charles Simonyi)

• Using TSO is like kicking a dead whale down the beach. (Stephen C Johnson)

• The object-oriented model makes it easy to build up programs by accretion. What this often means, in practice, is that it provides a structured way to write spaghetti code. (Paul Graham)

• Reusing pieces of code is like picking off sentences from other people's stories and trying to make a magazine article. (Bob Frankston)

• [The BLINK tag in HTML] was a joke, okay? If we thought it would actually be used, we wouldn't have written it! (Marc Andreessen)

• I had a running compiler and nobody would touch it. They told me computers could only do arithmetic. (Rear Admiral Grace Hopper)

• If you don't think carefully, you might think that programming is just typing statements in a programming language. (Ward Cunningham)

• A language that doesn't have everything is actually easier to program in than some that do. (Dennis M Ritchie)

• Projects promoting programming in natural language are intrinsically doomed to fail. (Edsger Dijkstra)

• Pointers are like jumps, leading wildly from one part of the data structure to another. Their introduction into high-level languages has been a step backwards from which we may never recover. (Charles Hoare)

• The string is a stark data structure and everywhere it is passed there is duplication. It is a perfect vehicle for hiding information. (Alan J Perlis)

• First learn computer science and all the theory. Next develop a programming style. Then forget all that and just hack. (George Carrette)

• I fear the new object-oriented systems may suffer the fate of LISP, in that they can do many things, but the complexity of the class hierarchies may cause them to collapse under their own weight. (Bill Joy)

• If we wish to count lines of code, we should not regard them as lines produced but as lines spent. (Edsger Dijkstra)

• You can either have software quality or you can have pointer arithmetic, but you cannot have both at the same time. (Bertrand Meyer)

• Syntax, my lad. It has been restored to the highest place in the republic. (John Steinbeck)

• Are you quite sure that all those bells and whistles, all those wonderful facilities of your so called powerful programming languages, belong to the solution set rather than the problem set? (Edsger Dijkstra)

• Thou shalt not follow the NULL pointer, for chaos and madness await thee at its end. (Henry Spencer)

• I think conventional languages are for the birds. They're just extensions of the von Neumann computer, and they keep our noses in the dirt of dealing with individual words and computing addresses, and doing all kinds of silly things like that, things that we've picked up from programming for computers; we've built them into programming languages; we've built them into Fortran; we've built them in PL/1; we've built them into almost every language. (John Backus)

• Get and set methods are evil. (Allen Holub)

• Writing code has a place in the human hierarchy worth somewhere above grave robbing and beneath managing. (Gerald Weinberg)

• Part of the reason so many companies continue to develop software using variations of waterfall is the misconception that the analysis phase of waterfall completes the design and the rest of the process is just non-creative execution of programming skills. (Steven Gordon)

• If the programmer can simulate a construct faster than a compiler can implement the construct itself, then the compiler writer has blown it badly. (Guy Steele)

• Classes struggle, some classes triumph, others are eliminated. (Mao Zedong)

• If buffer overflows are ever controlled, it won't be due to mere crashes, but due to their making systems vulnerable to hackers. Software crashes due to mere incompetence apparently don't raise any eyebrows, because no one wants to fault the incompetent programmer and his incompetent boss. (Henry Baker)

• There is not a fiercer hell than the failure in a great object. (John Keats)

• Objects can be classified scientifically into three major categories: those that don't work, those that break down and those that get lost. (Russell Baker)

• Memory is like an orgasm. It's a lot better if you don't have to fake it. (Seymour Cray)

Thursday, October 30, 2008

Bug Quotes

• I never make stupid mistakes. Only very, very clever ones. ("Dr Who")

• One: demonstrations always crash. And two: the probability of them crashing goes up exponentially with the number of people watching. (Steve Jobs)

• The service life of a cobbled up fix is inversely proportional to the time required to slap it together. (Nick Lappos)

• A crash is when your competitor's program dies. When your program dies, it is an idiosyncrasy. Frequently, crashes are followed with a message like ID 02. ID is an abbreviation for idiosyncrasy and the number that follows indicates how many more months of testing the product should have had. (Guy Kawasaki)

• Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? (Brian Kernighan)

• When you say: "I wrote a program that crashed Windows", people just stare at you blankly and say: "Hey, I got those with the system -- for free." (Linus Torvalds)

• In some cases, all it requires is that you rationally point out that there is a problem. In others, all you can do is turn the other cheek. At the far end of the spectrum are those for whom the only appropriate response is to carve out their still-beating heart and force them to eat it. (Marc Carlson)

• The wages of sin is debugging. (Ron Jeffries)

• All sorts of computer errors are now turning up. You'd be surprised to know the number of doctors who claim they are treating pregnant men. (Isaac Asimov)

• Every program starts off with bugs. Many programs end up with bugs as well. There are two corollaries to this: first, you must test all your programs straight away. And second, there's no point in losing your temper every time they don't work. (Z80 Users Manual)

• Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. (Samuel Beckett)

• There are no significant bugs in our released software that any significant number of users want fixed. (Bill Gates)

• The honest truth is that having a lot of people staring at the code does not find the really nasty bugs. The really nasty bugs are found by a couple of really smart people who just kill themselves. (Bill Joy)

• We didn't have to replicate the problem. We understood it. (Linus Torvalds)

• It's harder than you might think to squander millions of dollars, but a flawed software development process is a tool well suited to the job. (Alan Cooper)

• Beware of bugs in the above code; I have only proved it correct, not tried it. (Donald Knuth)

• It has been discovered that C++ provides a remarkable facility for concealing the trivial details of a program -- such as where its bugs are. (David Keppel)

• There has never been an unexpectedly short debugging period in the history of computers. (Steven Levy)

• Act in haste and repent at leisure: Code too soon and debug forever. (Raymond Kennington)

• Microsoft programs are generally bug-free. If you visit the Microsoft hotline, you'll literally have to wait weeks if not months until someone calls in with a bug in one of our programs. 99.99% of calls turn out to be user mistakes. (Benedikt Heinen)

• The only man who never makes mistakes is the man who never does anything. (Theodore Roosevelt)

• Failure is the opportunity to begin again more intelligently. (Henry Ford)

• In his errors a man is true to type. Observe the errors and you will know the man. (Kong Fu Zi aka Confucius)

• The invalid assumption that correlation implies cause is probably among the two or three most serious and common errors of human reasoning. (Stephen Jay Gould)

• People get annoyed when you try to debug them. (Larry Wall)

• It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong. (Richard Feynman)

• The sum of the bugginess in a system can be far greater than the bugs in the individual products that comprise it, and often exponentially. (Chad Dickerson)

• The road to truth is long, and lined the entire way with annoying bästards. (Alexander Jablokov)

• The best-laid schemes o’ mice an’ men Gang aft a-gley. (Robbie Burns)

• I have not failed. I've just found 10,000 ways that won't work. (Thomas Edison)

• About 90 percent of the downtime comes from, at most, 10 percent of the defects. (Barry Boehm)

• Of all my programming bugs, 80% are syntax errors. Of the remaining 20%, 80% are trivial logical errors. Of the remaining 4%, 80% are pointer errors. And the remaining 0.8% are hard. (Marc Donner)

• The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at and repair. (Douglas Adams)

• Truth is a good dog; but always beware of barking too close to the heels of an error, lest you get your brains kicked out. (Samuel T Coleridge)

• If people never did silly things, nothing intelligent would ever get done. (Ludwig Wittgenstein)

• I am one of the culprits who created the problem. I used to write those programs back in the '60s and '70s, and was so proud of the fact that I was able to squeeze a few elements of space by not having to put '19' before the year. (Alan Greenspan)

• Every big computing disaster has come from taking too many ideas and putting them in one place. (Gordon Bell)

• Product quality has almost nothing to do with defects or their lack. (Tom DeMarco)

• I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives. (Leo Tolstoy)

• One should expect that the expected can be prevented, but the unexpected should have been expected. (Norman Augustine)

• When debugging, novices insert corrective code; experts remove defective code. (Richard Pattis)

• It's not the prevention of bugs but the recovery -- the ability to gracefully exterminate them -- that counts. (Victoria Livschitz)

• In a software project team of 10, there are probably 3 people who produce enough defects to make them net negative producers. (Gordon Schulmeyer)

• When trouble is solved before it forms, who calls that clever? (Sun Tzu)

• I never guess. It is a shocking habit -- destructive to the logical faculty. ("Sherlock Holmes")

• Sometimes it pays to stay in bed on Monday, rather than spending the rest of the week debugging Monday's code. (Dan Salomon)

• The gods too are fond of a joke. (Aristotle)

Friday, October 17, 2008

How to write an effective test report

The test report is the primary work deliverable from the testing phase. It disseminates the information from the test execution phase that project managers and stakeholders need to make further decisions. Anomalies and their final disposition are recorded in this report so that readers know the quality status of the product under test.

This tip will be a guideline for testers to identify the vital information that needs to be included in the report. At a bare minimum, the test report should contain the test summary identifier, objective, summary of testing activity, variances, testing activities and last but not least, the important piece of information -- defects.

Test summary identifier -- An identifier needs to be assigned to each round of testing. In other words, each round of testing must have a unique identifier to ensure readability and traceability.

Objective -- This is the objective of each round of testing. Does this round of testing cater for component testing, system testing, regression testing, integration testing or others?

Summary -- This section includes the summary of testing activity in general. Information detailed here includes the number of test cases executed, the scope of testing, the number of defects found with severity classification, and test environments set up and used for the testing.

Variances -- If there is a discrepancy between the completed product and the requirements, use this section to highlight it. Variances can relate to the plan, the procedures, and the test items.

Activity -- Summarize all major testing milestones such as Test Plan, Test Case Development, Test Execution and Test Reporting in this section. Information on resource consumption, total staffing level and total elapsed time should be reported as well.

Defects -- This is the most essential section in the report. This is where you report defect information such as the number of defects found, defect severity classification, defect density, etc. Test metrics are important to complement this section. Below is a list of metrics that can be used; a small defect density example follows the list:

* Test defect by test phase -- Analyze test defects by test phase
* Test defect by severity level -- Analyze test defects by severity level
* Accepted vs. rejected test defects
* Defect density -- The number of defects per KLOC or test cases
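For instance, the defect density metric can be computed directly from the defect log and a size measure; a minimal sketch with invented figures:

```python
# Defect density = defects found / size of the code under test (per KLOC).
# The figures below are invented purely to show the arithmetic.
defects_found = 42
lines_of_code = 28_000

defects_per_kloc = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defects_per_kloc:.2f} defects per KLOC")

# The same idea per test case, if test-case count is the preferred size measure.
test_cases_executed = 350
print(f"Defect density: {defects_found / test_cases_executed:.3f} defects per test case")
```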

In general, a well-written test report matters because it lets readers draw correct conclusions from it. The report will also mold your readers' perception of you: the better your report, the better your reputation as a tester.

Black Box and White Box

Black box Testing:
Functional testing addresses the overall behavior of the program by testing transaction flows, input validation and functional completeness; this is known as black box testing. There are four important techniques for deriving a minimal set of test cases and the input data for them.

Equivalence partitioning:
An equivalence class is a subset of a larger set of input data whose members are expected to be treated the same way by the program; testing one representative value from each class replaces exhaustive testing of every value in the larger set. For example, a payroll program that validates professional tax deduction limits within Rs. 100 to Rs. 400 would have three equivalence partitions.

Less than Rs.100/- (Invalid Class)
Between Rs.100 to Rs.400/- (Valid Class)
Greater than Rs.400/- (Invalid Class)

If one test case from an equivalence class results in an error, all other test cases in that class would be expected to produce the same error. The tester therefore needs to write only a few test cases, saving time and resources. A minimal sketch appears below.
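A minimal sketch in Python, assuming a hypothetical is_valid_tax_deduction function that implements the Rs. 100 to Rs. 400 rule above, with one representative value per class:

```python
# Hypothetical validator for the payroll example above; the name and the
# inclusive Rs. 100 - Rs. 400 rule are assumptions made for illustration.
def is_valid_tax_deduction(amount_rs: int) -> bool:
    return 100 <= amount_rs <= 400

# One representative value per equivalence class is enough.
representatives = {
    "invalid, below range": (50, False),
    "valid, within range": (250, True),
    "invalid, above range": (900, False),
}

for name, (value, expected) in representatives.items():
    actual = is_valid_tax_deduction(value)
    assert actual == expected, f"{name}: got {actual}, expected {expected}"
    print(f"{name}: Rs.{value} -> {actual}")
```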

Boundary Value Analysis:
Experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. Boundary conditions are the situations directly on, just above, and just below the edges of input and output equivalence classes.

This technique consists of generating test cases and data that focus on the input and output boundaries of a given function. In the above example of professional tax limits, boundary value analysis would derive test cases for (see the sketch after the list):

Low boundary plus or minus one (Rs.99/- and Rs.101/-)
On the boundary (Rs.100/- and Rs.400/-)
Upper boundary plus or minus one (Rs.399 and Rs.401/-)
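Continuing the same hypothetical validator, the boundary cases and their expected results can be listed directly:

```python
# Boundary value cases for the hypothetical Rs. 100 - Rs. 400 validator.
def is_valid_tax_deduction(amount_rs: int) -> bool:
    return 100 <= amount_rs <= 400

boundary_cases = [
    (99, False),   # just below the lower boundary
    (100, True),   # on the lower boundary
    (101, True),   # just above the lower boundary
    (399, True),   # just below the upper boundary
    (400, True),   # on the upper boundary
    (401, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_tax_deduction(value) == expected, f"Rs.{value} failed"
print("All boundary cases behave as expected.")
```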

Error Guessing:
This is based on the theory that test cases can be developed from the intuition and experience of the test engineer. Some people adapt very naturally to program testing; we can say these people have a knack for 'smelling out' errors without following any particular methodology.
This error-guessing quality enables a tester to perform testing that is more efficient and results-oriented than a test case alone could guide.
It is difficult to give a procedure for the error guessing technique, since it is largely an intuitive and ad hoc process. For example, where one of the inputs is a date, the test engineer may try February 29, 2000 or 9/9/99.

Orthogonal Array:
This technique is particularly useful for finding errors associated with region faults, an error category associated with faulty logic within a software component.

For example, suppose there are three parameters (A, B and C), each of which can take one of three possible values. Exhaustive testing would require 3 x 3 x 3 = 27 test cases. But because of the way the program works, it is more likely that a fault will depend on the values of only two parameters. In that case the fault might occur for each of these three test cases:
1. A=1, B=1, C=1
2. A=1, B=1, C=2
3. A=1, B=1, C=3

Since the value of C seems to be irrelevant to the occurrence of this particular fault, any one of the three test cases will suffice. Based on this assumption, the test engineer may derive only nine test cases, which between them show all possible pairs of values across the three variables. The array is orthogonal because, for each pair of parameters, every combination of their values occurs exactly once.

That is, all possible pairwise combinations between parameters A & B, B & C, and C & A are covered. Since we are thinking in terms of pairs, we say this array has strength 2. It does not have strength 3, because not every three-way combination occurs (A=1, B=2, C=3, for example, does not appear), but it covers the pairwise possibilities, which is what we are concerned about. The sketch below lists one such nine-case array and verifies its pairwise coverage.
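Here is a short Python sketch of one such array; the specific nine rows are a standard strength-2 construction chosen for illustration, and the check confirms that every pair of parameter values appears at least once:

```python
from itertools import combinations, product

# A strength-2 (pairwise) array for three parameters A, B, C, each taking
# values 1..3: nine cases instead of the full 27.
cases = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# For every pair of parameter positions, all 9 value combinations must appear.
for i, j in combinations(range(3), 2):
    seen = {(case[i], case[j]) for case in cases}
    assert seen == set(product((1, 2, 3), repeat=2)), (i, j)

print("All pairwise combinations of A, B and C are covered by 9 cases.")
```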

White box Testing:
Structural testing includes path testing, code coverage testing and analysis, logic testing, nested loop testing and many similar techniques; this is known as white box testing.

1. Statement Coverage: Execute every statement at least once.
2. Decision Coverage: Execute each decision outcome (each branch direction) at least once.
3. Condition Coverage: Execute each individual condition with all possible outcomes at least once.
4. Decision/Condition Coverage: Execute each condition outcome and each decision outcome at least once; treat loops as two-way decisions, exercising each loop zero times and at least once.
5. Multiple Condition Coverage: Execute all possible combinations of condition outcomes within each decision, and invoke each point of entry at least once.

A tester would choose a combination of the above techniques appropriate for the application and the available time frame; trying to apply every level in full detail can produce more data than can be usefully analyzed. The sketch below illustrates the difference between the coverage levels on a small example.
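A minimal Python sketch, using a made-up approve function with one compound decision, shows how the levels call for different sets of tests:

```python
# A made-up function with one compound decision, used only to illustrate the
# coverage levels listed above.
def approve(amount: float, is_member: bool) -> bool:
    if amount < 1000 or is_member:   # one decision made of two conditions
        return True
    return False

# Statement / decision coverage: the decision must go both ways so that both
# return statements execute.
tests_decision = [(500, False), (2000, False)]

# Condition coverage: across the set, each condition takes both truth values:
#   amount < 1000: True in (500, False), False in (2000, True)
#   is_member:     False in (500, False), True in (2000, True)
# Note that both tests make the overall decision True, so condition coverage
# alone does not guarantee decision coverage.
tests_condition = [(500, False), (2000, True)]

# Multiple condition coverage: all four combinations of the two conditions.
tests_multiple = [(500, True), (500, False), (2000, True), (2000, False)]

for amount, member in tests_multiple:
    print(amount, member, "->", approve(amount, member))
```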

Question about Automation

1. What automating testing tools are you familiar with?
WinRunner, LoadRunner, QTP, Silk Performer, TestDirector, Rational Robot, QARun.

2. How did you use automating testing tools in your job?
1. For regression testing
2. Criteria to decide the condition of a particular build

3. Describe some problem that you had with automating testing tool.
WinRunner has trouble identifying third-party controls such as Infragistics controls.

4. How do you plan test automation?
1. Prepare the automation Test plan
2. Identify the scenario
3. Record the scenario
4. Enhance the scripts by inserting check points and Conditional Loops
5. Incorporate error handlers
6. Debug the script
7. Fix the issue
8. Rerun the script and report the result.

5. Can test automation improve test effectiveness?
Yes. Automating a test makes the test process:
1. Fast
2. Reliable
3. Repeatable
4. Programmable
5. Reusable
6. Comprehensive

6. What is data - driven automation?
Testing the functionality with more test cases becomes laborious as the functionality grows. With data-driven testing you execute the test once against multiple sets of data (test cases) and can see which data sets passed and which failed. This feature is available in WinRunner as data-driven tests, where the data can be taken from an Excel sheet or a text file.
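The same idea in a tool-neutral sketch (Python with pytest's parametrize rather than WinRunner; the login function and its expected results are invented for illustration):

```python
import pytest

# Hypothetical function under test; in a real project this would drive the
# application through a GUI or API layer.
def login(username: str, password: str) -> bool:
    return username == "admin" and password == "secret"

# This table plays the role of the Excel sheet mentioned above; each row is
# one data-driven iteration of the same test.
LOGIN_DATA = [
    ("admin", "secret", True),
    ("admin", "wrong", False),
    ("", "", False),
]

@pytest.mark.parametrize("username,password,expected", LOGIN_DATA)
def test_login_data_driven(username, password, expected):
    assert login(username, password) == expected
```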

7. What are the main attributes of test automation?
Software test automation attributes:
Maintainability - the effort needed to update the test automation suites for each new release
Reliability - the accuracy and repeatability of the test automation
Flexibility - the ease of working with all the different kinds of automation test ware
Efficiency - the total cost related to the effort needed for the automation
Portability - the ability of the automated test to run on different environments
Robustness - the effectiveness of automation on an unstable or rapidly changing system
Usability - the extent to which automation can be used by different types of users

8. Does automation replace manual testing?
There can be some functionality that cannot be tested with an automated tool, so we may have to test it manually; therefore manual testing can never be replaced. (We can write scripts for negative testing as well, but it is a hectic task.) When we talk about the real environment, we do negative testing manually.

9. How will you choose a tool for test automation?
Choosing a tool depends on many things:
1. The application to be tested
2. The test environment
3. Scope and limitations of the tool
4. Features of the tool
5. Cost of the tool
6. Whether the tool is compatible with your application, meaning the tool should be able to interact with your application
7. Ease of use

10. How you will evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could be beneficial for our project. Additional new features and enhancements of existing features also help.

11. What are main benefits of test automation?
Fast, reliable, comprehensive, reusable.

12. What could go wrong with test automation?
1. A poor choice of automation tool for a given technology
2. The wrong set of tests automated

13. How you will describe testing activities?
Testing activities start from the elaboration phase. The various testing activities are: preparing the test plan, preparing test cases, executing the test cases, logging the bugs, validating the bugs and taking appropriate action on them, and automating the test cases.

14. What testing activities you may want to automate?
1. Automate all the high-priority test cases that need to be executed as part of regression testing for each build cycle.

15. Describe common problems of test automation.
The common problems are:
1. Maintenance of old scripts when there is a feature change or enhancement
2. A change in the technology of the application will affect the old scripts

16. What types of scripting techniques for test automation do you know?
5 types of scripting techniques:
Linear
Structured
Shared
Data Driven
Keyword Driven

17. What are principles of good testing scripts for automation?
1. Proper coding standards
2. A standard format for defining functions, exception handlers, etc.
3. Comments for functions
4. Proper error-handling mechanisms
5. Appropriate synchronisation techniques

18. What tools are available for support of testing during software development life cycle?
Testing tools for regression and load/stress testing, such as QTP, LoadRunner, Rational Robot, WinRunner, Silk, TestComplete and Astra, are available in the market. For defect tracking, Bugzilla and Test Runner are available.

19. Can the activities of test case design be automated?
As I understand it, test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the process of recording the test results into a spreadsheet can be automated.

20. What are the limitations of automating software testing?
Hard-to-create conditions such as out-of-memory situations, invalid input/reply, and corrupt registry entries make applications behave poorly, and existing automated tools can't force these conditions; they simply test your application in a normal environment.

21. What skills needed to be a good test automator?
1. Good programming logic
2. Analytical skills
3. A pessimistic nature

22. How to find that tools work well with your existing system?
1. Discuss with the support officials
2. Download the trial version of the tool and evaluate
3. Get suggestions from people who are working with the tool

23. Describe some problem that you had with automating testing tool.
1. The inability of WinRunner to identify third-party controls such as Infragistics controls
2. A change in the location of a table object causing an "object not found" error
3. The inability of WinRunner to execute the same script against multiple languages

24. What are the main attributes of test automation?
Maintainability, Reliability, Flexibility, Efficiency, Portability, Robustness, and Usability - these are the main attributes in test automation.

25. What testing activities you may want to automate in a project?
Testing tools can be used for :
Sanity tests (which are repeated on every build),
stress/load tests (where you simulate a large number of users, which is impossible to do manually), and
regression tests (which are done after every code change).

Wednesday, October 15, 2008

Risk-Based Testing

How to Conduct Heuristic Risk Analysis
By James Bach

Summary:
Testing is motivated by risk. If you accept this premise, you might well wonder how the term "risk-based testing" is not merely redundant. Isn’t all testing risk-based?

This is risk-based testing:

1. Make a prioritized list of risks.
2. Perform testing that explores each risk.
3. As risks evaporate and new ones emerge, adjust your test effort to stay focused on the current crop.

Any questions? Well, now that you know what risk-based testing is, I can devote the rest of the article to explaining why you might want to do it, and how to do it well.

Why Do Risk-Based Testing?

As a tester, there are certain things you must do. Those things vary depending on the kind of project you’re on, your industry niche, and so on. But no matter what else you do, your job includes finding important problems in the product. Risk is a problem that might happen. The magnitude of a risk is a joint function of the likelihood and impact of the problem—the more likely the problem is to happen, and the more impact it will have if it happens, the higher the risk associated with that problem. Thus, testing is motivated by risk. If you accept this premise, you might well wonder how the term "risk-based testing" is not merely redundant. Isn’t all testing risk-based?
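As a hedged illustration of the "prioritized list of risks" step, one simple way to rank risks is to score likelihood and impact and sort by their product; the scale and the sample risks below are invented, not taken from the article:

```python
# Each risk gets a 1-5 likelihood and 1-5 impact score; magnitude is their
# product. The entries are invented examples for illustration only.
risks = [
    {"risk": "report totals are wrong", "likelihood": 3, "impact": 5},
    {"risk": "install fails on older OS", "likelihood": 2, "impact": 3},
    {"risk": "clipart thumbnails render slowly", "likelihood": 4, "impact": 1},
]

for r in risks:
    r["magnitude"] = r["likelihood"] * r["impact"]

# Highest-magnitude risks first: this is the prioritized list to test against.
for r in sorted(risks, key=lambda r: r["magnitude"], reverse=True):
    print(f"{r['magnitude']:>2}  {r['risk']}")
```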

To answer that, look at food. We all have to eat to live. But it would seem odd to say that we do "food-based living." Under normal circumstances, we don’t think of ourselves as living from meal to meal. Many of us don’t keep records of the food we eat, or carefully associate our food with our daily activities. However, when we are prone to eat too much, or we suffer food allergies, or when we are in danger of running out of food, then we may well plan our lives explicitly around our next meal. It is the same with risk and testing.

Just because testing is motivated by risk does not mean that explicit accounting of risks is required in order to organize a test process. Standard approaches to testing are implicitly designed to address risks. You may manage those risks just fine by organizing the tests around functions, requirements, structural components, or even a set of predefined tests that never change. This is especially true if the risks you face are already well understood or the total risk is not too high.

If you want higher confidence that you are testing the right things at the right time, risk-based testing can help. It focuses and justifies test effort in terms of the mission of testing itself. Use it when other methods of organizing your effort demand more time or resources than you can afford.

If you are responsible for testing a product where the impact of failure is extremely high, you might want to use a rigorous form of risk analysis. Such methods apply statistical models and/or comprehensively analyze hazards and failure modes. I’ve never been on a project where we felt the cost of rigorous analysis was justified, so all I know about it is what I’ve read. One well-written and accessible book on this subject is Safety-Critical Computer Systems by Neil Storey. There is also a technique of statistically justified testing taught by John Musa in his book Software Reliability Engineering.

There is another sort of risk analysis about which relatively little has been written. This kind of analysis is always available to you, no calculator required. I call it heuristic risk analysis.

Heuristic Analysis

A heuristic method for finding a solution is a useful method that doesn’t always work. This term goes back to Greek philosophers, but George Polya introduced it into modern usage in his classic work How to Solve It. Polya writes, "Heuristic reasoning is reasoning not regarded as final and strict but as provisional and plausible only, whose purpose is to discover the solution of the present problem."

Heuristics are often presented as a checklist of open-ended questions, suggestions, or guidewords. A heuristic checklist is not the same as a checklist of actions that you might include as "steps to reproduce" in a bug report. Its purpose is not to control your actions, but to help you consider more possibilities and interesting aspects of the problem. For a wonderful set of heuristics for developing software requirements, see Exploring Requirements: Quality Before Design, by Don Gause and Gerald M. Weinberg.

Two Approaches to Analysis

Let’s look at some heuristics for exploring software risk. I think of risk analysis as either "inside-out" or "outside-in." These are complementary approaches, each with its own strengths.

Inside-Out
Begin with details about the situation and identify risks associated with them. With this approach, you study a product and repeatedly ask yourself "What can go wrong here?" More specifically, for each part of the product, ask these three questions:

* Vulnerabilities What weaknesses or possible failures are there in this component?
* Threats What inputs or situations could there be that might exploit a vulnerability and trigger a failure in this component?
* Victims Who or what would be impacted by potential failures and how bad would that be?

This approach requires substantial technical insight, but not necessarily your insight. The times I’ve been most successful with inside-out risk analysis were when making "stone soup" with a developer. I brought the stones (the heuristics); he brought the soup (the facts).

Here’s what that looks like: In a typical analysis session we find an empty conference room that has a big whiteboard. I ask "How does this feature work?" The developer then draws a lot of scrunched boxes, wavy arrows, crooked cylinders, and other semi-legible symbology on the board. As he draws, he narrates the internal workings of the product. Meanwhile, I try to simulate the mechanism in my head as fast as the developer describes it. When I think I understand the process or understand how to test it, I explain it back to him. The whiteboard is an important prop because I get confused easily as I’m assimilating all the information. When I lose the thread of the explanation, I can scowl mysteriously, point to any random part of the diagram, and say something like "I’m still not clear on how this part works."

As I come to understand the mechanism, I look for potential vulnerabilities, threats, and victims. More precisely, I make the developer look for them with questions such as:

* [pointing at a box] What if the function in this box fails?
* Can this function ever be invoked at the wrong time?
* [pointing at any part of the diagram] What error checking do you do here?
* [pointing at an arrow] What exactly does this arrow mean? What would happen if it were broken?
* [pointing at a data flow] If the data going from here to there were somehow corrupted, how would you know? What would happen?
* What’s the biggest load this process can handle?
* What external components, services, states, or configurations does this process depend upon?
* Can any of the resources or components diagrammed here be tampered with or influenced by any other process?
* Is this a complete picture? What have you left out?
* How do you test this as you’re putting it together?
* What are you most worried about? What do you think I should test?

This is not a complete list of questions, but it’s a good start. Meanwhile, as the developer talks, I listen for whether he is operating on faith or on facts. I listen for any uncertainty or concern in his voice, hesitations, or a choice of words that may indicate that he has not thought through the whole problem of requirements, design, or implementation. Confusion or ambiguity suggests potential risk. When we identify a risk, we also talk about how I might test so as to evaluate and manage that risk.

A session like this lasts about an hour, usually—and I leave with an understanding of the feature, as well as a list of specific risks and associated test strategies. The tests I perform as a result of that conversation serve not only to focus on the risks, but also to refute or corroborate the developer’s story about the product.

There are wonderful advantages to this approach, but it requires effective communication skills on the part of the developer and tester, and a willingness to cooperate with each other. You can perform this analysis without the developer, but then you have the whole burden of studying, modeling, and analyzing the system by yourself.

Inside-out is a direct form of risk analysis. It asks "What risks are associated with this thing?" Inside-out is the opposite of the outside-in approach, which asks "What things are associated with this kind of risk?"

Outside-In
Begin with a set of potential risks and match them to the details of the situation. This is a more general approach than inside-out, and somewhat easier. With this approach, you consult a predefined list of risks and determine whether they apply here and now. The predefined list may be written down, or it may be something burned into your head by the flames of past experience. I use three kinds of lists: quality criteria categories, generic risk lists, and risk catalogs.

Quality Criteria Categories These categories are designed to evoke different kinds of requirements. What would happen if the requirements associated with any of these categories were not met? How much effort is justified in testing to assure they are met to a "good enough" standard?

* Capability Can it perform the required functions?
* Reliability Will it work well and resist failure in all required situations?
* Usability How easy is it for a real user to use the product?
* Performance How speedy and responsive is it?
* Installability How easily can it be installed onto its target platform?
* Compatibility How well does it work with external components and configurations?
* Supportability How economical will it be to provide support to users of the product?
* Testability How effectively can the product be tested?
* Maintainability How economical will it be to build, fix, or enhance the product?
* Portability How economical will it be to port or reuse the technology elsewhere?
* Localizability How economical will it be to publish the product in another language?

I cobbled together this list from various sources including the ISO 9126 standard, Hewlett Packard’s FURPS list (Functionality, Usability, Reliability, Performance, Supportability), and a few other sources. There is nothing authoritative about it except that it includes all the areas I’ve found useful in desktop application testing. I remember this list using the acronym CRUPIC STeMPL. To memorize it, say the acronym out loud and imagine that it’s the name of a Romanian hockey player. With a little practice, you’ll be able to recall the list any time you need it.

Generic Risk Lists Generic risks are risks that are universal to any system. These are my favorite generic risks:

* Complex - anything disproportionately large, intricate, or convoluted
* New - anything that has no history in the product
* Changed - anything that has been tampered with or "improved"
* Upstream Dependency - anything whose failure will cause cascading failure in the rest of the system
* Downstream Dependency - anything that is especially sensitive to failures in the rest of the system
* Critical - anything whose failure could cause substantial damage
* Precise - anything that must meet its requirements exactly
* Popular - anything that will be used a lot
* Strategic - anything that has special importance to your business, such as a feature that sets you apart from the competition
* Third-party - anything used in the product, but developed outside the project
* Distributed - anything spread out in time or space, yet whose elements must work together
* Buggy - anything known to have a lot of problems
* Recent Failure - anything with a recent history of failure

Risk Catalogs A risk catalog is an outline of risks that belong to a particular domain. Each line item in a risk catalog is the end of a sentence that begins with "We may experience the problem that..." Risk catalogs are motivated by testing the same technology pattern over and over again. You can put together a risk catalog just by categorizing the kinds of problems you have observed during testing. Here’s an example of part of an installation risk catalog:

(For an example of a very broad risk catalog, see Appendix A of Testing Computer Software by Cem Kaner, Jack Falk, and Hung Nguyen.)

* Wrong files installed
  Temporary files not cleaned up
  Old files not cleaned up after upgrade
  Unneeded file installed
  Needed file not installed
  Correct file installed in the wrong place

* Files clobbered
  Older file replaces newer file
  User data file clobbered during upgrade

* Other apps clobbered
  File shared with another product is modified
  File belonging to another product is deleted

* Hardware not properly configured
  Hardware clobbered for other apps
  Hardware not set for installed app

* Screen saver disrupts install

* No detection of incompatible apps
  Apps currently executing
  Apps currently installed

* Installer silently replaces or modifies critical files or parameters

* Install process is too slow

* Install process requires constant user monitoring

* Install process is confusing
  User interface is unorthodox
  User interface is easily misused
  Messages and instructions are confusing

You can use these risk lists in a number of ways. Here's one that works for me:

1. Decide what component or function you want to analyze. Are you looking at the whole product, a single component, or a list of components?

2. Determine your scale of concern. I like to use a scale of "normal," "higher," and "lower." Everything is presumed to be a normal risk unless I have reason to believe it’s a higher or a lower risk. Use a scale that’s meaningful to you, but beware of ambiguous scales, or scales that appear more objective than they really are.

3. Gather information (or people with information) about the thing you want to analyze. Obviously, you need to know something about the situation in order to analyze it. When I’m doing "outside-in" analysis on a product, I gather whatever information is convenient, make a stab at the analysis, then go to the people who are more expert than I and have them critique the analysis. Another way to do this is to get all those people in the same room at the same time and do the analysis in that meeting.

4. Visit each risk area on each list and determine its importance in the situation at hand. For each area, ask "Could we have problems in this area? If so, how big is that risk?" Record your impression. Think of specific reasons that support your impression. If you’re doing this in a meeting, ask "How do we know that this is or is not a risk? What would we have to know in order to make a better risk estimate?"

5. If any other risks occur to you that aren’t on the lists, record them. Special risks are bound to occur to you during this process.

6. Record any unknowns, which impact your ability to analyze the risk. During the process, you will often feel stumped. For example, you might wonder whether a particular component is especially complex. Maybe it’s not complex at all. What do you need to know in order to determine that? As you go through the analysis, it helps to make a list of information-gathering "to do" items. At some point, go get that information and update your analysis.

7. Double-check the risk distribution. It’s common to end up with a list of risks in which everything is considered to be equally risky. That may indeed be the case. On the other hand, it may be that your distribution of concerns is skewed because you’re not willing to make tough choices about what to test and what not to test. Whatever distribution of risks you end up with, double-check it by taking a few examples of equal risks and asking whether those risks really are equal. Take some examples of risks that differ in magnitude and ask if it really does make sense to spend more time testing the higher risk and less time testing the lower risk. Confirm that the distribution of risk magnitudes feels right.

I recommend including a variety of people from a variety of roles in this analysis. Use people from Technical Support, Development, and Marketing, for instance.

Three Ways to Organize Risk-Based Testing

Whether you employ outside-in, inside-out, or some hybrid approach to doing the analysis, I can suggest three different ways to communicate the risks and organize the testing around those risks: risk watch list, risk/task matrix, or component risk matrix.

Risk Watch List
This is probably the simplest way to organize risk-based testing. A risk watch list is just a list of risks that you periodically review to ask yourself what your testing has revealed about those issues. If you feel you don’t have enough recent information about problems in the product that are associated with a risk, then do some more testing to gather that information.

Risk/Task Matrix
The risk/task matrix consists of a table with two columns. On the left is a list of risks; on the right is a list of risk mitigation tasks associated with each risk. Sort the risks by importance, with the most important risks at the top. Think of each row in the matrix as a statement of the form "If we’re worried about risk X, then we should invest in tasks Y."

The risk/task matrix is useful mainly as a tool in negotiating for testing resources. I like using this technique in situations where Management would not accept poor testing, yet also would not provide enough testing staff to do that job. The matrix helps bring management expectations in line with available resources. It’s a lot easier to get testing resources when you can explain the impact of not having enough.

A disadvantage of this approach is that some tasks mitigate more than one risk. Also, some mitigation tasks cost so much or take so much time that they actually add more problems to the project than they’re worth in terms of the problems they help detect. Still, it’s a simple way to show the gross relationships between risk and test effort on a project-wide basis.

Component Risk Matrix
The component risk matrix consists of a table with three columns. Break the product into thirty or forty areas or components. These components can be physical code (such as "the install program"), functions (such as "print"), or data (such as "clipart library"). In other words, a component is anything that is subject to testing. In the leftmost column of each row of the matrix, list a component. In the rightmost column, list all of the known risk heuristics that indicate significant risk in that component (if a risk heuristic applies equally to all components, don’t bother listing it). In the middle column, write a summary risk judgment of "higher," "lower," or "normal" (see Table 1).
Table 1

Component           Risk     Risk Heuristics
Printing            Normal   Distributed, Popular
Report Generation   Higher   New, Strategic, Third-party, Complex, Critical
Installation        Lower    Popular, Usability, Changed
ClipArt Library     Lower    Complex

What this matrix does is help you communicate and negotiate which components will get more effort. I use a general rule that higher-risk items get twice the effort of normal items, which in turn get twice the effort of the components that are lower risk. (This is just an approximation, of course.)
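As a rough sketch of that twice-the-effort rule, using the components from Table 1 and an invented 120-hour budget:

```python
# Weight components by risk level: higher = 4, normal = 2, lower = 1, so each
# level gets roughly twice the effort of the level below it. Components and
# risk levels come from Table 1; the 120-hour budget is an invented example.
weights = {"higher": 4, "normal": 2, "lower": 1}
components = {
    "Printing": "normal",
    "Report Generation": "higher",
    "Installation": "lower",
    "ClipArt Library": "lower",
}

total_hours = 120
total_weight = sum(weights[risk] for risk in components.values())

for name, risk in components.items():
    hours = total_hours * weights[risk] / total_weight
    print(f"{name:<18} {risk:<7} {hours:5.1f} hours")
```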

The risk heuristics are included in the table because they help provoke questions about your risk judgments, but remember—there is no hard relationship between the heuristics and any particular judgment. You may find yourself in a situation where you will argue that one component is more risky than another, even though the first component has more heuristics driving its risk than the second. Risk analysis is a matter of evaluating factors that influence risk, not merely counting them.

As the project proceeds, you pay testing attention to different components in rough accord with their associated levels of risk. A disadvantage of this approach is that it focuses only on highlighting risks that increase the need to test, and not on those factors that decrease the need to test. You could add those risk-lowering factors into the matrix, of course, but I find that it makes the matrix too complicated.

Making It All Work
Always keep this in mind: your risk analysis is going to be incomplete and inaccurate to some degree, and it may be very wrong. All you really have at the beginning of a project are rumors of risks. As the project progresses, and you gain information about the product, you should adjust your test effort to match your best estimation of risk. Also, to deal with the risk of poor risk analysis, don’t let risk-based testing be the only kind of testing you do. Spend at least a quarter of your effort on approaches that are not risk-focused—such as field testing, code coverage testing, or functional coverage testing. This is called the principle of diverse half-measures: use a diversity of methods because no single heuristic always works.

Finally, if I were to choose two vital factors needed to make risk-based testing work, I would name experience and teamwork. Over a period of time, any product line or technology will reveal its pattern of characteristic problems (assuming that you pay attention to problems found in the field). Learn from that. And do whatever you can to invite different people with different points of view into the risk analysis process.

If there’s a magic to risk-based testing, it’s the magic of noticing the signs and clues, all around you, about where the problems lie. Some people do this without consciously thinking about it, and maybe that’s good enough. But when a problem slips by you because you couldn’t do perfectly exhaustive testing, you may be called upon to explain why you did what you did. Management may assume that you did a sloppy job, and they may not be impressed with the standard argument that all testing is incomplete. That’s when it’s nice to have that risk list or matrix. With risk-based testing, you can show Management that you strive to make the best use of the resources they invest. They’ll respect you for that.