Tuesday, September 30, 2008

All About Traceability


Agenda
•What is traceability?
•Why is traceability important?
•How is traceability performed?
•What tools perform traceability?
•What is the future of traceability?


Introduction
•What makes a software project successful?
•Meets stakeholder requirements
•How can this be encouraged?
•Traceability
•Traceability in a nutshell
•Shows forward and backward relationships linking requirements with design, implementation, test, and maintenance
•Captures the rationale for every element and how it will be tested


Why is Traceability Important?
•Ensures that requirements are met
•Clarifies the relationship between requirements and the delivered system
•Lowers risk
•Creates an audit trail
•Consistency
•Control
•Change
•Development
•Risk


Problems with Traceability
•Manual process
•Viewed by developers as a low priority
•Misunderstood
•No single modeling method
•Poor documentation


When Does Traceability Occur?
•Entire lifecycle!


How is Tracing Performed?
•Client gives developers rough requirements
•Developers create system, hardware, and software requirements
•Each element is given a unique identifier
•Element – requirement, design attribute, test, etc.
•Linkages done manually and managed by a CASE tool
•Traceability tables are generated
•Matrix form (see the sketch below)
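To make the matrix idea concrete, here is a minimal sketch of how a traceability table could be generated once every element carries a unique identifier; the identifiers and links are hypothetical, not drawn from any specific CASE tool.

# Hypothetical trace links: each element has a unique identifier, and a link
# records which downstream element satisfies or verifies a requirement.
links = [
    ("SRD-001", "DES-004"),   # requirement -> design attribute
    ("SRD-001", "TC-012"),    # requirement -> test case
    ("SRD-002", "DES-007"),
]

requirements = sorted({src for src, _ in links})
downstream = sorted({dst for _, dst in links})

# Print a traceability matrix: rows are requirements, columns are downstream
# elements, and "X" marks an existing trace link.
print("".ljust(10) + "".join(col.ljust(10) for col in downstream))
for req in requirements:
    cells = ["X" if (req, col) in links else "" for col in downstream]
    print(req.ljust(10) + "".join(cell.ljust(10) for cell in cells))

In a real tool the same links also drive gap analysis: a requirement row with no mark in any test column is an unverified requirement.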


Traceability Example
•SRD – System Requirements Document
•High-level requirements
•Written by stakeholders
•SS – System Specification
•More detailed requirements
•Developer interpretation
•Segments
•More detailed portions of the SS
•Include design information (see the example below)
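As a small illustration of that SRD-to-SS-to-segment chain (the identifiers and wording are invented, not from an actual project), each SS requirement can record the SRD requirement it refines, and each segment the SS requirement it designs, so a trace can be walked in either direction.

# Hypothetical three-level trace: stakeholder requirement (SRD) ->
# developer specification (SS) -> design segment.
srd = {"SRD-3": "The system shall log every user transaction."}
ss = {"SS-3.1": {"text": "Transactions are written to the audit log within 1 second.",
                 "traces_to": "SRD-3"}}
segments = {"SEG-A2": {"text": "AuditLogger component; buffered writer design.",
                       "traces_to": "SS-3.1"}}

# Walk a segment backward to the stakeholder requirement it came from.
spec_id = segments["SEG-A2"]["traces_to"]
print("SEG-A2 <- {} <- {}".format(spec_id, ss[spec_id]["traces_to"]))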


Traceability Management
•Requirements are added/deleted/modified
•Impact analysis (see the sketch below)
•Affected trace links are updated
•Continues through maintenance
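Impact analysis over the trace links amounts to a transitive walk of the forward relationships; a minimal sketch, using hypothetical identifiers in the same style as the earlier examples.

from collections import defaultdict

# Hypothetical forward links from each element to the elements derived from it.
forward = defaultdict(list)
for src, dst in [("SRD-001", "SS-1.1"), ("SS-1.1", "DES-004"),
                 ("SS-1.1", "TC-012"), ("SRD-002", "SS-2.1")]:
    forward[src].append(dst)

def impacted(changed_element):
    """Return every downstream element reachable from a changed element."""
    seen, stack = set(), [changed_element]
    while stack:
        for nxt in forward[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Changing SRD-001 flags its specification, design, and test for review.
print(sorted(impacted("SRD-001")))   # ['DES-004', 'SS-1.1', 'TC-012']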


Traceability in a Perfect World
•Steps
•Identification of requirements
•Architecture selection
•Classification schema
•Functions, Performance, and Security
•Translate into views
•Allocation into schemas (see the sketch below)
•Flow-down to design, code, and test
•Entry into traceability database
•Linkages
•Management
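The allocation step can be pictured as tagging each requirement with a schema category so it can later be flowed down to the matching design view; the keyword rules below are purely illustrative assumptions, not a real schema.

# Hypothetical keyword-based allocation into a Functions / Performance /
# Security classification schema; anything unmatched defaults to Functions.
SCHEMA = {
    "Performance": ("within", "per second", "latency", "throughput"),
    "Security": ("encrypt", "authenticate", "authorize", "audit"),
}

def classify(requirement_text):
    text = requirement_text.lower()
    for category, keywords in SCHEMA.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Functions"

print(classify("The system shall respond within 200 ms."))     # Performance
print(classify("The system shall encrypt stored passwords."))  # Security
print(classify("The system shall export reports as PDF."))     # Functions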


Traceability in the Real World
•Labor intensive
•Classification schemas are frequently changed as requirements are allocated
•Ensure that semantics and syntax are correct


Semantics and Syntax
•Semantics required to assure that a trace is used in context
•Syntax required to assure that a trace goes to a specific word or phrase
•Manual verification of outcomes


Real World Traceability Workflow
•Receipt of requirements documents
•Select architecture form to be followed
•Select classification schema
•Parse document and assign unique numbers (see the sketch after this list)
•Allocate according to classification scheme
•Establish linkages across all requirements
•Generate traceability matrices
•Maintain traceability linkages in database
•Maintain traceability links across entire project
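Parsing the document and assigning unique numbers is often little more than pulling out the "shall" statements and stamping each with an identifier; a rough sketch under that assumption (the document text and REQ-nnn scheme are hypothetical).

import re

document = """
The system shall store all sensor readings.
Readings shall be timestamped to the nearest millisecond.
Operators may export readings on demand.
"""

# Treat each line containing "shall" as a requirement and assign it a
# sequential unique identifier such as REQ-001.
shall_lines = [line.strip() for line in document.splitlines()
               if re.search(r"\bshall\b", line)]
requirements = {"REQ-{:03d}".format(i + 1): text
                for i, text in enumerate(shall_lines)}

for req_id, text in requirements.items():
    print(req_id, text)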


Return on Investment
•Very difficult to measure
•Many factors
•Costs
•Time
•CASE Tools
•Training
•Benefits
•Difficult to quantify
•Can only be estimated
•What rework was avoided?


Tools
•CASE Tools
•Characteristics
•Hypertext linking
•Unique identifiers
•Syntactical similarity coefficients (see the sketch below)
•Problems
•Hypertext linking and syntactical similarity do not consider context
•Unique identifiers do not convey any requirement information
•Choosing architecture views and classification schemas will always be a manual step
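A syntactical similarity coefficient can be as simple as word overlap between two statements, which is exactly why it ignores context; a minimal Jaccard-style sketch with invented statements.

def similarity(a, b):
    """Jaccard coefficient over lowercase word sets: |A & B| / |A | B|."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

req = "The system shall log every failed login attempt"
related = "LoginAuditor records each failed login attempt to the log"
unrelated = "The system shall display the login screen"

print(round(similarity(req, related), 2))    # ~0.42
print(round(similarity(req, unrelated), 2))  # ~0.40, nearly as high despite a different intent

The near-tie is the problem the slide points at: pure word overlap cannot distinguish a genuine trace from an accidental one.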


Tools
•DOORS
•Telelogic
•“capture, link, trace, and manage”
•For large applications
•From the datasheet
•Similar look and feel to Explorer
•Gap analysis for unaddressed requirements
•Traceability analysis for identifying areas of risk
•Impact analysis reports
•Volatility
•Traceability by drag and drop


Tools
•Caliber-RM
•Borland
•From the datasheet
•Centralized repository
•Requirements traceability across the lifecycle
•Impact analysis


Future Predictions
•Automation of allocation into architectures and classification schemas
•Little additional automation seen in current tools
•AIRES (Automated Integrated Requirements Engineering System)
•Center for Software Systems Engineering at George Mason University
•Relies heavily on semantics and syntax


Pros/Cons
•Pros
•Clearly conveyed the importance of and need for traceability
•Practical workflow
•Cons
•Examples did not cover the full lifecycle
•Little practical guidance on the tools

Friday, September 26, 2008

Why are there Bugs?

Bugs exist because humans aren't perfect.

by Mark Glaser

Since humans design and program hardware and software, mistakes are inevitable. That's what computer and software vendors tell us, and it's partly true. What they don't say is that software is buggier than it has to be. Why? Because time is money, especially in the software industry.

This is how bugs are born: a software or hardware company sees a business opportunity and starts building a product to take advantage of it. Long before development is finished, the company announces that the product is on the way. Because the public is (the company hopes) now anxiously awaiting this product, the marketing department fights to get the goods out the door before that deadline, all the while pressuring the software engineers to add more and more features. Shareholders and venture capitalists clamor for quick delivery because that's when the company will see the biggest surge in sales. Meanwhile, the quality-assurance division has to battle for sufficient bug-testing time.

"The simple fact is that you get the most revenues at the release of software," says Bruce Brown, the founder of BugNet, a newsletter that has chronicled software bugs and fixes since 1994. "The faster you bring it out, the more money you make. You can always fix it later, when people howl. It's a fine line when to release something, and the industry accepts defects."

It may seem that there are more bugs these days than ever before, but longtime bug watchers like Brown say this is mostly a visual illusion caused by increased media coverage. Not only has the number of bugs not changed, but manufacturers are fixing them more quickly. But while the industry as a whole may not be buggier, one important new category is, arguably, more flawed than other genres: Internet software. The popularity of the Internet is pushing companies to produce software faster than ever before, and the inevitable result is buggier products.

"Those are crazy release schedules," says Brian Bershad, an associate professor of computer science at the University of Washington. His Kimera project helped catch several security bugs in Java. "The whole industry is bonkers. Web standards need to be developed and thoughtfully laid out, but look at all the versions of Java and HTML. It's not that the people aren't smart; it's just that they don't have time to think."

But software and hardware companies persist in arguing that we should put up with bugs. Why? Because the cost of stamping out all bugs would be too high for the consumer. "Software is just getting so incredibly complicated," says Bershad. "It's too expensive to have no bugs in consumer software."
