Lecture 5
1. test case design and techniques
- black-box
- based on requirements
- techniques
- equivalence partitioning
- valid partitions
- invalid partitions
- boundary value analysis (only for ordered, numeric, or sequential partitions; see the EP/BVA sketch after this list)
- decision tables
- different combinations of conditions result in different outcomes
- minimum coverage standard: at least one test case per decision rule in the table
- good for recording complex business rules (see the decision-table sketch after this list)
- state transition testing
- a state table shows the relationship between states and inputs, and highlights invalid transitions
- pairwise testing
- covers every pair of parameter values without testing all possible combinations
- use case testing
- useful for designing acceptance tests with customer participation
- white-box
- based on code and the design of the system
- ability to derive coverage of the whole application
- techniques
- statement coverage
- coverage = (# of executable statements executed by the tests) / (total # of executable statements)
- branch coverage
- decision coverage
- coverage = (# of decision outcomes executed by the tests) / (total # of decision outcomes)
- experience-based
- based on the tester's knowledge of and experience with similar applications
- techniques
- error guessing
- enumerate a list of possible defects
- exploratory testing
- informal (not pre-defined) tests
- the test results are used to create tests for the areas that may need more testing
- conducted using session-based testing
- useful where
- there are few specifications
- severe time pressure
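
A minimal pytest sketch of equivalence partitioning and boundary value analysis. The `classify_age` function and its partitions (invalid below 0, child 0-17, adult 18-64, senior 65+) are assumptions made up for illustration, not part of the lecture.

```python
import pytest

def classify_age(age: int) -> str:
    # hypothetical function under test, used only for this sketch
    if age < 0:
        return "invalid"
    if age <= 17:
        return "child"
    if age <= 64:
        return "adult"
    return "senior"

# equivalence partitioning: one representative value per partition
@pytest.mark.parametrize("age,expected", [
    (-5, "invalid"),   # invalid partition
    (10, "child"),     # valid partition 0-17
    (30, "adult"),     # valid partition 18-64
    (80, "senior"),    # valid partition 65+
])
def test_equivalence_partitions(age, expected):
    assert classify_age(age) == expected

# boundary value analysis: values on both sides of each partition boundary
@pytest.mark.parametrize("age,expected", [
    (-1, "invalid"), (0, "child"),
    (17, "child"),   (18, "adult"),
    (64, "adult"),   (65, "senior"),
])
def test_boundary_values(age, expected):
    assert classify_age(age) == expected
```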
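
A minimal decision-table sketch in the same style. The loan-approval rules and the `approve_loan` function are hypothetical; the point is that each row of the table is one decision rule, and the minimum coverage standard is one test case per rule.

```python
import pytest

def approve_loan(has_income: bool, good_credit: bool) -> str:
    # hypothetical business rules, invented for this sketch
    if has_income and good_credit:
        return "approve"
    if has_income:
        return "manual review"
    return "reject"

# decision table: (has_income, good_credit) -> expected outcome, one row per rule
DECISION_TABLE = [
    (True,  True,  "approve"),        # rule 1
    (True,  False, "manual review"),  # rule 2
    (False, True,  "reject"),         # rule 3
    (False, False, "reject"),         # rule 4
]

@pytest.mark.parametrize("has_income,good_credit,expected", DECISION_TABLE)
def test_one_case_per_decision_rule(has_income, good_credit, expected):
    assert approve_loan(has_income, good_credit) == expected
```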
Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa (see the sketch below).
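
A small made-up function illustrating why this holds. One test with a negative input executes every statement (100% statement coverage) but exercises only the True outcome of the single decision, so decision coverage is 1/2 = 50%; a second test is needed to cover the False outcome.

```python
def absolute(x: int) -> int:
    if x < 0:     # one decision with two outcomes: True / False
        x = -x    # this statement runs only when the decision is True
    return x

def test_statement_coverage_only():
    # executes all statements -> 100% statement coverage,
    # but covers only 1 of 2 decision outcomes -> 50% decision coverage
    assert absolute(-5) == 5

def test_completes_decision_coverage():
    # covers the False outcome of the decision as well
    assert absolute(7) == 7
```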
2. choosing a test technique (slides 21)
the choice depends on several factors:
- type of the system
- level of risks
- type of risks
- knowledge of the testers
- time and budgets
3.1 tester role
- review and contribute to test plans
- analyze, review requirements and user stories
- identify test conditions
- design test environment
- implement test cases
- prepare test data
- execute tests and evaluate the result
- automate the testing
3.2 testing role based on level
| level | responsibility (done by) |
|---|---|
| component integration | developer |
| system integration | independent test team |
| operational acceptance | operations/systems administration staff |
| user acceptance | business analysts, subject matter experts, and users |
4. independent tester
- benefits
- testers are unbiased and find different defects than developers do
- testers can verify assumptions made by people during specification and implementation of the system
- drawbacks
- isolation from development team
- developers may lose sense of responsibility for quality
- testers may be seen as bottleneck for release
5. configuration management
- the purpose is to
- maintain integrity of work products throughout the life cycle
- maintain traceability throughout the test process
- for testers, it helps uniquely identify the tested items, test documents, ...
6.1 risks and testing
risk: the chance of an event or threat occurring and resulting in an undesirable problem
level of risk depends on
- likelihood/probability of an event happening
- impact/harm resulting from event
risk-based testing is used to
- focus effort required during testing
- decide where/when to start testing and identify areas that need more attention.
- reduce the probability of an adverse event occurring, or to reduce its impact.
Resulting product risk information is used to guide test activities (see the prioritization sketch after this list):
- determine the test techniques to be employed.
- determine the extent of testing to be carried out.
- prioritize testing to find the critical defects as early as possible.
- determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers).
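
A minimal sketch of risk-based prioritization, assuming the common convention that risk level = likelihood × impact (each scored 1-5). The features and scores below are made-up illustration values, not from the lecture.

```python
# made-up product areas with assumed likelihood/impact scores (1-5)
risks = {
    "payment processing": {"likelihood": 4, "impact": 5},
    "user login":         {"likelihood": 3, "impact": 5},
    "report export":      {"likelihood": 2, "impact": 2},
}

def risk_level(entry: dict) -> int:
    # level of risk = likelihood of the event x impact of the event
    return entry["likelihood"] * entry["impact"]

# test the highest-risk areas first and give them the most effort
for feature, entry in sorted(risks.items(), key=lambda kv: risk_level(kv[1]), reverse=True):
    print(f"{feature}: risk level {risk_level(entry)}")
```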
6.2 risk types
project risks: have a negative effect on the project's ability to achieve its objectives; examples:
- organizational factors
- skill, training & staff shortages
- personal issues
- political issues
- improper attitude such as not appreciating the value of finding defects during testing
- technical issues
- problems identifying the right requirements
- requirements can't be met
- environment not ready
- low-quality code and design
- supplier issues
- contractual issues
product risks: the possibility that a work product may fail to satisfy the needs of users/stakeholders; examples:
- may not perform intended functions
- may not support some non-functional requirements
- bad response time
- bad user experience
3.1 bug life cycle
defect age: time gap between date of detection & date of closure
- new: posted for the first time
- assigned: the bug is assigned to the development team
- open: the developer has started working on the defect fix; from here the bug may instead be marked duplicate, deferred, or not a bug
- fixed: the bug is fixed and is passed to the testing team
- pending retest: waiting for the tester to retest the fix
- retest: the tester retests the changed code
- verified: the bug is verified to be fixed
- reopen: the bug still remains after the fix
- closed: the bug no longer exists in the software
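
The life cycle above can be read as a state machine. A small sketch of the allowed transitions follows; exact transitions vary between teams and defect-tracking tools, so treat this map as an illustration of the states listed in these notes, not a standard.

```python
# allowed transitions between defect states (illustrative only)
ALLOWED_TRANSITIONS = {
    "new":            {"assigned"},
    "assigned":       {"open"},
    "open":           {"fixed", "duplicate", "deferred", "not a bug"},
    "fixed":          {"pending retest"},
    "pending retest": {"retest"},
    "retest":         {"verified", "reopen"},
    "verified":       {"closed"},
    "reopen":         {"assigned"},
    "closed":         set(),
    "duplicate":      set(),   # developer statuses (see 3.2) are terminal here
    "deferred":       set(),
    "not a bug":      set(),
}

def is_valid_transition(current: str, nxt: str) -> bool:
    return nxt in ALLOWED_TRANSITIONS.get(current, set())

assert is_valid_transition("fixed", "pending retest")
assert not is_valid_transition("closed", "open")
```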
3.2 bug status by the developer
- duplicate
- rejected
- deferred: to be fixed in a later release, or its priority is low
- not a bug
4. testing tools
benefits of using testing tools
- reduction in repetitive manual work
- greater consistency and repeatability
- easier access to information about testing (metrics, graphs, ...)
risks of using testing tools
- expectations for the tool may be unrealistic
- time and cost for the initial introduction of the tool
- vendor may provide a poor response for support, defect fixes, and upgrades
- may be relied on too much
- new technology may not be supported by the tool
- an open source tool's project may be suspended
5. test execution tools
execute test objects using automated test scripts
approaches
- data-driven testing
- separating the “data set” from the actual “test case” (code)
- the same test can be run with many different inputs to achieve better coverage (see the data-driven sketch after this list)
- keyword-driven testing
- the framework reads instructions (keywords) from an external file, e.g., an Excel spreadsheet
- a generic script processes the keywords, which describe the actions to be taken to perform a specific step (see the keyword-driven sketch after this list)
- model-based testing (MBT)
- uses models of the system (e.g., UML diagrams) to generate test cases
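
A minimal sketch of data-driven testing with pytest. The data rows are inlined here to keep the example self-contained, but in practice they would typically come from an external CSV/Excel file; `login` is a hypothetical function used only for illustration.

```python
import pytest

def login(username: str, password: str) -> bool:
    # stand-in for the real system under test
    return username == "alice" and password == "s3cret"

# the "data set" is kept separate from the test logic (the "test case")
LOGIN_DATA = [
    # username, password, expected result
    ("alice", "s3cret", True),
    ("alice", "wrong",  False),
    ("",      "s3cret", False),
]

@pytest.mark.parametrize("username,password,expected", LOGIN_DATA)
def test_login_data_driven(username, password, expected):
    assert login(username, password) == expected
```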
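
And a minimal keyword-driven sketch: a generic runner maps action keywords to functions. The keywords, steps, and helper functions below are hypothetical; in a real framework the steps would be read from the external spreadsheet mentioned above.

```python
# hypothetical action helpers a real framework would implement properly
def open_page(url):    print(f"open {url}")
def click(target):     print(f"click {target}")
def check_title(text): print(f"check title == {text}")

KEYWORDS = {"open": open_page, "click": click, "check_title": check_title}

# each step would normally be one row in the external file: keyword, argument
TEST_STEPS = [
    ("open", "https://example.com/login"),
    ("click", "login button"),
    ("check_title", "Dashboard"),
]

# the generic script interprets the keywords and performs the actions
for keyword, argument in TEST_STEPS:
    KEYWORDS[keyword](argument)
```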
6. principles for Tool Selection
- identification of opportunities for an improved test process supported by tools
- understanding of the technologies used by the test objects (select a tool compatible with the technology)
- check the build and CI tools used within the organization, to ensure tool compatibility and integration
- evaluation of the tool against clear requirements and objective criteria
- consideration of whether or not the tool is available for a free trial period (and for how long)
- evaluation of the vendor (including training, commercial aspects, and support after sale)
- identification of internal requirements for coaching and mentoring in the use of the tool
- evaluation of training needs and skills of those who will be working directly with the tool(s)
- consideration of the pros and cons of various licensing models (e.g., commercial or open source)
- estimation of a cost-benefit ratio based on a concrete business case (if required)
7. pilot Projects for introducing a Tool
objectives
- gain in-depth knowledge about tool strengths and weaknesses
- evaluate how the tool fits with existing processes
- decide on standard ways of using, managing, and storing the tool
- Assess whether the benefits will be achieved
- Understand the metrics that you wish the tool to collect and report
8. success factors for tools
- rolling out the tool to the rest of the organization incrementally
- adapting and improving processes to fit with the use of the tool
- providing training, coaching, and mentoring for tool users
- defining guidelines for the use of the tool (e.g., internal standards for automation)
- gathering usage information from the actual use of the tool
- monitoring tool use and benefits
- providing support to the users of a given tool
- gathering lessons learned from all users