Sunday, November 30, 2008

Glossary of terms used in Software Testing


Standard glossary of terms used in Software Testing
International Software Testing Qualifications Board

A

abstract test case: See high level test case.

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]
acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]
accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]
accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.
actual outcome: See actual result.
actual result: The behavior produced/observed when a component or system is tested.
ad hoc review: See informal review.
ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.
adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability.
agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.
alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126]
anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident, and problem.
attractiveness: The capability of the software product to be attractive to the user. [ISO 9126]
audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify: (1) the form or content of the products to be produced; (2) the process by which the products shall be produced; (3) how compliance to standards or guidelines shall be measured. [IEEE 1028]
audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]
automated testware: Testware used in automated testing, such as tool scripts.
availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
B

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]
baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]
basic block: A sequence of one or more consecutive executable statements containing no branches.
basis test set: A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.
behavior: The response of a component or system to a set of input values and preconditions.
benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]
bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.
best practice: A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as ‘best’ by other peer organizations.
beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing.
black-box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.
black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.
bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.
boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.
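As a minimal sketch of boundary value analysis, assume a hypothetical input field that accepts ages from 18 to 65 inclusive (the field, the range and the function name below are invented for illustration, not taken from the glossary). Test values are chosen on each boundary and just outside it:

def is_valid_age(age: int) -> bool:
    """Return True when age is within the assumed accepted range 18..65."""
    return 18 <= age <= 65

boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (65, True),   # on the upper boundary
    (66, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, f"unexpected result for age {value}"
print("all boundary value cases passed")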
boundary value coverage: The percentage of boundary values that have been exercised by a test suite.
branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths are available, e.g. case, jump, go to, if-then-else.
branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
branch testing: A white box test design technique in which test cases are designed to execute branches.
bug: See defect.
business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.
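A small, hedged illustration of branch coverage and branch testing, using a made-up function with a single decision: one test case per branch is enough to exercise both branches.

def classify(amount: float) -> str:
    # One decision, two branches.
    if amount > 1000:
        return "large"   # True branch
    return "small"       # False branch

# Two test cases, one per branch, give 100% branch coverage,
# which for this code also implies 100% decision and statement coverage.
assert classify(1500) == "large"
assert classify(10) == "small"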
C
Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance.
Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM.
capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
CAST: Acronym for Computer Aided Software Testing. See also test automation.
cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]
changeability: The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability.
change control: See configuration control.
classification tree method: A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]
code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
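As an informal illustration of what a code coverage measurement does, the toy sketch below uses Python's standard trace hook to record which lines of a made-up function are executed by a single test; real coverage tools are far more complete, and the function and names here are invented.

import sys

executed_lines = set()

def tracer(frame, event, arg):
    # Record each line executed inside grade(); ignore everything else.
    if event == "line" and frame.f_code.co_name == "grade":
        executed_lines.add(frame.f_lineno)
    return tracer

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"

sys.settrace(tracer)
grade(30)                 # this test exercises only the failing branch
sys.settrace(None)

print("lines of grade() executed by this test:", sorted(executed_lines))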
code-based testing: See white box testing.
co-existence: The capability of the software product to co-exist with other independent software in a common environment sharing common resources. [ISO 9126]
commercial off-the-shelf software: See off-the-shelf software.
compatibility testing: See interoperability testing.
compiler: A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]
complete testing: See exhaustive testing.
completion criteria: See exit criteria.
complexity: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.
compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]
compliance testing: The process of testing to determine the compliance of a component or system.
component: A minimal software item that can be tested in isolation.
component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.
component specification: A description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).
component testing: The testing of individual software components. [After IEEE 610]
compound condition: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.
concrete test case: See low level test case.
concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]
condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.
condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
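To make the difference between condition coverage and decision coverage concrete, here is a hedged, made-up example with two single conditions joined by AND; the function and values are illustrative only. The two tests are designed so that each single condition takes both the outcome True and the outcome False (100% condition coverage), yet the decision as a whole is False in both cases, so decision coverage is only 50%.

def approve(a: int, b: int, c: int) -> bool:
    # Compound condition: two single conditions joined by AND.
    return a > b and c > 1000

tests = [
    (5, 3, 100),    # a > b True,  c > 1000 False -> decision False
    (1, 3, 2000),   # a > b False, c > 1000 True  -> decision False
]
for a, b, c in tests:
    print((a, b, c), "->", approve(a, b, c))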
condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.
condition determination testing: A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.
condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.
condition outcome: The evaluation of a condition to True or False.
confidence test: See smoke test.
configuration: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.
configuration auditing: The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]
configuration control: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]
configuration control board (CCB): A group of people responsible for evaluating and approving or disapproving proposed changes to configuration items, and for ensuring implementation of approved changes. [IEEE 610]
configuration identification: An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation.
configuration item: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]
configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]
configuration testing: See portability testing.
confirmation testing: See re-testing.
conformance testing: See compliance testing.
consistency: The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]
control flow: A sequence of events (paths) in the execution through a component or system.
control flow graph: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.
COTS: Acronym for Commercial Off-The-Shelf software. See off-the-shelf software.
coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.
coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.
coverage tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by a test suite.
custom software: See bespoke software.
cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L – N + 2P, where:
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
[After McCabe]
cyclomatic number: See cyclomatic complexity.
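As a small worked example (the function is made up), a routine with a single if/else, drawn as a control flow graph with a single exit node, has four nodes and four edges in one connected part, so its cyclomatic complexity is 4 – 4 + 2×1 = 2, i.e. two independent paths.

def sign(x: int) -> str:
    if x >= 0:
        return "non-negative"
    return "negative"

# Control flow graph of sign(): 4 nodes (the decision, the two returns, the exit),
# 4 edges, 1 connected part, so complexity = L - N + 2P = 4 - 4 + 2*1 = 2;
# the two independent paths are the True branch and the False branch.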
D
daily build: A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.
data definition: An executable statement where a variable is assigned a value.
data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
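A minimal sketch of the data driven idea in plain Python (the function under test and the table contents are invented for illustration): the test data lives in a table, and a single loop acts as the control script that executes every row.

def add(a, b):
    return a + b

# Each row: input a, input b, expected result.
test_table = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]

for a, b, expected in test_table:
    actual = add(a, b)
    assert actual == expected, f"add({a}, {b}) returned {actual}, expected {expected}"
print("all data driven cases passed")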
data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]
data flow analysis: A form of static analysis based on the definition and usage of variables.
data flow coverage: The percentage of definition-use pairs that have been exercised by a test suite.
data flow test: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.
data integrity testing: See database integrity testing.
database integrity testing: Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.
debugging: The process of finding, analyzing and removing the causes of failures in software.
debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.
decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]
decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.
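As a hedged, made-up illustration of decision table testing, the comment block below shows a small decision table for a fictitious discount rule (two conditions, one action), and one test case is derived per column (rule):

# Decision table for an invented discount rule; columns are the rules.
#
#   member             : Y    Y    N    N
#   order total > 100  : Y    N    Y    N
#   discount (percent) : 15   5    5    0
def discount(member: bool, order_total: float) -> int:
    if member and order_total > 100:
        return 15
    if member or order_total > 100:
        return 5
    return 0

# One test case per column of the decision table.
assert discount(True, 150) == 15
assert discount(True, 50) == 5
assert discount(False, 150) == 5
assert discount(False, 50) == 0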
decision outcome: The result of a decision (which therefore determines the branches to be taken).
defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]
defect management tool: A tool that facilitates the recording and status tracking of defects. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.
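To make defect density and DDP concrete with invented numbers (all figures below are illustrative only): if system testing finds 90 defects, 10 more are found afterwards, and the system is 45 KLOC, then defect density is roughly 2.2 defects per KLOC and the DDP of system testing is 90%.

# Illustrative numbers only.
defects_found_in_system_test = 90
defects_found_afterwards = 10          # e.g. in acceptance testing and live use
size_in_kloc = 45                      # thousands of lines of code

defect_density = (defects_found_in_system_test + defects_found_afterwards) / size_in_kloc
ddp = defects_found_in_system_test / (defects_found_in_system_test + defects_found_afterwards)

print(f"defect density: {defect_density:.1f} defects per KLOC")   # about 2.2
print(f"DDP for system test: {ddp:.0%}")                          # 90%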
defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]
defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]
defect tracking tool: See defect management tool.
definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational use (e.g. multiplication) and predicate use (directing the execution of a path).
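A small, hypothetical code fragment (names are illustrative) showing one definition of a variable followed by both kinds of use:

def summarize(values):
    total = sum(values)              # definition of 'total'
    average = total / len(values)    # computational use of 'total'
    if total > 100:                  # predicate use of 'total' (directs the path taken)
        return "large", average
    return "small", average

print(summarize([10, 20, 30]))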
deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.
design-based testing: An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).
desk checking: Testing of software or specification by manual simulation of its execution. See also static analysis.
development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]
deviation: See incident.
deviation report: See incident report.
dirty testing: See negative testing.
documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.
domain: The set from which valid input and/or output values can be selected.
driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]
dynamic analysis tool: A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.
dynamic comparison: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.
dynamic testing: Testing that involves the execution of the software of a component or system.

E

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]
efficiency testing: The process of testing to determine the efficiency of a software product.
elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]
emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.
entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]
entry point: The first executable statement within a component.
equivalence class: See equivalence partition.
equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.
equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
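As a minimal sketch of equivalence partitioning, assume a hypothetical input field for a percentage score from 0 to 100 (the field, the range and the names below are invented): the input domain splits into three partitions (below the range, within it, above it), and one representative value is chosen from each.

def is_valid_score(score: int) -> bool:
    """Return True when score is within the assumed accepted range 0..100."""
    return 0 <= score <= 100

# One representative test value per equivalence partition.
partitions = {
    "below range (invalid)": -7,
    "within range (valid)": 42,
    "above range (invalid)": 250,
}

for name, representative in partitions.items():
    print(f"{name}: {representative} -> {is_valid_score(representative)}")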
error: A human action that produces an incorrect result. [After IEEE 610]
error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
error seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].
evaluation: See testing.
exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing. [After Gilb and Graham]
exit point: The last executable statement within a component.
expected outcome: See expected result.
expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.
exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
source:
