Posted at 01.02.2019
We start testing activities from the first phase of the software development life cycle. We may generate test cases from the SRS and SDD documents and use them during system and acceptance testing. Hence, development and testing activities are carried out simultaneously in order to produce good quality maintainable software in time and within budget. We carry out testing at many levels and may also take the help of a software testing tool. Whenever we experience a failure, we debug the source code to find the reasons for it. Locating the reasons for a failure is a very significant testing activity; it consumes a huge amount of resources and may also delay the release of the software.
Software testing is normally carried out at different levels. There are four such levels, namely unit testing, integration testing, system testing, and acceptance testing, as shown in figure 8.1. The first three levels of testing are performed by the testers, and the last level (acceptance) is performed by the customer(s)/user(s). Each level has specific testing objectives. For example, at the unit testing level, independent units are tested using functional and/or structural testing techniques. At the integration testing level, two or more units are combined and testing is carried out to check the integration-related issues of the various units. At the system testing level, the system is tested as a whole, and primarily functional testing techniques are used. Non-functional requirements like performance, reliability, usability, testability etc. are also tested at this level. Load/stress testing is also performed at this level. The last level, i.e. acceptance testing, is performed by the customer(s)/users for the purpose of accepting the final product.
We develop software in parts/units, and every unit is expected to have defined functionality. We may call it a component, module, procedure, function etc. It will have a purpose and may be developed independently and simultaneously. A. Bertolino and E. Marchetti have defined a unit as [BERT07]:
"A unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one or a few developers. The unit test cases' purpose is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure. [BEIZ90, PFLE01]."
There are also issues with unit testing. How do we run a unit independently? A unit may not be completely independent. It may be calling a few units and may also be called by one or more units. We may have to write additional source code to execute a unit. Suppose a unit X calls a unit Y, and a unit Y calls a unit A and a unit B, as shown in figure 8.2(a). To execute a unit Y independently, we may have to write additional source code for unit Y to handle the activities of unit X and the activities of unit A and unit B. The additional source code that handles the activities of the calling unit X is called a "driver", and the additional source code that handles the activities of the called units A and B is called a "stub". The complete additional source code written for the design of stubs and drivers is called scaffolding.
The scaffolding should be removed after the completion of unit testing. Unit testing may help us to find an error easily due to the small size of a unit. Many white box testing techniques may be effectively applied at the unit level. We should keep stubs and drivers simple and small in size to reduce the cost of testing. If we design units in such a way that they can be tested without writing stubs and drivers, we are very efficient and lucky. In practice, this may be difficult, and hence the requirement for stubs and drivers may not be eliminated. We may only minimise the need for scaffolding, depending upon the functionality and its division among the various units.
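The driver/stub arrangement described above can be sketched in a few lines of Python. In this hypothetical example (the unit names X, Y, A, B and their arithmetic are invented for illustration), stubs stand in for the called units A and B, and a small driver plays the role of the calling unit X:

```python
# Stubs: stand in for the called units A and B, returning fixed,
# predictable values so unit Y can be exercised in isolation.
def unit_a_stub(n):
    return 10

def unit_b_stub(n):
    return 2

def unit_y(n, unit_a=unit_a_stub, unit_b=unit_b_stub):
    # The unit under test; its callees are injected so stubs can replace them.
    return unit_a(n) + unit_b(n) + n

def driver():
    # Driver: plays the role of the calling unit X, feeding test inputs to Y.
    return [unit_y(n) for n in (0, 1, 5)]

print(driver())  # [12, 13, 17]
```

Once the real units A and B exist, they can be passed in place of the stubs, and the driver is discarded along with the rest of the scaffolding.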
A software product may have many units. We test the units independently during unit testing after writing the required stubs and drivers. When we combine two units, we may like to test the interfaces between them. We combine two or more units because they share some relationship. This relationship is represented by an interface and is known as coupling. Coupling is the measure of the degree of interdependence between units. Two units with high coupling are strongly connected and thus dependent on each other. Two units with low coupling are weakly connected and thus have low dependency on each other. Hence, highly coupled units are heavily dependent on other units, and loosely coupled units are comparatively less dependent on other units, as shown in figure 8.3.
Coupling increases as the number of calls amongst units increases or the amount of shared data increases. A design with high coupling may have more errors. Loose coupling minimises interdependence, and some of the steps to reduce coupling are given below:
(i) Pass only data, not control information.
(ii) Avoid passing undesired data.
(iii) Minimise parent/child relationships between calling and called units.
(iv) Minimise the number of parameters to be passed between two units.
(v) Avoid passing complete data structures.
(vi) Do not declare global variables.
(vii) Minimise the scope of variables.
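The first guideline — pass only data, not control information — can be illustrated with a small, hypothetical Python example. The flag-driven version exhibits control coupling; splitting it into two data-coupled functions removes the flag:

```python
# Control coupling (discouraged): the caller passes a flag that steers
# the callee's internal logic.
def format_value_controlled(value, as_percent):
    if as_percent:
        return f"{value * 100:.1f}%"
    return f"{value:.3f}"

# Data coupling (preferred): each function receives only the data it needs;
# the caller chooses behaviour by choosing which function to call.
def format_as_percent(value):
    return f"{value * 100:.1f}%"

def format_as_decimal(value):
    return f"{value:.3f}"

print(format_as_percent(0.256))   # 25.6%
print(format_as_decimal(0.256))   # 0.256
```

In the second version, neither function's behaviour depends on a decision made inside the caller, so each can be tested and changed independently.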
The different types of coupling are data (best), stamp, control, external, common and content (worst). When we design test cases for interfaces, we should be clear about the coupling amongst units, and if it is high, a large number of test cases should be designed to test that particular interface.
A good design should have low coupling, and thus the interfaces become very important. When interfaces are important, their testing will also be important. In integration testing, we focus on the issues related to the interfaces among units. There are several integration strategies that really have little basis in a rational methodology; they are shown in figure 8.4. Top-down integration starts from the main unit and keeps on adding all the called units of the next level. This portion should be tested thoroughly by concentrating on interface issues. After completion of integration testing at this level, we add the next level of units, and so on, until we reach the lowest level units (the leaf units). There will not be any requirement for drivers; only stubs will be designed. In bottom-up integration, we start from the bottom (i.e. from the leaf units) and keep on adding higher level units until we reach the top (i.e. the main unit). There will not be any need for stubs. A sandwich strategy runs from the top and bottom concurrently, depending upon the availability of units, and may meet somewhere in the middle.
(b) Bottom-up integration (focus starts from edges i, j etc.)
(c) Sandwich integration (focus starts from a, b, i, j and so on)
Each approach has its own advantages and disadvantages. In practice, sandwich integration is more popular. It can be started as and when two related units are available. We may use any functional or structural testing techniques to design test cases.
The functional testing techniques are easy to use, with a specific focus on the interfaces, and some structural testing techniques may also be used. Whenever a new unit is added as part of integration testing, the software is considered a modified software. New paths are established, new input and output conditions may emerge, and new control logic may be invoked. These changes may also cause problems with units that previously worked perfectly.
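A minimal sketch of an interface test between two previously unit-tested units (the units `parse_record` and `grade` and their behaviour are invented for illustration): the integration test exercises the interface where one unit's output feeds the other's input, including a boundary value.

```python
def parse_record(line):
    # Unit A: turns "name,score" into a (name, int) pair.
    name, score = line.split(",")
    return name.strip(), int(score)

def grade(score):
    # Unit B: maps a numeric score to a pass/fail result.
    return "pass" if score >= 40 else "fail"

def process(line):
    # The interface under integration test: A's output feeds B's input.
    name, score = parse_record(line)
    return name, grade(score)

# Integration tests: exercise the interface, including the boundary value.
assert process("alice, 40") == ("alice", "pass")
assert process("bob, 39") == ("bob", "fail")
print("interface tests passed")
```

Even if both units passed their own unit tests, a mismatch at this interface (for example, one unit producing a string where the other expects an integer) would only surface here.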
We perform system testing after the completion of unit and integration testing. We test the complete software along with its expected environment. We generally use functional testing techniques, although a few structural testing techniques may also be used.
A system is defined as a combination of the software, hardware and other associated parts that together provide product features and solutions. System testing ensures that each system function works as expected, and it also checks for non-functional requirements like performance, security, reliability, stress, load etc. This is the only phase of testing which tests both the functional and non-functional requirements of the system. A team of test persons performs the system testing under the supervision of a test team leader. We also review all associated documents and manuals of the software. This verification activity is equally important and may improve the quality of the final product.
Utmost care should be taken with the defects found during the system testing phase. A proper impact analysis should be done before fixing a defect. Sometimes, if the system permits, instead of being fixed the defects are just documented and mentioned as known limitations. This may happen in a situation where fixing is very time consuming or is technically not possible in the present design. Progress through system testing also builds confidence in the development team, as this is the first phase in which the complete product is tested with a specific focus on the customer's expectations. After the completion of this phase, customers are invited to test the software.
This is an extension of system testing. When the testing team feels that the product is ready for the customer(s), they invite the customer(s) for a demonstration. After the demonstration of the product, the customer(s) may like to use the product for their satisfaction and confidence. This may range from ad hoc usage to systematic, well-planned usage of the product. This type of usage is essential before accepting the final product. The testing done for the purpose of accepting a product is known as acceptance testing. It may be carried out by the customer(s) or by persons authorised by the customer. The venue may be the developer's site or the customer's site, depending on the mutual agreement. Generally, acceptance testing is carried out at the customer's site. Acceptance testing is carried out only when the software is developed for a particular customer(s). If we develop software for anonymous customers (like operating systems, compilers, CASE tools etc.), then acceptance testing is not feasible. In such cases, potential customers are identified to evaluate the software, and this type of testing is called alpha/beta testing. Alpha testing is performed by some potential customers at the developer's site under the direction and supervision of testers, whereas beta testing is performed by many potential customers at their own sites without any involvement of the developers/testers.
Whenever a software fails, we would like to understand the reason(s) for the failure. After knowing the reason(s), we may attempt to find a solution and may make the necessary changes in the source code accordingly. These changes will hopefully remove the reason(s) for that software failure. The process of identifying and correcting a software error is known as debugging. It starts after receiving a failure report and completes after ensuring that all corrections have been rightly placed and the software does not fail with the same set of input(s). Debugging is quite a difficult phase and may become one of the reasons for software delays.
Every bug detection process is different, and it is difficult to know how long it will take to detect and fix a bug. Sometimes it may not be possible to detect a bug or, if a bug is detected, it may not be feasible to correct it at all. These situations should be handled very carefully. In order to remove bugs, the developer must first discover that a problem exists, then classify the bug, locate where the problem actually lies in the source code, and finally correct the problem.
Debugging is a difficult process. This is probably due to human involvement and psychology. Developers become uncomfortable after receiving any request for debugging. It is taken against their professional pride. Shneiderman [SHNE80] has rightly commented on the human aspect of debugging as:
"It is one of the most frustrating parts of programming. It has elements of problem solving or brain teasers, coupled with the annoying recognition that we have made a mistake. Heightened anxiety and the unwillingness to accept the possibility of errors increase the task difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected."
These comments explain the difficulty of debugging. Pressman [PRES97] has given some clues about the characteristics of bugs as:
"The debugging process attempts to match symptom with cause, thereby leading to error correction. The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located in another part. Highly coupled program structures may further complicate this situation. The symptom may also disappear temporarily when another error is corrected. In real time applications, it may be difficult to accurately reproduce the input conditions. In some cases, the symptom may be due to causes that are distributed across a number of tasks running on different processors."
There may be many reasons which make the debugging process difficult and time consuming. However, psychological reasons are more prevalent than technical reasons. Over the years, debugging techniques have substantially improved, and they will continue to develop significantly in the near future. Some debugging tools are available, and they minimise the human involvement in the debugging process. However, it is still a difficult area and consumes a significant amount of time and resources.
Debugging means detecting and removing bugs from programs. Whenever a program generates unexpected behaviour, it is known as a failure of the program. This failure may be mild, annoying, disturbing, serious, extreme, catastrophic or infectious. Depending on the type of failure, appropriate actions are required to be taken. The debugging process starts after receiving a failure report, either from the testing team or from the users. The steps of the debugging process are: replicate the bug, understand the bug, locate the bug, fix the bug and retest the program.
The first step in fixing a bug is to replicate it. This means recreating the undesired behaviour under controlled conditions. The same set of input(s) should be given under similar conditions to the program, and the program, after execution, should produce the same unexpected behaviour. If this happens, we are able to replicate the bug. In many cases, this is simple and straightforward. We execute the program on a particular input(s), or we press a particular button on a particular dialog, and the bug appears. In other cases, replication may be very difficult. It may require many steps or, in an interactive program such as a game, it may require precise timing. In the worst cases, replication may be nearly impossible. If we do not replicate the bug, how will we verify the fix? Hence, failure to replicate a bug is a real problem. If we cannot do it, any action which cannot be verified has no meaning, howsoever important it may be. Some of the reasons for non-replication of a bug are:
The user incorrectly reported the problem.
The program has failed due to hardware problems like memory overflow, poor network connectivity, network congestion, non-availability of system buses, deadlock conditions etc.
The program has failed due to system software problems. The reason may be the usage of a different type of operating system, compiler, device driver etc. Any of the above-mentioned reasons may cause the failure of the program, although there is no inherent bug in the program for this particular failure.
Our effort should be to replicate the bug. If we cannot do so, it is advisable to keep the matter pending until we are able to replicate it. There is no point in playing with the source code for a situation which is not reproducible.
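Replication can often be automated as a small script that re-runs the reported input under controlled conditions and checks whether the same failure appears. A hypothetical Python sketch (the `average` unit and its empty-list defect are invented for illustration):

```python
def average(values):
    # Hypothetical unit with a reported defect: it fails on an empty list.
    return sum(values) / len(values)

def replicate(report_input):
    # Re-run the reported input and record whether the same failure appears.
    try:
        average(report_input)
        return "not reproduced"
    except ZeroDivisionError:
        return "reproduced"

print(replicate([]))      # reproduced
print(replicate([2, 4]))  # not reproduced
```

Once such a script exists, the same reproduction can be re-run after the fix to verify that the failure no longer occurs.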
After replicating the bug, we may like to understand it. This means we want to find the reason(s) for the failure. There may be one or more reasons, and finding them is generally the most time consuming activity. We must understand the program very clearly in order to understand a bug. If we are the designers and source code writers, there may not be any problem in understanding the bug. If not, then we may have even more serious problems. If the readability of the program is good and the associated documents are available, we may be able to manage the problem. If the readability is not that good (which happens in many situations) and the associated documents are not proper, the situation becomes very difficult and complex. We may call the designers; if we are lucky, they may still be available with the company and we may reach them. Otherwise, what will happen? This is a real challenging situation, and in practice we must face it many times and struggle with source code and documents written by persons no longer available with the company. We may have to put in effort in order to understand the program. We may work from the first statement of the source code to the last statement, with a special focus on critical and complex areas of the source code. We should be able to know where to look in the source code for any particular activity. This should also tell us the general way in which the program works.
The worst cases are large programs written by many persons over many years. Such programs may lack consistency and may become poorly readable over time due to various maintenance activities. We should simply do our best and try to avoid making the mess worse. We may also take the help of source code analysis tools for analysing large programs. A debugger may also be helpful for understanding the program. A debugger executes a program statement by statement and may be able to show the dynamic behaviour of the program using breakpoints. Breakpoints are used to pause the program whenever needed. At every breakpoint, we may look at the values of variables, the contents of relevant memory locations, registers etc. The point is that in order to understand a bug, program understanding is essential. We should put in the desired effort before locating the reasons for the program failure. If we fail to do so, we may unnecessarily waste our effort, which is neither required nor desired.
There are two portions of the source code which need to be considered for locating a bug. The first portion of the source code is the one which causes the visible incorrect behaviour, and the second portion is the one which is actually incorrect. In most situations the two portions overlap, but sometimes they may be in different parts of the program. We should first find the source code which causes the incorrect behaviour. After knowing the incorrect behaviour and its related portion of the source code, we may find the portion of the source code which is at fault. Sometimes it may be very easy to identify the problematic source code (the second portion) by manual inspection. Otherwise, we may have to take the help of a debugger. If we have core dumps, a debugger can immediately identify the line which fails. A core dump is a printout of all registers and relevant memory locations. We should document them and preserve them for possible future use. We may set breakpoints while replicating the bug, and this process may also help us to locate the bug.
Sometimes simple print statements may help us to locate the sources of the bad behaviour. This simple technique gives us the status of various variables at different locations of the program for a specific set of inputs. A series of print statements may also portray the dynamics of variable changes. However, they are cumbersome to use in large programs. They may also generate superfluous data which may be difficult to analyse and handle.
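A common refinement is to guard the print statements with a flag, so the trace output can be switched off without deleting the statements. A minimal, hypothetical sketch (the `running_max` function is invented for illustration):

```python
DEBUG = True

def trace(label, value):
    # A print statement guarded by a flag, so the trace output can be
    # switched off without removing the statements from the source code.
    if DEBUG:
        print(f"TRACE {label} = {value!r}")

def running_max(values):
    best = values[0]
    trace("initial best", best)
    for v in values[1:]:
        if v > best:
            best = v
            trace("new best", best)
    return best

print(running_max([3, 1, 4, 1, 5]))  # 5, after a series of TRACE lines
```

The TRACE lines show how `best` evolves on each input, which is exactly the "dynamics of variable changes" mentioned above.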
Another useful approach is to add check routines in the source code to verify that data structures are in a valid state. Such routines may help us to narrow down where data corruption occurs. If the check routines are fast, we may want to enable them always. Otherwise, leave them in the source code and provide some mechanism to turn them on when we need them.
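A hypothetical sketch of such a check routine in Python: the invariant ("the list stays sorted") is validated on entry and exit, and a module-level flag provides the on/off mechanism described above:

```python
import bisect

CHECKS_ENABLED = True

def check_sorted(items):
    # Check routine: verifies the data-structure invariant still holds.
    if not CHECKS_ENABLED:
        return
    for a, b in zip(items, items[1:]):
        assert a <= b, f"invariant broken: {a} > {b}"

def insert_sorted(items, value):
    check_sorted(items)           # validate state on entry
    bisect.insort(items, value)
    check_sorted(items)           # validate state on exit
    return items

print(insert_sorted([1, 3, 7], 5))  # [1, 3, 5, 7]
```

If some other part of the program corrupts the list, the check routine fails at the first operation that observes the corruption, narrowing down where it occurred.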
The most useful and powerful way is to do source code inspection. This may help us to understand the program, understand the bug and finally locate the bug. A clear understanding of the program is an absolute requirement of any debugging activity. Sometimes, the bug may not be in the program at all. It may be in a library routine, in the operating system, or in the compiler. These cases are very rare, but there are chances, and if everything else fails, we may have to consider such possibilities.
After locating the bug, we may like to fix it. Fixing a bug is a programming exercise rather than a debugging activity. After making the necessary changes in the source code, we may have to retest the source code in order to ensure that the corrections have been rightly made at the right place. Every change may affect other portions of the source code as well. Hence, an impact analysis is required to identify the affected portions, and those portions should also be retested thoroughly. This retesting activity is called regression testing, which is an essential activity of any debugging process.
There are many popular debugging approaches, but the success of any approach depends upon the understanding of the program. If the persons involved in debugging understand the program correctly, they may be able to detect and remove the bugs.
This approach is dependent on the ability and experience of the debugging persons. After getting a failure report, it is analysed and the program is inspected. Based on experience and intelligence, and also using a hit and trial technique, the bug is located and a solution is found. This is a slow approach and becomes impractical in large programs.
This technique can be used effectively in small programs. We start at the point where the program gives an incorrect result, such as an unexpected output being printed. After analysing the output, we trace backward through the source code manually until a cause of the failure is found. The source code from the statement where the symptom of failure is found to the statement where the cause of failure is found is analysed properly. This technique brackets the location of the bug in the program. Subsequent careful study of the bracketed location may help us to rectify the bug. An obvious variation of backtracking is forward tracking, where we use print statements or other means to examine a succession of intermediate results to determine at what point the result first became wrong. These techniques (backtracking and forward tracking) may be useful only when the size of the program is small. As the program size increases, it becomes difficult to manage them.
This is probably the most common and efficient approach to identify the cause of a software failure. In this approach, memory dumps are taken, run time traces are invoked, and the program is loaded with print statements. When this is done, we may find a clue in the output produced which leads to identification of the cause of the bug. Memory traces are similar to memory dumps, except that the printout contains only certain memory and register contents, and printing is conditional on some event occurring. Typical conditional events are entry, exit or use of one of the following:
(a) A particular subroutine, statement or database
(b) Communication with I/O devices
(c) The value of a variable
(d) Timed actuations (periodic or random) in real-time systems.
A special problem with trace programs is that the conditions are entered in the source code, so any change requires recompilation. A huge amount of data is generated; although it may help to identify the cause, it may be difficult to manage and analyse.
Cause elimination is manifested by induction or deduction, and it also introduces the concept of binary partitioning. Data related to the error occurrence are organised to isolate potential causes. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each. Hence, we may rule out causes one by one until only a single one remains for validation. The cause is then identified, properly fixed and retested accordingly.
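The binary partitioning idea can be sketched as a binary search over an ordered set of candidates — for example, a sequence of builds in which the defect first appeared at some unknown point. The build numbers and the `is_bad` predicate below are invented for illustration:

```python
def first_bad(versions, is_bad):
    # Binary partitioning: assuming the sequence runs good...good bad...bad,
    # halve the search range until the first bad version is isolated.
    lo, hi = 0, len(versions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid
        else:
            lo = mid + 1
    return versions[lo]

versions = list(range(1, 101))      # hypothetical build numbers 1..100
is_bad = lambda v: v >= 73          # pretend the defect appeared in build 73
print(first_bad(versions, is_bad))  # 73
```

Each test eliminates half of the remaining candidate causes, so 100 builds need only about seven tests instead of up to 100.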
Many debugging tools are available to support the debugging process. Some of the manual activities can also be automated using a tool. We may need a tool which executes every statement of a program one at a time and prints the values of variables after executing every statement. This frees us from inserting print statements in the program manually. Thus, run time debuggers are designed. In principle, a run time debugger is nothing more than an automatic print statement generator. It allows us to trace the program path and the variables without having to put print statements in the source code. Almost every compiler available in the market comes with a run time debugger. It allows us to compile and run the program with a single compilation, rather than modifying the source code and recompiling as we try to narrow down the bug.
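The idea that a run time debugger is essentially an automatic print statement generator can be sketched in Python with `sys.settrace`, which reports each executed line of a chosen function together with its local variables (the `accumulate` function is invented for illustration):

```python
import sys

def variable_tracer(frame, event, arg):
    # A minimal sketch of what a run time debugger automates: after each
    # traced line, report the function's local variables.
    if event == "line" and frame.f_code.co_name == "accumulate":
        print(f"line {frame.f_lineno}: locals = {frame.f_locals}")
    return variable_tracer

def accumulate(values):
    total = 0
    for v in values:
        total += v
    return total

sys.settrace(variable_tracer)
result = accumulate([1, 2, 3])
sys.settrace(None)
print("result:", result)  # result: 6
```

A real debugger adds breakpoints, stepping and inspection on demand, but the underlying mechanism — hooking program execution and reporting state — is the same.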
Run time debuggers may detect bugs in the program, but may fail to find the causes of failures. We may need a special tool to find the causes of failures and correct the bug. Some errors, like memory corruption and memory leaks, may be detected automatically. This automation changed the debugging process, because it automated the activity of finding the bug. A tool may detect an error, and our job is simply to fix it. These tools are known as automated debuggers and come in a number of varieties. The simplest ones are just a library of functions that can be linked into a program. When the program executes and these functions are called, the debugger checks for memory corruption; if it finds any, it reports it.
Compilers are also used for finding bugs. Of course, they check only syntax errors and particular types of run time errors. Compilers should give proper and detailed error messages, which will be of great help to the debugging process. Compilers may give all such information in the attribute table, which is printed along with the listing. The attribute table contains various levels of warnings which have been picked up by the compiler scan. Hence, compilers now come with error detection features, and there is no excuse for designing compilers without meaningful error messages.
We may apply a wide variety of tools like run time debuggers, automated debuggers, automated test case generators, memory dumps, cross reference maps, compilers etc. during the debugging process. However, tools are not a substitute for careful examination of the source code after thorough understanding.
The most significant effort-consuming task in software testing is designing the test cases. The execution of the test cases may not require much time and resources. Hence, the designing part is more significant than the execution part. Both parts are normally handled manually. Do we really need a tool? If yes, where and when can we use it — in the first part (designing of test cases), the second part (execution of test cases), or both? Software testing tools may be used to reduce the time of testing and to make testing as easy and pleasant as possible. Automated testing may be carried out without human involvement. This may help us in areas where a similar data set is to be given as input to the program again and again. A tool may do the repeated testing, unattended, during nights or weekends, without human intervention.
Many non-functional requirements may be tested with the help of a tool. We may want to test the performance of the software under load, which may require many computers, manpower and other resources. A tool may simulate multiple users on one computer, and also a situation where many users are accessing a database simultaneously.
There are three broad categories of software testing tools, i.e. static, dynamic and process management. Most of the tools fall clearly into one of these categories, but there are a few exceptions, like mutation analysis systems, which fall into more than one category. A wide variety of tools are available with different scope and quality, and they assist us in many ways.
Static software testing tools are those that perform analysis of the programs without executing them at all. They may also find the source code which will be hard to test and maintain. As we all know, static testing is about prevention and dynamic testing is about cure. We should use both kinds of tools, but prevention is always better than cure. These tools may find more bugs as compared to dynamic analysis tools (where we execute the program). There are many areas for which effective static testing tools are available, and they have shown their results for the improvement of the quality of the software.
The complexity of a program plays a very important role in determining its quality. A popular measure of complexity is the cyclomatic complexity, as discussed in chapter 4. This gives us an idea about the number of independent paths in the program and is dependent upon the number of decisions in the program. A higher value of cyclomatic complexity may indicate poor design and risky implementation. This may also be applied at module level, and modules with higher cyclomatic complexity values may either be redesigned or tested very thoroughly. There are other complexity measures also which are used in practice, like the Halstead software size measures, the knot complexity measure etc. Tools are available which are based on any of these complexity measures. These tools may take the program as input, process it and produce a complexity value as output. This value may be an indicator of the quality of the design and implementation.
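A rough sketch of such a tool in Python: it approximates McCabe's cyclomatic complexity as one plus the number of decision points found in the parse tree. This simplified counting rule is an assumption for illustration; real tools differ in exactly what they count.

```python
import ast

def cyclomatic_complexity(source):
    # Approximate cyclomatic complexity: one plus the number of decision
    # points (if/elif, loops, exception handlers, boolean operators).
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return decisions + 1

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for _ in range(n):
        pass
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 4
```

The `if`, the `elif` and the `for` each add one decision, giving 3 + 1 = 4 independent paths — the kind of per-module value the text describes a complexity tool reporting.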
These tools find syntax and semantic errors. Although the compiler may detect all syntax errors during compilation, early detection of such errors may help to minimise other associated errors. Semantic errors are very significant, and compilers are helpless in finding such errors. There are tools in the market that may analyse the program and find these errors. Non-declaration of a variable, double declaration of a variable, divide by zero issues, unspecified inputs and non-initialisation of a variable are some of the issues which may be detected by semantic analysis tools. These tools are language dependent and may parse the source code, maintain a list of errors and provide implementation information. The parser may find semantic errors as well as make inferences about what is syntactically correct.
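A toy semantic check of this kind can be written with Python's `ast` module. This sketch flags variables that are assigned but never read — a far simpler analysis than commercial tools perform, included only to illustrate the idea:

```python
import ast

def unused_names(source):
    # Toy semantic analysis: report names that are assigned but never read.
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    return sorted(assigned - used)

sample = "x = 1\ny = 2\nprint(x)"
print(unused_names(sample))  # ['y']
```

The check runs without executing the analysed program, which is exactly what distinguishes static tools from dynamic ones.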
These tools are language dependent; they take the program as input and convert it into its flow graph. The flow graph may be used for many purposes, like complexity calculation, path identification, generation of definition-use paths, program slicing etc. These tools help us to understand the risky and poorly designed areas of the source code.
These tools may help us to understand unfamiliar source code. They may also identify dead source code, duplicate source code and areas that may require special attention and should be reviewed seriously.
A source code inspector does the simple job of enforcing standards in a uniform way across many programs. Inspectors check the programs and force us to follow the guidelines of good programming practices. Although they are language dependent, most of the guidelines of good programming practices are similar across languages. These tools are simple and may find many critical and vulnerable areas of the program. They may also suggest possible changes in the source code for improvement.
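A toy inspector of this kind might enforce just two rules, a maximum line length and no hard tabs (a sketch only; real inspectors check many more guidelines):

```python
def inspect_source(lines, max_length=79):
    """Enforce two simple coding standards uniformly: no hard tabs
    and a maximum line length. Returns (line number, finding) pairs."""
    findings = []
    for number, line in enumerate(lines, start=1):
        if "\t" in line:
            findings.append((number, "tab used for indentation"))
        if len(line.rstrip("\n")) > max_length:
            findings.append((number, "line too long"))
    return findings

sample = ["def double(x):\n", "\treturn x * 2\n"]
print(inspect_source(sample))  # [(2, 'tab used for indentation')]
```

Because the rules live in one place, every program in the organisation is judged by the same yardstick, which is exactly the uniformity an inspector provides.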
Dynamic software testing tools select test cases and execute the program to obtain the results. They also analyse the results and find reasons for failures (if any) of the program. They are used after the implementation of the program and may also test non-functional requirements like efficiency, performance, reliability etc.
These tools are used to find the level of coverage of the program after executing the selected test cases. They give us an idea about the effectiveness of the selected test cases. They point out the unexecuted portions of the source code and force us to design special test cases for those portions. There are several levels of coverage like statement coverage, branch coverage, condition coverage, multiple condition coverage, path coverage etc. We may like to ensure that, as a minimum, every statement is executed at least once and every outcome of a branch statement is exercised at least once. This minimum level of coverage may be shown by a tool after executing a suitable set of test cases. Tools are available for checking statement coverage, branch coverage, condition coverage, multiple condition coverage and path coverage. The profiler displays the number of times each statement is executed. We may study the output to learn which portions of the source code are not executed, and design test cases for those portions in order to achieve the desired level of coverage. Some tools are also available to check whether the source code is as per standards or not, and to report the number of commented lines, number of non-commented lines, number of local variables, number of global variables, duplicate declarations of variables etc. Some tools check the portability of the source code. Source code is not portable if some operating system dependent features are used. Some such tools are AutomatedQA's AQtime, Parasoft's Insure++ and Telelogic's Logiscope.
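The statement-coverage idea can be demonstrated with Python's tracing hook: run the test cases, record which lines of the function under test actually execute, and see what remains uncovered. This is only a miniature sketch of what coverage tools do internally:

```python
import sys

def executed_lines(func, test_inputs):
    """Record which line offsets of `func` run for the given inputs,
    using sys.settrace as a miniature statement-coverage tool."""
    code, hit = func.__code__, set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            hit.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_inputs:
            func(*args)
    finally:
        sys.settrace(None)
    return hit

def absolute(x):      # offset 0
    if x < 0:         # offset 1
        return -x     # offset 2
    return x          # offset 3

# A positive input alone never executes the negative branch (offset 2),
# so a special test case is needed to reach full statement coverage.
print(executed_lines(absolute, [(5,)]))
```

Adding the input (-3,) to the test set covers the remaining line, which is precisely how a coverage report forces us to design further test cases.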
We may like to test the performance of the software under stress / load. For example, if we are testing a result management software, we may observe its performance when 10 users are entering data and also when 100 users are entering data simultaneously. Similarly, we may like to test a website with 10 users, 100 users or 1000 users working concurrently. This may require huge resources, and sometimes it may not be possible to create such a real-life environment for testing in the organisation. A tool may help us to simulate such situations and test them under various stress conditions. This is the most popular area for the use of a tool, and many popular tools are available in the market. These tools simulate multiple users on a single computer. We may also see the response time of a database when 10 users access it, when 100 users access it and when 1000 users access it concurrently. Will the response time be 10 seconds, 100 seconds or even 1000 seconds? No customer would like to tolerate a response time in minutes. Performance testing is also known as load or stress testing. Some of the popular tools are Mercury Interactive's LoadRunner, Apache's JMeter, Segue Software's Silk Performer, IBM Rational's Performance Tester, Compuware's QALoad and AutoTester's AutoController.
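The core idea, many simulated users issuing transactions concurrently while response times are recorded, fits in a short sketch. Here the sleep merely stands in for a real database or web request:

```python
import threading
import time

def transaction():
    """Stand-in for one user's request; a real load tool would hit
    the actual database or web server here."""
    time.sleep(0.01)

def load_test(users):
    """Run `users` concurrent simulated sessions and return the
    worst observed response time in seconds."""
    times, lock = [], threading.Lock()

    def one_user():
        start = time.perf_counter()
        transaction()
        with lock:
            times.append(time.perf_counter() - start)

    threads = [threading.Thread(target=one_user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(times)

for n in (10, 100):
    print(f"{n} users: worst response {load_test(n):.3f}s")
```

Commercial tools add scripting of realistic user behaviour, distributed load generation and detailed reporting on top of this basic pattern.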
These tools are used to test the software on the basis of its functionality, without considering the implementation details. They may also generate test cases automatically and execute them without human intervention. Many combinations of inputs may be considered for generating test cases automatically, and these test cases may be executed, thus relieving us from repetitive testing activities. Some of the popular available tools are IBM Rational's Robot, Mercury Interactive's WinRunner, Compuware's QACenter and Segue Software's SilkTest.
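Generating test cases from combinations of inputs and executing them without human intervention can be sketched as below, here against the classic triangle-classification program:

```python
import itertools

def triangle_type(a, b, c):
    """Program under test: classify a triangle by its side lengths."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Generate every combination of the chosen side values automatically
# and execute the program under test for each one.
values = (1, 2, 3)
cases = list(itertools.product(values, repeat=3))
results = {case: triangle_type(*case) for case in cases}

print(len(cases))          # 27 generated test cases
print(results[(2, 2, 2)])  # equilateral
print(results[(1, 2, 3)])  # not a triangle (1 + 2 <= 3)
print(results[(2, 2, 3)])  # isosceles
```

An oracle (expected output for each case) would complete the picture; a regression tool would then re-run all 27 cases after every change, unattended.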
These tools help us to manage and improve the software testing process. We may create a test plan, allocate resources, prepare a schedule for unattended testing and track the status of a bug using such tools. They improve many aspects of testing and make it a disciplined process. Some of the tools are IBM Rational's Test Manager, Mercury Interactive's Test Director, Segue Software's Silk Plan Pro and Compuware's QA Director. Some configuration management tools are also available which may help in bug tracking, bug management and correction, like IBM Rational Software's ClearDDTS, Bugzilla and Samba's Jitterbug.
Selection of a tool depends upon the application, the objectives, the quality requirements and the trained manpower available in the organisation. Tools help us to make testing effective, reliable and performance oriented.
It is a document that specifies a systematic approach to planning the testing activities of the software. If we carry out testing as per a well-designed, systematic test plan document, the effectiveness of testing improves, which may in turn help to produce a good quality product. The test plan document may force us to maintain a certain level of standards and a disciplined approach to testing. Many software test plan documents are available, but the most popular one is the IEEE Standard for Software Test Documentation (Std 829-1998). This document covers the scope, schedule, milestones and purpose of the various testing activities. It also specifies the items and features to be tested, and the features which are not to be tested. Pass/fail criteria, roles and responsibilities of the persons involved, and associated risks and constraints are also defined in this document. The structure of the IEEE 829-1998 test plan document is given in table 8.1 [IEEE98c]. All ten sections have a specific purpose. Some changes may be made as per the requirements of the project. The test plan document is prepared after the completion of the SRS document and may be modified along with the progress of the project. We should clearly specify the test coverage criteria and the testing techniques to achieve those criteria. We should also describe who will perform testing, at what level and when. Roles and responsibilities of testers must be clearly documented.
Table 8.1: IEEE standard for software test documentation (829-1998), with remarks
1. Introduction
1.1 Objectives
1.2 Testing strategy
1.3 Scope
1.4 Reference material
1.5 Definitions and acronyms
Remarks: Overview of the project
2. Test items
(A) Software documents to be tested
2.1 Requirements specification
2.2 Design specification
2.3 Users guide
2.4 Operations guide
2.5 Installation guide
2.6 Other available documents
(B) Source code to be tested
2.7 Verification activities
2.8 Validation activities
Remarks: Documentation and source code to be tested
3. Features to be tested
Remarks: Include all features and combinations of features
4. Features not to be tested
Remarks: List out such features along with reasons
5. Approach
5.1 Unit testing
5.2 Integration testing
5.3 System testing
5.4 Acceptance testing
5.5 Regression testing
5.6 Any other testing
Remarks: Describe the overall approach of testing
6. Pass / Fail criteria
6.1 Suspension criteria
6.2 Resumption criteria
6.3 Acceptance criteria
Remarks: Criteria to be used for pass / fail
7. Testing process
7.1 Test deliverables
7.2 Testing tasks
7.3 Responsibility
7.4 Resources
7.5 Schedules
Remarks: Specify testing processes
8. Environmental requirements
8.1 Hardware
8.2 Software
8.3 Security
8.4 Tools
8.5 Publications
8.6 Risks and assumptions
Remarks: Identify environmental requirements for testing
9. Change management procedures
Remarks: Identify change procedures
10. Plan approvals
Remarks: Identify plan approvers. They have to sign the document after approval.
Note: Select the most appropriate answer for each of the following questions.
(a) To find faults in the system
(b) To ensure the correctness of the system
(c) To check the system from a business perspective
(d) To demonstrate the effectiveness of the system
(a) Performance, load and stress testing
(b) Bottom up integration testing
(c) Usability testing
(d) Business perspective testing
(a) Top down
(b) Bottom up
(d) Design based
(a) Complexity analysis tools
(b) Coverage analysis tools
(c) Syntax and semantic analysis tools
(d) Code inspectors
(a) Flow graph generator tools
(b) Performance testing tools
(c) Regression testing tools
(d) Coverage analysis tools
(a) Mercury Interactive's LoadRunner
(b) Apache's JMeter
(c) IBM Rational's Performance Tester
(d) Parasoft's Insure++
(a) IBM Rational's Robot
(b) Compuware's QALoad
(c) AutomatedQA's AQtime
(d) Telelogic's Logiscope
(a) IBM Rational Test Manager
(b) Mercury Interactive's Test Director
(c) Segue Software's Silk Plan Pro
(d) All the above
(a) AutomatedQA's AQtime
(b) Parasoft's Insure++
(c) Telelogic's Logiscope
(d) Apache's JMeter
(a) Mercury Interactive's WinRunner
(b) IBM Rational's Robot
(d) Segue Software's SilkTest
(a) Integration testing
(b) Acceptance testing
(c) Regression testing
(d) System testing
(a) Unit testing
(b) Integration testing
(c) System testing
(d) Acceptance testing
(a) Pass only control information, not data
(b) Avoid passing undesired data
(c) Do not declare global variables
(d) Minimize the scope of variables
(a) Data coupling
(b) Stamp coupling
(c) Control coupling
(d) Common coupling
(a) Stamp coupling
(b) Content coupling
(c) Common coupling
(d) Control coupling
(a) Bottom up integration
(b) Top down integration
(c) Sandwich integration
(d) None of the above
(a) Replication of the bug
(b) Knowledge of the bug
(c) Selection of a bug tracking tool
(d) Fix the bug and retest the program
(a) Brute force
(c) Cause elimination
(d) Bug multiplication
(a) Cause elimination
(b) Brute force
(d) Trial and error method
(a) Run time debugger
(c) Memory dumps
(d) Samba's Jitterbug
(a) System testing
(b) Acceptance testing
(c) Unit testing
(d) (a) and (b) both
(a) Symptom with cause
(b) Cause with inputs
(c) Symptoms with outputs
(d) Inputs with outputs
(a) After their execution
(b) Without their execution
(c) During their execution
(d) None of the above
1.1 What are the various levels of testing? Explain the objectives of each level. Who should perform testing at each level and why?
1.2 Is unit testing possible or even desirable in all circumstances? Justify your answer with examples.
1.3 What is scaffolding? Why do we use stubs and drivers during unit testing?
1.4 What are the various steps to reduce the coupling amongst various units? Discuss the different types of coupling from the best coupling to the worst coupling.
1.5 Compare the top down and bottom up integration testing approaches to test a program.
1.6 What is debugging? Discuss two debugging techniques. Write the features of these techniques and compare their features.
1.7 Why is debugging so difficult? What are the various steps of the debugging process?
1.8 What are the popular debugging approaches? Which one is most popular and why?
1.9 Explain the significance of debugging tools. List some commercially available debugging tools.
1.10 (a) Discuss the static and dynamic testing tools with the help of examples.
(b) Discuss some of the areas where testing cannot be performed effectively without the help of a testing tool.
1.11 Write short notes on:
(i) Coverage analysis tools
(ii) Performance testing tools
(iii) Functional / regression testing tools
1.12 What are non-functional requirements? How may we use software tools to test these requirements? Discuss some popular tools along with their areas of application.
1.13 Explain stress, load and performance testing.
1.14 Differentiate between the following:
(a) Integration testing and system testing
(b) System testing and acceptance testing
(c) Unit testing and integration testing
(d) Testing and debugging
1.15 What are the objectives of process management tools? Describe the process of selection of such a tool. List some commercially available process management tools.
1.16 What is the use of a software test plan document in testing? Is there any standard available?
1.17 Discuss the outline of the test plan document as per IEEE Std 829-1998.
1.18 Consider the problem of the URS given in chapter 5, and design a software test plan document.
1.19 Which is the most popular level of testing of a software in practice and why?
1.20 Which is the most popular integration testing approach? Discuss with suitable examples.