Software Process and Project Metrics are quantitative measures that enable Software Engineers to gain insight into the efficacy of the Software Process and the Projects that are conducted using the Process as a framework.
Basic ''Quality and Productivity Data'' are gathered. These data are then analyzed, compared against past averages, and assessed to determine whether quality and productivity improvements have occurred.
Metrics are also used to pinpoint problem areas so that remedies can be developed and the Software Process can be improved.
Software Measures are often collected by Software Engineers/Practitioners. Software Metrics are analyzed and assessed by Software Managers.
If you don't measure, your judgment can be based only on subjective assessments.
With measurement, trends (either good or bad) can be spotted, better estimates can be made, and true improvement can be achieved over time.
Begin by defining a limited set of Process and Project measures that are easy to collect.
(These measures are often normalized using either Size- or Function-oriented metrics.)
The result is analyzed and compared against ''Past Averages'' for similar Projects performed within the organization.
Trends are assessed and conclusions are drawn.
WHAT IS WORK PRODUCT?
A set of Software Metrics that provides insight into the Process and understanding of the Project.
Within the context of Software Engineering we are concerned with:-
What was software development productivity on past projects?
What was the quality of the software produced?
How can past productivity and quality data be extrapolated to the present?
How can it help us plan and estimate more effectively?
There are four reasons for measuring Software Processes, Products and Resources:-
To Characterize, to gain an understanding of Processes, Products, Resources and Environments, and to establish Baselines for comparison with future assessments.
To Evaluate, to determine status with respect to Plans.
To Predict, by gaining understanding of relationships among Processes and Products and building models of these relationships.
To Improve, by identifying roadblocks, root causes, inefficiencies, and other opportunities for improving Product Quality and Process Performance.
Process Metrics are collected across all Projects and over long periods of time. Their intent is to provide a set of Process Indicators that lead to long-term Software Process improvement.
Project Metrics enable a Software Project Manager to:
Assess the status of an ongoing Project,
Track potential Risks,
Uncover problem areas before they "go critical",
Adjust workflow or tasks,
Evaluate the Project Team's ability to control the Quality of Software Work Products.
Measures that are collected by a Project Team and converted into Metrics for use during a Project can also be transmitted to those with responsibility for Software Process improvement. For this reason, many of the same Metrics are used in both the Process and Project domains.
The only rational way to improve any Process is to:-
- Measure specific attributes of the Process,
- Develop a set of meaningful Metrics based on these attributes,
- Use the Metrics to provide Indicators that will lead to a strategy for improvement.
It is important to note that the Software Process is only one of a number of ''Controllable Factors'' in improving Software Quality and Organizational Performance.
The Process sits at the heart of a triangle connecting three factors that have a profound influence on Software Quality and Organizational Performance.
The Skill and Motivation of people has been shown to be the single most influential factor in Quality and Performance.
The Complexity of the Product can have a significant impact on Quality and Team Performance.
The Technology that populates the Process also has an impact.
The Process Triangle exists within a circle of environmental conditions that includes the Development Environment (e.g., CASE Tools), Business Conditions and Customer Characteristics.
We measure the efficacy of a Software Process indirectly. That is, we derive a set of Metrics based on the outcomes that can be derived from the Process:
- Errors uncovered before release of the Software,
- Defects delivered to and reported by end-users,
- Work Products delivered (Productivity),
- Human Effort expended,
- Calendar Time expended,
- Schedule conformance,
- Other measures.
We also derive Process Metrics by measuring the characteristics of specific Software Engineering tasks. (e.g., We may measure the ''Effort'' and time spent performing the Generic Software Engineering Activities.)
There are "Private and Public" uses for different types of Process data. Since it is natural that individual Software Engineers might be sensitive to the use of metrics collected on an individual basis, these data should be Private to the individual and serve as an indicator for that individual only.
(e.g., Defect Rates by individual, Defect Rates by Software component, and errors found during development.)
Some Process Metrics are private to the Software Project Team but public to all Team Members.
(e.g., Defects reported for major software functions, errors found during Formal Technical Reviews,
and Lines of Code (LOC) or Function Points (FP) per component or function.)
These data are reviewed by the Team to uncover Indicators that can improve Team performance.
Public Metrics generally assimilate information that originally was private to individuals and teams. Project-level Defect Rates, Effort, Calendar Times and related data are collected and evaluated in an attempt to uncover Indicators that can improve Organizational Process Performance.
Software Process Metrics can provide significant benefit as an Organization works to improve its overall level of Process Maturity. However, like all Metrics, these can be misused, creating more problems than they solve.
Use common sense and organizational sensitivity when interpreting metrics data.
Provide regular feedback to the individuals and teams who collect measures and metrics.
Do not use metrics to appraise individuals.
Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
Never use metrics to threaten individuals or teams.
Do not consider metrics data indicating a problem area as negative. Treat such data as an indicator for Process improvement.
Don't obsess about a single metric to the exclusion of other important metrics.
As an organization becomes more comfortable with the collection and use of Process Metrics, the derivation of simple Indicators gives way to a more rigorous approach called "STATISTICAL SOFTWARE PROCESS IMPROVEMENT" (SSPI).
In essence, SSPI uses Software Failure Analysis to collect information about the Errors and Defects encountered as an Application, System or Product is developed and used.
Unlike Software Process Metrics, which are used for Strategic purposes, Software Project Metrics are used for Tactical purposes.
Project Metrics and the 'Indicators' derived from them are used by a Project Manager and a Software Team to adapt Project workflow and Technical Activities.
Metrics gathered from past Projects are used as a basis from which
''Effort and Time Estimates'' are made for current Software work.
As a Project proceeds, Measures of Effort and Calendar Time expended (Actual Times) are compared to the original Estimates. The Project Manager uses these data to Monitor and Control progress.
As technical work commences, other Project Metrics begin to have significance.
Such metrics include:
- Production Rates (represented in terms of Models created),
- Review Hours,
- Function Points,
- Delivered Lines of Source Code (LOC).
In addition, ''Errors uncovered during each Software Engineering task'' are tracked.
As the Software evolves from Requirements into Design, Technical Metrics are collected to assess Design Quality and to provide Indicators that influence the approach taken to Code Generation and Testing.
a) Project Metrics are used to Minimize the Project Development Schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks.
b) Project Metrics are used to Assess Product Quality on an ongoing basis and, when necessary, to modify the technical approach to improve Quality.
- Cost and Effort applied,
- Lines of Code (LOC) produced,
- Execution Speed,
- Memory Size,
- Defects reported over a set period of time.
Direct Measures such as Cost, Effort and LOC are easy to collect, as long as specific conventions for Measurement are established in advance.
Indirect Measures such as the Quality and Functionality of the Software, or its Efficiency or Maintainability, are more difficult to assess and can be measured only indirectly.
Product Metrics are Private to an individual and are often combined to develop Project Metrics that are Public to the Software Team.
Project Metrics are then consolidated to create Process Metrics that are Public to the Software Organization as a whole. In order to do this we should ''Normalize the Measurements'', so that we develop Metrics that enable us to compare them against ''Organizational Averages''.
Size-Oriented Metrics are derived by normalizing Quality and/or Productivity Measures by considering the Size of the Software that has been produced.
The table lists three Software Development Projects, namely Projects A, B and C, that have been completed over the past few years, together with the corresponding Measures for each Project.
For Project A: 12,100 Lines of Code were developed with 24 Person-months of Effort at a Cost of $168,000. (It should be noted that Effort and Cost include all Software Engineering activities: Analysis, Design, Code and Test.)
- 365 Pages of Documentation were produced,
- 134 Errors were recorded before the Software was released,
- 29 Defects were encountered after the release of the Software to the Customer within the first year of operation,
- 3 People worked on the development of Project A.
From the rudimentary data contained in the Metrics table, a set of simple Size-Oriented Metrics can be developed for each Project:
- Errors / KLOC
- Defects / KLOC
- $ / KLOC
- Pages of Documentation / KLOC
- Errors / Person-Month
- LOC / Person-Month
- $ / Page of Documentation
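Using the Project A figures above, each of these size-oriented metrics is a simple ratio, with size normalized to thousands of lines of code (KLOC). A minimal sketch:

```python
# Size-oriented metrics for Project A (figures from the table above).
loc = 12_100           # lines of code delivered
effort_pm = 24         # person-months of effort
cost = 168_000         # total cost in $
doc_pages = 365        # pages of documentation
errors = 134           # errors recorded before release
defects = 29           # defects reported in the first year after release

kloc = loc / 1000      # normalize size to thousands of LOC

metrics = {
    "Errors / KLOC":         errors / kloc,
    "Defects / KLOC":        defects / kloc,
    "$ / KLOC":              cost / kloc,
    "Pages of Doc / KLOC":   doc_pages / kloc,
    "Errors / Person-Month": errors / effort_pm,
    "LOC / Person-Month":    loc / effort_pm,
    "$ / Page of Doc":       cost / doc_pages,
}

for name, value in metrics.items():
    print(f"{name:24s} {value:10.2f}")
```

For Project A this yields roughly 11.1 Errors/KLOC, 2.4 Defects/KLOC and about 504 LOC per Person-Month.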
Size-Oriented Metrics are not universally accepted as the best way to measure the Software Development Process. The arguments of Proponents (supporters) and Opponents of Size-Oriented Metrics are as follows:-
Proponents claim that LOC is an "Artifact" of all software development projects that can be easily counted, that many existing software estimation models use LOC or KLOC as input, and that a large body of literature and data predicated on LOC already exists.
Opponents argue that most of the controversy swirls around the use of LOC as a key measure.
According to this argument, LOC measures are Programming-Language dependent; when Productivity is considered, they penalize well-designed but shorter Programs; they cannot easily accommodate Non-procedural Languages; and their use in Project Estimation requires a level of detail that may be difficult to achieve. (i.e., The Planner must estimate the LOC to be produced long before Analysis and Design have been completed.)
The Function Point Measure is as controversial as LOC Measures. The arguments of Proponents and Opponents are as follows:
PROPONENTS claim that Function Points (FP) are Programming-Language independent, making them ideal for applications using Procedural (conventional) and Non-procedural Programming Languages, and that FP is based on data that are more likely to be known early in the evolution of a Project, making FP more attractive as an estimation approach.
OPPONENTS claim that the method requires some ''sleight of hand'' in that the computation is based on subjective rather than objective data, that counts of the Information Domain can be difficult to collect after the fact, and that FP has no direct physical meaning - it is only a number.
Function Points (FP) are derived using an Empirical Relationship based on countable (Direct) measures of the Software's Information Domain and assessments of Software Complexity.
Function Points (FP) are computed by completing counts of ''Five Information Domain Characteristics'' that are identified and placed in a table.
Each User Input that provides distinct application-oriented data to the Software is counted.
(Inputs should be distinguished from Inquiries, which are counted separately.)
Output refers to Reports, Screens, Error Messages, etc. (Individual data items within a report are not counted separately.)
Usually each (on-line) Inquiry generates an immediate On-line Output; each is counted.
Each ''Logical'' Master File (a logical grouping of data that may be part of a large database or a separate file) is counted.
All machine-readable interfaces (i.e., data files on storage media) that are used to transmit
information to another System are counted.
No. of User Inputs
No. of User Outputs
No. of User Inquiries
No. of Files
No. of External Interfaces
Organizations that use the FP Method develop criteria for deciding whether a particular entry is Simple, Average or Complex. Nevertheless, the determination of complexity is somewhat subjective. A Complexity Weighting Value is associated with each Count.
Complexity Adjustment Factor:
The Count Total is the sum of all weighted (FP) entries in the table.
The (Fi) values are Complexity Adjustment Values, based on responses to the following questions.
Each of these questions is answered using a complexity value on a scale of 0 to 5.
1. Backup and Recovery ? 4
2. Data Communications ? 2
3. Distributed Processing ? 0
4. Performance Critical ? 4
5. Existing Operating Environment ? 3
6. On-line Data Entry ? 4
7. Input Transactions over Multiple Screens ? 5
8. On-line Updates ? 3
9. Information Domain Values Complex ? 5
10. Internal Processing Complex ? 5
11. Code Designed for Reuse ? 4
12. Conversion / Installation in Design ? 3
13. Multiple Installations ? 5
14. Application Designed for Change ? 5
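The standard FP relationship combines the weighted domain counts with the fourteen adjustment values: FP = Count Total x (0.65 + 0.01 x Sum(Fi)). A sketch using the fourteen responses listed above; the domain counts are hypothetical, and the weights shown are the usual "average" complexity weights (inputs 4, outputs 5, inquiries 4, files 10, interfaces 7):

```python
# Function Point computation: FP = count_total * (0.65 + 0.01 * sum(Fi)).
# Domain counts below are hypothetical; weights are the conventional
# "average" complexity weights for each information domain value.
domain_counts = {                # (count, average weight)
    "user inputs":         (32, 4),
    "user outputs":        (60, 5),
    "user inquiries":      (24, 4),
    "files":               ( 8, 10),
    "external interfaces": ( 2, 7),
}

# The fourteen complexity adjustment values (Fi) from the list above.
fi = [4, 2, 0, 4, 3, 4, 5, 3, 5, 5, 4, 3, 5, 5]

count_total = sum(count * weight for count, weight in domain_counts.values())
fp = count_total * (0.65 + 0.01 * sum(fi))

print(count_total, sum(fi), round(fp, 2))  # 618 52 723.06
```

With these assumed counts, the 618 unadjusted points are scaled by 1.17 (since Sum(Fi) = 52), giving about 723 FP.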
Once Function Points (FP) have been computed, they are used in a manner analogous to LOC as a way to Normalize Measures for Software Productivity, Quality and other attributes:
ERRORS / FP
DEFECTS / FP
$ / FP
PAGES OF DOCUMENTATION / FP
FP / PERSON-MONTH
The relationship between LINES OF CODE and FUNCTION POINTS depends on the Programming Language that is used to implement the Software and the quality of the design.
For example, rough estimates of the average number of LOC required to build one FP in various Programming Languages are as follows:
VISUAL BASIC 32
As you can see, one LOC of Visual Basic provides approximately 4 times the functionality of one LOC of the C language (which requires roughly 128 LOC per FP).
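These LOC-per-FP figures allow a rough "backfiring" conversion between the two measures. A sketch assuming the Visual Basic figure given above (32) and the C figure of roughly 128 implied by the 4x comparison; real ratios vary widely between projects:

```python
# Rough "backfiring": convert LOC to FP using average LOC-per-FP ratios.
# The C value (128) is the rough average implied by the 4x comparison
# in the text; such ratios are coarse approximations at best.
loc_per_fp = {
    "C": 128,
    "Visual Basic": 32,
}

def fp_from_loc(loc: int, language: str) -> float:
    """Estimate the function points delivered by `loc` lines of `language`."""
    return loc / loc_per_fp[language]

# The same 12,100 LOC delivers ~4x more function points in Visual Basic:
print(fp_from_loc(12_100, "C"))             # ~94.5 FP
print(fp_from_loc(12_100, "Visual Basic"))  # ~378.1 FP
```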
LOC- and FP-based approaches can be used to derive Productivity Metrics.
However, it is debatable to appraise the Performance of individuals using these metrics, because many factors influence Productivity.
FP- and LOC-based Metrics have been found to be relatively accurate Predictors (Estimators) of Software Development Effort and Cost. However, in order to use LOC and FP for Estimation, a historical baseline of information must be established.
Conventional Software Project Metrics such as LOC and FP may be used to estimate Object-Oriented Software Projects. However, these Metrics do not provide enough granularity for the Project Schedule and Effort adjustments that are required as we iterate through an Evolutionary or Incremental Process Model (Incremental Development Technique).
No. of Scenario Scripts
No. of Key Classes
No. of Support Classes
Average No. of Support Classes per Key Class
No. of Sub-systems
The Number of Scenario Scripts is directly correlated to the size of the Application Software and to the number of Test Cases that must be developed to exercise the System once it is constructed.
Since Key Classes are central to the problem domain, the number of such Classes is an indication of the amount of Effort required to develop the Software and also an indication of the potential amount of Reuse to be applied during System development.
Support Classes are required to implement the System but are not immediately related to the problem domain (e.g., User Interface Classes, Database Access and Manipulation Classes). Support Classes can be prepared for each Key Class. The number of Support Classes is also an indicator of the amount of Effort required to develop the Software and of the potential Reuse to be applied during System development.
In Graphical User Interface (GUI) Applications, the average number of Support Classes per Key Class is 2 to 3 (i.e., for 1 Key Class, 2 or 3 Support Classes will be developed). For non-GUI Applications, the ratio is 1 to 2.
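These ratios support a quick early estimate of total class count, and from it, effort. A sketch: the key-class count is hypothetical, and the 15-20 person-days-per-class range is the figure commonly suggested in the OO estimation literature (Lorenz and Kidd), not stated in these notes:

```python
# Early OO size/effort estimate from the number of key classes.
# key_classes is hypothetical; the support ratio follows the GUI
# guideline above; person-days per class is an assumed range.
key_classes = 16
support_ratio_low, support_ratio_high = 2, 3      # support classes per key class (GUI)
days_per_class_low, days_per_class_high = 15, 20  # person-days per class (assumed)

total_low = key_classes + key_classes * support_ratio_low    # 48 classes
total_high = key_classes + key_classes * support_ratio_high  # 64 classes

effort_low = total_low * days_per_class_low      # 720 person-days
effort_high = total_high * days_per_class_high   # 1280 person-days

print(f"{total_low}-{total_high} classes, {effort_low}-{effort_high} person-days")
```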
A Sub-system is an aggregation of Classes that support a Function visible to the End-User. Once Sub-systems are identified, it is easier to lay out a reasonable Project Schedule in which work on Sub-systems is partitioned among Project development personnel.
To be used effectively in an Object-Oriented Software Engineering environment, Metrics such as those noted above must be collected along with Project Measures such as:
Errors and Defects uncovered,
Models or Documentation Pages produced.
As the Database grows (after a few Object-Oriented Projects have been completed), relationships between the O-O
Measures and Project Measures will provide Metrics that can aid in Project Estimation.
It is reasonable to apply the Use-Case as a Normalization Measure similar to LOC or FP.
Like FP, the Use-Case is defined early in the Software Process, allowing it to be used for Project
Estimation before significant Modeling and Construction activities are initiated.
The Use-Case is also independent of Programming Language. Moreover, the number of Use-Cases
is directly proportional to the size of the Application in LOC and to the number of Test Cases
that will have to be designed to fully exercise the Application.
Because Use-Cases can be created at varying levels of abstraction, there is no standard Size
for a Use-Case. Without a standard measure of what a Use-Case is, its application as a
Normalization measure (e.g., Effort expended per Use-Case) is suspect.
Although a number of researchers have attempted to derive Use-Case Metrics, much work remains
to be done.
The objective of all Web Engineering Projects is to build a Web Application (WebApp) that delivers a combination of Content and Functionality to End-Users.
Measures and Metrics used for Traditional Software Engineering Projects are difficult to translate directly to WebApps. Yet a Web Engineering organization must gather Measures and build a Database that allows it to assess its internal Productivity and Quality over a number of Projects. Among the Measures that can be collected are:-
No. of Static Web Pages
No. of Dynamic Web Pages
No. of Internal Page Links
No. of External System Interfaces
No. of Persistent Data Objects
No. of Static Content Objects
No. of Dynamic Content Objects
No. of Executable Functions
Web Pages with static content are the most common of all WebApp features. These pages represent ''low relative complexity'' and generally require less effort to construct than dynamic pages.
This measure provides an indication of the overall Size of the Application and the Effort required to develop it.
Dynamic Web Pages are essential for e-commerce Applications, Search Engines, Financial Applications and many other
WebApps. These pages represent ''higher relative complexity'' and thus require more effort to
construct than static pages.
This measure also provides an indication of the overall Size of the Application and the Effort required to develop it.
Internal Page Links are pointers that provide a hyperlink to some other Web Page within the WebApp. This measure provides an indication of the degree of Architectural Coupling within the WebApp. As the number of page links increases, the effort expended on designing and constructing Navigation also increases.
WebApps must often interface with ''backroom'' Business Applications. As the requirements for external interfacing grow, System Complexity and development Effort also increase.
One or more Database files may be accessed by the WebApp. As the number of required files grows, the Complexity of the WebApp also grows and the Effort to implement it increases proportionally.
A Static Content Object may contain text, graphics, video, animation and audio information. Multiple Content Objects may appear on a single Web Page, increasing its complexity.
Dynamic Content Objects are generated based on User actions and include internally generated text, graphics, video, animation and audio information that are integrated within the WebApp. Multiple Content Objects may appear on a single Web Page.
An Executable Function (also known as a Script or Applet) provides some service to the End-User. As the number of Functions increases, Modeling and Construction Effort also increase.
Each of the above Measures can be determined at a relatively early stage of the Web Engineering Process. WebApp Metrics can then be computed and correlated with Project Measures such as:-
Errors and Defects uncovered,
Models or Documentation Pages produced.
Together, WebApp Measures and Project Measures provide Metrics that can aid in Project Estimation.
The overriding goal of Software Engineering is to produce a High-Quality System or Product within a ''Timeframe that satisfies a market need''.
To achieve this goal, a Software Engineer must apply effective Methods coupled with modern tools within the context of a mature Software Process. Furthermore, a good Software Engineer must measure if High Quality is to be realized.
Private Metrics collected by individual Software Engineers are assimilated to provide Project-Level results.
Although many Quality Measures can be collected, the principal thrust at the Project Level is to measure ''Errors and Defects''.
The following Metrics provide insight into the effectiveness of each of the activities implied by the Metrics.
Error data can also be used to compute the ''Defect Removal Efficiency'' (DRE) for each Process Framework activity.
Although there are many Measures of Software Quality, the following four measures provide useful indicators for the Project Team.
A program must operate correctly or it provides little value to its Users. Correctness is the degree to which the Software performs its required function.
The most common measure for Correctness is Defects / KLOC, where a Defect is defined as a verified lack of conformance to requirements. Defects are those problems that are reported by users after the program has been released for general use.
For quality assessment, Defects are counted over a standard period of time, typically one year.
Maintainability is the ease with which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the customer desires a change in requirements. There is no direct way to measure Maintainability, so we must use Indirect Measures.
A simple Time-Oriented Metric is MTTC (Mean Time To Change): the time it takes to analyze the change required, design an appropriate modification, implement the change, test it, and distribute the change to all Users.
On average, Programs that are maintainable will have a lower MTTC (for equivalent types of changes) than Programs that are not maintainable.
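MTTC is simply the mean of the full analyze-design-implement-test-distribute cycle time across maintenance changes. A minimal sketch with hypothetical per-change durations:

```python
# Mean Time To Change (MTTC): the average of the complete change-cycle
# time (analyze + design + implement + test + distribute) per change.
# The durations below (in days) are hypothetical.
change_cycle_days = [4.0, 6.5, 3.0, 9.0, 5.5]

mttc = sum(change_cycle_days) / len(change_cycle_days)
print(f"MTTC = {mttc:.1f} days")  # MTTC = 5.6 days
```

A falling MTTC across comparable changes is one indirect signal that maintainability is improving.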
Integrity has become vitally important in the age of ''Hackers and Firewalls''. This attribute measures a System's ability to withstand attacks (both accidental and intentional) on its security. Attacks can be made on all three components of Software, i.e., Programs, Data and Documents.
To measure Integrity, two additional attributes are defined: Threat, the probability that an attack of a specific type will occur within a given time, and Security, the probability that an attack of a specific type will be repelled. The Integrity of a System is then:
Integrity = Sum [1 - (Threat x (1 - Security))], where Threat and Security are summed over each type of attack.
Example: If the Threat probability is 0.25 and the Security (the likelihood of repelling an attack) is
0.95, the Integrity of the System is 0.99, which is quite high.
If the Threat is 0.50 and the Security is 0.25, then the System's Integrity is 0.63,
which is unacceptably low.
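Both quoted results follow from evaluating 1 - (Threat x (1 - Security)) for a single attack type. A minimal sketch:

```python
# Integrity for a single attack type:
#   integrity = 1 - threat * (1 - security)
# threat   = probability an attack of this type occurs,
# security = probability the attack is repelled.
def integrity(threat: float, security: float) -> float:
    return 1.0 - threat * (1.0 - security)

print(integrity(0.25, 0.95))  # ~0.99 - quite high
print(integrity(0.50, 0.25))  # ~0.63 - unacceptably low
```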
Usability is an attempt to quantify ''User-friendliness'' and can be measured in terms of four characteristics: the skill required to learn the System, the time required to become moderately efficient in its use, the net increase in productivity when it is used, and a subjective assessment of users' attitudes toward it.
DRE is essentially a measure of the filtering ability of Quality Assurance and Control activities as they are applied throughout all Process Framework activities.
DRE = E / (E + D), where:
E - Number of Errors found before delivery of the Software to the end-users,
D - Number of Defects found by users after delivery.
The ideal value for DRE is 1, that is, no Defects are found in the Software after delivery.
Realistically, D will be greater than 0, but the value of DRE can still approach 1:
As E increases, it is likely that the final value of D will decrease.
DRE encourages a Software Project Team to institute techniques for finding as many Errors as possible before delivery.
DRE can also be used within a Project to assess a Team's ability to find Errors before they are passed to the next Framework activity or Software Engineering task.
Those Errors that are not found during the review of the Analysis task are passed on to the Design task.
When DRE is used in this context: DREi = Ei / (Ei + Ei+1)
Ei: Number of Errors found during Software Engineering activity i.
Ei+1: Number of Errors found during activity i+1 that are traceable to Errors not discovered in activity i.
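Both forms of DRE are simple ratios. A sketch, reusing the Project A counts quoted earlier in these notes (134 errors before release, 29 defects after); the per-activity review counts are hypothetical:

```python
# Defect Removal Efficiency.
def dre(errors_before: int, defects_after: int) -> float:
    """Overall DRE = E / (E + D)."""
    return errors_before / (errors_before + defects_after)

def dre_activity(e_i: int, e_next: int) -> float:
    """Per-activity DREi = Ei / (Ei + Ei+1), where e_next counts errors
    found in the following activity that are traceable to this one."""
    return e_i / (e_i + e_next)

# Project A figures from the size-oriented metrics discussion:
print(f"DRE = {dre(134, 29):.2f}")  # DRE = 0.82

# Hypothetical review data: 24 errors found during analysis reviews,
# 6 analysis errors slipped through and were found during design.
print(f"DRE(analysis) = {dre_activity(24, 6):.2f}")  # DRE(analysis) = 0.80
```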
The majority of Software Developers still do not measure, and regrettably, most have little desire to begin. The problem is cultural! Attempting to collect measures often precipitates resistance.
Before instituting a Metrics Program we must consider some arguments for, and present an approach to, Software Metrics.
Why is it important to measure the Process of Software Engineering and the Product that it produces?
The answer is relatively obvious. If we do not measure, there is no real way of determining whether we are improving.
By requesting and evaluating Productivity and Quality measures, Senior Management can establish meaningful goals for improvement of the Software Engineering Process.
To establish goals for improvement, the current state of Software Development must be known. Hence, Measurement is used to establish a Process Baseline from which improvements can be assessed.
The day-to-day rigors of Software Project work leave little time for strategic thinking. Software Project Managers are concerned with more mundane issues such as:-
- Developing meaningful Project Estimates,
- Producing High-Quality Systems,
- Getting the Product out the door on time.
By using Measurement to establish a Project Baseline, each of these issues becomes more manageable. The Project Baseline serves as a basis for Estimation.
By establishing a Baseline, benefits can be obtained at the Process, Project and Product (technical) levels. The information that is collected need not be fundamentally different at each level. The same metrics can serve many masters.
The Metrics Baseline consists of data collected from past Software Development Projects and can range from a very simple record to a complex and comprehensive Database containing dozens of Project Metrics derived from them.
Data must be reasonably accurate (avoid ''guesstimates''),
Data should be gathered from as many Projects as possible,
Measures must be consistent,
Applications should be similar to the work that is to be estimated.
It makes little sense to use a Baseline for batch information systems work to estimate a real-time, embedded application.
Ideally, the collection of Baseline data should be an ongoing activity. Regrettably, this is seldom the case. Therefore, data collection often requires a historical investigation of past Projects to reconstruct the required data.
Once measures have been collected, Metrics computation is possible. Depending on the collected measures, the Metrics can span a broad range of Application-Oriented Metrics (e.g., LOC, FP, O-O, WebApp) as well as Quality- and Project-Oriented Metrics.
Metrics Evaluation focuses on the underlying reasons for the results obtained and produces a set of ''Indicators'' that guide the Project or Process.
It is unreasonable and unrealistic to expect small organizations to develop extensive Software Metrics Programs. However, it is reasonable to suggest that Software Organizations of all sizes measure, and then use the resultant Metrics to help improve their local Software Process and the Quality and Timeliness of the products they produce.
A small organization might select the following set of easily collected measures:-
- Time (hours or days) elapsed from the time a Change Request (also known as a Systems Query) is made until the Evaluation is complete,
- Effort (person-hours) to perform the Evaluation,
- Time (hours or days) elapsed from completion of the Evaluation to assignment of the Change Request to personnel,
- Effort (person-hours) required to make the Change,
- Time (hours or days) required to make the Change,
- Errors uncovered during the work to make the Change,
- Defects uncovered after the Change is released to the customer base.
Once these measures have been collected for several Change Requests, it is possible to compute the total elapsed time from Change Request to implementation of the Change, and the percentage of elapsed time absorbed by initial queuing and evaluation, change assignment, and implementation.
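Once several change requests have been logged, the percentage breakdown is straightforward. A sketch with hypothetical elapsed-time records (in hours) for the three phases just described:

```python
# Percentage of total elapsed time absorbed by each phase of the
# change-request process. The records below are hypothetical (hours).
# Each tuple: (evaluation time, wait-for-assignment time, implementation time)
change_requests = [
    (10, 30, 60),
    ( 6, 50, 44),
    ( 8, 42, 50),
]

totals = [sum(phase) for phase in zip(*change_requests)]  # per-phase totals
grand_total = sum(totals)

for name, t in zip(("evaluation", "assignment wait", "implementation"), totals):
    print(f"{name:16s} {100 * t / grand_total:5.1f}% of elapsed time")
```

In this hypothetical data set, roughly 40% of the elapsed time is spent simply waiting for the change to be assigned, which is exactly the kind of insight the percentages are meant to surface.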
These Metrics can be assessed in the context of Quality data: Errors per change (Ec) and Defects per change (Dc).
The percentages provide insight into where the Change Request process slows down and may lead to Process improvement steps.
DRE can be compared to elapsed time and total Effort to determine the impact
of Quality Assurance (QA) activities on the time and Effort required to make a change.
The Software Engineering Institute has developed a comprehensive guidebook for establishing a ''Goal-Driven'' Software Metrics Program that identifies steps linked to prioritized business goals. See the Software Engineering textbook, page 668, for further detail.