
Metrics-based evaluation of object-oriented software development methods

Reiner R. Dumke and Erik Foltin
University of Magdeburg, Postfach 4120, D-39016 Magdeburg, Germany
E-mail: {dumke,foltin}@irb.cs.uni-magdeburg.de, Fax: +49-391-67-12810

ABSTRACT: This paper describes the fundamental ideas of our present project - the Software Measurement Laboratory - as a method of metrication of object-oriented software development. The underlying measurement framework starts at the first step of the software development (the problem definition) and measures the metric mutations in the object-oriented paradigm of Coad/Yourdon and the implementation in Smalltalk and C++. The object-oriented software development is described with development indicators.

KEY WORDS: software measurement, object-oriented software development, software quality assurance

1. Introduction

Recent work in software measurement for object-oriented software development can be shortly characterized as
- statistical analysis by Rocache /Rocache 89/ of elements of an object-oriented development system (Smalltalk-80), or by Szabo and Khoshgoftaar of a C++ communication system (/Szabo et al 95/),
- metrics set definitions by Abreu /Abreu et al 94/ for C++ along the two vectors category (design, size, complexity, reuse, productivity, and quality) and granularity (system, class, and method), by Binder /Binder 94/ as a set of C++ metrics to measure encapsulation, inheritance, polymorphism and complexity, by Arora et al for real-time software design in C++ (/Arora et al 95/), and by Lorenz as a metrics set that can be used for both languages (C++ and Smalltalk, /Lorenz 93/),
- metrics for aspect measurement by Ott et al of class cohesion (/Ott et al 95/), by Bieman and John of reusability (/Karunanithi et al 93/, /John et al 95/), and by Lejter of maintenance (/Lejter et al 92/),
- information theoretical approaches in the measurement of conceptual entropy by Dvorak /Dvorak 94/, in the cognitive approach by Henderson-Sellers et al /Cant et al 94/ with the landscape idea along the method routes, or in the learnability aspects of using class libraries (/Lee et al 94/), and



- validation of enclosed approaches by Chidamber and Kemerer /Chidamber et al 94/ as an approach to a metrics definition based on a measurement-theoretical view (with a viewpoint as empirical attribute), the extensions of these measures by Li et al (/Li et al 95/), the analysis of Churcher and Shepperd (/Churcher et al 95/), and the investigations of Zuse (/Zuse 94/).

These and the other concepts are first steps towards a global measurement approach for object-oriented software development; they are still missing (see also /Jones 94/) the evaluation of the continuity of the object-oriented software development process - object-oriented analysis (OOA) → object-oriented design (OOD) → object-oriented programming (OOP) - and the possibility of the reverse process.

The use of a new software development paradigm generally starts with a lot of persuasive (but not proven) hypotheses. Therefore, from the software engineering point of view, the goals of measured object-oriented software development (OOSE) can be characterized as: 1. the understanding of the special development method or paradigm, 2. the measurement-based method comparison, and 3. the controlled software development process in the manner of the Capability Maturity Model of Carnegie Mellon University in Pittsburgh.

[Figure: the validation problem in software measurement - software development components (design documents, drawings, charts, source code, test tables, etc.) are abstracted (tool-based or by expert's report) into models (flow graph, call graph, text schemata, structure tree, code schemata, etc.) and evaluated against empirical criteria (cost, effort, grade, quality, actuality, etc.) from a measurement-theoretical view using scales, estimation, calibration, correlation, the factor-criteria tree, the cause-and-effect diagram, the GQM paradigm, etc.]

2. The OOSE Framework


2.1. The General Approach

The principal ideas of this measurement framework are given in /Dumke et al 94/ and are related to object orientation in order to understand and to quantify the chosen method. A standardized metric set for OOSE does not exist (only a metrics definition standard, /IEEE 93/). Therefore, it is necessary to define metrics and to analyse them. The validation problem is the main problem in the application of software metrics (see the figure above). Software measurement is directed to the three main components of (object-oriented) software development (see also /Fenton 91/):
- the process measurement for understanding, evaluating and improving the development method,
- the product measurement for quantifying the product (quality) characteristics and validating these measures,
- the resource measurement for evaluating the support (CASE tools, measurement tools, etc.) and the chosen implementation system.
Some main ideas and some short results of an application in the Software Measurement Laboratory of the University of Magdeburg (SMLAB) are given in the following (see also http://irb.cs.uni-magdeburg.de/se/metrics_eng.html).

2.2. The Process Measurement

The chosen OOSE method is the Coad/Yourdon approach (described in /Coad et al 93/); it begins with the transformation of the problem definition into a graphical representation with an underlying documentation. The documentation stores all information that cannot be presented in the drawing. The drawings (also possible in some variants) and the documentation constitute the OOA model. In a first evaluation of this method we can establish the goals of the process measurement and the realized activities as follows.

How can we measure the object definition process? This question leads us to the first step of the software development, the problem statement. We need a computationally stored problem definition to measure the object definition. The SMLAB problem definition must be available to all members of the software engineering team, and the document itself is an essential source for many outputs, such as milestones in the different investigations or overviews for some administrations. Therefore, we decided to use a local net HTML file set in the World-Wide Web as a living document system. The elements of our problem statement are a
- list of contents: problem description, constraints, given situation, functional requirements, and management requirements (controlling and quality), and a
- list of components: notions and names, dates, pictures, and (hypertext) relations.

The process measures for the problem definition are (in a first step) the number of notions, names or titles and the number of dates (times and events). The measure mutation was analysed, for example, from the problem definition (#notions/names) to the number of class definitions in the model and in the implementation. Further measurements relate the adjectives/predicates to the class attributes or variables, the verbs/adverbs to the class services or methods, and the dates/constraints to the model documentation and implementation. We can establish a relation of 4600 notions (names or titles) to 76 object classes. Note that a lot of notions in the problem definition are instances of the defined classes. So we get the specification indicators:

  class definition indicator (CDI) = number of defined classes / number of notions,
  attribute definition indicator (ADI) = number of defined attributes / number of adjectives or predicates,
  service definition indicator (SDI) = number of defined services / number of verbs or adverbs.

Our project has the indicator values CDI ≈ 0.02, ADI ≈ 0.03, SDI ≈ 0.06.
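All three specification indicators share one form, a ratio of defined model elements to their mentions in the problem definition. A minimal sketch of the computation (the function name is ours, not taken from the SMLAB tools):

```python
def definition_indicator(defined: int, mentioned: int) -> float:
    """Ratio of defined model elements to their mentions in the
    problem definition; CDI, ADI and SDI all have this form."""
    if mentioned == 0:
        raise ValueError("no mentions counted in the problem definition")
    return defined / mentioned

# SMLAB counts from the paper: 76 object classes from about 4600 notions.
cdi = definition_indicator(defined=76, mentioned=4600)
print(round(cdi, 2))  # 0.02
```

With the paper's counts this reproduces CDI ≈ 0.02; the many notions that are only instances of defined classes explain the small value.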

How can we measure the OOA/OOD model itself? The main steps in the object-oriented analysis are: 1. finding classes and objects, 2. identifying structures (Gen-Spec class structures and Whole-Part structures), 3. identifying subjects (as 'views' of the class structure, defined in framed areas), 4. defining the attributes of the classes and the object connections, 5. defining the services of the classes and the message connections. The documentation contains all information that cannot be presented in the drawing; the drawings (also possible in some variants) and the documentation form the OOA model. The OOA model must be 'open' for the measurement. This is given because the OOSE CASE tool - the ObjecTool - is based on a file set for the graphical models. So, the measurement tool OOM (/Papritz 93/) was implemented to measure the OOA model. The evaluation of the OOA step revealed the missing inheritance documentation and the limited critique facility, which addresses only a single object/class symbol. Further, the estimation of effort, costs and quality is not possible in this development phase (a general problem in the OOSE).

The OOD step ensures a full continuity to the OOA step. The development phases in OOD are: 1. designing the problem domain component (to achieve further requirements and special aspects of the programming system), 2. designing the human interaction component, 3. designing the task management component, 4. designing the data management component. But the basis model in the maintenance phase is the OOD model; so we do not have a method-independent specification. In the realization of the Software Measurement Laboratory we have 'implemented' 38 of the 114 classes in the OOD model in the design phase as organizational orders. So, we have the design indicators:

  class modification indicator (CMI) = number of organizational classes / number of all designed classes,
  attribute modification indicator (AMI) = number of organizational attributes / number of all designed attributes,
  service modification indicator (SMI) = number of organizational services / number of all designed services.

Our project has the values CMI ≈ 0.33, AMI ≈ 0.48, SMI ≈ 0.21.

The OOD phase is also missing the relation to the object-oriented implementation (programming) system, so some browsing activities in the OOP system are necessary in the OOD phase. Therefore, we have implemented the OOC tool for browsing the Smalltalk class library (/Lubahn 94/).

How can we measure the OOP system? The development steps in the OOP phase include: 1. implementation of the 'model' as main object (under the root class of the object-oriented programming system), 2. implementation of the concrete model, 3. extension of the object-oriented system with new classes/objects, 4. extension of the object-oriented system with new methods (as class methods and instance methods; with class variables and instance variables), 5. modification of existing classes or methods, 6. testing of the object-oriented application with the designed scenarios. Here we must choose a special OOP system or OOP language; the ObjecTool is developed for C++ or Smalltalk implementations. The evaluation of this phase indicates that the OOP → OOD direction is not possible here; so we introduce maintenance problems at the beginning. The knowledge of the existing OOP systems or libraries is the main effort for an efficient OOSE.

Note that the measures in this development phase should be complemented by the code measures. For the quality measurement of the process we use the development complexity (see /Dumke et al 94/) as the set of the used methods and tools and their structure. Other measures (performance etc.) have not been included in this first approach to the development complexity evaluation. The measurement tools were implemented in the same method and programming language to reduce this development complexity: we have implemented a C++ measurement tool in C++ (/Kuhrau 94/) and a Smalltalk measurement extension (/Heckendorf 95/). We implemented 36 of the 116 classes in the OOD model that includes the metrics set (as metrics definitions). So, we have the implementation indicators:

  class implementation indicator (CII) = number of newly implemented classes / number of designed classes,
  attribute implementation indicator (AII) = number of newly implemented attributes / number of designed attributes,
  service implementation indicator (SII) = number of newly implemented services / number of designed services.

For our project these indicators give the values CII ≈ 0.31, AII ≈ 0.51, SII ≈ 0.22.
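The design and implementation indicators likewise share one pattern: the share of elements a phase added or modified relative to all designed elements. A small illustration with the class counts reported in the text (the helper name is ours):

```python
def phase_indicator(changed: int, designed: int) -> float:
    """Share of model elements a phase added or modified, relative to
    all designed elements; the CMI/AMI/SMI and CII/AII/SII indicator
    families all have this form."""
    return changed / designed

# Counts from the paper: 38 of 114 OOD classes were organizational
# additions (CMI); 36 of 116 classes were newly implemented as metric
# definitions (CII).
print(round(phase_indicator(38, 114), 2))  # 0.33
print(round(phase_indicator(36, 116), 2))  # 0.31
```

Both reported values (CMI ≈ 0.33, CII ≈ 0.31) follow directly from the stated class counts.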

The given description of the process measurement is a good example of method understanding. We can see the essential approach of analysing the measurement in the direction of the measure mutation from the problem definition measures over the model measures to the code measures. Some missing tools for the completion of a measurable OOSE method on this basis were designed and implemented.

2.3. The Product Measurement

The defined measures for the problem definition (as an HTML document set) are
- length of the document title (σ1),
- average length of all headlines (σ2),
- length of the document in bytes (σ3),
- count of words in the document (σ4),
- average length of words (σ5),
- maximal length of words (σ6),
- number of bold or italic words (σ7),
- number of notions, names or titles (σ8),
- number of dates (times and events) (σ9),
- number of lists (UL, OL, DL) (σ10),

and the structure based measures
- average number of words in the lists (θ1),
- number of HR lines (θ2),
- average number of list elements (in UL or OL) (θ3),
- average number of list elements in DL (θ4),
- maximal depth of the lists (θ5),
- number of hypertext relations (θ6),
- average length of the loaded files (θ7),
- average distance of the dates to the actual date (θ8).

Further measures are the summarized numbers for the whole problem definition. Most of these measures are ratio scaled. An implementation of a measurement tool for the problem definition (PDM) was necessary (/Foltin 95/). The measurement values for the SMLAB were presented with the EXCEL tool. Three examples of the measurement value presentation are the number of notions in all parts of the problem definition document set, the chosen measures for all documents of the problem definition (as an example of the limited facilities of the EXCEL tool for the presentation of the full details), and the measurement values of one document over the experiment duration of three weeks. It is good to see the few changes within the measurement period; this was the reason to define the future measurement activities in a monthly form. The measures also give an overview of the synchronized activities of our team. In these figures the measures σi were substituted by the names sm i.

[Figure: number of notions and names (0-600) for the problem definition documents a1-d6.]
[Figure: the chosen measures sm 1-sm 10 (partly scaled) for the documents a1-d5.]
[Figure: the measurement values of one document over the dates 2.5.-19.5.]
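Several of the σ and θ measures above can be collected in one pass over an HTML document. The following is our own simplified sketch in the spirit of the PDM tool, not the original implementation of /Foltin 95/; it covers σ4 (words), σ7 (bold/italic words), σ10 (lists), θ5 (maximal list depth) and θ6 (hypertext relations):

```python
from html.parser import HTMLParser

class PDMeasures(HTMLParser):
    """Sketch of a few problem-definition measures; illustrative only."""
    def __init__(self):
        super().__init__()
        self.words = 0        # sigma_4: count of words
        self.emphasized = 0   # sigma_7: bold or italic words
        self.lists = 0        # sigma_10: number of UL, OL, DL lists
        self.max_depth = 0    # theta_5: maximal depth of the lists
        self.links = 0        # theta_6: number of hypertext relations
        self._depth = 0
        self._in_em = False

    def handle_starttag(self, tag, attrs):
        if tag in ("b", "i", "strong", "em"):
            self._in_em = True
        elif tag in ("ul", "ol", "dl"):
            self.lists += 1
            self._depth += 1
            self.max_depth = max(self.max_depth, self._depth)
        elif tag == "a" and any(k == "href" for k, _ in attrs):
            self.links += 1

    def handle_endtag(self, tag):
        if tag in ("b", "i", "strong", "em"):
            self._in_em = False
        elif tag in ("ul", "ol", "dl"):
            self._depth -= 1

    def handle_data(self, data):
        n = len(data.split())
        self.words += n
        if self._in_em:
            self.emphasized += n

m = PDMeasures()
m.feed('<p>The <b>SMLAB</b> problem</p>'
       '<ul><li><a href="pd.html">definition</a><ol><li>x</li></ol></li></ul>')
print(m.words, m.emphasized, m.lists, m.max_depth, m.links)  # 5 1 2 2 1
```

Running such a parser over every file of the HTML document set and summarizing the counts yields the per-document and whole-definition values discussed above.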

The evaluation of the product quality in every development phase is defined (see also /ISO 9126/) as comprehensibility, clarity and usability for the problem statement on the basis of the measures use frequency, availability, size and structure. The correlations between the empirical aspects and the measures are
- readability: σ3, σ5, σ7, σ8, σ10, θ1, θ2, θ3, θ5,
- mnemonics: σ1, σ2, σ6, θ4,
- clarity: σ8, σ9,
- correctness: θ8,
- useability: θ6, θ7,
- layout: σ7, θ2.

The 'measurement' of the empirical aspects was made by an expertise of the team members as an evaluation on an ordinal scale from 'low' (1) to 'high' (5). The results of the empirical evaluation are:

  criteria:  readability  mnemonics  clarity  correctness  useability  layout
  value:     3            4          3.75     4            4.75        4

Unfortunately, we could not prove these correlations in the measurement phase of three weeks in a convincing manner - but neither the opposite.

The measures (indicators) of the OOA/OOD model are
- number of abstract classes (sm1),
- number of object classes (sm2),
- total number of attributes (sm3),
- total number of services (sm4),
- number of object connections (sm5),
- number of message connections (sm6),
- number of subclasses (sm7),
- number of subjects (sm8),

and the structure based measures
- average number of attributes per class (tm1),
- average number of services (tm2),
- average number of object connections (tm3),
- average number of message connections (tm4),
- maximum depth of the inheritance (tm5).

The main measurement values for the OOA model are:

  measure:  sm1  sm2  sm3  sm4  tm1   tm2
  value:    1    76   195  63   2.57  0.83

All measures - except the tm - are ratio scaled.
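Because the expert scores form only an ordinal scale, measurement theory restricts their aggregation to order-based statistics such as the median. A tiny sketch (the scores here are invented for illustration, not the team's actual ratings):

```python
from statistics import median

# Hypothetical scores of five team members for one criterion,
# on the ordinal scale 1 = 'low' .. 5 = 'high'.
scores = [3, 4, 4, 3, 5]

# The median only compares ranks, so it is admissible on an ordinal scale.
print(median(scores))  # 4
```

This is why the ordinal-scaled measures below are used only for a classification of the quality, not for arithmetic comparison.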

The empirical evaluation of the OOA/OOD model was founded on the completeness, conformity and feasibility for the OOA/OOD phase on the basis of the measures consistency, performance, size and structure. The results of the evaluation are:

  criteria:  completeness  conformity  feasibility  consistency  performance
  value:     2             2           3            3.5          1

A higher granularity of the OOA/OOD model measures is necessary to demonstrate a correlation between the empirical aspects and the model related measures.

In the OOP phase we must add the code measures and the other characteristics of the chosen OO programming system. Such measures are
- the method name length (M1),
- the number of method parameters (M2),
- the number of comments (M3),
- the lines of method code (M4),
- the method complexity (M5),
- the number of method returns (M6),
- the coupling between objects (classes) (M7),
- the number of local variables (M8),
and some other indicators (see /Dumke et al 94/). The average measures for the first implementations are (see /Heckendorf 95/ and /Kuhrau 94/):

  measure:    M1  M2  M3  M4  M5  M6  M7   M8
  MPP:        9   4   3   18  5   1   2.8  3.2
  Smalltalk:  8   1   1   12  3   1   2.2  2.6

The empirical evaluation of the OOP components is founded on the understandability, stability and effort for the OOP phase on the basis of measuring testability, size, structure and reusability. In a first approximation of the quality assurance, we want to hold the given quality of the OO programming system; so we must look at the evaluation of the resources. The values in the table above satisfy this condition. Most of these measures are based on an ordinal scale and can be used only for a classification of the quality.

2.4. The Resource Measurement

The essential aspect in the OOSE is the initial measure of the chosen resources (CASE tools, measurement tools, programming environment etc.). The given initial quality can be proved by the given measure values; the tables below include some of them. 'words' denotes the average number of words in a documentation part and was added to the measures to evaluate the documentation level, because the size of the documentation of all OOA/OOD model parts is essential for the size or complexity of the implemented class.

  measure:    tm1  tm2  tm3  tm4  words  tm8
  ObjecTool:  25   1.9  2    3    8      3
  OOM:        2.6  1.4  4    0.8  14     3

In accordance with our validation aspect we can quantitatively evaluate the usefulness of the chosen object-oriented programming system. For example, we can see the functional approach characteristics in Smalltalk/V for Windows or in Borland C++ etc., and we can expect a lot of maintenance effort.

  measure:                             Smalltalk/V  ST for Windows  Objectworks  Borland C++
  # classes:                           100          170             397          407
  depth of the inheritance:            5            7               8            6
  width of the inheritance:            69           118             82           208
  average number of class methods:     2.7          2.9             2.4          16.3
  average number of instance methods:  17.2         27.7            17.7         0.46
  average number of subclasses:        1            1.3             6.8          0.74
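Several of the size-oriented code measures (M1-M4, M6) can be approximated by a simple text analysis of a method body. The following rough sketch is our own illustration, not the C++ tool of /Kuhrau 94/ or the Smalltalk extension of /Heckendorf 95/; it treats the body as plain text rather than parsing the language:

```python
def method_measures(name: str, params: list, body: str) -> dict:
    """Approximate some of the code measures M1-M8 for one method,
    using plain text analysis (illustrative only)."""
    lines = [l for l in body.splitlines() if l.strip()]
    return {
        "M1_name_length": len(name),
        "M2_parameters": len(params),
        "M3_comments": sum(1 for l in lines if l.strip().startswith("//")),
        "M4_lines_of_code": len(lines),
        "M6_returns": sum(l.strip().startswith("return") for l in lines),
    }

body = """// sum of two values
int result = a + b;
return result;"""
print(method_measures("add", ["a", "b"], body))
```

Averaging such per-method values over all methods of a class library gives tables like the M1-M8 comparison above; the harder measures (M5 complexity, M7 coupling) need a real parse of the source.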



3. Conclusion

This short paper describes only the main ideas of our present project. The project includes a tool-based evaluation of the object-oriented software development for the methodology of Coad/Yourdon. The goal is to quantify the development documents from the beginning, for a better understanding of the OO method and for a better comparison of this method with other OO development paradigms.

References

/Abreu et al 94/ Abreu, F.B.; Carapuca, R.: Candidate Metrics for Object-Oriented Software within a Taxonomy Framework. Journal of Systems and Software, 26(1994), pp. 87-96
/Arora et al 95/ Arora, V.; Kalaichelvan, K.; Goel, N.; Munikoti, R.: Measuring High-Level Design Complexity of Real-Time Object-Oriented Systems. Proc. of the Annual Oregon Workshop on Software Metrics, June 5-7, 1995, Silver Falls, Oregon, pp. 2/2-1 - 2/2-11
/Binder 94/ Binder, R.V.: Design for Testability in Object-Oriented Systems. Comm. of the ACM, 37(1994)9, pp. 87-101
/Cant et al 94/ Cant, S.N.; Henderson-Sellers, B.; Jeffery, D.R.: Application of cognitive complexity metrics to object-oriented programs. Journal of Object-Oriented Programming, July-August 1994, pp. 52-63
/Chidamber et al 94/ Chidamber, S.R.; Kemerer, C.F.: A Metrics Suite for Object-Oriented Design. IEEE Transactions on Software Engineering, 20(1994)6, pp. 476-493
/Churcher et al 95/ Churcher, N.I.; Shepperd, M.J.: Towards a Conceptual Framework for Object Oriented Software Metrics. Software Engineering Notes, 20(1995)2, pp. 68-75
/Coad et al 93/ Coad, P.; Nicola, J.: Object-Oriented Programming. Prentice-Hall Inc., 1993
/Dumke et al 94/ Dumke, R.; Kuhrau, I.: Tool-Based Quality Management in Object-Oriented Software Development. Proc. of the Third Symposium on Assessment of Quality Software Development Tools, Washington D.C., June 7-9, 1994, pp. 148-160
/Dvorak 94/ Dvorak, J.: Conceptual Entropy and its Effect on Class Hierarchies. IEEE Computer, June 1994, pp. 59-63
/Fenton 91/ Fenton, N.: Software Metrics - A Rigorous Approach. Chapman & Hall Publ., 1991
/Foltin 95/ Foltin, E.: Implementation of a problem definition measurement tool PDM. Technical Report, University of Magdeburg, 1995
/Heckendorf 95/ Heckendorf, R.: Design and Implementation of a Smalltalk Measurement Extension. Technical Report, University of Magdeburg, 1995
/IEEE 93/ IEEE Standard for a Software Quality Metrics Methodology. IEEE Publisher, March 1993
/ISO 9126/ ISO/IEC 9126 Standard for Information Technology, Software Product Evaluation - Quality Characteristics and Guidelines for their Use. Geneva, 1991
/John et al 95/ John, R.; Chen, Z.; Oman, P.: Static Techniques for Measuring Code Reusability. Proc. of the Annual Oregon Workshop on Software Metrics, June 5-7, 1995, Silver Falls, Oregon, pp. 3/2-1 - 3/2-26
/Jones 94/ Jones, C.: Gaps in the object-oriented paradigm. IEEE Computer, June 1994, pp. 90-91
/Karunanithi et al 93/ Karunanithi, S.; Bieman, J.M.: Candidate Reuse Metrics for Object Oriented and Ada Software. Proc. of the First Int. Software Metrics Symposium, May 21-22, 1993, Baltimore, pp. 120-128
/Kuhrau 94/ Kuhrau, I.: Design and Implementation of a C++ Measurement Tool. Master Thesis, University of Magdeburg, March 1994
/Lee et al 94/ Lee, A.; Pennington, N.: The effects of paradigm on cognitive activities in design. Int. Journal of Human-Computer Studies, (1994)40, pp. 577-601
/Lejter et al 92/ Lejter, M.; Meyers, S.; Reiss, S.P.: Support for Maintaining Object-Oriented Programs. IEEE Transactions on Software Engineering, 18(1992), pp. 1045-1052
/Li et al 95/ Li, W.; Henry, S.; Kafura, D.; Schulman, R.: Measuring object-oriented design. JOOP, July-August 1995, pp. 48-55
/Lorenz 93/ Lorenz, M.: Object-Oriented Software Development. Prentice Hall Inc., 1993
/Lubahn 94/ Lubahn, D.: The OOC tool description. Technical Report, University of Magdeburg, 1994
/Ott et al 95/ Ott, L.M.; Kang, B.; Bieman, J.M.; Mehra, B.: Developing Measures of Class Cohesion for Object-Oriented Software. Proc. of the Annual Oregon Workshop on Software Metrics, June 5-7, 1995, Silver Falls, Oregon, pp. 3/1-1 - 3/1-11
/Papritz 93/ Papritz, T.: Implementation of an OOM tool for the OOA model measurement. Technical Report, University of Magdeburg, July 1993
/Rocache 89/ Rocache, D.: Smalltalk Measure Analysis Manual. ESPRIT Project 1257, CRIL, Rennes, France, 1989
/Szabo et al 95/ Szabo, R.M.; Khoshgoftaar, T.M.: Modeling Software Quality in an Object Oriented Software System. Proc. of the Annual Oregon Workshop on Software Metrics, June 5-7, 1995, Silver Falls, Oregon, pp. 2/3-1 - 2/3-20
/Zuse 94/ Zuse, H.: Foundations of the Validation of Object-Oriented Software Measures. In: Dumke/Zuse: Theory and Practice of Software Measurement. Deutscher Universitätsverlag, Wiesbaden, 1994, pp. 136-214

