
Technical Report UL-CSIS-96-4
Department of Computer Science and Information Systems
3rd Annual Research Conference, 17th September 1996

Contents

Automatic Concept Identification within Program Understanding - Jim Buckley
Terminology - What? Why? How? - Annette McElligott
Migration of tools from a MetaCASE Environment to a Standard Object Oriented Database Management System - Juan Pedraja
Designing an Object Oriented Schema for IBM 370 Assembly Language including Structured Programming Macros - Enda Fadian
Digital Filtering using the Residue Number System - Richard Conway
Connectionless Service Support for an ATM Network - Fergal Daly
Use of QFD in the Requirements Engineering Phase of Software Development - Brendan Lynch
Software Development Process - a QFD Application - Ita Richardson
Strategies for Controlling Generalisation Performance in Neural Network Learning - John Kinsella
Diploidy in Genetic Algorithms - Conor Ryan
Representation Schemes for Genetic Algorithms - JJ Collins
Experts, Diversity and Diverse Experts: Engineering stable ANNs - Niall Griffith
Using Music Corpora to verify Hypotheses - Donncha O Maidin
Towards a Standard Independent Score Processor - John Morris
Development of Novel Multimedia Browser Interface Mechanisms - the first prototype - Mikael Fernstrom, Liam Bannon, Hugh Roberts, John Burton

Automatic Concept Identification within Program Understanding
Jim Buckley

Program understanding is the bottleneck component of many software engineering activities. To facilitate program understanding, a number of tools have been developed which use internal representations of the code to derive 'views' of the system that are relevant to the understander. Typically, each view presents a specific software characteristic to the user at a specific level of granularity. These views are available to the user on request, and the tools allow them to navigate between the views. Hence, the user can selectively explore aspects of the system that they do not, as yet, fully understand. However, the views presented by these tools merely paraphrase the software under study. Recently, a set of knowledge-based tools has been developed which uses an intelligent agent to analyse the software and capture information implicit in the code. This agent effectively mimics part of the software understanding process by applying expert knowledge to the source code in order to derive the implied knowledge, or concepts, present. These tools are called Concept Identification Tools. They derive the implicit concepts by performing a static analysis on the code and identifying "clichéd" patterns of programming. (A knowledge base holds a library of these clichéd patterns and of their associated concepts.) On identification of a pattern in the code, the associated concept is recognised and matched with its software embedding. This talk will concentrate on Concept Identification Tools. An example of concept recognition will be presented, demonstrating the modelling of plans and the matching of these plans with source code embeddings. This example will also be used to outline the problems associated with deriving concepts from source code in this fashion and, in particular, to demonstrate the difficulties in scaling this approach to deal with understanding legacy systems. Possible solutions to these problems will be outlined as areas for future research.
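As a toy illustration of the plan-matching idea, the sketch below matches a "summation" cliché against a crude abstract syntax tree. The plan format, the AST encoding and all names are invented for illustration; they are not taken from any actual Concept Identification Tool.

```python
# A loop that walks an accumulator over a sequence: the "summation" cliché.
# The AST is a nested tuple: (for, loop-var, iterable, body).
ast = ("for", "i", ("range", "n"),
       ("assign", "total", ("+", "total", ("index", "a", "i"))))

# The plan library maps each concept name to a predicate over AST nodes.
PLANS = {
    "summation": lambda node: (
        node[0] == "for"
        and node[3][0] == "assign"
        and node[3][2][0] == "+"
        and node[3][1] == node[3][2][1]  # accumulator is updated in place
    ),
}

def identify(node):
    """Return every concept whose clichéd pattern matches this node."""
    return [concept for concept, matches in PLANS.items() if matches(node)]

print(identify(ast))  # ['summation']
```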


Jim Buckley
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: jim.buckley@ul.ie Tel: +353 61 202712 Fax: +353 61 330876

Terminology - What? Why? How?
Annette McElligott

A technical text comprises subject-specific and general-scientific items or vocabulary. Additionally, such a text includes general-language items that may have been assigned special reference status, either by redefinition or in conjunction with other lexical items, to form a term. The ability to identify domain-specific and general-domain terminology is increasing in importance in many sub-fields of natural language engineering, such as document indexing, machine translation, information retrieval, localisation, and natural language database query systems. This talk will introduce the general theory of terminology and outline how the various disciplines intersect with it. It will examine the composition of terms and detail some of the methods that have been used to automate the extraction of such items from a technical text. Finally, an overview of some research avenues the author proposes to investigate in this discipline will be presented, such as extending the extraction process, converting terms to root forms and modelling the semantic relations between terms.
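One common automatic-extraction heuristic is to collect frequent stopword-free word sequences from a technical text as term candidates. The sketch below is a minimal version of that idea only; real extractors add part-of-speech patterns, domain filtering and statistical weighting, and the example text and stopword list are invented.

```python
from collections import Counter
import re

STOP = {"the", "of", "a", "an", "is", "are", "and", "to", "in", "for"}

def candidates(text, n=2):
    """Count n-grams that contain no stopwords; these are term candidates."""
    words = re.findall(r"[a-z]+", text.lower())
    grams = zip(*(words[i:] for i in range(n)))
    return Counter(g for g in grams if not (set(g) & STOP))

text = ("The query system parses a database query, and the database "
        "query is passed to the query system for evaluation.")
print(candidates(text).most_common(2))
# 'query system' and 'database query' surface as candidate terms
```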


Annette McElligott
Department of Computer Science & Information Systems, University of Limerick
E-mail: annette.mcelligott@ul.ie Tel: +353 61 202734 Fax: +353 61 330876

Migration of tools from a MetaCASE Environment to a Standard Object Oriented Database Management System
Juan Pedraja

During the last few years the ISEG group (Information Systems Engineering Group) has been conducting extensive research in the field of Reverse Engineering and Redocumentation. This research has resulted in the development of a set of tools such as parsers, diagramming tools and restructuring tools. Initially these tools were highly dependent on the particular Object Management System used. A series of tools has already been developed to generate middleware which makes these tools independent and portable to different platforms, separating both the tools and their environment from the technology used to store the information. With the advent of the new Object Oriented Management Systems, this middleware is being adapted to interact with the proposed standard interface for these systems.


Juan Pedraja
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: juan.pedraja@ul.ie Tel: +353 61 202712 Fax: +353 61 330876

Designing an Object Oriented Schema for IBM 370 Assembly Language including Structured Programming Macros
Enda Fadian

The goal of the ESPRIT project CARES (Cargo Assembly Re-Engineering Support) is to establish the degree to which reverse and re-engineering technology can improve the maintainability of high-performance systems written in low-level languages such as IBM Assembly Language. The process adopted to do this revolves around a parser and a repository. The parser is used to analyse the code and transform it into an abstract syntax tree, from which the repository can be populated. The repository holds the key to the success of the reverse and re-engineering tools and of the project itself. I propose to discuss the design philosophy of the abstract syntax and illustrate it by considering some of the statements and macros and their semantics.
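To make the parser-and-repository pipeline concrete, here is an invented miniature of the kind of object-oriented schema being described: each assembly statement becomes a node class, and the tree produced by the parser populates a repository keyed by label. All class and field names are illustrative only, not the CARES design.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Statement:
    label: Optional[str]
    opcode: str

@dataclass
class Instruction(Statement):
    operands: List[str] = field(default_factory=list)

@dataclass
class StructuredMacro(Statement):
    # e.g. an IF/ENDIF structured programming macro with a nested body
    body: List[Statement] = field(default_factory=list)

repository: Dict[str, Statement] = {}

def populate(stmts):
    """Store every labelled statement in the repository for later queries."""
    for s in stmts:
        if s.label:
            repository[s.label] = s

populate([
    Instruction("LOOP", "L", ["R2", "COUNT"]),
    StructuredMacro(None, "IF", [Instruction(None, "AR", ["R2", "R3"])]),
])
assert isinstance(repository["LOOP"], Instruction)
```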


Enda Fadian
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: enda.fadian@ul.ie Tel: +353 61 202712 Fax: +353 61 330876

Digital Filtering using the Residue Number System
Richard Conway

This paper introduces the basic concepts of the Residue Number System (RNS) and the application of a new RNS representation to digital filtering structures. The problem of scaling in the RNS is then presented, with reference to using the Chinese Remainder Theorem (CRT) and Mixed Radix Conversion (MRC) as means of implementing this operation. While certain modular operations can be performed quickly, other operations, including scaling and sign detection, are much more complex. Developments of the last decade, such as the QRNS and the PRNS, are also presented. Common filtering structures are introduced for fixed and adaptive filtering, including biquad sections, lattice structures and the LMS algorithm. A novel use of the RNS allows the scaling time to be almost halved. This new method can be used with a number of common filtering structures and algorithms, and a new property of the CRT is used to implement the new scaling algorithm. Residue arithmetic, which dates back to about 100 AD, offers speed increases in the basic operations of addition and multiplication, and in the last decade it has found use in certain DSP algorithms. In the 1960s, unsuccessful attempts were made at using residue arithmetic as an alternative numbering system for general-purpose computers. However, it has since been applied to DSP applications. Fixed and adaptive filters have been employed successfully in many different areas including adaptive equalisers, echo cancellers and speech coding. A host of different architectures exist, resulting from the use of different algorithms and various types of arithmetic. There have been very limited efforts to combine the two areas of Residue Number Systems and adaptive filtering. The following example illustrates the operation of residue arithmetic for multiplication. Note that |a|b denotes the integer remainder of dividing a by b.
Decimal arithmetic (base 10):
  23 * 4 = 92

Residue arithmetic (moduli {3, 5, 7}):
  23 = {|23|3, |23|5, |23|7} = {2, 3, 2}
   4 = {|4|3, |4|5, |4|7} = {1, 4, 4}
  {2, 3, 2} * {1, 4, 4} = {|2|3, |12|5, |8|7} = {2, 2, 1}

{2, 2, 1} is the RNS representation of 92, the correct answer.
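The worked example above can be reproduced in a few lines. The moduli and values are taken from the example; the reconstruction back to an integer uses the standard Chinese Remainder Theorem, which the abstract mentions as one means of leaving the RNS domain.

```python
from functools import reduce

MODULI = (3, 5, 7)  # pairwise coprime; dynamic range M = 3 * 5 * 7 = 105

def to_rns(x, moduli=MODULI):
    """Encode an integer as a tuple of residues."""
    return tuple(x % m for m in moduli)

def rns_mul(a, b, moduli=MODULI):
    """Multiply digit-wise: each residue channel is fully independent."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(r, moduli=MODULI):
    """Chinese Remainder Theorem reconstruction (Python 3.8+ for pow)."""
    M = reduce(lambda x, y: x * y, moduli)
    total = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        total += ri * Mi * pow(Mi, -1, mi)  # modular inverse of Mi mod mi
    return total % M

a, b = to_rns(23), to_rns(4)
print(a, b)            # (2, 3, 2) (1, 4, 4)
prod = rns_mul(a, b)
print(prod)            # (2, 2, 1)
print(from_rns(prod))  # 92
```

Because each channel is independent, the three small multiplications can run in parallel with no carries between them, which is the source of the speed advantage the abstract describes.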


Richard Conway
Department of Electronics and Computer Engineering, University of Limerick
E-mail: richard.conway@ul.ie Tel: +353 61 202080 Fax: +353 61 330316

Connectionless Service Support for an ATM Network
Fergal Daly

Asynchronous Transfer Mode (ATM) is the fast packet switching technology which will be used in future broadband networks. These networks support the transmission of multimedia information, such as video and voice, at high speeds. ATM is a connection-oriented technology, whereas legacy LAN technologies, such as Ethernet, are connectionless. ATM must be able to provide a service for the transfer of connectionless traffic over its connection-oriented network in order to support the large installed base of network applications (such as web browsers) and also to provide LAN interconnection services. This presentation focuses on a number of key issues that arise when interconnecting traditional LANs to ATM (for example, bridging, routing and address resolution). The work of the ATM Forum, the IETF and the ITU-T is presented. The presentation also investigates how connectionless traffic can be efficiently transported as ATM cells. The problem of multiplexing connectionless flows of traffic is examined and a possible scheme for supporting this is introduced. Extensive computer simulation has been used to support this research. This research has been carried out over a period of two years and is funded by the Department of Computer Science and Information Systems at the University of Limerick.


Fergal Daly
Department of Computer Science & Information Systems, Foundation Building, University of Limerick
E-mail: fergal.daly@ul.ie Tel: +353 61 202695 Fax: +353 61 330876

Use of QFD in the Requirements Engineering Phase of Software Development
Brendan Lynch

Requirements engineering is becoming the key issue for the development of software systems that meet the expectations of their customers and users, are delivered on time, and are developed within budget. Requirements engineering has yet to be precisely defined, but it is agreed that it deals with activities which attempt to understand the exact needs of the users of a software-intensive system and to translate those needs into precise and unambiguous statements which will subsequently be used in the development of the system. Requirements are still, in many cases, collected, analysed and translated into system components through excessive informal interaction between users and developers, trial and error, and the ingenuity of a few individuals - often with failures that are more spectacular than the success stories.

Figure 1. House of quality

Quality Function Deployment (QFD) offers a mechanism by which users' wishes and needs (the Voice of the Customer) can be collected and translated into fully functioning products. The principle of QFD is to take customer requirements (WHATs) and develop design requirements (HOWs) for a specific product or service. By gathering data from potential customers regarding requirements, it is possible to develop priorities for software development. When applied correctly, QFD can produce detailed design requirements for new products. QFD contains a number of different tools which allow developers to assign rankings to customer requirements and design requirements. A number of questionnaire types exist to enable the developer to assign importance and uniqueness values to different requirements. By using a house of quality, a matrix-based method, QFD demonstrates to developers the effect of different design requirements on customer requirements. Trade-offs between design requirements can also be seen from the matrix. This allows the developer to produce a detailed specification of how to accomplish the functionality set out by the user. The developer will finish with a matrix like that in figure 1.
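The arithmetic behind the house of quality can be sketched briefly: each design requirement's priority is the sum of customer-importance ratings weighted by the strength of its relationship to each customer requirement, conventionally on a 9 (strong) / 3 / 1 scale. The requirements and ratings below are made up for illustration.

```python
customer_importance = {"easy to learn": 5, "fast response": 3, "few errors": 4}

# relationship[design_req][customer_req] on the usual 9 / 3 / 1 scale
relationship = {
    "online help":     {"easy to learn": 9, "fast response": 1, "few errors": 3},
    "input checking":  {"easy to learn": 1, "fast response": 1, "few errors": 9},
    "indexed queries": {"easy to learn": 0, "fast response": 9, "few errors": 0},
}

def priorities(importance, rel):
    """Weight each relationship row by customer importance and sum it."""
    return {d: sum(importance[c] * w for c, w in row.items())
            for d, row in rel.items()}

print(priorities(customer_importance, relationship))
# {'online help': 60, 'input checking': 44, 'indexed queries': 27}
```

Ranking these totals is what lets the matrix tell the developer which design requirements matter most to customers.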

Brendan Lynch
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: brendan.lynch@ul.ie Tel: +353 61 202695 Fax: +353 61 330876

Software Development Process - a QFD Application
Ita Richardson

Paulk et al. (1993) define the software development process as "a set of activities, methods, practices and transformations that people use to develop and maintain software and the associated products". The process may be defined as a set of guidelines and/or procedures which are followed during software development. However, in practice, a number of problems arise:

- the software process is not defined;
- the software process is ill-defined;
- the software process is well defined but not adhered to.

One model which has been developed to measure the process maturity of an organisation is the Capability Maturity Model. One of its major recognised weaknesses (Peterson 1995; Draper et al. 1995) is that, while it measures process maturity, it does not present a process improvement strategy. Consequently, an organisation must develop an improvement strategy for itself. This paper proposes that organisations can successfully develop their own improvement strategy with the support of Quality Function Deployment. Taking the Capability Maturity Model practices as customer requirements, and because of the measurement techniques used, QFD would allow the team to prioritise those practices which most need to be improved to move the organisation from one level to the next. The output is an improvement strategy for the organisation.

References

Draper, Lee, Kromer, Dana, Mogilensky, Judah, Pandelios, George, Pettengill, Nate, Sigmund, Gary, Quinn, David (1995), "Use of the Software Engineering Institute Capability Maturity Model in Software Process Appraisals", output from CMM v2 Workshop, February, Pittsburgh, Pennsylvania.

Paulk, Mark C., Curtis, Bill, Chrissis, Mary Beth, Weber, Charles V. (1993), "The Capability Maturity Model for Software, Version 1.1", Technical Report SEI-93-TR-24, Software Engineering Institute, Carnegie Mellon University, U.S.A.

Peterson, Bill (1995), "Transitioning the CMM into Practice", Proceedings of SPI 95 - The European Conference on Software Process Improvement, The European Experience in a World Context, 30th Nov-1st Dec, Barcelona, Spain, pp. 103-123.


Ita Richardson
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: ita.richardson@ul.ie Tel: +353 61 202765 Fax: +353 61 330876

Strategies for Controlling Generalisation Performance in Neural Network Learning
John A. Kinsella

Feed-forward neural networks are a powerful tool for a variety of recognition and classification tasks. From a mathematical viewpoint, feed-forward neural networks trained using Backward Error Propagation (B.E.P.) perform a non-linear least-squares fit to the training data. As a consequence, they are subject to the difficulty common to all non-linear models: selection of an appropriate model structure; in particular the determination of an appropriate number of parameters (weights). A network with too many parameters will have poor generalisation performance due to “over-fitting” while one with too few parameters will be unable to incorporate all the structure in the training data. Rather than analyse the selection of the “optimal” number of parameters --- a difficult and subjective task --- this talk addresses approaches which penalise excessive complexity; thereby rendering the final network insensitive to the initial (necessarily heuristic) choice of network structure. Two strategies will be discussed: Constrained Optimisation and Penalty Function methods.
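The penalty-function idea can be shown on a deliberately tiny model. Below, two redundant weights fit the same linear data, so any split with w1 + w2 = 2 fits equally well; adding a complexity term lambda * sum(w^2) to the error squeezes out the excess degree of freedom. The model, data and penalty coefficient are illustrative only, not the talk's formulation.

```python
import random

random.seed(0)
# toy data: y = 2x plus a little noise
data = [(x / 10.0, 2 * x / 10.0 + random.gauss(0, 0.01)) for x in range(20)]

def train(lam, steps=2000, lr=0.05):
    # two redundant weights: the model is y_hat = (w1 + w2) * x
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    for _ in range(steps):
        g = 0.0
        for x, y in data:
            g += 2 * ((w1 + w2) * x - y) * x
        g /= len(data)
        # penalty gradient: d/dw of lam * w**2 is 2 * lam * w
        w1 -= lr * (g + 2 * lam * w1)
        w2 -= lr * (g + 2 * lam * w2)
    return w1, w2

w1, w2 = train(lam=0.0)   # unpenalised: w1 - w2 keeps its initial value
p1, p2 = train(lam=0.1)   # penalised: the weights are pulled together
```

Without the penalty the data error gives both weights identical gradients, so their arbitrary initial difference never shrinks; with it, the difference decays geometrically towards the small-norm solution, making the result insensitive to the initial choice, which is the point of the approach.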


John A. Kinsella
Department of Mathematics and Statistics, University of Limerick
E-mail: john.kinsella@ul.ie Tel: +353 61 202148 Fax: +353 61 330316

Diploidy in Genetic Algorithms
Conor Ryan

This paper investigates the use of diploid structures and the possibility of using diploidy in non-binary genetic algorithms. Unlike the conventional implementation of diploidy in GAs, which typically involves the use of dominance operators, the methods presented in this paper avoid dominance while still remaining rooted firmly in natural biology. The use of diploidy allows a population to adapt more quickly to a changing environment, which allows an implementor to vary test cases, possibly to vary the difficulty of the problem or to reduce testing time. Two new schemes are introduced, additive diploidy and polygenic inheritance, both of which outperform current methods on a benchmark problem. Both schemes are general enough to be applied to a GA with any number of phenotypes, without any need for the use of dominance operators.
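The abstract does not spell out additive diploidy, but one dominance-free reading can be sketched: each individual carries two chromosomes and a phenotypic gene is expressed from the sum of the two allele values against a threshold table, so no dominance map is needed. The allele values and threshold rule below are invented for illustration and are not the paper's scheme.

```python
THRESHOLDS = [0, 1, 2]  # summed allele value -> phenotype 0, 1 or 2

def express(chrom_a, chrom_b):
    """Decode a diploid pair additively, gene by gene."""
    phenotype = []
    for a, b in zip(chrom_a, chrom_b):
        total = a + b  # alleles drawn from {0, 1}
        # pick the largest phenotype whose threshold the sum reaches
        phenotype.append(max(p for p, t in enumerate(THRESHOLDS) if total >= t))
    return phenotype

# An unexpressed 1-allele can ride along silently in one chromosome and
# resurface when the environment (the fitness function) changes:
print(express([0, 1, 1], [0, 0, 1]))  # [0, 1, 2]
```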


Conor Ryan
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: conor.ryan@ul.ie Fax: +353 61 330876

Representation Schemes for Genetic Algorithms
J.J. Collins

1. Representation

Recent research in the field of genetic algorithms has concentrated on modelling the characteristics of selection, crossover and mutation operators using binary notation as the representation scheme for encoding chromosomes (the genocode). Whether one subscribes to the theory of Punctuated Equilibrium or of Steady Rate and Continuous Mutational Activity in evolution, mutation plays a central role in the dynamics of genetic structures. Under mutation, the Hamming distance of high-order bits in the sequential binary genocode results in destruction of useful schemata (building blocks). An alternative representation - E-code - is described, in which the standard deviation of the effects of mutation is very much reduced, thus increasing the probability of preserving fit schemata from one generation to the next. Using a basic haploid genetic algorithm implemented with roulette selection and uniform crossover as a test platform to minimise distortion of results, we evaluated binary, E-code, Gray and real coding schemes on a standard test bed of functions. Statistical analysis demonstrates that E-code yields better overall performance than the other coding schemes.

2. Development of Analytical Methodologies

Current research focuses on the derivation of a methodology and associated tools for global modelling and visualisation of genetic algorithms. Using Principal Component Analysis (PCA), a generation (an instance of a population of individuals) is conceptualised as a point in a high-dimensional vector or eigenspace (the space of all possible solutions). We derive the principal components of the distribution of generations (the eigenvectors of the covariance matrix of the set of points). The normalised eigenvectors of the covariance matrix yield eigen-generations, or images for visualisation, and form a basis set with which to describe the evolution of the population in time.
A pattern vector (a set of weights) is calculated for each generation by projecting it into the vector space. A 3D manifold (a parametric eigenspace representation) is derived from the pattern vector which depicts the dynamics of evolution. The short-term objective is to evaluate this methodology for:

- validation of E-code as a sound representation scheme;
- genetic engineering, specifically the evolutionary derivation of neural architectures.

3. Long Term Objectives

Our long-term research objectives are described, particularly the role of evolutionary algorithms and neural networks in modelling the behaviour of intelligent autonomous interacting agents. A parallel engineering objective is the realisation of a generic flexible scheduling system, based on agent technology, for a poorly constrained dynamic manufacturing environment. To bring our aims to fruition, we have commenced setting up an inter-departmental group which will focus on the field of computational intelligence, facilitating project co-ordination and informal meetings. All enquiries and contributions are very much appreciated (the first meeting is scheduled for the start of the forthcoming Autumn semester).
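The Hamming-distance problem named in section 1 can be made concrete. In sequential binary coding some adjacent values are many bit-flips apart (a "Hamming cliff"), so mutation cannot make a small phenotypic move across the cliff. Gray coding, one of the compared schemes, removes the cliffs; E-code's construction is not given in the abstract, so Gray stands in here as the standard low-Hamming-distance alternative.

```python
def gray_encode(n):
    """Standard reflected binary (Gray) code."""
    return n ^ (n >> 1)

def hamming(a, b):
    """Number of differing bits between two codewords."""
    return bin(a ^ b).count("1")

# plain binary: stepping from 127 to 128 flips all 8 bits at once
assert hamming(127, 128) == 8

# Gray code: every pair of adjacent integers differs in exactly one bit,
# so the smallest phenotypic move is always a single mutation away
assert all(hamming(gray_encode(v), gray_encode(v + 1)) == 1
           for v in range(255))
```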


J.J. Collins
Department of Computer Science & Information Systems, University of Limerick
E-mail: j.j.collins@ul.ie Tel: +353 61 202409 Fax: +353 61 330876

Experts, Diversity and Diverse Experts: Engineering stable ANNs
Niall Griffith

I will talk about how I hope to develop my research on improving the stability of solutions produced by training neural networks. This is important if inductive programming is to provide systems that generalise well to novel inputs. I am interested in techniques that are general and that do not rely upon knowledge about the domain being modelled. My approach thus far has been to partition problems, assuming that a part of the problem will be simpler to learn, and hence will generalise better. This certainly seems to be borne out by comparing 'specialist' networks with single un-decomposed nets. Expert ensembles give largely comparable results to committees of diverse un-decomposed networks. However, measures of the success of the networks show that the diversity that can be exploited over an entire function is much greater than that available to an expert sub-function. Having partitioned the problem, each sub-problem has less potential diversity, so currently I must decide to exploit one or the other. I am now looking at how to engineer diversity across sets of experts in a way that is not thwarted by the reduced diversity available to an expert. I will describe my intended research path and, time permitting, relate this to feature selection and dimensional reduction.


Niall Griffith
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: niall.griffith@ul.ie Fax: +353 61 330876

Using Music Corpora to verify Hypotheses
Donncha O Maidin

The task of modelling a music score in a programming environment is one of some complexity. A score consists of a variable number of staves, which may be modelled by means of a linked list of staves. Each stave may, in turn, be represented by a linked list of heterogeneous objects. Some of these objects, such as chords, may in turn be represented by yet another linked list. The complexity of such a list-of-lists-of-lists increases considerably when we try to map the various elements to a linear time scale. Added to these complexities we have the additional problems of representing a non-hierarchical set of scoping mechanisms, such as those involved in clefs, key signatures, time signatures and many others, which have the effect of modifying the meaning of the various symbols in a score. In order to do any efficient processing on a music score, it is essential to find a way of reducing this complexity to manageable proportions. Using an object-oriented approach, it is possible to model a music score with just two objects. Each of these objects hides the underlying complexity (using encapsulation and message passing). A combination of inheritance and polymorphism proved useful in managing the complexity of the run-time binding that arises from the nested lists of heterogeneous objects. The overall effect is that a programmer's environment is created in which attention may be focused on the thing being modelled instead of on how it is modelled. The language of implementation used here is C++. This environment provided a number of additional features helpful in the score design, including multiple inheritance, templates, default parameters, operator overloading and various additional class libraries. This score model is used to verify a series of statements which a musicologist made about a body of music. For this purpose, two existing music corpora are used.
Statements made by the musicologist are extracted from their context, and algorithms are constructed which are then converted into programs. These are run on the contents of the corpora in order to seek verification of the original assertions made.
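The verification loop described can be miniaturised: encode each tune as a pitch list, turn a musicologist's statement into a predicate, and run it over the corpus. The statement, the tunes and the MIDI encoding below are invented for illustration; the actual system works over real corpora through the C++ score model.

```python
corpus = {
    "tune A": [62, 64, 66, 67, 69, 67, 66, 64],   # MIDI note numbers
    "tune B": [60, 64, 67, 72, 67, 64, 60, 55],
}

def stepwise_fraction(pitches):
    """Fraction of melodic intervals that are one or two semitones."""
    moves = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
    return sum(1 for m in moves if m <= 2) / len(moves)

# Hypothesis under test: "this repertoire moves mostly by step."
for name, pitches in corpus.items():
    print(name, stepwise_fraction(pitches) > 0.5)
```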


Donncha O Maidin
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: donncha.omaidin@ul.ie Tel: +353 61 202705 Fax: +353 61 330876

Towards a Standard Independent Score Processor
John Morris

In May 1995 a model for representing music scores in a form suitable for general processing, called CPNView, was completed by Dr. Donncha O Maidin. Dr. O Maidin is head of the Research Centre for Computational Musicology and Computer Music at the University of Limerick, Ireland. CPNView provides an abstract view of the information content of a score. The model accepts as input standard file-based representations (e.g. ALMA) and converts them to the internal CPNView format. The reason for this translation is that file-based formats are unsuitable for general processing of music databases. The Notation Interchange File Format (NIFF) was developed in August of last year by Cindy Grande of Grande Software Inc. NIFF is a standard file-based representation, and its exceptionally thorough design is the product of a lengthy consensus-building process between a diverse group of notation software designers and researchers, including Steve Keller and Lowell Levinger of Passport Designs, Randy Stokes of Coda Music, and Cindy Grande. The NIFF format is gaining ever wider acceptance as the preferred standard in the computer music community, due to NIFF's completeness, flexibility and portability (it is cross-platform). CPNView is currently undergoing further development with NIFF. One area of augmentation is in extending its capabilities to handle general polyphony, that is, music in which concurrent notes and rests share the same stave but have rhythmic independence (e.g. piano music). In order to process and analyse information in a music score one needs to perform what is termed a Standard Traversal of the score: the method of visiting each element in a score starting from the beginning, moving from the top to the bottom and from the left to the right of the score.
Polyphonic traversal must obey the rules of time synchronisation across each concurrent note in a stave and each stave in a score if it is to work properly. That is, it must visit all concurrent notes in the score and ensure that their time values are synchronised before moving any further. Instead of a Standard Traversal of a score, which is the norm for most score processing models, a parallel traversal of all the simultaneous notes in a score is illustrated. This is required if we are to process polyphonic scores. It involves the use of a number of traversal operators (one for each concurrent note in a stave) moving from left to right, from the start of a stave to the end. A number of tools for polyphonic processing are also under investigation, including Harmonic Iterators and Harmonic Pattern Matchers. The development of these new structures requires some modest augmentation of the internal representations in CPNView, the enhancement of the input translator, and extensive testing with polyphonic corpora. With the development of further input and output translators for other file-based score representations, CPNView will become independent of any particular representation for processing score abstractions. Furthermore, the environment can be used for translating between formats, so that the score abstraction can be represented in various file formats, thus facilitating the use of specialised pieces of software to process the representations: e.g. ALMA -> CPNView followed by CPNView -> NIFF gives ALMA (the old standard) -> NIFF (the new standard).
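The time-synchronised parallel traversal can be sketched with one iterator per voice, advanced together by onset time so that concurrent notes are visited as a group. The voice data, field layout and function names below are invented; CPNView's actual operators are C++ objects.

```python
import heapq

# each voice: list of (onset_time, duration, pitch) in score order
voices = [
    [(0, 2, "C4"), (2, 2, "E4")],                 # upper voice
    [(0, 1, "C3"), (1, 1, "G3"), (2, 2, "C3")],   # lower voice
]

def parallel_traversal(voices):
    """Yield (onset_time, [events starting at that time]) in time order."""
    heap = [(v[0][0], vi, 0) for vi, v in enumerate(voices) if v]
    heapq.heapify(heap)
    while heap:
        t = heap[0][0]
        group = []
        while heap and heap[0][0] == t:  # gather every voice sounding at t
            _, vi, i = heapq.heappop(heap)
            group.append(voices[vi][i])
            if i + 1 < len(voices[vi]):
                heapq.heappush(heap, (voices[vi][i + 1][0], vi, i + 1))
        yield t, group

for t, notes in parallel_traversal(voices):
    print(t, notes)
```

The heap keeps the per-voice iterators synchronised: no voice is advanced past an onset until every concurrent event at that onset has been visited, which is the synchronisation rule described above.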


John Morris
Department of Computer Science & Information Systems, Schuman Building, University of Limerick
E-mail: john.morris@ul.ie Tel: +353 61 202782 Fax: +353 61 330876

Development of Novel Multimedia Browser Interface Mechanisms - the first prototype
Mikael Fernstrom, Liam Bannon, Hugh Roberts, John Burton

The development of Graphical User Interfaces (GUIs) and systems for selection with windows, icons, menus and pointers (WIMP) has made it easier for people to interact with systems, but at the same time these early ideas have become a de facto standard for building user interfaces and browsers. Such windows-styled GUIs cannot always cater for the multi-dimensionality of the relations and interactions between the information and the users. Most existing systems only deal with visual and in some cases audio representations of information, and the user is left with the traditional WIMP interface. The electronic world of information is quite poor in offering representations that cover the full scope of the human senses (time, temperature, touch, texture, etc.), which cannot yet be directly represented by a computer system. Multimedia has started to provide a richer user interface, providing sound and images and integrating video, graphics and text on a single platform, a computer; thus multimedia in itself is not a product, it is merely a method of creating more effective communication with the user [Bannon 1993]. The common WIMP ideas and the use of poor domain metaphors have become restrictive to the access of information [Nelson 1990], and new models and methods are required. It is also important to note that users' requirements and abilities change: in time, in different situations and between individuals [Bannon 1991]. A browser has to provide two main functions: the ability to represent objects and the ability for the user to interact with objects or sets of objects. Browsing is closely related to searching, but takes a more informal and opportunistic approach. The use of filters and search tools can enhance browsing by helping the user to find a focal point close to the problem domain, or to reduce the number of items to a suitable number.
Another key aspect of browsing mechanisms could also be the possibility for the user to make the output from one process the input of a subsequent process [Shneiderman 1994].

References

Bannon 1991: Bannon L, Bødker S, "Beyond the Interface: Encountering Artifacts in Use", in Designing Interaction: Psychology at the Human-Computer Interface, Ed. J Carroll, Cambridge University Press, Cambridge, England, 1991.

Bannon 1993: Bannon L, "Problems & Pitfalls in Multimedia or Multimedia: What's the Fuss?", Lecture notes, PMTC Course on Multimedia, University of Limerick, 1993.

Nelson 1990: Nelson T. H, "The Right Way to Think About Software Design", in The Art of Human-Computer Interface Design, Ed. Brenda Laurel, Addison-Wesley, Reading, Massachusetts, USA, 1990-93 (5th ed).

Shneiderman 1994: Shneiderman, Ahlberg, "Visual Information Seeking using the FilmFinder", ACM SIGGRAPH Computer Graphics, 1994.


Mikael Fernstrom
Department of Computer Science & Information Systems, Foundation Building, University of Limerick
E-mail: mikael.fernstrom@ul.ie Tel: +353 61 202699 Fax: +353 61 330876


