Treffer: Design and Implementation Issues for an Object-Based Distributed Software Engineering Support System ; Technical Report 2018-07-ECE-024; Technical Report 88-CSE-18
Title:
Design and Implementation Issues for an Object-Based Distributed Software Engineering Support System ; Technical Report 2018-07-ECE-024; Technical Report 88-CSE-18
Authors:
Publisher Information:
University of Alabama at Birmingham. Department of Electrical and Computer Engineering
Publication Year:
2018
Collection:
University of Alabama at Birmingham: UAB Digital Collections
Subject Terms:
Document Type:
Fachzeitschrift
text
File Description:
application/pdf
Language:
English
Relation:
Technical report (University of Alabama at Birmingham. Department of Electrical and Computer Engineering); 2018-07-ECE-024; Technical report (Southern Methodist University. Department of Computer Science and Engineering); 88-CSE-18; Technical Report 2018-07-ECE-024 Technical Report 88-CSE-18 Design and Implementation Issues for an Object-Based Distributed Software Engineering Support System M.G. Christiansen Murat M. Tanik This technical report is a reissue of a teclmical repmi issued March 1988 Department of Electrical and Computer Engineering University of Alabama at Birmingham July 2018 Technical Report 88-CSE-18 DESIGN AND IMPLEMENTATIOR ISSUES FOR AN OBJECT-BASED DISTRIBUTED SOFTWARE ENGINEERING SUPPORT SYSTEM M. G. Christiansen M. M. Tanik Department of Computer Science and Engineering Southern Methodist University Dallas, Texas 75275-0122 March 1988 Design and Implementation Issues for an Object-Based Distributed Software Engineering Support System 1. Introduction M. G. Christiansen M. M. Tanik Southern Methodist Univ. Dept. of Computer Science and Engineering Dallas Tx. 75275 Current developments in VLSI technology have enabled the development of low cost processors and memories. The availability of these components have generated interest in distributed and parallel processing applications, and has made available many new architectures. But little advancement has been made in providing software engineering support for distributed applications. The major objective of this work is the development of a software engineering support system for distributed applications. The key feature of this support is the use of object programming in the definition of applications. The use of an object paradigm provides several advantages over conventional approaches to program design [ 1 J. We shall see that this approach is particularly well suited for the development of distributed applications. 
Because distributed applications are in reality a set of cooperating processes, another goal of this support system is the abstraction of processes into objects that are managed by the software engineer. The system supports partitioning the application into a hierarchy of software objects called "Applications", "Abstract Processing Elements", and "Objects" are distributed and executed in a network of processing elements . - :l - A distributed processing environment requires the support of a communications medium and the allocation of that medium between cooperating processes. When the distributed environment is a set of heterogeneous processors, issues of individual processor capabilities, and data compatibility also arise. This system supports these allocation of communication channels in the development and implementation of the distributed applications. Perhaps one of the greatest difficulties in developing distributed applications is the monitoring and debugging of the developing application . This system provides the means of monitoring remote processes from the development workstation . The software engineer examines the state of remote processes and the objects they contain . The ability to checkpoint and set breakpoints is also provided . Although the design of a general purpose programming tool is not an objective , the distributed support system certainly has many features of one. These features include a management tool for the development of object classes and associated code . It makes use of graphics and graphical programming metaphors like windows, icons, and pointing devices. Features associated with automatic programming such as the automated combination of object definitions , and the construction of processes for remote execution are also provided. 2. 
Discussion The following sections discuss the use of the object paradigm in defining cooperating processes, management of network resources, monitoring and debugging facilities, and run-time support. Before addressing these topics we will clarify some of the concepts addressed in latter sections. Specifically, we will present a definition of distributed processing and discuss motivation. 2.1. Distributed Processing and Applications A distributed application can fall into one of two categories. In the first case we have a process that request a service of one or more processes resident on remote processors . These requests are for resources or services that are not available at the local requesting machine. In - 3 - this model we refer to the requesting process as the client, and the process rece1vmg the request as the server. The request causes a state change and a result being returned through a message from the server. The second category are applications in which a set of processes cooperate in the solution of some problem. These applications are thought of as parallel processing, and the goal is not so much the sharing of resources , as a division of labor in order to obtain a result faster . Today see that microprocessor and inexpensive memories are making it possible to combine dozens to hundreds of processors in various configurations to form parallel processing systems. As the cost and size of processors and memory decreases, it is foreseeable that distributed systems with thousands of sophisticated processing elements will be interconnected and utilized. A goal of our research is the development of programming support tools that allow the efficient utilization of these capabilities as they become available. Even today there is much potential for distributed processing applications that can take advantage of resource sharing and parallel processing. 
\Vc have special purpose processors that support database , array processing, graphics, signal processing applications. These resources could be shared among a group of interconnected workstations, whose individual usage of a single resource is not great, but necessary for certain tasks. Another potential application exist in environments with large networks of workstations that spend much of their time idle or lightly loaded. The support system could aid in the development of applications that can utilize the combined resources of these workstations. In supporting resource sharing and parallel processing the underlying distributed architectures should have a set of common attributes [2]. First each processor communicates with its neighbors through a set of one or more communications channels. That is, no shared memory exist between processing elements, and the bandwidth of these channels are considerably slower than that of processor memory. Second, exist an underlying network protocol which the processing element uses to communicate with its neighbors. The network must be able to reliably transfer information between neighboring processing elements and to route information to - 4 - processors that are non-adjacent. Third, each processing element must provide some run-time support to the processes that reside locally. The processor and its local operating system must be able to load and execute at least one process. This process must be allowed access to the network to exchange information with other processes executing in the network. We shall see that local run-time support is also important to features supported by the distributed software development environment. Applications that are likely candidates for implementation using this distributed processing support system include: Distributed control systems, operating systems, and parallel scientific applications. 2.2. 
Object Oriented Application Development The object oriented paradigm supports the development of programs as a set of software modules or objects. These object are similar to abstract data types in that they provide the ability to define a set of variables and operations in a single construct. Smalltalk [3] is perhaps the best known of the object oriented languages, and it provides the class construct which allows the software engineer to define a software object as a set of instance variables and methods that operate on these variables. The class constructs provide some important features : 1. It allows for the natural decomposition of the application into a set of cooperating objects. This fits well into a top-down, structured approach to software design. 2. Methods provide an interface for the object which restricts access by external objects to its set of operations. This interface serves to protect the object from illegal access and enforces the original design structure. It also serves to hide the implementation of the object behind its set of operations . Object languages are often criticized because of the inefficiency of message passing, dynamic binding, and the resources of maintaining a context for every object. To support these features special architectures have been developed to provide processor support for the requirements of languages like Smalltalk. Still these implementations are not efficient or cost effective - 5 - enough to consider the use of these pure object-oriented languages for general applications. To address these run-time issues new object-based languages such as Ada [ 4] and C++ [5] have been developed. These languages have relaxed the requirement that all interaction between objects be restricted to message passing, and have provided methods of allowing an object to grant access to its internal data structures providing for faster interaction between objects. 
These languages have also scraped dynamic binding and run-time type checking in favor of compile-time type checking and static binding. By allowing restricted access to internal data structures these object can take advantage of the fast interprocess communication capabilities of memory shared between them as well as any message passing provided by the language. This allows the conceptual partitioning of the application that object-based programming was designed to provide. But when the interaction between two objects requires that a large amount of information be exchanged or shared, the partition between these objects can be relaxed, and the data can be exchanged directly between the two. 2.3. Distributed Application Development This section discusses support for the development of multiple cooperating processes in a network of processing elements. Our system provides three levels of abstraction to aid in the design of applications . These abstractions are the "Application", the "Abstract Processing Element" (APE), and the "Object". An Abstract Processing Element is defined to execute on a physical processing element, possibly concurrently with other APEs in a multi-tasking fashion. An APE contains Objects, which are executed or applied in the context of the APE. As in the traditional definition of objects, these contain a state and set of operations. An APE can support any number of objects, limited by processor capabilities. Each APE supports a single thread of execution, thus only a single object can be executing within an APE at any instant. An Application is defined as a set of cooperating APEs. Applications contain the bindings - 6 - of the Abstract Processing Elements to physical processors, and it provides a context for other information that is specific to each application developed. 
One of the characteristics of this system is the reuse of APEs and objects, and the Application provides the mechanisms needed to implement the features that are application dependent and cannot be reused in later designs. The support system will place no restrictions on memory management supported by each processing element in the network. In a processing environment that supports virtual memory, each APE can be supported by its own virtual memory space. Objects execute in the memory context of an APE. In this way objects can permit the access of its local state to other objects in its APE to allow data to be readily exchanged between two objects. Two objects that share the same APE cannot execute in parallel. Objects that reside on separate APEs can only exchange data through message passing, but if these APEs also exist on separate physical processing elements, the objects can execute in parallel. The case where two objects reside in the same APE is referred to as interprocess communication. When the objects reside on separate APEs we refer to the communication as interprocess communication. 2.3.1. Applications The highest level of abstraction offered by this system is the Application, and its purpose is mainly to contain the bindings of Abstract Processing Elements to the physical processors. It can be thought of as an executable job that is evoked and executed on a set of processors. These bindings are made before the Application is executed, and cannot be changed during run-time. The static binding of APEs to physical processors allows us to verify that the resources needed by the APE and its Objects are available at the target processor. By disallowing dynamic bindings we increase the speed at which the application executes by eliminating run-time consistency checks that would be needed to verify a dynamically changing situation . 2.3.2. 
AlEtract Processing Elements An Abstract Processing Element is a programing abstraction that represents a unit of execution i.e. an APE can only support a single thread of execution. Multiple APEs can share - 7 - the same processing element in a multitasking fashion, but an APE also maps onto simpler processing elements that can support only a single process. An APE can be created dynamically. The application has the ability to send a request to a processing element that an APE previously mapped onto it be created in its local space , where the new APE and its objects are allocated. The body of code that defines an APE includes an initialization procedure that is executed by the remote processor when the it is created. This provides the ability to initialize the needed objects before any messages are received and processing begins. The objects that an APE has access to are defined at compile time and are referred to as the "Object Set" of the APE. Although all APE can dynamically create new objects from its object set, it is not able to add new objects to Lhis set. By statically defining the set of objects all APE can support, we are able to determine the resources and special requirements a process object set has. This allows us to determine whether the remote processing element intended to support that APE has the needed capability before run-time. Each APE is responsible for receiving messages across the network from other APEs as a service to the Objects that reside internally. The APE decodes the message , calls the appropriate object method, and passes the message body along to it. An APE has the ability to respond to a set of messages itself. These messages are designed to perform certain maintenance operations and to allow the system to poll its local state. It also responds to request from objects that reside locally. The body of the code that forms the APE are constructed by the software engineer using the distributed software support system. 
The support system allows the engineer to define and combine object definitions into a code body that forms the APE. The APE and its objects are defined to make use of what resources are provided for it by the remote machine . Once an APE body is defined, it is shipped across the network to the remote machine where it is compiled and linked into an executable module. - 8 - 2.3.3. Objects Objects are program units that execute in the context of an APE. As mentioned earlier objects can communicate with other objects tJ1rough either intraprocess or interprocess communication. When an object communicates with objects that reside in the same local APE it is considered an interprocess communication. The key feature of this form of communication is that it can take place through either function calls or data sharing, but requires no message passing through a communication channel. An interprocess communication occurs when the object wishes to communicate with an object that resides in a remote APE. It has no choice but pass a message to the remote object through a communications channel. Any attempt to share data between remote objects is an error and will be flagged as such by the system. From an object's perspective the channel is transparent and all communication takes place directly between it and related objects. In the case of intraprocess communication this is clearly true. But in the case of interprocess communication the channels are being managed by the APEs supporting the objects and the underlying network communication mechanisms . An exception to this rule is an object's ability to poll local and remote APEs for services such as dynamic object creation, communication services, and others . The set of messages, and their arguments which are supported by an object are also statically typed. This restriction allows us to verify the messages sent from one object to another at compile time. 
In addition, because an Application and the APE object set is fixed at compile time ·we are able to verify tJ1at the request for dynamic object creation can be supported by a specific APE at compile time. 2.4. Network Support This section deals with the issues of network support for the distributed application . We will discuss the issues of dynamic vs. static name binding, forms of message passing, data marshaling, and general support for networks of heterogeneous processors. - g - 2.4.1. Name Space Binding Name spaces are abstractions commonly found in network and distributed applications. They can be thought of as a database of names and associated attributes distributed on the network, whose scope in not limited to single processing elements. Entities in the network use names to access the information associated with them. Name spaces are usually implemented as server processes that reside on each processor and insure that the information contained in the name space remains consistent across machine boundaries [7]. Attempts have been made for object based programming paradigms that rely on distributed name spaces for all forms of intraprocess communication [8]. From the perspective of distributed software support systems the nan1e space includes , but is not limited to , the binding of Abstract Processing Elements and Objects to network addresses within an Application. Within certain limitations these bindings can be defined as static or dynamic. A static binding is made at compile time and is enforced throughout the life of the application . A dynamic binding is made at run-time and can be created and removed at any point. Each of these approaches has its positive and negative points with respect to the capabilities provided and run-time support required . The decision of which binding to form for an APE or Object instance depends on several factors including run-time efficiency, application requirements, and remote processor capabilities. 
Furthermore , the decision might be made to support one form of binding for APEs and another for its Objects, depending on the requirements of the application. In each of the following sections these issues are discussed with respect to static and dynamic name space binding of APEs and Objects . 2.4.1.1. Issues of Static vs. Dynamic Name Binding The binding of a. name can be applied to both APEs and Objects . Each of these cases require their own form of compile-time and run-time support for static or dynan1ic binding. These requirements are discussed below. - 10 - Abstract Processing Elements and the Objects contained within them require the mechanisms of allocation and communication. That is, an existing object must be able to request that a new object be allocated, and communication with the new object must be established between it and the other objects in the name space. The difference between dynamic and static bindings is when and how these operations are performed. The static objects are allocated when the Application is initialized. Except for the underlying communication channels provided by the network, all references are established at compile time. Message routing can be performed by simple indexing methods, and assuming that each of the remote processing elements was properly initialized, no run-time verification of object existence is needed. The Dynamic objects can be allocated and de-allocated at any point in the Application's lifetime. The run-time support must deal with request for the creation and removal of objects from the name space, and deal with messages from faulty objects to non-existent APEs and Objects. Because of the static nature of Application and APE object sets, we can determine that a message can be supported by an APE at compile time, but we cannot insure that the object has yet been allocated. For example, the Application defines the binding of an Abstract Processing Element to a physical processing element. 
The implementation of an APE is an executable module that is loaded by the remote processor and has access to the communications medium. For any APE, its module must be distributed to the remote processor before its execution can begin. The distribution can be done when the application is booted, previously, or during run-time. This distribution is static in the sense that APEs are statically bound to remote processors by the Application. Therefore a run-time request for a specific APE can be verified against the Application specification at compile time, and any inconsistencies flagged as errors. 2.4.2. Message Passing Once a binding between two objects JS established m the name space, messages can be - 11- passed between them in the Application . In this section we discuss the run-time requirements and forms of message passing in distributed applications. The extensive use of static typing in this distributed software engineering support system greatly reduces the burden of supporting message passing. There are three aspects of application definition that are fixed at compile time in our development environment: 1. The messages and arguments that the object supports . 2. The objects that make up an APE's object set. 3. The assignment of APEs to remote processing elements. By requiring these assignments to remain static we are able to verify that the requested operation is supported by the target object and that the type and number of arguments are correct. In the case of intraprocess communication, which occurs locally and is implemented by function call alone, we are able to bind the call at link time . The two aspects of application definition that is not fixed at compile time are the allocation of objects from an object set in an APE, and the allocation of APEs in a processing element. So it is possible for a remote object to make reference to an object that is not currently allocated. 
This is a run-time error and the handling of it falls ilito the category of run-time support covered in a latter section. A unique feature of this distributed support system is that it supports three modes of message passing. These three modes are similar in philosophy to the modes of message passing supported by ABCL/1 [9]. These modes are immediate, blocking, and forwarded . The immediate mode is similar to message passing as it is commonly thought of. The object forms a message and sends it to the receiving object. The sending object then continues with its processing and the appropriate operation is evoked by the receiving object. Immediate mode message passing is only possible if both the sending and receiving objects reside on separate processing elements. This allows processing to occur in parallel. The blocking mode is similar to the Remote Procedure Call [ 10]. The sending object suspends operation until a reply is sent from the receiving object. This form is intended to emulate a procedure or function call. Note that results can be passed by value or result if the - 12 - called object resides in the APE local to the sending object. But if the recurring object is remote, the result can be passed by value only. This verification can be made at compile time. Using tl1is mode the execution of the caller and callee can be thought of as a single thread of execution that possibly crosses machine boundaries. This mode would be most useful when a object needs the resources of a remote processor such as an array processor, and local processing cannot continue until a result is formed. The forwarded mode allows the specification of an object as the recipient of the result of the operation invoked by the message. This mode allows the sending object to specify to the receiving object that the result of the operation be forwarded to a third object. This mode is especially useful in parallel applications where a manager process distributes work to server processes. 
Each server process forms a partial solution that is forwarded to a third process which combines these partial solutions into a final form . 2.4.3. Data Marshalling An important issue of distributed processing and message passing in a heterogeneous processing environment is the compatibility of tl1e data exchanged from one machine to another. The conversion of data from one machine into a form compatible with anotl1er is termed the marshaling of that data. A distributed programming system should perform the needed maTshaling transparently to the application. Three forms of data marshaling have been suggested [11] . These are "sender makes it right", "receiver makes it right", and "intermediate format". In "sender makes it right", the sending process transforms the data i11to a format that the receiver will understand. This method requires that the sender know the type of the machine it is sending to, and bow the format of the receiver differs from its own. This knowledge must be provided for every type of machine that can be accessed within the Application. In "receiver makes it right" the data received is transformed by the receiver of the message. This requires that the message be tagged with ilie machine type of the sender. It also requires that the receiver be provided the translation for each machine it is capable of receiving - 13 - messages from . In the "intermediate format" the sender translates its message into a common format that 1s shared by the entire Applica.tion. At the receiving end the message is translated into a machine usable format for processmg. This method offers the greatest flexibility and ease of implementation, but requires two translations per message and the associated overhead. As of this time the form of marshaling supported by our system is undecided. The extenslve use of static typing employed in this system allows the determination of the correct set of marshaling operations needed by an APE in an Application. 
This reqmres that the software engineering support system be supplied with appropriate translation knowledge of the remote machines. The description of how this knowledge is represented and utilized 1s discussed in a later section. 2.4.4. Special Processing Elements When an Application is constructed, the Abstract Processing Elements are bound to a physical processor in the network . This binding carries with it all of the information about the processor required by the APE and its Objects . An example of this knowledge is special processing capabilities provided by a specific machine in the network. For example an array processor, or graphics capabilities . The objects in the application being developed on the support system must be provided with general knowledge of how those resources are utilized. This knowledge might be about tl1e support libraries needed to utilize the hardware, in which case tl1e knowledge would be supported by a set of procedure definitions that allow the system to verify the procedure formats before finished modules are transported across the network to the remote machine and compiled. Another important issue 1s the sharing of remote processors and their special features. The support system provides the ability to share the network among two or more Applications . This calls for the implementation of resource locking and denial of request for resources. Supporting tl1is requires either pre-defined methods of error handling, or operator intervention to - 14- correct problem situations. 2.5. Run-time Support The run-time support provided by this system can be partitioned into two categories: "Application Definition" and "Application Monitoring". 2.5.1. Application Definition Application definition provides the capability to bind Abstract Processing Elements to physical processing elements . 
This binding includes constraint checking, module transfer, resource allocation, remote status of processors , resource allocation, and static deadlock detection. Much of these capabilities are supported by the graphics and symbolic capabilities provided by the support system . An application is modeled as a set of interacting APEs and bound to processors. This process is represented as a directed graph, where nodes represent the Objects and APEs, and the arcs represent the relations between them. This graph is then manipulated and processed to provide the features mentioned above . 2.5.2. Application Monitoring Application Monitoring provides the ability to monitor the Application during run-time . In a distributed application, run-time monitoring is extremely difficult due to multiple threads of execution that interact in a nondetennistic manner. This system provides the ability to monitor multiple APEs executing in an Application. The monitoring capability comes in the form of: 1. Network message monitoring. 2. Views or windows into APE's contexts . 3. Break Points with notification . 4. Checkpoints with notification. 5. Step and trace capabilities. 5. Constraint monitors that can be established for an APE. Network message monitoring allows the developer to monitor messages that are passed around the network. From the console the software engineer can specify the set of messages they wish to monitor. Copies of these messages will be routed to development system console - 15 - and displayed for examination. Views or APE windows allow the developer to examine the state of objects contruned in specific APEs in the network . This feature is similar to the symbolic debugging capabilities, but in this context objects and member variables will be examined. This process is enhanced by the ability to use the object definition to access and transform the contents into a readable format. 
Breakpoints and checkpoints provide the ability to monitor the execution of certain objects within an APE in the application. These features is coupled with step and trace capabilities which allow the developer to monitor the run-time behavior of the application. Notification refers to the process of sending messages from remote processes to the support system console in the event of a breaking point, checkpoint, exception, etc. This process makes extensive use of windows to provide display areas in which this information can be properly organized. In order to provide the ability to step and trace a remote APE, the developer will require an interactive input to the remote process . This is another application of APE views mentioned earlier. Construct monitors can be thought of as agents that execute in the APE and monitor the executing system checking for specified constraints. These constraints might be on value ranges of variables , or the consistency of a list or queue. This provides the ability to enforce constraints on some aspect of an APE in the Application. It will be useful in locating run-time errors when the cause has not yet been determined. An object is expected to provide a constraint verification method that returns an indication of the health of the object. 3. Implementation Issues The art of program construction has been described as the task of specifying the solutions to problems in machine readable forms. The development of programming languages and software engineering can be viewed from the perspective of developing "better" ways in which we specify solutions . In many ways the development of this software support system can be seen as developing a programming support system. - 16 - In the implementation of this support system we have many complicated representation problems to deal with . For example , we wish to represent the resources and capabilities of remote processing units. 
We wish to use these representations to verify that the requirements of an APE can be met by a processing element. These representations can more easily be made and manipulated by a language specifically designed to support such work. In the following sections the issues of object programming support, network support, and run-time support will be discussed.

3.1. Object Programming

The applications developed in this system will rely on the object paradigm; the application will be described in terms of cooperating objects that can be partitioned into Abstract Processing Elements and executed on remote processors. These objects will be written in the Ada language, which supports the object paradigm. At the development system level the Objects and APEs are described in a symbolic manner that can be manipulated by the support system. This relies heavily on Lisp and the packages provided by the Explorer on which the prototype system is being developed.

The Application abstraction is described as a directed graph of interconnected objects. This graph representation is used to determine the optimal partitioning for processor placement, constraints on inter-object communication, necessary data marshaling, compatible message arguments, and more. After the application is defined in terms of APEs, these APE modules will be transported to the target processor and compiled automatically into executable modules. An APE module includes the needed network interface. Each module includes an initialization routine that will allocate the static objects, open the needed communication channels, and stand ready to receive messages for its internal objects. Appropriate methods are supplied to perform dynamic allocation of APEs and Objects. Each remote processing element will be supplied with a simple server process that performs such duties as receiving and compiling APEs transmitted across the network, returning status or error messages if any.
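A minimal sketch of the server's receive-and-compile duty, with hypothetical names and a placeholder command standing in for the target's real Ada compiler:

```python
import os
import subprocess
import tempfile

def handle_ape_submission(ape_name, ape_source, compile_cmd=("true",)):
    """Store an APE module received over the network, compile it, and
    return a status reply; compile_cmd is a stand-in for the compiler."""
    directory = tempfile.mkdtemp(prefix="ape_")
    path = os.path.join(directory, ape_name + ".src")
    with open(path, "w") as f:
        f.write(ape_source)
    result = subprocess.run(list(compile_cmd) + [path],
                            capture_output=True, text=True)
    if result.returncode == 0:
        return {"status": "ok", "module": path}
    return {"status": "error", "messages": result.stderr}
```

The reply mirrors the status-or-error-message behavior described above; `("true",)` simply succeeds on any input and would be replaced by the real compiler invocation on each target machine.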
It will also stand ready to create a new process that contains the image of an APE stored locally. This server will be specialized for each specific machine, such that it will be able to respond to queries about the status of its processor and any local resources. The server also checks the revision levels of the APE modules stored locally to determine if newer versions exist before an application is started. This list of requirements is sure to grow as our implementation of the support system prototype advances.

3.2. Network Support

Little discussion has been offered up to this point of the support expected from the network for this system. Our model of communication is that of point-to-point interaction between objects. At the implementation level, the APE will establish a channel for itself to be shared among the objects it hosts. A message passed between two objects is routed between the two processors on which they are hosted by the underlying communication services. A message requires a destination address composed of the processor id, the APE id, the Object id, and the operation id. All of these ids, except for the operation, can be dynamically allocated. Therefore a request for the dynamic allocation of an APE or Object requires that the appropriate ids be returned to the requesting objects.

A possible implementation strategy is to base our system on TCP/IP [15] and sockets, which are supported on our Sun and Explorer workstations. Sockets offer various levels of service, depending on the desired dependability of the connection. They offer a fast, packet-oriented service that is not guaranteed to be error free, as well as a more expensive service that guarantees error-free transmission. If the faster, less secure service were used, the system would need to implement its own protocol to ensure error-free transmission. Another possibility offered by our workstation environment is Remote Procedure Calls (RPC) [16].
RPC is a fast protocol that allows the passing of packets between RPC servers. It is implemented on the less reliable services, and implements its own protocol. It would provide the system with a fast method of transferring small amounts of data between processes. This fits nicely into the message passing mentioned in section 2.4.2.

3.3. Run-time Support

An important feature of this system is the run-time support given to the software engineer developing a distributed application. Two types of run-time support were defined in section 2.5. These were "Application definition" and "Application monitoring". The implementation of Application definition support was discussed in section 3.2. This section discusses the implementation of the features of Application monitoring. Application monitoring comes in the form of:

1. Network message monitoring.
2. Views or windows into the APE context.
3. Breakpoints with notification.
4. Checkpoints with notification.
5. Step and trace capabilities.
6. Constraint monitors that can be established for an APE.

Many of these features would be difficult to implement without the support of the compiler and linker. For example, checkpoint, breakpoint, trace, and step capabilities are provided in the Unix environment through the combination of special compiler features that produce code capable of being traced and stepped, and a set of function calls linked into the program that allows the manipulation of these features. To implement these features in a remote process, and to control them from the system console, we need a server at the remote processor capable of receiving debug request messages, setting the appropriate software switches in the APE, and then relaying the results of the stepping, tracing, or breakpoints back to the console workstation. Another feature of Application monitoring is the ability to define an APE window and to interactively examine the state of objects that reside locally.
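Under those assumptions, the remote debug server might be shaped as follows (message fields and names are hypothetical, not taken from the prototype):

```python
class DebugServer:
    """Receives debug requests from the console, sets software
    switches in the APE, and relays breakpoint hits back."""
    def __init__(self, notify_console):
        self.breakpoints = set()        # (object_id, operation) pairs
        self.tracing = False
        self.notify = notify_console    # callable that messages the console

    def handle_request(self, request):
        key = (request.get("object_id"), request.get("operation"))
        if request["cmd"] == "set_breakpoint":
            self.breakpoints.add(key)
        elif request["cmd"] == "clear_breakpoint":
            self.breakpoints.discard(key)
        elif request["cmd"] == "set_trace":
            self.tracing = request["on"]

    def before_operation(self, object_id, operation):
        """Software switch checked by the APE before each dispatch."""
        if self.tracing or (object_id, operation) in self.breakpoints:
            self.notify({"event": "break",
                         "object_id": object_id,
                         "operation": operation})
```

An APE window for examining local object state would travel the same request/reply path between the console and this server.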
This requires the assistance of the local operating systems, compilers, and linkers. DBX under Unix provides this capability, which could be extended to this system [12].

4. Comparative Perspective

In this section we compare the distributed programming support described above with previous work performed in this area. Much of the work in distributed programming has been in the form of distributed operating systems and languages. Examples of operating systems are Mach [17], Emerald [18], and Xylem [19]. These operating systems provide object-like features, but are implemented as a set of library calls that support message passing, process and object management, and network communication. The Mach system requires a modified Unix kernel to support the message passing mechanism and so is not applicable to unsupported machines. A feature of our work is that the support required from a remote processor has been kept to a minimum. The APE and Object abstractions require only features that can be implemented on a simple microprocessor. For example, by restricting the definition of an APE to strictly static object allocation, it could be implemented on the Transputer, which only offers static process and data allocation.

Another feature of this work is the unified approach to programming environments and network communications with Ada in mind. This approach makes use of objects that communicate through message passing. This message passing can support both intraprocess and interprocess communication. Developers need not concern themselves with the distribution of the application across the network of processors until they are ready to. In addition, this system also supports the explicit placement of these objects on processing elements where necessary. The use of the object paradigm has enabled us to define a programming methodology that does not differentiate between the fast function call and slower message passing.
Therefore we can apply both interprocess and intraprocess communication without having to make changes in the semantics of the program. We can also define our objects such that data is shared between them. This provides for fast inter-object communication while still providing a functional partition.

The use of the three forms of message passing allows the developer to take full advantage of the unique features of distributed processing. The blocking and non-blocking formats facilitate the development of resource sharing and parallelism in the application. The development system allows the explicit definition of resource sharing and parallelism. Other approaches, remote procedure calls for example, attempt to hide the remote processing from the application. This system encourages and supports the introduction of resource sharing and parallelism explicitly in the application. Debugging applications that use RPC would be difficult at best, as no support is offered for allowing the developer to manipulate the state of remote processes.

A feature of this system not addressed in related works is the issue of reuse of Object and APE definitions. Reuse is supported in part through the use of the object paradigm, and in part through the application of symbolic techniques in manipulating and combining these objects to form an Application. Because of the interactive nature of our development system, the developer is able to interactively debug remote APEs and Objects. The software engineer is provided the ability to define test programs which will exercise the remote processes to ensure their robustness before integration into an application.

5. Conclusions

In this paper we have described our experiences in designing a distributed software engineering support system. We have described how this system incorporates the object paradigm of programming.
This system allows the definition of distributed applications through levels of abstraction that unify all aspects of development. These abstractions support object communication through remote procedure calls. The use of these abstractions also promotes their reuse in later projects. This system supports the explicit partitioning and distribution of the application on a network of heterogeneous processors. Support is given to the requirements of resource sharing and data marshaling. Forms of message passing are provided that allow the exploitation of resource sharing and parallelism. Finally, assistance is given in the manipulation and real-time monitoring of developing applications. Remote debugging facilities, and a user interface that supports the developer in the complex task of managing a set of executing processes, are discussed.

Acknowledgement

The authors wish to thank Milan Milenkovic for his constructive suggestions in this research.

6. References

[1] B. J. Cox, "Object Oriented Programming," Addison-Wesley, Reading, Mass., 1986.
[2] K. Hwang and F. A. Briggs, "Computer Architecture and Parallel Processing," McGraw-Hill, New York, N.Y., 1984.
[3] A. Goldberg and D. Robson, "SMALLTALK-80: The Language and Its Implementation," Addison-Wesley, Reading, Mass., 1983.
[4] "Reference Manual for the Ada Programming Language," ANSI/MIL-STD-1815A, Dept. of Defense, Jan. 1983.
[5] B. Stroustrup, "The C++ Programming Language," Addison-Wesley, New York, N.Y., 1986.
[7] D. Gelernter, "Dynamic Global Name Spaces on Network Computers," Proceedings of the 1984 International Conference on Parallel Processing, pp. 25-31, 1984.
[8] M. G. Christiansen, M. M. Tanik, and S. L. Stepoway, "Objective Linda: An Object-Centered Perspective of Linda Concepts, and Issues of Implementation," Southern Methodist Univ. Tech. Report 87-CSE-13, 1987.
[9] S. Etsuya and A. Yonezawa, "Distributed Computing in ABCL/1," Object-Oriented Concurrent Programming, MIT Press, Cambridge, Mass., pp. 91-128, 1986.
[10] A. Birrell and B. Nelson, "Implementing Remote Procedure Calls," ACM Transactions on Computer Systems, vol. 2, no. 1, pp. 39-59, Feb. 1984.
[11] P. Gibson, "A Stub Generator for Multilanguage RPC in Heterogeneous Environments," IEEE Trans. Software Engineering, vol. SE-13, no. 1, pp. 77-88, Jan. 1987.
[12] B. Tuthill and K. J. Dunlap, "Debugging with DBX," UNIX Programmer's Supplementary Documents Vol. 1 (PS1), 4.3 Berkeley Software Distribution, April 1986.
[15] A. S. Tanenbaum, "Computer Networks," Prentice-Hall, Englewood Cliffs, N.J., 1981.
[16] "Sun Network Services on the VAX Running 4.3BSD," NFS System Administration Guide and Sun Network Services (NFS), 4.3 Berkeley Software Distribution, June 1986.
[17] M. B. Jones and R. F. Rashid, "Mach and Matchmaker: Kernel and Language Support for Object-Oriented Distributed Systems," in Proc. Object-Oriented Programming Systems, Languages, and Applications, ACM SIGPLAN Notices, vol. 21, pp. 67-78, Nov. 1986.
[18] A. Black and E. Hutchinson, "Distribution and Abstract Types in Emerald," IEEE Trans. Software Engineering, vol. SE-13, no. 1, pp. 65-77, Jan. 1987.
[19] P. Emrath, "Xylem: An Operating System for the Cedar Multiprocessor," IEEE Software, vol. 4, no. 4, pp. 30-38, July 1985.