

4. THE DISPERSED MONITOR CONCEPT

4.2 The Control of Access to Memory

In the previous section we defined a monitor as a collection of procedures and data which together control the manipulation of some resource. We should therefore be able to define a monitor to control access to the common data in the memory; the problem is where this monitor should reside, and how the user programs should access it.

Let us consider a (small) program which has allocated to it a certain amount of memory additional to that required for its own purposes, and let us call this program X. Let us further consider a second program, which we shall call program A, which will read data and produce results in a conventional fashion. Finally, let us require program A to access and to modify some of the data stored in the extra memory allocated to program X.

This may clearly be achieved by some simple communication system, such as that shown diagrammatically in Figure 4.1; the exact mechanism used is immaterial so long as it ensures that, at some time after A has sent data to be stored, X accepts it and stores it in the memory allocated to it, and that, at some time after A has requested some data, X extracts it from the memory and sends it to A, having first processed any outstanding storage requests.

Figure 4.1 Dual access to memory

An obvious, and well-known, method is to use two message buffers - each of which is controlled by a semaphore, or similar device, which ensures that after sending a request for data A "waits" for a reply.

Figure 4.2 illustrates this approach; the small program dA in the diagram represents the procedures etc. which are required to control the sending and receiving of data. It is worth pointing out that the receive buffer is shown as smaller than the send buffer because the former may only contain one message at a time, while the latter may contain several data storage requests in addition to at most one data request.

Figure 4.2 Dual memory access using buffers
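The two-buffer arrangement just described can be sketched in modern terms as follows, using Python threads and queues in place of the semaphore-controlled buffers; the operation names ("store", "fetch") and the data key are invented for illustration, and are not part of the original design. Note that because the send buffer is first-in-first-out, any outstanding storage requests are processed before a data request, as the text requires.

```python
# A minimal sketch of the scheme of Figure 4.2: program A communicates with
# program X through a many-message send buffer and a one-message receive
# buffer, and "waits" after each data request until X replies.
import queue
import threading

send_buffer = queue.Queue()               # A -> X: may hold several requests
receive_buffer = queue.Queue(maxsize=1)   # X -> A: at most one reply

memory = {}  # the "extra" memory allocated to program X

def program_x():
    while True:
        op, key, value = send_buffer.get()
        if op == "store":                 # a data storage request
            memory[key] = value
        elif op == "fetch":               # a data request: X must reply
            receive_buffer.put(memory.get(key))
        elif op == "stop":
            break

threading.Thread(target=program_x, daemon=True).start()

# Program A: send data to be stored, then request it back.
send_buffer.put(("store", "hours_worked", 38))
send_buffer.put(("fetch", "hours_worked", None))
result = receive_buffer.get()             # A "waits" here until X replies
send_buffer.put(("stop", None, None))
```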

Let us now postulate the existence of several additional programs, similar to program A, which we shall call program B, C, D, ... Clearly we may set up a similar mechanism for each of these programs, as shown in figure 4.3. We shall, however, ignore any inter-program synchronisation that may be required if, for example, program A may modify data which is accessed by program B, and will concentrate on the data handling; we shall then examine the synchronisation problem in the next section.


Figure 4.3 Multiple memory access using buffers

A few moments' thought will show that it is not necessary to have two message buffers (as shown) for each of the programs A, B, C, ..., and that a single send buffer will suffice, as long as each message which requires a reply (e.g. a request for data) indicates from whence it was sent. As well as simplifying matters, this creates a more orthogonal structure, as every program (including program X) can proceed until it wishes to read from its input buffer and finds it empty; it must then wait until the buffer has been filled.
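The single-send-buffer variant can be sketched in the same style: every request now carries the name of its sender, and X uses that name to route replies to the requesting program's own input buffer. Again, all identifiers here are illustrative assumptions rather than details from the original system.

```python
# Sketch of the shared send buffer: programs A, B, C, ... all place requests
# in one queue, tagged with their own name, and X replies into the
# per-program input buffer named in the request.
import queue
import threading

send_buffer = queue.Queue()                       # shared: A, B, C -> X
reply_buffers = {p: queue.Queue(maxsize=1) for p in "ABC"}
memory = {}

def program_x():
    while True:
        sender, op, key, value = send_buffer.get()
        if op == "store":
            memory[key] = value
        elif op == "fetch":
            reply_buffers[sender].put(memory.get(key))  # route reply home
        elif op == "stop":
            break

threading.Thread(target=program_x, daemon=True).start()

send_buffer.put(("A", "store", "x", 1))           # A stores a value
send_buffer.put(("B", "fetch", "x", None))        # B asks for it
got_by_b = reply_buffers["B"].get()               # B waits on its own buffer
send_buffer.put(("A", "stop", None, None))
```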

We therefore have a basis for synchronising the programs, and we shall examine this in more detail in the next section. Before doing so, however, we note that the "extra" program elements dA, dB, dC, etc. will contain identical procedures, since their sole purpose is to handle communication with program X. We may therefore rename them all as dX, resulting in the structure shown in figure 4.4.

Figure 4.4 Improved multiple memory access

4.3 Synchronisation

The approach described above will enable a number of programs to access and modify a common set of data; however, unless appropriate synchronisation can be achieved the programs may access the data in the wrong order, causing, for example, payslips to be produced before the hours worked have been recorded!

We have seen that the basic data-handling method described above requires that programs must wait, when they expect a message, until that message is available; it follows, therefore, that program X may readily suspend any of the other programs simply by refusing to respond to a request for data. We shall use this principle as the basis for a complete synchronisation method for the whole system.


First, however, let us examine why we should wish to suspend a program, and for how long. We have already seen that in certain circumstances some programs must not "get ahead" of others; how do we define "getting ahead"? Such an expression implies some pre-determined sequence of operations, and indeed the data for most applications is deterministic - that is, it is intended to be processed in a particular order. In practice it is frequently only partially deterministic; for example, in a situation in which a number of geometric surfaces are to be defined, the order in which the definitions are processed is irrelevant, except when one definition refers to another surface which should already have been defined. Thus we see that in many of the cases in which it is feasible to separate the processing into two or more parallel elements there will be a serial data stream of some kind which can be used to define the "order" in which the programs should be running at any time.

In the case of language processors, there is frequently another synchronising situation, which occurs at a jump or other interruption of the normal order of processing. A backward jump would clearly upset the simple ordering strategy already proposed as well as causing problems with the updating of data, while a forward jump has the additional problem that the destination of the jump may be unknown and must be found without allowing any of the programs to proceed beyond the jump. An obvious approach is to require programs to wait until they have all reached the jump before proceeding although, as we shall see, a less primitive solution is also possible. Finally, it may be necessary (or desirable) for one program to be able to instruct all the others to abort, or otherwise terminate processing unexpectedly due to an error.

We may deal with all these situations, as well as with others which may easily be imagined, by one small extension to the system already developed. In that system program X did nothing except update the common data, or extract data from it, at the request of one of the other programs.

We shall now give program X the ability to keep a table showing the "stage" reached by every other program. This can easily be achieved by requiring that whenever one of the programs A, B, C, etc. wishes to read a new record from the fundamental data stream it must first ask program X for the record number. Program X can therefore suspend any program until others have reached the same point (by not replying), and can then either allow it to continue or can alter the normal sequence (i.e. to jump) by returning an out-of-sequence record number. It is easy to see how this principle can be extended to allow one program to instruct program X to cause all other programs to abort, to reset for a new run, or to carry out some other exceptional action. The description of the MILDAPT 2 processor in section 4.5 will show how easily such synchronisation can be achieved.
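The record-number mechanism can be sketched as follows: each program asks X for the number of the next record before reading it, and X simply withholds its reply until every program has reached the same point, thereby suspending any program that runs ahead. This sketch shows only the normal in-sequence case; returning an out-of-sequence number (a jump), or an abort signal, would be a straightforward extension. The structure and names are illustrative, not a description of the MILDAPT 2 implementation.

```python
# Program X keeps a "stage" table of the record number reached by each
# program, and replies to a "next record number?" request only when all
# programs have caught up to the same record.
import queue
import threading

PROGRAMS = ["A", "B"]
RECORDS = 3
send_buffer = queue.Queue()
reply_buffers = {p: queue.Queue(maxsize=1) for p in PROGRAMS}

def program_x():
    stage = {p: 0 for p in PROGRAMS}      # the "stage" table
    waiting = []                          # programs suspended (no reply yet)
    for _ in range(RECORDS * len(PROGRAMS)):
        sender, _ = send_buffer.get()     # request: "next record number?"
        stage[sender] += 1
        waiting.append(sender)
        if len(set(stage.values())) == 1:     # everyone at the same record
            for p in waiting:
                reply_buffers[p].put(stage[p])  # release with the number
            waiting.clear()

def worker(name, log):
    for _ in range(RECORDS):
        send_buffer.put((name, "next"))
        log.append((name, reply_buffers[name].get()))  # waits if ahead

log = []
threading.Thread(target=program_x, daemon=True).start()
threads = [threading.Thread(target=worker, args=(p, log)) for p in PROGRAMS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Suspension here costs nothing extra: it is just the ordinary "wait on an empty input buffer" behaviour already present in the data-handling scheme.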