

the I.L. in a form suitable for interpretation, and will essentially be incorporated within the last part of the semantic analysis

PLX = PLANE/2.5,4.7

2.3 An Outline of the APT IV Processor

Like NELAPT, APT IV was developed during the mid-1960's. However, whereas NELAPT followed a similar philosophy to APT III, with a somewhat restricted subset of the APT language, the APT IV processor was designed to provide a largely computer-independent processor for the complete APT language. Thus, although many of the subroutines for use at execution time were carried over from APT III largely unaltered, the lexical, syntactic and semantic analysis phases were totally re-written.

One of the reasons for this was a recognition by the design team that the APT language was a programming language like Fortran or Algol, albeit a special purpose language, and that the principles of compiler design which had been, and still were being, developed for general purpose languages would apply equally to APT. This was a major philosophical change from APT III and its predecessor APT II.

All the early APT literature (for example [Ross, 1960] [Bates, 1962]) refers to the APT part-program being a sequence of instructions to an "APT computer", and states that this APT computer would process the APT part-program to produce a control tape for a numerically-controlled machine-tool. Of course the APT computer did not actually exist and was simulated on a real computer such as the IBM 704 or 7090 (in the first instance), but, nevertheless, the philosophy was that this APT computer directly obeyed the APT part-program statements. In practice, the simulated APT computer processed the part-program in several stages in a similar way to that already described for the NELAPT processor.

By 1964, however, when the APT IV design was being produced [IITRI, 1964], a great deal of progress had been made in both hardware and software development, and the "APT New System" was intended to exploit the then state-of-the-art. The pilot implementation of the New System [IITRI, 1965] identified four major functions in an APT processor (Translator, Post-Translator, Subroutine Library and CLTAPE Editor) and by separating these functions endeavoured to specify the bulk of the processor in a computer-independent fashion. The first implementation (other than the development one) was made in England by English Electric Computers later in the same year by a team of three in the remarkably short time of 4½ months [Ellis, 1966]. This implementation identified some desirable changes, especially in the link between the Translator and Post-Translator, which were simple and yet of fundamental importance [EELM, 1966a] [Ellis, 1967], and the incorporation of these (or variants of them) was the only significant design change that was made before the official release of APT IV after several years of "field trials" [IITRI, 1971].

Essentially the APT IV Translator is the complete analysis phase, whereas the Post-Translator is the synthesis phase, in the sense defined above in section 2.1. The APT IV design is such that, apart from a handful of well-specified assembly-code routines, the Translator is completely computer-independent. The separate Post-Translator allows the implementor the option of either code-generation or interpretation, using the intermediate language (I.L.) produced by the Translator. One of the results of the changes recommended by English Electric was that this phase became much simpler and, in particular, that it became possible to write an interpreter in Fortran. This phase in APT IV is now known, therefore, as Execution-Initialisation, and the implementor may either use the Fortran interpreter supplied or write his own code generator and follow a compiler approach.
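
The interpretive option may be pictured, in outline, by the following sketch (given here in Python; the record kinds, their fields and the routine names are invented for illustration, since the actual I.L. layout is not reproduced in this section):

# Minimal sketch of the interpretive form of Execution-Initialisation.
# The record kinds ("DEFINE", "CALL") and their fields are invented;
# the real I.L. layout is defined by the APT IV Translator.
def interpret(il_records, subroutine_library):
    """Obey each I.L. record in turn, calling the appropriate library routine."""
    canonical_forms = {}                       # geometry defined so far
    for record in il_records:
        kind, operands = record[0], record[1:]
        if kind == "DEFINE":                   # e.g. ("DEFINE", "C1", "CIRCLE", (x, y, r))
            name, gtype, canon = operands
            canonical_forms[name] = (gtype, canon)
        elif kind == "CALL":                   # e.g. ("CALL", "MOTION", ("GOTO", "P1"))
            routine, args = operands
            subroutine_library[routine](args, canonical_forms)
        else:
            raise ValueError("unrecognised I.L. record: " + str(kind))

A code-generating Post-Translator would walk the same records but, instead of obeying them immediately, would emit the equivalent calls on the Subroutine Library for later execution.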

The Translator contains all three analysis phases (lexical, syntactic and semantic) and is based upon the production method first described by Floyd [Floyd, 1961] and now a standard method for syntax-driven translators and compilers. The Translator actually uses two independent sets of production tables - one to process the basic syntax of the statements (including their lexical analysis), and the other to deal with the semantics of the very many forms of geometric definition statements.

The main production table consists of two parts. The first part is used by the Translator to carry out the lexical analysis of the part-program statement, and the remaining part is used to perform the syntactic (and some semantic) analysis.

The Translator reads a statement character by character using the first part of the production table to determine the next course of action.

This action may be to concatenate the character with a partially formed name (or number), or to store a name in the vocabulary table (or name table), or to store a name, number or special symbol in a stack. Every time a complete entity is added to the stack the remainder of the production table is searched and compared with the stack. If the top item in the stack (the last item added) matches the first item in a production then the next item in the stack is compared with the second item in the production, and so on. If the end of the production is reached before the stack is exhausted then a positive integer value is returned by the searching routine and used in a Computed GOTO to initiate appropriate processing of the stack. If there is a difference between the items in the production and those in the stack then no match is possible and searching continues for another possible match. If no match is made with any production then a syntactic error has occurred in the input statement, since every valid combination of symbols will find a match somewhere in the production table.
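
The matching step may be pictured, in outline, by the following sketch (given here in Python; the productions, token classes and action numbers are invented for illustration, and the real tables also carry the lexical actions and semantic information described above):

# Sketch of searching the production table against the analysis stack.
# Each production is written with its first item corresponding to the
# top of the stack (the last item added), as described in the text.
# For simplicity the stack holds token classes rather than the names
# and symbols themselves; all entries here are invented.
PRODUCTIONS = [
    (("CENTER", "/", "CIRCLE", "=", "NAME"), 1),   # start of a circle definition
    (("=", "NAME"), 2),                            # a name awaiting its definition
]

def find_match(stack):
    """Return the action number of the first production matching the top
    of the stack, or 0 if none matches (a syntactic error)."""
    for production, action in PRODUCTIONS:
        if len(stack) < len(production):
            continue                               # stack too short to match
        # stack[-1] is the top item; compare outwards from the top
        if all(stack[-1 - i] == production[i] for i in range(len(production))):
            return action
    return 0

The integer returned by find_match plays the part of the Computed GOTO index: it selects the routine which processes (and usually reduces) the matched items on the top of the stack.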

The routine which carries out the comparison of the stack and production table is highly computer-dependent since it is working at the level of individual bits of a word; however the rest of the process is computer-independent both in concept and in implementation.
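
This dependence can be illustrated by a sketch of one possible packing scheme (again in Python; the field width and packing density are invented, and are precisely the kind of detail that varies from machine to machine):

# Illustrative sketch only: if several symbol codes are packed into one
# machine word, a single word comparison checks several items at once,
# which ties the routine to the word length of the particular machine.
BITS_PER_ITEM = 12        # assumed width of one symbol code
ITEMS_PER_WORD = 4        # assumed number of codes packed per word

def pack(codes):
    """Pack up to ITEMS_PER_WORD symbol codes into a single integer 'word'."""
    word = 0
    for code in codes:
        word = (word << BITS_PER_ITEM) | code
    return word

def items_match(stack_codes, production_codes):
    """Compare a group of stack items with a group of production items
    word by word rather than item by item."""
    return pack(stack_codes) == pack(production_codes)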

This method of processing means that any nested definitions or arithmetic expressions are dealt with automatically, since the nested items will be recognised and processed before the full statement has even been read.

Thus, for example, if we consider the following statement

C1 = CIRCLE/CENTER,(POINT/INTOF,L1,L2),RADIUS,1.5

we shall find that (considerably simplified) the stack will be built up as follows:

i) C1
ii) C1 =
iii) C1 = CIRCLE
iv) C1 = CIRCLE /
v) C1 = CIRCLE / CENTER