

In document The Modern Translator and Interpreter (Pages 130-133)

PART 1: THE MODERN TRANSLATOR’S PROFILE

4. Translation quality assessment in practice, QA models

4.2. The LISA QA model

The LISA QA model, developed by the Localisation Industry Standards Association (LISA), provides a comprehensive approach to evaluating translation quality and identifying errors. The model applies a far broader meaning to translation than SAE J2450 does: it interprets the translation as being more than just the text itself, treating it as the entire product. It evaluates graphics, the table of contents, and the localisation of the text and its subject matter. It gives a comprehensive overview of the linguistic and non-linguistic factors that need to be taken into account during the translation process, which makes it a far more complex model than SAE J2450.

The model’s developers say it can be used for any type of translation, since unlike the SAE model it does not restrict the areas to which it can be applied.

However, all of its examples are borrowed from the field of IT. In addition, it lists six editing tasks: document formatting, document functionality testing, document language, software language, software formatting and software functionality testing. As this list shows, three of these tasks apply specifically to software.

In contrast to the SAE model, the LISA QA model says that the reviser does not need to take into account any errors stemming from the source document. It does, however, recommend that the source document be checked in the same way as the translation.

Like the SAE J2450 model, the LISA QA model also employs error categories and severity levels. The error categories are in line with the editing tasks. In terms of severity, they can be critical, major or minor. The weighting figures are always 10, 5 and 1. The central category of the model is “minimal acceptability”, which it says needs to be defined depending on the project or client and has no absolute value.
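The weighting scheme can be illustrated with a short sketch. The weights 10, 5 and 1 come from the model itself; the pass threshold is a hypothetical project-specific figure, since the model assigns no absolute value to minimal acceptability:

```python
# Sketch of LISA QA-style scoring. Weights 10/5/1 are from the model;
# the threshold is a made-up per-project value agreed with the client.
WEIGHTS = {"critical": 10, "major": 5, "minor": 1}

def weighted_score(errors):
    """Sum the severity weights of all logged errors."""
    return sum(WEIGHTS[severity] for severity in errors)

# Example review log: one major and three minor errors
log = ["major", "minor", "minor", "minor"]
score = weighted_score(log)
threshold = 10  # hypothetical "minimal acceptability" for this project
print(score, "PASS" if score <= threshold else "FAIL")  # 8 PASS
```

Because minimal acceptability has no absolute value in the model, the same error log could pass one project and fail another simply by changing the threshold.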

Many organisations that use the LISA QA model look only for language-related errors and do not evaluate the text based on the other categories. The figure below shows the evaluations that need to be carried out in terms of documentation language (Figure 1).

Figure 1: Error categories of documentation language in the LISA QA model

The most interesting category is probably “cross references”, which takes into account text formatting and logical structure. Chapter titles in the text must correspond to the titles listed in the table of contents, and if a chapter is omitted, the references must also be modified. The categories of company and country standards are also interesting. A translation is considered good if it suits the company’s style.
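A cross-reference check of this kind can be sketched mechanically: compare chapter titles in the body against the table of contents entry by entry. The titles below are made-up examples, not from any real manual:

```python
# Hypothetical cross-reference check: chapter titles in the body must
# match the table of contents exactly (example titles are invented).
toc = ["Introduction", "Installation", "Troubleshooting"]
chapters = ["Introduction", "Installation", "Trouble-shooting"]

# Collect every (TOC entry, body title) pair that does not match
mismatches = [(t, c) for t, c in zip(toc, chapters) if t != c]
print(mismatches)  # [('Troubleshooting', 'Trouble-shooting')]
```

Even a one-character difference, such as the stray hyphen above, would count as a cross-reference error under this category.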

Regional standards refer to cases like the need to re-write references that would not be relevant for or would not resonate with local audiences.

Country standards apply to numbers, units of measurement and alphabetical ordering. A country list that is alphabetical in the source language, for example, can appear oddly out of order once the names are translated.
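The alphabetical-order problem is easy to demonstrate. In the sketch below, a list that is alphabetical in English is translated into German with the source order preserved, and the result is no longer alphabetical (note also that a naive codepoint sort is itself not true German collation, which is a further country-standards trap):

```python
# English list, alphabetical in English:
english = ["Austria", "Germany", "Hungary"]
# The same countries translated into German, source order preserved:
german = ["Österreich", "Deutschland", "Ungarn"]

# The translated list is no longer in alphabetical order, so the
# reviser should flag it for re-sorting.
is_sorted = german == sorted(german, key=str.lower)
print(is_sorted)  # False
```

Proper re-sorting would additionally need locale-aware collation (e.g. Python's `locale.strxfrm`), since plain codepoint order places "Ö" after "U" rather than near "O" as German convention expects.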

The functionality testing of the document is a category that most translators would not assess on their own, but in reality, it contains very important error categories:

Terminology of the user interface: if the elements displayed on the interface do not show up as they do in the document, then the translation is incorrect.

Terminology across applications: documents for related products must use the same terms for corresponding functions. This applies to new versions of existing products, products from the same product family or one product designed to fit different platforms (for example, software for Windows or Macintosh), etc.

Abbreviations: abbreviations for given words must be the same in the software as they are in the document.

Key combinations and keystrokes: if, for example, a letter can be made bold with the keyboard shortcut CTRL+F, then the same action should be performed with the same key combination in the localised version of the software.

Screenshots: screenshots in the document must be target language versions.

Graphs: graphs must fit and be acceptable in the relevant target culture.

Country standards: screenshots and other figures must be country-specific, for instance, the date format must be the format used in the target country.

User text input: if the software requires the user input to be “Hello”, the document should not say that the user input needs to be “Szia” or “Hola”, etc.

Accuracy of graphics: the relevant graphical elements must be in the right place in the right localised version.

Lists: lists in the document must be complete, presenting every step of the process they instruct the user on without omitting any.

Practical visual information: the various technical details and terms referenced in the document must accurately represent the software itself.

For example, in the translated version, it would be incorrect to refer to the button that turns the machine on as the “Power” button if that button is labelled as “On/Off”. The document should also reference other products, components and functions correctly. It would be an error for the localised version of the manual to refer to a publication that is not available in the given language.
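Checks like the "Power" versus "On/Off" example above can be partly automated by comparing the labels mentioned in the manual against the strings actually shipped in the software. The labels below are invented for illustration, not taken from a real string table:

```python
# Hypothetical terminology check: every UI label referenced in the
# manual must exist in the software's string table (labels invented).
ui_labels = {"On/Off", "Save", "Print"}      # labels shipped in the software
doc_references = ["Power", "Save", "Print"]  # button names used in the manual

# Flag references that do not correspond to any shipped label
errors = [label for label in doc_references if label not in ui_labels]
print(errors)  # ['Power'] — the manual says "Power", the UI says "On/Off"
```

A real workflow would extract both sides automatically from the localisation files, but the principle is the same: the document and the software must name the interface identically.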

Due to space constraints, this article will not discuss the three editing tasks listed in the model that refer strictly to software localisation. Readers looking for more information on software localisation can take a more in-depth look at the LISA QA model.
