Introduction

When the FATMEN project was launched, the data handling problems facing the LEP experiments were expected to be enormous: each of the four experiments was expected to have accumulated some 7 Terabytes of data by the end of 1991, when the total number of Z events per LEP experiment was to have reached 10 million [bib-MUSCLE]. Although the majority of this data was to reside on IBM 3480 cartridges, large disk farms were also required to facilitate data analysis. In addition, the physicists involved came from many different institutes, in Europe and elsewhere. Thus, any management tools that were to be developed had to take into account the distributed, and highly heterogeneous, nature of computing in high energy physics (HEP). With this in mind, the FATMEN committee was formed at the beginning of 1989 to propose and develop solutions to these problems. The committee involved members from the LEP groups, plus the major fixed-target and collider experiments. The recommendations of this committee are summarised in the FATMEN Report, CERN DD/89/15. From a user (physicist) point of view, the most important features of the proposed system were:

  1. It should be possible to access data in a consistent manner, regardless of the medium on which it is stored, its location, the host operating system, and so on.
  2. All data should be accessed via a meaningful name.

    Thus, a physicist working on an Apollo workstation in the control room of the OPAL experiment at CERN and a colleague logged on to CERNVM would use the same command to access a dataset stored on cartridge in the central tape robot, as illustrated in the sketch below.
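
The following sketch, written in Python purely for illustration, shows the idea behind such location- and medium-independent access. It is not the FATMEN interface, and the generic name and catalogue entries shown are hypothetical: a single meaningful name is looked up in a catalogue, which decides whether a disk copy or a cartridge copy is used, so the same request works unchanged from any host.

    # Illustrative sketch only -- this is NOT the FATMEN API.  A generic
    # (logical) dataset name is resolved to a physical copy by a catalogue,
    # hiding the medium, location and host operating system from the user.

    CATALOGUE = {
        "//CERN/OPAL/PROD/DST/RUN1234": [       # hypothetical generic name
            {"medium": "disk",      "location": "CERN disk farm",
             "path": "/opal/prod/dst/run1234"},
            {"medium": "cartridge", "location": "CERN tape robot",
             "volume": "XY1234"},
        ],
    }

    def locate(generic_name):
        """Return the preferred physical copy of a dataset: a disk copy if
        one exists, otherwise a cartridge copy to be staged in."""
        copies = CATALOGUE.get(generic_name)
        if not copies:
            raise LookupError("no catalogue entry for " + generic_name)
        disk_copies = [c for c in copies if c["medium"] == "disk"]
        return disk_copies[0] if disk_copies else copies[0]

    # The same call would be issued by the physicist on the Apollo at OPAL
    # and by the colleague logged on to CERNVM:
    print(locate("//CERN/OPAL/PROD/DST/RUN1234"))
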

Advantages of using the FATMEN system

The components of the FATMEN system