SLD Computing Strategy Statement

Introduction

SLD has been in existence since the early 1980s, with its Design Report submitted in May 1984. The engineering run occurred in 1991. SLD's goal is to study Z boson decays produced by the SLC.

Event rates are moderate at about 0.3 Hz, with event sizes of about 250 kB. Our main issues have been compute cycles, primarily for MC simulation and user analysis; access to the data on disk and in the tape silo; and the desktop environment.
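
As a rough cross-check using only the figures quoted above, the implied raw data rate is

    0.3 events/s x 250 kB/event ~ 75 kB/s,

or roughly 6 GB per day of continuous running; this sets the scale for the disk, silo and network estimates that follow.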


Goals

SLD expects to acquire approximately one million Z's in the next four years. Online acquisition and monitoring are handled under VMS on two machines, a VAX 8800 and a VAX 6420. Data is written directly into the tape silo from the acquisition machine.

We estimate that we will need 1000 MIPS to handle the data processing and Monte Carlo simulation for that data set. We hope that 500 MIPS will handle the interactive data analysis needs; interactive analysis tends to sop up all available cycles whenever there is data to be analyzed.

Network traffic of a few MB/s is anticipated for the batch processing of MC and analysis. Similar loads, local to the compute engines, are expected for interactive user analysis.

We expect to need on the order of 250 GB of disk to hold our data and MC DSTs, and we estimate needing about 10,000 slots for 0.8 GB cartridges in the tape silo.
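
For scale, and assuming the 250 kB raw event size quoted above applies to the full sample:

    1,000,000 Z's x 250 kB/event  ~ 250 GB of raw data
    10,000 slots  x 0.8 GB/slot   ~   8 TB of silo capacity

so the silo leaves substantial headroom for reprocessing passes and Monte Carlo, and a full pass over a 250 GB DST set at the few-MB/s network rates mentioned above would take on the order of a day.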

Remote institutes are expected to acquire local computing resources and to transport copies of the DSTs for home consumption.

Close collaboration with SCS remains an essential ingredient of success.

Strategies

SLD is becoming mature in its operation. Incremental improvements to existing systems are envisaged in areas such as data access, interactive analysis and collaboration communication.

We are addressing the immediate compute-cycle needs by acquiring DEC Alphas. Our collaborators are acquiring Alphas for analysis at their home institutions. Code is distributed to remote sites automatically as soon as it is submitted to the code hub; distribution is via BITNET and DECnet.

Presently, our compute base is on IBM VM and DEC VMS engines. We expect that the current generation of FDDI networks will suffice for our needs; if not, we expect that additional ports into the central hub will.

Access to the tape silo from the VMS cluster currently goes through a VAX 9000. We anticipate changing this access path to channel-connected RS/6000s within the next year or so.

We anticipate separating the CPUs into MC simulation, user batch, user analysis and fileserving functions. The majority of user access to data will be via Xterms on the network.

Our analysis system is based on SLAC packages: IDA and Jazelle for data handling and management; Handypak and UG for histogramming and display graphics; and Mortran as a Fortran preprocessor. Oracle is used heavily to track our data processing, and its use is being expanded to cover our Monte Carlo simulation and data catalogues. Rexx is used heavily for servers and for its convenient interface to Oracle.

We have recently introduced CERN's PAW as an end-analysis/presentation tool and are attempting to convert from Handypak to HBOOK as our histogramming package. We would also be interested in a SLAC-supported graphics alternative to UG.
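
As a point of reference, the following is a minimal sketch of the HBOOK side of that conversion, using only CERNLIB's standard booking and filling calls; the histogram ID, title, binning and fill values are purely illustrative.

      PROGRAM HDEMO
*     Minimal HBOOK sketch: book a 1-D histogram, fill it, print it.
*     The ID, title, binning and fill values are hypothetical.
      PARAMETER (NWPAWC=50000)
      COMMON /PAWC/ PAW(NWPAWC)
*     Initialize HBOOK's dynamic memory store
      CALL HLIMIT(NWPAWC)
*     Book histogram ID 100: 60 bins from 60 to 120 (GeV, say)
      CALL HBOOK1(100,'Dimuon mass (illustrative)',60,60.,120.,0.)
*     Fill with dummy values standing in for real event quantities
      DO 10 I=1,100
         X = 60. + 0.6*FLOAT(I)
         CALL HF1(100,X,1.)
   10 CONTINUE
*     Line-printer output of the histogram contents
      CALL HPRINT(100)
      END

Histograms booked this way can also be written to an HBOOK file and examined interactively from PAW, so the same booking and filling structure serves both the batch and the end-analysis stages.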

We are beginning to use the WWW heavily for collaboration communication and documentation. We are also moving more and more of our tools into an X Window environment. It will soon be essentially impossible to do real work on ASCII terminals.

In light of the directions the laboratory is taking, we are embarking on a project to port our offline system and environment to UNIX. We are also preparing for the shutdown of the IBM VM mainframe. We are receiving help from SCS in the form of one UNIX programmer, with a second expected soon, to assist in the port. We expect the entire port to take about two years to complete. We expect SLD analysis, both within SLAC and outside, to rely on both UNIX and VMS cycles through the lifetime of the experiment.

Timeline

SLD expects to be actively acquiring and analyzing Z data through 1998. We would expect two years of ramp-down analysis following the end of running.

Costs

We estimate that the CPUs and disks needed to analyze and hold our data will cost about $600k (including 1994 costs).


R.Dubois
Last Updated: