European Space Agency


Collection and Dissemination of Cluster Data

F. Drigani

Cluster Project Division, ESA Directorate for Scientific Programmes, ESTEC, Noordwijk, The Netherlands

The Cluster Science Data System (CSDS) concept

Cluster is a very challenging mission in terms of data collection and dissemination. Each of the four Cluster spacecraft carries 11 instruments, so the information system must be capable of distributing data gathered by 44 instruments. Moreover, the scientific community interested in the Cluster data is very large and spread all over the world. To disseminate such a large and complex volume of scientific data to such a broad community, the Cluster Science Data System (CSDS) has been established.

The CSDS (Fig. 1) is based on National Data Centres that have been established near the Principal Investigator (PI) sites. Each Data Centre processes the raw instrument data from a specific set of experiments into data products and makes them available to the other Data Centres via the network; each Centre is responsible for the nearby PIs' set of experiments.

Figure 1. The Cluster Science Data System (CSDS) and its components

The National Data Centres set up to be directly involved in the CSDS are the following:

Moreover, two other data centres will also participate in CSDS via directly-involved National Data Centres:

The other components of the overall CSDS configuration are:

Data dissemination

The Cluster mission will produce several types of data, and the CSDS will not handle all of them. In particular, it will not directly handle the raw instrument data or the high-resolution data.

The raw data will be distributed on a set of CD-ROMs to each participating institute (about 80 worldwide) on a weekly basis. Each CD-ROM will contain one day's worth of the entire payload data plus the ancillary data. In this way, each member of the Cluster scientific community will have the entire mission data set available.

The high-resolution data will be handled by the PIs only (in some cases using the facilities of their National Data Centre). The PIs will also respond to requests for the data from the user community. The CSDS infrastructure will probably be used to route the requests for high-resolution data.

The CSDS, however, will handle the following types of data:

Table 1. CSDS's basic scientific data products

Product                        Nature                               Typical Use

Summary Parameter Data Base    54 scientific parameters at 1 min    Unrestricted access. Used to
(SPDB)                         resolution. Data from one            select events for detailed
                               spacecraft only.                     analysis and to detect
                                                                    boundary crossings.

Summary Data Plots (SPD)       PostScript plots of SPDB             As above.
                               parameters. Plot resolution:
                               6 h/20 cm.

Prime Parameter Data Base      48 scientific parameters at 4 s      Restricted access (Cluster
(PPDB)                         resolution. Data from all four       community only). Used in
                               satellites, including ancillary      support of scientific
                               data.                                analysis.
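To make the difference in time resolution between the two data bases concrete, the sketch below shows how a hypothetical 4-second prime parameter series could be reduced to 1-minute summary values. This is an invented illustration only; the actual CSDS processing chains at the National Data Centres are not described here, and a plain boxcar average is assumed.

    import numpy as np

    def average_to_summary(prime_values, prime_dt=4.0, summary_dt=60.0):
        # Reduce a 4-s prime-parameter series to 1-min summary resolution.
        # Illustrative assumption: a simple boxcar mean over each minute.
        samples_per_bin = int(summary_dt // prime_dt)      # 15 samples per minute
        n_bins = len(prime_values) // samples_per_bin
        trimmed = np.asarray(prime_values[:n_bins * samples_per_bin], dtype=float)
        return trimmed.reshape(n_bins, samples_per_bin).mean(axis=1)

    # Example: one hour of a made-up 4-s parameter (3600 s / 4 s = 900 samples)
    one_hour = np.random.default_rng(0).normal(size=900)
    summary = average_to_summary(one_hour)
    print(summary.shape)                                   # (60,) one value per minute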

The instruments will also measure the most important space physics and geophysical parameters, sometimes using more than one instrument technique. This provides a certain degree of redundancy and increases confidence in the data produced.

Not all the Data Centres will offer the same services. Some will offer nearly all products on-line (e.g. the French and the British data centres), while others may only keep a certain amount of data on-line. A request for data that is not on-line will require the retrieval of the data from an off-line storage medium.
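In practice, the on-line/off-line split means that each Data Centre needs a small request-routing step in front of its archive. The following sketch is purely illustrative, with invented function and directory names; it is not the software actually run at the Data Centres.

    from pathlib import Path

    # Hypothetical storage locations; each real Data Centre has its own layout.
    ONLINE_ROOT = Path("/data/csds/online")
    OFFLINE_QUEUE = Path("/data/csds/offline_requests")

    def fetch_product(product_id: str):
        # Serve the product immediately if it is on-line; otherwise queue a
        # request so that an operator can retrieve it from off-line storage.
        candidate = ONLINE_ROOT / f"{product_id}.cdf"
        if candidate.exists():
            return candidate
        OFFLINE_QUEUE.mkdir(parents=True, exist_ok=True)
        (OFFLINE_QUEUE / f"{product_id}.req").touch()
        return None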

One also has to remember that the user community interested in the Cluster payload data served by CSDS is very large and is not limited to the National Data Centres. At the moment, at least 250 investigators around the world are expected to be interested. CSDS has therefore been designed to cater for a large community of users with varying levels of familiarity with data manipulation. This has created the need for both a convenient user interface and a solid, reliable network infrastructure. Unfortunately, the existence of such a large community, and the fact that the National Data Centres are funded at national level and not by ESA, have also added further complexity to the system design. It has, in fact, been necessary to develop the user interface to be compatible with both the Open VMS and the Unix operating systems. A Cluster-specific standard for data exchange, based on the Common Data Format (CDF), has nevertheless been established.
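As an indication of what the CDF-based exchange standard implies for a user today, the sketch below reads a CDF file with the cdflib Python package (which postdates this article); the file and variable names are invented for the example.

    import cdflib   # NASA CDF reader for Python; install with 'pip install cdflib'

    # Hypothetical file and variable names, for illustration only.
    cdf = cdflib.CDF("C1_PP_FGM_19960101.cdf")
    info = cdf.cdf_info()               # global metadata, including the list of variables
    b_field = cdf.varget("B_xyz_gse")   # read one variable as a NumPy array
    print(b_field.shape)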

JSOC

Considering the complexity of the Cluster scientific operations, with 11 instruments per spacecraft, a Joint Scientific Operations Centre (JSOC) has been established to ensure adequate planning and smooth execution of the scientific operations.

The JSOC's main task is to support the Project Scientists, and their Science Working Team (SWT), in conducting efficient mission operations by coordinating the requests of the scientific community with the Operations Control Centre at ESOC. All PIs submit their observation requests to the JSOC. The Project Scientists then consolidate them and establish a mission plan, in coordination with other space missions. The JSOC then ensures the implementation, testing and operation of the tools for instrument commanding (e.g. the spacecraft separation strategy).
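In data-handling terms, the planning step described above amounts to merging per-PI observation requests into a single time-ordered plan. The sketch below is only an invented illustration of that idea; the real JSOC planning tools, request formats and conflict-resolution rules are not described in this article.

    from dataclasses import dataclass

    @dataclass
    class ObservationRequest:
        # A hypothetical PI request: instrument, spacecraft, time interval and mode.
        pi: str
        spacecraft: int     # 1 to 4
        instrument: str     # one of the 11 instruments per spacecraft
        start: float        # seconds from some mission epoch
        end: float
        mode: str

    def consolidate(requests):
        # Order all requests into a single per-spacecraft timeline.
        # The real consolidation also resolves conflicts and coordinates
        # with other missions; here we only sort by spacecraft and start time.
        return sorted(requests, key=lambda r: (r.spacecraft, r.start))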

The JSOC is not intended to take over any of the PIs' tasks, nor to interfere with the responsibilities of the Operations Control Centre. It is needed because the task of coordinating all of the entities specifying commands for the 44 instruments is simply too big to be left to the Project Scientist in his coordination role between the SWT and the Operations Control Centre.

As part of the operations support activities, the JSOC generates and distributes:

The JSOC is located at the Rutherford Appleton Laboratory (UK), where the UK Cluster Data Centre also resides.

In terms of overall design, the JSOC functionalities can be grouped into the following four sub-systems:

  1. The Monitoring sub-system, which monitors the progress of the mission as a whole, executes specific monitoring software on behalf of PIs and produces the Cluster event catalogue.

  2. The Planning sub-system, which provides information concerning the Cluster and other missions, and supports the SWT and the PIs in planning observations with their instrument.

  3. The Commanding sub-system, which implements the high-level science planning determined by the SWT, iterates these plans with the PIs and interfaces with ESOC.

  4. The Information Dissemination sub-system, which provides the PIs, the SWT and the wider community with relevant planning, commanding and monitoring information.

Figure 2 shows the flow between the four sub-systems and the JSOC's interaction with the other CSDS and Cluster Mission entities.

Figure 2. The functions of the Joint Scientific Operations Centre (JSOC) and its interaction with the other CSDS and Cluster Mission entities

The CSDS user interface

The CSDS user interface provides the National Data Centres with the means to ingest their data products into the data bases and to make these products available to the other Data Centres. It also allows the scientific users to interact with the system. The overall system architecture is shown in Figure 3.

Figure 3. Overall system architecture of the CSDS user interface. Each organisation's contribution is also indicated.

With the user interface, the Data Centres can:

The scientific users can:

Given the different configurations existing at the various National Data Centres, two versions of the CSDS user interface have been developed, one running on Solaris and the other on Open VMS.

In order to minimise costs and development time, it was decided to develop the CSDS user interface based on software available at ESRIN, ESA's establishment in Frascati, Italy, from the ESIS pilot phase. Under the overall responsibility of the Cluster Project, ESRIN has tailored the existing software to the specific Cluster requirements. Moreover, ESRIN is responsible for the delivery, installation and maintenance of the product.

Contributions on how to increase the functionality of the CSDS user interface were solicited from the scientific community. This led to the participation of the Swedish Institute for Space Physics (IRF-U) in Uppsala, Sweden; the Rutherford Appleton Laboratory (RAL), UK; and Queen Mary and Westfield College, UK. The Swedish Institute for Space Physics, most notably, contributed a modification that makes the existing ISDAT software particularly well suited to Cluster's specific requirements for scientific data and data-file manipulation.

The Operations Control Centre (OCC)

ESOC plays two roles with respect to the CSDS: it hosts the Operations Control Centre (OCC), and it provides the network infrastructure, CSDSnet.

The OCC is part of the Cluster Ground Segment and has the following main functions:

In this way, the ESOC OCC is part of the CSDS both as recipient of the requests coming from JSOC and as distributor of data coming from the spacecraft.

The network infrastructure (CSDSnet)

The CSDSnet interconnects the CSDS National Data Centres, the JSOC and various ESA establishments (ESTEC, ESOC and ESRIN). In principle, CSDSnet is based on the existing ESA infrastructure implemented as a self-contained logical system from an addressing, routing and security point of view. This is necessary to ensure the dedicated, Cluster-specific nature of CSDSnet.

The main services of the communication network are:

As mentioned earlier, in order to reduce network loading, all raw data products, except quick-look data, will be exchanged on CD-ROM rather than via the network.

The CSDSnet has been designed to provide a logical interconnection between Local Area Networks (LANs) across a Wide Area Network (WAN) infrastructure. The end-user systems are hosts at the CSDS Data Centres, attached to Ethernet LAN segments at their local institutes. They gain end-to-end access to their remote peers across internetwork routers, which connect the LANs to the common WAN. The logical connectivity thus supported on the CSDSnet is shown in Figure 4.

Figure 4. Logical connectivity supported by the CSDSnet

The implementation and operation of the hosts and the local networks are the users' responsibility. The CSDSnet protocol is TCP/IP. The infrastructure on which the CSDS-WAN is implemented has been contracted out, both in terms of supply and of operations. Its configuration is shown in Figure 5.

Figure 5. CSDS Wide Area Network (WAN) configuration
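Because the CSDSnet protocol is TCP/IP, end-to-end access between Data Centre hosts across the WAN reduces to ordinary socket connections. The minimal client below is only a sketch; the host name, port number and line-based request format are invented for the example.

    import socket

    # Hypothetical peer Data Centre host and service port.
    PEER_HOST = "datacentre.example.org"
    PEER_PORT = 7000

    def request_product(product_id: str) -> bytes:
        # Open a TCP connection across the WAN and ask a peer Data Centre
        # for a product, using an invented one-line request format.
        with socket.create_connection((PEER_HOST, PEER_PORT), timeout=30) as sock:
            sock.sendall(f"GET {product_id}\n".encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(65536)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)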

The CSDS schedule

The preparation phase for the CSDS started in 1989, at the first meeting of the SWT. The system was then actually developed in the 1993-1995 timeframe to be ready for the original Cluster launch (the launch is now scheduled for the late spring of 1996). The overall CSDS development schedule had to be compatible with the overall Cluster schedule.

Figure 6. Master schedule for CSDS development

Based on the current launch date, the mission-level Mission Launch Readiness Review (M-LRR) is planned to be undertaken in late February 1996. The Mission Flight Readiness Review (M-FRR) was performed in September. The CSDS development schedule is summarised in Figure 6.

In that Figure, all the National Data Centres and JSOC are included in the so-called 'Science Data System Schedule' which has already achieved the following major milestones:

The schedule also foresees a period for CSDS validation, lasting up to the Mission Launch Readiness Review. Work on the other two main elements in the schedule, the ESOC interfaces and the CSDS user interface, started only in 1994, but still in time to meet the mission-level milestones.

In particular, the user interface schedule was designed to provide multiple releases to the National Data Centres, which allowed the Data Centres to learn the system in stages while participating in the debugging process. The Centres accepted Release 3 of the user interface in July 1995. Enhancements that users have meanwhile suggested for inclusion in the design baseline will be incorporated in Release 4.



ESA Bulletin Nr. 84, published November 1995.