European Space Agency

Radar image processing and interpretation

Two different processing and interpretation approaches can be distinguished: the more scientific method, based on the evaluation of radar backscatter, and the application-oriented method, based on the interpretation of grey values. Although each method has its advantages, the latter has been applied for this report because the targeted user group consists mainly of operationally oriented users. In addition, the methods described and applied to the ERS images shown in this report have been chosen in such a way that even less experienced users, who are not familiar with radar data, should be able to process and interpret the images for their applications.

Conversion from 16 to 8 bit data sets

The first step, converting the original 16 bit data set to an 8 bit one, is mainly motivated by disk space: a complete scene takes up 135 megabytes, while an image reduced to 8 bits occupies only about 60 megabytes. In addition, the visual displays, as well as a large number of filters and classification algorithms, are currently only available in 8 bit versions. The conversion itself can be performed by dividing each count (grey value) by a fixed number (for example 7, 11, 13, 17 or 19). The division value should depend on the grey-level range and on the target to be detected. For an agricultural application, for instance, with a range of 100 - 20,000 grey values in a normal ERS-SAR image, 13 can be a good division value. In the case of a multitemporal approach, all images should be divided by the same value in order to keep the grey level ranges comparable. This processing step is of particular importance in view of the reflection around an axis described below.
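
A minimal sketch of this division-based reduction is given below; the function name, the use of NumPy and the synthetic test scene are assumptions for illustration only, not part of the original processing chain.

```python
import numpy as np

def convert_16_to_8_bit(image16, divisor=13):
    """Reduce a 16 bit ERS-SAR grey value image to 8 bits by dividing each
    count by a fixed divisor; values above 255 * divisor saturate at 255."""
    scaled = image16.astype(np.float32) / divisor
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Hypothetical agricultural scene with grey values of roughly 100 - 20,000,
# reduced with the division value 13 suggested in the text.
scene16 = np.random.randint(100, 20000, size=(1000, 1000), dtype=np.uint16)
scene8 = convert_16_to_8_bit(scene16, divisor=13)
```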

Reflection around horizontal or vertical axis

Reflection of the ERS-SAR data is necessary because of the lateral (right-looking) viewing geometry of the satellite. For data taken during a descending orbit the reflection has to be made around a vertical axis, and for ascending orbits around a horizontal axis (see also Figure 2). Image processing software currently on the market does not provide a simple reflection function, but performs a resampling of the image (similar to a rectification). Although this process is not very complex, the processing through a nearest neighbour function may unfortunately take several hours and a large amount of disk space. This phase is nevertheless unavoidable.

Figure 2: Reflection around an axis
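
Where a general array environment is available, the reflection itself can be expressed simply as a mirroring of the pixel matrix, as in the sketch below; the function name and the use of NumPy are assumptions and do not refer to any particular commercial package.

```python
import numpy as np

def reflect_ers_scene(image, orbit="descending"):
    """Mirror an ERS-SAR scene according to its orbit direction:
    descending passes are reflected around a vertical axis (left/right),
    ascending passes around a horizontal axis (top/bottom)."""
    if orbit == "descending":
        return np.fliplr(image)   # reflection around the vertical axis
    if orbit == "ascending":
        return np.flipud(image)   # reflection around the horizontal axis
    raise ValueError("orbit must be 'descending' or 'ascending'")
```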

Superposition of different ERS-SAR images

A direct superposition with the help of a simple shift is only possible for data having the same frame and track parameters (ascending or descending). The required shift corresponds to the lateral displacement of the satellite of only a few hundred metres between the different orbits, which demonstrates the high stability of the spaceborne platform. For the superposition of data taken from different frames in descending and ascending orbits, this approach is not suitable; such images have to be rectified with a normal rectification procedure (see below). The superposition itself can be performed by taking only a few ground control points (3 to 5).
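
A minimal sketch of such a shift-based superposition, assuming that the offset is estimated from a few (3 to 5) ground control points identified in both scenes, could look as follows; the function names, the (row, col) convention and the NumPy implementation are illustrative assumptions.

```python
import numpy as np

def estimate_shift(gcps_ref, gcps_other):
    """Average the (row, col) offsets of a few ground control points
    identified in two scenes with the same frame and track parameters."""
    offsets = np.asarray(gcps_ref, dtype=float) - np.asarray(gcps_other, dtype=float)
    return np.round(offsets.mean(axis=0)).astype(int)   # (d_row, d_col)

def shift_image(image, d_row, d_col):
    """Apply an integer pixel shift; areas shifted in from outside stay zero."""
    shifted = np.zeros_like(image)
    src = image[max(0, -d_row): image.shape[0] - max(0, d_row),
                max(0, -d_col): image.shape[1] - max(0, d_col)]
    shifted[max(0, d_row): max(0, d_row) + src.shape[0],
            max(0, d_col): max(0, d_col) + src.shape[1]] = src
    return shifted

# Hypothetical control points (row, col) of the same features in both scenes
gcps_ref   = [(120, 340), (450, 610), (800, 200)]
gcps_other = [(118, 344), (449, 613), (797, 204)]
d_row, d_col = estimate_shift(gcps_ref, gcps_other)
```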

Rectification to geographic basis

The rectification of ERS-SAR images taken under different orbit conditions to a common geographic basis starts with the choice of a certain number of ground control points in the two images (road and river networks). Highly reflecting points such as houses have to be treated with more care, because the displacement of such objects when viewed from different view points (orbits) can be considerable. A rectification matrix is then calculated. This matrix contains the coefficients of polynomial equations (of selectable order, from 1 to 10) which transform the coordinates from one system to the other. Afterwards, the resampling is performed with nearest neighbour, cubic convolution or other, more complex, algorithms. Major problems occur in areas with relief distortion; these areas cannot be successfully rectified without a digital terrain model (DTM). It is therefore recommended that, where possible, such areas be interpreted using monotemporal data sets.
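
As an illustration of the principle, the following sketch derives a first-order (affine) rectification matrix from the ground control points by least squares and resamples with nearest neighbour. The function names and the NumPy implementation are assumptions and do not represent the software actually used for this report.

```python
import numpy as np

def fit_first_order(src_pts, dst_pts):
    """Fit a first-order polynomial (affine) mapping from output map
    coordinates (x, y) back to input image coordinates (col, row),
    using the ground control points."""
    dst = np.asarray(dst_pts, dtype=float)          # points in the target geometry
    src = np.asarray(src_pts, dtype=float)          # same points in the original image
    A = np.column_stack([dst, np.ones(len(dst))])   # [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, src, rcond=None)
    return coeffs                                   # rectification matrix, shape (3, 2)

def rectify_nearest(image, coeffs, out_shape):
    """Resample the image onto the target grid with nearest neighbour."""
    rows, cols = np.indices(out_shape)
    grid = np.column_stack([cols.ravel(), rows.ravel(), np.ones(rows.size)])
    mapped = grid @ coeffs                          # (col, row) positions in the input
    c = np.rint(mapped[:, 0]).astype(int)
    r = np.rint(mapped[:, 1]).astype(int)
    valid = (r >= 0) & (r < image.shape[0]) & (c >= 0) & (c < image.shape[1])
    out = np.zeros(rows.size, dtype=image.dtype)
    out[valid] = image[r[valid], c[valid]]
    return out.reshape(out_shape)
```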

Mosaicking

Cartography can be considered one important application of ERS-SAR data. Two approaches are possible: either the data sets are interpreted separately, or the data are mosaicked before interpretation. The following examples illustrate the second possibility. Two images over the south of Nigeria were mosaicked using the geo-referencing procedure described above. The resulting image 1 shows the limits of this processing: because the data were taken on different dates and from different orbits, the radiometry of the two scenes is not identical.

Image 1: Mosaicked data sets (without any further radiometric processing)

In image 2, a radiometric correction algorithm has been applied and the land surfaces now appear with almost the same grey levels. Only over the sea are the differences so great that the radiometric correction cannot reduce the effect sufficiently.

Image 2: ERS image mosaicking with radiometric correction
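
The report does not specify the radiometric correction algorithm that was applied; purely as an illustration of the principle, the sketch below balances one scene to a reference scene by matching the mean and standard deviation of the grey values in their overlap area. The function name and the NumPy implementation are assumptions.

```python
import numpy as np

def balance_to_reference(scene, scene_overlap, reference_overlap):
    """Linearly stretch a scene so that the mean and standard deviation of
    its overlap region match those of the reference scene's overlap region."""
    ref_mean, ref_std = reference_overlap.mean(), reference_overlap.std()
    scn_mean, scn_std = scene_overlap.mean(), scene_overlap.std()
    gain = ref_std / scn_std
    balanced = (scene.astype(np.float32) - scn_mean) * gain + ref_mean
    return np.clip(balanced, 0, 255).astype(np.uint8)
```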

In order to check the result of the radiometric correction more closely, a subset of the boundary area was extracted.

In image 3, some differences can still be detected to the right and left of the white line, but the interpretation can now be performed without any risk of mixing classes.

Image 3: Detailed view of mosaicking results

Filtering

In contrast to optical data from, for example, SPOT or Landsat, ERS-SAR images show a granular aspect, with large grey level variations that may occur between adjacent resolution cells. These variations create a grainy, salt-and-pepper texture, the so-called speckle effect, and can make interpretation of the image difficult. This phenomenon, induced by the coherence of the radar signal, is of particular importance for homogeneous surfaces (for example agricultural fields), where the random changes due to the noise speckle the homogeneous background. The noise can be explained by the fact that each pixel of the radar image corresponds to a surface (16 - 20 m) which is much larger than the wavelength (5.7 cm).

Therefore, one pixel contains many scatterers which contribute to the final reflection. The random combination of these elementary contributions yields a darker or brighter pixel in the image. As this phenomenon greatly hinders visual interpretation, and especially the automatic classification of radar images, its reduction by so-called speckle filters is useful. A number of filters have recently been developed for this purpose. They are either statistical filters (LEE, FROST, SIGMA, MAP), geometric filters (ASF), multi-temporal filters or filters based on wave theory. Currently, only the statistical filters are used in a more or less applied way and included in commercial image processing packages. In the majority of cases a simple averaging can be considered sufficient for visual interpretation.

However, since some objects might disappear through this processing, the technique is not recommended for urban applications. In general, it can be said that the larger the scale of the application, the larger the filter (averaging window) that can be chosen. On the other hand, the larger the window (box) chosen, the longer the processing takes. Two image subsets (see image 4 and image 5) demonstrate the usefulness of filtering: image 4 is not filtered, while image 5 was processed with a 6 x 6 pixel average filter. Certain objects (here rice fields), indicated in dark tones, become visible only after filtering the data.

Image 4: ERS image subset

Image 5: Filtered ERS image subset
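
A simple averaging of the kind used for image 5 can be sketched as follows; the helper name and the synthetic test subset are hypothetical, and the implementation relies on SciPy's uniform (box) filter rather than on any particular commercial package.

```python
import numpy as np
from scipy import ndimage

def average_filter(image, box=6):
    """Reduce speckle with a moving-average (box) filter of size box x box.
    Larger boxes smooth the speckle more, but blur small objects and
    take longer to compute."""
    return ndimage.uniform_filter(image.astype(np.float32), size=box)

# Hypothetical 8 bit subset standing in for the ERS image subset of image 4
subset = np.random.randint(0, 256, size=(512, 512)).astype(np.uint8)
filtered = average_filter(subset, box=6)   # 6 x 6 average, as used for image 5
```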

Specific further application-related processing steps

In general, two processing or classification methods can be distinguished for the thematic interpretation of satellite data: visual interpretation and automatic classification. Visual interpretation can be performed either on paper prints of the images or directly on the screen (then called computer-aided visual interpretation). Automatic classification is done either by using a pixel-oriented approach or with the help of recently developed field classifiers based on image segmentation. All methods and approaches can be applied to a single image (monotemporal) or to several images (multitemporal). Very often, a combination of the different approaches (e.g. first a visual interpretation for overview purposes, followed by an automatic, pixel-based approach) gives the best results.
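
Purely as an illustration of the pixel-oriented approach, the sketch below clusters the grey values of a co-registered (possibly multitemporal) stack of scenes into a small number of classes. The use of scikit-learn's k-means and all names are assumptions; they do not correspond to the classifiers referred to above.

```python
import numpy as np
from sklearn.cluster import KMeans

def pixel_classification(image_stack, n_classes=5):
    """Unsupervised pixel-oriented classification: each pixel is described
    by its grey values on the individual dates (bands) of a co-registered
    stack and grouped into n_classes clusters."""
    bands, rows, cols = image_stack.shape
    features = image_stack.reshape(bands, -1).T.astype(np.float32)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(rows, cols)
```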



SP-1199
Published June 1996.
Developed by ESA-ESRIN ID/D.