
УДК 528.8 (94)

John C. TRINDER

School of Surveying and SIS, the University of NSW, Sydney

STATE OF THE ART AND FUTURE DEVELOPMENTS IN PHOTOGRAMMETRY AND REMOTE SENSING

SUMMARY

New developments in digital imaging have enabled high resolution imaging from aerial sensors. This is claimed to represent a paradigm shift in aerial imaging, one that will significantly change its methods, applications and economics. In addition, close range photogrammetry now incorporates digital cameras capable of moderate to very high accuracies of object measurement. These developments are covered in the paper. New satellite systems, particularly those based on small satellite technologies, are being developed by many countries to satisfy their priorities to enter the space industry. In parallel with these developments are those of GEO and GEOSS, which aim to coordinate developments in satellite technologies to reduce overlaps and provide more comprehensive Earth observation datasets to all regions of the world. The paper considers these developments and how they will satisfy the future needs of society.

INTRODUCTION

The development of new computer technologies, laser scanning, radar imaging and satellite technologies is leading to a range of new equipment for automatic mapping, GIS database acquisition, terrain and city modelling, land cover analysis and environmental modelling. This paper reviews the current state of the art of the technologies for digital aerial imaging, digital imaging for close range photogrammetric applications, terrain laser scanning, interferometric Synthetic Aperture Radar (InSAR) and new satellite developments, especially small satellites, together with some trends in automated information extraction from digital imagery, especially multi-sensor data fusion.

DIGITAL AERIAL CAMERAS

It is only in the last 6-7 years that digital aerial cameras have been developed that can replace film cameras. Film cameras have been developed to a very advanced level over a period approaching 100 years and have proved to be very efficient in acquiring high resolution and high quality aerial images. To compete with these highly advanced film aerial cameras, the new digital cameras should be able to acquire high resolution images with Ground Sampling Distance (GSD) of 10cm or less, have similar or better geometric accuracies, have comparable angles of field, include efficient management and storage of large volumes (TBytes) of data, take advantage of the particular characteristics of digital image acquisition, and be suitable for mapping and orthophoto production. The majority of these aims have been largely achieved with the current digital aerial cameras. Their efficiency is likely to improve as the technologies are advanced. Sales of digital aerial cameras have increased rapidly over recent years.

There are two approaches to the design of digital aerial cameras:

- Systems based on linear array scanners (pushbroom), in which the terrain is scanned by one or more linear arrays as the aircraft moves over the terrain. They normally incorporate at least three sensors, one looking forward, one looking vertically and one backwards to acquire three separate overlapping images of the terrain that can be used for determining elevations. An integrated GPS/IMU system is essential for this configuration for the determination of camera position and tilts, because the image acquisition is a continuous process and not frame based. Typical systems available or soon to be available are Leica Geosystems ADS40, Wehrli 3-DAS-1, DLR HRSC, Jenoptik (JAS-150).

- Systems based on multiple area arrays, in which the images are stitched together to form a larger frame image. The formats of the images are usually not square, but their dimensions approach those of frame aerial film cameras. The images can be processed using standard digital photogrammetric software. A GPS/IMU system is not essential for their operation, but components of such a system may be included as an option. Typical systems are Z/I DMC, Vexcel UltraCamD and UltraCamX.

The general characteristics of most of these digital cameras are:

- 12 bit dynamic range of panchromatic images with ground pixel sizes as small as 5cm;

- Most acquire multi-spectral images, some with larger pixel sizes than for the panchromatic images, covering wavelengths from blue to NIR;

- Proven high geometric accuracies;

- Data storage capabilities of several terabytes, both for the original data and backup;

- Area coverages are smaller than those of wide-angle film cameras of 90°; the cross-track and along-track coverages are typically less than 50°;

- The Base/Height (B/H) ratio is about 0.7 for push-broom cameras, but as low as 0.3 for area array cameras; this has led to the use of high percentage overlaps of 80% to 90% (i.e. highly redundant imaging) for area array cameras to achieve high geometric accuracy;

- If highly redundant imaging is acquired, then ‘true’ or ‘near-true’ orthophotos are achievable.
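The effect of the B/H ratio on height precision can be sketched with the standard single-pair rule sigma_Z = (H/B) * sigma_parallax. The matching precision of 0.3 pixel used below is an assumed, illustrative figure, not a value from the text:

```python
def height_precision(gsd_m, b_over_h, match_precision_px=0.3):
    """Standard error of photogrammetric heights (m) from one stereo pair:
    sigma_Z = (H/B) * sigma_parallax, with the parallax measurement
    precision expressed in ground units as match_precision_px * GSD."""
    return match_precision_px * gsd_m / b_over_h

# 10cm GSD imagery: push-broom (B/H ~ 0.7) vs area array (B/H ~ 0.3).
print(height_precision(0.10, 0.7))  # ~0.043 m
print(height_precision(0.10, 0.3))  # ~0.10 m
```

The weaker single-pair geometry of area array cameras is what motivates the 80% to 90% overlaps noted above: averaging each point over n independent rays improves the precision roughly by a factor of 1/sqrt(n).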

According to some manufacturers, the development of digital cameras has led to a 'paradigm shift' in the way photogrammetry is undertaken. The elements of this shift are: digital imaging; high levels of redundancy in the imaging, and hence multiple images available for processing; digital image processing for information extraction; highly accurate and reliable information extraction; much higher throughput; improved quality of orthophotos; and more imaging for a wider range of applications. Highly redundant imaging is possible because manual handling of the data is eliminated and all data is processed automatically.

CLOSE RANGE DIGITAL PHOTOGRAMMETRY

The multitude of digital cameras on the consumer market provides a wide selection for photogrammetric applications. Many processes can be undertaken automatically on digital images for close range application, such as point identification and coordinate measurement, full automation of the measuring process (real-time or off-line), and enhanced modelling and visualization. Coded targets can also be used for defining positions of known points, while the measurement of individual points can be done manually if required, using a so-called EO (exterior orientation) device that speeds up the process.

The range of digital cameras that can be used for close range photogrammetry can be broadly grouped into 3 types:

- Amateur: cost <US$500, giving relative accuracies of about 1:20,000.

- Professional: cost up to US$2,000, giving accuracies of about 1:100,000.

- Photogrammetric: cost of the order of US$50,000, giving accuracies of about 1:200,000.
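These relative accuracies translate into object-space figures by dividing the largest object dimension by the ratio. A quick check for a hypothetical 2m object:

```python
# Object-space accuracy = object size / relative accuracy (1:k).
object_size_mm = 2000.0  # a hypothetical 2 m object
for label, k in [("amateur", 20_000),
                 ("professional", 100_000),
                 ("photogrammetric", 200_000)]:
    print(f"{label}: {object_size_mm / k:.2f} mm")
```

So even an amateur camera resolves a 2m object to about 0.1mm, while a photogrammetric camera reaches 0.01mm.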

New low cost close range photogrammetry software packages are also becoming available, that can be used for processing of the digital images, leading to a broader range of close range photogrammetric applications.

LIDAR OR TERRAIN LASER SCANNING

In LiDAR (Light Detection And Ranging) or airborne laser scanning (ALS) a laser scans the terrain surface at right angles to the flight direction of an aircraft. The measured distance from the aircraft to visible points on the terrain surface will enable the position and elevation of points to be determined. The following equipment is included:

- The laser scanning normal to the flight direction.

- GPS to determine the location of the aircraft based on kinematic measurements.

- IMU (Inertial Measuring Unit) or INS (Inertial Navigation System) to continuously determine the tilts of the aircraft.

A dense set of elevation posts (XYZ coordinates), or 'point cloud', is determined at a separation of typically about 1m, representing a digital surface model (DSM) of the visible terrain. This means that elevations are determined for objects visible from the air, such as buildings and trees, as well as for the terrain surface where the laser beam penetrates the vegetation. The accuracy of the elevation posts is of the order of 10-20cm, although tests have shown that accuracies better than 5cm are achievable on interpolated hard surfaces. The separation of the posts depends on the configuration of the LiDAR equipment and the scanning frequency, which is increasing as the systems are developed further.

Most LiDAR systems register multiple returns or echoes of the laser beam; as a minimum the first and the last pulses are recorded. If the laser beam hits a tree, part of the beam is reflected by the canopy, and the returned signal is registered by the sensor as the first pulse. The rest of the signal may penetrate the canopy and thus be reflected further below the top of the tree, possibly even by the terrain surface. The last pulse registered by the sensor corresponds to the lowest point from which the signal was reflected. In certain cases, the difference in elevation between the first and last pulses can be taken as a measure of tree height.
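The first-minus-last logic can be sketched with hypothetical return elevations. The 2m threshold separating vegetated posts from effectively single returns is an assumed value, and the estimate only holds where the last return actually reaches the ground:

```python
# Hypothetical first- and last-return elevations (m) for five LiDAR posts.
first_return = [312.4, 305.1, 298.7, 310.0, 301.2]
last_return = [295.2, 305.1, 281.9, 295.5, 301.2]

VEG_THRESHOLD_M = 2.0  # assumed: smaller separations treated as one echo

# Where the last return reaches the ground, first - last estimates the
# canopy height; identical returns indicate open ground or a hard surface.
canopy_height = [
    f - l if f - l >= VEG_THRESHOLD_M else 0.0
    for f, l in zip(first_return, last_return)
]
print(canopy_height)
```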

Along with the travel time of the signal from the sensor to the terrain and back, the intensity of the returned laser beam may also be registered by LiDAR systems. LiDAR systems typically operate in the infrared part of the electromagnetic spectrum, so the intensity can be interpreted as an IR image. As well as the laser data, images of the terrain surface may be recorded with a low or high resolution digital camera. These images may be used to identify the location and description of points on the terrain surface. The combination of colour and IR as multi-spectral images can provide valuable information for information extraction.

A summary of characteristics of LIDAR data for information extraction is as follows:

- Since a DSM is recorded on the visible surface, it is necessary to process the data for the extraction of the bare earth DEM.

- Multiple returns of the same signal, especially the first and last returns, enable the determination of tree and building heights. As well, penetration of the vegetation by the laser signal enables the measurement of elevations of the bare earth, even though the terrain does not appear to be visible from the air.

- The intensity of the return laser signal, combined with colour images, enables the use of image processing techniques, such as classification.

- The economics of LIDAR equipment require it to be used over large areas, and therefore GBytes of data are likely to be acquired in a single mission (250,000 points may be recorded in a few seconds). Automatic processes are required for the extraction of information from the data.
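Bare-earth extraction from the DSM, noted in the first point above, is commonly done with morphological or progressive filters. A deliberately crude 1D sketch (the window size, tolerance and profile values are all illustrative, not from the text) labels a post as ground when it sits near the local minimum:

```python
def ground_filter(dsm_profile, window=3, tolerance=0.5):
    """Label each post ground/non-ground by comparing it with the local
    minimum in a sliding window (a crude morphological opening)."""
    n = len(dsm_profile)
    labels = []
    for i, z in enumerate(dsm_profile):
        lo = max(0, i - window)
        hi = min(n, i + window + 1)
        local_min = min(dsm_profile[lo:hi])
        labels.append("ground" if z - local_min <= tolerance else "object")
    return labels

# A profile crossing a ~7 m building on gently sloping terrain.
profile = [100.0, 100.2, 107.5, 107.6, 107.4, 100.9, 101.1]
print(ground_filter(profile))
```

Production filters must also cope with steep terrain and large buildings that exceed the window, which is why iterative or surface-based methods are used in practice.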

Typical applications of LIDAR data may include:

- DEMs of the bare earth surface.

- Beach erosion studies.

- Infrastructure analysis.

- Flood risk analysis, flood simulation, and drainage design.

- Ground subsidence.

- Visibility analysis.

- Telecommunications planning.

- Noise propagation studies.

- Volume change monitoring.

- Buildings extraction for 3D city models.

- Forest analysis.

Recent developments in LiDAR include:

- Full waveform sensing, in which the complete waveform of the return signal from the terrain is recorded: this enables better identification of the terrain characteristics and hence feature identification, but little or no software is currently available to interpret the data.

- Higher frequency laser pulsing brought about by high quality systems and also multiple pulsing during data acquisition. This will enable closer spacing of the elevation posts on the terrain surface and more accurate determination of elevations on the terrain, as well as on features such as buildings and vegetation.

INTERFEROMETRIC SAR (InSAR)

In InSAR, signals from an antenna illuminate the terrain and echoes scattered from the surface are recorded by two separated antennas on board the aircraft. For most satellite InSAR systems, the two images are recorded on two separate satellite passes over the same area of terrain. The time separation between the two passes is a minimum of 1 day, but is sometimes much longer. The significant measurement in InSAR is the difference in phase between the signals received at the two antennas. These differences can be used to determine terrain elevations with high accuracy. After registering the two images, the phases are calculated and differenced on a pixel by pixel basis, resulting in a phase difference image or interferogram. The process of resolving the 2π ambiguities in these measured phase differences is referred to as phase 'unwrapping', and it recovers the continuous phase from which elevations are derived. Accuracies of 0.5m are possible for short wavelength airborne radars, while accuracies of the order of 5m to 10m are achievable with spaceborne systems. The recent development of longer wavelength P-band interferometric airborne systems enables elevation determination over heavily forested areas, which is not possible with shorter wavelength systems, since P-band radar is able to penetrate the vegetation. InSAR systems are very economical for medium accuracy elevation determination over large areas. Differential interferometric SAR systems that determine changes in elevation over several epochs can achieve very high accuracies, of the order of millimetres, and are important for measuring ground subsidence.
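The wrapping and unwrapping of phase can be sketched in one dimension with numpy's `unwrap` on a simulated phase ramp. The height of ambiguity used here is a hypothetical figure; in practice it depends on the wavelength, baseline and imaging geometry:

```python
import numpy as np

# Simulated absolute interferometric phase along one range line (radians).
true_phase = np.linspace(0.0, 6 * np.pi, 200)

# The sensor only measures phase modulo 2*pi ("wrapped" into (-pi, pi]).
wrapped = np.angle(np.exp(1j * true_phase))

# Unwrapping restores a continuous phase by resolving the 2*pi ambiguities.
unwrapped = np.unwrap(wrapped)

# With an assumed height of ambiguity (metres of relief per 2*pi cycle),
# unwrapped phase converts to relative elevations.
HEIGHT_OF_AMBIGUITY_M = 50.0  # hypothetical value
elevation = unwrapped * HEIGHT_OF_AMBIGUITY_M / (2 * np.pi)
```

Real interferograms require 2D unwrapping in the presence of noise, layover and decorrelation, which is considerably harder than this clean 1D case.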

InSAR is a complex process and special software is required to process the data. Typical problems that must be overcome include: temporal decorrelation, particularly for repeat pass satellite InSAR, caused by the time differences between the two satellite passes; spatial decorrelation, also called geometric decorrelation caused by the magnitude of the baseline; volume decorrelation, due to variations in the penetration of the radar signal into ground features within a resolution pixel; and atmospheric artefacts. These issues will reduce the accuracy of elevation determination, but means are available to significantly correct for these errors.

DEVELOPMENTS IN SATELLITE EARTH OBSERVATION

There are more than 50 Earth observation (EO) satellites already in orbit or planned by 2010, largely with the same spectral coverage. A brief summary of the characteristics of these satellites is as follows:

- Civil land imaging satellites with resolutions <36m in orbit or planned by 2010:

  - Optical: 26 in orbit, 25 planned.

  - Radar: 3 in orbit, 9 planned.

- These can be divided into two major resolution groups:

  - 18 high resolution systems (0.5m to 1.8m).

  - 44 mid resolution systems (>2.0m to 36m).

- The two groups have greatly different coverage capabilities:

  - High-resolution swaths range from 8km to 28km.

  - Mid-resolution swaths are generally between 70km and 185km, except for the Disaster Monitoring Constellation (DMC) satellites, which have swaths of 600km.

More than 20 countries, including many small ones, are prepared to fund the development of satellite systems. Many of these countries are developing small satellites, which cost less and can be developed much more quickly, satisfying their ambitions to become space-faring nations.

Small satellites are characterised by:

- Rapid development timescales for experimental missions, ranging from just 6 to 36 months.

- Designs based on leading-edge commercial off-the-shelf (COTS) technology.

- Innovative solutions and cheaper alternatives to the established systems.

- Lighter weight and smaller volumes.

A typical classification of satellites is as follows (Table 1):

Table 1. Classification of satellites

Satellite group   Mass         Possible cost ($US 1999)   Examples
Large             >1000kg
Medium            500-1000kg
Mini              100-500kg    5-20 million               DMC, TopSat
Micro             10-100kg     2-5 million                SSTL MicroSat-70
Nano              1-10kg       <1 million
Pico              0.1-1kg
Femto             <100g

The question can be asked as to why so many countries wish to develop a space capability including EO. Some of the reasons are:

- Improve communications in remote areas.

- National security and national pride.

- Scientific endeavours, such as planetary exploration, studies of aspects of global change.

- Development of a high technology capability for the future, where the spin-off benefits will enable countries to develop leading-edge high technology industries.

- Operational aspects of EO, such as EU farming policies.

- Providing a space capability that may not be available from elsewhere in the future, e.g. RiceSAT and fire monitoring.

Given the current level of development in satellite systems, a number of questions can be asked. Because CCD sensor technology is used in most of these satellites, the spectral range of most sensors is limited to less than about 1200nm. It therefore remains to be seen whether the middle wavelength ranges are adequately catered for. Similar questions apply to microwave and hyperspectral sensors. Coordination in the development of EO would ensure that gaps are filled in the range of data acquired by EO systems. It appears that many developments of EO sensors are being driven only by national priorities and not by global needs.

The establishment of GEO (Group on Earth Observation) and GEOSS (Global Earth Observation System of Systems) is intended to coordinate the developments of future satellite systems. However, there is no compulsion for member countries of GEO to limit their plans for space developments, or to cooperate with other members. Indeed, the technical commitments of GEO members or participating organizations will only apply to those contributions that they themselves have identified. Hence, in the short-term it seems that there will be very little reduction in the high level of redundancy in the development of satellite systems, since many countries are already developing their own for national reasons.

GEOSS is an ambitious plan which should involve inputs from all members and participating organizations. The best knowledge available should be input into the development of GEOSS, which should result in the acquisition of an adequate range of data for such aspects as sustainable development, climate change and global warming studies, as well as food security, disaster management, provision of safe drinking water and other issues described under societal benefits above. It is hoped that all participating countries will recognize the anticipated benefits of international cooperation in developing GEOSS and commit their space programs to its fulfillment.

PROCESSING OF MULTI-SOURCE DATA

Data fusion is a formal framework for combining data originating from different sources. It aims at obtaining information of greater quality than can be obtained from the data when processed separately; the exact definition of ‘greater quality’ will depend upon the application.


Examples of data fusion are: optical satellite data with radar data; aerial photography with LiDAR data; high resolution panchromatic with multi-spectral data; and interferometric SAR with other types of data. The advantages of data fusion for the remote sensing industry have been argued for more than two decades, yet efficient methods of data fusion still need to be developed. The concept of data fusion is often summarised as 'one plus one is greater than two': the fusion of two datasets can yield significantly more information than a linear combination of the two would suggest. It is important that data fusion methods in remote sensing are developed on strong theoretical principles, so that the full potential of fusing multiple sources of data can be achieved. Data fusion exploits the complementarity of the individual datasets, but it also provides redundancy in the processing, so that when data from one source is unavailable or degraded, data from another source may still serve.
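One concrete instance of the panchromatic/multi-spectral fusion listed above is the Brovey transform, which rescales each band by the ratio of panchromatic intensity to mean multi-spectral intensity. The text does not prescribe this particular method; the sketch below uses synthetic arrays:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical co-registered inputs: a high-resolution panchromatic band
# and three multi-spectral bands resampled to the panchromatic grid.
pan = rng.uniform(0.1, 1.0, size=(8, 8))
ms = rng.uniform(0.1, 1.0, size=(3, 8, 8))  # R, G, B

# Brovey transform: scale each band by the ratio of the panchromatic
# intensity to the mean multi-spectral intensity at every pixel.
intensity = ms.mean(axis=0)
fused = ms * (pan / intensity)
```

By construction the fused bands keep the band ratios of the multi-spectral data while adopting the spatial detail of the panchromatic band: the per-pixel mean of the fused bands equals the panchromatic value.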

An example of the fusion of several sources of data is given by the determination of DEMs from aerial photography and optical remotely sensed data based on the statistical approach of the Dempster-Shafer algorithm (Lu et al 2006). The research is designed for 3D reconstruction from stereo images over trees or built-up areas, based on an attempt to understand and interpret the image content. In order to provide an accurate DTM of the terrain surface, the characteristics of the terrain cover, such as buildings and trees, are determined. Figure 1 illustrates the architecture of the automatic building extraction system.

Figure 1: Architecture of building extraction system

Data fusion is used to resolve multiple propositions for the identification of the buildings, i.e. the DSM, the building classification, and the building outlines derived from the Level Set method. The Dempster-Shafer algorithm is based on probability masses of a proposition A, expressed through 'support' (Sup(A)) and 'plausibility' (Pls(A)). Sup(A) comprises the probabilities derived directly from the data sources, while Pls(A) comprises the probabilities not assigned to the negation of the proposition, i.e. the range of uncertainty free to support proposition A. The possible sets of classes include single classes, unions of classes and negations of classes.

By combining the probabilities of the sources of data derived from the DSM, the NDVI and the extracted building outlines, the most probable description of the buildings or trees could be determined. This is one demonstration of the application of a number of sources of information for the derivation of image content. A similar application of the Dempster-Shafer algorithm has been used to combine aerial multi-spectral images and LiDAR data, also for the extraction of buildings (Rottensteiner et al 2005).
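A minimal sketch of Dempster's rule of combination, the core of such fusion schemes, is given below. The mass values for the 'DSM' and 'NDVI' sources are invented for illustration and do not come from the cited studies:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability (mass) assignments with Dempster's
    rule. Masses are dicts mapping frozensets of class labels to mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    # Normalise by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative masses only, for the propositions 'building' and 'tree';
# the full frame of discernment models the undecided mass.
theta = frozenset({"building", "tree"})
m_dsm = {frozenset({"building"}): 0.6, theta: 0.4}
m_ndvi = {frozenset({"tree"}): 0.3, frozenset({"building"}): 0.5, theta: 0.2}

fused = dempster_combine(m_dsm, m_ndvi)

# Support and plausibility of 'building', as defined in the text.
building = frozenset({"building"})
support = sum(w for s, w in fused.items() if s <= building)
plausibility = sum(w for s, w in fused.items() if s & building)
```

Here the two sources agree strongly on 'building', so its combined mass exceeds that of 'tree', and the interval [Sup, Pls] brackets the residual uncertainty.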

CONCLUSIONS

- Digital sensing is a major new development in photogrammetry, which is being rapidly adopted worldwide. Improvements to the earlier models of digital cameras are continuing and therefore better quality and more efficient digital images will be acquired in future.

- LiDAR has developed as a major source of elevation data, and with improvements in the technology, more detailed data at much higher sampling interval will be possible in the future. Merging of LiDAR with digital imaging is also a future development.

- The large number of Earth observation satellites, including very high resolution satellite sensors with resolutions as small as 0.4m, demonstrates the level of interest in developing satellite technology in many countries around the world. However, this leads to a high level of redundancy in the provision of satellites for Earth observation. It is hoped that GEO and GEOSS will coordinate satellite developments for EO for the next 10 years so that efficient comprehensive monitoring of the environment will be possible.

- Data fusion is a powerful tool for combining data from a number of different sources. With the increase in automation, such techniques will become important in the future for extracting features from the digital images.

BIOGRAPHICAL NOTES

John Trinder graduated with a BSurv at the University of New South Wales in Sydney, Australia (UNSW) in 1963, MSc (ITC) 1965 and PhD (UNSW) 1971. He worked at UNSW from 1965 to 1999, progressing to the position of Professor in 1991. He was Head of the School, now named the School of Surveying and SIS, at UNSW from 1990 to 1999, and currently holds the position of Visiting Emeritus Professor. John has published extensively on his research at UNSW on photogrammetry and remote sensing. He has held executive positions in the Council of ISPRS, including Treasurer (1992-1996), Secretary General (1996-2000), President (2000-2004) and is currently First Vice President.

REFERENCES:

1. Lu Y.H., Trinder J.C. and Kubik K., 2006. Automatic Building Detection Using the Dempster-Shafer Algorithm. Photogrammetric Engineering & Remote Sensing, Vol. 72(4), pp. 395-404.

2. Rottensteiner F., Trinder J., Clode S. and Kubik K., 2005. Using the Dempster-Shafer Method for the Fusion of LIDAR Data and Multi-spectral Images for Building Detection. Information Fusion (Elsevier), special issue on Fusion of Urban Remotely Sensed Features (guest editors: O. Hellwich, Berlin; P. Gamba, Pavia), Vol. 6(4), pp. 283-300.

© John C. Trinder, 2007
