Transactions of Karelian Research Centre of
Russian Academy of Sciences
No. 3. 2018. P. 57-71
Published online in December 2017
DOI: 10.17076/lim692
RESEARCH METHODS
UDC 551.482.213 + 528.8.04
MODIS-AQUA AND SENTINEL-2 DATA FUSION: APPLICATION TO OPTICALLY SHALLOW WATERS OF LAKE MICHIGAN
A. A. Korosov1, A. V. Moiseev2, R. Shuchman3, D. V. Pozdnyakov2
1 Nansen Environmental and Remote Sensing Centre, Norway
2 Scientific foundation "Nansen International Environmental and Remote Sensing Centre", Russia
3 Michigan Tech Research Institute, Ann Arbor, USA
Subsumed under the category of ocean colour (OC) data fusion tools, a new approach has been developed to efficiently exploit the merits of two OC satellite sensors differing in their spatial and spectral resolution. The tool makes it possible to combine high spectral but lower spatial resolution optical data from one satellite sensor with higher spatial but lower spectral resolution data from the other into an image possessing simultaneously both high spectral and high spatial resolution. The developed algorithm employs artificial neural networks (ANNs). Its performance and efficiency are demonstrated for Lake Michigan. The fusion was performed using multiband data from the Sentinel-2 Multispectral Instrument (MSI) and the MODIS-Aqua Moderate Resolution Imaging Spectroradiometer (MODIS). In this version the MODIS-Aqua sensor is chosen as an analog of the Sentinel-3 OLCI, whose spectrometric and atmospherically corrected data are as yet unavailable. The multisensor (MS) optical-optical fusion results have persuasively demonstrated the efficiency of the approach and its applicability to studies of natural water bodies of different optical complexity. It can be utilized in combination with any biogeochemical retrieval algorithm. In the case of retrieving water quality parameters (WQP) in optically shallow aquatic environments, the employment of the developed fusion tool is particularly promising, as the bottom reflectance properties are frequently highly heterogeneous. Indeed, in such cases, remote sensing optical data acquired at simultaneously high spatial and spectral resolution are certainly more advantageous than those acquired separately by two different sensors operating either at high spatial (but low spectral) or high spectral (but low spatial) resolution.
For the retrieval of WQP in optically shallow waters (OSW), a special algorithm called the Bio-optical Retrieval Algorithm (BOREALI)-OSW was applied to study the eastern coastal zone of Lake Michigan. The application of both the OC fusion tool and our BOREALI-OSW algorithm made it possible to document both the intra-annual dynamics of WQP and the spatial heterogeneity of the bottom substrate in the target OSW area of Lake Michigan.
Keywords: fusion of multi-sensor ocean colour remote sensing data; optically shallow waters; retrieval of water quality parameters; bottom cover identification; Lake Michigan.
A. A. Korosov, A. V. Moiseev, R. Shuchman, D. V. Pozdnyakov. MODIS-AQUA AND SENTINEL-2 DATA FUSION: APPLICATION TO OPTICALLY SHALLOW WATERS OF LAKE MICHIGAN
A tool has been developed for fusing the data of two satellite ocean colour (OC) sensors, one of which has higher spatial resolution and the other higher spectral resolution. The result is an image possessing simultaneously high spatial and high spectral resolution. The developed fusion algorithm employs the apparatus of artificial neural networks (ANNs), which makes it possible to establish a functional relationship between the input and output data, namely between the radiometric signal registered by the high spatial resolution sensor in its spectral channels and the radiometric signal registered by the high spectral resolution sensor in its own channels. The efficiency of the developed ANN algorithm is demonstrated for Lake Michigan using spectral data from the Sentinel-2 Multispectral Instrument (MSI) and the MODIS-Aqua Moderate Resolution Imaging Spectroradiometer (MODIS). The developed OC data fusion tool does not depend on the particular combination of OC sensors and can be combined with various algorithms for retrieving the target biogeochemical parameters. In the case of retrieving water quality parameters (WQP) in optically shallow waters, the developed OC data fusion tool is particularly effective, since the reflectance characteristics of the bottom cover can be highly spatially variable. To retrieve WQP values in optically shallow waters from the fused OC data, our special algorithm BOREALI-OSW was used, which provides quantitative information not only on WQP but also on the nature of the bottom cover. These capabilities are demonstrated with a case study of the eastern coast of Lake Michigan, in which the intra-annual dynamics of WQP values was documented and the spatial heterogeneity of the bottom substrate in this shallow part of the lake was revealed.
Keywords: fusion of multispectral ocean colour sensor data; optically shallow waters; retrieval of water quality parameters; bottom cover type identification; Lake Michigan.
1. Introduction
Known as a process of combining two or more different images into a single one, image fusion is intended to generate a new image carrying the refined/improved information sought by researchers.
According to the needs of the latter, image fusion is performed at three processing levels, viz., pixel, feature, and decision levels [Pohl and van Genderen, 1998].
High-level fusion, i. e. feature level and decision level fusion, is multi-source data fusion that employs certain combinations of data sources of various nature, as dictated by specific aims. Feature level fusion extracts various features (e. g. texture parameters) from different data sources to further combine them into one or more feature maps.
Methods of decision level fusion encompass voting, statistical and fuzzy logic-based methods. In high-level fusion, optical, synthetic aperture radar (SAR), and light detection and ranging (LiDAR) data are often subject to fusion, but also geographic information systems (GIS) data and ground data [Pohl and van Genderen, 2015].
Fusion of spaceborne image data at the pixel level is intended to integrate information yielded at different spatial and spectral resolutions: the data from a high spatial but low spectral resolution image are included into a low spatial but higher spectral resolution image, while preserving in the latter its high spectral resolution properties. Pixel-level fusion can also aim at increasing the temporal resolution of a sensor of low temporal but high spatial resolution through fusing its data with the data of a sensor of lower spatial but higher temporal resolution [Zang, 2010; Boschetti et al., 2015].
Very often this process is called resolution merging or pan-sharpening. The latter nickname reflects the fact that in the great majority of pixel-level fusion cases panchromatic (PAN) imagery is used as the high spatial and low spectral resolution information source [Amro et al., 2011].
Pan-sharpening is frequently applied to data of a single optical sensor containing both panchromatic and multispectral (MS) data, but sometimes to multi-source data provided by two independent optical sensors differing in spatial, spectral and temporal resolution. Ideally, the outcome of pan-sharpening is an artificial image identical to the image that the MS sensor would yield provided it had the spatial resolution of the panchromatic sensor (such as Satellite Pour l'Observation de la Terre (SPOT), Landsat 7, IKONOS, Quickbird, OrbView) or of a paired higher spatial resolution optical sensor on board some other satellite.
Below we focus on the first of the abovementioned processing levels, i. e. the pixel level, and more specifically, on pan-sharpening procedures.
Pixel level fusion: techniques and algorithm advancement
Due to the huge number of pixel-level pan-sharpening fusion methods suggested and implemented over the last 30 years, it is a challenge to overview and categorize them [Aiazzi and Alparone, 2012].
Pohl and van Genderen [2015] suggested the following categorization constituted by five groups: (i) component substitution (CS); (ii) numerical and statistical image fusion; (iii) modulation-based techniques; (iv) multi-resolution approaches; and (v) hybrid techniques.
Prior to overviewing the pan-sharpening techniques, it is worth explicitly emphasizing that in optical-optical data fusion the high spatial resolution data may be provided not solely by PAN images but also by multispectral images, while the high spectral resolution counterpart may be a multispectral or hyperspectral image of lower spatial resolution than the multispectral image it is fused with. In many applications, such as studies of fine features of surficial manifestations of biogeochemical processes in aquatic environments, simultaneously high spectral and high spatial resolution images are mandatory. However, at least at present, obtaining such data from one and the same optical sensor is unattainable due to hardware limitations. Consider, for instance, the data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Medium Resolution Imaging Spectrometer (MERIS), and the recently orbited Ocean and Land Colour Instrument (OLCI) on Sentinel-3. OLCI has a reasonably good set of water colour wavelengths, but at a rather coarse (300 m) spatial resolution. Contrarily, e. g. Landsat TM, ETM+ and the MultiSpectral Instrument (MSI) on Sentinel-2 yield data at much higher spatial resolution (several tens of meters), but with a rather scarce number of spectral channels in the visible range. Thus, frequently, multi-sensor data fusion is implied.
(i) Briefly, CS techniques (also called projection techniques) convert a number of spectral bands inherent in the original image into, e. g., another colour space, where one of the resulting channels is replaced by a new image; the reverse transform yields the actual fused image accommodating information from both inputs. Within this category, the intensity-hue-saturation (IHS) [e. g. Aiazzi et al., 2007], principal component substitution (PCS), and Gram-Schmidt [Laben and Brower, 2000] techniques are most frequently exploited [Liu and Liang, 2016]. In recent years IHS has undergone numerous improvements to overcome the deterioration of the spectral content in the fused image (inclusion of trade-off parameters, enhancement of IHS image- and edge-adaptivity, combination of MS-induced and PAN-induced weights, adjustment of the histograms of the input images to assure equality of the means and standard deviations; for references see Pohl and van Genderen [2015]).
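As an illustration of the CS idea, the following is a minimal additive-IHS sketch, assuming the band mean as the intensity component (one of the simplest IHS intensity definitions) and an MS image already upsampled to the PAN grid; it is not any specific published variant:

```python
import numpy as np

def ihs_pansharpen(ms_rgb, pan):
    """Component-substitution sketch: replace the intensity of an
    upsampled RGB image with the PAN band while keeping hue/saturation.
    Equivalent additive form: fused = MS + (PAN - intensity)."""
    # intensity taken as the band mean (simplest IHS intensity definition)
    intensity = ms_rgb.mean(axis=2)
    detail = pan - intensity                 # spatial detail to inject
    fused = ms_rgb + detail[:, :, None]      # add the detail to every band
    return np.clip(fused, 0.0, 1.0)
```

By construction the fused image inherits the PAN intensity exactly, which is also why plain IHS substitution distorts spectra when PAN and intensity differ strongly.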
CS methods suffer from spectral distortion due to the significant incompatibility of PAN (or any high spatial resolution image) and substituted component. To overcome, at least partially, this challenge, many modified CS approaches have been suggested. Thus, there appeared a generalized IHS (GIHS), a spectrally adjusted IHS (SAIHS) [Tu et al., 2004]. Based on the minimum mean square error criterion, an adaptive Principal Component Analysis (PCA) method [Aiazzi et al., 2007] calculates the optimal weights with respect to a low pass filter version of the Pan image. Capitalizing on the cross-correlation coefficients, Shah et al. [2008] combined PCA with the contourlets transform approach, but this procedure should be rather ascribed to the family of hybrid techniques (see below).
Based on the technique nicknamed the guided filter (GF), Liu and Liang [2016] developed two novel methods, referred to as a band-dependent version and a multispectral version. Operating with two parameters (a regularization parameter and a window size), this technique extracts the missing spatial details in the MS images by minimizing, with the help of the MS images, the difference between the PAN image and its filtered output. The claimed advantage of this approach is that it assures edge preservation and structure transfer.
(ii) Numerical and statistical approaches (NSA) perform multiplicative operations and create difference and ratio images. The widely used Brovey transform (BT) resides in spectral modeling intended to attain a normalization of the input bands via addition, subtraction and ratioing. The colour normalized (CN) colour sharpening, with local modulation of the MS image by the ratio of the new and initial intensity components, was developed to avoid the colour distortion inherent in BT [Vrabel, 2000].
The principal component analysis (PCA) is also a very popular tool. It implies the replacement of the first PC by a high-resolution (e. g. a PAN or a low spatial resolution MS) image. Another version of this technique is the substitution of the last PC, in the course of which spatial detail is injected instead of replacement by either a PAN or a low spatial resolution MS image [Cakir and Khorram, 2008].
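A minimal sketch of the first PCA variant (replacing PC1 of the MS bands with a histogram-matched high-resolution image) might look as follows; the MS image is assumed already resampled to the high-resolution grid, and the implementation details (eigendecomposition of the band covariance, mean/std matching) are generic rather than from any particular paper:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA component-substitution sketch: project the MS bands onto
    their principal components, replace PC1 with the (mean/std matched)
    PAN band, and invert the projection.  ms: (rows, cols, bands)."""
    r, c, b = ms.shape
    x = ms.reshape(-1, b)
    mean = x.mean(axis=0)
    xc = x - mean
    # eigendecomposition of the band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]       # PCs by decreasing variance
    pcs = xc @ vecs
    # match PAN mean/std to PC1 before the substitution
    p = pan.reshape(-1)
    p = (p - p.mean()) / (p.std() + 1e-12) * (pcs[:, 0].std() + 1e-12) + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean
    return fused.reshape(r, c, b)
```

Because only PC1 is touched and PCs are centered, the per-band means of the fused image stay equal to those of the input MS image.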
Provided the geometry (i. e. the spatial features) is encompassed by the PAN or low spatial resolution MS image, a variational model using filtering and subsampling allows local relationships between neighboring pixels to be considered, bringing about a noising effect [Duran et al., 2014].
A purely statistical approach (Fuze Go™) to fusing PAN and high spectral resolution images seeks a least squares fit between the gray values of the input images and estimates the output values with statistical methods.
(iii) The so-called Indusion technique, also known as the modulation-based approach (MBA), uses a ratio between the PAN and its low-pass filtered image, with a further modulation of a lower spatial resolution MS image. The latter can be upscaled by nonlinear interpolation to attain better results [Khan et al., 2008]. If instead of PAN a low spectral resolution MS image is used, a modulation of the MS channel with spatial detail assures a robust implementation of this approach. The typical modulation-based fusion algorithms are composite and encompass the Brovey [Vrabel, 2000], Smoothing Filter-based Intensity Modulation (SFIM) [Liu, 2000], High-Pass spatial Filter (HPF) [Chavez et al., 1991; Gangkofner et al., 2008; Rong, 2014] and Synthetic Variable Ratio (SVR) [Zhang, 1999] fusion algorithms (see below in the hybrid algorithms paragraph).
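As an illustration of the modulation idea, SFIM reduces to a one-line modulation, fused = MS x PAN / lowpass(PAN); the sketch below uses a simple box filter as the smoothing operator, which is an assumption, since implementations differ in the choice of low-pass filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim(ms_band, pan, size=5):
    """Smoothing Filter-based Intensity Modulation (SFIM) sketch:
    fused = MS * PAN / lowpass(PAN).  The low-pass PAN approximates
    PAN at the MS spatial resolution, so the ratio carries only the
    high-frequency spatial detail that modulates the MS band."""
    pan_low = uniform_filter(pan, size=size)
    return ms_band * pan / (pan_low + 1e-12)
```

Note that where the PAN image is locally smooth the ratio is close to 1 and the MS band passes through unchanged.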
If within NSA the spatial detail is injected into MS from PAN with the interband structure model [Garzelli and Nencini, 2005], the application of the modulation transfer function of the imaging system might be desirable, as this permits avoiding spectral distortion to a certain degree.
(iv) Employing wavelets, curvelets, contourlets and similar transforms [Starck et al., 2003; Choi et al., 2004; Zang, 2009; Metwalli et al., 2014], multiresolution analysis (MRA) techniques decompose input images into multiple channel images and find their application for revealing high frequency spatial detail. Multi-scale models are layered as a pyramid whose base is the original image. Layering is performed using the above transforms. The fused image is obtained by the inverse transform. Otazu et al. [2005] have extended this technique so that it can be applied to any number of MS bands.
(v) To overcome the spectral incompatibility of PAN and MS images, instead of inserting gray values into MS spectral components, several alternative ways were exploited. They are subsumed under the category of hybrid techniques. For instance, the IHS transform [Hong and Zhang, 2009] converts the original MS bands into IHS space. Then the fast Fourier transform (FFT) [Nussbaumer, 1982] is applied to both the obtained intensity spectrum and the PAN images [Ling et al., 2007]. Further, the former is low spatial frequency filtered, while the latter is high spatial frequency filtered. With the inverse FFT the thus processed MS images are converted back into the spatial domain. Called the Ehlers fusion, this method eliminates the limitations inherent in other methods, even for multi-sensor or multi-temporal images [Klonus, 2008].
To facilitate the optimization of both the spectral and spatial content of fused images, the IHS transform in combination with, e. g., the wavelet transform (WT) has been suggested [e. g. Hong et al., 2009]. MS data are transformed by IHS and the intensity component is further decomposed by WT to reach the same pixel size as PAN. This is followed by replacing the intensity wavelet decompositions by the PAN decompositions.
The idea of parameter optimization in conjunction with low spatial frequency filter (LF) and empirical adjustment of intensity image by means of regressing MS and histogram-matched PAN was exploited to derive a general hybrid algorithm [Choi et al., 2013].
The comparison of different methods in order to identify the best fusion algorithm is a challenging and doubtful task, as authors report the efficiency of their own algorithms applied to the tasks they had to tackle. Moreover, it is hardly appropriate to compare fusion algorithm efficiency across different sensors, different covered areas, etc. Even if the studies are similar, different choices of individual parameters can be of significant consequence. The same refers to fusion quality assessment, as the criteria chosen by different workers are highly diverse [Palubinskas, 2013; Pohl and van Genderen, 2015]. Undoubtedly, the most successful image fusion algorithms must be sensor-specific and adaptive. This accentuates the problem of exploiting data from a continuously extending number of new sensors, such as the satellite sensors recently put into orbit and planned for the years to come under the COPERNICUS Programme.
Below we present our original approach to optical-optical MS fusion. In what follows we show that in its present form the developed procedure is of the hybrid family: belonging ideologically to the CS cohort of fusion methods (radiometric intensity values are substituted by RGB values), it exploits artificial neural networks (ANNs) to inject high spatial resolution features into a higher spectral resolution image. The performance of this approach is illustrated for Lake Michigan.

Fig. 1. An example of the ANN architecture
The fusion option selected by us (fusion of the MSI MS images with the MODIS RGB image of one and the same area and time of data acquisition) was intended to provide a visually easily perceivable fusion result. A statistical assessment of the correspondence between the remote sensing reflectance (Rrs) values inferred from the MODIS and fused data at the three wavelengths used to generate the RGB images was intended to illustrate the adequacy of the fusion procedure.
2. Methodology description
ANN approach
The ANN approach is truly sensor-specific and adaptive. In pattern recognition tasks it has proved more powerful and efficient than, e. g., linear and simple nonlinear analyses [Haykin, 1998].
In application to image fusion, the ANN-based method employs a nonlinear response function that iterates many times in a special network structure (exemplified in Fig. 1) in order to learn the complex functional relationship between input and output training data.
The input layer has several neurons, which represent the feature factors extracted and normalized from image A and image B. The function of each neuron is a sigmoid function given by:
f(x) = \frac{1}{1 + e^{-x}}. \qquad (1)
The hidden layer has several neurons and the output layer has one (or more) neurons. The ith neuron of the input layer connects with the jth neuron of the hidden layer by weight W_{ij}. The weight between the jth neuron of the hidden layer and the tth neuron of the output layer is V_{jt} (in the considered application of the ANN-based algorithm t = 1). The weighting function is used to simulate and recognize the response relationship between features of the fused image and the corresponding features of the original images (image A and image B).
The ANN fusion model can be presented as follows:
Y = \frac{1}{1 + \exp\left[-\left(\sum_{j=1}^{q} V_{jt} H_j - \gamma_t\right)\right]}, \qquad (2)

where Y = pixel value of the fused image exported from the neural network model, q = number of nodes in the hidden layer(s) (we employed two hidden layers with q = 14 + 2), V_{jt} = weight between the jth hidden node and the output node t (t = 1), \gamma_t = threshold of the output node t, H_j = exported value of the jth hidden node:
H_j = \frac{1}{1 + \exp\left[-\left(\sum_{i=1}^{n} W_{ij} a_i - \theta_j\right)\right]}, \qquad (3)

where W_{ij}, as above, is the weight between the ith input node and the jth hidden node, a_i = value of the ith input factor, n = number of nodes in the input layer (n = 13 in the ANN architecture used in the present work), \theta_j = threshold of the jth hidden node.
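Equations (1)-(3) describe a standard feedforward network with sigmoid activations. A minimal sketch of this forward pass, using the 13-14-2-1 architecture described in Section 5 and random placeholder weights (the trained weights are of course not reproduced here), might look as follows:

```python
import numpy as np

def sigmoid(x):
    # eq. (1)
    return 1.0 / (1.0 + np.exp(-x))

def forward(a, weights, thresholds):
    """Feedforward pass of eqs. (2)-(3): each layer computes
    sigmoid(W @ h - theta).  `weights` and `thresholds` hold one
    entry per layer (here 13 -> 14 -> 2 -> 1, as in the paper)."""
    h = a
    for W, theta in zip(weights, thresholds):
        h = sigmoid(W @ h - theta)
    return h

# random placeholder parameters for the 13-14-2-1 architecture
rng = np.random.default_rng(42)
sizes = [13, 14, 2, 1]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
thresholds = [rng.normal(size=m) for m in sizes[1:]]

# one 13-channel input vector -> one Rrs-like output in (0, 1)
y = forward(rng.random(13), weights, thresholds)
```

The sigmoid output layer also explains why the network's single output is naturally bounded, which suits normalized Rrs values.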
In the first step of ANN-based data fusion, two registered MS images are decomposed into several blocks/windows of size M by N. Then, features of the corresponding blocks/windows in the two original images are extracted, and the normalized feature vectors fed to the neural networks can be constructed. The features used here to evaluate the fusion effect are normally spatial frequency, visibility, and edge. The next step is to select some vector samples to train the neural networks. An ANN is a universal function approximator that adapts to any nonlinear function defined by a representative set of training data. Once trained, the ANN model remembers the learned functional relationship and can be used for further calculations. It is exactly for these reasons that the ANN concept has been adopted to develop strongly nonlinear models for multi-sensor data fusion.
The ANN-based fusion method exploits the pattern recognition capabilities of artificial neural networks, whose learning capability makes it feasible to customize the image fusion process. Many applications have indicated that ANN-based fusion methods have advantages over traditional statistical methods, specifically when the input multi-sensor data are incomplete or noisy. The ANN often serves as an efficient tool at the decision fusion level owing to its self-learning capability, especially in land use/land cover classification. In addition, the multiple inputs - multiple outputs framework makes it a useful approach to the fusion of high dimension data, such as long-term time series or hyperspectral data.
Fuzzy k-means classification algorithm for bottom type identification
The fuzzy k-means classification is based on clustering, i. e. the grouping of objects such that the objects within one group/cluster have similar features while differing from those of other groups. An important advantage of such algorithms is that they do not rely on the traditional assumptions of statistical methods: they can be employed under conditions of a near-complete absence of information on the type of data distribution. For such algorithms, the input information for clustering is the matrix of observations (X) of size M x N, where M is the number of rows, corresponding to the number of objects, and N is the number of characteristics. Fuzzy clusters are described by a matrix of fuzzy partitioning [Pintore et al., 2003]:
F = [\mu_{ki}], \quad \mu_{ki} \in [0, 1], \quad k = 1, \ldots, M, \quad i = 1, \ldots, c, \qquad (4)
where the kth row assigns the degrees/weights with which the object (x_{k1}, x_{k2}, \ldots, x_{kN}) belongs to the clusters A_1, A_2, \ldots, A_c. The matrix F thus describes degrees of belonging to clusters; in the case of fuzzy partitioning, the degree to which a given object belongs to a cluster varies within the interval [0, 1]. The conditions of fuzzy partitioning are formalized as follows:
\sum_{i=1}^{c} \mu_{ki} = 1, \quad k = 1, \ldots, M; \qquad 0 < \sum_{k=1}^{M} \mu_{ki} < M, \quad i = 1, \ldots, c. \qquad (5)
Fuzzy partitioning makes it easy to handle objects located at the interface of two clusters: they are simply attributed a degree of belonging equal to 0.5 to each. The intrinsic drawback of fuzzy partitioning stands out when dealing with objects distanced from the centers of all clusters. Distanced objects have little in common with any of the clusters, so that intuitively it seems reasonable to attribute low degrees of belonging to them. However, according to the condition stipulated by equation (5), the sum of their degrees of belonging is the same as for objects located close to the cluster centers, i. e. equal to 1. To overcome this drawback, it is possible to resort to partitioning based on plausibility. It requires the fulfillment of a sole condition: an arbitrary object from X must belong to at least one cluster. Such partitioning is achieved by setting a less rigorous condition than that in equation (5).
For the assessment of the quality of fuzzy partitioning the following criterion is used:
J = \sum_{k=1}^{M} \sum_{i=1}^{c} (\mu_{ki})^m \lVert x_k - v_i \rVert^2, \qquad (6)

where v_i = \sum_{k=1}^{M} (\mu_{ki})^m x_k \Big/ \sum_{k=1}^{M} (\mu_{ki})^m are the centers of the fuzzy clusters and m \in [1, \infty) is the exponential weight determining the "fuzziness"/overlapping of the clusters.
There is a considerable number of fuzzy clustering algorithms based on the minimization of the criterion in equation (6). Developing a matrix F of fuzzy partitioning with a minimal value of this criterion is a nonlinear optimization task, which can be resolved by different methods. A frequently used one is the fuzzy k-means algorithm based on the Lagrangian method of undetermined multipliers [Zimmermann, 2001; see also Shahraiyni et al., 2009]. The accuracy assessment of this algorithm as applied to mapping of L. chlorophorum is reported in [Morozov et al., 2010].
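The fuzzy k-means iteration (alternate updates of the cluster centers and of the membership matrix F) can be sketched as follows; the variable names mirror eqs. (4)-(6), and the standard membership-update formula derived from the Lagrangian is a generic textbook form rather than the authors' specific implementation:

```python
import numpy as np

def fuzzy_kmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy k-means sketch minimizing the criterion of eq. (6).
    X: (M, N) observation matrix; c: number of clusters; m: fuzziness
    exponent.  Returns (F, centers), F being the (M, c) membership
    matrix of eq. (4)."""
    rng = np.random.default_rng(seed)
    F = rng.random((X.shape[0], c))
    F /= F.sum(axis=1, keepdims=True)            # rows sum to 1, eq. (5)
    for _ in range(n_iter):
        Fm = F ** m
        # cluster centers: membership-weighted means of the objects
        centers = (Fm.T @ X) / Fm.sum(axis=0)[:, None]
        # distance of every object to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update from the Lagrangian of eq. (6)
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        F = inv / inv.sum(axis=1, keepdims=True)
    return F, centers
```

An object equidistant from two centers ends up with memberships near 0.5 in each, exactly the interface case discussed above.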
3. Lake Michigan: a concise general description
Due to the nature of its formation (initially a pristine melt water body), and the resulting morphometry, thermal regime, watershed soil and geochemistry of Lake Michigan (41°35'N - 46°N; 85°W -88°W), this water body was originally oligotrophic [Chapra et al., 1981; Gillespie et al., 2008].
It still remains mostly as such due to its glacial heritage, although there are indications that the lake's trophic status should now be defined as oligo-mesotrophic [Mida et al., 2010]. This is because Lake Michigan has been subjected to external pressures produced by climate warming (water temperature growth in the upper layers), atmospheric fallout (phosphorus deposition) and human activities (input of phosphorus and other pollutants, including toxic ones, through sewage and atmospheric deposition). At the same time, during the second half of the previous century the lake became an arena of ballast-water-mediated introduction of invasive species such as quagga and zebra mussels, which act as filter feeders. As a result, they damage the lake ecosystem by disrupting some intrinsic trophic interactions, but also increase the water transparency (e. g. at the Sleeping Bear Dunes the bottom visibility depth increased from ca 2.5 m in 1970 to 20 m in 2010), let more solar light reach the bottom in shallow coastal zones, and stimulate the growth and increase the areal extent of macrophytes [Nalepa and Schloesser, 2014].
Location of the target shallow area in Lake Michigan
The fusion methodology developed was applied to the lake's eastern coast, and more specifically to a location called "Sleeping Bear Dunes" (44°50'N, 88°W). It has a sandy beach and shallow bottom depths (https://www.ngdc.noaa.gov/mgg/greatlakes/michigan.html). The bottom substrate is predominantly sandy, with occasional inclusions of spots of macrophyte stands with Cladophora as the main species [Mida et al., 2004]. Reportedly, the offshore extent of macrophytes along the coastal zone generally does not exceed depths of 5-10 m, although along the northernmost periphery of the lake standing stocks are found at depths nearing 20-25 m [Shuchman et al., 2013].
Presently, the phytoplankton community comprises four major groups: blue-green and green algae, diatoms and flagellates (http://www.glerl.noaa.gov/pubs/brochures/foodweb/LMfoodweb.pdf).
4. Input and output data description
We employed radiometric data from two satellites, viz. Sentinel-2 and MODIS-Aqua. Sentinel is the name of a family of environmental remote sensing platforms launched, or awaiting launch, under the ESA COPERNICUS Programme (www.copernicus.eu).
For our purposes, data from only one satellite of this series, viz. Sentinel-2a (below referred to as S-2a), were available so far. The S-2a platform accommodates the Multispectral Imager (MSI), which provides data at high spatial resolution (10-60 m) in several spectral channels in the visible. However, the number of spectral channels in the visible (only four) is rather limited and their placement (Table) is not optimally suited. These deficiencies preclude the use of this sensor for efficient retrieval of water quality parameters in optically complex or shallow waters. Thus, in our studies S-2a acted in the capacity of a sensor with high spatial but low spectral resolution (Table), whose data were to be fused with those of a sensor providing higher spectral resolution, although at a rather coarse spatial resolution. S-2a was launched on June 23, 2015, and its orbit was adjusted to assure a revisit time of 10 days. In the case of Lake Michigan, the time of this satellite's overflight was close to 5 p. m. Satellite S-2b was launched on 07.03.2017, and its data are as yet unavailable.
MSI data were of the L1C level (https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/processing-levels) [i. e. not atmospherically corrected] in 13 spectral channels (Table, A). Further on, these L1C data were atmospherically corrected (see section Satellite level 1 data processing).
These data were fused with the data from MODIS-Aqua, available at a much higher spectral resolution (6 bands in the visible and 1 band in the near IR [865 nm]), but at a lower spatial resolution (1 km in the visible) (Table, B).
Satellite level 1 data processing
Atmospheric correction and image preparation for the fusion procedure
a) S-2a data: we applied the ESA atmospheric correction algorithm Sen2cor (http://step.esa.int/main/third-party-plugins-2/sen2cor/). S-2a data are provided by ESA in granules sizing 100 km by 100 km (https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/product-types). The granules are to be further mosaicked and latitudinally-longitudinally reprojected (EPSG:4326 - WGS84). For fusion, S-2a data were reduced to a 60 m per pixel spatial resolution.
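The reduction of all S-2a bands to a common 60 m grid can be illustrated with a block-averaging sketch (an assumption, since the paper does not specify the resampling operator): factor 6 converts the 10 m bands and factor 3 the 20 m bands.

```python
import numpy as np

def block_average(band, factor):
    """Reduce a 2-D band to a coarser grid by averaging
    factor x factor pixel blocks (e.g. factor=6: 10 m -> 60 m)."""
    r, c = band.shape
    assert r % factor == 0 and c % factor == 0, "crop band to a multiple of factor"
    return band.reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))
```

Block averaging preserves the mean radiometry of each 60 m cell, which matters when the coarsened pixels are later compared with MODIS Rrs values.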
Spectral channels: location and spatial resolution for S-2a (A) and MODIS-Aqua (B)

A. Sentinel-2a MSI
Band #   Central wavelength, nm (spatial resolution)   Bandwidth, nm
1        443 (60 m)     27
2        490 (10 m)     94
3        560 (10 m)     45
4        665 (10 m)     38
5        705 (20 m)     19
6        740 (20 m)     18
7        783 (20 m)     28
8        842 (10 m)     145
8A       865 (20 m)     33
9        945 (60 m)     26
10       1375 (60 m)    75
11       1610 (20 m)    143
12       2190 (20 m)    242

B. MODIS-Aqua
Band #   Central wavelength, nm (spatial resolution)   Band range, nm
8        412 (1 km)     405-420
9        443 (1 km)     438-448
10       488 (1 km)     483-493
11       531 (1 km)     526-536
1        645 (1 km)     620-670
14       678 (1 km)     673-683
b) MODIS-Aqua data
Our analysis of the spectral curvature of the remote sensing reflectance, Rrs(λ) [i. e. the upwelling spectral radiance above the water-air interface, L(0), normalized to the downwelling spectral irradiance, E(0), at the same level; e. g. Jerome et al., 1996], revealed that the MODIS-Aqua atmospheric correction frequently results in negative values of Rrs(λ) in the blue part of the spectrum. This implies that the standard atmospheric correction is very inaccurate, and such data could not be used for further processing.
To overcome this problem, we applied the MUMM code based on the GW94 atmospheric correction (https://www.osapublishing.org/vjbo/fulltext.cfm?uri=oe-21-18-21176&id=260880#g001). The application of the MUMM correction procedure significantly eased the problem of negative Rrs values in the shortwave region of the visible spectrum. The MUMM atmospheric correction procedure is effected through the employment of the SeaDAS processing code (https://seadas.gsfc.nasa.gov/) extended for working with ocean colour data (OCSSW): images were downloaded from the https://oceancolor.gsfc.nasa.gov/ site and then subjected to geolocation (L1B level). Thus, MODIS L2A level spectrometric data were obtained.

Fig. 2. Flowchart of S-2 and MODIS imagery processing
5. S-2a and MODIS-Aqua data fusion procedure
To prepare the MODIS-Aqua L2A images for fusion, they were reprojected and synchronized (in terms of geolocation) with the paired S-2a images.
ANN architecture
Our ANN consists of four layers of neurons (Fig. 1). The first layer encompasses 13 neurons accommodating the Rrs values from the 13 S-2a spectral channels (Table, A). The two hidden layers have 14 and 2 neurons, respectively. The output layer consists of a single neuron yielding the value of Rrs at one of the MODIS spectral channels, i.e. 412, 443, 488, 531, 645 or 678 nm (Table, B). That is, we developed a separate ANN for each MODIS spectral channel. The development of ANNs with only one output neuron could be performed relatively fast: the computing time required for training the ANNs for the fusion of one pair of MODIS-Aqua and S-2a images depends on the computer power, but in our case it took one hour. This makes the developed method quite practical. Training of each ANN was conducted until the RMSE reached 10-15 %.
Thus, the established values of S-2a Rrs at the 6 MODIS wavelengths permit obtaining the desired information at the MODIS spectral resolution and the S-2a high spatial resolution, and hence attaining the aim of data fusion.
6. Results of S-2a and MODIS-Aqua data fusion
RGB images
Visual analysis of paired MSI and MODIS data has shown that for the entire 2016 growing season only five pairs could be used for fusion. The dates of the overflights are 09.05, 05.07, 26.07, 15.08 and 04.09. The time difference within each of the five overflight pairs did not exceed 2.5 hours. Figure 3 illustrates the spatial distribution of RGB images generated from MODIS-Aqua and fused MODIS-Aqua - S-2a data for the above dates.
The RGB images developed from the fused S-2a and MODIS-Aqua data exhibit a logical sequence of phases of phytoplankton development in Lake Michigan. Indeed, the green areas (corresponding to enhanced concentrations of phytoplankton chlorophyll) stand out twice in the year, viz., in spring and early autumn (i.e. 09.05 and 15.08), which is in complete conformance with the vernal and autumnal phytoplankton outbreaks in Lake Michigan [Shuchman et al., 2006; their Fig. 6].
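The per-band ANN used for the fusion (13 S-2a Rrs inputs, hidden layers of 14 and 2 neurons, one MODIS Rrs output) can be sketched in pure numpy. The training data below are synthetic stand-ins for the MODIS/S-2a matchups, and the learning rate and epoch count are our assumptions, not the settings used by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 13 S-2a Rrs inputs -> 14 -> 2 -> 1 MODIS Rrs
# output; one such network is trained per MODIS spectral channel.
sizes = [13, 14, 2, 1]
W = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(X):
    """Return the activations of all layers (tanh hidden, linear output)."""
    acts = [X]
    for i in range(len(W)):
        z = acts[-1] @ W[i] + b[i]
        acts.append(z if i == len(W) - 1 else np.tanh(z))
    return acts

# Synthetic "matchup" set: the target band is a linear mix of two inputs
X = rng.uniform(0.0, 1.0, (500, 13))
y = (0.3 * X[:, 1] + 0.1 * X[:, 4]).reshape(-1, 1)

lr = 0.1
for epoch in range(3000):                      # plain batch gradient descent
    acts = forward(X)
    delta = (acts[-1] - y) / len(X)            # gradient of 0.5*MSE at output
    for i in reversed(range(len(W))):
        gW, gb = acts[i].T @ delta, delta.sum(axis=0)
        if i > 0:
            delta = (delta @ W[i].T) * (1.0 - acts[i] ** 2)  # tanh derivative
        W[i] -= lr * gW
        b[i] -= lr * gb

rmse = np.sqrt(np.mean((forward(X)[-1] - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Because each network has a single output neuron, six such networks (one per MODIS band) are trained independently, which keeps each training run small and fast.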
Application of the BOREALI-OSW algorithm
The BOREALI-OSW algorithm is described in detail elsewhere [Korosov et al., 2017]. It is based on both the Levenberg-Marquardt multivariate procedure [Press et al., 1992] and the theory of light transfer in semi-infinite media [Maritorena et al., 1994]. Within our approach, the remote sensing reflectance Rrs(λ) is presented as a sum of two components originating from the light interactions within the water column, Rrs^DEEP, and at the bottom. The optical influence of the latter is determined by the bottom substrate spectral albedo, A(λ). Thus, the resultant (total) spectral remote sensing reflectance, Rrs^TOT, can be formalized as follows:
Rrs^TOT(λ, 0+) = Rrs^DEEP(λ)[1 - exp(-2KH)] + [A(λ)/Q] exp(-2KH),   (7)
where 0+ indicates the level just above the air-water interface, K is the spectral coefficient of upwelling and downwelling light attenuation in the water column, H is the bottom depth, and Q is the ratio of the upwelling irradiance to the upwelling radiance, which converts the bottom albedo into remote sensing reflectance units.
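The two-flow shallow-water model of Eq. (7) can be coded directly. The sketch below uses scalar inputs for a single wavelength; the parameter values are purely illustrative:

```python
import numpy as np

def rrs_total(rrs_deep, albedo, K, H, Q):
    """Eq. (7): remote-sensing reflectance of an optically shallow
    water column over a reflecting bottom.

    rrs_deep : Rrs of the hypothetically infinitely deep water column
    albedo   : spectral bottom albedo A
    K        : attenuation coefficient of up- and downwelling light, 1/m
    H        : bottom depth, m
    Q        : factor converting the bottom albedo (an irradiance
               reflectance) into remote-sensing reflectance units
    """
    t = np.exp(-2.0 * K * H)            # two-way transmission to the bottom
    return rrs_deep * (1.0 - t) + (albedo / Q) * t

# Limiting behaviour: a very deep column returns the deep-water Rrs,
# while a vanishing depth returns the bottom term A/Q.
print(rrs_total(0.005, 0.3, K=0.2, H=100.0, Q=4.0))  # ~0.005
print(rrs_total(0.005, 0.3, K=0.2, H=0.0,   Q=4.0))  # 0.075
```

The two limiting cases make the role of the exponential weight explicit: the water-column and bottom contributions trade off through exp(-2KH).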
The fused ocean colour data were processed with the BOREALI-OSW algorithm to retrieve the concentrations of phytoplankton chlorophyll (CHL), total suspended matter (TSM) and coloured dissolved organic matter (CDOM).
Fig. 4 illustrates the spatial distributions of phytoplankton CHL concentrations for the above five dates as obtained from the fused MODIS-Aqua and S-2a data. The paired plates in Fig. 4 explicitly show the advantage of the fusion procedure over the results from MODIS-Aqua alone.
The adequacy of the retrieved concentrations could only be assessed through a comparison with in situ data, but the latter were unavailable to us. Nevertheless, the retrieved CHL and TSM concentrations comply well with the data reported for this part of the lake and this time range [Korosov et al., 2017].
Mapping of bottom type
Mapping of bottom type was performed for an area called Pyramid Point within the aforementioned Sandy Bear Dunes site. The k-means technique concisely described above was applied to bottom substrate classification. Spectra of Rrs values from the fused data (at 60 m resolution) were partitioned into three classes: sand; chara stands (chara is a genus of charophyte green algae in the family Characeae known to be common in the coastal area of Lake Michigan [Shuchman et al., 2013]); and sandy substrate either sparsely covered by macrophytes (chara or cladophora) or slightly silted. The area was confined to depths not exceeding 15 m.
Fig. 4. Spatial distributions of CHL (colour scale 0.0-3.0 mg/m3) from (a) MODIS-Aqua data and (b) fused MODIS-Aqua and S-2a data for 09.05, 05.07, 26.07, 15.08 and 04.09 (presented in top-to-bottom sequence).
Fig. 5. Spatial distribution of bottom types (classes 1-3) obtained for Pyramid Point using the k-means and fusion techniques.
The clustering thus performed made it possible to produce a map of bottom type heterogeneity at a 60 m spatial resolution (Fig. 5).
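The three-class partitioning can be sketched with a minimal k-means (Lloyd) implementation in numpy. The spectra below are synthetic stand-ins for the fused 60 m Rrs stack, and the class reflectance levels and simplified initialization are our illustrative assumptions:

```python
import numpy as np

def kmeans(spectra, k=3, n_iter=20):
    """Cluster pixel spectra (n_pixels x n_bands) into k bottom classes
    with plain Lloyd iterations. Initialization is simplified: evenly
    spaced samples are taken as the starting class centres."""
    idx = np.linspace(0, len(spectra) - 1, k).astype(int)
    centers = spectra[idx].copy()
    for _ in range(n_iter):
        # Assign each spectrum to the nearest centre (Euclidean distance)
        d = np.linalg.norm(spectra[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre as the mean spectrum of its class
        for j in range(k):
            if np.any(labels == j):
                centers[j] = spectra[labels == j].mean(axis=0)
    return labels, centers

# Synthetic stand-in for the fused Rrs stack: three spectrally distinct
# bottom classes (e.g. bright sand, mixed substrate, chara), 100 pixels
# each, 6 bands, with a little noise added
rng = np.random.default_rng(1)
levels = np.array([0.02, 0.01, 0.005])
spectra = np.repeat(levels, 100)[:, None] * np.ones(6)
spectra += rng.normal(0.0, 0.0005, spectra.shape)
labels, centers = kmeans(spectra, k=3)
```

In the real application each pixel's class label is mapped back onto the 60 m grid, which is what yields the bottom-type map of Fig. 5.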
The above-described bottom type classification reveals that the area adjacent to the coast is sandy (it is characterized by the highest bottom albedo). Further offshore, the area with a depth of 10-15 m is covered by chara stands. The intermediate area belongs to the intermediate class, although it contains sandy spots as well as spots covered by submerged vegetation. These features explicitly indicate that the spatial heterogeneity of the area ascribed to class 2 is not due to depth changes, but is driven by changes in the bottom albedo.
Concluding remarks
Summing up, we have developed and implemented in computer code our own method of ocean colour data fusion. The fused images were processed with the BOREALI-OSW algorithm to yield the CPA concentrations in the target optically shallow area of Lake Michigan. The retrieved concentrations comply well with the respective values typical of this lacustrine area.
We have also investigated the possibility of employing the fused data for retrieving the bottom type. This tentative bottom type classification is rather rough, as only three classes were assumed. In reality, the bottom cover might be more heterogeneous if the respective mosaic elements are smaller than the spatial resolution of the fused radiometric data.
The attained results strongly suggest that the developed algorithm can be successfully used for the fusion of data from Sentinel-2 and Sentinel-3, because Sentinel-3 is highly akin to MODIS-Aqua in terms of spectral and spatial resolution [Donlon et al., 2012].
Understandably, our fusion algorithm can be applied to data from the above sensors not only to generate RGB images of higher spatial resolution but also to produce Rrs values in the Sentinel-3 spectral bands in the visible. This will require training a larger number of ANNs (according to the number of Sentinel-3 spectral bands in the visible). Because of this, the proposed approach might at first sight appear rather cumbersome. But the relative simplicity of the method (as compared to those we discussed in the review section) and the reasonably manageable computing time successfully balance this seeming drawback.
We envisage that the employment of S-3 data will also be very beneficial: sooner or later MODIS-Aqua will cease its performance, while S-3 (with nearly the same radiometric characteristics as MODIS-Aqua) will, supposedly, last for at least the next decade.
References
Aiazzi B., Alparone L., Baronti S. et al. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Transactions on Geoscience and Remote Sensing. 2002. Vol. 40, no. 10. P. 2300-2312.
Aiazzi B., Alparone L. Twenty-five years of pan-sharpening: a critical review and new developments. Signal and Image Processing for Remote Sensing. Second Ed. Boca Raton: CRC Press. 2012. P. 533-548.
Amro I., Mateos J., Vega M., Molina R., Katsaggelos A. K. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP Journal on Advances in Signal Processing. 2011. 79. doi: 10.1186/1687-6180-2011-79
Boschetti L., Justice C. O., Humber M. L. MODIS-Landsat fusion for large area 30 m burned area mapping. Remote Sensing of Environment. 2015. Vol. 161. P. 27-42. doi: 10.1016/j.rse.2015.01.022
Cakir H. I., Khorram S. Pixel level fusion of panchromatic and multispectral images based on correspondence analysis. Photogrammetric Engineering and Remote Sensing. 2008. Vol. 74, no. 2. P. 183-192. doi: 10.14358/PERS.74.2.183
Chapra S. C., Dobson H. F. H. Quantification of the Lake trophic typologies of Nauman (surface quality) and Thienemann (oxygen) with special reference to the Great Lakes. Journal of Great Lakes Research. 1981. Vol. 7, no. 2. P. 182-193. doi: 10.1016/S0380-1330(81)72044-6
Chavez P. S., Sides S. C., Anderson I. A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering and Remote Sensing. 1991. Vol. 57, no. 3. P. 295-303.
Choi J., Yeom J., Chang A., Byun Y., Kim Y. Hybrid pan-sharpening algorithm for high spatial resolution satellite imagery to improve spatial quality. IEEE Geoscience and Remote Sensing Letters. 2013. Vol. 10, no. 3. P. 490-494. doi: 10.1109/LGRS.2012.2210857
Choi M., Kim R. Y., Kim M. G. The curvelet transform for image fusion. International Society for Photogrammetry and Remote Sensing, ISPRS. 2004. Vol. 35. P. 59-64.
Donlon C., Berruti B., Buongiorno A., Ferreira M.-H., Femenias P., Frerick J., Goryl P., Klein U., Laur H., Mavrocordatos C., Nieke J., Rebhan H., Seitz B., Stroede J., Sciarra R. The Global Monitoring for Environment and Security (GMES) Sentinel-3 mission. Remote Sensing of Environment. 2012. Vol. 120. P. 37-57. doi: 10.1016/j.rse.2011.07.024
Duran J., Buades A., Coll B., Sbert C. A nonlocal variational model for pan-sharpening image fusion.
SIAM Journal on Imaging Sciences. 2014. Vol. 7, iss. 2. P. 761-796. doi: 10.1137/130928625
Gangkofner U. G., Pradhan P. S., Holcomb D. W. Optimizing the high-pass filter addition technique for image fusion. Photogrammetric Engineering and Remote Sensing. 2008. Vol. 74, no. 9. P. 1107-1118. doi: 10.14358/PERS.74.9.1107
Garzelli A., Nencini F. Interband structure modeling for pan-sharpening of very high resolution multispectral images. Information Fusion. 2005. Vol. 6, no. 3. P. 213-224. doi: 10.1016/j.inffus.2004.06.008
Gillespie T. W., Foody G. M., Rocchini D., Giorgi A. P., Saatchi S. Measuring and modelling biodiversity from space. Progress in Physical Geography. 2008. Vol. 32, no. 2. P. 203-221. doi: 10.1177/0309133308093606
Haykin S. Neural Networks. A Comprehensive Foundation. Upper Saddle River, NJ: Prentice Hall. 1998.
Hong G., Zhang Y., Mercer B. A wavelet and IHS integration method to fuse high resolution SAR with moderate resolution multispectral images. Photogrammetric Engineering and Remote Sensing. 2009. Vol. 75, no. 10. P. 1213-1223. doi: 10.14358/PERS.75.10.1213
Jerome J. H., Bukata R. P., Miller J. R. Remote sensing reflectance and its relationship to optical properties of natural water. International Journal of Remote Sensing. 1996. Vol. 17, no. 1. P. 43-52. doi: 10.1080/01431169608949135
Khan M. M., Chanussot J., Condat L., Montanvert A. Indusion: fusion of multispectral and panchromatic images using the induction scaling technique. IEEE Geoscience and Remote Sensing Letters. 2008. Vol. 5, no. 1. P. 98-102. doi: 10.1109/LGRS.2007.909934
Klonus S. Comparison of pansharpening algorithms for combining radar and multispectral data. XXI ISPRS Congress (Beijing, 3-11 July 2008). 2008. P. 189-194.
Korosov A. A., Pozdnyakov D. V., Shuchman R. A., Sayers M., Sawtell R., Moiseev A. V. Bio-optical retrieval algorithm for the optically shallow waters of Lake Michigan. I. Model description and sensitivity/robustness assessment. Transactions of the KarRC of RAS. 2017. No. 3. P. 79-93. doi: 10.17076/lim473
Laben C. A., Brower B. V. Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening. Patent US 6011875 A [US patent No 6011875 A]. 2000.
Ling Y., Ehlers M., Usery E. L., Madden M. FFT-enhanced IHS transform method for fusing high-resolution satellite images. ISPRS Journal of Photogrammetry and Remote Sensing. 2007. Vol. 61, iss. 6. P. 381-392. doi: 10.1016/j.isprsjprs.2006.11.002
Liu J. G. Smoothing filter-based intensity modulation: a spectral preserve image fusion technique for improving spatial details. International Journal of Remote Sensing. 2000. Vol. 21, no. 18. P. 3461-3472. doi: 10.1080/014311600750037499
Liu J., Liang S. Pan-sharpening using a guided filter. International Journal of Remote Sensing. 2016. Vol. 37, no. 8. P. 1777-1800. doi: 10.1080/01431161.2016.1163749
Maritorena S., Morel A., Gentili B. Diffuse reflectance of oceanic shallow waters: Influence of water depth and bottom albedo. Limnology and Oceanography. 1994. Vol. 39, iss. 7. P. 1689-1703. doi: 10.4319/lo.1994.39.7.1689
Metwalli M. R., Nasr A. H., Faragallah O. S., El-Rabaie E.-S. M., Abbas A. M., Alshebeili S. A., Abd El-Samie F. E. Efficient pan-sharpening of satellite images with the contourlet transform. International Journal of Remote Sensing. 2014. Vol. 35, iss. 5. P. 1979-2002. doi: 10.1080/01431161.2013.873832
Mida J. L., Scavia D., Fahnenstiel G. L. Long-term and recent changes in southern Lake Michigan water quality with implications for present trophic status. Journal of Great Lakes Research. 2010. Vol. 36. P. 1-8. doi: 10.1016/j.jglr.2010.03.010
Mida J. L., Scavia D., Fahnenstiel G. L., Pothoven S. A. Cladophora Research and Management in the Great Lakes. In: Proceedings of a Workshop Held at the Great Lakes WATER Institute, University of Wisconsin-Milwaukee, December 8, 2004. GLWI Special Report No. 2005-01.
Morozov E. A., Korosov A. A., Pozdnyakov D. V., Pettersson L. H., Sychev V. I. A new area-specific bio-optical algorithm for the Bay of Biscay and assessment of its potentials for SeaWiFS and MODIS/Aqua data merging. International Journal of Remote Sensing. 2010. Vol. 31. P. 6541-6555.
Nalepa T. F., Schloesser D. W. Quagga and Zebra Mussels: Biology, Impacts, and Control. Boca Raton, FL: CRC Press. 2014. 312 p.
Nussbaumer H. J. Fast Fourier transform and convolution algorithms. Berlin: Springer-Verlag. 1982. 280 p.
Otazu X., Gonzalez-Audicana M., Fors O., Nunez J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Transactions on Geoscience and Remote Sensing. 2005. Vol. 43, no. 10. P. 2376-2385. doi: 10.1109/TGRS.2005.856106
Palubinskas G. Fast, simple, and good pan-sharpening method. Journal of Applied Remote Sensing. 2013. Vol. 7, no. 1. P. 073526-1-073526-12. doi: 10.1117/1.JRS.7.073526
Pintore M., Van De Waterbeemd H., Piclin N., Chrétien J. R. Prediction of oral bioavailability by adaptive fuzzy partitioning. European Journal of Medicinal Chemistry. 2003. Vol. 38, no. 4. P. 427-431. doi: 10.1016/S0223-5234(03)00052-7
Pohl C., van Genderen J. Multisensor image fusion in remote sensing: Concepts, methods and applications. International Journal of Remote Sensing. 1998. Vol. 19, iss. 5. P. 823-854. doi: 10.1080/014311698215748
Pohl C., van Genderen J. Structuring contemporary remote sensing image fusion. International Journal of Image and Data Fusion. 2015. Vol. 6, no. 1. P. 3-21. doi: 10.1080/19479832.2014.998727
Press W., Teukolsky S., Vettering W., Flannery B. Numerical Recipes in C: The Art of Scientific Computing. 2nd ed. New York: Cambridge University Press. 1992.
Rong K., Wang Sh., Yang Sh., Jiao L. Pan-sharpening by exploiting sharpness of the spatial structure. International Journal of Remote Sensing. 2014. Vol. 35, iss. 18. P. 6662-6673. doi: 10.1080/2150704X.2014.960607
Shah V. P., Younan N. H., King R. L. An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets. IEEE Transactions on Geoscience and Remote Sensing. 2008. Vol. 46, no. 5. P. 1323-1335. doi: 10.1109/TGRS.2008.916211
Shahraiyni H. T., Shouraki S. B., Fell F., Schaale M., Fischer J., Tavakoli A., Preusker R., Tajrishy M., Vatandoust M., Khodaparast H. Application of the active learning method to the retrieval of pigment from spectral remote sensing reflectance data. International Journal of Remote Sensing. 2009. Vol. 30. P. 1045-1065. doi: 10.1080/01431160802448927
Shuchman R. A., Sayers M. J., Brooks C. N. Mapping and monitoring the extent of submerged aquatic vegetation in the Laurentian Great Lakes with multi-scale satellite remote sensing. Journal of Great Lakes Research. 2013. Vol. 39. P. 78-89. doi: 10.1016/j.jglr.2013.05.006
Shuchman R., Korosov A., Hatt C., Pozdnyakov D., Means J., Meadows G. Verification and application of a bio-optical algorithm for Lake Michigan using SeaWiFS: a 7-year inter-annual analysis. Journal of Great Lakes Research. 2006. Vol. 32. P. 258-279. doi: 10.3394/0380-1330(2006)32[258:VAAOAB]2.0.CO;2
Starck J. L., Murtagh F., Candes E. J. Gray and colour image contrast enhancement by the curvelet transform. IEEE Transactions on Image Processing. 2003. Vol. 12, no. 6. P. 706-716. doi: 10.1109/TIP.2003.813140
Tu T. M., Huang P. S., Hung C. L., Chang C. P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geoscience and Remote Sensing Letters. 2004. Vol. 1, no. 4. P. 309-312. doi: 10.1109/LGRS.2004.834804
Vrabel J. Multispectral imagery advanced band sharpening study. Photogrammetric Engineering & Remote Sensing. 2000. Vol. 66, no. 1. P. 73-79.
Zhang J. Multi-source remote sensing data fusion: status and trends. International Journal of Image and Data Fusion. 2010. Vol. 1, no. 1. P. 5-24. doi: 10.1080/19479830903561035
Zhang Y. Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing. 2009. Vol. 47, no. 11. P. 3834-3842. doi: 10.1109/TGRS.2009.2017737
Zhang Y. A new merging method and its spectral and spatial effects. International Journal of Remote Sensing. 1999. Vol. 20, no. 10. P. 2003-2014. doi: 10.1080/014311699212317
Zimmermann H. J. Fuzzy Set Theory. Boston, MA: Kluwer Academic Publishers, 2001.
Received July 31, 2017
CONTRIBUTORS:

Korosov, Anton
Researcher, Head of the Sea and Land Ice Remote Sensing Group, PhD (Phys.-Math.)
Nansen Environmental and Remote Sensing Center
Thormøhlens gate 47, N-5006 Bergen, Norway
e-mail: [email protected]

Moiseev, Artem
Junior Researcher
Scientific foundation "Nansen International Environmental and Remote Sensing Centre"
14th Line 7, Office 49, Vasilievsky Island, 199034 St. Petersburg, Russia
e-mail: [email protected]

Shuchman, Robert
Director, PhD
Michigan Tech Research Institute
3600 Green Court, Suite 100, Ann Arbor, MI 48105, USA
e-mail: [email protected]

Pozdnyakov, Dmitry
Deputy Director for Science, Head of the Aquatic Ecosystems Group, DSc (Phys.-Math.), Professor
Scientific foundation "Nansen International Environmental and Remote Sensing Centre"
14th Line 7, Office 49, Vasilievsky Island, 199034 St. Petersburg, Russia
e-mail: [email protected]