Two calibration models for compensation of the individual elements properties of self-emitting displays
O.A. Basova 1,2, S.A. Gladilin 2, A.S. Grigoryev 2, D.P. Nikolaev 2,3
1 Moscow Institute of Physics and Technology (National Research University), 141701, Moscow Region, Dolgoprudny, Institutsky pereulok 9, Russia;
2 Institute for Information Transmission Problems of Russian Academy of Sciences (Kharkevich Institute), 127051, Moscow, Bolshoy Karetny pereulok 19, Russia;
3 LLC "Smart Engines Service", 117312, Moscow, prospect 60-letiya Oktyabrya 9, Russia
Abstract
In this paper, we examine the applicability limits of different methods of compensation of the individual properties of self-emitting displays with significant non-uniformity of chromaticity and maximum brightness. The aim of the compensation is to minimize the perceived image non-uniformity. Compensation of the displayed image non-uniformity is based on minimizing the perceived distance between the target (ideally displayed) image and the simulated image displayed by the calibrated screen. The S-CIELAB model of the human visual system properties is used to estimate the perceived distance between two images. In this work, we compare the efficiency of the channel-wise and linear (with channel mixing) compensation models depending on the model of variation in the characteristics of display elements (subpixels). It was found that even for a display with uniform chromatic subpixel characteristics, the linear model with channel mixing is superior in terms of compensation accuracy.
Keywords: displays; non-uniformity compensation; dead pixel compensation; display calibration; image enhancement; spatial filtering; spatial resolution; human visual system model; S-CIELAB.
Citation: Basova OA, Gladilin SA, Grigoryev AS, Nikolaev DP. Two calibration models for compensation of the individual elements properties of self-emitting displays. Computer Optics 2022; 46(2): 335-344. DOI: 10.18287/2412-6179-CO-854.
Acknowledgements: This work was supported by the Russian Science Foundation (Project No. 20-61-47089).
Introduction
Modern displays incorporate a large number of individual elements forming an image. Imperfect manufacturing techniques yield variation in the characteristics of the elements composing a display, which results in different luminance of these elements when receiving the same input signal. Furthermore, display elements age at different rates, which also leads to the variation of the characteristics among them [1, 2]. If the input signal does not take into account these differences, then the uniform areas of the input image appear non-uniform on a screen (display non-uniformity problem), which significantly reduces the overall quality of the displayed image.
The problem of compensation of such distortions can be formulated for different types of displays: self-emitting (self-luminous elements with individually determined luminance, e. g. OLED or LED-array displays), transmissive (with optical filters as elements passing the light from the uniform source, e.g. liquid-crystal displays (LCD)), reflective (where the reflection coefficient of the outer light source is controlled, e.g. electronic paper and reflective LCDs), and transflective (which can function as transmissive or reflective, including a backlight dependent on ambient light). Each of these display types is characterized by a different type of distortion [2, 3].
In this work, we consider self-emitting displays. The latter do not suffer from the non-uniform backlight luminance possible for LCDs, thus the main problem of non-uniformity in self-emitting displays is associated with the variation in the emission characteristics of individual pixels (see fig. 1).
Fig. 1. Defective pixels on large format LED-array display can be easily spotted and significantly reduce the quality of the displayed image
The ideal display is a uniform display, i.e. a display with the same characteristics shared by all the pixels. Equalization of the properties of individual display elements involves a search for a compensating transformation of the input signals to a display such that the formed image and the image formed by an ideal display would be as close as possible.
The transformation should allow for efficient implementation, possibly in hardware, to avoid increasing the display latency.
In [4], we suggested a display luminance non-uniformity model, which describes the pixel-wise variation as element-wise multiplication of the input signals to the display matrix by a 3D vector. To calibrate a display within this model, we suggested a compensating transformation defined as the multiplication of the input signal to a pixel by a 3 × 3 matrix.
The question is: does the compensation for the element-wise ("vector") distortion require matrix multiplication? In this work, we study the validity of the matrix compensating transformation compared to the vector (channel-wise) transformation for both the luminance model and the more general spectral model.
It is known that both channel-wise and matrix multiplications are used for color visualization; therefore, their efficient implementation in the monitor design should be possible. For instance, color balance is usually implemented using von Kries's chromatic adaptation [5], which consists of multiplying an input signal by a diagonal matrix (channel-wise multiplication) in the LMS color coordinate system, which corresponds to the responses of the three types of cones of the human eye. The usage of this algorithm assumes a transformation from one coordinate system to LMS. Hence, if the von Kries algorithm is implemented, the monitor supports 3 × 3 matrix multiplication by design. Frequently, for the sake of decreasing computational complexity, the diagonal matrix multiplication is implemented directly in the pixel color coordinate system [6]; then only channel-wise multiplication is required. Thus, both types of compensation (channel-wise and matrix) can potentially be implemented in the monitor design.
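To make the distinction concrete, the following minimal sketch (ours, not taken from any monitor design) contrasts the two correction types; the RGB-to-LMS matrix here is an arbitrary placeholder:

```python
import numpy as np

# Hypothetical 3 x 3 RGB-to-LMS transition matrix (placeholder values).
RGB2LMS = np.array([[0.31, 0.62, 0.05],
                    [0.16, 0.72, 0.12],
                    [0.02, 0.13, 0.85]])
LMS2RGB = np.linalg.inv(RGB2LMS)

def von_kries_balance(rgb, gains):
    """Diagonal correction in LMS; together with the two basis changes
    this amounts to a full 3 x 3 matrix multiplication."""
    return LMS2RGB @ (np.diag(gains) @ (RGB2LMS @ rgb))

def channel_wise_balance(rgb, gains):
    """The cheaper variant: diagonal correction directly in the pixel
    color coordinate system, i.e. three independent multiplications."""
    return gains * rgb

rgb = np.array([0.8, 0.5, 0.3])
print(von_kries_balance(rgb, np.array([1.0, 0.9, 1.1])))
print(channel_wise_balance(rgb, np.array([1.0, 0.9, 1.1])))
```

Note that the composed product LMS2RGB · diag(gains) · RGB2LMS is a single dense 3 × 3 matrix, which is exactly the hardware capability the matrix compensation model relies on.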
1. The display non-uniformity model
Let us consider a display of the following design: the size is W by H pixels, each pixel is composed of three self-luminous elements (subpixels) of different types: red, green and blue. The luminance of a subpixel, depending on the input signal, varies from zero to a certain maximum value determined by subpixel properties. The valid input signal values are limited to the range [0, 1]; all the values outside this interval are clipped to the interval edges (either 1 or 0). We denote the clipping operator to the interval [0, 1] by clip.
Let us denote the spectral properties of the three subpixels of any given pixel at maximum luminance as $e_1(\lambda)$, $e_2(\lambda)$, $e_3(\lambda)$, correspondingly, where $e_k(\lambda)$ is the spectral intensity as a function of wavelength $\lambda$. We call the latter the primaries. They form the color coordinate system defined by the individual pixel characteristics, which we refer to as the pixel coordinate system. The input signal to a subpixel $J_k$, $k = 1, 2, 3$, determines the ratio between the subpixel's luminance and its maximum luminance. It can be defined as a point in the pixel coordinate system. The pixel emission spectrum is defined as

$$C(\lambda) = \sum_{k=1}^{3} \mathrm{clip}(J_k)\, e_k(\lambda). \quad (1)$$
In an ideal display, the primaries of all pixels are identical. The variation of the pixels' primaries of a real display can be described by different models. In this paper, we consider two probability models, which describe pixel defects and the drift of their parameters. Both are based on the assumption that all display subpixels are independent. For the purposes of this work, we assume that all display subpixels can be described by one of the following models of subpixel defects.
1.1. Luminance ("vector") model of subpixel defects
In this model, we assume that different subpixels of the same type differ only in peak luminance. Each subpixel can be either ideal or defective with probability p. If the subpixel is defective, its primary can be defined as
$$e_k^i(\lambda) = e_k(\lambda) \cdot d_k^i, \quad (2)$$

where $d_k^i$ is the defectiveness coefficient sampled from a uniform distribution on the interval [0, 1), determining the factor by which the subpixel's maximum luminance is lower than that of an ideal subpixel; $i$ is the pixel index and $k$ is the subpixel index. Non-defective subpixels of the same type are considered identical: $e_k^i(\lambda) = e_k(\lambda)$, i.e. $d_k^i = 1$.
Therefore, the pixel emission spectrum can be written as

$$C^i(\lambda) = \sum_{k=1}^{3} \mathrm{clip}(J_k)\, d_k^i\, e_k(\lambda). \quad (3)$$
Thus, in the described model, the primaries of all subpixels of the same type are proportional (the only exception being subpixels with a maximum luminance of zero, whose primaries are undefined).
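A minimal sketch of sampling this defect model, under the assumption that the ideal primaries are given as spectra on a discrete wavelength grid (names and defaults are ours; p = 0.08 is the value used in Section 5):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_defect_coefficients(width, height, p=0.08):
    """d_k^i for every subpixel: 1 for ideal subpixels, uniform on [0, 1)
    with probability p for defective ones, eq. (2)."""
    d = np.ones((height, width, 3))
    defective = rng.random(d.shape) < p
    d[defective] = rng.random(np.count_nonzero(defective))
    return d

def pixel_spectrum(J, d, primaries):
    """Emission spectrum of one pixel, eq. (3).
    J, d: 3-vectors; primaries: 3 x n_wavelengths array of e_k(lambda)."""
    return (np.clip(J, 0.0, 1.0) * d) @ primaries
```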
1.2. Spectral model of defects and drift of subpixel parameters
In this model, we assume that subpixels can differ in both peak luminance and spectral characteristics. The spectral characteristic of a subpixel primary is determined within the double Gaussian model described in [7]. This model was constructed and experimentally verified based on empirical data for InGaN and AlInGaP LEDs, and it is determined as follows:
$$e_k^i(\lambda) = b \left( \exp\left( \frac{-(\lambda - m_k)^2}{2\sigma_{k,1}^2} \right) + a_k \exp\left( \frac{-(\lambda - m_k)^2}{2\sigma_{k,2}^2} \right) \right), \quad (4)$$

where $m_k$ is the position of the Gaussian peak of the $k$-th subpixel, $\sigma_{k,1}$ and $\sigma_{k,2}$ are the standard deviations of the Gaussian distributions, $a_k$ is a weighted sum coefficient, $b$ is a luminance factor, and $i$ is the pixel index.
The parameters of the subpixel spectrum model were drawn from the following probability distributions:
$$a_k \sim \mathrm{unif}(a_{k,\min}, a_{k,\max}), \quad m_k \sim \mathrm{unif}(m_{k,\min}, m_{k,\max}), \quad (5)$$

where the specific values of the variables $a_{k,\min}$, $a_{k,\max}$, $m_{k,\min}$, $m_{k,\max}$, $\sigma_{k,1}$, $\sigma_{k,2}$ depend on the subpixel type.
Additionally, this model allows for the completely defective subpixels (i.e. the maximum luminance is zero) with probability q.
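A sketch of this spectral model, eqs. (4)-(5), assuming the 2σ² Gaussian denominators of the reconstruction above; the parameter ranges actually used are listed in Tab. 1 of Section 5:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(380.0, 781.0)  # nm grid (our choice)

def sample_primary(a_range, m_range, sigma1, sigma2, b=1.0):
    """Double Gaussian subpixel primary, eq. (4), with a_k and m_k
    drawn from uniform distributions, eq. (5)."""
    a = rng.uniform(*a_range)
    m = rng.uniform(*m_range)
    g1 = np.exp(-(wavelengths - m) ** 2 / (2 * sigma1 ** 2))
    g2 = np.exp(-(wavelengths - m) ** 2 / (2 * sigma2 ** 2))
    return b * (g1 + a * g2)
```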
2. The general approach to calibration
To compensate for the display non-uniformity, we search for such a transformation of the input signal to the display matrix that the image formed by a non-uniform display is as close as possible to the image formed by an ideal (uniform) display.
There are two different approaches to compensation: compensation of an image displayed on a screen with defective pixels, and display calibration based on gamut optimization.
The idea of defective pixel compensation was first introduced in [8, 9]. The authors propose an algorithm for LCD display calibration. The algorithm uses pixels in the neighborhood of a defective pixel to improve the overall image. The input signal of these masking neighboring pixels is modified in such a way as to increase the perceived uniformity of the area containing the defective pixel. Perception of the displayed image is estimated using a point spread function (PSF) which models the human visual system (HVS). Compensation is achieved by changing the input signal of the masking pixels. Later, in 2012 and 2015, the PSF-based compensation algorithm was patented [10, 11]. Subsequently, a similar method for defective pixels was proposed in [12], based on the contrast sensitivity function (CSF) of the HVS. In [13], an image compensation algorithm based on another HVS model was proposed, which takes into account the masking effects of visual perception.
The methods from the aforementioned papers [8, 9, 12, 13] appear to be designed for image processing rather than for the estimation of display compensation parameters. In other words, to estimate the parameters of defective pixel compensation, one needs to apply the algorithm to every image. In contrast, we estimate the calibration parameters of the display only once. Another difference from our problem formulation is that their distortion model does not allow controlling the brightness of defective pixels and does not seem to consider the case of clustered defects, working only with isolated ones.
In the paper [14], McFadden and Ward suggest a PSF-based algorithm for compensation of a grid distortion caused by gaps between individual tiles of a display. The proposed algorithm reduces the apparent visibility of seams between individual tiles.
The idea of neighbor-based compensation of defective pixels remains relevant to this day: for instance, a patent on this topic was published in 2019 [15]. It describes the compensation of completely defective pixels on an LCD display. The brightness of surrounding pixels is increased to compensate for the lost brightness output of non-functioning pixels. The patent describes three types of spatial distribution function for apportioning additional brightness to surrounding pixels: (i) the additional brightness is divided equally; (ii) larger portions are allocated to closer surrounding pixels; (iii) diagonal pixels receive a larger amount of the additional brightness, since the human eye is more sensitive to horizontal and vertical lines. Also, the total brightness error associated with a given non-functioning pixel must be recomputed for each image frame.
All the algorithms mentioned above are based on various models of the HVS and work as an additional stage in the image processing pipeline to adapt the input image to the characteristics of the display. Another approach to correcting for defective pixels is to calibrate the display; solutions implementing it are compatible with in-display implementation due to their independence from the specific input signal.
One well-studied problem similar to the calibration of displays with defective pixels is tiled display calibration. In such systems, each display tile has its own white point chromaticity and maximum brightness. For example, the methods described in [16, 17] allow for the complete restoration of a non-uniform display to a uniform state by the elimination of extremely bright and / or saturated colors which could not be displayed by some parts of a tiled display. In other words, the algorithms reduce the gamut of the brightest and / or most saturated sub-displays. The latter algorithm could be applied to the calibration of self-emitting displays.
Another group of scientists has developed a system for the calibration of OLED displays. They have published several patents describing a method for color and ageing compensation of an emission display [18, 19, 20]. The algorithm described in the most recent patent [20] compensates degraded pixels by supplying their respective driving circuits with greater voltages. The display data is scaled by a compression factor of less than one to reserve some voltage levels for compensating degraded pixels. In other words, the algorithm reduces the gamut of non-defective pixels.
Some display types, for instance micro-LED, are difficult to produce with high uniformity, therefore the calibration of such displays is often implemented in software [21, 22] rather than in hardware. The paper [21] proposes a calibration of an LED display (the images shown have block artifacts typical for micro-LED-like displays, although this is not specified in the article). The paper presents a calibration method based on a brightness correction coefficient map. The outputs of the algorithm are calibration coefficients for each channel, but their calculation does not take the HVS properties into account.
One of the most up-to-date works on self-emitting display calibration, particularly for micro-LED displays, is a paper from Samsung Research [22]. It proposes two algorithms for calibrating the uniformity of a micro-LED display. The first algorithm calculates a set of input-to-output look-up tables (LUTs). The second algorithm is based on a 4D transform correction: it estimates the 4 × 4 matrix which works as an input-to-output converter at the on-device step. The authors note that the advantage of the 4D transform-based method over the older methods is that it can be fused with any other calibration algorithm that can be converted to a LUT.
All works mentioned above can be divided into two categories: (1) papers that describe a calibration of a display, but do not take into account the HVS properties, (2) methods that consider the HVS properties, but do not calculate a single display calibration, only enhancing individual displayed images. In our work, we try to combine these two approaches: we calculate a single set of compensation parameters for any images displayed on the screen, taking into account the HVS properties.
This work develops the approach proposed in [4]. The suggested approach is based on the following. If the colors of the pixels within a neighborhood of a defective pixel are slightly corrected in a way that ensures that the average color of this area is closer to the desired one, then due to the small size of this area the human vision would perceive this as a whole: the same way the area composed of non-defective pixels with original (uncorrected) input signal would have been perceived. Thus, the proposed approach optimizes the compensation parameters based on the HVS model response, instead of the minimization of the emission difference.
The general flow of the conversion of the input signal is shown in fig. 2.
Fig. 2. The conversion of the input signal to a display into a perceived image: the input sRGB signal is transformed by the compensation matrices or vectors into the subpixel signal, emitted as light, and filtered by the opponent (luminance, red-green, blue-yellow) channels of the HVS model
An image to be displayed is usually represented in standard RGB (sRGB) or other standardized color coordinates defined in relation to the CIE XYZ color space of the standard observer, which simulates a primary response of the HVS model.
Then compensating transformations are applied to the image, i.e. matrix multiplication or element-wise (channel-wise) multiplication by correction vectors. This results in a corrected subpixel signal which takes into account the non-uniformity of the subpixels.
The displayed image is perceived by the human visual system. The perception is modeled by the contrast sensitivity functions of the latter [23], see Section 3. The correspondence between the perceived image and the image displayed by a uniform display is the goal of the calibration.
Since an ideal display cannot be designed, and during the compensation parameters optimization it is not feasible to repeatedly show pairs of images to a person for similarity assessment, we need to develop a computational model for such assessment, which should approximate the HVS perception of an image.
Thus, the compensation method development requires the following:
1. Selection of the distortion model to correct for;
2. Choosing the similarity assessment approach between two images: the ideal one and the one formed by a real display;
3. Selection of the compensation transformation classes whose parameters will be optimized.
The two different distortion models considered in this paper were discussed in Section 1.
To evaluate the similarity, we consider the averaged Euclidean distance in various color coordinate systems: CIE XYZ [24], CIELAB [25], and S-CIELAB [26].
The approach proposed in [4] utilizes multiplication by a 3 × 3 matrix for compensation even for the luminance (i.e. proportional) distortion model instead of the simpler element-wise multiplication by a 3D vector. In this work, we study the validity of such a choice.
3. Similarity assessment between the two images
CIE XYZ [24] (hereafter XYZ) is a color space whose spectral characteristics are determined by the spectral sensitivity functions of an average human eye, and its X, Y, and Z values are referred to as standard colorimetric color coordinates. A simple Euclidean metric in this coordinate system does not approximate the color difference perceived by a person. To eliminate this drawback, the CIELAB coordinate system [25] (hereafter LAB) was derived from the XYZ color space.
Another attribute of the human visual system which is important for display calibration is that the human eye is less sensitive to chromatic and brightness differences in small details compared to larger ones. In other words, when the contrast of an initially resolvable image gradually decreases, the perception of uniformity is achieved before reaching the true zero contrast of the stimulus. Therefore, we are interested in the dependence of the minimum contrast which is still resolved (contrast sensitivity) on the spatial frequency of the stimulus. This dependence is described by contrast sensitivity functions (CSFs) [27]. CSFs also model another spatial property of the visual system (which, however, does not affect the compensation described in this paper): the lower sensitivity to low-frequency achromatic changes compared to higher-frequency ones. The shape of the CSF depends on the direction of the stimulus color contrast vector. Most studies consider the CSF along three directions which are assumed to be independent: the luminance axis, the red-green direction, and the blue-yellow direction. It is presumed that, knowing the sensitivity along these color directions, it is possible to predict the contrast sensitivity for any color pair.
To more realistically assess the difference between images defined in LAB color coordinates, the S-CIELAB metric was proposed in [26]. This metric takes into account the spatial properties of the human visual system (simulated via CSFs). S-CIELAB makes it possible to approximate the perceived image difference for a given viewing distance (i.e. the distance between the observer's eyes and the display) and image resolution (dpi).
S-CIELAB computation includes two steps: 1) spatial filtering of the images; 2) pixel-wise computation of a color difference metric for the compared images. The first step employs the CSF to simulate the dependence of human visual system sensitivity on the spatial frequency and chromaticity of the stimulus. The CSF is approximated using a weighted sum of Gaussian filters (each with its own weight w and blur parameter σ, the latter being scaled according to the viewing distance and image resolution). To apply the CSF more accurately, instead of the nonlinear LAB coordinates, the image is processed in linear opponent color coordinates along three basis directions: luminance, red-green, and blue-yellow. As a result of this spatial filtering, we obtain an image that contains only the details visible at the given distance and resolution, with intensities modified according to the spatial response of the HVS.
In the second step, for two images formed as described above and transformed back from opponent colors to LAB coordinates, the Euclidean metric is computed pixel-wise to create a difference map, which is then averaged into a single value - the output of the S-CIELAB image difference metric.
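A simplified sketch of the two steps is given below; the opponent matrix, the per-channel kernel parameters, and the XYZ-to-LAB conversion are placeholders or inputs (the actual values are given in [26]), and the scaling of the kernel spreads by v is shown only schematically:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder XYZ -> opponent-colors matrix; the real one is given in [26].
M_O = np.array([[ 0.279,  0.720, -0.107],
                [-0.449,  0.290, -0.077],
                [ 0.086, -0.590,  0.501]])

def scielab_distance(img1_xyz, img2_xyz, v, kernels, xyz2lab):
    """img*_xyz: H x W x 3 images in XYZ; v = dpi * viewing distance (inches);
    kernels: per-channel lists of (weight, spread) pairs approximating the CSF;
    xyz2lab: the XYZ -> LAB conversion function."""
    def spatial_filter(img):
        opp = img @ M_O.T                       # step 1a: to opponent colors
        for j, chan in enumerate(kernels):      # step 1b: CSF filtering
            opp[..., j] = sum(w * gaussian_filter(opp[..., j], s * v)
                              for w, s in chan)
        return opp @ np.linalg.inv(M_O).T       # back to XYZ
    lab1 = xyz2lab(spatial_filter(img1_xyz))
    lab2 = xyz2lab(spatial_filter(img2_xyz))
    return np.linalg.norm(lab1 - lab2, axis=-1).mean()  # step 2: mean Delta E
```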
The spatial filtering method employed in S-CIELAB can also be utilized with different color coordinates and color difference metrics, such as those proposed in [28, 29]. Furthermore, another model, N-CIELAB [30], can be used to simulate the contrast sensitivity properties of the human visual system. This model was suggested for compression optimization for JPEG and JPEG2000, as well as for the classification of images according to the level of detail. Alternatively, the metric suggested in [31] for the assessment of quantized color images can be used as the image quality metric.
In this work, we compare the calibrations obtained via the optimization of the mean Euclidean metric in the color coordinates XYZ, LAB, and S-CIELAB.
4. Proposed algorithm for the compensation parameters calculation
We described the matrix version of the algorithm for the calibration parameter calculation earlier in [4]. This Section describes its generalized version.
The input of the algorithm is the primaries of the display pixels in the standard observer color space XYZ. These vectors can be determined, for example, by applying an input signal (1, 0, 0) in hardware RGB to all display pixels. Then, for each pixel, the coordinates of its $e_1$ primary in XYZ can be obtained from a picture of the screen taken with a colorimetrically calibrated camera of sufficient resolution. Similarly, we can obtain the coordinates of the primaries $e_2$ and $e_3$ using the (0, 1, 0) and (0, 0, 1) hardware RGB stimuli. The primaries of a display pixel thus represent the transition matrix $B$ from the individual pixel (hardware) space to the standard XYZ coordinates.
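Schematically, the per-pixel transition matrices can then be assembled from three such calibrated camera shots (the shots_xyz input is a hypothetical measurement array, not data from this paper):

```python
import numpy as np

def build_transition_matrices(shots_xyz):
    """shots_xyz: three H x W x 3 XYZ images of the screen under the hardware
    stimuli (1,0,0), (0,1,0), (0,0,1), respectively. Returns B of shape
    H x W x 3 x 3 whose columns are the measured primaries e1, e2, e3."""
    return np.stack(shots_xyz, axis=-1)
```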
The output of the algorithm is a set of $H \cdot W$ compensation matrices $C$ of size 3 × 3 or, depending on the type of calibration, a set of 3D compensation vectors $C$.
The algorithm is based on the minimization of the following functional:

$$L_{vis}(C) = \sum_{J} k_J \cdot d\big(O(M(J, C)),\, O(I)\big), \quad (6)$$

where the sum runs over the optimization images $J$ with weights $k_J$ (see below).
In our current study, the optimization was implemented using the Adam algorithm [32]. In further work the performance of other optimization methods, e.g. [33], should be explored for this problem.
The optimization is performed for uniform images $J$ (all pixels of which are identical) of four colors: red, green, blue, and white. To further increase the uniformity of achromatic areas, the white image was weighted with $k = 3$ (for the other images, $k = 1$). To make compensation of the brightness level possible, the maximum brightness of the gamut was reduced by 20 %. The idea is that the impact of defective pixels is greatest in uniform image areas, where those pixels differ significantly from the surrounding ones, while in textured areas the defects are likely to be masked; thus, by performing optimization on uniform images only, we improve the worst-case perceived uniformity.
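A condensed sketch of this optimization loop in TensorFlow style is shown below; simulate and distance stand in for eq. (11) and eqs. (7)-(9), respectively, and all names are ours:

```python
import tensorflow as tf

def optimize_compensation(B, images, weights, simulate, distance,
                          max_steps=90000):
    """Minimize functional (6) over the per-pixel compensation matrices C.
    images: list of (J, I_ideal) pairs; weights: the coefficient k per pair."""
    H, W = B.shape[0], B.shape[1]
    C = tf.Variable(tf.eye(3, batch_shape=[H, W]))  # identity initialization
    opt = tf.keras.optimizers.Adam()
    for _ in range(max_steps):
        with tf.GradientTape() as tape:
            loss = sum(k * distance(simulate(J, C, B), I)
                       for k, (J, I) in zip(weights, images))
        opt.apply_gradients([(tape.gradient(loss, C), C)])
        # the moving-average stopping criterion of Section 6 is omitted here
    return C
```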
In this paper we compare compensation algorithms where the metric d is calculated in various color coordinates: XYZ, LAB, S-CIELAB.
The functional (6) is the norm d of the difference between the images M(J, C) (the input signal J(x,y) corrected by the set of matrices C and displayed on the screen with defective pixels) and I (the image ideally displayed, i.e. on the display without defective pixels), where
$$d(J, I) = \frac{1}{N} \sum_{i=0}^{N-1} \| J_i - I_i \|_2, \quad (7)$$

where $N$ is the number of pixels in the compared images.

Prior to the metric calculation, both images are converted into specific color coordinates depending on the variant of the calibration method under study (XYZ, LAB or S-CIELAB) using the function $O(I)$:
$$O(I) = \begin{cases} I, & \text{for XYZ coordinates}, \\ \mathrm{LAB}(I), & \text{for LAB coordinates}, \\ \mathrm{SCIELAB}(I, v), & \text{for S-CIELAB coordinates}, \end{cases} \quad (8)$$
where $\mathrm{LAB}(I)$ is the transition function from XYZ to LAB color coordinates, and $\mathrm{SCIELAB}(I, v)$ is the transition function from XYZ to the S-CIELAB image representation for the viewing parameter $v$:

$$\mathrm{SCIELAB}(I, v) = \mathrm{LAB}\big(M_O^{-1} [M_O I * f(v)]\big), \quad (9)$$
where $M_O$ is the transition matrix from XYZ to opponent color coordinates [26], and $*$ is a channel-wise convolution with a two-dimensional kernel $f$, which for each channel has the following form:

$$f_j(v) = k_j \sum_i w_{i,j} \exp\left( \frac{-(x^2 + y^2)}{\sigma_{i,j}^2} \right), \quad (10)$$

where $j$ is the index of the opponent channel; the values of $\sigma_{i,j}$ and $w_{i,j}$ are given in [26] (with $\sigma_{i,j}$ scaled according to the viewing parameter); the scale factor $k_j$ is chosen so that $f_j$ sums to 1; the viewing parameter $v$ is the product of the display resolution (dpi) and the viewing distance in inches.
The corrected image displayed on the screen with defective elements is simulated as follows:
$$M(J, C) = B \cdot \mathrm{clip}(C \cdot J), \quad (11)$$
where the clipping operator ensures that the resulting pixel values are within the valid interval.
The input signal to the display $J(x, y)$ is specified in the linear color coordinates of an individual pixel (for each $(x, y)$, $J$ is a 3D column vector that corresponds to the input signal to this pixel).
The ideal display would render $J(x, y)$ as the image $I = E \cdot J$, where $E = (e_1, e_2, e_3)$ are the pixel primaries, which for the luminance distortion model (Section 1.1) coincide with the primaries of non-defective pixels. For the spectral distortion model (Section 1.2), the ideal primaries were taken as the vectors constructed from the maximum coordinates in XYZ, separately for each subpixel type.
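A NumPy sketch of the forward model (11) and of the ideal rendering (function and variable names are ours):

```python
import numpy as np

def simulate_display(J, C, B):
    """M(J, C) = B * clip(C * J), applied per pixel.
    J: H x W x 3 input image; C: H x W x 3 x 3 compensation matrices;
    B: H x W x 3 x 3 measured transition matrices."""
    corrected = np.clip(np.einsum('hwij,hwj->hwi', C, J), 0.0, 1.0)
    return np.einsum('hwij,hwj->hwi', B, corrected)

def ideal_display(J, E):
    """I = E * J with the shared ideal primaries E (a 3 x 3 matrix)."""
    return np.clip(J, 0.0, 1.0) @ E.T
```

For the channel-wise variant, C becomes an H x W x 3 array of correction vectors and the first product turns into an element-wise multiplication.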
In this work, the compensation parameters (matrices or vectors) are fixed for every monitor and considered independent from the displayed image. To achieve better performance, the compensation parameters should be recalculated for each image to be displayed, but this is time-consuming due to repeated multiplication and convolution. However, there are promising approaches [34] allowing for the construction of computational structures based on the addition operation (without multiplication), which are comparable with convolutional neural networks in terms of expressive power. The utilization of such approaches, as well as the employment of several precalculated compensation parameters, can further enhance the presented approach.
5. Test dataset generation
Using the models described in Section 1, we synthesized two types of display characteristics with different pixel defect models.
In this work, to clearly present the simulation (so that even within a small area of an image or display different defective pixels could be observed), the pixel defect probability p (for model 1.1) was set to 0.08, and q (for model 1.2) was set to 0.005. Both values are orders of magnitude greater than those officially specified by the manufacturer [35].
For the model described in Section 1.2, the values of the variables $a_{k,\min}$, $a_{k,\max}$, $m_{k,\min}$, $m_{k,\max}$, $\sigma_{k,1}$, $\sigma_{k,2}$, depending on the type of a particular subpixel, are shown in Tab. 1. The value of $b$ was sampled from the normal distribution $N(0.85, 0.05)$, but prior to the multiplication of the spectrum by $b$, its luminance was normalized to equal the luminance of the corresponding sRGB primary.
Tab. 1. Parameters of subpixel spectra generation

            σ_{k,1}   σ_{k,2}   [a_{k,min}, a_{k,max}]   [m_{k,min}, m_{k,max}], nm
R (k = 1)     11        100          [0.1, 0.2]               [623, 641]
G (k = 2)     11        200          [0.2, 0.4]               [525, 544]
B (k = 3)     12        300          [0.1, 0.2]               [468, 482]
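Putting the pieces together, one simulated display under the spectral model could be generated as follows (a sketch reusing the sample_primary function from Section 1.2 and the column assignment of Tab. 1 as reconstructed above; the luminance normalization against the sRGB primaries is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# sigma_1, sigma_2, [a_min, a_max], [m_min, m_max] per subpixel type (Tab. 1)
PARAMS = {'R': (11, 100, (0.1, 0.2), (623, 641)),
          'G': (11, 200, (0.2, 0.4), (525, 544)),
          'B': (12, 300, (0.1, 0.2), (468, 482))}

def generate_display(width, height, q=0.005):
    """Per-subpixel spectra; a subpixel is completely dead with probability q."""
    display = np.empty((height, width, 3), dtype=object)
    for y in range(height):
        for x in range(width):
            for k, (s1, s2, a_rng, m_rng) in enumerate(PARAMS.values()):
                b = 0.0 if rng.random() < q else rng.normal(0.85, 0.05)
                display[y, x, k] = sample_primary(a_rng, m_rng, s1, s2, b)
    return display
```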
Example spectra for different subpixels within the spectral model described in Section 1.2 are illustrated in fig. 3.
Fig. 4 shows a scaled up illustration of the images on the displays under the described distortion models.
Fig. 3. Example of subpixel spectra within the selected model (relative intensity in arbitrary units vs. wavelength, 400-800 nm)
Fig. 4. An 8 x 8 pixel image (a) displayed by an ideal screen, (b) under luminance distortion model, and (c) under spectral distortion model
6. Experiments
The proposed algorithms were implemented in Python. The optimization was implemented using the TensorFlow library and was run on an NVIDIA GPU (GeForce GTX 1080 Ti).
The following condition was set as the criterion of optimization completion: the average value of the error function over 300 iterations decreases by less than 10^-4. If the display characteristics do not allow this criterion to be met, optimization stops at a maximum of 90000 iterations. Identity matrices were used as the initial approximations of the compensation matrices.
The input image size was selected via the following formula based on the convolution kernel parameters in S-CIELAB and the three-sigma rule:
$$s = \mathrm{dpi} \cdot d \cdot m \cdot \tan(\pi / 180) \cdot 3 \cdot \sigma \cdot 2, \quad (12)$$

where dpi = 94, $d$ = 60 cm is the viewing distance, $m$ = 0.3937 is the conversion factor from centimeters to inches, and $\sigma$ = 0.494 is the maximum spread of the positive-weight convolution kernels from S-CIELAB. The image size calculated according to formula (12) is 115 pixels.
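As a sanity check, (12) can be evaluated directly with the values stated above:

```python
import math

dpi, d, m, sigma = 94, 60, 0.3937, 0.494
s = dpi * d * m * math.tan(math.pi / 180) * 3 * sigma * 2
print(round(s))  # 115 pixels, the input image size used in the experiments
```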
The calibrations for different metrics (XYZ, LAB, and S-CIELAB for several observation distances at 94 dpi: 60 cm, 40 cm, 20 cm) were compared. For better alignment with LAB coordinates, the range of image values in XYZ coordinates was limited to the interval of [0, 100] prior to the metric calculation.
6.1. Comparison of channel-wise and matrix compensations
Optimization testing was performed for 10 simulated display instances under the luminance model and for 10 under the spectral distortion model, i.e. the performance of the compensation method is evaluated as the average over different displays within the same class.
To compare the values of the metrics in different color coordinates, we will further consider the relative error function, i.e. each error function divided by the error function calculated over the uncalibrated images:

$$\nu = \frac{1}{n} \sum_{i=1}^{n} \frac{l_i}{s_i}, \quad (13)$$

where $\nu$ is the average relative error function for a certain type of calibration, $l_i$ is the metric value between the calibrated and ideal images, and $s_i$ is the metric value between the non-calibrated and ideal images. The average was calculated over 10 different simulated displays (n = 10).
Tab. 2 compares channel-wise (vector) and matrix calibrations within the luminance distortion model, and tab. 3 compares them within the spectral one.
Tab. 2 and 3 show that the accuracy of matrix compensation is superior to that of channel-wise (vector) calibration, even if the display subpixels vary only in luminance. Thus, the experimental results illustrate the validity of the matrix compensation algorithm even in the case of luminance-only distortions.
Tab. 2. Comparison of channel-wise and matrix compensations: table cells represent the average relative error function (for 10 simulated displays within the luminance distortion model)

                 XYZ             LAB             S-CIELAB 20 cm   S-CIELAB 40 cm   S-CIELAB 60 cm
Channel-wise     0.921 ± 0.002   0.941 ± 0.003   0.904 ± 0.003    0.765 ± 0.008    0.804 ± 0.007
Matrix           0.807 ± 0.003   0.869 ± 0.003   0.816 ± 0.005    0.720 ± 0.008    0.754 ± 0.008
Difference       0.114 ± 0.001   0.072 ± 0.001   0.088 ± 0.003    0.045 ± 0.001    0.051 ± 0.001
Tab. 3. Comparison of channel-wise and matrix compensations: table cells represent the average relative error function (for 10 simulated displays within the spectral distortion model)

                 XYZ               LAB               S-CIELAB 20 cm    S-CIELAB 40 cm    S-CIELAB 60 cm
Channel-wise     0.9791 ± 0.0018   0.9999 ± 0.0001   0.9926 ± 0.0002   0.9844 ± 0.0003   0.9830 ± 0.0003
Matrix           0.9331 ± 0.0064   0.9985 ± 0.0002   0.9876 ± 0.0003   0.9824 ± 0.0005   0.9824 ± 0.0005
Difference       0.0460 ± 0.0047   0.0014 ± 0.0002   0.0051 ± 0.0001   0.0020 ± 0.0003   0.0006 ± 0.0003
Fig. 5a-f illustrates, at an enlarged scale, the effect of the generated compensation matrices on a uniform gray input image. The compensation effect is difficult to properly illustrate in a journal figure, since it depends both on proper color reproduction and on viewing distance. To show the qualitative structure of the resulting compensation, we show the enlarged pixel structure of the image in fig. 5. To simulate the effect of the proper viewing distance, in fig. 5g-l we show the result of blurring these images with a Gaussian kernel. It can be seen that the image calibrated in S-CIELAB (fig. 5d-f) becomes uniformly gray after blurring (fig. 5j-l), whereas the uncalibrated image and the image calibrated in LAB with the same blurring (fig. 5b and 5c) are not uniformly gray (fig. 5h and 5i).

It can also be seen that calibration in LAB (fig. 5c and 5i) introduces color distortions of some pixels; this is probably caused by the fact that LAB is not designed for the estimation of large color differences. At the same time, the spatial convolutions in S-CIELAB partially eliminate this drawback.
Tab. 4 shows the relative error functions for the images shown in fig. 5.
Similar images and metrics to fig. 5 and tab. 4, but for a single defective pixel, are shown in fig. 6 and tab. 5.
6.2. Comparison with other works
The idea of using metrics approximating human perception of a visual difference is not new. For instance, in [12] an optimization method with an error function based on the CSF was used. The authors of that article suggest optimizing the image itself, which requires recalculation of the compensation coefficients for each newly displayed image. In our algorithm, only the pre-computed calibration matrices need to be applied to every displayed image.
Fig. 5. Illustration of changes in the structure of the observed defects for a display within the luminance distortion model. (a)-(f) - input, uncalibrated, and compensated image structure: (a) - input image, (b) - display defects, (c)-(f) - display defects after compensation via matrices (LAB, S-CIELAB 20 cm, S-CIELAB 40 cm, S-CIELAB 60 cm); (g)-(l) - the same images blurred with a Gaussian blur with σ = 2
Tab. 4. Relative error function for images from fig. 5: by columns - different images, by rows - different metrics

Image / metric     LAB (fig. 5c)   S-CIELAB 20 cm (fig. 5d)   S-CIELAB 40 cm (fig. 5e)   S-CIELAB 60 cm (fig. 5f)
LAB                0.835           1.982                       2.436                       2.082
S-CIELAB 20 cm     1.092           0.742                       1.051                       1.076
S-CIELAB 40 cm     1.069           0.794                       0.666                       0.721
S-CIELAB 60 cm     1.080           0.815                       0.704                       0.700
The computational efficiency of estimating these matrices in [12] is similar to that of our paper; however, in their method such optimization is applied for every frame independently. Therefore, with the same optimization procedure, our algorithm cannot be better in spatial uniformity compensation (though per-frame computation could introduce temporal non-uniformity). If we perform one iteration of the algorithm proposed in [12] (which is roughly equal to the on-device application of our algorithm in terms of computational efficiency), the quality metric of the image from fig. 5e is equal to 3.193 (the image quality has improved by 2 % relative to the uncalibrated image, whose metric is equal to 3.273), while the quality of the image calculated by our algorithm in 2950 iterations is equal to 2.181 (a 33.4 % improvement).
Fig. 6. Single-pixel illustration of changes in the structure of the observed defects for a display within the luminance distortion model. (a)-(f) - input, uncalibrated, and compensated image structure: (a) - input image, (b) - display defects, (c)-(f) - display defects after compensation via matrices (LAB, S-CIELAB 20 cm, S-CIELAB 40 cm, S-CIELAB 60 cm); (g)-(l) - the same images blurred with a Gaussian blur with σ = 2
Tab. 5. Relative error function for images from fig. 6: by columns - different images, by rows - different metrics

Image / metric     LAB (fig. 6c)   S-CIELAB 20 cm (fig. 6d)   S-CIELAB 40 cm (fig. 6e)   S-CIELAB 60 cm (fig. 6f)
LAB                0.979           1.102                       1.369                       1.414
S-CIELAB 20 cm     0.994           0.967                       1.017                       1.068
S-CIELAB 40 cm     0.984           0.968                       0.943                       0.960
S-CIELAB 60 cm     0.982           0.968                       0.946                       0.946
In [16], the authors propose an algorithm that calculates a single calibration for any displayed images. However, in comparison with the algorithm proposed in this paper, it reduces the maximum brightness of the display (and hence the global brightness contrast) more strongly. For the image from fig. 5e, the algorithm from [16] decreases the maximum brightness of all pixels by 42 % relative to the input image, while our algorithm decreases the maximum brightness by 11 %.
Conclusion
In this work, we consider the defective pixels compensation problem for a display with variations in pixel characteristics (including defects) to minimize the perceived non-uniformity of an image. The S-CIELAB image difference metric was used to evaluate the perceived non-uniformity. This method takes into account both the resolution of the human visual system and the reduced sensitivity to the absolute values of the achromatic component of the image. We propose a reasonable compromise between uniformity of the displayed image, its maximum brightness, and computational efficiency of the algorithm.
We studied the dependence of the compensation accuracy on the compensation model (channel-wise or matrix) and on the model of pixel characteristics variation. Even when there is no variation in the emission spectra of subpixels, we found that the matrix compensation model is superior to the channel-wise (vector) model. Additionally, the dependence of the compensation accuracy on the observation distance assumed during compensation was studied.
To further develop the solution to the considered problem, more experimental data should be acquired in order to confirm and clarify the conclusions of this work, since the accuracy of the S-CIELAB spatial color perception model is limited and has not been studied thoroughly enough to be applied to these tasks.
References
[1] Arnold AD, Cok RS. OLED display with aging compensation. US Patent 6995519 of February 7, 2006.
[2] Harris S. Color and luminance uniformity correction for LED video screens. Source: <www.signindustry.com/led/articles/2007-10-15-SH-PulseWidthModulationPWMCorrectionOfLEDDisplays.php3>.
[3] Uttwani PK, Villari BC, Unni KN, Singh R, Awasthi A. Detection of physical defects in full color passive-matrix OLED display by image driving techniques. Journal of Display Technology 2012; 8(3): 154-161. DOI: 10.1109/jdt.2011.2168805.
[4] Basova OA, Grigoryev AS, Savchik AV, Sidorchuk DS, Ni-kolaev DP. On optimal visualization of images on photoemission displays with significant dispersion of efficiency of individual elements [in Russian]. Sensornye sistemy 2020; 34(1): 25-31. DOI: 10.31857/S0235009220010047.
[5] Sharma G. Color fundamentals for digital imaging. In Book: Sharma G, ed. Digital color imaging handbook. Ch 1. Boca Raton: CRC Press; 2003: 1-114.
[6] Viggiano JAS. Comparison of the accuracy of different white-balancing options as quantified by their color constancy. Proc SPIE 2004; 5301: 323-333. DOI: 10.1117/12.524922.
[7] Man K, Ashdown I. Accurate colorimetric feedback for RGB LED clusters. Proc SPIE 2006; 6337: 633702. DOI: 10.1117/12.683239.
[8] Kimpe T, Xthona A, Matthijs P. P-11: Spatial noise and non-uniformities in medical LCD displays: Solution and performance results. 2004. Source: <https://www.researchgate.net/profile/Tom-Kimpe/publication/228879086_P-11_Spatial_Noise_and_Non-Uniformities_in_Medical_LCD_Displays_Solution_and_Performance_Results/links/0deec5315d08a1e426000000/P-11-Spatial-Noise-and-Non-Uniformities-in-Medical-LCD-Displays-Solution-and-Performance-Results.pdf>.
[9] Kimpe T, Coulier S, Van Hoey G. Human vision-based algorithm to hide defective pixels in LCDs. Proc SPIE 2006; 6057: 60570N. DOI: 10.1117/12.649240.
[10] Kimpe T. Display assemblies and computer programs and methods for defect compensation. US Patent 8164598 of April 24, 2012.
[11] Verstraete G, Kimpe T. Optical correction for high uniformity panel lights. US Patent 9070316 of June 30, 2015.
[12] Messing DS, Kerofsky LJ. Using optimal rendering to visually mask defective subpixels. Proc SPIE 2006; 6057: 605700. DOI: 10.1117/12.644321.
[13] Stellbrink J. Comparison of vision-based algorithms for hiding defective sub-pixels. Proc SPIE 2007; 6494: 64940Q. DOI: 10.1117/12.704336.
[14] McFadden SB, Ward PAS. Improving image quality of tiled displays. In Book: Kamel M, Campilho A, eds. Image Analysis and Recognition. ICIAR 2015. Cham: Springer; 2015: 22-29. DOI: 10.1007/978-3-319-20801-5_3.
[15] Jepsen ML, Loomis NC, Bastani B, Vieri C, Braley C, Abercrombie SCB. Masking non-functioning pixels in a display. US Patent 10354577 B1 of June 16, 2019.
[16] Stone MC. Color and brightness appearance issues in tiled displays. IEEE Comput Graph Appl 2001; 21(5): 58-66. DOI: 10.1109/38.946632.
[17] Bern M, Eppstein D. Optimized color gamuts for tiled displays. Proceedings of the Nineteenth Annual Symposium on Computational Geometry 2003: 274-281. DOI: 10.1145/777792.777834.
[18] Chaji G, Dionne JM, Azizi Y, Jaffari J, Hormati A, Liu T, Alexander S. System and methods for aging compensation in AMOLED displays. US Patent 9786209 B2 of October 10, 2017.
[19] Chaji G. Compensation for color variations in emissive devices. US Patent 10181282 B2 of January 15, 2019.
[20] Nathan A, Chaji G, Alexander S, Servati P, Huang RI, Church C. Method and system for programming, calibrating and/or compensating, and driving an LED display. US Patent 10699624 B2 of June 30, 2020.
[21] Mao XY, Wang RG, Cheng HB, Miao J, Chen Y, Cao H. Calibration of abnormal brightness area on the LED Display. ITM Web of Conferences 2017; 11: 02001. DOI: 10.1051/itmconf/20171102001.
[22] Kim K, Lim T, Kim C, Park S, Park C, Keum C. High-precision color uniformity based on 4D transformation for micro-LED. Proc SPIE 2020; 11302: 113021U. DOI: 10.1117/12.2542728.
[23] Wuerger SM, Watson AB, Ahumada AJ Jr. Towards a spatio-chromatic standard observer for detection. Proc SPIE 2002; 4662: 159-172. DOI: 10.1117/12.469512.
[24] Smith T, Guild J. The CIE colorimetric standards and their use. Trans Opt Soc 1931; 33(3): 73. DOI: 10.1088/1475-4878/33/3/301.
[25] CIE recommendations on uniform color spaces, color-difference equations, and metric color terms. Color Res Appl 1977; 2(1): 5-6. DOI: 10.1002/j.1520-6378.1977.tb00102.x.
[26] Zhang X, Wandell BA. A spatial extension of CIELAB for digital color image reproduction. SID International Symposium Digest of Technical Papers 1996; 27: 731-734. DOI: 10.1889/1.1985127.
[27] Bozhkova VP, Basova OA, Nikolaev DP. Mathematical models of spatial color perception [In Russian]. Information Processes 2019; 19(2): 187-199.
[28] Luo MR, Cui G, Rigg B. The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Res Appl 2001; 26(5): 340-350. DOI: 10.1002/col.1049.
[29] Konovalenko IA, Smagina AA, Nikolaev DP, Nikolaev PP. Prolab: perceptually uniform projective colour coordinate system [In Russian]. Sensory Systems 2020; 34(4): 307-328. DOI: 10.31857/S0235009220040034.
[30] Sai SV. Metric of fine structures distortions of compressed images. Computer Optics 2018; 42(5): 829-837. DOI: 10.18287/2412-6179-2018-42-5-829-837.
[31] Frackiewicz M, Palus H. New image quality metric used for the assessment of color quantization algorithms. Proc SPIE 2017; 10341: 103411G. DOI: 10.1117/12.2268531.
[32] Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint 2014. Source: <https://arxiv.org/abs/1412.6980>.
[33] Darkhovsky BS, Popkov AY, Popkov YS. Method of batch Monte Carlo iterations for solving global optimization problems [In Russian]. Journal of Information Technologies and Computing Systems 2014; 3: 39-52.
[34] Limonova EE, Matveev DM, Nikolaev DP, Arlazarov VV. Bipolar morphological neural networks: convolution without multiplication. Proc SPIE 2020; 11433: 114333J. DOI: 10.1117/12.2559299.
[35] Acceptable number of defective pixels of LCD, OLED modules of TVs and displays [In Russian]. Source: <www.lg.com/ru/support/product-help/CT20206007-1347276421471>.
Authors' information
Olga Andreevna Basova, (b. 1996) graduated from Moscow Institute of Physics and Technology (MIPT) in 2019, majoring in Applied Mathematics and Informatics. Currently she works as a researcher at the Institute for Information Transmission Problems (IITP RAS). Research interests are image processing and enhancement methods, color appearance models. E-mail: [email protected] .
Sergey Alexandrovich Gladilin, (b. 1980), Ph. D. in Physics and Mathematics. He graduated from Lomonosov Moscow State University (MSU) in 2002, is a researcher at the Institute for Information Transmission Problems (IITP RAS). Research interests are visual systems, image processing and pattern recognition. E-mail: [email protected] .
Anton Sergeevich Grigoryev, (b. 1989) graduated from Moscow Institute of Physics and Technology (MIPT) in 2012, majoring in Applied Mathematics and Informatics. Currently he works as a researcher at the Institute for Information Transmission Problems (IITP RAS) and also is the Director of Technology of an AI software development company Visillect Service. Research interests are image processing and enhancement methods, autonomous robotics and software architecture. E-mail: [email protected] .
Dmitry Petrovich Nikolaev, (b. 1978) Ph. D. in Physics and Mathematics. He graduated from Lomonosov Moscow State University (MSU) in 2000, is a head of the vision systems laboratory at the Institute for Information Transmission Problems (IITP RAS) and the Director of Technology of Smart Engines Service LLC. Research interests are machine vision, algorithms for fast image processing, pattern recognition. E-mail: [email protected] .
Code of State Categories Scientific and Technical Information (in Russian - GRNTI): 28.23.15. Received December 24, 2020. The final version - September 22, 2021.