
X-ray tomography: the way from layer-by-layer radiography to computed tomography

V.L. Arlazarov 1,2, D.P. Nikolaev 1,3, V.V. Arlazarov 1,2, M.V. Chukalina 1,3
1 Smart Engines Service LLC, 121205, Moscow, Russia, Nobelya str. 7,
2 FRC "Computer Science and Control" RAS Institute for Systems Analysis, 117312, Moscow, Russia, pr. 60-letiya Oktyabrya 9,
3 Institute for Information Transmission Problems (Kharkevich Institute) RAS, 127051, Moscow, Russia, Bolshoy Karetny per. 19, build. 1

Abstract

The methods of X-ray computed tomography allow us to study the internal morphological structure of objects in a non-destructive way. The evolution of these methods is similar in many respects to the evolution of photography, where complex optics were replaced by mobile phone cameras, and the computers built into the phone took over the functions of high-quality image generation. X-ray tomography originated as a method of hardware non-invasive imaging of a certain internal cross-section of the human body. Today, thanks to advanced reconstruction algorithms, the method makes it possible to reconstruct a digital 3D image of an object with a submicron resolution. In this article, we analyze the tasks that the software part of the tomographic complex has to solve in addition to managing the process of data collection. The issues that are still considered open are also discussed. The relationship between the spatial resolution of the method, the sensitivity and the radiation load is reviewed. An innovative approach to the organization of tomographic imaging, called "reconstruction with monitoring", is described. This approach makes it possible to reduce the radiation load on the object by at least 2 - 3 times. In this work, we show that as X-ray computed tomography moves towards increasing the spatial resolution and reducing the radiation load, the software part of the method becomes increasingly important.

Keywords: computer tomography, data size, radiation load, monitored reconstruction.

Citation: Arlazarov VL, Nikolaev DP, Arlazarov VV, Chukalina MV. X-ray tomography: the way from layer-by-layer radiography to computed tomography. Computer Optics 2021; 45(6): 897-906. DOI: 10.18287/2412-6179-CO-898.

Acknowledgements: This work was supported by the Russian Foundation for Basic Research (projects No. 18-29-26033, 18-29-26020).

Introduction

It has been a little over a hundred years since we started employing the X-ray tomography method, which makes it possible to look inside an object without breaking it down. What is more, the performance capabilities of this method keep increasing as computational capabilities develop [1]. While it all started with a hardware implementation that visualized one of the cross sections of the object directly on film, today we are already able to reconstruct a digital 3D image of the whole object with submicron resolution from a set of X-ray images (tomographic projections) collected at different angles.

As the technical side of the method advances, the management of the collection process is getting more complex, and the methods for reconstructing a digital image of an object are evolving, lifting certain limitations on the collection of projections in some cases and adding new ones in others. The programming part of hardware-software tomographic complexes plays an increasingly important role. Apart from the inverse Radon transform itself, today the software is responsible for correcting the miscalibration of the system nodes and for addressing the influence of the probing spectrum composition as well. We would like to note here that the classic tomography problem implies homogeneous probing. Nevertheless, the software functions do not end here. It is frequently required to analyze a reconstructed image before it is presented to its end user in a convenient form. The industrial and medical tomographic complexes available on the market output a set of cross sections of a 3D object, but in some cases equiscalar or only high-resolution local segments are required. The software developers have to solve recognition problems on a reconstructed image, conduct segmentation, binarization, etc.

In this work, we attempt to classify the objectives facing engineers who develop software for tomographic data collected in various geometric schemes, under low or high radiation load, and in experiments with high spatial resolution. The issues of speeding up the reconstruction process are mentioned in passing, both from the point of view of algorithmic acceleration and of organizing the computations on platforms that are optimized for real-valued arithmetic and possess a high degree of parallelism.

We would like to demonstrate that we are on the verge of creating a new generation of tomographs. They will make it possible to lower the radiation load and increase the reconstruction accuracy without slowing down the analysis. This is achieved through software management of the scanning process, with a partial (and sometimes complete) reconstruction taking place during scanning. The first results are already in, yet more time is needed to design fully-fledged tomographic scanners.

The article is organized as follows. After a brief historical background on the development of the computational X-ray tomography method, we attempt to compare the development paths of two methods: photography as an optical imaging method and X-ray tomography. The question of the relationship between high spatial resolution in tomography and the requirements for computational resources is then reviewed. Finally, we comment on the approach to tomographic surveying proposed by the authors and known as "reconstruction with monitoring". This approach makes it possible to decrease the radiation load on an object by replacing the fixed protocol of projection surveying with a protocol with monitored reconstruction [2].

1. The development of the X-ray tomography method

The layer-by-layer X-ray imaging method, or the method for radiological imaging of individual layers without shading, became a subject of discussion approximately 20 years after X-rays were discovered by Wilhelm Conrad Röntgen in 1895; the rays later came to bear the discoverer's name. The interpretation of X-ray images (radiographs) generated when the rays pass through 3D objects poses certain difficulties, as the images are formed by the entire internal structure through which the rays travel. The achievement of layer-by-layer imaging of the cross sections of an object without breaking it down can be called the moment when the X-ray tomography method was born. According to [3], in 1914 K. Mayer, a doctor from Poznan, gave the presentation "A heart radiograph without shadowing" at the congress in Lviv, and in 1917 the French radiologist Bocage independently solved the problem of layer-by-layer isolated X-ray radiography of an object and patented his hardware solution in 1921. In the proposed measuring scheme, an X-ray tube and a cassette with a film moved around a stationary object in opposite directions during the exposure. The inventors varied the trajectories of the source-receiver pair in an attempt to improve the contrast of the generated image, and later on they made the object rotate while the source-receiver pair remained fixed. Around the same time, in 1917, Radon published his famous work on the invertibility of a linear integral transform which converts a function on a given plane into the set of its line integrals [4]; the computed tomography method was to be invented in less than half a century. Tomography at that stage was fully hardware-based. The word "tomographie" appeared in the publications by Grossman [5 - 7] and Chaoul [8] in 1935. The first model of a domestically produced X-ray tomograph was built in the same year. It was designed by V.I. Pheoktistov, and it was followed by the projects of S.P. Yanshek and I.S. Ter-Ogonian and a number of further developments by various Soviet designers [3].

In 1958, in the physics series of the journal "Izvestiya" of the higher educational institutions, a work by B.I. Korenblyum and his co-authors was published [9], where they described a new method for obtaining X-ray images of a cross section of an object, based on processing the data of an X-ray cipher telegram recorded at different angles. They provided the integral equation and presented its solution. The computed tomography method was no longer fully hardware-based; it became a hardware-software method. It took 5 minutes to generate an image containing 10^4 elements. This work was published in Russian and was translated into English only in this century [10].

Cormack, who had encountered the problem during his years of work at a hospital, solved the computer-assisted tomography (CAT) problem independently of other researchers about five years later, in the mid-1960s [11 - 13]. The term "computed tomography" was coined. Cormack performed the first measurements on test objects while employed at the laboratory of a private institution, Tufts University. His research of the 1960s did not attract much interest until 1971, when the EMI scanner was introduced onto the market [14]. The device performing computed tomography was developed by Godfrey Hounsfield at the firm "Electric and Musical Industries" (EMI). The first copy of the scanner was handed over for use to a London hospital. The scanner software was designed in the technological laboratory of the same company. New equipment is always difficult to manufacture and to use, and the hardware-software complexes for computed tomography were no exception. In Western Europe the scanners slowly made their way into hospitals. They were purchased in Britain, France, Germany, and Italy, but by the end of 1977 there were only 200 devices in all of Europe [15], whereas 300 machines were in use in Japan and 1000 of them had found their place in America. Although they were predominantly used by radiologists, cardiological and neurological departments started taking advantage of them as well. Since the equipment is not easy to operate and the produced results are complex to interpret, close cooperation between the end users and the developers of the machines was required at first. And while in England the development was carried out in the departments of a private company, in America the advancement of the instrument and algorithmic base of tomography took place in academic laboratories.

In the next 6 years, there were huge changes made to the hardware of the scanners, and there were already 4 generations of machines with different organizations of the scanning process. The innovations concerned the scan speed and the method of recording the probing X-ray radiation. It is essential to remember that an increased scan speed guarantees a decrease in motion artefacts.

The first generation of scanners (1972) is the EMI scanner, a prototype of which had been worked on since 1967. The scan time was 9 hours, and the reconstruction process took 2 hours. By the time the machine was ready to be placed on the market, a head scan took 5 minutes, and shortly thereafter it was decreased to 1 minute. Only one beam and one detector were used in the scanner. The "source-detector" system first moved incrementally, and the measurement of the radiation attenuation was carried out in increments of 1 mm. Then the "source-detector" system rotated by a small angle, about half a degree, and the translation phase resumed. This continued until a full 180-degree turn was reached. One scan of a cross section of an object up to 240 mm (a human head) collected about 360 × 240 measurements of X-ray attenuation in multiple directions, which explains both the long scan time and the computing time.

In 1975, the second generation of scanners came out, already featuring the use of several X-ray beams. The scan time of the EMI CT 1010 complex, which scanned two layers simultaneously with 8 detectors for each of them, was 1 minute. The well-known German manufacturer of medical equipment Siemens joined the competition with its own second-generation machine. Tomographs started to be manufactured in other countries as well, among others by the Hungarian company "Medicure". The work on software improvements intensified. A full-body scan was produced in America. EMI presented a full-body scanning machine as well, but the long scan time led to artifacts caused by internal organ movement during breathing, which made second-generation computed tomographs a dead-end option for medical analysis, as opposed to industrial and scientific systems.

In the same year, 1975, the implementation of the fan-beam scheme made it possible to decrease the scan time to 5 seconds and even less. This essentially marked the arrival of the third generation of computed tomography scanners, first manufactured by Axronics. Academic laboratories got involved in the work on advancing the software implementing the fan-beam algorithm of tomographic image reconstruction. The third-generation machines use a large number of detectors (from a few hundred to a thousand) arranged in an arc centered at the focal spot of the X-ray tube. The rotational movement of the "tube-detector" system is carried out continuously, and the cycle of data collection from the detectors is strobed, for example, at every half a degree. The rotation covers a full 360 degrees. The amount of collected data and its rate of arrival grew 4 times and more in comparison with the second-generation systems.

The fourth generation of scanners was born as an alternative designed to circumvent patent restrictions. It had about a thousand detectors arranged in a fixed full circle around the object, and the tube rotated inside the circle of detectors, making a full 360-degree turn. While in the third-generation machines the vanishing point of the fan of beams was the focal spot (a small area on the anode of the X-ray tube that serves as the radiation source), in the fourth-generation scanners each detector formed a vanishing point.

In Russia, work on applying the mathematical theory of 3D reconstruction from projections to studying the structure of macromolecules using 3D electron microscopy was conducted by B.K. Vanshtein [16, 17] at the Institute of Crystallography. In the late 1970s, E.Yu. Vasilieva [18] from the All-Union Research Institute of Radiation Technology (ARI RT, currently JSC "NIIFTA") built a pilot plant, and in the early 1980s the prototype of the first Russian-made X-ray computed tomograph SRT-1000 was made at the All-Union Research Institute of Cable Manufacturing under the guidance of Professor I.B. Rubashov, and the inventor's certificate for a reconstruction unit was obtained [19]. Fig. 1 shows "an actual tomography image of a human head generated by the SRT-1000 tomograph" [20].


Fig. 1. An actual tomography image of a human head generated by the SRT-1000 tomograph (inverted)

Any tomographic framework has its advantages and disadvantages. The contrast in a recorded image can be transmission-based, phase-based or hybrid. Various technical solutions are required to organize the collection of angular projections, and the computational complexity of the applied reconstruction algorithms differs accordingly. Still, an X-ray source, an optical system that creates a probe, a recorder (in the earliest days of the method, a cassette with a film) and a computer remain the fundamental hardware components of a tomograph. And the computer has to deal with an ever bigger workload. The third-generation computers and the first computed tomographs are the same age. The optimization of the design of sources, detectors and tomographic setups is aimed at decreasing the collection time, lowering the radiation load and improving the spatial resolution. The latter has reached nanometer scales [21 - 23]. Fig. 2 shows an area of a lithium-ion battery electrode [24]. The voxel size of the digital image is about 50 nanometers. The measurements were performed using a synchrotron radiation source in Grenoble, France.

Fig. 2. The image of an electrode section [24]. The voxel size is about 50 nanometers

We will discuss the tasks that fall to the computer during the optimization of the tomograph hardware in more detail below. In the following section we trace the development path of photography as a possible extrapolation of the development of tomography.

2. Evolution of photography as an example of the imaging method development

Photography, as a method of creating and fixing an image on a photosensitive material, was brought to the world's attention about 100 years before the invention of computed tomography. However, back then it was called heliography, or sun writing. In 1826, the French inventor J. Niepce produced an image of the view from his window by projecting it with a camera obscura onto a tin plate covered with a thin layer of Syrian asphalt, and then proposed a way of copying it. The exposure took 8 hours. By 1837, the exposure time had decreased to 30 minutes thanks to the refinement of the process performed by L. Daguerre, who worked together with Niepce. W. Talbot then offered an innovative way of preparing photosensitive paper; by the middle of the 19th century there were already glass negatives, and in 1861 J. Maxwell was able to produce the first color image by combining three images of the same object taken with different filters. In 1911, Oskar Barnack, who played a big role in the development of the method, came to work for the German company "Leitz", and in 1925 the first compact camera, the Leica, hit the market; the name was derived from merging the words Leitz and Camera. About 50 years later, in 1972, the whole world witnessed the first image of the planet Earth from space.

If we view photography as an optical imaging method, one may note that the improvement of camera hardware made steady progress for a long period, approximately 175 years. In recent years, the cameras of mobile devices have replaced the bulky cameras of the past, and as a result a number of functions of high-quality image generation have been passed on to the computer.

Photography was initially based on physical and other analytical models of image generation, just like tomography today. The main progress in image quality was attributable to advances in hardware (particularly, to increased resolution and sensor sensitivity). For professional cameras this trend continues; however, lately most of the studies and the most substantial technological progress have concerned mobile photography, where the limitations on the size and cost of the sensor are especially significant. The analogy with the tomograph situation is worth noting here.

Recently, in terms of various parameters of image quality (including equivalent resolution, dynamic range and sensitivity), mobile cameras have begun to compete directly with full-sized cameras having incomparably larger optics and sensors. The reason is the wide use of computational photography algorithms [25], including those based on the latest developments in the field of artificial intelligence (more precisely, deep learning) and algorithms of image reconstruction from a set of frames (which requires compensating for the motion of the camera and of scene objects). Apart from that, photographic techniques that by their nature require large geometric sizes of the optics are now being successfully modeled, namely selective background blurring ("bokeh"). While in traditional cameras this effect arises from the physical process of image formation through a large-diameter lens, in mobile photography a simulation of the effect is used, based on a depth map obtained either from several small-sized cameras or even from a single image using semantic segmentation (if, for instance, we know there is a person in the image foreground). In addition, diffractive optics are used to further reduce the size of optical systems. Their use leads to the appearance of artifacts (fig. 3, left).


Fig. 3. Distorted (left) and reconstructed (right) fragment of the resolution test target [26]

However, these artifacts are successfully suppressed using neural network methods (fig. 3, right) [26]. Without a doubt, photography, with its mass adoption and longer history, is at a later stage of technological development than tomography, but it is already clear today that if we build not on the instrument base but on reconstruction algorithms, where a huge amount of unrealized laboratory experience has been accumulated and keeps expanding, then we can already speak about new-generation hardware-software complexes.

3. Spatial resolution in tomography

Let us start with photography. In photography, resolution is a measure defining the number of image pixels per unit area. To describe the resolution capability of photographic printing tools, a value measured in dots per inch (DPI) is used.

In tomography the situation is different. In medical diagnosing or expert assessment, the resolution is determined by the minimal size of a local area that can be detected in the reconstructed 3D digital image. If the area occupies 1 voxel, the resolution matches the voxel size. At the same time, the phrase "doctors have managed to visualize a tumor measuring 2 mm" sounds familiar and clear. For a uniform voxel grid, the linear voxel size of the reconstructed image determines the resolution of the tomograph. It depends on the number of cells of the position-sensitive detector, the scanning scheme and the used reconstruction algorithm, and is expressed in length units. Increasing the number of detector cells without increasing the detector size leads to a proportional growth of the amount of processed data. According to the Kotelnikov-Shannon theorem, to reconstruct a volume whose cross section is equal to the area of the detector, the number of projection angles should be of the order of the number of pixels in a detector row. Then, using the convolution and back projection algorithm, a volume whose voxel number equals the square of the number of pixels in a detector row multiplied by the number of rows can be reconstructed. Imagine a researcher who wants to peek into the nanoworld. A simple assessment: 1 cubic millimeter contains a quintillion (10^18) voxels of 1 nanometer size. If the optical density is encoded by a single-precision floating-point number, then 4 exabytes of memory will be required just to store the reconstruction results. That is, a hardware-based increase of the spatial resolution leads to a gigantic increase in the volume of processed data.
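The arithmetic behind this estimate is easy to verify; the short sketch below (plain Python, with the values taken from the example above) reproduces it.

```python
# Back-of-the-envelope estimate of the memory needed to store a reconstructed
# volume of 1 cubic millimeter sampled with 1-nanometer voxels.

voxel_size_m = 1e-9        # 1 nm voxel edge
volume_edge_m = 1e-3       # 1 mm cube edge
bytes_per_voxel = 4        # single-precision floating-point number

voxels_per_edge = volume_edge_m / voxel_size_m      # 10**6 voxels per edge
total_voxels = voxels_per_edge ** 3                 # 10**18, a quintillion
total_bytes = total_voxels * bytes_per_voxel        # 4 * 10**18 bytes

print(f"voxels: {total_voxels:.0e}")                # 1e+18
print(f"memory: {total_bytes / 1e18:.0f} exabytes") # 4 exabytes
```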

The task of manipulating such large amounts of data is being attacked from two sides simultaneously.

Technologies for organizing such computations are being developed [27], and new approaches to reconstruction that can work with large amounts of data are suggested. Work is actively carried out both abroad [28, 29] and in Russia [30]. In parallel, there is research on applying methods and approaches existing in the theory of image processing and analysis to achieve super-resolution [31, 32] without increasing the amount of recorded data. Software methods to increase resolution may differ in the way the missing information is filled in [33, 34] when forming the reconstructed image; for example, a priori information about objects and noise can be used.

In conclusion of this section, we would like to highlight that the advances achieved in hardware and algorithmic solutions already make it possible to compare cross sections of digital images with histology results (fig. 4) [35]. The hardware implementation is still available only at synchrotron stations. The intensity of laboratory sources and the sensitivity of the recording equipment are still insufficient for the signal-to-noise ratio of the recorded projections to allow high-quality reconstruction of digital images.

Fig. 4. Tomographic image of a section of the spinal cord of a mouse (a); enlarged areas of tomographic images (b, d); histology images (c, f) [35]

4. An algebraic reconstruction approach that works under poor signal-to-noise conditions

With a decrease in the linear size of the cells of the position-sensitive detecting equipment, it is necessary to increase the recording time of the radiograph (tomographic projection) in order to maintain the signal-to-noise ratio (SNR) required for the correct operation of fast integral reconstruction methods [36 - 38]. In this case, the radiation load on the sample as a whole increases. The SNR requirement is relaxed when an algebraic method with regularization is used for reconstruction. An algebraic approach to solving the problem of tomographic reconstruction was described in the mid-1970s in the works of Gordon [39, 40], Vanshtein [17] and Hounsfield [41]. Regularization, as a technique for solving ill-posed problems [42] developed in the 1960s, quickly found its application in implementations of the algebraic approach to reconstruction [20]. Today, when the achieved nanometer resolution [43] has worsened the SNR, regularization methods are in demand again [44, 45]. The use, in computational experiments, of mathematical models describing the relationship between the magnitude of the recorded signal, the spatial distribution of the attenuation coefficient and the description of the optical path has made it possible to specify the form of the regularizing term and to obtain a stable solution to the reconstruction problem in a number of cases.

It should be noted that although the algebraic approach to reconstruction relaxes the requirements on the SNR value, it greatly increases the memory requirements of the computing hardware in cases where the projections cannot be processed sequentially, layer by layer.
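To illustrate the idea, below is a minimal sketch (our illustration, not the implementation used in the cited works) of a SIRT-style iteration with a Tikhonov-type regularizing term on a toy linear system; the projection matrix W, the phantom and the noisy sinogram are randomly generated stand-ins for a real scanner geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear tomography model p = W @ x: W stands in for the projection
# geometry, x for the unknown image, p for the measured sinogram.
n_rays, n_voxels = 300, 100
W = rng.random((n_rays, n_voxels))
x_true = rng.random(n_voxels)
p = W @ x_true + 0.01 * rng.standard_normal(n_rays)  # noisy projections

# SIRT-style update with Tikhonov-type shrinkage (strength lam):
# x <- x + W^T R^-1 (p - W x) C^-1 - lam * x, with row/column normalization.
row_sum = W.sum(axis=1)   # per-ray normalization (matrix R diagonal)
col_sum = W.sum(axis=0)   # per-voxel normalization (matrix C diagonal)
lam = 0.01
x = np.zeros(n_voxels)
for _ in range(200):
    residual = (p - W @ x) / row_sum
    x = x + (W.T @ residual) / col_sum - lam * x

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Even on this toy system the regularized iteration stays stable at noise levels where a direct inversion would not, which is exactly the property exploited in low-dose tomography.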

When a parallel beam is used to probe an object (fig. 5, top), it is trivial to organize a layer-by-layer reconstruction. In the case of a cone scheme for collecting tomographic projections (fig. 5, bottom), the organization of the reconstruction procedure becomes more complex. Let us dwell in more detail on the cone scheme, since precisely this one is implemented in medical and industrial tomographs.

Fig. 5. Schematic diagrams of projection formation. Top - parallel beam, bottom - cone beam

The figure shows that with parallel probing, the reconstruction of a 3D digital image can be carried out layer by layer, since for each projection angle the same cross section of the tomographic object participates in forming the signal of the cells of one detector row. The reconstruction of one horizontal section of a 3D digital image can be carried out independently of the others. Sections can be reconstructed sequentially, one after another, or in parallel, i.e. simultaneously. To reconstruct one section, the data collected by one row of cells for all projection angles are loaded into memory. If the scheme is conical and the source-detector distance is comparable with the size of the tomographic object, i.e. the scheme cannot be approximated by a parallel one, then when the object is rotated (or the source-detector system is rotated around a stationary object), the rays arriving at different cells of one detector row (except the central one) pass through several layers of the object. The further the detector row is from the central row, the more layers are involved in signal formation. Thus, in order to reconstruct one of the layers of a 3D digital image, the signal values of only the one detector row located at the height of the reconstructed layer are not enough. It is necessary to load several detector rows at the same time; their exact number is determined by the geometry of the used measuring scheme.
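The independence of the slices in the parallel scheme makes the process embarrassingly parallel. The sketch below (our illustration, with randomly generated dummy data in place of a real sinogram) reconstructs each slice by a simple unfiltered back projection and distributes the slices over CPU processes.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def backproject_slice(sinogram, angles_deg):
    """Unfiltered back projection of one slice from parallel-beam data.
    sinogram: (n_angles, n_det) array of line integrals for this slice."""
    n = sinogram.shape[1]
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for row, ang in zip(sinogram, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + n / 2.0  # detector coordinate
        recon += np.interp(t, np.arange(n), row)          # smear projection back
    return recon / len(angles_deg)

if __name__ == "__main__":
    angles = np.arange(0, 180, 0.5)                 # 360 projection angles
    # Dummy data: 16 slices, 128 detector cells; a real scanner supplies this.
    data = np.random.rand(16, len(angles), 128)
    with ProcessPoolExecutor() as pool:             # one task per independent slice
        volume = list(pool.map(backproject_slice, data, [angles] * len(data)))
    print(len(volume), volume[0].shape)             # 16 slices of 128 x 128
```

In the cone scheme no such per-slice decomposition exists, which is precisely why the sub-volume reformulations discussed next become necessary.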

With an increase in the number of cells in a row of position-sensitive detectors (now several tens of thousands) and an increase in the number of rows required for loading, it is no longer always possible to directly implement algebraic methods such as ART [46], SART [47] or SIRT [48] with regularization on a PC with a GPU. In these cases, it is necessary to reformulate the optimization problem, introduce local sub-volumes and impose restrictions on the regions of their matching [30]. It should be noted that reconstruction in the parallel scheme for collecting projections, or the implementation of the reconstruction procedure using sub-volumes in the cone scheme, allows parallelizing the reconstruction process; hence the use of specialized architectures such as Elbrus [49] looks especially attractive from the point of view of speeding up the computations. The limitation on the reconstruction time is of particular importance if tomographic systems are used during surgery [50, 51] or to study the dynamics of fast processes, such as filtration processes in rock cores [52, 53] or processes of evolution of the structure of materials [54, 55].

As demonstrated above, with the increase in the spatial resolution of the method and in the requirements for the speed of calculations, the role of the computer in a tomographic complex is constantly growing. In principle, the computing unit can be placed in a single complex with the measuring part or, thanks to the rapidly developing methods of remote data manipulation, moved a considerable distance away, serving as a shared-use center. Systems of this type are created and constantly improved at synchrotron radiation sources [56]. The development of such systems is facilitated by tomographic data banks [57, 58], where data from many tomographic complexes operating in different modes are collected. The improvement of the computing units proceeds independently of the improvement of the measuring part of tomographic complexes, and perhaps the measuring part will soon be supplied to the user together with a choice of computing unit from a proposed line whose elements differ in their characteristics.

5. Can radiation exposure be reduced by optimizing data handling?

The quality of the reconstructed image depends on two types of scanning parameters: parameters related to the radiation load on the tomographic object, and parameters related to the conditions of digital image formation. The first group includes the source operating mode (voltage, current) and the collection time of one projection, or exposure time. The second group includes the relative size of the detector's field of view (whether the object is included in its entirety or only a part of it), the number of detector cells, the number of registered projections and the chosen reconstruction algorithm.

If the operating mode of the source is fixed, then the radiation load can be varied in two ways: by changing the number of projections or by changing the collection time of one projection [59]. A decrease in the number of projection angles leads to degradation of the quality of the reconstructed image [35]. Reducing the registration time of one projection preserves the number of projections, but each projection is then characterized by a low signal-to-noise ratio. Although algebraic reconstruction methods can deal with such projections, the question remains to what extent the ratio can be reduced. Until the reconstructed picture visually falls apart? Until the structures of interest to the observer begin to disappear? The issue of assessing the quality of reconstruction results becomes nontrivial in the absence of a reference (phantom) with which the reconstructed digital image could be compared. Next, we consider the issue of assessing the quality of reconstruction results.

6. Reconstruction quality

The quality of reconstruction results obtained from projections collected according to protocols with a small number of projection angles [60] depends, according to visual assessment, on the used reconstruction algorithm. Approaches to the quantitative assessment of the quality of reconstruction results differ for the surface and the hidden layers of the tomographic object [61]. Metrological approaches used in microscopy can be applied to assess the quality of reconstruction of the surface layer [62]. To detect violations of geometry or composition in hidden layers, either destructive control methods are applicable, or approaches specialized for the tomography method are required that do not rely on absolute coordinate measurements in the absence of phantoms. Work in this direction has been ongoing since 1989 [63]. In industrial diagnostics, where the concept of dimensional metrology is introduced, deviations in the sizes and locations of components are today controlled with known tolerances, while the quality of the materials (composition) of these components is controlled as well [64]. In medicine, anthropomorphic phantoms are used mainly to calculate the deposited dose and to optimize projection collection protocols [65]. Clinics require quality that is sufficient to make a diagnosis; visual inspection by physicians remains the main tool for assessing image quality. To automatically compare the quality of images obtained from different tomographic systems, global or local metrics are used [66]. A reference 3D digital image may be available [67] or not. In the latter case, the assessment of the quality of the resulting image must necessarily be profile-oriented: dental [68], pulmonological [69], etc.
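When a reference image is available, global metrics of the kind surveyed in [66] are straightforward to compute. A small sketch for MSE and PSNR is given below (our illustration; SSIM and FSIM require a library such as scikit-image).

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two digital images of the same shape."""
    return float(np.mean((reference - test) ** 2))

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the value span of the images."""
    m = mse(reference, test)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

# Example: a reference slice versus its noisy reconstruction.
ref = np.random.rand(128, 128)
rec = ref + 0.05 * np.random.randn(128, 128)
print(f"MSE = {mse(ref, rec):.4f}, PSNR = {psnr(ref, rec):.1f} dB")
```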

7. Transition to measurement methods with monitoring

In a conventional tomograph, image reconstruction begins after the completion of the scan protocol. In 2020, we proposed a fundamentally new approach to organizing tomographic imaging [2]. The idea is based on the fact that the quality of the reconstructed image is not determined solely by the number of projections collected at different angles and cannot grow indefinitely with it. It is proposed to carry out the reconstruction immediately during the collection process, controlling the progress of the reconstruction and analyzing the intermediate results in an automatic mode. This allows the study to be stopped in time, when the error versus time/dose cost trade-off is at its optimum, or the collection process to be stopped once a result of sufficient quality is achieved. We have called the approach "monitored reconstruction". It makes it possible to estimate how much computation and how many images will be required to bring the reconstructed image to a quality level acceptable for a physician, materials scientist, etc., with minimal losses associated with unrecorded projections. The measurement protocol thus changes from a rigid one to a flexible one. A schematic diagram of working with projections in this approach is shown in fig. 6.


Fig. 6. Schematic diagram of reconstruction with monitoring [2]

The proposed ideology poses a number of new tasks in the areas of optimizing the projection collection process, reconstructing the image when another projection or a batch of projections is added, and creating problem-oriented quality criteria and stopping rules for the collection process. The need to carry out reconstruction in real time (commensurate with the exposure time of one projection) imposes strict restrictions on the speed of reconstruction, and the creation of fast algorithms adapted to specialized platforms seems more necessary than ever.
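The control flow of such a flexible protocol can be summarized in a few lines. The sketch below is only a schematic restatement of the idea in [2], not the authors' implementation; the three callables are placeholders for scanner control, an incremental reconstruction step and a problem-oriented quality criterion.

```python
def monitored_scan(acquire_projection, update_reconstruction, quality,
                   max_projections, q_target):
    """Anytime scanning loop: reconstruct during collection, stop early.
    All three callables are placeholders for scanner- and task-specific code."""
    volume = None
    for k in range(max_projections):
        projection = acquire_projection(k)       # next angle of the flexible protocol
        volume = update_reconstruction(volume, projection)
        if quality(volume) >= q_target:          # sufficient quality: stop, save dose
            return volume, k + 1
    return volume, max_projections

# Toy demonstration with dummy callables: "quality" grows with each projection.
volume, used = monitored_scan(
    acquire_projection=lambda k: k,
    update_reconstruction=lambda v, p: (v or 0) + 1,
    quality=lambda v: v / 100.0,
    max_projections=400,
    q_target=0.8,
)
print(f"stopped after {used} of 400 projections")  # 80 in this toy run
```

The dose saving comes precisely from the early exit: a rigid protocol would always record all 400 projections.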

Conclusion

Let us draw a conclusion. It seems obvious that CT, in the form in which it has existed for the last 50 years, has almost exhausted its potential for development. The resolution of the method has reached nanometers. An attempt to examine even a small area of a brain at such resolution will yield 10^15 - 10^18 units of information from the scan. It is impossible to process such volumes on ordinary computing devices, even in the near future.

While treating COVID-19, it was found that patients need a CT scan every few days to follow the course of treatment, which is extremely dangerous under the current radiation exposure; this is why one of the main goals is to decrease this load. The use of CT in commercial applications, for instance, detecting damaged parts on a conveyor, brings very challenging demands for high-speed operation: scanning and processing must take not minutes, as they do currently, but fractions of a second.

To solve these tasks it is necessary to change the very approach to tomograph operation. At present, all data are gathered first, and the images are reconstructed and processed afterwards. Instead, the scanning itself must be carried out under program control. This opens up a number of opportunities, for example:

- to scan with a low resolution, determine the region of interest and, if one is found, rescan only that region;

- to make a small number of scans from different angles and either determine that there is no object of interest or find its borders;

- to stop scanning as soon as there is enough data for reconstruction.

The first experimental studies of this approach show its viability. It allows us to exploit the full power of modern image processing and reconstruction algorithms and to solve new tasks that could not be posed earlier. Thus, there is no doubt that we are on the threshold of a paradigm change and of the building of new-generation tomographs.

We would like to thank those who helped us with the historical part of the article. For the photography section, the materials of E. Fedorova (http://blogphotografelena.ru/istoriya-fotografii/) helped us compose the historical reference, and while writing the part about the development of the computed tomography method in Russia we consulted O.B. Ryazantsev.

References

[1] Friedland GW, Thurber BD. The birth of CT. Am J of Roentgenol 1996; 167(6): 1365-1370. DOI: 10.2214/ajr.167.6.8956560.

[2] Bulatov K, Chukalina M, Buzmakov A, Nikolaev D, Arlazarov V. Monitored reconstruction: Computed tomography as an anytime algorithm. IEEE Access 2020; 8: 110759-110774. DOI: 10.1109/ACCESS.2020.3002019.

[3] Rabinovich AM. Tomography for pulmonary tuberculosis [In Russian]. Leningrad: "Medgiz" Publisher; 1963.

[4] Radon J. Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Berichte Sächsische Akademie der Wissenschaften Leipzig 1917; 29: 262-277.

[5] Grossman G. Tomographie I (Rontgenographische Darstellung von Korperschnitten). Fortschr a d Geb d Röntgenstr 1935; 51: 61-80.

[6] Grossman G. Tomographie II (Theoretisches über Tomographie). Fortschr a d Geb d Röntgenstr 1935; 51: 191-208.

[7] Grossman G. Praktische Voraussetzungen für die Tomographie. Fortschr a d Geb d Röntgenstr 1935; 52(H): 44.

[8] Chaoul H. Ueber die Tomographie und insbesondoreihre-Anwendung in der Lungendiagnostik. Fortschr a d Geb d Röntgenstr 1935; 51: 342-356.

[9] Korenblum BI, Tetelbaum SI, Tyutin AA. About one scheme of tomography [In Russian]. Izvestiya VUZov MVO: Radiofizika 1958; 1(3): 13-19.

[10] Gustschin A. Translation: About one scheme of tomography. arXiv Preprint arXiv:2004.03750v1 2020. Source: (https://arxiv.org/abs/2004.03750).

[11] Cormack AM. Representation of a function by its line integrals, with some radiological applications. J Appl Phys 1963; 34(9): 2722-2727. DOI: 10.1063/1.1729798.

[12] Cormack AM. Representation of a function by its line integrals, with some radiological applications. II. J Appl Phys 1964; 35(10): 2908-2913. DOI: 10.1063/1.1713127.

[13] Cormack AM. Reconstruction of densities from their projections, with applications in radiological physics. Phys Med Biol 1973; 18(2): 195-207. DOI: 10.1088/0031-9155/18/2/003.

[14] Alexander RE, Gunderman RE. EMI and the first CT scanner. J Am Coll Radiol 2010; 7(10): 778-781. DOI: 10.1016/j.jacr.2010.06.003.

[15] Mitchell W. Playing leap-frog with elephants: EMI Ltd. and the CT scanner competition in the 1970's. Ann Arbor: University of Michigan Ross Business School; 1989. Source: (http://www-2.rotman.utoronto.ca/william.mitchell/Bio/TeachingMaterials/0Cases/emi/emi_2005a.pdf).

[16] Vanshtein BK. About finding the structure of objects by projections [In Russian]. Kristallographia 1970; 15(5): 894-902.

[17] Vanshtein BK. Three-dimensional electron microscopy of biological macromolecules. Sov Phys Usp 1973; 109(3): 455-497. DOI: 10.1070/PU1973v016n02ABEH005164.

[18] Vasilieva EYu, Maiorov A. Application of computer tomography for fuel rod control [In Russian]. Atomnaya Energia 1979; 46(6): 403-406.

[19] Rubashov IB, Timonov AA, Rapkin YuI, Dorofeev YuV, Pestryakov AV. Tomograph [In Russian]. USSR Inventor's certificate SU 928277 of May 15, 1982, Russian Bull of Inventions N18, 1982.

[20] Rubashov IB, Timonov AA, Pestryakov AV. About computer tomography [In Russian]. Doklady Akademii Nauk SSSR 1980; 258(4): 846-850.

[21] Topal E, Liao Zh, Loffler M, Gluch J, Zhang J, Feng X, Zschech E. Multi-scale X-ray tomography and machine learning algorithms to study MoNi4 electrocatalysts anchored on MoO2 cuboids aligned on Ni foam. BMC Mater 2020; 2: 5. DOI: 10.1186/s42833-020-00011-0.

[22] Du M, Nashed YoSG, Kandel S, Gursoy D, Jacobsen C. Three dimensions, two microscopes, one code: Automatic differentiation for X-ray nanotomography beyond the depth of focus limit. Sci Adv 2020; 6(13): eaay3700. DOI: 10.1126/sciadv.aay3700.

[23] Lemelle L, Simionovici A, Colin P, Knott G, Bohic S, Cloetens P, Schneider B. Nano-imaging trace elements at organelle levels in substantia nigra overexpressing a-synuclein to model Parkinson's disease. Commun Biol 2020; 3: 364. DOI: 10.1038/s42003-020-1084-0.

[24] Nguyen TT, Villanova J, Su Z, Tucoulou R, Fleutot B, Delobel B, Delacourt C, Demortiere A. 3D Quantification of microstructural properties of LiNi0.5Mn0.3Co0.2O2 high-energy density electrodes by X-Ray holographic nano-tomography. Adv Energy Mater 2021; 11: 2003529. DOI: 10.1002/aenm.202003529.

[25] Taffel S. Google's lens: computational photography and platform capitalism. Media, Culture & Society 2020; 43(2): 0163443720939449. DOI: 10.1177/0163443720939449.

[26] Nikonorov AV, Petrov MV, Bibikov SA, Kutikova VV, Morozov AA, Kazanskiy NL. Image restoration in diffractive optical systems using deep learning and deconvolution. Computer Optics 2017; 41(6): 875-887. DOI: 10.18287/2412-6179-2017-41-6-875-887.

[27] Yoon D-H, Han Y. Parallel power flow computation trends and applications: A review focusing on GPU. Energies 2020; 13(9): 2147. DOI: 10.3390/en13092147.

[28] Draelos R, Dov D, Mazurowski M, Lo J, Henao R, Rubin G, Carin L. Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes. Med Image Anal 2021; 67: 101857. DOI: 10.1016/j.media.2020.101857.

[29] Zhao X, Hu J, Zhang P. GPU-based 3D cone-beam CT image reconstruction for large data volume. Int J Biomed Imaging 2009; 2009: 149079. DOI: 10.1155/2009/149079.

[30] Chukalina MV, Ingacheva AI, Buzmakov AV, Terekhin AP, Shikina Yu. Algebraic reconstruction in case of limited GPU memory in the task of computed tomography [In Russian]. Sensornye Systemy 2019; 33(2): 166-172. DOI: 10.1134/S0235009219020021.

[31] Karhula SS, Finnilä MAJ, Rytky SJO, Cooper DM, Thevenot J, Valkealahti M, Pritzker KPH, Heapea M, Joukaainen A, Lehenkari P, Kroger H, Korhonen RK, Nieminen HJ, Saarakkala S. Quantifying subresolution 3D morphology of bone with clinical computed tomography. Ann Biomed Eng 2020; 48: 595-605. DOI: 10.1007/s10439-019-02374-2.

[32] Janssens N, Huysmans M, Swennen R. Computed tomography 3D super-resolution with generative adversarial neural networks: Implications on unsaturated and two-phase fluid flow. Materials 2020; 13(6): 1397. DOI: 10.3390/ma13061397.

[33] Milanfar P, ed. Super-resolution imaging. Boca Raton, London, New York: CRC Press; 2011.

[34] Smal P, Gouze P, Rodriguez O. An automatic segmentation algorithm for retrieving sub-resolution porosity from X-ray tomography images. J Pet Sci Eng 2018; 166: 198-207. DOI: 10.1016/j.petrol.2018.02.062.

[35] Bukreeva I, Asadchikov V, Buzmakov A, Chukalina M, Ingacheva A, Korolev N, Bravin A, Mittone A, Biell G, Sierra G, Brun F, Massimi L, Fratini M, Cedola A. High resolution 3D visualization of the spinal cord in a postmortem murine model. Biomed Opt Express 2020; 11(4): 2235-2253. DOI: 10.1364/BOE.386837.

[36] Natterer F. The mathematics of computerized tomography. Stuttgart: John Wiley & Sons Ltd, B G Teubner; 1986.

[37] Ramachandran GN, Lakshminarayanan AV. Three-dimensional reconstruction from radiographs and electron micrographs: application of convolutions instead of Fourier transforms. Proc Nat Acad Sci U S A 1971; 68(9): 2236-2240. DOI: 10.1073/pnas.68.9.2236.

[38] Shepp L, Logan BF. The Fourier reconstruction of a head section. IEEE Trans Nucl Sci 1974; NS-21: 21-43.

[39] Gordon R, Bender R, Herman GT. Algebraic Reconstruction Techniques (ART) for three-dimensional electron microscopy and X-ray photography. J Theor Biol 1970; 29: 471-481.

[40] Gordon R. A tutorial on ART (algebraic reconstruction techniques). IEEE Trans Nucl Sci 1974; 21(3): 78-93. DOI: 10.1109/TNS.1974.6499238.

[41] Ambrose J, Hounsfield GN. Computerized transverse axial tomography. Br J Radiol 1973; 46(542): 148-149.

[42] Tikhonov AN. About ill-posed problems and regularization technique [In Russian]. Doklady Akademii Nauk SSSR 1963; 151(3): 501-504.

[43] Kuyumchyan A, Isoyan A, Shulakov E, Aristov V, Kondratenkov V, Snigirev A, Snigireva I, Souvorov A, Tamasaku K, Yabashi M, Ishikawa T, Trouni K. High-efficiency and low-absorption Fresnel compound zone plates for hard X-ray focusing. Proc SPIE 2002; 4783: 92-96. DOI: 10.1117/12.450480.

[44] Loffelmann V, Mlynar J, Imrisek M, Mazon D, Jardin A, Weinzettl V, Hron M. Minimum Fisher Tikhonov regularization adapted to real-time tomography. Fusion Sci Technol 2016; 69(2): 505-513. DOI: 10.13182/FST15-180.

[45] Webber JW, Quinto ET, Miller EL. A joint reconstruction and lambda tomography regularization technique for energy-resolved X-ray imaging. Inverse Probl 2020; 36: 074002.

[46] Kaczmarz S. Angenäherte auflösung von systemen linearer gleichungen. Bull Int Acad Pol Sci Lett 1937; 35: 355-357.

[47] Andersen AH, Kak AC. Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm. Ultrason Imaging 1984; 6: 81-94.

[48] Gilbert P. Iterative methods for three dimensional reconstruction of an object from projections. J Theor Biol 1972; 36: 105-117.

[49] Gorobets AV, Neiman-Zade MI, Okunev SK, Kalyakin AA, Soukov SA. Performance of Elbrus-8C processor in supercomputer CFD simulations. Math Models Comput Simul 2019; 11(6): 914-923. DOI: 10.1134/S2070048219060073.

[50] Sedrak M, Sabelman E, Pezeshkian P, Duncan J, Bernstein I, Bruce D, Tse V, Khandhar S, Call E, Heit G, Alaminos-Bouza A. Biplanar X-ray methods for stereotactic intraoperative localization in deep brain stimulation surgery. Oper Neurosurg 2020; 19(3): 302-312. DOI: 10.1093/ons/opz397.

[51] Ueberschaer M, Vettermann F, Forbrig R, Unterrainer M, Siller S, Biczok A, Thorsteinsdottir J, Cyran C, Bartenstein P, Tonn J, Albert N, Schichor C, Grade S. Simpson grade revisited - Intraoperative estimation of the extent of resection in meningiomas versus postoperative somatostatin receptor positron emission tomography, computed tomography and magnetic resonance imaging. Neurosurgery 2021; 88(1): 140-146. DOI: 10.1093/neuros/nyaa333.

[52] Qadeer SMA, Filomena S, Lamei RH, Paul F, Roshan H. Configurational diffusion transport of water and oil in dual continuum shales. Sci Rep 2021; 11(2152): 18. DOI: 10.1038/s41598-021-81004-1.

[53] Singh N, Kumar S, Udawatta RP, Anderson SH, Jonge LW, Katuwal S. X-ray micro-computed tomography characterized soil pore network as influenced by long-term application of manure and fertilizer. Geoderma 2021; 385: 114872. DOI: 10.1016/j.geoderma.2020.114872.

[54] Ziesche RF, Arlt T, Finegan DP, Heenan T, Tengattini A, Baum D, Kardjilov N, Marketter H, Manke I, Kockelmann W, Brett D, Shearing P. 4D imaging of lithium batteries using correlative neutron and X-ray tomography with a virtual unrolling technique. Nat Commun 2020; 11: 777. DOI: 10.1038/s41467-019-13943-3.

[55] Creveling PJ, Fisher J, LeBaron N, Czabaj MW. 4D Imaging of ceramic matrix composites during polymer infiltration and pyrolysis. Acta Materialia 2020; 201: 547-560. DOI: 10.1016/j.actamat.2020.10.036.

[56] Khokhriakov I, Lottermoser L, Gehrke R, Kracht T, Wintersberger E, Kopmann A, Vogelgesang M, Beckmann F. Integrated control system environment for high-throughput tomography. Proc SPIE 2014; 9212: 921217. DOI: 10.1117/12.2060975.

[57] Sarkissian HD, Lucka F, Eijnatten M, Colacicco G, Coban SB, Batenburg KJ. A cone-beam X-ray computed tomography data collection designed for machine learning. Sci Data 2019; 6: 215. DOI: 10.1038/s41597-019-0235-y.

[58] De Carlo F, Gürsoy D, Ching DJ, Batenburg KJ, Ludwig W, Mancini L, Marone F, Mokso R, Pelt DM, Sijbers J, Rivers M. TomoBank: a tomographic data repository for computational X-ray science. Meas Sci Technol 2018; 29: 034004. DOI: 10.1088/1361-6501/aa9c19.

[59] Cristofaro M, Busi R, Rizzi E, Piselli P, Pianura P, Petrone A, Fusco N, Di F, Schinina S. Image quality and radiation dose reduction in chest CT in pulmonary infection. Radiol Med 2020; 125(5): 451. DOI: 10.1007/s11547-020-0113.

[60] Villarraga-Gómez H, Smith ST. Effect of the number of projections on dimensional measurements with X-ray computed tomography. Precis Eng 2020; 66: 445-456. DOI: 10.1016/j.precisioneng.2020.08.006.

[61] Müller P. Estimation of measurement uncertainties in X-ray computed tomography metrology using the substitution method. CIRP J Manuf Sci Technol 2014; 7(3): 222-232. DOI: 10.1016/j.cirpj.2014.04.002.

[62] Hansen HN, Carneiro K, Haitjema H, De Chiffre L. Dimensional micro and nano metrology. Annals of the CIRP 2006; 55(2): 721-743. DOI: 10.1016/j.cirp.2006.10.005.

[63] Fernandes T, Oliveira M, Castro R, Araujo B, Viamonte B, Cunha R. Bowel wall thickening at CT: simplifying the diagnosis. Insights into Imaging 2014; 5: 195-208. DOI: 10.1007/s13244-013-0308-y.

[64] Kruth JP, Bartscher M, Carmignato S, Schmitt R, De Chiffre L, Weckenmann A. Computed tomography for dimensional metrology. CIRP Annals 2011; 60(2): 821-842. DOI: 10.1016/j.cirp.2011.05.006.

[65] Gómez AML, Santana PS, Mourão AP. Dosimetry study in head and neck of anthropomorphic phantoms in computed tomography scans. SciMedicine J 2020; 2(1): 38-43. DOI: 10.28991/SciMedJ-2020-0201-6.

[66] Sara U, Akter M, Uddin MS. Image quality assessment through FSIM, SSIM, MSE and PSNR. J Comput Commun 2019; 7(3): 8-18. DOI: 10.4236/jcc.2019.73002.

[67] Martin CJ, Sharp PF, Sutton DG. Measurement of image quality in diagnostic radiology. Appl Radiat Isot 1999; 50: 21-38. DOI: 10.1016/s0969-8043(98)00022-0.

[68] Gori C, Rossi F, Stecco A, Villari N, Zatelli G. Dose evaluation and quality criteria in dental radiology. Radiat Prot Dosimetry 2000; 90(1-2): 225-227. DOI: 10.1093/oxfordjournals.rpd.a033125.

[69] Guo L, Zhang J, Kong D, Shan W, Duan L. WITHDRAWN: Lung nodule image quality assessment under iterative model reconstruction. Future Gener Comput Syst 2021; February: online. DOI: 10.1016/j.future.2021.02.004.

Authors' information

Vladimir Lvovich Arlazarov, (b. 1939), Dr. Sc., corresponding member of the Russian Academy of Sciences, graduated from Lomonosov Moscow State University in 1961. Currently he works as head of sector 9 at FRC CSC RAS. Research interests are machine learning, computer vision and artificial intelligence. E-mail: vladimir.arlazarov@smartengines.com .

Dmitry Petrovich Nikolaev, (b. 1978), Ph. D. in Physics and Mathematics, head of a laboratory at the IITP RAS. Graduated from Lomonosov Moscow State University in 2000. Research interests are machine vision, algorithms for fast image processing, pattern recognition. E-mail: dimonstr@iitp.ru .

Vladimir Viktorovich Arlazarov, (b. 1976), PhD, graduated from Moscow Institute of Steel and Alloys in 1999, majoring in Applied Mathematics. Currently he works as head of division 93 at the Institute for Systems Analysis FRC CSC RAS. Research interests are pattern recognition and machine learning. E-mail: vva@smartengines.com .

Marina Valerievna Chukalina, (b. 1965), graduated from the Moscow Engineering Physics Institute and holds a Ph.D. in Applied Mathematics. She is the head of the computed tomography group at Smart Engines and works as senior researcher at the IITP RAS. Research interests include direct and inverse problems in X-ray microscopy and tomography. E-mail: chukalinamarina@gmail.com .

Received March 29, 2021. The final version - July 26, 2021.
