
Section 9. Technical sciences

Lyazzat Kh. Zhunussova,
Kazakh National Pedagogical University named after Abai,
Almaty, Kazakhstan,
Associate Professor of Mathematics
E-mail: khafizovna_66@mail.ru

Some important issues of the computational process in parallel programming

Abstract: The modern approach to education in parallel programming has a rather pronounced "technological" focus: the main emphasis in presenting educational material is on parallel computing architectures and practical parallel programming techniques. In other words, the creation of parallel software becomes only one aspect of a more general discipline: the engineering of parallel software applications as a set of mathematical models, numerical methods for their implementation, parallel algorithms, and program codes.

Keywords: parallel programming, parallel computation, synchronization, pipeline, software.

Introduction. Parallel computing is a modern, multi-faceted area of computer science that is thriving and will remain among the most relevant in the coming decades. The relevance of this work rests on a variety of factors, first and foremost the need for large computing resources in applications that model processes in physics, biophysics, chemistry, etc. As with other areas of computer science, parallel programming has passed through several stages. It arose because of the new opportunities provided by the development of hardware and evolved in step with technological change. Over time, specialized techniques were consolidated into a set of basic principles and general programming methods.

Statement of the problem. It should be noted that the capabilities of functional languages, and the techniques for managing parallel computation in them, are not yet well understood.

A parallel program contains multiple processes working together to perform a task. Each process is a sequential program, that is, a sequence of statements executed one after another. A sequential program has a single control flow, while a parallel program has several. The processes of a parallel program cooperate by interacting with one another. Interaction is programmed using shared variables or message passing. With shared variables, one process writes to a variable that another process reads. With message passing, one process sends a message that another receives. Any form of interaction between processes requires mutual synchronization. There are two main types of synchronization: mutual exclusion and conditional synchronization. Mutual exclusion ensures that critical sections of statements are not executed simultaneously. Conditional synchronization delays a process until a certain condition is met. For example, the interaction of producer and consumer processes is often arranged through a buffer in shared memory. The producer writes to the buffer and the consumer reads from it. Mutual exclusion is used to prevent the producer and the consumer from accessing the buffer at the same time, and conditional synchronization is used to check whether the consumer has read the last message written to the buffer. A minimal sketch of this scheme is given below.
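The following minimal C sketch (not from the article; all identifiers are illustrative) shows such producer-consumer interaction over a one-slot shared buffer, with a POSIX mutex providing mutual exclusion and condition variables providing conditional synchronization:

    /* Producer-consumer over a one-slot shared buffer: a sketch,
       not the article's code. Compile with: cc -pthread pc.c */
    #include <pthread.h>
    #include <stdio.h>

    static int buffer;          /* shared one-slot buffer            */
    static int full = 0;        /* 1 if buffer holds an unread item  */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  can_put = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  can_get = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg) {
        for (int i = 0; i < 5; i++) {
            pthread_mutex_lock(&lock);          /* mutual exclusion  */
            while (full)                        /* conditional sync: */
                pthread_cond_wait(&can_put, &lock); /* wait for read */
            buffer = i;
            full = 1;
            pthread_cond_signal(&can_get);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 5; i++) {
            pthread_mutex_lock(&lock);
            while (!full)                       /* wait for a write  */
                pthread_cond_wait(&can_get, &lock);
            printf("consumed %d\n", buffer);
            full = 0;
            pthread_cond_signal(&can_put);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The while loops around pthread_cond_wait are the standard guard against spurious wakeups.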

The most important characteristics of any computer are its performance and its speed. These two characteristics are often identified with each other, although strictly they are distinct. Performance means the number of operations executed by the computing system per unit of time; speed is the reciprocal of the average time of a single operation. Performance is measured in millions of instructions per second (MIPS) or millions of floating-point operations per second (MFLOPS). Other important characteristics of computing systems are scalability (the ability of the system to increase and reduce its resources, primarily performance and RAM), reconfigurability (variation of the number of nodes in the system graph and of their interconnections), and the reliability and survivability of the system.

A highly scalable system provides near-linear performance growth as the number of processors in it increases. As noted above, the main feature of vector-pipeline systems is the presence of pipelined functional units containing a number of operation pipelines. Therefore, evaluating the performance of vector-pipeline systems reduces to evaluating the performance of pipelined operations.

Methodology. To assess the performance of pipelined operations, consider the example of a pipelined addition operation. Suppose there is an l-stage addition pipeline, and let every stage of the pipeline require the same execution time Δt.


Then the addition of the vectors x = (x₁, …, xₙ) and y = (y₁, …, yₙ) takes time

T = (s + l + n)Δt,    (1)

where sΔt is the fixed start-up time of the pipeline and lΔt is the "acceleration" time of the pipeline.

After start-up and "acceleration", the pipeline delivers a result on every cycle Δt.

The maximum rate at which the pipeline delivers results (its maximum speed) is

r∞ = 1/Δt.    (2)

This speed is called the asymptotic performance of the pipeline. The pipeline's speed approaches the asymptotic performance when the terms s and l in formula (1) can be neglected, which happens when the length n of the processed vectors is much larger than the quantities s and l. It is also assumed that there are no conflicts when accessing memory. A similar situation holds for any pipelined operation, so conventionally we say that the asymptotic speed of a pipelined operation is achieved on vectors of infinite length.

When the pipeline operates in sequential mode, the maximum rate of delivering results is obviously

r₁ = 1/(lΔt),    (3)

so pipelining improves the performance of the computing system by a factor of l.
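As a quick numeric check (the values are illustrative, not from the article), the following C snippet evaluates formulas (1)-(3) and confirms that the pipelined-to-sequential speed ratio equals the number of stages l:

    /* Numeric check of formulas (1)-(3); all values are assumed:
       s = 10 start-up cycles, l = 6 stages, n = 1000 elements,
       dt = 10 ns per pipeline cycle. */
    #include <stdio.h>

    int main(void) {
        double s = 10, l = 6, n = 1000, dt = 10e-9;
        double T     = (s + l + n) * dt;   /* (1) total pipelined time  */
        double r_inf = 1.0 / dt;           /* (2) asymptotic speed      */
        double r_seq = 1.0 / (l * dt);     /* (3) sequential-mode speed */
        printf("T        = %.3e s\n", T);
        printf("r_inf    = %.3e results/s\n", r_inf);
        printf("r_seq    = %.3e results/s\n", r_seq);
        printf("speed-up = %.1f (equals l)\n", r_inf / r_seq);
        return 0;
    }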

To assess the performance of vector-parallel systems and MIMD-systems, consider the example of adding the vectors x = (x₁, …, xₙ) and y = (y₁, …, yₙ) on an N-processor system. The execution time, both on a vector-parallel system and on a MIMD-system, can be estimated by the formula

T = T_com + T_cal,    (4)

where T_com = O(d⌈n/N⌉/v) is the communication time and T_cal = ⌈n/N⌉τ is the computation time; here d is the diameter of the system's communication network, ⌈A⌉ denotes the nearest integer not less than A, v [bit/s] is the throughput of a channel of interprocessor exchange, and τ [s] is the time of adding two numbers on a single-processor system.

If we neglect the communication costs, then the minimum execution time of the componentwise addition of the vectors X and Y on an N-processor system can be taken to be T_cal = ⌈n/N⌉τ ≈ nτ/N.

Thus, the maximum rate at which an N-processor vector-parallel system or MIMD-system delivers results (its maximum speed) is

W = n/T_cal = N/τ.    (5)

This maximum speed of vector-parallel systems and MIMD-systems is also called the asymptotic performance.

When the vectors X and Y are added on a single-processor system, the maximum rate of delivering results is obviously

W₁ = 1/τ.    (6)
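A small numeric sketch (all values assumed for illustration, not from the article) of formulas (4)-(6), showing that neglecting communication gives an N-fold gain over a single processor:

    /* Illustrative check of formulas (4)-(6); assumed values:
       n = 1e6 elements, N = 64 processors, tau = 5 ns per add,
       d = 6 network diameter, v = 1e12 bit/s channel throughput.
       T_com below is a crude instance of the O(d*ceil(n/N)/v)
       estimate, assuming 64-bit words. Compile with: cc pv.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n = 1e6, N = 64, tau = 5e-9, d = 6, v = 1e12, word = 64;
        double chunk = ceil(n / N);          /* elements per processor   */
        double T_cal = chunk * tau;          /* computation time         */
        double T_com = d * chunk * word / v; /* communication estimate   */
        double W  = n / T_cal;               /* (5) N-processor speed    */
        double W1 = 1.0 / tau;               /* (6) one-processor speed  */
        printf("T = %.3e s (T_com = %.3e, T_cal = %.3e)\n",
               T_com + T_cal, T_com, T_cal);
        printf("W / W1 = %.1f (equals N)\n", W / W1);
        return 0;
    }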

Thus, parallel addition of vectors on vector-parallel systems and MIMD-systems increases the maximum performance by a factor of N. The same holds for any binary operation performed on vector-parallel or MIMD-systems. An important characteristic of parallel computing systems is the vector length at which half of the asymptotic performance of the system is achieved; this value, n₁/₂, is called the half-performance length. The relative performance of different algorithms on a given parallel computing system depends on the half-performance length.

We introduce the quantity

P = n₁/₂ / n,

where n is the average length of the processed vectors. Then P close to 0 means that the algorithm can be parallelized efficiently on the given computing system, while P close to 1 means the opposite.

Example. Consider the operation of multiplying two matrices (an operation built on scalar products of vectors) on parallel computing systems. The diagram shows the r∞ values in the first row and the n₁/₂ values in the second row.

Fig. 1. Diagram of the calculations


Suppose that the average length n of the processed vectors is 100. Then

P_CYBER-205 = n₁/₂/n ≈ 116/100 ≈ 1,    P_CRAY-1 = n₁/₂/n ≈ 7/100 ≈ 0.1,

so for this task the CRAY-1 system is much more efficient than the CYBER-205 system.
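As a tiny sketch (the n₁/₂ values are those quoted above; the helper name is ours), the same comparison in code:

    /* Compares two systems by the ratio P = n_half / n from the text;
       the n_half values (116 and 7) are those quoted in the example. */
    #include <stdio.h>

    /* P close to 0: the algorithm parallelizes efficiently on the
       system; P close to 1: it does not. */
    static double half_perf_ratio(double n_half, double n) {
        return n_half / n;
    }

    int main(void) {
        double n = 100.0;                  /* average vector length */
        printf("CYBER-205: P = %.2f\n", half_perf_ratio(116.0, n));
        printf("CRAY-1:    P = %.2f\n", half_perf_ratio(7.0, n));
        return 0;
    }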

Conclusion. The speed of vector-parallel systems and MIMD-systems approaches the asymptotic performance when the communication component in formula (4) can be neglected and the value of n is a multiple of the number of processors in the system. Note that neglecting the communication costs also assumes that memory accesses do not conflict with one another. The meanings of asymptotic performance and half-performance length are different: asymptotic performance mainly characterizes the manufacturing technology of the computer, while the half-performance length is a measure of its degree of parallelism.


