Mathematical Structures and Modeling 2017. N. 2(42). PP. 115-118
UDC 004.2
WHAT IS THE BEST WAY TO ADD A LARGE NUMBER OF INTEGERS: NUMBER-BY-NUMBER AS COMPUTERS DO OR LOWEST-DIGITS-THEN-NEXT-DIGITS-ETC. AS WE HUMANS DO?
Olga Kosheleva¹
Ph.D. (Phys.-Math.), Associate Professor, e-mail: [email protected]
Vladik Kreinovich²
Ph.D. (Phys.-Math.), Professor, e-mail: [email protected]
¹Department of Teacher Education, University of Texas at El Paso, 500 W. University, El Paso, TX 79968, USA
²Department of Computer Science, University of Texas at El Paso, 500 W. University, El Paso, TX 79968, USA
Abstract. When we need to add several integers, computers add them one by one, while we usually add them digit by digit: first, we add all the lowest digits, then we add all the next lowest digits, etc. Which way is faster? Should we learn from computers, or should we teach computers to add several integers our way?
In this paper, we show that the computer way is faster. This adds one more example to the list of cases in which computer-based arithmetic algorithms are much more efficient than the algorithms that we humans normally use.
Keywords: digits, adding, calculating.
1. Formulation of the Problem
When we humans need to add several integers:
• we usually first add their lowest digits,
• then we add their next lowest digits,
• etc.
On the other hand, when computers are given the task of adding several integers, they add these integers number-by-number:
• first, they add the first two numbers,
• then they add the third number to the resulting intermediate sum,
• etc.
Which way is better?
• Should we program computers to add several numbers our way?
• Or should we learn from the computers and add numbers their way?
This is the question that we answer in this paper.
2. Analysis of the Problem
Notations.
• Let us denote by n the number of integers that we need to add, and
• let us denote by d the number of digits in each of these numbers.
How many extra digits do we need to represent the sum? When we add n d-digit integers, the sum can be up to n times larger than each of the original d-digit integers. So, to represent this sum, we may need to use additional digits. How many additional digits do we need?
Every time we add one more digit, the size of the numbers that can be represented increases by a factor of B, where B is the base of the corresponding numerical system:
• B = 2 for most computers, and
• B = 10 for human computations.
Adding two digits increases the largest representable number by a factor of B^2. In general, adding k digits increases the largest representable number by a factor of B^k.
To be able to increase the size by a factor of n, we therefore need to use k additional digits, where B^k ≈ n. Thus, we need k = log_B(n) additional digits.
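This count is easy to check. Here is a minimal Python sketch (ours, not part of the original analysis) that computes the number of extra digits as the smallest k for which B^k ≥ n:

```python
def extra_digits(n: int, base: int) -> int:
    """Smallest k with base**k >= n: how many extra digits are needed
    to represent the sum of n numbers of the same size."""
    k, power = 0, 1
    while power < n:
        power *= base
        k += 1
    return k

# Summing 1000 numbers needs 3 extra decimal digits (10**3 = 1000),
# but 10 extra binary digits (2**10 = 1024 >= 1000).
print(extra_digits(1000, 10))  # 3
print(extra_digits(1000, 2))   # 10
```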
What if we add numbers one by one? When we add two d-digit numbers, we need d digit operations; see, e.g., [1]. When we add the numbers one by one, we will eventually get to numbers with d + log_B(n) digits, so we will need d + log_B(n) digit operations for each addition.
Overall, to find the sum of n numbers, we need to perform n − 1 ≈ n additions. So, we need

n · (d + log_B(n)) = n · d + n · log_B(n)     (1)

digit operations.
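The following Python sketch (our illustration, not the authors' code) counts digit operations for number-by-number addition under the cost model used above, in which each addition is charged one operation per digit of the running sum:

```python
def num_digits(x: int, base: int) -> int:
    """Exact number of base-B digits of a nonnegative integer x."""
    d = 1
    while x >= base:
        x //= base
        d += 1
    return d

def ops_number_by_number(numbers: list[int], base: int) -> int:
    """Digit operations for left-to-right summation: each of the
    n - 1 additions touches every digit of the running sum once."""
    total, ops = numbers[0], 0
    for x in numbers[1:]:
        total += x
        ops += num_digits(total, base)
    return ops

# One thousand 10-digit decimal numbers: formula (1) predicts about
# n * (d + log10(n)) = 13000 operations; the sketch prints 12879,
# slightly less because the early partial sums are still short.
print(ops_number_by_number([10**10 - 1] * 1000, 10))
```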
What if we first add all the lowest digits, then all the next digits, etc.? When we add all the lowest digits of the n numbers, we get a value of up to n · B. To represent this value, we need log_B(n · B) = 1 + log_B(n) digits. So, to perform the addition of the lowest digits of all n numbers, we need n · (1 + log_B(n)) digit operations.
Overall, we need to perform a similar summation for each of the d original digit positions. Thus, in this case, we need overall

d · n · (1 + log_B(n)) = n · d + n · d · log_B(n)     (2)
digit operations.
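For comparison, here is the analogous sketch (again ours) for the digit-by-digit scheme, charging one operation per digit of the running per-position total; carries between positions are ignored in this rough count, just as in formula (2):

```python
def num_digits(x: int, base: int) -> int:
    """Exact number of base-B digits of a nonnegative integer x."""
    d = 1
    while x >= base:
        x //= base
        d += 1
    return d

def ops_digit_by_digit(numbers: list[int], base: int) -> int:
    """Digit operations for the human scheme: for every digit position,
    sum that position's digit over all n numbers."""
    d = max(num_digits(x, base) for x in numbers)
    ops = 0
    for pos in range(d):
        total = 0
        for x in numbers:
            total += (x // base**pos) % base  # the pos-th digit of x
            ops += num_digits(total, base)
    return ops

# The same one thousand 10-digit numbers: formula (2) predicts about
# n * d * (1 + log10(n)) = 40000 operations; the sketch prints 38770.
print(ops_digit_by_digit([10**10 - 1] * 1000, 10))
```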
Conclusion: the computer way is much faster. By comparing formulas (1) and (2), we see that number-by-number addition is faster: in the digit-by-digit addition, the term added to n · d is d times larger than in the number-by-number addition.
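To make this factor-of-d gap concrete, a short sketch (ours) tabulates formulas (1) and (2) for a few decimal examples:

```python
import math

def cost_number_by_number(n: int, d: int, base: float = 10) -> float:
    """Formula (1): n * (d + log_B(n))."""
    return n * (d + math.log(n, base))

def cost_digit_by_digit(n: int, d: int, base: float = 10) -> float:
    """Formula (2): n * d * (1 + log_B(n))."""
    return n * d * (1 + math.log(n, base))

for n, d in [(100, 5), (1000, 10), (10**6, 20)]:
    print(n, d, round(cost_number_by_number(n, d)),
          round(cost_digit_by_digit(n, d)))
# For n = 1000 and d = 10: 13000 vs. 40000 digit operations.
```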
Discussion. This conclusion is in line with the general trend that the arithmetic algorithms used by humans are far from optimal [1]. For example:
• while we have two different algorithms for addition and subtraction, computers use the 2's complement representation of negative numbers, which allows both operations to be performed in the same way [1] (see the sketch after this list);
• our digit-by-digit multiplication requires O(d^2) digit operations, while there exist faster algorithms, based on the Fast Fourier Transform, that require only O(d · ln(d)) ≪ O(d^2) digit operations [1];
• our "long division" algorithm is also not the best: the usual computer way of first computing 1/b and then computing a • (1/b) is faster [1].
Another known case in which computer-based algorithms are faster is sorting: the computer-based mergesort algorithm is much faster than the insertion sort algorithm that we humans normally use when we need to sort a group of items [1].
In this sense, our paper adds one more example of a case where the usual human algorithm can be improved.
Acknowledgments
This work was supported in part by the National Science Foundation grants HRD-0734825 and HRD-1242122 (Cyber-ShARE Center of Excellence) and DUE-0926721, and by an award "UTEP and Prudential Actuarial Science Academy and Pipeline Initiative" from Prudential Foundation.
References
1. Cormen T.H., Leiserson C.E., Rivest R.L., Stein C. Introduction to Algorithms. Cambridge, Massachusetts: MIT Press, 2009.
Received by the editorial office: 02.03.17