Homogeneity Principle for Cryptographic Protocol Analysis and Synthesis

Aleksandrova E. B. Peter the Great St. Petersburg Polytechnic University St. Petersburg, Russia helen@ibks.spbstu.ru

Abstract. The homogeneity principle for the analysis and synthesis of cryptographic protocols is introduced, according to which increasing the number of mathematical problems does not increase the security of the underlying information system. A comparison of two authentication protocols in accordance with this principle is given. Ways to reduce complexity heterogeneity in a digital signature protocol are proposed.

Keywords: cryptographic protocol, digital signature, elliptic curve discrete logarithm problem, complexity, homogeneity.

Introduction

The security of cryptographic algorithms and protocols is based on mathematical problems, which can be fixed or variable. These problems come from different cryptographic primitives. As a rule, to break the security of a cryptosystem it is sufficient to solve any one of these mathematical problems. The complexity of these problems decreases over time, so it seems natural to subordinate all of them to the basic problem, i. e. the problem with the highest complexity.

The complexity homogeneity principle is considered, and some ways of using it for the analysis and synthesis of cryptographic protocols are proposed.

Hierarchy of mathematical problems

The security of large cryptographic systems is based on diverse cryptographic primitives: symmetric and public-key encryption, digital signatures, zero-knowledge proofs, etc. The security of these primitives rests on the complexity of mathematical problems, of which there are tens or even hundreds:

• the problem of key breaking and keyless reading for different ciphers [1];

• integer factorization problem, to which RSA problem can be reduced [2];

• square root problem, to which Rabin [3] and Fiat-Shamir [4] cryptosystem problems can be reduced;

• discrete logarithm problem in a multiplicative group of prime field, to which Elgamal [5] problem can be reduced;

• discrete logarithm problem in a multiplicative group of extended field, to which Diffie - Hellmann [6] problem can be reduced;

• elliptic curve discrete logarithm problem, to which GOST R 34.10-2012 [7] problem can be reduced;

• problems on the ideal class group of number fields [8];

• the subset sum and knapsack problem [9];

• the problem of elliptic curve isogeny computation [10-13];

• lattice-based problems [14-16], etc.

It is important to classify these problems, reflecting both the peculiarities of the problems and methods of their solution. It seems natural to divide these problems into the following unified classes [1]:

• the security of ciphers against known-plaintext attacks is based on the Boolean satisfiability problem, under the assumption that the plaintexts and corresponding ciphertexts define the key uniquely (to achieve this, the known plaintext should be 1.38 times longer than the key). If the encryption operations are represented as a composition of Boolean functions, then every bit of the intermediate text is a computable Boolean function of the key. Let the objective function be the conjunction of bitwise equalities between the values of the Boolean functions corresponding to the ciphertext and the values of the true ciphertext. This function is computable and holds true only for the true key;

• the integer factorization problem for n = pq in the RSA cryptosystem can be reduced to the problem of computing the order and structure of a finite Abelian group. Indeed, if the order φ(n) = (p − 1)(q − 1) of the group Zn* is known, the divisors of n are the roots of the polynomial x² − (n − φ(n) + 1)x + n;

• discrete logarithm problems in different cyclic groups represent the unified discrete logarithm problem in the cyclic group of computable order;

• the knapsack cryptosystems problems can be reduced to the unified knapsack problem;

• the problem of elliptic curve isogeny computation is a special case of the problem of computing the morphism in Abelian groups category.
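The φ(n)-based reduction in the factorization bullet above can be sketched in a few lines. This is a toy illustration; the function name and the example primes are our own:

```python
from math import isqrt

def factor_from_phi(n, phi):
    """Recover p and q from n = p*q and phi = (p-1)*(q-1):
    they are the roots of x^2 - (n - phi + 1)*x + n."""
    s = n - phi + 1            # s = p + q, since phi = n - (p + q) + 1
    d = isqrt(s * s - 4 * n)   # discriminant (p + q)^2 - 4pq = (p - q)^2
    return (s - d) // 2, (s + d) // 2

print(factor_from_phi(101 * 113, 100 * 112))   # (101, 113)
```

Thus knowing the group order φ(n) immediately yields the factors, which is exactly why factorization reduces to order computation.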

Parameterization of these classes of unified problems with mathematical structures (ciphers, groups, categories) gives the basic mathematical problems. Thus, the solution methods for the unified problems are obviously applicable to the basic problems.

The more complex the problem, the more secure the corresponding cryptographic algorithm or protocol. But the complexity decreases in time at the rate

s(t, T) = (log S(T) − log S(T + t)) / (t · log S(T)),

where S(T) is the strength at the initial time T [1]. This formula can be interpreted as the decrease of the initial brute-force strength of the cryptographic algorithm per bit of the key. The expression for s(t, T) can be rewritten as log S(T + t) = (1 − t·s(t, T)) log S(T), from which S(T + t) = S(T)^(1 − t·s(t, T)) = e^((1 − t·s(t, T)) ln S(T)). So if s(t, T) changes slowly, the dependence S(T + t) for a symmetric iterated cipher or a hash function can be approximately described by a decaying exponential.
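As a numeric illustration of this decay model (the 128-bit key size is our own toy choice; the 0.023 per-year rate is the figure quoted in the text for symmetric ciphers):

```python
import math

S_T = 2.0 ** 128            # initial strength S(T): brute force on a 128-bit key
s_rate = 0.023              # assumed constant decay rate s(t, T), per year

def strength(t_years):
    """S(T + t) = S(T) ** (1 - t * s(t, T))."""
    return S_T ** (1 - t_years * s_rate)

# effective key length (in bits) remaining after 10 years:
print(math.log2(strength(10)))    # 128 * (1 - 0.23) = 98.56 bits
```

Under this model a 128-bit design margin erodes to roughly 98 bits over a decade, which motivates predicting complexity several years ahead, as discussed below.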

However, the rate of complexity decrease differs across mathematical problems and cryptographic algorithms. For example, the complexity of key recovery for symmetric ciphers falls at an approximately constant rate of about 0.023 year⁻¹. At the same time, the complexity of the factorization problem falls unevenly and depends on the length of the composite number [1].

So, the problem complexity depends not only on its size but also on time. When constructing cryptosystems it is necessary to predict the high complexity of the mathematical problems at least a few years ahead. If at the time of project completion problem A is harder than problem B of the same (or different) size, it does not mean that the same relation will hold in ten years.

For the analysis of mathematical problems, the relation of polynomial reducibility is often used, which does not depend on current complexity estimates. Problem A is not harder than problem B if every instance of A can be solved by a deterministic polynomial algorithm given the solution of problem B. We also say that problem A can be reduced to problem B. If the reverse reduction also holds, problems A and B are called equivalent.

Polynomial reduction allows one to single out, in a set S of mass mathematical problems, the subset of problems to which a given mass problem A can be reduced. Let us call this subset SA of problems, to which the problem A can be reduced, the problems associated with A. For example, the elliptic curve discrete logarithm problem over a prime finite field Fp can be reduced to one of the following problems:

• discrete logarithm problem in multiplicative group of finite extension of the field Fp, resulting from Weil pairing [17];

• the problem of computing the Mordell-Weil group of the curve E(K), given over the number field K = Q[α], where the minimal polynomial of α has a root in Fp, and the problem of lifting a point from the finite field to the number field [18].

As a rule, the security of a cryptographic algorithm or protocol depends not on one but on several problems, solving any of which violates its security. If none of these problems is associated with the others, we call these problems heterogeneous.

Homogeneity principle

Cryptosystems are often made heterogeneous, with different algorithms for encryption, hashing, digital signing, etc. It may seem that the intruder then has to solve several problems rather than one. But in practice it is sufficient to solve just one problem, whichever is the easiest at that point in time.

For example, if in a cryptographic system the session key of a symmetric cipher is established by the Diffie-Hellman protocol and a digital signature is used, then to violate security it is enough to solve any one of the problems: compute the symmetric cipher key, break Diffie-Hellman, or break the signature key. In the latter case the intruder can impersonate a legitimate user to another user or mount a man-in-the-middle attack.

Let A1, ..., At be the set of problems on whose complexity the security of a cryptographic algorithm is based. The relation of polynomial reduction divides this set into equivalence classes of associated problems. Problems belonging to different classes are then heterogeneous.

We call the number of heterogeneous problems forming the base of cryptosystem security the homogeneity of complexity. The following proposition holds.

Proposition 1. Let cryptographic algorithms C1 and C2 be such that in order to violate the security of C1 one must solve problem A, and to violate the security of C2 it suffices to solve one of the heterogeneous problems A or B. Then algorithm C2 is not more secure than algorithm C1.

Proof. Let SA, SB be the complexities of problems A and B respectively. Then, by the absorption law, the complexity of breaking C2 is min(SA, SB) ≤ SA. ■

Corollary. Increasing the number of different heterogeneous problems in cryptographic algorithm does not increase its security.

Now we can formulate the complexity homogeneity principle: the number of heterogeneous problems should be decreased during cryptosystem construction [19]. Thus, a conglomeration of heterogeneous ciphers, random number generators, hash functions, signatures, and other authentication algorithms in a large information system is useless and even potentially dangerous.

Cryptographic protocols analysis and synthesis

In practice, cryptographic protection is implemented in the form of cryptographic protocols. Indeed, it is not enough merely to encrypt data: one must generate encryption keys and deliver them to users, change the keys in time, and provide authentication. This raises a set of different applied problems, which are solved with the help of certain cryptographic protocols. The security of the whole system is determined by the composition of all cryptographic protocols.

The problem of protocol analysis and synthesis is one of the main problems in cryptography. A complete solution of this problem requires a large amount of research: analysis of all basic and associated mathematical problems, security analysis of the actual protocols, and security analysis of complex protocols in view of their interdependence. Moreover, since the complexity of mathematical problems changes over time, the result of such research will reflect the real situation only at the time of the end (or even the beginning) of these studies. So let us consider a simplified approach to the analysis and synthesis of protocols based on the homogeneity principle.

If there are two cryptographic protocols, where the first is based on the problem set A and to break its security it is sufficient to solve any of those problems, and the second is based on the problem set B, where A ⊇ B, then the first protocol is not more secure than the second one.

This ordering of protocols allows comparing their security and optimizing the composition of protocols, including the generation of keys and auxiliary random numbers. Analysis and optimization then amount to formalizing the set of problems underlying the security and minimizing this set.

This set consists of basic and variable problems. For example, in the digital signature standard GOST R 34.10-2012 [7] the elliptic curve discrete logarithm problem cannot be changed. At the same time, the algorithms for generating the auxiliary numbers and the secret key are not regulated by any procedure; that is, the related problems can vary. Among the possible algorithms there exists an optimal one, for which the problem of a security breach is not easier than the elliptic curve discrete logarithm problem.

Comparative analysis of authentication protocols

Consider two analogous elliptic curve protocols for multiple authentication, based on digital signature and public-key encryption respectively. The first party proves its authenticity, and the second checks it. It is assumed that the prover does not trust the verifier.

In the first protocol, the verifier generates a random request and sends it to the prover, who signs it according to GOST R 34.10-2012. The prover generates the signature and sends it to the verifier, who checks the validity of the signature.

In the second protocol, the verifier generates a random request, encrypts it with the public key and sends it to the prover for decryption. The prover decrypts the message using the secret key and sends it to the verifier, who compares this text with the original.

These two protocols solve the same problem: the verifier makes sure that the prover knows the secret key (the signature key or the decryption key). Both protocols are implemented on the same elliptic curve. Which of them is more secure? Analysis based on the homogeneity principle allows answering this question.

Let us consider the digital signature protocol from GOST R 34.10-2012 [7].

Let E(Fp): y² = f(x) be an elliptic curve with j-invariant not equal to 0 or 1728, let m be the message to be signed, h a hash function, and P ∈ E(Fp) a generator of order r. Then {E(Fp), P, Q}, where Q = dP, is the public key, and the integer d is the private key. To sign the message m, the sender computes e = h(m) (mod r), setting e = 1 if e = 0; generates a random integer k, 0 < k < r, and computes R = (xR, yR) = kP, where xR ≢ 0 (mod r); and computes s = (d·xR + k·e) (mod r), where s ≠ 0.

Signature verification proceeds as follows: check that 0 < xR (mod r) < r and 0 < s < r; compute e = h(m) (mod r), setting e = 1 if e = 0; compute R' = (s·e⁻¹ (mod r))P − (xR·e⁻¹ (mod r))Q. If xR' ≡ xR (mod r) holds, the signature is valid.
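The sign/verify flow above can be sketched at toy scale. The curve y² = x³ + 2x + 2 over F17 with base point P = (5, 1) of prime order r = 19, and SHA-256 reduced mod r, are illustrative stand-ins rather than the standardized parameters; only the protocol structure follows the text.

```python
import hashlib
import secrets

p, a = 17, 2                 # toy curve y^2 = x^3 + 2x + 2 over F_17 (stand-in)
P, r = (5, 1), 19            # base point of prime order 19

def ec_add(A, B):            # affine point addition; None is the point at infinity
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):            # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

def H(m):                    # stand-in for the standardized hash, reduced mod r
    return int.from_bytes(hashlib.sha256(m).digest(), 'big') % r

def sign(d, m):
    e = H(m) or 1                          # e = h(m) mod r, set to 1 if zero
    while True:
        k = secrets.randbelow(r - 1) + 1   # fresh random 0 < k < r
        xR = ec_mul(k, P)[0] % r
        s = (d * xR + k * e) % r
        if xR and s:                       # retry on the forbidden zero cases
            return xR, s

def verify(Q, m, sig):
    xR, s = sig
    if not (0 < xR < r and 0 < s < r):
        return False
    e = H(m) or 1
    v = pow(e, -1, r)
    # R' = (s * e^-1) P - (xR * e^-1) Q; valid iff x(R') = xR (mod r)
    C = ec_add(ec_mul(s * v % r, P), ec_mul(-xR * v % r, Q))
    return C is not None and C[0] % r == xR

d = secrets.randbelow(r - 1) + 1   # private key
Q = ec_mul(d, P)                   # public key Q = dP
assert verify(Q, b"request", sign(d, b"request"))
```

In the real standard p and r are 256 or 512 bits long and h is GOST R 34.11-2012; the toy scale is only meant to make the mod-r bookkeeping of the protocol visible.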

The security of the digital signature protocol directly depends on the complexity of hash function inversion and of computing hash function collisions: in the first case it is possible to compute a message for an existing signature, and in the second to prepare a pair of messages with the same value of e, sign one of them, and then replace it with the other. The complexity of computing hash function collisions by Pollard's algorithm is O(√r).

This protocol imposes stringent requirements on the random number generator: if the random number k is predictable (computable) at least once or repeats during the public key lifetime, the signature key can be computed with polynomial complexity.

Note that to break the secret signature key it is sufficient not only to predict a random number that will appear in the future, but also to find a random number that was used previously. For example, if the intruder has gained access to an algorithmic random number generator that allows recovering an earlier state from the current one, all digital signatures created under the current key should be considered invalid. Therefore, if potential unauthorized access to the computer that implements signature generation is possible (for example, when sending the computer for repair after a failure), and if the random number generator is implemented algorithmically, the algorithm of random number generation must be irreversible.

To compute the secret key d it is sufficient to solve one of the discrete logarithm problems: for the points P and Q, or for the points P and R (the value xR is easily restored from its residue xR (mod r), and the coordinate yR can be computed as √f(xR) (mod p)). Thus, the requirements for the random number generator are more rigid than for the key generator: the intruder can choose what to look for, k or d, and d can be computed from k. Hence the Kolmogorov entropy of the digital signature key does not exceed the entropy of the random numbers from the generator [1].

So the security of GOST R 34.10-2012 is based on the following heterogeneous assumptions.

1. The elliptic curve discrete logarithm problem is hard (solving this problem leads to the disclosure of the secret key).

2. The Kolmogorov entropy of the random bit generator is not less than the Kolmogorov entropy of the key generator (otherwise the signature strength is reduced compared to the design rating).

3. The probability of two equal random numbers occurring during the key lifetime is negligible (otherwise one can expect disclosure of the secret key with low complexity).

4. The hash function h is computationally irreversible (otherwise one can substitute another message for the signed one).

5. The hash function collision computation problem is hard (otherwise one can prepare a pair of messages forming a collision and replace the signed message with the other one after signing).

Now let us consider the public-key encryption protocol [1], based on the Diffie-Hellman and ElGamal protocols. The common parameters are an elliptic curve E(Fp) and a point P ∈ E(Fp) of order r. The integer d is the private key, and the point Q = dP ∈ ⟨P⟩ is the public key. A hash function h is used, and the plaintext m satisfies 0 ≤ m ≤ r − 1.

The verifier chooses an integer k at random and computes R ← kP, e ← h(kQ), c ← (m + e) (mod r). The ciphertext is the pair (R, c).

The prover in the authentication protocol computes e ← h(dR) and m = (c − e) (mod r).

The security of this protocol is based on the Diffie-Hellman problem: given points P, Q = dP, R = kP, compute kdP = kQ = dR. This problem reduces in polynomial time to the elliptic curve discrete logarithm problem, since it suffices to compute one of the logarithms k or d.
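A toy-scale sketch of this encryption-based exchange, under the same illustrative curve assumptions as before (y² = x³ + 2x + 2 over F17, P = (5, 1) of order r = 19); the byte encoding used to hash a point is our own stand-in:

```python
import hashlib
import secrets

p, a = 17, 2                 # toy curve y^2 = x^3 + 2x + 2 over F_17 (stand-in)
P, r = (5, 1), 19            # base point of prime order 19

def ec_add(A, B):            # affine point addition; None is the point at infinity
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if A == B
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):            # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

def H(point):                # hash an EC point into Z_r (encoding is our choice)
    data = b"%d,%d" % point
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % r

def encrypt(Q, m):           # verifier's side: R <- kP, e <- h(kQ), c <- m + e
    k = secrets.randbelow(r - 1) + 1
    return ec_mul(k, P), (m + H(ec_mul(k, Q))) % r

def decrypt(d, R, c):        # prover's side: e <- h(dR), m = c - e
    return (c - H(ec_mul(d, R))) % r

d = secrets.randbelow(r - 1) + 1       # private key
Q = ec_mul(d, P)                       # public key Q = dP
R, c = encrypt(Q, 11)
assert decrypt(d, R, c) == 11
```

Decryption works because kQ = kdP = dR, so both sides derive the same pad e; that shared value is exactly the Diffie-Hellman problem instance named in the text.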

The requirements imposed on the random bit generator here are less stringent than in the signature protocol. The point R must not be used more than once, because a single computation of the random number k allows decrypting the message m encrypted with this number, but not computing the key (unlike in the digital signature protocol). Similarly, repetition of the random number k allows decrypting one of the messages if the other, encrypted with the same k, is known.

The hash function and the arithmetic in Zr are needed to eliminate the practical redundancy present in elliptic curve point coordinates. This redundancy is due to the fact that the length of two coordinates equals 2·log₂ p, while the length of the group order is log₂ r, where r is not larger than p.

The set of heterogeneous problems here consists of three problems and is a subset of the set for the digital signature protocol. Therefore, if the discrete logarithm problem and the Diffie-Hellman problem are equivalent, then the public-key encryption protocol is not less secure than the digital signature one.

Both private and public keys of the digital signature must be changed periodically. Furthermore, the hash function initial vector must be changed: if a collision is computed, all users of the cryptosystem are in danger.

So, if the signer can generate random numbers, he can also generate his own private key. It follows from the homogeneity principle that for GOST R 34.10-2012 an algorithmic generator equivalent to the elliptic curve discrete logarithm problem is optimal. Such a generator, with regard to the signature protocol, is not worse than an "ideal" strong random number generator.

Similarly, one can determine the optimal key generators for symmetric ciphers: keys are to be generated algorithmically with the same cipher. Thus, the principle of complexity homogeneity allows determining the best key generator for this cryptographic algorithm.

Strengthening the signature protocol

The first way to decrease the number of heterogeneous mathematical problems is to modify the hash function by including the auxiliary random elliptic curve point in its argument.

If we use e' = h(m ‖ xR (mod r)) (mod r) instead of e in the digital signature protocol, the protocol still works, but the attack based on prepared hash function collisions is excluded. The value e' can be computed not only by the signer but also by the verifier, so the protocol remains correct. Replacing e with e' does not reduce the cryptographic strength of the signature protocol. To prove this, it is sufficient to consider the impact of this substitution on the problems of inverting the hash function and computing its collisions. To invert the value e one needs to find m; to invert e' one needs to solve the same problem under the condition that the remaining part of the argument is fixed and equal to xR (mod r). Obviously, if one can invert e, then one can invert e' (it suffices that a portion of the argument bits be fixed). The same considerations apply to collision computation for the two variants of the hash function.
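The substitution can be expressed directly; here SHA-256 reduced modulo a toy order r = 19 and the byte encoding of xR are our stand-ins for the actual hash and serialization:

```python
import hashlib

r = 19                                   # toy group order (stand-in)

def h(data):                             # stand-in hash reduced into Z_r
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % r

def e_classic(m):                        # e = h(m) mod r, as in the standard
    return h(m) or 1

def e_bound(m, xR):                      # e' = h(m || (xR mod r)) mod r
    return h(m + (xR % r).to_bytes(4, 'big')) or 1

# a collision prepared in advance for e_classic does not transfer to e_bound,
# because e' also depends on the fresh per-signature value xR:
values = {e_bound(b"contract", xR) for xR in range(r)}
assert len(values) > 1
```

The verifier can recompute e' because xR (mod r) is part of the transmitted signature, so no extra data needs to be exchanged.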

However, while in the original version of the protocol the intruder may search for collisions throughout the initial vector lifetime, in the second case he can do so only during the interval between signature generation and verification. In the original version, the intruder may participate in initial vector generation and combine his choice with the collision search. In the second case such an attack is impossible, since the values xR (mod r) are always different (equal values immediately lead to key disclosure in both cases). Thus, replacing e with e' strictly strengthens the signature protocol.

In addition, an iterative hash function with fixed input size can be used instead of the GOST R 34.11-2012 hash function. To construct it, one can use a one-way function that is collision-free for arguments of length n < (⌊log₂ r⌋ − 2)/2 bits, where r is the order of the elliptic curve point group. In this case, the elliptic curve discrete logarithm problem reduces to the problem of hash function inversion. For this purpose, the hash function from [20] can be adapted to elliptic curves.

If a standard hash function h is required (for example, GOST R 34.11-2012 [21]), one can use e' = x_{h(m)P} (mod r), the x-coordinate of the point h(m)P. Here e' depends on h(m), so it is natural to assume that to invert e' one must first compute h(m) from h(m)P, i. e. solve the elliptic curve discrete logarithm problem. The elliptic curve should be the same as in the digital signature protocol.

The second way is to use an algorithmic elliptic curve random bit generator, for which the elliptic curve discrete logarithm problem reduces to the problem of random number prediction and to the problem of computing the previous generator state. Such a generator can be given by the following recurrence. Let T_i = (x_{T_i}, y_{T_i}) be the current point of the same elliptic curve E(Fp) as in the digital signature protocol, where the order r, close to p, bounds the maximum generator period. The next state is given as T_{i+1} = (i + x_{T_i})P. The dependence of the point on the iteration number i is meant to prevent "looping" after a small number of iterations and cannot be considered a cryptographic enhancement. The generator returns some (for example, 16) least significant bits of x_{T_i}.

The generator state inversion problem is to compute the previous state from the current one.
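A toy-scale sketch of this recurrence, on the same illustrative curve as above (y² = x³ + 2x + 2 over F17, P = (5, 1) of order r = 19); with such a small p we output a single low bit per step instead of 16, and the guard against a zero multiplier is our own addition:

```python
p, a = 17, 2                  # toy curve y^2 = x^3 + 2x + 2 over F_17 (stand-in)
P, r = (5, 1), 19             # base point of prime order 19

def ec_add(A, B):             # affine point addition; None is the point at infinity
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if A == B
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):             # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

def keystream(T0, n):
    """T_{i+1} = (i + x_{T_i}) * P; output low bits of the x-coordinate."""
    T, out = T0, []
    for i in range(n):
        k = (i + T[0]) % r or 1   # toy guard: avoid the point at infinity
        T = ec_mul(k, P)
        out.append(T[0] & 1)      # one least significant bit per step
    return out

bits = keystream((5, 1), 32)
assert len(bits) == 32 and set(bits) <= {0, 1}
```

The sequence is fully determined by the seed point, which is what makes recovering a state equivalent to computing the discrete logarithms i + x_{T_i}, as Proposition 2 below argues.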

Proposition 2. The problems of elliptic curve discrete logarithm and generator state inversion are equivalent.

Proof. Every point T_{i+1} corresponds to a unique logarithm i + x_{T_i}, so only one previous state T_i is possible. Hence every point ±T_{i+1} with the same x-coordinate corresponds to the unique previous state ±T_i, and every generator state is defined up to an elliptic curve automorphism (if the j-invariant is not equal to 0 or 1728, the automorphism group consists of the two elements ±1). The set of logarithms in the generator equation is a subset of all possible elliptic curve discrete logarithms.

If the elliptic curve discrete logarithm problem is solved, then one can compute the previous state T_i from the current state T_{i+1}, i. e. the generator inversion problem reduces to the elliptic curve discrete logarithm problem.

If the generator inversion problem is solved, then one can compute the previous state T_i from the current state T_{i+1} for any initial logarithm corresponding to the point T_0. So it is possible to compute elliptic curve discrete logarithms of the form i + x_{T_i}. By varying the logarithm of the initial point T_0, one can obtain the full set of possible logarithms, coinciding with the set of remainders modulo r. Therefore, the elliptic curve discrete logarithm problem reduces to the generator inversion problem. ■

Prediction of the next state of the generator, when a part of the random sequence is known, reduces to the computation of x_{T_i} from one or several least significant bits of the x-coordinates of T_i and the previous points. Numerous experiments have shown that when 16 least significant bits are output, single bits, pairs, triples, ..., and groups of eight bits are distributed statistically uniformly.

According to [22], the problem of prediction of generator state, if some previous elements are known, can be reduced to the discrete logarithm problem.

This generator is not the only possible one. One can build a random number generator on an elliptic curve based on a recursive collision-free hash function with brute-force inversion complexity (see [1]), for which the dependence on the iteration number is not needed. Moreover, the following condition can be used instead of the least significant bits of the x-coordinate: if y < p/2 the generator outputs 0, otherwise it outputs 1. The y-coordinate of a point of order r is not zero, and for every T = (x, y) there exists −T = (x, p − y), so such a generator outputs 0 and 1 equiprobably.


Not only a digital signature but also a zero-knowledge proof can be used for multiple message authentication under an untrusted verifier. It can be shown that a zero-knowledge proof protocol has the same homogeneity as the digital signature one, but it requires transferring far greater amounts of information and is slower. So zero-knowledge proofs based on the elliptic curve discrete logarithm problem have no advantages compared with digital signatures.

Conclusion

Reducing the number of mathematical problems underlying the security of cryptographic algorithms and protocols, and conforming the primitives to the hardest problem, is becoming more and more widespread. In recent years, random number generators and hash functions have been proposed whose security is based on new, so-called post-quantum mathematical problems of lattice theory and elliptic curve isogenies (see, for example, [23-26]). Thus, the formulated complexity homogeneity principle can be applied and extended not only for existing practical purposes but also for future cryptographic systems.

References

1. Rostovtsev A. G., Makhovenko E. B. Teoreticheskaya kriptografiya [Theoretical Cryptography], St. Petersburg, Professional, 2005, 480 p.

2. Rivest R. L., Shamir A., Adleman L. A method for obtaining digital signatures and public-key cryptosystems, Communications of the ACM, 1978, Vol. 21, no. 2, pp. 120-126.

3. Rabin M. Digitalized signatures and public key functions as intractable as factorization, MIT Laboratory for Computer Science, Technical Report MIT/LCS/TR-212, 1979.

4. Fiat A., Shamir A. How to prove yourself: Practical solutions to identification and signature problems, Cryptology -CRYPTO'85. LNCS, 1990, Vol. 263, pp. 186-194.

5. ElGamal T. A public key cryptosystem and a signature scheme based on discrete logarithms, IEEE Trans. Inf. Theory, 1985, Vol. IT-31, pp. 469-472.

6. Diffie W., Hellman M. New directions in cryptography, IEEE Trans. Inf. Theory, 1976, Vol. IT-22, pp. 644-654.

7. GOST R 34.10-2012 Informatsionnaya tekhnologiya. Kriptograficheskaya zashchita informatsii. Protsessy formirovaniya i proverki elektronnoy tsifrovoy podpisi [Information technology. Cryptographic data security. Signature and verification processes of [electronic] digital signature], Moscow, Standartinform, 2013, 29 p.

8. Buchmann J., Maurer M., Moller B. Cryptography based on number fields with large regulator, J. Théorie des Nombres de Bordeaux, 2000, Vol. 12, no. 2, pp. 293-307.

9. Chor B., Rivest R. A knapsack-type public key cryptosystem based on arithmetic in finite fields, IEEE Trans. Inf. Theory, 1988, Vol. IT-34, pp. 901-909.

10. Broker R., Charles D., Lauter K. Evaluating large degree isogenies and applications to pairing based cryptography, Pairing-Based Cryptography - Pairing 2008, LNCS, 2008, Vol. 5209, pp. 100-112.

11. Childs A., Jao D., Soukharev V. Constructing elliptic curve isogenies in quantum subexponential time, Math. Cryptol, 2014, Vol. 8 (1), pp. 1-29.

12. Feo L. de, Jao D., Plut J. Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies, PostQuantum Cryptography, LNCS, 2011, Vol. 7071, pp. 19-34.

13. Rostovtsev A. G., Makhovenko E. B. Cryptosystem on the category of isogenous elliptic curves [Kriptosistema na

kategorii izogennykh ellipticheskikh krivykh], Problemy in-formatsionnoy bezopasnosti. Kompyuternye sistemy [Information Security Problems. Computer Systems], 2002, no. 3, pp. 74-81.

14. Dov Gordon S., Katz J., Vaikuntanathan V. A Group Signature Scheme from Lattice Assumptions, Cryptology ePrint archive. Report 2011/060. Available at: https://eprint.iacr. org/2011/060.pdf (accessed 28.02.2017).

15. Laguillaumie F., Langlois A., Libert B., Stehlé D. Lattice-Based Group Signatures with Logarithmic Signature Size, Cryptology ePrint archive. Report 2013/308. Available at: https://eprint.iacr.org/2013/308.pdf (accessed 28.02.2017).

16. Regev O. On lattices, learning with errors, random linear codes, and cryptography, J. ACM, 2009, Vol. 56 (6), pp. 34:134:40.

17. Galbraith S., Hess F., Smart N. P. Extending the GHS Weil descent attack, Eurocrypt 2002, LNCS, 2002, Vol. 2332, pp. 29-44.

18. Silverman J. A bound for the Mordell-Weil rank of an elliptic surface after a cyclic base extension, J. Algebraic Geometry, 2000, Vol. 9, pp. 301-308.

19. Rostovtsev A. G., Makhovenko E. B. Printsip maksimal-noy odnorodnosti [Maximum homogeneity principle]//Trudy "Metody i tekhnicheskie sredstva obespecheniia bezopasnosti informatsii [Proc. "Methods and technical tools of information security"], St. Petersburg, 2003, pp. 96-98.

20. Chaum D., Heijst E. van, Pfitzmann B. Cryptographi-cally strong undeniable signatures, unconditionally secure for the signer, Cryptology - CRYPTO'91, LNCS, 1992, Vol. 576, pp. 470-484.

21. GOST R 34.11-2012 Informatsionnaya tekhnologiya. Kriptograficheskaya zashchita informatsii. Funktsiya kheshiro-vaniya [Information Technology. Cryptographic data security. Hash Function], Moscow, Standartinform, 2013, 19 p.

22. Long D., Wigderson A. The discrete logarithm hides O(log n) bits, SIAM J. Comput., 1988, Vol. 17, pp. 363-372.

23. Debiao H., Jianhua C., Jin H. A Random number generator based on isogenies operations, Cryptology ePrint archive. Report 2010/094. Available at: https://eprint.iacr.org/2010/094. pdf (accessed 28.02.2017).

24. Kuan Ch. Pseudorandom Generator Based on Hard Lattice Problem, Cryptology ePrint archive. Report 2014/002. Available at: https://eprint.iacr.org/2014/002.pdf (accessed 28.02.2017).

25. Peikert C., Rosen A. Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices, Theory of Cryptography. LNCS, 2006, Vol. 3876, pp. 145-166.

26. Zhang J., Chen Yu., Zhang Zh. Programmable Hash Functions from Lattices: Short Signatures and IBEs with Small Key Sizes, Cryptology ePrint archive. Report 2016/523. Available at: https://eprint.iacr.org/2016/523.pdf (accessed 28.02.2017).


14. Dov Gordon S. A Group Signature Scheme from Lattice Assumptions / S. Dov Gordon, J. Katz, V. Vaikuntanathan // Cryp-tology ePrint archive. Report 2011/060. - URL: https://eprint. iacr.org/2011/060.pdf (дата обращения: 28.02.2017).

15. Laguillaumie F. Lattice-Based Group Signatures with Logarithmic Signature Size / F. Laguillaumie, A. Langlois, B. Libert, D. Stehle // Cryptology ePrint archive. Report 2013/308. -URL: https://eprint.iacr.org/2013/308.pdf (дата обращения: 28.02.2017).

16. Regev O. On lattices, learning with errors, random linear codes, and cryptography / O. Regev // J. ACM. - 2009. -Vol. 56 (6). - P. 34:1-34:40.

17. Galbraith S. Extending the GHS Weil descent attack / S. D. Galbraith, F. Hess, N. P. Smart // Eurocrypt 2002. -LNCS. - 2002. - Vol. 2332. - P. 29-44.

18. Silverman J. A bound for the Mordell-Weil rank of an elliptic surface after a cyclic base extension / J. H. Silverman // J. Algebraic Geom. - 2000. - Vol. 9. - P. 301-308.

19. Ростовцев А. Г. Принцип максимальной однородности / А. Г. Ростовцев, Е. Б. Маховенко // Методы и технические обеспечения безопасности информации: тр. конф. -СПб., 2003. - С. 96-98.

20. Chaum D. Cryptographically strong undeniable signatures, unconditionally secure for the signer / D. Chaum, E. van Heijst, B. Pfitzmann // Cryptology - CRYPTO'91. - LNCS. -1992. - Vol. 576. - P. 470-484.

21. ГОСТ Р 34.11-2012 Информационная технология. Криптографическая защита информации. Функция хэширования. - М.: Стандартинформ, 2013. - 19 с.

22. Long D. Discrete logarithm hides O (log n) bits / D. Long, A. Widgerson // SIAM J. comput. - 1988. - Vol. 17. - P. 363-372.

23. Debiao H. A Random number generator based on isogenies operations / H. Debiao, C. Jianhua, H. Jin // Cryptology ePrint archive. Report 2010/094. - URL: https://eprint.iacr. org/2010/094.pdf (дата обращения: 28.02.2017).

24. Kuan Ch. Pseudorandom Generator Based on Hard Lattice Problem / Ch. Kuan // Cryptology ePrint archive. Report 2014/002. - URL: https://eprint.iacr.org/2014/002.pdf (дата обращения: 28.02.2017).

25. Peikert С. Efficient collision-resistant hashing from worst-case assumptions on cyclic lattices / С. Peikert, A. Rosen // Theory of Cryptography. - LNCS. - 2006. - Vol. 3876. - P. 145-166.

26. Zhang, J. Programmable Hash Functions from Lattices: Short Signatures and IBEs with Small Key Sizes / J. Zhang, Yu Chen, Zh. Zhang // Cryptology ePrint archive. Report 2016/ 523. - URL: https://eprint.iacr.org/2016/523.pdf (дата обращения: 28.02.2017).