
Using Intrinsic Time in Portfolio Optimization

Boris VASILYEV

International Financial Laboratory, Financial University, Moscow b_va@hotmail.com

Abstract. The concept of intrinsic time was introduced in Mandelbrot's (1963) paper and further developed in a discussion paper by Müller et al. (1993). As reported by Didenko et al. (2014), there is some evidence that sampling price series in the volume domain results in almost normal returns, which could help to overcome some common issues in portfolio optimisation. First, we briefly survey the flaws of the classic approach to portfolio optimisation; then we test the statistical properties of intrinsic-time-sampled return series, theorize on how intrinsic time could help in handling the issues of portfolio optimisation, and empirically test our conjectures. We show that using intrinsic time helps to overcome such flaws of Modern Portfolio Theory as poor diversification and reliance on normality of returns.


Key words: Intrinsic time, modern portfolio theory, portfolio optimisation, returns normality.

INTRODUCTION

Soon after the publication of "Portfolio Selection" by Harry Markowitz (1952), commonly regarded as the seminal work of modern portfolio theory based on mean-variance analysis (referred hereinafter to as "MVO"), it became evident that the original method resulted in poorly diversified and unstable portfolios, leading to overtrading and excessive risks. These drawbacks were only aggravated as the number of assets in the optimization universe increased, which most probably motivated Markowitz to introduce the first linear constraints into the process in his work published several years later (1956), and it has given ground to numerous modifications and developments of the MVO process ever since.

OVERTRADING

With respect to MVO, excessive trading activity mainly stems from frequent portfolio rebalancing, which leads to placing additional market orders to open or close positions in order to meet the new asset allocation. A major cause of such instability is the combination of the unavoidable presence of estimation errors in the input data, on the one hand, and the high sensitivity of MVO to even minor changes in inputs, on the other. Hypothetically, if the input data were free of such errors, the optimization would indeed provide an efficient or optimal portfolio composition. In reality, the inputs are statistical estimates derived from or generated on the basis of historical data and carry some amount of disturbance. Michaud (1989) posited that such inaccuracy results in overinvestment in some securities or assets and underinvestment in others. For example, take two assets A and B such that A's true expected return is slightly lower than B's but its standard deviation is slightly higher, and suppose the returns of both assets have identical correlations with the returns of every other asset in the portfolio universe. Asset B is then preferred between the two, and if the inputs are free of estimation error, it dominates A. But if such errors reside in the input data, asset A may have an estimated expected return that is higher, and an estimated standard deviation that is lower, than those of B. In this case, portfolio optimization will erroneously assign a higher weight to A than to B. Moreover, estimation error may fluctuate around zero over time, so even with the same true expected values for A and B in the future, the optimizer may generate the opposite result under the influence of the changing estimation error, leading to dramatic portfolio rebalancing.
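To make the mechanism concrete, here is a minimal numeric sketch (our illustration, not from the paper; all parameter values are assumptions) of the two-asset example: with the true inputs the optimizer prefers B, while small noise in the mean estimates swings the tangency weights from sample to sample.

```python
# Minimal sketch: estimation error flips MVO allocations between two
# otherwise similar assets A and B. All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(42)

mu_true = np.array([0.049, 0.050])   # A's true mean slightly below B's
sigma = np.array([0.21, 0.20])       # A's true volatility slightly above B's
rho = 0.3
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

def tangency_weights(mu, cov, rf=0.0):
    """Unconstrained tangency (maximum Sharpe) weights: w ~ inv(cov) @ (mu - rf)."""
    raw = np.linalg.solve(cov, mu - rf)
    return raw / raw.sum()

print("weights with true inputs:", tangency_weights(mu_true, cov))

# Perturb the mean estimates with small estimation noise and watch the
# allocation swing between A and B.
for trial in range(5):
    mu_est = mu_true + rng.normal(0.0, 0.01, size=2)
    print(f"trial {trial}: weights =", tangency_weights(mu_est, cov))
```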

MVO's high sensitivity to minor changes in the input data can therefore lead to dramatic changes in overall portfolio composition. An update that slightly alters the expected return or standard deviation of one asset can trigger a radical portfolio reconstruction, changing the weight not only of that particular asset but reallocating all the assets in the universe under consideration. Such recomposition results in excessive trading, deemed necessary to meet the new allocations each time the inputs change.

Overtrading is usually associated with two main problems: an increased possibility of capital loss and excessive transaction costs. The first mainly results from overinvestment in a few assets, which is typical of poorly diversified, concentrated portfolios. The inputs to MVO are always estimates that may turn out to be quite far from the future true values. Thus, if the market turns against the investor, low portfolio diversification, i.e. allocation into fewer assets, will increase potential losses; if the investor uses leverage, the losses are magnified further and may exceed the investor's capital. The other issue is transaction costs. These are often fixed per trade and in total therefore depend on the number of trades executed. Frequent asset reallocation results in higher transaction costs, which harm the return of the portfolio and hence the overall profitability of the investment.

The problems of excessive turnover and overinvestment in a few assets can be addressed by introducing specific constraints into the MVO process. These may limit the minimum and maximum weights of a single asset (or asset class) and/or preset a minimum number of assets to be included in the portfolio, ensuring a proper level of diversification.

Transaction costs may be reduced by composing more stable portfolios. For example, Lummer et al. (1994) proposed using sensitivity analysis for this purpose, to dampen dramatic changes in the recommended portfolio caused by minor changes in inputs. The method implies selecting an efficient portfolio, then altering the MVO inputs to construct a set of portfolios with the new inputs, and examining how close they are to the initial efficient one.

The goal is to find a set of asset weights that remains close to the efficient proportions under several different sets of plausible inputs. In addition, the expected benefit of any reallocation advised by MVO can be weighed against the transaction costs necessary for its execution.

EXPECTED RETURNS

Yet another issue with MVO is that the theory requires expected returns as an input. They cannot be observed directly in the market, but only estimated, commonly on the basis of past data, which leads to unstable portfolio weights. MVO would generate a perfect solution if the inputs were the true expected returns and the true covariance matrix. In reality, estimates of expected returns consist mostly of noise, and estimates of the covariance matrix are very noisy too. Scherer (2002) noted that "mean-variance optimization is too powerful a tool for the quality of our data".

The main problem is to estimate expected returns with sufficient accuracy, and several methods have been published to address it. For example, Black and Litterman (1992) proposed estimating expected returns by combining Capital Asset Pricing Model (CAPM) equilibrium returns with subjective investor views. However, the investor's market views must also be quantified, specifying both the expected returns and their uncertainty, which may be considered a drawback of this approach. Another way is the Arbitrage Pricing Theory (APT), described by Ross (1976), which models asset returns (in discrete time) as a linear combination of independent factors. The APT constructs expected returns as statistical estimates fitted to historical data, which in turn may also lead to unstable allocations.
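For reference, the Black-Litterman posterior mean is commonly written as follows (a standard statement from the literature, not spelled out in the original article):

$$\mu_{BL} = \left[(\tau\Sigma)^{-1} + P^{\top}\Omega^{-1}P\right]^{-1}\left[(\tau\Sigma)^{-1}\Pi + P^{\top}\Omega^{-1}Q\right], \qquad \Pi = \delta\,\Sigma\,w_{\mathrm{mkt}},$$

where $\Pi$ is the CAPM equilibrium excess-return vector implied by the market weights $w_{\mathrm{mkt}}$ and risk aversion $\delta$, the matrix $P$ and vector $Q$ encode the investor's views, $\Omega$ expresses their uncertainty, and $\tau$ scales confidence in the equilibrium prior.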

Another, empirical way to estimate expected returns is to rely on consensus forecasts of professional market participants. Information vendors (such as Bloomberg) provide this data to their subscribers. Experience shows, however, that such expectations usually fall far from the true values, at least as far as single-asset forecasts are concerned. Meanwhile, empirical expectations for aggregate indexes prove to be much more accurate. This permits the use of a single-index model for expected-return estimation, with the index forecast as the only macroeconomic parameter influencing a particular asset's expected return. Multifactor models are less simplified and imply regression analysis based on several factors, for example indexes of various industry sectors. They assess expected returns in more detail than single-index models, as they consider a stock's dependence not on the general index but on the index of the corresponding sector. However, multifactor models also provide quite rough estimates within wide confidence intervals.

Following the MVO routine, once the input parameters have been estimated, optimization is performed assuming all inputs are certain, so estimation errors are carried into the allocation process. Various approaches exist to stabilize the optimization results with respect to estimation errors; they can be divided into two main groups.

The first approach is to reduce the estimation errors of the input parameters via econometric methods. For example, to reduce the impact of estimation noise, Michaud (1998) used the resampling method. The idea behind it is that realized returns are very noisy, and the optimization procedure is very unstable with respect to small changes in inputs; therefore the portfolio should be optimized over sets of similar return series that are randomly generated following some preset parameters. On average, the noise should even out. Thus, starting with the original return series, new series are generated by adding small amounts of noise to the original. The MVO procedure is then run over all series, eventually yielding a set of different optimal portfolios composed for the same expected return level. The average over all optimal portfolios is expected to be more stable with respect to errors in the input data.
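A minimal sketch of this resampling loop follows (our illustration; for brevity it averages unconstrained minimum-variance portfolios rather than whole frontiers, and all names are assumptions):

```python
# Sketch of Michaud-style resampling: re-draw noisy return histories,
# optimize each draw, and average the resulting optimal weights.
import numpy as np

def min_variance_weights(cov):
    """Unconstrained minimum-variance weights: w ~ inv(cov) @ 1."""
    raw = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return raw / raw.sum()

def resampled_weights(returns, n_draws=500, seed=0):
    """Average minimum-variance weights over resampled return histories."""
    rng = np.random.default_rng(seed)
    t, n = returns.shape
    mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
    weights = np.zeros(n)
    for _ in range(n_draws):
        sample = rng.multivariate_normal(mu, cov, size=t)   # simulated history
        weights += min_variance_weights(np.cov(sample, rowvar=False))
    return weights / n_draws                                # noise averages out
```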

The second way is to shrink the portfolio weights directly, using bounds, penalties in the objective function, or regularization of the input parameters. Jagannathan and Ma (2003) showed that imposing constraints on mean-variance optimization can be interpreted as a modification of the covariance matrix; in particular, lower (upper) bounds decrease (increase) the variances of asset returns. Constraints imposed on the weights thus reduce the degrees of freedom of the optimization, and the allocation then remains within certain intervals (a minimal sketch of such bounds follows below). But correcting estimation errors has proved to be so difficult a task that some studies were devoted to showing that heuristic allocations perform even better than MVO-generated ones in terms of the Sharpe ratio. For example, DeMiguel et al. (2009) assessed the performance of 14 different portfolio models and of the equally weighted portfolio on several datasets and came to the conclusion that detailed and sophisticated models did not produce better results than the naive equally weighted portfolio.
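The weight-bounds idea can be sketched with a generic quadratic solver (illustrative only; the bound values are assumptions, not taken from Jagannathan and Ma):

```python
# Sketch: minimum-variance optimization with per-asset weight bounds and
# a full-investment constraint, solved with scipy's SLSQP.
import numpy as np
from scipy.optimize import minimize

def constrained_min_variance(cov, lower=0.02, upper=0.30):
    """Minimum-variance weights subject to lower <= w_i <= upper, sum(w) = 1."""
    n = cov.shape[0]
    res = minimize(
        fun=lambda w: w @ cov @ w,                        # portfolio variance
        x0=np.full(n, 1.0 / n),                           # start from 1/n
        bounds=[(lower, upper)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x
```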

Consequently, Lindberg (2009) mentioned one more way to deal with the problem of expected-return estimation: simply ignoring it. This method stems from the classical 1/n strategy, which simply puts 1/n of the investor's capital into each of the n available assets. No doubt, such a strategy should be well diversified. However, covariation between different assets may prevent this from being the case, and since it is possible to obtain rather good estimates of the covariances between asset returns, this information can also be used in portfolio construction. Fernholz (2002) proposed to consider expected returns as dependent on ranks. Such ranks can be established, for example, from the distribution of market capitalization: rank 1 is assigned to the asset with the highest market capitalization, rank 2 to the next highest, and so forth. A paper by Almgren and Chriss (2005) presented a portfolio optimization method which uses such ordering information as the MVO input instead of the return estimates themselves. This approach also benefits from extended use of covariance information.

NORMALITY OF RETURNS

Asset returns follow some statistical distribution, and its form is an issue of the highest importance for financial modeling in general and MVO in particular. Basic assumptions on the behavior of market prices are required to test asset pricing models, to optimize portfolios by computing risk/return efficient frontiers, to price derivatives and determine hedging strategies over time, and to measure and manage financial risks. However, neither economic nor statistical theory has succeeded in determining the exact type of the returns distribution. Thus, the distributions used in empirical and theoretical research are commonly derived from an assumption or estimated from the data used. The prevailing belief adopted in finance is that this is the normal (Gaussian) distribution.

Although returns normality is the standard in financial modeling, some alternatives have been considered, mainly because the Gaussian distribution tends to underestimate the weight of the extreme returns contained in the distribution tails, as well as of the returns falling around the mean. For example, Longin (2005) noted that during stock market crashes (such as in 2008) daily market drops can exceed 20%, which can hardly be explained within the normality universe. In response, several other distributions have been proposed, though without evident success: mixtures of Gaussian distributions, stable Paretian distributions, Student t-distributions, and the class of ARCH processes. The main shortcoming of all these alternatives is that they are not nested, so their adequacy cannot be directly compared, for example, by a likelihood ratio test.

On the other hand, MVO's intended outcome is an optimal portfolio, i.e. one that maximizes the investor's utility function. If this utility function is not quadratic but merely has some upward-concave form, the expected utility should depend only on the distribution of portfolio returns. Such distributions must be two-parameter ones, i.e. fully described by their first two moments, mean and variance, which then also determine the higher-order moments such as skewness and kurtosis. Several distributions, such as the normal, lognormal, or gamma, satisfy this criterion well. With respect to portfolio optimization, however, the distribution in question must satisfy one more criterion. Portfolio optimization deals with a universe of assets (or other portfolios) from which the investor selects what to include in the portfolio. Thus, every portfolio composed as a combination of individual assets must also follow a distribution fully described by its mean and variance. The required distribution must therefore ensure both that individual assets' returns depend on just their mean and variance, and that the returns of any portfolio (combination) of these assets meet the same requirement. The only distribution with finite variance that complies with this is the Gaussian normal one.
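The closure property invoked here can be stated compactly: assuming jointly Gaussian asset returns, every portfolio is again Gaussian and thus fully described by its first two moments,

$$R \sim \mathcal{N}(\mu, \Sigma) \quad\Longrightarrow\quad R_p = w^{\top}R \sim \mathcal{N}\left(w^{\top}\mu,\; w^{\top}\Sigma\,w\right),$$

so both the individual assets and every combination of them satisfy the two-parameter criterion.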

As a result, the paradigm in finance is that MVO can be successfully applied only if asset returns follow the normal distribution, which is determined by its first two moments, the means of returns and their variances. The third and fourth moments of the distribution, the skewness and kurtosis, could in theory be added to the utility function to reflect and explain non-normality of returns, but skewness is believed to be close to impossible to predict, and the predictability of kurtosis is considerably limited as well.

INTRINSIC TIME

MVO is intended to answer a very natural question: if the exact parameters are known, which portfolio maximizes the expected return for a pre-specified level of risk, or which portfolio minimizes the risk for a pre-specified rate of expected return? This would be all the investor needed to hold an optimal portfolio and be happy with it. However, the issues described above, among others, intrude on this picture: the "exact parameters" needed as inputs prove to be uncertain and noisy, and they lead the optimizer to unstable results with underestimated risks.

It becomes evident, however, that the main problem behind all these issues is that asset returns are not normally distributed. This is the reason why the investor cannot accurately estimate expected returns and struggles with unstable solutions, rebalancing, and hence overtrading and other troubles. Realized return values deny the investor a clear view of the true normal distribution that exists in the market but is hidden by noise. It is widely assumed that this is the way things are, and for the purposes of this work, in particular, it is taken as true.

Building on the inherent normality of the returns distribution, most scholars propose various approaches to adjusting realized market returns to suit the Gaussian framework by introducing new parameters that make the models ever more complicated. At some point it becomes evident that many of these sophisticated models perform worse than the simplest naive portfolios, and they are discarded. But one point remains unchanged: the source data is taken from the market and then converted into returns destined for statistical manipulations.

On the other hand, it is known that the proximity of the returns distribution to Gaussian normality is not stable over different time intervals and commonly increases as the frequency decreases. For example, the distribution of monthly returns is closer to the normal one than that of daily, hourly, or minute returns. The cause is deemed to be that longer time intervals contain a relatively lower proportion of noise within the returns; in any case, it is obvious that the proximity of the returns distribution to the normal one depends on time. Time flows constantly, by seconds, minutes, and so on. This seems obvious, but not for the market! One minute in the middle of the trading day is not the same as one minute right before the close. Hence a question: how can one treat all time spans during the day in the same manner? This observation may explain, at least partially, the non-normality that everyone involved has become accustomed to observing.

The next question is what can be used to measure this difference within equal intervals of time, i.e. to tick the market's intrinsic-time clock. Volatility is usually higher during periods of active trading (when our clock should run "faster") and, conversely, lower during inactive ones (when our clock runs "slower"). But volatility is not easy to estimate independently, and its value reflects the situation unevenly depending on the volumes traded, which themselves seem much more interesting to use. Traded volume generally reflects the level of market activity, and this parameter is usually available among common market data.

Bars can now be formed not upon the expiration of an astronomic time interval (end of a second, minute, hour, etc.), but whenever the traded volume reaches a certain preset value since the last bar closed by the same method. This can be considered market intrinsic time. Such a time dimension, the cumulative volume bar (referred hereinafter to as "CVB"), will not coincide with astronomic time but is expected to better reflect the nature and the mood of the market. CVB returns are expected to be closer to the normal distribution, as much of the usual noise may in fact be the mixed-up data of neighboring conventional (astronomically timed) bars, which should disappear once CVBs account for market activity.
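A minimal sketch of this construction, assuming the input consists of two aligned arrays of per-minute close prices and traded volumes (names and data layout are our assumptions):

```python
# Sketch: build cumulative volume bars (CVBs) from one-minute bars.
# A bar closes each time the accumulated traded volume reaches the target.
import numpy as np

def build_cvb(prices, volumes, target_volume):
    """Return the close prices of CVBs: the close of the minute bar on
    which the cumulative volume since the last CVB crossed the target."""
    cvb_closes, cum = [], 0.0
    for px, vol in zip(prices, volumes):
        cum += vol
        if cum >= target_volume:
            cvb_closes.append(px)   # intrinsic-time bar closes here
            cum = 0.0               # start accumulating the next bar
    return np.asarray(cvb_closes)

def log_returns(prices):
    """Log returns of a price series."""
    return np.diff(np.log(prices))
```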

The CVB approach, as market intrinsic time, can potentially provide a better solution to all of the issues described above. Most interestingly, it may allow the use of MVO in its original form, without complicated modifications and add-ons: more stable portfolios avoid overtrading, and expected returns have lower estimation errors, since the returns distribution is close to the normal one and the noise in realized returns is diminished.

CVB PROXIMITY TO NORMAL DISTRIBUTION

Although the data generated by the market is believed to be normally distributed, it is full of noise that prevents investors from enjoying the benefits associated with this normality. The proportion of such disturbances in overall price movements, however, tends to decrease as the size of the time intervals under consideration increases. This mainly results from the magnitude of the market swings, which are evidently bigger over less frequent intervals, while the noise component rises more slowly and steadily fades out. Returns over yearly intervals are much closer to normally distributed data than returns at minute frequency. Longer intervals, however, are often not useful for active trading, which makes it clear that the normalization of higher-frequency data would be a matter of the highest interest for investors. As returns derived from CVBs are believed to be closer to normally distributed data than regular returns (based on conventional astronomic-time bars, referred hereinafter to as "conventional returns"), we have conducted a comparison of both types.

The CVB concept posits that a bar is closed not with a tick of the clock as usual, but when the volume of trades in the particular asset reaches a certain preset value. Such intrinsic time is thus individual for every asset, as particular trading volumes are believed never to coincide across the market. To conduct the experiment, we took one-minute data for the year 2013 for the top ten assets of the Russian stock market1 and compared the proximity to the normal distribution of the returns generated from conventional bar data and from CVBs.

CVB composition is performed iteratively for trading-volume thresholds increasing from 100,000 to 40,000,000 with a step of 100,000. For every asset, the one-minute bar volumes from the original source data files are added up until the sum reaches the value of the current iteration; the current CVB is then considered closed, and the loop starts the same routine for the next CVB. Each subsequent iteration obviously produces fewer bars than the previous one, as it collects more conventional bars to reach the increased target volume, i.e. it generates longer intervals, which may by itself bring the results closer to normality. To offset this influence and to assess the contribution of the CVB concept itself rather than the benefit of scale, we also generate conventional bars of a similar range. When an iteration is finished, it yields a certain number of generated CVBs. Dividing the length of the original source data file by this number, we obtain the number of conventional bars corresponding to one newly generated CVB. We then compose a new conventional-bar dataset matching this particular CVB set and compare the proximity of both generated datasets to normally distributed data with the same mean and standard deviation.

1 Data is available at: http://www.finam.ru/analysis/profile041CA00007/ [accessed 25 February 2015].

Figure 1. Deviation of observed returns from normally distributed data.
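The iteration above can be sketched as follows, reusing build_cvb and log_returns from the previous sketch. The paper does not name its distance metric, so the Kolmogorov-Smirnov statistic against a fitted normal is our assumption:

```python
# Sketch: for each volume threshold, compare CVB returns and matched-
# frequency conventional returns by their distance from a fitted normal.
import numpy as np
from scipy.stats import kstest

def normal_deviation(returns):
    """KS distance between the returns and N(mean, std) fitted to them."""
    mu, sd = returns.mean(), returns.std(ddof=1)
    return kstest(returns, "norm", args=(mu, sd)).statistic

def run_experiment(prices, volumes, thresholds):
    results = []
    for tv in thresholds:
        cvb_ret = log_returns(build_cvb(prices, volumes, tv))
        if len(cvb_ret) < 30:
            break                                   # too few bars to test
        # conventional bars of similar range: sample every step-th minute
        step = max(1, len(prices) // (len(cvb_ret) + 1))
        conv_ret = log_returns(np.asarray(prices)[::step])
        results.append((tv, normal_deviation(cvb_ret),
                        normal_deviation(conv_ret)))
    return results

# e.g. thresholds = range(100_000, 40_000_001, 100_000) as in the text
```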

Figure 1 presents the results for common shares of GAZP (Gazprom) and SBER (Sberbank); the other assets examined provide similar pictures. The deviation from the normal distribution diminishes with interval size for returns based on both conventional bars and CVBs, but the latter improve at a higher rate and reach values several times lower in the left part of the charts. By the end of the iterations the conventional-returns curve tends to approach the CVB one, although CVBs still show lower values throughout the range of observation.

As a result, we may posit that the CVB approach yields returns that are closer to normally distributed than conventional bars do. This advantage is especially evident at smaller time intervals but persists further, albeit less dramatically. Applying CVBs may involve some complexities stemming from the fact that every asset now lives in the market on its own time, but for the practical purposes of optimization this problem can be solved as described below.

PORTFOLIO OPTIMIZATION USING CVB

Portfolio theory by Harry Markowitz gives ground to numerous mean-variance optimizers, most of which attempt to improve the method and bypass its known drawbacks described above. We therefore believe it is interesting to compare portfolio optimization by the original mean-variance analysis performed on conventional and on CVB-based data, as CVB brings no modification to the optimization process itself but merely rearranges the input data. For this purpose we take one-minute interval data (also provided by Finam) for the period from June 2008 till the end of December 2014 for the top ten Russian stocks. The start date was chosen because one of the constituents (namely HYDR, RusHydro) was listed only at the end of May 2008, and we have no data before this point. The portfolio is intended to be rebalanced on a weekly or monthly basis.

Here we encounter a problem arising from the individual CVB time of each constituent of the universe to be optimized. Proceeding in the usual way, we cannot rebalance the portfolio based on CVBs, as the bars of the participating assets close at different moments and there is no uniform conventional cut-off time. This issue can be solved by several means; we use the following one. Since the CVB is intended to arrange the data in a more natural way, the direction of its construction does not matter. In other words, returning to Figure 1 above, CVB construction performed from the last data point backward to the first would produce the same result in the chart. Thus, we can perform portfolio optimization at any point of conventional time by constructing the CVB series backward from that point.

Similar to the procedure used in the experiment on proximity to the normal distribution, at every portfolio optimization point we imitate the conventional series with a CVB one to make the comparison fair. For example, if we perform monthly optimization at point X of the conventional data and therefore use X months of previous data, we adjust the CVB dataset accordingly. Specifically, we take the total trading volume of each asset over the whole period up to point X and divide it by X, the number of months used for optimization. This gives the average volume per month, which becomes the target volume for CVB composition. This is admittedly the simplest approach: it does not account, for example, for global changes in volumes across the periods, which may be significant for the Russian market and could be handled with averages, but we leave that outside this research for the sake of simplicity. Once we have the target volume, we construct CVBs starting backward from point X. The number of CVBs is also X, so both the conventional and the CVB datasets are now equally sized and ready for input to the optimizer.
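This alignment can be sketched under the same assumptions as the earlier code (the averaging of total volume over X periods follows the text; all names are illustrative):

```python
# Sketch: backward CVB construction from a rebalancing point, with the
# target volume set to the average traded volume per period.
import numpy as np

def backward_cvb_closes(prices, volumes, n_periods):
    """Build roughly n_periods CVBs walking backward from the last bar."""
    target = float(np.sum(volumes)) / n_periods   # average volume per period
    closes, cum = [], 0.0
    for px, vol in zip(reversed(list(prices)), reversed(list(volumes))):
        cum += vol
        if cum >= target:
            closes.append(px)
            cum = 0.0
    return np.asarray(closes[::-1])               # restore chronological order
```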

Figure 2. Static Transition Maps.

Figure 3. Back test transition maps.

The optimization results are interesting when comparing static and historical portfolio compositions for both data samples. Figure 2 shows static transition maps for 30 December 2014. The left map refers to portfolios based on conventional bars, the right map to CVB-based ones. Each map represents a hundred portfolios sitting on the efficient frontier and sorted by return (or risk) from the lowest to the highest. The X-axis is labeled with expected portfolio risk, while the Y-axis represents the weights of the participating stocks.

CVB-based efficient portfolios are evidently more diversified, containing eight to nine assets up to the middle of the map, while portfolios composed from the conventional data sample consist of only four assets, with two of them, LKOH (Lukoil) and SNGS (Surgutneftegas), dominating at the beginning. GMKN (GMK Norilsky Nickel) dominates the most risky/profitable portfolios on the right of both maps. The higher level of diversification lowers portfolio risk, as these charts clearly demonstrate.

The starting risk level of 0.169 for the frontier based on conventional bars is reached by the CVB-based one only in the right half of its map. As concentrated portfolios are considered one of the known shortcomings of Markowitz optimization, the considerably higher diversification of CVB-based efficient portfolios demonstrates a clear advantage of CVB over conventional data sampling.

Figure 3 represents transition maps of the optimal Sharpe portfolios for the whole period from 2008 to the end of 2014 for both data-sampling methods with monthly rebalancing. At each time point, all generated efficient portfolios are compared by their Sharpe ratios, calculated as the portfolio's expected (excess) return divided by its expected risk, and the best portfolio is included in the map. The charts display the same peculiarity as the static maps: CVB-based portfolios are at least twice as well diversified over the whole period under consideration. The assets participating in the portfolios in the left map are also included in the respective portfolios in the right chart, which, however, contain some further assets ignored by the optimization based on conventional data.

Figure 4. Optimization performance (cumulative returns of the Benchmark, simple Markowitz, and CVB portfolios, August 2008 to July 2014).

A back test conducted for both data-sampling methods also allows comparing their real market performance. The results are shown in Figure 4.

Both sets of portfolios outperform the benchmark, calculated as the cumulative return of the assets' market capitalization. CVB-based portfolios perform better than those based on conventional data at the beginning, but lose ground in the second part of the graph. This may result from the initial assumption that the target volume for the CVB step is constant over the period, or CVB may simply work better in the bull market that Russia experienced from the second half of 2008 till mid-2011.

CONCLUSION

The CVB approach to data sampling proved to have a real effect on portfolio composition even with pure, standard optimization tools. It yields more diversified portfolios and can provide results outperforming those generated from conventionally sampled data. As the CVB approach is not an optimization tool itself, it can easily be combined with any existing optimization technique to enhance its positive features.

Based on the experiments approximating normally distributed series, CVB is believed to demonstrate better performance on smaller intervals. There are thus two main directions in which to evolve the research. The first is to apply CVB to more frequent rebalancing, for example on a daily or even shorter basis, where common portfolio optimization is not traditionally used. The second is to try CVB on more developed and vigorous markets, such as that of the USA. Both developments will allow tuning the method and improving its performance for future application.

REFERENCES

Black, F., & Litterman, R. (1992), "Global portfolio optimization", Financial Analysts Journal, 48 (5), 28-43.

Chriss, N. A., & Almgren, R. (2005), "Portfolios from sorts", http://ssrn.com/abstract=720041 [accessed 17 Mar 2015].

DeMiguel, V., Garlappi, L., & Uppal, R. (2009), "Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy?", Review of Financial Studies, 22 (5), 1915-1953.

Didenko, A., Dubovikov, M., Poutko, B. (2014), "Forecasting market transition to an unsteady state using decomposition of volatility to dynamic components", Working paper (in Russian).

Fernholz, E. R. (2002), Stochastic portfolio theory (pp. 1-24). Springer New York.

Jagannathan, R., & Ma, T. (2003), "Risk reduction in large portfolios: Why imposing the wrong constraints helps", The Journal of Finance, 58 (4), 1651-1684.

Lindberg, C. (2009), "Portfolio optimization when expected stock returns are determined by exposure to risk", Bernoulli, 15 (2), 464-474.

Longin, F. (2005), "The choice of the distribution of asset returns: How extreme value theory can help?", Journal of Banking & Finance, 29 (4), 1017-1035.

Lummer, S. L., Riepe, M. W., & Siegel, L. B. (1994), "Taming your optimizer: A guide through the pitfalls of mean-variance optimization", Global Asset Allocation: Techniques for Optimizing Portfolio Management.

Mandelbrot, B. (1963). "The stable Paretian income distribution when the apparent exponent is near two", International Economic Review, 4 (1), 111-115.

Markowitz, H. (1952), "Portfolio selection", The Journal of Finance, 7 (1), 77-91.

Markowitz, H. (1956), "The optimization of a quadratic function subject to linear constraints", Naval Research Logistics Quarterly, 3 (1-2), 111-133.

Michaud, R. O. (1989), "The Markowitz optimization enigma: is 'optimized' optimal? ", Financial Analysts Journal, 45 (1), 31-42.

Michaud, R. O., & Michaud, R. (1999), U.S. Patent No. 6,003,018. Washington, DC: U.S. Patent and Trademark Office.

Müller, U. A., Dacorogna, M. M., Davé, R. D., Pictet, O. V., Olsen, R. B., & Ward, J. R. (1993). "Fractals and intrinsic time: A challenge to econometricians", Unpublished manuscript, Olsen & Associates, Zürich.

Ross, S. A. (1976), "The arbitrage theory of capital asset pricing", Journal of Economic Theory, 13 (3), 341-360.

Scherer, B. (2002), "Portfolio resampling: Review and critique", Financial Analysts Journal, 58 (6), 98-109.
