
UDC 519.6

DOI: 10.18698/1812-3368-2021-3-77-93

REAL POWER LOSS REDUCTION BY ENHANCED TRAILBLAZER OPTIMIZATION ALGORITHM

K. Lenin gklenin@gmail.com

P.V.P. Siddhartha Institute of Technology, Kanuru, Vijayawada, India

Abstract

This paper applies a Teaching-learning-based Trailblazer optimization algorithm (TLBOTO) to the real power loss reduction problem. The Trailblazer optimization algorithm (TOA) is divided into two phases: a trailblazer phase and an adherent (follower) phase, which represent exploration and exploitation respectively. To keep the search from stagnating in local optima, Teaching-learning-based optimization (TLBO) is integrated with TOA: the learner phase of TLBO is added to the adherent phase. The proposed TLBOTO increases the exploration capability of the algorithm and speeds up convergence. Exploration is strengthened by coupling the teaching and learning phases: the exploration segment of the trailblazer algorithm identifies the region containing the best solution; the trailblazer then acts as a teacher, teaching the other entities and generating a new entity. The new entity is compared with the trailblazer, and by the greedy selection rule the better of the two is retained as the trailblazer and continues the search, after which the trailblazer position is updated. TLBOTO is validated on the IEEE 30-bus system (with and without the L-index). Real power loss is reduced, and the percentage of loss reduction is improved

Keywords

Optimal reactive power, Transmission loss, Teaching, Learning, Trailblazer

Received 26.02.2021 Accepted 29.03.2021 © Author(s), 2021

Introduction. Reducing real power loss is a key concern in power system operation. Numerous numerical procedures [1-6] and evolutionary approaches (Ant lion optimizer, hybrid PSO-Tabu search, quasi-oppositional teaching-learning-based optimization, harmony search, stochastic fractal search, improved pseudo-gradient search particle swarm optimization, an effective metaheuristic algorithm, seeker optimization, diversity-enhanced particle swarm optimization) [7-15] have been applied to the real power loss reduction problem, yet many of them fail to reach the global optimum. In this paper the Teaching-learning-based Trailblazer optimization algorithm (TLBOTO) is applied to this problem. The Trailblazer optimization algorithm (TOA) imitates the survival rules and swarming behaviour of animal groups: adherents (followers) track the marks left by the trailblazer, combined with their own sense of movement. The trailblazer and adherent roles are not fixed; as the iterations proceed, they are interchanged according to each entity's search ability. Teaching-learning-based optimization (TLBO) models the teaching process of a teacher and the reciprocal learning of pupils, and has two phases: teaching and learning. In the proposed TLBOTO the trailblazer is treated as a teacher and executes the teaching phase, which rapidly updates the trailblazer and increases the exploration capability of the method. The adherents are influenced both by the trailblazer and by their own experience, much like pupils in TLBO, so each adherent executes the TLBO learning phase; this lets the adherents advance steadily and increases the exploitation capability of the algorithm. Trial entities use greedy selection to produce offspring: in the learning stage two entities are randomly selected for comparison, and the entity with poorer performance learns from the one with better performance, thereby creating new entities.

The step length is of two kinds: fixed and adjustable. The exponential step belongs to the adjustable kind and combines the mathematical exponential function with a step factor. Introducing the exponential step factor shrinks the solution space efficiently, which accelerates convergence. Entities among the adherents move toward the trailblazer, and the improved entity nearest to them updates the position, instead of a randomly chosen neighbour; meanwhile, while the adherent position is being steered, the step size is switched to an exponentially growing step. The validity of TLBOTO is confirmed on the IEEE 30-bus system (with and without the L-index). Real power loss reduction is achieved, and the percentage of loss reduction is improved.

Problem formulation. In recent years, voltage stability and voltage collapse have become major concerns in power system planning and operation. Voltage magnitudes alone are not a reliable indicator of how far an operating point is from the collapse point, and reactive power support and voltage problems are intrinsically related. Hence, this paper formulates reactive power dispatch as a multi-objective optimization problem, with loss minimization and maximization of the static voltage stability margin as the objectives. Voltage stability evaluation using modal analysis is used as the indicator of voltage stability.

The objective function of the problem is defined in general form as

Minimize F(x, y), subject to E(x, y) = 0, I(x, y) ≤ 0,

where F is the objective to be minimized, and E and I are the equality and inequality constraints of the optimal reactive power dispatch (ORPD) problem. The vector "x" consists of the control variables: reactive power compensators (Qc), transformer tap settings (T) and generator voltage levels (Vg):

x = [VG1, …, VGNg; QC1, …, QCNc; T1, …, TNt];

"y" consists of the dependent variables: the slack generator real power PGslack, the load-bus voltage levels VL, the generator reactive powers QG and the apparent line flows SL:

y = [PGslack; VL1, …, VLNLoad; QG1, …, QGNg; SL1, …, SLNt].
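As a concrete illustration of how the control vector x can be handled in code, the sketch below packs generator voltages, shunt compensator settings and tap positions into one vector and clips a candidate back to its box limits. The dimensions (6 generators, 9 shunts, 4 tap changers) and the bound values are illustrative assumptions, not taken from the paper.

```python
# Illustrative packing of the ORPD control vector x = [Vg | Qc | T].
# Counts and bounds are assumed example values, not the paper's data.
NG, NC, NT = 6, 9, 4
lower = [0.95] * NG + [0.00] * NC + [0.90] * NT
upper = [1.10] * NG + [0.05] * NC + [1.10] * NT

def clip_controls(x):
    """Force a candidate control vector back inside its box limits."""
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, lower, upper)]

# A candidate with out-of-range generator voltages is pulled back to 1.10 p.u.
x0 = clip_controls([1.2] * NG + [0.0] * NC + [1.0] * NT)
```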

Then the single-objective formulation is defined as follows. The fitness function OF1, which minimizes the real power loss (MW) in the system, is written as

OF1 = P_min = min [ Σ_{m=1}^{NTL} G_m (V_i² + V_j² − 2 V_i V_j cos θ_ij) ],

where NTL is the number of transmission lines, G_m is the conductance of the transmission line between the i-th and j-th buses, and θ_ij is the phase angle between buses i and j.
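The loss objective OF1 can be sketched as a short routine. The line list and bus voltages below are made-up illustrative numbers, not IEEE 30-bus data.

```python
# Minimal sketch of OF1: sum over lines of G_m (V_i^2 + V_j^2 - 2 V_i V_j cos(theta_ij)).
import math

def real_power_loss(lines, V, theta):
    """lines: iterable of (i, j, G) with line conductance G in p.u."""
    loss = 0.0
    for i, j, G in lines:
        loss += G * (V[i] ** 2 + V[j] ** 2
                     - 2.0 * V[i] * V[j] * math.cos(theta[i] - theta[j]))
    return loss

# Three-bus toy network (illustrative values).
V = [1.05, 1.02, 0.99]
theta = [0.0, -0.02, -0.05]
lines = [(0, 1, 4.0), (1, 2, 3.0)]
P_loss = real_power_loss(lines, V, theta)
```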

The voltage deviation fitness function OF2 is given by

OF2 = min [ Σ_{i=1}^{NLB} |V_Lk − V_Lk^desired|² + Σ_{i=1}^{Ng} |Q_GK − Q_KG^Lim|² ],

where V_Lk is the load voltage at the k-th load bus, V_Lk^desired is the desired voltage at the k-th load bus, Q_GK is the reactive power generated at the k-th generator bus, Q_KG^Lim is its reactive power limit, and NLB and Ng are the numbers of load buses and generating units.
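A minimal sketch of OF2 as stated above; the bus values are illustrative, not the paper's case-study data.

```python
# OF2 sketch: squared load-voltage deviations plus squared reactive-limit deviations.
def voltage_deviation(VL, VL_desired, QG, QG_lim):
    dev = sum((v - vd) ** 2 for v, vd in zip(VL, VL_desired))
    dev += sum((q - ql) ** 2 for q, ql in zip(QG, QG_lim))
    return dev

# Two load buses off their 1.0 p.u. target, one generator over its limit.
OF2 = voltage_deviation([1.03, 0.97], [1.0, 1.0], [0.6], [0.5])
```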

The voltage stability index (L-index) fitness function OF3 is given by

OF3 = min L_max,  L_max = max [L_j], j = 1, …, NLB,

where

L_j = | 1 − Σ_{i=1}^{NPV} F_ji (V_i / V_j) |,  F_ji = −[Y1]⁻¹ [Y2],

so that

L_max = max | 1 − [Y1]⁻¹ [Y2] (V_i / V_j) |.
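The L-index evaluation can be sketched as below, assuming the participation matrix F (= −[Y1]⁻¹[Y2], from the partitioned admittance matrix) has already been computed. All numbers are illustrative.

```python
# Sketch of the L-index: for each load bus j, L_j = |1 - sum_i F[j][i] * V_i / V_j|
# over the generator buses i; the system index is max_j L_j.
def l_index(F, V_gen, V_load):
    L = []
    for j, Vj in enumerate(V_load):
        s = sum(F[j][i] * (Vi / Vj) for i, Vi in enumerate(V_gen))
        L.append(abs(1.0 - s))
    return max(L)

# F rows are load buses, columns generator buses (illustrative entries).
Lmax = l_index([[0.6, 0.3], [0.5, 0.4]], [1.05, 1.02], [1.00, 0.98])
```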

The equality constraints (power balance) are

0 = P_Gi − P_Di − V_i Σ_{j∈NB} V_j [G_ij cos(θ_i − θ_j) + B_ij sin(θ_i − θ_j)];

0 = Q_Gi − Q_Di − V_i Σ_{j∈NB} V_j [G_ij sin(θ_i − θ_j) − B_ij cos(θ_i − θ_j)],

where NB is the number of buses; P_G and Q_G are the real and reactive power generation; P_D and Q_D are the real and reactive power demand at the bus; G_ij and B_ij are the mutual conductance and susceptance between bus i and bus j.

The inequality constraints are:
generator bus voltage V_Gi: V_Gi^min ≤ V_Gi ≤ V_Gi^max, i ∈ ng;
load bus voltage V_Li: V_Li^min ≤ V_Li ≤ V_Li^max, i ∈ nl;
switchable reactive power compensation Q_Ci: Q_Ci^min ≤ Q_Ci ≤ Q_Ci^max, i ∈ nc;
reactive power generation Q_Gi: Q_Gi^min ≤ Q_Gi ≤ Q_Gi^max, i ∈ ng;
transformer tap setting T_i: T_i^min ≤ T_i ≤ T_i^max, i ∈ nt;
transmission line flow S_Li: S_Li ≤ S_Li^max, i ∈ nl.
Here nc, ng and nt are the numbers of switchable reactive power sources, generators and transformers.

The equality constraints are satisfied by running the power flow program.

The active power generations Pgi, generator terminal bus voltages Vgi and transformer tap settings tk are the control variables and are self-restricted by the optimization algorithm. The active power generation at the slack bus Pslack, the load bus voltages Vload and the reactive power generations Qgi are the state variables and are restricted by adding quadratic penalty terms to the objective function. The multi-objective fitness (MOF) function is then defined by

MOF = OF1 + x·OF2 + y·OF3 = OF1 + Σ_{i=1}^{NL} x_v [V_Li − V_Li^min]² + Σ_{i=1}^{NG} x_g [Q_Gi − Q_Gi^min]² + y·OF3,

where the penalty set-points are

V_Li^min = { V_Li^max, if V_Li > V_Li^max;  V_Li^min, if V_Li < V_Li^min };

Q_Gi^min = { Q_Gi^max, if Q_Gi > Q_Gi^max;  Q_Gi^min, if Q_Gi < Q_Gi^min }.
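The penalized fitness can be sketched as below. The weights (x_v, x_g, and the OF2/OF3 weights) and the limits are illustrative assumptions, not values stated in the paper.

```python
# Sketch of MOF = OF1 + w2*OF2 + w3*OF3 plus quadratic penalties on
# load-voltage and reactive-generation limit violations.
def mof(of1, of2, of3, VL, VL_min, VL_max, QG, QG_min, QG_max,
        w2=1.0, w3=1.0, xv=100.0, xg=100.0):
    penalty = 0.0
    for v, lo, hi in zip(VL, VL_min, VL_max):
        if v > hi:
            penalty += xv * (v - hi) ** 2
        elif v < lo:
            penalty += xv * (v - lo) ** 2
    for q, lo, hi in zip(QG, QG_min, QG_max):
        if q > hi:
            penalty += xg * (q - hi) ** 2
        elif q < lo:
            penalty += xg * (q - lo) ** 2
    return of1 + w2 * of2 + w3 * of3 + penalty

# One load-bus voltage 0.02 p.u. above its 1.10 limit -> penalty 100 * 0.02^2 = 0.04.
val = mof(4.5, 0.08, 0.1, [1.12], [0.95], [1.10], [0.4], [0.0], [0.5])
```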

Teaching learning based Trailblazer optimization algorithm. The Trailblazer optimization algorithm (TOA) imitates the survival rules and behaviour of animal groups in the wild. According to each entity's fitness value, the group is divided into two roles: the trailblazer and the adherents (followers). The trailblazer is responsible for finding the way forward to the best food source at any location and leaves marks behind for the adherents' reference. The adherents track the trailblazer based on the marks it leaves and on their own sense of movement. Neither role is fixed: as the iterations proceed, the trailblazer and adherent roles are interchanged according to each entity's search ability, so a trailblazer may become an adherent, and an adherent may likewise become the trailblazer.

In the exploration phase the trailblazer updates its location according to

Y_p^{k+1} = Y_p^k + Random3 · (Y_p^k − Y_p^{k−1}) + B,

where Y_p^{k+1} is the updated position vector of the trailblazer; Y_p^k is its present position vector; Y_p^{k−1} is its preceding position vector; k is the current iteration number; Random3 ∈ [0, 1].
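The trailblazer update can be transcribed directly from the formula above. The random-walk term B follows the paper's expression B = g2·e^{2k/k_max}; treating B as a fresh scalar draw per coordinate is an assumption, since the text leaves this unspecified.

```python
# Sketch of the trailblazer update Y^{k+1} = Y^k + r3 (Y^k - Y^{k-1}) + B.
import math
import random

def update_trailblazer(Y_cur, Y_prev, k, k_max, rng=random.Random(0)):
    r3 = rng.random()  # Random3 in [0, 1]
    return [yc + r3 * (yc - yp)
            + rng.uniform(-1.0, 1.0) * math.exp(2.0 * k / k_max)  # B = g2 * e^{2k/k_max}
            for yc, yp in zip(Y_cur, Y_prev)]

Y_new = update_trailblazer([1.0, 2.0], [0.9, 1.8], k=1, k_max=100)
```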

After the trailblazer update, the exploitation (adherent) phase follows:

Y_i^{k+1} = Y_i^k + α·Random1·(Y_j^k − Y_i^k) + β·Random2·(Y_p^k − Y_i^k) + ε,  i ≥ 2,

where Y_i^{k+1} is the new position vector of the i-th entity after the update; Y_i^k is the present position vector of the i-th entity; Y_j^k is the position vector of a neighbouring entity; Y_p^k is the present position vector of the trailblazer; Random1, Random2 ∈ [0, 1]; α is the interaction coefficient and β is the attraction coefficient;

ε = (1 − k/k_max) g1 S_ij,  B = g2 e^{2k/k_max}.

Here g1, g2 ∈ [−1, 1]; S_ij is the distance between two entities; B and ε provide random walk steps for all entities; k and k_max are the current and maximum iteration numbers.
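The adherent update with the vanishing step ε can be sketched as follows. Treating the inter-entity distance S_ij per coordinate as |Y_j − Y_i| is an assumption for illustration.

```python
# Sketch of the adherent update:
# Y_i^{k+1} = Y_i + alpha*r1*(Y_j - Y_i) + beta*r2*(Y_p - Y_i) + eps,
# eps = (1 - k/k_max) * g1 * S_ij, with g1 ~ U(-1, 1).
import random

def update_follower(Y_i, Y_j, Y_p, k, k_max, alpha, beta, rng=random.Random(1)):
    out = []
    for yi, yj, yp in zip(Y_i, Y_j, Y_p):
        r1, r2 = rng.random(), rng.random()
        eps = (1.0 - k / k_max) * rng.uniform(-1.0, 1.0) * abs(yj - yi)
        out.append(yi + alpha * r1 * (yj - yi) + beta * r2 * (yp - yi) + eps)
    return out

Y = update_follower([0.0, 0.0], [1.0, 1.0], [2.0, 2.0],
                    k=50, k_max=100, alpha=1.5, beta=1.5)
```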

Trailblazer optimization algorithm

a. Begin
b. Initialize the parameters
c. Compute the fitness values of the preliminary population
d. Find the trailblazer
e. While k < maximum number of iterations
f. Draw the random coefficients α and β ∈ [1, 2]
g. Update the position of the trailblazer:
Y_p^{k+1} = Y_p^k + Random3 · (Y_p^k − Y_p^{k−1}) + B
h. Check the limit conditions
i. If the new trailblazer is better than the previous trailblazer, then
j. Update the trailblazer
k. End
l. For i = 2 to the population size
m. Update the location of the entities:
Y_i^{k+1} = Y_i^k + α·Random1·(Y_j^k − Y_i^k) + β·Random2·(Y_p^k − Y_i^k) + ε, i ≥ 2
n. Check the limit conditions
o. End
p. Compute the fitness values of the new entities
q. Find the best fitness value
r. If the best fitness < fitness of the trailblazer, then
s. Trailblazer = best entity; fitness = best fitness
t. End
u. For i = 2 to the population size
v. If the new fitness of entity(i) < fitness of entity(i), then
w. Update the entity
x. End
y. End
z. Generate new B and ε
aa. End
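Putting the steps above together, the following is a minimal end-to-end sketch of the TOA loop on a toy sphere objective, not the paper's ORPD setup. The population size, bounds and iteration count are illustrative, and the shrinking random-walk factor e^{−2k/k_max} is an assumption made here to keep the walk bounded (the text writes e^{2k/k_max}).

```python
# Minimal TOA sketch: trailblazer random walk + greedy adherent moves.
import math
import random

def toa_minimize(f, dim, lo, hi, n_pop=20, iters=200, seed=3):
    rng = random.Random(seed)
    clamp = lambda v: min(max(v, lo), hi)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    fit = [f(x) for x in pop]
    b = min(range(n_pop), key=fit.__getitem__)
    leader, leader_fit = pop[b][:], fit[b]
    leader_prev = leader[:]
    for k in range(1, iters + 1):
        alpha, beta = rng.uniform(1, 2), rng.uniform(1, 2)
        # Trailblazer phase: random walk around the current leader.
        r3 = rng.random()
        A = rng.uniform(-1, 1) * math.exp(-2.0 * k / iters)  # shrinking step (assumption)
        cand = [clamp(c + r3 * (c - p) + A) for c, p in zip(leader, leader_prev)]
        cf = f(cand)
        if cf < leader_fit:
            leader_prev, leader, leader_fit = leader, cand, cf
        # Adherent phase: move toward a neighbour and the leader, keep improvements.
        for i in range(n_pop):
            j = rng.randrange(n_pop)
            u = (1.0 - k / iters) * rng.uniform(-1, 1)
            new = [clamp(x + alpha * rng.random() * (xj - x)
                         + beta * rng.random() * (l - x) + u * abs(xj - x))
                   for x, xj, l in zip(pop[i], pop[j], leader)]
            nf = f(new)
            if nf < fit[i]:               # greedy selection
                pop[i], fit[i] = new, nf
                if nf < leader_fit:
                    leader_prev, leader, leader_fit = leader, new[:], nf
    return leader, leader_fit

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = toa_minimize(sphere, dim=5, lo=-5.0, hi=5.0)
```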

Teaching-learning-based optimization (TLBO) models the instruction process of a teacher and the reciprocal learning of pupils [16], and has two phases: teaching and learning.

The teaching phase mimics the instruction given by the teacher. The best individual in the population is designated as the teacher, who works to raise the mediocre level of the pupils toward his or her own and thereby improves the whole class. Mathematically,

y_new = y_old + Random · (y_teacher − TeachingFactor · mean),  Random ∈ [0, 1],

where y_new is the agent after learning from the best agent and y_old is the entity picked for learning; the teaching factor is

TeachingFactor = round [1 + random(0, 1)·(2 − 1)].
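The teaching step above can be sketched directly; the two-learner population is an illustrative toy.

```python
# Sketch of the TLBO teaching step: each learner moves toward the teacher by
# Random * (teacher - TF * mean), with teaching factor TF = round(1 + rand) in {1, 2}.
import random

def teach(pop, teacher, rng=random.Random(7)):
    dim = len(teacher)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    TF = round(1 + rng.random())
    new_pop = []
    for x in pop:
        r = rng.random()
        new_pop.append([xd + r * (td - TF * md)
                        for xd, td, md in zip(x, teacher, mean)])
    return new_pop

pop = [[0.0, 0.0], [2.0, 2.0]]
out = teach(pop, teacher=[2.0, 2.0])
```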

In the reciprocal learning process, two different entities y_r1 and y_r2 are randomly selected from the population, their merits and drawbacks are compared, and the better one is chosen as the source for learning:

y_new = { y_old + random · (y_r1 − y_old), if f(y_r1) < f(y_r2);
          y_old + random · (y_r2 − y_old), otherwise }.
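The learner step, combined with the greedy acceptance described next, can be sketched as below on a toy sphere objective.

```python
# Sketch of the TLBO learner step: move toward the fitter of two randomly
# chosen entities, and keep the move only if it improves (greedy selection).
import random

def learn(x, y_r1, y_r2, f, rng=random.Random(11)):
    better = y_r1 if f(y_r1) < f(y_r2) else y_r2
    cand = [xd + rng.random() * (bd - xd) for xd, bd in zip(x, better)]
    return cand if f(cand) < f(x) else x

sphere = lambda v: sum(c * c for c in v)
x_new = learn([3.0, 3.0], [1.0, 1.0], [2.0, 2.0], sphere)
```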

Trial entities use greedy selection to generate offspring. In the learning stage, two entities are randomly selected for comparison, and the entity with poorer performance learns from the one with better performance, thereby producing new entities.

The step length [17] is of two kinds: fixed and adjustable. The exponential step belongs to the adjustable kind and combines the mathematical exponential function with a step factor. Introducing the exponential step factor shrinks the solution space efficiently, which accelerates convergence.

In the proposed TLBOTO the trailblazer is first treated as a teacher and executes the teaching phase, which rapidly updates the trailblazer and increases the exploration capability of the procedure. The adherents are influenced both by the trailblazer and by their own experience, much like pupils in TLBO, so each adherent executes the TLBO learning phase; the adherents thus advance steadily, which increases the exploitation capability of the algorithm, and the speed with which an adherent tracks the trailblazer can be adjusted. An exponential step length is applied to the adherent so that it tracks the trailblazer closely. Coupling the teaching and learning phases enhances the algorithm's exploration: the exploration segment of the trailblazer algorithm identifies the region with the best solution; after the teaching process is invoked, the trailblazer acts as a teacher, teaching the other entities and generating a new entity. The new entity is compared with the trailblazer, and by the greedy selection rule the better one is retained as the trailblazer and continues the search, after which the trailblazer position is updated. Coupling the learning phase of TLBO increases the exploiting ability of the procedure. In the exploitation segment of the trailblazer algorithm, an entity updates its position toward the trailblazer and the entities close to it; by itself the entity has no intellectual capability and moves randomly, so the learning procedure is added to the adherents to give them a learning capability. In this way the adherents move toward the trailblazer, and the improved entity nearest to them updates the position, instead of a randomly chosen neighbour; meanwhile, while the adherent position is being steered, the step size is switched to an exponentially growing step. The positions are updated according to

Y_i^{k+1} = Y_i^k + α·Random1·(Y_new^k − Y_i^k) + β·Random2·(Y_p^k − Y_i^k) + ε,  i ≥ 2,

where Y_new^k is the position vector of the new entity generated after reciprocal learning.

Teaching learning based Trailblazer optimization algorithm

a. Begin
b. Initialize the parameters
c. Compute the fitness values of the preliminary population
d. Find the trailblazer
e. While k < maximum number of iterations
f. Draw the random coefficients α and β ∈ [1, 2]
g. For i = 1 to the population size
h. Generate a new entity:
y_new = y_old + Random · (y_teacher − TeachingFactor · mean), Random ∈ [0, 1]
i. End
j. Compute the fitness value of the new agent
k. Check the limit conditions
l. If the new agent is better than the trailblazer, then
m. Update the position of the new agent by the teaching rule
n. Else
o. Update the position of the trailblazer:
Y_p^{k+1} = Y_p^k + Random3 · (Y_p^k − Y_p^{k−1}) + B
p. End
q. Compute the fitness values of Y_new^k and Y_p^k
r. Check the limit conditions
s. If the new trailblazer is better than the previous trailblazer, then
t. Update the trailblazer
u. End
v. For i = 2 to the population size
w. Compute the fitness values of the neighbouring entities Y_r1^k and Y_r2^k
x. If the fitness of Y_r1^k < that of Y_r2^k, then
y. Obtain the new entity by
y_new = { y_old + random·(y_r1 − y_old), if f(y_r1) < f(y_r2); y_old + random·(y_r2 − y_old), otherwise }
z. End
aa. Update the location of the entities:
Y_i^{k+1} = Y_i^k + α·Random1·(Y_new^k − Y_i^k) + β·Random2·(Y_p^k − Y_i^k) + ε, i ≥ 2
bb. Check the limit conditions
cc. Compute the fitness values of the new entities
dd. Find the best fitness value
ee. If the best fitness < fitness of the trailblazer, then
ff. Trailblazer = best entity; fitness = best fitness
gg. End
hh. For i = 2 to the population size
ii. If the new fitness of entity(i) < fitness of entity(i), then
jj. Update the entities
kk. End
ll. End
mm. Generate new B and ε
nn. End

Simulation results. Considering the L-index (voltage stability), TLBOTO is validated on the IEEE 30-bus system [18]. Loss is compared with PSO, amended PSO, enhanced PSO, comprehensive learning PSO, adaptive genetic algorithm, canonical genetic algorithm, enriched genetic algorithm, hybrid PSO-Tabu search (PSO-TS), Ant lion optimizer (ALO), quasi-oppositional teaching-learning-based optimization (QO-TLBO), improved stochastic fractal search (ISFS), harmony search (HS), improved pseudo-gradient search particle swarm optimization and the cuckoo search algorithm. Power loss is reduced efficiently and the percentage of loss reduction is improved; in particular, voltage stability is enhanced with minimized voltage deviation. Table 1 shows the loss comparison, Table 2 the voltage deviation comparison, and Table 3 the L-index assessment. Figures 1-3 give the graphical comparisons.

Table 1

Assessment of actual power loss lessening

Technique Actual Power loss (MW)

Standard PSO-TS [8] 4.5213

Basic TS [8] 4.6862

Standard PSO [8] 4.6862

ALO [9] 4.5900

QO-TLBO [10] 4.5594

TLBO [10] 4.5629

Standard GA [11] 4.9408

Standard PSO [11] 4.9239

HSA [11] 4.9059

Standard FS [12] 4.5777

IS-FS [12] 4.5142

Standard FS [14] 4.5275

TOA 4.5007

TLBOTO 4.5002


Fig. 1. Appraisal of actual power loss

Table 2

Evaluation of voltage deviation

Technique Voltage deviancy (PU)

Standard PSO-TVIW [13] 0.1038

Standard PSO-TVAC [13] 0.2064

Standard PSO-TVAC [13] 0.1354

Standard PSO-CF [13] 0.1287

PG-PSO [13] 0.1202


SWT-PSO [13] 0.1614

PGSWT-PSO [13] 0.1539

MPG-PSO [13] 0.0892

QO-TLBO [10] 0.0856

TLBO [10] 0.0913

Standard FS [12] 0.1220

ISFS [12] 0.0890

Standard FS [14] 0.0877

TOA 0.0845

TLBOTO 0.0836

Fig. 2. Appraisal of voltage deviation

Table 3

Assessment of voltage constancy

Technique Voltage constancy (PU)

Standard PSO-TVIW [13] 0.1258

Standard PSO-TVAC [13] 0.1499


Standard PSO-TVAC [13] 0.1271

Standard PSO-CF [13] 0.1261

PG-PSO [13] 0.1264

Standard WT-PSO [13] 0.1488

PGSWT-PSO [13] 0.1394

MPG-PSO [13] 0.1241

QO-TLBO [10] 0.1191

TLBO [10] 0.1180

ALO [9] 0.1161

ABC [9] 0.1161

GWO [9] 0.1242

BA [9] 0.1252

Basic FS [12] 0.1252

IS-FS [12] 0.1245

Standard FS [14] 0.1007

TOA 0.1006

TLBOTO 0.1002


Fig. 3. Appraisal of voltage constancy

The proposed TLBOTO is then validated on the IEEE 30-bus test system without the L-index. The loss comparison is shown in Table 4; Fig. 4 gives the graphical comparison of the approaches with respect to actual power loss.

Table 4

Assessment of actual power loss

Technique | Actual power loss (MW) | Proportion of lessening in power loss (%)

Base case value [19] 17.5500 0

Amended PSO [19] 16.0700 8.40000

Standard PSO [20] 16.2500 7.40000

Standard EP [21] 16.3800 6.60000

Standard GA [22] 16.0900 8.30000

Basic PSO [23] 17.5246 0.14472

DEPSO [23] 17.5200 0.17094

JAYA [23] 17.5360 0.07977

TOA 14.0210 20.1000

TLBOTO 13.9820 20.3300
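The "proportion of lessening" column can be checked by simple arithmetic against the 17.55 MW base-case loss; the TLBOTO row reproduces 20.33 %, and the TOA row works out to ≈20.11 % against the table's 20.10.

```python
# Arithmetic check of Table 4's percentage column.
def reduction_pct(base_loss_mw, loss_mw):
    """Percentage reduction relative to the base-case loss."""
    return 100.0 * (base_loss_mw - loss_mw) / base_loss_mw

base = 17.55
pct_toa = reduction_pct(base, 14.021)     # ~20.11 %, table lists 20.10
pct_tlboto = reduction_pct(base, 13.982)  # ~20.33 %, matching the table
```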


Fig. 4. Appraisal of actual power loss in MW and proportion of lessening in power loss

Table 5 shows the convergence characteristics of TLBOTO; Fig. 5 shows them graphically.

Table 5

Convergence characteristics

IEEE 30-bus system | Actual power loss (MW), with / without L-index | Proportion of loss lessening, % | Time (s), with / without L-index | Iterations, with / without L-index

TOA | 4.5007 / 14.021 | 20.10 | 18.67 / 15.92 | 29 / 24

TLBOTO | 4.5002 / 13.982 | 20.33 | 14.04 / 11.31 | 23 / 20


Fig. 5. Convergence characteristics of Teaching learning based TLBOTO

Conclusion. TLBOTO reduced the real power loss capably, and was validated on the IEEE 30-bus test system both with and without voltage stability (L-index). In the proposed algorithm the trailblazer is first treated as a teacher and executes the teaching phase, which rapidly updates the trailblazer and increases the exploration capability of the procedure. The exploration segment of the trailblazer algorithm identifies the region with the best solution; after the teaching process is invoked, the trailblazer acts as a teacher, teaching the other entities and generating a new entity. The new entity is compared with the trailblazer, and by the greedy selection rule the better one is retained as the trailblazer and continues the search, after which the trailblazer position is updated. Coupling the learning phase of TLBO increases the exploiting ability of the procedure: in the exploitation segment an entity updates its position toward the trailblazer and the entities close to it, and since by itself the entity moves randomly, the learning procedure is added to the adherents to give them a learning capability, so that they move toward the trailblazer and the improved entity nearest to them updates the position instead of a randomly chosen neighbour. TLBOTO creditably reduced the power loss, and the percentage of real power loss reduction improved. The convergence characteristics show the better performance of the proposed algorithm, and the power loss has been compared with other customarily reported algorithms.

REFERENCES

[1] Lee K.Y., Park Y.M., Ortiz J.L. Fuel-cost minimisation for both real and reactive-power dispatches. IEE Proc. C, 1984, vol. 131, iss. 3, pp. 85-93.

DOI: https://doi.org/10.1049/ip-c.1984.0012

[2] Deeb N.I., Shahidehpour S.M. An efficient technique for reactive power dispatch using a revised linear programming approach. Electr. Power Syst. Res., 1988, vol. 15, iss. 2, pp. 121-134. DOI: https://doi.org/10.1016/0378-7796(88)90016-8

[3] Bjelogrlic M.R., Calovic M.S., Ristanovic P. Application of Newton's optimal power flow in voltage/reactive power control. IEEE Trans. Power Syst., 1990, vol. 5, iss. 4, pp. 1447-1454. DOI: https://doi.org/10.1109/59.99399

[4] Granville S. Optimal reactive dispatch through interior point methods. IEEE Trans. Power Syst., 1994, vol. 9, iss. 1, pp. 136-146. DOI: http://dx.doi.org/10.1109/59.317548

[5] Grudinin N. Reactive power optimization using successive quadratic programming method. IEEE Trans. Power Syst., 1998, vol. 13, iss. 4, pp. 1219-1225.

DOI: http://dx.doi.org/10.1109/59.736232

[6] Khan I., Li Z., Xu Y., et al. Distributed control algorithm for optimal reactive power control in power grids. Int. J. Electr. Power Energy Syst., 2016, vol. 83, pp. 505-513. DOI: https://doi.org/10.1016/j.ijepes.2016.04.004

[7] Li J., Wang N., Zhou D., et al. Optimal reactive power dispatch of permanent magnet synchronous generator-based wind farm considering levelised production cost minimisation. Renew. Energy, 2020, vol. 145, pp. 1-12.

DOI: https://doi.org/10.1016/j.renene.2019.06.014

[8] Sahli Z., Hamouda A., Bekrar A., et al. Hybrid PSO-tabu search for the optimal reactive power dispatch problem. Proc. IECON, 2014.

DOI: https://doi.org/10.1109/IECON.2014.7049024

[9] Mouassa S., Bouktir T., Salhi A. Ant lion optimizer for solving optimal reactive power dispatch problem in power systems. Eng. Sci. Technol. an Int. J., 2017, vol. 20, iss. 3, pp. 885-895. DOI: https://doi.org/10.1016/j.jestch.2017.03.006

[10] Mandal B., Roy P.K. Optimal reactive power dispatch using quasi-oppositional teaching learning based optimization. Int. J. Electr. Power Energy Syst., 2013, vol. 53, pp. 123-134. DOI: https://doi.org/10.1016/j.ijepes.2013.04.011

[11] Khazali H., Kalantar M. Optimal reactive power dispatch based on harmony search algorithm. Int. J. Electr. Power Energy Syst., 2011, vol. 33, iss. 3, pp. 684-692. DOI: https://doi.org/10.1016/j.ijepes.2010.11.018

[12] Tran H.V., Pham T.V., Pham L.H., et al. Finding optimal reactive power dispatch solutions by using a novel improved stochastic fractal search optimization algorithm. TELKOMNIKA, 2019, vol. 17, no. 5, pp. 2517-2526.

DOI: http://dx.doi.org/10.12928/telkomnika.v17i5.10767

[13] Polprasert J., Ongsakul W., Dieu V.N. Optimal reactive power dispatch using improved pseudo-gradient search particle swarm optimization. Electr. Power Compon. Syst., 2016, vol. 44, iss. 5, pp. 518-532. DOI: https://doi.org/10.1080/15325008.2015.1112449

[14] Duong T.L., Duong M.Q., Phan V.-D., et al. Optimal reactive power flow for large-scale power systems using an effective metaheuristic algorithm. J. Electr. Comput. Eng.,

2020, vol. 2020, art. 6382507. DOI: https://doi.org/10.1155/2020/6382507

[15] Muhammad Y., Khan R., Raja M.A.Z., et al. Solution of optimal reactive power dispatch with FACTS devices: a survey. Energy Rep., 2020, vol. 6, pp. 2211-2229.

DOI: https://doi.org/10.1016/j.egyr.2020.07.030

[16] Rao R., Savsani V.J., Vakharia D.P. Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des., 2011, vol. 43, iss. 3, pp. 303-315. DOI: https://doi.org/10.1016/j.cad.2010.12.015

[17] Balakrishnan N. A synthesis of exact inferential results for exponential step-stress models and associated optimal accelerated life-tests. Metrika, 2009, vol. 69, no. 2-3, pp. 351-396. DOI: https://doi.org/10.1007/s00184-008-0221-4

[18] Illinois Center for a Smarter Electric Grid (ICSEG): website. Available at: https://icseg.iti.illinois.edu (accessed: 25.02.2019).

[19] Hussain A.N., Abdullah A.A., Neda O.M. Modified particle swarm optimization for solution of reactive power dispatch. Res. J. Appl. Sci. Eng. Technol., 2018, vol. 15, no. 8, pp. 316-327. DOI: http://dx.doi.org/10.19026/rjaset.15.5917

[20] Pandya S., Roy R. Particle swarm optimization based optimal reactive power dispatch. Proc. IEEE ICECCT, 2015. DOI: https://doi.org/10.1109/ICECCT.2015.7225981

[21] Dai C., Chen W., Zhu Y., et al. Seeker optimization algorithm for optimal reactive power dispatch. IEEE Trans. Power Syst., 2009, vol. 24, iss. 3, pp. 1218-1231.

DOI: https://doi.org/10.1109/TPWRS.2009.2021226

[22] Subbaraj P., Rajnarayan P.N. Optimal reactive power dispatch using self-adaptive real coded Genetic algorithm. Electr. Power Syst. Res., 2009, vol. 79, iss. 2, pp. 374-438. DOI: https://doi.org/10.1016/j.epsr.2008.07.008

[23] Vishnu M., Sunil K.T.K. An improved solution for reactive power dispatch problem using diversity-enhanced particle swarm optimization. Energies, 2020, vol. 13, no. 11, art. 2862. DOI: https://doi.org/10.3390/en13112862

Lenin Kanagasabai — Dr. Sc., Professor, Department of Electrical and Electronics Engineering, P.V.P. Siddhartha Institute of Technology (Kanuru, Vijayawada, 520007, Andhra Pradesh, India).

Please cite this article as:

Lenin K. Real power loss reduction by enhanced Trailblazer optimization algorithm. Herald of the Bauman Moscow State Technical University, Series Natural Sciences,

2021, no. 3 (96), pp. 77-93. DOI: https://doi.org/10.18698/1812-3368-2021-3-77-93
