A Novel Real-Coded Genetic Algorithm for Dynamic Economic Dispatch Integrating Plug-In Electric Vehicles

The massive popularity of plug-in electric vehicles (PEVs) may bring considerable opportunities and challenges to the power grid; which of the two occurs depends heavily on whether PEVs can be effectively managed. Dynamic economic dispatch with PEVs (DED with PEVs) determines the optimal output levels of online units and PEVs so as to minimize fuel cost and grid fluctuations. When valve-point effects and transmission losses are considered, it is a complex constrained optimization problem with non-smooth, non-linear, and non-convex characteristics. A highly efficient DED method therefore provides a powerful tool for both power system scheduling and PEV charging coordination. In this study, first, PEVs are integrated into the DED problem, enabling orderly charge and discharge management to improve the quality of the grid. Second, a novel real-coded genetic algorithm (RCGA), namely, dimension-by-dimension mutation based on feature intervals (GADMFI), is proposed to enhance the exploitation and exploration of conventional RCGAs. Third, a simple and efficient constraint handling method is proposed to repair infeasible solutions for DED. Finally, the proposed method is compared with the current literature on six cases spanning three scenarios: only thermal units, units with disorderly PEVs, and units with orderly PEVs. The proposed GADMFI shows outstanding advantages in solving the DED problem with and without PEVs, achieving the effect of cutting peaks and filling valleys in the DED with orderly PEVs problem.


INTRODUCTION

The Optimization Problem
Over the last few decades, the rapid increase in the use of fossil fuel has led to a consequential worldwide reduction of the resource; thus, its optimal utilization in power generation has become an important research topic (Yang et al., 2015). In addition, the massive popularity of PEVs may bring opportunities or challenges to the power grid. Therefore, DED with PEVs plays an important role in power system operation and control. Coupled in space and time, it is a complicated optimal decision problem whose goal is to minimize the fuel cost and the fluctuation of the power grid, on the premise of satisfying a series of constraints.

The Optimization Algorithm
Since GA was proposed by Holland in 1975 (Ali et al., 2018), it has developed into two main forms: the binary-coded genetic algorithm (BCGA) and the real-coded genetic algorithm (RCGA). RCGA was first systematically studied by Herrera in 1998 (Akopov et al., 2019; Iyer et al., 2019). It is known that its performance depends heavily on the crossover and mutation operators (Thakur et al., 2014), so scholars have focused mainly on improving these operators and have proposed many excellent variants of RCGA.
The crossover operator generates new individuals through interactive information among existing ones (Nakane et al., 2020). Arithmetic crossover (AX) (Naqvi et al., 2020) produces offspring through linear combinations of the parents. In flat crossover (FX) (Picek et al., 2013a), the parents exchange genes to produce offspring, so it does not destroy genetic information in the population. In Laplace crossover (LX) (Deep and Thakur, 2007), the Laplace distribution is used as the density function to generate genes near the parents. In 1993, Eshelman et al. used the concept of interval schemata to develop the blend crossover operator (BLX-α) (Wang et al., 2019b), which performs linear exploration around the parents. The simplex crossover (SPX) is a multi-parental crossover operator based on the nature of the simplex and is an extension of BLX (Chuang et al., 2015). Simulated binary crossover (SBX) (Naqvi et al., 2020), as its name implies, simulates the binary crossover operator; the unimodal normal distribution crossover (UNDX) can produce two or more offspring from three parents (Kwak and Lee, 2016). Comparative results for the abovementioned crossover operators have been reported in the literature (Picek et al., 2013b; Naqvi et al., 2020).
Mutation is an important operation in RCGA (Wang et al., 2018), responsible in particular for local development. The popular and widespread mutation operators in the literature mostly use a certain distribution as a density function to generate random numbers around the gene to be varied, thereby carrying out local development of the gene; examples include random mutation (RM), non-uniform mutation (NUM) (Wang et al., 2019b), power mutation (PM), polynomial mutation (PLM), Gaussian mutation (GM) (Wang et al., 2019b), and Cauchy mutation (CM). The extra parameters these operators introduce, however, may cause considerable trouble in applying the algorithm. In addition, a direction-based mutation has been presented (Tang and Tseng, 2013), which mutates toward a promising area by utilizing statistical population information and has achieved good results. On the basis of these studies, two simple and efficient mutation operators are proposed here. One is the dimension-by-dimension mutation based on feature intervals (DMFI), which combines horizontal search at the component level with the greedy rule to form a directed horizontal local development strategy; moreover, the genetic characteristics of outstanding individuals are extracted as the mutation interval, thereby providing a directed vertical local development capability. The other is the uniform mutation based on the interval of opposite features (UMOFI), in which the opposite of the feature intervals of outstanding individuals is employed as the mutation interval for inferior individuals, thereby introducing new information into the population.
At present, genetic algorithms suffer from falling into local optima and lack exploration capabilities on large-scale, high-dimensional problems (Sawyerr et al., 2011; Fang et al., 2014; Sawyerr et al., 2014; D'Angelo and Palmieri, 2021). In the final analysis, this is the fundamental and difficult problem faced by metaheuristics: how to balance the exploration and exploitation of the algorithm. In general, researchers design or improve an algorithm based on the idea of focusing on exploration in the early stage and on exploitation later. However, if the exploitation capability is not sufficient to find the globally optimal area in the early stage, the algorithm becomes stuck in a local optimum. Given this, this study provides a new solution based on the idea of collaborative optimization of superior and inferior individuals: excellent individuals strengthen the local search, while inferior individuals are responsible for introducing new information to increase the diversity of the population. Information exchange between excellent and inferior individuals then helps to find outstanding individuals and to carry out their comprehensive development in each iteration.
The mechanisms of GA and the new solution mentioned above are consistent on the issue of balancing global and local search capabilities. Therefore, an RCGA based on the co-optimization of superior and inferior individuals is proposed: 1) rank selection and flat crossover realize the genetic interaction between the superior and inferior populations; 2) DMFI gives excellent individuals the ability of directed vertical and horizontal local development and achieves an in-depth local search; and 3) UMOFI, controlled by the mutation probability Pm, is applied to the inferior individuals, thereby introducing new genetic information while maintaining the diversity of the population.

Constraint Handling Methods
For constrained optimization problems, the feasibility of the solutions is more important than the objective values. The penalty function method is common and popular for handling constraints (Shen et al., 2019); however, choosing a suitable penalty factor is troublesome. Hence, some scholars have proposed several types of repair methods to meet problem constraints. Feasibility-based rules are used to lead the search toward the feasible region and handle inequality constraints effectively (Yuan et al., 2009); they are more efficient at filtering feasible solutions and ease the burden of setting the penalty factor (Wang et al., 2011). They adjust the output power according to general experience to gradually reduce the constraint violation until the solution becomes feasible, but this cannot efficiently solve complex equality constraints, for instance, the power balance constraint with transmission loss, and there may be excessive adjustment, especially when the amount of violation is small. Forced repair techniques adjust the output strictly according to the characteristics of the equation, which greatly improves the repair efficiency for equality constraints (Panigrahi et al., 2007; Zou et al., 2018). However, since inequality constraints also influence feasibility, forced repair may spend considerable time solving the quadratic equation of DED with transmission losses. In Li et al. (2019) and Shen et al. (2019), a constraint handling technique combining a heuristic repair technique with forced repair is proposed, which enhances the capability of repairing infeasible solutions.
In summary, the abovementioned methods and techniques have the following shortcomings in solving DED with PEVs: 1) the algorithms struggle with large-scale problems or suffer from premature convergence, and 2) the constraint handling techniques have difficulty repairing infeasible solutions or do not work well with the algorithm. In view of this, this study proposes GADMFI based on RCGA; meanwhile, a simple and efficient constraint handling technique is designed.
The remainder of this study is arranged as follows:

THE FORMULATION OF THE DYNAMIC ECONOMIC DISPATCH WITH PLUG-IN ELECTRIC VEHICLES
The DED integrating PEVs aims to determine the optimal generation levels of all online units and PEVs during a specified period of time (e.g., 24 intervals a day), so as to minimize the total fuel cost subject to a number of equality and inequality constraints.

The Optimization Objectives
There are two optimization objectives for this problem: one is to minimize fuel costs and the other is to minimize the fluctuation of the grid, that is, maximize peak shaving and valley filling. They are described in Eqs. 1, 2.
min f_1(P), (1)
min f_2(P, P_PEV), (2)
where P and P_PEV constitute the decision variables of the problem and P_L,t is the transmission loss at time t.
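The equation bodies of Eqs. 1, 2 are not reproduced above; a standard form consistent with the valve-point-effect fuel model and the peak-shaving goal described in the text is sketched below. This is a hedged reconstruction: the coefficients a_i, b_i, c_i, e_i, f_i are the usual fuel-cost parameters, and the exact form of f_2 (here, the variance of the total load seen by the units) is an assumption, not the paper's typesetting.

```latex
% Fuel cost with valve-point effects (typical form of Eq. 1)
\min f_1(\mathbf{P}) = \sum_{t=1}^{T}\sum_{i=1}^{N}
  \left[ a_i P_{t,i}^{2} + b_i P_{t,i} + c_i
  + \left| e_i \sin\!\left( f_i \,(P_{i}^{\min} - P_{t,i}) \right) \right| \right]

% Grid-fluctuation objective (one common choice for Eq. 2)
\min f_2(\mathbf{P}, \mathbf{P}_{\mathrm{PEV}}) = \sum_{t=1}^{T}
  \left( P_{D,t} + P_{L,t} + P_{\mathrm{PEV},t} - \bar{P} \right)^{2}
```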

Constraints
The DED with PEVs is an optimization problem containing multiple inequality and equality constraints, including capacity constraints, ramp rate limits, PEV charge/discharge limits, PEV demand limits, and power balance constraints.

Capacity Constraints
The capacity limits of the thermal units are inequality constraints determined by the physical characteristics of each unit and are given as P_i^min ≤ P_t,i ≤ P_i^max, where P_i^min and P_i^max represent the minimum and maximum output power of the ith unit, respectively.

Ramp Rate Limits
Due to the inertia of thermal power units, ramp rate limits are considered to extend the service life of the units; that is, the output of a unit cannot be adjusted greatly in a short time, and the output at the current time affects the output at the next time: −DR_i ≤ P_t,i − P_t−1,i ≤ UR_i, where UR_i and DR_i represent the maximum allowable rise and fall of the ith unit, respectively, as limited by its physical characteristics.

Plug-In Electric Vehicles Charge/Discharge Limits
The maximum charging power and discharging power of PEVs should be limited to a normal range. Because different types of electric vehicles have different models, the charging and discharging power of PEVs at time t is described by a variable P_PEV,t: P_PEV,disc^max ≤ P_PEV,t ≤ P_PEV,char^max.
The Plug-In Electric Vehicles Demand Constraint
For users' daily travel, the PEVs demand constraint should be met (Yang et al., 2017a), which is described as Eq. 6: Σ_{t=1}^{T} P_PEV,t ≤ P_PEV,total, where P_PEV,total is the desired power for daily use.

The Power Balance Constraint
The power balance limit is the most important and complex constraint, especially when the transmission loss is considered. It is defined as Σ_{i=1}^{N} P_t,i = P_D,t + P_L,t + P_PEV,t, where P_D,t denotes the load demand at time t and P_L,t is the transmission loss, whose mathematical model is expressed by Kron's loss formula (Abdelaziz et al., 2008) as Eq. 8.
where B_ij, B_0i, and B_00 represent the loss coefficients of the generation units. In addition, the transmission loss model is usually simplified as Eq. 9 (Pan et al., 2018), which is adopted in this study:
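Kron's loss formula and its simplified quadratic-only form can be sketched as follows. This is a minimal numeric illustration; the B-matrix values in the usage are placeholders, not the paper's coefficients.

```python
import numpy as np

def kron_loss(p, B, B0=None, B00=0.0):
    """Transmission loss via Kron's formula (Eq. 8):
    P_L = p^T B p + B0 . p + B00.
    The simplified model (Eq. 9) keeps only the quadratic term,
    i.e. kron_loss(p, B) with B0 and B00 omitted."""
    p = np.asarray(p, dtype=float)
    loss = p @ B @ p                 # quadratic term, always present
    if B0 is not None:
        loss += B0 @ p               # linear term of the full model
    return loss + B00                # constant term of the full model
```

For example, with two units at 1 and 2 MW and a diagonal B matrix, the simplified loss is 0.1·1² + 0.2·2² = 0.9 MW.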

PROPOSED GENETIC ALGORITHM DIMENSION-BY-DIMENSION MUTATION BASED ON FEATURE INTERVALS
RCGAs generally consist of selection, crossover, mutation, and elite retention strategies, and the pseudo-code is summarized in Table 1. Without changing this main structure or introducing extra parameters, this study proposes a simple and efficient algorithm based on the collaborative optimization of superior and inferior populations, namely, a novel real-coded genetic algorithm: GADMFI. Its pseudo-code is shown in Table 2. It is worth noting that this study takes the minimization of the objective function as the optimization goal.
In the study, two novel mutation operators are proposed: the dimension-by-dimension mutation based on feature intervals (DMFI) and the uniform mutation based on the interval of opposite features (UMOFI). DMFI and UMOFI are designed based on the idea that excellent individuals strengthen local exploitation capabilities, to improve the convergence accuracy and speed of the algorithm; low-quality individuals introduce new information, to improve population diversity; and good and bad individuals exchange information by an interactive operation. Excellent individuals perform DMFI to strengthen local development in both vertical and horizontal dimensions. Inferior individuals introduce new genes through UMOFI. Then, the information of the two is exchanged through the ranking selection (RS) and FX, so as to achieve the effect of collaborative optimization.

The Selection and Crossover Operator
The selection operator is the first operator of GA. One of the most widely used selection operators is roulette selection: the higher the fitness, the greater the probability of being selected. However, the excellent genes of inferior individuals may be abandoned. Another commonly used operator is RS (Chuang et al., 2016), in which excellent individuals serve as parent 1 in the crossover and inferior individuals as parent 2. All individuals participate in the crossover with the same probability, which does not affect the diversity of the population. RS is used in this study; however, unlike Chuang et al. (2016), individuals are matched randomly, which further enhances population diversity.
The flat crossover (FX) is rarely used due to its poor local development ability, but it has a characteristic that other operators lack: it does not change the genetic information in the population. It can therefore maintain the diversity of the population and the interaction between individuals. The role of RS and FX together is to carry out information interaction between individuals without changing the genetic information of the population.
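The RS-paired crossover described above can be sketched as a per-gene swap between the matched parents, which leaves the population's gene pool unchanged. This is one interpretation of the text's description of FX as "exchanging genes", not the paper's code; the swap probability of 0.5 is an assumption.

```python
import numpy as np

def flat_crossover(parent1, parent2, rng):
    """Gene-exchange crossover: each gene is swapped between the two
    parents with probability 0.5, so the multiset of genes in every
    dimension is preserved across the population."""
    mask = rng.random(parent1.shape) < 0.5
    child1 = np.where(mask, parent2, parent1)
    child2 = np.where(mask, parent1, parent2)
    return child1, child2
```

Because every gene ends up in exactly one child, no genetic information is created or destroyed, matching the property the text attributes to FX.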

The Mutation Operator
Every stochastic algorithm should possess strong local search capabilities, while maintaining the diversity of the population is also indispensable. To illustrate the design idea of the mutation operator, horizontal search and vertical search are first defined. If an operator compares variants with each other, or a variant with its original individual, it performs a horizontal search over individuals; if only components are changed, it performs a vertical search within an individual. In previous related studies, local search usually refers to vertical search, with all components developed simultaneously. In this section, horizontal search with the greedy rule and vertical search with feature intervals are combined to fulfill horizontal and vertical local search, so as to obtain in-depth development for the superior population by DMFI. In addition, new population information is introduced into the inferior population by UMOFI.

The Dimension-by-Dimension Mutation Based on Feature Intervals
The dimension-by-dimension mutation is a dimension-by-dimension search for outstanding individuals within the feature interval and is combined with the greedy rule to achieve a directed horizontal search. Figure 1 shows the change from one dimension to the next in DMFI. A gene x1 of the chromosome X is pre-mutated to m1 within the feature interval formed by the minimum and maximum gene values of the superior population, as described in Eqs. 10, 11. Because the feature interval contains the genetic characteristics of excellent individuals, the mutation searches toward a promising area and realizes the directed vertical local development of the superior population; unlike previous uniform mutations, it is centered not on the individual but on the superior population: where S_lower(d) and S_upper(d) are the lower and upper limits of the feature interval in the dth dimension, S(d) is the dth dimension of the superior population, m(d) is the pre-mutated gene of the dth dimension, and rand is a random number generated uniformly in [0, 1]. Then, the pre-mutated chromosome M and X are compared by the greedy rule. If M is better, the mutation is accepted and the pre-mutation of the next dimension continues; otherwise, the gene is not mutated. Once all dimensions have gone through this process, X is an excellent individual that has completed directed vertical and horizontal local development. Its mathematical formula can be expressed as Eq. 12.

The Uniform Mutation Based on the Interval of Opposite Features
In order to obtain new genetic information without destroying the diversity of the population, UMOFI uses the opposite feature intervals as the range of variation, carried out by Eqs. 13, 14. Moreover, UMOFI is controlled by the mutation probability Pm: where O_L,lower(d) and O_R,lower(d) represent the lower limits of the opposite feature intervals on the left and right, respectively.
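The two operators described in this section can be sketched in Python as follows (minimization assumed). This is an illustrative reconstruction from the prose, not the paper's code: the exact sampling of the opposite intervals and the handling of degenerate interval widths are assumptions.

```python
import numpy as np

def dmfi(x, superior_pop, f, rng):
    """Dimension-by-dimension mutation based on feature intervals (DMFI).
    For each dimension d, a candidate gene is drawn uniformly from the
    feature interval [min, max] of the superior population; the change
    is kept only if it improves the objective (greedy rule)."""
    s_lower = superior_pop.min(axis=0)
    s_upper = superior_pop.max(axis=0)
    x, fx = x.copy(), f(x)
    for d in range(len(x)):
        m = x.copy()
        m[d] = s_lower[d] + rng.random() * (s_upper[d] - s_lower[d])
        fm = f(m)
        if fm < fx:                 # greedy acceptance per dimension
            x, fx = m, fm
    return x

def umofi(x, superior_pop, bounds_lo, bounds_hi, rng):
    """Uniform mutation based on the interval of opposite features (UMOFI).
    Inferior individuals draw each gene from the parts of the search range
    OUTSIDE the superior population's feature interval (left and right)."""
    s_lo, s_hi = superior_pop.min(axis=0), superior_pop.max(axis=0)
    y = x.copy()
    for d in range(len(x)):
        left = s_lo[d] - bounds_lo[d]      # width of left opposite interval
        right = bounds_hi[d] - s_hi[d]     # width of right opposite interval
        u = rng.random() * (left + right)
        y[d] = bounds_lo[d] + u if u < left else s_hi[d] + (u - left)
    return y
```

Because DMFI only ever accepts improving moves, the mutated individual is never worse than the original, while UMOFI deliberately places genes outside the superior population's region to inject new information.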

Ramp Rate Limits Handling
The new lower and upper bounds P_t,i^min and P_t,i^max combine the ramp rate constraint and the capacity limit of the ith unit at time t, as in Eqs. 15, 16: P_t,i^min = max(P_i^min, P_t−1,i − DR_i) and P_t,i^max = min(P_i^max, P_t−1,i + UR_i). Then, if P_t,i is beyond its new bound, it is limited to that bound; namely, P_t,i is repaired by Eq. 17. This is simpler and more efficient than the traditional penalty function method.
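The bound-tightening and clipping described here can be sketched as follows, reconstructed from the prose description of Eqs. 15-17 (function names are illustrative, not the paper's code):

```python
def ramp_bounds(p_prev, p_min, p_max, UR, DR):
    """Combined capacity + ramp-rate bounds (Eqs. 15-16, as described):
    lower bound = max(P_i^min, P_{t-1,i} - DR_i)
    upper bound = min(P_i^max, P_{t-1,i} + UR_i)."""
    lo = max(p_min, p_prev - DR)
    hi = min(p_max, p_prev + UR)
    return lo, hi

def repair_ramp(p, p_prev, p_min, p_max, UR, DR):
    """Clip the output to the tightened bounds (Eq. 17)."""
    lo, hi = ramp_bounds(p_prev, p_min, p_max, UR, DR)
    return min(max(p, lo), hi)
```

For instance, a unit with capacity [50, 300] MW, previous output 150 MW, and ramp limits of 30 MW is confined to [120, 180] MW, so a proposed 200 MW is clipped to 180 MW.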

The Power Balance Constraint Handling
Considering the network transmission loss, the power balance constraint is the most difficult to repair among all constraints. This study proposes a simple and efficient repair technique. The overall process has two stages: a rough adjustment first rapidly reduces the violation, and a fine adjustment then eliminates it; the detailed steps are described as Steps 1-4.
Step 1. Set A = {1, 2, 3, . . . , N − 1, N}, select a unit r randomly from A, and roughly adjust its output by Eq. 18, where Vio(t) is the violation of the power balance constraint at time t.
If P_t,r does not go beyond its new boundary, Vio(t) is considered small enough to be repaired by the fine stage; go to the next step, reset A = {1, 2, 3, . . . , N − 1, N}, and let k = 1. Otherwise, remove unit r from A. If A is an empty set, end the repair; otherwise, repeat Step 1.
Step 2. Handling the power balance constraint can be converted into solving a quadratic equation, and the output of unit k is solved by Eq. 19. Let a = B_kk, b = 2 Σ_{i∈A, i≠k} B_ki P_t,i − 1, and c = P_D,t + Σ_{i∈A, i≠k} Σ_{j∈A, j≠k} P_t,i B_ij P_t,j − Σ_{i∈A, i≠k} P_t,i; then, if they exist, the roots are Sol_1,2 = (−b ± √(b² − 4ac)) / (2a) when a ≠ 0, or Sol_3 = −c/b when a = 0 and b ≠ 0. Two cases are discussed as follows:
Case 1. If there is no solution, let k = k + 1; if k < N, repeat Step 2; otherwise, end the repair.
Case 2. If there are solutions, check whether they satisfy the other constraints. If both are satisfied, let P_t,k be equal to either, and end; if only one solution is satisfied, let P_t,k be equal to that solution. Otherwise, let k = k + 1; if k < N, repeat Step 2; otherwise, end the repair.
Step 1 rapidly decreases the violation of the power balance equality constraint, and Step 2 further decreases or eliminates it through fine adjustment and equation solving. Finally, if a solution is still infeasible, the feasibility-based rule (Yuan et al., 2009) is used to strictly screen the feasible solutions of the population.
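Under the simplified loss model P_L = p^T B p (Eq. 9), the fine-adjustment step reduces to the quadratic in P_t,k described in Step 2. The sketch below derives a, b, c for the case where A contains all units (it ignores the set bookkeeping and the bound checks of Case 2); names and demand handling are illustrative, with any PEV demand assumed folded into P_D.

```python
import math

def solve_unit_k(p, k, P_D, B):
    """Solve the power balance sum_i p_i = P_D + p^T B p for p[k],
    which rearranges to a*p_k^2 + b*p_k + c = 0 with
      a = B_kk
      b = 2 * sum_{i != k} B_ki * p_i - 1
      c = P_D + sum_{i != k} sum_{j != k} p_i B_ij p_j - sum_{i != k} p_i.
    Returns the list of real roots (possibly empty)."""
    idx = [i for i in range(len(p)) if i != k]
    a = B[k][k]
    b = 2.0 * sum(B[k][i] * p[i] for i in idx) - 1.0
    c = (P_D
         + sum(p[i] * B[i][j] * p[j] for i in idx for j in idx)
         - sum(p[i] for i in idx))
    if abs(a) > 1e-12:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return []                          # Case 1: no real solution
        r = math.sqrt(disc)
        return [(-b + r) / (2 * a), (-b - r) / (2 * a)]
    if abs(b) > 1e-12:
        return [-c / b]                        # degenerate linear case
    return []
```

Any returned root, substituted back, satisfies the balance exactly, which is what makes this "forced" repair efficient for the equality constraint.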

THE IMPLEMENTATION OF GENETIC ALGORITHM DIMENSION MUTATION BASED ON FEATURE INTERVALS FOR DYNAMIC ECONOMIC DISPATCH WITH PLUG-IN ELECTRIC VEHICLES
The implementation of GADMFI on DED integrating PEVs is a process that effectively combines the heuristic algorithm, the constraint handling methods, and the optimization model. The overall framework is described in Figure 2.
Step 1. Initialize the population randomly within the bounds of the decision variables.
Step 2. Check each individual's feasibility. If feasible, go to the next step; otherwise, repair it by the constraint handling technique and then go to the next step.
Step 3. Evaluate fitness by Eq. 22, in which the objective functions f1 and f2 are combined into f by a weighting factor λ, and update the optimal individual. If FEs equal MaxFEs, output the best solution; otherwise, go to the next step.
Step 4. Update the individuals in the population via GADMFI, and go to Step 2.

Validation of the Performance of Genetic Algorithm Dimension Mutation Based on Feature Intervals
In order to validate the performance of the algorithm, a set of benchmark functions is selected from Civicioglu (2013), shown in Table 3, including low-dimensional, multi-dimensional, unimodal (U), multimodal (M), separable (S), and non-separable (N) functions. Advanced metaheuristics are employed for qualitative and quantitative comparison on the benchmark problems: ABC, the grey wolf optimizer (GWO) (Mirjalili et al., 2014), the whale optimization algorithm (WOA) (Mirjalili and Lewis, 2016), the bat algorithm with triangle-flipping strategy (BA-HTFS) (Cai et al., 2017), and the hybrid DE-WOA algorithm (DEWOA) (Wang et al., 2019c). In addition, the Wilcoxon signed-rank test was used for pairwise comparisons, with the statistical significance level α = 0.05. The null hypothesis H_0 for this test is that there is no difference between the medians of the solutions of the two algorithms. The experimental computer has an Intel(R) Core(TM) i9-10900F CPU @ 3.70 GHz and 16.0 GB of RAM.

Parameters Setting
MaxFEs for the 30-, 50-, and 100-dimensional benchmarks is set to D×10,000, which means that the optimization algorithms terminate when FEs reach MaxFEs. In addition, the private parameters of the compared methods are set as in Table 4, following the corresponding references.

Performance Analysis
The performance of an optimization method should be evaluated in terms of convergence accuracy, speed, and robustness. Therefore, the mean, best, and standard deviation of the objective values over 30 independent runs for dimensions 30, 50, and 100 are used for the quantitative analysis in Tables 5-7, and several typical qualitative plots are shown in Figure 3.
In Tables 5-7, the optimal values of the indicators "mean", "Std", and "best" among the six compared algorithms are shown in bold. Winner values of 1, 0, and −1 mean that GADMFI is significantly superior, equivalent, or inferior to the other method at α = 0.05. "NA" means not available. First, it can be seen that, as the dimension increases, the performance of the algorithm does not change greatly. From best, it can be observed that, except on F6, no competitor performs better than the proposed GADMFI in terms of global search, which is attributed to the collaboration of the proposed algorithm's mechanisms. From mean and Std, GADMFI is the most stable, followed by GWO; this stability benefits from DMFI's ability to perform directed, fine-grained development near the current optimal population. From Runtime, the running time of GADMFI is the shortest on most problems, because the algorithm retains the traditional RCGA framework. From Winner, GADMFI is inferior only to ABC on F6 and to GWO on the 100-dimensional F6, and the differences are very small.
In Figure 3, the evolution curves of six 100-dimensional functions are shown. As can be seen, the convergence curve of GADMFI shows superior exploration and exploitation abilities. On F1, the curve is a straight line, which shows that, when dealing with unimodal problems, the proposed DMFI has the potential to explore a promising area. On F2-F6, which have multiple local minima, it converges at an approximately constant speed, which again confirms the role of DMFI. In summary, the proposed GADMFI has outstanding performance on the benchmark problems.

Simulation Results and Discussion on Dynamic Economic Dispatch With Plug-In Electric Vehicles Problem
In this section, in order to verify the reliability of the proposed algorithm and constraint handling method, three scenarios and six cases are considered, as described in Table 8: Scenario A, only units; Scenario B, units with disorderly PEVs; Scenario C, units with orderly PEVs; Case I, 5 units; Case II, 10 units; Case III, 5 units with disorderly PEVs; Case IV, 10 units with disorderly PEVs; Case V, 5 units with orderly PEVs; and Case VI, 10 units with orderly PEVs. It is worth noting that the valve-point effect and transmission loss are considered in all cases. The population size of the algorithm is 100, Pc is 0.7, and Pm is 0.3. MaxFEs is set to D×10,000, where D is the dimension of the decision variable; that is, D = N×24 for Cases I-II and D = (N + 1)×24 for Cases III-VI, with T set to 24. In addition, to avoid contingency, each case is run independently 30 times. The optimization is implemented in MATLAB® R2019b on a personal computer with an Intel(R) Core(TM) i9-10900K CPU @ 3.70 GHz and 16.0 GB of RAM.

Scenario A: Only Units Without Plug-In Electric Vehicles
The data of Cases I-II are derived from Basu (2008), Mohammadi-ivatloo et al. (2012), and Qian et al. (2020), including the predicted power demand (PD), unit information, and the B coefficients of the transmission loss. Fuel costs and constraint violations are summarized in Table 9 and compared with the current popular literature, including the new enhanced harmony search (NEHS), the artificial immune system (AIS), the hybrid DE and sequential quadratic programming (DE-SQP), the hybrid PSO and sequential quadratic programming (PSO-SQP), the efficient fitness-based differential evolution algorithm (EFDE), the hybrid seeker optimization algorithm and sequential quadratic programming method (SOA-SQP), the simulated annealing (SA), a hybrid genetic algorithm and bacterial foraging approach (HCRO), and the improved bacterial foraging algorithm (IBFA).
In Table 9, the minimum, average, maximum, and standard deviation of the fuel cost over 30 independent trials are presented, as well as the number of violations of the unit ramp rate limits and the power balance constraint. The minimum value of each statistic is shown in bold black font; a constraint violation greater than one is shown in bold red font, indicating that the solution is infeasible. For constrained optimization problems, judging the quality of a solution requires first checking the feasibility conditions and only then evaluating the objective value. Obviously, in the two cases, the proposed constraint handling technique efficiently repairs infeasible solutions. In addition, compared with the other algorithms in the literature, GADMFI shows extremely high superiority, except that its minimum value in Case II is slightly inferior to NEHS.
In order to clearly show the output of each unit, the stacked histogram of the 10 units is drawn. The impact of the ramp rate limit on the output of each unit can be clearly seen in Figure 4; that is, the power difference between two adjacent moments stays within a small range. The optimal solutions of the 30 trials for Cases I-II are shown in Tables 10-11, together with the transmission loss (PL) and the amount of violation.

Scenarios B and C: Units With Plug-In Electric Vehicles
A total of 50,000 PEVs are assumed to be integrated into the 5- and 10-unit power systems, and the daily average traveling distance and expected power demand of a PEV are 32.88 miles and 8.22 kWh, respectively (Saber and Venayagamoorthy, 2011). The total power necessity for PEVs is 411 MW and is expected to be met by power generation. The state of charge (SOC) is 50%; the numbers of PEVs that can provide V2G/G2V service are 50,000/36,125; the average battery capacity is 15 kWh; the charging and discharging efficiencies are both 85%; and 20% of PEVs are available (Yang et al., 2017b). In that way, the maximum discharge power is P_PEV,disc^max = −50,000 × 15 kWh × 85% × 20% × 50% = −63.75 MW, and the maximum charge power is P_PEV,char^max = 36,125 × 15 kWh / 85% × 20% × 50% = +63.75 MW. λ = 0 means that electric vehicles are not managed, that is, Cases III-IV; λ > 0 means that electric vehicles are orderly managed, that is, Cases V-VI. To compare Scenarios B and C and determine λ, f_1 and f_2 over 30 trials are reported in Table 12.
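The charge/discharge limits quoted above follow from simple arithmetic; a quick check (values in MW, with the 15 kWh battery written as 0.015 MWh):

```python
# Reproducing the PEV charge/discharge limits quoted in the text.
n_v2g, n_g2v = 50_000, 36_125    # PEVs able to discharge / charge
cap_mwh = 0.015                  # average battery capacity (15 kWh)
eff = 0.85                       # charging = discharging efficiency
avail = 0.20                     # share of PEVs available
soc = 0.50                       # state of charge

# Discharging (V2G) delivers less than stored energy, so multiply by eff;
# charging (G2V) draws more than stored energy, so divide by eff.
p_disc_max = -n_v2g * cap_mwh * eff * avail * soc
p_char_max = n_g2v * cap_mwh / eff * avail * soc
print(round(p_disc_max, 2), round(p_char_max, 2))
```

Both limits work out to 63.75 MW in magnitude, consistent with the 411 MW total PEV demand being on the megawatt scale.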
In the Max and Std columns, the larger values are marked in red, indicating poor and unstable solutions, which reflects that the grid fluctuates greatly when electric vehicles are not managed. Further, among λ = 1, 2, 3, the balance between f_1 and f_2 is best at λ = 1; therefore, λ is set to 1 in Scenario C, and the decision variables are listed in Tables 13 and 14 for Cases V-VI. The outputs of the units and PEVs for Case VI are drawn in Figure 5.
From Eqs 15, 16, plug-in electric vehicles are effectively managed by the proposed strategy f_2; that is, they discharge through V2G during peak demand periods and charge through G2V during trough periods. In addition, from the perspective of PL at the various times, it is larger than or close to P_PEV, so transmission loss should be considered in DED; otherwise, some decisions may be erroneous. In order to describe this effect more clearly, PD, PL, and P_PEV are plotted in Figure 6. It can be seen that the magnitude of the loss is close to the maximum output power of the electric vehicles, and the management strategy of the electric vehicles is proved to be effective; that is, it plays the role of cutting peaks and filling valleys.

CONCLUSION
In view of the impact of plug-in electric vehicles on the power grid, and the complexity of dynamic economic dispatch considering the valve-point effect and transmission loss, this study integrates PEVs into DED, proposes a novel genetic algorithm, GADMFI, and designs a simple yet efficient constraint handling method for the power balance constraint. In three scenarios at two scales (only units, units with disorderly PEVs, and units with orderly PEVs), horizontal and vertical comparisons were carried out. The results show that GADMFI has excellent performance in dealing with multimodal, high-dimensional, and large-scale problems such as DED. At the same time, the proposed constraint handling method guarantees the feasibility of solutions, and the design of objective f_2 achieved the effect of adaptive peak clipping and valley filling.

DATA AVAILABILITY STATEMENT
The author selected the following statement: the data analyzed in this study is subject to the following licenses/restrictions: commercial data. Requests to access these datasets should be directed to zl.yang@siat.ac.cn.