Search Results

23 results for "Samad Nejatian"
A fuzzy clustering ensemble based on cluster clustering and iterative Fusion of base clusters
To obtain more robust, novel, stable, and consistent clustering results, clustering ensembles have emerged. Clustering ensemble frameworks take two approaches: (a) approaches that focus on creating or preparing a suitable ensemble, called ensemble creation approaches, and (b) approaches that try to find a suitable final clustering (also called a consensus clustering) from a given ensemble, called ensemble aggregation approaches. The first solve the ensemble creation problem; the second solve the aggregation problem. This paper proposes an ensemble aggregator, or consensus function, called Robust Clustering Ensemble based on Sampling and Cluster Clustering (RCESCC). The RCESCC algorithm first generates an ensemble of fuzzy clusterings produced by the fuzzy c-means algorithm on subsampled data. It then computes a cluster-cluster similarity matrix from the fuzzy clusters and partitions the fuzzy clusters by applying a hierarchical clustering algorithm to that matrix. In the next phase, RCESCC assigns the data points to the merged clusters. Experimental comparisons with state-of-the-art clustering algorithms indicate the effectiveness of the RCESCC algorithm in terms of performance, speed, and robustness.
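
The pipeline described above (subsampled fuzzy c-means runs, a cluster-cluster similarity matrix, hierarchical merging of the base clusters, and final point assignment) can be illustrated with a minimal sketch. The toy data, the parameter choices, and the use of cosine similarity between membership vectors are assumptions for illustration, not the authors' exact settings.

```python
# Illustrative RCESCC-style pipeline; all parameters and the similarity choice
# are assumptions, not the paper's settings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

def fuzzy_cmeans(X, c, m=2.0, iters=50):
    """Plain fuzzy c-means; returns the (c, n) membership matrix U."""
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n).T              # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0, keepdims=True)
    return U

X = rng.normal(size=(300, 2))                            # toy data
base_clusters = []                                       # fuzzy clusters from all runs
for _ in range(10):                                      # ensemble of subsampled runs
    idx = rng.choice(len(X), size=int(0.8 * len(X)), replace=False)
    U = fuzzy_cmeans(X[idx], c=4)
    for row in U:                                        # each row = one fuzzy cluster
        full = np.zeros(len(X)); full[idx] = row         # lift memberships back to all points
        base_clusters.append(full)

B = np.array(base_clusters)                              # (num_clusters, n_points)
# Cluster-cluster similarity: cosine similarity between membership vectors.
norms = np.linalg.norm(B, axis=1)
S = (B @ B.T) / (norms[:, None] * norms[None, :])
# Hierarchical clustering of the base clusters on the distance 1 - S.
labels = fcluster(linkage(1 - S[np.triu_indices(len(B), 1)], method="average"),
                  t=4, criterion="maxclust")
# Assign each point to the merged cluster with the highest summed membership.
merged = np.vstack([B[labels == k].sum(axis=0) for k in np.unique(labels)])
assignment = merged.argmax(axis=0)
```
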
Diversity based cluster weighting in cluster ensemble: an information theory approach
Clustering ensembles have become increasingly popular in recent years, consolidating several base clusterings into a probably better and more robust one. However, most of the proposed clustering ensemble methods ignore cluster dependability, which exposes them to the risk of low-quality base clusterings (and consequently low-quality base clusters). Although some attempts have been made to evaluate base clusterings, they consider each base clustering individually, regardless of diversity. In this study, a new clustering ensemble approach is proposed using a weighting strategy. The paper presents a method for consensus clustering that exploits the concept of cluster uncertainty: each cluster contributes with a weight computed from its undependability. All predicted cluster labels available in the ensemble are used to evaluate a cluster's undependability through an information-theoretic measure, and two measures based on cluster undependability (uncertainty) are proposed to estimate cluster dependability (certainty). The multiple clusterings are then reconciled through this cluster uncertainty. Two approaches are proposed to achieve this goal: cluster-wise weighted evidence accumulation and cluster-wise weighted graph partitioning. The former is based on hierarchical agglomerative clustering and co-association matrices, the latter on bipartite graph formulation and partitioning. In the first step of the former, a cluster-wise weighted co-association matrix is proposed for representing a clustering ensemble. The proposed approaches are evaluated on 19 real-life datasets. The extensive experiments on these real-world and benchmark datasets show that the proposed methods effectively capture the implicit relationships among objects and achieve higher clustering accuracy, stability, and robustness than a large number of state-of-the-art techniques, supported by statistical analysis.
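
As a rough illustration of the cluster-weighting idea, the sketch below scores each base cluster by the entropy of how the other base labelings split its members and uses exp(-entropy) as its weight when accumulating a cluster-wise weighted co-association matrix. The entropy-to-weight mapping, the toy labelings, and the average-linkage consensus step are assumptions, not the paper's exact undependability measures.

```python
# Illustrative entropy-based cluster weighting and weighted co-association;
# the weighting rule is an assumed stand-in for the paper's measures.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_uncertainty(members, other_labelings):
    """Mean entropy of how this cluster's members are split by the other labelings."""
    ents = []
    for lab in other_labelings:
        _, counts = np.unique(lab[members], return_counts=True)
        p = counts / counts.sum()
        ents.append(float(-(p * np.log(p + 1e-12)).sum()))
    return float(np.mean(ents))

def weighted_co_association(labelings):
    n = len(labelings[0])
    co = np.zeros((n, n))
    for i, lab in enumerate(labelings):
        others = [l for j, l in enumerate(labelings) if j != i]
        for k in np.unique(lab):
            members = np.where(lab == k)[0]
            w = np.exp(-cluster_uncertainty(members, others))   # more certain => larger weight
            co[np.ix_(members, members)] += w
    return co / co.max()

labelings = [np.array([0, 0, 0, 1, 1, 1]),   # toy ensemble: three labelings of 6 points
             np.array([0, 0, 1, 1, 1, 1]),
             np.array([0, 0, 0, 0, 1, 1])]
co = weighted_co_association(labelings)
Z = linkage(squareform(1 - co, checks=False), method="average")  # consensus step
consensus = fcluster(Z, t=2, criterion="maxclust")
print(consensus)
```
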
Clustering ensemble selection considering quality and diversity
A partition may well be judged bad by a stability measure even though it contains one or more high-quality clusters, and it is then neglected entirely. Inspired by this shortcoming of partition-level evaluation, researchers have turned to defining measures for evaluating individual clusters. Many stability measures, such as Normalized Mutual Information (NMI), have been proposed to validate a partition, and the measures defined here are based on NMI. This paper discusses the drawback of the commonly used approach and proposes a criterion, called Edited Normalized Mutual Information (ENMI), for assessing the association between a cluster and a partition. The ENMI criterion compensates for the drawback of the common NMI measure. A clustering ensemble method based on aggregating a subset of primary clusters is also proposed. The method uses the average ENMI as a fitness measure to select clusters: those that satisfy a predefined threshold of this measure are selected to participate in the final ensemble. A set of consensus functions is employed to combine the chosen clusters. One class of the used consensus functions is co-association based; since the Evidence Accumulation Clustering (EAC) method cannot derive the co-association matrix from a subset of clusters, Extended EAC (EEAC) is employed to construct it from the chosen subset. A second class is based on hypergraph partitioning algorithms. A third class treats the chosen clusters as a new feature space and uses a simple clustering algorithm to extract the consensus partition. Empirical studies show that the proposed method outperforms other well-known ensembles.
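
A loose sketch of the selection step is given below, using plain NMI between a cluster's indicator vector and each base partition as a stand-in for the ENMI criterion; the threshold value and the toy partitions are assumptions.

```python
# Selecting individual clusters by an average NMI-style score; plain NMI is an
# assumed stand-in for the paper's ENMI criterion.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def average_cluster_score(members, n_points, partitions):
    indicator = np.zeros(n_points, dtype=int)
    indicator[members] = 1                      # the cluster as a two-class labeling
    return float(np.mean([normalized_mutual_info_score(indicator, p) for p in partitions]))

def select_clusters(partitions, threshold=0.3):
    n = len(partitions[0])
    selected = []
    for lab in partitions:
        for k in np.unique(lab):
            members = np.where(lab == k)[0]
            others = [p for p in partitions if p is not lab]
            if average_cluster_score(members, n, others) >= threshold:
                selected.append(members)
    return selected

partitions = [np.array([0, 0, 0, 1, 1, 2]),     # toy primary partitions
              np.array([0, 0, 1, 1, 2, 2]),
              np.array([0, 0, 0, 1, 1, 1])]
chosen = select_clusters(partitions)
print(f"{len(chosen)} clusters passed the threshold")
```
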
Fuzzy Decision-Based Energy Management of Energy Grids with Hubs considering Participation of Hubs and Networks in the Energy Markets
With the creation of competitive environments such as the electricity market, energy networks and active consumers such as energy hubs are expected to participate in the market to improve their economic situation. This article therefore proposes the simultaneous optimal involvement of energy networks and hubs in both the wholesale and retail energy markets, based on an energy management system. The proposed scheme is expressed as a two-objective optimization. The first objective minimizes the cost of the different types of energy in the electricity, gas, and thermal networks. The second objective function minimizes the energy cost of the energy hubs in the retail market (the difference between the energy purchase cost and the energy sale income). The scheme is bound by the optimal power flow equations of the mentioned networks and the operating models of power sources and active loads. Pareto optimization combined with a weighted sum of the objective functions is then used to extract an optimal compromise solution on the basis of fuzzy decision-making. Finally, the scheme is applied to a test system, and the numerical results confirm that the energy hubs improve financially while the economic and operating conditions of the electricity, gas, and thermal networks are enhanced simultaneously. Significant profit can thus be achieved for the energy hubs (EHs) in the retail energy market. The economic situation of the networks improves by up to roughly 10% compared with plain power flow studies, and their operating situation improves by about 12% to 53% compared with the case without EHs.
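
The fuzzy decision-making step used to pick a compromise solution from a Pareto set is commonly implemented with normalized linear membership functions; the sketch below shows that generic step on an invented bi-objective front (the numbers are not results from the paper).

```python
# Generic fuzzy (normalized membership) compromise selection over a Pareto
# front; the front values below are invented for illustration.
import numpy as np

def fuzzy_compromise(front):
    """front: (n_solutions, n_objectives) array of objective values to minimize."""
    fmin, fmax = front.min(axis=0), front.max(axis=0)
    mu = (fmax - front) / np.where(fmax > fmin, fmax - fmin, 1.0)  # satisfaction per objective
    score = mu.sum(axis=1) / mu.sum()                               # normalized membership
    return int(np.argmax(score)), score

# toy bi-objective front: [network energy cost, hub energy cost]
front = np.array([[120.0, 45.0],
                  [110.0, 52.0],
                  [101.0, 60.0],
                  [95.0,  75.0]])
best, score = fuzzy_compromise(front)
print("compromise solution:", best, "scores:", np.round(score, 3))
```
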
Explicit memory based ABC with a clustering strategy for updating and retrieval of memory in dynamic environments
The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence optimization algorithms and has been used extensively for static applications. Many practical, real-world applications, however, are dynamic, so optimization algorithms are needed that can solve this class of problems. Dynamic optimization problems, in which changes may occur over time, are tougher to handle than static ones. In this paper, an approach for solving dynamic optimization problems is proposed, based on the ABC algorithm enriched with an explicit memory and a population clustering scheme. The proposed algorithm uses the explicit memory to store the best solutions found over time and employs clustering to preserve diversity in the population. Reusing these stored solutions and maintaining diversity among the candidate solutions help speed up the convergence of the algorithm. The approach has been tested on the Moving Peaks Benchmark, a function well suited to testing optimization algorithms and considered one of the best representatives of dynamic environments. The experimental study on this benchmark shows that the proposed approach is superior to several other well-known and state-of-the-art algorithms in dynamic environments.
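
A highly simplified sketch of the memory-plus-clustering idea is shown below: the best current solution is archived in an explicit memory before an environment change, and the population is rebuilt from the memory, cluster representatives, and fresh random points. The toy fitness function, the KMeans-based clustering, and all sizes are placeholders, not the paper's ABC implementation or the Moving Peaks Benchmark.

```python
# Simplified memory update + clustering-based diversity preservation for a
# dynamic environment; everything here is an assumed placeholder.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
DIM, POP, MEM_SIZE, N_CLUSTERS = 5, 30, 5, 6

def fitness(x, peak):
    return -np.linalg.norm(x - peak)          # toy moving-peak style objective

population = rng.uniform(-5, 5, size=(POP, DIM))
memory = []

def on_environment_change(population, memory, peak):
    # 1) archive the current best solution in the explicit memory
    best = population[np.argmax([fitness(x, peak) for x in population])]
    memory.append(best.copy())
    if len(memory) > MEM_SIZE:
        memory.pop(0)
    # 2) cluster the population and keep one representative per cluster
    km = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit(population)
    reps = km.cluster_centers_
    # 3) rebuild the population: memory + representatives + fresh random points
    fresh = rng.uniform(-5, 5, size=(POP - len(memory) - len(reps), DIM))
    return np.vstack([np.array(memory), reps, fresh])

peak = rng.uniform(-5, 5, size=DIM)           # the environment shifts this peak over time
population = on_environment_change(population, memory, peak)
```
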
Participation of Grid-Connected Energy Hubs and Energy Distribution Companies in the Day-Ahead Energy Wholesale and Retail Markets Constrained to Network Operation Indices
In this paper, the optimal scheduling of energy grids and networked energy hubs based on their participation in the day-ahead energy wholesale and retail markets is presented. The problem is formulated as a bilevel model. The upper level minimizes the expected energy cost of the electricity, gas, and heating grids, operated as private distribution companies in the mentioned markets, in the first objective function, and minimizes the expected energy loss of these networks in the second objective function. This problem is constrained by linearized optimal power flow equations. The lower-level formulation minimizes the expected energy cost of the hubs (the difference between energy sales and purchases) as an objective function in the retail market. Its constraints are the operating formulations of sources and active loads and the flexibility limits of the hubs. The unscented transformation approach models the uncertainties of load, renewable power, energy price, and the energy demand of mobile storage. The Karush–Kuhn–Tucker approach and a Pareto optimization technique based on the ε-constraint method are then adopted to extract a single-level, single-objective formulation. Finally, the obtained results verify the capability of the presented method to improve the economic status of the hubs and the economic and operating situation of the mentioned networks simultaneously: by managing the power of the energy hubs, the proposed scheme has been able, compared with plain power flow studies, to reduce operating costs by 8%, reduce energy losses by 10%, and improve the voltage and temperature profiles by 36% and 30%, respectively.
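
The ε-constraint step mentioned above (minimize one objective while bounding the other and sweeping the bound) is illustrated below on a toy pair of quadratic objectives; the objective functions and bounds are stand-ins for the actual grid model.

```python
# Toy epsilon-constraint Pareto generation; the objectives are assumed stand-ins.
import numpy as np
from scipy.optimize import minimize

f_cost = lambda x: (x[0] - 1) ** 2 + x[1] ** 2        # stand-in: expected energy cost
f_loss = lambda x: x[0] ** 2 + (x[1] - 1) ** 2        # stand-in: expected energy loss

pareto = []
for eps in np.linspace(0.1, 2.0, 8):                  # sweep the bound on the second objective
    res = minimize(f_cost, x0=[0.5, 0.5], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f_loss(x)}])
    if res.success:
        pareto.append((round(f_cost(res.x), 3), round(f_loss(res.x), 3)))
print(pareto)
```
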
Multi-objective whale optimization algorithm and multi-objective grey wolf optimizer for solving next release problem with developing fairness and uncertainty quality indicators
Selecting the set of requirements to implement in the next software release is an NP-hard problem known as the next release problem (NRP). We propose multi-objective versions of the grey wolf optimizer (MOGWO) and the whale optimization algorithm (MOWOA) for solving the bi-objective NRP, and use these two algorithms, together with three other evolutionary algorithms, to solve NRP instances from four datasets. The cost-to-score ratio and a roulette wheel are used to satisfy the constraints of the NRP. The obtained Pareto fronts are compared using eight quality indicators: in addition to four general multi-objective optimization quality indicators, three aspects of fairness among clients, as well as uncertainty, are recast as quality indicators computed over a Pareto front. Results show that MOWOA performs better than the other algorithms and makes requirement selection fairer, while MOGWO works better than the rest when the budget constraints are reduced.
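
A small sketch of the bi-objective NRP encoding and a cost-to-score-ratio, roulette-wheel style repair is given below; the requirement costs, scores, and budget are invented for illustration, and the repair rule is only one plausible reading of the constraint handling described above.

```python
# Toy bi-objective NRP encoding with a roulette-wheel repair biased by the
# cost-to-score ratio; data and budget are invented.
import numpy as np

rng = np.random.default_rng(2)
cost  = rng.integers(1, 10, size=12).astype(float)   # cost of each requirement
score = rng.integers(1, 20, size=12).astype(float)   # client-weighted score
BUDGET = 30.0

def objectives(x):
    """x: 0/1 vector of selected requirements -> (total cost, -total score)."""
    return float(cost @ x), float(-(score @ x))

def repair(x):
    """Drop requirements via a roulette wheel biased toward a high cost/score
    ratio until the budget constraint holds."""
    x = x.copy()
    while cost @ x > BUDGET:
        sel = np.where(x == 1)[0]
        ratio = cost[sel] / score[sel]
        probs = ratio / ratio.sum()
        x[rng.choice(sel, p=probs)] = 0              # worst value-for-money goes first
    return x

candidate = rng.integers(0, 2, size=12)
feasible = repair(candidate)
print("objectives (cost, -score):", objectives(feasible))
```
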
Modeling of static var compensator-high voltage direct current to provide power and improve voltage profile
Transmission lines react to unexpected increases in power, and if these power changes are not controlled, some lines on certain routes become overloaded. Flexible alternating current transmission system (FACTS) devices can change the voltage magnitude and phase angle and thus control the power flow. This paper presents suitable mathematical models of FACTS devices, including the static var compensator (SVC) as a parallel compensator and the high-voltage direct current (HVDC) link. A comprehensive model of the SVC and HVDC link applied simultaneously in the power flow is also developed, and the effects of the compensations are compared. The comprehensive model was implemented on a 5-bus test system in MATLAB using the Newton-Raphson method; the results revealed that the generators have to produce more power. The addition of these devices also stabilizes the voltage and controls active and reactive power in the network.
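
The Newton-Raphson iteration used to solve power-flow mismatch equations of this kind can be sketched in a few lines; the two-equation toy system below stands in for the full network (and SVC/HVDC) model.

```python
# Bare-bones Newton-Raphson iteration on a toy mismatch system; the equations
# stand in for the full power-flow model.
import numpy as np

def F(x):                                   # mismatch equations f(x) = 0
    return np.array([x[0] ** 2 + x[1] - 1.1,
                     x[0] + x[1] ** 2 - 0.9])

def J(x):                                   # Jacobian of the mismatches
    return np.array([[2 * x[0], 1.0],
                     [1.0, 2 * x[1]]])

x = np.array([1.0, 0.0])                    # flat-start style initial guess
for _ in range(20):
    x += np.linalg.solve(J(x), -F(x))       # Newton step
    if np.linalg.norm(F(x)) < 1e-8:
        break
print("solution:", x)
```
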
Imputing missing value through ensemble concept based on statistical measures
Many datasets contain missing values in their attributes, and data mining techniques are not applicable in the presence of missing values. An important preprocessing step in a data mining task is therefore missing value management, and one of its most important categories is missing value imputation. This paper presents a new imputation technique based on statistical measurements. The suggested technique employs an ensemble of estimators built to estimate the missing values, based separately on the positively and negatively correlated observed attributes. Each estimator guesses a value for a missing entry based on the mean and variance of that feature, which are estimated from the feature's non-missing values. The final consensus value for a missing entry is the weighted aggregation of the values estimated by the different estimators: the chief weight is the attribute correlation, and a secondary weight depends on measures such as kurtosis, skewness, the number of involved samples, and their composition. For the evaluation, missing values are deliberately introduced at random at different levels. The experiments indicate that the suggested technique achieves good accuracy in comparison with classical methods.
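
A loose sketch of correlation-weighted imputation in this spirit is shown below: each observed attribute correlated with the missing one yields a mean/variance-based estimate, and the estimates are aggregated with the absolute correlation as the chief weight. The regression-style estimator and the toy data are assumptions, not the paper's exact rule or secondary weights.

```python
# Correlation-weighted imputation sketch; the per-attribute estimator is an
# assumed stand-in for the paper's exact formulas.
import numpy as np

def impute(X):
    X = X.copy()
    mu = np.nanmean(X, axis=0)
    sd = np.nanstd(X, axis=0) + 1e-12
    # pairwise Pearson correlations computed from the observed values
    Z = (X - mu) / sd
    R = np.ma.corrcoef(np.ma.masked_invalid(Z), rowvar=False).filled(0.0)
    for i, j in zip(*np.where(np.isnan(X))):
        obs = [k for k in range(X.shape[1]) if k != j and not np.isnan(X[i, k])]
        if not obs:
            X[i, j] = mu[j]                          # fall back to the column mean
            continue
        # one estimator per observed attribute: regression-style guess
        est = [mu[j] + R[j, k] * sd[j] * (X[i, k] - mu[k]) / sd[k] for k in obs]
        w = np.abs(R[j, obs]) + 1e-12                # correlation as the chief weight
        X[i, j] = float(np.dot(w, est) / w.sum())
    return X

X = np.array([[1.0, 2.0, np.nan],
              [2.0, 4.1, 6.0],
              [3.0, 6.2, 9.1],
              [4.0, np.nan, 12.0]])
print(np.round(impute(X), 2))
```
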
Comprehensive modeling of SVC–TCSC–HVDC power flow in terms of simultaneous application in power systems
Due to the growth pattern of electricity consumption, power networks and transmission lines need to be developed. The power transmission capacity of lines is limited by a host of factors, so these lines need series and parallel compensation to reduce losses, increase efficiency, and improve system security. In this paper, flexible alternating current transmission system (FACTS) devices are modeled, including static VAR compensators (SVC) as parallel compensators, thyristor-controlled series compensation (TCSC) as a series compensator, and the high-voltage direct current (HVDC) link. In addition, a comprehensive model of the simultaneous application of these three devices in the load flow is developed, and the effects of these types of compensation are compared. The comprehensive model was implemented in MATLAB using the Newton-Raphson method on two test systems, the 9-bus WSCC system and a 5-bus system. Calculation speed and convergence deteriorated compared with applying the devices individually, owing to the larger number of equations and the new terms added to the load flow equations. Furthermore, higher losses were observed in this model, which can probably be improved by using an optimal power flow and optimal placement of the devices in the network.