4,258 result(s) for "target function"
Denoising of the Poisson-Noise Statistics 2D Image Patterns in the Computer X-ray Diffraction Tomography
A central issue in validating computer X-ray diffraction micro-tomography is improving the digital contrast and spatial resolution of the 3D-recovered nano-scaled objects in crystals. In this respect, the denoising of the 2D image-pattern data involved in the 3D high-resolution recovery processing has been treated. Poisson noise was simulated on the 2D image-pattern data, which were then employed for recovering nano-scaled crystal structures. By applying the statistical-average and geometric-mean methods to the acquired 2D image frames, we showed that the statistical-average hypothesis works well, at least for 2D Poisson-noise image data related to a Coulomb-type point defect in a Si(111) crystal. The denoised 2D image-pattern data were validated both by the 3D recovery processing of the Coulomb-type point defect in the Si(111) crystal and by the peak signal-to-noise ratio (PSNR) criterion.
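The statistical-averaging idea in this abstract can be illustrated with a minimal sketch (not the authors' code): average N independent Poisson-noise frames of a synthetic 2D pattern and score the result with the PSNR criterion the abstract mentions. The pattern shape and frame count below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" 2D intensity pattern (a stand-in for a diffraction image).
clean = 50.0 + 40.0 * np.sin(np.linspace(0, 4 * np.pi, 64))[None, :] * \
        np.cos(np.linspace(0, 4 * np.pi, 64))[:, None]
clean = np.clip(clean, 1.0, None)

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Acquire N statistically independent Poisson-noise frames and average them.
frames = rng.poisson(clean, size=(32,) + clean.shape).astype(float)
single = frames[0]
averaged = frames.mean(axis=0)

print(f"PSNR single frame:  {psnr(clean, single):.1f} dB")
print(f"PSNR 32-frame mean: {psnr(clean, averaged):.1f} dB")
```

Averaging N frames cuts the Poisson variance by a factor of N, so the PSNR of the mean frame should exceed that of any single frame by roughly 10·log10(N) dB.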
Function‐Specific Localization in the Supplementary Motor Area: A Potential Effective Target for Tourette Syndrome
ABSTRACT Aims Repetitive transcranial magnetic stimulation (rTMS) targeting the supplementary motor area (SMA) may treat Tourette's syndrome (TS) by modulating the function of the globus pallidus internus (GPi) via the cortico-striato-thalamo-cortical circuit. Methods We conducted a randomized longitudinal study to examine circuit functionality and clinical efficacy. The GPi was identified as an "effective region" for TS treatment. Using functional MRI, individualized targets within the SMA were identified. Function-specific targets [left SMA (n = 19), right SMA (n = 16)] were compared with conventional scalp-localized SMA targets (n = 19). Age- and gender-matched typically developing children (TDC) served as controls (n = 48). TS patients received 50 Hz continuous theta burst stimulation (cTBS) at 70% RMT over five consecutive days (1800 pulses/day). Clinical efficacy was assessed using the Yale Global Tic Severity Scale (YGTSS) at one and two weeks post-cTBS. Functional connectivity (FC) analyses of the GPi evaluated the impact on brain function. Results There was an approximately 3 cm Y-axis distance between the function-specific and conventional targets. TS patients exhibited significantly reduced GPi-based FC in bilateral motor areas at baseline compared to TDC. Following cTBS, 4 out of 19 patients in the left SMA group achieved a ≥ 30% reduction in YGTSS scores. cTBS modulated brain function in the left inferior orbital frontal cortex and right lingual gyrus/cerebellum, primarily influenced by the right SMA target, whereas the conventional target showed no effect on YGTSS scores. Changes in GPi-target FC were significantly correlated with the reduction in YGTSS total scores (r = 0.638, p = 0.026). Conclusion These findings suggest that function-specific SMA targets may yield more pronounced modulatory effects, with the left SMA target achieving "Effectiveness" after just one week of cTBS. Combining function-specific SMA-targeted cTBS with standard treatment shows promise in accelerating the onset of clinical efficacy in TS treatment compared with conventional SMA targeting, warranting further investigation.
Fast Registration Algorithm for Laser Point Cloud Based on 3D-SIFT Features
In response to the issues of slow convergence and the tendency to fall into local optima in traditional iterative closest point (ICP) point cloud registration algorithms, this study presents a fast registration algorithm for laser point clouds based on 3D scale-invariant feature transform (3D-SIFT) feature extraction. First, feature points are preliminarily extracted using a normal-vector threshold; then, higher-quality feature points are extracted using the 3D-SIFT algorithm, effectively reducing the number of points involved in registration. Based on the extracted feature points, a coarse registration of the point cloud is performed using the fast point feature histogram (FPFH) descriptor combined with the sample consensus initial alignment (SAC-IA) algorithm, followed by fine registration using the point-to-plane ICP algorithm with a symmetric target function. The experimental results show that this algorithm significantly improves registration efficiency. Compared with the traditional SAC-IA+ICP algorithm, the registration accuracy of this algorithm increased by 29.55% in experiments on a public dataset, and the registration time was reduced by 81.01%. In experiments on actual collected data, the registration accuracy increased by 41.72%, and the registration time was reduced by 67.65%. The algorithm presented in this paper maintains high registration accuracy while greatly reducing the registration time.
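The coarse-alignment step can be shown in miniature. The sketch below is not the paper's FPFH/SAC-IA pipeline; it only demonstrates the closed-form rigid alignment (Kabsch/SVD) that such pipelines solve once correspondences are hypothesized, on a synthetic cloud with a known motion.

```python
import numpy as np

rng = np.random.default_rng(1)

def kabsch(src, dst):
    """Closed-form rigid alignment (rotation R, translation t) minimizing
    ||R @ src_i + t - dst_i|| over known point correspondences (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic cloud and a known rigid motion to recover.
src = rng.normal(size=(200, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
dst = src @ R_true.T + t_true

R_est, t_est = kabsch(src, dst)
print("max rotation error:", np.abs(R_est - R_true).max())
```

With exact correspondences the recovery is essentially exact; real pipelines such as SAC-IA wrap this estimator in a RANSAC loop over FPFH feature matches.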
Sharpness of some Hardy-type inequalities
The current status of Hardy-type inequalities with sharp constants is presented and described within a unified convexity framework. In particular, it is then natural to replace the Lebesgue measure dx with the Haar measure dx/x. Some new two-sided Hardy-type inequalities for monotone functions are also derived, in which not only are the two constants sharp but the involved function spaces are (more) optimal. As applications, a number of both well-known and new Hardy-type inequalities are pointed out. These results are, in turn, used to derive new sharp information concerning the relation between different quasi-norms in Lorentz spaces.
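For orientation, the classical sharp Hardy inequality referred to above reads, for p > 1 and measurable f ≥ 0:

```latex
\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p} dx
\;\le\;\left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx .
```

The constant (p/(p−1))^p cannot be improved; as the abstract notes, rewriting the inequality with respect to the Haar measure dx/x makes the dilation invariance underlying the convexity approach explicit.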
Investigating the potential of Morris algorithm for improving the computational constraints of global sensitivity analysis
Sensitivity analysis (SA) is widely acknowledged as advantageous and worthwhile for identifying parameters for model calibration and optimization, especially in complex hydrological models. Although Sobol global SA is an efficient way to evaluate sensitivity indices, its computational cost is a constraint. This study analyzes the potential of Morris global SA to achieve results tantamount to Sobol SA at a much lower computational expense, using a new approach of increasing the number of replications for the Morris algorithm. SA for two catchments is performed on a coupled hydrological model using the Morris and Sobol algorithms, with two target functions used for each algorithm. Sobol SA required 660,000 model simulations, accounting for about 400 computing hours, whereas, after increasing the replications from 1000 to 3000, the Morris method called for 63,000 runs and 6 computing hours to produce significantly similar results. The Morris parameter ranking improved by 50% with respect to Sobol SA through a three-fold increase in replications, at a small 5-h increase in computational expense. The results also suggest that target functions and catchments influence parameter sensitivity. The new approach to employing the Morris method of SA shows promising results for highly parameterized hydrological models without compromising the quality of SA, specifically when there are time constraints. The study encourages the use of SA, which is often skipped because of its higher computational demands.
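The elementary-effects idea behind the Morris method can be sketched as follows (a toy three-parameter model, not the study's hydrological model): mu*, the mean absolute elementary effect over r one-at-a-time perturbations, ranks parameters by influence at a cost of only r·(k+1)-ish model runs rather than Sobol's much larger sample.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Hypothetical model: x0 strongly influential, x1 mildly, x2 inert.
    return 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

def morris_mu_star(model, k, r, delta=0.25):
    """Mean absolute elementary effect (mu*) per parameter, from r
    one-at-a-time perturbations over the unit cube."""
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # keep x + delta inside [0, 1]
        base = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = abs(model(xp) - base) / delta
    return effects.mean(axis=0)

mu_star = morris_mu_star(model, k=3, r=200)
print("mu* per parameter:", np.round(mu_star, 2))
```

The ranking by mu* puts x0 first and the inert x2 last, which is exactly the screening information the study extracts before committing to a full Sobol analysis.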
Multi-Scale Remaining Useful Life Prediction Using Long Short-Term Memory
Predictive maintenance based on performance degradation is a crucial way to reduce maintenance costs and potential failures in modern complex engineering systems. Reliable remaining useful life (RUL) prediction is the main criterion for decision-making in predictive maintenance. Conventional model-based methods and data-driven approaches often fail to achieve an accurate prediction result using a single model for a complex system featuring multiple components and operational conditions, as the degradation pattern is usually nonlinear and time-varying. This paper proposes a novel multi-scale RUL prediction approach adopting the Long Short-Term Memory (LSTM) neural network. In the feature engineering phase, Pearson’s correlation coefficient is applied to extract the representative features, and an operation-based data normalisation approach is presented to deal with the cases where multiple degradation patterns are concealed in the sensor data. Then, a three-stage RUL target function is proposed, which segments the degradation process of the system into the non-degradation stage, the transition stage, and the linear degradation stage. The classification of these three stages is regarded as the small-scale RUL prediction, and it is achieved through processing sensor signals after the feature engineering using a novel LSTM-based binary classification algorithm combined with a correlation method. After that, a specific LSTM-based predictive model is built for the last two stages to produce a large-scale RUL prediction. The proposed approach is validated by comparing it with several state-of-the-art techniques based on the widely used C-MAPSS dataset. A significant improvement is achieved in RUL prediction performance in most subsets. For instance, a 40% reduction is achieved in Root Mean Square Error over the best existing method in subset FD001. 
Another contribution of the multi-scale RUL prediction approach is that it offers a greater degree of flexibility in the maintenance strategy, depending on data availability and on which degradation stage the system is in.
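The three-stage RUL target described above can be sketched as a capped piecewise-linear curve with a smoothed knee; the cap value, transition width, and smoothing method below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def three_stage_rul(n_cycles, cap, transition):
    """Hypothetical three-stage RUL target for a run-to-failure trajectory:
    constant at `cap` (non-degradation stage), a `transition`-cycle smoothed
    knee (transition stage), then linear degradation to failure."""
    t = np.arange(n_cycles, dtype=float)
    linear = n_cycles - 1 - t             # classic linear RUL
    rul = np.minimum(cap, linear)         # capped (piecewise-linear) target
    # Smooth the knee with a simple moving average over the transition window.
    if transition > 1:
        kernel = np.ones(transition) / transition
        pad = np.pad(rul, (transition // 2, transition - 1 - transition // 2),
                     mode="edge")
        rul = np.convolve(pad, kernel, mode="valid")
    return rul

rul = three_stage_rul(n_cycles=200, cap=125, transition=10)
print(rul[:3], rul[-3:])
```

Labelling cycles by which of the three segments they fall in yields the small-scale classification targets, while the curve itself is the large-scale regression target.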
Improving Efficiency of Electric Energy System and Grid Operating Modes: Review of Optimization Techniques
Continuously growing tariff rates for the energy carriers required to generate electrical and thermal energy bring about the need to search for alternatives. Such alternatives are intended to reduce the net costs of electricity and heat as well as the expenses for the operation and maintenance of system elements and the damage from power outages or deteriorated power quality. One way to reduce electricity and heat costs is the introduction of distributed energy resources capable of operating on both conventional (natural gas) and alternative (solar and wind energy, biomass, etc.) fuels. The problem of reducing electricity and, in some cases, heat costs is solved by applying mathematical optimization techniques adapted to a specific element or system of the industry in question. When it comes to power industry facilities, optimization, as a rule, includes reducing active power losses by controlling the system mode or specific power unit parameters; planning the operating modes of generating equipment; defining the optimal equipment composition; improving the regime and structural reliability of grids; scheduling preventive maintenance of equipment; and searching for effective power unit operating modes. Many of the problems listed are solved using direct enumeration techniques; modern technical tools allow such local problems to be solved quickly even with a large amount of source data. However, in the case of integrated control over the power system or its individual elements, optimization techniques are used that can account for many operating limitations and the multicriteria nature of the target function. This paper provides an analytical review of optimization techniques adapted to solving problems of improving the efficiency of power facility operating modes. The article is based on the authors' research in the area of optimizing the operating modes of electric energy systems and grids.
The authors drew conclusions on the applicability of mathematical optimization methods in the power engineering field. While conducting the research, the authors relied on their expertise in the development and introduction of a method to optimize the operating modes of energy supply systems with heterogeneous energy sources.
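The multicriteria target function mentioned above is often handled by scalarization; a minimal weighted-sum sketch follows, with toy loss and voltage-deviation models and an assumed weight (none of this is from the review itself).

```python
import numpy as np

# Hypothetical two-criteria target function for a grid operating mode:
# active-power losses vs. voltage deviation, scalarized by a weighted sum.
def losses(u):          # toy loss model vs. a scalar control setting u
    return (u - 0.3) ** 2

def volt_dev(u):        # toy voltage-deviation model
    return (u - 0.7) ** 2

w = 0.5                 # assumed weight between the two criteria
u_grid = np.linspace(0, 1, 1001)
target = w * losses(u_grid) + (1 - w) * volt_dev(u_grid)
u_opt = u_grid[np.argmin(target)]
print("optimal setting:", u_opt)   # equal weights land midway, at 0.5
```

Sweeping the weight w traces out the Pareto front between the two criteria, which is how weighted-sum scalarization exposes the trade-offs such reviews discuss.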
Optimal Target Function for the Fractional Fourier Transform of LFM Signals
Owing to their effectiveness in underwater acoustic communication, linear frequency modulation (LFM) signals have been widely used in commercial and military applications. The existing approaches based on the traditional target function can estimate only single-component LFM signal parameters, as these approaches assume that the order of the optimal fractional Fourier transform (FRFT) is one. To overcome this limitation, we developed an LFM signal parameter estimation method that exploits the information entropy in its target function and optimizes the order of the FRFT using search algorithms, such as sequential search, multistage step search, and particle swarm optimization. Unlike existing solutions, the proposed technique can estimate both single and multicomponent LFM signal parameters. Experiments were performed to compare the proposed method with state-of-the-art techniques that rely on the maximum value, high-order cumulants, and fractional broadening target functions. The proposed approach was noted to be computationally more efficient and more accurate. Moreover, the parameter estimation precision of this approach was comparable to that of classic FRFT schemes.
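The entropy-based target function can be illustrated without a full FRFT implementation: dechirping with a candidate chirp rate and scoring the spectrum's Shannon entropy plays the same role as sweeping the FRFT order, since a correctly matched LFM collapses into a single spectral bin. The signal parameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
true_rate = 120.0                              # assumed chirp rate, Hz/s
sig = np.exp(1j * np.pi * true_rate * t ** 2)  # unit-amplitude LFM
sig += 0.1 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

def spectrum_entropy(x):
    """Shannon entropy of the normalized magnitude spectrum: a perfectly
    dechirped LFM concentrates into one bin, minimizing entropy."""
    p = np.abs(np.fft.fft(x)) ** 2
    p /= p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Sequential search over candidate chirp rates (a stand-in for optimizing
# the FRFT order): dechirp with each candidate and score by entropy.
candidates = np.arange(0.0, 250.0, 2.0)
scores = [spectrum_entropy(sig * np.exp(-1j * np.pi * a * t ** 2))
          for a in candidates]
est = candidates[int(np.argmin(scores))]
print("estimated chirp rate:", est, "Hz/s")
```

The same entropy score could be fed to the multistage or particle-swarm searches the abstract mentions; the grid search here is just the simplest of the three.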
Comprehensive Recovery of Point Defect Displacement Field Function in Crystals by Computer X-ray Diffraction Microtomography
In the case of a point defect in a crystal, the inverse Radon problem in X-ray diffraction microtomography has been solved. As is known, the crystal-lattice defect displacement field function f(r) = h·u(r) determines the phases of the (±h)-structure factors incorporated into the Takagi–Taupin equations and provides the 2D image patterns formed by the diffracted and transmitted waves propagating through a crystal (h is the diffraction vector and u(r) is the crystal-lattice defect displacement field vector). Beyond the semi-kinematical approach for obtaining an analytical solution of the problem, the difference-equation scheme of the Takagi–Taupin equations, which in turn yields numerically controlled-accuracy solutions, has been applied and tested for the first time. To address the inverse Radon problem, the χ2-target-function optimization method using the Nelder–Mead algorithm has been employed and tested on the example of recovering the Coulomb-type point defect structure in a Si(111) crystal. As has been shown for the 2D noise-free fractional and integrated image patterns, based on the Takagi–Taupin solutions in the semi-kinematical and difference-scheme approaches, both procedures reach the global minimum of the χ2 target function even if the starting values of the point-defect vector P1 are chosen rather far from the reference, up to 40% in relative units. For the 2D Poisson-noise image patterns with noise levels up to 5%, the figures-of-merit of the optimization procedures by the Nelder–Mead algorithm turn out to be high enough, with the number of successful trials at 85%; in contrast, for the statistically denoised 2D image patterns, they reach 0.1%.
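The χ2/Nelder–Mead recovery strategy can be sketched with a toy forward model standing in for the Takagi–Taupin image simulation (the Lorentzian "pattern" and its parameters are hypothetical), starting about 40% away from the reference values as in the robustness test above. Assumes SciPy is available.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 101)

def pattern(p):
    """Toy forward model in place of the Takagi-Taupin simulation:
    a Lorentzian 'defect image' with amplitude p[0] and width p[1]."""
    return p[0] / (1.0 + (x / p[1]) ** 2)

p_ref = np.array([3.0, 0.2])                   # hypothetical reference defect
data = pattern(p_ref) + 0.01 * rng.normal(size=x.size)

def chi2(p):
    # Sum-of-squares misfit between simulated and "measured" patterns.
    return np.sum((pattern(p) - data) ** 2)

# Start ~40% away from the reference, mimicking the abstract's test.
p0 = p_ref * 1.4
res = minimize(chi2, p0, method="Nelder-Mead")
print("recovered parameters:", np.round(res.x, 3))
```

Nelder–Mead needs no gradients of the forward model, which is why it suits simulators like the Takagi–Taupin difference scheme where derivatives with respect to the defect parameters are not available analytically.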
Analysis and Forecasting of States of Industrial Networks with Adaptive Topology Based on Network Motifs
This article proposes an approach to studying the states of complex industrial networks with adaptive topology based on network motifs: statistically significant subgraphs of a larger graph. The presented analysis concerns the applicability of network motifs both to characterizing the system's performance and to short-, medium-, and long-term forecasting of system states. A smart-grid network structure is used as an example: it is represented as a directed graph in which the most frequent motifs are identified; several scenarios of attacks on network nodes are modeled, and a forecast of the network state is compiled. The results of experimental studies demonstrate the accuracy and consistency of applying this mathematical tool to the problems considered.
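Motif counting itself is straightforward; the sketch below counts feed-forward loops (a common 3-node motif) in a toy directed graph with grid-flavoured node names. The topology is hypothetical, for illustration only, not the article's smart-grid model.

```python
# Count feed-forward loops (FFL: a->b, a->c, b->c) in a directed graph
# given as an adjacency dict of successor sets.
adj = {
    "gen1": {"bus1", "bus2"},
    "bus1": {"bus2", "load1"},
    "bus2": {"load1", "load2"},
    "load1": set(),
    "load2": set(),
}

def count_ffl(adj):
    count = 0
    for a, outs in adj.items():
        for b in outs:
            for c in adj.get(b, ()):   # path a -> b -> c
                if c in outs:          # edge a -> c closes the feed-forward loop
                    count += 1
    return count

print("feed-forward loops:", count_ffl(adj))   # → feed-forward loops: 2
```

Real motif analysis compares such counts against randomized graphs with the same degree sequence to decide which subgraphs are statistically significant.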