24,354 result(s) for "computational efficiency"
Analytical modeling of lack-of-fusion porosity in metal additive manufacturing
This work presents a physics-based analytical modeling methodology for predicting lack-of-fusion porosity in powder bed metal additive manufacturing (PBMAM), considering the molten pool geometry, powder size variation, and powder packing. The model achieves short computational times without resorting to the finite element method or iteration-based simulations. Temperature profiles were calculated using a closed-form temperature solution. Multiple transverse sectional areas of the molten pool geometry were plotted on a cross-sectional area of the part, based on hatch space and layer thickness, to calculate the lack-of-fusion area. The powder bed porosity was calculated using an advancing front approach that accounts for the powder statistical distribution and powder packing. The part porosity was then obtained by multiplying the calculated lack-of-fusion area fraction by the powder bed porosity. Acceptable agreement was observed upon validation against experimental measurements under various process conditions in PBMAM of Ti6Al4V. The computational time was less than 26 s for the porosity calculation of five consecutive layers. The presented model has high prediction accuracy and high computational efficiency, which allow porosity calculation for large-scale parts and process parameter planning through inverse analysis, thus improving the usefulness of analytical modeling in real applications.
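As a rough illustration of the geometric step described in this abstract, the sketch below rasterizes one hatch-space by layer-thickness cell and counts the area not covered by semi-elliptical melt pool cross-sections; the part porosity would then be this fraction scaled by the powder bed porosity. The semi-elliptical pool shape, the variable names, and the example dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lack_of_fusion_fraction(pool_width, pool_depth, hatch_space, layer_thickness, n=400):
    """Estimate the un-melted (lack-of-fusion) area fraction of one
    hatch-space x layer-thickness cell, assuming each track's molten pool
    cross-section is a semi-ellipse of the given width and depth."""
    x = np.linspace(0.0, hatch_space, n)
    z = np.linspace(0.0, layer_thickness, n)
    X, Z = np.meshgrid(x, z)
    melted = np.zeros_like(X, dtype=bool)
    # Semi-elliptical pools hang down from the top of the layer (z = layer_thickness),
    # centred on the two neighbouring track lines at x = 0 and x = hatch_space.
    for cx in (0.0, hatch_space):
        melted |= ((X - cx) / (pool_width / 2.0))**2 \
                + ((Z - layer_thickness) / pool_depth)**2 <= 1.0
    return 1.0 - melted.mean()  # area fraction left unmelted

# Example: 100 um pool width, 60 um pool depth, 120 um hatch, 50 um layers.
# The part porosity would be this fraction multiplied by the powder bed porosity.
lof = lack_of_fusion_fraction(100e-6, 60e-6, 120e-6, 50e-6)
print(f"lack-of-fusion area fraction: {lof:.3f}")
```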
Realization of a computationally efficient BBU cluster for cloud RAN
Computational efficiency (CE) of a baseband unit (BBU) has become a key issue for the cloud radio access network (CRAN). CE is defined as the sum of computational power (CP) and computing resources (CR), where CP and CR are, respectively, the power and the processing resources that the core computing unit (CCU) of the BBU consumes while executing traffic requests from remote radio heads (RRHs). A CCU is simply a general-purpose processor (GPP). To optimize CR utilization, fewer BBUs must execute the same amount of RRH requests. This causes a dramatic rise in CCU operating temperature and creates a CP-efficiency problem, because the leakage power of a CCU, which is wasted power, increases with its operating temperature. On the other hand, a CCU can reduce its CP remarkably by executing RRH requests slowly at a lower frequency. However, the execution has to meet the deadline of the RRH request, which corresponds to a circuit delay in the CCU that depends on supply voltage, CCU temperature, and frequency. It is therefore very challenging to minimize the CP of a GPP while optimizing CR subject to the RRH-request deadline. Previous approaches, including our previously proposed TADCRA scheme, try to turn off as many GPPs as possible by consolidating traffic load on fewer GPPs to maximize CR utilization while operating at an allowable temperature. Compared to TADCRA, significant CP can be saved on the active GPPs by dynamically adjusting their voltage and frequency (DVFS) while executing RRH requests. Thus, given the optimal number of GPPs obtained from TADCRA that maximizes CR efficiency, a computational-efficiency problem is formulated subject to minimum operating frequency, voltage, and RRH-request deadline constraints. To address this problem, we propose the computationally efficient allocation (CEA) algorithm, in which a Lagrange-multiplier approach handles the CP-efficiency problem while our proposed heuristic, Win–Win, solves the CR optimization problem subject to the RRH-request deadline. Simulation results show that the proposed algorithm saves 16%, 29%, and 46% more CP than TADCRA, LDA, and conventional methods, respectively. CEA increases the CR utilization rate by 15% and 45% over LDA and conventional schemes, respectively, while requiring significantly fewer CCUs. Moreover, the Win–Win heuristic satisfies the deadline constraint, whereas the first-fit-decreasing (FFD) approach misses the deadline by 1 ms.
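The core trade-off in this abstract, running requests slowly to cut power while still meeting each RRH request's deadline, can be sketched as below. The cubic power-versus-frequency assumption and the names (request_cycles, deadline_s, f_min, f_max) are illustrative; this is a sketch of the DVFS idea, not the CEA algorithm itself.

```python
def dvfs_frequency(request_cycles, deadline_s, f_min, f_max):
    """Pick the lowest clock frequency (Hz) that still finishes a request of
    `request_cycles` CPU cycles within `deadline_s` seconds.
    With dynamic power roughly proportional to V^2 * f and V scaling with f,
    power grows about as f^3, so the slowest feasible frequency minimises CP."""
    f_needed = request_cycles / deadline_s   # deadline constraint: cycles / f <= deadline
    if f_needed > f_max:
        raise ValueError("request cannot meet its deadline even at f_max")
    return max(f_min, f_needed)

# Example: a 2e6-cycle RRH request with a 1 ms deadline needs at least 2 GHz.
print(dvfs_frequency(2e6, 1e-3, f_min=0.8e9, f_max=2.4e9) / 1e9, "GHz")
```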
Efficient reconstruction method with common weighting strategy and spectral optimization based on scale sensor
High-order schemes are essential for high-fidelity aerodynamic simulations. To enhance computational efficiency while maintaining the adaptability of high-order schemes across various flow regions, this study proposes a simplified discontinuity-detection method based on a scale sensor, which operates prior to the reconstruction algorithm. In smooth regions, common weights are applied to all governing equations, whereas in multi-scale regions the scheme adaptively switches between schemes with different dispersion properties. By integrating these techniques with a computationally efficient smoothness indicator within the TENO framework, a scheme called FTENO and an optimized variant are constructed. Comparative analyses of several typical benchmark cases demonstrate the superior overall performance of the proposed schemes in efficiency and multi-scale resolution.
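The sketch below shows the general idea of sensing discontinuities before reconstruction: a cheap per-cell indicator flags cells as smooth (where one set of common linear weights can be reused for all equations) or non-smooth (where an adaptive nonlinear reconstruction would take over). The ratio-of-differences sensor and the threshold are generic illustrations, not the paper's scale sensor.

```python
import numpy as np

def discontinuity_sensor(u, eps=1e-12):
    """Per-cell sensor in [0, 1]: ratio of the second difference to the sum of
    one-sided first differences. Near a jump the ratio approaches 1; in smooth,
    well-resolved regions it scales with the mesh size."""
    um, up = np.roll(u, 1), np.roll(u, -1)
    return np.abs(up - 2.0 * u + um) / (np.abs(up - u) + np.abs(u - um) + eps)

def use_common_weights(u, threshold=0.4):
    """True where the cheap common-weight (linear) reconstruction can be used,
    False where the adaptive nonlinear scheme should be applied instead."""
    return discontinuity_sensor(u) < threshold

x = np.linspace(0.0, 1.0, 400, endpoint=False)
u = np.sin(2.0 * np.pi * x)
u[x > 0.5] += 1.0                      # add a jump to trigger the sensor
print(f"{use_common_weights(u).mean():.1%} of cells use common weights")
```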
Fast Bayesian Compressed Sensing Algorithm via Relevance Vector Machine for LASAR 3D Imaging
Because of the sparsity of the three-dimensional (3D) imaging scene, compressed sensing (CS) algorithms can be used for linear array synthetic aperture radar (LASAR) 3D sparse imaging. CS algorithms usually achieve high-quality sparse imaging at the expense of computational efficiency. To solve this problem, a fast Bayesian compressed sensing algorithm via relevance vector machine (FBCS–RVM) is proposed in this paper. The proposed method maximizes the marginal likelihood function under the RVM framework to obtain the optimal hyper-parameters; the scattering units corresponding to the non-zero optimal hyper-parameters are extracted as the target areas in the imaging scene. Based on these target areas, the measurement matrix is simplified and sparse imaging is carried out. However, under low signal-to-noise ratio (SNR), low sampling rate, or high sparsity, the target areas cannot always be extracted accurately and may contain elements whose scattering coefficients are very small, close to zero compared with the other elements. Such elements can make the diagonal matrix singular and non-invertible, so the scattering coefficients cannot be estimated correctly. To solve this problem, the inverse of the singular matrix is replaced with the generalized inverse obtained by the truncated singular value decomposition (TSVD) algorithm, allowing the scattering coefficients to be estimated correctly. Based on the rank of the singular matrix, the elements with small scattering coefficients are extracted and eliminated to obtain more accurate target areas. Both simulation and experimental results show that the proposed method improves the computational efficiency and imaging quality of LASAR 3D imaging compared with state-of-the-art CS-based methods.
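The TSVD-based generalized inverse this abstract relies on can be sketched as follows. The truncation rule (`rank` or `rcond`) and the toy near-singular system are illustrative choices, not taken from the paper.

```python
import numpy as np

def tsvd_solve(A, b, rank=None, rcond=1e-10):
    """Solve A x ~= b using a truncated-SVD generalized inverse, as a stand-in
    for directly inverting a (near-)singular matrix. If `rank` is given, only
    that many leading singular values are kept; otherwise singular values
    below rcond * s_max are dropped."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if rank is None:
        rank = int(np.sum(s > rcond * s[0]))
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    return Vt.T @ (s_inv * (U.T @ b))

# Nearly singular 3x3 system: the third row is (almost) the sum of the first two.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0 + 1e-12]])
b = np.array([1.0, 2.0, 3.0])
print(tsvd_solve(A, b))
```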
An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems
In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU (lower-upper) factorization of the Jacobian matrix is computed only once per iteration. The computational efficiency index of the new method is compared with that of some known methods. Numerical results are given to show that the convergence behavior of the new method is similar to that of existing methods. The new method can be applied to small- and medium-sized nonlinear systems.
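The cost structure this abstract highlights, one Jacobian factorization reused across several linear solves per iteration, can be illustrated with a generic frozen-Jacobian multi-step Newton scheme. This is a sketch of that structure only, not the paper's specific sixth-order formula.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_newton(F, J, x0, steps=3, tol=1e-12, max_iter=50):
    """Frozen-Jacobian multi-step Newton iteration: the Jacobian is factorized
    once per outer iteration (a single LU decomposition), and that factorization
    is reused for several inner corrector solves."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        lu_piv = lu_factor(J(x))           # one LU factorization per iteration
        y = x
        for _ in range(steps):             # several solves reuse the same LU
            y = y - lu_solve(lu_piv, F(y))
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Example system: x0^2 + x1^2 = 1, x0 - x1 = 0  ->  x = (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(multistep_newton(F, J, [1.0, 0.5]))
```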
DRL-Based Backbone SDN Control Methods in UAV-Assisted Networks for Computational Resource Efficiency
The limited coverage of mobile edge computing (MEC) necessitates exploring cooperation with unmanned aerial vehicles (UAVs) to leverage advanced features for future computation-intensive and mission-critical applications. Moreover, the workflow for task offloading in software-defined networking (SDN)-enabled 5G is an important issue to address in UAV-MEC networks. In this paper, deep reinforcement learning (DRL)-based SDN control methods for improving computational resource efficiency are proposed. The DRL-based SDN controller, termed DRL-SDNC, allocates computational resources, bandwidth, and storage based on task requirements, upper-bound tolerable delays, and network conditions, using the UAV system architecture for task exchange between MECs. DRL-SDNC configures rule installation based on state observations and agent evaluation indicators, such as network congestion, user equipment computational capabilities, and energy efficiency. This paper also proposes a deep network training architecture for DRL-SDNC, enabling interactive and autonomous policy enforcement. The agent learns from the UAV-MEC environment by gathering experience and updates its parameters using optimization methods. DRL-SDNC collaboratively adjusts hyperparameters and network architecture to enhance learning efficiency. Compared with baseline schemes, simulation results demonstrate the effectiveness of the proposed approach in optimizing resource efficiency and achieving satisfactory quality of service through efficient utilization of computing and communication resources in UAV-assisted networking environments.
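A minimal skeleton of the kind of observation an SDN-controller agent might see and the epsilon-greedy action selection commonly used during DRL training is sketched below. The state fields, action names, and Q-values are purely illustrative assumptions, not the paper's DRL-SDNC design.

```python
import random
from dataclasses import dataclass

@dataclass
class SdnState:
    """Observation an SDN-controller agent might see before placing a task
    (illustrative fields, not the paper's exact state vector)."""
    link_congestion: float      # 0..1 utilisation of the backbone link
    ue_compute: float           # normalised UE computational capability
    mec_cpu_free: float         # free CPU share at the candidate UAV-MEC node
    deadline_ms: float          # upper-bound tolerable delay of the task

ACTIONS = ["offload_to_uav_mec", "offload_to_ground_mec", "execute_locally"]

def epsilon_greedy(q_values, epsilon=0.1):
    """Standard epsilon-greedy action selection used while training a DRL agent."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

obs = SdnState(link_congestion=0.7, ue_compute=0.4, mec_cpu_free=0.6, deadline_ms=20.0)
# A made-up Q-vector for this state: the greedy choice offloads to the UAV MEC.
print(obs)
print(ACTIONS[epsilon_greedy([0.8, 0.3, 0.1], epsilon=0.0)])
```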
A State-of-the-Art Review on Machine Learning-Based Multiscale Modeling, Simulation, Homogenization and Design of Materials
Multiscale simulation and homogenization of materials have become major computational technologies and engineering tools in material modeling and material design. However, concurrent multiscale simulations require extensive computational resources, with CPU time growing exponentially as the spatial and temporal scales increase. In fact, with only a few exceptions, neither hierarchical nor concurrent multiscale modeling techniques have been adopted in the industrial sector, primarily because of their computational cost. Recently, rapid developments in artificial intelligence, together with fast growth in computational resources and data, have stimulated widespread adoption of machine learning-based methodologies to enhance the computational efficiency and accuracy of multiscale simulations and their applications. Even though a revolution propelled by artificial intelligence and machine learning is widely anticipated in computational materials and computational mechanics, machine learning-based multiscale modeling and simulation is still in its infancy. In this paper, we present a state-of-the-art review of machine learning-based multiscale modeling and simulation of materials and its applications in composite homogenization, defect mechanics modeling, and material design, to provide an overview as well as perspectives on these innovative techniques, which may soon replace conventional multiscale modeling methods.
Scalable watermarking for identifying large language model outputs
Large language models (LLMs) have enabled the generation of high-quality synthetic text, often indistinguishable from human-written content, at a scale that can markedly affect the nature of the information ecosystem [1-3]. Watermarking can help identify synthetic text and limit accidental or deliberate misuse [4], but has not been adopted in production systems owing to stringent quality, detectability and computational efficiency requirements. Here we describe SynthID-Text, a production-ready text watermarking scheme that preserves text quality and enables high detection accuracy, with minimal latency overhead. SynthID-Text does not affect LLM training and modifies only the sampling procedure; watermark detection is computationally efficient, without using the underlying LLM. To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems [5]. Evaluations across multiple LLMs empirically show that SynthID-Text provides improved detectability over comparable methods, and standard benchmarks and human side-by-side ratings indicate no change in LLM capabilities. To demonstrate the feasibility of watermarking in large-scale-production systems, we conducted a live experiment that assessed feedback from nearly 20 million Gemini [6] responses, again confirming the preservation of text quality. We hope that the availability of SynthID-Text [7] will facilitate further development of watermarking and responsible use of LLM systems. A scheme for watermarking the text generated by large language models shows high text quality preservation and detection accuracy and low latency, and is feasible in large-scale-production settings.
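The two properties the abstract stresses, modifying only the sampling step and detecting without running the LLM, can be illustrated with a generic keyed green-list watermark (in the style of Kirchenbauer et al.). This is explicitly not SynthID-Text's tournament sampling; the key, bias, vocabulary size, and token IDs below are made-up illustration values.

```python
import hashlib
import math
import random

def greenlist(prev_token, vocab_size, key, fraction=0.5):
    """Derive a keyed pseudo-random 'green' subset of the vocabulary from the
    previous token (generic green-list watermark, for illustration only)."""
    seed = int.from_bytes(hashlib.sha256(f"{key}:{prev_token}".encode()).digest()[:8], "big")
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(fraction * vocab_size)])

def watermarked_sample(logits, prev_token, key, bias=2.0):
    """Sampling-time tweak only: add a small bias to green tokens, then sample.
    The LLM itself is untouched, mirroring the 'modifies only the sampling
    procedure' property described in the abstract."""
    green = greenlist(prev_token, len(logits), key)
    boosted = [l + (bias if i in green else 0.0) for i, l in enumerate(logits)]
    m = max(boosted)
    probs = [math.exp(l - m) for l in boosted]
    r, acc = random.random() * sum(probs), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Generate one token with the biased sampler (uniform toy logits).
nxt = watermarked_sample([0.0] * 100, prev_token=3, key="demo")

# Detection needs only the token sequence and the key: count green-token hits.
tokens = [3, 17, 42, 8]
hits = sum(t in greenlist(p, 100, key="demo") for p, t in zip(tokens, tokens[1:]))
print(f"sampled token {nxt}; {hits}/{len(tokens) - 1} observed tokens are green")
```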
Modelling and analysis of the grid connected photovoltaic system
Designing a photovoltaic installation is a crucial step in preparing a photovoltaic-based energy system. Computational modeling of installations helps ensure that environmental, physical, and economic requirements are met, and modeling of existing installations can help analyze possible improvements to the system. Most detailed models predict the characteristics of an individual module and validate them over a short time horizon of a few days; full grid-tied rooftop systems are often simulated for only a few representative days, and year-long simulations of complete electrical models remain rare. This paper addresses the development of a computationally efficient, averaged-inverter model for an existing 24.3 kW rooftop system. Hourly 2022 weather data drive the simulation, producing season-resolved energy yields, and the simulation results are compared with experimental data. By showing that a computationally efficient, averaged-inverter model can capture the seasonal and daily dynamics of a prosumer-scale array, the insights derived from this study can enhance future photovoltaic system designs.
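A minimal sketch of the kind of hourly, averaged-inverter yield calculation described above is given below. The 24.3 kW rating comes from the abstract; the temperature coefficient, constant inverter efficiency, AC clipping limit, and the example weather day are illustrative assumptions, not the paper's model or data.

```python
import numpy as np

def hourly_ac_yield(ghi_wm2, temp_c, p_dc_stc_kw=24.3,
                    gamma=-0.004, inv_eff=0.96, p_ac_max_kw=22.0):
    """Hour-by-hour AC energy (kWh) of a rooftop array using an averaged
    (constant-efficiency, clipped) inverter model driven by irradiance and
    ambient temperature."""
    ghi = np.asarray(ghi_wm2, dtype=float)
    temp = np.asarray(temp_c, dtype=float)
    cell_temp = temp + 0.03 * ghi                       # crude NOCT-style estimate
    p_dc = p_dc_stc_kw * (ghi / 1000.0) * (1.0 + gamma * (cell_temp - 25.0))
    p_ac = np.minimum(inv_eff * p_dc, p_ac_max_kw)      # averaged inverter + clipping
    return p_ac.sum()                                   # hourly samples -> kWh

# One clear summer day, hourly irradiance (W/m^2) and ambient temperature (C).
ghi = [0, 0, 0, 0, 50, 200, 450, 700, 850, 950, 1000, 980,
       900, 780, 600, 400, 200, 60, 0, 0, 0, 0, 0, 0]
temp = [18] * 6 + [22] * 6 + [28] * 6 + [21] * 6
print(f"daily AC yield: {hourly_ac_yield(ghi, temp):.1f} kWh")
```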
A new generation 99 line Matlab code for compliance topology optimization and its extension to 3D
Compact and efficient Matlab implementations of compliance topology optimization (TO) for 2D and 3D continua are given, consisting of 99 and 125 lines respectively. On discretizations ranging from 3·10^4 to 4.8·10^5 elements, the 2D version, named top99neo, shows speedups from 2.55 to 5.5 times compared to the well-known top88 code of Andreassen et al. (Struct Multidiscip Optim 43(1):1–16, 2011). The 3D version, named top3D125, is the most compact and efficient Matlab implementation for 3D TO to date, showing a speedup of 1.9 times compared to the code of Amir et al. (Struct Multidiscip Optim 49(5):815–829, 2014), on a discretization with 2.2·10^5 elements. For both codes, improvements are due to much more efficient procedures for the assembly and implementation of filters and shortcuts in the design update step. The use of an acceleration strategy, yielding major cuts in the overall computational time, is also discussed, stressing its easy integration within the basic codes.
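The design update step mentioned in this abstract is, in the top88/top99neo family of codes, an optimality-criteria update with a bisection on the volume constraint. A Python rendering of that update is sketched below for illustration; the toy sensitivities in the demo call are made up, and the original codes are in Matlab.

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2):
    """Optimality-criteria design update in the spirit of compliance TO codes:
    `x` are element densities, `dc`/`dv` the compliance and volume sensitivities,
    and the Lagrange multiplier is found by bisection on the volume constraint."""
    l1, l2 = 0.0, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        x_new = np.clip(x * np.sqrt(np.maximum(-dc, 0.0) / (dv * lmid)),
                        np.maximum(x - move, 0.0),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return x_new

# Toy call with random sensitivities on a 10-element design.
rng = np.random.default_rng(0)
x = np.full(10, 0.5)
print(oc_update(x, dc=-rng.random(10), dv=np.ones(10), volfrac=0.5))
```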