3,704 result(s) for "hypercubes"
Topological Indices, Graph Spectra, Entropies, Laplacians, and Matching Polynomials of n-Dimensional Hypercubes
We obtain a large number of degree- and distance-based topological indices, graph and Laplacian spectra and the corresponding polynomials, entropies, and matching polynomials of n-dimensional hypercubes through the use of Hadamard symmetry and recursive dynamic computational techniques. Moreover, computations are used to provide independent numerical values for the topological indices of the 11- and 12-cubes. We invoke symmetry-based recursive Hadamard transforms to obtain the graph and Laplacian spectra of n-dimensional hypercubes, with numerical results computed for hypercubes of up to 23 dimensions. The symmetries of these hypercubes constitute the hyperoctahedral wreath product groups, which also pave the way for elegant symmetry-based computations. These results are used to independently validate the exact analytical expressions we have obtained for the topological indices as well as the graph and Laplacian spectra and their polynomials. We invoke a robust dynamic programming technique to handle the computationally intensive generation of matching polynomials of hypercubes and compute all matching polynomials up to the 6-cube. The distance degree sequence vectors have been obtained numerically for up to 108-dimensional cubes, and their frequencies are found to follow binomial distributions akin to the spectra of n-cubes.
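For context (a standard graph-theoretic fact, not a claim taken from the paper): the adjacency eigenvalues of Q_n are n − 2k with multiplicity C(n, k) for k = 0, ..., n, which is what makes such large dimensions tractable. Below is a minimal Python sketch, with illustrative function names, that builds Q_n by recursive doubling and checks the closed form numerically:

```python
import numpy as np
from math import comb

def hypercube_adjacency(n):
    """Adjacency matrix of Q_n, built recursively: Q_n is two copies
    of Q_{n-1} joined by a perfect matching."""
    A = np.zeros((1, 1))
    for _ in range(n):
        m = np.eye(A.shape[0])
        A = np.block([[A, m], [m, A]])
    return A

def hypercube_spectrum(n):
    """Closed-form adjacency spectrum of Q_n: eigenvalue n - 2k with
    multiplicity C(n, k) for k = 0, ..., n."""
    return {n - 2 * k: comb(n, k) for k in range(n + 1)}

n = 4
numeric = np.round(np.linalg.eigvalsh(hypercube_adjacency(n))).astype(int)
print(sorted(numeric.tolist()))  # eigenvalues with repetition
print(hypercube_spectrum(n))     # {4: 1, 2: 4, 0: 6, -2: 4, -4: 1}
```

The Laplacian spectrum follows immediately, since L = nI − A for the n-regular graph Q_n: eigenvalue 2k with the same multiplicities.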
On Independent [1, 2]-sets in Hypercubes
Given a simple graph G, a subset S ⊆ V(G) is an independent [1, 2]-set if no two vertices in S are adjacent and, for every vertex v ∈ V(G) \ S, 1 ≤ |N(v) ∩ S| ≤ 2; that is, every vertex v ∈ V(G) \ S is adjacent to at least one but no more than two vertices in S. This paper investigates the existence of independent [1, 2]-sets in hypercubes. We show that if n = 2^k − 1 for some positive integer k, then the hypercubes Q_n and Q_{n+1} have an independent [1, 2]-set. Furthermore, for 1 ≤ n ≤ 4 we find bounds for the minimum and maximum cardinality of an independent [1, 2]-set of the hypercube Q_n, while for n = 5, 6 we obtain the maximum cardinality of an independent [1, 2]-set of Q_n.
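To make the definition concrete, here is a small brute-force checker (an illustrative sketch, not the authors' code). Vertices of Q_n are encoded as n-bit integers, so two vertices are adjacent exactly when they differ in one bit; the set {0, 7} in Q_3, i.e. {000, 111}, the length-3 Hamming code, passes:

```python
def neighbors(v, n):
    """Neighbors of vertex v in Q_n: flip each of the n bits."""
    return [v ^ (1 << i) for i in range(n)]

def is_independent_12_set(S, n):
    """Check both defining conditions of an independent [1, 2]-set."""
    S = set(S)
    # Independence: no vertex of S is adjacent to another vertex of S.
    if any(w in S for v in S for w in neighbors(v, n)):
        return False
    # Every vertex outside S must have 1 or 2 neighbors inside S.
    return all(
        1 <= sum(w in S for w in neighbors(v, n)) <= 2
        for v in range(2 ** n) if v not in S
    )

print(is_independent_12_set({0, 7}, 3))  # True: {000, 111} works in Q_3
```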
The signed (|G| –1)-subdomination number of balanced hypercubes
Let G = BH_n be an n-dimensional balanced hypercube. As an interconnection network topology, balanced hypercubes are widely used in many areas. The signed k-subdomination number of a graph is an important parameter in domination theory. In this paper, based on the properties of balanced hypercubes, the signed (|G| − 1)-subdomination number of the balanced hypercube for n = 2 is determined by case analysis and exhaustive search.
High-precision deformation prediction for compliant parts in the ship sub-assembly process
In the ship sub-assembly process, large compliant parts are common and generally thin. These parts deform easily under gravity, which greatly affects the accuracy of the sub-assembly process. It is therefore important to predict the deformation of a compliant part under a given fixture layout in advance. In current practice, post-compensation methods are usually used to correct the deformation of the compliant part, which is inefficient and costly. In this paper, a transformer-based surrogate model with two-stage Latin hypercube sampling (TSM-TSS) is established. The surrogate model considers each fixture position and its deviation to predict the deformation of the entire compliant part. A case study shows that, compared with BPNN and Kriging, TSM-TSS predicts the deformation of compliant parts with an error of 0.061 mm. With TSM-TSS, the deformation of a compliant part under gravity can be predicted accurately and the efficiency of shipbuilding can be improved.
Characteristic polynomials, spectral-based Riemann-Zeta functions and entropy indices of n-dimensional hypercubes
We obtain the characteristic polynomials and a number of spectral-based indices, such as Riemann-Zeta functional indices and spectral entropies, of n-dimensional hypercubes using recursive Hadamard transforms. Numerical results are computed for hypercubes of up to 23 dimensions. While the graph energies exhibit a J-curve as a function of the dimension of the n-cubes, the spectrum-based entropies depend linearly on the dimension. We also provide structural interpretations for the coefficients of the characteristic polynomials of n-cubes and obtain expressions for the integer sequences formed by the spectral-based Riemann-Zeta functions.
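For cross-checking results like these, the characteristic polynomial of Q_n factors over its integer spectrum (a standard fact, stated here for context, with Q_3 as a worked example):

```latex
\phi_{Q_n}(x) = \prod_{k=0}^{n} \bigl(x - (n - 2k)\bigr)^{\binom{n}{k}},
\qquad
\phi_{Q_3}(x) = (x - 3)\,(x - 1)^{3}\,(x + 1)^{3}\,(x + 3).
```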
How to treat uncertainties in life cycle assessment studies?
Purpose: The use of life cycle assessment (LCA) as a decision support tool can be hampered by the numerous uncertainties embedded in the calculation. The treatment of uncertainty is necessary to increase the reliability and credibility of LCA results. The objective is to provide an overview of the methods to identify, characterize, propagate (uncertainty analysis), understand the effects of (sensitivity analysis), and communicate uncertainty, in order to propose recommendations to a broad public of LCA practitioners.

Methods: This work was carried out via a literature review and an analysis of LCA tool functionalities. In order to facilitate the identification of uncertainty, its location within an LCA model was distinguished between quantity (any numerical data), model structure (relationships structure), and context (criteria chosen within the goal and scope of the study). The methods for uncertainty characterization, uncertainty analysis, and sensitivity analysis were classified according to the information provided, their implementation in LCA software, the time and effort required to apply them, and their reliability and validity. This review led to the definition of recommendations on three levels: basic (low effort with LCA software), intermediate (significant effort with LCA software), and advanced (significant effort with non-LCA software).

Results and discussion: For the basic recommendations, minimum and maximum values (quantity uncertainty) and alternative scenarios (model structure/context uncertainty) are defined for critical elements in order to estimate the range of results. Result sensitivity is analyzed via one-at-a-time variations (with realistic ranges of quantities) and scenario analyses. Uncertainty should be discussed at least qualitatively in a dedicated paragraph. For the intermediate level, the characterization can be refined with probability distributions and an expert review for scenario definition. Uncertainty analysis can then be performed with the Monte Carlo method for the different scenarios. Quantitative information should appear in inventory tables and result figures. Finally, advanced practitioners can screen uncertainty sources more exhaustively, include correlations, estimate model error with validation data, and perform Latin hypercube sampling and global sensitivity analysis.

Conclusions: Through this pedagogic review of the methods and practical recommendations, the authors aim to increase the knowledge of LCA practitioners related to uncertainty and facilitate the application of treatment techniques. To continue in this direction, further research questions should be investigated (e.g., the implementation of fuzzy logic and model uncertainty characterization), and the developers of databases, LCIA methods, and software tools should invest effort in better implementing and treating uncertainty in LCA.
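As a concrete illustration of the sampling step in the advanced recommendations, here is a minimal Latin hypercube sampler with a toy uncertainty propagation; the two-parameter model, its ranges, and all names are invented for illustration:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin hypercube sample on [0, 1]^d: each axis is cut into
    n_samples equal strata, one point per stratum, and the strata
    are shuffled independently per axis."""
    rng = np.random.default_rng(seed)
    offsets = rng.random((n_samples, n_dims))  # position inside each stratum
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (strata + offsets) / n_samples

# Toy propagation for an LCA-style quantity impact = a * b
# (the model and its ranges are invented for illustration).
pts = latin_hypercube(1000, 2, seed=0)
a = 2.0 + pts[:, 0]              # a ~ Uniform(2, 3)
b = 10.0 ** (pts[:, 1] - 0.5)    # b log-uniform across one decade
impact = a * b
print(impact.mean(), np.percentile(impact, [2.5, 97.5]))
```

Compared with plain Monte Carlo, the stratification guarantees that every marginal range is covered even at small sample sizes, which is why the review reserves it for the more computation-heavy advanced level.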
Research on the H point-anchorage angle of high-angle seat based on BPNN-GA
In this paper, the changes of the safety belt anchorage, the H point, and the H point-anchorage angle (α1 and α2) with the seat back angle and seat pan angle of a high-angle seat are studied. First, the structure of the high-angle seat was analyzed, and the research range of the seat back angle and seat pan angle was determined with a three-dimensional H-point test device. Then, Latin hypercube sampling and back-propagation neural networks were combined to study the trends of α1 and α2. Finally, a genetic algorithm and a back-propagation neural network were combined to build a model that identifies the minimum H point-anchorage angle. The results show that α1 and α2 tend to increase as the seat pan angle increases. The proposed model has a recognition error of 4.3% and can identify the seat position with the minimum H point-anchorage angle.
Logical quantum processor based on reconfigurable atom arrays
Suppressing errors is the central challenge for useful quantum computing [1], requiring quantum error correction (QEC) [2-6] for large-scale processing. However, the overhead in realizing error-corrected 'logical' qubits, in which information is encoded across many physical qubits for redundancy [2-4], poses substantial challenges to large-scale logical quantum computing. Here we report the realization of a programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits. Using logical-level control and a zoned architecture in reconfigurable neutral-atom arrays [7], our system combines high two-qubit gate fidelities [8], arbitrary connectivity [7,9], and fully programmable single-qubit rotations and mid-circuit readout [10-15]. Operating this logical processor with various types of encoding, we demonstrate improvement of a two-qubit logic gate by scaling the surface-code [6] distance from d = 3 to d = 7, preparation of colour-code qubits with break-even fidelities [5], fault-tolerant creation of logical Greenberger-Horne-Zeilinger (GHZ) states and feedforward entanglement teleportation, as well as operation of 40 colour-code qubits. Finally, using 3D [[8,3,2]] code blocks [16,17], we realize computationally complex sampling circuits [18] with up to 48 logical qubits entangled with hypercube connectivity [19], with 228 logical two-qubit gates and 48 logical CCZ gates [20]. We find that this logical encoding substantially improves algorithmic performance with error detection, outperforming physical-qubit fidelities at both cross-entropy benchmarking and quantum simulations of fast scrambling [21,22]. These results herald the advent of early error-corrected quantum computation and chart a path towards large-scale logical processors.
Path Planning for Unmanned Systems Based on Integrated Sampling Strategies and Improved PSO
B-splines and the Particle Swarm Optimization algorithm are integrated for unmanned-system path planning in mountainous terrain. In the early stages of the search, the traditional Particle Swarm Optimization (PSO) algorithm converges rapidly; however, as the process continues, it often gets stuck in local optima. To address this limitation, this research proposes an improved PSO algorithm (SIPSO) that combines an Immune Algorithm (IMA) with Latin hypercube sampling. The enhancement strengthens the optimization capability of the particles at different phases of the search through an evaluation mechanism and dynamic weight adjustments. Experimental results demonstrate that, on optimization problems in complex mountainous terrain, SIPSO significantly outperforms conventional PSO and a Genetic Algorithm (GA) in both iteration count and computational efficiency.
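Since the paper's SIPSO code is not reproduced here, the sketch below shows only the generic ingredients it builds on: a plain PSO whose swarm is initialized by Latin hypercube sampling, run on a standard multimodal test function. The immune-algorithm step, evaluation mechanism, dynamic weights, and B-spline path encoding are omitted, and all names are illustrative:

```python
import numpy as np

def lhs(n, d, rng):
    """Latin hypercube sample on the unit cube: one point per stratum."""
    return (np.column_stack([rng.permutation(n) for _ in range(d)])
            + rng.random((n, d))) / n

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO with an LHS-initialized swarm and inertia weight w."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = lo + lhs(n_particles, lo.size, rng) * (hi - lo)  # spread-out start
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Multimodal 2-D test function standing in for the terrain cost.
def rastrigin(z):
    return 10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))

print(pso(rastrigin, [(-5.12, 5.12), (-5.12, 5.12)]))
```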
Lessons Learned from Quantitative Dynamical Modeling in Systems Biology
Due to the high complexity of biological data, it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A systems biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and an increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare, and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows the quality of experimental data to be determined in an efficient, objective, and automated manner. Using this approach, data generated by different measurement techniques, and even single replicates, can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations, in combination with a multi-start strategy based on Latin hypercube sampling, outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open-source software package that implements the algorithms and examples compared here.
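A minimal sketch of the strategy the study found fastest, namely a deterministic gradient-based local optimizer restarted from Latin-hypercube-spread initial guesses, here using SciPy (scipy.stats.qmc requires SciPy >= 1.7). The exponential-decay model and all names are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def multistart_fit(objective, lo, hi, n_starts=50, seed=0):
    """Run L-BFGS-B from Latin-hypercube-spread starting points and
    keep the best local optimum."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    unit = qmc.LatinHypercube(d=lo.size, seed=seed).random(n_starts)
    starts = qmc.scale(unit, lo, hi)
    fits = [minimize(objective, x0, jac=True, method="L-BFGS-B",
                     bounds=list(zip(lo, hi))) for x0 in starts]
    return min(fits, key=lambda r: r.fun)

# Toy problem: recover the decay rate k in y = exp(-k t) by least squares.
t = np.linspace(0.0, 5.0, 20)
y = np.exp(-1.3 * t)

def objective(p):
    """Sum of squared residuals plus its analytic gradient, playing
    the role of the sensitivity-equation derivatives."""
    model = np.exp(-p[0] * t)
    r = model - y
    return np.sum(r ** 2), np.array([np.sum(-2.0 * r * t * model)])

print(multistart_fit(objective, lo=[0.0], hi=[10.0]).x)  # ~ [1.3]
```

The LHS starts cover the parameter box evenly, so at least one start typically lands in the basin of the global optimum, which is the point of the multi-start design.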