45 result(s) for "Ramachandra, Nesar"
Global field reconstruction from sparse sensors with Voronoi tessellation-assisted deep learning
Achieving accurate and robust global situational awareness of a complex time-evolving field from a limited number of sensors has been a long-standing challenge. This reconstruction problem is especially difficult when sensors are sparsely positioned in a seemingly random or unorganized manner, which is often encountered in a range of scientific and engineering problems. Moreover, these sensors could be in motion and could become online or offline over time. The key leverage in addressing this scientific issue is the wealth of data accumulated from the sensors. As a solution to this problem, we propose a data-driven spatial field recovery technique founded on a structured grid-based deep-learning approach for arbitrarily positioned sensors of any number. It should be noted that naive use of machine learning becomes prohibitively expensive for global field reconstruction and is furthermore not adaptable to an arbitrary number of sensors. In this work, we consider the use of Voronoi tessellation to obtain a structured-grid representation from sensor locations, enabling the computationally tractable use of convolutional neural networks. One of the central features of our method is its compatibility with deep learning-based super-resolution reconstruction techniques for structured sensor data that are established for image processing. The proposed reconstruction technique is demonstrated for unsteady wake flow, geophysical data and three-dimensional turbulence. The current framework is able to handle an arbitrary number of moving sensors and thereby overcomes a major limitation with existing reconstruction methods. Our technique opens a new pathway toward the practical use of neural networks for real-time global field estimation.
Complex physical processes such as flow fields can be predicted using deep learning methods if good-quality sensor data are available, but sparsely placed sensors, and sensors that change their position, present a problem. A new approach from Kai Fukami and colleagues based on Voronoi tessellation now makes it possible to use data from an arbitrary number of moving sensors to reconstruct a global field.
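The Voronoi-tessellation encoding described in the abstract can be sketched in a few lines: scattered sensor readings are projected onto a structured grid by assigning every grid cell the value of its nearest sensor, yielding an image that a convolutional network can consume. This is a minimal illustration, not the authors' implementation; the grid size, sensor count, and unit-square domain are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_encode(sensor_xy, sensor_values, grid_shape=(64, 64)):
    """Map scattered sensors onto a structured grid via nearest-sensor lookup."""
    ny, nx = grid_shape
    ys, xs = np.meshgrid(np.linspace(0, 1, ny), np.linspace(0, 1, nx), indexing="ij")
    grid_points = np.column_stack([xs.ravel(), ys.ravel()])
    # Each grid point inherits the value of the sensor whose Voronoi cell contains it.
    _, nearest = cKDTree(sensor_xy).query(grid_points)
    field = sensor_values[nearest].reshape(ny, nx)
    # In practice a binary mask of exact sensor positions is stacked as a second channel.
    return field

rng = np.random.default_rng(0)
xy = rng.random((8, 2))        # 8 arbitrarily placed sensors in the unit square
vals = np.sin(4 * xy[:, 0])    # toy measurements
img = voronoi_encode(xy, vals)
print(img.shape)
```

Because the encoding works for any number of sensors at any positions, the same trained CNN can be reused as sensors move, appear, or drop offline.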
Achieving GPT-4o level performance in astronomy with a specialized 8B-parameter large language model
AstroSage-Llama-3.1-8B is a domain-specialized natural-language AI assistant tailored for research in astronomy, astrophysics, cosmology, and astronomical instrumentation. Trained on the complete collection of astronomy-related arXiv papers from 2007 to 2024 along with millions of synthetically-generated question-answer pairs and other astronomical literature, AstroSage-Llama-3.1-8B demonstrates remarkable proficiency on a wide range of questions. AstroSage-Llama-3.1-8B scores 80.9% on the AstroMLab-1 benchmark, greatly outperforming all models—proprietary and open-weight—in the 8-billion parameter class, and performing on par with GPT-4o. This achievement demonstrates the potential of domain specialization in AI, suggesting that focused training can yield capabilities exceeding those of much larger, general-purpose models. AstroSage-Llama-3.1-8B is freely available, enabling widespread access to advanced AI capabilities for astronomical education and research.
Topology, Geometry and Morphology of the Dark Matter Web
The spatial distribution of dark matter displays a variety of intricate three-dimensional structures on the largest scales in the Universe, notably the massive haloes, long tubular filaments, flattened walls and the vast under-dense voids. Galaxies embedded in the dark matter structures have illuminated the rich geometry of these structures, collectively known as the cosmic web. Cosmological N-body simulations are indispensable tools for understanding the evolution of the dark matter web. Recent developments in the numerical analysis of these simulations have hinted towards incorporating the dynamical information of gravitational clustering of collisionless dark matter. This is inferred from a six-dimensional Lagrangian sub-manifold comprising the initial and final coordinates of the dark matter particles. The velocity multistream field derived from this sub-manifold sheds new light on the nature of gravitational collapse. Geometrical, topological, morphological and heuristic diagnostic tools used in this novel parameter space reveal features of the dark matter distribution. For instance, a single void structure not only percolates the multistream field in all the directions, but also occupies over 99 per cent of all the single-streaming regions. On the other hand, connected filaments display a rapid topological transition to isolated islands at high multistream values. Hessian analysis delineates structures with different shapes: tubular, sheet-like, or globular, enabling detection of the dark matter haloes without ad hoc parameters related to matter density or distance field.
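The Hessian shape diagnostic mentioned at the end of the abstract can be sketched as follows: at each point one counts the eigenvalues of the local Hessian that fall below a negative threshold, with three, two, or one such eigenvalues indicating a globular, tubular, or sheet-like structure respectively. The threshold value and sign convention here are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def classify_shape(hessian, thresh=-0.1):
    """Classify a 3x3 local Hessian by its number of strongly negative eigenvalues."""
    eigvals = np.linalg.eigvalsh(hessian)           # sorted, real (symmetric input)
    n_collapsed = int(np.sum(eigvals < thresh))     # directions of collapse
    return {3: "globular", 2: "tubular", 1: "sheet", 0: "none"}[n_collapsed]

# A toy diagonal Hessian with all three curvatures negative is globular (halo-like).
print(classify_shape(np.diag([-1.0, -1.0, -1.0])))
```

The appeal noted in the abstract is that this classification needs no ad hoc density or distance thresholds beyond the eigenvalue cut itself.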
Differentiable Predictions for Large Scale Structure with SHAMNet
In simulation-based models of the galaxy-halo connection, theoretical predictions for galaxy clustering and lensing are typically made based on Monte Carlo realizations of a mock universe. In this paper, we use Subhalo Abundance Matching (SHAM) as a toy model to introduce an alternative to stochastic predictions based on mock population, demonstrating how to make simulation-based predictions for clustering and lensing that are both exact and differentiable with respect to the parameters of the model. Conventional implementations of SHAM are based on iterative algorithms such as Richardson-Lucy deconvolution; here we use the JAX library for automatic differentiation to train SHAMNet, a neural network that accurately approximates the stellar-to-halo mass relation (SMHM) defined by abundance matching. In our approach to making differentiable predictions for large scale structure, we map parameterized PDFs onto each simulated halo, and calculate gradients of summary statistics of the galaxy distribution by using autodiff to propagate the gradients of the SMHM through the statistical estimators used to measure one- and two-point functions. Our techniques are quite general, and we conclude with an overview of how they can be applied in tandem with more complex, higher-dimensional models, creating the capability to make differentiable predictions for the multi-wavelength universe of galaxies.
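The core idea of the abstract, differentiating a summary statistic of a simulated galaxy population with respect to model parameters via autodiff, can be sketched with a toy stellar-to-halo mass relation standing in for SHAMNet. The linear SMHM form and parameter names below are placeholders for illustration, not the paper's trained network.

```python
import jax
import jax.numpy as jnp

def smhm(params, log_mhalo):
    """Toy SMHM: log10 M* = a * log10 Mh + b (SHAMNet replaces this with a neural net)."""
    a, b = params
    return a * log_mhalo + b

def summary_stat(params, log_mhalo):
    """A differentiable summary statistic: mean stellar mass of the halo sample."""
    log_mstar = smhm(params, log_mhalo)
    return jnp.mean(10.0 ** log_mstar)

halos = jnp.linspace(11.0, 13.0, 100)   # toy halo catalog (log10 masses)
params = jnp.array([0.5, 4.0])
grad_fn = jax.grad(summary_stat)        # gradients propagate through the population
grads = grad_fn(params, halos)
print(grads.shape)
```

Replacing the mean with clustering or lensing estimators, as the paper does, keeps the same pattern: autodiff carries the SMHM gradients through whatever statistic sits on top.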
Neural Network Based Point Spread Function Deconvolution For Astronomical Applications
Optical astronomical images are strongly affected by the point spread function (PSF) of the optical system and the atmosphere (seeing), which blurs the observed image. The amount of blurring depends both on the observed band and on the atmospheric conditions during observation. A typical astronomical image will likely have a unique PSF that is non-circular and different in different bands. At the same time, observations of known stars also give us an accurate determination of this PSF. Therefore, any serious candidate for production analysis of astronomical images must take the known PSF into account during the image analysis. So far, the majority of applications of neural networks (NN) to astronomical image analysis have ignored this problem by assuming a fixed PSF in training and validation. We present a neural-network deconvolution algorithm built on the Deep Wiener Deconvolution Network (DWDN). This algorithm belongs to the class of non-blind deconvolution algorithms, since it assumes the PSF shape is known. We study the performance of different versions of this algorithm under realistic observational conditions in terms of the recovery of the most relevant astronomical quantities such as colors, ellipticities and orientations. We also investigate custom loss functions designed to optimize the recovery of astronomical quantities, with mixed results.
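The classical baseline that DWDN builds on is the textbook Wiener filter: given the known PSF, division in Fourier space with a noise regularizer recovers the unblurred image. The sketch below is that baseline only (not the deep network), and the noise-to-signal ratio `nsr`, image size, and Gaussian shapes are illustrative assumptions.

```python
import numpy as np

def gaussian2d(shape, sigma):
    """Centered, normalized 2-D Gaussian, used here for both source and PSF."""
    ny, nx = shape
    y = np.arange(ny) - ny // 2
    x = np.arange(nx) - nx // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Non-blind Wiener deconvolution with a known, centered PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf))            # PSF transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)           # regularized inverse filter
    return np.real(np.fft.ifft2(W * G))

shape = (32, 32)
truth = gaussian2d(shape, 3.0)     # a smooth toy "galaxy"
psf = gaussian2d(shape, 1.5)       # the known PSF
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
recovered = wiener_deconvolve(blurred, psf)
```

The deep version keeps this known-PSF Fourier step but learns the denoising and refinement that the fixed `nsr` term only crudely approximates.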
AstroMLab 3: Achieving GPT-4o Level Performance in Astronomy with a Specialized 8B-Parameter Large Language Model
AstroSage-Llama-3.1-8B is a domain-specialized natural-language AI assistant tailored for research in astronomy, astrophysics, cosmology, and astronomical instrumentation. Trained on the complete collection of astronomy-related arXiv papers from 2007 to 2024 along with millions of synthetically-generated question-answer pairs and other astronomical literature, AstroSage-Llama-3.1-8B demonstrates remarkable proficiency on a wide range of questions. AstroSage-Llama-3.1-8B scores 80.9% on the AstroMLab-1 benchmark, greatly outperforming all models -- proprietary and open-weight -- in the 8-billion parameter class, and performing on par with GPT-4o. This achievement demonstrates the potential of domain specialization in AI, suggesting that focused training can yield capabilities exceeding those of much larger, general-purpose models. AstroSage-Llama-3.1-8B is freely available, enabling widespread access to advanced AI capabilities for astronomical education and research.
Benchmarking AI-evolved cosmological structure formation
The potential of deep learning-based image-to-image translations has recently attracted significant attention. One possible application of such a framework is as a fast, approximate alternative to cosmological simulations, which would be particularly useful in various contexts, including covariance studies, investigations of systematics, and cosmological parameter inference. To investigate different aspects of learning-based cosmological mappings, we choose two approaches for generating suitable cosmological matter fields as datasets: a simple analytical prescription provided by the Zel'dovich approximation, and a numerical N-body method using the Particle-Mesh approach. The evolution of structure formation is modeled using U-Net, a widely employed convolutional image translation framework. Because of the lack of a controlled methodology, validation of these learned mappings requires multiple benchmarks beyond simple visual comparisons and summary statistics. A comprehensive list of metrics is considered, including higher-order correlation functions, conservation laws, topological indicators, and statistical independence of density fields. We find that the U-Net approach performs well only for some of these physical metrics, and accuracy is worse at increasingly smaller scales, where the dynamic range in density is large. By introducing a custom density-weighted loss function during training, we demonstrate a significant improvement in the U-Net results at smaller scales. This study provides an example of how a family of physically motivated benchmarks can, in turn, be used to fine-tune optimization schemes -- such as the density-weighted loss used here -- to significantly enhance the accuracy of scientific machine learning approaches by focusing attention on relevant features.
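The density-weighted loss credited above with improving small-scale accuracy can be sketched as an MSE whose per-cell weights grow with the target density, so errors in high-density regions dominate training. The specific weighting form `(1 + density)^alpha` is an assumption for illustration, not necessarily the paper's exact choice.

```python
import numpy as np

def density_weighted_mse(pred, target, alpha=1.0):
    """MSE with per-cell weights that grow with the (non-negative) target density."""
    weights = (1.0 + np.clip(target, 0.0, None)) ** alpha
    return float(np.mean(weights * (pred - target) ** 2))

# Toy check: the same-size error costs more in a dense cell than in a void.
target = np.zeros((8, 8)); target[4, 4] = 10.0
dense_err = target.copy(); dense_err[4, 4] += 1.0   # unit error at the dense cell
void_err = target.copy(); void_err[0, 0] += 1.0     # unit error in an empty cell
```

Under a plain MSE both predictions score identically; the weighting breaks that tie in favor of the high-density, small-scale features where the U-Net was weakest.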
Global field reconstruction from sparse sensors with Voronoi tessellation-assisted deep learning
Achieving accurate and robust global situational awareness of a complex time-evolving field from a limited number of sensors has been a longstanding challenge. This reconstruction problem is especially difficult when sensors are sparsely positioned in a seemingly random or unorganized manner, which is often encountered in a range of scientific and engineering problems. Moreover, these sensors can be in motion and can become online or offline over time. The key leverage in addressing this scientific issue is the wealth of data accumulated from the sensors. As a solution to this problem, we propose a data-driven spatial field recovery technique founded on a structured grid-based deep-learning approach for arbitrarily positioned sensors of any number. It should be noted that the naïve use of machine learning becomes prohibitively expensive for global field reconstruction and is furthermore not adaptable to an arbitrary number of sensors. In the present work, we consider the use of Voronoi tessellation to obtain a structured-grid representation from sensor locations, enabling the computationally tractable use of convolutional neural networks. One of the central features of the present method is its compatibility with deep-learning-based super-resolution reconstruction techniques for structured sensor data that are established for image processing. The proposed reconstruction technique is demonstrated for unsteady wake flow, geophysical data, and three-dimensional turbulence. The current framework is able to handle an arbitrary number of moving sensors, and thereby overcomes a major limitation with existing reconstruction methods. The presented technique opens a new pathway towards the practical use of neural networks for real-time global field estimation.
Enhancing Interpretability in Generative Modeling: Statistically Disentangled Latent Spaces Guided by Generative Factors in Scientific Datasets
This study addresses the challenge of statistically extracting generative factors from complex, high-dimensional datasets in unsupervised or semi-supervised settings. We investigate encoder-decoder-based generative models for nonlinear dimensionality reduction, focusing on disentangling low-dimensional latent variables corresponding to independent physical factors. Introducing Aux-VAE, a novel architecture within the classical Variational Autoencoder framework, we achieve disentanglement with minimal modifications to the standard VAE loss function by leveraging prior statistical knowledge through auxiliary variables. These variables guide the shaping of the latent space by aligning latent factors with learned auxiliary variables. We validate the efficacy of Aux-VAE through comparative assessments on multiple datasets, including astronomical simulations.
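One plausible reading of the Aux-VAE objective described above is the standard VAE loss (reconstruction plus KL divergence) augmented with a small alignment penalty tying designated latent dimensions to the known auxiliary variables. The penalty form, the squared-error alignment, and the weight `gamma` are all assumptions for illustration; the paper's exact loss may differ.

```python
import numpy as np

def aux_vae_loss(recon, x, mu, logvar, z, aux, gamma=1.0):
    """Toy Aux-VAE objective; the first aux.shape[1] latents are aligned to aux."""
    recon_term = np.mean((recon - x) ** 2)                           # reconstruction
    kl_term = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))  # standard VAE KL
    align_term = np.mean((z[:, : aux.shape[1]] - aux) ** 2)          # latent <-> auxiliary
    return float(recon_term + kl_term + gamma * align_term)

# Toy batch: perfect reconstruction, standard-normal posterior, one auxiliary factor.
x = np.ones((4, 3)); recon = np.ones((4, 3))
mu = np.zeros((4, 2)); logvar = np.zeros((4, 2))
aux = np.ones((4, 1))
z_aligned = np.concatenate([aux, np.zeros((4, 1))], axis=1)
z_misaligned = np.concatenate([aux + 2.0, np.zeros((4, 1))], axis=1)
```

The point of the extra term is that disentanglement is steered by prior statistical knowledge rather than by heavier modifications to the VAE objective itself.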
Interpretable Uncertainty Quantification in AI for HEP
Estimating uncertainty is at the core of performing scientific measurements in HEP: a measurement is not useful without an estimate of its uncertainty. The goal of uncertainty quantification (UQ) is inextricably linked to the question, "how do we physically and statistically interpret these uncertainties?" The answer to this question depends not only on the computational task we aim to undertake, but also on the methods we use for that task. For artificial intelligence (AI) applications in HEP, there are several areas where interpretable methods for UQ are essential, including inference, simulation, and control/decision-making. There exist some methods for each of these areas, but they have not yet been demonstrated to be as trustworthy as more traditional approaches currently employed in physics (e.g., non-AI frequentist and Bayesian methods). Shedding light on the questions above requires additional understanding of the interplay of AI systems and uncertainty quantification. We briefly discuss the existing methods in each area and relate them to tasks across HEP. We then discuss recommendations for avenues to pursue to develop the necessary techniques for reliable widespread usage of AI with UQ over the next decade.