154 results for "LSST Dark Energy Science Collaboration"
Joint Modelling of Astrophysical Systematics for Cosmology with LSST Cosmic Shear
We present a novel framework for jointly modelling the weak lensing source galaxy redshift distribution and the intrinsic alignment of galaxies via a shared luminosity function (LF). Considering this framework within the context of the Rubin Observatory's Legacy Survey of Space and Time (LSST) Year 1 and Year 10 cosmic shear analyses, we first demonstrate the substantial impact of the LF on both source galaxy redshift distributions and the intrinsic alignment contamination. We establish how the individual parameters of a Schechter LF model impact the redshift distribution of a magnitude-limited sample, and we demonstrate the effect of marginalising over the LF parameters as incorporated in the intrinsic alignment modelling of a standard cosmic shear analysis set-up. We forecast the impact of our joint modelling framework on cosmological parameter constraints. Our preliminary results are promising, indicating that this framework can yield cosmological constraints consistent with those expected from standard analyses, enhanced by the flexibility of not fixing LF parameters. We plan to further validate these findings with comprehensive Markov chain Monte Carlo simulations to robustly quantify bias avoidance and underscore the framework's efficacy. Taking advantage of our forecasting results and the parameter degeneracies, we identify the specific impact of the shape of the LF of source galaxies on the cosmic shear data vector. We also discuss the potential of this method in providing a way to model generic selection functions in redshift distribution estimation, as well as its possibilities for extension to a 3x2pt analysis, particularly with respect to incorporating galaxy bias in this luminosity-function-based framework. Although we consider the context of LSST cosmic shear in this work, the proposed joint modelling framework is generically applicable to weak lensing surveys.
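
How a Schechter LF shapes the redshift distribution of a magnitude-limited sample can be sketched in a few lines. Below is a minimal illustration, assuming placeholder parameter values (phi*, alpha, M*, and the magnitude limit are illustrative, not the paper's choices): the comoving density of galaxies brighter than the survey limit falls with redshift as the luminosity cut climbs the LF.

    # Sketch: how Schechter LF parameters shape a magnitude-limited n(z).
    # All parameter values are illustrative placeholders.
    import numpy as np
    from scipy.integrate import quad
    from astropy.cosmology import Planck18 as cosmo

    def schechter_phi(x, phi_star=1e-2, alpha=-1.3):
        """Schechter LF in units of L/L*: phi(x) dx = phi* x^alpha exp(-x) dx."""
        return phi_star * x**alpha * np.exp(-x)

    def n_brighter(z, m_lim=25.3, M_star=-20.5):
        """Comoving density of galaxies brighter than the magnitude limit."""
        M_lim = m_lim - cosmo.distmod(z).value   # no K-correction, for simplicity
        x_min = 10 ** (-0.4 * (M_lim - M_star))  # luminosity limit in units of L*
        n, _ = quad(schechter_phi, x_min, np.inf)
        return n

    zs = np.linspace(0.1, 2.0, 20)
    dndz = np.array([n_brighter(z) for z in zs])
    # Weight by the comoving volume element to get a (relative) dN/dz
    dndz *= cosmo.differential_comoving_volume(zs).value
    print(dndz / dndz.max())

Steepening alpha or brightening M* visibly redistributes dN/dz, which is the coupling the joint framework exploits.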
Simulation-Based Inference for Probabilistic Galaxy Detection and Deblending
Stage-IV dark energy wide-field surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), will observe an unprecedented number density of galaxies. As a result, the majority of imaged galaxies will visually overlap, a phenomenon known as blending. Blending is expected to be a leading source of systematic error in astronomical measurements. To mitigate this systematic, we propose a new probabilistic method for detecting, deblending, and measuring the properties of galaxies, called the Bayesian Light Source Separator (BLISS). Given an astronomical survey image, BLISS uses convolutional neural networks to produce a probabilistic astronomical catalog by approximating the posterior distribution over the number of light sources, their centroids' locations, and their types (galaxy vs. star). BLISS additionally includes a denoising autoencoder to reconstruct unblended galaxy profiles. As a first step towards demonstrating the feasibility of BLISS for cosmological applications, we apply our method to simulated single-band images whose properties are representative of year-10 LSST coadds. First, we study each BLISS component independently and examine its probabilistic output as a function of SNR and degree of blending. Then, by propagating the probabilistic detections from BLISS to its deblender, we produce per-object flux posteriors. Using these posteriors yields a substantial improvement in aperture flux residuals relative to deterministic detections alone, particularly for highly blended and faint objects. These results highlight the potential of BLISS as a scalable, uncertainty-aware tool for mitigating blending-induced systematics in next-generation cosmological surveys.
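
As a rough illustration of the kind of architecture the abstract describes (a sketch under stated assumptions, not the BLISS implementation), a detection head can map an image tile to a categorical posterior over the number of sources and Gaussian posteriors over candidate centroids:

    # Hypothetical probabilistic detection head (not the BLISS code):
    # per tile, a categorical posterior over source counts and a Gaussian
    # posterior over each candidate centroid.
    import torch
    import torch.nn as nn

    class DetectionHead(nn.Module):
        def __init__(self, max_sources=3, tile=8):
            super().__init__()
            self.max_sources = max_sources
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
            feat_dim = 32 * tile * tile
            self.count_logits = nn.Linear(feat_dim, max_sources + 1)
            # mean and log-sigma of each candidate centroid (x, y)
            self.loc_params = nn.Linear(feat_dim, max_sources * 4)

        def forward(self, tiles):
            h = self.features(tiles)
            count_post = torch.distributions.Categorical(logits=self.count_logits(h))
            loc = self.loc_params(h).view(-1, self.max_sources, 4)
            loc_post = torch.distributions.Normal(loc[..., :2], loc[..., 2:].exp())
            return count_post, loc_post

    tiles = torch.randn(5, 1, 8, 8)             # five fake 8x8 tiles
    counts, locs = DetectionHead()(tiles)
    print(counts.probs.shape, locs.mean.shape)  # (5, 4) and (5, 3, 2)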
The impact of environment on size: Galaxies are 50% smaller in the Fornax Cluster compared to the field
Size is a fundamental parameter for measuring the growth of galaxies and the role of the environment in their evolution. However, the conventional size definitions used for this purpose are often biased and miss the diffuse, outermost signatures of galaxy growth, including star formation and gas accretion. This issue is addressed by examining low surface brightness truncations, or galaxy "edges", as a physically motivated tracer of size based on star formation thresholds. Our total sample consists of \(\sim900\) galaxies with stellar masses ranging from \(10^5\,M_{\odot} < M_{\star} < 10^{11}\,M_{\odot}\). This sample of nearby cluster, group-satellite and nearly isolated field galaxies was compiled using multi-band imaging from the Fornax Deep Survey, the deep IAC Stripe 82 and Dark Energy Camera Legacy Surveys. We find that the edge radii scale as \(R_{\rm edge} \propto M_{\star}^{0.42}\) with a very small intrinsic scatter (\(\sim 0.07\) dex). The scatter is driven by the morphology and environment of the galaxies. In both the cluster and the field, early-type dwarfs are systematically \(\sim20\%\) smaller than late-types. However, compared to the field, galaxies in the Fornax Cluster are the most impacted: at a fixed stellar mass, edges in the cluster are found at \(\sim50\%\) smaller radii, and the average stellar surface density at the edge is a factor of two higher (\(\sim 1\,M_{\odot}/\mathrm{pc}^2\)). Our findings support the rapid removal of loosely bound neutral hydrogen in hot, crowded environments, which truncates galaxies outside-in earlier and prevents the formation of more extended sizes and lower-density edges. Our results highlight the importance of deep imaging surveys for studying the low surface brightness imprints of large-scale structure and environment on galaxy evolution.
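
To make the quoted scaling concrete: only the slope (0.42) and the \(\sim50\%\) cluster offset in the sketch below come from the abstract; the normalization is a hypothetical placeholder.

    # Illustrative use of the reported size-mass relation R_edge ~ M*^0.42.
    # r0 and m0 are made-up placeholders; slope and cluster offset are from
    # the abstract.
    import numpy as np

    def edge_radius_kpc(m_star, r0=10.0, m0=1e10, slope=0.42, in_cluster=False):
        """Edge radius in kpc for stellar mass m_star (solar masses)."""
        r = r0 * (m_star / m0) ** slope
        return 0.5 * r if in_cluster else r  # cluster: ~50% smaller at fixed mass

    for m in (1e8, 1e9, 1e10):
        print(f"M*={m:.0e} Msun: field {edge_radius_kpc(m):.2f} kpc, "
              f"cluster {edge_radius_kpc(m, in_cluster=True):.2f} kpc")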
Impact of Large-Scale Structure Systematics on Cosmological Parameter Estimation
Large near-future galaxy surveys offer sufficient statistical power to make our cosmology analyses data-driven, limited primarily by systematic errors. Understanding the impact of systematics is therefore critical. We perform an end-to-end analysis to investigate the impact of some of the systematics that affect large-scale structure studies, carrying out an inference analysis on simulated density maps with various systematics; these include systematics caused by photometric redshifts (photo-\(z\)s), Galactic dust, structure induced by the telescope observing strategy and observing conditions, and incomplete covariance matrices. Specifically, we consider the impacts of incorrect photo-\(z\) distributions (photometric biases, scatter, outliers; spectroscopic calibration biases), dust map resolution, an incorrect dust law, selecting none or only some contaminant templates for deprojection, and using a diagonal covariance matrix instead of a full one. We quantify the biases induced by these systematics on cosmological parameter estimation using tomographic galaxy angular power spectra, with a focus on identifying whether the maximum plausible level of each systematic has an adverse impact on the estimation of key cosmological parameters from a galaxy clustering analysis with the Rubin Observatory Legacy Survey of Space and Time (LSST). We find photo-\(z\) systematics to be the most pressing of the systematics investigated, with spectroscopic calibration biases leading to the greatest adverse impact, while helpfully being flagged by a high \(\chi^2\) value for the best-fit model. Larger-than-expected photo-\(z\) scatter, on the other hand, has a significant impact without necessarily indicating a poor fit. In contrast, in the analysis framework used in this work, biases from observational systematics and incomplete covariance matrices are comfortably subdominant.
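
One of the choices studied, selecting contaminant templates for deprojection, can be illustrated with a simplified linear version of the mode-projection idea (a sketch, not the paper's pipeline): fit and subtract the linear contributions of known systematics templates from an observed overdensity map.

    # Simplified linear template deprojection on a toy overdensity map.
    import numpy as np

    rng = np.random.default_rng(0)
    npix = 10_000
    true_delta = rng.normal(0.0, 0.1, npix)          # cosmological signal
    templates = rng.normal(size=(2, npix))           # e.g. dust map, survey depth
    alpha_true = np.array([0.05, -0.03])             # contamination amplitudes
    observed = true_delta + alpha_true @ templates

    # Least-squares estimate of the contamination coefficients, then subtract
    alpha_hat, *_ = np.linalg.lstsq(templates.T, observed, rcond=None)
    cleaned = observed - alpha_hat @ templates
    print("fitted amplitudes:", alpha_hat)           # close to alpha_true
    print("residual rms:", (cleaned - true_delta).std())

In a real analysis the subtraction also removes a little true signal, which mode-projection methods correct for at the power-spectrum level; omitting a relevant template leaves a bias of the kind the paper quantifies.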
MADNESS Deblender: Maximum A posteriori with Deep NEural networks for Source Separation
Due to the unprecedented depth of the upcoming ground-based Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory, approximately two-thirds of the galaxies are likely to be affected by blending - the overlap of physically separated galaxies in images. Thus, extracting reliable shapes and photometry from individual objects will be limited by our ability to correct for blending and control any residual systematic effect. Deblending algorithms tackle this issue by reconstructing the isolated components from a blended scene, but the most commonly used algorithms often fail to model complex realistic galaxy morphologies. As part of an effort to address this major challenge, we present MADNESS, which takes a data-driven approach and combines pixel-level multi-band information to learn complex priors for obtaining the maximum a posteriori solution of deblending. MADNESS is based on deep neural network architectures such as variational auto-encoders and normalizing flows. The variational auto-encoder reduces the high-dimensional pixel space into a lower-dimensional space, while the normalizing flow models a data-driven prior in this latent space. Using a simulated test dataset with galaxy models for a 10-year LSST survey and a galaxy density ranging from 48 to 80 galaxies per arcmin\(^2\), we characterize the aperture-photometry g-r color, structural similarity index, and pixel cosine similarity of the galaxies reconstructed by MADNESS. We compare our results against state-of-the-art deblenders including scarlet. With the r-band of LSST as an example, we show that MADNESS performs better than scarlet in all the metrics. For instance, the average absolute value of the relative flux residual in the r-band for MADNESS is approximately 29% lower than that of scarlet. The code is publicly available on GitHub.
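
The optimization loop at the heart of this approach can be sketched as follows (a conceptual stand-in, not the released MADNESS code; the decoder and flow below are toy placeholders): optimize latent codes so that the decoded galaxies explain the blended scene, with the flow-based prior log p(z) regularizing the solution.

    # Conceptual MAP deblending in a learned latent space.
    import torch

    def map_deblend(scene, decoder, flow_logprob, n_sources, latent_dim=16,
                    steps=200, lr=0.05, noise_sigma=1.0):
        """Return MAP latent codes for the n_sources galaxies in `scene`."""
        z = torch.zeros(n_sources, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            model_image = decoder(z).sum(dim=0)          # superpose components
            log_like = -0.5 * ((scene - model_image) ** 2).sum() / noise_sigma**2
            loss = -(log_like + flow_logprob(z).sum())   # negative log-posterior
            loss.backward()
            opt.step()
        return z.detach()

    # Toy stand-ins for the trained networks (assumptions, not MADNESS itself):
    decoder = torch.nn.Sequential(torch.nn.Linear(16, 32 * 32),
                                  torch.nn.Unflatten(-1, (32, 32)))
    flow_logprob = lambda z: -0.5 * (z ** 2).sum(dim=-1)  # standard-normal prior
    scene = torch.randn(32, 32)
    z_map = map_deblend(scene, decoder, flow_logprob, n_sources=2)
    print(z_map.shape)  # torch.Size([2, 16])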
Simulation-Based Inference Benchmark for Weak Lensing Cosmology
Standard cosmological analysis, which relies on two-point statistics, fails to extract the full information content of the data. This limits our ability to constrain cosmological parameters with precision. Thus, recent years have seen a paradigm shift from analytical likelihood-based to simulation-based inference. However, such methods require a large number of costly simulations. We focus on full-field inference, considered the optimal form of inference. Our objective is to benchmark several ways of conducting full-field inference in order to gain insight into the number of simulations required for each method. We make a distinction between explicit and implicit full-field inference. Moreover, as it is crucial for explicit full-field inference to use a differentiable forward model, we also discuss the advantages of having this property for the implicit approach. We use the sbi_lens package, which provides a fast and differentiable log-normal forward model. This forward model enables us to compare explicit and implicit full-field inference with and without gradient. The former is achieved by sampling the forward model with the No-U-Turn Sampler (NUTS). The latter starts by compressing the data into sufficient statistics and uses the Neural Likelihood Estimation algorithm and its gradient-augmented variant. We perform a full-field analysis on LSST Y10-like simulated weak lensing mass maps. We show that explicit and implicit full-field inference yield consistent constraints. Explicit inference requires 630 000 simulations with our particular sampler, corresponding to 400 independent samples. Implicit inference requires a maximum of 101 000 simulations, split into 100 000 simulations to build the sufficient statistics (this number is not fine-tuned) and 1 000 simulations to perform inference. Additionally, we show that our way of exploiting the gradients does not significantly help implicit inference.
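
A toy version of the explicit route shows the basic pattern of jointly sampling the latent field and the parameter with NUTS (hedged: the one-parameter Gaussian forward model below is a trivial stand-in for sbi_lens's log-normal model, not its API):

    # Toy explicit full-field inference with NUTS (numpyro).
    import jax.random as jr
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import MCMC, NUTS

    N_PIX, NOISE = 64, 0.3

    def model(obs=None):
        sigma8 = numpyro.sample("sigma8", dist.Uniform(0.4, 1.2))
        latent = numpyro.sample("field", dist.Normal(0., 1.).expand([N_PIX]))
        signal = sigma8 * latent                 # trivial "forward model"
        numpyro.sample("obs", dist.Normal(signal, NOISE), obs=obs)

    # Simulate data at sigma8 = 0.8, then sample the joint posterior
    truth = 0.8 * jr.normal(jr.PRNGKey(0), (N_PIX,))
    data = truth + NOISE * jr.normal(jr.PRNGKey(1), (N_PIX,))
    mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=500)
    mcmc.run(jr.PRNGKey(2), obs=data)
    print(mcmc.get_samples()["sigma8"].mean())   # roughly near the true 0.8

The cost quoted in the abstract comes from exactly this structure: every NUTS step re-evaluates (and differentiates) the forward model, so independent samples are expensive relative to the implicit route.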
Improved photometric redshift estimations through self-organising map-based data augmentation
We introduce a framework for the enhanced estimation of photometric redshifts using Self-Organising Maps (SOMs). Our method projects galaxy Spectral Energy Distributions (SEDs) onto a two-dimensional map, identifying regions that are sparsely sampled by existing spectroscopic observations. These under-sampled areas are then augmented with simulated galaxies, yielding a more representative spectroscopic training dataset. To assess the efficacy of this SOM-based data augmentation in the context of the forthcoming Legacy Survey of Space and Time (LSST), we employ mock galaxy catalogues from the OpenUniverse2024 project and generate synthetic datasets that mimic the expected photometric selections of LSST after one (Y1) and ten (Y10) years of observation. We construct 501 degraded realisations by sampling galaxy colours, magnitudes, redshifts and spectroscopic success rates, in order to emulate the compilation of a wide array of realistic spectroscopic surveys. Augmenting the degraded mock datasets with simulated galaxies from the independent CosmoDC2 catalogues markedly improves the performance of our photometric redshift estimates compared to models lacking this augmentation, particularly for high-redshift galaxies (\(z_{\rm true} \gtrsim 1.5\)). This improvement is manifested in notably reduced systematic biases and a decrease in catastrophic failures by up to approximately a factor of 2, along with a reduction in information loss in the conditional density estimates. These results underscore the effectiveness of SOM-based augmentation in refining photometric redshift estimation, thereby enabling more robust analyses in cosmology and astrophysics for the NSF-DOE Vera C. Rubin Observatory.
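
Schematically, the augmentation step amounts to assigning galaxies to SOM cells, flagging under-sampled cells, and filling them from a mock catalogue. The sketch below uses random stand-ins for the trained SOM and for both catalogues, and a nearest-cell lookup in place of a full SOM library:

    # Schematic SOM-based augmentation (not the paper's pipeline).
    import numpy as np

    rng = np.random.default_rng(42)
    n_colors = 5                                     # e.g. ugrizy -> 5 colors
    som_cells = rng.normal(size=(8 * 8, n_colors))   # pretrained cell weights (stand-in)

    def assign_cells(colors):
        """Index of the best-matching SOM cell for each galaxy's color vector."""
        d2 = ((colors[:, None, :] - som_cells[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)

    spec_colors = rng.normal(size=(2_000, n_colors))   # spectroscopic sample
    mock_colors = rng.normal(size=(20_000, n_colors))  # simulated galaxies

    spec_cells = assign_cells(spec_colors)
    counts = np.bincount(spec_cells, minlength=len(som_cells))
    sparse = np.flatnonzero(counts < 10)       # under-sampled regions of the map

    # Augment: draw mock galaxies that land in the sparse cells
    mock_cells = assign_cells(mock_colors)
    augmentation = mock_colors[np.isin(mock_cells, sparse)]
    print(f"{len(sparse)} sparse cells, {len(augmentation)} mock galaxies added")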
A direct detection method of galaxy intrinsic ellipticity-gravitational shear correlation in non-linear regimes using self-calibration
Intrinsic alignment (IA) of galaxies is a challenging source of contamination in cosmic shear (GG) signals. The galaxy intrinsic ellipticity-gravitational shear (IG) correlation is generally the most dominant component of such contamination for cross-correlated redshift bins. The self-calibration (SC) method is one of the most effective techniques for mitigating this contamination in the GG signal. In a photometric survey, the SC method first extracts the galaxy number density-galaxy intrinsic ellipticity (gI) correlation from the observed galaxy-galaxy lensing correlation using the redshift dependence of lens-source pairs. The IG correlation is then computed through a scaling relation using the gI correlation and other lensing observables. We extend the SC method beyond the linear regime by modifying its scaling relation to account for a non-linear galaxy bias model and various IA models. In this study, we provide a framework to detect the IG correlation in the source-galaxy redshift bins of the proposed year 1 survey of the Rubin Legacy Survey of Space and Time (LSST Y1). We tested the method for the tidal alignment and tidal torquing (TATT) model of IA and found that the scaling relation is accurate to within 10% and 20% for cross-correlated and auto-correlated redshift bins, respectively. Hence the IG contamination can be suppressed by a factor of 10 and 5 for cross-correlated and auto-correlated redshift bins, respectively. We tested the method's robustness and found that suppression of the IG contamination by a factor of 5 is still achievable for all combinations of cross-correlated bins even with the inclusion of a moderate amount of uncertainty on the IA and bias parameters. [Abridged]
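
The quoted suppression factors follow directly from the scaling-relation accuracy: if the self-calibration estimate \(\hat{C}^{IG}_\ell\) reproduces the true IG spectrum to fractional accuracy \(\epsilon\), subtracting it leaves a residual of \(\epsilon\, C^{IG}_\ell\),

\[
C^{IG,\,\mathrm{res}}_\ell \;=\; C^{IG}_\ell - \hat{C}^{IG}_\ell \;=\; \epsilon\, C^{IG}_\ell,
\qquad
\epsilon \approx 0.1 \;\Rightarrow\; \text{suppression by } 1/\epsilon = 10,
\quad
\epsilon \approx 0.2 \;\Rightarrow\; 5 .
\]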
A halo model approach for mock catalogs of time-variable strong gravitational lenses
Time delays in both galaxy- and cluster-scale strong gravitational lenses have recently attracted a lot of attention in the context of the Hubble tension. Future wide-field cadenced surveys, such as the LSST, are anticipated to discover strong lenses across various scales. We generate mock catalogs of strongly lensed QSOs and SNe on galaxy, group, and cluster scales based on a halo model that incorporates dark matter halos, galaxies, and subhalos. For the upcoming LSST survey, we predict that approximately 4000 lensed QSOs and 200 lensed SNe with resolved multiple images will be discovered. Among these, about 80 lensed QSOs and 10 lensed SNe will have maximum image separations larger than 10 arcsec, which roughly corresponds to cluster-scale strong lensing. We find that adopting the Chabrier stellar IMF instead of the fiducial Salpeter IMF reduces the predicted number of strong lenses by approximately half, while the distributions of lens and source redshifts and image separations are not significantly changed. In addition to mock catalogs of multiple-image lens systems, we create mock catalogs of highly magnified systems, including both multiple-image and single-image systems. We find that such highly magnified systems are typically produced by massive galaxies, but a non-negligible fraction of them are located in the outskirts of galaxy groups and clusters. Furthermore, we compare subsamples of our mock catalogs with lensed QSO samples constructed from the SDSS and Gaia and find that our mock catalogs with the fiducial Salpeter IMF reproduce the observations quite well. In contrast, our mock catalogs with the Chabrier IMF predict a significantly smaller number of lensed QSOs than observed, which adds evidence that the stellar IMF of massive galaxies is Salpeter-like. Our Python code SL-Hammocks, as well as the mock catalogs, is made available online. (abridged)
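
The quoted image-separation scales can be sanity-checked with the standard singular isothermal sphere (SIS) formula, \(\theta_E = 4\pi(\sigma_v/c)^2 D_{ls}/D_s\), with separation \(\Delta\theta \approx 2\theta_E\) (the velocity dispersions below are illustrative choices, not the paper's halo model):

    # Back-of-the-envelope SIS image separations.
    import numpy as np
    from astropy import units as u, constants as const
    from astropy.cosmology import Planck18 as cosmo

    def sis_separation_arcsec(sigma_v_kms, z_lens=0.5, z_src=2.0):
        d_s = cosmo.angular_diameter_distance(z_src)
        d_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)
        theta_e = 4 * np.pi * (sigma_v_kms * u.km / u.s / const.c) ** 2 * d_ls / d_s
        return (2 * theta_e * u.rad).to(u.arcsec).value

    print(sis_separation_arcsec(220))   # galaxy scale: ~2 arcsec
    print(sis_separation_arcsec(700))   # cluster scale: > 10 arcsec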
Type Ia supernova growth-rate measurement with LSST simulations: intrinsic scatter systematics
Measurement of the growth rate of structures (\(f\sigma_8\)) with Type Ia supernovae (SNe Ia) will improve our understanding of the nature of dark energy and enable tests of general relativity. In this paper, we generate simulations of the 10-year SN Ia dataset of the Rubin-LSST survey, including a correlated velocity field from an N-body simulation and realistic models of SN Ia properties and their correlations with host-galaxy properties. We find, similar to SN Ia analyses that constrain the dark energy equation-of-state parameters \(w_0 w_a\), that constraints on \(f\sigma_8\) can be biased depending on the intrinsic scatter of SNe Ia. While for the majority of intrinsic scatter models we recover \(f\sigma_8\) with a precision of \(\sim\)13-14%, for the most realistic dust-based model we find that the presence of non-Gaussianities in the Hubble diagram residuals leads to a bias on \(f\sigma_8\) of about -20%. When trying to correct for the dust-based intrinsic scatter, we find that propagating the uncertainty on the model parameters does not significantly increase the error on \(f\sigma_8\). We also find that while the main component of the \(f\sigma_8\) error budget is the statistical uncertainty (>75% of the total), the systematic error budget is dominated by the uncertainty on the damping parameter, \(\sigma_u\), which gives an empirical description of the effect of redshift-space distortions on the velocity power spectrum. Our results motivate a search for new methods to correct for the non-Gaussian distribution of the Hubble diagram residuals, as well as improved modeling of the damping parameter.
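
The connection between Hubble-diagram residuals and peculiar velocities that underlies such measurements reduces, at low redshift, to a one-line estimator (a simplified illustration, not the paper's likelihood): since \(d_L \approx cz/H_0\), a residual \(\Delta\mu\) maps to \(v_p \approx (\ln 10/5)\, c z\, \Delta\mu\).

    # Low-z approximation linking Hubble residuals to peculiar velocities.
    import numpy as np

    C_KMS = 299_792.458  # speed of light in km/s

    def peculiar_velocity(z, delta_mu):
        """Peculiar velocity (km/s) from a Hubble residual delta_mu (mag)."""
        return (np.log(10) / 5) * C_KMS * z * delta_mu

    # Example: a 0.02 mag residual at z = 0.05 corresponds to ~140 km/s
    print(f"{peculiar_velocity(0.05, 0.02):.0f} km/s")

This is why the non-Gaussian tails of the residual distribution matter: any scatter model that distorts \(\Delta\mu\) propagates directly into the inferred velocity field and hence into \(f\sigma_8\).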