Search Results

100 result(s) for "Colby, Sean"
Relative permeability for water and gas through fractures in cement
Relative permeability is an important attribute influencing subsurface multiphase flow. Characterization of relative permeability is necessary to support activities such as carbon sequestration, geothermal energy production, and oil and gas exploration. Previous research efforts have largely neglected the relative permeability of the cement used to seal wellbores, where risks of leakage are significant. This study was therefore performed to evaluate the effect of fracturing on the permeability and relative permeability of wellbore cement. Relative permeability of water and air was measured using ordinary Portland cement paste cylinders containing fracture networks that exhibited a range of permeability values. The measured relative permeability was compared with three models: (1) the Corey curve, often used for modeling relative permeability in porous media; (2) the X-curve, commonly used to represent relative permeability of fractures; and (3) a Burdine model based on fitting the Brooks-Corey function to fracture saturation-pressure data inferred from X-ray computed tomography (XCT)-derived aperture distributions. Experimentally determined aqueous relative permeability was best described by the Burdine model, though the water phase tended to follow the Corey curve for the simple fracture system, while air relative permeability was best described by the X-curve.
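The three model families named in this abstract have standard closed forms. A minimal sketch in Python, assuming the commonly used Corey exponents and the Burdine/Brooks-Corey relations rather than the paper's exact parameterization:

```python
import numpy as np

def corey(se):
    """Classic Corey curves for porous media (common exponent choice)."""
    krw = se**4
    krg = (1.0 - se)**2 * (1.0 - se**2)
    return krw, krg

def x_curve(se):
    """X-curve: linear relative permeability, often assumed for fractures."""
    return se, 1.0 - se

def burdine_brooks_corey(se, lam):
    """Burdine relative permeability built on a Brooks-Corey capillary
    pressure fit; lam is the pore-size distribution index (the study fits
    the equivalent quantity to XCT-derived aperture distributions)."""
    krw = se**((2.0 + 3.0 * lam) / lam)
    krg = (1.0 - se)**2 * (1.0 - se**((2.0 + lam) / lam))
    return krw, krg

se = np.linspace(0.0, 1.0, 11)            # effective water saturation
print(corey(se)[0])                       # aqueous k_r, Corey model
print(burdine_brooks_corey(se, 2.0)[0])   # aqueous k_r, Burdine model
```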
Reviews and syntheses: Opportunities for robust use of peak intensities from high-resolution mass spectrometry in organic matter studies
Earth's biogeochemical cycles are intimately tied to the biotic and abiotic processing of organic matter (OM). Spatial and temporal variations in OM chemistry are often studied using direct infusion, high-resolution Fourier transform mass spectrometry (FTMS), notably Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). An increasingly common approach is to use ecological metrics (e.g., within-sample diversity) to summarize high-dimensional FTMS data. However, problems can arise when FTMS peak-intensity data are used the way abundances are used in ecological analyses (e.g., species abundance distributions). Using peak-intensity data in this way requires the assumption that intensities act as direct proxies for concentrations. Here, we show that comparisons of the same peak across samples (within-peak) may carry information regarding variations in relative concentration, but comparisons of different peaks (between-peak) within or between samples do not. We further developed a simulation model to study the quantitative implications of using peak intensities to compute ecological metrics (e.g., intensity-weighted mean properties and diversity) that rely on information about both within-peak and between-peak shifts in relative abundance. We found that, despite analytical limitations in linking concentration to intensity, ecological metrics often perform well in terms of providing robust qualitative inferences and sometimes quantitatively accurate estimates of diversity and mean molecular characteristics. We conclude with recommendations for the robust use of peak intensities in natural organic matter studies. A primary recommendation is the use and extension of the simulation model to provide objective guidance on the degree to which conceptual and quantitative inferences can be made for a given analysis of a given dataset. Broad use of this approach can help ensure rigorous scientific outcomes from the use of FTMS peak intensities in environmental applications.
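The within-sample diversity metrics referred to here are typically Shannon-type indices computed over relative peak intensities. A minimal sketch of that computation, which makes explicit the intensity-as-abundance assumption the paper scrutinizes (the intensity values are hypothetical):

```python
import numpy as np

def shannon_diversity(intensities):
    """Shannon index H' over one sample's FTMS peaks, treating each peak as
    a 'species' and its relative intensity as an abundance. This is valid
    only under the assumption the paper scrutinizes: that intensity tracks
    concentration across different peaks."""
    p = np.asarray(intensities, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# hypothetical peak-intensity vector for a single sample
sample = [1.2e6, 4.5e5, 9.8e4, 2.2e6]
print(shannon_diversity(sample))
```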
An automated framework for NMR chemical shift calculations of small organic molecules
When using nuclear magnetic resonance (NMR) to assist in chemical identification in complex samples, researchers commonly rely on databases of chemical shift spectra. However, such libraries are typically built experimentally from authentic standards. For complex biological samples such as blood and soil, acquiring the NMR spectra of all possible compounds is infeasible given the limited availability of standards and the experimental processing time required. As an alternative, we introduce the in silico Chemical Library Engine (ISiCLE) NMR chemical shift module to accurately and automatically calculate NMR chemical shifts of small organic molecules through quantum chemical calculations. ISiCLE performs density functional theory (DFT)-based calculations for predicting chemical properties (specifically, NMR chemical shifts in this manuscript) via the open-source, high-performance computational chemistry software NWChem. ISiCLE calculates the NMR chemical shifts of sets of molecules using any available combination of DFT method, solvent, and NMR-active nuclei, with user-selected reference compounds and/or linear regression methods. Calculated NMR chemical shifts are reported for each molecule, along with comparisons across a number of metrics commonly used in the literature. Here, we demonstrate ISiCLE on a set of 312 molecules ranging in size up to 90 carbon atoms. For each, NMR chemical shift calculations were performed with eight different levels of DFT theory, with solvation effects treated using the implicit Conductor-like Screening Model. The dependence of the calculated chemical shifts on DFT method was systematically investigated through benchmarking and subsequently compared to experimental data available in the literature. Furthermore, ISiCLE was applied to a set of 80 methylcyclohexane conformers, combined via Boltzmann weighting and compared to experimental values. We demonstrate that our protocol shows promise for automating chemical shift calculations and, ultimately, for expanding chemical shift libraries.
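The reference-compound and linear-regression options described above are standard steps in DFT NMR workflows: computed isotropic shieldings are mapped onto the chemical shift scale, and per-conformer results are combined by Boltzmann weighting. A minimal sketch of both steps, with illustrative numbers and function names that are not ISiCLE's actual API:

```python
import numpy as np

def scale_shieldings(sigma_calc, delta_exp):
    """Fit delta = a*sigma + b by least squares so computed isotropic
    shieldings map onto experimental chemical shifts; the slope absorbs
    the systematic error of the chosen DFT method."""
    a, b = np.polyfit(sigma_calc, delta_exp, 1)
    return a, b

def boltzmann_weights(energies_kcal, T=298.15):
    """Boltzmann weights over conformer energies (kcal/mol), as used to
    combine per-conformer shifts into an ensemble average."""
    R = 0.0019872041  # kcal/(mol*K)
    e = np.asarray(energies_kcal) - np.min(energies_kcal)
    w = np.exp(-e / (R * T))
    return w / w.sum()

# hypothetical 13C shieldings (ppm) and matched experimental shifts (ppm)
sigma = np.array([160.2, 140.8, 55.1, 30.4])
delta = np.array([25.3, 44.6, 128.5, 152.1])
a, b = scale_shieldings(sigma, delta)
print(a * sigma + b)                       # regression-scaled shifts
print(boltzmann_weights([0.0, 0.4, 1.1]))  # hypothetical conformer energies
```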
In Silico Quantification of Intersubject Variability on Aerosol Deposition in the Oral Airway
The extrathoracic oral airway is not only a major mechanical barrier for pharmaceutical aerosols to reach the lung but also a major source of variability in lung deposition. Deposition of 1–30 µm particles was predicted using computational fluid dynamics in 11 CT-based models of the oral airways of adults. Simulations were performed for mouth breathing during both inspiration and expiration at two steady-state flow rates representative of resting/nebulizer use (18 L/min) and of dry powder inhaler (DPI) use (45 L/min). Consistent with previous in vitro studies, there was large intersubject variability in oral deposition. For an optimal size distribution of 1–5 µm for pharmaceutical aerosols, our data suggest that >75% of the inhaled aerosol is delivered to the intrathoracic lungs in most subjects when using a nebulizer, but only in about half the subjects when using a DPI. There was no significant difference in oral deposition efficiency between inspiration and expiration, unlike subregional deposition, which showed significantly different patterns between the two breathing phases. These results highlight the need to incorporate morphological variation of the upper airway in predictive models of aerosol deposition for accurate predictions of particle dosimetry in the intrathoracic region of the lung.
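Oral deposition in this size range and flow regime is dominated by inertial impaction, commonly indexed by the impaction parameter d²Q. A minimal sketch using the abstract's particle sizes and flow rates (the parameter itself is standard aerosol physics, not the paper's CFD pipeline):

```python
# Impaction parameter d^2 * Q, a standard correlate of extrathoracic
# deposition (d in um, Q in L/min). The sizes and flow rates mirror the
# abstract; the printed values are for orientation only.
diameters_um = [1, 3, 5, 10, 20, 30]
flow_rates_lpm = {"nebulizer": 18, "DPI": 45}

for label, q in flow_rates_lpm.items():
    for d in diameters_um:
        print(f"{label:9s} d={d:2d} um  d^2Q={d**2 * q:6d} um^2 L/min")
```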
Improving network inference algorithms using resampling methods
Background: Relatively small changes to gene expression data can dramatically affect co-expression networks inferred from those data, which, in turn, can significantly alter the subsequent biological interpretation. This error propagation is an underappreciated problem that, while hinted at in the literature, has not yet been thoroughly explored. Resampling methods (e.g., bootstrap aggregation, the random subspace method) are hypothesized to alleviate variability in network inference by minimizing outlier effects and distilling persistent associations in the data, but the efficacy of this approach assumes that the underlying statistical theory generalizes to biological network inference. Results: We evaluated the effect of bootstrap aggregation on networks inferred with commonly applied methods in terms of stability (resilience to perturbations in the underlying expression data), a metric for accuracy, and functional enrichment of edge interactions. Conclusion: Bootstrap aggregation results in improved stability and, depending on the size of the input dataset, a marginal improvement in accuracy, assessed by each method's ability to link genes in the same functional pathway.
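Bootstrap aggregation in this setting means resampling the samples with replacement, inferring a network from each resample, and keeping only edges that persist across resamples. A minimal sketch using simple correlation thresholding as a stand-in for the inference methods the paper evaluates (thresholds and counts are illustrative):

```python
import numpy as np

def bagged_coexpression(expr, n_boot=100, threshold=0.7, keep_frac=0.8, seed=0):
    """expr: samples x genes. Infer a thresholded correlation network on
    each bootstrap resample of the samples, then keep edges present in at
    least keep_frac of the resampled networks (the aggregation step)."""
    rng = np.random.default_rng(seed)
    n_samples, n_genes = expr.shape
    edge_counts = np.zeros((n_genes, n_genes))
    for _ in range(n_boot):
        idx = rng.integers(0, n_samples, size=n_samples)  # resample w/ replacement
        corr = np.corrcoef(expr[idx].T)
        edge_counts += np.abs(corr) >= threshold
    consensus = edge_counts / n_boot >= keep_frac
    np.fill_diagonal(consensus, False)
    return consensus

rng = np.random.default_rng(1)
expr = rng.normal(size=(40, 10))                     # toy expression matrix
expr[:, 1] = expr[:, 0] + 0.1 * rng.normal(size=40)  # one truly linked pair
print(bagged_coexpression(expr).sum() // 2, "consensus edge(s)")
```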
Who Is Metabolizing What? Discovering Novel Biomolecules in the Microbiome and the Organisms Who Make Them
Even as the field of microbiome research has made huge strides in mapping microbial community composition in a variety of environments and organisms, explaining the phenotypic influences of microbial taxa (both known and unknown) and their specific functions on the host remains a major challenge. A pressing need is the ability to assign specific functions, in terms of enzymes and small molecules, to specific taxa or groups of taxa in the community. This knowledge will be crucial for advancing personalized therapies based on the targeted modulation of microbes or metabolites with predictable outcomes that benefit the human host. This perspective article advocates for the combined use of standards-free metabolomics and activity-based protein profiling strategies to address this gap in functional knowledge in microbiome research via the identification of novel biomolecules and the attribution of their production to specific microbial taxa.
Correction: Nielson et al. Similarity Downselection: Finding the n Most Dissimilar Molecular Conformers for Reference-Free Metabolomics. Metabolites 2023, 13, 105
There were missing figures and associated legends for Figure 3 and Figure 4 as published due to a publication error [...].
Similarity Downselection: Finding the n Most Dissimilar Molecular Conformers for Reference-Free Metabolomics
Computational methods for creating in silico libraries of molecular descriptors (e.g., collision cross sections) are becoming increasingly prevalent due to the limited number of authentic reference materials available for traditional library building. These so-called “reference-free metabolomics” methods require sampling sets of molecular conformers in order to produce high-accuracy property predictions. Because of the computational cost of the subsequent calculations for each conformer, there is a need to sample the most relevant subset and avoid repeating calculations on conformers that are nearly identical. The goal of this study is to introduce a heuristic method for finding the most dissimilar conformers in a larger population in order to speed up reference-free calculation methods while maintaining high property prediction accuracy. Finding the set of the n items most dissimilar from each other in a larger population becomes increasingly difficult and computationally expensive as either n or the population size grows. Because a pairwise relationship exists between each item and every other item in the population, finding the set of the n most dissimilar items is different from simply sorting an array of numbers; for instance, one or more items in the most dissimilar set for n = 4 may not appear in the set for n = 5. An exact solution would have to exhaustively search all combinations of size n in the population. We present open-source software called similarity downselection (SDS), written in Python and freely available on GitHub. SDS implements a heuristic algorithm for quickly finding approximate set(s) of the n most dissimilar items. We benchmark SDS against a Monte Carlo method, which attempts to find the exact solution through repeated random sampling. We show that SDS is not only orders of magnitude faster but also more accurate than running Monte Carlo for 1,000,000 iterations, each searching for set sizes n = 3–7 in a population of 50,000 conformers. We also benchmark SDS against the exact solution for small example populations, showing that SDS produces a solution close to the exact one in these instances. Using theoretical approaches, we also demonstrate the constraints of the greedy algorithm and its efficacy relative to the exact solution.
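The heuristic SDS implements is described as greedy; the standard greedy max-min (farthest-point) strategy for this problem looks like the sketch below, which illustrates the general technique rather than SDS's actual code:

```python
import numpy as np

def greedy_downselect(dist, n):
    """Greedy max-min selection over a pairwise distance matrix: seed with
    the most distant pair, then repeatedly add the item whose minimum
    distance to the already-selected set is largest."""
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    while len(selected) < n:
        min_to_selected = dist[:, selected].min(axis=1)
        min_to_selected[selected] = -np.inf  # never re-pick a selected item
        selected.append(int(np.argmax(min_to_selected)))
    return selected

rng = np.random.default_rng(0)
points = rng.normal(size=(500, 3))  # stand-in conformer feature vectors
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
print(greedy_downselect(dist, 7))
```

Because each pick depends on all previous picks, the selected sets for consecutive n need not be nested, which is why the exact solutions for n = 4 and n = 5 can differ as the abstract notes.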
MerCat: a versatile k-mer counter and diversity estimator for database-independent property analysis obtained from metagenomic and/or metatranscriptomic sequencing data
MerCat (“Mer-Catenate”) is a parallel, highly scalable, and modular software package for robust property analysis of features in next-generation sequencing data. Using assembled contigs or raw sequence reads from any platform as input, MerCat performs k-mer counting of any length k, producing feature abundance count tables. MerCat allows direct analysis of data properties without the reference sequence database dependency common to search tools such as BLAST for compositional analysis of whole community shotgun sequencing data (e.g., metagenomes and metatranscriptomes).
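The core k-mer counting operation is easy to state; a minimal single-process sketch of what MerCat parallelizes and scales (illustrative, not MerCat's implementation):

```python
from collections import Counter

def count_kmers(seq, k):
    """Count all overlapping k-mers in one sequence (contig or read)."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# toy read; a real run streams FASTA/FASTQ records and merges counts
counts = count_kmers("ACGTACGTGACG", k=4)
print(counts.most_common(3))
```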
Predicting Common Buckthorn Distribution across Multiple Scales in New York State Using Maxent and Boosted Regression Tree Modelling
Common buckthorn (Rhamnus cathartica) is a woody plant species that is a prolific invader across half of North America, including much of New York State. While some aspects of buckthorn's ecological niche have been revealed through floral surveys and mesocosm experiments, its presence has not been statistically evaluated against environmental explanatory variables. To better understand buckthorn's response to environmental variables, and to determine the roles that habitat suitability, propagule pressure, and invasional meltdown play in shaping its distribution, species distribution models are needed. In this study, boosted regression tree modelling of buckthorn abundance in 160 plots around Geneseo, NY, and Maxent modelling of buckthorn presence in 512,327 plots across New York State were applied to test buckthorn's response to environmental variables. Additionally, paired plots near Geneseo, NY were compared to determine which within-site factors best control buckthorn's relative abundance. The results show that buckthorn is most likely to be present in moist, well-drained, flat areas close to disturbed or developed areas, and that its distribution is best explained by the theories of invasional meltdown and propagule pressure. As for relative abundance within sites, the results indicate that a combination of higher competitive ability and increased propagule pressure best explains greater buckthorn abundance. Overall, the results of both models indicate that the areas of highest concern are flat, moist areas near major cities and highways, especially where native ash trees have been decimated by emerald ash borer. Despite high agreement between the models' environmental explanatory variables, predicted buckthorn presence differs greatly between geographic scales: the NYS-scale model cannot accurately predict buckthorn presence at the smaller Geneseo scale, so management would require additional survey efforts at the plot level.
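Boosted regression tree modelling of presence against environmental covariates can be sketched with a generic gradient-boosting classifier; the predictors below are hypothetical stand-ins for the study's variables, and the Maxent half of the workflow is not shown:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# hypothetical plot-level predictors: soil moisture, slope, and distance
# to development, stand-ins for the study's environmental variables
X = rng.normal(size=(160, 3))
y = (X[:, 0] - X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=160)) > 0  # synthetic presence/absence

brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3)  # typical BRT-style settings
brt.fit(X, y.astype(int))
print(dict(zip(["moisture", "slope", "dist_dev"],
               brt.feature_importances_.round(2))))
```

Relative feature importances from such a model are one way to rank which environmental variables best explain presence, analogous to the variable contributions the study reports.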