Catalogue Search | MBRL
5 result(s) for "Damiani, Jana"
Relationships between Biomarkers of Oxidative Stress in Seminal Plasma and Sperm Motility in Bulls before and after Cryopreservation
2022
This study evaluated the relationship between biomarkers of oxidative stress (OS) in seminal plasma (SP) and sperm motility in bulls before and after cryopreservation. Three ejaculates per bull were collected from 20 young bulls. Each ejaculate was analyzed for motility before and after cryopreservation (by CASA), and the SP concentrations of Advanced Oxidation Protein Products (AOPP), thiols, and carbonyl groups (CT) were measured. Based on their motility, the ejaculates were then grouped into high motility fresh (HMF), low motility fresh (LMF), high motility thawed (HMT), and low motility thawed (LMT) groups. Higher AOPP and thiol concentrations in SP were related (p < 0.05) to higher LIN and BCF and lower ALH of fresh semen. In addition, AOPP and thiols were significantly higher in HMF than in LMF. Confirming this, Receiver Operating Characteristic (ROC) curve analysis showed that AOPP and thiol concentrations in SP were able to discriminate between HMF and LMF ejaculates (Area Under the Curve of 71.67% and 72.04%, respectively). These observations offer an alternative perspective on the relationship between sperm motility and the OS parameters of SP, which needs further investigation.
Journal Article
Massive Scale Data Analytics at LCLS-II
by
Thayer, Jana; Shankar, Murali; Weninger, Clemens
in
Coherent light; Data acquisition; Data analysis
2024
The increasing volumes of data produced at light sources such as the Linac Coherent Light Source (LCLS) enable the direct observation of materials and molecular assemblies at the length and timescales of molecular and atomic motion. This exponential increase in the scale and speed of data production is prohibitive for traditional analysis workflows, which rely on scientists tuning parameters during live experiments to adapt data collection and analysis. User facilities will increasingly rely on the automated delivery of actionable information in real time for rapid experiment adaptation, which presents a considerable challenge for data acquisition, data processing, data management, and workflow orchestration. In addition, researchers' desire to accelerate science requires rapid analysis, dynamic integration of experiment and theory, the ability to visualize results in near real time, and the introduction of ML and AI techniques. We present the LCLS-II Data System architecture, which is designed to address these challenges via an adaptable data reduction pipeline (DRP) to reduce data volume on-the-fly, online monitoring analysis software for real-time data visualization and experiment feedback, and the ability to scale to computing needs by utilizing local and remote compute resources, such as the ASCR Leadership Class Facilities, to enable quasi-real-time data analysis in minutes. We discuss the overall challenges facing LCLS, our ongoing work to develop a system responsive to these challenges, and our vision for future developments.
Journal Article
Real-time Data Analysis with the LCLS-II Data System
2025
The recent upgrades of light sources such as the Linac Coherent Light Source (LCLS-II) enable the observation of materials and molecular assemblies at the length and timescales of molecular and atomic motion. However, these systems also present formidable challenges for data analysis, due to the extremely high data throughput (hundreds of GB to multiple TB per second). The exponential increase in the scale and speed of data production heavily impacts data acquisition, data processing, and data management. On one hand, the sheer amount of collected data is disrupting traditional analysis workflows. In the past, scientists analyzed the data right after collection and manually tuned parameters to adapt future data collection and analysis to new results and changing experimental conditions. The massive amount of data generated by the new facilities is prohibitive for this kind of workflow: facilities often struggle to meet the demand for computational resources needed to process the data, and scientists find it increasingly difficult to assess the quality of the scientific information extracted from the data and to translate it into directives that steer the experiments in new directions. On the other hand, data storage and management within the facilities have also been affected by the high repetition rate of the new instruments, combined with the increasing resolution of X-ray imaging detectors. Storing all the recorded data is becoming impractical, and strategies to reduce the data as close as possible to the source, possibly even before it is saved to the facility's storage system, are currently under development.
To overcome these challenges, user facilities like LCLS increasingly rely on the automated delivery of actionable information in real time. Data must be analyzed as soon as possible after being collected. When the computational resources of the facility are not sufficient to carry out the analysis, data must be transferred to computer centers that may be located outside the facility, and the results of the analysis must be returned to LCLS with minimal latency. After the analysis, data can be reduced, discarding information that is not valuable for answering specific scientific questions. Additionally, the processed information must either be shown to the scientists, in a form that allows them to make quick experiment-steering decisions, or evaluated by expert systems (such as ML- and AI-based algorithms) that can close the feedback loop without human intervention. The need for low latency requires the infrastructure that carries out all these operations to be implemented directly in the facility's Data Acquisition System (DAQ).
This presentation will introduce the architecture of the LCLS-II Data Acquisition System, currently in use at the TMO and RIX beamlines at LCLS. It will discuss how the LCLS-II data system includes a feature extraction layer (DRP) designed to reduce data volumes in real time while preserving their science content, and how the architecture allows the reduction layer to be configured and adapted to the multiple science areas served by LCLS. It will also discuss the features included in the LCLS-II data system to facilitate real-time analysis by external software packages. Two of these software packages will be introduced in this presentation. The first is AMI2, a real-time analysis framework that provides visualization and graphically configurable analysis of a selectable subset of the data generated by the data system, on the timescale of seconds. AMI2 is designed to allow scientists to easily develop analysis pipelines customized to specific techniques and experiments. The second is OM, designed for more complex scientific analysis pipelines, such as Serial Femtosecond Crystallography (SFX), small- and wide-angle X-ray scattering (SWAXS), and X-ray emission spectroscopy (XES). OM is capable of transferring the recorded data to LCLS's local datacenter (S3DF) for more in-depth analysis and of retrieving the results to be displayed while the experiment is running.
Additionally, this presentation will introduce recent developments at LCLS in the capability to stream data to non-local computer centers for large-scale AI-based analysis, and discuss how this feature will be implemented in the future directly in the LCLS-II data system.
Journal Article
Current and future strategies for end-to-end data analysis at LCLS
2025
The efficiency, and sometimes success, of experiments carried out at LCLS often depends on getting timely information about the past and current state of the experiment, in order to inform decisions for the remainder of the beamtime. Timely information can mean different things along the steps leading from raw data to final results. Real-time feedback can be obtained from very early steps to quickly guide the instrument; for example, during experiments that deliver sample with liquid jets, the position of the sample injector relative to the interaction point between the liquid jet and the X-ray beam is constantly adjusted to maximize the hit rate. Decisions that need to be made on slightly longer timescales can be informed by intermediate processing steps; for example, in serial femtosecond crystallography (SFX), the position of the detector or the dilution of the sample can be optimized against the crystal indexing rate. Finally, decisions that need to be made on even longer timescales benefit from efficient end-to-end processing, typically done "offline", yielding statistical results about the quality of the experiment; for example, merging statistics in SFX can be monitored to decide when to switch sample or measurement conditions.
In this talk, I will present the infrastructure that has been built in the LCLS Data Systems to facilitate the latter type of decisions. I will attempt to describe the complexity of this infrastructure, which must robustly span and handle the whole data life-cycle from the hutch to supercomputers, while highlighting efforts made to hide that complexity and help users focus on the science. In particular, I will describe the data processing workflows that can be operated from the experiment interface in the browser. With the coming online of LCLS-II in 2023, and its hard X-ray upgrade expected in 2026, a step change in data generation rates will stress the ability to efficiently process and inform experiments. I will give a brief overview of the data reduction pipeline (DRP) that has been built within the LCLS Data Systems to handle orders of magnitude more data than LCLS-I ever produced. The DRP is a hardware infrastructure that places compressor nodes, monitoring nodes, and event-building nodes between the hutches and the disks, together enabling a drastic reduction in storage needs while preserving information. To guarantee this, effort is underway to devise generic compression algorithms that run efficiently in the DRP.
With this background in mind, the remainder of the talk will sketch a possible way forward for bringing the current "offline" end-to-end processing directly into the DRP, adding experiment-specific algorithms to the existing generic ones. In particular, I will discuss how work pioneered in cryo-EM leveraging differentiable simulators could be extended to the realm of X-ray diffractive imaging and mapped onto the LCLS-II Data Systems.
Journal Article
Agronomic Feasibility of Growing Chia in Northwestern Rio Grande do Sul
by
Tragnago, Jose Luiz; Salazar, Rodrigo Fernando dos Santos; Damiani, Fernanda
in
Agronomy; Antioxidants; Carbohydrates
2018
The aim of this work was to evaluate the development of chia at different sowing times and plant spacings. The experiment was conducted in a commercial area in the municipality of Novo Machado, in the Northwest Region of Rio Grande do Sul (RS), in a bifactorial arrangement with four replications: three sowing times (January 15, January 30, and February 15) combined with two row spacings (17 and 45 cm). The following parameters were evaluated: mean plant height, ear length, yield, thousand-seed weight, germination, and the first count of the germination test. The germination percentage and its derivations were transformed as arcsin[(X/100)^(1/2)]. The study showed that chia can be cultivated in the region, achieving good results for the parameters evaluated, mainly grain yield. Sowing in January at a spacing of 17 cm between rows provided the best results.
Journal Article