999 result(s) for "Temporal dimensions"
Exploring the temporal dimension in policy evaluation studies
This article explores the temporal dimension in policy evaluation studies. We argue that time has received little attention in policy evaluation studies, despite its importance for the occurrence and assessment of policy success or failure. We therefore propose making time a central element of policy evaluation. First, we explore the theoretical foundations behind the concept of time. Second, we present a case study to investigate the presence of time in that specific case and its evaluation. We conclude with recommendations for policy evaluation studies.
How marketers and consumers synchronize temporal modes to cocreate ritual vitality
Marketers recognize the contributions that consumer rituals make to their organizations. They endeavor to have such contributions persist as they support consumers in enacting those rituals. This ethnographic study examines the temporal aspects of ritual, termed 'ritual vitality.' We explain how marketers can influence ritual vitality through engagement in the chronos and kairos temporal dimensions of ritual; theorize the relationship between those dimensions; identify the ways in which marketers and consumers interact through a ritual's chronos and kairos temporal dimensions; and theorize how marketers and consumers co-create these experiences as each party guides, aligns with, or detours from one another. This co-creation is central to ritual vitality. Finally, we contribute an understanding of how chronos and kairos temporal dimensions shape, structure, and perpetuate ritual performance, and identify opportunities for marketers and consumers to participate in the synchronization of chronos and kairos temporality and the support of ritual performances that together may result in ritual vitality.
A dual transfer learning method based on 3D-CNN and vision transformer for emotion recognition
In the domain of medical science, emotion recognition based on electroencephalogram (EEG) has been widely used in emotion computing. Despite the prevalence of deep learning in EEG signals analysis, standard convolutional and recurrent neural networks fall short in effectively processing EEG data due to their inherent limitations in capturing global dependencies and addressing the non-linear and unstable characteristics of EEG signals. We propose a dual transfer learning method based on 3D Convolutional Neural Networks (3D-CNN) with a Vision Transformer (ViT) to enhance emotion recognition. This paper aims to utilize 3D-CNN effectively to capture the spatial characteristics of EEG signals and reduce data covariance, extracting shallow features. Additionally, ViT is incorporated to improve the model’s ability to capture long-range dependencies, facilitating deep feature extraction. The methodology involves a two-stage process: initially, the front end of a pre-trained 3D-CNN is employed as a shallow feature extractor to mitigate EEG data covariance and transformer biases, focusing on low-level feature detection. The subsequent stage utilizes ViT as a deep feature extractor, adept at modeling the global aspects of EEG signals and employing attention mechanisms for precise classification. We also present an innovative algorithm for data mapping in transfer learning, ensuring consistent feature representation across both spatio-temporal dimensions. This approach significantly improves global feature processing and long-range dependency detection, with the integration of color channels augmenting the model’s sensitivity to signal variations. 
In a 10-fold cross-validation experiment on the DEAP dataset, experimental results demonstrate that the proposed method achieves classification accuracies of 92.44% and 92.85% for the valence and arousal dimensions, and four-class classification accuracies across valence and arousal of HVHA: 88.01%, HVLA: 88.27%, LVHA: 90.89%, and LVLA: 78.84%. Similarly, it achieves an accuracy of 98.69% on the SEED dataset. Overall, this methodology not only holds substantial potential for advancing emotion recognition tasks but also contributes to the broader field of affective computing.
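The spatial-mapping step the abstract describes (arranging EEG channels on a 2D grid so a 3D-CNN can convolve over space and time windows) can be sketched as below. The 9x9 grid and the row-major channel placement are illustrative assumptions, not the paper's actual electrode map:

```python
import numpy as np

def eeg_to_voxels(eeg: np.ndarray, win: int) -> np.ndarray:
    """Map a (32, T) EEG recording into (n_windows, 9, 9) voxels for a 3D-CNN.

    Placeholder mapping: channels fill the 9x9 grid row-major; a real
    pipeline would use an electrode-position map instead.
    """
    n_ch, t = eeg.shape
    n_win = t // win
    # One scalar per channel per window (mean activity; real pipelines
    # often use band power or differential entropy here).
    feats = eeg[:, : n_win * win].reshape(n_ch, n_win, win).mean(axis=2)
    grid = np.zeros((n_win, 9, 9))
    rows, cols = np.divmod(np.arange(n_ch), 9)  # row-major placement
    grid[:, rows, cols] = feats.T
    return grid
```

The resulting voxel stack is what a 3D convolution would consume; unused grid cells stay zero.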
Intrinsic Human Elimination Half-Lives of Polychlorinated Biphenyls Derived from the Temporal Evolution of Cross-Sectional Biomonitoring Data from the United Kingdom
Background: Most empirical estimates of human elimination kinetics for persistent chemicals reflect apparent elimination half-lives that represent the aggregated effect of intrinsic elimination, ongoing exposure, and changes in body weight. However, estimates of intrinsic elimination at background levels are required for risk assessments for the general population. Objective: To estimate intrinsic human elimination half-lives at background levels for nine polychlorinated biphenyl (PCB) congeners, we used a novel approach based on population data. Methods: We used a population pharmacokinetic model to interpret two sets of congener-specific cross-sectional age-concentration biomonitoring data of PCB concentrations measured in lipid and blood samples that were collected from 229 individuals in 1990 and 2003. Our method is novel because it exploits information about changes in concentration in the human population along two dimensions: age and calendar time. Results: Our approach extracted information about both elimination kinetics and exposure trends from biomonitoring data. The longest intrinsic human elimination half-lives estimated in this study are 15.5 years for PCB-170, 14.4 years for PCB-153, and 11.5 years for PCB-180. Conclusions: Our results are further evidence that a maximum intrinsic elimination half-life for persistent chemicals such as PCBs exists and is approximately 10-15 years. A clear conceptual distinction between apparent and intrinsic half-lives is required to reduce the uncertainty in elimination half-lives of persistent chemicals. The method presented here estimates intrinsic elimination half-lives and the exposure trends of persistent pollutants using cross-sectional data available from a large and growing number of biomonitoring programs.
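The apparent-versus-intrinsic distinction can be illustrated with a one-compartment toy model (not the study's population pharmacokinetic model; all parameter values are illustrative): when exposure declines more slowly than the body eliminates, the population-level decline tracks the exposure trend, so the apparent half-life overstates the intrinsic one.

```python
import numpy as np

def simulate_body_burden(t_half_intrinsic, t_half_exposure, years=60, dt=0.1):
    """One-compartment model dC/dt = intake(t) - ke*C, with intake
    declining exponentially (illustrative sketch only)."""
    ke = np.log(2) / t_half_intrinsic   # intrinsic elimination rate
    kx = np.log(2) / t_half_exposure    # exposure decline rate
    t = np.arange(0.0, years, dt)
    c = np.zeros_like(t)
    for i in range(1, len(t)):
        c[i] = c[i - 1] + dt * (np.exp(-kx * t[i]) - ke * c[i - 1])
    return t, c

def apparent_half_life(t, c, tail=100):
    """Half-life implied by the tail slope of log-concentration."""
    slope = (np.log(c[-1]) - np.log(c[-tail])) / (t[-1] - t[-tail])
    return -np.log(2) / slope
```

With an intrinsic half-life of 5 years and exposure halving every 15 years, the late-time decline is governed by the slower exposure trend, so the apparent half-life comes out near 15 years, three times the intrinsic value.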
Task-level decision-making for dynamic and stochastic human-robot collaboration based on dual-agent deep reinforcement learning
Human-robot collaboration, as a multidisciplinary research topic, continues to pursue enhanced robot intelligence that is more human-compatible and fits the dynamic and stochastic characteristics of human behavior. However, the uncertainties introduced by the human partner challenge the robot's task planning and decision-making. When aiming at industrial tasks such as collaborative assembly, dynamics in the temporal dimension and stochasticity in the order of procedures need to be further considered. In this work, we bring a new perspective and solution based on reinforcement learning, where the problem is regarded as training an agent for tasks in dynamic and stochastic environments. Concretely, an adapted training approach based on the deep Q-learning method is proposed. This method regards both the robot and the human as agents in the interactive training environment for deep reinforcement learning. With the consideration of task-level industrial human-robot collaboration, the training logic and the agent-environment interaction have been designed. For the human-robot collaborative assembly tasks in the case study, we illustrate that our method can drive the robot, represented by one agent, to collaborate with the human partner even when the human performs the task procedures randomly.
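As a rough illustration of the agent-environment framing, here is a tabular stand-in for the paper's deep Q-network, with made-up subtask names and a scripted "human" partner; the robot learns to choose subtasks that do not conflict with its partner's next action:

```python
import random

# Hypothetical subtasks of a collaborative assembly job (illustrative names).
TASKS = ("fetch", "align", "fasten", "inspect")

def human_choice(remaining):
    # Stand-in human policy: works through the tasks in a fixed order.
    return next(t for t in TASKS if t in remaining)

def train(episodes=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # state (frozenset of remaining tasks) -> {action: value}
    for _ in range(episodes):
        remaining = set(TASKS)
        while remaining:
            s = frozenset(remaining)
            qs = Q.setdefault(s, {a: 0.0 for a in remaining})
            # Epsilon-greedy action selection for the robot agent.
            a = (rng.choice(sorted(remaining)) if rng.random() < eps
                 else max(sorted(qs), key=qs.get))
            h = human_choice(remaining)
            r = -1.0 if a == h else 1.0  # conflicting choices waste effort
            remaining = remaining - {a, h}
            target = r
            if remaining:
                s2 = frozenset(remaining)
                q2 = Q.setdefault(s2, {x: 0.0 for x in remaining})
                target += gamma * max(q2.values())
            qs[a] += alpha * (target - qs[a])  # Q-learning update
    return Q
```

After training, the greedy policy in the initial state avoids the task the human is about to take; the paper's method replaces the table with a deep Q-network and a far richer task model.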
Teachers' Use of Competency-based Assessment in Personalised Learning Practices: An Exploratory Study
The impact of competencies within the framework of the new learning ecology has generated a qualitative leap in the quality of educational practices. Currently, we have adopted different personalised pedagogical approaches and innovative methodologies that promote the acquisition and use of competencies, but how do we assess them? Their assessment is not sufficiently clear, since there is no consensus on what competencies, competency-based learning, or competency-based assessment mean, which in practice presents difficulties and concerns for teachers. This qualitative exploratory study has two objectives: (i) to propose a model to analyse competency-based assessment in personalised learning practices and (ii) to illustrate its application to two such practices in secondary education. The results show differences in the complexity of competency-based assessment practices, in the degree of relevance of the proposed assessment contexts, and in learners' participation in the feedback process. It is concluded that the model is a potentially helpful tool for understanding and optimising competency-based assessment in personalised learning practices. Recommendations are formulated so that teachers can promote effective assessment practices from a competency perspective.
The Fragility of Time: Time-Insensitivity and Valuation of the Near and Far Future
We propose that the temporal dimension is fragile: first, choices are insufficiently sensitive to it; second, such sensitivity as exists is exceptionally malleable, unlike other dimensions such as money, which are attended to by default. To test this, we axiomatize a "constant-sensitivity" discount function, and in four studies we show that the degree of time-sensitivity is inadequate relative to the compound discounting norm and strongly susceptible to manipulation. Time-sensitivity is increased by a comparative within-subject presentation (Experiment 1), direct instruction (Experiment 3), and provision of a visual cue for time duration (Experiment 4); time-sensitivity is decreased by a time-pressure manipulation (Experiment 2). In each study, the sensitivity manipulation has an opposite effect on near-future and far-future valuations: increased sensitivity decreases discounting in the near future and increases discounting in the far future. In contrast, such sensitivity manipulations have little effect on the money dimension.
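One common parameterization of a constant-sensitivity discount function is D(t) = exp(-(rt)^s), where s = 1 recovers compound (exponential) discounting and s < 1 captures time-insensitivity. A minimal sketch with illustrative parameter values (r and s here are not taken from the paper's experiments):

```python
import math

def constant_sensitivity(t, r=0.1, s=0.5):
    """Constant-sensitivity discounting D(t) = exp(-(r*t)**s); r, s illustrative."""
    return math.exp(-((r * t) ** s))

def compound(t, r=0.1):
    """Compound (exponential) discounting benchmark with the same rate."""
    return math.exp(-r * t)
```

Relative to the compound benchmark, s < 1 discounts the near future more heavily and the far future less, the pattern the sensitivity manipulations in the experiments push in one direction or the other.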
Time-series InSAR deformation monitoring of the Jinchuan mining area based on mini stack technology
Time-series Interferometric Synthetic Aperture Radar (TS-InSAR) enables precise, wide-area ground deformation monitoring but suffers from decorrelation and heavy computation with large archives of satellite imagery. To address these challenges, this study applies temporal-dimension image compression to 199 Sentinel-1A scenes (March 25, 2017–May 11, 2024) covering the Jinchuan mining area, China. Specifically, through the construction of a covariance matrix, phase linking (PL) for phase compensation, and dimensionality reduction and reconstruction, the time-series image datasets are compressed into 22 virtual images. These virtual images are then processed within the Persistent Scatterer Interferometry (PS-InSAR) framework, an approach referred to as mini stack technology. Results show that (1) the mini stack technology significantly enhances computational efficiency compared to traditional TS-InSAR methods, alleviating the decorrelation caused by long time spans in conventional interferograms; and (2) the average coherence coefficient obtained from the virtual image stack improved by 32.8%, and the sum of phase differences (SPD) decreased by 19.2% compared to the original images after full-group interferometric processing. Furthermore, the density of monitoring points (MPs) extracted by mini stack technology in the deformation zone of the mining area increased more than 32-fold relative to the PS-InSAR method. Additionally, the spatial deformation patterns derived from both approaches are consistent, with a Pearson correlation coefficient of 0.89, a mean deformation rate difference of 0.01 mm/yr, and a standard deviation of 0.68 mm/yr. These findings confirm that mini stack technology reliably captures ground deformation while enhancing processing efficiency and data quality, making it suitable for broader applications in long-term deformation monitoring.
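A heavily simplified numpy sketch of the compression idea: each mini stack of acquisitions is reduced to one virtual image via the dominant eigenvector of a sample covariance matrix. Real phase linking estimates covariance from spatial neighborhoods and applies phase compensation, both omitted here:

```python
import numpy as np

def compress_to_virtual_images(stack, k):
    """Reduce an (n_images, n_pixels) complex stack to ceil(n/k) virtual
    images, one per mini stack of k acquisitions (illustrative sketch)."""
    virtual = []
    for i in range(0, stack.shape[0], k):
        sub = stack[i:i + k]                     # one mini stack
        cov = sub @ sub.conj().T / sub.shape[1]  # sample covariance (Hermitian)
        _, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
        u = vecs[:, -1]                          # dominant eigenvector
        virtual.append(u.conj() @ sub)           # project to one virtual image
    return np.asarray(virtual)
```

Downstream PS-InSAR processing then operates on the much smaller virtual stack instead of the full archive.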
The challenge of managing the evolution of genomics data over time: a conceptual model-based approach
Background: Precision medicine is a promising approach that has revolutionized disease prevention and individualized treatment. The DELFOS oracle is a model-driven genomics platform that aids clinicians in identifying relevant variations that are associated with diseases. In its previous version, the DELFOS oracle did not consider the high degree of variability of genomics data over time. However, changes in genomics data have had a profound impact on clinicians’ work and pose the need for changing past, present, and future clinical actions. Therefore, our objective in this work is to consider changes in genomics data over time in the DELFOS oracle. Methods: Our objective has been achieved through three steps. First, we studied the characteristics of each database from which the DELFOS oracle extracts data. Second, we characterized which genomics concepts of the conceptual schema that supports the DELFOS oracle change over time. Third, we updated the DELFOS oracle so that it can manage the temporal dimension. To validate our approach, we carried out a use case to illustrate how the new version of the DELFOS oracle handles the temporal dimension. Results: Three events can change genomics data, namely, the addition of a new variation, the addition of a new link between a variation and a phenotype, and the update of a link between a variation and a phenotype. These events have been linked to the entities of the conceptual model that are affected by them. Finally, a new version of the DELFOS oracle that can deal with the temporal dimension has been implemented. Conclusion: Huge amounts of genomics data associated with diseases change over time, impacting patients’ diagnosis and treatment. Including this information in the DELFOS oracle added an extra layer of complexity, but using a model-driven approach mitigated the cost of implementing the needed changes. The new version handles the temporal dimension appropriately and eases clinicians’ work.
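The change events the abstract identifies map naturally onto an event log that can be replayed up to any date, giving each query a temporal "as of" semantics. A minimal sketch (class, method, and field names are illustrative, not the DELFOS conceptual schema):

```python
import datetime as dt

class TemporalStore:
    """Replayable event log for variations and variation-phenotype links."""

    def __init__(self):
        self.variations = []   # (date, variation)
        self.link_events = []  # (date, variation, phenotype, significance)

    def add_variation(self, date, variation):
        self.variations.append((date, variation))

    def set_link(self, date, variation, phenotype, significance):
        # Covers both "new link" and "link update": the latest event wins.
        self.link_events.append((date, variation, phenotype, significance))

    def links_as_of(self, date):
        """Replay link events up to `date`; later events overwrite earlier ones."""
        state = {}
        for d, var, ph, sig in sorted(self.link_events, key=lambda e: e[0]):
            if d <= date:
                state[(var, ph)] = sig
        return state
```

Replaying to an earlier date recovers what clinicians would have seen then, which is what makes revisiting past clinical actions possible.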
Assessing drinking and irrigation water quality in a highly altered subtropical river in India using hydro-chemical indices
River water pollution and the subsequent degradation of water quality for irrigation and drinking are reported worldwide, especially in tropical regions with excess population pressure. The present study investigates irrigation and drinking water quality and assesses its suitability in the subtropical Damodar River in India using hydrochemical indices during the pre-monsoon (PRM), monsoon (MON), and post-monsoon (POM) periods. The water quality index (WQI) results reveal that the river’s water is unsuitable for drinking, as 68.92% of samples (52.95% in PRM, 86.54% in MON, and 66.88% in POM) are found to be unfit for consumption in the temporal dimension. However, in the spatial dimension, the percentage of unsuitable water samples is highest near the village of Mujher Mana station, with 97.20% of samples (97.87% in PRM, 97.91% in MON, and 95.83% in POM) deemed unfit for drinking. This suggests the Damodar River water in MON and near the village of Mujher Mana needs treatment before drinking. The findings from the irrigation hazard indices and local farmers’ feedback indicate that the river water is suitable for irrigation use. In terms of spatial extent, SAR, %Na, KR, and PS are high at Mujher Mana village, RSC at Raniganj downstream (Ds), PI at Barakar, and MAR at Durgapur upstream (Us). The ANOVA test indicates a significant variation in river water quality across different spatio-temporal dimensions in the study area. Water pollution is mainly attributed to the discharge of untreated industrial and urban effluents directly into the river. Therefore, it is imperative to address the issue promptly to restore the river’s water quality.
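As a worked illustration of how a water quality index classifies samples, here is the common weighted-arithmetic WQI form with made-up standards and unit weights; the study's exact formulation, parameters, and weighting may differ:

```python
def wqi(sample, standards, weights):
    """Weighted-arithmetic WQI: q_i = 100*C_i/S_i, WQI = sum(w_i*q_i)/sum(w_i)."""
    q = {p: 100.0 * c / standards[p] for p, c in sample.items()}
    return sum(weights[p] * q[p] for p in sample) / sum(weights[p] for p in sample)

# Hypothetical permissible limits (mg/L) and unit weights for three parameters.
STANDARDS = {"TDS": 500.0, "nitrate": 45.0, "chloride": 250.0}
WEIGHTS = {"TDS": 1.0, "nitrate": 1.0, "chloride": 1.0}
```

A sample sitting exactly at the permissible limits scores 100; scores above a chosen cutoff (often 100) flag the sample as unfit for drinking, which is how per-sample percentages like those above are tallied.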