155 results for "Pardo, Raúl"
Free open source communities sustainability: Does it make a difference in software quality?
Context: Free and Open Source Software (FOSS) communities' ability to stay viable and productive over time is pivotal for society, as they maintain the building blocks that digital infrastructure, products, and services depend on. Sustainability may, however, be characterized from multiple aspects, and less is known about how these aspects interplay and impact community outputs, and software quality specifically. Objective: This study therefore aims to empirically explore how the different aspects of FOSS sustainability impact software quality. Method: 16 sustainability metrics across four categories were sampled and applied to a set of 217 OSS projects sourced from the Apache Software Foundation Incubator program. The impact of a decline in the sustainability metrics was analyzed against eight software quality metrics using Bayesian data analysis, which incorporates probability distributions to represent the regression coefficients and intercepts. Results: Findings suggest that the selected sustainability metrics do not significantly affect defect density or code coverage. However, a positive impact of community age was observed on specific code quality metrics, such as risk complexity, number of very large files, and code duplication percentage. Interestingly, findings show that even when communities are experiencing sustainability, certain code quality metrics are negatively impacted. Conclusion: Findings imply that code quality practices are not consistently linked to sustainability, and defect management and prevention may be prioritized over the former. Results suggest that growth, resulting in a more complex and large codebase, combined with a probable lack of understanding of code quality standards, may explain the degradation in certain aspects of code quality.
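The Bayesian regression described in the Method section can be illustrated with a toy sketch: a Metropolis-Hastings sampler over a single slope and intercept with Normal priors, so the regression coefficients come out as posterior distributions rather than point estimates. The data, prior widths, and noise level below are invented for illustration and are not taken from the study:

```python
import math
import random

random.seed(0)

# Invented toy data: e.g. community age (x) vs. a code quality metric (y).
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9]

def log_posterior(slope, intercept, sigma=0.5):
    # Weak Normal(0, 10) priors on both coefficients...
    lp = -(slope ** 2 + intercept ** 2) / (2 * 10 ** 2)
    # ...plus a Gaussian likelihood around the regression line.
    for x, y in zip(xs, ys):
        resid = y - (slope * x + intercept)
        lp -= resid ** 2 / (2 * sigma ** 2)
    return lp

def metropolis(steps=20000, step_size=0.1):
    slope, intercept = 0.0, 0.0
    samples = []
    for _ in range(steps):
        cand = (slope + random.gauss(0, step_size),
                intercept + random.gauss(0, step_size))
        delta = log_posterior(*cand) - log_posterior(slope, intercept)
        # Accept uphill moves always, downhill moves with prob exp(delta).
        if random.random() < math.exp(min(0.0, delta)):
            slope, intercept = cand
        samples.append(slope)
    return samples

posterior_slopes = metropolis()[5000:]   # discard burn-in
mean_slope = sum(posterior_slopes) / len(posterior_slopes)
```

The resulting `posterior_slopes` approximate the posterior over the slope; its spread (not just its mean) is what a Bayesian analysis like the study's reports.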
Genetic and pharmacological inhibition of calcineurin corrects the BDNF transport defect in Huntington's disease
Background: Huntington's disease (HD) is an inherited neurodegenerative disease caused by an abnormal expansion of glutamine repeats in the huntingtin protein. There is currently no treatment to prevent the neurodegeneration caused by this devastating disorder. Huntingtin has been shown to be a positive regulator of vesicular transport, particularly for neurotrophins such as brain-derived neurotrophic factor (BDNF). This function is lost in patients with HD, resulting in a decrease in neurotrophic support and subsequent neuronal death. One promising line of treatment is therefore the restoration of huntingtin function in BDNF transport. Results: The phosphorylation of huntingtin at serine 421 (S421) restores its function in axonal transport. We therefore investigated whether inhibition of calcineurin, the bona fide huntingtin S421 phosphatase, restored the transport defects observed in HD. We found that pharmacological inhibition of calcineurin by FK506 led to sustained phosphorylation of mutant huntingtin at S421. FK506 restored BDNF transport in two complementary models: rat primary neuronal cultures expressing mutant huntingtin and mouse cortical neurons from Hdh Q111/Q111 HD knock-in mice. This effect was the result of specific calcineurin inhibition, as calcineurin silencing restored both anterograde and retrograde transport in neurons from Hdh Q111/Q111 mice. We also observed a specific increase in calcineurin activity in the brain of Hdh Q111/Q111 mice, potentially accounting for the selective loss of huntingtin phosphorylation and contributing to neuronal cell death in HD. Conclusion: Our results validate calcineurin as a target for the treatment of HD and provide the first demonstration of the restoration of huntingtin function by an FDA-approved compound.
TR1801‐ADC: a highly potent cMet antibody–drug conjugate with high activity in patient‐derived xenograft models of solid tumors
cMet is a well‐characterized oncogene that is the target of many drugs including small molecule and biologic pathway inhibitors, and, more recently, antibody–drug conjugates (ADCs). However, the clinical benefit from cMet‐targeted therapy has been limited. We developed a novel cMet‐targeted ‘third‐generation’ ADC, TR1801‐ADC, that was optimized at different levels including specificity, stability, toxin–linker, conjugation site, and in vivo efficacy. Our nonagonistic cMet antibody was site‐specifically conjugated to the pyrrolobenzodiazepine (PBD) toxin–linker tesirine and has picomolar activity in cancer cell lines derived from different solid tumors including lung, colorectal, and gastric cancers. The potency of our cMet ADC is independent of MET gene copy number, and its antitumor activity was high not only in high cMet‐expressing cell lines but also in medium‐to‐low cMet cell lines (40 000–90 000 cMet/cell) in which a cMet ADC with tubulin inhibitor payload was considerably less potent. In vivo xenografts with low–medium cMet expression were also very responsive to TR1801‐ADC at a single dose, while a cMet ADC using a tubulin inhibitor showed a substantially reduced efficacy. Furthermore, TR1801‐ADC had excellent efficacy with significant antitumor activity in 90% of tested patient‐derived xenograft models of gastric, colorectal, and head and neck cancers: 7 of 10 gastric models, 4 of 10 colorectal cancer models, and 3 of 10 head and neck cancer models showed complete tumor regression after a single‐dose administration. Altogether, TR1801‐ADC is a new generation cMet ADC with best‐in‐class preclinical efficacy and good tolerability in rats. TR1801‐ADC is an antibody–drug conjugate with highly potent PBD payload–linker conjugated site‐specifically to our cMet antibody. The ADC is stable in rat circulation with acceptable tolerability. TR1801‐ADC is highly active in MET‐amplified and cMet‐overexpressing cancer cell lines. 
Patient‐derived xenograft solid tumor models of the stomach, colorectum, and head and neck were highly sensitive to treatment with our anti‐cMet‐ADC.
pARIS-htt: an optimised expression platform to study huntingtin reveals functional domains required for vesicular trafficking
Background: Huntingtin (htt) is a multi-domain protein of 350 kDa that is mutated in Huntington's disease (HD) but whose function is yet to be fully understood. This absence of information is due in part to the difficulty of manipulating large DNA fragments by using conventional molecular cloning techniques. Consequently, few studies have addressed the cellular function(s) of full-length htt and its dysfunction(s) associated with the disease. Results: We describe a flexible synthetic vector encoding full-length htt called pARIS-htt (Adaptable, RNAi Insensitive & Synthetic). It includes synthetic cDNA coding for full-length human htt modified so that: 1) it is improved for codon usage, 2) it is insensitive to four different siRNAs allowing gene replacement studies, 3) it contains unique restriction sites (URSs) dispersed throughout the entire sequence without modifying the translated amino acid sequence, 4) it contains multiple cloning sites at the N- and C-terminal ends, and 5) it is Gateway compatible. These modifications facilitate mutagenesis, tagging and cloning into diverse expression plasmids. Htt regulates dynein/dynactin-dependent trafficking of vesicles, such as brain-derived neurotrophic factor (BDNF)-containing vesicles, and of organelles, including reforming and maintenance of the Golgi near the cell centre. We used tests of these trafficking functions to validate various pARIS-htt constructs. We demonstrated, after silencing of endogenous htt, that full-length htt expressed from pARIS-htt rescues Golgi apparatus reformation following reversible microtubule disruption. A mutant form of htt that contains a 100Q expansion and a htt form devoid of either HAP1 or dynein interaction domains are both unable to rescue loss of endogenous htt. These mutants also have an impaired capacity to promote BDNF vesicular trafficking in neuronal cells.
Conclusion: We report the validation of a synthetic gene encoding full-length htt protein that will facilitate analyses of its structure/function. This may help provide relevant information about the cellular dysfunctions operating during the disease. As proof of principle, we show that either polyQ expansion or deletion of key interacting domains within full-length htt protein impairs its function in transport, indicating that the HD mutation induces defects in intrinsic properties of the protein and further demonstrating the importance of studying htt in its full-length context.
Relationship between Students’ Perception of a Rubric for Oral Presentations and Their Academic Characteristics
The use of rubrics in the evaluation of oral presentations has been associated with several benefits for students. However, it is unknown whether students with better academic marks and greater self-regulation find the use of rubrics more useful or not. This paper aims to assess the relationship between how students perceive the use of a rubric and their academic characteristics, and to analyze the congruence between the professor’s and students’ evaluations when using the rubric. Eighty-five students studying for a Degree in Sport Sciences participated in this study. A rubric for oral presentations was used to assess the students’ performance. The students then filled out a questionnaire about their perception of the validity of the rubric, an assessment of academic performance, and a self-regulation questionnaire. Inverse correlations were observed between the academic record and two items of the rubric validity perception (r < −0.24). Direct correlations were also found between learning oriented self-regulation and four items of the rubric validity perception (r > 0.22). There was very good congruence between the professor’s and students’ marks when using the rubric (ICC = 0.78). The results suggest that the rubric used is a good instrument to ensure fair and consistent evaluations, despite possible differences between evaluators.
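The agreement statistic in this abstract (ICC = 0.78 between professor and student marks) can be reproduced in miniature. The sketch below uses the one-way random-effects ICC(1,1) for brevity; the study may well have used a different ICC variant, and the mark pairs are made up for illustration:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for k raters over n targets."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    # Between-target and within-target mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented (professor, student) mark pairs for eight presentations.
marks = [(8, 9), (6, 6), (7, 8), (9, 9), (5, 6), (8, 8), (4, 5), (7, 7)]
agreement = icc_oneway(marks)
```

When within-pair disagreements are small relative to the spread across presentations, as here, the ICC approaches 1, which is what "very good congruence" means in the abstract.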
Privacy Policies for Social Networks: A Formal Approach
Online Social Networks (OSNs) are ubiquitous, with more than 70% of Internet users being part of them. The pervasive nature of OSNs brings many threats and challenges, privacy being one of them. Very often the available privacy protection mechanisms in OSNs do not meet users' requirements. As a result, users are unable to define privacy settings (also known as privacy policies) that meet their expectations. Furthermore, current privacy settings are difficult to understand, which leads users to share their personal information with more people than they actually intend to. In this thesis we explore novel techniques to protect users' privacy in OSNs. On the one hand, we define a formal framework to write privacy policies in OSNs and to reason about them. We use this framework to define and study current and new types of privacy policies that are not present in today's OSNs. In particular, we look into: i) protection against implicit disclosure of information, e.g., a user sharing someone else's information without her consent; and ii) evolving privacy policies, i.e., privacy policies that change over time, e.g., "my supervisor cannot see my location during the weekend". These formalisms also provide a direct enforcement mechanism for this new type of privacy policies. We have developed a proof-of-concept implementation of the enforcement to show the practicality of our technique. We formally prove that this enforcement is correct, i.e., no privacy violations may occur. On the other hand, we look into the problem of embedding privacy policies into the data. Having policies and data as separate entities is prone to consistency issues: data may be accessed by individuals who should not have access to it because the access policy is outdated or simply missing. This issue is particularly important in OSNs as they normally rely on geographically distributed databases or have a distributed architecture. Concretely, we use Attribute-Based Encryption (ABE) to "attach" privacy policies to pictures.
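The evolving policy quoted in this abstract ("my supervisor cannot see my location during the weekend") amounts to an access check indexed by time. A minimal sketch, where the relation store, user names, and function names are illustrative rather than the thesis's formal syntax:

```python
from datetime import datetime

# Hypothetical relation store: (owner, label) -> set of related users.
relations = {("alice", "supervisor"): {"bob"}}

def can_view_location(viewer, owner, when):
    """Evaluate the evolving policy at time `when`."""
    is_weekend = when.weekday() >= 5     # Saturday = 5, Sunday = 6
    is_supervisor = viewer in relations.get((owner, "supervisor"), set())
    # The restriction applies only to supervisors, and only on weekends.
    return not (is_supervisor and is_weekend)

saturday = datetime(2024, 6, 1)          # a Saturday
monday = datetime(2024, 6, 3)            # a Monday
```

The same request by the same viewer yields different answers on different days, which is exactly what makes such policies "evolving" and why their enforcement needs time as an input.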
Symbolic Quantitative Information Flow for Probabilistic Programs
It is of utmost importance to ensure that modern data-intensive systems do not leak sensitive information. In this paper, the authors, who met thanks to Joost-Pieter Katoen, discuss symbolic methods to compute information-theoretic measures of leakage: entropy, conditional entropy, Kullback-Leibler divergence, and mutual information. We build on two semantic frameworks for symbolic execution of probabilistic programs. For discrete programs, we use the weakest pre-expectation calculus to compute exact symbolic expressions for the leakage measures. Using Second Order Gaussian Approximation (SOGA), we handle programs that combine discrete and continuous distributions; in this setting, the exact semantics is approximated with Gaussian mixtures, so we compute bounds for the measures instead. We demonstrate the use of our methods on two widely used mechanisms to ensure differential privacy: randomized response and the Gaussian mechanism.
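The entropy-based leakage measures can be made concrete on randomized response, one of the two mechanisms the paper analyzes. The sketch below computes the mutual information I(X; Y) between a binary secret X and the reported answer Y numerically for fixed parameters; the paper's contribution is computing such quantities symbolically, which this illustration does not attempt:

```python
import math

def h(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def randomized_response_leakage(q=0.75, prior=0.5):
    # q is the probability of reporting the true answer; q = 0.75
    # corresponds to the classic two-fair-coin-flip protocol.
    p_yes = prior * q + (1 - prior) * (1 - q)
    h_y = h([p_yes, 1 - p_yes])                        # H(Y)
    h_y_given_x = prior * h([q, 1 - q]) + (1 - prior) * h([1 - q, q])
    return h_y - h_y_given_x                           # I(X;Y) = H(Y) - H(Y|X)
```

For the classic protocol the leakage is about 0.19 bits per answer; q = 0.5 leaks nothing and q = 1 leaks the full bit, showing how the coin bias trades privacy against utility.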
Model-Checking the Implementation of Consent
Privacy policies define the terms under which personal data may be collected and processed by data controllers. The General Data Protection Regulation (GDPR) imposes requirements on these policies that are often difficult to implement. Difficulties arise in particular due to the heterogeneity of existing systems (e.g., the Internet of Things (IoT), web technology, etc.). In this paper, we propose a method to refine high-level GDPR privacy requirements for informed consent into low-level computational models. The method is aimed at software developers implementing systems that require consent management. We mechanize our models in TLA+, a specification language used by software engineers in companies such as Microsoft and Amazon, and use model checking to prove that the low-level computational models implement the high-level privacy requirements. We demonstrate our method in two real-world scenarios: an implementation of cookie banners and an IoT system communicating via Bluetooth Low Energy.
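The kind of property being checked here can be sketched with a toy reachability check: a consent requirement like "data is processed only after consent is given" is an invariant over all reachable states. The real models are written in TLA+ and checked with its tools; this Python breadth-first search, with an invented two-bit state space, is only an illustration of the idea:

```python
from collections import deque

# State: (consent_given, data_processed). Transitions are illustrative.
INIT = (False, False)

def step(state):
    consent, processed = state
    succ = {(True, processed),    # the user may grant consent
            (False, False)}       # the user may withdraw; processing stops
    if consent:
        succ.add((True, True))    # the controller may process data
    return succ

def check_invariant(init, step_fn, invariant):
    """Breadth-first reachability: every reachable state must satisfy
    the invariant."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False
        for t in step_fn(s) - seen:
            seen.add(t)
            queue.append(t)
    return True

lawful = lambda s: s[0] or not s[1]      # processed implies consent
ok = check_invariant(INIT, step, lawful)

# A buggy controller that processes data regardless of consent:
buggy_step = lambda s: step(s) | {(s[0], True)}
bad = check_invariant(INIT, buggy_step, lawful)
```

The compliant model satisfies the invariant in every reachable state, while the buggy variant is caught by the same check, which is the role model checking plays in the paper's refinement method.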
Model Checking Social Network Models
A social network service is a platform to build social relations among people sharing similar interests and activities. The underlying structure of a social network service is the social graph, where nodes represent users and arcs represent the users' social links and other kinds of connections. One important concern in social networks is privacy: what others are (not) allowed to know about us. The "logic of knowledge" (epistemic logic) is thus a good formalism to define, and reason about, privacy policies. In this paper we consider the problem of verifying knowledge properties over social network models (SNMs), that is, social graphs enriched with knowledge bases containing the information that the users know. More concretely, our contributions are: i) we prove that the model checking problem for epistemic properties over SNMs is decidable; ii) we prove that a number of properties of knowledge that are sound w.r.t. Kripke models are also sound w.r.t. SNMs; iii) we give a satisfaction-preserving encoding of SNMs into canonical Kripke models, and we also characterise which Kripke models may be translated into SNMs; iv) we show that, for SNMs, the model checking problem is cheaper than the one based on standard Kripke models. Finally, we have developed a proof-of-concept implementation of the model-checking algorithm for SNMs.
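The core idea of a social network model, a social graph whose nodes carry knowledge bases, supports very direct evaluation of simple epistemic formulas. The sketch below is an illustration under invented names and a flat fact representation, not the paper's formal definition or its model-checking algorithm:

```python
# Hypothetical knowledge bases of atomic facts, one per user.
knowledge = {
    "alice": {"loc_alice"},   # Alice knows her own location
    "bob": {"loc_alice"},     # Bob has learned it too
    "carol": set(),
}
# Labelled arcs of the social graph.
friends = {("alice", "bob"), ("bob", "alice")}

def K(agent, fact):
    """K_agent(fact): the agent's knowledge base contains the fact."""
    return fact in knowledge[agent]

def everyone_in(group, fact):
    """E_G(fact): every agent in the group knows the fact."""
    return all(K(a, fact) for a in group)

def policy_violated(owner, fact, audience):
    # The policy "only `audience` may know `fact`" is violated when some
    # agent other than the owner, outside the audience, knows the fact.
    return any(K(a, fact) for a in knowledge
               if a != owner and a not in audience)

# Audience derived from the graph: Alice's friends may see her location.
alice_friends = {b for (a, b) in friends if a == "alice"}
```

Checking formulas directly against the graph and its knowledge bases, rather than against an exponentially larger Kripke structure, is the intuition behind the paper's claim that model checking over SNMs is cheaper.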