30 results for "Groves, Lindsay"
Verifying Whiley Programs with Boogie
The quest to develop increasingly sophisticated verification systems continues unabated. Tools such as Dafny, Spec#, ESC/Java, SPARK Ada and Whiley attempt to seamlessly integrate specification and verification into a programming language, in a similar way to type checking. A common integration approach is to generate verification conditions that are handed off to an automated theorem prover. This provides a nice separation of concerns and allows different theorem provers to be used interchangeably. However, generating verification conditions is still a difficult undertaking and the use of more “high-level” intermediate verification languages has become commonplace. In particular, Boogie provides a widely used and understood intermediate verification language. A common difficulty is the potential for an impedance mismatch between the source language and the intermediate verification language. In this paper, we explore the use of Boogie as an intermediate verification language for verifying programs in Whiley. This is noteworthy because the Whiley language has (amongst other things) a rich type system with considerable potential for an impedance mismatch. We provide a comprehensive account of translating Whiley to Boogie which demonstrates that it is possible to model most aspects of the Whiley language. Key challenges posed by the Whiley language included: the encoding of Whiley’s expressive type system and support for flow typing and generics; the implicit assumption that expressions in specifications are well defined; the ability to invoke methods from within expressions; the ability to return multiple values from a function or method; the presence of unrestricted lambda functions; and the limited syntax for framing. We demonstrate that the resulting verification tool can verify significantly more programs than the native Whiley verifier which was custom-built for Whiley verification. 
Furthermore, our work provides evidence that Boogie is (for the most part) sufficiently general to act as an intermediate language for a wide range of source languages.
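The verification-condition pipeline this abstract describes (source program in, logical obligations out, handed to a theorem prover) can be illustrated with a minimal weakest-precondition calculator over a toy command language. This is a sketch only; the command language and all names are illustrative and are not the Whiley or Boogie toolchain:

```python
# Minimal sketch of verification-condition generation via weakest
# preconditions, the style of reasoning an intermediate verifier
# such as Boogie mechanises. Illustrative only, not Whiley/Boogie.
from dataclasses import dataclass

@dataclass
class Assign:      # x := e
    var: str
    expr: str

@dataclass
class Seq:         # c1; c2
    first: object
    second: object

@dataclass
class Assert:      # assert p
    pred: str

def wp(cmd, post: str) -> str:
    """Weakest precondition of cmd w.r.t. postcondition `post` (as a string)."""
    if isinstance(cmd, Assign):
        # wp(x := e, Q) = Q[e/x]  (naive textual substitution, fine for the sketch)
        return post.replace(cmd.var, f"({cmd.expr})")
    if isinstance(cmd, Seq):
        return wp(cmd.first, wp(cmd.second, post))
    if isinstance(cmd, Assert):
        return f"({cmd.pred}) and ({post})"
    raise TypeError(cmd)

# VC for {true} x := x + 1; assert x > 0 {x > 1}:
prog = Seq(Assign("x", "x + 1"), Assert("x > 0"))
vc = wp(prog, "x > 1")
print(vc)  # ((x + 1) > 0) and ((x + 1) > 1)
```

The resulting formula is what would be discharged by an automated prover; the impedance-mismatch problem the abstract raises arises when source-language features (flow typing, multiple returns, lambdas) have no such direct encoding.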
Towards formally specifying and verifying transactional memory
Over the last decade, great progress has been made in developing practical transactional memory (TM) implementations, but relatively little attention has been paid to precisely specifying what it means for them to be correct, or formally proving that they are. In this paper, we present TMS1 (Transactional Memory Specification 1), a precise specification of correct behaviour of a TM runtime library. TMS1 targets TM runtimes used to implement transactional features in an unmanaged programming language such as C or C++. In such contexts, even transactions that ultimately abort must observe consistent states of memory; otherwise, unrecoverable errors such as divide-by-zero may occur before a transaction aborts, even in a correct program in which the error would not be possible if transactions were executed atomically. We specify TMS1 precisely using an I/O automaton (IOA). This approach enables us to also model TM implementations using IOAs and to construct fully formal and machine-checked correctness proofs for them using well established proof techniques and tools. We outline key requirements for a TM system. To avoid precluding any implementation that satisfies these requirements, we specify TMS1 to be as general as we can, consistent with these requirements. The cost of such generality is that the condition does not map closely to intuition about common TM implementation techniques, and thus it is difficult to prove that such implementations satisfy the condition. To address this concern, we present TMS2, a more restrictive condition that more closely reflects intuition about common TM implementation techniques. We present a simulation proof that TMS2 implements TMS1, thus showing that to prove that an implementation satisfies TMS1, it suffices to prove that it satisfies TMS2. We have formalised and verified this proof using the PVS specification and verification system.
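The key requirement the abstract highlights, that even transactions which ultimately abort must observe consistent memory, can be made concrete with a toy snapshot-based TM runtime. This is an illustrative model only, not TMS1 or TMS2, and all names are invented:

```python
# Toy transactional-memory runtime illustrating the "no inconsistent
# reads" requirement: every transaction, even one that later aborts,
# reads from a single consistent snapshot. Illustrative, not TMS1/TMS2.
import copy

class ToyTM:
    def __init__(self):
        self.memory = {}        # committed state
        self.version = 0        # bumped on every commit

    def begin(self):
        # Snapshot the committed state; a real TM validates rather than copies.
        return {"snap": copy.deepcopy(self.memory),
                "start": self.version, "writes": {}}

    def read(self, tx, addr):
        if addr in tx["writes"]:            # read-your-own-writes
            return tx["writes"][addr]
        return tx["snap"].get(addr, 0)

    def write(self, tx, addr, val):
        tx["writes"][addr] = val

    def commit(self, tx):
        if tx["start"] != self.version:     # someone committed since we began
            return False                    # abort -- but no garbage was read
        self.memory.update(tx["writes"])
        self.version += 1
        return True

tm = ToyTM()
t1 = tm.begin(); tm.write(t1, "x", 1); tm.write(t1, "y", 1)
assert tm.commit(t1)

# A later transaction always sees x == y (the invariant), even if it aborts:
t2 = tm.begin()
t3 = tm.begin(); tm.write(t3, "x", 2); tm.write(t3, "y", 2); tm.commit(t3)
assert tm.read(t2, "x") == tm.read(t2, "y") == 1  # consistent snapshot
assert tm.commit(t2) is False                     # t2 aborts: version moved
```

Because t2 never observes the half-written state, errors like divide-by-zero cannot occur inside it before the abort, which is exactly the property the abstract says an unmanaged-language TM must guarantee.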
Trace-based derivation of a scalable lock-free stack algorithm
We show how a sophisticated, lock-free concurrent stack implementation can be derived from an abstract specification in a series of verifiable steps. The algorithm is based on the scalable stack algorithm of Hendler et al. (Proceedings of the sixteenth annual ACM symposium on parallel algorithms, 27–30 June 2004, Barcelona, Spain, pp 206–215), which allows push and pop operations to be paired off and eliminated without affecting the central stack, thus reducing contention on the stack, and allowing multiple pairs of push and pop operations to be performed in parallel. Our algorithm uses a simpler data structure than Hendler, Shavit and Yerushalmi’s, and avoids an ABA problem. We first derive a simple lock-free stack algorithm using a linked-list implementation, and discuss issues related to memory management and the ABA problem. We then add an abstract model of the elimination process, from which we derive our elimination algorithm. This allows the basic algorithmic ideas to be separated from implementation details, and provides a basis for explaining and comparing different variants of the algorithm. We show that the elimination stack algorithm is linearisable by showing that any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack. Each step in the derivation is either a data refinement which preserves the level of atomicity, an operational refinement which may alter the level of atomicity, or a refactoring step which alters the structure of the system resulting from the preceding derivation. We verify our refinements using an extension of Lipton’s reduction method, allowing concurrent and non-concurrent aspects to be considered separately.
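The elimination idea at the heart of this derivation, a push and a pop pairing off and exchanging a value without touching the central stack, can be sketched sequentially. This sketch uses a single elimination slot and ordinary Python lists; it illustrates the pairing only, not the lock-free CAS-based algorithm or its memory management:

```python
# Sequential sketch of elimination: a push advertises itself in a slot,
# and a pop that finds the slot occupied takes the value directly,
# leaving the central stack untouched. Illustrative only -- no CAS,
# no real concurrency, names are invented.

class EliminationStack:
    def __init__(self):
        self.stack = []        # central stack (a linked list in the paper)
        self.slot = None       # one elimination slot holding a waiting push

    def push(self, v):
        if self.slot is None:
            self.slot = v              # advertise; takes effect when eliminated
        else:
            self.stack.append(v)       # slot busy: fall back to central stack

    def pop(self):
        if self.slot is not None:
            v, self.slot = self.slot, None   # eliminate against waiting push
            return v                         # pair linearises adjacently
        if self.stack:
            return self.stack.pop()
        return None                          # empty

s = EliminationStack()
s.push(1)             # waits in the elimination slot
s.push(2)             # slot busy -> central stack
assert s.pop() == 1   # paired off with the waiting push; stack untouched
assert s.pop() == 2   # served from the central stack
assert s.pop() is None
```

Eliminated pairs linearise as an adjacent push-then-pop, which is why the central stack never sees them and contention on it is reduced.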
Towards linking correctness conditions for concurrent objects and contextual trace refinement
Correctness conditions for concurrent objects describe how atomicity of an abstract sequential object may be decomposed. Many different concurrent objects and proof methods for them have been developed. However, arguments about correctness are conducted with respect to an object in isolation. This is in contrast to real-world practice, where concurrent objects are often implemented as part of a programming language library (e.g., java.util.concurrent) and are instantiated within a client program. A natural question to ask, then, is: How does a correctness condition for a concurrent object ensure correctness of a client program that uses the concurrent object? This paper presents the main issues that surround this question and provides some answers by linking different correctness conditions with a form of trace refinement.
Contextual trace refinement for concurrent objects: Safety and progress
Correctness of concurrent objects is defined in terms of safety properties such as linearizability, sequential consistency, and quiescent consistency, and progress properties such as wait-, lock-, and obstruction-freedom. These properties, however, only refer to the behaviours of the object in isolation, which does not tell us what guarantees these correctness conditions on concurrent objects provide to their client programs. This paper investigates the links between safety and progress properties of concurrent objects and a form of trace refinement for client programs, called contextual trace refinement. In particular, we show that linearizability together with a minimal notion of progress are sufficient properties of concurrent objects to ensure contextual trace refinement, but sequential consistency and quiescent consistency are both too weak. Our reasoning is carried out in the action systems framework with procedure calls, which we extend to cope with non-atomic operations.
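The safety side of this story, linearizability, can be made concrete with a brute-force check on a tiny complete history: operations carry real-time intervals, and we search for a sequential order that respects real time and satisfies the object's sequential specification (a stack here). This is an illustrative, exponential-time checker, not a practical tool or the paper's proof method:

```python
# Brute-force linearizability check on a small complete history.
# An operation is a dict with op, val, and real-time interval [inv, ret].
from itertools import permutations

def precedes(a, b):
    return a["ret"] < b["inv"]          # a finished before b started

def legal_stack(seq):
    st = []
    for op in seq:
        if op["op"] == "push":
            st.append(op["val"])
        else:                            # pop
            if not st or st.pop() != op["val"]:
                return False
    return True

def linearizable(history):
    for order in permutations(history):
        respects_time = all(not precedes(b, a)
                            for i, a in enumerate(order)
                            for b in order[i + 1:])
        if respects_time and legal_stack(order):
            return True
    return False

# push(1) and push(2) overlap, so a pop returning 1 is linearizable:
h = [{"op": "push", "val": 1, "inv": 0, "ret": 3},
     {"op": "push", "val": 2, "inv": 1, "ret": 4},
     {"op": "pop",  "val": 1, "inv": 5, "ret": 6}]
assert linearizable(h)

# But a pop returning a value that was never pushed is not:
bad = [{"op": "push", "val": 1, "inv": 0, "ret": 1},
       {"op": "pop",  "val": 9, "inv": 2, "ret": 3}]
assert not linearizable(bad)
```

The abstract's point is that a check of this kind is stated for the object in isolation; contextual trace refinement asks what such a history-level condition buys the client program that issues the operations.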
Collapsing Threads Safely with Soft Invariants
Canonical abstraction is a static analysis technique that represents states as 3-valued logical structures, and produces finite abstract systems. Despite providing a finite bound, these abstractions may still suffer from the state explosion problem. Notably, for concurrent programs with arbitrary interleaving, if threads in a state are abstracted based on their location, then the number of locations will be a combinatorial factor in the size of the statespace. We present an approach using canonical abstraction that avoids this state explosion by "collapsing" all of the threads in a state into a single abstract representative. Properties of threads that would be lost by the abstraction, but are needed for verification, are retained by defining conditional "soft invariant" instrumentation predicates. This technique is used to adapt previous models for verifying linearizability of nonblocking concurrent data structure algorithms, resulting in exponentially smaller statespaces.
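The collapsing step can be sketched with 3-valued summaries: any number of concrete threads map to one abstract representative, where a property shared by all threads stays definite and a property that differs becomes the indefinite value 1/2. The predicates and names below are invented for illustration and are not the paper's actual model:

```python
# Sketch of collapsing threads under a 3-valued abstraction: all threads
# in a state become one summary, with truth values True / False / "1/2"
# (indefinite). Properties that hold of every thread -- the invariants
# worth retaining -- survive the collapse. Illustrative names only.

def abstract_state(threads):
    """threads: list of dicts, e.g. {"loc": "L3", "holds_lock": False}.
    Collapse to one summary per property; disagreement yields "1/2"."""
    summary = {}
    for t in threads:
        for prop, val in t.items():
            if prop not in summary:
                summary[prop] = val
            elif summary[prop] != val:
                summary[prop] = "1/2"   # differs across threads: indefinite
    return summary

# Ten threads at three different locations collapse to one representative:
concrete = [{"loc": f"L{i % 3}", "holds_lock": False} for i in range(10)]
abs_state = abstract_state(concrete)
assert abs_state["loc"] == "1/2"          # per-thread location is lost...
assert abs_state["holds_lock"] is False   # ...but the shared property survives
```

Information lost to the 1/2 value but needed for verification is what the paper's conditional "soft invariant" instrumentation predicates are designed to recover.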
The High Latitude Ionospheric Response to the Major May 2024 Geomagnetic Storm: A Synoptic View
The high latitude ionospheric evolution of the May 10‐11, 2024, geomagnetic storm is investigated in terms of Total Electron Content and contextualized with Incoherent Scatter Radar and ionosonde observations. Substantial plasma lifting is observed within the initial Storm Enhanced Density plume with ionospheric peak heights increasing by 150–300 km, reaching levels of up to 630 km. Scintillation is observed within the cusp during the initial expansion phase of the storm, spreading across the auroral oval thereafter. Patch transport into the polar cap produces broad regions of scintillation that are rapidly cleared from the region after a strong Interplanetary Magnetic Field reversal at 2230UT. Strong heating and composition changes result in the complete absence of the F2‐layer on the eleventh, suffocating high latitude convection from dense plasma necessary for Tongue of Ionization and patch formation, ultimately resulting in a suppression of polar cap scintillation on the eleventh.
Plain Language Summary
The intense geomagnetic storm of May 2024 caused a plethora of different responses within the Earth's ionosphere. In the early phases of the storm, the auroral oval quickly expands to upper midlatitudes and induces strong variations in Global Navigation Satellite System (GNSS) phase measurements. Concurrently, midlatitude plasma is repeatedly lifted by 100–300 km on timescales of about an hour resulting in enhanced plasma densities. This intensified and lifted plasma is then drawn into the polar cap inducing variations in GNSS amplitude and phase. As the storm evolves, heating drives mixing of the thermosphere and causes an extreme depletion in ionospheric plasma. After 24 hr, despite severe geomagnetic conditions persisting, the depleted plasma environment results in only relatively weak plasma transport into the polar cap and significantly reduced impacts on GNSS.
Key Points
  • Plasma lifting during the storm caused midlatitude displacements of ionospheric peak height by as much as 300 km over the course of 1 hour
  • Sporadic‐E is observed at the sub‐auroral convective boundary edge of the storm‐enhanced density with strong plasma drift shears present
  • Severe depletion of electron density at mid and high latitudes significantly reduced the impact of subsequent geomagnetic activity on GNSS
Characterisation of morphological differences in well-differentiated nasal epithelial cell cultures from preterm and term infants at birth and one-year
Innate immune responses of airway epithelium are important defences against respiratory pathogens and allergens. Newborn infants are at greater risk of severe respiratory infections compared to older infants, while premature infants are at greater risk than full term infants. However, very little is known regarding human neonatal airway epithelium immune responses and whether age-related morphological and/or innate immune changes contribute to the development of airway disease. We collected nasal epithelial cells from 41 newborn infants (23 term, 18 preterm) within 5 days of birth. Repeat sampling was achieved for 24 infants (13 term, 11 preterm) at a median age of 12.5 months. Morphologically- and physiologically-authentic well-differentiated primary paediatric nasal epithelial cell (WD-PNEC) cultures were generated and characterised using light microscopy and immunofluorescence. WD-PNEC cultures were established for 15/23 (65%) term and 13/18 (72%) preterm samples at birth, and 9/13 (69%) term and 8/11 (73%) preterm samples at one-year. Newborn and infant WD-PNEC cultures demonstrated extensive cilia coverage, mucous production and tight junction integrity. Newborn WD-PNECs took significantly longer to reach full differentiation and were noted to have much greater proportions of goblet cells compared to one-year repeat WD-PNECs. No differences were evident in ciliated/goblet cell proportions between term- and preterm-derived WD-PNECs at birth or one-year old. We describe the successful generation of newborn-derived WD-PNEC cultures and their revival from frozen. We also compared the characteristics of WD-PNECs derived from infants born at term with those born prematurely at birth and at one-year-old. The development of WD-PNEC cultures from newborn infants provides a powerful and exciting opportunity to study the development of airway epithelium morphology, physiology, and innate immune responses to environmental or infectious insults from birth.