1,086 results for "Milne, Peter"
A REFINEMENT OF THE CRAIG–LYNDON INTERPOLATION THEOREM FOR CLASSICAL FIRST-ORDER LOGIC WITH IDENTITY
We refine the interpolation property of classical first-order logic (without identity and without function symbols), showing that if Γ ⊬, ⊬ Δ, and Γ ⊢ Δ, then there is an interpolant 𝜒, constructed using only non-logical vocabulary common to members of Γ and members of Δ, such that (i) Γ entails 𝜒 in the first-order version of Kleene's strong three-valued logic, and (ii) 𝜒 entails Δ in the first-order version of Priest's Logic of Paradox. The proof proceeds via a careful analysis of derivations employing semantic tableaux. Lyndon's strengthening of the interpolation property falls out of an observation regarding such derivations and the steps involved in the construction of interpolants. Through an analysis of tableaux rules for identity, the proof is then extended to classical first-order logic with identity (but without function symbols).
A NON-CLASSICAL REFINEMENT OF THE INTERPOLATION PROPERTY FOR CLASSICAL PROPOSITIONAL LOGIC
We refine the interpolation property of the {∧, ∨, ¬}-fragment of classical propositional logic, showing that if ⊭ ¬𝜙, ⊭ 𝜓 and 𝜙 ⊨ 𝜓 then there is an interpolant 𝜒, constructed using at most atomic formulas occurring in both 𝜙 and 𝜓 and negation, conjunction and disjunction, such that (i) 𝜙 entails 𝜒 in Kleene's strong three-valued logic and (ii) 𝜒 entails 𝜓 in Priest's Logic of Paradox.
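In the propositional case the two entailment claims can be checked by brute force over all three-valued assignments. The sketch below does this for a hypothetical instance (the formulas 𝜙 = p ∧ q, 𝜓 = p ∨ r and interpolant 𝜒 = p are illustrative choices, not examples from the paper); K3 and LP share the strong Kleene truth tables and differ only in which values are designated:

```python
from itertools import product

# Strong Kleene truth tables over values 0 = false, 0.5 = both/neither, 1 = true.
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def entails(premise, conclusion, atoms, designated):
    """Check single-premise entailment: whenever the premise takes a
    designated value, so does the conclusion."""
    for vals in product((0, 0.5, 1), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if premise(v) in designated and conclusion(v) not in designated:
            return False
    return True

K3 = {1}        # Kleene's K3 designates only 'true'
LP = {1, 0.5}   # Priest's LP also designates 'both'

# Hypothetical instance: phi = p & q, psi = p | r, interpolant chi = p.
phi = lambda v: conj(v['p'], v['q'])
chi = lambda v: v['p']
psi = lambda v: disj(v['p'], v['r'])
atoms = ['p', 'q', 'r']

print(entails(phi, chi, atoms, K3))  # True: phi entails chi in K3
print(entails(chi, psi, atoms, LP))  # True: chi entails psi in LP
```

Note that 𝜒 = p uses only the atom common to both formulas, as the interpolation property requires.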
Methodological and conceptual challenges in rare and severe event forecast verification
There are distinctive methodological and conceptual challenges in rare and severe event (RSE) forecast verification, that is, in the assessment of the quality of forecasts of rare but severe natural hazards such as avalanches, landslides or tornadoes. While some of these challenges have been discussed since the inception of the discipline in the 1880s, there is no consensus about how to assess RSE forecasts. This article offers a comprehensive and critical overview of the many different measures used to capture the quality of categorical, binary RSE forecasts – forecasts of occurrence and non-occurrence – and argues that, of the skill scores in the literature, only one is adequate for RSE forecasting. We do so by first focusing on the relationship between accuracy and skill and showing why skill is more important than accuracy in the case of RSE forecast verification. We then motivate three adequacy constraints for a measure of skill in RSE forecasting. We argue that, of the skill scores in the literature, only the Peirce skill score meets all three constraints. We then outline how our theoretical investigation has important practical implications for avalanche forecasting, basing our discussion on a study in avalanche forecast verification using the nearest-neighbour method (Heierli et al., 2004). Lastly, we raise what we call the “scope challenge”; this affects all forms of RSE forecasting and highlights how and why working with the right measure of skill is important not only for local binary RSE forecasts but also for the assessment of different diagnostic tests widely used in avalanche risk management and related operations, including the design of methods to assess the quality of regional multi-categorical avalanche forecasts.
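The Peirce skill score singled out in the abstract has a standard closed form over the 2×2 contingency table of a binary forecast. The sketch below (the counts are made up for illustration) also shows why raw accuracy misleads for rare events: a constant "no event" forecast can be highly accurate yet scores zero skill.

```python
def peirce_skill_score(hits, false_alarms, misses, correct_negatives):
    """Peirce skill score (a.k.a. true skill statistic or Hanssen-Kuipers
    discriminant) for a 2x2 contingency table of binary forecasts:
    PSS = hit rate - false alarm rate."""
    hit_rate = hits / (hits + misses)                                     # a / (a + c)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)  # b / (b + d)
    return hit_rate - false_alarm_rate

# Hypothetical avalanche-day table (made-up counts):
print(peirce_skill_score(hits=20, false_alarms=30, misses=5, correct_negatives=945))

# A forecaster who always says "no avalanche" on 1,000 days with 5 events
# is 99.5% accurate but has zero skill:
print(peirce_skill_score(hits=0, false_alarms=0, misses=5, correct_negatives=995))  # 0.0
```

A perfect forecast scores 1, a constant or random forecast scores 0, and a perfectly perverse forecast scores −1.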
Belief, Degrees of Belief, and Assertion
Starting from John MacFarlane's recent survey of answers to the question 'What is assertion?', I defend an account of assertion that draws on elements of MacFarlane's and Robert Brandom's commitment accounts, Timothy Williamson's knowledge norm account, and my own previous work on the normative status of logic. I defend the knowledge norm from recent attacks. Indicative conditionals, however, pose a problem when read along the lines of Ernest Adams' account, an account supported by much work in the psychology of reasoning. Furthermore, there seems to be no place for degrees of belief in the accounts of belief and assertion given here. Degrees of belief do have a role in decision-making, but, again, there is much evidence that the orthodox theory of subjective utility maximization is not a good description of what we do in decision-making and, arguably, neither is it a good normative guide to how we ought to make decisions.
Probability as a Measure of Information Added
Some propositions add more information to bodies of propositions than do others. We start with intuitive considerations on qualitative comparisons of information added. Central to these are considerations bearing on conjunctions and on negations. We find that we can discern two distinct, incompatible, notions of information added. From the comparative notions we pass to quantitative measurement of information added. In this we borrow heavily from the literature on quantitative representations of qualitative, comparative conditional probability. We look at two ways to obtain a quantitative conception of information added. One, the most direct, mirrors Bernard Koopman's construction of conditional probability: by making a strong structural assumption, it leads to a measure that is, transparently, some function of a function P which is, formally, an assignment of conditional probability (in fact, a Popper function). P reverses the information-added order and mislocates the natural zero of the scale, so some transformation of this scale is needed, but the derivation of P falls out so readily that no particular transformation suggests itself. The Cox–Good–Aczél method assumes the existence of a quantitative measure matching the qualitative relation, and builds on the structural constraints to obtain a measure of information that can be rescaled as, formally, an assignment of conditional probability. A classical result of Cantor's, subsequently strengthened by Debreu, goes some way towards justifying the assumption of the existence of a quantitative scale. What the two approaches give us is a pointer towards a novel interpretation of probability as a rescaling of a measure of information added.
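The remark that P reverses the information-added order can be made concrete on a toy example. The logarithmic rescaling below is an assumption for illustration only, not the paper's axiomatic derivation; −log P is merely one familiar order-reversing transformation of a conditional probability:

```python
import math

# Toy finite space: two fair coin tosses; a proposition is a set of outcomes.
omega = {'HH', 'HT', 'TH', 'TT'}

def cond_prob(event, given):
    """Conditional probability on a finite space with equiprobable outcomes."""
    return len(event & given) / len(given)

def info_added(event, given):
    """ASSUMED illustrative measure: information (in bits) that `event`
    adds to background `given`, taken here as -log2 of cond_prob."""
    return -math.log2(cond_prob(event, given))

K = omega                # trivial background knowledge
E1 = {'HH', 'HT'}        # "first toss heads"
E2 = {'HH'}              # "both tosses heads" -- strictly stronger

# The stronger proposition adds more information but has lower probability,
# so the probability ordering is the reverse of the information-added ordering:
print(cond_prob(E1, K), info_added(E1, K))  # 0.5  1.0
print(cond_prob(E2, K), info_added(E2, K))  # 0.25 2.0
```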
On the Explosion Geometry of Red Supergiant Stars
We present the observed “continuum” levels of polarization as a function of time for four well-observed Type II-Plateau supernovae (SNe II-P; Fig. 1), the class of SNe decisively determined to arise from red supergiant stars (Smartt 2009). All four objects show temporally increasing degrees of polarization through the end of the photospheric phase, with some exhibiting early-time polarization that challenges existing models (e.g., Dessart and Hillier 2011) to reproduce. A fundamental ejecta asymmetry is present in this photometrically diverse sample of Type II SNe, and it probably takes different forms (e.g., ⁵⁶Ni blobs/fingers, large-scale deformation). We acknowledge support from NSF grants AST-1009571 and AST-1210311.
On-Line Unsupervised Outlier Detection Using Finite Mixtures with Discounting Learning Algorithms
Outlier detection is a fundamental issue in data mining, specifically in fraud detection, network intrusion detection, network monitoring, etc. SmartSifter is an outlier detection engine addressing this problem from the viewpoint of statistical learning theory. This paper provides a theoretical basis for SmartSifter and empirically demonstrates its effectiveness. SmartSifter detects outliers in an on-line process through the on-line unsupervised learning of a probabilistic model (using a finite mixture model) of the information source. Each time a datum is input, SmartSifter employs an on-line discounting learning algorithm to learn the probabilistic model. A score is given to the datum based on the learned model, with a high score indicating a high possibility of being a statistical outlier. The novel features of SmartSifter are: (1) it is adaptive to non-stationary sources of data; (2) a score has a clear statistical/information-theoretic meaning; (3) it is computationally inexpensive; and (4) it can handle both categorical and continuous variables. An experimental application to network intrusion detection shows that SmartSifter was able to identify data with high scores that corresponded to attacks, with low computational costs. Further experimental application has identified a number of meaningful rare cases in actual health insurance pathology data from Australia's Health Insurance Commission.
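The discounting idea behind on-line scoring can be sketched in a few lines. This is a deliberately simplified, hypothetical model — a single 1-D Gaussian with exponentially forgotten sufficient statistics — not SmartSifter itself, which uses finite mixtures and also handles categorical variables:

```python
import math

class DiscountingGaussian:
    """Minimal sketch of on-line discounting learning: score each datum by
    its logarithmic loss under the current model, then update the model
    with exponential forgetting so old data are gradually discounted."""

    def __init__(self, discount=0.02):
        self.r = discount      # discounting (forgetting) rate
        self.mean = 0.0
        self.var = 1.0

    def score_and_learn(self, x):
        # Score first: negative log-likelihood under the current Gaussian;
        # a high score means the datum is poorly explained (outlier candidate).
        score = 0.5 * math.log(2 * math.pi * self.var) \
                + (x - self.mean) ** 2 / (2 * self.var)
        # Then discount old statistics and absorb the new datum.
        self.mean = (1 - self.r) * self.mean + self.r * x
        self.var = (1 - self.r) * self.var + self.r * (x - self.mean) ** 2
        return score

model = DiscountingGaussian()
data = [0.1, -0.2, 0.05, 8.0, 0.0]   # 8.0 is an injected outlier
scores = [model.score_and_learn(x) for x in data]
print(scores)   # the injected outlier receives by far the highest score
```

The discounting rate plays the role of the model's adaptivity to non-stationarity: larger values forget the past faster.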
SUBFORMULA AND SEPARATION PROPERTIES IN NATURAL DEDUCTION VIA SMALL KRIPKE MODELS
Various natural deduction formulations of classical, minimal, intuitionist, and intermediate propositional and first-order logics are presented and investigated with respect to satisfaction of the separation and subformula properties. The technique employed is, for the most part, semantic, based on general versions of the Lindenbaum and Lindenbaum–Henkin constructions. Careful attention is paid (i) to which properties of theories result in the presence of which rules of inference, and (ii) to restrictions on the sets of formulas to which the rules may be employed, restrictions determined by the formulas occurring as premises and conclusion of the invalid inference for which a counterexample is to be constructed. We obtain an elegant formulation of classical propositional logic with the subformula property and a singularly inelegant formulation of classical first-order logic with the subformula property, the latter, unfortunately, not a product of the strategy otherwise used throughout the article. Along the way, we arrive at an optimal strengthening of the subformula results for classical first-order logic obtained as consequences of normalization theorems by Dag Prawitz and Gunnar Stålmarck.