Catalogue Search | MBRL
Explore the vast range of titles available.
333 result(s) for "Formalisation. Models"
A Maximum Entropy Model of Phonotactics and Phonotactic Learning
2008
The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. The grammars assess possible words on the basis of the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order for the model to learn nonlocal phenomena such as stress and vowel harmony, it must be augmented with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model in a variety of learning simulations, showing that the learned grammars capture the distributional generalizations of the simulated languages and accurately predict the findings of a phonotactic experiment.
Journal Article
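
To make the scoring scheme in this abstract concrete, here is a minimal sketch (not the authors' implementation) in which a maxent grammar scores a word by the weighted sum of its constraint violations and turns those scores into probabilities; the two constraints and their weights are invented for illustration.

```python
import math

# Toy maxent phonotactics: h(x) = sum_i w_i * v_i(x), P(x) = exp(-h(x)) / Z.
# Constraints and weights below are illustrative assumptions.
WEIGHTS = {"*COMPLEX-ONSET": 2.0, "*CODA": 1.0}
VOWELS = set("aeiou")

def viol_complex_onset(word):
    # one violation if the word starts with two or more consonants (toy rule)
    onset = 0
    for ch in word:
        if ch in VOWELS:
            break
        onset += 1
    return 1 if onset >= 2 else 0

def viol_coda(word):
    # one violation if the word ends in a consonant (toy rule)
    return 0 if word and word[-1] in VOWELS else 1

VIOLATIONS = {"*COMPLEX-ONSET": viol_complex_onset, "*CODA": viol_coda}

def harmony(word):
    return sum(WEIGHTS[c] * v(word) for c, v in VIOLATIONS.items())

words = ["ba", "bat", "bla", "blat"]
Z = sum(math.exp(-harmony(w)) for w in words)
for w in words:
    print(f"{w}: h={harmony(w):.1f}  P={math.exp(-harmony(w)) / Z:.3f}")
```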
Computational and evolutionary aspects of language
by Niyogi, Partha; Komarova, Natalia L.; Nowak, Martin A.
in Biological Evolution; Brain - physiology; Computational neuroscience
2002
Language is our legacy. It is the main evolutionary contribution of humans, and perhaps the most interesting trait that has emerged in the past 500 million years. Understanding how Darwinian evolution gives rise to human language requires the integration of formal language theory, learning theory and evolutionary dynamics. Formal language theory provides a mathematical description of language and grammar. Learning theory formalizes the task of language acquisition: it can be shown that no procedure can learn an unrestricted set of languages. Universal grammar specifies the restricted set of languages learnable by the human brain. Evolutionary dynamics can be formulated to describe the cultural evolution of language and the biological evolution of universal grammar.
Journal Article
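
The evolutionary dynamics this abstract alludes to can be written as a replicator-style system; the toy simulation below follows the general form of the language dynamical equation associated with this line of work, with all payoff and learning-accuracy values chosen purely for illustration.

```python
# x[i] is the share of speakers using grammar G_i, f_i its communicative
# payoff, and Q[i][j] the chance a learner of a G_i speaker acquires G_j:
#   dx_j/dt = sum_i f_i x_i Q[i][j] - phi x_j,  with phi = sum_i f_i x_i
# All parameter values here are assumptions for illustration.

def step(x, A, Q, dt=0.01):
    n = len(x)
    f = [sum(0.5 * (A[i][j] + A[j][i]) * x[j] for j in range(n)) for i in range(n)]
    phi = sum(f[i] * x[i] for i in range(n))
    return [x[j] + dt * (sum(f[i] * x[i] * Q[i][j] for i in range(n)) - phi * x[j])
            for j in range(n)]

a, q = 0.5, 0.9                   # mutual intelligibility, learning accuracy
A = [[1.0, a], [a, 1.0]]          # pairwise communication payoffs
Q = [[q, 1 - q], [1 - q, q]]      # learning (transition) matrix
x = [0.6, 0.4]
for _ in range(5000):
    x = step(x, A, Q)
print([round(v, 3) for v in x])   # the majority grammar gains ground when q is high
```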
Toward a model of grammaticality judgments
by HÄUSSLER, JANA; BADER, MARKUS
in Cognition & reasoning; Computer Modeling and Simulation; Correlation
2010
This paper presents three experiments that investigate the relationship between gradient and binary judgments of grammaticality. In the first two experiments, two different groups of participants judged sentences by the method of magnitude estimation and by the method of speeded grammaticality judgments in a single session. The two experiments involved identical sentence materials but they differed in the order in which the two procedures were applied. The results show a high correlation between the magnitude estimation data and the speeded grammaticality judgments data, both within a session and across the two sessions. The third experiment was a questionnaire study in which participants judged the same sentences as either grammatical or ungrammatical without time pressure. This experiment yielded results quite similar to those of the other two experiments. Thus gradient and binary judgments both provide valuable and reliable sources for linguistic theory when assessed in an experimentally controlled way. We present a model based on Signal Detection Theory which specifies how gradient grammaticality scores are mapped to binary grammaticality judgments. Finally, we compare our experimental results to existing corpus data in order to inquire into the relationship between grammaticality and frequency of usage.
Journal Article
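
The Signal Detection Theory mapping the abstract describes can be sketched in a few lines: a gradient score plus Gaussian noise is compared with a response criterion, so the probability of a "grammatical" response is a normal CDF. The scores, criterion, and noise level below are assumptions, not the authors' fitted values.

```python
import math

# With Gaussian noise of spread sigma around a gradient grammaticality score,
# P("grammatical") is the normal CDF of (score - criterion) / sigma.
def p_grammatical(score, criterion=0.0, sigma=1.0):
    z = (score - criterion) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for score in (-1.5, -0.5, 0.0, 0.5, 1.5):   # hypothetical gradient scores
    print(f"score {score:+.1f} -> P(grammatical) = {p_grammatical(score):.2f}")
```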
Empirical Tests of the Gradual Learning Algorithm
2001
The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998, 2000), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, deal effectively with noisy learning data, and account for gradient well-formedness judgments. The case studies we examine involve Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/.
Journal Article
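
For readers unfamiliar with the Gradual Learning Algorithm, the following is a stripped-down sketch of its error-driven update in stochastic OT, not Boersma's implementation: constraint names, candidate tableaus, plasticity, and noise are all illustrative.

```python
import random

# Each constraint carries a ranking value; at evaluation time Gaussian noise
# is added and candidates are compared under the resulting total order. On an
# error, constraints favoring the wrong winner are demoted a little and those
# favoring the observed datum promoted.
ranking = {"MAX": 100.0, "DEP": 100.0, "*CODA": 100.0}   # hypothetical constraints
PLASTICITY = 0.1
NOISE = 2.0

def eval_order(ranking):
    noisy = {c: v + random.gauss(0, NOISE) for c, v in ranking.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

def optimal(candidates, order):
    # candidates: {form: {constraint: violation count}}; OT evaluation is a
    # lexicographic comparison of violations down the ranked constraints
    return min(candidates, key=lambda f: tuple(candidates[f][c] for c in order))

def gla_update(ranking, candidates, observed):
    predicted = optimal(candidates, eval_order(ranking))
    if predicted == observed:
        return
    for c in ranking:
        diff = candidates[predicted][c] - candidates[observed][c]
        if diff > 0:       # constraint prefers the observed form: promote
            ranking[c] += PLASTICITY
        elif diff < 0:     # constraint prefers the wrong winner: demote
            ranking[c] -= PLASTICITY

# toy tableau: training on coda-ful "pat" gradually drives *CODA below MAX
data = {"pat": {"MAX": 0, "DEP": 0, "*CODA": 1},
        "pa":  {"MAX": 1, "DEP": 0, "*CODA": 0}}
for _ in range(1000):
    gla_update(ranking, data, observed="pat")
print(ranking)
```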
Information theoretic approaches to phonological structure: the case of Finnish vowel harmony
2012
This paper offers a study of vowel harmony in Finnish as an example of how information theoretic concepts can be employed in order to better understand the nature of phonological structure. The probability assigned by a phonological model to a corpus is used as a means to evaluate how good such a model is, and information theoretic methods allow us to determine the extent to which each addition to our grammar results in a better treatment of the data. We explore a natural implementation of autosegmental phonology within an information theoretic perspective, and find that it is empirically inadequate; that is, it performs more poorly than a simple bigram model. We extend the model by means of a Boltzmann distribution, taking into consideration both local, segment-to-segment, relations and distal, vowel-to-vowel, relations, and find a significant improvement. We conclude with some general observations on how we propose to revisit other phonological questions from this perspective.
Journal Article
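
The evaluation metric used in this paper, the probability a model assigns to a corpus, is easy to illustrate with the baseline the authors mention. The sketch below scores a toy corpus in bits under an add-one-smoothed bigram model; the word list is invented, and a better phonological model would simply achieve a lower corpus cost.

```python
import math
from collections import Counter

corpus = ["talo", "kylä", "pöytä", "katu"]  # hypothetical Finnish-like forms

def train_bigram(words):
    counts, context = Counter(), Counter()
    for w in words:
        padded = "#" + w + "#"          # word boundaries as a symbol
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def corpus_bits(words, counts, context, alphabet_size):
    total = 0.0
    for w in words:
        padded = "#" + w + "#"
        for a, b in zip(padded, padded[1:]):
            # add-one smoothing keeps unseen bigrams at probability > 0
            p = (counts[(a, b)] + 1) / (context[a] + alphabet_size)
            total += -math.log2(p)
    return total

counts, context = train_bigram(corpus)
alphabet = {ch for w in corpus for ch in "#" + w + "#"}
print(f"corpus cost: {corpus_bits(corpus, counts, context, len(alphabet)):.1f} bits")
```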
Gradual Learning and Convergence
2008
The version of the Gradual Learning Algorithm presented by Paul Boersma and Bruce Hayes (2001) is shown to be consistently foiled by a learning problem consisting of a few iterations of a simple data pattern in Optimality Theory. Of two competing forms, unshared violation marks are assigned to the loser on the highest-ranked constraint, to the winner on the next highest-ranked constraint, and to the loser on the following constraint; the rankings are determined by further input-output pairings, each of which repeats the pattern with a new constraint set that overlaps the previous one. The Perceptron learner introduced by Frank Rosenblatt (1958), which resembles the Gradual Learning Algorithm in gradually adjusting constraint values and additionally has a convergence/correctness proof, succeeds in learning the data pattern in the precursor of Optimality Theory, the Harmonic Grammar model of Geraldine Legendre et al. (1990).
Journal Article
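
The Perceptron update invoked here is, in Harmonic Grammar terms, a shift of the weight vector toward the observed winner on each error. A minimal sketch, with hypothetical constraint names and violation profiles:

```python
# Harmony is the negative weighted sum of violations; on an error the weights
# move toward the observed winner, the update covered by Rosenblatt's
# convergence proof for separable data.
def perceptron_update(weights, viol_observed, viol_predicted, rate=1.0):
    for c in weights:
        weights[c] += rate * (viol_predicted[c] - viol_observed[c])
    return weights

w = {"C1": 1.0, "C2": 1.0, "C3": 1.0}                  # hypothetical constraints
w = perceptron_update(w, {"C1": 0, "C2": 1, "C3": 0},  # observed winner's marks
                         {"C1": 1, "C2": 0, "C3": 1})  # learner's wrong winner
print(w)  # constraints violated more by the wrong winner gain weight
```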
A model of lexical variation and the grammar with application to Tagalog nasal substitution
2010
This paper presents a case of patterned exceptionality. The case is Tagalog nasal substitution, a phenomenon in which a prefix-final nasal fuses with a stem-initial obstruent. The rule is variable on a word-by-word basis, but its distribution is phonologically patterned, as shown through dictionary and corpus data. Speakers appear to have implicit knowledge of the patterning, as shown through experimental data and loan adaptation. A grammar is proposed that reconciles the primacy of lexical information with regularities in the distribution of the rule. Morphologically complex words are allowed to have their own lexical entries, whose use is preferred to on-the-fly morphological concatenation. The grammar contains lower-ranked markedness constraints that govern the behavior of novel words. Faithfulness for lexicalized full words is ranked high, so that an established word will have a stable pronunciation. But when a word is newly coined through affixation, the outcome varies according to the lexical trends. A crucial aspect of the proposal is that the ranking of the "subterranean" markedness constraints can be learned despite training data in which all words are pronounced faithfully, using Boersma's (1997, 1998) Gradual Learning Algorithm. The paper also shows, by summarizing the rule's behavior in related languages, that the same constraints, in different rankings, seem to be at work even in languages reported to lack variation.
Journal Article
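
The division of labor this abstract proposes, high-ranked faithfulness for listed words versus lower-ranked markedness deciding novel coinages, can be caricatured as a lookup-then-grammar procedure. Everything below (forms, the fusion table, the substitution rate) is a toy assumption, not the paper's analysis.

```python
import random

lexicon = {"maN+bili": "mamili"}   # stored full form, substitution already applied

def nasal_substitute(prefix, stem):
    # toy fusion table: prefix-final nasal fuses with a stem-initial obstruent
    fused = {"b": "m", "p": "m", "d": "n", "t": "n"}.get(stem[0])
    return prefix[:-1] + fused + stem[1:] if fused else prefix + stem

def pronounce(prefix, stem, p_substitute=0.7):
    key = f"{prefix}+{stem}"
    if key in lexicon:                  # faithfulness to the listed word wins
        return lexicon[key]
    if random.random() < p_substitute:  # novel coinage: markedness decides
        return nasal_substitute(prefix, stem)
    return prefix + stem

print(pronounce("maN", "bili"))    # stable: retrieved from the lexicon
print(pronounce("maN", "tanong"))  # novel: varies with the lexical trend
```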
Markedness and Subject Choice in Optimality Theory
1999
Among the most robust generalizations in syntactic markedness is the association of semantic role with person/animacy rank, discussed first in Silverstein (1976). The present paper explores how Silverstein's generalization might be expressed in a formal theory of grammar, and how it can play a role in individual grammars. The account, which focuses here on the role of person, is developed in Optimality Theory. Central to it are two formal devices which have been proposed in connection with phonology: harmonic alignment of prominence scales, and local conjunction of constraints. It is shown that application of harmonic alignment to scales involving syntactic relations and several substantive dimensions characterizes the universal markedness relations operative in this domain, and provides the constraints necessary for grammar construction. Differences between languages can be described as differences in the ranking of universal constraints.
Journal Article
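
Harmonic alignment, one of the two formal devices the abstract names, has a simple combinatorial core: crossing a binary relational scale with a prominence scale yields two fixed constraint subhierarchies. A sketch, with the person-based scales used only as a hypothetical example:

```python
# Aligning a binary scale X > Y with a prominence scale a > b > ... yields the
# subhierarchies *X/b >> *X/a and *Y/a >> *Y/b: the marked combinations are
# penalized by the higher-ranked constraints.
def harmonic_alignment(binary_scale, prominence_scale):
    X, Y = binary_scale
    x_hier = [f"*{X}/{p}" for p in reversed(prominence_scale)]
    y_hier = [f"*{Y}/{p}" for p in prominence_scale]
    return x_hier, y_hier

# Hypothetical scales: subject > object crossed with local (1st/2nd) > 3rd person
subj_hier, obj_hier = harmonic_alignment(("Su", "Oj"), ["Local", "3"])
print(" >> ".join(subj_hier))  # *Su/3 >> *Su/Local
print(" >> ".join(obj_hier))   # *Oj/Local >> *Oj/3
```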
Biases in Harmonic Grammar: the road to restrictive learning
2011
In the Optimality-Theoretic learnability and acquisition literature it has been proposed that certain classes of constraints must be biased toward particular rankings (e.g., Markedness ≫ IO-Faithfulness; Specific IO-Faithfulness ≫ General IO-Faithfulness). While sometimes difficult to implement efficiently or comprehensively, these biases are necessary to explain how learners acquire the most restrictive grammar consistent with positive evidence from the target language, and how innovative patterns emerge during the course of child phonological development. This paper demonstrates that altering the mode of constraint interaction from strict ranking as in Optimality Theory to additive weighting as in Harmonic Grammar (HG) reduces the number of classes of constraints that must be distinguished by such biases. Using weighted constraints and a version of the Gradual Learning Algorithm (GLA), the only distinction needed is between Output-based constraints, which must be biased toward high weights, and Input-Output-based constraints, which must be biased toward the lowest weights possible. We implement this distinction within the HG-GLA model by assigning different initial weights and plasticity values to the two classes of constraints. This implementation suffices to ensure that restrictive grammars are learned, and also predicts the emergence of a variety of attested intermediate stages during the course of acquisition.
Journal Article
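
The bias implementation the abstract describes, different initial weights and plasticities for the two constraint classes, amounts to a small configuration choice in an HG-GLA learner. A sketch with invented constraint names and values:

```python
# Output-based (markedness) constraints start high, input-output
# (faithfulness) constraints start as low as possible, and the two classes
# get different plasticities, so faithfulness rises only under direct
# pressure from positive evidence. Values are illustrative.
INIT_WEIGHT = {"output": 100.0, "input_output": 0.0}
PLASTICITY  = {"output": 1.0,   "input_output": 0.1}

constraints = {            # hypothetical constraint inventory
    "*CODA":    "output",
    "*VNASAL":  "output",
    "MAX-IO":   "input_output",
    "IDENT-IO": "input_output",
}

weights = {c: INIT_WEIGHT[kind] for c, kind in constraints.items()}
rates   = {c: PLASTICITY[kind]  for c, kind in constraints.items()}
print(weights, rates, sep="\n")
```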
Some Correct Error-Driven Versions of the Constraint Demotion Algorithm
2009
This article shows that Error-Driven Constraint Demotion (EDCD), an error-driven learning algorithm proposed by Tesar (1995) for Prince and Smolensky's (1993/2004) version of Optimality Theory, can fail to converge to a correct totally ranked hierarchy of constraints, unlike the earlier non-error-driven learning algorithms proposed by Tesar and Smolensky (1993). The cause of the problem is found in Tesar's use of "mark-pooling ties," indicating that EDCD can be repaired by assuming Anttila's (1997) "permuting ties" instead. Proofs show, and simulations confirm, that totally ranked hierarchies can indeed be found by both this repaired version of EDCD and Boersma's (1998) Minimal Gradual Learning Algorithm.
Journal Article
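
As background to the repair discussed here, a minimal sketch of one common formulation of constraint demotion: on an error, loser-preferring constraints ranked at or above the highest winner-preferring constraint are demoted just below it. Constraint names and marks are hypothetical.

```python
def edcd_update(strata, marks_observed, marks_predicted):
    # strata: list of sets of constraint names, highest-ranked stratum first
    winner_pref = {c for c in marks_predicted
                   if marks_predicted[c] > marks_observed[c]}
    loser_pref = {c for c in marks_predicted
                  if marks_observed[c] > marks_predicted[c]}
    # highest stratum containing a constraint that prefers the observed winner
    pivot = next(i for i, s in enumerate(strata) if s & winner_pref)
    # demote loser-preferring constraints ranked at or above the pivot
    to_demote = set().union(*(s & loser_pref for s in strata[:pivot + 1]))
    strata = [s - to_demote for s in strata]
    if pivot + 1 < len(strata):
        strata[pivot + 1] |= to_demote
    else:
        strata.append(to_demote)
    return [s for s in strata if s]

strata = [{"C1"}, {"C2"}, {"C3"}]              # hypothetical hierarchy
obs  = {"C1": 1, "C2": 0, "C3": 0}             # observed winner's marks
pred = {"C1": 0, "C2": 1, "C3": 0}             # learner's wrong winner
print(edcd_update(strata, obs, pred))          # C1 demoted below C2
```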