88 result(s) for "Levy, Arnon"
Abstraction and the Organization of Mechanisms
Proponents of mechanistic explanation all acknowledge the importance of organization. But they have also tended to emphasize specificity with respect to parts and operations in mechanisms. We argue that in understanding one important mode of organization—patterns of causal connectivity—a successful explanatory strategy abstracts from the specifics of the mechanism and invokes tools such as those of graph theory to explain how mechanisms with a particular mode of connectivity will behave. We discuss the connection between organization, abstraction, and mechanistic explanation and illustrate our claims by looking at an example from recent research on so-called network motifs.
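The network motifs the abstract mentions are small, recurring patterns of causal connectivity. As a minimal sketch with made-up toy data (not from the paper), the code below counts one well-known motif, the feed-forward loop x → y, x → z, y → z, in a hypothetical directed network:

```python
from itertools import permutations

# Hypothetical toy wiring diagram (illustrative only, not from the paper).
edges = {("X", "Y"), ("X", "Z"), ("Y", "Z"), ("A", "B")}
nodes = {n for edge in edges for n in edge}

def count_feedforward_loops(edges, nodes):
    """Count ordered triples (x, y, z) forming a feed-forward loop:
    x -> y, x -> z, and y -> z all present."""
    return sum(
        1
        for x, y, z in permutations(nodes, 3)
        if (x, y) in edges and (x, z) in edges and (y, z) in edges
    )

print(count_feedforward_loops(edges, nodes))  # 1: X -> Y -> Z plus the shortcut X -> Z
```

The point of the abstraction is visible here: the count depends only on the pattern of connections, not on what the nodes (genes, proteins, neurons) actually are.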
Idealization and abstraction
Idealization and abstraction are central concepts in the philosophy of science and in science itself. My goal in this paper is to suggest an account of these concepts, building on and refining an existing view due to Jones (in: Jones MR, Cartwright N (eds) Idealization XII: correcting the model. Idealization and abstraction in the sciences, vol 86. Rodopi, Amsterdam, pp 173–217, 2005) and Godfrey-Smith (in: Barberousse A, Morange M, Pradeu T (eds) Mapping the future of biology: evolving concepts and theories. Springer, Berlin, 2009). On this line of thought, abstraction—which I call, for reasons to be explained, abstractness—involves the omission of detail, whereas idealization consists in a deliberate mismatch between a description (or a model) and the world. I will suggest that while the core idea underlying these authors’ view is correct, they make several assumptions and stipulations that are best avoided. For one thing, they tie abstractness too closely to truth. For another, they do not allow sufficient room for the difference between idealization and error. Taking these points into account leads to a refined account of the distinction, in which abstractness is seen in terms of relative richness of detail, and idealization is seen as closely connected with the knowledge and intentions of idealizers. I lay out these accounts in turn, and then discuss the relationship between the two concepts, and several other upshots of the present way of construing the distinction.
Can Bayesian Models of Cognition Show That We Are (Epistemically) Rational?
“According to [Bayesian] models” in cognitive neuroscience, says a recent textbook, “the human mind behaves like a capable data scientist.” Do they? That is, do such models show we are rational? I argue that Bayesian models of cognition, perhaps surprisingly, don’t and indeed can’t show that we are Bayes-rational. The key reason is that they appeal to approximations, a fact that carries significant implications. After outlining the argument, I critique two responses, seen in recent cognitive neuroscience. One says that the mind can be seen as approximately Bayes-rational, while the other reconceives norms of rationality.
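The abstract's argument turns on the gap between exact Bayesian inference and the approximations cognitive models actually use. As a minimal sketch (toy numbers, not the paper's models), the code below computes an exact posterior mean for a coin's bias and a Monte Carlo approximation of the same quantity:

```python
import random

random.seed(0)

heads, flips = 7, 10  # hypothetical data: 7 heads in 10 flips

# Exact posterior mean under a uniform (Beta(1,1)) prior: (k + 1) / (n + 2).
exact_mean = (heads + 1) / (flips + 2)

# Approximate the same quantity by importance sampling from the prior,
# weighting each sample by the binomial likelihood (up to a constant).
samples = [random.random() for _ in range(100_000)]
weights = [t**heads * (1 - t)**(flips - heads) for t in samples]
approx_mean = sum(t * w for t, w in zip(samples, weights)) / sum(weights)

print(exact_mean)                             # 2/3 exactly
print(abs(exact_mean - approx_mean) < 0.01)   # close here, but only approximately
```

Even in this trivial case the approximate answer merely tracks the exact one; for realistic models no closed form exists, which is why approximation, and its implications for rationality claims, matters.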
Modeling without models
Modeling is an important scientific practice, yet it raises significant philosophical puzzles. Models are typically idealized, and they are often explored via imaginative engagement and at a certain "distance" from empirical reality. These features raise questions such as what models are and how they relate to the world. A number of recent accounts answer these questions in terms of indirect representation and analysis. Such views treat the model as a bona fide object ("the model system"), specified by the modeler and used to represent and reason about some portion of the concrete empirical world ("the target system"). On some indirect views, model systems are abstract entities, such as mathematical structures, while on other views they are concrete hypothetical things, akin to fictional characters. Here I assess these views and offer a novel account of models. I argue that regarding models as abstracta results in some significant tensions with the practice of modeling, especially in areas where non-mathematical models are common. On the other hand, viewing models as concrete hypotheticals raises difficult questions about model-world relations. The view I argue for treats models as direct, albeit simplified, representations of targets in the world. I close by suggesting a treatment of model-world relations that draws on recent work by Stephen Yablo concerning the notion of partial truth.
Molecular-biological machines: a defense
I offer a defense, albeit a qualified one, of machine analogies in biology, focusing on molecular contexts. The defense is rooted in my prior work (Levy in Philosopher’s Imprint 14(6), 2014), which construes the machine-likeness of a system as a matter of the extent to which it exhibits an internal division of labor. A concrete aim is to shore up the notion of molecular-biological machines, paying special attention to processive molecular motors, such as kinesin. But I will also try to show how the division of labor account gives us guidance more broadly, both about where and why machine analogies can be expected to prove helpful and about their limitations.
Simulation of laser-induced tunnel ionization based on a curved waveguide
The problem of tunneling ionization, together with the associated questions of how long it takes an electron to tunnel through the barrier and what the tunneling rate is, has fascinated scientists for almost a century. In strong-field physics, tunnel ionization plays an important role, and accurate knowledge of the time-dependent tunneling rate is of paramount importance. The Keldysh theory and other, more advanced related theories are often used, but their accuracy is still controversial. In previous work, we suggested using a curved waveguide as a quantum simulator of the tunnel ionization process. Here we implement such a curved waveguide for the first time and observe the simulated tunneling ionization process. We compare our results with the theory.
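The Keldysh theory the abstract refers to classifies strong-field ionization by the dimensionless Keldysh parameter, γ = ω√(2mIₚ)/(eE₀): γ ≫ 1 indicates the multiphoton regime, γ ≪ 1 the tunneling regime. As a back-of-the-envelope sketch, with illustrative input values (hydrogen in a typical Ti:sapphire field) that are assumptions and not taken from the paper:

```python
import math

# CODATA constants.
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg
C = 2.99792458e8             # speed of light, m/s
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def keldysh_gamma(intensity_w_cm2, wavelength_m, ip_ev):
    """Keldysh parameter gamma = omega * sqrt(2 * m * Ip) / (e * E0).
    gamma >> 1: multiphoton regime; gamma << 1: tunneling regime."""
    intensity = intensity_w_cm2 * 1e4                  # W/cm^2 -> W/m^2
    e_field = math.sqrt(2 * intensity / (C * EPS0))    # peak electric field, V/m
    omega = 2 * math.pi * C / wavelength_m             # laser angular frequency, rad/s
    ip = ip_ev * E_CHARGE                              # ionization potential, J
    return omega * math.sqrt(2 * M_E * ip) / (E_CHARGE * e_field)

# Hypothetical example: hydrogen (Ip = 13.6 eV) at 800 nm and 1e14 W/cm^2
# sits near gamma ~ 1, the crossover between the two regimes.
print(round(keldysh_gamma(1e14, 800e-9, 13.6), 2))  # ~1.07
```

Near γ ≈ 1 neither limiting theory is cleanly applicable, which is one reason the accuracy of Keldysh-type rates remains controversial and simulators are attractive.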
What was Hodgkin and Huxley's Achievement?
The Hodgkin–Huxley (HH) model of the action potential is a theoretical pillar of modern neurobiology. In a number of recent publications, Carl Craver ([2006], [2007], [2008]) has argued that the model is explanatorily deficient because it does not reveal enough about underlying molecular mechanisms. I offer an alternative picture of the HH model, according to which it deliberately abstracts from molecular specifics. By doing so, the model explains whole-cell behaviour as the product of a mass of underlying low-level events. The issue goes beyond cellular neurobiology, for the strategy of abstraction exhibited in the HH case is found in a range of biological contexts. I discuss why it has been largely neglected by advocates of the mechanist approach to explanation.
Three kinds of new mechanism
I distinguish three theses associated with the new mechanistic philosophy—concerning causation, explanation and scientific methodology. Advocates of each thesis are identified and relationships among them are outlined. I then look at some recent work on natural selection and mechanisms. Framing that debate in terms of different kinds of New Mechanism significantly affects what is at stake.
Evolutionary models and the normative significance of stability
Many have expected that understanding the evolution of norms should, in some way, bear on our first-order normative outlook: How norms evolve should shape which norms we accept. But recent philosophy has not done much to shore up this expectation. Most existing discussions of evolution and norms either jump headlong into the is/ought gap or else target meta-ethical issues, such as the objectivity of norms. My aim in this paper is to sketch a different way in which evolutionary considerations can feed into normative thinking—focusing on stability. I will discuss two (related) forms of argument that utilize information about social stability drawn from evolutionary models, and employ it to assess claims in political philosophy. One such argument treats stability as a feature of social states that may be taken into account alongside other features. The other uses stability as a constraint on the realization of social ideals, via a version of the ought-implies-can maxim. These forms of argument are not new; indeed, they have a history going back at least to early modern philosophy. But their marriage with evolutionary information is relatively recent, has a significantly novel character, and has received little attention in recent moral and political philosophy.
Model Organisms are Not (Theoretical) Models
Many biological investigations are organized around a small group of species, often referred to as 'model organisms', such as the fruit fly Drosophila melanogaster. The terms 'model' and 'modelling' also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account.