285 result(s) for "Econometrics Computer programs."
Applied econometrics using the SAS system
The first cutting-edge guide to using the SAS® system for the analysis of econometric data. Applied Econometrics Using the SAS® System is the first book of its kind to treat the analysis of basic econometric data using SAS®, one of the most commonly used software tools among today's statisticians in business and industry. This book thoroughly examines econometric methods and discusses how data collected in economic studies can easily be analyzed using the SAS® system. In addition to addressing the computational aspects of econometric data analysis, the author provides a statistical foundation by introducing the underlying theory behind each method before delving into the related SAS® routines. The book begins with a basic introduction to econometrics and the relationship between classical regression analysis models and econometric models. Subsequent chapters balance essential concepts with SAS® tools and cover key topics such as:
  • Regression analysis using Proc IML and Proc Reg
  • Hypothesis testing
  • Instrumental variables analysis, with a discussion of measurement errors, the assumptions incorporated into the analysis, and specification tests
  • Heteroscedasticity, including GLS and FGLS estimation, group-wise heteroscedasticity, and GARCH models
  • Panel data analysis
  • Discrete choice models, along with coverage of binary choice models and Poisson regression
  • Duration analysis models
Assuming only a working knowledge of SAS®, this book is a one-stop reference for using the software to analyze econometric data. Additional features include complete SAS® code, Proc IML routines plus a tutorial on Proc IML, and an appendix with additional programs and data sets. Applied Econometrics Using the SAS® System serves as a relevant and valuable reference for practitioners in the fields of business, economics, and finance.
In addition, most students of econometrics are taught using GAUSS and STATA, yet SAS® is the standard in the working world; therefore, this book is an ideal supplement for upper-undergraduate and graduate courses in statistics, economics, and other social sciences since it prepares readers for real-world careers.
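The book's worked examples use SAS Proc Reg and Proc IML, which are not reproduced in this listing. As a language-neutral sketch of the core computation the blurb refers to (ordinary least squares regression), here is a minimal pure-Python example; the data values are invented for illustration.

```python
# Minimal OLS for y = b0 + b1*x via the closed-form normal equations.
# Illustrative stand-in for the kind of fit Proc Reg performs.

def ols(xs, ys):
    """Return (intercept, slope) of the least-squares line through (xs, ys)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Points lying exactly on y = 1 + 2x recover those coefficients.
b0, b1 = ols([0, 1, 2, 3], [1, 3, 5, 7])
print(b0, b1)  # 1.0 2.0
```

The same normal-equations idea, generalized to matrix form, is what Proc IML makes explicit when regression is coded by hand.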
Computable Foundations for Economics
Computable Foundations for Economics is a unified collection of essays, some of which are published here for the first time and all of which have been updated for this book, on an approach to economic theory from the point of view of algorithmic mathematics. By algorithmic mathematics the author means computability theory and constructive mathematics. This is in contrast to orthodox mathematical economics and game theory, which are formalised with the mathematics of real analysis, underpinned by what is called the ZFC formalism, i.e., set theory with the axiom of choice. This reliance on ordinary real analysis and the ZFC system makes economic theory in its current mathematical mode completely non-algorithmic, which means it is numerically meaningless. The book provides a systematic attempt to dissect and expose the non-algorithmic content of orthodox mathematical economics and game theory and suggests a reformalization on the basis of a strictly rigorous algorithmic mathematics. This removes the current schizophrenia in mathematical economics and game theory, where theory is entirely divorced from algorithmic applicability - for experimental and computational exercises. The chapters demonstrate the uncomputability and non-constructivity of core areas of general equilibrium theory, game theory and recursive macroeconomics. The book also provides a fresh look at the kind of behavioural economics that lies behind Herbert Simon's work, and resurrects a role for the noble classical traditions of induction and verification, viewed and formalised, now, algorithmically. It will therefore be of particular interest to postgraduate students and researchers in algorithmic economics, game theory and classical behavioural economics.
Open source tools for geographic analysis in transport planning
Geographic analysis has long supported transport plans that are appropriate to local contexts. Many incumbent ‘tools of the trade’ are proprietary and were developed to support growth in motor traffic, limiting their utility for transport planners who have been tasked with twenty-first century objectives such as enabling citizen participation, reducing pollution, and increasing levels of physical activity by getting more people walking and cycling. Geographic techniques—such as route analysis, network editing, localised impact assessment and interactive map visualisation—have great potential to support modern transport planning priorities. The aim of this paper is to explore emerging open source tools for geographic analysis in transport planning, with reference to the literature and a review of open source tools that are already being used. A key finding is that a growing number of options exist, challenging the current landscape of proprietary tools. These can be classified as command-line interface, graphical user interface or web-based user interface tools and by the framework in which they were implemented, with numerous tools released as R, Python and JavaScript packages, and QGIS plugins. The review found a diverse and rapidly evolving ‘ecosystem’ of tools, with 25 tools that were designed for geographic analysis to support transport planning outlined in terms of their popularity and functionality based on online documentation. They ranged in size from single-purpose tools such as the QGIS plugin AwaP to sophisticated stand-alone multi-modal traffic simulation software such as MATSim, SUMO and Veins. Building on their ability to re-use the most effective components from other open source projects, developers of open source transport planning tools can avoid ‘reinventing the wheel’ and focus on innovation; the ‘gamified’ A/B Street (https://github.com/dabreegster/abstreet/#abstreet) simulation software, based on OpenStreetMap, is a case in point.
The paper, the source code of which can be found at https://github.com/robinlovelace/open-gat, concludes that, although many of the tools reviewed are still evolving and further research is needed to understand their relative strengths and barriers to uptake, open source tools for geographic analysis in transport planning already hold great potential to help generate the strategic visions of change and evidence that is needed by transport planners in the twenty-first century.
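The abstract names route analysis as one of the geographic techniques these tools support. The specific tools reviewed are not shown here; as a minimal sketch of the underlying idea, here is Dijkstra's shortest-path algorithm on a toy street graph in pure Python. Node names and edge weights (in metres) are invented for illustration.

```python
import heapq

# Toy route analysis: cheapest route on a small weighted street graph.

def shortest_path(graph, start, goal):
    """Return (distance, path) for the shortest route from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (dist + w, nbr, path + [nbr]))
    return float("inf"), []

streets = {
    "A": {"B": 120, "C": 300},
    "B": {"C": 100, "D": 400},
    "C": {"D": 150},
}
print(shortest_path(streets, "A", "D"))  # (370, ['A', 'B', 'C', 'D'])
```

Real tools such as those built on OpenStreetMap apply the same principle to networks with millions of edges, with cost functions tuned for walking, cycling or driving.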
Opening practice: supporting reproducibility and critical spatial data science
This paper reflects on a number of trends towards a more open and reproducible approach to geographic and spatial data science over recent years. In particular, it considers trends towards Big Data, and the impacts this is having on spatial data analysis and modelling. It identifies a turn in academia towards coding as a core analytic tool, and away from proprietary software tools offering ‘black boxes’ where the internal workings of the analysis are not revealed. It is argued that this closed form of software is problematic, and the paper considers a number of ways in which issues identified in spatial data analysis (such as the modifiable areal unit problem, MAUP) could be overlooked when working with closed tools, leading to problems of interpretation and possibly inappropriate actions and policies based on these. In addition, this paper considers the role that reproducible and open spatial science may play in such an approach, taking into account the issues raised. It highlights the dangers of failing to account for the geographical properties of data, now that all data are spatial (they are collected somewhere), the problems of a desire for n = all observations in data science, and it identifies the need for a critical approach. This is one in which openness, transparency, sharing and reproducibility provide a mantra for defensible and robust spatial data science.
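The MAUP the abstract mentions is easy to demonstrate concretely: the same point values aggregated under different zone boundaries yield different summary statistics. The following pure-Python sketch uses invented values along a one-dimensional strip, aggregated into zones of two different sizes.

```python
# Illustrating the modifiable areal unit problem (MAUP): identical point
# data, two different zonings, different zone means. Values are invented.

values = [1, 2, 9, 10, 1, 2, 9, 10]  # an attribute at eight locations

def zone_means(values, size):
    """Mean of each contiguous zone of `size` locations."""
    return [sum(values[i:i + size]) / size for i in range(0, len(values), size)]

print(zone_means(values, 2))  # [1.5, 9.5, 1.5, 9.5]
print(zone_means(values, 4))  # [5.5, 5.5]
```

The coarser zoning erases all of the spatial variation visible in the finer one, which is exactly the kind of effect a closed ‘black box’ tool can hide from the analyst.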
Copycats vs. Original Mobile Apps: A Machine Learning Copycat-Detection Method and Empirical Analysis
While the growth of the mobile apps market has created significant market opportunities and economic incentives for mobile app developers to innovate, it has also inevitably invited other developers to create rip-offs. Practitioners and developers of original apps claim that copycats steal the original app’s idea and potential demand, and have called for app platforms to take action against such copycats. Surprisingly, however, there has been little rigorous research analyzing whether and how copycats affect an original app’s demand. The primary deterrent to such research is the lack of an objective way to identify whether an app is a copycat or an original. Using a combination of machine learning techniques such as natural language processing, latent semantic analysis, network-based clustering, and image analysis, we propose a method to identify apps as original or copycat and detect two types of copycats: deceptive and nondeceptive. Based on the detection results, we conduct an econometric analysis to determine the impact of copycat apps on the demand for the original apps on a sample of 10,100 action game apps by 5,141 developers that were released in the iOS App Store over five years. Our results indicate that the effect of a specific copycat on an original app’s demand is determined by the quality and level of deceptiveness of the copycat. High-quality nondeceptive copycats negatively affect demand for the originals. By contrast, low-quality, deceptive copycats positively affect demand for the originals. Results indicate that in aggregate the impact of copycats on the demand of original mobile apps is statistically insignificant. Our study contributes to the growing literature on mobile app consumption by presenting a method to identify copycats and providing evidence of the impact of copycats on an original app’s demand. The online appendix is available at https://doi.org/10.1287/isre.2017.0735 .
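The paper's detection method combines several techniques (natural language processing, latent semantic analysis, network-based clustering, image analysis) that are not detailed in the abstract. As a toy sketch of just one ingredient of that pipeline, the example below computes bag-of-words cosine similarity between app descriptions in pure Python; the descriptions are invented.

```python
import math
from collections import Counter

# Toy text-similarity ingredient of copycat detection: bag-of-words
# cosine similarity between two app-store descriptions.

def cosine(a, b):
    """Cosine similarity of two texts under raw term counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

original = "flappy bird tap to fly between pipes"
copycat = "tap to fly your bird between pipes"
unrelated = "solve daily crossword puzzles offline"
print(cosine(original, copycat) > cosine(original, unrelated))  # True
```

The paper's actual method goes further, reducing such vectors with latent semantic analysis and combining the scores with clustering and image-based signals to separate deceptive from nondeceptive copycats.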
Open data products-A framework for creating valuable analysis ready data
This paper develops the notion of “open data product”. We define an open data product as the open result of the processes through which a variety of data (open and not) are turned into accessible information through a service, infrastructure, analytics or a combination of all of them, where each step of development is designed to promote open principles. Open data products are born out of a (data) need and add value beyond simply publishing existing datasets. We argue that the process of adding value should adhere to the principles of open (geographic) data science, ensuring openness, transparency and reproducibility. We also contend that outreach, in the form of active communication and dissemination through dashboards, software and publication, is key to engage end-users and ensure societal impact. Open data products have major benefits. First, they enable insights from highly sensitive, controlled and/or secure data which may not be accessible otherwise. Second, they can expand the use of commercial and administrative data for the public good, leveraging their high temporal frequency and geographic granularity. We also contend that there is a compelling need for open data products as we experience the current data revolution. New, emerging data sources are unprecedented in temporal frequency and geographical resolution, but they are large, unstructured, fragmented and often hard to access due to privacy and confidentiality concerns. By transforming raw (open or “closed”) data into ready-to-use open data products, new dimensions of human geographical processes can be captured and analysed, as we illustrate with existing examples. We conclude by arguing that several parallels exist between the role that open source software played in enabling research on spatial analysis in the 1990s and early 2000s, and the opportunities that open data products offer to unlock the potential of new forms of (geo-)data.
Progress in the R ecosystem for representing and handling spatial data
Twenty years have passed since Bivand and Gebhardt (J Geogr Syst 2(3):307–317, 2000. https://doi.org/10.1007/PL00011460) indicated that there was a good match between the then nascent open-source R programming language and environment and the needs of researchers analysing spatial data. Recalling the development of classes for spatial data presented in book form in Bivand et al. (Applied spatial data analysis with R. Springer, New York, 2008, Applied spatial data analysis with R, 2nd edn. Springer, New York, 2013), it is important to present the progress now occurring in representation of spatial data, and possible consequences for spatial data handling and the statistical analysis of spatial data. Beyond this, it is imperative to discuss the relationships between R-spatial software and the larger open-source geospatial software community on whose work R packages crucially depend.