Catalogue Search | MBRL
Explore the vast range of titles available.
517,860 result(s) for "Data processing"
Big Data, Little Data, No Data
by Borgman, Christine L.
in Big data; Communication in learning and scholarship; Communication in learning and scholarship -- Technological innovations
2015, 2016, 2017
\"Big Data\" is on the covers ofScience, Nature, theEconomist, andWiredmagazines, on the front pages of theWall Street Journaland theNew York Times.But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines.Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six \"provocations\" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
A real-time in-memory discovery service : leveraging hierarchical packaging information in a unique identifier network to retrieve track and trace information
The research presented in this book discusses how to efficiently retrieve track and trace information for an item of interest that took a certain path through a complex network of manufacturers, wholesalers, retailers, and consumers. To this end, a super-ordinate system called a "Discovery Service" is designed that has to handle large amounts of data, high insert rates, and a high number of queries submitted to the service. An example used throughout this book is the European pharmaceutical supply chain, which faces the challenge that more and more counterfeit medicinal products are being introduced. Between October and December 2008, more than 34 million fake drug pills were detected at customs control at the borders of the European Union. These fake drugs can put lives in danger, as they were supposed to fight cancer or act as painkillers or antibiotics, among other uses. The concepts described in this book can be adopted for supply chain management use cases other than track and trace, such as recall, supply chain optimization, or supply chain analytics.
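The core idea of such a discovery service can be sketched in a few lines: a per-item event log handles high insert rates, and hierarchical packaging lets scans made at the case or pallet level count toward each contained item's trace. This is a minimal illustrative sketch, not the book's design; all class and method names are assumptions.

```python
from collections import defaultdict

# Minimal sketch of an in-memory discovery service (names are illustrative,
# not taken from the book): each scan event records where a uniquely
# identified item was observed, and a trace query replays its path.
class DiscoveryService:
    def __init__(self):
        self._events = defaultdict(list)   # item id -> list of (time, location)
        self._children = defaultdict(set)  # container id -> packed item ids

    def record_event(self, item_id, location, timestamp):
        """High-rate insert path: append-only per-item event log."""
        self._events[item_id].append((timestamp, location))

    def pack(self, container_id, item_id):
        """Hierarchical packaging: a container aggregates item identifiers."""
        self._children[container_id].add(item_id)

    def trace(self, item_id):
        """Return the item's path, folding in events that were logged
        against any container it was packed into."""
        events = list(self._events[item_id])
        for container, members in self._children.items():
            if item_id in members:
                events.extend(self._events[container])
        return [loc for _, loc in sorted(events)]

ds = DiscoveryService()
ds.record_event("pill-1", "manufacturer", 1)
ds.pack("case-7", "pill-1")
ds.record_event("case-7", "wholesaler", 2)   # scanned at case level only
ds.record_event("pill-1", "pharmacy", 3)
print(ds.trace("pill-1"))  # ['manufacturer', 'wholesaler', 'pharmacy']
```

Note how the wholesaler scan, recorded only against the case, still appears in the pill's trace — that is the benefit of exploiting the packaging hierarchy.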
Corpus Stylistics
2004
This book combines stylistic analysis with corpus linguistics to present an innovative account of the phenomenon of speech, writing and thought presentation - commonly referred to as 'speech reporting' or 'discourse presentation'. This new account is based on an extensive analysis of a quarter-of-a-million word electronic collection of written narrative texts, including both fiction and non-fiction. The book includes detailed discussions of:
The construction of this corpus of late twentieth-century written British narratives taken from fiction, newspaper news reports and (auto)biographies
The development of a manual annotation system for speech, writing and thought presentation and its application to the corpus.
The findings of a quantitative and qualitative analysis of the forms and functions of speech, writing and thought presentation in the three genres represented in the corpus.
The findings of the analysis of a range of specific phenomena, including hypothetical speech, writing and thought presentation, embedded speech, writing and thought presentation and ambiguities in speech, writing and thought presentation.
Two case studies concentrating on specific texts from the corpus.
Corpus Stylistics shows how stylistics, and text/discourse analysis more generally, can benefit from the use of a corpus methodology and the authors' innovative approach results in a more reliable and comprehensive categorisation of the forms of speech, writing and thought presentation than have been suggested so far. This book is essential reading for linguists interested in the areas of stylistics and corpus linguistics.
Elena Semino is Senior Lecturer in the Department of Linguistics and Modern English Language at Lancaster University. She is the author of Language and World Creation in Poems and Other Texts (1997), and co-editor (with Jonathan Culpeper) of Cognitive Stylistics: Language and Cognition in Text Analysis (2002). Mick Short is Professor of English Language and Literature at Lancaster University. He has written Exploring the Language of Poems, Plays and Prose (1997) and (with Geoffrey Leech) Style in Fiction (1997). He founded the Poetics and Linguistics Association and was the founding editor of its international journal, Language and Literature.
1. A Corpus-Based Approach to the Study of Discourse Presentation in Written Narratives
2. Methodology: The Construction and Annotation of the Corpus
3. A Revised Model of Speech, Writing and Thought Presentation
4. Speech Presentation in the Corpus: A Quantitative and Qualitative Analysis
5. Writing Presentation in the Corpus: A Quantitative and Qualitative Analysis
6. Thought Presentation in the Corpus: A Quantitative and Qualitative Analysis
7. Specific Phenomena in Speech, Writing Presentation
8. Case Studies of Specific Texts from the Corpus
9. Conclusion
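The quantitative side of a corpus-annotation study like this one boils down to counting category tags over annotated text. The sketch below is purely illustrative: the tag labels (DS, IS, NRSA) follow the Leech-and-Short tradition the book works in, but the angle-bracket annotation syntax and the helper function are assumptions, not the authors' actual scheme.

```python
import re
from collections import Counter

# Count discourse-presentation tags in a toy annotated fragment.
# The <TAG>...</TAG> markup here is an assumed, simplified format.
def count_presentation_tags(annotated_text):
    # Match only opening tags: '<' followed by word characters and '>'.
    return Counter(re.findall(r"<(\w+)>", annotated_text))

sample = (
    '<NRSA>She complained about the delay.</NRSA> '
    '<DS>"I have waited an hour," she said.</DS> '
    '<IS>He replied that the train was late.</IS> '
    '<DS>"Typical," she muttered.</DS>'
)
print(count_presentation_tags(sample))
# Counter({'DS': 2, 'NRSA': 1, 'IS': 1})
```

Tallies like these, aggregated per genre, are what make the fiction/news/(auto)biography comparisons in chapters 4-6 possible.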
Cyber security in parallel and distributed computing : concepts, techniques, applications and case studies
by Mishra, Brojo Kishore; Khari, Manju; Kumar, Raghvendra
in Computer networks; Computer networks -- Security measures; Computer security
2019
The book contains several new concepts, techniques, applications and case studies for cyber security in parallel and distributed computing. The main objective of this book is to explore the concept of cybersecurity in parallel and distributed computing along with recent research developments in the field.
Structural equation modeling : applications using Mplus
2012
A reference guide for applications of SEM using Mplus. Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated, along with recently developed advanced methods such as mixture modeling and model-based power analysis and sample size estimation for SEM. The statistical modeling program Mplus is also featured, providing researchers with a flexible tool to analyze their data with an easy-to-use interface and graphical displays of data and analysis results.
Key features:
Presents a useful reference guide for applications of SEM while systematically demonstrating various advanced SEM models, such as multi-group and mixture models, using Mplus.
Discusses and demonstrates various SEM models using both cross-sectional and longitudinal data with both continuous and categorical outcomes.
Provides step-by-step instructions for model specification and estimation, as well as detailed interpretation of Mplus results.
Explores different methods for sample size estimation and statistical power analysis for SEM.
By following the examples provided in this book, readers will be able to build their own SEM models using Mplus. Teachers, graduate students, and researchers in social sciences and health studies will also benefit from this book.
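The common thread behind all the SEM variants the book covers is that a model implies a covariance matrix, and estimation means choosing parameter values so that implied matrix matches the observed one. A minimal numerical sketch of that idea, with made-up parameter values (this is not an Mplus example; Mplus itself uses its own model syntax):

```python
import numpy as np

# One-factor measurement model: three observed indicators of one latent
# factor. The model-implied covariance is
#     Sigma = Lambda * phi * Lambda^T + Theta
# All numbers below are illustrative, not estimates from real data.
lam = np.array([[1.0], [0.8], [0.6]])   # factor loadings (first fixed to 1)
phi = np.array([[2.0]])                 # factor variance
theta = np.diag([0.5, 0.4, 0.3])        # residual (unique) variances

sigma = lam @ phi @ lam.T + theta       # model-implied covariance matrix
print(np.round(sigma, 2))
```

Fitting the model amounts to adjusting `lam`, `phi`, and `theta` until `sigma` reproduces the sample covariance matrix as closely as possible.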
Apache Kafka Quick Start Guide
by Estrada, Raúl
in COMPUTERS / Data Science / General; Electronic data processing; Telecommunication
2024, 2018
Process large volumes of data in real time while building a high-performance and robust data stream processing pipeline using the latest Apache Kafka 2.0.
Key features:
Solve practical large-data and processing challenges with Kafka
Tackle data processing challenges like late events, windowing, and watermarking
Understand real-time streaming application processing using the Schema Registry, Kafka Connect, Kafka Streams, and KSQL
Book description:
Apache Kafka is a great open source platform for handling your real-time data pipeline, ensuring high-speed filtering and pattern matching on the fly. In this book, you will learn how to use Apache Kafka for efficient processing of distributed applications and become familiar with solving everyday problems in fast data and processing pipelines. This book focuses on programming rather than the configuration management of Kafka clusters or DevOps. It starts with installation and setting up the development environment, before quickly moving on to fundamental messaging operations such as validation and enrichment. You will learn about message composition with the pure Kafka API and Kafka Streams, and will look into the transformation of messages in different formats, such as text, binary, XML, JSON, and AVRO. Next, you will learn how to expose the schemas contained in Kafka with the Schema Registry, and then how to work with all relevant connectors with Kafka Connect. While working with Kafka Streams, you will perform various interesting operations on streams, such as windowing, joins, and aggregations. Finally, through KSQL, you will learn how to retrieve, insert, modify, and delete data streams, and how to manipulate watermarks and windows.
What you will learn:
Validate data with Kafka
Add information to existing data flows
Generate new information through message composition
Perform data validation and versioning with the Schema Registry
Perform message serialization and deserialization
Process data streams with Kafka Streams
Understand the duality between tables and streams with KSQL
Who this book is for:
This book is for developers who want to quickly master the practical concepts behind Apache Kafka. The audience need not have encountered Apache Kafka previously; however, familiarity with Java or any JVM language will be helpful in understanding the code in this book.