Catalogue Search | MBRL
Explore the vast range of titles available.
1,642,238 result(s) for "PROCESSING"
Big Data, Little Data, No Data
by
Borgman, Christine L
in
Big data
,
Communication in learning and scholarship
,
Communication in learning and scholarship -- Technological innovations
2015,2016,2017
"Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, and on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
Cyber security in parallel and distributed computing : concepts, techniques, applications and case studies
by
Mishra, Brojo Kishore
,
Khari, Manju
,
Kumar, Raghvendra
in
Computer networks
,
Computer networks -- Security measures
,
Computer security
2019
The book contains several new concepts, techniques, applications, and case studies for cyber security in parallel and distributed computing. The main objective of this book is to explore the concept of cybersecurity in parallel and distributed computing along with recent research developments in the field.
Distributed computing pearls
"Computers and computer networks are among the most incredible inventions of the 20th century, having an ever-expanding role in our daily lives by enabling complex human activities in areas such as entertainment, education, and commerce. One of the most challenging problems in computer science for the 21st century is to improve the design of distributed systems, where computing devices have to work together as a team to achieve common goals. In this book, the author has tried to gently introduce the general reader to some of the most fundamental issues and classical results of computer science underlying the design of algorithms for distributed systems, so that the reader can get a feel for the nature of this exciting and fascinating field called distributed computing. The book will appeal to the educated layperson, while computer-knowledgeable readers will be able to learn something new."--Page 4 of cover.
Corpus Stylistics
2004
This book combines stylistic analysis with corpus linguistics to present an innovative account of the phenomenon of speech, writing and thought presentation - commonly referred to as 'speech reporting' or 'discourse presentation'. This new account is based on an extensive analysis of a quarter-of-a-million word electronic collection of written narrative texts, including both fiction and non-fiction. The book includes detailed discussions of:
The construction of this corpus of late twentieth-century written British narratives taken from fiction, newspaper news reports and (auto)biographies
The development of a manual annotation system for speech, writing and thought presentation and its application to the corpus.
The findings of a quantitative and qualitative analysis of the forms and functions of speech, writing and thought presentation in the three genres represented in the corpus.
The findings of the analysis of a range of specific phenomena, including hypothetical speech, writing and thought presentation, embedded speech, writing and thought presentation and ambiguities in speech, writing and thought presentation.
Two case studies concentrating on specific texts from the corpus.
Corpus Stylistics shows how stylistics, and text/discourse analysis more generally, can benefit from the use of a corpus methodology, and the authors' innovative approach results in a more reliable and comprehensive categorisation of the forms of speech, writing and thought presentation than has been suggested so far. This book is essential reading for linguists interested in the areas of stylistics and corpus linguistics.
Elena Semino is Senior Lecturer in the Department of Linguistics and Modern English Language at Lancaster University. She is the author of Language and World Creation in Poems and Other Texts (1997), and co-editor (with Jonathan Culpeper) of Cognitive Stylistics: Language and Cognition in Text Analysis (2002). Mick Short is Professor of English Language and Literature at Lancaster University. He has written Exploring the Language of Poems, Plays and Prose (1997) and (with Geoffrey Leech) Style in Fiction (1997). He founded the Poetics and Linguistics Association and was the founding editor of its international journal, Language and Literature.
1. A Corpus-Based Approach to the Study of Discourse Presentation in Written Narratives
2. Methodology: The Construction and Annotation of the Corpus
3. A Revised Model of Speech, Writing and Thought Presentation
4. Speech Presentation in the Corpus: A Quantitative and Qualitative Analysis
5. Writing Presentation in the Corpus: A Quantitative and Qualitative Analysis
6. Thought Presentation in the Corpus: A Quantitative and Qualitative Analysis
7. Specific Phenomena in Speech, Writing and Thought Presentation
8. Case Studies of Specific Texts from the Corpus
9. Conclusion
A real-time in-memory discovery service : leveraging hierarchical packaging information in a unique identifier network to retrieve track and trace information
The research presented in this book discusses how to efficiently retrieve track and trace information for an item of interest that took a certain path through a complex network of manufacturers, wholesalers, retailers, and consumers. To this end, a super-ordinate system called "Discovery Service" is designed that has to handle large amounts of data, high insert rates, and a high number of queries submitted to the discovery service. An example used throughout this book is the European pharmaceutical supply chain, which faces the challenge that more and more counterfeit medicinal products are being introduced. Between October and December 2008, more than 34 million fake drug pills were detected at customs controls at the borders of the European Union. These fake drugs can put lives in danger, as they were supposed to fight cancer or act as painkillers or antibiotics, among others. The concepts described in this book can be adopted for supply chain management use cases other than track and trace, such as recall, supply chain optimization, or supply chain analytics.
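The hierarchical-packaging idea behind such a discovery service can be sketched in a few lines: a read event recorded against a pallet or case implicitly applies to every item packed inside it, so tracing an item means walking up its packaging hierarchy. The Node/trace names and the read-event format below are invented for illustration and are not the book's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One uniquely identified object (item, case, or pallet)."""
    uid: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

    def pack_into(self, container: "Node") -> None:
        self.parent = container
        container.children.append(self)

def trace(node: Node, reads: dict[str, list[str]]) -> list[str]:
    """Collect read events for an object, including reads recorded
    against every container it was packed into."""
    events: list[str] = []
    current: Node | None = node
    while current is not None:
        events.extend(reads.get(current.uid, []))
        current = current.parent
    return events

# Example: an item packed into a case, which is packed onto a pallet.
item, case, pallet = Node("item-1"), Node("case-7"), Node("pallet-3")
item.pack_into(case)
case.pack_into(pallet)

reads = {
    "item-1": ["manufacturer"],
    "case-7": ["wholesaler"],
    "pallet-3": ["retailer"],
}
print(trace(item, reads))  # ['manufacturer', 'wholesaler', 'retailer']
```

A real discovery service additionally has to handle repacking over time (an item's parent changes between reads), which this static sketch deliberately ignores.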
Apache Kafka Quick Start Guide
by
Estrada, Raúl
in
COMPUTERS / Data Science / General
,
Electronic data processing
,
Telecommunication
2024,2018
Process large volumes of data in real time while building a high-performance and robust data stream processing pipeline using the latest Apache Kafka 2.0.
Key Features
Solve practical large-data and processing challenges with Kafka
Tackle data processing challenges like late events, windowing, and watermarking
Understand real-time streaming application processing using the Schema Registry, Kafka Connect, Kafka Streams, and KSQL
Book Description
Apache Kafka is a great open source platform for handling your real-time data pipeline to ensure high-speed filtering and pattern matching on the fly. In this book, you will learn how to use Apache Kafka for efficient processing of distributed applications and will get familiar with solving everyday problems in fast data and processing pipelines. This book focuses on programming rather than the configuration management of Kafka clusters or DevOps. It starts with installation and setting up the development environment, before quickly moving on to performing fundamental messaging operations such as validation and enrichment. Here you will learn about message composition with the pure Kafka API and Kafka Streams. You will look into the transformation of messages in different formats, such as text, binary, XML, JSON, and Avro. Next, you will learn how to expose the schemas contained in Kafka with the Schema Registry. You will then learn how to work with all relevant connectors with Kafka Connect. While working with Kafka Streams, you will perform various interesting operations on streams, such as windowing, joins, and aggregations.
Finally, through KSQL, you will learn how to retrieve, insert, modify, and delete data streams, and how to manipulate watermarks and windows.
What you will learn
Validate data with Kafka
Add information to existing data flows
Generate new information through message composition
Perform data validation and versioning with the Schema Registry
Perform message serialization and deserialization
Process data streams with Kafka Streams
Understand the duality between tables and streams with KSQL
Who this book is for
This book is for developers who want to quickly master the practical concepts behind Apache Kafka. The audience need not have come across Apache Kafka previously; however, familiarity with Java or any JVM language will be helpful in understanding the code in this book.
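Kafka Streams performs windowed aggregation with watermarks on a cluster; as a stand-alone illustration of the tumbling-window and late-event concepts the blurb mentions, here is a minimal Python sketch. The window size, the max-timestamp watermark, and the event format are all simplifying assumptions for illustration, not Kafka's actual mechanics.

```python
from collections import defaultdict

WINDOW_MS = 1000  # tumbling window size, chosen for illustration

def window_counts(events, allowed_lateness_ms=500):
    """Count (event_time_ms, key) pairs per tumbling window, dropping
    events that arrive later than the watermark allows. The watermark
    here is simply the maximum event time seen so far -- a big
    simplification of what Kafka Streams actually tracks."""
    counts = defaultdict(int)  # (window_start, key) -> count
    watermark = 0
    for ts, key in events:
        watermark = max(watermark, ts)
        if ts < watermark - allowed_lateness_ms:
            continue  # event is too late: discard it
        window_start = (ts // WINDOW_MS) * WINDOW_MS
        counts[(window_start, key)] += 1
    return dict(counts)

# The (300, "a") event arrives after the watermark has advanced to
# 1200 ms, beyond the allowed lateness, so it is dropped; the final
# (100, "a") event is dropped for the same reason.
events = [(100, "a"), (900, "a"), (1200, "b"),
          (300, "a"), (2500, "a"), (100, "a")]
print(window_counts(events))
# {(0, 'a'): 2, (1000, 'b'): 1, (2000, 'a'): 1}
```

In Kafka Streams the equivalent logic is expressed declaratively (e.g. a windowed `groupByKey().count()` with a grace period) rather than written by hand.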
Fog and Edge Computing
by
Rajkumar Buyya, Satish Narayana Srirama
in
Applied physics
,
Cloud computing
,
Communication, Networking and Broadcast Technologies
2019,2018
A comprehensive guide to Fog and Edge applications, architectures, and technologies
Recent years have seen the explosive growth of the Internet of Things (IoT): the internet-connected network of devices that includes everything from personal electronics and home appliances to automobiles and industrial machinery. Responding to the ever-increasing bandwidth demands and privacy concerns of the IoT, Fog and Edge computing concepts have been developed to collect, analyze, and process data closer to devices, more efficiently than traditional cloud architecture.
Fog and Edge Computing: Principles and Paradigms provides a comprehensive overview of the state-of-the-art applications and architectures driving this dynamic field of computing, while highlighting potential research directions and emerging technologies.
Exploring topics such as developing scalable architectures, moving from closed systems to open systems, and ethical issues arising from data sensing, this timely book addresses both the challenges and opportunities that Fog and Edge computing presents. Contributions from leading IoT experts discuss federating Edge resources, middleware design issues, data management and predictive analysis, smart transportation and surveillance applications, and more. A coordinated and integrated presentation of topics helps readers gain thorough knowledge of the foundations, applications, and issues that are central to Fog and Edge computing.
This valuable resource:
Discusses IoT and new computing paradigms in the domain, such as Fog, Edge, and Mist
Provides insights on transitioning from current Cloud-centric and 4G/5G wireless environments to Fog computing
Examines methods to optimize virtualized, pooled, and shared resources
Identifies potential technical challenges and offers suggestions for possible solutions
Discusses major components of Fog and Edge computing architectures, such as middleware, interaction protocols, and autonomic management
Includes access to a website portal for advanced online resources
Fog and Edge Computing: Principles and Paradigms is an essential source of up-to-date information for systems architects, developers, researchers, and advanced undergraduate and graduate students in the fields of computer science and engineering.