51,015 result(s) for "Human-Computer Interaction"
Research methods in human-computer interaction
Research Methods in Human-Computer Interaction is a comprehensive guide to performing research, covering both quantitative and qualitative methods. Since the first edition was published in 2009, the book has been adopted for use at leading universities around the world, including Harvard University and Carnegie Mellon University.
From tool to partner : the evolution of human-computer interaction
This is the first comprehensive history of human-computer interaction (HCI). Whether you are a user-experience professional or an academic researcher, whether you identify with computer science, human factors, information systems, information science, design, or communication, you can discover how your experiences fit into the expanding field of HCI. You can determine where to look for relevant information in other fields--and where you won't find it. This book describes the different fields that have participated in improving our digital tools. It is organized chronologically, describing major developments across fields in each period. Computer use has changed radically, but many underlying forces are constant. Technology has changed rapidly, human nature very little. An irresistible force meets an immovable object. The exponential rate of technological change gives us little time to react before technology moves on. Patterns and trajectories described in this book provide your best chance to anticipate what could come next. We have reached a turning point. Tools that we built for ourselves to use are increasingly influencing how we use them, in ways that are planned and sometimes unplanned. The book ends with issues worthy of consideration as we explore the new world that we and our digital partners are shaping.
Research in the wild
The phrase "in-the-wild" is becoming popular again in the field of human-computer interaction (HCI), describing approaches to HCI research and accounts of user experience phenomena that differ from those derived from lab-based methods. The phrase first came to the forefront 20-25 years ago when anthropologists Jean Lave, Lucy Suchman, and Ed Hutchins began writing about cognition being in-the-wild. Today, it is used more broadly to refer to research that seeks to understand new technology interventions in everyday living. A reason for its resurgence in contemporary HCI is an acknowledgment that so much technology is now embedded and used in our everyday lives. Researchers have begun following suit, decamping from their usability and living labs and moving into the wild; carrying out in-situ development and engagement, sampling experiences, and probing people in their homes and on the streets. The aim of this book is to examine what this new direction entails and what it means for HCI theory, practice, and design. The focus is on the insights gained and the demands and concerns raised. But how does research in the wild differ from the other applied approaches in interaction design, such as contextual design, action research, or ethnography? What is added by labeling user research as being in-the-wild? One main difference is where the research starts and ends: unlike user-centered, and more specifically, ethnographic approaches, which typically begin by observing existing practices and then suggesting general design implications or system requirements, in-the-wild approaches create and evaluate new technologies and experiences in situ. Moreover, novel technologies are often developed to augment people, places, and settings, without necessarily designing them for specific user needs. There has also been a shift in design thinking.
Instead of developing solutions that fit in with existing practices, researchers are experimenting with new technological possibilities that can change and even disrupt behavior. Opportunities are created, interventions installed, and different ways of behaving are encouraged. A key concern is how people react to, change, and integrate these in their everyday lives. This book outlines the emergence and development of research in the wild. It is structured around a framework for conceptualizing and bringing together the different strands. It covers approaches, methods, case studies, and outcomes. Finally, it notes that there is more to in-the-wild research in HCI than usability and other kinds of user studies, and considers the implications of this for the field.
Deep learning in vision-based static hand gesture recognition
Hand gesture for communication has proven effective for humans, and active research is ongoing in replicating the same success in computer vision systems. Human–computer interaction can be significantly improved by advances in systems that are capable of recognizing different hand gestures. In contrast to many earlier works, which consider the recognition of significantly differentiable hand gestures, and therefore often select a few gestures from the American Sign Language (ASL) for recognition, we propose applying deep learning to the problem of hand gesture recognition for all 24 hand gestures obtained from Thomas Moeslund's gesture recognition database. We show that more biologically inspired deep neural networks such as the convolutional neural network and stacked denoising autoencoder are capable of learning the complex hand gesture classification task with lower error rates. The considered networks are trained and tested on data obtained from the above-mentioned public database; results are then compared against earlier works in which only small subsets of the ASL hand gestures are considered for recognition.
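The stacked denoising autoencoder mentioned in the abstract learns features by reconstructing a clean input from a deliberately corrupted copy, layer by layer. A minimal single-layer sketch in plain Python is below; the layer sizes, masking-noise level, learning rate, and tied-weight choice are illustrative assumptions, not the paper's actual settings:

```python
import math
import random

def corrupt(x, p, rng):
    """Masking noise: zero out each input value with probability p."""
    return [0.0 if rng.random() < p else v for v in x]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class DenoisingAutoencoder:
    """One layer of a stacked denoising autoencoder with tied weights."""
    def __init__(self, n_in, n_hidden, seed=0):
        self.rng = random.Random(seed)
        self.W = [[self.rng.uniform(-0.1, 0.1) for _ in range(n_in)]
                  for _ in range(n_hidden)]
        self.b = [0.0] * n_hidden   # encoder bias
        self.c = [0.0] * n_in       # decoder bias

    def encode(self, x):
        return [sigmoid(sum(w * v for w, v in zip(row, x)) + bj)
                for row, bj in zip(self.W, self.b)]

    def decode(self, h):
        return [sigmoid(sum(self.W[j][i] * h[j] for j in range(len(h))) + self.c[i])
                for i in range(len(self.c))]

    def train_step(self, x, noise=0.3, lr=0.5):
        """One gradient step: reconstruct the clean x from a corrupted copy.
        Returns the cross-entropy reconstruction loss before the update."""
        xt = corrupt(x, noise, self.rng)
        h = self.encode(xt)
        z = self.decode(h)
        dz = [zi - xi for zi, xi in zip(z, x)]          # sigmoid + cross-entropy
        dh = [sum(dz[i] * self.W[j][i] for i in range(len(x))) * h[j] * (1.0 - h[j])
              for j in range(len(h))]                    # error through tied weights
        for j in range(len(h)):
            for i in range(len(x)):
                # tied weights collect a decoder term and an encoder term
                self.W[j][i] -= lr * (dz[i] * h[j] + dh[j] * xt[i])
            self.b[j] -= lr * dh[j]
        for i in range(len(x)):
            self.c[i] -= lr * dz[i]
        eps = 1e-9
        return -sum(xi * math.log(zi + eps) + (1.0 - xi) * math.log(1.0 - zi + eps)
                    for xi, zi in zip(x, z))
```

In a stacked setup, each trained layer's `encode` output becomes the training input for the next layer, and a classifier is fine-tuned on top.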
Brave NUI world : designing natural user interfaces for touch and gesture
Touch and gestural devices have been hailed as the next evolutionary step in human-computer interaction. As software companies struggle to catch up with one another in terms of developing the next great touch-based interface, designers are charged with the daunting task of keeping up with the advances in new technology and this new aspect of user experience design. Product and interaction designers, developers, and managers are already well versed in UI design, but touch-based interfaces have added a new level of complexity. They need quick references and real-world examples in order to make informed decisions when designing for these particular interfaces. Brave NUI World is the first practical book for product and interaction developers designing touch and gesture interfaces. Written by developers of industry-first, multi-touch, multi-user products, this book gives you the necessary tools and information to integrate touch and gesture practices into your daily work, presenting scenarios, problem solving, metaphors, and techniques intended to avoid making mistakes.
  • Provides easy-to-apply design guidance for the unique challenge of creating touch- and gesture-based user interfaces
  • Considers diverse user needs and context, real world successes and failures, and a look into the future of NUI
  • Presents thirty scenarios, giving practitioners a multitude of considerations for making informed design decisions and helping to ensure that missteps are never made again
Affective state detection via facial expression analysis within a human–computer interaction context
The advancement in technology indicates that there is an opportunity to enhance human–computer interaction by way of affective state recognition. Affective state recognition is typically based on passive stimuli such as watching video clips, which does not reflect genuine interaction. This paper presents a study on affective state recognition using active stimuli, i.e. facial expressions of users when they attempt computerised tasks, particularly across typical usage of computer systems. A data collection experiment is presented for acquiring data from normal users whilst they interact with software, attempting to complete a set of predefined tasks. In addition, a hierarchical machine learning approach is presented for facial expression-based affective state recognition, which employs a Euclidean distance-based feature representation, conjointly with a customised encoding for users' self-reported affective states. Consequently, the aim is to find the potential relationship between the facial expressions, as defined by Paul Ekman, and the self-reported emotional states specified by users using Russell's Circumplex model, in relation to the actual feelings and affective states. The main findings of this study suggest that facial expressions cannot precisely reveal the actual feelings of users whilst interacting with common computerised tasks. Moreover, during active interaction tasks more variation occurs within the facial expressions of participants than during passive interaction.
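The Euclidean distance-based feature representation mentioned in the abstract can be illustrated with a small sketch: take the 2D facial landmark points detected in a frame, compute all pairwise Euclidean distances, and normalise by a reference distance so the features are invariant to face scale. The landmark indices and normalisation choice below are assumptions for illustration, not the paper's exact pipeline:

```python
import math
from itertools import combinations

def euclidean(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def distance_features(landmarks, ref_pair=(0, 1)):
    """Turn a list of (x, y) facial landmarks into a feature vector of all
    pairwise Euclidean distances, divided by the distance between a reference
    pair (e.g. the two eye centres) to remove dependence on face scale."""
    scale = euclidean(landmarks[ref_pair[0]], landmarks[ref_pair[1]])
    return [euclidean(p, q) / scale
            for p, q in combinations(landmarks, 2)]
```

For n landmarks this yields n(n-1)/2 features per frame, which a hierarchical classifier could then map to expression or affective-state labels.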