181 results for "Videospiel" (German for "video game")
Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors
Affective Computing has emerged as an important field of study that aims to develop systems that can automatically recognize emotions. Up to the present, elicitation has been carried out with non-immersive stimuli. This study, on the other hand, aims to develop an emotion recognition system for affective states evoked through Immersive Virtual Environments. Four alternative virtual rooms were designed to elicit four possible arousal-valence combinations, as described in each quadrant of the Circumplex Model of Affects. An experiment involving the recording of the electroencephalography (EEG) and electrocardiography (ECG) of sixty participants was carried out. A set of features was extracted from these signals using various state-of-the-art metrics that quantify brain and cardiovascular linear and nonlinear dynamics, which were input into a Support Vector Machine classifier to predict the subject’s arousal and valence perception. The model’s accuracy was 75.00% along the arousal dimension and 71.21% along the valence dimension. Our findings validate the use of Immersive Virtual Environments to elicit and automatically recognize different emotional states from neural and cardiac dynamics; this development could have novel applications in fields as diverse as Architecture, Health, Education and Videogames.
Lucid Data Dreaming for Video Object Segmentation
Convolutional networks reach top quality in pixel-level video object segmentation but require a large amount of training data (1k–100k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20×–1000× less annotated data than competing methods. Our approach is suitable for both single and multiple object segmentation. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize—“lucid dream” (in a lucid dream the sleeper is aware that he or she is dreaming and is sometimes able to control the course of the dream)—plausible future video frames. In-domain per-video training data allows us to train high quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows us to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the video object segmentation task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general “objectness” knowledge are required for the video object segmentation task.
Challenging local realism with human choices
A Bell test is a randomized trial that compares experimental observations against the philosophical worldview of local realism [1], in which the properties of the physical world are independent of our observation of them and no signal travels faster than light. A Bell test requires spatially distributed entanglement, fast and high-efficiency detection and unpredictable measurement settings [2,3]. Although technology can satisfy the first two of these requirements [4–7], the use of physical devices to choose settings in a Bell test involves making assumptions about the physics that one aims to test. Bell himself noted this weakness in using physical setting choices and argued that human ‘free will’ could be used rigorously to ensure unpredictability in Bell tests [8]. Here we report a set of local-realism tests using human choices, which avoids assumptions about predictability in physics. We recruited about 100,000 human participants to play an online video game that incentivizes fast, sustained input of unpredictable selections and illustrates Bell-test methodology [9]. The participants generated 97,347,490 binary choices, which were directed via a scalable web platform to 12 laboratories on five continents, where 13 experiments tested local realism using photons [5,6], single atoms [7], atomic ensembles [10] and superconducting devices [11]. Over a 12-hour period on 30 November 2016, participants worldwide provided a sustained data flow of over 1,000 bits per second to the experiments, which used different human-generated data to choose each measurement setting. The observed correlations strongly contradict local realism and other realistic positions in bipartite and tripartite [12] scenarios.
Project outcomes include closing the ‘freedom-of-choice loophole’ (the possibility that the setting choices are influenced by ‘hidden variables’ to correlate with the particle properties [13]), the utilization of video-game methods [14] for rapid collection of human-generated randomness, and the use of networking techniques for global participation in experimental science. The BIG Bell Test, which used an online video game with 100,000 participants worldwide to provide random bits to 13 quantum physics experiments, contradicts the Einstein–Podolsky–Rosen worldview of local realism.
Deep learning for procedural content generation
Procedural content generation in video games has a long history. Existing procedural content generation methods, such as search-based, solver-based, rule-based and grammar-based methods have been applied to various content types such as levels, maps, character models, and textures. A research field centered on content generation in games has existed for more than a decade. More recently, deep learning has powered a remarkable range of inventions in content production, which are applicable to games. While some cutting-edge deep learning methods are applied on their own, others are applied in combination with more traditional methods, or in an interactive setting. This article surveys the various deep learning methods that have been applied to generate game content directly or indirectly, discusses deep learning methods that could be used for content generation purposes but are rarely used today, and envisages some limitations and potential future directions of deep learning for procedural content generation.
Robotic Versus Human Coaches for Active Aging: An Automated Social Presence Perspective
This empirical study compares elderly people’s social perception of human versus robotic coaches in the context of an active and healthy aging program. In evaluating hedonic and utilitarian value perceptions of exergames (i.e., video games integrating physical activity), we consider elderly people’s judgments of the warmth and competence (i.e., social cognition) of their assigned coach (human vs. robot). The field experiments involve 58 elderly participants in a real-life context. Leveraging a mixed-method approach that combines quantitative and qualitative data, we show that (1) socially assistive robots activate feelings of (automated) social presence, (2) human coaches score higher on perceived warmth and competence relative to robotic coaches, and (3) social cognition affects elderly people’s experience (i.e., emotional and cognitive reactions and behavioral intentions) with respect to exergames. These findings can inform future developments and design of social robots and systems for their smoother inclusion into elderly people’s social networks. In particular, we recommend that socially assistive robots take complementary roles (e.g., motivational coach) and assist human caregivers in improving elderly people’s physical and psychosocial well-being.
Virtual reality experiences, embodiment, videogames and their dimensions in neurorehabilitation
Background: In the context of stroke rehabilitation, new training approaches mediated by virtual reality and videogames are usually discussed and evaluated together in reviews and meta-analyses. This represents a serious confounding factor that is leading to misleading, inconclusive outcomes in the interest of validating these new solutions. Main body: Extending existing definitions of virtual reality, in this paper I put forward the concept of the virtual reality experience (VRE), generated by virtual reality systems (VRS; i.e. a group of variable technologies employed to create a VRE). Then, I review the main components composing a VRE, and how they may purposely affect the mind and body of participants in the context of neurorehabilitation. In turn, VRS are no longer exclusive to VREs but are currently used in videogames and other human-computer interaction applications in different domains. Often, these other applications are called virtual reality applications because they use VRS; however, they do not necessarily create a VRE. I put emphasis on exposing fundamental similarities and differences between VREs and videogames for neurorehabilitation. I also recommend describing and evaluating the specific features encompassing the intervention rather than evaluating virtual reality or videogames as a whole. Conclusion: This disambiguation between VREs, VRS and videogames should help reduce confusion in the field. This is important for database searches when looking for specific studies or for building meta-reviews that aim to evaluate the efficacy of technology-mediated interventions.
Artificial Neural Networks and Deep Learning in the Visual Arts: a review
In this article, we perform an exhaustive analysis of the use of Artificial Neural Networks and Deep Learning in the Visual Arts. We begin by introducing changes in Artificial Intelligence over the years and examine in depth the latest work carried out in prediction, classification, evaluation, generation, and identification through Artificial Neural Networks for the different Visual Arts. While we highlight the contributions of photography and pictorial art, there are also other uses for 3D modeling, including video games, architecture, and comics. The results of the investigations discussed show that the use of Artificial Neural Networks in the Visual Arts continues to evolve and has recently experienced significant growth. To complement the text, we include a glossary and a table with information about the most commonly employed image datasets.
Leap motion controlled video game-based therapy for upper limb rehabilitation in patients with Parkinson’s disease: a feasibility study
Background: Non-immersive video games are currently being used as technological rehabilitation tools for individuals with Parkinson’s disease (PD). The aim of this feasibility study was to evaluate the effectiveness of the Leap Motion Controller® (LMC) system used with serious games designed for the upper limb (UL), as well as the levels of satisfaction and compliance among patients in mild-to-moderate stages of the disease. Methods: A non-probabilistic sampling of non-consecutive cases was performed. 23 PD patients, in stages II–IV of the Hoehn & Yahr scale, were randomized into two groups: an experimental group (n = 12) who received treatment based on serious games designed by the research team using the LMC system for the UL, and a control group (n = 11) who received a specific intervention for the UL. Grip muscle strength, coordination, speed of movements, fine and gross UL dexterity, as well as satisfaction and compliance, were assessed in both groups pre-treatment and post-treatment. Results: Within the experimental group, significant improvements were observed in all post-treatment assessments, except for the Box and Blocks test for the less affected side. Clinical improvements were observed for all assessments in the control group. Statistical intergroup analysis showed significant improvements in coordination, speed of movements and fine motor dexterity scores on the more affected side of patients in the experimental group. Conclusions: The LMC system and the serious games designed may be a feasible rehabilitation tool for the improvement of coordination, speed of movements and fine UL dexterity in PD patients. Further studies are needed to confirm these preliminary findings.
Identifying Stressors and Coping Strategies of Elite Esports Competitors
Researchers have examined some of the psychological aspects of competing at a high level in esports. The present study aims to build on this literature by examining the various stressors faced and the associated coping strategies employed by seven esports competitors. The interviews were inductively analysed, and the findings illustrated a range of internal (e.g., communication issues, lack of shared team goals) and external (e.g., event audience, media interviews) stressors that the participants faced. Following this, the coping strategies used to deal with these stressors were deductively analysed. A number of emotion- (e.g., breathing, relaxation), problem- (e.g., intra-team communication after matches), and approach- (e.g., team camps, delegating roles) coping strategies were described by participants. Avoidance coping strategies were predominantly highlighted as being used during games. Results are considered in line with how applied practitioners might support players to develop strategies to deal with stressors, which might in turn lead to performance enhancements.
Scaffolding depth cues and perceptual learning in VR to train stereovision: a proof of concept pilot study
Stereopsis is a valuable feature of human visual perception, which may be impaired or absent in amblyopia and/or strabismus but can be improved through perceptual learning (PL) and videogames. The development of consumer virtual reality (VR) may provide a useful tool for improving stereovision. We report a proof of concept study, especially useful for strabismic patients and/or those with reduced or null stereoacuity. Our novel VR PL strategy is based on a principled approach which included aligning and balancing the perceptual input to the two eyes, dichoptic tasks, exposure to large disparities, scaffolding depth cues and perception for action. We recruited ten adults with normal vision and ten with binocular impairments. Participants played two novel PL games (DartBoard and Halloween) using a VR-HMD. Each game consisted of three depth cue scaffolding conditions, starting with non-binocular and binocular cues to depth and ending with only binocular disparity. All stereo-anomalous participants improved in the game and most (9/10) showed transfer to clinical and psychophysical stereoacuity tests (mean stereoacuity changed from 569 to 296 arc seconds, P < 0.0001). Stereo-normal participants also showed in-game improvement, which transferred to psychophysical tests (mean stereoacuity changed from 23 to a ceiling value of 20 arc seconds, P = 0.001). We conclude that a VR PL approach based on depth cue scaffolding may provide a useful method for improving stereoacuity, and the in-game performance metrics may provide useful insights into principles for effective treatment of stereo anomalies. This study was registered as a clinical trial on 04/05/2010 with the identifier NCT01115283 at ClinicalTrials.gov.