148 result(s) for "Gloor, Peter A."
Can Plants Sense Humans? Using Plants as Biosensors to Detect the Presence of Eurythmic Gestures
This paper describes the preliminary results of measuring the impact of human body movements on plants. The scope of this project is to investigate if a plant perceives human activity in its vicinity. In particular, we analyze the influence of eurythmic gestures of human actors on lettuce and beans. In an eight-week experiment, we exposed rows of lettuce and beans to weekly eurythmic movements (similar to Qi Gong) of a eurythmist, while at the same time measuring changes in voltage between the roots and leaves of lettuce and beans using the plant spikerbox. We compared this experimental group of vegetables to a control group of vegetables whose voltage differential was also measured while not being exposed to eurythmy. We placed a plant spikerbox connected to lettuce or beans in the vegetable plot while the eurythmist was performing their gestures about 2 m away; a second spikerbox was connected to a control plant 20 m away. Using t-tests, we found a clear difference between the experimental and the control group, which was also verified with a machine learning model. In other words, the vegetables showed a noticeably different pattern in electric potentials in response to eurythmic gestures.
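The group comparison described above can be sketched with a Welch's t statistic over the two voltage series. The data below are synthetic placeholders, not the paper's spikerbox measurements, and the effect size is illustrative only:

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Synthetic stand-ins for the voltage differentials (mV)
random.seed(42)
exposed = [0.8 + random.gauss(0, 0.1) for _ in range(50)]  # near eurythmist
control = [0.5 + random.gauss(0, 0.1) for _ in range(50)]  # 20 m away
print(welch_t(exposed, control) > 2.0)  # large t => groups clearly differ
```

With a 0.3 mV mean gap and 0.1 mV noise, the statistic is far above any conventional significance threshold, mirroring the "clear difference" the abstract reports.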
Leveraging the Sensitivity of Plants with Deep Learning to Recognize Human Emotions
Recent advances in artificial intelligence combined with behavioral sciences have led to the development of cutting-edge tools for recognizing human emotions based on text, video, audio, and physiological data. However, these data sources are expensive, intrusive, and regulated, unlike plants, which have been shown to be sensitive to human steps and sounds. A methodology to use plants as human emotion detectors is proposed. Electrical signals from plants were tracked and labeled based on video data. The labeled data were then used for classification; the MLP, biLSTM, MFCC-CNN, MFCC-ResNet, Random Forest, 1-Dimensional CNN, and biLSTM (without windowing) models were tuned using a grid search algorithm with cross-validation. Finally, the best-parameterized models were trained and used on the test set for classification. The performance of this methodology was measured via a case study with 54 participants who were watching an emotionally charged video; as ground truth, their facial emotions were simultaneously measured using facial emotion analysis. The Random Forest model shows the best performance, particularly in recognizing high-arousal emotions, achieving an overall weighted accuracy of 55.2% and demonstrating high weighted recall in emotions such as fear (61.0%) and happiness (60.4%). The MFCC-ResNet model offers decently balanced results, with an accuracy of 0.318 and a recall of 0.324. With the MFCC-ResNet model, fear and anger were recognized with 75% and 50% recall, respectively.
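The window-then-classify step can be illustrated with a minimal sketch: cut the plant signal into windows, compute crude features per window, and label new windows. A nearest-centroid rule stands in for the paper's Random Forest here; the feature choice and all values are assumptions for illustration:

```python
def window_features(signal, size):
    """Cut a 1-D plant signal into fixed windows; per window keep the
    mean level and peak-to-peak range as two crude features."""
    feats = []
    for i in range(0, len(signal) - size + 1, size):
        w = signal[i:i + size]
        feats.append((sum(w) / size, max(w) - min(w)))
    return feats

def nearest_centroid(train, labels, x):
    """Toy stand-in for the Random Forest: assign the label of the
    closest class centroid in feature space."""
    by_label = {}
    for f, y in zip(train, labels):
        by_label.setdefault(y, []).append(f)
    def dist(feats, p):
        m = [sum(f[i] for f in feats) / len(feats) for i in range(2)]
        return (m[0] - p[0]) ** 2 + (m[1] - p[1]) ** 2
    return min(by_label, key=lambda y: dist(by_label[y], x))

# Hypothetical labeled windows: (mean, range) per window
train = [(0.10, 0.05), (0.12, 0.04), (0.50, 0.40), (0.48, 0.38)]
labels = ["calm", "calm", "fear", "fear"]
print(nearest_centroid(train, labels, (0.49, 0.39)))  # -> fear
```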
Comparing Synchronicity in Body Movement among Jazz Musicians with Their Emotions
This paper presents novel preliminary research that investigates the relationship between the flow of a group of jazz musicians, quantified through multi-person pose synchronization, and their collective emotions. We developed real-time software to calculate the physical synchronicity of team members by tracking the difference in arm, leg, and head movements using Lightweight OpenPose. We employ facial expression recognition to evaluate the musicians' collective emotions. Through correlation and regression analysis, we establish that higher levels of synchronized body and head movements correspond to lower levels of disgust, anger, and sadness, and higher levels of joy among the musicians. Furthermore, we utilize 1-D CNNs to predict the collective emotions of the musicians. The model leverages 17 body synchrony keypoint vectors as features, resulting in a training accuracy of 61.47% and a test accuracy of 66.17%.
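The core correlation step can be sketched as follows: score per-frame synchrony from the difference between two players' keypoint vectors, then correlate that series with an emotion series. The synchrony formula and all data are illustrative assumptions, not the paper's pipeline:

```python
def synchrony(pose_a, pose_b):
    """Per-frame body synchrony: 1 when two flattened keypoint vectors
    match exactly, decaying toward 0 as they diverge."""
    d = sum(abs(x - y) for x, y in zip(pose_a, pose_b)) / len(pose_a)
    return 1.0 / (1.0 + d)

def pearson(xs, ys):
    """Pearson correlation between a synchrony series and an emotion series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical frames: two musicians' flattened keypoints, plus joy scores
frames_a = [[0.1, 0.2], [0.4, 0.5], [0.9, 0.9]]
frames_b = [[0.1, 0.2], [0.5, 0.6], [0.1, 0.2]]
sync = [synchrony(a, b) for a, b in zip(frames_a, frames_b)]
joy = [0.9, 0.7, 0.2]
print(pearson(sync, joy) > 0)  # higher synchrony goes with higher joy
```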
Predicting Dog Emotions Based on Posture Analysis Using DeepLabCut
This paper describes an emotion recognition system for dogs that automatically identifies the emotions anger, fear, happiness, and relaxation. It is based on a previously trained machine learning model, which uses automatic pose estimation to differentiate emotional states of canines. Towards that goal, we have compiled a picture library of full-body dog pictures featuring 400 images with 100 samples each for the states “Anger”, “Fear”, “Happiness” and “Relaxation”. A new dog keypoint detection model was built using the framework DeepLabCut for animal keypoint detector training. The newly trained detector learned from a total of 13,809 annotated dog images and possesses the capability to estimate the coordinates of 24 different dog body part keypoints. Our application is able to determine a dog’s emotional state visually with an accuracy between 60% and 70%, exceeding human capability to recognize dog emotions.
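A typical preprocessing step for such keypoint-based classification is normalizing the detected coordinates so the classifier is invariant to image position and dog size. This is a generic sketch, not DeepLabCut's API; the two-point input stands in for the 24 detected body-part keypoints:

```python
def normalize_keypoints(kps):
    """Scale a list of (x, y) keypoints into the unit square, removing
    translation and scale before they feed an emotion classifier."""
    xs = [p[0] for p in kps]
    ys = [p[1] for p in kps]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in kps]

print(normalize_keypoints([(10, 40), (30, 80)]))  # -> [(0.0, 0.0), (1.0, 1.0)]
```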
Taming the snake in paradise: combining institutional design and leadership to enhance collaborative innovation
The growing expectations of public services and the pervasiveness of wicked problems in times characterized by growing fiscal constraints call for the enhancement of public innovation, and new research suggests that multi-actor collaboration in networks and partnerships is superior to hierarchical and market-based strategies when it comes to spurring such innovation. Collaborative innovation seems ideal as it builds on diversity to generate innovative public value outcomes, but there is a catch, since diversity may clash with the need for constructing a common ground that allows participating actors to agree on a joint and innovative solution. The challenge for collaborative innovation - taming the snake in paradise - is to nurture the diversity of views, ideas and forms of knowledge while still establishing a common ground for joint learning. While we know a great deal about the dynamics of the mutually supportive processes of collaboration, learning and innovation, we have yet to understand the role of institutional design and leadership in spurring collaborative innovation and dealing with this tension. Building on extant research, the article draws suitable cases from the Collaborative Governance Data Bank and uses Qualitative Comparative Analysis to explore how multiple constellations of institutional design and leadership spur collaborative innovation. The main finding is that, even though certain institutional design features reduce the need for certain leadership roles, the exercise of hands-on leadership is more important for securing collaborative innovation outcomes than hands-off institutional design.
Emotion Recognition in Horses with Convolutional Neural Networks
Creating intelligent systems capable of recognizing emotions is a difficult task, especially when looking at emotions in animals. This paper describes the process of designing a “proof of concept” system to recognize emotions in horses. This system is formed by two elements, a detector and a model. The detector is a fast region-based convolutional neural network that detects horses in an image. The model is a convolutional neural network that predicts the emotions of those horses. These two elements were trained with multiple images of horses until they achieved high accuracy in their tasks. In total, 400 images of horses were collected and labeled to train both the detector and the model while 40 were used to test the system. Once the two components were validated, they were combined into a testable system that would detect equine emotions based on established behavioral ethograms indicating emotional affect through the head, neck, ear, muzzle, and eye position. The system showed an accuracy of 80% on the validation set and 65% on the test set, demonstrating that it is possible to predict emotions in animals using autonomous intelligent systems. Such a system has multiple applications including further studies in the growing field of animal emotions as well as in the veterinary field to determine the physical welfare of horses or other livestock.
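The detector-plus-model design described above is a classic two-stage pipeline: locate each horse, then classify the emotion of each crop. The sketch below uses stub stages in place of the real region-based CNN detector and emotion CNN; the functions and data are hypothetical:

```python
def run_pipeline(image, detector, classifier):
    """Two-stage system: the detector returns horse bounding boxes,
    the classifier labels the emotion of each cropped horse."""
    emotions = []
    for (x0, y0, x1, y1) in detector(image):
        crop = [row[x0:x1] for row in image[y0:y1]]
        emotions.append(classifier(crop))
    return emotions

# Stub stages standing in for the trained neural networks
toy_detector = lambda img: [(0, 0, 2, 2)]  # one box, upper-left corner
toy_classifier = lambda crop: "relaxed" if sum(map(sum, crop)) < 2 else "agitated"

image = [[0, 0, 9], [0, 0, 9], [9, 9, 9]]
print(run_pipeline(image, toy_detector, toy_classifier))  # -> ['relaxed']
```

Keeping detection and classification as separate, independently trained stages is what lets each be validated on its own before the combined system is tested, as the abstract describes.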
Assessing the Predictive Power of Online Social Media to Analyze COVID-19 Outbreaks in the 50 U.S. States
As the coronavirus disease 2019 (COVID-19) continues to rage worldwide, the United States has become the most affected country, with more than 34.1 million total confirmed cases up to 1 June 2021. In this work, we investigate correlations between online social media and Internet search for the COVID-19 pandemic among the 50 U.S. states. By collecting the state-level daily trends through both Twitter and Google Trends, we observe a high but state-dependent lag correlation with the number of daily confirmed cases. We further find that the accuracy measured by the correlation coefficient is positively correlated with a state’s demographics, air traffic volume and GDP development. Most importantly, we show that a state’s early infection rate is negatively correlated with the lag to the previous peak in Internet searches and tweeting about COVID-19, indicating that earlier collective awareness on Twitter/Google correlates with a lower infection rate. Lastly, we demonstrate that correlations between online social media and search trends are sensitive to time, mainly due to the attention shifting of the public.
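The lag-correlation idea can be sketched by shifting the tweet-volume series against the case series and keeping the lag with the highest Pearson correlation. The toy series below (cases echoing tweets two days later) are illustrative, not the paper's data:

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def best_lag(tweets, cases, max_lag):
    """Lag (in days) at which the tweet-volume series best correlates
    with the confirmed-case series shifted `lag` days later."""
    scores = {lag: pearson(tweets[:len(tweets) - lag], cases[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=scores.get)

# Toy daily series: cases echo tweet volume two days later
tweets = [1, 3, 2, 5, 4, 1, 2]
cases = [0, 0, 1, 3, 2, 5, 4]
print(best_lag(tweets, cases, max_lag=3))  # -> 2
```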
Measuring Ethical Values with AI for Better Teamwork
Do employees with high ethical and moral values perform better? Comparing personality characteristics, moral values, and risk-taking behavior with individual and team performance has long been researched. Until now, these determinants of individual personality have been measured through surveys. However, individuals are notoriously bad at self-assessment. Combining machine learning (ML) with social network analysis (SNA) and natural language processing (NLP), this research draws on email conversations to predict the personal values of individuals. These values are then compared with the individual and team performance of employees. This prediction builds on a two-layered ML model. Building on features of social network structure, network dynamics, and network content derived from email conversations, we predict personality characteristics, moral values, and the risk-taking behavior of employees. In turn, we use these values to predict individual and team performance. Our results indicate that more conscientious and less extroverted team members increase the performance of their teams. Willingness to take social risks decreases the performance of innovation teams in a healthcare environment. Similarly, a focus on values such as power and self-enhancement increases the team performance of a global services provider. In sum, the contributions of this paper are twofold: it first introduces a novel approach to measuring personal values based on “honest signals” in emails. Second, these values are then used to build better teams by identifying ideal personality characteristics for a chosen task.
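The two-layered structure described above, email features to personal values, then values to performance, can be sketched as two chained functions. Both layers, their feature names, and their weights are hypothetical stand-ins; the sign pattern in layer 2 simply mirrors the reported finding that conscientiousness helps and extroversion hurts team performance:

```python
def predict_values(features):
    """Hypothetical layer 1: map email-derived network features
    (response latency, network centrality) onto two trait scores."""
    latency, centrality = features
    return {"conscientiousness": 1.0 - latency, "extraversion": centrality}

def predict_performance(values):
    """Hypothetical layer 2: weights are illustrative, not the paper's;
    only the signs follow the reported result."""
    return 0.7 * values["conscientiousness"] - 0.3 * values["extraversion"]

score = predict_performance(predict_values((0.2, 0.5)))
print(round(score, 2))  # -> 0.41
```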
Predicting Individual Well-Being in Teamwork Contexts Based on Speech Features
Current methods for assessing individual well-being in team collaboration at the workplace often rely on manually collected surveys. This limits continuous real-world data collection and proactive measures to improve team member workplace satisfaction. We propose a method to automatically derive social signals related to individual well-being in team collaboration from raw audio and video data collected in teamwork contexts. The goal was to develop computational methods and measurements to facilitate the mirroring of individuals’ well-being to themselves. We focus on how speech behavior is perceived by team members to improve their well-being. Our main contribution is the assembly of an integrated toolchain to perform multi-modal extraction of robust speech features in noisy field settings and to explore which features are predictors of self-reported satisfaction scores. We applied the toolchain to a case study, where we collected videos of 20 teams with 56 participants collaborating over a four-day period in a team project in an educational environment. Our audiovisual speaker diarization extracted individual speech features from a noisy environment. As the dependent variable, team members filled out a daily PERMA (positive emotion, engagement, relationships, meaning, and accomplishment) survey. These well-being scores were predicted using speech features extracted from the videos using machine learning. The results suggest that the proposed toolchain was able to automatically predict individual well-being in teams, leading to better teamwork and happier team members.
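After diarization assigns each speech turn to a speaker, per-person features can be aggregated and fed to the well-being regression. The two features below (speaking share and mean turn length) are illustrative assumptions, not the paper's feature set:

```python
def speech_features(turns):
    """turns: list of (speaker, seconds) from speaker diarization.
    Returns per-speaker (speaking share, mean turn length) - two simple
    candidate predictors for a PERMA regression model."""
    total = sum(d for _, d in turns)
    by_spk = {}
    for spk, d in turns:
        by_spk.setdefault(spk, []).append(d)
    return {spk: (sum(ds) / total, sum(ds) / len(ds))
            for spk, ds in by_spk.items()}

print(speech_features([("anna", 2.0), ("ben", 2.0), ("anna", 4.0)]))
# -> {'anna': (0.75, 3.0), 'ben': (0.25, 2.0)}
```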