918 result(s) for "Barry, Barbara"
Music, Culture and its Discontents Part 1
The article opens by discussing categories of art, literature and ideas, such as Classicism and Rationalism. Such normative criteria enable us to identify style features and evaluate innovative interpretations. The central part of the article considers motifs in music and other art-forms as anchors of identity. In the Renaissance secular vocal music of Dowland and Marenzio, motifs were often related to poetic contours, especially the contour of melancholy as »dying fall,« also seen in the context of the late Renaissance fin-de-siècle.
Batgirl, the Bronze Age omnibus
"Batgirl started her vigilante career when mild-mannered librarian Barbara Gordon, daughter of famed police commissioner Jim Gordon, attended a costume party gone awry. It wasn't long before the teenage genius crime-fighter became a regular feature of Batman's world and an icon to generations of young readers."-- Provided by publisher.
Under Threat
Using Jacob Bronowski’s criteria of tactics, as short-term solutions, and strategies, as long-range solutions of structural plans, the article considers how they provide points of identity and recognition in musical journeys: as deflective, intriguing time-profiles as musical stories – rather than Heinrich Schenker’s reductive spatial hierarchies. Contemporary cognitive theory underscores the vital connection between categories of identity and the play of the imagination – both within tonal and extended tonal language, and especially non-tonal contexts, where prime motifs, as in Bartók’s Bluebeard’s Castle, provide reference points in a psycho-drama of desire, and playing with fire. The closing reference to Bronowski reopens the intellectual landscape: that we are neither shut in nor shouted down by social media and the ‘woke’ society but retain the essential human capability of choice. This article is a response to what has been called the »new musicology«, whose original idea was to broaden the repertoire of composers and works. But while the premise was reasonable, its realization has become strident and intolerant of alternative viewpoints. Composers of the classical repertoire, including Bach and Brahms, were dismissed as »middle-aged white men«, and in their place women composers, gay, lesbian and transgender people, and minorities are brought to the fore. In such a reading, music has become a cultural artefact and nothing more: no longer a medium of expressive discourse, no longer an exemplar of style and craft within its own framework, and consequently without any criterion of value as to what is good or merely derivative. Proposing paradigms of evaluation, Jacob Bronowski in his book The Ascent of Man discusses how scientific innovation depends on two things: the first is a framework, and the second an imaginative vision of how to transform it. This applies not only to scientific discoveries but to all kinds of human creativity.
Accordingly, the article uses his terms: strategies for mapping, i.e. for navigating all aspects of life (what the cognitive linguist George Miller calls ‘plans of behaviour’, as anchors of reference and identity), and tactics as high-level innovative solutions in individual works. Tactics and strategies thus offer an alternative approach to the hierarchical analysis of Heinrich Schenker, which claims that all tonal works – regardless of period, form and style – are supported by the same structural background. Nevertheless, the art historian John Berger proposed that in the climate of our multi-layered contemporary world there is not just one story but different »ways of seeing«. The title ‘Under Threat’ describes how abandoning musical strategies leaves tactics without anchor or context – in effect, in free fall. But the end of the article, referring again to Bronowski, takes an unexpected turn: it is the fact that we never know where innovation will come from. Like Beethoven’s late quartets and gender theory, powerful creative innovation changes not only how we understand such works but also how we understand ourselves.
Patient apprehensions about the use of artificial intelligence in healthcare
While there is significant enthusiasm in the medical community about the use of artificial intelligence (AI) technologies in healthcare, few research studies have sought to assess patient perspectives on these technologies. We conducted 15 focus groups examining patient views of diverse applications of AI in healthcare. Our results indicate that patients have multiple concerns, including concerns related to the safety of AI, threats to patient choice, potential increases in healthcare costs, data-source bias, and data security. We also found that patient acceptance of AI is contingent on mitigating these possible harms. Our results highlight an array of patient concerns that may limit enthusiasm for applications of AI in healthcare. Proactively addressing these concerns is critical for the flourishing of ethical innovation and ensuring the long-term success of AI applications in healthcare.
Artificial intelligence–enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial
We conducted a pragmatic clinical trial to assess whether an electrocardiogram (ECG)-based, artificial intelligence (AI)-powered clinical decision support tool enables early diagnosis of low ejection fraction (EF), a condition that is underdiagnosed but treatable. In this trial (NCT04000087), 120 primary care teams from 45 clinics or hospitals were cluster-randomized to either the intervention arm (access to AI results; 181 clinicians) or the control arm (usual care; 177 clinicians). ECGs were obtained as part of routine care from a total of 22,641 adults (N = 11,573 intervention; N = 11,068 control) without prior heart failure. The primary outcome was a new diagnosis of low EF (≤50%) within 90 days of the ECG. The trial met the prespecified primary endpoint, demonstrating that the intervention increased the diagnosis of low EF in the overall cohort (1.6% in the control arm versus 2.1% in the intervention arm, odds ratio (OR) 1.32 (1.01–1.61), P = 0.007) and among those identified as having a high likelihood of low EF (that is, positive AI-ECG, 6% of the overall cohort) (14.5% in the control arm versus 19.5% in the intervention arm, OR 1.43 (1.08–1.91), P = 0.01). In the overall cohort, echocardiogram utilization was similar between the two arms (18.2% control versus 19.2% intervention, P = 0.17); for patients with positive AI-ECGs, more echocardiograms were obtained in the intervention arm than in the control arm (38.1% control versus 49.6% intervention, P < 0.001). These results indicate that use of an AI algorithm based on ECGs can enable the early diagnosis of low EF in the setting of routine primary care. In this pragmatic, cluster-randomized clinical trial, use of an AI algorithm for interpretation of electrocardiograms in primary care practices increased the frequency at which impaired heart function was diagnosed.
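As a quick sanity check, the odds ratios reported in this abstract can be reproduced from the stated event proportions. A minimal illustrative sketch (the proportions are taken from the abstract; the helper function is ours, not from the trial's analysis code):

```python
def odds_ratio(p_intervention: float, p_control: float) -> float:
    """Odds ratio between two event proportions."""
    odds_intervention = p_intervention / (1.0 - p_intervention)
    odds_control = p_control / (1.0 - p_control)
    return odds_intervention / odds_control

# Overall cohort: 2.1% (intervention) vs 1.6% (control)
or_overall = odds_ratio(0.021, 0.016)    # ≈ 1.32, as reported

# Positive AI-ECG subgroup: 19.5% vs 14.5%
or_positive = odds_ratio(0.195, 0.145)   # ≈ 1.43, as reported
```

Both values match the abstract's point estimates to two decimal places; the confidence intervals and P values would of course require the cluster-level trial data.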
Trustworthy and ethical AI-enabled cardiovascular care: a rapid review
Background Artificial intelligence (AI) is increasingly used for prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite the potential for AI to improve care, ethical concerns and mistrust in AI-enabled healthcare exist among the public and medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, to inform practice guidelines and regulatory policies that facilitate ethical and trustworthy use of AI in medicine, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients’ and healthcare providers’ perspectives when using AI in cardiovascular care. Methods In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients’, caregivers’, or healthcare providers’ perspectives. The search was completed on May 24, 2022 and was not limited by date or study design. Results After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%).
Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients’ interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversights. Conclusion This review revealed key ethical concerns and barriers and facilitators of trust in AI-enabled medical devices from patients’ and healthcare providers’ perspectives. Successful integration of AI into cardiovascular care necessitates implementation of mitigation strategies. These strategies should focus on enhanced regulatory oversight on the use of patient data and promoting transparency around the use of AI in patient care.
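The percentages of ethical concerns reported in this review follow directly from the per-concern article counts over the 145 included papers. A small illustrative check (counts and total taken from the abstract; the dictionary keys are our shorthand labels):

```python
included = 145  # total articles included in the review

# Number of included articles raising each ethical concern
concern_counts = {
    "privacy/security/confidentiality": 59,
    "healthcare inequity or disparity": 36,
    "risk of patient harm": 24,
    "accountability and responsibility": 19,
    "informed consent / autonomy": 17,
    "data ownership": 11,
}

# Share of the 145 included articles, to one decimal place
shares = {k: round(100 * n / included, 1) for k, n in concern_counts.items()}
# e.g. 59/145 → 40.7%, matching the reported figure
```

Note that the categories are not mutually exclusive (a single article can raise several concerns), so the shares sum to more than 100% minus the uncategorized remainder.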
Key Information Influencing Patient Decision-Making About AI in Health Care: Survey Experiment Study
Artificial intelligence (AI)-enabled devices are increasingly used in health care. However, there has been limited research on patients' informational preferences, including which elements of AI device labeling enhance patient understanding, trust, and acceptance. Clear and effective patient-facing communication is essential to address patient concerns and support informed decision-making regarding AI-enabled care. We evaluated 3 aims using simulated AI device labels in a cardiovascular context. First, we identified key information elements that influence patient trust and acceptance of an AI device. Second, we examined how these effects varied based on patient characteristics. Third, we explored how patients evaluated informational content of AI labels and their perceived effectiveness of the AI labels in informing decision-making about the use of AI device, building trust in the device, and shaping their intention to use it in their health care. We recruited 340 US patients from ResearchMatch.org to participate in a web-based survey that contained 2 experiments. In the discrete choice experiment, participants indicated preferences in terms of trust and acceptance regarding 16 pairs of simulated AI device labels that varied across 8 types of information needs identified in our previous qualitative work. In the single profile factorial experiment, participants evaluated 4 randomly assigned label prototypes regarding the label's legibility, comprehensibility, information overload, credibility, and perceived effectiveness in informing about the AI device, as well as participants' trust in the AI device and intention to use the device in their health care. Data were analyzed using mixed effects binary or ordinal logistic regression. 
The discrete choice experiment showed that information about regulatory approval, high device performance, provider oversight, and AI's value added to usual care significantly increased the likelihood of patient trust by 14.1%-19.3% and acceptance by 13.3%-17.9%. Subgroup analyses revealed variations based on patient characteristics such as familiarity with AI, health literacy, and recency of last medical checkup. The single profile factorial experiment showed that patients reported good label comprehension, and that information about provider oversight, regulatory approval, device performance, and AI's added value improved perceived credibility and effectiveness of the AI label (odds ratio [OR] range: 1.35-2.05), reduced doubts in the AI device (OR range: 0.61-0.77), and increased trust and intention to use the AI device (OR range: 1.47-1.73). However, information about data privacy and safety management protocols was less influential. Patients value information about an AI device's performance, provider oversight, regulatory status, and added value during decision-making. Providing transparent, easily understandable information about these aspects is critical to support patient determinations of trust and acceptance of AI-enabled health care. Information elements' impact on patient trust and acceptance varies by patient characteristics, highlighting the need for a tailored approach to address the concerns of diverse patient groups about AI in health care.
A framework for examining patient attitudes regarding applications of artificial intelligence in healthcare
Background While use of artificial intelligence (AI) in healthcare is increasing, little is known about how patients view healthcare AI. Characterizing patient attitudes and beliefs about healthcare AI and the factors that lead to these attitudes can help ensure patient values are in close alignment with the implementation of these new technologies. Methods We conducted 15 focus groups with adult patients who had a recent primary care visit at a large academic health center. Using modified grounded theory, focus-group data was analyzed for themes related to the formation of attitudes and beliefs about healthcare AI. Results When evaluating AI in healthcare, we found that patients draw on a variety of factors to contextualize these new technologies including previous experiences of illness, interactions with health systems and established health technologies, comfort with other information technology, and other personal experiences. We found that these experiences informed normative and cultural beliefs about the values and goals of healthcare technologies that patients applied when engaging with AI. The results of this study form the basis for a theoretical framework for understanding patient orientation to applications of AI in healthcare, highlighting a number of specific social, health, and technological experiences that will likely shape patient opinions about future healthcare AI applications. Conclusions Understanding the basis of patient attitudes and beliefs about healthcare AI is a crucial first step in effective patient engagement and education. The theoretical framework we present provides a foundation for future studies examining patient opinions about applications of AI in healthcare.
Patient information needs for transparent and trustworthy cardiovascular artificial intelligence: A qualitative study
As health systems incorporate artificial intelligence (AI) into various aspects of patient care, there is growing interest in understanding how to ensure transparent and trustworthy implementation. However, little attention has been given to what information patients need about these technologies to promote transparency of their use. We conducted three asynchronous online focus groups with 42 patients across the United States discussing perspectives on their information needs for trust and uptake of AI, focusing on its use in cardiovascular care. Data were analyzed using a rapid content analysis approach. Our results suggest that patients have a set of core information needs, including specific information factors pertaining to the AI tool, oversight, and healthcare experience, that are relevant to calibrating trust as well as perspectives concerning information delivery, disclosure, consent, and physician AI use. Identifying patient information needs is a critical starting point for calibrating trust in healthcare AI systems and designing strategies for information delivery. These findings highlight the importance of patient-centered engagement when developing AI model documentation and communicating and provisioning information about these technologies in clinical encounters.