151 result(s) for "Artificial life Fiction"
More science friction for less science fiction
AI-ready health datasets can be exploited to generate many research articles with potentially limited scientific value. A study in PLOS Biology highlights this problem by describing a recent, sudden explosion in papers analyzing the NHANES health dataset.
The opportunities and pitfalls of ChatGPT in clinical and translational medicine
[...]things don't go well, but it's an interesting exploration of the difference between human and machine intelligence. [...]ChatGPT has the potential to significantly impact the clinical and translational medicine fields by providing access to up-to-date information, improving patient engagement and reducing workloads for healthcare providers. [...]it would be part of GCP to train and validate the ChatGPT algorithm, for example, for diagnosis and therapy-accompanying applications on relevant, evidence-based knowledge bases, before it can be used.
Partners of Humans: A Realistic Assessment of the Role of Robots in the Foreseeable Future
As robots are generally thought to perform human-like tasks, they depend on the successes of information technology in the area of artificial intelligence to succeed in such pursuits. But robots, through their anthropomorphic character and their weighty presence in science fiction, attract the attention of the press and the media in a way that, at times, blurs the distinction between the actual state of the art and exaggerated claims. This makes it hard to assess the true functional positioning of robots, how this is likely to move forward and whether the outcome of progress could be detrimental to human society. The aim of this paper is to review the actual level of competence that is being achieved in robotics research laboratories and a plausible impact that this is likely to have on human control over life and jobs. The key thesis here is that cognition in machines and even an artificial form of consciousness lead to operations in a set of tasks (the ‘algorithmic’ category) which is different from that available to truly cognitive and conscious human beings (the ‘life-need’ category): that is, in the paper it is argued that a major category error (Ryle in The concept of mind, University of Chicago Press, Chicago, 1949) looms in predictions of serious threats to humanity. As far as a threat to jobs goes, it is argued that early attention to education and re-skilling of humans in the workplace can lead to an effective symbiosis between people and robots.
The Promise of Artificial Intelligence in the Field of Environmental Health
In this issue, President CDR Anna Khan explores how technology, specifically artificial intelligence (AI), can impact our work, the communities we serve, and our ability to respond in times of crisis.
A Sleight of Hand
Correspondence to Dr Emma Tumilty, Department of Bioethics and Health Humanities, The University of Texas Medical Branch at Galveston, Galveston, Texas, USA; emtumilt@utmb.edu Jecker et al1 offer a valuable analysis of risk discussion in relation to Artificial Intelligence (AI) and in the context of longtermism generally, a philosophy prevalent among technocrats and tech billionaires who significantly shape the direction of technological progress in our world. By making this argument, they are able to have abstract (and uncertain) benefits for an infinite group of people outweigh concrete harms for current people—a nifty trick given it also often seems to align with their own personal benefits related to status, power and profit-making. By now, many people will have seen the headlines and research regarding AI’s energy and water consumption.2 This issue is an actual X-risk—we exist in a time where climate change is already having significant effects, with more US$1 billion weather-related disasters occurring per year than ever before in the USA3 and decimating many communities and countries globally.
Artificial intelligence risks, attention allocation and priorities
Correspondence to Professor Yi Zeng; yi.zeng@ia.ac.cn Introduction Jecker et al critically analysed the predominant focus on existential risk (X-Risk) in artificial intelligence (AI) ethics, advocating for a balanced communication of AI’s risks and benefits and urging serious consideration of other urgent ethical issues alongside X-Risk.1 Building on this analysis, we argue for the necessity of acknowledging the unique attention-grabbing attributes of X-Risk and leveraging these traits to foster a comprehensive focus on AI ethics. In both the specific realm of ethical AI initiatives and the broader scope of AI risk management, the responses to X-Risk do not have the advantage of prioritising the allocation of resources over other related risks.2 This discrepancy suggests that, in terms of actual social resource allocation, X-Risks do not receive commensurate resources relative to the attention they attract. People conceptualise the magnitude of risk through a blend of sensory and cognitive stimulation, and the impact of larger and more exaggerated representations of risk is unmistakably clear. [...]merely considering the attention that accusations of X-Risk garner is insufficient to explain its prominence. Conclusion Based on the discussion above, we argue that the contemporary perception of X-Risk does not necessarily involve the direct allocation of resources away from other risks. [...]the unique characteristics of X-Risk can redirect attention towards other AI risks by highlighting their relevance, thereby promoting a broader public awareness of AI risks. [...]we aim to demonstrate that X-Risk is not entirely separate from other AI risks.
Let’s not be indifferent about robots: Neutral ratings on bipolar measures mask ambivalence in attitudes towards robots
Ambivalence, the simultaneous experience of both positive and negative feelings about one and the same attitude object, has been investigated within psychological attitude research for decades. Ambivalence is interpreted as an attitudinal conflict with distinct affective, behavioral, and cognitive consequences. In social psychological research, it has been shown that ambivalence is sometimes confused with neutrality due to the use of measures that cannot distinguish between neutrality and ambivalence. Likewise, in social robotics research the attitudes of users are often characterized as neutral. We assume that this is due to the fact that existing research regarding attitudes towards robots lacks the opportunity to measure ambivalence. In the current experiment (N = 45), we show that a neutral and a robot stimulus were evaluated equivalently when using a bipolar item, but evaluations differed greatly regarding self-reported ambivalence and arousal. This points to attitudes towards robots being in fact highly ambivalent, although they might appear neutral depending on the measurement method. To gain valid insights into people’s attitudes towards robots, positive and negative evaluations of robots should be measured separately, providing participants with measures to express evaluative conflict instead of administering bipolar items. Acknowledging the role of ambivalence in attitude research focusing on robots has the potential to deepen our understanding of users’ attitudes and their potential evaluative conflicts, and thus improve predictions of behavior from attitudes towards robots.
The future is now: artificial intelligence and beyond
Do androids indeed dream of electric sheep and maybe about other animals? Fig. 1 The original hard cover of “Do Androids Dream of Electric Sheep?”, the cult book on which the cult movie “Blade Runner” is based. Maybe this will help us focus on innovation rather than on authority, and on sharing points of view, instead of repeating what is known, sometimes also because we are pushed by a quantitative evaluation of our “scientific production”. Promotion: The journal's editorial staff also promotes the publication to the medical community, through email newsletters, social media, and other channels, to ensure that the research is widely disseminated and reaches its intended audience. ChatGPT: “Increasing the impact factor of a medical journal can be a long-term process that requires a concerted effort by the editorial board and the journal staff.