Catalogue Search | MBRL
Explore the vast range of titles available.
14,455 result(s) for "Min, John"
Daisy-chain gene drives for the alteration of local populations
by Noble, Charleston; DeBenedictis, Erika A.; Nowak, Martin A.
in Animals; Anopheles - genetics; Applied Biological Sciences
2019
If they are able to spread in wild populations, CRISPR-based gene-drive elements would provide new ways to address ecological problems by altering the traits of wild organisms, but the potential for uncontrolled spread tremendously complicates ethical development and use. Here, we detail a self-exhausting form of CRISPR-based drive system comprising genetic elements arranged in a daisy chain such that each drives the next. “Daisy-drive” systems can locally duplicate any effect achievable by using an equivalent self-propagating drive system, but their capacity to spread is limited by the successive loss of nondriving elements from one end of the chain. Releasing daisy-drive organisms constituting a small fraction of the local wild population can drive a useful genetic element nearly to local fixation for a wide range of fitness parameters without self-propagating spread. We additionally report numerous highly active guide RNA sequences sharing minimal homology that may enable evolutionarily stable daisy drive as well as self-propagating CRISPR-based gene drive. Especially when combined with threshold dependence, daisy drives could simplify decision-making and promote ethical use by enabling local communities to decide whether, when, and how to alter local ecosystems.
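The successive-loss mechanism the abstract describes can be caricatured in a few lines of Python. This is a toy sketch with made-up rates, not the authors' population-genetics model: a three-element chain in which C drives B and B drives A, while C itself is never driven and is diluted out by outcrossing with wild-type.

```python
def simulate_daisy_chain(generations, release_freq,
                         drive_efficiency=0.95, dilution=0.5):
    """Toy daisy-chain dynamics: track carrier frequencies of C, B, A.

    C (bottom of the chain) is nondriving, so its frequency shrinks each
    generation through mating with wild-type. B is driven wherever C is
    present; A is driven wherever B is present. All parameters are
    illustrative assumptions; frequencies are capped at 1.0.
    """
    c = b = a = release_freq
    history = []
    for _ in range(generations):
        history.append((c, b, a))
        # each element copies itself into offspring carrying its driver
        new_b = min(1.0, b + drive_efficiency * c * (1 - b))
        new_a = min(1.0, a + drive_efficiency * b * (1 - a))
        c *= dilution                # nondriving element lost by dilution
        b, a = new_b, new_a
        if c < 1e-3:                 # once its driver is gone, B also fades
            b *= dilution
    return history
```

Run with a small release fraction: the payload element A climbs toward local fixation while C (and eventually B) vanishes, so the drive exhausts itself rather than spreading indefinitely.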
Journal Article
Cervical kinematic change after posterior full-endoscopic cervical foraminotomy for disc herniation or foraminal stenosis
by
Park, Sung Bae
,
Paik, Seungyoon
,
Rhee, John Min
in
Analysis
,
Biology and Life Sciences
,
Biomechanical Phenomena
2023
Posterior full-endoscopic cervical foraminotomy (PECF) is one of the minimally invasive surgical techniques for cervical radiculopathy. Because posterior cervical structures such as the facet joint are minimally disrupted, cervical kinematics are minimally altered. However, a larger facet joint resection is required for cervical foraminal stenosis (FS) than for disc herniation (DH). The objective was to compare cervical kinematics between patients with FS and DH after PECF.
Fifty-two consecutive patients (DH, 34 vs. FS, 18) who underwent PECF for single-level radiculopathy were retrospectively reviewed. Clinical parameters (neck disability index, neck pain, and arm pain) and segmental, cervical, and global radiological parameters were compared at 3, 6, and 12 months postoperatively, and yearly thereafter. A linear mixed-effects model was used to assess interactions between group and time. Any occurrence of significant pain during follow-up was recorded over a mean follow-up period of 45.5 months (range 24-113 months).
Clinical parameters improved after PECF, with no significant differences between groups. Recurrent pain occurred in 6 patients, and surgery (PECF, anterior discectomy and fusion) was performed in 2 patients. The pain-free survival rate was 91% for DH and 83% for FS, with no significant difference between the groups (P = 0.29). Radiological changes did not differ between groups (P > 0.05). Segmental neutral and extension curvature became more lordotic. Cervical curvature became more lordotic on neutral and extension X-rays, and the range of cervical motion increased. The mismatch between T1-slope and cervical curvature decreased. Disc height did not change, but the index level showed degeneration at 2 years postoperatively.
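Pain-free survival rates like those above are the kind of quantity a Kaplan–Meier estimator yields. A minimal pure-Python sketch follows (generic method, with invented example data rather than the study's):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient (e.g. months)
    events : 1 if the event (recurrent pain) occurred at that time,
             0 if the patient was censored
    Returns a list of (time, survival_probability) steps.
    """
    data = sorted(zip(times, events))          # order patients by time
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # events and total exits (events + censorings) at this time point
        deaths = sum(e for tt, e in data if tt == t)
        exits = sum(1 for tt, _ in data if tt == t)
        if deaths > 0:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= exits
        i += exits
    return curve
```

Censored patients (event = 0) leave the risk set without dropping the curve, which is what lets unequal follow-up lengths (24-113 months here) be used honestly.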
Clinical and radiological outcomes after PECF were not different between DH and FS patients and kinematics were significantly improved. These findings may be informative in a shared decision-making process.
Journal Article
Propaganda’s Role in Liberal Democratic Societies
2018
Stanley and Min discuss how propaganda works in liberal democratic societies. Stanley observes that the inability to address the crisis of liberal democracies can be partially explained by contemporary political philosophy’s penchant for idealized theorizing about norms of justice over transitions from injustice to justice. Whereas ancient and modern political philosophers took seriously propaganda and demagoguery of the elites and populists, contemporary political philosophers have tended to theorize about the idealized structures of justice. This leads to a lack of theoretical constructs and explanatory tools by which we can theorize about real-life political problems, such as mass incarceration. Starting with this premise, Stanley provides an explanation of how propaganda works and the mechanisms that enable propaganda. Stanley further theorizes the pernicious effects that elitism, populism, authoritarianism, and “post-truth” have on democratic politics.
Journal Article
Inclusion and the Epistemic Benefits of Deliberation
2016
Contrary to popular belief, I argue that a more inclusive polity does not necessarily conflict with the goal of improving the epistemic capacities of deliberation. My argument examines one property of democracy that is usually thought of in non-epistemic terms: inclusion. Inclusion is not only valuable for moral reasons; it also has epistemic virtues. I consider two epistemic benefits of inclusive deliberation: (a) inclusive deliberation helps to create a more complete picture of the world in which everyone dwells together; and (b) inclusive deliberation can help reduce biases and errors endemic to a society. Having advanced two epistemic arguments for inclusive deliberation, I argue that the Deweyan model best captures the knowledge-pooling function of deliberation.
Journal Article
Epistocracy and democratic epistemology
by Min, John B
in Epistemology
2015
Epistocracy, the rule of the experts or educated, poses a significant challenge to authentic democratic rule. Epistocrats typically reason from the premise, “experts have knowledge of political truths,” to the conclusion, “experts should have the authority to rule.” There may be powerful moral reasons for thinking that the inference is fallacious. Invoking a public reason standard of acceptability, David Estlund makes a powerful argument of this sort. I argue that Estlund’s argument against epistocracy overlooks democratic epistemology, which can and should be utilized to strengthen the epistemic merits of democratic rule. I therefore examine whether democracy’s epistemic value can rest on a formal epistemic model. The inadequacy of the formal epistemic model leads us to defend democratic epistemology differently, in two steps. The first step casts doubt on the epistemic merits of expert rule in two ways: first, experts sometimes lack access to the privileged information of citizens who bear the consequences of expert decisions; second, experts themselves can be biased. I argue that democratic deliberation can offset these two disadvantages of expert rule. The second step examines the epistemic value of inclusive democratic rule.
Journal Article
Classifying Patient Complaints Using Artificial Intelligence–Powered Large Language Models: Cross-Sectional Study
by Quek, Queenie; Wong, Eunice Rui Ning; Koh, Sky Wei Chee
in AI Language Models in Health Care; Artificial Intelligence; Care and treatment
2025
Patient complaints provide valuable insights into the performance of health care systems, highlighting potential risks not apparent to staff. Patient complaints can drive systemic changes that enhance patient safety. However, manual categorization and analysis pose a huge logistical challenge, hindering the ability to harness the potential of these data.
This study aims to evaluate the accuracy of artificial intelligence (AI)-powered categorization of patient complaints in primary care based on the Healthcare Complaint Analysis Tool (HCAT) General Practice (GP) taxonomy and assess the importance of advanced large language models (LLMs) in complaint categorization.
This cross-sectional study analyzed 1816 anonymous patient complaints from 7 public primary care clinics in Singapore. Complaints were first coded by trained human coders using the HCAT (GP) taxonomy through a rigorous process involving independent assessment and consensus discussions. LLMs (GPT-3.5 turbo, GPT-4o mini, and Claude 3.5 Sonnet) were used to validate manual classification. Claude 3.5 Sonnet was further used to identify complaint themes. LLM classifications were assessed for accuracy and consistency with human coding using accuracy and F1-score. Cohen κ and McNemar test evaluated AI-human agreement and compared AI models' concordance, respectively.
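Cohen's κ, used above to quantify AI-human agreement, has a compact pure-Python form (a generic sketch of the standard statistic, not the study's code):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    labels_a, labels_b : parallel lists of category labels, e.g. human
    coder vs. LLM classifications of the same complaints.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # expected agreement if the two raters labelled independently
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    if p_e == 1.0:          # degenerate case: both raters use one label
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

κ = 1 means perfect agreement, κ = 0 means no agreement beyond chance, which is why the paper's range of κ = 0.114-0.623 reads as slight-to-substantial agreement depending on the field.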
The majority of complaints fell under the HCAT (GP) domain of management (1079/1816, 59.4%), specifically relating to institutional processes (830/1816, 45.7%). Most complaints were of medium severity (994/1816, 54.7%), occurred within the practice (627/1816, 34.5%), and resulted in minimal harm (75.4%). LLMs achieved moderate to good accuracy (58.4%-95.5%) in HCAT (GP) field classifications, with GPT-4o mini generally outperforming GPT-3.5 turbo, except in severity classification. All 3 LLMs demonstrated moderate concordance rates (average 61.9%-68.8%) in complaints classification with varying levels of agreement (κ=0.114-0.623). GPT-4o mini and Claude 3.5 significantly outperformed GPT-3.5 turbo in several fields (P<.05), such as domain and stage of care classification. Thematic analysis using Claude 3.5 identified long wait times (393/1816, 21.6%), staff attitudes (287/1816, 15.8%), and appointment booking issues (191/1816, 10.5%) as the top concerns, which accounted for nearly half of all complaints.
Our study highlighted the potential of LLMs in classifying patient complaints in primary care using HCAT (GP) taxonomy. While GPT-4o and Claude 3.5 demonstrated promising results, further fine-tuning and model training are required to improve accuracy. Integrating AI into complaint analysis can facilitate proactive identification of systemic issues, ultimately enhancing quality improvement and patient safety. By leveraging LLMs, health care organizations can prioritize complaints and escalate high-risk issues more effectively. Theoretically, this could lead to improved patient care and experience; further research is needed to confirm this potential benefit.
Journal Article
DropNet: Reducing Neural Network Complexity via Iterative Pruning
2022
Modern deep neural networks require a significant amount of computing time and power to train and deploy, which limits their usage on edge devices. Inspired by the iterative weight pruning in the Lottery Ticket Hypothesis, we propose DropNet, an iterative pruning method which prunes nodes/filters to reduce network complexity. DropNet iteratively removes nodes/filters with the lowest average post-activation value across all training samples. Empirically, we show that DropNet is robust across diverse scenarios, including MLPs and CNNs using the MNIST, CIFAR-10 and Tiny ImageNet datasets. We show that up to 90% of the nodes/filters can be removed without any significant loss of accuracy. The final pruned network performs well even with reinitialization of the weights and biases. DropNet also has similar accuracy to an oracle which greedily removes nodes/filters one at a time to minimise training loss, highlighting its effectiveness.
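The pruning criterion the abstract describes (drop the nodes/filters with the lowest average post-activation value over the training set) can be sketched in plain Python. The layer layout and function names here are illustrative assumptions, not the paper's implementation:

```python
def prune_mask(activations, drop_fraction):
    """DropNet-style node selection for one layer.

    activations   : activations[s][j] is the post-activation output of
                    node j on training sample s
    drop_fraction : fraction of nodes to prune this iteration
    Returns a keep-mask (True = node survives).
    """
    n_samples = len(activations)
    n_nodes = len(activations[0])
    # average post-activation value of each node across all samples
    avg = [sum(activations[s][j] for s in range(n_samples)) / n_samples
           for j in range(n_nodes)]
    n_drop = int(drop_fraction * n_nodes)
    # nodes with the lowest averages are pruned
    order = sorted(range(n_nodes), key=lambda j: avg[j])
    dropped = set(order[:n_drop])
    return [j not in dropped for j in range(n_nodes)]
```

In the iterative scheme, one would retrain after each call and reapply the mask, repeating until the target sparsity (up to ~90% in the paper's experiments) is reached.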
Towards Fast and Adaptable Agents via Goal-Directed, Memory-Based Learning
How do humans learn so fast? A young kid, when shown examples of a giraffe on a flashcard, can associate it with a real-life giraffe zero-shot. AI systems struggle to perform such association even with many examples. This thesis documents how we can imbue such fast learning and adaptive capabilities into AI systems, drawing inspiration from human cognition. The first part of this thesis deals with the more fundamental part of learning via pruning weights/nodes (see Chapter 2: DropNet) and how it can be used for memory consolidation. The next part deals with how experiences are learned with rewards, with a deep dive into the AlphaGo/AlphaZero mechanism (see Chapter 3: Brick Tic Tac Toe). While capable of learning optimal moves after a long training time, these systems are brittle and do not generalise well to small changes in the environment. This suggested that Reinforcement Learning (RL) systems are not the right approach for fast learning and adaptable agents, and led to the discovery that overemphasis on optimisation and maximisation of rewards hampers generalisation. The real world is ever-changing, and methods based on maximising rewards, like RL, will have a hard time adapting. We experiment with imbuing memory (see Chapter 4: Hippocampal Replay) and goals (see Chapter 5: Goal-Directed Intrinsic Reward (GDIR)) for better learning. After a long period of pondering, the key insight was eventually found: instead of learning from reward, we learn from self-supervised learning of the next action given the current state and goal state. We then use this next-action prediction, plus memory of past state transitions, to choose a good action to do next. This led to the landmark idea, "Learning, Fast and Slow", which was awarded IEEE ICDL Best Paper Finalist (see Chapter 6). As Large Language Models (LLMs) grew in popularity in 2022, we incorporated them to improve the "Learning, Fast and Slow" idea.
We experimented with how to imbue multiple abstraction spaces of memory and various skills into an LLM-based agent to solve the Abstraction and Reasoning Corpus (ARC) Challenge (see Chapter 7), showcasing an initial prototype of how a generally capable agent can arise simply by giving it the right context and action spaces. This idea is improved upon in TaskGen (see Chapter 8), which breaks down an arbitrary task into subtasks and assigns each subtask to an Equipped Function or Inner Agent to execute, improving the reliability and robustness of the methods in Chapters 6 and 7. This thesis documents the beginning of the journey towards building fast and adaptable agents; the work will be continued under the open-source project AgentJo (see Chapter 9).
Dissertation
On Localizing the Effects of CRISPR Gene-Drives and Safeguarding Our Shared Future
2020
Through advances in homing endonuclease engineering and the advent of the CRISPR-class RNA-guided endonucleases, the construction, testing, and field deployment of gene drive technology have moved rapidly from theory into practice in laboratories around the world. As with any new technology, these advances bring both benefits and a slew of new concerns over the technology’s potential for catastrophic accidents, dual use, and long-term environmental impact. Thus, it is imperative to develop methods that limit the unbounded spread of classical gene drive designs, which will not only expand the utility and benefits of gene drive applications through localization of deployment, but also reduce the potential impact of an accidental escape during laboratory research. This project sets out to define and explore the possibility of containing, or localizing, a gene drive deployment by simultaneously introducing weaknesses into the self-propagating elements of a gene drive and taking advantage of bottlenecks in mating populations to limit the spread of a given gene drive to a local gene pool. This is accomplished by breaking up the components of a gene drive into several interdependent, self-driving elements, forming the daisy-chain gene drive system. In such a system, one of the essential components of the drive is not self-driving and is thus diluted through repeated mating events with wild-type. To further expand on this idea, one can seed multiple copies of the non-driving element throughout an organism’s genome, allowing further control over the number of generations the drive can copy itself, through a technique called daisyfield gene drive.
Finally, in daisy-quorum drives, drive elements are inserted directly into, and replace the function of, essential haploinsufficient genes in the genome, reducing the viability, and thus the evolutionary fitness, of hybrid offspring that result from mating events with wild-type organisms. Based on these techniques, a new generation of gene drives can be engineered and deployed that are limited in their ability to spread and pose a significantly lower risk of permanently altering the genome of the global population of an entire species. Combined with the research safety techniques outlined in this project, it is my hope that the techniques outlined in this work can improve the safety of future gene drive research and pave the way for eventual deployment.
Dissertation