Catalogue Search | MBRL
Explore the vast range of titles available.
19 result(s) for "Lin, Nanyun"
Trisulfide Bond‐Mediated Molecular Phototheranostic Platform for “Activatable” NIR‐II Imaging‐Guided Enhanced Gas/Chemo‐Hypothermal Photothermal Therapy
by Xiao, Hao; Wu, Yinyin; Fu, Qian
in activatable fluorescence imaging; Cancer therapies; chemodynamic therapy
2023
Tumor microenvironment (TME)‐triggered phototheranostic platforms offer a feasible strategy to improve cancer diagnosis accuracy and minimize treatment side effects. Developing a stable and biocompatible molecular phototheranostic platform for TME‐activated second near‐infrared (NIR‐II) fluorescence imaging‐guided multimodal cascade therapy is a promising route to desirable anticancer agents. Herein, a new NIR‐II fluorescence imaging‐guided activatable molecular phototheranostic platform (IR‐FEP‐RGD‐S‐S‐S‐Fc) is presented for actively targeted tumor imaging and hydrogen sulfide (H2S) gas‐enhanced chemodynamic‐hypothermal photothermal combined therapy (CDT/HPTT). It is revealed for the first time that the coupling distance between IR‐FE and ferrocene is proportional to the photoinduced electron transfer (PET) efficiency, and that an aqueous environment favors PET. The cyclic‐RGDfK (cRGDfK) peptide moiety targets the tumor and promotes endocytosis of the nanoparticles. The high concentration of glutathione (GSH) in the TME separates the fluorescent molecule from ferrocene by cleaving the GSH‐sensitive trisulfide bond, realizing light‐up NIR‐II fluorescence imaging and a cascade of trimodal synergistic CDT/HPTT/gas therapy (GT). In addition, the accumulation of hydroxyl radicals (•OH) and down‐regulation of glutathione peroxidase 4 (GPX4) produce excessive harmful lipid hydroperoxides, ultimately leading to ferroptosis. In summary, a new phototheranostic platform is developed that uses the cyclic‐RGDfK (cRGDfK) peptide for active tumor cell targeting and achieves photoinduced electron transfer (PET) blockade in the glutathione (GSH)‐rich tumor environment for fluorescence‐specific illumination and therapy.
Journal Article
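A minimal reaction sketch of the GSH‐triggered chemistry this abstract describes (cleavage of the trisulfide linker to release H2S, and Fenton‐type hydroxyl‐radical generation at the ferrocene unit). The stoichiometry and the assignment of ferrocene's iron as the Fenton centre are assumptions for illustration, not equations taken from the paper:

\begin{align*}
\mathrm{R\!-\!S\!-\!S\!-\!S\!-\!R' + 4\,GSH} &\rightarrow \mathrm{R\!-\!SH + R'\!-\!SH + 2\,GSSG + H_2S} && \text{(GSH-triggered trisulfide cleavage; gas therapy)}\\
\mathrm{Fe^{2+} + H_2O_2} &\rightarrow \mathrm{Fe^{3+} + {}^{\bullet}OH + OH^{-}} && \text{(Fenton-type radical generation; CDT)}
\end{align*}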
AuCu@CuO2 Aerogels with H2O2/O2 Self‐Supplying and Quadruple Enzyme‐Like Activity for MRSA‐Infected Diabetic Wound Management
by Lin, Nanyun; Yang, Qinglai; Wu, Yingying
in Animals; Anti-Bacterial Agents - pharmacology; Copper - chemistry
2025
Diabetic wound healing presents serious clinical challenges due to the unique wound microenvironment characterized by hyperglycemia, bacterial infection, excessive oxidative stress, and hypoxia. Herein, a copper peroxide (CuO2)‐coated AuCu bimetallic aerogel is developed that exhibits quadruple enzyme‐mimicking activity and H2O2/O2 self‐supply to modulate the complex microenvironment of methicillin‐resistant Staphylococcus aureus (MRSA)‐infected diabetic wounds. The AuCu@CuO2 aerogels demonstrate favorable photothermal properties and mimic four enzyme‐like activities: peroxidase‐like activity for producing toxic reactive oxygen species; catalase‐like activity for decomposing H2O2 to release O2, relieving oxidative stress and hypoxia; glucose oxidase‐like activity for reducing excessive blood glucose; and glutathione peroxidase‐like activity for balancing abnormal glutathione levels. The CuO2 coating facilitates continuous and adequate in situ production of H2O2 within the mildly acidic infection microenvironment, enabling excellent antibacterial activity and reduced blood glucose levels during the initial treatment of infected diabetic wounds. Furthermore, the engineered AuCu@CuO2 aerogels not only scavenge elevated ROS during the inflammatory phase but also synergistically generate oxygen to promote wound healing. Overall, the AuCu@CuO2 aerogels can be activated by the diabetic wound infection microenvironment, alleviating inflammation, reducing hypoxia, lowering blood glucose levels, and enhancing angiogenesis and collagen fiber accumulation, thereby significantly improving diabetic wound healing.
Journal Article
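A minimal reaction sketch consistent with the H2O2/O2 self‐supply and the four enzyme‐like activities named in this abstract. The individual equations are the standard assignments for CuO2/Cu‐based nanozymes and are included as illustrative assumptions, not equations reported in the record:

\begin{align*}
\mathrm{CuO_2 + 2\,H^+} &\rightarrow \mathrm{Cu^{2+} + H_2O_2} && \text{(acid-triggered $\mathrm{H_2O_2}$ self-supply)}\\
\mathrm{Cu^{+} + H_2O_2} &\rightarrow \mathrm{Cu^{2+} + {}^{\bullet}OH + OH^{-}} && \text{(peroxidase/Fenton-like ROS generation)}\\
\mathrm{2\,H_2O_2} &\rightarrow \mathrm{2\,H_2O + O_2} && \text{(catalase-like $\mathrm{O_2}$ release)}\\
\text{glucose} + \mathrm{O_2} &\rightarrow \text{gluconic acid} + \mathrm{H_2O_2} && \text{(glucose-oxidase-like)}\\
\mathrm{2\,GSH + H_2O_2} &\rightarrow \mathrm{GSSG + 2\,H_2O} && \text{(glutathione-peroxidase-like)}
\end{align*}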
CIB1 and CaBP1 bind to the myo1c regulatory domain
by Tang, Nanyun; Foskett, J. Kevin; Lin, Tianming
in Amino Acid Motifs; Animals; Calcium-Binding Proteins - metabolism
2007
Myo1c is a member of the myosin-I family that binds phosphoinositides and links the actin cytoskeleton to cellular membranes. Recent investigations suggest that targeting of myo1c to some subcellular regions requires the binding of an unknown protein to the IQ motifs in the myo1c regulatory domain. We identify two myristoylated proteins that bind the myo1c regulatory domain: calcium-binding protein 1 (CaBP1) and calcium- and integrin-binding protein 1 (CIB1). CIB1 and CaBP1 interact with myo1c in vivo as determined by pull-down experiments and fluorescence microscopy, where the endogenously expressed proteins show extensive cellular colocalization with myo1c. CIB1 and CaBP1 bind to the myo1c IQ motifs in the regulatory domain, where they compete with calmodulin for binding. CaBP1 has a higher apparent affinity for myo1c than CIB1, and both proteins compete more effectively with calmodulin in the presence of calcium. We propose that these proteins may play a role in specifying subcellular localization of myo1c.
Journal Article
Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge
2023
The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually. One important step towards this goal is to localize and track key active objects that undergo major state change as a consequence of human actions/interactions with the environment, without being told exactly what/where to ground (e.g., localizing and tracking the `sponge` in video from the instruction "Dip the `sponge` into the bucket."). While existing works approach this problem from a pure vision perspective, we investigate to what extent the textual modality (i.e., task instructions) and its interaction with the visual modality can be beneficial. Specifically, we propose to improve phrase grounding models' ability to localize the active objects by: (1) learning the role of `objects undergoing change` and extracting them accurately from the instructions, (2) leveraging pre- and post-conditions of the objects during actions, and (3) recognizing the objects more robustly with descriptional knowledge. We leverage large language models (LLMs) to extract the aforementioned action-object knowledge, and design a per-object aggregation masking technique to effectively perform joint inference on object phrases and symbolic knowledge. We evaluate our framework on the Ego4D and Epic-Kitchens datasets. Extensive experiments demonstrate the effectiveness of our proposed framework, which leads to >54% improvements in all standard metrics on the TREK-150-OPE-Det localization + tracking task, >7% improvements in all standard metrics on the TREK-150-OPE tracking task, and >3% improvements in average precision (AP) on the Ego4D SCOD task.
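As a rough illustration of the per-object aggregation idea described above (combining a target phrase's per-box grounding scores with the scores of its associated knowledge phrases, such as pre-/post-conditions and descriptions, before picking a box), here is a hypothetical Python sketch. The function name, the averaging rule, and the blending weight are assumptions, not the paper's exact formulation.

# Hypothetical per-object score aggregation; not the authors' implementation.
from typing import Dict, List
import numpy as np

def aggregate_object_scores(
    box_scores: Dict[str, np.ndarray],   # phrase -> per-box grounding scores, shape (num_boxes,)
    object_phrase: str,
    knowledge_phrases: List[str],
    knowledge_weight: float = 0.5,       # assumed blending weight
) -> np.ndarray:
    """Blend the object phrase's scores with the mean score of its knowledge phrases."""
    obj = box_scores[object_phrase]
    if not knowledge_phrases:
        return obj
    know = np.mean([box_scores[p] for p in knowledge_phrases], axis=0)
    return (1 - knowledge_weight) * obj + knowledge_weight * know

# Toy example: pick the candidate box with the highest aggregated score.
scores = {
    "sponge": np.array([0.2, 0.7, 0.1]),
    "object that can absorb liquid": np.array([0.3, 0.6, 0.2]),
    "object that becomes wet after dipping": np.array([0.1, 0.8, 0.3]),
}
agg = aggregate_object_scores(scores, "sponge",
                              ["object that can absorb liquid",
                               "object that becomes wet after dipping"])
best_box = int(np.argmax(agg))  # -> 1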
VDebugger: Harnessing Execution Feedback for Debugging Visual Programs
2024
Visual programs are executable code generated by large language models to address visual reasoning problems. They decompose complex questions into multiple reasoning steps and invoke specialized models for each step to solve the problems. However, these programs are prone to logic errors, with our preliminary evaluation showing that 58% of the total errors are caused by program logic errors. Debugging complex visual programs remains a major bottleneck for visual reasoning. To address this, we introduce VDebugger, a novel critic-refiner framework trained to localize and debug visual programs by tracking execution step by step. VDebugger identifies and corrects program errors leveraging detailed execution feedback, improving interpretability and accuracy. The training data is generated through an automated pipeline that injects errors into correct visual programs using a novel mask-best decoding technique. Evaluations on six datasets demonstrate VDebugger's effectiveness, showing performance improvements of up to 3.2% in downstream task accuracy. Further studies show VDebugger's ability to generalize to unseen tasks, bringing a notable improvement of 2.3% on the unseen COVR task. Code, data and models are made publicly available at https://github.com/shirley-wu/vdebugger/
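A hypothetical sketch of the "mask-best" error-injection idea described above: at a chosen position of a correct visual program, the most likely (original) token is masked out and the next most likely token under a language model is substituted, yielding a plausible but incorrect program for critic-refiner training. The function name and the single-position selection rule are assumptions; the released code at the linked repository is the authoritative reference.

# Hypothetical mask-best error injection; illustrative only.
from typing import List
import numpy as np

def inject_error(tokens: List[int], logits: np.ndarray, position: int) -> List[int]:
    """Replace tokens[position] with the most likely alternative token id."""
    scores = logits[position].copy()
    scores[tokens[position]] = -np.inf   # mask the best (original) token
    corrupted = list(tokens)
    corrupted[position] = int(np.argmax(scores))
    return corrupted

# Toy example with a 5-token vocabulary and a 3-token "program".
program = [2, 0, 4]
lm_logits = np.array([[0.1, 0.2, 3.0, 0.5, 0.0],
                      [2.5, 1.0, 0.3, 0.2, 0.1],
                      [0.0, 0.4, 0.1, 0.9, 2.2]])
print(inject_error(program, lm_logits, position=1))  # -> [2, 1, 4]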
ARMADA: Attribute-Based Multimodal Data Augmentation
2024
In Multimodal Language Models (MLMs), the cost of manually annotating high-quality image-text pair data for fine-tuning and alignment is extremely high. While existing multimodal data augmentation frameworks propose ways to augment image-text pairs, they either suffer from semantic inconsistency between texts and images, or generate unrealistic images, causing a knowledge gap with real-world examples. To address these issues, we propose Attribute-based Multimodal Data Augmentation (ARMADA), a novel multimodal data augmentation method via knowledge-guided manipulation of visual attributes of the mentioned entities. Specifically, we extract entities and their visual attributes from the original text data, then search for alternative values for the visual attributes under the guidance of knowledge bases (KBs) and large language models (LLMs). We then utilize an image-editing model to edit the images with the extracted attributes. ARMADA is a novel multimodal data generation framework that: (i) extracts knowledge-grounded attributes from symbolic KBs for semantically consistent yet distinctive image-text pair generation, (ii) generates visually similar images of disparate categories using neighboring entities in the KB hierarchy, and (iii) uses the commonsense knowledge of LLMs to modulate auxiliary visual attributes such as backgrounds for more robust representation of original entities. Our empirical results over four downstream tasks demonstrate the efficacy of our framework in producing high-quality data and enhancing model performance. This also highlights the need to leverage external knowledge proxies for enhanced interpretability and real-world grounding.
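A hypothetical sketch of the attribute-substitution loop described above: extract (entity, attribute, value) triples from a caption, look up alternative attribute values in a knowledge base, and emit edited-caption/edit-instruction pairs for an image-editing model to realize. The helper names and the toy knowledge base are placeholders, not the authors' API.

# Hypothetical ARMADA-style attribute substitution; illustrative only.
from typing import Dict, List, Tuple

# Toy knowledge base: entity -> attribute -> alternative values.
KB: Dict[str, Dict[str, List[str]]] = {
    "apple": {"color": ["green", "yellow"]},
}

def extract_attributes(caption: str) -> List[Tuple[str, str, str]]:
    """Placeholder for LLM-based extraction of (entity, attribute, value) triples."""
    return [("apple", "color", "red")] if "red apple" in caption else []

def augment(caption: str) -> List[Tuple[str, str]]:
    """Produce (edited caption, edit instruction) pairs by swapping attribute values."""
    pairs = []
    for entity, attr, value in extract_attributes(caption):
        for alt in KB.get(entity, {}).get(attr, []):
            new_caption = caption.replace(f"{value} {entity}", f"{alt} {entity}")
            pairs.append((new_caption, f"change the {entity}'s {attr} from {value} to {alt}"))
    return pairs

print(augment("a red apple on a table"))
# [('a green apple on a table', "change the apple's color from red to green"),
#  ('a yellow apple on a table', "change the apple's color from red to yellow")]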
Learning Action Conditions from Instructional Manuals for Instruction Understanding
2024
The ability to infer pre- and postconditions of an action is vital for comprehending complex instructions, and is essential for applications such as autonomous instruction-guided agents and assistive AI that supports humans in performing physical tasks. In this work, we propose a task dubbed action condition inference and collect a high-quality, human-annotated dataset of preconditions and postconditions of actions in instructional manuals. We propose a weakly supervised approach to automatically construct large-scale training instances from online instructional manuals, and curate a densely human-annotated and validated dataset to study how well current NLP models can infer action-condition dependencies in instruction texts. We design two types of models that differ by whether contextualized and global information is leveraged, as well as various combinations of heuristics to construct the weak supervision. Our experimental results show a >20% F1-score improvement when considering the entire instruction context and a >6% F1-score benefit from the proposed heuristics.
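One plausible flavour of cue-word heuristic that such weak supervision could use: clauses introduced by "once/before/after" are read as preconditions, and "so that/until" clauses as postconditions. The specific cue lists and regular expressions below are illustrative assumptions, not the heuristics used in the paper.

# Hypothetical cue-word heuristic for weakly labeling action conditions.
import re

def weak_label(step: str) -> dict:
    """Heuristically split an instruction step into action, precondition, postcondition."""
    label = {"action": step, "precondition": None, "postcondition": None}
    m = re.match(r"(?i)(?:before|once|after)\s+([^,]+),\s*(.+)", step)
    if m:
        label["precondition"], label["action"] = m.group(1), m.group(2)
    m = re.search(r"(?i)(.+?)\s+(?:so that|until)\s+(.+)", label["action"])
    if m:
        label["action"], label["postcondition"] = m.group(1), m.group(2)
    return label

print(weak_label("Once the oven is preheated, place the tray on the middle rack until the edges turn golden"))
# {'action': 'place the tray on the middle rack',
#  'precondition': 'the oven is preheated',
#  'postcondition': 'the edges turn golden'}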
Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals
2024
The ability to sequence unordered events is an essential skill for comprehending and reasoning about real-world task procedures, which often requires thorough understanding of temporal common sense and multimodal information, as these procedures are often communicated through a combination of texts and images. Such capability is essential for applications such as sequential task planning and multi-source instruction summarization. While humans are capable of reasoning about and sequencing unordered multimodal procedural instructions, whether current machine learning models have such essential capability is still an open question. In this work, we benchmark models' capability of reasoning over and sequencing unordered multimodal instructions by curating datasets from popular online instructional manuals and collecting comprehensive human annotations. We find that models not only perform significantly worse than humans but also seem incapable of efficiently utilizing the multimodal information. To improve machines' performance on multimodal event sequencing, we propose sequentiality-aware pretraining techniques that exploit the sequential alignment properties of both texts and images, resulting in significant improvements of >5%.
LiveCLKTBench: Towards Reliable Evaluation of Cross-Lingual Knowledge Transfer in Multilingual LLMs
2025
Evaluating cross-lingual knowledge transfer in large language models is challenging, as correct answers in a target language may arise either from genuine transfer or from prior exposure during pre-training. We present LiveCLKTBench, an automated generation pipeline specifically designed to isolate and measure cross-lingual knowledge transfer. Our pipeline identifies self-contained, time-sensitive knowledge entities from real-world domains, filters them based on temporal occurrence, and verifies them against the model's knowledge. The documents of these valid entities are then used to generate factual questions, which are translated into multiple languages to evaluate transferability across linguistic boundaries. Using LiveCLKTBench, we evaluate several LLMs across five languages and observe that cross-lingual transfer is strongly influenced by linguistic distance and often asymmetric across language directions. While larger models improve transfer, the gains diminish with scale and vary across domains. These findings provide new insights into multilingual transfer and demonstrate the value of LiveCLKTBench as a reliable benchmark for future research.
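A hypothetical outline of the filtering step this pipeline describes: keep only entities that first appeared after the evaluated model's training cutoff and that the model cannot already answer about, then generate document-grounded questions and translate them across languages. Function bodies and names are placeholders, not the authors' implementation.

# Hypothetical LiveCLKTBench-style entity filtering and question generation.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Entity:
    name: str
    first_seen: date
    document: str

def model_already_knows(entity: Entity) -> bool:
    """Placeholder: probe the evaluated LLM about the entity and check its answers."""
    return False

def build_benchmark(entities: List[Entity], cutoff: date, languages: List[str]) -> List[dict]:
    items = []
    for e in entities:
        if e.first_seen <= cutoff or model_already_knows(e):
            continue  # entity cannot isolate transfer from pre-training exposure
        question = f"What is {e.name}?"  # placeholder for document-grounded question generation
        items.append({"entity": e.name,
                      "questions": {lang: question for lang in languages}})  # translation stubbed out
    return items

print(build_benchmark(
    [Entity("ExampleSummit2025", date(2025, 3, 1), "ExampleSummit2025 was held ...")],
    cutoff=date(2024, 12, 31),
    languages=["en", "ar", "zh"],
))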