Search Results

6 results for "Thakker, Jay"
Tour Sentiments: Personalized Recommendation System Using AI and Deep Learning
In a world saturated with travel information overload and generic recommendations, "Tour Sentiments: A Personalized Travel Recommendation System Using AI" emerges as a groundbreaking initiative. This project seeks to transform travel planning by harnessing the capabilities of Artificial Intelligence and real-time Google Maps data, including Street View and travel durations. The primary goal of 'Tour Sentiments' is to create a personalized travel recommendation system that adapts to the changing emotions and preferences of travelers. By employing real-time sentiment analysis using Google Gemini, the system aims to offer tailored suggestions that align closely with individuals' current feelings and desires, departing from conventional one-size-fits-all approaches. Beyond mere efficiency, the ultimate vision of 'Tour Sentiments' is to elevate the travel experience, infusing it with joy, discovery, and lasting memories. Through the amalgamation of AI and sentiment analysis, the project aspires to make global exploration more accessible and captivating, setting a new standard for personalized travel guidance and enriching journeys worldwide.
Pretherapeutic Assessment by Multidetector Computed Tomography for Thyroid Cartilage Invasion in Laryngeal Cancer: A Double‑edged Sword
Abstract Introduction: Almost one-fourth of head and neck cancers in India are laryngeal cancers. Both conservative and surgical therapeutic approaches are available. According to the present tumor-node-metastasis staging protocol, thyroid cartilage invasion is a crucial criterion for diagnosing advanced stages of the disease. Major cartilage invasion indicates T4A stage disease, for which surgical treatment is required. Aims: The present study aims to evaluate the accuracy of multidetector computed tomography (MDCT) in evaluating thyroid cartilage invasion in T3 and T4 stage laryngeal cancers. Materials and Methods: This is a retrospective analysis done in the Department of Radiology, Pramukhswami Medical College, Anand, Gujarat, on 22 patients with T3 and T4 stage laryngeal cancer who presented for pretherapeutic MDCT neck evaluation. The MDCT results were retrospectively reviewed and compared with postoperative histopathological results. Statistical analysis was done for positive predictive value (PPV, the main statistical parameter), negative predictive value, sensitivity, and specificity. Results: MDCT showed a PPV of 60.00% in detecting any type of thyroid cartilage invasion, 66.66% for major and 33.33% for minor cartilage invasion. Extralaryngeal spread of disease was the most specific marker for cartilage involvement. In total, 31.8% of cases were downgraded in staging by pathology. Conclusion: Overestimation of thyroid cartilage invasion by MDCT is a reality that should be kept in mind before making final therapeutic decisions. Although crucial, it should not be the sole criterion when making a surgical versus conservative therapeutic call.
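The accuracy figures above follow the standard 2x2 confusion-matrix definitions of diagnostic performance. A minimal Python sketch of those formulas (the counts are illustrative placeholders, not the study's data):

    # Standard diagnostic-accuracy metrics from a 2x2 confusion matrix.
    # The example counts are placeholders, not data from the study.
    def diagnostic_metrics(tp, fp, fn, tn):
        ppv = tp / (tp + fp)          # positive predictive value
        npv = tn / (tn + fn)          # negative predictive value
        sensitivity = tp / (tp + fn)  # true-positive rate
        specificity = tn / (tn + fp)  # true-negative rate
        return ppv, npv, sensitivity, specificity

    # Hypothetical counts: MDCT calls compared against histopathology.
    ppv, npv, sens, spec = diagnostic_metrics(tp=6, fp=4, fn=2, tn=10)
    print(f"PPV={ppv:.1%}  NPV={npv:.1%}  sensitivity={sens:.1%}  specificity={spec:.1%}")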
Test-Time Adaptation via Many-Shot Prompting: Benefits, Limits, and Pitfalls
Test-time adaptation enables large language models (LLMs) to modify their behavior at inference without updating model parameters. A common approach is many-shot prompting, where large numbers of in-context learning (ICL) examples are injected as an input-space test-time update. Although performance can improve as more demonstrations are added, the reliability and limits of this update mechanism remain poorly understood, particularly for open-source models. We present an empirical study of many-shot prompting across tasks and model backbones, analyzing how performance varies with update magnitude, example ordering, and selection policy. We further study Dynamic and Reinforced ICL as alternative test-time update strategies that control which information is injected and how it constrains model behavior. We find that many-shot prompting is effective for structured tasks where demonstrations provide high information gain, but is highly sensitive to selection strategy and often shows limited benefits for open-ended generation tasks. Overall, we characterize the practical limits of prompt-based test-time adaptation and outline when input-space updates are beneficial versus harmful.
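As described above, the input-space update is simply a long prefix of labeled demonstrations placed ahead of the query. A minimal sketch of many-shot prompt construction (the example pool, task format, and call_llm stub are hypothetical stand-ins, not the paper's code):

    # Sketch of many-shot prompting as an input-space test-time update:
    # k demonstrations are injected into the prompt; model weights never change.
    import random

    # Hypothetical labeled pool; many-shot regimes typically use hundreds of examples.
    demonstrations = [
        {"input": "The movie was dull.", "label": "negative"},
        {"input": "A delightful surprise.", "label": "positive"},
        {"input": "Mediocre at best.", "label": "negative"},
    ]

    def build_many_shot_prompt(query, examples, k, seed=0):
        """Select and order k in-context examples, then append the query."""
        rng = random.Random(seed)  # selection and ordering policy matter, per the study
        chosen = rng.sample(examples, k=min(k, len(examples)))
        shots = "\n".join(f"Input: {ex['input']}\nLabel: {ex['label']}" for ex in chosen)
        return f"{shots}\nInput: {query}\nLabel:"

    prompt = build_many_shot_prompt("An instant classic.", demonstrations, k=2)
    # answer = call_llm(prompt)  # hypothetical inference call; no parameter update
    print(prompt)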
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation: modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for concise summaries, and from context collapse, where iterative rewriting erodes details over time. We introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE can adapt effectively without labeled supervision, instead leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.
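The accumulate-refine-organize loop described above can be pictured as append-style edits to a growing playbook rather than wholesale rewrites. A minimal sketch of that pattern (the generate/reflect/curate helpers are placeholders, not the ACE implementation):

    # Sketch of an evolving context "playbook" updated incrementally,
    # loosely following the generation -> reflection -> curation loop.
    # All three helpers are hypothetical stand-ins, not the ACE code.

    def generate(task, playbook):
        """Run the agent on a task with the current playbook as context."""
        return {"task": task, "trace": f"attempted '{task}' using {len(playbook)} strategies"}

    def reflect(outcome):
        """Distill candidate lessons from execution feedback (no labels needed)."""
        return [f"When handling '{outcome['task']}', keep intermediate results."]

    def curate(playbook, lessons):
        """Append new, non-duplicate lessons instead of rewriting the whole context."""
        return playbook + [l for l in lessons if l not in playbook]

    playbook = []  # detailed strategies accumulate; they are never summarized away
    for task in ["book a flight", "file an expense report"]:
        outcome = generate(task, playbook)
        playbook = curate(playbook, reflect(outcome))

    print(playbook)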
MLPerf Tiny Benchmark
Advancements in ultra-low-power tiny machine learning (TinyML) systems promise to unlock an entirely new class of smart applications. However, continued progress is limited by the lack of a widely accepted and easily reproducible benchmark for these systems. To meet this need, we present MLPerf Tiny, the first industry-standard benchmark suite for ultra-low-power tiny machine learning systems. The benchmark suite is the collaborative effort of more than 50 organizations from industry and academia and reflects the needs of the community. MLPerf Tiny measures the accuracy, latency, and energy of machine learning inference to properly evaluate the tradeoffs between systems. Additionally, MLPerf Tiny implements a modular design that enables benchmark submitters to show the benefits of their product, regardless of where it falls on the ML deployment stack, in a fair and reproducible manner. The suite features four benchmarks: keyword spotting, visual wake words, image classification, and anomaly detection.
NeBula: Quest for Robotic Autonomy in Challenging Environments; TEAM CoSTAR at the DARPA Subterranean Challenge
This paper presents and discusses algorithms, hardware, and software architecture developed by TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved 2nd and 1st place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including: (i) geometric and semantic environment mapping; (ii) a multi-modal positioning system; (iii) traversability analysis and local planning; (iv) global motion planning and exploration behavior; (v) risk-aware mission planning; (vi) networking and decentralized reasoning; and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.