34,384 result(s) for "difficulty"
Internal validation of the Tampa Robotic Difficulty Scoring System: real-time assessment of the novel robotic scoring system in predicting clinical outcomes after hepatectomy
Introduction: As the robotic approach in hepatectomy gains prominence, the need to establish a robotic-specific difficulty scoring system (DSS) is evident. The Tampa Difficulty Score was conceived to bridge this gap, offering a novel and dedicated robotic DSS aimed at improving preoperative surgical planning and predicting potential clinical challenges in robotic hepatectomies. In this study, we internally validated the recently published Tampa DSS by applying the scoring system to our most recent cohort of patients.
Methods: The Tampa Difficulty Score was applied to 170 recent patients who underwent robotic hepatectomy in our center. Patients were classified into: Group 1 (score 1–8, n = 23), Group 2 (score 9–24, n = 120), Group 3 (score 25–32, n = 20), and Group 4 (score 33–49, n = 7). Key variables for each of the groups were analyzed and compared. Statistical significance was accepted at p ≤ 0.05.
Results: Notable correlations were found between the Tampa Difficulty Score and key clinical parameters such as operative duration (p < 0.0001), estimated blood loss (p < 0.0001), and percentage of major resection (p = 0.00007), affirming the score’s predictive capacity for operative technical complexity. The Tampa Difficulty Score also correlated with major complications (Clavien–Dindo ≥ III) (p < 0.0001), length of stay (p = 0.011), and 30-day readmission (p = 0.046) after robotic hepatectomy.
Conclusions: The Tampa Difficulty Score, through the internal validation process, has confirmed its effectiveness in predicting intra- and postoperative outcomes in patients undergoing robotic hepatectomy. The predictive capacity of this system is useful in preoperative surgical planning and risk categorization. External validation is necessary to further explore the accuracy of this robotic DSS.
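The score bands reported in this abstract (1–8, 9–24, 25–32, 33–49) amount to a simple lookup. A minimal sketch, assuming only the band boundaries from the abstract; the function itself is illustrative, not part of the published scoring system:

```python
# Illustrative mapping of a Tampa Difficulty Score to its cohort group.
# The score bands are taken from the abstract; the function name and
# error handling are hypothetical.

def tampa_difficulty_group(score: int) -> int:
    """Return the cohort group (1-4) for a Tampa Difficulty Score."""
    if not 1 <= score <= 49:
        raise ValueError(f"score {score} outside the reported 1-49 range")
    if score <= 8:
        return 1
    if score <= 24:
        return 2
    if score <= 32:
        return 3
    return 4
```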
Predicting the difficult laparoscopic cholecystectomy: development and validation of a pre-operative risk score using an objective operative difficulty grading system
Background: The prediction of a difficult cholecystectomy has traditionally been based on certain pre-operative clinical and imaging factors. Most of the previous literature reported small patient cohorts and did not use an objective measure of operative difficulty. The aim of this study was to develop a pre-operative score to predict difficult cholecystectomy, as defined by a validated intra-operative difficulty grading scale.
Methods: Two cohorts from prospectively maintained databases of patients who underwent laparoscopic cholecystectomy were analysed: the CholeS Study (8755 patients) and a single-surgeon series (4089 patients). Factors potentially predictive of difficulty were correlated to the Nassar intra-operative difficulty scale. A multivariable binary logistic regression analysis was then used to identify factors that were independently associated with difficult laparoscopic cholecystectomy, defined as operative difficulty grades 3 to 5. The resulting model was then converted to a risk score and validated on both internal and external datasets.
Results: Increasing age and ASA classification, male gender, diagnosis of CBD stone or cholecystitis, thick-walled gallbladders, CBD dilation, use of pre-operative ERCP and non-elective operations were found to be significant independent predictors of difficult cases. A risk score based on these factors returned an area under the ROC curve of 0.789 (95% CI 0.773–0.806, p < 0.001) on external validation, with 11.0% versus 80.0% of patients classified as low versus high risk having difficult surgeries.
Conclusion: We have developed and validated a pre-operative scoring system that uses easily available pre-operative variables to predict difficult laparoscopic cholecystectomies. This scoring system should assist in patient selection for day-case surgery, optimising pre-operative surgical planning (e.g. allocation of the procedure to a suitably trained surgeon) and counselling patients during the consent process. The score could also be used to risk-adjust outcomes in future research.
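Pre-operative risk scores of this kind are typically built by rounding logistic-regression coefficients to integer points, summing the points for each present factor, and applying a cut-off. The predictor names below mirror those listed in the abstract, but the point values and the cut-off are invented for illustration and are NOT the published weights:

```python
# Hypothetical point-based risk score. Factor names follow the abstract;
# the weights and the high-risk cut-off are illustrative only.

EXAMPLE_POINTS = {
    "male": 1,
    "asa_3_or_more": 2,
    "cbd_stone_or_cholecystitis": 2,
    "thick_walled_gallbladder": 1,
    "cbd_dilation": 1,
    "preop_ercp": 2,
    "non_elective": 2,
}

def difficulty_risk(patient: dict, high_risk_cutoff: int = 5) -> str:
    """Sum points for each present risk factor and classify the patient."""
    total = sum(pts for factor, pts in EXAMPLE_POINTS.items()
                if patient.get(factor))
    return "high risk" if total >= high_risk_cutoff else "low risk"
```

In practice the published score would be applied with its validated weights; the structure shown here is only the general pattern.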
The Desirable Difficulty Framework as a Theoretical Foundation for Optimizing and Researching Second Language Practice
This coda article offers unified theoretical accounts of the major findings of the empirical studies in this special issue of Optimizing Second Language Practice in the Classroom: Perspectives from Cognitive Psychology. We present a theoretical framework from cognitive psychology (the desirable difficulty framework) and link it to the ideas of second language (L2) difficulty. We argue that practice condition, linguistic difficulty, and individual differences need to be taken into account for creating optimal, deliberate, and systematic L2 practice. The desirable difficulty framework may serve as a theoretical foundation to better understand the role of practice in L2 acquisition, as well as to gain insights into effective L2 teaching. Future directions for research are presented to further develop this emerging field of L2 practice.
The simple view of reading and its broad types of reading difficulties
Common depictions of the simple view of reading (SVR), in both research and practice, describe reading comprehension difficulties by using the dichotomous variables of “poor” and “good” for each of its three constructs. But these fail to accurately capture the role the product of the two subcomponents of word recognition and language comprehension plays in defining such difficulties. When the skills in both subcomponents are “good,” most depictions show reading comprehension as “good” – but this is not what the SVR holds. This can lead users of the SVR both to overlook the great variation in reading comprehension skills that is possible within each of the SVR’s defined reading difficulty types and to misunderstand that reading comprehension may still suffer even when both word recognition and language comprehension do not. This article first reviews the SVR and its main predictions, followed by an overview of the evidence bearing on these. The article then describes how reading comprehension difficulties are defined under the SVR, presenting graphics that employ continuous variables that accurately reflect these definitions. The article concludes with a discussion of classification studies that have investigated SVR-defined reading difficulties and their findings of cases of good skills in word recognition and language comprehension coupled with poor reading comprehension. The article argues that these can be interpreted as consistent with the SVR rather than counter to it.
Text-based Question Difficulty Prediction: A Systematic Review of Automatic Approaches
Designing and constructing pedagogical tests that contain items (i.e. questions) which measure various types of skills for different levels of students equitably is a challenging task. Teachers and item writers alike need to ensure that the quality of assessment materials is consistent, if student evaluations are to be objective and effective. Assessment quality and validity are therefore heavily reliant on the quality of the items included in the test. Moreover, the notion of difficulty is an essential factor that can determine the overall quality of the items and the resulting tests. Thus, item difficulty prediction is extremely important in any pedagogical learning environment. Although difficulty is traditionally estimated either by experts or through pre-testing, such methods are criticised for being costly, time-consuming, subjective and difficult to scale, and consequently, the use of automatic approaches as proxies for these traditional methods is gaining more and more traction. In this paper, we provide a comprehensive and systematic review of methods for the a priori prediction of question difficulty. The aims of this review are to: 1) provide an overview of the research community regarding the publication landscape; 2) explore the use of automatic, text-based prediction models; 3) summarise influential difficulty features; and 4) examine the performance of the prediction models. Supervised machine learning prediction models were found to be mostly used to overcome the limitations of traditional item calibration methods. Moreover, linguistic features were found to play a major role in the determination of item difficulty levels, and several syntactic and semantic features were explored by researchers in this area to explain the difficulty of pedagogical assessments.
Based on these findings, a number of challenges to the item difficulty prediction community are posed, including the need for a publicly available repository of standardised data-sets and further investigation into alternative feature elicitation and prediction models.
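The linguistic features this review highlights are often simple surface statistics computed over the question text before being fed to a supervised model. A minimal, purely illustrative feature extractor; the specific features chosen here are a common baseline, not a set prescribed by the review:

```python
# Sketch of surface linguistic features commonly used as inputs to
# item-difficulty prediction models. Feature choice is illustrative.

def difficulty_features(question: str) -> dict:
    """Extract simple length-based features from a question's text."""
    words = question.split()
    # Crude sentence split on terminal punctuation.
    sentences = [s for s in question.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }
```

Feature vectors of this kind would then be paired with calibrated difficulty labels to train whatever supervised model is chosen.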
Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy
Background: A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets.
Methods: Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis.
Results: A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open and 30-day mortality (AUROC = 0.903, 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001).
Conclusion: We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
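The Kendall’s tau statistic used in this abstract measures rank correlation by counting concordant versus discordant pairs of observations. A minimal pure-Python sketch of the tau-a variant (illustrative only; analyses with tied grades, as here, would normally use tau-b, which corrects the denominator for ties):

```python
# Kendall's tau-a: (concordant - discordant) / total number of pairs.
# Ties contribute to neither count; tau-b would adjust for them.

def kendall_tau_a(x, y):
    """Rank correlation between two equal-length sequences."""
    assert len(x) == len(y), "sequences must have equal length"
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

For example, a difficulty grade that rises exactly with an outcome gives tau = 1.0, and one that falls as the outcome rises gives tau = -1.0.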
Achieving the critical view of safety in the difficult laparoscopic cholecystectomy: a prospective study of predictors of failure
Background: Bile duct injury rates for laparoscopic cholecystectomy (LC) remain higher than during open cholecystectomy. The “culture of safety” concept is based on demonstrating the critical view of safety (CVS) and/or correctly interpreting intraoperative cholangiography (IOC). However, the CVS may not always be achievable due to difficult anatomy or pathology. Safety may be enhanced if surgeons assess difficulties objectively, recognise instances where a CVS is unachievable and are familiar with recovery strategies.
Aims and methods: A prospective study was conducted to evaluate the achievability of the CVS during all consecutive LC performed over four years. The primary aim was to study the association between the inability to obtain the CVS and an objective measure of operative difficulty. The secondary aim was to identify preoperative and operative predictors indicating the use of alternate strategies to complete the operation safely.
Results: The study included 1060 consecutive LC. The median age was 53 years, the male to female ratio was 1:2.1 and 54.9% were emergency admissions. The CVS was obtained in 84.2%, the majority being difficulty grade I or II (70.7%). Displaying the CVS failed in 167 LC (15.8%), including 55.6% of all difficulty grade IV LC and 92.3% of difficulty grade V. There were no biliary injuries or conversions.
Conclusion: All three components of the critical view of safety could not be demonstrated in one out of six consecutive laparoscopic cholecystectomies. Preoperative factors and operative difficulty grading can predict cases where the CVS may not be achievable. Adapting instrument selection and alternate dissection strategies would then need to be considered.
Risk Factors and Outcomes of Open Conversion During Minimally Invasive Major Hepatectomies: An International Multicenter Study on 3880 Procedures Comparing the Laparoscopic and Robotic Approaches
Introduction: Despite the advances in minimally invasive (MI) liver surgery, most major hepatectomies (MHs) continue to be performed by open surgery. This study aimed to evaluate the risk factors and outcomes of open conversion during MI MH, including the impact of the type of approach (laparoscopic vs. robotic) on the occurrence and outcomes of conversions.
Methods: Data on 3880 MI conventional and technical (right anterior and posterior sectionectomies) MHs were retrospectively collected. Risk factors and perioperative outcomes of open conversion were analyzed. Multivariate analysis, propensity score matching, and inverse probability treatment weighting analysis were performed to control for confounding factors.
Results: Overall, 3211 laparoscopic MHs (LMHs) and 669 robotic MHs (RMHs) were included, of which 399 (10.28%) had an open conversion. Multivariate analyses demonstrated that male sex, laparoscopic approach, cirrhosis, previous abdominal surgery, concomitant other surgery, American Society of Anesthesiologists (ASA) score 3/4, larger tumor size, conventional MH, and Institut Mutualiste Montsouris classification III procedures were associated with an increased risk of conversion. After matching, patients requiring open conversion had poorer outcomes compared with non-converted cases, as evidenced by the increased operation time, blood transfusion rate, blood loss, hospital stay, postoperative morbidity/major morbidity and 30/90-day mortality. Although RMH showed a decreased risk of conversion compared with LMH, converted RMH showed increased blood loss, blood transfusion rate, postoperative major morbidity and 30/90-day mortality compared with converted LMH.
Conclusions: Multiple risk factors are associated with conversion. Converted cases, especially those due to intraoperative bleeding, have unfavorable outcomes.
Robotic assistance seemed to increase the feasibility of the MI approach, but converted robotic procedures showed inferior outcomes compared with converted laparoscopic procedures.
Cognitive Architecture and Instructional Design: 20 Years Later
Cognitive load theory was introduced in the 1980s as an instructional design theory based on several uncontroversial aspects of human cognitive architecture. Our knowledge of many of the characteristics of working memory, long-term memory and the relations between them had been well-established for many decades prior to the introduction of the theory. Curiously, this knowledge had had a limited impact on the field of instructional design, with most instructional design recommendations proceeding as though working memory and long-term memory did not exist. In contrast, cognitive load theory emphasised that all novel information first is processed by a capacity- and duration-limited working memory and then stored in an unlimited long-term memory for later use. Once information is stored in long-term memory, the capacity and duration limits of working memory disappear, transforming our ability to function. By the late 1990s, sufficient data had been collected using the theory to warrant an extended analysis, resulting in the publication of Sweller et al. (Educational Psychology Review, 10, 251–296, 1998). Extensive further theoretical and empirical work has been carried out since that time, and this paper is an attempt to summarise the last 20 years of cognitive load theory and to sketch directions for future research.