Search Results
2 results for "Joslyn, Steve K."
A comparison of fragmenting lead-based and lead-free bullets for aerial shooting of wild pigs
In response to the health threats posed by toxic lead to humans, scavenging wildlife and the environment, there is currently a focus on transitioning from lead-based to lead-free bullets for the shooting of wild animals. We compared efficiency metrics and terminal ballistic performance for lead-based and lead-free (non-lead) bullets used for aerial shooting of wild pigs (Sus scrofa) in eastern Australia. Ballistic testing revealed that lead-based and lead-free bullets achieved similar precision and muzzle kinetic energy (E0) levels (3337.2 J and 3345.7 J, respectively). An aerial shooting trial was conducted in which wild pigs were shot with one type of lead-based and one type of lead-free bullet under identical conditions. Observations were made from 859 shooting events (n = 430 and 429, respectively), with a subset of pigs examined via gross post-mortem (n = 100 and 108, respectively) and a further subset examined via radiography (n = 94 and 101, respectively). The mean number of bullets fired per pig killed did not differ greatly between lead-based and lead-free bullets (4.09 vs 3.91, respectively), nor did the mean number of bullet wound tracts per animal on post-mortem inspection (3.29 vs 2.98). However, radiography revealed a higher number of fragments per animal (median >300 vs median = 55) and a broader distribution of fragments with lead-based bullets. Our results suggest that lead-based and lead-free bullets are similarly effective for aerial shooting of wild pigs, but that the bullet types behave differently, with lead-based bullets displaying a higher degree of fragmentation. These results suggest that aerial shooting may be a particularly important contributor to lead exposure in scavenging wildlife, and that investigation of lead-free bullets for this use should continue.
Commentary: Comparison of radiological interpretation made by veterinary radiologists and state-of-the-art commercial AI software for canine and feline radiographic studies
Small, potentially biased sample with class imbalance: the experiment analyzed just 50 radiographic cases (40 canine, 10 feline), retrospectively selected from a single institution's PACS with no power calculation or other justification given for that number, and the paper acknowledges that this relatively small sample may not capture the full spectrum of cases seen in veterinary practice (1). [...] the composition of findings was severely skewed: 84% of all reported findings were determined on consensus to be normal and only 16% abnormal (1). [...] the study includes a methodological choice to classify “insignificant abnormalities” as abnormal, a decision that introduces potential misalignment with clinical practice, where incidental or clinically irrelevant findings are often disregarded (5,8). [...] by deriving ambiguity from the overall inter-observer agreement, which includes the AI's input, the metric may conflate case difficulty with observer performance, casting doubt on its validity as a true measure of ambiguity. Truly evaluating the AI on “challenging” cases would require an independent measure of case difficulty or ambiguity (for example, cases with known subtle lesions, or confirmed diagnoses that radiologists often miss), rather than one derived solely from the observers' agreement. [...] conclusions about the AI's reliability in high-ambiguity scenarios should be viewed with skepticism; they may not generalize beyond this specific sample and methodology.