332 results for "artifact codes"
Rebels with a Cause: Formation, Contestation, and Expansion of the De Novo Category “Modern Architecture,” 1870–1975
Most category studies have focused on established categories with discrete boundaries. These studies leave open not only the question of how a de novo category arises, but also the question of what institutional material actors draw upon to create one. We examine the formation and theorization of the de novo category “modern architecture” between 1870 and 1975. Our study shows that the process of new category formation was driven by groups of architects with distinct clientele associated with institutional logics of commerce, state, religion, and family. These architects enacted different artifact codes for a building based on the institutional logics associated with their specific mix of clients. “Modern architects” fought over which logics and artifact codes should guide “modern architecture.” Modern functional architects espoused a logic of commerce enacted through a restricted artifact code of new materials in a building, whereas modern organic architects advocated transforming the profession's logic, enacted through a flexible artifact code that mixed new and traditional materials in buildings. The conflict became a source of creative tension for the modern architects who followed, who integrated aspects of both logics and materials in buildings, expanding the category boundary. Plural logics and category expansion resulted in multiple conflicting exemplars within “modern architecture” and enabled its adaptation to changing social forces and architectural interpretations for over 70 years.
An empirical analysis of journal policy effectiveness for computational reproducibility
A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluates the effectiveness of journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%. We find this policy—author remission of data and code postpublication upon request—an improvement over no policy, but currently insufficient for reproducibility.
AncientGlyphNet: an advanced deep learning framework for detecting ancient Chinese characters in complex scenes
Detecting ancient Chinese characters in various media, including stone inscriptions, calligraphy, and couplets, is challenging due to the complex backgrounds and diverse styles. This study proposes an advanced deep-learning framework for detecting ancient Chinese characters in complex scenes to improve detection accuracy. First, the framework introduces an Ancient Character Haar Wavelet Transform downsampling block (ACHaar), effectively reducing feature maps’ spatial resolution while preserving key ancient character features. Second, a Glyph Focus Module (GFM) is introduced, utilizing attention mechanisms to enhance the processing of deep semantic information and generating ancient character feature maps that emphasize horizontal and vertical features through a four-path parallel strategy. Third, a Character Contour Refinement Layer (CCRL) is incorporated to sharpen the edges of characters. Additionally, to train and validate the model, a dedicated dataset was constructed, named Huzhou University-Ancient Chinese Character Dataset for Complex Scenes (HUSAM-SinoCDCS), comprising images of stone inscriptions, calligraphy, and couplets. Experimental results demonstrated that the proposed method outperforms previous text detection methods on the HUSAM-SinoCDCS dataset, with accuracy improved by 1.36–92.84%, recall improved by 2.24–85.61%, and F1 score improved by 1.84–89.08%. This research contributes to digitizing ancient Chinese character artifacts and literature, promoting the inheritance and dissemination of traditional Chinese character culture. The source code and the HUSAM-SinoCDCS dataset can be accessed at https://github.com/youngbbi/AncientGlyphNet and https://github.com/youngbbi/HUSAM-SinoCDCS.
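The ACHaar implementation itself lives in the linked repository; as a rough illustration of the underlying idea, here is a minimal PyTorch sketch of a Haar-wavelet downsampling block. The 2D Haar transform halves spatial resolution while keeping all information in four subbands (LL, LH, HL, HH), which can be stacked along the channel axis and then mixed by a 1 × 1 convolution. The class names, channel widths, and the follow-up convolution are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of a Haar-wavelet downsampling block (not the authors' ACHaar code).
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Halve H and W via the 2D Haar transform, stacking the four
    subbands (LL, LH, HL, HH) along the channel axis (C -> 4C)."""
    def forward(self, x):
        # Split pixels into the four positions of each 2x2 block.
        a = x[..., 0::2, 0::2]  # top-left
        b = x[..., 0::2, 1::2]  # top-right
        c = x[..., 1::2, 0::2]  # bottom-left
        d = x[..., 1::2, 1::2]  # bottom-right
        ll = (a + b + c + d) / 2.0  # low-frequency average
        lh = (a + b - c - d) / 2.0  # vertical detail
        hl = (a - b + c - d) / 2.0  # horizontal detail
        hh = (a - b - c + d) / 2.0  # diagonal detail
        return torch.cat([ll, lh, hl, hh], dim=1)

class ACHaarLikeBlock(nn.Module):
    """Downsample losslessly with Haar, then mix channels with a 1x1 conv.
    Channel counts are guesses; the paper's configuration may differ."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = HaarDownsample()
        self.proj = nn.Sequential(
            nn.Conv2d(4 * in_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.proj(self.down(x))

x = torch.randn(1, 3, 640, 640)   # dummy scene image
y = ACHaarLikeBlock(3, 64)(x)
print(y.shape)                    # torch.Size([1, 64, 320, 320])
```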
A comprehensive study of bloated dependencies in the Maven ecosystem
Build automation tools and package managers have a profound influence on software development. They facilitate the reuse of third-party libraries, support a clear separation between the application’s code and its external dependencies, and automate several software development tasks. However, the wide adoption of these tools introduces new challenges related to dependency management. In this paper, we propose an original study of one such challenge: the emergence of bloated dependencies. Bloated dependencies are libraries that are packaged with the application’s compiled code but that are actually not necessary to build and run the application. They artificially grow the size of the built binary and increase maintenance effort. We propose DepClean, a tool to determine the presence of bloated dependencies in Maven artifacts. We analyze 9,639 Java artifacts hosted on Maven Central, which include a total of 723,444 dependency relationships. Our key result is as follows: 2.7% of the dependencies directly declared are bloated, 15.4% of the inherited dependencies are bloated, and 57% of the transitive dependencies of the studied artifacts are bloated. In other words, it is feasible to reduce the number of dependencies of Maven artifacts to 1/4 of their current count. Our qualitative assessment with 30 notable open-source projects indicates that developers pay attention to their dependencies when they are notified of the problem. They are willing to remove bloated dependencies: 21/26 answered pull requests were accepted and merged by developers, removing 140 dependencies in total: 75 direct and 65 transitive.
Augmented reality in urban places: contested content and the duplicity of code
With the increasing prevalence of both geographically referenced information and the code through which it is regulated, digital augmentations of place will become increasingly important in everyday, lived geographies. Through two detailed explorations of 'augmented realities', this paper provides a broad overview not only of the ways that those augmented realities matter, but also of the complex and often duplicitous manner in which code and content can congeal in our experiences of augmented places. Because the re-makings of our spatial experiences and interactions are increasingly influenced by the ways in which content and code are fixed, ordered, stabilised and contested, this paper focuses on how power, as mediated through technological artefacts, code and content, helps to produce place. Specifically, it demonstrates that there are four key ways in which power is manifested in augmented realities: two performed largely by social actors, distributed power and communication power; and two enacted primarily via software, code power and timeless power. The paper concludes by calling for redoubled attention to both the layerings of content and the duplicity and ephemerality of code in shaping the uneven and power-laden practices of representation and the experiences of place augmentations in urban places.
SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows
When acquiring experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image-reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for new reconstructions of previously archived data at the user’s home institution, where simple computing resources are available. Releases of the software can be downloaded from the Elettra Scientific Computing group GitHub repository (https://github.com/ElettraSciComp/STP-Gui).
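For readers unfamiliar with the two essential steps the abstract names, here is a minimal NumPy/scikit-image sketch of flat fielding followed by filtered back projection. The data are synthetic (a phantom with a simulated Beer-Lambert detector model) and none of this is STP's own code.

```python
# Minimal flat-fielding + filtered back projection (FBP) sketch; not STP code.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
line_integrals = radon(phantom, theta=angles)   # (detector, angle) sinogram

# Simulate raw detector counts via Beer-Lambert: I = I0 * exp(-mu), with a
# non-uniform beam profile (flat field) and a constant detector offset (dark).
scale = 0.01  # keep simulated absorption in a numerically safe range
flat = 1000.0 * (1.0 + 0.1 * np.random.rand(line_integrals.shape[0], 1))
dark = 5.0
raw = flat * np.exp(-scale * line_integrals) + dark

# Flat fielding: remove beam non-uniformity and detector offset, then
# convert transmission back to absorption line integrals with -log.
transmission = (raw - dark) / (flat - dark)
absorption = -np.log(np.clip(transmission, 1e-12, None)) / scale

# Filtered back projection with the standard ramp filter.
recon = iradon(absorption, theta=angles, filter_name='ramp')
print(recon.shape)  # reconstructed slice on the phantom's grid
```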
Distortion correction of diffusion weighted MRI without reverse phase-encoding scans or field-maps
Diffusion magnetic resonance images may suffer from geometric distortions due to susceptibility-induced off-resonance fields, which cause geometric mismatch with anatomical images and ultimately affect subsequent quantification of microstructural or connectivity indices. State-of-the-art diffusion distortion correction methods typically require data acquired with reverse phase-encoding directions, resulting in varying magnitudes and orientations of distortion that allow estimation of an undistorted volume. Alternatively, additional field-map acquisitions can be used along with sequence information to determine warping fields. However, not all imaging protocols include these additional scans, and such protocols cannot take advantage of state-of-the-art distortion correction. To avoid additional acquisitions, structural MRI (undistorted scans) can be used as registration targets for intensity-driven correction. In this study, we aim to (1) enable susceptibility distortion correction with historical and/or limited diffusion datasets that do not include specific sequences for distortion correction and (2) avoid the computationally intensive registration procedure typically required for distortion correction using structural scans. To achieve these aims, we use deep learning (3D U-nets) to synthesize an undistorted b0 image that matches the geometry of structural T1w images and the intensity contrasts of diffusion images. Importantly, the training dataset is heterogeneous, consisting of varying structural and diffusion acquisitions. We apply our approach to a withheld test set and show that distortions are successfully corrected after processing. We quantitatively evaluate the proposed distortion correction and intensity-based registration against state-of-the-art distortion correction (FSL topup). The results illustrate that the proposed pipeline produces b0 images that are geometrically similar to non-distorted structural images and closely match state-of-the-art correction based on additional acquisitions. In addition, we show that the proposed approach generalizes to datasets that were not in the original training/validation/testing datasets, including varying populations, contrasts, resolutions, and magnitudes and orientations of distortion, with efficacious distortion correction throughout. The method is available as a Singularity container, source code, and an executable trained model to facilitate evaluation.
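As a concrete illustration of the network class the abstract names, here is a minimal 3D U-Net sketch in PyTorch mapping a two-channel input volume (assumed here to be the distorted b0 plus a resampled T1w image) to a synthesized undistorted b0. The depth, channel widths, and input layout are assumptions for illustration; the authors' actual trained model is distributed as a Singularity container.

```python
# Minimal 3D U-Net sketch (illustrative; not the authors' trained model).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, 1)  # one output channel: synthetic undistorted b0

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Two input channels: distorted b0 + T1w, both resampled to a common grid.
vol = torch.randn(1, 2, 64, 64, 64)
print(UNet3D()(vol).shape)  # torch.Size([1, 1, 64, 64, 64])
```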
Predicting software reuse using machine learning techniques—A case study on open-source Java software systems
Software reuse is an essential practice for increasing efficiency and reducing costs in software production. Software reuse practices include reusing artifacts, libraries, components, packages, and APIs. Identifying suitable software for reuse requires pinpointing potential candidates; however, there are no objective methods in place to measure software reuse, which makes it challenging to identify highly reusable software. Software reuse research mainly addresses two hurdles: 1) identifying reusable candidates effectively and efficiently, and 2) selecting high-quality software components that improve maintainability and extensibility. This paper proposes automating software reuse prediction by leveraging machine learning (ML) algorithms, enabling future researchers and practitioners to better identify highly reusable software. Our approach uses cross-project code clone detection to establish the ground truth for software reuse, identifying code clones across popular GitHub projects as indicators of potential reuse candidates. Software metrics were extracted from Maven artifacts and used to train classification and regression models to predict and estimate software reuse. The average F1-score of the ML classification models is 77.19%; the best-performing model, Ridge Regression, achieved an F1-score of 79.17%. Additionally, this research aims to assist developers by identifying key metrics that significantly impact software reuse. Our findings suggest that the file-level PUA (Public Undocumented API) metric is the most important factor influencing software reuse. We also present suitable value ranges for the top five important metrics that developers can follow to create highly reusable software. Furthermore, we developed a tool that utilizes the trained models to predict the reuse potential of existing GitHub projects and rank Maven artifacts by their domain.
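To make the modeling setup concrete, the sketch below trains a ridge classifier on tabular software metrics with scikit-learn, mirroring the paper's classification task at toy scale. The features, labels, and scores are synthetic placeholders, not the study's dataset or exact pipeline.

```python
# Sketch of metric-based reuse classification (placeholder data, not the study's).
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows = artifacts; columns = static metrics (e.g. PUA, size, coupling).
# Values are invented for illustration.
X = rng.normal(size=(500, 5))
# Ground-truth "reused" flag; the paper derives this from cross-project
# code clone detection, whereas here it is a synthetic function of X.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), RidgeClassifier(alpha=1.0))
model.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, model.predict(X_te)))
```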
PCR biases distort bacterial and archaeal community structure in pyrosequencing datasets
As 16S rRNA gene-targeted massively parallel sequencing has become a common tool for microbial diversity investigations, numerous advances have been made to minimize the influence of sequencing and chimeric PCR artifacts through rigorous quality control measures. However, there has been little effort towards understanding the effect of multi-template PCR biases on microbial community structure. In this study, we used three bacterial and three archaeal mock communities consisting of, respectively, 33 bacterial and 24 archaeal 16S rRNA gene sequences combined in different proportions to compare the influences of (1) sequencing depth, (2) sequencing artifacts (sequencing errors and chimeric PCR artifacts), and (3) biases in multi-template PCR on the interpretation of community structure in pyrosequencing datasets. We also assessed the influence of each of these three variables on α- and β-diversity metrics that rely on the number of OTUs alone (richness) and those that include both membership and the relative abundance of detected OTUs (diversity). As part of this study, we redesigned bacterial and archaeal primer sets that target the V3-V5 region of the 16S rRNA gene, along with multiplexing barcodes, to permit simultaneous sequencing of PCR products from the two domains. We conclude that the benefits of deeper sequencing efforts extend beyond greater OTU detection and result in higher precision in β-diversity analyses by reducing the variability between replicate libraries, despite the presence of more sequencing artifacts. Additionally, spurious OTUs resulting from sequencing errors have a significant impact on α- and β-diversity metrics based on richness or shared richness, whereas metrics that utilize community structure (including both richness and relative abundance of OTUs) are minimally affected by spurious OTUs. However, the greatest obstacle to accurately evaluating community structure is the error in the estimated mean relative abundance of each detected OTU due to biases associated with multi-template PCR reactions.
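The abstract's contrast between richness-based and structure-aware metrics is easy to see numerically. The toy NumPy example below (invented counts, not the study's data) shows spurious singleton OTUs multiplying observed richness fivefold while shifting the Shannon index only in the second decimal place.

```python
# Toy illustration: spurious OTUs inflate richness, barely move Shannon diversity.
import numpy as np

def richness(counts):
    """Number of observed OTUs (nonzero counts)."""
    return int(np.count_nonzero(counts))

def shannon(counts):
    """Shannon diversity index H = -sum(p * ln p) over observed OTUs."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

true_community = np.array([5000, 3000, 1200, 600, 200])  # 5 real OTUs
spurious = np.ones(20, dtype=int)                        # 20 singleton error OTUs
observed = np.concatenate([true_community, spurious])

print(richness(true_community), richness(observed))      # 5 -> 25
print(round(shannon(true_community), 3), round(shannon(observed), 3))  # ~1.209 -> ~1.227
```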
A deep learning solution for real-time quality assessment and control in additive manufacturing using point cloud data
This work presents an in-situ quality assessment and improvement technique that uses point cloud data and AI for data processing and smart decision making in Additive Manufacturing (AM), improving the quality and accuracy of fabricated artifacts. The top-surface point cloud, which captures the top surface's geometry and quality, is pre-processed and passed to an improved deep Hybrid Convolutional Auto-Encoder decoder (HCAE) model that statistically describes the artifact's quality. The HCAE's output comprises 9 × 9 segments, each with four channels giving the probability that the segment contains one of four labels: Under-printed, Normally-printed, Over-printed, or Empty region. This data structure plays a significant role in command generation for fabrication process optimization. The HCAE's accuracy and repeatability were measured with a multi-label, multi-output metric developed in this study. The HCAE's results are used to perform real-time process adjustment by modifying the G-code that fabricates the next layer. By adjusting the machine's print speed and feed rate, the controller tunes the subsequent layer's deposition grid by grid. The algorithm was then tested on two defective process plans: severe under-extrusion and severe over-extrusion. In both cases the test artifacts' quality improved significantly and converged to an acceptable state within four iterations.
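The abstract describes the HCAE output precisely enough to sketch the downstream control step: a 9 × 9 grid with four class probabilities per segment can be reduced to per-segment feed-rate multipliers for the next layer's G-code. The channel order and gain values below are assumptions for illustration, not the paper's control law.

```python
# Sketch: turn a 9x9x4 HCAE probability grid into per-segment feed-rate
# multipliers for the next layer. Class order and gains are assumed, not the paper's.
import numpy as np

LABELS = ["under", "normal", "over", "empty"]  # assumed channel order
# Assumed corrective gains: extrude more where under-printed, less where over-printed.
FEED_MULTIPLIER = {"under": 1.15, "normal": 1.00, "over": 0.85, "empty": 1.00}

def feedrate_grid(probs: np.ndarray) -> np.ndarray:
    """probs: (9, 9, 4) per-segment class probabilities from the HCAE."""
    assert probs.shape == (9, 9, 4)
    labels = probs.argmax(axis=-1)                    # (9, 9) hard class per segment
    lut = np.array([FEED_MULTIPLIER[l] for l in LABELS])
    return lut[labels]                                # (9, 9) feed-rate multipliers

# Dummy output: a softmax over random logits stands in for a real HCAE prediction.
logits = np.random.default_rng(1).normal(size=(9, 9, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(feedrate_grid(probs))
```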