855 result(s) for "Bibliographic description. Cataloging. Referencing"
Mental models of the bibliographic universe. Part 1: mental models of descriptions
Purpose - The paper aims to present the results of the first two tasks of a user study looking into mental models of the bibliographic universe, and especially their comparison to the Functional Requirements for Bibliographic Records (FRBR) conceptual model, which has not yet been user tested. Design/methodology/approach - The study employs a combination of techniques for eliciting mental models and consists of three tasks, two of which, card sorting and concept mapping, are presented herein. Its participants were 30 individuals residing in the general area of Ljubljana, Slovenia. Findings - Cumulative results of concept mapping show a strong resemblance to FRBR. Card sorts did not produce conclusive results. In both tasks, participants paid special attention to the original expression, indicating that a special place for it should be considered. Research limitations/implications - The study was performed using a relatively small sample of participants living in a geographically limited area, using relatively straightforward examples. Practical implications - Some solid evidence is provided for the adoption of FRBR as the conceptual basis for cataloguing. Originality/value - This is the first widely published user study of FRBR, applying novel methodological approaches in the field of Library and Information Science.
Metadata Creation Practices in Digital Repositories and Collections: Schemata, Selection Criteria, and Interoperability
This study explores the current state of metadata-creation practices across digital repositories and collections by using data collected from a nationwide survey of mostly cataloging and metadata professionals. Results show that MARC, AACR2, and LCSH are the most widely used metadata schema, content standard, and subject-controlled vocabulary, respectively. Dublin Core (DC) is the second most widely used metadata schema, followed by EAD, MODS, VRA, and TEI. Qualified DC’s wider use vis-à-vis Unqualified DC (40.6 percent versus 25.4 percent) is noteworthy. The leading criteria in selecting metadata and controlled-vocabulary schemata are collection-specific considerations, such as the types of resources, nature of the collection, and needs of primary users and communities. Existing technological infrastructure and staff expertise also are significant factors contributing to the current use of metadata schemata and controlled vocabularies for subject access across distributed digital repositories and collections. Metadata interoperability remains a major challenge. There is a lack of exposure of locally created metadata and metadata guidelines beyond the local environments. Homegrown locally added metadata elements may also hinder metadata interoperability across digital repositories and collections when there is a lack of sharable mechanisms for locally defined extensions and variants.
Mental models of the bibliographic universe. Part 2: comparison task and conclusions
Purpose - The paper aims to provide some insight into mental models of the bibliographic universe and how they compare with the Functional Requirements for Bibliographic Records (FRBR) as a conceptual model of the bibliographic universe. Design/methodology/approach - To get a more complete picture of the mental models, different elicitation techniques were used. The three tasks of the study were card sorting, concept mapping and a comparison task. This paper deals with the comparison task, which consisted of interviews and rankings, and provides a discussion of the results of the study as a whole. Findings - Results of the ranking part of the comparison task confirm the findings of the concept mapping task. In both cases, while there are individual differences between mental models, on average they gravitate towards FRBR. Research limitations/implications - This is a small study and it provides only a glimpse of the implications of using FRBR as a conceptual basis for cataloguing. More FRBR-related user studies are needed, including similar studies on different groups of individuals and different types of materials, as well as practical studies of user needs and user interfaces. Practical implications - The results of this study are the first user-tested indication of the validity of FRBR as a conceptual basis for the future of cataloguing. Originality/value - This is the first published study of mental models of the bibliographic universe and uses a unique combination of mental model elicitation techniques.
Notes on Operations: Cataloging E-Books and Vendor Records: A Case Study at the University of Illinois at Chicago
E-books have become a substantial part of many academic library collections. Catalog records for each e-book title enhance discovery by library users, but cataloging individual books may be impossible when large packages are purchased. Increasingly, libraries are relying on outside sources for their e-book catalog records, which may come from vendors or third-party record services and are frequently included in the price of a subscription. Rather than handling individual items, catalogers find themselves managing and manipulating large sets of catalog records. While dealing with the records in batch is the only practical way to provide access to the large sets, batch processing does bring about a new set of challenges. This paper will explore the challenges of managing Machine-Readable Cataloging (MARC) records for the Springer e-book collection at the University of Illinois at Chicago University Library. It discusses tools and methods to improve record quality while working in a consortial setting. It provides lessons learned, continuing challenges of working with vendor records, and some steps that might help other libraries expedite the process of getting vendor records into the catalog.
XML editor for UNIMARC and MARC 21 cataloguing
Purpose - The purpose of this paper is to model and implement an extensible markup language (XML)-based editor for library cataloguing. The editor model should support data input in the form of free text with interactive control of the structure and content validity of records specified in the UNIMARC and MARC 21 formats. The editor is implemented in the Java programming language in the form of a software package. Design/methodology/approach - The Unified Modelling Language (UML 2.0) is used for the specification of both the information requirements and the model architecture. An object-oriented methodology is used for the design and implementation of the software packages, as well as the corresponding CASE tools. Findings - The result is an editor for UNIMARC and MARC 21 cataloguing. The editor is based on XML technologies, through which two basic characteristics are achieved: the editor can be integrated into different library software systems, and moving to another format requires only changes to the module for bibliographic record data control. Research limitations/implications - A basic limitation of the system relates to the subsystem that controls validation of the bibliographic records and its expansion for work with other bibliographic formats. In the proposed solution, part of the control of data input is included in the implementation itself and is related to the UNIMARC format. That is, part of the data by which the control is done, such as the repeatability of record elements and the codebooks, is contained in the XML document of the format that is input information for the editor. However, the control related to validation of the format of content in record elements cannot be performed for any other format without modification of the implementation. Therefore, the research could be continued by considering separating the data used for content control into input information for the application. In that way, this segment would also become implementation independent. One solution would be to extend the XML document of the format with this data; another would be to create an entirely separate system for content validation. Moreover, the proposed editor supports processing of bibliographic records only in the UNIMARC and MARC 21 formats. Processing of records in other formats requires considerable changes to the model. Practical implications - The model of the new editor was developed on the basis of the experience and needs of electronic management in city and special libraries. Based on the given model, the new editor was implemented and integrated into the BISIS software system used by these libraries. Testing and verification were performed on the bibliographic records of the public city libraries. Originality/value - The contribution of this work is the system architecture, which is based on XML documents and is independent of the bibliographic format. The XML document that contains data about the bibliographic format represents the editor's input information. After a bibliographic record is created in the editor, it is stored in an XML document that represents the editor's output information. This XML document can be stored in various software systems for data storage and retrieval.
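The format-driven control this abstract describes can be sketched in a few lines: a small XML document declares which fields a format allows and whether they are repeatable, and a validator checks a record against it. The element names, tags, and layout below are invented for illustration, not taken from the actual BISIS schema.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Illustrative format description; a real one would cover many more
# properties (subfields, indicators, content patterns).
FORMAT_XML = """
<format name="UNIMARC">
  <field tag="200" repeatable="no"/>
  <field tag="700" repeatable="yes"/>
</format>
"""

def load_format(xml_text):
    """Map each declared field tag to its repeatability flag."""
    root = ET.fromstring(xml_text)
    return {f.get("tag"): f.get("repeatable") == "yes"
            for f in root.iter("field")}

def validate(record_fields, fmt):
    """Return validation errors for a record given as a sequence of tags."""
    errors = []
    for tag, n in Counter(record_fields).items():
        if tag not in fmt:
            errors.append(f"unknown field {tag}")
        elif n > 1 and not fmt[tag]:
            errors.append(f"field {tag} is not repeatable")
    return errors
```

Because the format description arrives as data, switching from UNIMARC to MARC 21 here means supplying a different XML document, not changing the validator.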
XML schema for UNIMARC and MARC 21
Purpose - The purpose of this paper is to create a model for an XML document that will carry information about bibliographic formats. The model is given in the form of an XML schema describing two bibliographic formats, UNIMARC and MARC 21. Design/methodology/approach - The description of bibliographic formats using the XML schema language may be approached in two ways. The first is to create an XML schema in which all elements of the bibliographic format are described separately. The second, used in this paper, is to create an XML schema as a set of elements that represents the concepts of bibliographic formats. A schema created in the second way is appropriate for use in the implementation of cataloguing software. Findings - The result is an XML schema that describes the MARC 21 and UNIMARC formats. An instance of that schema is an XML document describing a bibliographic format to be used in software systems for cataloguing. An XML document that is an instance of the proposed XML schema is applied in the development of the editor for cataloguing in the BISIS library information system. This XML document represents input information for that editor; in this way, the implementation of the editor becomes independent of the bibliographic format. Practical implications - The created XML schema cannot serve as an electronic manual because some information about the format is not included in it. To overcome this shortcoming, an additional XML schema containing the remaining format data may be provided. Originality/value - The originality lies in the idea of creating one XML schema for two bibliographic formats. The schema contains elements that model the data used in cataloguing tools. On the basis of that XML schema, an object model of the bibliographic formats is implemented, as well as a software component for manipulating format data. This component can be used in the development of library software systems.
Mass Management of E-Book Catalog Records: Approaches, Challenges, and Solutions
Electronic book collections in libraries have grown dramatically over the last decade. A great diversity of providers, service models, and content types exist today, presenting a variety of challenges for cataloging and catalog maintenance. Many libraries rely on external data providers to supply bibliographic records for electronic books, but cataloging guidance has focused primarily on rules and standards for individual records rather than data management at the collection level. This paper discusses the challenges, decisions, and priorities that have evolved around cataloging electronic books at a mid-size academic library, the University of Houston Libraries. The authors illustrate the various issues raised by vendor-supplied records and the impact of new guidelines for provider-neutral records for electronic monographs. They also describe workflow for batch cataloging using the MarcEdit utility, address ongoing maintenance of records and record sets, and suggest future directions for large-scale management of electronic books.
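Batch normalization of vendor-supplied records of the kind described here is often scripted rather than done record by record. The sketch below is a minimal example, not the Houston workflow: it rewrites the public note subfield $z of every 856 field in a MARCXML string to a consistent label, the sort of cleanup a batch job applies before loading a vendor record set.

```python
import xml.etree.ElementTree as ET

MARC_NS = "http://www.loc.gov/MARC21/slim"
ET.register_namespace("", MARC_NS)  # keep the default namespace on output

def normalize_link_text(marcxml, label="Available online"):
    """Set subfield $z of every 856 field to a consistent public note.

    Takes MARCXML as a string and returns the modified string; a real
    batch job would stream whole files of records instead.
    """
    root = ET.fromstring(marcxml)
    for field in root.iter(f"{{{MARC_NS}}}datafield"):
        if field.get("tag") != "856":
            continue
        for sub in field.iter(f"{{{MARC_NS}}}subfield"):
            if sub.get("code") == "z":
                sub.text = label
    return ET.tostring(root, encoding="unicode")
```

The same loop shape extends to stripping vendor-specific local fields or adding a provider-neutral note, which is where most of the per-collection decisions land.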
Author-Assigned Keywords versus Library of Congress Subject Headings: Implications for the Cataloging of Electronic Theses and Dissertations
This study is an examination of the overlap between author-assigned keywords and cataloger-assigned Library of Congress Subject Headings (LCSH) for a set of electronic theses and dissertations in Ohio State University's online catalog. The project is intended to contribute to the literature on the issue of keywords versus controlled vocabularies in the use of online catalogs and databases. Findings support previous studies' conclusions that both keywords and controlled vocabularies complement one another. Further, even in the presence of bibliographic record enhancements, such as abstracts or summaries, keywords and subject headings provided a significant number of unique terms that could affect the success of keyword searches. Implications for the maintenance of controlled vocabularies such as LCSH also are discussed in light of the patterns of matches and nonmatches found between the keywords and their corresponding subject headings.
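The overlap measurement underlying this study reduces to a set comparison between the two term sources. A minimal sketch, with invented example terms rather than data from the study:

```python
def term_overlap(keywords, subject_headings):
    """Compare author keywords with controlled-vocabulary terms.

    Both inputs are normalized to lowercase so matching is
    case-insensitive; returns shared and source-unique terms.
    """
    kw = {k.lower() for k in keywords}
    sh = {s.lower() for s in subject_headings}
    return {
        "shared": kw & sh,
        "keyword_only": kw - sh,
        "heading_only": sh - kw,
    }

# Hypothetical record: author keywords vs. assigned LCSH terms
result = term_overlap(
    ["metadata", "cataloging", "digital libraries"],
    ["Cataloging", "Metadata", "Machine-readable bibliographic data"],
)
```

The "keyword_only" and "heading_only" buckets are where each vocabulary adds unique retrieval value, which is the effect the study reports.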
What Is Next for Functional Requirements for Bibliographic Records? A Delphi Study
This article reports on a Delphi study conducted to determine key issues and challenges facing Functional Requirements for Bibliographic Records (FRBR) research and practice. The Delphi panel consisted of thirty-three experts in the field who participated in a three-round issue-raising and consensus-building process via a Web-based survey instrument designed for this study. The panel members were asked to raise critical issues in each of five major areas based on themes in existing literature: (1) the FRBR model, (2) FRBR and related standards, (3) FRBR application, (4) FRBR system development, and (5) FRBR research. These issues were categorized and then rated for importance in the follow-up rounds. The results of this study provide a list of the most critical issues, based on importance ratings and group consensus, for future FRBR research and practice in each FRBR area.
Can Bibliographic Data be Put Directly onto the Semantic Web?
This paper is a think piece about the possible future of bibliographic control; it provides a brief introduction to the Semantic Web and defines related terms, and it discusses granularity and structure issues and the lack of standards for the efficient display and indexing of bibliographic data. It is also a report on a work in progress—an experiment in building a Resource Description Framework (RDF) model of more FRBRized cataloging rules than those about to be introduced to the library community (Resource Description and Access) and in creating an RDF data model for the rules. I am now in the process of trying to model my cataloging rules in the form of an RDF model, which can also be inspected at http://myee.bol.ucla.edu/. In the process of doing this, I have discovered a number of areas in which I am not sure that RDF is sophisticated enough yet to deal with our data. This article is an attempt to identify some of those areas and explore whether or not the problems I have encountered are soluble—in other words, whether or not our data might be able to live on the Semantic Web. In this paper, I am focusing on raising the questions about the suitability of RDF to our data that have come up in the course of my work.
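Bibliographic data on the Semantic Web ultimately reduces to subject-predicate-object triples. A minimal sketch, using invented URIs and FRBR-flavoured property names rather than the article's actual model, renders a Work/Expression/Manifestation chain as Turtle-style statements:

```python
# Illustrative triples only; the prefixes and properties are assumed,
# not drawn from the RDF model the article describes.
TRIPLES = [
    ("ex:work1", "rdf:type", "frbr:Work"),
    ("ex:work1", "dct:title", '"Hamlet"'),
    ("ex:expr1", "frbr:realizationOf", "ex:work1"),
    ("ex:man1", "frbr:embodimentOf", "ex:expr1"),
]

def to_turtle(triples):
    """Serialize (subject, predicate, object) tuples as Turtle lines."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

print(to_turtle(TRIPLES))
```

The granularity questions the article raises show up even here: whether a title hangs off the Work or the Expression, and how much structure a plain literal like `"Hamlet"` loses, are exactly the modelling decisions RDF forces into the open.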