Catalogue Search | MBRL
Explore the vast range of titles available.
634 result(s) for "COMPUTERS Web Site Directories."
Learn Amazon Web Services in a month of lunches
Learn Amazon Web Services in a Month of Lunches guides you through the process of building a robust and secure web application using the core AWS services you really need to know. You'll be amazed by how much you can accomplish with AWS!
Learning from libraries that use WordPress
by
Jones, Kyle M. L
,
Farrington, Polly-Alida
in
Authoring programs
,
Blogs
,
Blogs -- Computer programs
2012,2013
With its intuitive interface and open-source development model, the WordPress web platform has emerged as a uniquely flexible content management system (CMS) with many library-related applications. In this book Jones and Farrington, two web designer/librarians, explore the variety of ways libraries are implementing WordPress as a CMS, from simple "out-of-the-box" websites to large sites with many custom features. Emphasising a library-specific perspective, the authors:
- Offer a brief history of WordPress, reviewing its genesis and sketching some possible future directions
- Analyse the software's strengths and weaknesses, spotlighting its advantages over other web publishing platforms as well as discussing the limitations libraries have encountered
- Present a variety of case studies, offering first-hand examples that detail why WordPress was selected, methods of implementation and degree of customisation, feedback from users, and reflections on usability
- Discuss essential plug-ins, themes, and other specialised applications for library sites
This useful book shows how scores of libraries have used WordPress to create library websites that are both user friendly and easy to maintain.
Improving the visibility and use of digital repositories through SEO
by
OBrien, Patrick S.
,
Arlitsch, Kenning
in
Digital libraries
,
Electronic information resources
,
Electronic information resources -- Management
2013
Recent OCLC surveys show that fewer than 2 percent of library users start their search on a library website. Another survey, of faculty researchers at four major universities, showed that most consider Google and Google Scholar amazingly effective for their research. Low Google Scholar indexing ratios are widespread among library institutional repositories because Scholar ignores common library metadata, and high-value library content is consequently invisible to researchers. Authors Arlitsch and O'Brien share their expertise in digital libraries and corporate marketing to offer practical steps for search engine optimisation, such as:
- Recommended dashboards that increase participation by sharing data
- Avoiding the four most common crawler errors that lead to low rankings
- How to effectively utilise the Google Keyword Tool
- How to use domain settings to generate unit-specific reports for special collections, institutional repositories, and university presses
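The metadata problem the authors describe can be made concrete with the Highwire Press `citation_*` meta tags that Google Scholar's inclusion guidelines ask repositories to emit. The tag names below are real; the helper function and all bibliographic values are invented for illustration — this is a sketch, not the book's own tooling.

```python
# Sketch: render the citation_* <meta> tags Google Scholar's crawler reads.
# Repositories exposing only generic library metadata are often skipped,
# which is one cause of the low indexing ratios described above.
# The helper name and sample values are assumptions for this example.
from html import escape

def scholar_meta_tags(title, authors, date, pdf_url):
    """Return the citation_* meta tags for one repository item."""
    tags = [("citation_title", title)]
    tags += [("citation_author", a) for a in authors]
    tags += [("citation_publication_date", date),
             ("citation_pdf_url", pdf_url)]
    return "\n".join(
        f'<meta name="{name}" content="{escape(value)}">'
        for name, value in tags
    )

print(scholar_meta_tags(
    "An Example Repository Item",          # invented sample record
    ["Author, First", "Author, Second"],
    "2013/06/01",
    "https://example.edu/ir/item123.pdf",
))
```

Each author gets its own `citation_author` tag, since Scholar expects one tag per author rather than a combined list.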
Using libguides to enhance library services
by
Sittler, Ryan L
,
Cook, Douglas
,
Dobbs, Aaron W
in
Computer network resources
,
Design
,
Digital libraries
2013
The easy-to-use tools in Springshare's LibGuides help you organise web pages, improve students' research experience and learning, and offer an online community of librarians sharing their work and ideas. Editors Dobbs, Sittler, and Cook have recruited expert contributors to address specific applications, creating a one-stop reference.
Running the Digital Branch
2012
Maintaining a library website is as important as its initial development. In this issue of Library Technology Reports, King draws on his team's four years of experience running the acclaimed digital branch of the Topeka Shawnee County Public Library (TSCPL). From website tweaks to staffing issues, King outlines recommended strategies and workflow plans for continually meeting library users' needs and effectively highlighting library programs and services, including:
- Ways to engage library users in conversations about books, movies, and other materials
- A detailed listing of what data TSCPL tracks and the tools the team uses for the website, blogs, and social media platforms such as YouTube, Twitter, Facebook, and FourSquare
- The team's approach to efficiently maintaining 24 blogs and six social media accounts
- Tips for using rotating banner ads to draw attention to website content, a technique that brought in 3,000+ page views for a post about the new library catalog
- Reasons behind the decision to migrate to the WordPress content management system (CMS)
- How gathering customer feedback led to a more effective location for the My Account link
The lessons learned by King and his team at TSCPL can help any library sharpen its presence on the Web while efficiently maintaining library website operations.

About the Author: David Lee King is the digital branch and services manager at the Topeka Shawnee County Public Library, where he plans for, implements, and experiments with emerging technology trends. He speaks internationally about emerging trends, website usability and management, digital experience planning, and managing tech staff. A Library Journal "Mover and Shaker" for 2008, David writes the American Libraries column "Outside/In" with Michael Porter and maintains a blog at www.davidleeking.com.
User Experience (UX) Design for Libraries
2012
This book provides hands-on steps and best practices for UX design principles, practices, and tools to engage with patrons online and build the best web presence for your library.
Building the digital branch
2009
In the past fifteen years, the World Wide Web has become such a major part of the library world that most libraries now have some presence on the Web. This issue of Library Technology Reports explores the idea of the digital branch: a library website that is a vital, functional resource for patrons and enhances the library's place within its community. The report outlines an efficient process for creating a digital branch, from the initial phases of gathering information and sketching out a design, to winning approval from management, hiring qualified IT staff, and maintaining and upgrading the site once it is built. Throughout the report, the author regularly uses his experience at his own library as an example of how the process can unfold and what pitfalls to avoid.
Ensemble approach for web page classification
2021
Over the decades, the World Wide Web has grown into an abundant, distributed repository of web content, hyper-linked across diverse information domains. Search engines perform admirably at locating information, yet they remain inadequate for focused crawling of web content. Web page classification, pivotal for information retrieval and management, plays an imperative role in natural language processing for creating classified web document repositories and building indexed web directories. Conventional machine learning approaches extract hand-engineered features from web pages in order to classify them, whereas deep learning algorithms learn the relevant features as the network grows deeper. Transfer learning with pre-trained models such as BERT attains impressive performance for text classification. In this study, we evaluate the effectiveness of adopting the pre-trained BERT model for the task of classifying web pages into different categories. We propose an ensemble approach to web page classification that learns contextual representations using pre-trained bidirectional BERT and then applies deep Inception modelling with residual connections to fine-tune the target task using parallel multi-scale semantics. Experimental evaluation shows that the proposed ensemble model outperforms benchmark baselines and achieves better performance than other transfer learning approaches evaluated on the web page classification task across different datasets.
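The aggregation step of an ensemble like the one described can be illustrated in miniature. The paper's actual model couples a BERT encoder with an Inception-style head, which is far beyond a sketch; the stand-in "models" below are invented keyword classifiers, and only the majority-voting idea is real.

```python
# Minimal sketch of ensemble aggregation, NOT the paper's BERT+Inception
# model: several stand-in classifiers each predict a label for a page,
# and the majority vote becomes the ensemble's prediction.
from collections import Counter

def ensemble_predict(models, page):
    """Return the majority label across all models for one web page."""
    votes = Counter(model(page) for model in models)
    return votes.most_common(1)[0][0]

# Three invented keyword classifiers standing in for fine-tuned networks.
models = [
    lambda p: "news" if "breaking" in p else "other",
    lambda p: "news" if "report" in p else "other",
    lambda p: "shop" if "cart" in p else "other",
]

print(ensemble_predict(models, "breaking report on markets"))  # news wins 2-1
```

Real ensembles usually weight votes by each model's validation accuracy or average class probabilities instead of counting hard labels, but the aggregation logic is the same shape.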
Journal Article
Automated data extraction from historical city directories: The rise and fall of mid-century gas stations in Providence, RI
2020
The location of defunct environmentally hazardous businesses like gas stations has many implications for modern American cities. To track down these locations, we present the directoreadr code (github.com/brown-ccv/directoreadr). Using scans of Polk city directories from Providence, RI, directoreadr extracts and parses business location data with a high degree of accuracy. The image processing pipeline ran without any human input for 94.4% of the pages we examined; we processed the remaining 5.6% with some human input. Through hand-checking a sample of three years, we estimate that ~94.6% of historical gas stations are correctly identified and located, with historical street changes and non-standard address formats being the main drivers of errors. As an example use, we look at gas stations, finding that they were most common early in the study period, in 1936, and began a sharp, steady decline around 1950. We are making the dataset produced by directoreadr publicly available. We hope it will be used to explore a range of important questions about socioeconomic patterns in Providence and cities like it during the transformations of the mid-1900s.
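The abstract names non-standard address formats as the main error source, and the parsing step where that bites can be sketched. The regex, record layout, and sample line below are assumptions for illustration; directoreadr's actual implementation lives in the linked repository.

```python
# Invented illustration of the parsing step such a pipeline needs:
# a Polk-style directory line like "Acme Filling Sta 215 Broad st" is
# split into business name, house number, and street. Lines that don't
# fit the pattern are the ones that would fall through to human review.
import re

LINE = re.compile(r"^(?P<name>.+?)\s+(?P<number>\d+)\s+(?P<street>[A-Za-z .]+)$")

def parse_entry(line):
    """Return (name, house_number, street), or None for a non-standard line."""
    m = LINE.match(line.strip())
    if not m:
        return None  # non-standard format: route to manual handling
    return m.group("name"), int(m.group("number")), m.group("street").strip()

print(parse_entry("Acme Filling Sta 215 Broad st"))
# -> ('Acme Filling Sta', 215, 'Broad st')
```

A line with no house number (or a renamed street that no longer geocodes) returns `None`, mirroring the ~5.6% of pages the authors handled with human input.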
Journal Article
A first look at references from the dark to the surface web world: a case study in Tor
2022
Tor is the most well-known anonymity network protecting the identity of both content providers and their clients against tracking on the Internet. Previous research on Tor has investigated either security and privacy concerns or the information and hyperlink structure; there is still a lack of knowledge about the information leakage attributable to links from Tor hidden services to the surface Web. This work addresses that gap through a broad evaluation of: (a) the network of links from Tor to the surface Web, (b) the vulnerability of Tor hidden services to information leakage, (c) the changes in the overall hyperlink structure of Tor hidden services caused by linking to surface websites, and (d) the type of information and services provided by the domains with significant impact on Tor's network. The results recover the dark-to-surface network as a single massive connected component in which over 90% of identified Tor hidden services have at least one link to the surface world. We also find that Tor directories contribute significantly to both communication and information dissemination through the network. Our study is the product of crawling approximately 2 million pages from 23,145 onion seed addresses over a three-month period.
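The basic classification step behind measuring dark-to-surface links can be sketched with the standard library: given hyperlinks extracted from a hidden-service page, separate `.onion` targets from surface-web targets. The URLs below are invented examples, and this is a sketch of the general technique, not the study's crawler.

```python
# Sketch: partition extracted hyperlinks into onion-to-onion links and
# onion-to-surface links, the distinction the study's link graph is
# built on. Example URLs are invented.
from urllib.parse import urlparse

def split_links(urls):
    """Partition URLs into (onion_links, surface_links) by hostname suffix."""
    onion, surface = [], []
    for url in urls:
        host = urlparse(url).hostname or ""
        (onion if host.endswith(".onion") else surface).append(url)
    return onion, surface

links = [
    "http://exampleonionaddr.onion/page",   # stays inside Tor
    "https://example.com/mirror",           # leaks out to the surface Web
]
print(split_links(links))
```

Counting, per hidden service, how many extracted links land in the `surface` bucket yields the kind of statistic the abstract reports (over 90% of services with at least one surface link).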
Journal Article