Catalogue Search | MBRL
Explore the vast range of titles available.
41,752 result(s) for "Web pages"
Affect in Web Interfaces: A Study of the Impacts of Web Page Visual Complexity and Order
2010
This research concentrates on visual complexity and order as central factors in the design of webpages that enhance users' positive emotional reactions and facilitate desirable psychological states and behaviors. Drawing on existing theories and empirical findings in the environmental psychology, human-computer interaction, and marketing research literatures, a research model is developed to explain the relationships among visual complexity and order design features of a webpage, induced emotional responses in users, and users' approach behaviors toward the website as moderated by users' metamotivational states. A laboratory experiment was conducted to test the model and its associated hypotheses. The results of the study suggested that a web user's initial emotional responses (i.e., pleasantness and arousal), evoked by the visual complexity and order design features of a webpage when first encountered, will have carry-over effects on subsequent approach behavior toward the website. The results also revealed how webpage visual complexity and order influence users' emotions and behaviors differently when users are in different metamotivational states. The salience and importance of webpage visual complexity and order for users' feelings of pleasantness were largely dependent on users' metamotivational states.
Journal Article
ASP.NET core recipes : a problem-solution approach
Quickly find solutions to common web development problems. Content is presented in the popular problem-solution format. Look up the problem that you want to solve. Read the solution. Apply the solution directly in your own code. Problem solved! ASP.NET Core Recipes is a practical guide for developers creating modern web applications, cutting through the complexities of ASP.NET, jQuery, React, and HTML5 to provide straightforward solutions to common web development problems using proven methods based on best practices. The problem-solution approach gets you in, out, and back to work quickly while deepening your understanding of the underlying platform and how to develop with it. Author John Ciliberti guides you through the MVC framework and development tools, presenting typical challenges, along with code solutions and clear, concise explanations, to accelerate application development. Solve problems immediately by pasting in code from the recipes, or put multiple recipe solutions together to overcome challenging development obstacles.
What You'll Learn:
- Take advantage of MVC's streamlined syntax
- Discover how to take full control over HTML
- Develop a simple API for creating RESTful web services
- Understand test-driven development
- Migrate a project from ASP.NET Web Forms to Core MVC, including recipes for converting DataGrids, Forms, Web Parts, Master Pages, and navigation controls
- Use Core MVC in combination with popular JavaScript libraries, including jQuery, React, Bootstrap, and more
- Write unit tests for your MVC controllers, views, custom filters, and HTML helpers
- Utilize the latest features in Visual Studio 2017 to accelerate your Core MVC projects
- Identify performance bottlenecks in your MVC application
Folksonomies. Indexing and Retrieval in Web 2.0
by Peters, Isabella
in Electronic information resource searching; Information retrieval; Information retrieval -- Social aspects
2009
In Web 2.0, users not only make heavy use of Collaborative Information Services in order to create, publish and share digital information resources; they also index and represent these resources with their own keywords, so-called tags. The sum of this user-generated metadata within a Collaborative Information Service is also called a Folksonomy. In contrast to professionally created and highly structured metadata, e.g. subject headings, thesauri, classification systems or ontologies, which are applied in libraries, corporate information architectures or commercial databases and which were developed according to defined standards, tags can be freely chosen by users and attached to any information resource. As one type of metadata, Folksonomies provide access to information resources and serve users as a retrieval tool for finding their own resources as well as the data of other users.
The book delivers insights into typical applications of Folksonomies, especially within Collaborative Information Services, and discusses the strengths and weaknesses of Folksonomies as tools of knowledge representation and information retrieval. Moreover, it aims at providing conceptual considerations for solving problems of Folksonomies and presents how established methods of knowledge representation and models of information retrieval can successfully be transferred to them.
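As a rough illustration of how freely chosen tags can serve as retrieval keys, the following is a minimal Python sketch of a tag-to-resource index; the class and method names (FolksonomyIndex, tag, find, tagged_by) and the example URLs are illustrative assumptions, not taken from the book.

```python
from collections import defaultdict


class FolksonomyIndex:
    """Minimal in-memory folksonomy: users attach freely chosen tags to resources."""

    def __init__(self):
        # tag -> set of resource identifiers
        self._tag_to_resources = defaultdict(set)
        # (user, resource) pairs, so a user can refind their own items
        self._user_tags = defaultdict(set)

    def tag(self, user: str, resource: str, *tags: str) -> None:
        """Record that `user` tagged `resource` with one or more free-form tags."""
        for t in tags:
            self._tag_to_resources[t.lower()].add(resource)
            self._user_tags[(user, resource)].add(t.lower())

    def find(self, *tags: str) -> set:
        """Return resources carrying all of the given tags (simple AND retrieval)."""
        sets = [self._tag_to_resources.get(t.lower(), set()) for t in tags]
        return set.intersection(*sets) if sets else set()

    def tagged_by(self, user: str) -> set:
        """Return the resources this user has tagged (refinding one's own items)."""
        return {res for (u, res) in self._user_tags if u == user}


# Usage: tags act as retrieval keys for one's own and other users' resources.
index = FolksonomyIndex()
index.tag("alice", "https://example.org/folksonomy-paper", "web2.0", "tagging")
index.tag("bob", "https://example.org/ir-survey", "tagging", "retrieval")
print(index.find("tagging"))            # both resources
print(index.find("web2.0", "tagging"))  # only the first
print(index.tagged_by("bob"))           # bob's own resource
```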
Beyond DOM: Unlocking Web Page Structure from Source Code with Neural Networks
by Prazina, Irfan; Pozderac, Damir; Okanović, Vensada
in code-only modeling; Design; HyperText Markup Language
2025
We introduce a code-only approach for modeling web page layouts directly from their source code (HTML and CSS only), bypassing rendering. Our method employs a neural architecture with specialized encoders for style rules, CSS selectors, and HTML attributes. These encodings are then aggregated in another neural network that integrates hierarchical context (sibling and ancestor information) to form rich representational vectors for each element of a web page. Using these vectors, our model predicts eight spatial relationships between pairs of elements, focusing on edge-based proximity in a multilabel classification setup. For scalable training, labels are automatically derived from the Document Object Model (DOM) data of each web page; during inference, however, the model uses neither bounding boxes nor any other DOM information and relies solely on the source code as input. This approach facilitates structure-aware visual analysis in a lightweight and fully code-based way. Our model demonstrates alignment with human judgment in the evaluation of web page similarity, suggesting that code-only layout modeling offers a promising direction for scalable, interpretable, and efficient web interface analysis. The evaluation metrics show that our method yields comparable performance despite relying on less information.
Journal Article
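To make the pairwise multilabel setup described in the entry above more concrete, here is a minimal PyTorch sketch. The feature dimensions, module names (ElementEncoder, PairwiseLayoutModel), and the random toy inputs are assumptions for illustration; they do not reproduce the paper's actual encoders for style rules, CSS selectors, and HTML attributes.

```python
import torch
import torch.nn as nn

NUM_RELATIONS = 8  # eight spatial relationships, predicted as a multilabel target


class ElementEncoder(nn.Module):
    """Encodes one element's (hypothetical) code-derived features plus aggregated
    sibling and ancestor context into a single representation vector."""

    def __init__(self, feat_dim: int = 64, ctx_dim: int = 32, out_dim: int = 128):
        super().__init__()
        self.own = nn.Sequential(nn.Linear(feat_dim, out_dim), nn.ReLU())
        self.ctx = nn.Sequential(nn.Linear(2 * ctx_dim, out_dim), nn.ReLU())
        self.merge = nn.Linear(2 * out_dim, out_dim)

    def forward(self, own_feats, sibling_ctx, ancestor_ctx):
        h_own = self.own(own_feats)
        h_ctx = self.ctx(torch.cat([sibling_ctx, ancestor_ctx], dim=-1))
        return self.merge(torch.cat([h_own, h_ctx], dim=-1))


class PairwiseLayoutModel(nn.Module):
    """Predicts the eight spatial relations for a pair of element vectors (multilabel)."""

    def __init__(self, elem_dim: int = 128):
        super().__init__()
        self.encoder = ElementEncoder(out_dim=elem_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * elem_dim, elem_dim), nn.ReLU(),
            nn.Linear(elem_dim, NUM_RELATIONS),
        )

    def forward(self, a, b):
        # a and b are tuples of (own_feats, sibling_ctx, ancestor_ctx)
        va, vb = self.encoder(*a), self.encoder(*b)
        return self.head(torch.cat([va, vb], dim=-1))  # one logit per relation


# Training would use BCEWithLogitsLoss against DOM-derived multilabel targets;
# inference needs only features derived from the source code itself.
model = PairwiseLayoutModel()
feats = lambda: (torch.randn(1, 64), torch.randn(1, 32), torch.randn(1, 32))
logits = model(feats(), feats())
probs = torch.sigmoid(logits)  # probability of each of the eight spatial relations
```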
The Twenty-Six Words That Created the Internet
2019
\"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.\"
Did you know that these twenty-six words are responsible for much of America's multibillion-dollar online industry? What we can and cannot write, say, and do online is based on just one law-a law that protects online services from lawsuits based on user content. Jeff Kosseff exposes the workings of Section 230 of the Communications Decency Act, which has lived mostly in the shadows since its enshrinement in 1996. Because many segments of American society now exist largely online, Kosseff argues that we need to understand and pay attention to what Section 230 really means and how it affects what we like, share, and comment upon every day.
The Twenty-Six Words That Created the Internet tells the story of the institutions that flourished as a result of this powerful statute. It introduces us to those who created the law, those who advocated for it, and those involved in some of the most prominent cases decided under the law. Kosseff assesses the law that has facilitated freedom of online speech, trolling, and much more. His keen eye for the law, combined with his background as an award-winning journalist, demystifies a statute that affects all our lives -for good and for ill. While Section 230 may be imperfect and in need of refinement, Kosseff maintains that it is necessary to foster free speech and innovation.
For filings from many of the cases discussed in the book and updates about Section 230, visit jeffkosseff.com
Front-end development with ASP.NET Core, Angular, and Bootstrap
This book shows you how to integrate ASP.NET Core with Angular, Bootstrap, and similar frameworks, with coverage of NuGet, continuous deployment, Bower dependencies, and Gulp build systems, as well as development beyond Windows on Mac and Linux.
Toward the Implementation of Text-Based Web Page Classification and Filtering Solution for Low-Resource Home Routers Using a Machine Learning Approach
by Janavičiūtė, Audronė; Liutkevičius, Agnius; Morkevičius, Nerijus
in Access control; Access to information; Accuracy
2025
Restricting and filtering harmful content on the Internet is a serious problem that is often addressed even at the state and legislative levels. Existing solutions for restricting and filtering online content are usually installed on end-user devices, are easily circumvented, and are difficult to adapt to larger groups of users with different filtering needs. To mitigate this problem, this study proposed a model of a web page classification and filtering solution suitable for use on home routers or other low-resource web page filtering devices. The proposed system combines the constantly updated web page category list approach with machine learning-based text classification methods. Unlike existing web page filtering solutions, this approach does not require additional software on the client side, is more difficult for ordinary users to circumvent, and can be implemented on common low-resource routers intended for home and organizational use. This study evaluated the feasibility of the proposed solution by creating less resource-demanding implementations of machine learning-based web page classification methods, adapted for low-resource home routers, that can classify and filter unwanted Internet pages in real time based on the text of the page. The experimental evaluation of softmax regression, decision tree, random forest, and linear SVM (support vector machine) machine learning methods implemented in the C/C++ programming language was performed using a commercial home router, the Asus RT-AC85P, with 256 MB RAM (random access memory) and a MediaTek MT7621AT 880 MHz CPU (central processing unit). The implementation of the linear SVM classifier demonstrated the best accuracy of 0.9198 and required 1.86 s to process a web page. The random forest model was only slightly faster (1.56 s per web page), while its accuracy reached only 0.7879.
Journal Article
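For orientation, here is a minimal sketch of the text-classification step described in the entry above, using TF-IDF features and a linear SVM in Python with scikit-learn. The category names, toy training texts, and pipeline settings are invented for illustration; the study's actual classifiers are implemented in C/C++ and run on the router itself.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: page text -> category label (placeholder examples).
train_texts = [
    "free casino slots poker jackpot bonus",
    "online betting odds sportsbook wager",
    "latest science news research discovery",
    "school homework help math lessons",
]
train_labels = ["gambling", "gambling", "news", "education"]

# TF-IDF text features followed by a linear SVM, mirroring the kind of
# lightweight page-text classifier the study evaluates.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_texts, train_labels)

page_text = "win real money at our casino with daily jackpots"
category = classifier.predict([page_text])[0]
print(category)  # a filtering policy would then allow or block the page by category
```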