Catalogue Search | MBRL
Explore the vast range of titles available.
43,854 result(s) for "Information content"
Numerical algorithms for personalized search in self-organizing information networks
"This book lays out the theoretical groundwork for personalized search and reputation management, both on the Web and in peer-to-peer and social networks." The book develops scalable algorithms that exploit the graphlike properties underlying personalized search and reputation management, and delves into realistic scenarios regarding web-scale data.--[book cover]
A community-sourced glossary of open scholarship terms
by Yang, Yu-Fang; Grose-Hodge, Magdalena; Yamada, Yuki
in Communication; Community research; Credibility
2022
Open scholarship has transformed research and introduced a host of new terms into the lexicon of researchers. The 'Framework for Open and Reproducible Research Teaching' (FORRT) community presents a crowdsourced glossary of open scholarship terms to facilitate education and effective communication between experts and newcomers.
Journal Article
A Decision Probability Transformation Method Based on the Neural Network
2022
When the Dempster–Shafer evidence theory is applied to information fusion, reasonably transforming the basic probability assignment (BPA) into a probability so as to improve decision-making efficiency has been a key challenge. To address this challenge, this paper proposes an efficient probability transformation method based on a neural network to convert a BPA into a probabilistic decision. First, a neural network is constructed from the BPAs of the propositions in the mass function. Next, the average information content and the interval information content are used to quantify the information contained in each proposition subset, and are combined to construct a weighting function with parameter r. Then, the BPA of the input layer and the bias units are allocated to the proposition subsets in each hidden layer according to the weight factors, until the probabilities of the single-element propositions are output. Finally, the parameter r and the optimal transformation results are obtained by maximizing the probabilistic information content. The proposed method satisfies the consistency of the upper and lower boundaries of each proposition. Extensive examples and a practical application show that, compared with other methods, the proposed method not only has broader applicability but also yields transformation results with lower uncertainty.
Journal Article
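The BPA-to-probability task this abstract describes has a well-known classical baseline: Smets' pignistic transformation, which splits each focal element's mass evenly among its member hypotheses. A minimal sketch of that baseline (not the paper's neural-network method; the frame and masses below are made up):

```python
def pignistic(bpa):
    """Pignistic transformation: convert a basic probability assignment
    (a mass function over frozensets of hypotheses) into a probability
    distribution by splitting each focal set's mass evenly among its members."""
    prob = {}
    for focal, mass in bpa.items():
        share = mass / len(focal)  # equal split inside the focal set
        for hypothesis in focal:
            prob[hypothesis] = prob.get(hypothesis, 0.0) + share
    return prob

# Made-up BPA over the frame {a, b, c}
bpa = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.3,
    frozenset({"a", "b", "c"}): 0.2,
}
print(pignistic(bpa))  # "a" receives 0.5 + 0.3/2 + 0.2/3; the shares sum to 1
```

Methods like the one above aim to improve on this uniform split by weighting the redistribution, here via information-content measures.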
A hybrid approach for measuring semantic similarity based on IC-weighted path distance in WordNet
by Cai, Yuanyuan; Lu, Wei; Zhang, Qingchuan
in Artificial intelligence; Computation; Computational linguistics
2018
As a valuable tool for text understanding, semantic similarity measurement enables discriminative semantic-based applications in the fields of natural language processing, information retrieval, computational linguistics and artificial intelligence. Most existing studies have used structured taxonomies such as WordNet to explore the lexical semantic relationship; however, improving computational accuracy remains a challenge for them. To address this problem, we propose a hybrid WordNet-based approach, CSSM-ICSP, for measuring concept semantic similarity, which leverages the information content (IC) of concepts to weight the shortest path distance between concepts. To improve the performance of IC computation, we also develop a novel model of the intrinsic IC of concepts that takes into consideration a variety of semantic properties involved in the structure of WordNet. In addition, we summarize and classify the technical characteristics of previous WordNet-based approaches, and evaluate our approach against them on various benchmarks. The experimental results of the proposed approach correlate more strongly with human judgments of similarity in terms of the correlation coefficient, which indicates that our IC model and similarity detection approach are comparable or even better for semantic similarity measurement than the others.
Journal Article
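The combination of taxonomy structure and information content that this abstract describes can be illustrated with the standard Jiang-Conrath distance, where path cost is expressed through the IC of two concepts and their lowest common subsumer. A toy sketch (the taxonomy and counts are invented, and the paper's CSSM-ICSP model is more elaborate than this):

```python
import math

# Invented is-a taxonomy: child -> parent ("animal" is the root).
PARENT = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "bird": "animal"}
# Invented corpus counts; each concept's count includes its descendants.
COUNT = {"dog": 30, "cat": 20, "bird": 10, "mammal": 60, "animal": 100}
TOTAL = COUNT["animal"]

def ic(concept):
    """Information content: rarer (more specific) concepts carry more information."""
    return -math.log(COUNT[concept] / TOTAL)

def ancestors(concept):
    """The concept plus its chain of ancestors up to the root, bottom-up."""
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def jcn_distance(c1, c2):
    """Jiang-Conrath distance: IC-weighted path cost via the lowest common subsumer."""
    anc1 = set(ancestors(c1))
    lcs = next(a for a in ancestors(c2) if a in anc1)  # first shared ancestor is lowest
    return ic(c1) + ic(c2) - 2 * ic(lcs)

print(jcn_distance("dog", "cat") < jcn_distance("dog", "bird"))  # True: dog is closer to cat
```

Weighting path distance by IC, rather than counting taxonomy edges alone, is what lets such measures distinguish a hop near the abstract root from a hop between specific sibling concepts.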
Is the Decline in the Information Content of Earnings Following Restatements Short-Lived?
by Cheng, Qiang; Lo, Alvis K.; Chen, Xia
in Accounting irregularities; Accounting theory; Analytical forecasting
2014
Prior research finds that the decline in the information content of earnings after restatement announcements is short-lived and the earnings response coefficient (ERC) bounces back after three quarters. We re-examine this issue using a more recent and comprehensive sample of restatements. We find that material restatement firms experience a significant decrease in the ERC over a prolonged period—close to three years after restatement announcements. In contrast, other restatement firms experience a decline in the ERC for only one quarter. We further find that among material restatement firms, those that are subject to more credibility concerns and those that do not take prompt actions to improve reporting credibility experience a longer drop in the ERC. Last, reconciling with prior research, we find that using a more powerful proxy for material restatements and imposing less restrictive sampling requirements help to increase the power of the tests to detect the long-run drop in the ERC.
Journal Article
An Empirical Analysis of the Decline in the Information Content of Earnings following Restatements
2008
Regulatory officials and market analysts have speculated that the loss of credibility in subsequently reported financial information is a long-lasting consequence of earnings restatements. I measure the information content of earnings using a standard earnings-returns framework over several years surrounding restatements to examine characteristics of the decline in the information content of earnings. Results indicate that although the information content of earnings declines following restatements, the loss is temporary. In particular, the earnings response coefficients for earnings announcements surrounding restatements exhibit a U-shaped pattern in which they are no longer significantly lower in the post-restatement period over an average of four quarters. The extent to which the earnings of restatement firms suffer a loss of information content varies across several dimensions. First, the duration of the loss is greater for firms that restate earnings to correct revenue recognition errors and for restatements that result in a large decline in the stock price at the announcement date. Second, there is not a loss in the information content of earnings for firms that make changes to their financial reporting governance structures following restatements. Overall, the evidence in this paper is consistent with a short-term decline in investor confidence regarding financial reporting following restatements, but shows that suspicion regarding the information loss of post-restatement earnings in the long-term is unwarranted.
Journal Article
Earnings Informativeness and Strategic Disclosure: An Empirical Examination of "Pro Forma" Earnings
2004
This paper provides evidence on the characteristics of firms that include "pro forma" earnings information in their press releases, whether the usefulness of pro forma earnings to investors varies systematically with these characteristics, and whether the investor response to pro forma earnings is consistent with market efficiency or mispricing. Using a sample of 249 press releases from 1997-99, we find that firms with low GAAP earnings informativeness are more likely to disclose pro forma earnings than other firms. We also find that strategic considerations, measured using the direction of GAAP earnings surprises, are an important determinant of pro forma reporting. In addition, our examination of the relative and incremental information content of pro forma earnings shows that investors find pro forma earnings to be more useful when GAAP earnings informativeness is low or when strategic considerations are absent. Tests of the predictive ability of pro forma earnings for future profitability and returns are mixed, and we therefore cannot conclusively determine whether the investor reaction to pro forma earnings at the time of the press release is consistent with market efficiency or mispricing. The paper contributes to the growing literature on pro forma earnings and more generally to the literature on voluntary and strategic disclosure.
Journal Article
New DTW Windows Type for Forward- and Backward-Lookingness Examination. Application for Inflation Expectation
2022
This study applies a dynamic time warping (DTW) algorithm with a new window constraint to assess the information content of consumer expectations regarding current and future inflation. Our study's contribution is the novel application of DTW for testing expectations' forward-lookingness. Additionally, we modify the algorithm to adapt it to a specific question about the information content of our data. DTW overcomes a constraint of the standard tool for examining forward-lookingness: DTW does not impose assumptions on the properties of the time series. In the empirical study we cover seven European countries and compare the DTW outcomes with the results of previous studies of these economies that use the standard methodology. The research period covers 2001 to mid-2018. The application of DTW provides information on the degree of expectations' forward-lookingness. The results, after standardization, are similar to the standard parameters of a hybrid specification of expectations. Moreover, the rankings of the most forward-looking consumers are replicated. Our results confirm economic intuition, and they do not contradict previous studies.
Journal Article
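The windowing idea at the heart of this study is easiest to see in plain DTW with the classic Sakoe-Chiba band, which restricts how far the alignment path may stray from the diagonal; asymmetric variants of such a band are the kind of modification that lets one test whether one series leads another. A minimal sketch with invented series (not the paper's modified window):

```python
import math

def dtw(x, y, window):
    """Dynamic time warping with a Sakoe-Chiba band: only cells with
    |i - j| <= window are reachable, bounding how far the warping path
    can drift from the diagonal."""
    n, m = len(x), len(y)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in x only
                                 cost[i][j - 1],      # step in y only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# A series and a lagged copy of it align cheaply despite the shift.
a = [0, 1, 2, 3, 2, 1, 0]
b = [0, 0, 1, 2, 3, 2, 1]
print(dtw(a, b, window=2))  # small cost: the band still admits the lagged match
```

Shrinking the band toward the diagonal forces near-contemporaneous matching, so comparing alignment costs under differently placed bands is one way to gauge lead-lag structure between two series.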
Automatic evaluation of metadata quality in digital repositories
2009
Owing to recent developments in automatic metadata generation and interoperability between digital repositories, the production of metadata is now vastly surpassing manual quality-control capabilities. Abandoning quality control altogether is problematic, because low-quality metadata compromise the effectiveness of the services that repositories provide to their users. To address this problem, we present a set of scalable quality metrics for metadata based on the Bruce & Hillman framework for metadata quality control. We perform three experiments to evaluate our metrics: (1) the degree of correlation between the metrics and manual quality reviews, (2) the discriminatory power between metadata sets, and (3) the usefulness of the metrics as low-quality filters. Through statistical analysis, we found that several metrics, especially Text Information Content, correlate well with human evaluation, and that the average of all the metrics is roughly as effective as human reviewers at flagging low-quality instances. The implications of this finding are discussed. Finally, we propose possible applications of the metrics to improve tools for the administration of digital repositories.
Journal Article