Catalogue Search | MBRL
Explore the vast range of titles available.
800 result(s) for "Social problems Forecasting."
Surviving the 21st century : humanity's ten great challenges and how we can overcome them
\"The book explores the central question facing humanity today: how can we best survive the ten great existential challenges that are now coming together to confront us? ... The author examines ten intersecting areas of activity (mass extinction, resource depletion, WMD, climate change, universal toxicity, food crises, population and urban expansion, pandemic disease, dangerous new technologies and self-delusion) which pose manifest risks to civilization and, potentially, to our species' long-term future. This isn't a book just about problems. It is also about solutions. Every chapter concludes with clear conclusions and consensus advice on what needs to be done at global level--but it also empowers individuals with what they can do for themselves to make a difference. Unlike other books, it offers integrated solutions across the areas of greatest risk. It explains why Homo sapiens is no longer an appropriate name for our species, and what should be done about it\"--Back cover.
Topological measures for identifying and predicting the spread of complex contagions
2021
The standard measure of distance in social networks – average shortest path length – assumes a model of “simple” contagion, in which people only need exposure to influence from one peer to adopt the contagion. However, many social phenomena are “complex” contagions, for which people need exposure to multiple peers before they adopt. Here, we show that the classical measure of path length fails to define network connectedness and node centrality for complex contagions. Centrality measures and seeding strategies based on the classical definition of path length frequently misidentify the network features that are most effective for spreading complex contagions. To address these issues, we derive measures of complex path length and complex centrality, which significantly improve the capacity to identify the network structures and central individuals best suited for spreading complex contagions. We validate our theory using empirical data on the spread of a microfinance program in 43 rural Indian villages.
Understanding of complex contagions is crucial for explaining diffusion processes in networks. Guilbeault and Centola introduce topological mechanisms and measures to elucidate spreading dynamics and identify the most influential nodes in social, epidemic and economic networks.
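As a rough, illustrative sketch of the kind of measure the abstract describes (not the authors' implementation), complex path length can be approximated by counting the rounds a threshold spreading process needs to reach each node. The graph model, the threshold of 2, the seeding rule, and the function name below are all assumptions made for the example.

```python
# Sketch: complex path length via a threshold ("complex contagion") spread.
# Assumed: an undirected networkx graph and an adoption threshold of 2
# exposed neighbors; the seed set is a node plus two of its neighbors.
import networkx as nx

def complex_path_lengths(G, seed_set, threshold=2):
    """Rounds of threshold spreading needed to reach each node from seed_set."""
    active = set(seed_set)
    dist = {v: 0 for v in seed_set}
    t = 0
    while True:
        t += 1
        newly = [v for v in G if v not in active
                 and sum(u in active for u in G.neighbors(v)) >= threshold]
        if not newly:
            break
        for v in newly:                     # synchronous update per round
            dist[v] = t
            active.add(v)
    return dist  # nodes missing from dist are unreachable by this contagion

G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)
seeds = {0, *list(G.neighbors(0))[:2]}      # a node plus two of its neighbors
lengths = complex_path_lengths(G, seeds)
print(max(lengths.values()), len(lengths))  # spread depth and total reach
```

Under simple contagion (threshold 1) this reduces to ordinary breadth-first distance, which is exactly the contrast the abstract draws.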
Journal Article
Ethnographies of uncertainty in Africa
\"This collection explores the productive potential of uncertainty for people living in Africa as well as for scholars of Africa. The relevance of the focus on uncertainty in Africa is not only that contemporary life is objectively risky and unpredictable (since it is so everywhere and in every period), but that uncertainty has become a dominant trope in the subjective experience of life in contemporary African societies. The contributors investigate how uncertainty animates people's ways of knowing and being across the continent. An introduction and eight ethnographic studies examine uncertainty as a social resource that can be used to negotiate insecurity, conduct and create relationships, and act as a source for imagining the future. These in-depth accounts demonstrate that uncertainty does not exist as an autonomous, external condition. Rather, uncertainty is entwined with social relations and shapes people's relationship between the present and the future. By foregrounding uncertainty, this volume advances our understandings of the contingency of practice, both socially and temporally\"-- Provided by publisher.
Predicting and explaining life satisfaction among older adults using tree-based ensemble models and SHAP: Evidence from the digital divide survey
2025
As digital transformation continues to penetrate various sectors of society, the issue of the digital divide has become increasingly prominent. Against the backdrop of accelerating population aging, the barriers that older adults face in accessing and utilizing digital information have exerted a profound impact on their quality of life. This study employs tree-based ensemble learning algorithms to predict and identify the key factors of the digital divide that influence life satisfaction among older adults. It also evaluates the predictive performance of these models, thereby providing interpretive insights into the impact of the digital divide on subjective well-being. Using original data from the ‘2023 Report on Digital Information Divide Survey’ conducted by the National Information Society Agency of South Korea, this study constructs an analytical framework that integrates both predictive capability and interpretability. First, the XGBoost model is employed to conduct feature importance analysis, identifying 15 key variables that are highly influential in predicting life satisfaction. These variables are further examined using the SHAP method to provide interpretive insights into their contributions. Subsequently, multiple tree-based ensemble learning algorithms—including Random Forest, XGBoost, LightGBM, and CatBoost—are applied to compare their predictive performance. The results indicate that variables related to technological self-efficacy, digital information literacy, social capital, experience and perception of AI services, and household monthly income are significant predictors of life satisfaction among older adults. Among the models tested, CatBoost demonstrates superior overall predictive accuracy, suggesting its effectiveness in forecasting life satisfaction in this demographic. This study expands the application of machine learning in areas such as aging research and the digital divide and demonstrates the effectiveness of ensemble learning algorithms in predicting digital divide factors that affect the life satisfaction of older adults. This approach provides a novel and powerful methodology for addressing complex social problems. Moreover, the study uncovers the structural configuration of key digital information factors associated with life satisfaction, offering data-driven insights into the mechanisms through which the digital divide influences well-being. These results have practical implications for enhancing digital inclusion, improving adaptability among older adults, and fostering a stronger sense of participation and happiness in digital society.
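A minimal sketch of the pipeline this abstract describes (fit a tree ensemble, rank features by importance, then attribute predictions with SHAP) might look like the following; the synthetic data, column names, and hyperparameters are placeholders, not the survey's actual variables.

```python
# Sketch: XGBoost feature importance + SHAP attribution for life satisfaction.
# The data here is synthetic; real inputs would be the survey variables.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 5)),
                 columns=["self_efficacy", "digital_literacy",
                          "social_capital", "ai_experience", "income"])
y = 0.6 * X["self_efficacy"] + 0.3 * X["income"] + rng.normal(0, 0.5, 500)

model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

# Rank candidate predictors by the ensemble's importance scores.
top = sorted(zip(X.columns, model.feature_importances_),
             key=lambda kv: -kv[1])
print(top)

# Explain individual predictions with SHAP values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)      # per-sample contributions
print(np.abs(shap_values).mean(axis=0))     # mean |SHAP| per feature
```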
Journal Article
The Great Future Debate and the Struggle for the World
In 1964, two researchers at RAND, Olaf Helmer and Theodore Gordon, presented what they argued was a general theory of prediction, a theory that, Helmer boasted, would "enable people to deal with socio-economic and political problems as confidently as they do with problems in physics and chemistry." Work had begun at RAND in the early 1960s to find a systematic and scientific approach to the future. Computers had made it possible to "amass all available information" about ongoing developments and process it in a systematic way, providing "the kind of massive data processing and interpreting capability that, in the physical sciences, created the breakthrough which led to the development of the atomic bomb." This meant a radical shift in notions of the future, a shift that was emphasized by many of the futurists of the period. The future, Helmer stated in another assertive piece, could now be liberated from the grip of utopian fantasy and superstition and be welcomed into the halls of science. Here, Andersson traces the global struggle for future representations in the multifaceted field of prediction (forecasting, futurology, futures studies) in the postwar period.
Journal Article
Forecasting in Economics and Finance
2016
Practices used to address economic forecasting problems have undergone substantial changes over recent years. We review how such changes have influenced the ways in which a range of forecasting questions are being addressed. We also discuss the promises and challenges arising from access to big data. Finally, we review empirical evidence and experience accumulated from applying forecasting methods to a range of economic and financial variables.
Journal Article
Hybrid AI-enhanced lightning flash prediction in the medium-range forecast horizon
2024
Traditional fully-deterministic algorithms, which rely on physical equations and mathematical models, have been the backbone of many scientific disciplines for decades. These algorithms are based on well-established principles and laws of physics, enabling a systematic and predictable approach to problem-solving. On the other hand, AI-based strategies emerge as a powerful tool for handling vast amounts of data and extracting patterns and relationships that might be challenging to identify through traditional algorithms. Here, we bridge these two realms by using AI to find an optimal mapping of meteorological features predicted two days ahead by the state-of-the-art numerical weather prediction model of the European Centre for Medium-range Weather Forecasts (ECMWF) into lightning flash occurrence. The prediction capability of the resulting AI-enhanced algorithm turns out to be significantly higher than that of the fully-deterministic algorithm employed in the ECMWF model. A remarkable Recall peak of about 95% within the 0-24 h forecast interval is obtained. This performance surpasses the 85% achieved by the ECMWF model at the same Precision as the AI algorithm.
In this work, the authors propose a synergistic approach that combines a state-of-the-art deterministic forecasting model with artificial intelligence for predicting lightning occurrences. The strategy shows efficient predictive capabilities at medium-range forecast horizons.
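The headline comparison, Recall at a fixed Precision, is straightforward to reproduce with any probabilistic classifier. The sketch below uses synthetic features and a gradient-boosting classifier purely as a stand-in for the paper's AI-enhanced model; the feature construction and the 0.5 precision target are assumptions for illustration.

```python
# Sketch: recall at a fixed precision for a lightning-occurrence classifier.
# Synthetic stand-in for NWP features; the paper maps ECMWF fields instead.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))                       # meteorological features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 5000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]                # lightning probability

precision, recall, _ = precision_recall_curve(y_te, proba)
target = 0.5                                         # fixed precision level
best_recall = recall[precision >= target].max()      # recall achievable there
print(f"recall at precision >= {target}: {best_recall:.2f}")
```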
Journal Article
Improving PM2.5 prediction in New Delhi using a hybrid extreme learning machine coupled with snake optimization algorithm
2023
Fine particulate matter (PM2.5) is a significant air pollutant that drives the most chronic health problems and premature mortality in big metropolitans such as Delhi. In such a context, accurate prediction of PM2.5 concentration is critical for raising public awareness, allowing sensitive populations to plan ahead, and providing governments with information for public health alerts. This study applies a novel hybridization of extreme learning machine (ELM) with a snake optimization algorithm called the ELM-SO model to forecast PM2.5 concentrations. The model has been developed on air quality inputs and meteorological parameters. Furthermore, the ELM-SO hybrid model is compared with individual machine learning models, such as Support Vector Regression (SVR), Random Forest (RF), Extreme Learning Machines (ELM), Gradient Boosting Regressor (GBR), XGBoost, and a deep learning model known as Long Short-Term Memory networks (LSTM), in forecasting PM2.5 concentrations. The study results suggested that ELM-SO exhibited the highest level of predictive performance among the models tested, with a testing value of the squared correlation coefficient (R²) of 0.928 and a root mean square error of 30.325 µg/m³. The study's findings suggest that the ELM-SO technique is a valuable tool for accurately forecasting PM2.5 concentrations and could help advance the field of air quality forecasting. By developing state-of-the-art air pollution prediction models that incorporate ELM-SO, it may be possible to understand better and anticipate the effects of air pollution on human health and the environment.
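At its core, an ELM is a single least-squares solve over randomly projected hidden features, which is what makes it cheap to retrain inside a hyperparameter search. A minimal sketch follows, with a plain grid search standing in for the snake optimization algorithm (the paper's actual tuning method) and synthetic data in place of the Delhi measurements.

```python
# Sketch: extreme learning machine (ELM) regression for PM2.5 forecasting.
# A simple grid search stands in for the paper's snake optimization (SO);
# the data here is synthetic, not actual air-quality measurements.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))              # air-quality + weather inputs
y = X @ rng.normal(size=6) + rng.normal(0, 0.3, 400)

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden, rng):
    W = rng.normal(size=(X_tr.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H_tr = np.tanh(X_tr @ W + b)                     # hidden-layer features
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)  # output weights
    return np.tanh(X_te @ W + b) @ beta

X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]
best = (None, np.inf)
for n_hidden in (20, 50, 100, 200):        # crude stand-in for SO tuning
    pred = elm_fit_predict(X_tr, y_tr, X_te, n_hidden, rng)
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))
    if rmse < best[1]:
        best = (n_hidden, rmse)
print(f"best n_hidden={best[0]}, test RMSE={best[1]:.3f}")
```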
Journal Article
Why is the Teen Birth Rate in the United States So High and Why Does It Matter?
by Kearney, Melissa S., Levine, Phillip B.
in 1976-2009, Abortion, Abortion, Induced - statistics & numerical data
2012
Why is the rate of teen childbearing so unusually high in the United States as a whole, and in some U.S. states in particular? U.S. teens are two and a half times as likely to give birth as teens in Canada, around four times as likely as teens in Germany or Norway, and almost ten times as likely as teens in Switzerland. A teenage girl in Mississippi is four times more likely to give birth than a teenage girl in New Hampshire, and 15 times more likely than a teenage girl in Switzerland. We examine teen birth rates alongside pregnancy, abortion, and “shotgun” marriage rates as well as the antecedent behaviors of sexual activity and contraceptive use. We demonstrate that variation in income inequality across U.S. states and developed countries can explain a sizable share of the geographic variation in teen childbearing. Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried. Teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself does not appear to have much direct economic consequence. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to “drop out” of the economic mainstream; they choose nonmarital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement.
Journal Article
Real-Time Fiscal Forecasting Using Mixed-Frequency Data
2020
The sovereign debt crisis has increased the importance of monitoring budgetary execution. We employ real-time data using a mixed data sampling (MiDaS) methodology to demonstrate how budgetary slippages can be detected early on. We show that in spite of using real-time data, the year-end forecast errors diminish significantly when incorporating intra-annual information. Our results show the benefits of forecasting aggregates via subcomponents, in this case total government revenue and expenditure. Our methodology could significantly improve fiscal surveillance and could therefore be an important part of the European Commission's model toolkit.
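A toy version of the MiDaS idea, regressing a low-frequency target on high-frequency observations through a small parametric lag polynomial, can be sketched as follows. The exponential Almon weighting and the synthetic annual/monthly data are standard textbook choices assumed for illustration, not the authors' actual specification.

```python
# Sketch: MIDAS regression with exponential Almon lag weights.
# An annual target is regressed on 12 monthly observations through just
# two shape parameters; the data is synthetic, not a real fiscal series.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_years, n_lags = 40, 12
X = rng.normal(size=(n_years, n_lags))       # monthly indicator, 12 lags/year
true_w = np.exp(-0.3 * np.arange(n_lags)); true_w /= true_w.sum()
y = 1.0 + 2.0 * (X @ true_w) + rng.normal(0, 0.1, n_years)

def almon_weights(theta, n):
    """Exponential Almon weights: w_j ∝ exp(theta1*j + theta2*j^2)."""
    j = np.arange(n)
    w = np.exp(theta[0] * j + theta[1] * j**2)
    return w / w.sum()

def sse(params):
    b0, b1, t1, t2 = params
    resid = y - b0 - b1 * (X @ almon_weights((t1, t2), n_lags))
    return resid @ resid

res = minimize(sse, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, t1, t2 = res.x
print("fitted lag weights:", np.round(almon_weights((t1, t2), n_lags), 3))
```

The appeal of the parametric weights is that each new month of data updates the annual forecast without re-estimating an unrestricted 12-coefficient regression, which is what makes intra-annual monitoring of budget aggregates feasible.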
Journal Article