6,537 results for "Business intelligence Databases."
From Reality to World. A Critical Perspective on AI Fairness
Fairness of Artificial Intelligence (AI) decisions has become a major challenge for governments, companies, and societies. We offer a theoretical contribution that considers AI ethics outside of high-level and top-down approaches, based on Luc Boltanski's distinction between "reality" and "world". First, we provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is "realist", in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, we show that the limits of "realist" fairness corrections lead to the elaboration of "radical responses" to fairness, that is, responses that radically change the format of data. Third, we show that fairness correction is shifting to a "domination regime" that absorbs criticism, and we provide some theoretical and practical avenues for further development in AI ethics. Using an ad hoc critical space stabilized by reality tests alongside the algorithm, we build a shared responsibility model that is compatible with the radical response to fairness issues. Finally, this paper shows the fundamental contribution of pragmatic sociology theories, insofar as they afford a social and political perspective on AI ethics by giving an active role in ethical debates to material actors such as database formats. In a context where data are increasingly numerous, granular, and behavioral, it is essential to renew our conception of the ethics of algorithms in order to establish new models of responsibility for companies that take changes in the computing paradigm into account.
EFIM: a fast and memory efficient algorithm for high-utility itemset mining
In recent years, high-utility itemset mining has emerged as an important data mining task. However, it remains computationally expensive both in terms of runtime and memory consumption. It is thus an important challenge to design more efficient algorithms for this task. In this paper, we address this issue by proposing a novel algorithm named EFIM (EFficient high-utility Itemset Mining), which introduces several new ideas to more efficiently discover high-utility itemsets. EFIM relies on two new upper bounds named revised sub-tree utility and local utility to more effectively prune the search space. It also introduces a novel array-based utility counting technique named Fast Utility Counting to calculate these upper bounds in linear time and space. Moreover, to reduce the cost of database scans, EFIM introduces efficient database projection and transaction merging techniques named High-utility Database Projection (HDP) and High-utility Transaction Merging (HTM), both also performed in linear time. An extensive experimental study on various datasets shows that EFIM is in general two to three orders of magnitude faster than the state-of-the-art algorithms d2HUP, HUI-Miner, HUP-Miner, FHM and UP-Growth+ on dense datasets and performs quite well on sparse datasets. Moreover, a key advantage of EFIM is its low memory consumption.
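The array-based counting idea is easy to illustrate. The sketch below, assuming a toy transaction format the paper does not prescribe, accumulates in one pass each item's transaction-weighted utility, the kind of linear-time upper bound EFIM uses to prune the search space; it illustrates the principle only and is not the authors' implementation.

```python
# Minimal sketch of array-based utility counting in the spirit of EFIM's
# Fast Utility Counting: one pass over the database accumulates, for each
# item, the total utility of the transactions containing it, which upper-
# bounds the utility of any itemset containing that item. Toy data format.

from collections import defaultdict

# Each transaction maps an item to its utility in that transaction.
transactions = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 3, "c": 4},
    {"b": 6, "c": 2, "d": 7},
]

def utility_bounds(db):
    """Accumulate each item's transaction-weighted utility in linear time."""
    bound = defaultdict(int)
    for tx in db:
        tu = sum(tx.values())      # total utility of the transaction
        for item in tx:
            bound[item] += tu      # every item shares the transaction's utility
    return dict(bound)

print(utility_bounds(transactions))
# {'a': 15, 'b': 23, 'c': 30, 'd': 15}
```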
Mandatory Non-financial Disclosure and Its Influence on CSR: An International Comparison
The article examines the effects of non-financial disclosure (NFD) on corporate social responsibility (CSR). We conceptualise trade-offs between two ideal types (government regulation and business self-regulation) in relation to CSR. Whereas self-regulation is associated with greater flexibility for businesses to develop best practices, it can also lead to complacency if firms feel no external pressure to engage with CSR. In contrast, government regulation is associated with greater stringency around minimum standards, but can also result in rigidity owing to a one-size-fits-all approach. Given these potential trade-offs, we ask how mandatory non-financial disclosure has been shaping CSR practices and examine its potential effectiveness as a regulatory instrument. Our analysis of 24 OECD countries using the Asset4 database shows that firms in countries that require non-financial disclosure adopt significantly more CSR activities. However, we also find that NFD regulation does not lead to lower levels of corporate irresponsibility. Furthermore, our analysis demonstrates that, over time, the variation in CSR activities declines as firms adopt increasingly similar practices. Our study thereby contributes to understanding the impact of government regulation on CSR at the firm level. We also discuss the limits of mandatory NFD in addressing regulatory trade-offs between stringency and flexibility in the field of corporate social responsibility.
The big data system, components, tools, and technologies: a survey
Traditional databases are not capable of handling unstructured data and high volumes of real-time datasets. Diverse unstructured datasets give rise to big data, and it is laborious to store, manage, process, analyze, visualize, and extract useful insights from these datasets using traditional database approaches. Many technical challenges must therefore be addressed when refining large heterogeneous datasets. This paper aims to present a generalized view of a complete big data system, covering several stages and the key components of each stage in processing big data. In particular, we compare and contrast various distributed file systems and MapReduce-supported NoSQL databases with respect to certain parameters of the data management process. Further, we present distinct distributed/cloud-based machine learning (ML) tools that play a key role in designing, developing and deploying data models. The paper investigates case studies on distributed ML tools such as Mahout, Spark MLlib, and FlinkML. Further, we classify analytics based on the type of data, domain, and application. We distinguish various visualization tools with respect to three parameters: functionality, analysis capabilities, and supported development environment. Furthermore, we systematically investigate big data tools and technologies (Hadoop 3.0, Spark 2.3), including distributed/cloud-based stream processing tools, in a comparative approach. Moreover, we discuss the functionalities of several SQL query tools on Hadoop based on 10 parameters. Finally, we present some critical points relevant to research directions and opportunities according to the current trend of big data. Investigating infrastructure tools for big data alongside recent developments provides a better understanding of how different tools and technologies apply to solving real-life problems.
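To make the processing model these tools share concrete, here is a toy, single-process illustration of the map/shuffle/reduce phases that systems such as Hadoop and Spark distribute across a cluster; the function names and data are illustrative only, not drawn from the survey.

```python
# Toy illustration of the MapReduce model: map emits key-value pairs,
# shuffle groups them by key, reduce aggregates each group. Real engines
# run each phase in parallel over partitioned data on many machines.

from collections import defaultdict

docs = ["big data tools", "big data systems", "data lakes"]

def map_phase(doc):
    for word in doc.split():
        yield word, 1                  # emit (key, value) pairs

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)      # group values by key
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

pairs = (pair for doc in docs for pair in map_phase(doc))
print(reduce_phase(shuffle(pairs)))
# {'big': 2, 'data': 3, 'tools': 1, 'systems': 1, 'lakes': 1}
```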
Nexus of circular economy and sustainable business performance in the era of digitalization
Purpose: This study aims to conduct a comprehensive review and network-based analysis by exploring future research directions in the nexus of circular economy (CE) and sustainable business performance (SBP) in the context of digitalization.
Design/methodology/approach: A systematic literature review methodology was adopted to present the review in the field of CE and SBP in the era of digitalization. The WOS and SCOPUS databases were used to identify and select the articles. A bibliometric study was carried out to analyze the significant contributions made by authors, journal sources, countries and universities in the field of CE and SBP in the era of digitalization. Further, a network analysis was carried out to analyze the collaboration among authors from different countries.
Findings: The study revealed that digitalization can be a great help in developing sustainable circular products. Moreover, customers' involvement is necessary for creating innovative sustainable circular products using digitalization. A move toward the product-service system was suggested to accelerate the transformation toward CE and digitalization.
Originality/value: The paper discusses the adoption of digitalization and CE practices to enhance the SBP of firms. This work's unique contribution is the systematic literature analysis and bibliometric study exploring future research directions in the nexus of CE and SBP in the context of digitalization. The present study is one of the first efforts to examine the literature on CE and SBP integration from a digitalization perspective, along with a bibliometric analysis.
Applications of Blockchain in Industry 4.0: a Review
As the key component of Industry 4.0, the IoT has been widely used in various fields of industry. However, cloud-based data storage, computation, and communication in the IoT cause many issues, such as transmission delay, single points of failure, and privacy disclosure. Moreover, centralized access control in the IoT constrains its availability and scalability. Blockchain is a decentralized, tamper-proof, trustless, transparent, and immutable append-only database. The integration of blockchain and IoT technologies has led to robust distributed applications, including smart healthcare, smart finance, smart supply chains, smart cities, smart manufacturing, smart government, smart agriculture, smart transportation, smart education, smart e-commerce, and the smart grid. Blockchain should be consolidated with 5G and artificial intelligence to tackle the challenges associated with digital transformation in Industry 4.0.
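The "tamper-proof, append-only" property mentioned above follows from hash chaining: each block stores the hash of its predecessor, so altering any past record invalidates every later link. A minimal sketch, with hypothetical IoT readings and none of the consensus or signature machinery a real blockchain adds:

```python
# Minimal hash-chained, append-only log: modifying any earlier block
# breaks the prev_hash link of its successor, so tampering is detectable.

import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False               # a past block was tampered with
    return True

chain = []
append_block(chain, {"device": "sensor-1", "reading": 21.5})
append_block(chain, {"device": "sensor-2", "reading": 19.8})
print(verify(chain))                   # True
chain[0]["data"]["reading"] = 99.9     # tamper with history
print(verify(chain))                   # False
```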
Blockchain technology and smart contracts in decentralized governance systems
The aim of our systematic review was to inspect the recently published literature on decentralized governance systems and integrate the insights it articulates on blockchain technology and smart contracts, employing the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines. Between January and May 2022, a quantitative literature review of the ProQuest, Scopus, and Web of Science databases was carried out, with search terms including "city" + "blockchain technology", "smart contracts", and "decentralized governance systems". As the analyzed research studies were published between 2016 and 2022, only 371 sources satisfied the eligibility criteria. A Shiny app was harnessed for the PRISMA flow diagram to include evidence-based acquired and handled data. Analyzing the most recent and relevant sources and leveraging screening and quality assessment tools such as AMSTAR, Dedoose, DistillerSR, ROBIS, and SRDR, we integrated the core outcomes and robust correlations related to smart urban governance. Dimensions was harnessed as a data visualization tool for initial bibliometric mapping, together with layout algorithms provided by VOSviewer. Future research should investigate smart contract governance of blockchain applications and infrastructure using decision-making tools and spatial cognition algorithms.
Diagnosis of thyroid cancer using deep convolutional neural network models applied to sonographic images: a retrospective, multicohort, diagnostic study
The incidence of thyroid cancer is rising steadily because of overdiagnosis and overtreatment conferred by widespread use of sensitive imaging techniques for screening. This overall incidence growth is especially driven by increased diagnosis of indolent and well-differentiated papillary subtype and early-stage thyroid cancer, whereas the incidence of advanced-stage thyroid cancer has increased marginally. Thyroid ultrasound is frequently used to diagnose thyroid cancer. The aim of this study was to use deep convolutional neural network (DCNN) models to improve the diagnostic accuracy of thyroid cancer by analysing sonographic imaging data from clinical ultrasounds. We did a retrospective, multicohort, diagnostic study using ultrasound image sets from three hospitals in China. We developed and trained the DCNN model on the training set, 131 731 ultrasound images from 17 627 patients with thyroid cancer and 180 668 images from 25 325 controls from the thyroid imaging database at Tianjin Cancer Hospital. Clinical diagnosis of the training set was made by 16 radiologists from Tianjin Cancer Hospital. Images from anatomical sites that were judged as not having cancer were excluded from the training set and only individuals with suspected thyroid cancer underwent pathological examination to confirm diagnosis. The model's diagnostic performance was validated in an internal validation set from Tianjin Cancer Hospital (8606 images from 1118 patients) and two external datasets in China (the Integrated Traditional Chinese and Western Medicine Hospital, Jilin, 741 images from 154 patients; and the Weihai Municipal Hospital, Shandong, 11 039 images from 1420 patients). All individuals with suspected thyroid cancer after clinical examination in the validation sets had pathological examination. We also compared the specificity and sensitivity of the DCNN model with the performance of six skilled thyroid ultrasound radiologists on the three validation sets. Between Jan 1, 2012, and March 28, 2018, ultrasound images for the four study cohorts were obtained. The model achieved high performance in identifying thyroid cancer patients in the validation sets tested, with area under the curve values of 0·947 (95% CI 0·935–0·959) for the Tianjin internal validation set, 0·912 (95% CI 0·865–0·958) for the Jilin external validation set, and 0·908 (95% CI 0·891–0·925) for the Weihai external validation set. The DCNN model also showed improved performance in identifying thyroid cancer patients versus skilled radiologists. For the Tianjin internal validation set, sensitivity was 93·4% (95% CI 89·6–96·1) versus 96·9% (93·9–98·6; p=0·003) and specificity was 86·1% (81·1–90·2) versus 59·4% (53·0–65·6; p<0·0001). For the Jilin external validation set, sensitivity was 84·3% (95% CI 73·6–91·9) versus 92·9% (84·1–97·6; p=0·048) and specificity was 86·9% (95% CI 77·8–93·3) versus 57·1% (45·9–67·9; p<0·0001). For the Weihai external validation set, sensitivity was 84·7% (95% CI 77·0–90·7) versus 89·0% (81·9–94·0; p=0·25) and specificity was 87·8% (95% CI 81·6–92·5) versus 68·6% (60·7–75·8; p<0·0001). The DCNN model showed similar sensitivity and improved specificity in identifying patients with thyroid cancer compared with a group of skilled radiologists. The improved technical performance of the DCNN model warrants further investigation as part of randomised clinical trials. The Program for Changjiang Scholars and Innovative Research Team in University in China, and National Natural Science Foundation of China.
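For readers unfamiliar with the reported metrics: sensitivity is the fraction of cancer cases a classifier flags, and specificity is the fraction of controls it clears. The counts below are hypothetical, chosen only so the output reproduces the Tianjin internal-validation percentages; they are not the study's actual cohort sizes.

```python
# Sensitivity and specificity from a binary confusion matrix:
#   sensitivity = TP / (TP + FN)   (true-positive rate)
#   specificity = TN / (TN + FP)   (true-negative rate)

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, for illustration only.
sens, spec = sensitivity_specificity(tp=934, fn=66, tn=861, fp=139)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
# sensitivity=93.4% specificity=86.1%
```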
On data lake architectures and metadata management
Over the past two decades, we have witnessed an exponential increase in data production worldwide. So-called big data generally come from transactional systems, and even more so from the Internet of Things and social media. They are mainly characterized by volume, velocity, variety and veracity issues. Big data-related issues strongly challenge traditional data management and analysis systems. The concept of the data lake was introduced to address them. A data lake is a large, raw data repository that stores and manages all company data in any format. However, the data lake concept remains ambiguous or fuzzy for many researchers and practitioners, who often confuse it with the Hadoop technology. Thus, we provide in this paper a comprehensive state of the art of the different approaches to data lake design. We particularly focus on data lake architectures and metadata management, which are key issues in successful data lakes. We also discuss the pros and cons of data lakes and their design alternatives.
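The kind of metadata management the paper identifies as key can be pictured as a small catalog: each object ingested into the lake is registered with descriptive, structural, and lineage metadata so raw files stay findable. The field names below are illustrative assumptions, not taken from the paper.

```python
# Toy metadata catalog for a data lake: every ingested object is recorded
# with its source, format, known schema, lineage, and ingestion time.

import datetime

catalog = []

def register(path, source, fmt, schema=None, parents=()):
    entry = {
        "path": path,                  # where the raw object lives
        "source": source,              # originating system
        "format": fmt,                 # e.g. csv, json, parquet
        "schema": schema,              # structural metadata, if known
        "lineage": list(parents),      # upstream objects it derives from
        "ingested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    catalog.append(entry)
    return entry

register("raw/orders.json", source="webshop", fmt="json")
register("curated/orders.parquet", source="etl", fmt="parquet",
         schema={"order_id": "int", "total": "float"},
         parents=["raw/orders.json"])
print(len(catalog), "objects catalogued")
```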