Search Results
5 results for "OpenMP (Application program interface)"
On the predictive power of meta-features in OpenML
The demand for performing data analysis is steadily rising. As a consequence, people of different profiles (i.e., non-expert users) have started to analyze their data. However, this is challenging for them. A key step that poses difficulties and determines the success of the analysis is data mining (the model/algorithm selection problem). Meta-learning is a technique for assisting non-expert users in this step. The effectiveness of meta-learning, however, depends largely on the description/characterization of datasets (i.e., the meta-features used for meta-learning). There is a need to improve the effectiveness of meta-learning by identifying and designing more predictive meta-features. In this work, we use a method from exploratory factor analysis to study the predictive power of different meta-features collected in OpenML, a collaborative machine learning platform designed to store and organize metadata about datasets, data mining algorithms, models, and their evaluations. We first use the method to extract latent features, which are abstract concepts that group together meta-features with common characteristics. Then, we study and visualize the relationship of the latent features to three different performance measures of four classification algorithms on hundreds of datasets available in OpenML, and we select the latent features with the highest predictive power. Finally, we use the selected latent features to perform meta-learning and show that our method improves the meta-learning process. Furthermore, we design an easy-to-use application for retrieving different metadata from OpenML, the largest source of data in this domain.
Using OpenMP
OpenMP, a portable programming interface for shared memory parallel computers, was adopted as an informal standard in 1997 by computer scientists who wanted a unified model on which to base programs for shared memory systems. OpenMP is now used by many software developers; it offers significant advantages over both hand-threading and MPI. Using OpenMP offers a comprehensive introduction to parallel programming concepts and a detailed overview of OpenMP. Using OpenMP discusses hardware developments, describes where OpenMP is applicable, and compares OpenMP to other programming interfaces for shared and distributed memory parallel architectures. It introduces the individual features of OpenMP, provides many source code examples that demonstrate the use and functionality of the language constructs, and offers tips on writing an efficient OpenMP program. It describes how to use OpenMP in full-scale applications to achieve high performance on large-scale architectures, discussing several case studies in detail, and offers in-depth troubleshooting advice. It explains how OpenMP is translated into explicitly multithreaded code, providing a valuable behind-the-scenes account of OpenMP program performance. Finally, Using OpenMP considers trends likely to influence OpenMP development, offering a glimpse of the possibilities of a future OpenMP 3.0 from the vantage point of the current OpenMP 2.5. With multicore computer use increasing, the need for a comprehensive introduction and overview of the standard interface is clear. Using OpenMP provides an essential reference not only for students at both undergraduate and graduate levels but also for professionals who intend to parallelize existing codes or develop new parallel programs for shared memory computer architectures. Barbara Chapman is Professor of Computer Science at the University of Houston. Gabriele Jost is Principal Member of Technical Staff, Application Server Performance Engineering, at Oracle, Inc. Ruud van der Pas is Senior Staff Engineer at Sun Microsystems, Menlo Park.
Improving reliability and performance of high performance computing applications
Because of the growing popularity of parallel programming on multi-core/multiprocessor and multithreaded hardware, more and more applications are implemented using well-known concurrent programming models: MPI, OpenMP, and hybrid MPI/OpenMP. However, developing concurrent programs is extremely difficult. Concurrency introduces the possibility of errors that do not occur in traditional sequential programs, such as data races, deadlocks, and thread-safety issues. In addition, the performance of concurrent programs is another research area. This dissertation presents an integrated static and dynamic program analysis framework to address these concurrency issues in OpenMP multithreaded applications and the hybrid OpenMP/MPI programming model. It also introduces an approach to reallocating computing resources to improve the performance of MPI parallel programs in a container-based virtual cloud. First, we present the OpenMP Analysis Toolkit (OAT), which uses Satisfiability Modulo Theories (SMT) solver-based symbolic analysis to detect data races and deadlocks in OpenMP applications. Our approach approximately simulates the real execution schedule of an OpenMP program through schedule permutation with partial order reduction to improve analysis efficiency. We conducted experiments on real-world OpenMP benchmarks, comparing our OAT tool with two commercial dynamic analysis tools, Intel Thread Checker and Sun Thread Analyzer, and one commercial static analysis tool, Viva64 PVS-Studio. The experiments show that our symbolic analysis approach is more accurate than static analysis and more efficient and scalable than dynamic analysis tools, with fewer false positives and negatives. The second part of the dissertation proposes an approach that integrates static and dynamic program analyses to check for thread-safety violations in hybrid MPI/OpenMP programs.
We use an innovative method to transform thread-safety violation problems into race conditions. In our approach, the static analysis identifies a list of MPI calls related to thread-safety violations, then replaces them with our own MPI wrappers, which access specific shared variables. The static analysis avoids instrumenting unrelated code, which significantly reduces runtime overhead. In the dynamic analysis, both happens-before and lockset-based race detection algorithms are used to check for races on these shared variables. By checking races, we can identify thread-safety violations according to their specifications. Our experimental evaluation on real-world applications shows that our approach is both accurate and efficient. Finally, the dissertation describes an approach that uses adaptive resource management, enabled by container-based virtualization techniques, to automatically tune the performance of MPI programs in the cloud. Specifically, the containers running on physical hosts can dynamically allocate CPU resources to MPI processes according to the current program execution state and system resource status. High Performance Computing (HPC) in the cloud has great potential as an effective and convenient option for users launching HPC applications, but many open problems must still be solved for the cloud to become more amenable to them. To tune the performance of MPI applications at runtime, many traditional techniques try to balance the workload by distributing datasets approximately equally to all computing nodes. However, resource imbalance may still arise from data skew, and it is nontrivial to foresee such imbalances beforehand. The resource allocation among MPI processes is adjusted in two ways: at the intra-host level, which dynamically adjusts resources within a host, and at the inter-host level, which migrates containers together with their MPI processes from one host to another. We have implemented and evaluated our approach on the Amazon EC2 platform using real-world scientific benchmarks and applications, demonstrating that performance can be improved by up to 31.1% (15.6% on average) compared with the baseline application runtime.
Parallel Rendering Mechanism for Graphics Programming on Multicore Processors
Present-day technological advances have brought multicore processors to desktops, handhelds, servers, and workstations, because today's applications and users demand both huge computing power and interactivity. Together, these demands have caused a fundamental shift in how processors are designed and developed. However, the change in hardware has not been accompanied by a corresponding change in how software is written for these multicore processors. In this paper, we integrate OpenGL programs on a platform that supports multicore processors. The paper provides a clear understanding of how graphics pipelines can be implemented on multicore processors to achieve higher computational speedups with the highest thread granularity. The impacts of using too much parallelism are also discussed. An OpenMP API for the thread scheduling of parallel tasks is discussed in this paper. The Intel VTune Performance Analyzer tool is used to find hotspots and guide software optimization. Comparing serial and parallel execution of the graphics code shows encouraging results: the increase in frame rate is attributable to the parallel programming techniques.