e99 Online Shopping Mall

Geometry.Net - the online learning center
Home  - Computer - Parallel Computing (Books)

  Back | 41-60 of 100 | Next 20


$46.50
41. Patterns for Parallel Programming
$82.21
42. Patterns and Skeletons for Parallel
$39.29
43. Parallel and Distributed Computing:
$28.32
44. Using OpenMP: Portable Shared
$180.00
45. Parallel Optimization: Theory,
 
46. Highly Parallel Computing (The
$64.07
47. Parallel Computing in Quantum
$40.00
48. Parallel Programming with MPI
$124.88
49. Parallel Scientific Computation:
$42.77
50. High Performance Computing: Third
$26.74
51. PVM: Parallel Virtual Machine:
$94.07
52. High Performance Computing for
 
53. Parallel and Distributed Computing
$9.89
54. Solutions to Parallel and Distributed
$25.00
55. Introduction to Parallel Algorithms
$92.58
56. High Performance Parallel Database
$79.95
57. Spatially Structured Evolutionary
 
$92.00
58. Massively Parallel, Optical, and
$39.00
59. Applied Parallel Computing. Large
 
$104.00
60. Massively Parallel, Optical, and

41. Patterns for Parallel Programming
by Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill
Hardcover: 384 Pages (2004-09-25)
list price: US$64.99 -- used & new: US$46.50
Asin: 0321228111
Average Customer Review: 3.5 out of 5 stars
Editorial Review

Product Description

The Parallel Programming Guide for Every Software Developer

From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

That's where Patterns for Parallel Programming comes in. It's the first parallel programming guide written specifically to serve working software developers, not just computer scientists. The authors introduce a complete, highly accessible pattern language that will help any experienced developer "think parallel"-and start writing effective parallel code almost immediately. Instead of formal theory, they deliver proven solutions to the challenges faced by parallel programmers, and pragmatic guidance for using today's parallel APIs in the real world. Coverage includes:

  • Understanding the parallel computing landscape and the challenges faced by parallel developers
  • Finding the concurrency in a software design problem and decomposing it into concurrent tasks
  • Managing the use of data across tasks
  • Creating an algorithm structure that effectively exploits the concurrency you've identified
  • Connecting your algorithmic structures to the APIs needed to implement them
  • Specific software constructs for implementing parallel programs
  • Working with today's leading parallel programming environments: OpenMP, MPI, and Java

Patterns have helped thousands of programmers master object-oriented development and other complex programming technologies. With this book, you will learn that they're the best way to master parallel programming too.




Customer Reviews (7)

4-0 out of 5 stars A pretty decent guide to parallel programming
"Patterns for Parallel Programming" (PPP) is the outcome of a collaboration between Timothy Mattson of Intel and Beverly Sanders &Berna Massingill (who are academic researchers). It introduces a pattern language for parallel programming, and uses OpenMP, MPI, and Java to flesh out the related patterns.

The Good: this volume discusses both shared-memory and distributed-memory programming between one set of covers. It also makes use of general-purpose programming languages, and is therefore of interest both to computational scientists interested in clusters and to programmers interested in multiprocessors (these days, that covers pretty much everyone). More generally, PPP offers valuable advice to anyone interested in robust parallel software design. The authors cover a number of topics that are an essential part of parallel-programming lore (e.g. the 1D and 2D block-cyclic array distributions in Chapter 5). In other words, they codify existing knowledge, which is precisely what patterns are supposed to do. To accomplish this, they make effective use of a small number of examples (like molecular dynamics and the Mandelbrot set), which allows them to show a specific problem approached from different design spaces, and from different patterns within one design space.

This book follows in the footsteps of the illustrious volume "Design Patterns" by the Gang of Four (GoF). In chapters 3, 4, and 5, Mattson, Sanders, and Massingill introduce a number of patterns using a simplified version of the GoF template. Despite the structural similarities between the two books, PPP is more readable than the GoF volume, probably because it introduces a pattern language ("an organized way of navigating through a collection of design patterns to produce a design"), not just a collection of patterns. Essentially, the writing style is a linear combination of narrative and reference: it can be read cover-to-cover, or not. Finally, the three appendices contain introductory discussions of OpenMP, MPI, and concurrency in Java, respectively. They can be read as the need arises, or before even starting the book: though limited in scope, they are pedagogically sound.

The Bad: despite being easier to read from start to finish than the GoF classic, this book is still constrained by its choice to catalog patterns. As a result, the recurring examples lead to repetition, since they have to be re-introduced in each example section. Also, given that the book was published in 2004, a few implementation-related topics are somewhat out of date (e.g., OpenMP 3.0 was not around at the time). Importantly, the book predates the recent explosion of interest in general-purpose GPU programming, so it doesn't mention, say, texture memory. However, more fundamental material like data decomposition, which the book does explain, is relevant to any parallel programming environment.

On a different note, even though the book is generally readable, from time to time the authors resort to the "just look at the code and figure it out" technique: the best-known example is in chapter 4, when they discuss ghost cells and nonblocking communication. Furthermore, even though the authors have for the most part been clearheaded when naming the different patterns, I found their decision to call two distinct patterns "Data Sharing" and "Shared Data" (in the "Finding Concurrency" and "Supporting Structures" design spaces, respectively) quite confusing and therefore unfortunate. Also, the Glossary is very useful, in that it explains many terms either discussed in the text (e.g. "False sharing") or not (e.g. "Copy on write", "Eager evaluation"), but it is far from complete (e.g. "First touch", "Poison pill", and "Work stealing", though mentioned in the main text, are not included). Finally, I think the authors overstate the case when they claim that "the parallel programming community has converged around" Java: Pthreads would have been an equally (if not more) acceptable choice.

All in all, this book provides a good description of many aspects of parallel programming. Most other texts on parallel programming either are class textbooks or focus on a specific technology. In contradistinction to such books, "Patterns for parallel programming" strikes a happy medium between focusing on principles and discussing practical applications.

Alex Gezerlis

1-0 out of 5 stars A total waste of money
When I bought this book, I was hoping that the word 'patterns' in its title was only there to make it buzzword-compliant. Sadly not. It is one of those completely useless pattern books that long-windedly explain what you should do without telling you how or why. Moreover, all those explanations concern things you find out during your first day, when you actually sit down and try to do some parallel programming.

4-0 out of 5 stars Probably one of the best books on this subject
A little dry and a little repetitive but only to a small degree. The subject is (necessarily) approached from several different 'points of view' so some repetition is to be expected, but this should not discourage you from buying and reading this book, it is one of the most readable and affordable books on this topic. I highly recommend this book.

4-0 out of 5 stars Easy to read and useful content
Normally design pattern books are things that you dip into rather than read end to end, simply because they can be very dry reading. Not this one - as long as you have an interest in parallel programming, reading this end to end should be easy. But that's not to say that you couldn't just dip in to the bits that are most applicable to your work - I'm sure you could.

Many of the examples of where each pattern is used come from industry sectors other than my own. But the descriptions of each pattern are good enough that it is easy to picture uses beyond the examples given, and to recognize where you have used a pattern yourself without knowing it had a name, even if you have been doing it that way for years.

Much of the material in this book is stuff that is hard to find elsewhere. I've heard bits of it at Intel seminars or touched on in Intel books (e.g. the Threading Building Blocks book), but otherwise have not seen this stuff in print, even though many people (possibly unknowingly) are implementing the same ideas in code.

Excellent book. I've knocked one star off though, simply because the authors work on the premise that almost everyone is using one of OpenMP, MPI or Java. In practice, there are still an awful lot of people implementing such systems using C++ with either native threading APIs or third party libraries wrapping those threading APIs.

4-0 out of 5 stars Read this book
This is a very good book: It will start teaching you how to think about parallel programming and will help you get started in this area.

Why only four stars, you may ask? The trouble is that, after over 40 years, knowledge about parallel programming is still weak. The scientific-computation folks have their (often heavy-duty) tricks of the trade, but, as another reviewer pointed out, parallel computing is much more than that and is starting to address much broader areas.

This book will help you wade through the maze of confusion and will help you get oriented - that is a huge help. Then you need to practice...


42. Patterns and Skeletons for Parallel and Distributed Computing
Hardcover: 333 Pages (2002-11-11)
list price: US$145.00 -- used & new: US$82.21
Asin: 1852335068
Average Customer Review: 2.0 out of 5 stars
Editorial Review

Product Description
Patterns and Skeletons for Parallel and Distributed Computing is a unique survey of research work in high-level parallel and distributed computing over the past ten years. Comprising contributions from leading researchers in Europe and the US, it looks at interaction patterns and their role in parallel and distributed processing, and demonstrates for the first time the link between skeletons and design patterns. It focuses on computation and communication structures that go beyond simple message-passing or remote procedure calling, and on pragmatic approaches that lead to practical design and programming methodologies with their associated compilers and tools. The book is divided into two parts, covering: skeleton-related material, such as expressing and composing skeletons, formal transformation, cost modelling, and languages, compilers, and run-time systems for skeleton-based programming; and design patterns and other related concepts, applied to areas such as real-time, embedded, and distributed systems. It will be an essential reference for researchers undertaking new projects in this area, and will also provide useful background reading for advanced undergraduate and postgraduate courses on parallel or distributed system design.

Customer Reviews (1)

2-0 out of 5 stars Over-specialized
Somehow, this book came across as too narrow and too broad, both at the same time.

Too narrow, in that each chapter was a very detailed study of a specific implementation or idea. The first few chapters, for example, presented particular extensions to the Haskell programming language, intended to support parallel programming. Lord knows that parallel systems need all the help they can get. If hard-core functional programming is the answer, though, I'm not sure I heard the question. Functional programmers have been beating their drum for at least 30 years, and still have little effect on the main parade of software development.

What they call "skeletons" seem to be fairly ordinary constructs for parallelism, including co-begin and pipelining. I have trouble getting excited about seeing them presented in obscure notation. I would also have hoped to see more demanding kinds of applications. Ray-tracing was a common one, but ray-tracing is "embarassingly parallel." It's almost hard not to get a parallel speedup approaching 1:1 with the number of processors.

The remainder of the book operates at a very different level. Instead of specific syntax in a specific language, it presents a number of design patterns at a very high conceptual level. Instead of particular implementations on specific processors, it discusses techniques that can be applied across loosely-coupled, web-based ensembles. The design pattern discussion was adequate, but seemed an odd mate for the low-level detail of the book's first section.

Even though I work every day with highly parallel computation, I just didn't come away with much I could use. I found this book frankly disappointing.


43. Parallel and Distributed Computing: A Survey of Models, Paradigms and Approaches
by Claudia Leopold
Hardcover: 272 Pages (2000-11-17)
list price: US$132.95 -- used & new: US$39.29
Asin: 0471358312
Average Customer Review: 3.0 out of 5 stars
Editorial Review

Product Description
An all-inclusive survey of the fundamentals of parallel and distributed computing. The use of parallel and distributed computing has increased dramatically over the past few years, giving rise to a variety of projects, implementations, and buzzwords surrounding the subject. Although the areas of parallel and distributed computing have traditionally evolved separately, these models have overlapping goals and characteristics. Parallel and Distributed Computing surveys the models and paradigms in this converging area of parallel and distributed computing and considers the diverse approaches within a common text. Covering a comprehensive set of models and paradigms, the material also skims lightly over more specific details and serves as both an introduction and a survey. Novice readers will be able to quickly grasp a balanced overview with the review of central concepts, problems, and ideas, while the more experienced researcher will appreciate the specific comparisons between models, the coherency of the parallel and distributed computing field, and the discussion of less well-known proposals. Other topics covered include:
* Data parallelism
* Shared-memory programming
* Message passing
* Client/server computing
* Code mobility
* Coordination, object-oriented, high-level, and abstract models
* And much more

Parallel and Distributed Computing is a perfect tool for students and can be used as a foundation for parallel and distributed computing courses. Application developers will find this book helpful to get an overview before choosing a particular programming style to study in depth, and researchers and programmers will appreciate the wealth of information concerning the various areas of parallel and distributed computing.

Customer Reviews (2)

3-0 out of 5 stars A good start, but don't stop here
This book is subtitled "An all-inclusive survey of the fundamentals of parallel and distributed computing." It both succeeds and fails on this point. Leopold does indeed cover a wide expanse of technologies and approaches that characterize the space of high-performance computing. It is in many ways still an emerging space, so conclusively nailing down every possible thread (no pun intended) in a coherent fashion is eminently difficult. The author's treatment of these different possibilities is uneven, overlooking some important contemporary technologies and implementations. It does cover a wide range of topics within the fields of distributed and parallel computing. Furthermore, within the chapters Leopold treats us to both high-level discussions of approaches and a glimpse into some of the implementation challenges involved. On the latter point especially, this book is very useful in that it gives the noninitiate some understanding and appreciation of the peculiarities of parallel programming, without requiring substantial technical background in the technologies. The examples in High Performance C and Parallel Fortran were very enlightening.

Where the book fails is that it is far from "all-inclusive". There are a number of prominent and important developments that have not been included. Similarly, there are other interesting newer technologies that have received only cursory treatment. Examples include:

- No mention of SETI@Home. SETI@Home is the poster child of massively distributed computing, and with 15 teraflops of raw computing power, it is more capable than IBM's ASCI White supercomputer.
- No mention of distributed.net, or other notable exercises in public and commercial grid computing.
- Grid computing gets only a glancing reference at the tail end of one chapter. A comparative analysis of this important and still-forming space is glaringly absent from this text.
- JavaSpaces, Sun's answer to tuple spaces, gets only a few sentences.
- Java RMI similarly gets less than a paragraph.
- Although DCOM is now basically legacy for Microsoft, it represents an important milestone in the evolution of distributed computing. It receives only a paragraph.
- Talk of web services and .Net would have been hitting the airwaves as the writing of this book was progressing, although possibly late in the effort. However, some cursory mention at least should have been made. There is increasing discussion of exposing grid compute services via web-services interfaces, and Microsoft has recently announced its intention to port the Globus toolkit to Windows.
- Oh yeah, about Globus. Barely a mention.

It was clear from the text that the author came from a strong UNIX and CORBA background. The text has the feel of a PhD thesis turned book, and the areas of concentration are decidedly academic. There are a few other areas of minor complaint. Some of the wording in the text is clumsy, reflecting inadequate editing. Some topics feel like they are introduced in reverse order, assuming the reader already has some context about the given topic.

The author makes a sometimes-clumsy distinction between paradigms and models. The distinction is important in that an understanding of models brings readers closer to envisioning how they might tackle a given problem themselves. However, references to various models are sprinkled throughout the book. A comparative analysis, even a brief one, would have been very useful had it been centralized.

Those complaints may sound harsh, but overall the book is useful. It demystifies the problems of parallel programming, and provides a reasonably concise starting point for researching the distributed-computing space. But consider this book a starting point, not an ending point.



44. Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation)
by Barbara Chapman, Gabriele Jost, Ruud van der Pas
Paperback: 353 Pages (2007-10-31)
list price: US$38.00 -- used & new: US$28.32
Asin: 0262533022
Average Customer Review: 4.5 out of 5 stars
Editorial Review

Product Description
"I hope that readers will learn to use the full expressibility and power of OpenMP. This book should provide an excellent introduction to beginners, and the performance section should help those with some experience who want to push OpenMP to its limits."
—from the foreword by David J. Kuck, Intel Fellow, Software and Solutions Group, and Director, Parallel and Distributed Solutions, Intel Corporation

OpenMP, a portable programming interface for shared memory parallel computers, was adopted as an informal standard in 1997 by computer scientists who wanted a unified model on which to base programs for shared memory systems. OpenMP is now used by many software developers; it offers significant advantages over both hand-threading and MPI. Using OpenMP offers a comprehensive introduction to parallel programming concepts and a detailed overview of OpenMP.

Using OpenMP discusses hardware developments, describes where OpenMP is applicable, and compares OpenMP to other programming interfaces for shared and distributed memory parallel architectures. It introduces the individual features of OpenMP, provides many source code examples that demonstrate the use and functionality of the language constructs, and offers tips on writing an efficient OpenMP program. It describes how to use OpenMP in full-scale applications to achieve high performance on large-scale architectures, discussing several case studies in detail, and offers in-depth troubleshooting advice. It explains how OpenMP is translated into explicitly multithreaded code, providing a valuable behind-the-scenes account of OpenMP program performance. Finally, Using OpenMP considers trends likely to influence OpenMP development, offering a glimpse of the possibilities of a future OpenMP 3.0 from the vantage point of the current OpenMP 2.5.

With multicore computer use increasing, the need for a comprehensive introduction and overview of the standard interface is clear. Using OpenMP provides an essential reference not only for students at both undergraduate and graduate levels but also for professionals who intend to parallelize existing codes or develop new parallel programs for shared memory computer architectures.

Customer Reviews (3)

5-0 out of 5 stars Excellent with heavy emphasis on performance
Only the most inexpensive processors, or processors built for low power consumption, now have single cores. The present and future of CPUs is multi-core: quad cores per CPU, 6 cores soon to come, and probably more after that. The authors have a deep understanding of parallel processing, modern computer architecture, and OpenMP, and that understanding is communicated clearly in this excellent book. The only reason to use OpenMP is to make your programs run faster, and this motivation permeates the entire book. Extensive discussions of performance are included: coding to maximize hits on the CPU cache, the overhead in parallel programs, how the memory placement and thread-binding behavior of multiple multi-core CPUs can affect performance, and many other considerations that likely never occurred to you. Almost all of the discussions are presented with specific examples and instruction on how to code OpenMP directives. The emphasis is on C, with enough examples in Fortran to be able to use that also; there is no discussion of C++. Since C and Fortran are by far the most important languages used for scientific computation, the language choices are appropriate, at least for that community.

4-0 out of 5 stars A practical and well-priced book
The OpenMP specification can be downloaded from the web, but it is not really a good starting point for learning how to write real programs using the OpenMP constructs. However, this book does have a lot of material that you don't strictly need just to write programs. This extra information comes in the form of context and background on parallel computing in general, since the book is really intended to double as a textbook and a practical guide for professionals. The following briefly describes the contents.

Chapter one contains some background information on OpenMP and its applications. You can skip it if you are not interested in this or already know the material.
Chapter two is a brief overview of the features of OpenMP at a high level. It discusses how OpenMP deals with problems that come from the complex memory hierarchy that exists on modern computers.
Chapter three is a good starting point if you know you need OpenMP, know why you need it, and just need to get something going. It discusses a complete OpenMP program in both C and Fortran that uses a couple of OpenMP's most widely used features, plus it explains the basics of the OpenMP syntax. The problem discussed is specifically how to perform a matrix times a vector operation in parallel.
Chapter four is a more complete overview of the OpenMP programming paradigm and it contains many example programs. First the most widely used features are introduced with a focus on those features that enable work to be shared among multiple threads. The scope narrows until the author is down to some of OpenMP's less widely known features. The programs start simple and get more complex as the chapter progresses, always staying within the field of scientific computing.
Chapters five and six go together, and discuss how to optimize performance with OpenMP. There are a number of practical programming tips and an extended example that gives insight into the process of investigating performance problems.
Chapter seven talks about program correctness and troubleshooting. This can be hard to do in shared-memory parallel programs.
Chapter eight covers how the compiler translates an OpenMP program into an application that can be executed in parallel. It describes what happens behind the scenes, including the operation of OpenMP-aware compilers, performance tools, and debuggers, and discusses strategies for obtaining high performance.
Chapter nine is a special-topics chapter and discusses trends that could influence extensions to the OpenMP specification in the future. It is not necessary for the practicing professional to know this, but it is interesting.

My perspective is that of someone who knows I must use OpenMP and needs good concrete examples and an accompanying tutorial to get going. Chapters three through eight were ideal for my purpose. Other books I examined ran the gamut from talking about why OpenMP is important while lacking practical details, to overpriced textbooks, to books about OpenMP plus some other parallel programming paradigms that weren't specific or modern enough. This one is clear, concise, modern, and the price can't be beat. The only drawback for me was the dual emphasis on C and Fortran, but I'm sure the Fortran information is still useful to a good number of programmers.

4-0 out of 5 stars Good Performance on a Multicore Machine - Try OpenMP?
I have most of the parallel computing books out there, so I am something of a collector. Most focus on the basics of parallel programming, MPI, OpenMP, both, or some other less popular (yet) paradigm, e.g. PFortran or TBB. With every parallel-computing wannabe buying a multicore machine (dual, quad, dual-quad, etc.), the parallel computing software "industry" is in flux. No longer will MPI on a cluster be enough. It remains to be seen whether the slower memory bus on quad-core machines will allow for speedups without a major code overhaul or a new paradigm. Anyway, this book is a welcome addition to my collection. For one, it is current (2008), and it is focused on OpenMP (but does treat dual MPI/OpenMP programming). It is well written (I am about 100 pages in, since I just got my copy last week) and has one tantalizing chapter entitled "How to get good performance by using OpenMP", which is really timely, since my new 72-core machine (9 dual Intel quad cores) seems to give slower performance for a major commercial CFD code than the equivalent number of dual-core nodes. I hope it helps me. Based on the rapid growth of multicore machines and the lack of a simple programming solution, I recommend this book to all those wanting to try to get their codes running fast on multicore machines. The only downsides so far are the lack of downloadable code (you have to type it in yourself) and the difficulty of testing the code fragments, because they are just that: fragments. A nice feature of the book is the 50/50 emphasis on Fortran and C codes, which are still the mainstay in large-scale scientific computing.


45. Parallel Optimization: Theory, Algorithms, and Applications (Numerical Mathematics and Scientific Computation)
by Yair Censor, Stavros A. Zenios
Hardcover: 576 Pages (1998-01-08)
list price: US$225.00 -- used & new: US$180.00
(price subject to change: see help)
Asin: 019510062X
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
This book offers a unique pathway to methods of parallel optimization by introducing parallel computing ideas into both optimization theory and into some numerical algorithms for large-scale optimization problems. The three parts of the book bring together relevant theory, careful study of algorithms, and modeling of significant real world problems such as image reconstruction, radiation therapy treatment planning, and transportation problems. ... Read more

Customer Reviews (2)

3-0 out of 5 stars For the dedicated specialist
This book is highly mathematical. It phrases its points as a series of theorems, with a number of case studies at the end. The theorems all stop just short of usable technique. The examples, though exciting, all presuppose command of technique. I regret that I never found the missing piece within myself - the one that connects deep theorems about abstract N-dimensional pseudodistances to working fluency with CAT scans.

I did, however, get some understanding of the kinds of problems the authors address. They involve analytic functions, with linear or nonlinear constraints. In particular, they are high-dimensional problems - thousands or millions of constraints - amenable to fairly fine-grained optimization. They are not discrete problems, like the Travelling Salesman. They are not problems with hugely jagged reward surfaces, like "motif finding" problems in bioinformatics. They are not genetic algorithms, Monte Carlo searches, or combinatorial problems. The authors do in fact parallelize a number of important optimization problems, including CAT scans, transportation planning, and radiation therapy, but not all optimization techniques.

Those problems span only a small part of the parallelizable world. The broad promise in the title "Parallel Optimization" was only partly kept. Some parallelization techniques were presented, as well as some interesting perspectives on numerical optimization. Optimization is a large field, however, and this is only a small map.

Still, for that range of problems, it seems to offer the right reader profound insight. I cannot be sure, though, since I'm not the right reader. I give it three stars, just because I had to give something. Different people will assign this book very different value.

5-0 out of 5 stars Proximal point algorithms by Censor and Zenios
Part I of this book starts with an aphorism attributed to H. von Helmholtz: "The most practical thing in the world is a good theory." The book does justice to von Helmholtz's maxim. The theory of Bregman distances is presented in a clear, geometrical, and intuitive way. Part II of the book presents several important and illustrative applications of proximal point algorithms: to constrained optimization, maximum entropy problems, financial stochastic networks, and several other important areas. ... Read more


46. Highly Parallel Computing (The Benjamin/Cummings Series in Computer Science and Engineering)
by George S. Almasi, Allan Gottlieb
 Hardcover: 689 Pages (1993-10)
list price: US$68.95
Isbn: 0805304436
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Thorough, up-to-date introduction to parallel computing in this revision of a classic. Emphasizes the most recent technology and how it affects parallel computing, including RISC chips, CMOS, and fiber optics. DLC: Parallel processing (Electronic computers) ... Read more


47. Parallel Computing in Quantum Chemistry
by Curtis L. Janssen, Ida M. B. Nielsen
Hardcover: 232 Pages (2008-04-09)
list price: US$94.95 -- used & new: US$64.07
(price subject to change: see help)
Asin: 1420051644
Average Customer Review: 3.5 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
An In-Depth View of Hardware Issues, Programming Practices, and Implementation of Key Methods

Exploring the challenges of parallel programming from the perspective of quantum chemists, Parallel Computing in Quantum Chemistry thoroughly covers topics relevant to designing and implementing parallel quantum chemistry programs.

Focusing on good parallel program design and performance analysis, the first part of the book deals with parallel computer architectures and parallel computing concepts and terminology. The authors discuss trends in hardware, methods, and algorithms; parallel computer architectures and the overall system view of a parallel computer; message-passing; parallelization via multi-threading; measures for predicting and assessing the performance of parallel algorithms; and fundamental issues of designing and implementing parallel programs.

The second part contains detailed discussions and performance analyses of parallel algorithms for a number of important and widely used quantum chemistry procedures and methods. The book presents schemes for the parallel computation of two-electron integrals, details the Hartree–Fock procedure, considers the parallel computation of second-order Møller–Plesset energies, and examines the difficulties of parallelizing local correlation methods.

Through a solid assessment of parallel computing hardware issues, parallel programming practices, and implementation of key methods, this invaluable book enables readers to develop efficient quantum chemistry software capable of utilizing large-scale parallel computers. ... Read more

Customer Reviews (2)

2-0 out of 5 stars Awfully thin
First let me say I'll agree with crawdad's statement that "it stops too soon," but we differ on how. The book gives a far too cursory description of parallelizing quantum chemistry codes.

The first two-thirds of the book are pretty basic parallel computing material. The authors tie it to some examples, but there seems to be a disconnect between the topic at the front of a chapter and the examples at the end. The chapter on computer/network architectures is a good example: after referring to seven different network topologies in abstract terms, only two real-world examples are given, and those ten pages later.

In Chapter 5 the authors spend a lot of time working through Amdahl's and Gustafson's laws, which provide basic models of computation time, but then set them aside in favor of big-O notation. Amdahl and Gustafson are never heard from again.
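For readers meeting these laws for the first time: Amdahl's law bounds speedup when a fixed fraction of the work is inherently serial, while Gustafson's law gives the scaled speedup when the problem grows with the machine. A minimal sketch (plain Python; the function names and the 5% example are mine, not the book's):

```python
def amdahl_speedup(serial_fraction: float, p: int) -> float:
    """Amdahl's law: speedup on p processors when a fixed fraction
    of the total work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def gustafson_speedup(serial_fraction: float, p: int) -> float:
    """Gustafson's law: scaled speedup when the parallel portion
    of the problem grows with the number of processors."""
    return p - serial_fraction * (p - 1)

# With 5% serial work on 16 processors, Amdahl caps speedup well below 16:
print(round(amdahl_speedup(0.05, 16), 2))    # -> 9.14
print(round(gustafson_speedup(0.05, 16), 2)) # -> 15.25
```

Note the qualitative difference the reviewer's chapter is driving at: under Amdahl the speedup saturates at 1/serial_fraction no matter how many processors are added, while under Gustafson it keeps growing.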

One quirk of the way the formulas are written needs to be corrected too. Many of the communication formulas scale logarithmically with the processor count, but the formulas are not formatted to distinguish this. For example, log2pa is actually (log2(p))*a. It reads as though a (the latency) is part of the logarithm, but it is not.
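To make the notational point concrete: under a standard tree-broadcast cost model (my assumption here, with latency a and per-word cost b; not necessarily the book's exact formula), the latency multiplies the log term rather than sitting inside it:

```python
import math

def broadcast_time(p: int, a: float, b: float, n: int) -> float:
    """Tree broadcast of an n-word message among p processes:
    ceil(log2 p) communication rounds, each costing latency a
    plus n*b transfer time. The latency a multiplies the log
    term; it is NOT inside the logarithm."""
    return math.ceil(math.log2(p)) * (a + n * b)
```

So for p = 8 with zero transfer cost, the time is 3a (three rounds of latency), not log2(8a).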

When we get to the second section, I have a real hard time following. Realize that I come from a computer science background, not chemistry/physics, and was simultaneously working through Lowe's Quantum Chemistry. So maybe it would make sense to a more seasoned chemist, but I have a hard time figuring out precisely what the author is talking about. The code samples are not much help. They are in a very high-level pseudocode, so figuring out which element is which and what data is needed where through code inspection is not an option.

I don't want to be entirely critical here. It is very readable despite the problems noted above. There is a lot of good information in the book, but it needs to be much more in depth.

5-0 out of 5 stars A timely and much-needed book
This is a well-written book aimed at researchers in the field of quantum chemistry -- from graduate students to long-standing experts -- who require a concise and clear description of the most important problems facing efforts to parallelize ab initio quantum chemical programs. Given the rapid emergence of new petascale computer systems containing thousands to even millions of computing cores, the timing of this book is fitting.

Full disclosure: I know the authors of this book well. I have published two peer-reviewed journal articles with Dr. Janssen and one with Dr. Nielsen, and I received lab-directed research and development funding from Sandia National Labs through a Department of Energy project of Dr. Janssen's in 2001-2004. In addition, I reviewed the proposal for the book for Taylor and Francis publishers. However, I was not involved in the writing of the book at all. I purchased it of my own accord, and I am writing this review only because I am very impressed with the finished product.

I found the book to be tremendously enlightening. In the first half, the authors provide an overview of essential aspects and tools of parallel computing: hardware, network topology, message-passing software and methods, threading, load-balancing, etc. In addition, they give a fairly detailed explanation of methods for modeling the parallel performance (speedup and efficiency) of algorithms, as well as aspects of parallel program design. One of the strengths of the book is the way the authors make their points clearer by constantly returning to a few specific examples, including matrix-vector multiplication and the second-order Moller-Plesset perturbation theory (MP2) algorithm.

They then make use of the fundamentals developed in the first half of the book to address several key problems in quantum chemical programs: two-electron repulsion integral evaluation, the integral-direct Hartree-Fock method, and canonical and local MP2 energy calculations. These provide fertile soil for discussions of load balancing, collective versus one-sided communication, and hybrid (simultaneous shared- and distributed-memory) parallel methods. Each example is well-supported by performance models that provide a clear analysis of the scalability of each algorithm.

My only criticism of the book is that it stops too soon. The numerous problems associated with parallel implementation of more advanced and complicated methods, especially coupled cluster theory, are not discussed, and I would have enjoyed reading the authors' take on this area of ongoing research.

Nevertheless, I believe this book will prove extremely valuable to those developing quantum chemical programs for emerging massively parallel supercomputers. The authors' perspective on the parallelism problem is state-of-the-art, and our field would be wise to listen carefully to what they have to say. ... Read more


48. Parallel Programming with MPI
by Peter Pacheco
Paperback: 500 Pages (1996-10-15)
list price: US$76.95 -- used & new: US$40.00
(price subject to change: see help)
Asin: 1558603395
Average Customer Review: 4.5 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
A hands-on introduction to parallel programming based on the Message-Passing Interface (MPI) standard, the de-facto industry standard adopted by major vendors of commercial parallel systems. This textbook/tutorial, based on the C language, contains many fully-developed examples and exercises. The complete source code for the examples is available in both C and Fortran 77. Students and professionals will find that the portability of MPI, combined with a thorough grounding in parallel programming principles, will allow them to program any parallel system, from a network of workstations to a parallel supercomputer.

* Proceeds from basic blocking sends and receives to the most esoteric aspects of MPI.
* Includes extensive coverage of performance and debugging.
* Discusses a variety of approaches to the problem of basic I/O on parallel machines.
* Provides exercises and programming assignments. ... Read more

Customer Reviews (13)

4-0 out of 5 stars For C programmers, Fortran programmers get a challenge
For the undergraduate or graduate student who programs in C, this book is a well-written and informative introduction to MPI programming. However, for the old-school Fortran programmer, the book offers little guidance. Typical students today will be writing code in C (or its derivatives), and so the choice of programming language is logical, but I have still spent a lot more time online looking up MPI Fortran code and syntax than using the book.

Overall, the text itself is solid and readable. A Fortran version of the text would be nice, but the online code snippets are good enough to get you started.

4-0 out of 5 stars Great for MPI beginners
Pacheco's book is a strong, gently paced introduction to a very complex API. MPI, the message passing interface, is the most common coordination tool for parallel scientific computing. When a Blue Gene programmer has 1,000 or 100,000 processors all working on different parts of one calculation, there's a big problem in getting partial results from where they're computed to where they're needed. That's what MPI is for.

When the problem is so complicated, the solution is also complicated. Pacheco does a good job of breaking MPI down into digestible pieces, starting with the basic send and receive primitives that you'll use most often. He presents each new part of the API in terms of some problem to be solved, keeping a concrete and practical tone to this book. He gradually adds more pieces in terms of more practical exercises: broadcasts and reductions, scatter and gather, data structuring, communicators, and asynchronous IO.

Along the way, Pacheco introduces algorithms that even experienced uniprocessor programmers may not be familiar with, including bitonic sorting and Fox's algorithm for matrix multiplication. This isn't gratuitous intellectual showmanship. It's a pointed demonstration that, when communication barriers change the computation landscape, old paths to solutions may not be the best routes any more. After finishing with the MPI API itself, Pacheco presents debugging techniques and common kinds of support libraries, as well as basic techniques for analyzing the potential and actual acceleration possible for a given problem.

If you're serious about MPI, you'll need the official standard for understanding the fussy details of these complex APIs. That's a pretty brutal way for a beginner to get going, though. This book introduces not only the basic concepts of MPI, but also the basics of how to think about highly parallel programming. And, as multi-threaded, multi-core, multi-processor systems become common, that's an increasing percentage of all programming.

//wiredweird
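The broadcasts and reductions this review mentions rest on one idea worth seeing in miniature: combine p partial results pairwise, so only ceil(log2 p) communication rounds are needed instead of p-1. A toy serial simulation of that structure (plain Python, no MPI; the binomial-tree pairing below is one common scheme, not the only one):

```python
def tree_reduce(values):
    """Simulate a binomial-tree sum reduction. In each round every
    surviving 'rank' i absorbs the value held by rank i+step, halving
    the active set, so p values combine in ceil(log2 p) rounds.
    Returns (total, number_of_rounds)."""
    vals = list(values)
    rounds = 0
    step = 1
    while step < len(vals):
        for i in range(0, len(vals) - step, 2 * step):
            vals[i] += vals[i + step]  # rank i "receives" from rank i+step
        step *= 2
        rounds += 1
    return vals[0], rounds
```

With 8 values the sum arrives at rank 0 after 3 rounds; with 1,024 processes the same pattern needs only 10, which is why these collectives matter at Blue Gene scale.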

5-0 out of 5 stars Your MPI on-ramp
I read this book over the past week, covering chapters 1 through 6, skimming 7-10, and reading 11 through the final chapter 16. It's basically applied MPI programming, done up very well and clearly, starting with architectural history and motivations and leading into a simple numerical integrator example program in chapter 4 (chapter 3 was the MPI `hello, world'). The coding is in C, and I wrote my own integrator after finishing chapter 4 to also explore floating-point numbers in calculations, loop control, and integrating arbitrary functions on arbitrary intervals with adjustable resolution.

The integrator is developed more fully throughout the book, wherein MPI performance issues of the original design are pointed out and polished off as additional MPI functions and techniques are introduced. Some of these techniques include tree-structured initialization and broadcasts, data communication optimizations (such as derived types, packing/unpacking, etc.), and guidance as to when certain techniques would be more useful than others offered by MPI. Communications are taken further later in the book, where the important non-blocking forms and more advanced concepts are brought to light and illustrated. Empirical analysis of algorithmic performance occupies two full chapters and is very interesting, including a detailed look into Amdahl's law. It's an excellent example of why we should keep our eyes open in research. Program design and troubleshooting are also covered, but I only skimmed those chapters. Several parallel algorithms and some parallel libraries are also treated well in the text.

At only 362 pages (minus appendices), this book is a quick read and a superb lab manual. If you are a software developer just now getting into MPI, this book will certainly accelerate you onto MPI with the confidence that you can do anything with it. Just give it a week of your time. It's the perfect self-study manual. 5 stars.
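The integrator exercise described above follows the canonical decomposition pattern the book keeps refining: give each process one block of the interval, integrate locally, then combine the partial sums. A serial sketch of that structure (plain Python standing in for the per-rank MPI code; all names are mine, not Pacheco's):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals on [a, b]."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def parallel_style_integrate(f, a, b, n, p):
    """Each 'rank' r integrates its own block of [a, b], exactly as
    each MPI process would; the final sum() plays the role of the
    reduction that gathers the partial results."""
    h = (b - a) / p
    local = [trapezoid(f, a + r * h, a + (r + 1) * h, n // p)
             for r in range(p)]
    return sum(local)

# Integrating x^2 on [0, 1] with 4 'ranks'; the exact value is 1/3.
approx = parallel_style_integrate(lambda x: x * x, 0.0, 1.0, 1000, 4)
```

In the real MPI version, each rank computes only its own `local` entry and an MPI_Reduce replaces the `sum()`; the block boundaries are what the book's later chapters optimize with broadcasts and derived datatypes.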

5-0 out of 5 stars Just what I needed
I was looking for a good introduction into MPI to parallelize some software I had written. Somehow, no online resources seemed to cover the topic well, so $30 seemed like a worthwhile investment, compared to my time. I got this book and the more recent one by Quinn (PP in C with MPI and OpenMP). This one's the hands-down winner. I basically scanned it in bed for three nights, and two weeks later my code is running like a charm. Just the right mix of reference and tutorial, very little distraction, and a pleasant read throughout.

Highly recommended.

5-0 out of 5 stars Well written, easy for someone who is not an MPI guru... yet
The book is written very well and goes through how the MPI functions work, and all their parameters, in pretty fine detail. He even talks about the simpler things many books overlook. The only thing I wish were in this book - and this has nothing to do with its quality - is some C++ reference or discussion of the C++ MPI calls. This book is written with examples in C only (I think a Fortran version may be available online), but the theory he teaches and the design of the programs will work for any language, of course. It would be nice to have a good reference chapter that lists the Fortran MPI functions and the C++ ones too, though. ... Read more


49. Parallel Scientific Computation: A Structured Approach using BSP and MPI
by Rob H. Bisseling
Hardcover: 334 Pages (2004-05-06)
list price: US$135.00 -- used & new: US$124.88
(price subject to change: see help)
Asin: 0198529392
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Based on the author's extensive development, this is the first text explaining how to use BSPlib, the bulk synchronous parallel library, which is freely available for use in parallel programming. Aimed at graduate students and researchers in mathematics, physics, and computer science, the book treats topics that are core to the area of scientific computation: solving dense linear systems by Gaussian elimination, computing fast Fourier transforms, and solving sparse linear systems by iterative methods. Each topic is treated in depth, starting from the problem formulation and a sequential algorithm, through a parallel algorithm and its analysis, to a complete parallel program written in C and BSPlib, and experimental results obtained using this program on a parallel computer. Additional topics treated in the exercises include data compression, random number generation, cryptography, eigensystem solving, 3D and Strassen matrix multiplication, wavelets and image compression, the fast cosine transform, decimals of pi, simulated annealing, and molecular dynamics. The book contains five small but complete example programs written in BSPlib which illustrate the methods taught. An appendix on the message-passing interface (MPI) discusses how to program in a structured, bulk synchronous parallel style using the MPI communication library, and presents MPI equivalents of all the programs in the book. The complete programs and their driver programs are freely available online in packages called BSPedupack and MPIedupack. ... Read more


50. High Performance Computing: Third International Symposium, ISHPC 2000 Tokyo, Japan, October 16-18, 2000 Proceedings (Lecture Notes in Computer Science)
Paperback: 595 Pages (2000-11-10)
list price: US$102.00 -- used & new: US$42.77
(price subject to change: see help)
Asin: 3540411283
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
This book constitutes the refereed proceedings of the Third International Symposium on High-Performance Computing, ISHPC 2000, held in Tokyo, Japan in October 2000.
The 15 revised full papers presented together with 16 short papers and five invited contributions were carefully reviewed and selected from 53 submissions. Also included are 20 refereed papers from two related workshops. The book offers topical sections on compilers, architectures and evaluation; algorithms, models, and applications; OpenMP: experiences and implementations; and simulation and visualization.
... Read more


51. PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Network Parallel Computing (Scientific and Engineering Computation)
by Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, Vaidyalingam S. Sunderam
Paperback: 299 Pages (1994-11-08)
list price: US$38.00 -- used & new: US$26.74
(price subject to change: see help)
Asin: 0262571080
Average Customer Review: 4.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Written by the team that developed the software, this tutorial is thedefinitive resource for scientists, engineers, and other computer userswho want to use PVM to increase the flexibility and power of theirhigh-performance computing resources. PVM introduces distributedcomputing, discusses where and how to get the PVM software, provides anoverview of PVM and a tutorial on setting up and running existingprograms, and introduces basic programming techniques including puttingPVM in existing code. There are program examples and details on how PVMworks on UNIX and multiprocessor systems, along with advanced topics(portability, debugging, improving performance) and troubleshooting. PVM(Parallel Virtual Machine) is a software package that enables thecomputer user to define a networked heterogeneous collection of serial,parallel, and vector computers to function as one large computer. It canbe used as stand-alone software or as a foundation for otherheterogeneous network software. PVM may be configured to contain variousmachine architectures, including sequential processors, vectorprocessors, and multicomputers, and it can be ported to new computerarchitectures that may emerge. ... Read more

Customer Reviews (2)

4-0 out of 5 stars Parallelism APIs evolving
The Parallel Virtual Machine abstraction assumes a message-passing environment built from Unix machines. Its message-passing primitives are a good deal simpler than MPI's, and the authors note that MPI can be effective as an under-layer for that part of PVM. Unlike MPI, however, PVM emphasizes heterogeneous computing ensembles built from whatever hardware is already at hand.

What sets PVM apart from the others is its emphasis on the pragmatics of multi-computer coordination. More than the usual SPMD coordination, it has facilities for managing the ensemble. It even has facilities for signalling runaway processes and for recovering from lost nodes and other errors. And, although the authors note many system-dependent specifics, they address issues that arise in managing the server daemons, crossing administrative boundaries, and other pragmatics of parallel computing.

Most of the book is taken up with code samples and man pages for the PVM API. That gives it a very hands-on, practical feel, short on the philosophical and theoretical tone of other books on parallelism APIs. PVM doesn't depend on special compilers, so it's a bit easier for C programmers to approach than OpenMP is. And it's a compact API with just a few central concepts, mostly drawn from standard C idioms, so it's a lot simpler than MPI. The book's mention of MasPar, Kendall Square Research, DEC, and Thinking Machines gives it an antiquated feel, though. I'm not sure how common PVM is these days, but if it's what you have, then this is the book for you.

//wiredweird

4-0 out of 5 stars So you want to build a super computer?
I bought this book because I built a supercomputer out of junk computers. I am writing a game based on Risk (the board game) that runs on all these machines. I decided I liked PVM better than MPI, because PVM doesn't require any special compilers and worked more easily with SSH. It was also easier to set up and use than MPI (IMHO).

This book is a good tutorial and introduction to PVM, but the problem is it talks a lot about strange computers and things you will most likely never have heard of (HIPPI, bit-vector computers). While it is cool stuff, it's pretty old now. But PVM is still a lot easier (IMHO) to get into than MPI.

Or you can just roll your own message-passing code by hand using TCP/IP. ... Read more
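Rolling your own message passing over TCP/IP, as the reviewer suggests, boils down to framing messages on a socket. A minimal sketch (Python sockets on localhost; a real PVM or MPI replacement would also need error handling, host management, and data-format conversion between heterogeneous machines):

```python
import socket
import threading

def recv_one(srv, out):
    """Accept one connection and read one length-prefixed message."""
    conn, _ = srv.accept()
    with conn:
        size = int.from_bytes(conn.recv(4), "big")  # 4-byte length prefix
        data = b""
        while len(data) < size:                     # TCP may deliver in pieces
            data += conn.recv(size - len(data))
        out.append(data.decode())

def send_one(port, msg):
    """Connect to the receiver and send one length-prefixed message."""
    payload = msg.encode()
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(len(payload).to_bytes(4, "big") + payload)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
received = []
t = threading.Thread(target=recv_one, args=(srv, received))
t.start()
send_one(srv.getsockname()[1], "hello from rank 1")
t.join()
srv.close()
```

The length prefix is the whole trick: TCP is a byte stream with no message boundaries, so the receiver must be told how many bytes make up one "message" - exactly the bookkeeping PVM's pack/unpack routines hide from you.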


52. High Performance Computing for Computational Science - VECPAR 2002: 5th International Conference, Porto, Portugal, June 26-28, 2002. Selected Papers and ... Talks (Lecture Notes in Computer Science)
Paperback: 732 Pages (2003-07-29)
list price: US$105.00 -- used & new: US$94.07
(price subject to change: see help)
Asin: 3540008527
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

This book constitutes the thoroughly refereed post-proceedings of the 5th International Conference on High Performance Computing for Computational Science, VECPAR 2002, held in Porto, Portugal in June 2002.

The 45 revised full papers presented together with 4 invited papers were carefully selected during two rounds of reviewing and improvement. The papers are organized in topical sections on fluids and structures, data mining, computing in chemistry and biology, problem solving environments, computational linear and non-linear algebra, cluster computing, imaging, and software tools and environments. ... Read more


53. Parallel and Distributed Computing Handbook
by Albert Y. Zomaya
 Hardcover: 1232 Pages (1995-12-01)
list price: US$99.50
Isbn: 0070730202
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
The evolution from traditional serial to parallel and distributed systems has produced a quantum leap in computing power -- and an urgent need for a comprehensive reference on the technology. Now, with over a thousand pages of reference material and a wealth of illustrations and data tables, this Handbook offers readers the first information source with the scope to encompass the parallel and distributed revolution. Written by an international team of experts and reviewed by an elite group of editorial advisors, the handbook describes the latest theories behind these fast-developing systems, summarizes the current state of the art, interprets the most promising trends, and spotlights the many industrial and commercial applications. ... Read more


54. Solutions to Parallel and Distributed Computing Problems: Lessons from Biological Sciences
Hardcover: 288 Pages (2000-10-31)
list price: US$140.00 -- used & new: US$9.89
(price subject to change: see help)
Asin: 0471353523
Average Customer Review: 5.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Solving problems in parallel and distributed computing through the use of bio-inspired techniques. Recent years have seen a surge of interest in computational methods patterned after natural phenomena, with biologically inspired techniques such as fuzzy logic, neural networks, simulated annealing, genetic algorithms, or evolutionary computer models increasingly being harnessed for problem solving in parallel and distributed computing. Solutions to Parallel and Distributed Computing Problems presents a comprehensive review of the state of the art in the field, providing researchers and practitioners with critical information on the use of bio-inspired techniques for improving software and hardware design in high-performance computing. Through contributions from top leaders in the field, this important book brings together current research results, exploring some of the most intriguing and cutting-edge topics from the world of biocomputing, including:

* Parallel and distributed computing of cellular automata and evolutionary algorithms
* How the speedup of bio-inspired algorithms will help their applicability in a wide range of problems
* Solving problems in parallel simulation through such techniques as simulated annealing algorithms and genetic algorithms
* Techniques for solving scheduling and load-balancing problems in parallel and distributed computers
* Applying neural networks for problem solving in wireless communication systems ... Read more

Customer Reviews (1)

5-0 out of 5 stars A good overview of the use of artificial life techniques
The book reviews the use of artificial life techniques in solving a wide range of problems in high performance computing and mobile computing. The approaches are an interesting and fresh look at how new solution methodologies can be applied to deal with complex problems in the areas of parallel and mobile computing.

I would highly recommend the book to any researcher who is interested in experimenting with new ideas and probably contemplating the use of a-life methods. ... Read more


55. Introduction to Parallel Algorithms (Wiley Series on Parallel and Distributed Computing)
by C. Xavier, S. S. Iyengar
Hardcover: 384 Pages (1998-08-05)
list price: US$151.95 -- used & new: US$25.00
(price subject to change: see help)
Asin: 0471251828
Average Customer Review: 1.0 out of 5 stars
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
Parallel Algorithms Made Easy

The complexity of today's applications coupled with the widespread use of parallel computing has made the design and analysis of parallel algorithms topics of growing interest. This volume fills a need in the field for an introductory treatment of parallel algorithms-appropriate even at the undergraduate level, where no other textbooks on the subject exist. It features a systematic approach to the latest design techniques, providing analysis and implementation details for each parallel algorithm described in the book. Introduction to Parallel Algorithms covers foundations of parallel computing; parallel algorithms for trees and graphs; parallel algorithms for sorting, searching, and merging; and numerical algorithms. This remarkable book:
* Presents basic concepts in clear and simple terms
* Incorporates numerous examples to enhance students' understanding
* Shows how to develop parallel algorithms for all classical problems in computer science, mathematics, and engineering
* Employs extensive illustrations of new design techniques
* Discusses parallel algorithms in the context of the PRAM model
* Includes end-of-chapter exercises and detailed references on parallel computing.

This book enables universities to offer parallel algorithm courses at the senior undergraduate level in computer science and engineering. It is also an invaluable text/reference for graduate students, scientists, and engineers in computer science, mathematics, and engineering. ... Read more

Customer Reviews (1)

1-0 out of 5 stars Book has too many errors
This book has numerous errors. I am surprised it was printed in the first place. I found myself having to derive any equation they show because I eventually just didn't trust what the book said. ... Read more


56. High Performance Parallel Database Processing and Grid Databases (Wiley Series on Parallel and Distributed Computing)
by David Taniar, Clement H. C. Leung, Wenny Rahayu, Sushant Goel
Hardcover: 554 Pages (2008-10-13)
list price: US$140.00 -- used & new: US$92.58
(price subject to change: see help)
Asin: 0470107626
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
The latest techniques and principles of parallel and grid database processing

The growth in grid databases, coupled with the utility of parallel query processing, presents an important opportunity to understand and utilize high-performance parallel database processing within a major database management system (DBMS). This important new book provides readers with a fundamental understanding of parallelism in data-intensive applications, and demonstrates how to develop faster capabilities to support them. It presents a balanced treatment of the theoretical and practical aspects of high-performance databases to demonstrate how a parallel query is executed in a DBMS, including concepts, algorithms, analytical models, and grid transactions.

High-Performance Parallel Database Processing and Grid Databases serves as a valuable resource for researchers working in parallel databases and for practitioners interested in building a high-performance database. It is also a much-needed, self-contained textbook for database courses at the advanced undergraduate and graduate levels.


57. Spatially Structured Evolutionary Algorithms: Artificial Evolution in Space and Time (Natural Computing Series)
by Marco Tomassini
Paperback: 193 Pages (2010-11-30)
list price: US$79.95 -- used & new: US$79.95
(price subject to change: see help)
Asin: 364206339X
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description

Evolutionary algorithms (EAs) are now a mature family of problem-solving heuristics that has found its way into many important real-life problems and into leading-edge scientific research. Spatially structured EAs have different properties from standard, mixing EAs. By virtue of the structured disposition of the population members, they bring about new dynamical features that can be harnessed to solve difficult problems faster and more efficiently. This book describes the state of the art in spatially structured EAs by using graph concepts as a unifying theme. The models, their analysis, and their empirical behavior are presented in detail. Moreover, there is new material on non-standard networked population structures such as small-world networks.

The book should be of interest to advanced undergraduate and graduate students working in evolutionary computation, machine learning, and optimization. It should also be useful to researchers and professionals working in fields where the topological structures of populations and their evolution play a role.


58. Massively Parallel, Optical, and Neural Computing in Japan, (German National Research Center for Computer Scie)
 Paperback: 170 Pages (1992-01-01)
list price: US$92.00 -- used & new: US$92.00
(price subject to change: see help)
Asin: 9051990987
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
A survey of products and research projects in the field of highly parallel, optical, and neural computers in Japan. The research activities are listed by type of organization, e.g. universities and public research organizations, and by industry.


59. Applied Parallel Computing. Large Scale Scientific and Industrial Problems: 4th International Workshop, PARA'98, Umea, Sweden, June 14-17, 1998, Proceedings ... Notes in Computer Science) (v. 1541)
Paperback: 586 Pages (1998-12-28)
list price: US$97.00 -- used & new: US$39.00
(price subject to change: see help)
Asin: 3540654143
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
This book constitutes the carefully refereed proceedings of the 4th International Workshop on Applied Parallel Computing, PARA'98, held in Umea, Sweden, in June 1998. The 75 revised papers presented were carefully reviewed and selected for inclusion in the book. The papers address a variety of topics in large scale scientific and industrial-strength computing, in particular high-performance computing and networking; tools, languages, and environments for parallel processing; scientific visualization and virtual reality; and future directions in high-performance computing and communication.


60. Massively Parallel, Optical, and Neural Computing in the United States,
by Robert Moxley, Gilbert Kalb
 Paperback: 216 Pages (1992-01-01)
list price: US$104.00 -- used & new: US$104.00
(price subject to change: see help)
Asin: 9051990979
Canada | United Kingdom | Germany | France | Japan
Editorial Review

Product Description
A survey of products and research projects in the field of highly parallel, optical, and neural computers in the USA. It covers operating systems, language projects, and market analysis, as well as optical computing devices and optical connections of electronic parts.

