
Detailed Information

Task scheduling in parallel and distributed systems (1 loan)

Material Type
Monograph
Personal Author(s)
EL-Rewini, Hesham. Lewis, T. G. (Theodore Gyle), 1941-. Ali, Hesham H.
Title / Statement of Responsibility
Task scheduling in parallel and distributed systems / Hesham El-Rewini, Theodore G. Lewis, and Hesham H. Ali.
Publication
Englewood Cliffs, N.J. : Prentice Hall, c1994.
Physical Description
xiv, 290 p. : ill. ; 24 cm.
Series
Prentice Hall series in innovative technology.
ISBN
0130992356
Bibliographic Note
Includes bibliographical references and index.
Subject Headings
Parallel processing (Electronic computers). Electronic data processing -- Distributed processing. Multitasking (Computer science).
000 01000camuuu200277 a 4500
001 000000917829
005 19990111140657.0
008 930915s1994 njua b 001 0 eng
010 ▼a 93034848
020 ▼a 0130992356
040 ▼a DLC ▼c DLC ▼d DLC ▼d 244002
049 0 ▼l 151006858
050 0 0 ▼a QA76.58 ▼b .E43 1994
082 0 0 ▼a 005.2 ▼2 20
090 ▼a 005.2 ▼b E48t
100 2 ▼a EL-Rewini, Hesham.
245 1 0 ▼a Task scheduling in parallel and distributed systems / ▼c Hesham El-Rewini, Theodore G. Lewis, and Hesham H. Ali.
260 ▼a Englewood Cliffs, N.J. : ▼b Prentice Hall, ▼c c1994.
300 ▼a xiv, 290 p. : ▼b ill. ; ▼c 24 cm.
440 0 ▼a Prentice Hall series in innovative technology.
504 ▼a Includes bibliographical references and index.
650 0 ▼a Parallel processing (Electronic computers).
650 0 ▼a Electronic data processing ▼x Distributed processing.
650 0 ▼a Multitasking (Computer science).
700 1 ▼a Lewis, T. G. ▼q (Theodore Gyle), ▼d 1941-.
700 1 ▼a Ali, Hesham H.

Holdings Information

No. | Location | Call Number | Registration No. | Status | Due Date
1 | Sejong Academic Information Center / Science & Technology Room (5th floor) | 005.2 E48t | 151006858 (1 loan) | Available for loan | -

Contents Information

Book Description

This reference addresses various resource allocation problems and surveys the most important scheduling techniques proposed over the last decade. It offers an explanation of the problems, methods of solution under a variety of conditions, and an evaluation of the quality of the solutions.


Information provided by: Aladin

Table of Contents


CONTENTS
Preface
Foreword = x
Introduction : A Road Map to Scheduling
1 Scheduling Parallel Tasks : A Classical Problem
 1.1 The Scheduling Problem
 1.2 Scheduling Taxonomy
 1.3 An Alternate Scheduling Taxonomy
  1.3.1 Single Application Versus Multiple Application System
  1.3.2 Nonpreemptive Versus Preemptive
  1.3.3 Nonadaptive Versus Adaptive
 1.4 Task-Scheduling Model
  1.4.1 Target Machine
  1.4.2 Parallel Program Tasks
  1.4.3 Execution and Communication Cost
  1.4.4 The Schedule
  1.4.5 Performance Measures
2 Optimal Scheduling Algorithms
 2.1 Program and Machine Models
 2.2 Scheduling Tree-Structured Task Graphs
 2.3 Scheduling Interval Ordered Tasks
 2.4 Arbitrary Task Graphs On Two Processors = 23
 2.5 The NP-Completeness of the Scheduling Problem = 25
3 The Relationship Between Matching and Two-Processor Scheduling = 29
 3.1 Matching = 29
 3.2 Definitions = 31
 3.3 The Matching-Scheduling Relationship = 31
 3.4 The Effect of Communication Delay = 34
4 Considering Communication Delay = 38
 4.1 Communication-Cost Models = 39
 4.2 Augmented Task Graphs = 43
 4.3 Scheduling Tree-Structured Task Graphs with Communication = 45
 4.4 Scheduling Interval Ordered Tasks with Communication = 48
  4.4.1 The Algorithm = 48
  4.4.2 The Optimality of SCHION = 50
5 List-Scheduling Heuristics = 56
 5.1 List Scheduling = 57
 5.2 Communication Issues = 59
  5.2.1 Parallelism Versus Communication Delay = 60
  5.2.2 Level Alteration = 62
 5.3 List-Heuristics With Communication = 63
  5.3.1 Program and Machine Models = 63
  5.3.2 Terminology = 64
 5.4 General List-Scheduling Heuristic = 64
 5.5 Insertion-Scheduling Heuristic (ICH) = 67
 5.6 Duplication-Scheduling Heuristic (DSH) = 71
 5.7 Experiment Results = 76
6 Advanced Scheduling Heuristics = 82
 6.1 Clustering Algorithms = 83
 6.2 General Description of Clustering Algorithms = 84
 6.3 Clustering Heuristic Algorithms = 87
  6.3.1 Kim and Brown's Algorithm = 87
  6.3.2 Sarkar's Algorithm = 89
  6.3.3 The Dominant-Sequence Clustering Algorithm = 91
  6.3.4 Wu and Gajski's Clustering Algorithm = 92
 6.4 The Mapping Heuristic (MH) = 95
 6.5 Routing Tables = 97
 6.6 Simulation Experiments = 101
7 Data Parallel Scheduling = 106
 7.1 Piece-Wise Scheduling = 107
 7.2 Models and Assumptions = 107
 7.3 Replicates = 109
  7.3.1 Programming Replicates = 112
 7.4 Divide-and-Conquer Patterns = 114
  7.4.1 Programming Trees = 118
 7.5 Nearest Neighbor Meshes = 121
  7.5.1 Programming Meshes = 125
 7.6 Composite Task Graphs = 126
8 Dynamic Task Scheduling = 131
 8.1 Nondeterministic Programs = 132
  8.1.1 Graphs with Cycles = 132
  8.1.2 Graphs with And/Or Relations = 135
  8.1.3 Graphs with Variable Size Nodes = 136
  8.1.4 Graphs with Variable Size Arcs = 137
 8.2 Approaches to Dynamic Scheduling = 139
 8.3 Unconstrained FIFO Scheduling = 140
 8.4 Balance-Constrained Scheduling = 141
 8.5 Cost-Constrained Scheduling = 144
 8.6 Hybrid Scheduling = 145
9 Static Scheduling for Conditional Branching = 150
 9.1 Modeling Conditional Branching = 151
 9.2 Reducing the Degree of Nondeterminism = 156
  9.2.1 Similarity in Nondeterministic Task Graphs = 156
  9.2.2 Reduced Task Graphs = 158
 9.3 The Multiphase Approach = 160
  9.3.1 Phase 1 : Generate = 162
  9.3.2 Phase 2 : Construct = 164
  9.3.3 Phase 3 : Schedule = 164
  9.3.4 Phase 4 : Merge = 165
10 Loop Scheduling on Shared-Memory Computers = 172
 10.1 Scheduling Loop Iterations = 173
 10.2 Loop Spreading = 176
  10.2.1 Independent Substatements = 176
  10.2.2 Dependent Substatements = 178
  10.2.3 Performance Analysis = 182
 10.3 Safe Self-Scheduling = 183
  10.3.1 Overview of SSS = 184
 10.4 The SSS Algorithm = 186
  10.4.1 Reducing the Total Number of Chores = 186
  10.4.2 Calculating |CSo| = 188
  10.4.3 Assigning Iterations to Processors at Compile-Time = 191
  10.4.4 The Scheduling Function = 191
  10.4.5 Calculating Chore Sizes For Subsequent Rounds = 193
 10.5 Extending SSS to General Cases = 194
 10.6 Experiment Results and Discussions = 196
11 Loop Scheduling on Distributed-Memory Computers = 200
 11.1 Task Graph Representation in Loops = 201
  11.1.1 Loop Task Graph = 202
 11.2 Loop Unrolling = 203
 11.3 Replicated Task Graphs = 205
 11.4 Scheduling Loop Task Graphs = 205
 11.5 Loop Unrolling Optimization = 209
  11.5.1 The Problem Formulation = 210
  11.5.2 The Objective Function = 211
 11.6 Optimal Loop Unrolling = 213
12 Task Partitioning and Grain Packing = 218
 12.1 What Is Grain Packing? = 219
 12.2 Grain Packing Through Scheduling = 219
 12.3 Heuristic for Grain Packing by Scheduling = 221
 12.4 Grain Packing by Graph Partitioning = 222
13 Scheduling CASE Tools = 229
 13.1 Design and Analysis Tools = 230
  13.1.1 Parallax - The Tool = 231
  13.1.2 Analysis of Some Parallel Applications = 237
 13.2 Code Generation Tools = 243
  13.2.1 Hypertool = 244
  13.2.2 PYRROS = 250
14 Task Allocation = 254
 14.1 Introduction = 255
 14.2 Optimal Algorithms = 256
  14.2.1 Optimal Task Assignment in Two-Processor Systems = 256
  14.2.2 Optimal Task Assignment in Linear Array Networks = 259
 14.3 Task Allocation Heuristics = 261
  14.3.1 An m-way Network Flow Algorithm = 263
  14.3.2 An Approximation Algorithm = 267
 14.4 The NP-Completeness of the Task Allocation Problem = 268
  14.4.1 Split Graphs = 268
  14.4.2 The Split Graph Model = 269
  14.4.3 Minimizing the Total Cost = 271
  14.4.4 The NP-Completeness Proof = 272
 14.5 Task Allocation and Clique Partitioning = 276
Index = 287

