| Tag | Ind | Content |
|---|---|---|
| 000 | | 00856camuu22002534a 4500 |
| 001 | | 000045221434 |
| 005 | | 20060306132354 |
| 008 | | 010221s2001 gw a b 001 0 eng |
| 010 | | ▼a 2001020510 |
| 020 | | ▼a 3540416331 (alk. paper) |
| 035 | | ▼a (KERIS)REF000010424097 |
| 040 | | ▼a DLC ▼c DLC ▼d DLC ▼d 211009 |
| 042 | | ▼a pcc |
| 050 | 0 0 | ▼a Q370 ▼b .A76 2001 |
| 082 | 0 0 | ▼a 003/.54 ▼2 21 |
| 090 | | ▼a 003.54 ▼b A747i |
| 100 | 1 | ▼a Arndt, C. ▼q (Christoph), ▼d 1967-. |
| 245 | 1 0 | ▼a Information measures : ▼b information and its description in science and engineering / ▼c C. Arndt. |
| 260 | | ▼a Berlin ; ▼a New York : ▼b Springer, ▼c c2001. |
| 300 | | ▼a xix, 547 p. : ▼b ill. ; ▼c 24 cm. |
| 504 | | ▼a Includes bibliographical references (p. [519]-543) and index. |
| 650 | 0 | ▼a Information measurement. |
| 945 | | ▼a KINS |
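Each row above follows the MARC 21 pattern of a three-digit tag, optional indicators, and ▼-prefixed subfield codes. As a minimal illustration only (the `MarcField` class and its field names are a hypothetical sketch, not part of any cataloguing library), the 245 title field could be modelled in Python like this:

```python
from dataclasses import dataclass, field

@dataclass
class MarcField:
    """One variable field of a MARC 21 record: a three-digit tag,
    two one-character indicators, and (code, value) subfield pairs."""
    tag: str
    indicators: tuple = (" ", " ")
    subfields: list = field(default_factory=list)

# The 245 (title statement) field from the record above.
title = MarcField(
    tag="245",
    indicators=("1", "0"),
    subfields=[
        ("a", "Information measures :"),
        ("b", "information and its description in science and engineering /"),
        ("c", "C. Arndt."),
    ],
)

# Reassemble the display form used in the catalogue listing.
print(" ".join(value for _, value in title.subfields))
```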
Holdings Information
| No. | Location | Call Number | Registration No. | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Science Library / Sci-Info (2nd floor stacks) | 003.54 A747i | 121121305 (loaned 8 times) | Available for loan | | | |
Contents Information
Book Description
From the reviews: "Bioinformaticians are facing the challenge of how to handle immense amounts of raw data, […] and render them accessible to scientists working on a wide variety of problems. [This book] can be such a tool." IEEE Engineering in Medicine and Biology
This book is intended as an introduction to the mathematical description of information in science. The mathematical theory required for this introduction is treated in a more vivid way than in the usual theorem-proof structure. This approach enables us to develop an idea of the connections between different information measures and to understand the trains of thought in their derivation, which is a crucial point for correct applications. In the mathematical descriptions it is therefore our intention to develop the important ideas of the derivations, so that we obtain the resulting functions as well as the main thoughts and the conditions for the validity of the results. This simplifies the handling of the information measures, which are sometimes hard to classify without additional background information. Though the mathematical descriptions are the exact formulations of the measures examined, we do not restrict ourselves to rigorous mathematical considerations, but also place the different measures in the structure and context of possible information measures. Nevertheless, the mathematical approach is unavoidable when we are looking for an objective description and for possible applications in optimization.
This book is an introduction to the mathematical description of information in science and engineering. The necessary mathematical theory will be treated in a more vivid way than in the usual theorem-proof structure. This enables the reader to develop an idea of the connections between different information measures and to understand the trains of thought in their derivation. As there exists a great number of different possible ways to describe information, these measures are presented in a coherent manner. Some examples of the information measures examined are: Shannon information, applied in coding theory; the Akaike information criterion, used in system identification to determine auto-regressive models and in neural networks to identify the number of neurons; and the Cramér-Rao bound or Fisher information, describing the minimal variances achieved by unbiased estimators. This softcover edition addresses researchers and students in electrical engineering, particularly in control and communications, physics, and applied mathematics.
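As a small numerical sketch of the measures named above (not taken from the book; the function names below are illustrative), the following Python code computes Shannon entropy in bits, the Akaike information criterion from a fitted model's log-likelihood, and the Cramér-Rao bound on the variance of an unbiased estimator of a Gaussian mean:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p_i * log2(p_i)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln(L-hat)."""
    return 2 * n_params - 2 * log_likelihood

def cramer_rao_bound_mean(sigma, n):
    """Cramer-Rao bound for an unbiased estimator of the mean of n i.i.d.
    N(mu, sigma^2) samples: Fisher information is n / sigma^2, so the
    variance of any unbiased estimator is at least sigma^2 / n."""
    return sigma**2 / n

if __name__ == "__main__":
    print(shannon_entropy([0.5, 0.5]))              # fair coin: 1.0 bit per toss
    print(aic(log_likelihood=-120.0, n_params=3))   # 246.0
    print(cramer_rao_bound_mean(sigma=2.0, n=100))  # 0.04
```

For a fair coin the entropy is exactly one bit; the sample mean of 100 Gaussian observations with σ = 2 attains the bound σ²/n = 0.04.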
Table of Contents
Title
Information measures : information and its description in science and engineering
Author
C. Arndt
CONTENTS
Symbols, expressions and abbreviations = XVII
Abstract = 1
Structure and Structuring = 3
1 Introduction = 7
Science and information = 8
Man as control loop = 13
Information, complexity and typical sequences = 14
Concepts of information = 15
Information, its technical dimension and the meaning of a message = 16
Information as a central concept? = 18
2 Basic considerations = 23
2.1 Formal derivation of information = 23
2.1.1 Unit and reference scale = 28
2.1.2 Information and the unit element = 30
2.2 Application of the information measure (Shannon's information) = 31
2.2.1 Summary = 39
2.3 The law of Weber and Fechner = 42
2.4 Information of discrete random variables = 44
3 Historic development of information theory = 47
3.1 Development of information transmission = 47
3.1.1 Samuel F. B. Morse 1837 = 47
3.1.2 Thomas Edison 1874 = 47
3.1.3 Nyquist 1924 = 48
3.1.4 Optimal number of characters of the alphabet used for the coding = 49
3.2 Development of information functions = 51
3.2.1 Hartley 1928 = 51
3.2.2 Dennis Gabor 1946 = 52
3.2.3 Shannon 1948 = 53
3.2.3.1 Validity of the postulates for Shannon's Information = 57
3.2.3.2 Shannon's information (another possibility of a derivation) = 59
3.2.3.3 Properties of Shannon's information, entropy = 61
3.2.3.4 Shannon's entropy or Shannon's information? = 66
3.2.3.5 The Kraft inequality = 67
Kraft's inequality = 67
Proof of Kraft's inequality = 68
3.2.3.6 Limits of the optimal length of codewords = 75
3.2.3.6.1 Shannon's coding theorem = 75
3.2.3.6.2 A sequence of n symbols (elements) = 76
3.2.3.6.3 Application of the previous results = 79
3.2.3.7 Information and utility (coding, portfolio analysis) = 82
4 The concept of entropy in physics = 85
The laws of thermodynamics = 85
4.1 Macroscopic entropy = 86
4.1.1 Sadi Carnot 1824 = 86
4.1.2 Clausius's entropy 1850 = 86
4.1.3 Increase of entropy in a closed system = 87
4.1.4 Prigogine's entropy = 88
4.1.5 Entropy balance equation = 89
4.1.6 Gibbs's free energy and the quality of the energy = 90
4.1.7 Considerations on the macroscopic entropy = 91
4.1.7.1 Irreversible transformations = 92
4.1.7.2 Perpetuum mobile and transfer of heat = 93
4.2 Statistical entropy = 94
4.2.1 Boltzmann's entropy = 94
4.2.2 Derivation of Boltzmann's entropy = 95
4.2.2.1 Variation, permutation and the formula of Stirling = 95
4.2.2.2 Special case: Two states = 100
4.2.2.3 Example: Lottery = 101
4.2.3 The Boltzmann factor = 102
4.2.4 Maximum entropy in equilibrium = 106
4.2.5 Statistical interpretation of entropy = 112
4.2.6 Examples regarding statistical entropy = 113
4.2.6.1 Energy and fluctuation = 115
4.2.6.2 Quantized oscillator = 116
4.2.7 Brillouin-Schrödinger negentropy = 120
4.2.7.1 Brillouin : Precise definition of information = 121
4.2.7.2 Negentropy as a generalization of Carnot's principle = 124
Maxwell's demon = 125
4.2.8 Information measures of Hartley and Boltzmann = 126
4.2.8.1 Examples = 128
4.2.9 Shannon's entropy = 128
4.3 Dynamic entropy = 130
4.3.1 Eddington and the arrow of time = 130
4.3.2 Kolmogorov's entropy = 131
4.3.3 Rényi's entropy = 132
5 Extension of Shannon's information = 133
5.1 Rényi's Information 1960 = 133
5.1.1 Properties of Rényi's entropy = 137
5.1.2 Limits in the interval 0 ≤ α < ∞ = 140
5.1.3 Nonnegativity for discrete events = 143
5.1.4 Additivity and a connection to Minkowski's norm = 145
5.1.5 The meaning of (A) for α < 1 and α > 1 = 147
5.1.6 Graphical presentations of Rényi's information = 155
5.2 Another generalized entropy (logical expansion) = 156
5.3 Gain of information via conditional probabilities = 162
5.4 Other entropy or information measures = 173
5.4.1 Daroczy's entropy = 173
5.4.2 Quadratic entropy = 174
5.4.3 R-norm entropy = 176
6 Generalized entropy measures = 179
6.1 The corresponding measures of divergence = 189
6.2 Weighted entropies and expectation values of entropies = 193
7 Information functions and gaussian distributions = 197
7.1 Rényi's information of a gaussian distributed random variable = 197
7.1.1 Rényi's α-information = 198
7.1.2 Rényi's G-divergence = 200
7.2 Shannon's information = 204
8 Shannon's information of discrete probability distributions = 207
8.1 Continuous and discrete random variables = 209
8.1.1 Summary = 212
8.2 Shannon's information of a gaussian distribution = 213
8.3 Shannon's information as the possible gain of information in an observation = 217
8.4 Limits of the information, limitations of the resolution = 219
8.4.1 The resolution or the precision of the measurements = 219
8.4.2 The uncertainty relation of the Fourier transformation = 221
8.5 Maximization of the entropy of a continuous random variable = 222
9 Information functions for gaussian distributions part Ⅱ = 227
9.1 Kullback's information = 227
9.1.1 G₁ for gaussian distribution densities = 230
9.2 Kullback's divergence = 237
9.2.1 Jensen's inequality for G₁ = 238
9.3 Kolmogorov's information = 239
9.4 Transformation of the coordinate system and the effects on the information = 246
9.4.1 α-information = 247
9.4.2 G-divergence = 249
9.4.3 S-information = 250
9.4.3.1 Example = 251
9.4.4 Discrimination information = 252
9.4.5 Kolmogorov's information = 253
9.4.6 Prerequisites for the transformations = 255
9.5 Transformation, discrete and continuous measures of entropy = 255
9.6 Summary of the information functions = 257
10 Bounds of the variance = 261
10.1 Cramér-Rao bound = 262
10.1.1 Fisher's information for gaussian distribution densities = 265
10.1.2 Fisher's information and Kullback's information = 268
10.1.3 Fisher's information and the metric tensor = 273
10.1.4 Fisher's information and the stochastic observability = 274
10.1.4.1 Fisher's information and the Matrix-Riccati equation = 276
10.1.5 Fisher's information and maximum likelihood estimation = 279
10.1.6 Fisher's information and weighted least-squares estimation = 282
10.1.7 The availability of the Cramér-Rao bound = 284
10.1.8 Efficiency, asymptotic efficiency, consistency, bias = 287
10.1.8.1 Unbiased estimator = 287
10.1.8.2 Consistency = 287
10.1.8.3 Efficiency = 288
10.1.9 Summary = 289
10.2 Chapman-Robbins bound = 289
10.2.1 Cramér-Rao bound versus Chapman-Robbins bound = 293
10.3 Bhattacharyya bound = 294
Remark = 298
Remark = 302
10.3.1 Bhattacharyya bound and Cramér-Rao bound = 302
10.3.2 Bhattacharyya's bound for gaussian distribution densities = 304
10.4 Barankin bound = 307
10.5 Other bounds = 312
Fraser-Guttman bound = 313
Kiefer bound = 313
Extended Fraser-Guttman bound = 314
10.6 Summary = 315
10.7 Biased estimator = 316
10.7.1 Biased estimator versus unbiased estimator = 321
11 Ambiguity function = 327
11.1 The ambiguity function and Kullback's information = 332
11.2 Connection between ambiguity function and Fisher's information = 333
11.3 Maximum likelihood estimation and the ambiguity function = 334
11.3.1 Maximum likelihood estimation = minimum Kullback estimation = maximum ambiguity estimation = minimum variance estimation = 334
11.3.2 Maximum likelihood estimation = 336
11.3.2.1 Application: Discriminator (Demodulation) = 336
11.4 The ML estimation is asymptotically efficient = 340
11.5 Transition to the Akaike information criterion = 344
12 Akaike's information criterion = 347
12.1 Akaike's information criterion and regression = 347
12.1.1 Least-squares regression = 347
12.1.2 Application of the results to the ambiguity function = 351
12.2 BIC, SC or HQ = 356
13 Channel information = 363
13.1 Redundancy = 370
13.1.1 Knowledge, redundancy, utility = 374
13.2 Rate of transmission and equivocation = 375
13.3 Hadamard's inequality and Gibbs's second theorem = 377
13.4 Kolmogorov's information = 379
13.5 Kullback's divergence = 382
13.6 An example of a transmission = 387
13.7 Communication channel and information processing = 389
13.7.1 Semantic, syntactic and pragmatic information = 390
13.7.2 Information, first-time occurrence, confirmation = 391
13.8 Shannon's bound = 391
13.9 Example of the channel capacity = 396
14 'Deterministic' and stochastic information = 399
14.1 Information in state space models = 399
14.2 The observation equation = 401
14.3 Transmission faster than light = 412
14.4 Information about state space variables = 413
15 Maximum entropy estimation = 433
15.1 The difference between maximum entropy and minimum variance = 435
15.2 The difference from bootstrap or resampling methods = 436
15.3 A maximum entropy example = 437
15.4 Maximum entropy : The method = 438
15.4.1 Maximum Shannon entropy = 438
15.4.2 Minimum Kullback-Leibler distance = 442
15.5 Maximum entropy and minimum discrimination information = 446
15.6 Generation of generalized entropy measures = 449
15.6.1 Example: Gaussian distribution and Shannon's information = 452
16 Concluding remarks = 463
16.1 Information, entropy and self-organization = 463
16.2 Complexity theory = 466
16.3 Data reduction = 467
16.4 Cryptology = 468
16.5 Concluding considerations = 468
16.5.1 Information, entropy and probability = 470
16.6 Information = 472
Appendix = 475
A.1 Inequality for Kullback's information = 475
A.2 The log-sum inequality = 476
A. 3 Generalized entropy, divergence and distance measures = 481
A.3.1 Entropy measures = 481
A.3.2 Generalized measures of distance = 488
A.3.3 Generalized measures of the directed divergence = 490
A.3.4 Generalized measures of divergence = 493
A.3.4.1 Information radius and the J-divergence = 493
A.3.4.2 Generalization of the R-divergence = 494
A.3.4.3 Generalization of the J-divergence = 498
A.4 A short introduction to probability theory = 500
A.4.1 Axiomatic definition of probability = 501
A.4.1.1 Events, elementary events, sample space = 501
A.4.1.2 Classes of subsets, fields = 502
A.4.1.3 Axiomatic definition of probability according to Kolmogorov = 505
Probability space = 506
A.4.1.4 Random variables = 506
A.4.1.5 Probability distribution = 507
A.4.1.6 Probability space, sample space, realization space = 508
A.4.1.7 Probability distribution and distribution density function = 508
A.4.1.8 Probability distribution density function (PDF) = 510
A.5 The regularity conditions = 513
A.6 State space description = 516
Bibliography = 519
Index = 545
