| Tag | Ind. | Content |
|---|---|---|
| 000 | | 00849camuuu200253 a 4500 |
| 001 | | 000000919940 |
| 005 | | 19990119114745.0 |
| 008 | | 900706s1990 maua b 00110 eng |
| 020 | | ▼a 026213263X |
| 040 | | ▼a DLC ▼c DLC ▼d DLC ▼d 244002 |
| 049 | 0 | ▼l 151024995 |
| 050 | 0 0 | ▼a TA1635 ▼b .M87 1990 |
| 082 | 0 0 | ▼a 006.3/7 ▼2 20 |
| 090 | | ▼a 006.37 ▼b M981e |
| 100 | 1 | ▼a Murray, David W. |
| 245 | 1 0 | ▼a Experiments in the machine interpretation of visual motion / ▼c David W. Murray and Bernard F. Buxton. |
| 260 | | ▼a Cambridge, Mass. : ▼b MIT Press, ▼c 1990. |
| 300 | | ▼a 236 p. : ▼b ill. ; ▼c 24 cm. |
| 490 | 1 | ▼a Artificial intelligence. |
| 504 | | ▼a Includes bibliographical references (p. 215-229) and index. |
| 650 | 0 | ▼a Computer vision. |
| 650 | 0 | ▼a Motion perception (Vision). |
| 700 | 1 | ▼a Buxton, Bernard F. |
| 830 | 0 | ▼a Artificial intelligence (Cambridge, Mass.) |
Holdings Information

| No. | Location | Call Number | Registration No. | Status | Due Date | Reservation | Service |
|---|---|---|---|---|---|---|---|
| 1 | Sejong Academic Information Center / Science & Technology Room (5F) | 006.37 M981e | 151024995 (loaned 2 times) | Available | | | |
Contents Information

Book Description
If robots are to act intelligently in everyday environments, they must have a perception of motion and its consequences. This book describes experimental advances made in the interpretation of visual motion over the last few years that have moved researchers closer to emulating the way in which we recover information about the surrounding world. It describes algorithms that form a complete, implemented, and tested system developed by the authors to measure two-dimensional motion in an image sequence, then to compute three-dimensional structure and motion, and finally to recognize the moving objects.
The authors develop algorithms to interpret visual motion around four principal constraints. The first and simplest allows scene structure to be recovered on a pointwise basis. The second constrains the scene to a set of connected straight edges. The third makes the transition between edge and surface representations by demanding that the recovered wireframe be strictly polyhedral. The final constraint assumes that the scene consists of planar surfaces, and recovers them directly.
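The simplest of these stages, pointwise structure recovery, can be illustrated with a minimal sketch. This is not the authors' algorithm (they work from edgel matches and add the further constraints described above); it only shows the underlying geometry, assuming a camera undergoing a known pure translation, unit focal length, and exact flow measurements. The function name `pointwise_depth` is hypothetical.

```python
import numpy as np

# Illustrative sketch, not the book's algorithm. Under a known pure camera
# translation T = (Tx, Ty, Tz) and unit focal length, the image motion of a
# scene point seen at (x, y) with depth Z is
#     u = (x*Tz - Tx) / Z,    v = (y*Tz - Ty) / Z,
# so each measured flow vector (u, v) constrains Z by two linear equations.

def pointwise_depth(xy, flow, T):
    """Recover per-point depth Z from flow vectors and a known translation T.

    xy   : (N, 2) image coordinates
    flow : (N, 2) measured image velocities (u, v)
    T    : (3,) camera translation (Tx, Ty, Tz)
    """
    x, y = xy[:, 0], xy[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    Tx, Ty, Tz = T
    a = x * Tz - Tx          # constraint u * Z = a
    b = y * Tz - Ty          # constraint v * Z = b
    # Least-squares solution for Z from the two constraints per point.
    return (u * a + v * b) / (u ** 2 + v ** 2)
```

With noise-free flow this inverts the projection exactly; the book's later chapters address what the sketch ignores, namely measuring flow reliably, unknown motion, and segmenting independently moving objects.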
David W. Murray is University Lecturer in Engineering Science at the University of Oxford and Draper's Fellow in Robotics at St Anne's College, Oxford. Bernard F. Buxton is Senior Research Fellow at the General Electric Company's Hirst Research Centre, Wembley, UK, where he leads the Computer Vision Group in the Long Range Research Laboratory.
Contents: Image, Scene, and Motion. Computing Image Motion. Structure from Motion of Points. The Structure and Motion of Edges. From Edges to Surfaces. Structure and Motion of Planes. Visual Motion Segmentation. Matching to Edge Models. Matching to Planar Surfaces.
Table of Contents
CONTENTS
List of Figures
List of Tables
Preface
1 Image, Scene and Motion = 1
 1.1 Exegesis = 3
 1.2 Scene and image motion = 6
 1.3 Paradigms for computing visual motion = 9
 1.4 Visual versus projected motion = 14
 1.5 Remarks = 20
2 Computing Image Motion = 23
 2.1 Edgels as weak tokens = 23
 2.2 An edgel matching algorithm = 25
 2.3 The algorithm in detail = 26
 2.4 Computer experiments = 32
3 Structure from Motion of Points = 47
 3.1 The aperture problem = 47
 3.2 3D constraints on the aperture problem = 48
 3.3 Point structure from known motion = 51
 3.4 Pointwise depths = 51
 3.5 Straight edges with known motion = 55
 3.6 Remarks = 58
4 The Structure and Motion of Edges = 61
 4.1 Motion segmentation using edges = 61
 4.2 The structure from motion algorithm = 68
 4.3 Computer experiments = 70
 4.4 How many edges? = 83
5 From Edges to Surfaces = 85
 5.1 Imposing polyhedral constraints = 85
 5.2 The polyhedral motion algorithm = 92
 5.3 Computer experiments = 94
 5.4 Remarks = 101
6 Structure and Motion of Planes = 103
 6.1 Planar scenes = 103
 6.2 Recovering planar structure and motion = 104
 6.3 Planar facets with known motion = 112
 6.4 3D reconstructions: computer experiments = 112
 6.5 Failures of the planar facet algorithms = 117
 6.6 Reconstructing full visual motion = 121
7 Visual Motion Segmentation = 129
 7.1 Global segmentation = 130
 7.2 Local segmentation = 144
 7.3 Remarks = 148
8 Matching to Edge Models = 151
 8.1 Model and data specification = 152
 8.2 Matching in overview = 154
 8.3 The constraints in detail = 157
 8.4 Sign management within search = 165
 8.5 Location stage and testing global validity = 167
 8.6 Computer experiments = 169
 8.7 Remarks = 176
 8.8 Appendix = 177
9 Matching to Planar Surfaces = 181
 9.1 The matching constraints = 181
 9.2 Location stage = 185
 9.3 An experimental example = 185
10 Commentary = 189
 10.1 Sensing, perception and action = 189
 10.2 Representation = 192
 10.3 Computing motion and depth = 195
 10.4 Object recognition and location = 198
 10.5 What next? = 201
 10.6 Perception begets action = 201
 10.7 Dynamic vision = 202
 10.8 Reactive vision = 204
 10.9 Vision and control = 207
 10.10 Recognition = 209
 10.11 Shape and motion = 213
References = 215
Copyright acknowledgements = 231
Index = 233
