Written by two of the pioneers in the field, this book contains a wealth of practical information unavailable anywhere else. The authors give a comprehensive presentation of the field of adaptive control, carefully blending theory and implementation to provide the reader with insight and understanding. Benefiting from the feedback of students and colleagues who have used the first edition, the material has been reorganized and rewritten, giving a more balanced and teachable presentation of fundamentals and applications.
1. What Is Adaptive Control?
Introduction. Linear Feedback. Effects of Process Variation. Adaptive Schemes. The Adaptive Control Problem. Applications. Conclusions. Problems. References.
2. Real-Time Parameter Estimation.
Introduction. Least Squares and Regression Models. Estimating Parameters in Dynamical Systems. Experimental Conditions. Simulation of Recursive Estimation. Prior Information. Conclusions. Problems. References.
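To give a flavor of the recursive least-squares estimation covered in Chapter 2, here is a minimal sketch (an illustrative example, not taken from the book; the function name, forgetting-factor parameter, and test system are all invented for this sketch):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    eps = y - (phi.T @ theta).item()        # prediction error
    theta = theta + K * eps                 # parameter update
    P = (P - K @ phi.T @ P) / lam           # covariance update
    return theta, P

# Identify y[k] = a*u[k] + b*u[k-1] with true (a, b) = (2.0, -1.0)
rng = np.random.default_rng(0)
theta = np.zeros((2, 1))        # initial parameter estimate
P = 1000.0 * np.eye(2)          # large P encodes weak prior information
u = rng.standard_normal(50)     # persistently exciting input
for k in range(1, 50):
    phi = np.array([u[k], u[k - 1]])
    y = 2.0 * u[k] - 1.0 * u[k - 1]
    theta, P = rls_update(theta, P, phi, y)
print(theta.ravel())  # estimates converge near [2.0, -1.0]
```

With a noise-free output and an exciting input, the estimate converges to the true parameters; the large initial covariance corresponds to little prior information, a point the chapter's sections on experimental conditions and prior information elaborate.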
3. Deterministic Self-Tuning Regulators.
Introduction. Pole Placement Design. Indirect Self-Tuning Regulators. Continuous-Time Self-Tuners. Direct Self-Tuning Regulators. Disturbances with Known Characteristics. Conclusions. Problems. References.
4. Stochastic and Predictive Self-Tuning Regulators.
Introduction. Design of Minimum-Variance and Moving-Average Controllers. Stochastic Self-Tuning Regulators. Unification of Direct Self-Tuning Regulators. Linear Quadratic STR. Adaptive Predictive Control. Conclusions. Problems. References.
5. Model-Reference Adaptive Systems.
Introduction. The MIT Rule. Determination of the Adaptation Gain. Lyapunov Theory. Design of MRAS Using Lyapunov Theory. Bounded-Input, Bounded-Output Stability. Applications to Adaptive Control. Output Feedback. Relations between MRAS and STR. Nonlinear Systems. Conclusions. Problems. References.
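Chapter 5's MIT rule adjusts a controller parameter along the negative gradient of the squared model error, dθ/dt = -γ e (∂e/∂θ). For the classic feedforward-gain example the sensitivity is proportional to the model output, giving dθ/dt = -γ e y_m. A minimal Euler-integrated sketch (illustrative only; the static plant and all numeric values are assumptions, not from the book):

```python
# MIT-rule adaptation of a feedforward gain for a static plant y = k*u.
# Reference model: y_m = k0*u_c; control law: u = theta*u_c.
k, k0 = 2.0, 1.0        # unknown plant gain, reference-model gain
gamma, dt = 0.5, 0.01   # adaptation gain, Euler integration step
theta = 0.0
for step in range(5000):
    u_c = 1.0                       # constant command signal
    y_m = k0 * u_c                  # model output
    y = k * theta * u_c             # plant output under current theta
    e = y - y_m                     # model-following error
    theta -= gamma * e * y_m * dt   # MIT rule: dtheta/dt = -gamma*e*y_m
print(theta)  # approaches the ideal gain k0/k = 0.5
```

The closed adaptation loop here is a stable first-order system with time constant 1/(γk), which illustrates why the achievable adaptation rate depends on the unknown plant gain, one motivation for the Lyapunov-based redesigns later in the chapter.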
6. Properties of Adaptive Systems.
Introduction. Nonlinear Dynamics. Adaptation of a Feedforward Gain. Analysis of Indirect Discrete-Time Self-Tuners. Stability of Direct Discrete-Time Algorithms. Averaging. Application of Averaging Techniques. Averaging in Stochastic Systems. Robust Adaptive Controllers. Conclusions. Problems. References.
7. Stochastic Adaptive Control.
Introduction. Multistep Decision Problems. The Stochastic Adaptive Problem. Dual Control. Suboptimal Strategies. Examples. Conclusions. Problems. References.
8. Auto-Tuning.
Introduction. PID Control. Auto-Tuning Techniques. Transient Response Methods. Methods Based on Relay Feedback. Relay Oscillations. Conclusions. Problems. References.
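The relay-feedback method of Chapter 8 drives the loop into a limit cycle with a relay, reads off the oscillation amplitude a and period Tu, and estimates the ultimate gain from the describing-function approximation Ku = 4d/(πa). A short sketch (the function name and the Ziegler-Nichols tuning constants used here are illustrative assumptions, not the book's prescription):

```python
import math

def relay_autotune(d, a, Tu):
    """Estimate the ultimate gain from a relay experiment, then apply
    classic Ziegler-Nichols PID rules.
    d  -- relay amplitude
    a  -- measured amplitude of the process oscillation
    Tu -- measured oscillation period (the ultimate period)"""
    Ku = 4.0 * d / (math.pi * a)          # describing-function estimate
    Kp, Ti, Td = 0.6 * Ku, 0.5 * Tu, 0.125 * Tu  # Ziegler-Nichols PID
    return Kp, Ti, Td

# Relay amplitude 1.0, oscillation amplitude 0.5, period 8 s (made-up data)
Kp, Ti, Td = relay_autotune(d=1.0, a=0.5, Tu=8.0)
print(Kp, Ti, Td)
```

The appeal of the method, developed in the chapter, is that the experiment is automatic and operates near the frequency that matters for stability, without requiring the operator to push the loop to the verge of instability as the original Ziegler-Nichols ultimate-sensitivity experiment does.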
9. Gain Scheduling.
Introduction. The Principle. Design of Gain-Scheduling Controllers. Nonlinear Transformations. Applications of Gain Scheduling. Conclusions. Problems. References.
10. Robust and Self-Oscillating Systems.
Why Not Adaptive Control? Robust High-Gain Feedback Control. Self-Oscillating Adaptive Systems. Variable-Structure Systems. Conclusions. Problems. References.
11. Practical Issues and Implementation.
Introduction. Controller Implementation. Controller Design. Solving the Diophantine Equation. Estimator Implementation. Square Root Algorithms. Interaction of Estimation and Control. Prototype Algorithms. Operational Issues. Conclusions. Problems. References.
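The Diophantine (Bezout) equation AR + BS = Ac that appears in Chapter 11 (and in the pole-placement designs of Chapter 3) reduces to a linear system in the controller coefficients via a Sylvester matrix. A minimal numerical sketch (illustrative; the helper names, degree choices, and example polynomials are assumptions for this sketch, not the book's algorithm):

```python
import numpy as np

def conv_matrix(p, ncols, nrows):
    """Toeplitz matrix M such that M @ x equals the polynomial product
    p * x (coefficients highest power first), aligned at the low-degree end."""
    M = np.zeros((nrows, ncols))
    offset = nrows - (len(p) + ncols - 1)
    for j in range(ncols):
        for i, c in enumerate(p):
            M[offset + i + j, j] = c
    return M

def solve_diophantine(A, B, Ac):
    """Solve A*R + B*S = Ac with deg R = deg S = deg A - 1.
    Polynomials are coefficient arrays, highest power first;
    A and B must be coprime for the Sylvester matrix to be invertible."""
    n = len(A) - 1
    rows = 2 * n                     # deg Ac = 2n - 1 -> 2n coefficients
    M = np.hstack([conv_matrix(A, n, rows), conv_matrix(B, n, rows)])
    x = np.linalg.solve(M, np.asarray(Ac, float))
    return x[:n], x[n:]              # coefficients of R and of S

# A(q) = q^2 - 1.5q + 0.7, B(q) = q + 0.5, desired Ac(q) = q^3 - q^2 + 0.5q
A, B, Ac = [1, -1.5, 0.7], [1.0, 0.5], [1, -1.0, 0.5, 0.0]
R, S = solve_diophantine(A, B, Ac)
# Check: A*R + B*S should reproduce the coefficients of Ac
print(np.convolve(A, R) + np.concatenate(([0.0], np.convolve(B, S))))
```

When A and B have a near-common factor the Sylvester matrix becomes ill-conditioned, which is one reason the chapter discusses numerically careful solvers and square-root algorithms for the estimator.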
12. Commercial Products and Applications.
Introduction. Status of Applications. Industrial Adaptive Controllers. Some Industrial Adaptive Controllers. Process Control. Automobile Control. Ship Steering. Ultrafiltration. Conclusions. Problems. References.
13. Perspectives on Adaptive Control.
Introduction. Adaptive Signal Processing. Extremum Control. Expert Control Systems. Learning Systems. Future Trends. Conclusions. References.