Adaptive Deep Reinforcement Learning for Robust Control of Uncertain Dynamic Systems
DOI: https://doi.org/10.9999/ijair.v1i1.3

Keywords: uncertain systems; adaptive control; robust control; deep reinforcement learning; actor–critic

Abstract
Uncertainty is a defining feature of many control problems: parameters drift, actuators saturate or slow down, and disturbances do not follow a single tidy model. When the mismatch between a nominal model and the real plant grows, classical designs that work well in a narrow operating envelope can lose tracking quality or violate safety limits. This paper examines adaptive deep reinforcement learning for robust control of uncertain nonlinear systems. The control task is posed as a constrained continuous-control problem. We train an actor–critic policy over a family of randomized dynamics and augment the observation with lightweight identification cues extracted from short histories of states and inputs. At execution time, a small safety layer enforces hard command bounds. Across several uncertain benchmark systems, the resulting controller shows improved robustness to parameter drift and disturbance bursts, with lower violation rates than fixed-gain baselines. We also report sensitivity studies (randomization width, observation latency, and history length) and summarize engineering lessons that matter for deployment.
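The three ingredients named in the abstract — dynamics randomization, history-augmented observations, and a hard-bound safety layer — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the plant model, parameter ranges, history length `k`, and bound `u_max` are all assumptions chosen for clarity.

```python
import numpy as np

def randomized_plant(rng):
    """Sample one plant from a family of uncertain dynamics
    (a damped point mass; ranges are illustrative, not from the paper)."""
    mass = rng.uniform(0.8, 1.2)
    damping = rng.uniform(0.1, 0.5)
    def step(x, v, u, dt=0.05):
        a = (u - damping * v) / mass
        return x + v * dt, v + a * dt
    return step

def augment_observation(state, history, k=4):
    """Append the last k (x, v, u) triples as lightweight
    identification cues; zero-pad when the history is short."""
    pad = [(0.0, 0.0, 0.0)] * max(0, k - len(history))
    recent = (pad + list(history))[-k:]
    return np.concatenate([np.asarray(state, dtype=float),
                           np.asarray(recent, dtype=float).ravel()])

def safety_layer(u, u_max=1.0):
    """Enforce a hard command bound at execution time."""
    return float(np.clip(u, -u_max, u_max))

# Sketch of one rollout step under a randomly sampled plant.
rng = np.random.default_rng(0)
step = randomized_plant(rng)
x, v, history = 0.0, 0.0, []
u = safety_layer(3.0)            # an out-of-range command is clipped to 1.0
obs = augment_observation((x, v), history)  # what the policy would see
x, v = step(x, v, u)
history.append((x, v, u))
```

In a full training loop, `obs` would feed an actor–critic policy and a new plant would be sampled each episode; here the pieces are shown in isolation to make the data flow explicit.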
Published
Versions
- 2026-01-30 (2)
- 2026-01-30 (1)
License
This article is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.