Multi-Temporal Predictive Coding for Robotic Arm Control
Jimoh, Azeez (2025-07-16)
This publication is subject to copyright. The work may be read and printed for personal use. Commercial use is prohibited.
Open access
The permanent address of this publication is:
https://urn.fi/URN:NBN:fi-fe2025072979943
Abstract
Predictive coding is a theoretical framework inspired by the brain’s mechanisms for processing information. It holds significant potential for enhancing robotic control systems, particularly in dynamic and uncertain environments. Traditional control methods often rely on reactive error correction, which can lead to inefficiencies and degraded performance under changing conditions. In contrast, predictive coding manages these corrections proactively by minimizing prediction errors, resulting in more stable and resilient robotic behavior. This study develops and evaluates a predictive coding framework for the Franka Emika Panda robotic arm, enabling adaptive and robust performance in response to environmental changes. The proposed approach continuously updates the system’s internal model by comparing predicted and actual sensory inputs in real time.
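The core mechanism described above, updating an internal model so that its predictions match incoming sensory data, can be sketched as a toy gradient step that shrinks the prediction error. The linear generative model `W`, the learning rate, and the dimensions below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

# A fixed, well-conditioned linear generative model (illustrative values only).
W = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 1.0],
              [0.2, 0.1, 0.1]])

def predictive_coding_step(mu, observation, W, lr=0.2):
    """One predictive-coding update: the internal state estimate `mu`
    is nudged in the direction that shrinks the error between the
    predicted and the actual sensory input (prediction = W @ mu)."""
    error = observation - W @ mu          # prediction error
    mu = mu + lr * (W.T @ error)          # gradient step on the squared error
    return mu, error

# Toy run: starting from a blank internal state, the estimate converges
# until the model's prediction matches the observation.
hidden = np.array([1.0, -0.5, 2.0])       # hidden cause of the observation
observation = W @ hidden
mu = np.zeros(3)
for _ in range(300):
    mu, error = predictive_coding_step(mu, observation, W)
```

In a full control loop this update would run continuously, with `observation` replaced by streaming sensory input from the robot.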
In this thesis, a hierarchical predictive model for robotic arm behavior is introduced, operating across multiple temporal scales (e.g., 0.2s, 0.3s, and 0.5s horizons) to anticipate future states and adapt motion accordingly. Complementing this structure is a vision-in-the-loop system, which incorporates real-time camera feedback not only for perception but also for predicting how the target will appear in future views. This joint prediction of both physical movement and visual perception is combined with an error minimization mechanism to ensure smooth and robust control, particularly during repetitive tasks and in environments with inherent uncertainty. By continuously minimizing discrepancies between predicted and actual camera views, the system refines its motion strategy to maintain optimal target visibility and centering when possible, while adaptively avoiding obstacles based on system confidence estimates derived from the error minimization mechanism.
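The multi-timescale structure and confidence-weighted error minimization described above can be sketched as follows. The constant-velocity predictor and the `1/(1 + error)` confidence rule are hypothetical stand-ins chosen for illustration; only the horizon values come from the abstract:

```python
import numpy as np

HORIZONS = [0.2, 0.3, 0.5]  # prediction horizons in seconds (from the abstract)

def predict_state(state, velocity, horizon):
    """Hypothetical constant-velocity predictor for one temporal scale."""
    return state + velocity * horizon

def fuse_corrections(state, velocity, observed_states):
    """Blend the correction from each temporal scale, weighting each
    scale by a confidence derived from its own prediction error:
    a larger error yields a lower confidence, echoing the abstract's
    confidence-based adaptation."""
    corrections, weights = [], []
    for horizon, observed in zip(HORIZONS, observed_states):
        predicted = predict_state(state, velocity, horizon)
        error = observed - predicted
        confidence = 1.0 / (1.0 + np.linalg.norm(error))
        corrections.append(confidence * error)
        weights.append(confidence)
    # Confidence-weighted average of the per-scale corrections.
    return state + sum(corrections) / sum(weights)

# When every scale predicts perfectly, the fused correction is zero.
state = np.array([0.0, 0.0])
velocity = np.array([1.0, 0.0])
perfect = [predict_state(state, velocity, h) for h in HORIZONS]
fused = fuse_corrections(state, velocity, perfect)
```

A scale whose predictions consistently fail would thus contribute little to the fused correction, which is one simple way to realize the confidence estimates the abstract derives from the error minimization mechanism.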
Simulation-based experiments demonstrate that the framework improves control accuracy and robustness, especially in scenarios with high uncertainty and dynamic changes. These findings highlight the potential of predictive coding to advance adaptive robotic control, with future work aimed at real-world deployment and broader generalization.