European Journal of Emerging Artificial Intelligence (ejeai)

Open Access, Peer Reviewed & International Journal
Publication Frequency: 2 issues per year

ARTICLE

An Empirical Framework for Evaluating Reinforcement Learning in Automated Optimization Systems

1 Department of Computer Engineering, North Cascadia Institute of Technology, Seattle, USA

https://doi.org/10.64917/


Abstract

The integration of Reinforcement Learning (RL) into automation represents a paradigm shift in solving complex optimization problems across various industries. While RL has demonstrated significant potential, its practical application is often hampered by a lack of standardized evaluation frameworks, making it difficult for practitioners to select appropriate algorithms for specific tasks. This study introduces and executes a comprehensive empirical investigation to systematically evaluate the performance of leading RL algorithms across a diverse set of simulated automation environments. We designed three high-fidelity simulation suites mimicking critical optimization tasks in manufacturing (production scheduling, inventory management), energy systems (microgrid management, HVAC control), and robotics (motion planning, multi-robot coordination). Within these environments, we benchmarked a portfolio of algorithms, including Deep Q-Networks (DQN), Proximal Policy Optimization (PPO), Soft Actor-Critic (SAC), and Multi-Agent Deep Deterministic Policy Gradient (MADDPG), against key performance indicators: task efficiency, sample complexity, scalability, and robustness to environmental stochasticity. Our results reveal a nuanced performance landscape in which no single algorithm dominates across all domains. For instance, while PPO demonstrated superior stability and performance in the continuous control tasks prevalent in robotics and HVAC systems, DQN-based variants excelled in the discrete action spaces typical of scheduling and inventory problems. Multi-agent algorithms showed substantial efficiency gains in cooperative tasks but suffered from higher training complexity. The findings underscore a critical trade-off between algorithm complexity, sample efficiency, and task-specific performance.
This research provides a foundational empirical baseline, offering actionable insights for deploying RL in real-world automation and highlighting critical areas for future research, particularly in enhancing transfer learning, safety, and interpretability to bridge the persistent gap between simulation and practical deployment.
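To make the abstract's evaluation protocol concrete, the sketch below is a minimal, illustrative benchmarking harness: a tabular Q-learning agent (a small-scale stand-in for the DQN variants the study benchmarks on discrete action spaces) is trained on a toy discrete environment while the harness records task efficiency (mean return over the final episodes) and a sample-complexity proxy (episodes until first success). The `GridWalk` environment, class names, and hyperparameters are all hypothetical simplifications, not the paper's actual simulation suites.

```python
import random

class GridWalk:
    """Toy 1-D corridor: start at position 0, reach `goal` for reward +1.
    A hypothetical stand-in for the discrete scheduling/inventory tasks."""
    def __init__(self, goal=5, max_steps=20):
        self.goal, self.max_steps = goal, max_steps
    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos
    def step(self, action):  # action: 0 = move left, 1 = move right
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        self.t += 1
        done = self.pos == self.goal or self.t >= self.max_steps
        reward = 1.0 if self.pos == self.goal else 0.0
        return self.pos, reward, done

class QAgent:
    """Tabular epsilon-greedy Q-learning with random tie-breaking."""
    def __init__(self, n_states, n_actions, eps=0.1, alpha=0.5, gamma=0.95):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma
    def act(self, s):
        if random.random() < self.eps:
            return random.randrange(len(self.q[s]))
        best = max(self.q[s])
        return random.choice([a for a, v in enumerate(self.q[s]) if v == best])
    def learn(self, s, a, r, s2, done):
        target = r + (0.0 if done else self.gamma * max(self.q[s2]))
        self.q[s][a] += self.alpha * (target - self.q[s][a])

def evaluate(agent, env, episodes=200):
    """Return (mean return over last 50 episodes, episodes to first success)."""
    returns, first_success = [], None
    for ep in range(episodes):
        s, total, done = env.reset(), 0.0, False
        while not done:
            a = agent.act(s)
            s2, r, done = env.step(a)
            agent.learn(s, a, r, s2, done)
            s, total = s2, total + r
        returns.append(total)
        if first_success is None and total > 0:
            first_success = ep
    return sum(returns[-50:]) / 50, first_success

random.seed(0)
env = GridWalk()
efficiency, sample_cost = evaluate(QAgent(n_states=env.goal + 1, n_actions=2), env)
print(f"task efficiency ~ {efficiency:.2f}, episodes to first success = {sample_cost}")
```

The same `evaluate` interface could be reused across agents and environments, which is the structural idea behind benchmarking a portfolio of algorithms against shared key performance indicators.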


Keywords

Reinforcement Learning, Automation, Optimization

References

[1] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.

[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.

[3] C. Li, P. Zheng, Y. Yin, B. Wang, and L. Wang, “Deep reinforcement learning in smart manufacturing: A review and prospects,” CIRP Journal of Manufacturing Science and Technology, vol. 40, pp. 75–101, 2023.

[4] A. Perera and P. Kamalaruban, “Applications of reinforcement learning in energy systems,” Renewable and Sustainable Energy Reviews, vol. 137, p. 110618, 2021.

[5] J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: A survey,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1238–1274, 2013.


How to Cite

An Empirical Framework for Evaluating Reinforcement Learning in Automated Optimization Systems. (2025). European Journal of Emerging Artificial Intelligence, 2(01), 21-33. https://doi.org/10.64917/
