
AI-Driven Strategic Insights: Predicting Competitive RTS Game Outcomes

Nagajayant Nagamani

Abstract


This study investigates the application of machine learning techniques to analyzing and predicting match outcomes in real-time strategy (RTS) games, with a particular focus on StarCraft II as a representative competitive platform. Leveraging large-scale historical gameplay data, the research explores both supervised and unsupervised learning methods to identify the player behaviors, map dynamics, and strategic decision-making patterns that most strongly influence match outcomes. The methodology integrates predictive modeling with domain-specific feature engineering to extract and interpret critical factors such as unit composition, actions per minute (APM), resource management, race selection, and map characteristics. These factors are shown to have a significant impact on win probability, especially as matches progress through successive phases. The core contribution is a machine learning-based Win Prediction Engine (WPE) that produces real-time win probability estimates during live gameplay, providing tactical and strategic insights for both human players and AI agents. Beyond improving the precision of predictive analytics in eSports, the framework has broader implications for autonomous agent training, military simulation environments, and real-time decision-support systems. The research highlights the interplay between macro- and micro-level strategies, showing that ensemble models such as Gradient Boosted Trees outperform linear models in forecasting match outcomes with high accuracy, particularly beyond the early-game phase. Overall, the findings point toward scalable, interpretable AI-driven game intelligence systems and contribute to the ongoing evolution of data-centric strategic analysis in complex, dynamic environments.
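To make the approach concrete, the following Python sketch shows the kind of win-prediction engine the abstract describes: Gradient Boosted Trees trained on engineered per-snapshot features and queried for a live win probability. The feature set, the synthetic data, and the model settings are illustrative assumptions for exposition, not the study's actual pipeline.

# A minimal sketch of a Win Prediction Engine of the kind described
# above. Feature names, synthetic data, and hyperparameters are
# illustrative assumptions, not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-snapshot features: APM, unspent minerals,
# army supply lead over the opponent, elapsed game time, race code.
X = np.column_stack([
    rng.normal(180, 40, n),    # actions per minute
    rng.normal(800, 300, n),   # unspent minerals
    rng.normal(0, 20, n),      # army supply lead
    rng.uniform(0, 1200, n),   # elapsed game time (seconds)
    rng.integers(0, 3, n),     # race: 0=Terran, 1=Zerg, 2=Protoss
])
# Synthetic label: a supply lead (and, weakly, APM) tilts win odds.
p_win = 1 / (1 + np.exp(-(0.08 * X[:, 2] + 0.004 * (X[:, 0] - 180))))
y = rng.random(n) < p_win

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Gradient Boosted Trees, the ensemble family the abstract reports
# outperforming linear baselines on this task.
wpe = GradientBoostingClassifier(n_estimators=200, max_depth=3)
wpe.fit(X_tr, y_tr)

# Real-time use: feed the current game-state snapshot and read off a
# win probability rather than a hard class label.
snapshot = [[220, 450, 12, 600, 0]]
print("P(win) =", wpe.predict_proba(snapshot)[0, 1])
print("held-out AUC =", roc_auc_score(y_te, wpe.predict_proba(X_te)[:, 1]))

In a live setting the same model would be queried on successive snapshots of the game state, so the resulting probability trace naturally reflects the phase-dependent accuracy the abstract reports.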




