Comparative Study of Training Methods and Architectures of Echo State Networks
Abstract
This paper examines echo state networks (ESNs), one of the most prevalent approaches to implementing reservoir computing. An ESN consists of a recurrent neural network with fixed (untrained) weights and a readout layer that is typically linear and trainable. This approach enables the creation of energy-efficient and computationally efficient neural networks capable of real-time learning. However, since ESN weights are not trained, their selection constitutes a separate challenge that requires careful analysis. The present paper provides a comparative analysis and review of various ESN architectures and readout layer training methods. This analysis is based on practical implementation experience and on theoretical foundations, including studies of how reservoir dynamics depend on the topology of the connectivity graph and the spectrum of the connectivity matrix. To examine the reservoir structure, tools such as condensation of the connectivity graph and linearization of the dynamics are utilized, along with a novel concept termed graph memory. In addition to well-established ESN architectures, the review covers models that are less common or have not previously been applied in the context of reservoir computing, such as reaction-diffusion systems, a single neuron with delay, FORCE learning, and neural fields. The methods are evaluated through comprehensive experiments on the chaotic Mackey-Glass time series prediction task. This paper not only serves as a practical guide for selecting ESN architectures and readout layer training methods but also identifies promising directions for future research.
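The ESN setup described above (a fixed random recurrent reservoir with a trainable linear readout) can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the reservoir size, spectral radius, ridge parameter, and the toy sine-wave prediction task are all assumptions chosen for demonstration.

```python
import numpy as np

# Minimal echo state network sketch (illustrative assumptions throughout).
# Reservoir weights are drawn at random and never trained; only the linear
# readout is fitted, here via ridge regression on one-step-ahead prediction.
rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # rescale spectral radius to 0.9

u = np.sin(0.2 * np.arange(1000))[:, None]     # toy input series (assumption)
x = np.zeros(n_res)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + W_in @ u[t])           # reservoir state update
    states.append(x.copy())
X = np.array(states)

washout = 100                                  # discard the initial transient
target = u[washout + 1:, 0]                    # one-step-ahead targets
X_tr = X[washout:-1]

# Ridge-regression readout: W_out = (X^T X + lambda*I)^{-1} X^T y
lam = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_res), X_tr.T @ target)
pred = X_tr @ W_out
print("train MSE:", float(np.mean((pred - target) ** 2)))
```

Scaling the spectral radius of the recurrent matrix below one is a common heuristic for obtaining the echo state property; the paper's analysis of the connectivity-matrix spectrum concerns exactly this kind of design choice.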
Edition
Proceedings of the Institute for System Programming, vol. 38, issue 3, part 1, 2026, pp. 87-114
ISSN 2220-6426 (Online), ISSN 2079-8156 (Print).
DOI: 10.15514/ISPRAS-2026-38(3)-5
Full text of the paper in pdf (in Russian)