Neuromorphic Memristive VLSI Architectures for Cognition (NMVAC)
Final Report Abstract
Neuromorphic computing aims to emulate the efficiency and adaptability of biological information processing by implementing the computational principles of cognitive functions in customized hardware, making it possible to overcome existing limitations of machine learning systems. The goal is to realize cognitive systems that offer generalizable capabilities while requiring minimal computing resources. Technically, the project's approach is based on the development of analogue (sub-threshold) VLSI circuits in combination with memristive devices. In recent years, it has been shown that the non-volatile memory function of memristive devices allows matrix-vector multiplications to be parallelized in hardware, enabling enormous savings in the energy consumption and computing time of AI systems. However, the variability of memristive devices, which stems from their inherent stochasticity, limits reliability and accuracy and makes it difficult to develop more complex neuromorphic systems for real-world applications. The aim of this project was therefore to harness the inherent stochasticity of memristive devices for learning processes, in order to realize robust, reliable and scalable neuromorphic systems. The project addressed these challenges through three approaches (illustrated by the simplified sketches below):

1. Development of learning models that take the inherent stochasticity of memristive devices into account and enable the implementation of cognitive learning functions within RRAM structures.

2. Development of spiking neural networks that allow memristive devices with different characteristics to be integrated within a CMOS circuit.

3. Combination of vector-symbolic architectures (VSAs) with attractor networks: VSAs distribute information evenly across all neurons of the network, while attractor networks provide emergent stability and an autoassociative memory function.

The approach developed here is particularly attractive for neuromorphic hardware, where scale and parallelism are available but the reliability of individual devices is not always given.
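To make the parallelization claim concrete, the following minimal sketch (plain NumPy; an idealized crossbar model, not the project's circuits) shows how a conductance matrix performs a matrix-vector multiplication in a single analogue step, and how stochastic programming variability perturbs the result. The differential pair G_pos/G_neg, the lognormal noise model and the parameter sigma are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

W = rng.uniform(-1.0, 1.0, size=(64, 128))   # target weight matrix
G_pos = np.clip(W, 0.0, None)                # differential pair: positive conductances
G_neg = np.clip(-W, 0.0, None)               # ... and negative conductances

sigma = 0.1                                  # assumed relative programming variability
G_pos_dev = G_pos * rng.lognormal(0.0, sigma, G_pos.shape)
G_neg_dev = G_neg * rng.lognormal(0.0, sigma, G_neg.shape)

x = rng.uniform(0.0, 1.0, size=128)          # input voltage vector
y_ideal = W @ x                              # exact matrix-vector product
y_dev = G_pos_dev @ x - G_neg_dev @ x        # column currents of the noisy crossbar

print("relative MVM error:",
      np.linalg.norm(y_dev - y_ideal) / np.linalg.norm(y_ideal))

Representing signed weights by two strictly positive conductance arrays is one common way crossbars encode negative values; the printed error shows how device variability propagates into the computed product.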
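Approach 1 treats stochastic switching as a feature of the learning rule rather than a defect. As a hedged illustration (a stochastic binary perceptron that learns from mistakes; an analogy to, not a reproduction of, the learning models developed in the project), each synapse that contributed to an error flips only with a small probability, so the device stochasticity itself plays the role of a learning rate:

import numpy as np

rng = np.random.default_rng(1)
N, T = 65, 2000                              # odd N avoids zero dot products
w_true = rng.choice([-1, 1], size=N)         # teacher rule
w = rng.choice([-1, 1], size=N)              # binary "device" weights
p_switch = 0.05                              # assumed probability a device switches

for _ in range(T):
    x = rng.choice([-1, 1], size=N)
    y = np.sign(w_true @ x)                  # teacher label
    if np.sign(w @ x) != y:                  # learn from mistakes only
        wrong = (w * x * y) < 0              # synapses that pushed the wrong way
        flips = wrong & (rng.random(N) < p_switch)
        w[flips] *= -1                       # stochastic switching as learning rate

X = rng.choice([-1, 1], size=(1000, N))
agreement = np.mean(np.sign(X @ w) == np.sign(X @ w_true))
print("post-training agreement:", agreement)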
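For approach 3, the following sketch (bipolar hypervectors cleaned up by a Hopfield-style attractor network; an illustration of the general principle, not the published architecture) shows why distributed representations tolerate unreliable components: a stored pattern is recovered even after 30% of its components are corrupted:

import numpy as np

rng = np.random.default_rng(2)
D, K = 2048, 10                              # hypervector dimension, stored symbols

symbols = rng.choice([-1, 1], size=(K, D))   # random bipolar codebook
Wh = (symbols.T @ symbols) / D               # Hebbian outer-product weights
np.fill_diagonal(Wh, 0.0)

noisy = symbols[3].copy()                    # corrupt one stored vector:
flip = rng.random(D) < 0.3                   # flip 30% of its components
noisy[flip] *= -1

state = noisy.astype(float)
for _ in range(5):                           # synchronous attractor iterations
    state = np.sign(Wh @ state)
    state[state == 0] = 1.0                  # break ties deterministically

print("overlap before cleanup:", symbols[3] @ noisy / D)
print("overlap after cleanup: ", symbols[3] @ state / D)

Because the information is spread evenly over all D components, no single unreliable device is critical; the attractor dynamics pull the corrupted state back to the stored pattern.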
Publications
- Cotteret, Madison; Richter, Ole; Mastella, Michele; Greatorex, Hugh; Janotte, Ella; Girão, Willian Soares; Ziegler, Martin & Chicca, Elisabetta. Robust Spiking Attractor Networks with a Hard Winner-Take-All Neuron Circuit. 2023 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5. IEEE.
- Nikiruy, Kristina; Perez, Eduardo; Baroni, Andrea; Reddy, Keerthi Dorai Swamy; Pechmann, Stefan; Wenger, Christian & Ziegler, Martin. Blooming and pruning: learning from mistakes with memristive synapses. Scientific Reports, 14(1).
- Cotteret, Madison; Greatorex, Hugh; Ziegler, Martin & Chicca, Elisabetta. Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Computation, 36(4), 549-595.
- Greatorex, Hugh; Richter, Ole; Mastella, Michele; Cotteret, Madison; Klein, Philipp; Fabre, Maxime; Rubino, Arianna; Girão, Willian Soares; Chen, Junren; Ziegler, Martin; Bégon-Lours, Laura; Indiveri, Giacomo & Chicca, Elisabetta. A neuromorphic processor with on-chip learning for beyond-CMOS device integration. Nature Communications, 16(1).
- Cotteret, Madison; Greatorex, Hugh; Renner, Alpha; Chen, Junren; Neftci, Emre; Wu, Huaqiang; Indiveri, Giacomo; Ziegler, Martin & Chicca, Elisabetta. Distributed representations enable robust multi-timescale symbolic computation in neuromorphic hardware. Neuromorphic Computing and Engineering, 5(1), 014008.
