Artificial Intelligence
3 Apr 2023

Memristor-based Neural Network Accelerators for Space Applications

Background

Traditional (von Neumann architecture) computing paradigms were never designed with AI in mind and are not inherently suited to AI applications. Most mainstream AI computing uses GPUs or CPUs, and thus incurs significant speed and performance penalties due to the von Neumann bottleneck: data must be moved from memory to the unit of computation for every operation performed, which drastically increases energy consumption and execution time [1].

In-memory computing combats this with an alternative architecture in which computation is performed inside the memory itself, eliminating the bottleneck. As there is no need for data movement between memory and compute, higher speeds, complete parallelization and lower energy consumption can be achieved [2]. These features are attractive for AI in general, and especially for on-board AI (so-called edge computing).

A schematic view of an analog neural network accelerator, where the crosspoint devices could be implemented using memristors, from [2]

In-memory computing can be used in a wide variety of scenarios, but AI applications are of particular interest. AI workloads tend to be highly parallelizable and are, at their core, large sets of matrix-vector multiplications (or, more generally, multiply-accumulate operations). Neural network accelerators based on in-memory computing have therefore attracted growing interest in recent years. Multiple methods exist to implement in-memory computing, including digital implementations using traditional transistor-based technologies (such as SRAM) as well as analog approaches. Memristors are one such analog avenue: in essence, they are programmable resistors with memory, which makes them naturally suited to in-memory computing as both the element of computation and of memory.
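To make the "at their core" claim concrete: a single fully connected neural network layer reduces to one matrix-vector multiplication, where each output element is a chain of multiply-accumulate operations. A minimal NumPy sketch (the layer sizes and values are arbitrary, chosen only for illustration):

```python
import numpy as np

# A dense layer is, at its core, a matrix-vector multiply: each output
# element is a sum of products, i.e. repeated multiply-accumulate operations.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # weight matrix: 4 outputs, 8 inputs
x = rng.standard_normal(8)        # input activation vector

y = W @ x                         # one matrix-vector multiplication
assert y.shape == (4,)
```

On a von Neumann machine every element of `W` must be fetched from memory for each such product; an in-memory accelerator performs all the multiply-accumulates where the weights are stored.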

In a neural network accelerator, memristors are arranged in a crossbar, with the weights of the network stored in the conductance states of the devices at the crosspoints. Often, multiple devices are used to encode a single weight. Memristors also boast radiation hardness [3] and, when used appropriately, significantly higher energy efficiency than conventional methods [4]. However, memristors and other analog technologies still pose serious challenges with regard to noise, physical non-idealities and scalability, which affect their ability to perform computations and act as storage.
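The crossbar computes a matrix-vector product in a single step: each device passes a current I = G·V (Ohm's law), and the currents along a column sum by Kirchhoff's current law. Because a conductance can only be non-negative, a common scheme uses two devices per weight in a differential pair, W = G_pos − G_neg. A minimal simulation of an ideal (noise-free) crossbar under these assumptions; the array sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, size=(3, 5))   # target weights: 3 input rows, 5 output columns

# Differential encoding: W = G_pos - G_neg, since conductances are non-negative.
G_pos = np.clip(W, 0, None)           # positive part of each weight
G_neg = np.clip(-W, 0, None)          # negative part of each weight

v = rng.uniform(0, 1, size=3)         # input voltages applied to the rows

# Ohm's law per device (I = G*V), Kirchhoff summation along each column:
i_pos = v @ G_pos
i_neg = v @ G_neg
y = i_pos - i_neg                     # differential readout of the column currents

# The ideal crossbar reproduces the matrix-vector product exactly:
assert np.allclose(y, v @ W)
```

Real devices deviate from this ideal picture through noise, conductance drift and limited precision, which is precisely what the mitigation work below targets.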

Project Overview

This project aims to enable in-memory computing for on-board AI acceleration by using memristive devices as both the unit of computation and of memory. The main goals of this project are as follows:

  • Selection of viable AI applications, simulation of neural networks on memristive hardware

    This aspect of the project focuses on the selection or development of a neural network which is suitable for implementation as an on-board AI application for space, and benefits from implementation in memristive technologies. Possible applications include GNC, science data processing and control tasks.

  • Research in mitigation of non-idealities

    After exploration and simulation of an appropriate set of neural networks, research will be done on how to mitigate the non-ideal properties of memristors while exploiting their benefits.

  • Development of prototype(s) and characterization

    Through the production of physical prototypes of parts of the envisioned neural network, characterization becomes possible. This characterization would focus on energy consumption, accuracy and radiation sensitivity.

The final objective of the project is to demonstrate the potential of memristors as neural network accelerators, both in a general sense and specifically for aerospace and space applications.
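Much of the mitigation and characterization work above can be prototyped in simulation before any hardware exists, for instance by injecting device noise into the stored weights and measuring the resulting output error. A hedged sketch: the additive Gaussian noise model and its magnitude below are assumptions for illustration, not measured device data.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((16, 16))   # ideal (trained) weight matrix
x = rng.standard_normal(16)
y_ideal = W @ x

# Assumed noise model: per-inference Gaussian perturbation of each weight,
# proportional to its magnitude (relative noise level sigma).
sigma = 0.05
errors = []
for _ in range(1000):
    W_noisy = W + sigma * np.abs(W) * rng.standard_normal(W.shape)
    errors.append(np.linalg.norm(W_noisy @ x - y_ideal))

mean_error = np.mean(errors)        # average output deviation under device noise
```

Sweeping `sigma` (or swapping in a noise model fitted to measured devices) gives an estimate of how much accuracy degradation a given network can tolerate, which informs both the application selection and the mitigation research.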

References

[1] Sally A McKee. “Reflections on the memory wall”. In: Proceedings of the 1st conference on Computing frontiers. 2004, p. 162.

[2] Malte J. Rasch et al. “A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays”. In: 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS). IEEE. 2021.

[3] He Lyu et al. “Research on single event effect test of a RRAM memory and space flight demonstration”. In: Microelectronics Reliability (2021), p. 114347. issn: 0026-2714

[4] Abhairaj Singh et al. “Low-power Memristor-based Computing for Edge-AI Applications”. In: 2021 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE. 2021, pp. 1–5.
