Faculty Development Competitive Research Grants 2020-2022
Hardware artificial intelligence (AI) is a broad area focused on developing hardware accelerators for faster and more energy-efficient neural networks. The development of hardware AI chips further focuses on scaling down area and energy, with the aim of competing with the human brain. Because the brain is the reference point, it is important to mimic the features that enable its low-energy processing. A core aspect of the brain is memory-based computing, which is the driving principle for this research. In this approach, generally known as processing-in-memory or in-memory computing, semiconductor memories are arranged in crossbar arrays whose outputs represent the dot-product operation, the basic operation for building neural networks. The conductances of the memory devices are equated to the weights of a neural network, the voltages applied to the rows to the network's inputs, and the currents flowing out of the columns to the inputs of the activation functions. Identifying the right combination of weights for a given set of inputs in a classification or prediction task is known as learning. Learning involves minimizing an error metric as an objective function, gradually updating the weights to reduce the error over a set of training inputs in a supervised or semi-supervised manner.
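The crossbar mapping and the learning loop described above can be sketched numerically. The following is a minimal illustration, not the project's actual design: `crossbar_currents`, the device sizes, and the delta-rule update with a sigmoid activation are all assumptions made for this example. Row voltages `V` encode the input vector, device conductances `G` encode the weights, and each column current is the weighted sum fed to an activation function; learning then gradually adjusts the conductances to reduce a squared-error objective.

```python
import numpy as np

def crossbar_currents(V, G):
    """Column currents of a crossbar: I_j = sum_i V_i * G[i, j].

    Ohm's law gives each device current V_i * G[i, j]; Kirchhoff's
    current law sums them along a column, yielding an analog
    vector-matrix (dot-product) operation.
    """
    return V @ G

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta_rule_step(V, G, target, lr=0.1):
    """One supervised update: nudge conductances to reduce squared error."""
    out = sigmoid(crossbar_currents(V, G))
    err = out - target                          # gradient of 0.5*||out - target||^2
    grad = np.outer(V, err * out * (1.0 - out))  # chain rule through the sigmoid
    return G - lr * grad                        # gradual weight (conductance) update

rng = np.random.default_rng(0)
V = rng.uniform(0.0, 1.0, size=4)           # input voltages on the rows
G = rng.uniform(0.0, 1.0, size=(4, 2))      # device conductances = weights
target = np.array([1.0, 0.0])               # desired activations (toy label)

err_before = np.mean((sigmoid(crossbar_currents(V, G)) - target) ** 2)
for _ in range(200):
    G = delta_rule_step(V, G, target)
err_after = np.mean((sigmoid(crossbar_currents(V, G)) - target) ** 2)
```

In a physical array, the update step corresponds to programming the memory devices to new conductance states; the sketch only captures the mathematical role of the crossbar as the dot-product engine inside the training loop.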
Effective start/end date: 1/1/20 → 12/31/22
- neural learning