Differential Intelligence
Many of the most celebrated techniques in Artificial Intelligence would not be as successful if, at some level, they were not exploiting differentiable quantities. The back-propagation algorithm, at the center of learning in artificial neural networks, leverages the first- and (sometimes) second-order differentials of the error to update the network weights. Gradient boosting techniques make use of the negative gradients of a loss function to iteratively improve over some initial model. More recently, differentiable memory access operations were successfully implemented and shown to give rise to new, exciting neural architectures. Even in the area of evolutionary computation, sometimes referred to as derivative-free optimization, having the derivatives of the fitness function is immensely useful, to the extent that many derivative-free algorithms, in one way or another, seek to approximate such information (e.g. the covariance matrix in CMA-ES as an approximation of the inverse Hessian).
We argue that, since differential information is known to be of paramount importance in successful learning algorithms, its systematic use in machine learning, robotic controllers and evolutionary computation will bring substantial advances to these fields.
The project:
In this project we investigate the use of high-order differential information to accelerate learning in computer programs or, similarly, in robotic tasks. We call this new area of research Differential Intelligence.
By means of differential algebra (implemented in the open source project AuDI) we can express the output of a computer program as a high-order Taylor expansion of its inputs and parameters, and hence use the resulting map to influence learning.
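As a concrete illustration, the sketch below uses pyaudi, the Python bindings of AuDI, to expand the output of a small program to third order around nominal values of its variables; the example function, the variable names and the expansion order are illustrative choices rather than part of the project itself.

```python
# A minimal sketch, assuming the pyaudi bindings of AuDI are installed
# (pip install pyaudi); the function below is an arbitrary toy example.
from pyaudi import gdual_double as gdual
from pyaudi import sin, exp

order = 3                    # truncation order of the Taylor expansion
x = gdual(1.2, "x", order)   # expand around x = 1.2
w = gdual(0.5, "w", order)   # expand around w = 0.5

# A program written in terms of gduals produces, instead of a single number,
# a truncated Taylor polynomial in the declared variables.
f = exp(-w * x) + sin(x)

print(f)                                      # the full third-order Taylor map
print(f.get_derivative({"dx": 1, "dw": 2}))   # a mixed derivative at the expansion point
```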
Similarly, in the case of robotic behaviours, the technique can be used to express a controlled trajectory as a high-order Taylor map of the controller's parameters, which can then be learned efficiently.
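The sketch below, again assuming pyaudi, illustrates this idea on a toy setting: a double-integrator plant driven by a PD controller whose two gains are declared as differential-algebraic variables, so that the final state of the simulated trajectory, and any cost built from it, comes out as a Taylor map of the gains from which a learning step can be read off. The plant, controller, cost and learning rate are illustrative assumptions, not part of the project description.

```python
# A minimal sketch, assuming pyaudi; plant, controller, cost and learning
# rate are toy choices used only to illustrate the idea.
from pyaudi import gdual_double as gdual

order = 2
kp = gdual(1.0, "kp", order)   # controller gains, expanded to second order
kd = gdual(0.1, "kd", order)   # around their current (nominal) values

# Explicit-Euler rollout of a double integrator under a PD controller.
# Because the gains are gduals, every state along the trajectory, and the
# final cost, is a truncated Taylor polynomial in (kp, kd).
dt, pos, vel = 0.05, 1.0, 0.0
for _ in range(100):
    u = -kp * pos - kd * vel
    pos = vel * dt + pos
    vel = u * dt + vel

cost = pos * pos + vel * vel   # quadratic cost on the final state

# The first-order coefficients of the map are exactly the gradient needed
# for a simple parameter update (higher orders could drive e.g. a Newton step).
grad_kp = cost.get_derivative({"dkp": 1})
grad_kd = cost.get_derivative({"dkd": 1})
lr = 0.5
print("updated gains:", 1.0 - lr * grad_kp, 0.1 - lr * grad_kd)
```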