NELLI – A Lifelong Learning Optimisation System – first results

A previous post described our approach to creating a lifelong learning system for optimisation. Our latest results are now published in an article in Evolutionary Computation (MIT Press) describing the system we have dubbed NELLI – Network for Lifelong Learning. NELLI takes inspiration from the field of Artificial Immune Systems, and consists of a continuously running system that

  • continuously generates novel heuristics
  • integrates useful heuristics into a self-adapting network of heuristics that collectively can solve sets of problems
  • removes superfluous heuristics from the network if they become redundant
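The three steps above can be sketched as a generate-integrate-prune loop. The code below is a minimal, hypothetical illustration only – the `make_policy` family of gap-target heuristics and all constants are our stand-ins, not the heuristics evolved in the paper:

```python
import random

# Toy 1-D bin packing: CAPACITY, problem sizes and the policy family
# are illustrative assumptions, not the paper's setup.
CAPACITY = 100

def pack(items, policy):
    """Pack items with a policy; return the number of bins used."""
    bins = []
    for item in items:
        idx = policy(item, bins)
        if idx is None:
            bins.append(item)          # open a new bin
        else:
            bins[idx] += item
    return len(bins)

def make_policy(target_gap):
    """A family of heuristics: place each item in the feasible bin whose
    leftover space would be closest to target_gap (0 = best-fit-like)."""
    def policy(item, bins):
        feasible = [(i, CAPACITY - load - item)
                    for i, load in enumerate(bins) if load + item <= CAPACITY]
        if not feasible:
            return None
        return min(feasible, key=lambda f: abs(f[1] - target_gap))[0]
    return policy

rng = random.Random(0)
problems = [[rng.randint(20, 70) for _ in range(40)] for _ in range(60)]

network = []                           # the self-adapting pool of heuristics
for step in range(30):
    # 1. continuously generate a novel candidate heuristic
    network.append(make_policy(rng.randint(0, 40)))
    # 2. expose the network to a random sample of the environment
    sample = rng.sample(problems, 10)
    scores = [[pack(p, h) for h in network] for p in sample]
    # 3. integrate/prune: a heuristic survives only if it is (jointly)
    #    best on at least one sampled problem -- its niche
    wins = [0] * len(network)
    for row in scores:
        best = min(row)
        for i, s in enumerate(row):
            if s == best:
                wins[i] += 1
    network = [h for h, w in zip(network, wins) if w > 0]

# the network solves each problem with its best member
ensemble_bins = sum(min(pack(p, h) for h in network) for p in problems)
```

The pruning rule is the key design choice: a heuristic is retained only while it remains the best solver for some problem, so the network size tracks the diversity of the problems rather than growing without bound.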

The idea is that as the system is continuously exposed to problems over time, its performance should improve, it should learn to generalise to unseen problems, and it should maintain a memory so that it can quickly solve problems similar to those seen in the past. Full details are in the paper, which is available here at MIT Press – the highlights are summarised below:

NELLI learns over time from experience – the more problems it sees, the better it gets at solving them

The problem ‘environment’ remains static and consists of 3968 bin-packing problems. Every iteration, 30 problems are randomly selected and presented to the network, which changes dynamically over time in both its size and the heuristics it contains: performance (always measured on the entire set of 3968 problems) increases over time, while the number of heuristics and problems sustained in the network stabilises, indicating scalability.

[Figure: performance over time on the full set of 3968 problems]


NELLI generalises to problems that are similar to those it has been exposed to 

Every 1000 time-steps, the problem ‘environment’ changes: a new set of 100 instances is randomly chosen from the larger set of 3968 to form the current environment. Every iteration, 30 of these are randomly presented to the network. Performance is measured against all 3968 problems, despite the fact that many of the problems in the full set may never have been presented to the system, particularly in its early phases.
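The changing-environment protocol can be sketched as a sampling schedule. This is a hypothetical reconstruction with instance indices standing in for the actual bin-packing problems; the constants match the numbers quoted above:

```python
import random

rng = random.Random(42)
ALL = list(range(3968))             # indices standing in for the full benchmark set
EPOCH, ENV_SIZE, BATCH = 1000, 100, 30

def environment_schedule(steps):
    """Yield the 30 instances shown to the network at each iteration.
    Every EPOCH iterations a fresh 100-instance environment is drawn;
    performance would still be scored against all 3968 instances."""
    env = []
    for step in range(steps):
        if step % EPOCH == 0:
            env = rng.sample(ALL, ENV_SIZE)
        yield rng.sample(env, BATCH)

batches = list(environment_schedule(2500))
```

Note the asymmetry this creates: the network only ever sees 30 instances per iteration drawn from the current 100, yet generalisation is judged on all 3968.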

[Figure: performance on all 3968 problems as the 100-instance environment changes]


NELLI incorporates a memory

As in the previous experiment, the environment changes periodically – this time every 500 iterations – but now it is toggled between two datasets that contain very different problems: heuristics that perform well on one dataset do not perform well on the other. We perform experiments in which (a) the system is restarted every 500 iterations (no memory), and (b) the existing network is retained when the environment changes (memory).
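The memory-versus-restart comparison can be illustrated with a deliberately abstract toy model, assuming scalar "behaviours" in place of real heuristics and two alternating targets in place of the two datasets. The survival rule mimics NELLI's: a heuristic stays alive only while it is the network's best on some problem the network has sustained, which is what lets specialists for the absent environment persist:

```python
import random

def run(memory, steps=120, period=30, seed=3):
    """Toy model of the memory experiment (all numbers illustrative).
    A 'heuristic' is a scalar behaviour t; the environment rewards t
    near a target that toggles between two values every `period` steps."""
    rng = random.Random(seed)
    targets = [10.0, 90.0]
    sustained = []          # problems kept alive in the network
    network = [50.0]        # heuristic behaviours
    costs = []
    for step in range(steps):
        env = targets[(step // period) % 2]
        if step % period == 0:
            if not memory:                    # restart: forget everything
                network, sustained = [rng.uniform(0, 100)], []
            if env not in sustained:
                sustained.append(env)
        # generate: mutate the current best heuristic
        best = min(network, key=lambda t: abs(t - env))
        network.append(best + rng.gauss(0, 5.0))
        # integrate/prune: keep each sustained problem's champion only
        network = sorted({min(network, key=lambda t: abs(t - p))
                          for p in sustained})
        costs.append(abs(min(network, key=lambda t: abs(t - env)) - env))
    return costs

with_memory = run(True)
no_memory = run(False)
```

In the memory run, the champion for a target can only improve while that target stays sustained, so when the environment toggles back the network starts from its previous best rather than from scratch; the restart run pays the full adaptation cost at every switch.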

[Figure: memory vs no-memory performance across environment switches]

NELLI – with its implicit memory – always outperforms the system with no memory. Because the network is retained, the system does not have to adapt from scratch to a new environment.

The system, with more detailed results, is fully described in the Evolutionary Computation paper.