
Updated results on life-long learning

Two recent papers, published at GECCO 2014 and PPSN 2014, describe improved results from our NELLI system, whose goal is to provide fast, reasonable solutions to problems that are presented to the system in a continuous stream, possibly changing in characteristics over time.

NELLI has two novel features:

  1. A mechanism to continuously generate novel heuristics to solve problems
  2. A cooperative mechanism that sustains a set of heuristics that collaborate to solve all problems the system has been exposed to, adapting autonomously over time as the nature of the problems changes.

The system was described previously in an article in ECJ 2014. Since then, we have made two modifications that significantly improve results. The first modification simplifies the representation, facilitating the search for new heuristics. The second enables a better balance between exploration and exploitation by introducing a second method for generating new heuristics, which uses mutation to exploit heuristics that are already sustained, inspired by clonal selection processes in the immune system.


The new representation uses variable-length sequences of nodes – 9 possible nodes represent packing heuristics taken from the literature, such as ‘pack the biggest item’ or ‘pack the largest combination of 2 items that fit exactly’. NELLI evolves new sequences – a pointer moves across the sequence, applying each heuristic before moving to the next, cycling back to the beginning when it reaches the end.
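As a rough illustration of this representation, the Python sketch below applies a variable-length sequence of nodes cyclically to a set of items. The two node implementations are simplified stand-ins for the heuristics mentioned above, not NELLI's actual node set or evolutionary machinery.

# Minimal sketch of the sequence-of-nodes representation; the node logic below is
# a simplified stand-in for NELLI's packing heuristics, purely for illustration.

def pack_biggest(items, bins, capacity):
    """Place the single largest remaining item in the first bin it fits, else a new bin."""
    if not items:
        return False
    item = max(items)
    for b in bins:
        if sum(b) + item <= capacity:
            b.append(item)
            items.remove(item)
            return True
    bins.append([item])
    items.remove(item)
    return True

def pack_best_pair(items, bins, capacity):
    """Open a new bin with the pair of items that best fills it (a stand-in for
    'pack the largest combination of 2 items that fit exactly')."""
    best, best_sum = None, -1
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            s = items[i] + items[j]
            if best_sum < s <= capacity:
                best, best_sum = (i, j), s
    if best is None:
        return False
    i, j = best
    bins.append([items[i], items[j]])
    del items[j], items[i]                 # delete the higher index first
    return True

NODES = {"BIGGEST": pack_biggest, "BEST_PAIR": pack_best_pair}

def apply_sequence(sequence, items, capacity):
    """Move a pointer across the sequence, applying each node in turn and cycling
    back to the start, until every item has been packed."""
    items, bins, ptr, stalled = list(items), [], 0, 0
    while items:
        packed = NODES[sequence[ptr]](items, bins, capacity)
        ptr = (ptr + 1) % len(sequence)
        stalled = 0 if packed else stalled + 1
        if stalled == len(sequence):       # no node can act: force the largest item into a new bin
            pack_biggest(items, bins, capacity)
            stalled = 0
    return bins

print(apply_sequence(["BIGGEST", "BEST_PAIR"], [60, 50, 40, 30, 20, 10], capacity=100))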


 

Continuous Learning and Adaptation to Environment

The new system is presented with alternating sets of problems. The two problem sets are taken from the literature and are known to have completely different characteristics. Heuristics tuned to work well on one set do not perform well on the other.

The figure tracks performance over a series of epochs in which the two problem sets alternate. Three points are worth noting:

1. The system retains memory: at the start of a new epoch, there is no significant difference in performance compared to the best value obtained at the end of the previous epoch.

2. The system continues to learn over an epoch: there is a significant difference between the performance at the start of an epoch and the (better) performance at the end of that epoch.

3. The system continues to learn over its lifetime: there is a significant difference between the best value obtained during the first epoch in which a dataset appears and the best value obtained at the end of the final epoch.


Summary

NELLI is compared both to a set of known heuristics, from which one is selected greedily at each iteration based on the problem at hand, and to the previous version of the system, showing clear improvements over both.

We believe this goes some way towards creating a new kind of optimisation system: one in which problems can be presented continuously, and in which the system autonomously adapts to new problem characteristics. Of course, specialist algorithms tuned to specific problem sets may well outperform NELLI*; however, we believe that in the real world, systems that do not need extensive tuning and that produce quick, reasonable solutions to problems are of more interest to the practitioner.


Workshop on Real-World Problems @ GECCO 2014

The 2nd Workshop on Real-World Problems and Problem Understanding was held at GECCO 2014 in Vancouver. The workshop offered a mixed programme of invited talks, peer-reviewed paper presentations and a demo paper, focusing on various aspects of tackling real-world problems with meta-heuristic algorithms.

The workshop opened with a talk by Prof. Darrell Whitley from Colorado State University, who drew on his many years of experience in the field to discuss Why Are There Not More Applications of Evolutionary Algorithms? Darrell highlighted a number of factors, chief among them the gap between the types of problems typically solved by academics and the complex, constrained nature of real-world problems.

This was followed by two talks from authors of accepted papers:

  • A Comparison of Antenna Placement Algorithms, Abhinav Jauhri, Jason D. Lohn, Derek S. Linden
  • Hyper-Heuristic Genetic Algorithm for Solving Frequency Assignment Problem in TD-SCDMA, Chao Yang, Shuming Peng, Bin Jiang, Lei Wang, Renfa Li

Dr. Anna Esparcia-Alcazar, head of R&D at S2-Grupo, provided an entertaining view from an industry perspective, illustrating her talk with a variety of anecdotes and advice gleaned from working with a company providing solutions to optimisation problems for SMEs. Her talk, Six Impossible Things Before Breakfast: Computational Intelligence in the SME, illustrated some of the ups and downs of working with a company, not all of them within the control of the meta-heuristic designer!

The workshop concluded with a demo and talk by Prof. Emma Hart, describing work recently undertaken with a company to develop an optimisation algorithm for employee scheduling and routing, i.e. allocating jobs to workers and additionally routing them between jobs, accounting for service-completion deadlines and maximising efficiency. An interactive GUI that enables the fitness function to be altered during the course of a run was demonstrated during the talk.

All workshop papers are available in the GECCO companion proceedings.

 

NELLI – A Lifelong Learning Optimisation System – first results

A previous post described our approach to creating a lifelong learning system for optimisation. Our latest results are now published in an article in Evolutionary Computation (MIT Press) describing the system we have dubbed NELLI – Network for Lifelong Learning. NELLI takes inspiration from the field of Artificial Immune Systems, and consists of a continuously running system that

  • continuously generates novel heuristics
  • integrates useful heuristics into a self-adapting network of heuristics that collectively can solve sets of problems
  • removes superfluous heuristics from the network if they become redundant (a minimal sketch of this loop is given below)
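The following is a minimal sketch of this continuously running loop under strong simplifying assumptions: the names (problem_stream, generate_heuristic, evaluate) are hypothetical placeholders, and the notion of "useful" (being the best heuristic on at least one problem) is a crude proxy for the AIS network dynamics described in the paper.

def lifelong_loop(problem_stream, generate_heuristic, evaluate, iterations=1000):
    """Sketch of the loop: generate heuristics, keep those that earn a niche, drop the rest."""
    network = []                                   # sustained heuristics
    for _ in range(iterations):
        problems = next(problem_stream)            # current batch from the stream
        network.append(generate_heuristic())       # continuously generate novel heuristics
        # a heuristic is deemed useful if it gives the best result on at least one problem
        useful = set()
        for p in problems:
            best = min(network, key=lambda h: evaluate(h, p))
            useful.add(id(best))
        # remove superfluous heuristics that have become redundant
        network = [h for h in network if id(h) in useful]
    return network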

The idea is that, as the system is continuously exposed to problems over time, its performance should improve, it should learn to generalise to unseen problems, and it should maintain a memory so that it can quickly solve problems similar to those seen in the past. Full details are in the paper, which is available here at MIT Press – the highlights are summarised below:

NELLI learns over time from experience – the more problems it sees, the better it gets at solving them

The problem ‘environment’ remains static and consists of 3968 bin-packing problems. Every iteration, 30 problems are randomly selected and presented to the network, which dynamically changes over time both in size and in the heuristics it contains: performance (always measured on the entire set of 3968 problems) increases over time, while the number of heuristics and problems sustained in the network stabilises, indicating scalability.
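A short sketch of this evaluation protocol is given below; the helper names (network_step, measure) are hypothetical. The key point is that only a random sample of 30 problems is presented each iteration, but reported performance is always measured over the full set.

import random

def run_static_environment(all_problems, network_step, measure, iterations=10000):
    """Hypothetical sketch: present 30 random problems per iteration, score on all of them."""
    performance = []
    for _ in range(iterations):
        presented = random.sample(all_problems, 30)   # problems seen this iteration
        network_step(presented)                       # update the network on the sample
        performance.append(measure(all_problems))     # but always score on all 3968
    return performance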


 

NELLI generalises to problems that are similar to those it has been exposed to 

Every 1000 time-steps, the problem ‘environment’ changes: a new set of 100 instances is randomly chosen from the larger set of 3968 to form the current environment. Every iteration, 30 of these are randomly presented to the network. Performance is measured against all 3968 problems, despite the fact that many of the problems in PS2 might never have been presented to the system, particularly in its early phases.


 

NELLI incorporates a memory

As in the previous experiment, the environment changes every 500 iterations – however, this time the environment is toggled between two datasets that contain very different problems – heuristics that perform well on one dataset do not perform well on the second. We perform experiments in which (a) we restart the system every 500 iterations (no memory) and (b) the existing network is retained when the environment changes (memory).
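A sketch of this protocol is given below; run_epoch is a hypothetical helper that advances the system for one 500-iteration epoch on the given dataset and returns the updated network together with its performance.

def toggling_experiment(dataset_a, dataset_b, run_epoch, epochs=10, memory=True):
    """Toggle the environment between two datasets each epoch. With memory the
    network is retained across changes; without it, the system restarts from scratch."""
    network, history = [], []
    for epoch in range(epochs):
        data = dataset_a if epoch % 2 == 0 else dataset_b   # alternate environments
        if not memory:
            network = []                                     # no memory: restart
        network, performance = run_epoch(network, data)
        history.append(performance)
    return history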

NELLI, with its implicit memory, always outperforms the system with no memory. Thanks to the retained network, the system does not have to adapt from scratch to each new environment.

The system, with more detailed results, is fully described in the Evolutionary Computation paper.

SEABIS Meeting on Real-World Optimisation

The ROLL team was delighted to welcome Dr Bill Langdon from UCL to Edinburgh Napier in September.  Bill spoke to around 25 researchers from across Scotland at a meeting of the SEABIS group, on the theme of Real-World Optimisation. (SEABIS is a sub-group of the SICSA theme on Complex Systems Engineering).

Bill is currently working on two EPSRC-funded projects looking at aspects of Search-Based Software Engineering. He has been using Genetic Programming to show that a machine can be used to optimise human-written code, trading off functional and non-functional properties. Using a 50,000-line C++ program called Bowtie2 (a modern DNA look-up tool that aligns sequences of DNA against reference data) as an example, Bill showed that GP could be used to evolve 39 patches to the existing code which, when applied, produced a 70-fold speed-up in performance, with 98% of the results better than or identical to those of the existing system.

This is a really nice example of applying optimisation techniques to a common real-world task that, on the face of it, seems very difficult for an optimisation technique, given the difficulty of maintaining working code after applying random mutations. Interestingly, Bill showed that only 74% of evolved programs compile and a further 6% of these fail at run time… yet despite this, the technique produces excellent results as described above. There is definitely a lesson in there for others wishing to tackle very difficult problems with evolutionary methods who might otherwise have been put off.

Some of the speed-up in the code was attributed to simple things such as evolution removing redundant code or simplifying loops. Perhaps there is some mileage in tackling other very constrained optimisation problems in the same manner. Take vehicle routing, which has many idiosyncratic constraints: perhaps an evolutionary method might simplify or ignore constraints, making it easier for a(nother?) problem solver to tackle the problem… more on this in the future.

The meeting concluded with a round-table discussion on whether there is a perceived gap between the academic research undertaken in universities and the types of problems people in real businesses worry about. Many people contributed to the discussion and a number of interesting points were raised:

  • Most real-world problems have far more constraints than those included in academic benchmarks – Vehicle Routing is a good example of this, where even the most constrained benchmarks don’t come close to the number of constraints in a real-world problem
  • When working with outside clients, it's important to listen carefully to what they actually want and to separate this from the research you want to do – the client doesn't necessarily need a novel technique or complex problem solving.
  • A possible method of bringing the two communities closer together would be to create a website where people could post problems they would like solved, inviting the academic community to tackle them.
  • Competitions have proved to be a useful method of generating interest from a wider audience – using real problems in a competition format might go some way towards building links between the two communities.


Lifelong Machine Learning Systems & Optimisation

Typical meta- and hyper-heuristic algorithms tend to operate in the same manner: an algorithm is tuned to work well on a (possibly large) set of representative problems, and each time a new problem instance needs to be solved, the algorithm conducts a search of either the solution space or the heuristic space to locate good solutions. Whilst this often leads to acceptable solutions, such approaches have a number of weaknesses: if the nature of the problems to be solved changes over time, the algorithm needs to be periodically re-tuned. Furthermore, such approaches are likely to be inefficient, failing to exploit previously learned knowledge in the search for a solution.

In contrast, in the field of machine learning, several contemporary learning systems employ methods that use prior knowledge when learning behaviours in new, but similar, tasks. [1] recently proposed that the AI community should move beyond learning algorithms to more seriously consider the nature of systems that are capable of learning over a lifetime, creating lifelong machine learning (LML) systems. Although this was in some senses directed at typical ML applications such as robotics, it seems entirely appropriate that optimisation systems should embody the same properties.

[1] identify three essential components of an LML system that clearly apply in an optimisation context:

  1. it should be able to retain and/or consolidate knowledge, i.e. incorporate a long-term memory;
  2. it should selectively transfer prior knowledge when learning new tasks;
  3. it should adopt a systems approach that ensures the effective and efficient interaction of the elements of the system.

In addition, the system should be computationally efficient when storing learned knowledge in long-term memory; ideally, retention should occur online.

We propose that the natural immune system has properties that map very closely to these requirements:

  • it exhibits memory that enables it to respond rapidly when faced with pathogens it has previously been exposed to;
  • it can selectively adapt prior knowledge via clonal selection mechanisms that rapidly adapt existing antibodies to new variants of previous pathogens; and
  • it embodies a systemic approach by maintaining a repertoire of antibodies that collectively cover the space of potential pathogens.

We’ve developed a novel optimisation method that combines inspiration from AIS with hyper-heuristics into a system that solves 1D bin-packing problems; the system is continuously fed problems – the AIS component retains a network of interacting deterministic heuristics that both solves problems as they arrive and acts as a memory of past experience, enabling solutions to be rapidly found to problems that share characteristics with those previously solved.

Furthermore, the system continuously generates new knowledge in the form of novel deterministic heuristics; this knowledge, if useful, becomes integrated into the network of heuristics. Both the content and topology of the network are plastic, enabling adaptation over time to new environments.

A description of an early version of the system is given in our ECAL 2013 paper; an improved version that we’re calling NELLI (Network for Lifelong Learning) will be described in a journal paper, a draft version of which will shortly be available.
The new system creates a network of interacting heuristics and problems: the problems incorporated in the network provide a minimal representative map of the problem space; the heuristics generalise over the problem space, each occupying its own niche.

  1. D. Silver, Q. Yang and L. Li. Lifelong machine learning systems: Beyond learning algorithms. AAAI 2013 Spring Symposium Series.


New 1D Bin Packing Benchmark Problems Available

The literature describes many instances of benchmark 1D bin-packing problems, mainly due to Scholl [1] and Falkenauer [2]. Gathering these instances together results in a large set of 1370 problems, subsets of which have been used to illustrate the performance of a range of different optimisation algorithms and simple deterministic heuristics.

Some preliminary work we’ve been doing showed (not surprisingly) that, generally, instances in a particular set are best solved by the same heuristic, while problems in a number of the sets are solved equally well by many heuristics, suggesting they are not particularly difficult. This is illustrated in the graphic: each horizontal row represents one of the benchmark sets from Scholl or Falkenauer (containing either 10 or 20 instances). Each cell (an instance) is coloured green or blue according to which of two heuristics (FFD or DJD respectively) best solves it. If the cell is white then the instance is solved equally well by both heuristics. Obvious patterns are apparent, suggesting strong commonalities between instances in a set. More on this soon in a paper we’re currently working on that maps problem characteristics to heuristics.
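For reference, FFD (First-Fit Decreasing) is the simpler of the two heuristics and is easily sketched below; DJD, roughly speaking, fills a bin to a threshold and then searches for small combinations of items that fill the remaining space, and is omitted here.

def ffd(items, capacity):
    """First-Fit Decreasing: take items largest-first and place each one into the
    first open bin it fits in, opening a new bin only when necessary."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

print(len(ffd([7, 5, 4, 4, 3, 2, 2], capacity=10)))   # -> 3 bins (optimal for this toy instance)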

This led us to thinking that the current benchmarks are too easy, so we decided to generate some new ones. The new benchmarks are available here for download, with a detailed description of how they were generated.

Problem Set A contains 15,830 problem instances – these instances are all hard. What does hard mean? In this case, it means that none of these instances can be solved optimally by Falkenauer’s Hybrid Grouping Genetic Algorithm or by four well-known deterministic heuristics (FFD, DJD, DJT, ADJD). These instances were used in our recent GECCO 2013 paper Generating Single and Multiple Cooperative Heuristics for the One Dimensional Bin Packing Problem Using a Single Node Genetic Programming Island Model.

Problem Set B contains a further 3968 problem instances. Three new problem instances were generated from the parameters sampled from each of the 1370 problem instances outlined above, using the procedure outlined on the Benchmark page – all feasible instances were retained. These problems were used to test the AIS hyper-heuristic system described in our ECAL 2013 paper Learning to Solve Bin Packing Problems with an Immune Inspired Hyper-heuristic.
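The exact generation procedure is the one documented on the Benchmark page; the snippet below is only a hypothetical illustration of the general idea of sampling new item lists from the parameters (item count, capacity, weight range) of an existing instance.

import random

def generate_instance(n_items, capacity, min_weight, max_weight, seed=None):
    """Hypothetical illustration only: sample item weights uniformly from the weight
    range of a source instance. The real procedure is described on the Benchmark page."""
    rng = random.Random(seed)
    items = [rng.randint(min_weight, max_weight) for _ in range(n_items)]
    return {"capacity": capacity, "items": items}

# e.g. three new instances from the parameters of one existing instance
new_instances = [generate_instance(120, 150, 20, 100, seed=s) for s in range(3)]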

GECCO 2013 Workshop Summary

Workshop on Problem Understanding and Real-World Optimisation

The workshop was jointly organised by the ROLL team (Emma Hart and Kevin Sim) and Dr Kent McClymont of the University of Exeter. The morning was devoted to looking at aspects of real-world optimisation – why real-world problems are hard, efforts to facilitate solving problems, and shared experiences from people who have worked closely with industry.

Real World Problems

Unfortunately, the invited speaker, Tim Pigden of Optrak Logistics, was stricken with a bout of gout and sadly couldn’t make it to the workshop. However, technology prevailed and Tim was able to deliver his talk from his bed over Skype (the empty stage generating a few odd looks from people passing the room!). Based on many years of experience of supplying software to the vehicle routing industry, Tim outlined some of the many real constraints associated with vehicle routing that require practitioners to understand demand, travel, real-time and people constraints. To summarise Tim’s words – academics creating benchmarks need to understand that:

        • people place orders
        • tractors, trailers and drivers need to be treated properly
        • linear capacity constraints are best suited to liquids

This was illustrated with many practical examples, citing deliveries of newspapers, petrol and lubricant, pipes and beer! A great many constraints were highlighted that, it seems safe to say, most of the audience had never considered in their own research on VRP, providing much food for thought. One of the things we intend to pursue as part of the ROLL project is the creation of a more realistic set of VRP benchmarks that capture at least some of the many constraints that Tim raised.

Tim’s slides can be downloaded here as a PowerPoint file.

This talk was followed by a presentation from Neil Urquhart, who described recent work using GIS tools such as Google Maps or OpenStreetMap with VRPs, accounting for real road-network layouts rather than using proxy measures such as Euclidean distance. With the increasing availability of such tools, combining proper mapping with real problem descriptions, as outlined by Tim, surely has the potential to close some of the gap between academic VRP research and the real world.
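A toy example of why this matters: the straight-line (Euclidean) distance between two locations can be a poor proxy for the distance actually driven along the road network. The coordinates and the road graph below are made up purely for illustration.

import heapq
import math

coords = {"depot": (0, 0), "customer": (3, 4)}
# road network as an adjacency list: node -> [(neighbour, road length), ...]
roads = {
    "depot":    [("junction", 4.0)],
    "junction": [("depot", 4.0), ("customer", 6.0)],
    "customer": [("junction", 6.0)],
}

def euclidean(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def road_distance(start, goal):
    """Dijkstra's shortest path over the toy road graph."""
    dist, queue = {start: 0.0}, [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, math.inf):
            continue
        for neighbour, length in roads[node]:
            nd = d + length
            if nd < dist.get(neighbour, math.inf):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return math.inf

print(euclidean("depot", "customer"))       # 5.0 as the crow flies
print(road_distance("depot", "customer"))   # 10.0 along the road network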

Tim Pigden’s talk was nicely complemented by a talk from Bogdan Filipic from the Jožef Stefan Institute, Slovenia, who shared experiences of working with industry in two specific cases: tuning the process parameters for the casting of steel, and scheduling interruptions in a car factory to minimise energy costs. Bogdan concluded with a number of tips for those embarking on joint work with industrial partners that will be of benefit to many people.

Also relating to real-world applications, Dr Gabriela Ochoa discussed some exciting opportunities for applying optimisation research to the software-engineering domain, outlining new research in finding and fixing bugs in software. She also described a recent research initiative, Dynamic Adaptive Automated Software Engineering (DAASE), whose goal is to embed optimisation into deployed software to create self-optimising adaptive systems – definitely a real-world application for optimisation researchers to get their teeth into!

Finally, Ioannis Giagkiozis from the University of Sheffield presented Liger – an open-source integrated optimisation environment designed to be extensible and to have a smooth learning curve so that it can be used by non-experts in industry, thus addressing the current problem that many software packages designed to solve real-world problems are commercial or closed-source. The framework provides a number of fully configurable algorithms, the facility to create new algorithms, and a visual exploration tool – essential for decision makers. Making tools such as Liger available will hopefully help address one of the current problems for industry, i.e. that adopting novel technologies often requires expensive and hard-to-find access to an academic expert.

Problem Understanding

The morning’s workshop focusing on aspects of real-world problems was nicely complemented by the afternoon’s discussions, which examined current methods of understanding problem landscapes. Although this research uses benchmark problems to gain theoretical understanding of what makes certain landscapes difficult for optimisation algorithms, this kind of research is crucial in developing our understanding of when and where algorithms work. By combining this knowledge with an improved understanding of the nature of real-world problems, it should indeed be possible to make inroads into ‘bridging the gap’ between real-world and academic problems.