
Ensemble Methods for Optimisation Workshop


Ensemble Methods for Optimisation

Workshop @ GECCO 2017, Berlin

Organisers: Emma Hart and Kevin Sim

In the field of machine-learning, ensemble-methods that combine decisions from multiple learning algorithms to make accurate predictions have been the focus of intense research for well over a decade, with a broad range of empirical results underpinned by a sound theoretical understanding. Ensemble methods have also found favour within the constraint satisfaction and satisfiability domains where they are commonly referred to as portfolio methods. In the latter case, portfolios tend to be composed from exact solvers, and are evaluated according to run-time metrics.

On the other hand, research in ensemble methods using meta-heuristic algorithms – in which solution quality rather than run-time is the driving factor – lags behind machine-learning and satisfiability research in both theory and practice. Many fundamental questions remain with respect to how ensembles should be constructed and designed; these will be addressed during the workshop:

  • How should we select algorithms to form an ensemble?
  • How large should the selection pool be – and where do we find algorithms to form the pool?
  • Are automated algorithm generation techniques required to design new algorithms to provide a large enough pool?
  • Machine-learning theory suggests that diversity between components is a key factor – what diversity measures can be used to successfully distinguish between meta-heuristic algorithms?
  • How should the ensemble operate? Algorithms might collaborate, i.e. the computational budget is divided between algorithms within the ensemble, or cooperate, in that different algorithms solve different instances (see the sketch after this list).
  • What domains are ensemble methods best suited to?
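
As a rough illustration of these two modes of operation, the following Python sketch contrasts a collaborating ensemble (the budget is split across algorithms on a single instance) with a cooperating one (a selector assigns each instance to a single algorithm). The run and selector functions are hypothetical placeholders, not part of any existing framework:

    def collaborate(algorithms, instance, total_budget, run):
        """Split the computational budget across all algorithms on one instance."""
        share = total_budget // len(algorithms)
        return max(run(alg, instance, share) for alg in algorithms)

    def cooperate(algorithms, instances, budget_per_instance, run, selector):
        """Assign each instance to the single algorithm chosen by a selector."""
        results = []
        for instance in instances:
            alg = selector(instance, algorithms)  # e.g. a learned per-instance model
            results.append(run(alg, instance, budget_per_instance))
        return results
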
Submissions
We invite paper submissions describing technical work as well as conceptual/visionary short papers. Demos accompanied by short papers are also welcomed. The workshop welcomes presentation of work at an early stage in order to encourage discussion amongst participants.
Technical papers have a limit of 8 pages.
Conceptual and Demo papers have a limit of 2 pages.
Submissions should be emailed to e DOT hart AT napier.ac.uk with the subject heading:
EfO Submission
All papers should be in ACM (Association for Computing Machinery) format. Please see GECCO 2017 http://gecco-2017.sigevo.org for details. Papers should not be anonymised. All papers should be submitted in PDF format.
All accepted papers will be presented at the workshop and will appear in the GECCO workshop volume.
Key Dates
Paper submission deadline:  March 29, 2017 (no extensions permitted)

Workshop on the Mathematical Modelling of the Risk of Wind Damage to Forests

Background

Wind is a major disturbance agent in forests and a key part of the dynamics of many forest ecosystems. Although this has long been recognised in boreal and temperate forests, there is now increasing evidence of its importance in tropical forests. In addition, the high levels of damage that can occur during storms have important economic and social impacts, particularly for managed forests. For example, in European forests wind is responsible for more than 50% of all damage by volume, and the cost of such damage can be very high (e.g. €6 billion in France from storms Lothar and Martin in 1999, and €2.4 billion in Sweden after storm Gudrun in 2005). There is clear evidence that damage levels in forests have been increasing over the past century due to a mixture of changing climate and forest management practice, and damage levels are predicted to continue to increase. To understand the influence of wind damage on forest ecosystems, or to manage forests in order to mitigate the risk of wind damage, it is necessary to have methods for predicting the levels of damage at different wind speeds and the risk of such wind speeds occurring.

Mathematical models are normally used to assess the risk of wind damage to forests; such modelling has been an active area for approximately 15 years, following two main approaches:

  1. Statistical models developed from observations of damage in well-monitored forests with information on soil, tree size, elevation, forest management, etc.
  2. Mechanistic models that use methods from civil/mechanical engineering to directly predict the wind loading on trees and the wind speed at which they will fail.

Statistical models generally make good predictions within the locale for which they were developed but do not always translate well to other locations or conditions. Mechanistic models are usually not as accurate as statistical models for a specific area but are easier to transfer to other situations. They also offer the possibility of providing wind risk evaluations at all scales, from individual trees to forest, regional, national and potentially global scales. However, it is becoming clear from the literature and conference discussions (e.g. IUFRO Wind and Trees Conference, August 2014, Brazil) that current mechanistic models have reached a limit in their capability (spatially and temporally) and a re-evaluation of the approach is required to make progress. Therefore it was recommended by Prof. Steve Mitchell, chairman of the IUFRO working group 8.03.06, that a workshop focussed on mechanistic wind risk modelling would be of value in starting the discussion on the best ways to develop such models in the future.

Goals

The planned outputs of the workshop are:

  1. A report providing recommendations for the future of forest wind damage modelling, with a list of available tools/models and data, together with a discussion of the most appropriate methodologies for different temporal and spatial scales. The report will also contain an assessment of barriers to the development of new mechanistic modelling approaches and the major gaps in our knowledge.
  2. A review article on methods for modelling forest wind risk, from the single tree to the national scale, for submission to Environmental Modelling and Software.

Overview

The workshop was organised by ROLL collaborator Prof. Barry Gardiner and took place in Arcachon (near Bordeaux, France) from 28th to 30th October. It brought together forest wind risk modellers from around the world along with experts in subjects required to improve the models (e.g. meso-scale airflow modellers, wind climate scientists, risk analysis engineers, etc.), who have to date not always been actively involved in forest wind damage risk modelling. Professor Emma Hart gave a talk on the opportunities for using Machine Learning techniques within the industry, and posed the question of whether novel optimisation methods might be developed, given that potential solutions could be evaluated cheaply by replacing expensive simulations with approximating tools such as neural networks. Slides from her talk are available here: WindDamageForests

(photograph courtesy of Barry Gardiner)

AHDB Smart Agriculture 2015

The exchange of innovation and technology is essential for any industry to overcome current issues and adapt to face future challenges and opportunities. The use of technology in agriculture & horticulture is nothing new. Farmers and growers are continuously seeking ways to reduce cost, improve quality and advance productivity. But what can be learned from other industries? What is out there and how can we exploit advances in engineering, manufacturing, computer science or the medical arena and apply them to the future of precision farming?

AHDB’s Smart Agriculture Conference was an opportunity for scientists, researchers and engineers across multiple disciplines to build relationships that stimulate discussion and identify potential research and application opportunities that bridge the gap between farmer needs and technological development. The conference focused on smart technology encompassing engineering, sensors, nanotechnology, decision support systems and robotics.


Professor Hart gave a presentation focusing on the opportunities and benefits of applying optimisation techniques within the agricultural industry, highlighting potential applications within scheduling, decision making and parameter optimisation. A link to the slides and a video of the presentation are available.

Updated results on life-long learning

Two recent papers, published at GECCO 2014 and PPSN 2014, described some improved results from our NELLI system, whose goal is to provide fast, reasonable solutions to problems that are presented to the system in a continuous stream, possibly changing in characteristics over time.

NELLI has two novel features:

  1. A mechanism to continuously generate novel heuristics to solve problems
  2. A cooperative mechanism that sustains a set of heuristics that collaborate to solve all problems that the system has been exposed to, adapting autonomously over time as the nature of the problems changes.

The system was described previously in an article in ECJ 2014. Since then, we have made two modifications that significantly improve results. The first modification simplifies the representation, facilitating the search for new heuristics. The second modification enables a better balance between exploitation and exploration, by introducing a second method to generate new heuristics that uses mutation to exploit heuristics that are already sustained, inspired by clonal selection processes in the immune system.
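
As a rough sketch of the two generation routes, the Python fragment below is purely illustrative: it assumes a heuristic is encoded as a list of node indices (as described below), and the mutation operators shown are not necessarily those used in NELLI:

    import random

    NUM_NODES = 9  # number of available packing-heuristic nodes (see below)

    def generate_random(max_len=6):
        """Exploration: create an entirely new heuristic sequence."""
        return [random.randrange(NUM_NODES) for _ in range(random.randint(1, max_len))]

    def clone_and_mutate(parent, p=0.3):
        """Exploitation: clone a sustained heuristic and lightly mutate it,
        loosely inspired by clonal selection in the immune system."""
        child = list(parent)
        for i in range(len(child)):
            if random.random() < p:
                child[i] = random.randrange(NUM_NODES)          # point mutation
        if random.random() < p:                                 # occasional insert/delete
            if random.random() < 0.5 and len(child) > 1:
                del child[random.randrange(len(child))]
            else:
                child.insert(random.randrange(len(child) + 1), random.randrange(NUM_NODES))
        return child
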


The new representation uses a variable-length sequence of nodes – 9 possible nodes represent packing heuristics taken from the literature, such as ‘pack the biggest item’ or ‘pack the largest combination of 2 items that fit exactly’. NELLI evolves new sequences – a pointer moves across the sequence, applying each heuristic before moving to the next, cycling back to the beginning when it reaches the end.
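
A minimal sketch of how such a sequence might be applied (the apply_node function and the packing state object are hypothetical stand-ins for NELLI's actual components):

    def apply_sequence(sequence, state, apply_node):
        """Apply the nodes of a heuristic sequence in order, cycling back to the
        start when the end is reached, until no items remain (or no node can act)."""
        pointer, idle = 0, 0
        while state.items_remaining() and idle < len(sequence):
            packed = apply_node(sequence[pointer], state)   # e.g. 'pack the biggest item'
            idle = 0 if packed else idle + 1                # guard against a stuck cycle
            pointer = (pointer + 1) % len(sequence)         # cycle back to the beginning
        return state
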


Continuous Learning and Adaptation to Environment

The new system is presented with alternating sets of problems. The two problem sets are taken from the literature and are known to have completely different characteristics. Heuristics tuned to work well on one set do not perform well on the other.

The figure tracks performance over epochs, during which the two problem sets alternate. Three observations can be made:

1.  The system retains memory: at the start of a new epoch, there is no significant difference in performance compared to the best value obtained at the end of the previous epoch.

2.  The system continues to learn over an epoch: there is a significant difference between the performance at the start of an epoch and the (better) performance at the end of the epoch.

3.  The system continues to learn over its lifetime: there is a significant difference between the best value obtained during the first epoch in which a dataset appears and the best value obtained at the end of the final epoch.
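
These comparisons might be computed roughly as follows, assuming per-run best values are recorded at the start and end of each epoch; scipy's Mann-Whitney U test is used here purely as an illustrative non-parametric test, and the paper's actual statistical methodology may differ:

    from scipy.stats import mannwhitneyu

    def differs_significantly(sample_a, sample_b, alpha=0.05):
        """Return True if the two samples differ significantly (two-sided test)."""
        _, p = mannwhitneyu(sample_a, sample_b, alternative="two-sided")
        return p < alpha

    # 1. memory: end of previous epoch vs start of new epoch (expect no difference)
    # 2. within-epoch learning: start of an epoch vs end of the same epoch
    # 3. lifetime learning: first epoch a dataset appears vs the final epoch
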


 Summary

NELLI is compared both to a set of known heuristics, in which one is selected greedily each iteration based on the problem at hand, and to the previous version of the system, showing clear improvements.
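
For context, such a greedy baseline might look roughly like the following (evaluate is a hypothetical scoring function, not NELLI's actual interface):

    def greedy_baseline(known_heuristics, problems, evaluate):
        """For each problem, apply the single known heuristic that scores best on it."""
        results = []
        for problem in problems:
            results.append(max(evaluate(h, problem) for h in known_heuristics))
        return results
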

We believe this goes some way towards creating a new kind of optimisation system: one in which problems can be presented continuously, and in which the system autonomously adapts to new problem characteristics. Of course, specialist algorithms tuned to specific problem sets may well outperform NELLI*; however, we believe that in the real world, systems that do not need extensive tuning and produce quick, reasonable solutions to problems are of more interest to the practitioner.

Workshop on Real-World Problems@GECCO 2014

The 2nd workshop on Real-World Problems and Problem Understanding was held at GECCO 2014 in Vancouver. The workshop offered a mixed programme of invited talks, peer-reviewed paper presentations and a demo paper, focusing on various aspects of tackling real-world problems with meta-heuristic algorithms.

The workshop opened with a talk by Prof. Darrell Whitley from Colorado State University, drawing on his many years of experience in the field to discuss Why Are There Not More Applications of Evolutionary Algorithms? Darrell highlighted a number of factors, chief among them the gap between the types of problems typically solved by academics and the complex, constrained nature of real-world problems.

This was followed by two talks from authors of accepted papers:

  • A Comparison of Antenna Placement Algorithms, Abhinav Jauhri, Jason D. Lohn, Derek S. Linden
  • Hyper-Heuristic Genetic Algorithm for Solving Frequency Assignment Problem in TD-SCDMA, Chao Yang, Shuming Peng, Bin Jiang, Lei Wang, Renfa Li

Dr. Anna Esparcia-Alcazar, head of R&D at S2-Grupo, provided an entertaining view from an industry perspective, illustrating her talk with a variety of anecdotes and advice gleaned from working in a company providing solutions to optimisation problems for SMEs. Her talk, Six Impossible Things Before Breakfast: Computational Intelligence in the SME, illustrated some of the ups and downs of working with a company, not all of them in the control of the meta-heuristic designer!

The workshop concluded with a demo and talk by Prof. Emma Hart, describing work recently undertaken with a company to develop an optimisation algorithm aimed at employee scheduling and routing, i.e. allocating jobs to workers and additionally routing them between jobs, accounting for service-completion deadlines and maximising efficiency. An interactive GUI that enables the fitness function to be altered during the course of a run was demonstrated during the talk.

All workshop papers are available in the GECCO companion proceedings.

 

NELLI – A Lifelong Learning Optimisation System – first results

A previous post described our approach to creating a lifelong learning system for optimisation. Our latest results are now published in an article in Evolutionary Computation (MIT Press) describing the system we have dubbed NELLI – Network for Lifelong Learning. NELLI takes inspiration from the field of Artificial Immune Systems, and consists of a continuously running system that

  • continuously generates novel heuristics
  • integrates useful heuristics into a self-adapting network of heuristics that collectively can solve sets of problems
  • removes superfluous heuristics from the network if they become redundant
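
In outline, one iteration of such a system might be sketched as follows; the network object and the generate_heuristic, improves_network and is_redundant functions are placeholders for NELLI's actual mechanisms, which are detailed in the paper:

    def nelli_step(network, problem_batch, generate_heuristic,
                   improves_network, is_redundant):
        """One iteration of the lifelong-learning loop: generate, integrate, prune."""
        candidate = generate_heuristic(network)          # continuously generate novelty
        if improves_network(candidate, network, problem_batch):
            network.add(candidate)                       # integrate useful heuristics

        for heuristic in list(network):
            if is_redundant(heuristic, network, problem_batch):
                network.remove(heuristic)                # remove superfluous heuristics
        return network
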

The idea is that as the system is continuously exposed to problems over time, its performance should improve; it should learn to generalise to unseen problems and also maintain a memory so that it can quickly solve problems similar to those seen in the past. Full details are in the paper, which is available here at MIT Press – the highlights are summarised below:

NELLI learns over time from experience – the more problems it sees, the better it gets at solving them

The problem ‘environment’ remains static and consists of 3968 bin-packing problems. Every iteration, 30 problems are randomly selected and presented to the network, which dynamically changes over time in terms of its size and the heuristics contained in it: performance (always measured on the entire set of 3968 problems) increases over time, but the number of heuristics and problems sustained in the network stabilises, indicating scalability.


NELLI generalises to problems that are similar to those it has been exposed to 

Every 1000 time-steps, the problem ‘environment’ changes: a new set of 100 instances is randomly chosen from the larger set of 3968 to form the current environment. Every iteration, 30 of these are randomly presented to the network. Performance is measured against all 3968 problems, despite the fact that many of the problems in PS2 might never have been presented to the system, particularly in its early phases.


 NELLI incorporates a memory

As in the previous experiment, the environment changes every 500 iterations – however, this time, the environment is toggled between two datasets that contain very different problems: heuristics that perform well on one dataset do not perform well on the second. We perform experiments in which a) we restart the system every 500 iterations (no memory) and b) the existing network is retained when the environment changes (memory).
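
A sketch of this protocol is given below; fresh_network and train are hypothetical helpers, and the epoch length simply follows the description above:

    def memory_experiment(dataset_a, dataset_b, epochs, fresh_network, train,
                          keep_memory=True):
        """Toggle the environment between two datasets every 500 iterations.
        With keep_memory=False the network is re-initialised at each switch."""
        network = fresh_network()
        history = []
        for epoch in range(epochs):
            dataset = dataset_a if epoch % 2 == 0 else dataset_b   # alternate environments
            if not keep_memory:
                network = fresh_network()                          # restart: no memory
            network, performance = train(network, dataset, iterations=500)
            history.append(performance)
        return history
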

NELLI, with its implicit memory, always outperforms the system with no memory. Due to the retained network, the system does not have to adapt from scratch to a new environment.

The system, with more detailed results, is fully described in the Evolutionary Computation paper.