meta-algorithmic design patterns:
- tuning/config:
- input: parametric algorithm A[p]; set of benchmark instances I; performance measure m
- output: parameter-less algorithm A[p:=...] with optimised m on I (see first sketch after this list)
- static restart:
- input: (parameter-less) algorithm A; time t
- output: algorithm that performs multiple, sequential runs of length <= t
- optimised static restart:
- input: (parameter-less) algorithm A; set of benchmark instances I; performance measure m
- output: algorithm that performs multiple, sequential runs of length <= t*, where t* has been determined such that m is optimised on I (see sketch after this list)
- (per-distribution) selection (= racing):
- input: (parameter-less) algorithms A1, ..., Ak; set of benchmark instances I; performance measure m
- output: the Ai that achieves the best m on I (see sketch after this list)
- per-instance selection
- input: (parameter-less) algorithms A1, ..., Ak; feature extractor f; set of benchmark instances I; performance measure m; (f can be easily formalised - skipped for brevity here)
- output: algorithm that, for a given input instance i, computes features using f and selects an Ai based on these, then runs Ai on i; the whole procedure is constructed such that m is optimised on I (see sketch after this list)
- ...
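The Python sketches below illustrate several of the above patterns. All interfaces are illustrative assumptions, not part of any actual tool: an algorithm is a callable taking an instance, and m(A, i) returns a cost to be minimised. First, tuning/config as exhaustive search over a finite set of candidate parameter settings (real configurators, such as ParamILS, search far larger parameter spaces heuristically):

from typing import Any, Callable, Iterable, List

Algorithm = Callable[[Any], Any]             # instance -> result
Measure = Callable[[Algorithm, Any], float]  # m(A, i), lower is better

def tune(parametric_alg: Callable[[Any], Algorithm],
         candidate_params: Iterable[Any],
         instances: List[Any],
         m: Measure) -> Algorithm:
    # Return A[p := p*], where p* minimises mean m over the benchmark set I.
    def mean_m(alg: Algorithm) -> float:
        return sum(m(alg, i) for i in instances) / len(instances)
    return min((parametric_alg(p) for p in candidate_params), key=mean_m)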
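Static restart and its optimised variant, under the assumed interface alg(instance, cutoff) returning a solution or None on timeout; note that this naive sketch loops forever if the success probability at cutoff t is zero:

def static_restart(alg, t):
    # Perform multiple, sequential runs of length <= t until one succeeds.
    def restarted(instance):
        while True:
            result = alg(instance, cutoff=t)
            if result is not None:
                return result
    return restarted

def optimised_static_restart(alg, instances, m, candidate_cutoffs):
    # Choose t* from a finite candidate set such that mean m on I is optimal.
    def mean_m(t):
        a = static_restart(alg, t)
        return sum(m(a, i) for i in instances) / len(instances)
    return static_restart(alg, min(candidate_cutoffs, key=mean_m))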
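Per-distribution selection, reduced to its essence: return the candidate with the best mean m on I. An actual racing procedure (e.g. F-Race) evaluates instances incrementally and eliminates candidates as soon as a statistical test shows they are dominated; that machinery is omitted here:

def select_per_distribution(algorithms, instances, m):
    # Return the Ai with the best (lowest) mean m on the benchmark set I.
    def mean_m(alg):
        return sum(m(alg, i) for i in instances) / len(instances)
    return min(algorithms, key=mean_m)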
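Per-instance selection, with 1-nearest-neighbour over the benchmark set standing in for a learned selection model (systems such as SATzilla instead fit per-algorithm performance models); the feature extractor f is assumed to return a numeric vector:

def build_per_instance_selector(algorithms, f, instances, m):
    # Record, for each benchmark instance, its features and the best algorithm.
    training = [(f(i), min(algorithms, key=lambda a: m(a, i))) for i in instances]

    def sqdist(x, y):
        return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

    def selector(instance):
        feats = f(instance)
        _, chosen = min(training, key=lambda fa: sqdist(fa[0], feats))
        return chosen(instance)  # run the selected algorithm on the instance
    return selector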
meta-algorithmic analysis patterns:
- simple single alg analysis:
- input: one algorithm A; set of benchmark instances I; performance measure m (m needs to be 'supported' by A; fully formally, m should be part of A's output, in which case it is not needed here)
- output: stats on m of A on I (may include various plots, including full PDFs/CDFs, ...; see first sketch after this list)
- simple single alg scaling analysis:
- input: one algorithm A; sets of benchmark instances (I1,s1), ..., (Ip,sp); performance measure m (where I1, ..., Ip are sets of instances of size s1, ..., sp)
- output: scaling data (stats, plots, ...) for m of A on I1, ..., Ip (may include various plots, including full PDFs/CDFs, ...); note: a variant of this could also fit and return/evaluate a scaling model (see sketch after this list)
- two alg comparative analysis:
- input: two algorithms A,B; set of benchmark instances I; performance measure m (of course, A and B are algs for the same problem)
- output: paired stats of A,B on I (may include data from correlation analysis, stat tests, ...; see sketch after this list); note: could also return the better algorithm, but would then be a meta-algorithmic design procedure
- k>2 alg comparative analysis: (generalisation of the 2-alg case)
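A sketch of the simplest analysis pattern, again under the assumed interfaces from above: descriptive statistics plus the points of the full empirical CDF of m on I (plotting is omitted):

import statistics

def analyse(alg, instances, m):
    values = sorted(m(alg, i) for i in instances)
    n = len(values)
    return {
        "n": n,
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if n > 1 else 0.0,
        "q0.1": values[int(0.1 * (n - 1))],  # simple quantile approximation
        "q0.9": values[int(0.9 * (n - 1))],
        "ecdf": [(v, (k + 1) / n) for k, v in enumerate(values)],  # P(X <= v)
    }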
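The scaling variant that fits a model: mean m per instance size, plus a least-squares fit of the polynomial model m(s) ~ a * s^b in log-log space (requires positive measurements and at least two distinct sizes; other model classes, e.g. exponential, would be fit analogously):

import math
import statistics

def scaling_analysis(alg, instance_sets, m):
    # instance_sets is a list of (I_j, s_j) pairs: instances of size s_j.
    sizes, means = [], []
    for instances, s in instance_sets:
        sizes.append(s)
        means.append(statistics.mean(m(alg, i) for i in instances))
    # Ordinary least squares on (log s, log mean): log m = log a + b * log s.
    xs = [math.log(s) for s in sizes]
    ys = [math.log(mu) for mu in means]
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return {"per_size_means": list(zip(sizes, means)), "model": (a, b)}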
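Two-algorithm comparative analysis, assuming SciPy is available for the test statistics: paired per-instance measurements, Spearman correlation across instances, and a Wilcoxon signed-rank test for a systematic difference (the test raises an error when all paired differences are zero):

from scipy import stats

def compare(A, B, instances, m):
    a_vals = [m(A, i) for i in instances]
    b_vals = [m(B, i) for i in instances]
    rho, rho_p = stats.spearmanr(a_vals, b_vals)
    w, w_p = stats.wilcoxon(a_vals, b_vals)  # paired, non-parametric
    return {"paired": list(zip(a_vals, b_vals)),
            "spearman": (rho, rho_p),
            "wilcoxon": (w, w_p)}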
meta-algorithmic design procedure D:
meta-algorithmic procedure whose output includes at least one algorithm.
note:
- the purpose is typically to create one or more algorithms based on one or more
given algorithms.
examples:
- a procedure that takes a (target) algorithm and adds a restart mechanism to it
- a procedure that takes k algorithms and returns a static parallel algorithm portfolio
but also:
- a procedure that takes k algorithms and runs them sequentially, one after the other (n-ary pipe operator; see sketches after these examples)
- a procedure that takes k algorithms and returns one of these, selected uniformly at random (off-line k-ary random choice)
- a procedure that takes k algorithms and returns a procedure that, whenever called, selects one of the k uniformly at random for that run (on-line k-ary random choice)
- a procedure that takes one algorithm and transforms it to run more efficiently, possibly in a different language and on a specific platform (optimising compiler)
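Three of the above examples, written as higher-order functions over algorithms-as-callables; the pipe operator is read here as feeding each algorithm's output to the next (an assumption about the intended semantics):

import random

def sequential_pipe(*algorithms):
    # n-ary pipe operator: apply the algorithms in sequence, chaining outputs.
    def piped(x):
        for alg in algorithms:
            x = alg(x)
        return x
    return piped

def offline_random_choice(algorithms, rng=random):
    # Off-line k-ary random choice: commit to one algorithm once and for all.
    return rng.choice(algorithms)

def online_random_choice(algorithms, rng=random):
    # On-line k-ary random choice: fresh uniform choice on every call.
    def chooser(instance):
        return rng.choice(algorithms)(instance)
    return chooser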
meta-algorithmic analysis procedure N:
meta-algorithmic procedure whose output includes no algorithm.
note:
- the purpose is typically to determine properties of or statements about
given algorithms.
-- HolgerHoos - 31 Mar 2010