ArrowClusterCFISummaries

Revision 8 - 2007-05-11 - FrankHutter

The Arrow cluster and our 22 CPLEX licenses were funded under a CFI grant. This grant requires us to complete annual reports explaining how this infrastructure is being used. If you use the cluster or CPLEX licenses for a project, large or small, please enter a bullet item here that gives a short description of your project and the role the cluster played. If you used these resources for multiple projects and/or for a period of time that spans the time periods given here, please create multiple bullets.
 
  • Exact regularization of convex programs: I used CPLEX (thanks, Kevin!) for the numerical results of a paper (see reference below). This paper gives the most general conditions possible under which a general convex program (e.g., linear, quadratic, semidefinite, second-order cone, etc.) can be regularized while a solution of the original problem can still be obtained.
    • Michael P. Friedlander and Paul Tseng, Exact regularization of convex programs, To appear in SIAM Journal on Optimization. Also listed as Dept. of Computer Science Tech. Rept. TR-2006-26, November 2006 (revised April 2007).
  • Empirical hardness models for SAT: Using the arrow cluster, we collect runtime data for SAT solvers and feature data for problem instances. Empirical hardness models are trained to predict a solver's runtime on a given problem instance based on the instance's features; we can also predict the satisfiability of an instance from those features (a minimal illustrative sketch of such a model appears after this list). (May 2006 - ongoing; Lin Xu, Holger H. Hoos and Kevin Leyton-Brown, CS)
  • Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms: The arrow cluster was used extensively to run randomized local search algorithms on propositional satisfiability (SAT) instances in order to learn relationships between instance characteristics, search parameter settings, and the algorithm's runtime. This work extends previous work on empirical hardness models in three important ways: it handles incomplete randomized algorithms (as opposed to the complete deterministic algorithms in past work), it models algorithm parameters (past work held each algorithm's parameter setting fixed), and it sets these parameters automatically on a per-instance basis. It is published in [1] and was also presented at an AAAI workshop. This research would not have been possible in its final form without the arrow cluster.
[1] @inproceedings{HutHamHooLey06b, author = {F. Hutter and Y. Hamadi and K. Leyton-Brown and H.~H. Hoos}, title = {Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms}, booktitle = {Principles and Practice of Constraint Programming~(CP'06)}, pages = {213--228}, year = {2006} }
  • Automatic Algorithm Configuration based on Local Search: The arrow cluster was also used heavily to develop an automated parameter tuning mechanism for arbitrary algorithms and problem instances. The cluster was needed here because evaluating a single parameter configuration requires running an algorithm many times on thousands of instances, with each run taking up to hours; this research would not have been possible in this form without the cluster. The iterated local search algorithm we developed is called ParamILS and is published in [2] (a simplified sketch of this kind of local search appears after this list). It was used to tune the local search algorithm SAPS and the tree search algorithm Spear for SAT solving, the local search algorithm GLS4MPE for the most probable explanation (MPE) problem, and CPLEX for the winner determination problem (this last part also required the CPLEX licenses).
[2] @inproceedings{HutHooStu07, author = {F. Hutter and H.~H. Hoos and T. St\"{u}tzle}, title = {Automatic Algorithm Configuration based on Local Search}, booktitle = {AAAI'07: Proceedings of the Twenty-Second Conference on Artificial Intelligence}, year = {2007}, note = {To appear.} }
  • Algorithm tuning for the SAT competition 2007: We further used the cluster to tune several algorithms in preparation for the 2007 SAT competition. In particular, we tuned the submitted versions of SAPS and Spear, in each case obtaining speedups of over an order of magnitude by being able to tune on many instances at once. People involved: SAPS: Frank; Spear: Frank and Domagoj.
  • Case study in algorithm tuning for real-world instances: ParamILS was also used in a further case study, tuning Spear on hard real-world hardware and software verification instances. On the cluster, this required a week of continuous tuning; without the cluster it would have taken over a year, which would have made the research impossible. This case study resulted in a further submission to a hardware-oriented conference [3].
[3] @misc{HutBabHooHu07, author = {F. Hutter and D. Babi\'c and H.~H. Hoos and A. Hu}, title = {Boosting Verification by Automatic Tuning of Decision Procedures}, year = {2007}, note = {Submitted to Formal Methods in Computer Aided Design (FMCAD-07)} }
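To give a flavour of the empirical hardness models mentioned in the bullets above, here is a minimal, illustrative sketch (not the actual pipeline used in this work): it fits a ridge regression from instance features to log10 runtime and uses it to predict the runtime of an unseen instance. All feature values and runtimes below are synthetic placeholders; the real models are trained on measured solver runtimes and computed SAT instance features.

<verbatim>
# Minimal sketch of an empirical hardness model: ridge regression from
# synthetic instance features to log10 runtime. All data here is made up
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features for 200 instances (e.g. #variables, #clauses,
# clause/variable ratio, a local-search probe statistic).
n_instances, n_features = 200, 4
X = rng.normal(size=(n_instances, n_features))

# Synthetic "true" relationship between features and log10 runtime.
w_true = np.array([0.8, -0.3, 1.5, 0.2])
y = X @ w_true + 0.1 * rng.normal(size=n_instances)

# Ridge regression: w = (X^T X + lambda I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Predict the runtime (in seconds) of a new, unseen instance from its features.
x_new = rng.normal(size=n_features)
print(f"predicted runtime: {10 ** (x_new @ w):.2f} s")
</verbatim>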
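The ParamILS bullet can likewise be illustrated with a simplified sketch. This is not the published ParamILS algorithm (which adds, e.g., perturbation steps and restarts); it only shows the basic idea of a one-exchange local search over a discrete parameter space, scoring each configuration by mean runtime over a training set of instances. The parameter names, domains, and the run_target() stub are hypothetical placeholders.

<verbatim>
# Simplified sketch of one-exchange local search over parameter
# configurations, in the spirit of (but not identical to) ParamILS.
import random

PARAM_DOMAINS = {            # hypothetical parameter space
    "alpha":   [1.01, 1.066, 1.2, 1.4],
    "rho":     [0.0, 0.17, 0.5, 0.8],
    "restart": [10, 100, 1000],
}
INSTANCES = [f"instance_{i}.cnf" for i in range(50)]  # placeholder instance set

def run_target(config, instance):
    """Placeholder for one run of the target algorithm; returns runtime in s.
    Here: random runtime whose mean is smallest around rho = 0.17."""
    random.seed(hash((tuple(sorted(config.items())), instance)))
    return random.expovariate(1.0 / (1.0 + abs(config["rho"] - 0.17)))

def cost(config):
    """Mean runtime over the training instances (the tuning objective)."""
    return sum(run_target(config, inst) for inst in INSTANCES) / len(INSTANCES)

def one_exchange_neighbours(config):
    """All configurations differing from `config` in exactly one parameter."""
    for name, domain in PARAM_DOMAINS.items():
        for value in domain:
            if value != config[name]:
                yield {**config, name: value}

def local_search(config):
    """First-improvement search until no neighbour has lower cost."""
    best_cost = cost(config)
    improved = True
    while improved:
        improved = False
        for neighbour in one_exchange_neighbours(config):
            c = cost(neighbour)
            if c < best_cost:
                config, best_cost, improved = neighbour, c, True
                break
    return config, best_cost

start = {name: random.choice(dom) for name, dom in PARAM_DOMAINS.items()}
best, best_cost = local_search(start)
print("tuned configuration:", best, "mean runtime:", round(best_cost, 3))
</verbatim>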
  -- KevinLeytonBrown - 05 May 2007
 