Agenda

  • 2-page abstract and video are due Friday. Dave has started; content still needs to be added
  • List of pre-known objects posted
  • Start planning itinerary update from Scott
  • Robot shipping plan
  • Self-administered PC and networking update
  • Demo of working image processing nodes
  • Finally, the list of completed and in-progress tasks

Completed components:

  • Porting of basic drivers for the Bumblebee, Canon, and PowerBot
  • Tower design
  • gmapping
  • Tilting laser drivers
  • Robot coordinate transform code
  • Network configuration and development environment
    • Robot router setup
    • Set up self-administered PCs
    • ROS instructions
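
The robot coordinate transform code above amounts to composing 2D rigid-body transforms (rotation plus translation), e.g. mapping a laser reading from the robot frame into the map frame. A minimal sketch in Python/NumPy — the frames and numbers here are illustrative assumptions, not the actual driver code:

```python
import numpy as np

def make_transform(x, y, theta):
    """Homogeneous 2D transform: rotate by theta, then translate by (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def apply_transform(T, point):
    """Map a 2D point through transform T using homogeneous coordinates."""
    p = np.array([point[0], point[1], 1.0])
    return (T @ p)[:2]

# Hypothetical example: a laser return 1 m ahead of a robot sitting at
# (2, 0) in the map and facing +y (theta = pi/2) lands at roughly (2, 1).
T_map_robot = make_transform(2.0, 0.0, np.pi / 2)
print(apply_transform(T_map_robot, (1.0, 0.0)))
```

Chaining frames (map → robot → sensor) is then just matrix multiplication of the individual transforms, which is essentially what the ROS tf machinery manages.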

Current in-progress task list:

  • Capture data from robot for testing
  • Basic robot functions based on ROS (with the aim of performing a preliminary test run of navigation and mapping) [MM and DM]
    • WG nav stack
    • Tower upgrade:
      • Order materials for a new laser/camera mount and assemble it
    • Saliency maps and visual attention
      • Basic saliency map computation [DM]
      • Stereo + saliency combined to identify interesting regions [PV]
  • High-level control functionality such as planning
    • Random walk behavior
    • Main high-level planners:
      • Exploring frontiers
      • Find tables [PV]
      • Space coverage
      • Look back at objects
    • Top level state machine to choose between above planners
    • Choice of "where to look", a.k.a. the attention system
  • Recognition framework (James module directly or something built upon that) [AG and CG]
    • Combining results from different types of detectors (different algorithms)
    • Combining results from various viewpoints
      • We'll meet on the previous two topics tomorrow
    • Collect data for 5 "given" object classes once they're published
    • Test data interface
    • Felzenszwalb detector
      • MB profiled Kenji's Python implementation; most of the time is spent in convolution, which is promising
      • Will investigate moving pieces to CUDA
    • Helmer detector
      • Using point cloud data
    • McCann detector
    • Training data interface and additional parameters
  • Cuda on fraser [MB, WW and TH]
    • Need to get the code compiling
    • GPUSift
    • FastHOG
  • Web grabbing module [PF and CG]
    • Add additional sources of info
    • Investigate filtering techniques
    • Integrate output data format with classification
  • Speed-up of Felzenszwalb training [MB]
    • Initial investigation to verify this is feasible (profiling the current code, ensuring good performance on web data, and investigating potential speedups such as GPU feature extraction and SVM learning)
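
The basic saliency map computation listed above is commonly done with center-surround differences in the Itti-Koch style: contrast between a fine-scale and a coarse-scale blur of the image. A minimal intensity-only sketch in pure NumPy — the sigmas, image, and function names are hypothetical choices, not the actual node:

```python
import numpy as np

def _gauss_kernel(sigma):
    """1D normalized Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def _blur(image, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = _gauss_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def intensity_saliency(image, center_sigma=1.0, surround_sigma=8.0):
    """Center-surround saliency: |fine blur - coarse blur|, scaled to [0, 1].

    Bright where local intensity contrast is high; a real system would
    pool several feature channels (color, orientation) the same way.
    """
    sal = np.abs(_blur(image, center_sigma) - _blur(image, surround_sigma))
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal

# A bright blob on a dark background: the saliency peak falls on the blob.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = intensity_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))
```

The "interesting regions" step [PV] would then threshold or take local maxima of this map (optionally weighted by stereo depth) to pick gaze targets.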

Future tasks pending completion of others:

  • Use of 3D models in recognition
  • Use of 3D information and context in attention system
  • Real time result reporting
  • Feeding back classification results to robot planner
  • Investigate new cameras that might be faster than the Canon
  • Prioritizing classifier computation toward images that look most promising to the attention system, and based on the classes already recognized

-- DavidMeger - 14 Oct 2009
