Difference: Oct72009 (1 vs. 4)

Revision 4 - 2009-10-14 - DavidMeger

Line: 1 to 1
 
META TOPICPARENT name="TheSemanticRobotVisionChallenge"

Minutes:

Line: 85 to 85
 
    • Add additional sources of info
    • Investigate filtering techniques
    • Integrate output data format with classification
Changed:
<
<
  • Speed-up of Felzenszwalb training
>
>
  • Speed-up of Felzenszwalb training [MB]
 
    • Initial investigation to verify this is a doable task (profiling current code, ensuring good performance on web data, investigation of potential speedups such as GPU feature extraction and SVM learning)

Future tasks pending completion of others:

Revision 3 - 2009-10-07 - DavidMeger

Line: 1 to 1
 
META TOPICPARENT name="TheSemanticRobotVisionChallenge"
Changed:
<
<

Agenda:

>
>

Minutes:

  • 2-page abstract due for this year. We'll develop it in the directory:
    • DOC/abstract inside SVN. Dave will start it and email around [DM]
  • Qualification video - record it at some convenient time
 
Deleted:
<
<
  • Start thinking about qualification material (this includes software league if we want to participate in that)
 
  • Start planning itinerary
Added:
>
>
    • 12 possibly interested {TH, AG, MM, CG, PF, MB, SH, DM, PV, SM, WW, TS}
    • SH to email the 4 supervisors, who will figure it out for us {JL, DL, BW, AM}
    • For budgeting reasons, the costs are roughly:
      • >= $380 US round-trip for flights
      • ~ $55/night per room at the Monte Carlo (cheaper elsewhere)
      • ~ $2000 for robot shipping
      • Potentially some conference registrations ($400.00 for students)
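The rough figures above can be turned into a quick per-person estimate. A minimal sketch, assuming (these are not in the minutes) 12 travellers, 5 nights, two people per room, and the shipping cost split evenly across the team:

```python
# Rough trip-budget sketch using the figures from the minutes.
# Assumed parameters (not from the minutes): 12 travellers, 5 nights,
# two people per room, shipping split evenly across the team.
FLIGHT = 380.00          # >= $380 US round trip
ROOM_PER_NIGHT = 55.00   # Monte Carlo rate, per room
SHIPPING = 2000.00       # robot shipping, whole team
REGISTRATION = 400.00    # student conference registration (optional)

def per_person_cost(people=12, nights=5, per_room=2, register=True):
    """Estimate the total cost for one team member."""
    lodging = ROOM_PER_NIGHT * nights / per_room
    shipping_share = SHIPPING / people
    return FLIGHT + lodging + shipping_share + (REGISTRATION if register else 0.0)

print(round(per_person_cost(), 2))  # ~1084.17 under these assumptions
```

Changing `people` or skipping registration moves the estimate by a few hundred dollars either way, which is why the driving-vs-shipping question below matters.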

  • Catherine update on shipping
    • Cost for shipping is about the same as cost to rent a car and drive
    • MM might be ok with driving his own car. Wonder how much UBC/SRVC would compensate him for putting 6K on his new Jetta
    • Debate on driving vs shipping on several axes:
      • Shipping:
        • Pros: Doesn't take our time; if we invest in a crate (~$800), we always have it later
        • Cons: Potential for damage, extra downtime for robot with packaging
      • Driving:
        • Pros: We keep the robot in our possession, saves cost on flights
        • Cons: Takes a lot of our time
    • Major issue seems to be how long the shipping downtime would be. CG to check out.
    • Also would like to know if it's possible to rent a truck with unlimited km. SH to check out.
    • Don't forget that loading the robot into anything without a ramp is difficult.
    • Need to decide soon.
  • Theakston and Lapinkulta are self-administered and can be used for development. Log in with lciuser... and sudo adduser to make your own account.
  • Finally, the list of completed and in progress tasks.
 

Completed components:

Changed:
<
<
  • Porting of basic drivers for: bumblebee and powerbot
>
>
  • Porting of basic drivers for: bumblebee, Canon and powerbot
 
  • Tower design
Deleted:
<
<

Current in-progress task list:

  • Basic robot functions based on ROS (with the aim of performing a preliminary test run of navigation and mapping) [MM and DM]
    • WG nav stack
 
    • gmapping
Changed:
<
<
    • Coordinate transform codes
    • Tilting laser driver
>
>
  • Tilting laser drivers
  • Robot coordinate transform code
 
    • Network configuration and development environment
      • Robot router setup
      • Set up self-administered PCs
      • ROS instructions
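The "coordinate transform code" item above boils down to chaining frame transforms (laser in base, base in map, and so on). A minimal sketch of the underlying math, assuming simple 2D poses rather than the actual ROS tf API:

```python
import math

# Minimal 2D frame-transform sketch (illustration only; the real code
# would use ROS tf). A pose (x, y, theta) places a child frame in its
# parent frame.

def transform_point(pose, point):
    """Map a point from the child frame into the parent frame."""
    x, y, theta = pose
    px, py = point
    c, s = math.cos(theta), math.sin(theta)
    return (x + c * px - s * py, y + s * px + c * py)

def compose(parent_pose, child_pose):
    """Express child_pose in the parent's own parent frame."""
    x, y = transform_point(parent_pose, child_pose[:2])
    return (x, y, parent_pose[2] + child_pose[2])

# Example: laser mounted 0.2 m ahead of the base; base at (1, 0)
# facing +y. The laser then sits at (1.0, 0.2) in the map frame.
base_in_map = (1.0, 0.0, math.pi / 2)
laser_in_base = (0.2, 0.0, 0.0)
laser_in_map = compose(base_in_map, laser_in_base)
```

The tilting laser adds one more time-varying link of exactly this form to the chain.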
Added:
>
>

Current in-progress task list:

  • Capture data from robot for testing
  • Basic robot functions based on ROS (with the aim of performing a preliminary test run of navigation and mapping) [MM and DM]
    • WG nav stack
 
    • Tower upgrade:
      • Order material for building a new laser/camera mount and assemble same.
Changed:
<
<
>
>
    • Saliency maps and visual attention
      • Basic saliency map computation [DM]
      • Stereo + saliency combined to identify interesting regions [PV]
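The "basic saliency map computation" above can be sketched in the classic center-surround style: a pixel is salient when its local mean differs from a wider surrounding mean. A toy pure-Python version (box filters and the test image are illustrative choices, not the project's actual code):

```python
# Toy center-surround saliency sketch (illustration only): saliency is
# the absolute difference between a pixel's local (center) mean and a
# wider (surround) mean, so uniform regions score near zero.

def box_mean(img, i, j, r):
    """Mean over a (2r+1)x(2r+1) box clipped to the image bounds."""
    h, w = len(img), len(img[0])
    vals = [img[u][v]
            for u in range(max(0, i - r), min(h, i + r + 1))
            for v in range(max(0, j - r), min(w, j + r + 1))]
    return sum(vals) / len(vals)

def saliency_map(img, center=1, surround=3):
    h, w = len(img), len(img[0])
    return [[abs(box_mean(img, i, j, center) - box_mean(img, i, j, surround))
             for j in range(w)] for i in range(h)]

# A single bright pixel on a dark background scores highest near itself.
img = [[0.0] * 9 for _ in range(9)]
img[4][4] = 1.0
sal = saliency_map(img)
```

Combining this with stereo depth (the next item) would simply weight the map by how reachable or near a region is.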
  • High-level control functionality such as planning
    • Random walk behavior
    • 3 main high-level planners:
      • Exploring frontiers
      • Find tables [PV]
      • Space coverage
      • Look back at objects
    • Top level state machine to choose between above planners
    • Choice of "where-to-look" aka attention system
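The random walk behavior above is the simplest of the planners. A toy grid-world sketch (grid size, obstacle layout, and step budget are made up for illustration; the real behavior would issue velocity commands through ROS):

```python
import random

# Toy random-walk planner on a bounded grid (illustration only).
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def random_walk(start, steps, size=10, obstacles=frozenset(), seed=0):
    """Return the path of a bounded random walk that avoids obstacles."""
    rng = random.Random(seed)
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice(MOVES)
        nx, ny = x + dx, y + dy
        # Stay on the grid and out of occupied cells; otherwise stand still.
        if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in obstacles:
            x, y = nx, ny
        path.append((x, y))
    return path

path = random_walk((5, 5), steps=20, obstacles={(5, 6)})
```

The frontier, table-finding, and coverage planners would replace the `rng.choice` line with goal selection; the top-level state machine then arbitrates among them.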
 
  • Recognition framework (James module directly or something built upon that) [AG and CG]
Added:
>
>
    • Combining results from different types of detectors (different algorithms)
    • Combining results from various viewpoints
    • We'll meet on the previous two topics tomorrow
    • Collect data for 5 "given" object classes once they're published
 
    • Test data interface
    • Felzenszwalb detector
Changed:
<
<
    • Helmer
>
>
      • MB profiled Kenji's Python implementation - most of the time is spent in convolution - promising
      • Will investigate CUDA-accelerating pieces
    • Helmer detector
      • Using point cloud
 
    • McCann
    • Training data interface and additional parameters
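The profiling result noted above (most time in convolution) is easy to reproduce with the standard-library profiler. The naive convolution here is a stand-in to show the workflow, not Kenji's actual code:

```python
import cProfile
import io
import pstats
import random

# Stand-in for the detector's hotspot: a naive 2D convolution.
def convolve2d(image, kernel):
    ih, iw, kh, kw = len(image), len(image[0]), len(kernel), len(kernel[0])
    out = [[0.0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for i in range(ih - kh + 1):
        for j in range(iw - kw + 1):
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

def detect(image):
    kernel = [[1, 0, -1]] * 3  # toy gradient filter
    return convolve2d(image, kernel)

rng = random.Random(0)
image = [[rng.random() for _ in range(60)] for _ in range(60)]

profiler = cProfile.Profile()
profiler.runcall(detect, image)
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # convolve2d dominates the cumulative-time listing
```

If the real hotspot is likewise one dense loop, it is a natural candidate for the GPU work listed under "Cuda on fraser".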
Changed:
<
<
  • Cuda on fraser [WW and TH]
    • Cuda Sift
>
>
  • CUDA on fraser [MB, WW and TH]
    • Need to get the code compiling
    • GPUSift
 
  • Web grabbing module [PF and CG]
    • Add additional sources of info
Line: 42 to 90
 

Future tasks pending completion of others:

Deleted:
<
<
  • High-level control functionality such as planning [DM]
    • Random walk behavior
    • 3 main high-level planners:
    • Top level state machine to choose between above planners
    • Choice of "where-to-look" aka attention system
 
  • Use of 3D models in recognition
  • Use of 3D information and context in attention system
  • Real time result reporting
Line: 54 to 97
 
  • Investigate new cameras which might be faster than the Canon
  • Prioritizing computation done by classifiers towards images which look really promising to the attention system, and based on the classes which have already been recognized.
Deleted:
<
<
-- DavidMeger - 02 Oct 2009

Revision 2 - 2009-10-02 - DavidMeger

Line: 1 to 1
 
META TOPICPARENT name="TheSemanticRobotVisionChallenge"

Agenda:

  • Start thinking about qualification material (this includes software league if we want to participate in that)
Added:
>
>
  • Start planning itinerary
 

Completed components:

Revision 1 - 2009-10-02 - DavidMeger

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="TheSemanticRobotVisionChallenge"

Agenda:

  • Start thinking about qualification material (this includes software league if we want to participate in that)

Completed components:

  • Porting of basic drivers for: bumblebee and powerbot
  • Tower design

Current in-progress task list:

  • Basic robot functions based on ROS (with the aim of performing a preliminary test run of navigation and mapping) [MM and DM]
    • WG nav stack
    • gmapping
    • Coordinate transform codes
    • Tilting laser driver
    • Network configuration and development environment
      • Robot router setup
      • Set up self-administered PCs
      • ROS instructions
    • Tower upgrade:
      • Order material for building a new laser/camera mount and assemble same.

  • Recognition framework (James module directly or something built upon that) [AG and CG]
    • Test data interface
    • Felzenszwalb detector
    • Helmer
    • McCann
    • Training data interface and additional parameters
  • CUDA on fraser [WW and TH]
  • Web grabbing module [PF and CG]
    • Add additional sources of info
    • Investigate filtering techniques
    • Integrate output data format with classification
  • Speed-up of Felzenszwalb training
    • Initial investigation to verify this is a doable task (profiling current code, ensuring good performance on web data, investigation of potential speedups such as GPU feature extraction and SVM learning)

Future tasks pending completion of others:

  • High-level control functionality such as planning [DM]
    • Random walk behavior
    • 3 main high-level planners:
    • Top level state machine to choose between above planners
    • Choice of "where-to-look" aka attention system
  • Use of 3D models in recognition
  • Use of 3D information and context in attention system
  • Real time result reporting
  • Feeding back classification results to robot planner
  • Investigate new cameras which might be faster than the Canon
  • Prioritizing computation done by classifiers towards images which look really promising to the attention system, and based on the classes which have already been recognized.

-- DavidMeger - 02 Oct 2009
