Agenda |
- 2-page abstract and video due Friday. Dave has started; content still needs to be added.
- List of pre-known objects posted
- Start planning: itinerary update from Scott
- Robot shipping plan
- Self-administered PC and networking update
- Demo of working image processing nodes
- Finally, the list of completed and in-progress tasks.

Minutes
- Everyone should contribute to the abstract.
- It can be checked out like this: svn co svn+ssh://username@cascade.cs.ubc.ca/lci/project/raid1/srvc/SVN/DOC/abstract
- Bug Scott about planning who's going to Vegas
- Start looking into renting an SUV/minivan to drive the robot to Vegas. In parallel, Catherine will investigate shipping options in case those look better.
- Robot network setup is complete. From any locally administered machine, we should be able to drive the robot.
- Put this in your .bashrc.ros file:
      # Point ROS at the robot's roscore (fraser)
      function fraser() {
          export ROS_MASTER_URI=http://fraser:11311
      }
- Then type "fraser" in your shell to have ROS use the robot's roscore
- Test this by typing "rostopic list" and make sure you see the robot's devices
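- A quick Python sanity check of the same setup, as a minimal sketch (assumes the rosgraph package from a standard ROS install; run it after calling "fraser"):
      # Confirm ROS_MASTER_URI points at fraser and that the master answers.
      import os
      import rosgraph

      uri = os.environ.get('ROS_MASTER_URI', '')
      print('ROS_MASTER_URI =', uri)
      if 'fraser' not in uri:
          print('warning: not pointing at the robot; run the fraser function first')
      print('master online:', rosgraph.is_master_online())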
- This is the list of pre-known objects, grouped by our assessments:
- We can probably do these already:
- laptop
- toy car
- toy Stegosaurus
- table lamp
- These objects are really going to need some contours or structure:
- mug
- bottle
- bowl
- frying pan
- Not sure if there's any point working on:
- We need to obtain a bunch of these objects for testing. We'll go buy more if needed. [ALL]

Completed components:
- Porting of basic drivers for: Bumblebee, Canon, and Powerbot
- Robot router setup
- Setup of self-administered PCs
- ROS instructions
- WG nav stack
- Basic saliency map computation

Current in-progress task list:
- Capture data from robot for testing
- Basic robot functions based on ROS (with the aim of performing a preliminary test run of navigation and mapping) [MM and DM]
- Tower upgrade:
- Order material for building a new laser/camera mount and assemble it.
- Saliency maps and visual attention
- Basic saliency map computation [DM]
- Stereo + saliency combined to identify interesting regions [PV]
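- A rough numpy sketch of the kind of combination intended here (the inputs and the 50/50 weighting are assumptions; the real node will take the actual saliency and stereo outputs):
      import numpy as np

      def interesting_regions(saliency, disparity, thresh=0.6):
          """Given a saliency map and a stereo disparity map of the same size,
          favour pixels that are both salient and close (large disparity),
          and return a boolean mask of 'interesting' pixels."""
          s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)
          d = (disparity - disparity.min()) / (np.ptp(disparity) + 1e-9)
          score = 0.5 * s + 0.5 * d
          return score > thresh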
- Tilt laser point cloud segmentation [MM]
- Choice of where to look
- High-level control functionality such as planning
- Random walk behavior
- 3 main high-level planners:
- Space coverage
- Look back at objects
- Top-level state machine to choose between the above planners
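- A minimal sketch of what that top-level state machine could look like (the done()/step() planner interface is a placeholder, not an agreed API):
      class TopLevelPlanner:
          """Cycle between the planners above, handing control to the next
          one whenever the current one reports that it is finished."""
          def __init__(self, planners):
              self.planners = planners     # e.g. [space_coverage, look_back, random_walk]
              self.current = 0

          def step(self):
              if self.planners[self.current].done():
                  self.current = (self.current + 1) % len(self.planners)
              return self.planners[self.current].step()   # next motion command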
- Choice of "where-to-look", a.k.a. the attention system
- Recognition framework (the James module directly, or something built upon it) [AG and CG]
- Skeleton framework for the recognition system (inputs: robot images; outputs: class guesses)
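- One possible shape for that skeleton (class names and the (label, confidence) output format are assumptions, not the agreed interface):
      class Recognizer:
          """Base class: takes one robot image, returns (class name, confidence) guesses."""
          def classify(self, image):
              raise NotImplementedError

      class RecognitionPipeline:
          """Runs every registered recognizer on an image and merges the guesses."""
          def __init__(self, recognizers):
              self.recognizers = recognizers

          def process(self, image):
              guesses = []
              for r in self.recognizers:
                  guesses.extend(r.classify(image))
              return sorted(guesses, key=lambda g: g[1], reverse=True)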
- Combining results from different types of detectors (different algorithms)
- Combining results from various viewpoints
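- A simple starting point that covers both kinds of combination, fusing (class, score) lists by mean score (purely a placeholder until we pick a real fusion rule):
      from collections import defaultdict

      def combine_guesses(guess_lists):
          """guess_lists: one list of (class_name, score) pairs per detector or viewpoint.
          Returns classes ranked by their mean score across all the lists."""
          totals, counts = defaultdict(float), defaultdict(int)
          for guesses in guess_lists:
              for name, score in guesses:
                  totals[name] += score
                  counts[name] += 1
          ranked = [(name, totals[name] / counts[name]) for name in totals]
          return sorted(ranked, key=lambda g: g[1], reverse=True)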
- We'll meet on the previous two topics tomorrow
- Collect data for 5 "given" object classes once they're published
- Evaluate on the known objects
- Test data interface
- Felzenszwalb detector
- MB profiled Kenji's Python implementation; most of the time is spent in convolution, which is promising
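- For reference, that profiling can be reproduced with the standard library (the detector entry point named below is a placeholder for whatever Kenji's code actually exposes):
      import cProfile
      import pstats

      cProfile.run('run_felzenszwalb_detector(test_image)', 'felz.prof')
      pstats.Stats('felz.prof').sort_stats('cumulative').print_stats(15)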
- McCann
- Training data interface and additional parameters
- Load balancing between various recognition algorithms
- CUDA on fraser [MB, WW and TH]
- Need to get the code compiling
- GPUSift
- Investigate filtering techniques
- Integrate output data format with classification
- Speed-up of Felzenszwalb training [MB]
- Initial investigation to verify this is a doable task (profiling current code, ensuring good performance on web data, and investigating potential speedups such as GPU feature extraction and SVM learning)
- Data transfer overhead kills several of the ideas we've had about converting to CUDA
- Kenji suggested several non-GPU speedups which Matt will work on next

Future tasks pending completion of others: