We discussed moving the robot. CG reported that shipping the bot is possible for about $600 each way. We still need to investigate how to get the bot packed for the trip, which may be possible through UBC Shipping/Receiving. CG to investigate and report next week. [CATHERINE].
We did a round-table and decided on the following TO-DOs, which we aim to have done by next Wednesday's meeting:
Complete conversion of basic functions to ROS [Marius and Dave]
Tilting laser driver
Order material for building a new laser/camera mount and assemble it.
WG nav stack
Mapping solutions (either ROS::gmapping or port pmap from our old code)
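Whichever mapping solution we pick, both gmapping and pmap maintain an occupancy grid updated from laser scans. A minimal log-odds grid update, as a pure-Python sketch (illustrative only; the real packages add scan matching and particle filtering on top of this, and the class and update values here are placeholders):

```python
import math

# Hypothetical minimal occupancy grid: each cell stores log-odds of occupancy.
# gmapping/pmap do full scan matching and particle filtering on top of this.
L_OCC = math.log(0.7 / 0.3)   # update for the cell a laser beam endpoint hits
L_FREE = math.log(0.3 / 0.7)  # update for cells the beam passes through

class OccupancyGrid:
    def __init__(self, width, height):
        self.logodds = [[0.0] * width for _ in range(height)]

    def update(self, free_cells, hit_cell):
        # Apply one laser beam: free space along the ray, occupied at the end.
        for (x, y) in free_cells:
            self.logodds[y][x] += L_FREE
        hx, hy = hit_cell
        self.logodds[hy][hx] += L_OCC

    def prob(self, x, y):
        # Convert log-odds back to an occupancy probability.
        return 1.0 - 1.0 / (1.0 + math.exp(self.logodds[y][x]))

grid = OccupancyGrid(10, 10)
grid.update(free_cells=[(0, 0), (1, 0), (2, 0)], hit_cell=(3, 0))
```

After one beam, the hit cell reads as likely occupied and the traversed cells as likely free, while untouched cells stay at 0.5.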
Convert "control"-type functionality, such as high-level planners and the executive layer, into a format compatible with the newly ROS'd low-level functions. [Dave with help from whoever's interested]
Random walk behaviour
3 main high-level planners:
Explore frontiers
Cover contest area
Find tables
Look back at objects
Top level state machine to choose between above planners
Port all required coordinate-transform and viewpoint-modeling functions, which currently exist in MATLAB, to the new architecture to support the planning code.
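The top-level state machine item above could look roughly like the sketch below, which switches between the three planners listed. Class name, state names, and the status flags are all placeholders, not the actual interfaces:

```python
# Hypothetical top-level state machine choosing between the high-level
# planners; trigger conditions are invented for illustration.
class StateMachine:
    def __init__(self):
        self.state = "explore_frontiers"

    def step(self, status):
        # status: dict of flags the planners would report back.
        if self.state == "explore_frontiers" and status.get("frontiers_exhausted"):
            self.state = "cover_contest_area"
        elif self.state == "cover_contest_area" and status.get("area_covered"):
            self.state = "find_tables"
        elif self.state == "find_tables" and status.get("tables_found"):
            self.state = "look_back_at_objects"
        return self.state

sm = StateMachine()
sm.step({"frontiers_exhausted": True})  # advances to "cover_contest_area"
```

A flat if/elif chain like this is easy to extend with the random-walk fallback once its trigger condition is decided.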
Finalize network configuration and ensure everyone has access to a suitable machine on which to develop [Marius, Dave, Tomas]
Set up the router on the robot with all laptops operating on a local subnet; do NFS mounts, static IPs, etc.
Set up self-administered PCs on the 54 subnet and give these machines access to the robot's network (candidate PCs are chifeng (MM), lapinkulta (DM), and norrlands (TH))
Provide instructions for all team members to set up their own ROS installation on these machines and/or to use a shared installation [MARIUS]
Integrate the recognition harness that James produced, run some tests and add new detectors. [Ankur and Catherine]
Define an interface for test data previously collected in robot dry runs, so that detectors can be evaluated on this data. [Ankur]
Implement the Felzenszwalb detector in the framework [Ankur]
Implement the Helmer detector in the framework [Scott]
Implement the McCann detector in the framework [Sancho]
Provide an interface for training data that is compatible with the web-search phase and supports many iterations of training/detection with different training-set parameters, so that those parameters can be evaluated [Ankur and Catherine]
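One way the shared detector interface and dry-run evaluation loop above might fit together, as a sketch; the class and method names here are hypothetical, not the harness's actual API:

```python
# Hypothetical common interface so the Felzenszwalb, Helmer, and McCann
# detectors can all be evaluated on the recorded dry-run data.
class Detector:
    def train(self, images, labels):
        raise NotImplementedError

    def detect(self, image):
        """Return a list of (label, score, bounding_box) tuples."""
        raise NotImplementedError

class DummyDetector(Detector):
    """Trivial stand-in used only to exercise the evaluation loop."""
    def train(self, images, labels):
        self.labels = set(labels)

    def detect(self, image):
        # Claim every trained label at a fixed score and box.
        return [(label, 0.5, (0, 0, 10, 10)) for label in self.labels]

def evaluate(detector, test_data):
    # test_data: list of (image, true_label) pairs from the dry runs.
    hits = 0
    for image, true_label in test_data:
        found = {label for label, score, box in detector.detect(image)}
        hits += true_label in found
    return hits / len(test_data)

d = DummyDetector()
d.train(images=[], labels=["mug", "stapler"])
accuracy = evaluate(d, [("img0", "mug"), ("img1", "phone")])  # -> 0.5
```

With one `evaluate` entry point, swapping detectors for the parameter-sweep iterations becomes a one-line change.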
Add additional information and/or filters to the web data grabbing phase [Pooyan, Ankur, Catherine]
Provide additional information about each object (which we do plan to use this time): object size, 3D object models whenever possible, object weight, and a flag indicating whether each image has a simple background [Catherine and Pooyan]
Investigate additional image filtering techniques (possibly could use the work of Jay) [Scott, Pooyan]
Integrate with new classification framework [Ankur, Catherine, Pooyan]
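The per-object metadata listed above could be carried through the pipeline as a small record like the following sketch; the field names and units are illustrative, not a fixed schema:

```python
# Hypothetical per-object metadata record for the web data-grabbing phase.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObjectInfo:
    name: str
    size_m: Tuple[float, float, float]  # rough (w, h, d) in metres
    weight_kg: float
    model_path: Optional[str]           # path to a 3D model, when one exists
    simple_background: bool             # per-image simple-background flag

mug = ObjectInfo("mug", (0.08, 0.10, 0.08), 0.3, None, True)
```

Keeping this in one record means the classification framework and the planners (e.g. viewpoint modeling, which needs object size) read the same fields.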
Get fraser (the new big laptop) up and running with CUDA [Tomas and Walter]
Install CUDA drivers
Try CudaSift
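A quick sanity check that fraser's GPU is actually visible before trying CudaSift might look like this; it assumes pycuda is installed and simply returns False otherwise:

```python
# Hypothetical pre-flight check: report whether any CUDA device is visible.
def cuda_available():
    try:
        import pycuda.driver as cuda  # assumes pycuda is installed
        cuda.init()
        return cuda.Device.count() > 0
    except Exception:
        # No pycuda, no driver, or no device: fall back gracefully.
        return False

print("CUDA device found:", cuda_available())
```

Running this right after the driver install would confirm the CUDA stack works before anyone debugs CudaSift itself.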
An outstanding TODO from previous meetings: ask whether Mechanical Turk is OK to use during SRVC training-data gathering. [DAVE?]