---+ SRVC Meeting minutes for September 30th, 2009

---++ Discussion:

   * Report that Mechanical Turk will not be allowed this year.
   * Reminder to lock up the laptops because of the recently reported break-in attempts.
   * Matt Baumann will start on the team tomorrow. We discussed many ideas for things he can help us with, mainly related to speeding up various classifiers (including the Felzenszwalb detector) and performing fast feature extraction on the GPU.
   * Next we went over the current status. Going forward, I'll record things in a past-present-future view: what we've completed, which tasks are active, and what we still need to start down the road.

---++ Completed components:

   * Porting of basic drivers for the Bumblebee and the PowerBot.
   * Tower design.

---++ Current in-progress task list:

   * Basic robot functions based on ROS, with the aim of performing a preliminary test run of navigation and mapping. [MM and DM]
      * WG nav stack
      * gmapping
      * Coordinate transform code (a broadcaster sketch appears after these minutes)
      * Tilting laser driver
   * Network configuration and development environment
      * Robot router setup
      * Set up self-administered PCs
      * ROS instructions
   * Tower upgrade:
      * Order material for building a new laser/camera mount, and assemble it.
   * Recognition framework (James's module directly, or something built upon it) [AG and CG]
      * Test data interface
      * Felzenszwalb detector
      * Helmer
      * McCann
      * Training data interface and additional parameters
   * CUDA on fraser [WW and TH]
      * CUDA SIFT
      * FastHOG
   * Web grabbing module [PF and CG] (see the download-and-filter sketch after these minutes)
      * Add additional sources of info
      * Investigate filtering techniques
      * Integrate the output data format with classification
   * Speed-up of Felzenszwalb training
      * Initial investigation to verify this is a doable task: profiling the current code (see the profiling sketch after these minutes), ensuring good performance on web data, and investigating potential speedups such as GPU feature extraction and SVM learning.

---++ Future tasks pending completion of others:

   * High-level control functionality such as planning [DM]
      * Random walk behavior
      * Three main high-level planners
      * Top-level state machine to choose between the above planners (see the state-machine sketch after these minutes)
   * Choice of "where-to-look", a.k.a. the attention system
      * Use of 3D models in recognition
      * Use of 3D information and context in the attention system
   * Real-time result reporting
      * Feeding classification results back to the robot planner
   * Investigate new cameras which might be faster than the Canon.
   * Prioritizing the classifiers' computation towards images that look most promising to the attention system, taking into account the classes that have already been recognized.

-- Main.DavidMeger - 30 Sep 2009
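---++ Appendix: illustrative code sketches

For the coordinate transform work, here is a minimal Python sketch of a tf broadcaster that publishes the tilting laser's pose relative to the robot base, so gmapping and the WG nav stack can consume the scans. The frame names, joint name, mount offsets, and publish rate are placeholders rather than measured values from our tower.

<verbatim>
#!/usr/bin/env python
# Sketch: broadcast the transform from the robot base to the tilting-laser
# frame. Offsets and names below are placeholders, not measured values.
import rospy
import tf
from sensor_msgs.msg import JointState

tilt_angle = 0.0  # latest tilt-stage angle, in radians

def joint_callback(msg):
    # Assumes the tilt stage reports its angle as a JointState named 'laser_tilt'.
    global tilt_angle
    if 'laser_tilt' in msg.name:
        tilt_angle = msg.position[msg.name.index('laser_tilt')]

if __name__ == '__main__':
    rospy.init_node('tower_tf_broadcaster')
    rospy.Subscriber('joint_states', JointState, joint_callback)
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(20)  # 20 Hz is a guess; fast enough for scan assembly
    while not rospy.is_shutdown():
        # Placeholder offset: laser mounted 0.30 m forward, 1.20 m up on the tower.
        broadcaster.sendTransform(
            (0.30, 0.0, 1.20),
            tf.transformations.quaternion_from_euler(0.0, tilt_angle, 0.0),
            rospy.Time.now(),
            'laser_tilt_link',   # child frame (the tilting laser)
            'base_link')         # parent frame (the robot base)
        rate.sleep()
</verbatim>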
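For the web grabbing module, a rough Python sketch of the download-and-filter step. The URL list is assumed to come from whatever image-search sources we end up using; the filters (content type and minimum file size) and the JSON manifest handed to the classifiers are only illustrative.

<verbatim>
# Sketch of the web grabbing module: fetch candidate training images from a
# list of URLs, apply simple filters, and record metadata for the recognition
# framework. The URL list, thresholds, and output layout are assumptions.
import json
import os
import urllib.request

def grab_images(urls, out_dir, min_bytes=10 * 1024):
    """Download each URL, keep plausible images, and write a small manifest."""
    os.makedirs(out_dir, exist_ok=True)
    kept = []
    for i, url in enumerate(urls):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if not resp.headers.get('Content-Type', '').startswith('image/'):
                    continue  # crude filter: skip non-image responses
                data = resp.read()
        except OSError:
            continue  # network errors are expected; just move on
        if len(data) < min_bytes:
            continue  # crude filter: tiny files are usually icons or error pages
        path = os.path.join(out_dir, 'img_%05d.jpg' % i)
        with open(path, 'wb') as f:
            f.write(data)
        kept.append({'file': path, 'source_url': url})
    # One JSON manifest per object class keeps the hand-off to the classifiers simple.
    with open(os.path.join(out_dir, 'manifest.json'), 'w') as f:
        json.dump(kept, f, indent=2)
    return kept
</verbatim>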
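For the Felzenszwalb training speed-up, the first step is profiling the current code. A minimal sketch using Python's cProfile follows; run_training is a stand-in for whatever training entry point the detector code actually exposes.

<verbatim>
# Sketch of the "profiling current code" step: wrap one training run in
# cProfile and list the hottest functions as candidates for GPU offload.
import cProfile
import pstats

def run_training():
    # Placeholder for a single training pass (feature extraction + SVM update).
    pass

profiler = cProfile.Profile()
profiler.enable()
run_training()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(20)  # top 20 time sinks
</verbatim>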
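For the future planning work, a hedged sketch of how the top-level state machine could switch between the high-level planners. The mode names and switching conditions are illustrative only; the actual three planners are not enumerated here.

<verbatim>
# Sketch of a top-level state machine choosing between high-level planners.
# Mode names and conditions are placeholders for the planners still to be defined.
from enum import Enum

class Mode(Enum):
    EXPLORE = 1  # random walk / coverage while the map is being built
    ATTEND = 2   # drive toward a "where-to-look" target from the attention system
    REPORT = 3   # stop and report results in real time

class TopLevelPlanner:
    def __init__(self):
        self.mode = Mode.EXPLORE

    def step(self, attention_target, objects_remaining):
        """Pick the active planner for this cycle based on simple conditions."""
        if not objects_remaining:
            self.mode = Mode.REPORT
        elif attention_target is not None:
            self.mode = Mode.ATTEND
        else:
            self.mode = Mode.EXPLORE
        return self.mode
</verbatim>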