HumanEva-I and HumanEva-II Datasets
To aid ourselves and the community in the quantitative evaluation of articulated human motion and pose estimation algorithms, we developed two publicly distributed datasets that contain synchronized motion capture data and multi-camera video. The first and larger dataset, HumanEva-I, contains software-synchronized data from 7 video cameras (4 grayscale and 3 color) and 3D motion capture data (obtained using a standard marker-based motion capture system from Vicon). The data is partitioned into training, validation, and test subsets. HumanEva-I contains 4 subjects performing 6 common actions (e.g., walking, jogging, gesturing). The second dataset, HumanEva-II, contains hardware-synchronized data from 4 color video cameras and similarly obtained 3D motion capture data. HumanEva-II contains 2 subjects performing a continuous sequence of actions. We also formulated error metrics for measuring error in 2D and 3D pose and developed an on-line evaluation methodology (to ensure that the reported quantitative results are unbiased). More details on the data and evaluation metrics can be found on the HumanEva web page.
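A commonly used 3D error measure of this kind is the average Euclidean distance between corresponding joints (or virtual markers) of the estimated and ground-truth poses, averaged over the frames of a sequence. The following Python sketch illustrates the idea; the function names and the example poses are hypothetical, not the actual HumanEva evaluation code.

```python
import math

def mean_joint_error(pose_est, pose_gt):
    """Average Euclidean distance between corresponding 3D joint
    locations of an estimated and a ground-truth pose.
    Each pose is a list of (x, y, z) tuples of equal length."""
    assert len(pose_est) == len(pose_gt)
    dists = [math.dist(p, q) for p, q in zip(pose_est, pose_gt)]
    return sum(dists) / len(dists)

def sequence_error(poses_est, poses_gt):
    """Mean of the per-frame pose errors over a whole sequence."""
    errs = [mean_joint_error(e, g) for e, g in zip(poses_est, poses_gt)]
    return sum(errs) / len(errs)

# Hypothetical example: a 3-joint pose off by 10 units in x at every joint
gt  = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 100.0, 0.0)]
est = [(10.0, 0.0, 0.0), (110.0, 0.0, 0.0), (10.0, 100.0, 0.0)]
print(mean_joint_error(est, gt))  # 10.0
```

The same formula applies to 2D pose error by using image-plane joint coordinates instead of 3D ones.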
These datasets have been downloaded by over 70 research groups around the world. They also served as the basis for two workshops we organized on Evaluation of Human Motion and Pose Estimation: EHuM and EHuM2. Both datasets can be obtained by registering and following the instructions here.
Relevant papers:
- HumanEva: Synchronized Video and Motion Capture Dataset for Evaluation of Articulated Human Motion, L. Sigal and M. J. Black, Technical Report CS-06-08, Brown University, 2006.
© Copyright 2006, Leonid Sigal
All Rights Reserved Permission to use, copy, modify, and distribute this software and its documentation for any non-commercial purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Image & MOCAP Synchronized Dataset (v. 1.0)
This data is the predecessor of the HumanEva datasets and was obtained in a manner similar to HumanEva-I. Part of this data was used as ground truth for the quantitative evaluation of tracking with the loose-limbed body model presented at CVPR 2004. It was later used in our paper on quantitative evaluation of the annealed particle filter at VS-PETS 2005.
The data is available for research purposes only; please see the full copyright notice below. For non-commercial uses, please email me (ls@cs.brown.edu) for the login and password.
Download data and example code (password protected)
Download required library [Camera Calibration Toolbox for Matlab]
© Copyright 2004, Leonid Sigal
All Rights Reserved. Distributed under the same permission notice and disclaimer as above.
Pampas / Non-parametric Belief Propagation Toolbox for Matlab (v. 0.1)
This toolbox is an implementation of "PAMPAS: Real-Valued Graphical Models for Computer Vision" by M. Isard. It is the base implementation of the Non-parametric Belief Propagation (NBP) algorithm and also contains simple example code and short documentation on how the implementation can be extended to other applications. The toolbox is available for research purposes only; please see the full copyright notice below. For non-commercial uses, please email me (ls@cs.brown.edu) for the login and password. This toolbox is an ongoing effort; for bug reports and suggestions, please contact me.
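The core idea of NBP is to represent each belief-propagation message as a kernel density over weighted particles and to approximate message products by sampling. The Python sketch below illustrates this for a 1-D state with Gaussian kernels; it is a simplified, hypothetical illustration (the class and function names are invented), not the toolbox's Matlab API, and real NBP uses more sophisticated product-sampling schemes.

```python
import math
import random

def gauss(x, mu, sigma):
    """Value of a 1-D Gaussian density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

class Message:
    """A 1-D NBP message: a Gaussian kernel density over weighted particles."""
    def __init__(self, particles, weights, bandwidth):
        self.particles, self.weights, self.bw = particles, weights, bandwidth

    def density(self, x):
        z = sum(self.weights)
        return sum(w * gauss(x, p, self.bw)
                   for p, w in zip(self.particles, self.weights)) / z

def product_sample(msg_a, msg_b, n):
    """Approximate the product of two messages by importance sampling:
    draw particles from msg_a's mixture, reweight by msg_b's density."""
    samples, weights = [], []
    for _ in range(n):
        # pick one of msg_a's kernels proportionally to its weight, then sample it
        p = random.choices(msg_a.particles, weights=msg_a.weights)[0]
        x = random.gauss(p, msg_a.bw)
        samples.append(x)
        weights.append(msg_b.density(x))
    return Message(samples, weights, msg_a.bw)

random.seed(0)
m1 = Message([0.0, 1.0], [1.0, 1.0], 0.3)   # hypothetical incoming messages
m2 = Message([0.8, 1.2], [1.0, 1.0], 0.3)
belief = product_sample(m1, m2, 2000)
# weighted mean of the approximate product: mass concentrates near the overlap
mean = (sum(w * x for x, w in zip(belief.particles, belief.weights))
        / sum(belief.weights))
print(round(mean, 2))
```

In the articulated-tracking setting, each body part would carry such a message over its 6-D pose, and the product would combine evidence from neighboring parts.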
PaMPas / Non-parametric Belief Propagation Toolbox (password protected)
Download required library [Kernel Density Estimation Toolbox for Matlab] (currently included in distribution).
Relevant papers:
- Measure Locally, Reason Globally: Occlusion-sensitive Articulated Pose Estimation, L. Sigal and M. J. Black, IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2006.
- Tracking Loose-limbed People, L. Sigal, S. Bhatia, S. Roth, M. J. Black and M. Isard, IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2004.
- Attractive people: Assembling loose-limbed models using non-parametric belief propagation, L. Sigal, M. Isard, B. H. Sigelman and M. J. Black, Advances in Neural Information Processing Systems 16, NIPS 2003.
Update Log:
Date | Description
---- | -----------
5/11/2005 | Toolbox released for general public use (v. 0.1)
© Copyright 2005, Leonid Sigal
All Rights Reserved. Distributed under the same permission notice and disclaimer as above.
Color-based Tracking under Varying Illumination Code and Dataset
In our CVPR 2000 paper and the follow-up PAMI 2004 journal version, we presented a novel approach to adaptive skin-color segmentation. A short description of the project can be found on my Research page, and a slightly longer version on the project web page. The data, code, and results obtained with our approach can be downloaded freely for research purposes. The data contains manually segmented image sequences from feature films.
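The general idea behind adaptive skin-color segmentation can be sketched as follows: model skin color with a distribution in a chromaticity space, classify pixels against that model, and then re-estimate the model from the classified pixels so it can track illumination changes. The Python sketch below is a deliberately simplified, hypothetical illustration (single Gaussian in normalized-rg space, isotropic variance), not the dynamic-histogram method of the papers.

```python
def to_rg(pixel):
    """Project an (R, G, B) pixel into normalized-rg chromaticity space."""
    r, g, b = pixel
    s = (r + g + b) or 1  # avoid division by zero for black pixels
    return (r / s, g / s)

class AdaptiveSkinModel:
    """Toy skin model: one isotropic Gaussian in rg space with a slow
    exponential update of its mean (an EM-like adaptation step)."""
    def __init__(self, mean, var, rate=0.1):
        self.mean, self.var, self.rate = list(mean), var, rate

    def score(self, pixel):
        """Squared distance to the skin mean, scaled by the variance
        (lower means more skin-like)."""
        r, g = to_rg(pixel)
        return ((r - self.mean[0]) ** 2 + (g - self.mean[1]) ** 2) / self.var

    def segment(self, pixels, thresh=2.0):
        """Boolean skin mask for a flat list of (R, G, B) pixels."""
        return [self.score(p) < thresh for p in pixels]

    def adapt(self, pixels):
        """Shift the mean toward the average chromaticity of pixels
        currently classified as skin."""
        skin = [to_rg(p) for p, s in zip(pixels, self.segment(pixels)) if s]
        if skin:
            mr = sum(r for r, _ in skin) / len(skin)
            mg = sum(g for _, g in skin) / len(skin)
            self.mean[0] += self.rate * (mr - self.mean[0])
            self.mean[1] += self.rate * (mg - self.mean[1])

model = AdaptiveSkinModel(mean=(0.45, 0.30), var=0.002)
frame = [(200, 120, 90), (30, 40, 200), (210, 130, 100)]  # two skin-ish, one blue
mask = model.segment(frame)
print(mask)   # [True, False, True]
model.adapt(frame)  # drift the model toward the current frame's skin pixels
```

Running `segment` and `adapt` on each incoming frame lets the model follow gradual illumination changes; the actual papers achieve this with an evolving color distribution rather than a single Gaussian.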
Download: [Dataset], [Source code], [Results].
Relevant papers:
- Skin Color-Based Video Segmentation under Time-Varying Illumination, L. Sigal, S. Sclaroff and V. Athitsos, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(7), pp. 862-877, July 2004.
- Estimation and Prediction of Evolving Color Distributions for Skin Segmentation Under Varying Illumination, L. Sigal, S. Sclaroff and V. Athitsos, IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000.
© Copyright 2000, 2004, Leonid Sigal
All Rights Reserved. Distributed under the same permission notice and disclaimer as above.