
DATAHACK 2018


Are you passionate about making widespread, impactful global changes? Autonomous vehicles represent one of the biggest revolutions mankind has ever seen and they will affect every aspect of our daily lives. In this challenge you will help to enable the autonomous car revolution. Teams undertaking Innoviz’s Rigid Motion Segmentation Challenge will solve the problem of decomposing LIDAR data (point cloud) into background and moving objects.

Rigid Motion Segmentation Challenge

Detect all points that belong to a moving object!

Check out our online point cloud (open in Chrome)

Score board

Team | Score

Dataset

The dataset consists of simulated videos of urban driving.
Lidar simulation is generated by Carla.
Lidar Details
Field of view - 80° × 40°
Resolution - 0.2° × 0.2°
Maximal distance - 100 m
Minimal distance - 3 m
Coordinate system origin - center of the Lidar; the translation between the Lidar and the center of the ego vehicle is (x=0.6, y=0.0, z=1.3) [m]

Coordinate system:

We always use a right-handed coordinate system: x points forward and z points upward.

Each frame consists of 3 types of data, stored in separate CSV files.

1. Point cloud

A point cloud is a set of (3D) points in space, in our case generated by Lidar. In addition to the spatial location of each point, Innoviz's Lidar also extracts reflectivity, which is similar to "color". The origin of the point cloud coordinate system is the center of the ego vehicle (aligned with the ego motion data).
File structure:
x[cm], y[cm], z[cm], reflectivity[0-100]
The number of rows (the number of points in the point cloud) varies from frame to frame.
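For example, a minimal sketch of loading one frame with NumPy (the file path below is only an example, and we assume the CSV has no header row):

import numpy as np

# Columns: x[cm], y[cm], z[cm], reflectivity[0-100]. Example path only.
pc = np.loadtxt("train/video_0/pointcloud/frame_0000.csv", delimiter=",")

xyz_m = pc[:, :3] / 100.0   # convert cm -> m
reflectivity = pc[:, 3]     # "color"-like channel in [0, 100]
print(xyz_m.shape, reflectivity.min(), reflectivity.max())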

2. Ego motion

Ego motion is the motion of the vehicle on which the Lidar is mounted. Applying the ego motion translation and rotation to the point cloud will transform it to the global coordinate system.
Rotation order: rotation_x -> rotation_y -> rotation_z
File structure:
rotation_x[rad], rotation_y[rad], rotation_z[rad], translation_x[m], translation_y[m], translation_z[m]
A single row per frame.
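A minimal sketch of applying ego motion to a point cloud (the file path is only an example, and the rotation convention below assumes rotations about fixed axes in the stated x -> y -> z order; verify both against utilities/math_utils.py):

import numpy as np

def rotation_matrix(rx, ry, rz):
    # Assumption: rotations about fixed axes, applied in the order x -> y -> z.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Single row: rx, ry, rz [rad], tx, ty, tz [m]. Example path only.
ego = np.loadtxt("train/video_0/egomotion/frame_0000.csv", delimiter=",")
R = rotation_matrix(*ego[:3])
t = ego[3:6]
points_global = xyz_m @ R.T + t   # xyz_m (in meters) from the point cloud sketch above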

3. Labels

Points that belong to moving objects are labeled 1; all others are labeled 0.
File structure:
label[0-1]
The number of rows is identical to the corresponding point cloud file.
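A short sketch of loading the labels and pairing them with the point cloud (example path, same no-header assumption as above):

import numpy as np

# One value (0 or 1) per point, in the same order as the point cloud file.
labels = np.loadtxt("train/video_0/labels/frame_0000.csv", delimiter=",").astype(int)
assert labels.shape[0] == xyz_m.shape[0]   # one label per point
moving_points = xyz_m[labels == 1]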

How to download the data set

Just download the test and train sets and unzip them.
Test set
Train set

In The Repo

YOU DON'T NEED TO BE A 3D OR POINT CLOUD EXPERT!!
We will give you all you need to get started with point cloud data. Everything we provide is open source, and you are more than welcome to explore our source code and change it to suit your needs.
The code has been tested on Linux and (most of it) on Windows.
In the repo you will find the following (among other code):

1. utilities/math_utils.py

In this file you will find the RotationTranslationData class, which can help with the affine transformations you might want to apply to the point cloud. We strongly recommend using this class for transformations.
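A purely hypothetical sketch of how such a class might be used; the constructor arguments and method name below are assumptions made for illustration, so check the class definition in utilities/math_utils.py for the real interface:

# The constructor signature and apply_transform() are assumptions, not the actual API.
from utilities.math_utils import RotationTranslationData

rt = RotationTranslationData(rotation=ego[:3], translation=ego[3:6])  # hypothetical signature
points_global = rt.apply_transform(xyz_m)                             # hypothetical method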

2. visualization/vis.py

In this file you will find the pc_show() function. This is our point cloud viewer for the challenge. It is based on the Panda3D graphics engine. You should think of it as "matplotlib.pyplot.imshow()" for point clouds (after installing Panda3D). If you are familiar with Cython, you can dramatically accelerate the viewer. Install Cython and build our code by running this in a terminal:


cd .../datahack2018

python visualizations/setup.py build_ext --inplace
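A minimal usage sketch, assuming pc_show() accepts an Nx3 array of xyz points; check vis.py for the exact import path and signature (and for how reflectivity or labels can be passed for coloring):

# Assumption: pc_show() takes an Nx3 array; verify the module path and arguments in vis.py.
from visualization.vis import pc_show

pc_show(xyz_m)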


3. examples/

Holds two scripts: one for playing a point cloud video and one for aggregating point clouds. This is the best place to start your challenge.

4. evaluation/iou_evaluation.py

We will evaluate your scores using this script. More details in the Evaluation section below.

Evaluation

Evaluation is IOU based.

Let A be the set of ground-truth moving points and B be the set of points you predict as moving.
The IOU score is |A ∩ B| / |A ∪ B|.
Note: we only consider the IOU of the positive (moving) labels/predictions.
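For intuition, a minimal sketch of the positive-class IOU computed from two binary label vectors (the official score comes from evaluation/iou_evaluation.py; the empty-union convention below is an assumption):

import numpy as np

def positive_iou(gt, pred):
    # gt, pred: binary arrays of length N (1 = moving point, 0 = background).
    gt = np.asarray(gt, dtype=bool)
    pred = np.asarray(pred, dtype=bool)
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    # Returning 1.0 for an empty union is our convention here; the official script may differ.
    return intersection / union if union > 0 else 1.0

print(positive_iou([0, 1, 1, 0], [0, 1, 0, 1]))  # 1/3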

More information here

The evaluation script uses the directory and file names to identify the correct ground-truth file, so you need to keep the original directory tree.

Submissions

In order to submit your predictions for the test set you need to zip your prediction directory and send it to datahack2018@innoviz.tech. Each team can submit results at most 3 times throughout the hackathon.
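For example, a small sketch of zipping a predictions directory with the Python standard library (the directory name is only an example; it must mirror the original test-set tree):

import shutil

# Creates predictions.zip from the contents of the "predictions" directory.
shutil.make_archive("predictions", "zip", root_dir="predictions")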

Awards

Coming soon..
