Download:

To download the STPLS3D point clouds for SEMANTIC segmentation click Here.

To download the STPLS3D point clouds for INSTANCE segmentation click Here.

We also provide baseline implementations for both segmentation tasks Here.

To download the source images for both synthetic and real datasets click Here.

Please scroll down to see more download options and other data formats.

Citation

If you use this dataset, please cite our paper:

@misc{chen2022stpls3d,
      title={STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset}, 
      author={Meida Chen and Qingyong Hu and Thomas Hugues and Andrew Feng and Yu Hou and Kyle McCullough and Lucio Soibelman},
      year={2022},
      eprint={2203.09065},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Details

Semantic annotations:

0: Ground, 1: Building, 2: LowVegetation, 3: MediumVegetation, 4: HighVegetation, 5: Vehicle, 6: Truck, 7: Aircraft, 8: MilitaryVehicle, 9: Bike, 10: Motorcycle, 11: LightPole, 12: StreetSign, 13: Clutter, 14: Fence, 15: Road, 17: Windows, 18: Dirt, 19: Grass.

Ground points for which no material label is available (15: Road, 18: Dirt, 19: Grass) are labeled with the generic Ground class (0). Note that not all of the datasets we currently provide include every semantic label; please refer to the tables below for details.
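The label IDs above and the ground-material merge can be captured in a small lookup, sketched below (names follow the list above; label 16 does not appear in the release, and the helper name is our own):

```python
# Semantic label IDs as listed above (16 is unused in the release).
LABELS = {
    0: "Ground", 1: "Building", 2: "LowVegetation", 3: "MediumVegetation",
    4: "HighVegetation", 5: "Vehicle", 6: "Truck", 7: "Aircraft",
    8: "MilitaryVehicle", 9: "Bike", 10: "Motorcycle", 11: "LightPole",
    12: "StreetSign", 13: "Clutter", 14: "Fence", 15: "Road",
    17: "Windows", 18: "Dirt", 19: "Grass",
}

# Ground-material classes that collapse into the generic Ground class (0)
# when material information is unavailable.
GROUND_MATERIALS = {15, 18, 19}

def merge_ground(label: int) -> int:
    """Map Road/Dirt/Grass onto the generic Ground label (0)."""
    return 0 if label in GROUND_MATERIALS else label
```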

Instance annotations:

The ground is labeled with -100. Window instances are currently annotated per building rather than per individual window, but they could be split into per-window instances using a connected component algorithm. Our experiments did not include the window instances.

Note that only synthetic datasets v2 and v3 have the instance labels, and we only benchmarked the instance segmentation algorithms using v3.

Data format and point spacing:


The full dataset is provided in .ply format with 0.1-meter point spacing. Points were sampled from the photogrammetric (i.e., ContextCapture) reconstructed 3D meshes using CloudCompare. The data can be used directly with our KPConv implementation to reproduce the semantic segmentation baseline.
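For readers unfamiliar with the .ply layout, the sketch below parses an ASCII-encoded PLY vertex element using only the standard library. The property names in the sample header (x, y, z, class) are assumptions, and the released files may well be binary-encoded, in which case a library such as plyfile or Open3D should be used instead.

```python
import io

def read_ascii_ply(stream):
    """Minimal ASCII .ply reader returning (property_names, vertex_rows).
    Sketch only: handles one 'vertex' element, no binary encodings."""
    assert stream.readline().strip() == "ply"
    props, n_vertices, in_vertex = [], 0, False
    for line in stream:
        tok = line.split()
        if tok[0] == "element":
            in_vertex = tok[1] == "vertex"
            if in_vertex:
                n_vertices = int(tok[2])
        elif tok[0] == "property" and in_vertex:
            props.append(tok[-1])  # property name is the last token
        elif tok[0] == "end_header":
            break
    rows = [tuple(float(v) for v in stream.readline().split())
            for _ in range(n_vertices)]
    return props, rows
```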

In addition, we also provide the synthetic dataset v3 in .txt format, which can be used directly with our HAIS implementation to reproduce the instance segmentation baseline. Note that the .txt point clouds have been downsampled to 0.3-meter point spacing using CloudCompare, the ground materials (15, 18, 19) are merged together as ground (0), and windows are merged into their corresponding buildings.
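The downsampling and label merge described above can be approximated with a simple voxel-grid pass, sketched below. This is a rough stand-in for CloudCompare's spatial subsampling (which enforces a minimum point distance rather than one point per voxel), so treat it as illustrative only:

```python
def voxel_downsample(points, labels, voxel=0.3):
    """Keep the first point encountered in each cubic voxel of size
    `voxel` meters, merging ground materials (15: Road, 18: Dirt,
    19: Grass) into the generic Ground label (0)."""
    seen, out = set(), []
    for (x, y, z), lab in zip(points, labels):
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        if key not in seen:
            seen.add(key)
            out.append(((x, y, z), 0 if lab in (15, 18, 19) else lab))
    return out
```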

Synthetic datasets

Datasets       Available Semantic        Instance    Images/Meshes     Point Cloud
Version 1      0, 1, 2, 3, 4, 5          No          Coming soon...    ply
Version 2      All except 17, 18, 19     Yes         Coming soon...    ply
Version 3      All                       Yes         Sample data       ply

Real-world datasets

Datasets            Available Semantic                 Instance    Images/Meshes    Point Cloud
USC                 1, 2, 5, 11, 13, 14, 15, 18, 19    No          Meshes           ply
WMSC                1, 2, 5, 11, 13, 14, 15, 18, 19    No          Meshes           ply
OCC                 1, 2, 5, 11, 13, 14, 15, 18, 19    No          Meshes           ply
Residential area    1, 2, 5, 11, 13, 14, 15, 18, 19    No          Meshes           ply

Comparison of Data Quality

Directly sampling or ray-casting point clouds from a 3D virtual environment leads to a large domain gap relative to real photogrammetry data. Here, we provide an intuitive comparison by visualizing the ray-cast 3D points, the synthetic photogrammetric points, and the real-world photogrammetric points of tree crowns.

Additional Visualization of the Dataset


The datasets provided on this page are published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. Please contact us if you want to use it commercially.