This page contains information about several resources that are necessary or helpful for participating in the competition.

Test Datasets

We've released our test datasets in a public Google Cloud Storage bucket, under the 'eval' directory. The gsutil path (see video tutorial below) is gs://decathlon_groundtruth_datasets/eval.
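As a minimal sketch, the datasets can be listed and downloaded with gsutil (part of the Google Cloud SDK); the commands below are illustrative, and the local directory name eval_data is our own choice:

```shell
# Bucket path from the announcement above.
EVAL_PATH="gs://decathlon_groundtruth_datasets/eval"

# Only attempt the transfer if gsutil is installed on this machine.
if command -v gsutil >/dev/null 2>&1; then
    # List the contents of the 'eval' directory.
    gsutil ls "$EVAL_PATH"
    # Copy everything locally (-m parallelizes, -r recurses).
    mkdir -p eval_data
    gsutil -m cp -r "$EVAL_PATH" eval_data/
fi
```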

Submission Portal and Important Details

Participants will submit their methods through CodaLab. The competition page there also contains details on:

  • Task metadata
  • Baseline methods
  • Leaderboards
  • Scoring protocol
  • Imposed compute and hardware constraints

Starter Kit

We provide a starter kit through CodaLab that contains everything you need to create your own code submission, and to test it on any local computer or cloud instance. The starter kit contains the following:

  • The 10 development datasets and corresponding metadata
  • Dataloaders
  • Dockerfiles
  • Several baseline implementations
  • A code skeleton for implementing methods and submitting to the competition's evaluation pipeline
  • Instructions and small code examples on how to use the above

All provided code in the starter kit will be in Python.
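The starter kit's exact interface is documented in the kit itself; as a rough, hypothetical sketch of what a method implementation for such a code-submission pipeline typically looks like, consider the following (the class name MyMethod and the fit/predict signatures are our own placeholders, not the kit's actual API):

```python
# Hypothetical method skeleton; the real starter kit defines its own
# interface, dataloaders, and submission entry point.
class MyMethod:
    """A trivial baseline: predicts the mean of the training labels."""

    def fit(self, X, y):
        # Learn from one development dataset's features X and labels y.
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        # Return one prediction per input example.
        return [self.mean_ for _ in X]

method = MyMethod().fit([[0], [1]], [2.0, 4.0])
print(method.predict([[5]]))  # one prediction per example
```

Plugging a class like this into the kit's code skeleton, then testing it locally against the development datasets before submitting, is the intended workflow.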

Requesting Compute Resources

Participants who would not otherwise have the compute needed to take part in the competition can apply for cloud GPU resources; applications are reviewed on a first-come, first-served basis. Please email the organizers for more details.

Additional News and Information

We have opened a dedicated Slack channel to act as a forum for participants to connect with the organizers, ask questions, and post any findings.

Also follow our official Twitter account for competition announcements and news!


Below is the public video tutorial, which gives simple instructions on navigating CodaLab and the competition starter kit as you begin developing your methods.