General rules

Protocol

Deployment technique

We use Docker containers to package, deploy, and run the applications both on the physical Duckietown platform and in the cloud for simulation. Base Docker container images are provided and distributed via Docker Hub.
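
As a rough sketch of how this packaging is used, the snippet below pulls a base image and starts an agent container with the Docker SDK for Python; the image and container names are placeholders, not the official Duckietown images.

    # Minimal sketch using the Docker SDK for Python (pip install docker).
    # The image names below are placeholders, not the official Duckietown base images.
    import docker

    client = docker.from_env()

    # Pull a (hypothetical) base image from Docker Hub.
    client.images.pull("duckietown/example-agent-base", tag="latest")

    # Start a container built from that base image with the agent code inside.
    container = client.containers.run(
        "my-agent-image:latest",  # your locally built submission image
        detach=True,
        name="my-agent",
    )
    print(container.logs().decode())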

A challenges server is used to collect and queue all submitted agents. The simulation evaluations execute each queued agent as it becomes available. Submissions that pass the simulation environment are queued for execution in the Autolab.
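
The queueing behaviour can be pictured roughly as follows; this is only a conceptual sketch, not the actual challenges-server implementation, and all names in it are invented.

    # Conceptual sketch of the evaluation flow described above (not the real
    # challenges-server code; all names are invented for illustration).
    from queue import Queue

    simulation_queue: Queue = Queue()  # submitted agents waiting for simulation
    autolab_queue: Queue = Queue()     # submissions that passed simulation

    def evaluate_in_simulation(submission) -> bool:
        """Placeholder: run the agent in the simulator and decide pass/fail."""
        raise NotImplementedError

    def process_next_submission() -> None:
        submission = simulation_queue.get()      # take the next queued agent
        if evaluate_in_simulation(submission):   # passed the simulation stage?
            autolab_queue.put(submission)        # queue it for the Autolab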

The AI-DO evaluations workflow supports local and remote development, in simulation and on hardware.

For validation of submitted code and for the evaluation of the competition finals, a surprise environment will be employed. This discourages over-fitting to any particular Duckietown configuration.

Submission of entries

Participants can submit their code to a challenge in the form of a Docker container. Templates are provided for creating the container image in a conforming way.
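
The exact interface is defined by the templates themselves; the sketch below only illustrates the general shape of a template-conforming agent, and the class and method names are hypothetical.

    # Hypothetical sketch of a template-conforming agent; the real templates
    # define the actual interface, and these names are illustrative only.
    import numpy as np

    class MyAgent:
        def init(self) -> None:
            # Load models, allocate buffers, etc.
            self.steps = 0

        def on_observation(self, image: np.ndarray) -> tuple:
            # Map a camera image to wheel commands (left, right).
            self.steps += 1
            return 0.2, 0.2  # placeholder: drive slowly straight ahead

        def on_shutdown(self) -> None:
            print(f"agent ran for {self.steps} steps")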

The system schedules the submitted robot agent to run in the cloud for the challenges selected by the user and, if the simulations pass, in the Autolabs.

Participants can submit entries as many times as they like; entries are processed on a best-effort basis. Access control and prioritization policies are in place to provide equal opportunities to all participants and to prevent monopolization of the available computational and physical resources.

Participants are required to open-source their solution's source code. If auxiliary data are used to train the models, those data must be made available.

Submitted code is evaluated in simulation and, if it performs sufficiently well, on physical Autolabs. Scores and logs generated with the submitted code are made available on the challenges server.

The simulation code is available as open source for everybody to use on computers that they control. The baselines interact with the simulator through a standardized interface that mimics the interface to the real robot.
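
For example, with the open-source simulator (gym-duckietown) the interaction reduces to a standard observation-action loop, as sketched below; the environment id and the gym step API (here the older 4-tuple return) are assumptions and may differ between versions.

    # Sketch of the observation -> action loop against the open-source simulator.
    # Assumes gym-duckietown and an older gym API (4-tuple step return); the
    # environment id is an example and may differ between versions.
    import gym
    import gym_duckietown  # noqa: F401  (registers the Duckietown-* environments)

    env = gym.make("Duckietown-udem1-v0")
    obs = env.reset()
    done = False
    while not done:
        action = [0.2, 0.0]  # placeholder action; an agent would compute this from obs
        obs, reward, done, info = env.step(action)
    env.close()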

Autolab test and validation

When an experiment is run in a training/testing Autolab, the participants receive, in addition to the score, detailed feedback, including logs, telemetry, videos, etc. The sensory data generated by the robots is continuously recorded and becomes available to the entire community.

Video: Autolab LF-challenge evaluation demo (https://vimeo.com/561305335).

When an experiment is run in a validation Autolab, the only output to the user is the test score and minimal statistics (e.g., the number of collisions and the number of rule violations).
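
The snippet below is a purely illustrative sketch of the kind of minimal output a validation run might report; the field names and values are made up.

    # Purely illustrative: the field names and values below are made up.
    validation_result = {
        "score": 17.4,           # the test score
        "collisions": 0,         # number of collisions
        "rule_violations": 2,    # number of rule violations
    }
    print(validation_result)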

Leaderboards

After each run in simulation and in the Autolabs, participants can see the metric statistics on the competition scoring website. Extended leaderboards are made available for each challenge.

Eligibility

Employees and affiliates of the organizing and sponsoring organizations are not eligible to participate in the competition, but they are welcome to submit baseline solutions, which will be reported on a special leaderboard.

Students of the organizing institutions (ETH Zürich, University of Montreal, and TTIC) are eligible to participate in the competition as part of coursework, provided they are not involved in organizing the competition.

Intellectual property

Participants of AI-DO are required to provide the source code, data, and learning models of their submission to the organizers before the finals, so that their regularity can be checked.

Winners of AI-DO are required to make their submission open source so that it can be reused in subsequent challenges.