
PyTorch Template

Modified 2021-10-30 by liampaull

This section describes the basic procedure for making a submission with a model trained using PyTorch.

It can be used as a starting point for any of the LF, LFV_multi, and LFI challenges.

Requires: You have set up your accounts.

Requires: You meet the software requirements.

Result: You make a submission to all of the LF* challenges and can view their status and output.

The video is at https://vimeo.com/480202594.


Quickstart

Modified 2020-11-07 by Andrea Censi

Clone the template repo:

$ git clone https://github.com/duckietown/challenge-aido_LF-template-pytorch.git

Change into the directory:

$ cd challenge-aido_LF-template-pytorch

Run the submission:

Either make a submission with:

$ dts challenges submit --challenge CHALLENGE_NAME

where you can find a list of the open challenges here.

Or, run local evaluation with:

$ dts challenges evaluate --challenge CHALLENGE_NAME

Verify the submission(s)

Modified 2019-04-15 by Liam Paull

This will make a number of submissions (as described below). You can track the status of these submissions in the command line with:

$ dts challenges follow --submission SUBMISSION_NUMBER

or through your browser by navigating to the webpage: https://challenges.duckietown.org/v4/humans/submissions/SUBMISSION_NUMBER

where SUBMISSION_NUMBER should be replaced with the submission number reported in the terminal output.

Anatomy of the submission

Modified 2019-04-15 by Liam Paull

The submission consists of all of the basic files required for a submission. Below we highlight the specifics of this template.

solution.py

Modified 2020-11-07 by Liam Paull

The only differences in solution.py (the Python script that is run by our submission) are:

  • We conditionally load the model in the initialization procedure:
self.model = DDPG(state_dim=self.preprocessor.shape, action_dim=2, max_action=1, net_type="cnn")
self.current_image = np.zeros((640, 480, 3))

if load_model:
    logger.info('PytorchRLTemplateAgent loading models')
    fp = model_path if model_path else "model"
    self.model.load(fp, "models", for_inference=True)
  • We abort if no GPU is detected and the environment variable AIDO_REQUIRE_GPU is set.

  • We are calling our model to compute an action with the following code:

def compute_action(self, observation):
    action = self.model.predict(observation)
    return action.astype(float)
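The GPU check mentioned above can be sketched as follows. This is an illustrative sketch, not the template's actual code: the function name is hypothetical, and the `gpu_available` flag is injected so the logic can be shown without PyTorch (the template itself would typically consult `torch.cuda.is_available()`):

```python
import os

def require_gpu_or_abort(gpu_available, environ=os.environ):
    """Abort when a GPU is required but none is detected.

    Hypothetical sketch: `gpu_available` would normally come from
    torch.cuda.is_available(); it is passed in here so the check
    can be exercised without a PyTorch installation.
    """
    if environ.get("AIDO_REQUIRE_GPU") and not gpu_available:
        raise RuntimeError("AIDO_REQUIRE_GPU is set but no GPU was detected")
```

If the variable is unset, the agent simply falls back to running on the CPU.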

Model files

Modified 2019-04-15 by Liam Paull

The other additional files are the following:

wrappers.py
model.py
models

wrappers.py contains a simple wrapper for resizing the input image. model.py is used for training the model, and the trained models are stored in the models directory.
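The idea behind the resizing wrapper can be sketched as follows. This is a hypothetical, dependency-free sketch using nearest-neighbour sampling; the class name and `preprocess` method are illustrative, and the template's wrappers.py may instead use a library such as OpenCV or PIL:

```python
class ResizeWrapper:
    """Illustrative sketch of an observation wrapper that resizes
    an input image to a fixed (height, width) before it reaches
    the model. Nearest-neighbour sampling, pure Python."""

    def __init__(self, shape):
        self.shape = shape  # target (height, width)

    def preprocess(self, image):
        # image: a list of rows, each row a list of pixels
        src_h, src_w = len(image), len(image[0])
        dst_h, dst_w = self.shape
        return [
            [image[i * src_h // dst_h][j * src_w // dst_w] for j in range(dst_w)]
            for i in range(dst_h)
        ]
```

Downsampling the camera image this way keeps the model's input dimension fixed regardless of the raw observation size.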