Introduction

This year's challenge has two independent tracks:

Track1: Anti-UAV Tracking. Given the bounding box of a drone target in the first frame, this track requires algorithms to track the target in each subsequent video frame by predicting its bounding box. When the target disappears, an invisible mark (no bounding box) must be output.

Track2: Anti-UAV Detection & Tracking. Whether a drone target exists in the first frame is unknown. This track requires algorithms to detect and track the drone target whenever it appears by predicting its bounding box. When the target does not exist or disappears, an invisible mark (no bounding box) must be output.


News

🔥🔥🔥 The Codalab servers for Track1 and Track2 are online now! 🔥🔥🔥

More news coming soon!


Guideline for Challenge

  • The two tracks share the same training data but use two different test sets. Every video in the Track1 test data contains a drone target in the first frame; videos in the training data and in the Track2 test data may or may not contain a drone target in the first frame.

  • The two tracks use the same evaluation metric, which is introduced below.

  • Two independent Codalab servers are used to evaluate and rank the submissions for Track1 and Track2, respectively. The Codalab servers and the test data for both tracks will be publicly accessible online after the testing phase begins on Mar 10 '23 06:00 PM PDT.

  • We provide a baseline model and evaluation code on ModelScope. Please refer to the evaluation code and its resulting output file to test your algorithm and to prepare the final submission (.zip) for the Codalab server; a minimal packaging sketch is given after this list.

  • The deadline for result submission is Mar 13 '23 06:00 PM PDT.

  • If you have any questions, please feel free to contact us:

    zhaojian90@u.nus.edu,

    jinlei@bupt.edu.cn,

    lijianan@bit.edu.cn.

    We also set up a WeChat group and QQ group (277498917) for quick communication.
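
As referenced above, a minimal packaging sketch in Python for preparing the final submission. The "results" directory, the per-sequence *.txt naming, and the flat zip layout are assumptions for illustration only; follow the evaluation code and sample output on ModelScope for the exact format expected by the Codalab server.

    import zipfile
    from pathlib import Path

    def pack_submission(result_dir, zip_path="submission.zip"):
        """Collect per-sequence result files into a flat zip for upload to Codalab."""
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in sorted(Path(result_dir).glob("*.txt")):
                zf.write(path, arcname=path.name)

    # Assumed layout: one result file per test sequence inside "results/".
    pack_submission("results")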


Participation requirements

  • The provided test data must NOT be used for training.

  • NO additional training data is allowed for training or pretraining the model. Public datasets such as ImageNet and COCO may be used, but additional UAV-related datasets are BANNED.

  • The submission description should clearly state the algorithm framework.


Award For Each Track

1st-Place: Certificate + 1500 USD

2nd-Place: Certificate + 1000 USD

3rd-Place: Certificate + 500 USD

Award For Paper

Best Paper: Certificate + 500 USD


Metrics

We define the tracking accuracy as:

For frame t, IoU_t is the Intersection over Union (IoU) between the predicted tracking box and the corresponding ground-truth box, and p_t is the predicted visibility flag, which equals 1 when the predicted box is empty and 0 otherwise. v_t is the ground-truth visibility flag of the target, and the indicator function δ(v_t > 0) equals 1 when v_t > 0 and 0 otherwise. The accuracy is averaged over all frames in a sequence, where T denotes the total number of frames and T* denotes the number of frames in which the target is present in the ground truth.
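
Putting these definitions together, the accuracy takes roughly the following form (a reconstruction from the definitions above, written in LaTeX; the exact normalization, in particular how T* enters, should be taken from the official evaluation code on ModelScope):

    \mathrm{acc} = \frac{1}{T} \sum_{t=1}^{T} \Big( \mathrm{IoU}_t \cdot \delta(v_t > 0) + p_t \cdot \big(1 - \delta(v_t > 0)\big) \Big)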


Results Format

For tracking with bounding boxes, please use the following format (one entry per frame):

[
  [x, y, width, height],
  [x, y, width, height],
  [],
  ...,
  [x, y, width, height]
]

Note: box coordinates are floats measured from the top left image corner (and are 0-indexed). An empty list denotes there is no target in the current frame.
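
A minimal Python sketch of writing one sequence's predictions in this format. The tracker output convention (None for an absent target) and the file name are assumptions for illustration; check the baseline output on ModelScope for the exact on-disk layout.

    import json

    def boxes_to_result(per_frame_boxes):
        """Convert per-frame predictions into the required list format.

        Each entry is either (x, y, width, height) in 0-indexed pixel
        coordinates from the top-left corner, or None when the target is
        predicted to be absent (written as an empty list).
        """
        return [list(map(float, box)) if box is not None else [] for box in per_frame_boxes]

    # Example: three frames, with the target predicted absent in the second.
    preds = [(10.0, 20.0, 30.0, 40.0), None, (12.5, 21.0, 30.0, 40.0)]
    with open("sequence_001.txt", "w") as f:  # file name is illustrative only
        json.dump(boxes_to_result(preds), f)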


Dates

Training dataset release: Feb 06 '23 06:00 PM PDT

Test dataset release: Mar 06 '23 06:00 PM PDT

Results Submission: Mar 10 '23 through Mar 13 '23 06:00 PM PDT