Challenge Description

The smartphone has been one of the most popular digital devices of the past decades, with more than 300 million smartphones sold worldwide every quarter. Most smartphone vendors, such as Apple, Huawei and Samsung, launch new flagship smartphones every year. People use smartphone cameras to shoot selfies, film scenery or events, and record videos of family and friends. The specifications of the smartphone camera and the quality of the pictures it takes are major criteria for consumers when selecting and buying smartphones. Many smartphone manufacturers also advertise their smartphones by highlighting the strengths and advantages of their cameras. However, how to evaluate the quality of smartphone cameras and the pictures they take remains a problem for both smartphone manufacturers and consumers. Currently there are several teams and companies in the market that evaluate the quality of smartphone cameras and publish rankings and scores; these scores are graded subjectively by photographers and experts from different aspects, such as exposure, color, noise and texture. However, subjective assessment is hard to reproduce and difficult to deploy in practical image processing systems.

In the last two decades, objective image quality assessment (IQA) has been widely researched, and a large number of objective IQA algorithms have been designed to estimate the quality of images automatically and accurately. However, most objective IQA methods are designed to assess the overall perceived quality of images degraded by various simulated distortions, which rarely occur in pictures taken by modern smartphone cameras. These methods are therefore not suitable for the task of smartphone camera quality assessment, while objective evaluation methods designed specifically for this purpose remain relatively rare.

The purpose of this Grand Challenge is to drive the efforts of image quality assessment towards smartphone camera quality assessment. Participants are expected to develop objective smartphone camera quality assessment models covering four different aspects, namely exposure, color, noise and texture, using the datasets released by the organizers. The goal is to provide reference quality rankings or scores of smartphone cameras to both smartphone manufacturers and consumers.

Participants are asked to submit four computational models that calculate the rankings of smartphone cameras in four aspects: exposure, color, noise and texture.

Dataset/APIs/Library URL

The training dataset is composed of 1500 pictures of 100 scenes taken with 15 smartphones. The 15 smartphones cover a wide price range. The dataset includes various challenging scenes, e.g. high dynamic range scenes, backlit scenes, night scenes, colorful scenes, portraits and distant scenes. For the 15 pictures of the same scene, four rankings of the quality of the 15 smartphone cameras in the four aspects (exposure, color, noise and texture) will be provided. The 15 smartphones are anonymized as Device A, Device B, and so on, and the real details of these smartphones will not be disclosed.

The goal of this database is to evaluate the picture-shooting performance of smartphone cameras designed for ordinary consumers. Therefore, we restored all smartphones to their factory settings and shot the pictures in the default mode. All participants are assumed to use only the image information to design their algorithms. For all pictures, we tried to remove all smartphone information, such as the Exchangeable image file format (Exif) metadata, which includes the camera manufacturer, camera model, 35mm focal length and so on. For a few smartphones, for example the Mi 9, a watermark is added at the bottom left of the picture in the default mode; these watermarks are kept in the final pictures.
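Participants who want to double-check the absence of Exif metadata can inspect a picture themselves. A minimal sketch using Pillow follows; the file name is hypothetical and Pillow is assumed to be installed:

    # Sketch: verify that a picture carries no Exif metadata, using Pillow.
    from PIL import Image

    img = Image.open("scene001_deviceA.jpg")  # hypothetical file name
    exif = img.getexif()
    if len(exif) == 0:
        print("No Exif information found.")
    else:
        print("Remaining Exif tags:", dict(exif))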

In our training dataset, a ranking of 1 means the best performance and 15 the worst. When several smartphones perform equally well, we provide their average ranking. For example, if Device A = B = C and their positions in the whole ranking are 3, 4 and 5, all three are assigned ranking 4.
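This tie-handling rule corresponds to the standard "average" method for ranking data. A minimal sketch in Python, using hypothetical quality scores where a lower score means better quality:

    # Sketch of the tie-handling rule with scipy's "average" ranking method.
    from scipy.stats import rankdata

    # Hypothetical scores for Devices A..E; Devices A, B and C tie for
    # positions 3, 4 and 5 of the ranking.
    scores = [3.0, 3.0, 3.0, 1.0, 2.0]
    ranks = rankdata(scores, method="average")
    print(ranks)  # [4. 4. 4. 1. 2.] -- A = B = C share the average rank 4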

The description of this database is given in the following paper, which should be cited when using the dataset: http://arxiv.org/abs/2003.01299.

Evaluation Criteria

Participants are free to use the training dataset for training and tuning their algorithms as necessary, and may also compute the benchmark rankings as a reference for themselves. When participants decide to go ahead with the algorithm submission, they need to submit their code to the organizers for performance evaluation.

After the release of the evaluation dataset (content only), the proponent submits the outputs of his/her model in the expected format: given the 15 smartphone camera pictures of one scene as input, the model outputs four rankings of them, one per aspect.
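As a rough illustration only, a submission could expose an interface along the following lines. The function name, argument types and output layout below are our assumptions, not the official specification, which will accompany the evaluation dataset:

    # Hypothetical interface sketch for a submission.
    from typing import Dict, List

    ASPECTS = ["exposure", "color", "noise", "texture"]

    def rank_scene(image_paths: List[str]) -> Dict[str, List[int]]:
        """Given the 15 pictures of one scene (one per device), return a
        ranking (1 = best, 15 = worst) for each of the four aspects."""
        n = len(image_paths)  # expected to be 15
        rankings = {}
        for aspect in ASPECTS:
            # Placeholder: a real model would score each image per aspect
            # and convert the scores into a ranking.
            rankings[aspect] = list(range(1, n + 1))
        return rankings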

The outputs of the model on the evaluation dataset must be submitted online before the model submission deadline listed under Important Dates.

The organizers will compute the performance of the submitted models and release it to the proponents, so that the proponents can report the results in publications and presentations.

After the end of the challenge, the tools for computing the benchmark rankings will be made freely available online for easy use by the research community.

The rankings predicted by a candidate algorithm will be compared with the ground-truth subjective rankings using the Spearman rank-order correlation coefficient (SRCC). For each testing scene, four SRCC values will be computed, one for each of the four aspects: exposure, color, noise and texture. The final result is the average of the SRCC values over all scenes in the testing set.
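For self-checking on the training set, this scoring procedure can be reproduced with scipy.stats.spearmanr. A minimal sketch, with variable names of our choosing:

    # Sketch of the per-aspect scoring: average SRCC over all scenes.
    from scipy.stats import spearmanr

    def aspect_score(predicted_rankings, ground_truth_rankings):
        """Both arguments are lists of per-scene rankings, where each
        ranking is a sequence with one entry per device (15 in the
        challenge; 4 in the toy example below)."""
        srccs = [spearmanr(pred, gt).correlation
                 for pred, gt in zip(predicted_rankings, ground_truth_rankings)]
        return sum(srccs) / len(srccs)

    # Toy example with two scenes of 4 devices each:
    pred = [[1, 2, 3, 4], [2, 1, 4, 3]]
    gt   = [[1, 2, 4, 3], [1, 2, 3, 4]]
    print(aspect_score(pred, gt))  # 0.7 = mean of per-scene SRCCs 0.8 and 0.6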

Important Dates

  • Registration Deadline: March 12th 2020
  • Paper submission Deadline (for interested participants): March 27th 2020
  • Model submission Deadline: March 27th 2020
  • Author Notification: April 15th 2020
  • Evaluation results announcement: April 15th 2020
  • Camera ready Submission: April 29th 2020

Registration

If you plan to participate in our competition, please send the name of your team, the names and email addresses of the participants, and the name of your department via email to zhuwenhan823@sjtu.edu.cn.

Submission Guidelines

By March 27, 2020, all participants are asked to submit their paper and test results (i.e. the completed rankings).

By March 27, 2020, all participants are asked to submit their (testing) code and models, which serve verification purposes.

Results and code are submitted via email to zhuwenhan823@sjtu.edu.cn.

Grand Challenge Results and Awards

Grand Challenge Results

Team Name      Color    Exposure  Noise    Texture  Overall  Rank
CUC-IMC        0.341    0.478     0.657    0.588    0.516    1
3721           0.381    0.442     0.569    0.427    0.455    2
ECNUfirst      0.273    0.428     0.443    0.335    0.370    3
SEUZhou        0.162    0.051     0.128    0.422    0.191    4
PhysicalYuan   0.378    0.025     0.188    0.158    0.187    5
UMXu           0.124    0.038     0.010    0.187    0.090    6

Awards

  • ICME2020 Grand Challenge on QA4Camera: Quality Assessment for Smartphone Camera --- Best overall performance: CUC-IMC
  • ICME2020 Grand Challenge on QA4Camera: Quality Assessment for Smartphone Camera --- Best performance on the Color aspect: 3721

P.S.: Because Team CUC-IMC achieved the best performance on the Exposure, Noise and Texture aspects as well as the best overall performance, we grant Team CUC-IMC a single representative award for the best overall performance.

Host Organization

Shanghai Jiao Tong University (SJTU, China)

Organizers

Guangtao Zhai

Professor

zhaiguangtao@sjtu.edu.cn

Wenhan Zhu

PhD student

zhuwenhan823@sjtu.edu.cn

Zongxi Han

PhD student

zongxihan@sjtu.edu.cn

Xiongkuo Min

Postdoctoral researcher

minxiongkuo@sjtu.edu.cn

Tao Wang

Undergraduate

f1603011.wangtao@sjtu.edu.cn

Zicheng Zhang

Undergraduate

zzc1998@sjtu.edu.cn

Wei Lu

Undergraduate

SJTU-Luwei@sjtu.edu.cn

Contacts

Wenhan Zhu

PhD student

zhuwenhan823@sjtu.edu.cn

Tao Wang

Undergraduate

f1603011.wangtao@sjtu.edu.cn

Zicheng Zhang

Undergraduate

zzc1998@sjtu.edu.cn