The smartphone has been one of the most popular digital devices of the past decades, with more than 300 million units sold worldwide every quarter. Most smartphone vendors, such as Apple, Huawei, and Samsung, launch new flagship smartphones every year. People use smartphone cameras to shoot selfies, film scenery and events, and record videos of family and friends. The specifications of a smartphone's camera and the quality of the pictures it takes are major criteria by which consumers select and buy smartphones. Many smartphone manufacturers also advertise their smartphones by highlighting the strengths and advantages of their cameras. However, how to evaluate the quality of smartphone cameras and the pictures they take remains a problem for both smartphone manufacturers and consumers. Currently, several teams and companies in the market evaluate smartphone camera quality and publish rankings and scores; these scores are graded subjectively by photographers and experts from different aspects, such as exposure, color, noise, and texture. However, subjective assessment is hard to reproduce and difficult to deploy in practical image processing systems.
In the last two decades, objective image quality assessment (IQA) has been widely researched, and a large number of objective IQA algorithms have been designed to estimate image quality automatically and accurately. However, most objective IQA methods are designed to assess the overall perceived quality of images degraded by various simulated distortions, which rarely occur in pictures taken by modern smartphone cameras. These methods are therefore not suitable for smartphone camera quality assessment, while objective evaluation methods designed specifically for this purpose remain relatively rare.
The purpose of this Grand Challenge is to drive image quality assessment efforts towards smartphone camera quality assessment. In this Grand Challenge, participants are expected to develop objective smartphone camera quality assessment models for four different aspects, namely exposure, color, noise, and texture, using the datasets released by the organizers. The goal is to provide reference quality rankings or scores of smartphone cameras to both smartphone manufacturers and consumers.
Participants are asked to submit four computational models that calculate the rankings of smartphone cameras from four aspects: exposure, color, noise, and texture.
The training dataset is composed of 1500 pictures taken of 100 scenes using 15 smartphones covering a wide price range. The dataset includes various challenging scenes, e.g., high-dynamic-range scenes, backlit scenes, night scenes, colorful scenes, portraits, and distant scenes. For the 15 pictures of each scene, four rankings of the quality of the 15 smartphone cameras will be provided, one for each of the four aspects: exposure, color, noise, and texture. The 15 smartphones are anonymized as Device A, Device B, and so on, and the real details of these smartphones will not be provided.
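To illustrate how the per-scene ground truth described above might be handled, the sketch below loads rankings into a scene-to-aspect mapping. The CSV layout (one row per scene and aspect, followed by the ranks of Device A, Device B, and so on) is an assumption for illustration only; the organizers define the actual distribution format.

```python
# Hypothetical loader for per-scene ground-truth rankings.
# Assumed CSV row layout (NOT specified by the organizers):
#   scene_id, aspect, rank_of_Device_A, rank_of_Device_B, ...
import csv
from collections import defaultdict

def load_rankings(csv_path):
    """Return a dict: scene_id -> aspect -> list of device ranks."""
    rankings = defaultdict(dict)
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            scene_id, aspect = row[0], row[1]
            rankings[scene_id][aspect] = [int(r) for r in row[2:]]
    return dict(rankings)
```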
The goal of this database is to evaluate the picture-shooting performance of smartphone cameras designed for ordinary consumers. Therefore, we restored all smartphones to their factory settings and shot the pictures in the default mode. All participants are assumed to use only the image information to design their algorithms. For all pictures, we tried to remove all smartphone-identifying information, for example the Exchangeable image file format (Exif) metadata, which includes the camera manufacturer, camera model, 35mm focal length, and so on. A few smartphones, for example the Mi 9, add a watermark to the bottom left of pictures in the default mode; these watermarks are kept in the final pictures.
In our training dataset,
Participants can download the
Please do not hesitate to contact us via email: firstname.lastname@example.org if you have queries about the dataset.
Participants are free to use the training dataset to train and tune their algorithms as necessary, and may also compute the benchmark rankings as a reference for themselves. Once participants decide to go ahead with the algorithm submission, they need to submit their code to the organizers for performance evaluation.
Release of the evaluation dataset (content only): each proponent then submits the outputs of his/her model in the expected format (when the 15 smartphone camera images of one scene are entered, four rankings of them are output).
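The expected input/output behavior can be sketched as a function that takes the 15 images of one scene and returns one ranking per aspect. This is an illustrative sketch only: the function names and the placeholder scorer (`score_image`) are hypothetical, and the organizers define the actual submission format.

```python
# Hypothetical sketch of the required model interface: 15 images of one
# scene in, four per-aspect rankings (1 = best) out.
from typing import Dict, List

ASPECTS = ("exposure", "color", "noise", "texture")

def score_image(path: str, aspect: str) -> float:
    """Placeholder quality score; a real model replaces this."""
    return float(hash((path, aspect)) % 1000)

def rank_scene(image_paths: List[str]) -> Dict[str, List[int]]:
    """Rank the 15 devices of one scene for each of the four aspects."""
    assert len(image_paths) == 15, "one image per device, Device A..O"
    rankings = {}
    for aspect in ASPECTS:
        # Score each image for this aspect, then convert scores to a
        # rank per device (rank 1 = highest score).
        scores = [score_image(p, aspect) for p in image_paths]
        order = sorted(range(15), key=lambda i: scores[i], reverse=True)
        ranks = [0] * 15
        for rank, idx in enumerate(order, start=1):
            ranks[idx] = rank
        rankings[aspect] = ranks
    return rankings
```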
The output of the model on the evaluation dataset must be submitted online before the mentioned date.
The organizers will compute the performance of each submitted model and release the results to the proponent, so that the proponent can report them in publications and presentations.
After the end of the challenge, the tools for computing the benchmark rankings will be made available online for free, for easy use by the research community.
The predicted rankings generated by a candidate algorithm will be compared with the ground-truth subjective rankings using the Spearman Rank-Order Correlation Coefficient (SRCC). For each testing scene, four SRCC values will be computed, one for each of the four aspects: exposure, color, noise, and texture. The final result is the average of the SRCC values over all scenes in the testing set.
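The evaluation procedure above can be sketched in pure Python. Since the device rankings are permutations without ties, the closed-form Spearman formula rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) applies directly; the scene/aspect data layout used here is illustrative, not the organizers' format.

```python
# Sketch of the SRCC-based evaluation: per-scene, per-aspect SRCC,
# averaged over all scenes and the four aspects.

ASPECTS = ("exposure", "color", "noise", "texture")

def srcc(rank_pred, rank_true):
    """Spearman rank-order correlation for two tie-free rankings."""
    assert len(rank_pred) == len(rank_true)
    n = len(rank_true)
    d2 = sum((p - t) ** 2 for p, t in zip(rank_pred, rank_true))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def evaluate(predictions, ground_truth):
    """Average SRCC over all scenes and aspects.

    Both arguments map scene_id -> aspect -> ranking list.
    """
    scores = [
        srcc(predictions[scene][aspect], ground_truth[scene][aspect])
        for scene in ground_truth
        for aspect in ASPECTS
    ]
    return sum(scores) / len(scores)
```

Note that the tie-free formula is valid here because each ranking is a permutation of 1..15; rankings with ties would require the general rank-correlation computation instead.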
By March 13, 2020, all participants must submit their paper and test results (i.e., the completed rankings).
By March 14, 2020, all participants must submit their (testing) code and models, which serve for verification purposes.
Results and code are submitted via email to email@example.com.