Chee Seng Chan

Senior Lecturer

cs.chan at um edu my
cs.chan at ieee org

I received my Ph.D. degree from the University of Portsmouth, U.K. in 2008. Currently, I am a Senior Lecturer at the Faculty of Computer Science & Information Technology (FCSIT), University of Malaya. I previously held research appointments at the Universities of Ulster and Portsmouth, U.K.

In general, my research interests are fuzzy qualitative reasoning and computer vision, with a focus on image/video content analysis and human-robot interaction.

    September 2015:
    One(1) paper accepted in ACPR'15, Kuala Lumpur.

    At nighttime, visibility is greatly decreased, causing the image foreground and background to appear blended together. However, ambient light is always present in the natural environment and, as a consequence, it creates some contrast within the darkness. In this paper, we formulated a visual analytic method that automatically unveils the contrast of dark (i.e. nighttime) images, revealing the "hidden" contents. We utilize the traits of image representations obtained from computer vision techniques through a learning-based inversion algorithm, eliminating the reliance on a night vision camera and at the same time minimizing the need for human intervention (i.e. manually fine-tuning the gamma correction using Adobe Photoshop software). Experiments using the new Malaya Pedestrian in the Dark (MyPD) dataset that we collected from the website Flickr, and a comparison to conventional methods such as image integral and gamma correction, show the efficacy of the proposed method. Additionally, we show the potential of this framework in applications that would benefit public safety.
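
    The gamma correction mentioned above as a conventional baseline can be sketched in a few lines. This is a generic illustration of the power-law transform, not the paper's learning-based method; the function name and the gamma value of 0.4 are arbitrary choices for the example.

    ```python
    import numpy as np

    def gamma_correction(image, gamma=0.4):
        """Brighten a dark image via the power-law (gamma) transform.

        `image` holds intensities in [0, 1]; gamma < 1 lifts dark values
        much more than bright ones, increasing contrast in shadows.
        """
        return np.clip(image, 0.0, 1.0) ** gamma

    # A dark pixel (0.1) is lifted considerably; a bright one (0.9) barely moves.
    pixels = np.array([0.1, 0.5, 0.9])
    print(gamma_correction(pixels, gamma=0.4))
    ```

    The drawback the paper targets is visible here: gamma must be tuned by hand per image, whereas the proposed inversion algorithm removes that manual step.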

    August 2015:
    Our paper on "Early Human Actions Detection using BK Sub-triangle Product" was nominated for the Best Student Paper Award at FUZZ-IEEE 2015, Istanbul, Turkey. Congratulations to Ekta and Chee Kau.
    May 2015:
    One(1) paper (oral) accepted in ICIP'15, Quebec City.

    This paper studies convolutional neural networks (CNN) to learn unsupervised feature representations for 44 different plant species, collected at the Royal Botanic Gardens, Kew, England. To gain intuition on the features chosen by the CNN model (as opposed to a 'black box' solution), a visualisation technique based on deconvolutional networks (DN) is utilized. It is found that venations of different orders have been chosen to uniquely represent each of the plant species. Experimental results using these CNN features with different classifiers show consistency and superiority compared to state-of-the-art solutions which rely on hand-crafted features.
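
    The pipeline of extracting convolutional features and feeding them to a classifier can be illustrated with a toy sketch. This is not the paper's model: the paper learns deep CNN filters, whereas here two fixed edge-like kernels stand in for learned filters, and the function names are invented for the example.

    ```python
    import numpy as np

    def conv_features(image, kernels):
        """Toy CNN-style feature extractor: valid 2-D convolution with each
        kernel, then ReLU and global average pooling, giving one scalar
        feature per kernel."""
        h, w = image.shape
        feats = []
        for k in kernels:
            kh, kw = k.shape
            out = np.zeros((h - kh + 1, w - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
            feats.append(np.maximum(out, 0).mean())  # ReLU + global average pool
        return np.array(feats)

    # Hypothetical horizontal/vertical gradient kernels in place of learned filters
    kernels = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
    leaf = np.random.default_rng(0).random((8, 8))  # stand-in for a leaf image
    features = conv_features(leaf, kernels)
    print(features.shape)  # one feature per kernel
    ```

    In the paper, the resulting feature vectors (from a much deeper, learned network) are what get passed to different off-the-shelf classifiers for the 44-species recognition task.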

Older News Archive
Crowd Dataset for our ICPR (2014) paper is now online. Download here.
Curve Text (CUTE80) Dataset for our ESWA (2014) paper is now online. Download here.
Tracking Dataset (Malaya Abrupt Motion (MAMo)) for our Information Science (2014) paper is now online. Download here.
Human Skin Detection Dataset (Pratheepan) for our IEEE T-II (2012) paper is now online. Download here.