Chee Seng Chan

Senior Lecturer

cs.chan at um edu my
cs.chan at ieee org

I received my Ph.D. degree from the University of Portsmouth, U.K., in 2008. Currently, I am a Senior Lecturer at the Faculty of Computer Science & Information Technology (FCSIT), University of Malaya. I previously held research appointments at the Universities of Ulster and Portsmouth, U.K.

In general, my research interests are fuzzy qualitative reasoning and computer vision, with a focus on image/video content analysis and human-robot interaction.

    September 2015:
    One (1) paper accepted at ACPR'15, Kuala Lumpur.

    At nighttime, reduced visibility can cause foreground and background to appear to blend together. However, ambient light is always present in the natural environment and, as a consequence, it creates some contrast in darkness. In this paper, we formulate a visual analytic method that automatically unveils the contrast of dark (i.e. nighttime) images, revealing the "hidden" contents. We utilize the traits of image representations obtained from computer vision techniques through a learning-based inversion algorithm, eliminating the reliance on a night-vision camera and, at the same time, minimizing the need for human intervention (i.e. manually fine-tuning the gamma correction in Adobe Photoshop). Experiments on the new Malaya Pedestrian in the Dark (MyPD) dataset, which we collected from Flickr, and comparisons with conventional methods such as the image integral and gamma correction show the efficacy of the proposed method. Additionally, we show the potential of this framework in applications that could benefit public safety.
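
    As a rough illustrative sketch (not the learning-based inversion proposed in the paper), the gamma-correction baseline mentioned above can be written in a few lines of Python; the gamma value here is an arbitrary example:

        import numpy as np

        def gamma_correct(image, gamma=2.2):
            # Brighten a dark 8-bit image with a power-law (gamma) curve;
            # gamma > 1 lifts the dark regions. Illustrative baseline only.
            normalized = image.astype(np.float32) / 255.0   # scale to [0, 1]
            corrected = np.power(normalized, 1.0 / gamma)   # power-law transform
            return (corrected * 255.0).astype(np.uint8)     # back to 8-bit range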

    August 2015:
    Our paper on "Early Human Actions Detection using BK Sub-triangle Product" was nominated for the Best Student Paper Award at FUZZ-IEEE 2015, Istanbul, Turkey. Congratulations to Ekta and Chee Kau.
    May 2015:
    One (1) paper (oral) accepted at ICIP'15, Quebec City.

    This paper studies convolutional neural networks (CNN) to learn unsupervised feature representations for 44 different plant species collected at the Royal Botanic Gardens, Kew, England. To gain intuition on the features chosen by the CNN model (as opposed to a 'black box' solution), a visualisation technique based on deconvolutional networks (DN) is utilized. It is found that venations of different orders are chosen to uniquely represent each of the plant species. Experimental results using these CNN features with different classifiers show consistency and superiority compared to state-of-the-art solutions that rely on hand-crafted features.
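
    As a rough sketch of the "CNN features + conventional classifier" pipeline described above (not the authors' implementation; the pre-trained VGG-16 feature extractor and the linear SVM are both illustrative assumptions), in Python:

        import torch
        from torchvision import models, transforms
        from sklearn.svm import LinearSVC

        # Pre-trained CNN used as a fixed feature extractor (illustrative choice).
        cnn = models.vgg16(pretrained=True)
        cnn.classifier = torch.nn.Sequential(*list(cnn.classifier.children())[:-1])  # drop final layer
        cnn.eval()

        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        def extract_feature(pil_image):
            # Return a 4096-D CNN feature vector for one leaf image.
            with torch.no_grad():
                batch = preprocess(pil_image).unsqueeze(0)   # add batch dimension
                return cnn(batch).squeeze(0).numpy()

        # Hypothetical training data: lists of PIL leaf images and species labels.
        # features = [extract_feature(img) for img in train_images]
        # classifier = LinearSVC().fit(features, train_labels)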


Older News Archive
Crowd Dataset for our ICPR (2014) paper is now online. Download here.
Curve Text (CUTE80) Dataset for our ESWA (2014) paper is now online. Download here.
Tracking Dataset (Malaya Abrupt Motion (MAMo)) for our Information Science (2014) paper is now online. Download here.
Human Skin Detection Dataset (Pratheepan) for our IEEE T-II (2012) paper is now online. Download here.