Panasonic R&D Center Singapore has attracted great leaders as much as it has great talent. These key figures lead major research and development projects at the forefront of their respective fields. With their authority and notable achievements in the industry, we are proud to count them as a core part of the team.
An efficient method to find a triangle with the least sum of distances from its vertices to the covered point
Gouyi Chi, KengLiang Loi, Pongsak Lasang
Proc. of the 11th International Conference on Computer Vision Systems, 2017.07
Depth sensors are used to acquire a scene from various viewpoints, and the resulting depth images are integrated into a 3D model. Due to surface reflectance properties, absorption, occlusions and accessibility limitations, certain areas of a scene are generally not sampled, leading to holes and introducing undesirable artifacts. An efficient algorithm for filling holes in organized depth images is therefore of high significance. Points far away from a covered point usually carry little reliable spatial information, owing to contamination by outliers and distortion. This paper presents an algorithm to find a triangle whose vertices are nearest to the covered point.
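The core geometric step can be sketched as follows: among candidate sample points near a hole, choose the triangle whose vertices minimize the summed distance to the query ("covered") point. This brute-force Python version over the k nearest candidates is illustrative only; the paper's efficient search strategy and the covering constraint are not reproduced here.

```python
# Minimal sketch, not the paper's method: pick the triangle whose
# vertices have the least sum of distances to a query point, searched
# among the k nearest valid sample points.
from itertools import combinations
import numpy as np

def least_distance_triangle(query, points, k=8):
    """Return the 3 rows of `points` minimizing the summed Euclidean
    distance to `query`. Needs at least 3 points; a real implementation
    would also reject degenerate (collinear) triangles and require the
    triangle to cover the query point, per the paper's title."""
    d = np.linalg.norm(points - query, axis=1)
    nearest = np.argsort(d)[:k]              # candidate vertex indices
    best = min(combinations(nearest, 3),     # all candidate triangles
               key=lambda tri: d[list(tri)].sum())
    return points[list(best)]

# Usage: a hole pixel's depth could then be interpolated from the
# returned triangle, e.g. by barycentric weighting of vertex depths.
```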
“FootSnap”: A New Mobile Application for Standardizing Diabetic Foot Images
Moi Hoon Yap, Katie E Chatwin, Choon-Ching Ng, Caroline A Abbott, Frank L Bowling, Satyan Rajbhandari, Andrew JM Boulton, Neil D Reeves
Journal of Diabetes Science and Technology, 2018, Vol. 12(1), 169–173, 2017.06
Background: We describe the development of a new mobile app called “FootSnap,” to standardize photographs of diabetic feet, and test its reliability on different occasions and between different operators. Methods: FootSnap was developed by a multidisciplinary team for use with the iPad. The plantar surface of 30 diabetic feet and 30 nondiabetic control feet was imaged using FootSnap on two separate occasions by two different operators. Reproducibility of foot images was determined using the Jaccard similarity index (JSI). Results: High intra- and interoperator reliability was demonstrated, with JSI values of 0.89-0.91 for diabetic feet and 0.93-0.94 for control feet. Conclusions: Similarly high reliability between groups supports FootSnap as a standardized imaging tool for diabetic feet.
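The reproducibility measure named in the abstract, the Jaccard similarity index, is straightforward to compute on binary foot outlines. A minimal sketch, assuming the foot masks have already been segmented from two FootSnap images of the same foot (the segmentation step itself is not shown):

```python
# Jaccard similarity index (JSI) between two binary masks.
import numpy as np

def jaccard_index(mask_a, mask_b):
    """JSI = |A intersect B| / |A union B| for boolean masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are identical by convention
    return np.logical_and(a, b).sum() / union

# A JSI of 0.89-0.94, as reported above, means the two outlines
# overlap on roughly 90% of their combined area.
```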
Video-based Person Re-identification with Accumulative Motion Context
Hao Liu, Zequn Jie, Jayashree Karlekar, Meibin Qi, Jianguo Jiang, Shuicheng Yan, Jiashi Feng
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), Volume 28, Issue 10, Oct. 2018
Multi-layer Age Regression for Face Age Estimation
Choon-Ching Ng, Yi-Tseng Cheng, Gee-Sern Hsu, Moi Hoon Yap
IAPR International Conference on Machine Vision Applications (MVA), 2017, 2017.05
Facial features convey a wealth of personal information that promotes and regulates our social interactions. Age prediction using a single-layer estimator, such as an aging subspace or a hybrid pattern, is limited by the complexity of human faces. In this work, we propose Multi-layer Age Regression (MAR), in which face age is predicted by a coarse-to-fine estimation using global and local features. In the first layer, Support Vector Regression (SVR) performs a between-group prediction from the parameters of a Facial Appearance Model (FAM). In the second layer, a within-group estimation is performed using FAM, Bio-Inspired Features (BIF), Kernel-based Local Binary Patterns (KLBP) and Multi-scale Wrinkle Patterns (MWP). The performance of MAR is assessed on four benchmark datasets: FGNET, MORPH, FERET and PAL. Results show that MAR outperforms the state of the art on FERET with a Mean Absolute Error (MAE) of 3.00 (±4.14).
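A minimal sketch of the coarse-to-fine idea, using scikit-learn's SVR. The feature extractors (FAM, BIF, KLBP, MWP) are assumed to be available as precomputed vectors, and the age-grouping rule here is an illustrative stand-in rather than the paper's exact scheme:

```python
# Two-layer regression sketch: a global SVR assigns a coarse age group,
# then a per-group SVR on fused local features refines the estimate.
import numpy as np
from sklearn.svm import SVR

def train_mar(fam, local_feats, ages, n_groups=4):
    """Layer 1: SVR on global FAM features gives a coarse age, which
    buckets each sample into one of n_groups age groups (0-80 years,
    an assumed range). Layer 2: one SVR per group on local features."""
    coarse = SVR().fit(fam, ages)
    width = 80 / n_groups
    groups = np.clip((coarse.predict(fam) // width).astype(int), 0, n_groups - 1)
    fine = {}
    for g in range(n_groups):
        idx = groups == g
        if idx.sum() > 1:
            fine[g] = SVR().fit(local_feats[idx], ages[idx])
    return coarse, fine

def predict_mar(coarse, fine, fam_vec, local_vec, n_groups=4):
    """Route a single sample through layer 1, then refine with the
    matching layer-2 model (falling back to the coarse model)."""
    width = 80 / n_groups
    g = int(np.clip(coarse.predict(fam_vec.reshape(1, -1))[0] // width,
                    0, n_groups - 1))
    if g in fine:
        return fine[g].predict(local_vec.reshape(1, -1))[0]
    return coarse.predict(fam_vec.reshape(1, -1))[0]
```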
Unconstrained face recognition performance evaluations have, in the last couple of years, traditionally focused on the Labeled Faces in the Wild (LFW) dataset for imagery and the YouTube Faces (YTF) dataset for videos. Spectacular progress in this field has resulted in saturated verification and identification accuracies on those benchmark datasets. In this paper, we propose a unified learning framework named Transferred Deep Feature Fusion (TDFF), targeting the new IARPA Janus Benchmark A (IJB-A) face recognition dataset released by the NIST face challenge. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations, which are much harder than the LFW and YTF datasets. Inspired by transfer learning, we train two advanced deep convolutional neural networks (DCNNs) with two different large datasets in the source domain, respectively. By exploiting the complementarity of the two distinct DCNNs, deep feature fusion is applied after feature extraction in the target domain. Then, template-specific linear SVMs are adopted to enhance the discrimination of the framework. Finally, multiple matching scores corresponding to different templates are merged as the final result. This simple unified framework exhibits excellent performance on the IJB-A dataset. Based on the proposed approach, we have submitted our IJB-A results to the National Institute of Standards and Technology (NIST) for official evaluation. Moreover, by introducing new data and an advanced neural architecture, our method outperforms the state of the art by a wide margin on the IJB-A dataset.
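The fusion-plus-SVM stage can be sketched as below. The two DCNN embeddings are assumed precomputed, a one-vs-rest linear SVM stands in for the paper's template-specific SVMs, and all names here are placeholders:

```python
# Sketch of deep feature fusion followed by a per-template linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

def fuse(features_net_a, features_net_b):
    """Concatenate L2-normalized embeddings from the two networks."""
    a = features_net_a / np.linalg.norm(features_net_a, axis=1, keepdims=True)
    b = features_net_b / np.linalg.norm(features_net_b, axis=1, keepdims=True)
    return np.hstack([a, b])

def template_svm_scores(template_feats, negative_feats, probe_feats):
    """Train one linear SVM per template (template images as positives,
    a shared negative set), then score the probe features; scores from
    multiple templates would be merged downstream."""
    X = np.vstack([template_feats, negative_feats])
    y = np.r_[np.ones(len(template_feats)), np.zeros(len(negative_feats))]
    clf = LinearSVC(C=1.0).fit(X, y)
    return clf.decision_function(probe_feats)
```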
Objective Micro-Facial Movement Detection Using FACS-Based Regions and Baseline Evaluation
Adrian K. Davison, Cliff Lansley, Choon-Ching Ng, Kevin Tan, Moi Hoon Yap
arXiv, 2016.12
Micro-facial expressions are regarded as an important human behavioural event that can highlight emotional deception. Spotting these movements is difficult for humans and machines, but research into using computer vision to detect subtle facial expressions is growing in popularity. This paper proposes an individualised baseline micro-movement detection method based on a 3D Histogram of Oriented Gradients (3D HOG) temporal difference. We define a face template consisting of 26 regions based on the Facial Action Coding System (FACS). We extract the temporal features of each region using 3D HOG. Then, we use the Chi-square distance to find subtle facial motion in the local regions. Finally, an automatic peak detector is used to detect micro-movements above the newly proposed adaptive baseline threshold. The performance is validated on two FACS-coded datasets: SAMM and CASME II. This objective method focuses on the movement of the 26 face regions. When compared with the ground truth, the best results were AUCs of 0.7512 and 0.7261 on SAMM and CASME II, respectively. The results show that 3D HOG outperformed state-of-the-art feature representations, Local Binary Patterns from Three Orthogonal Planes and Histograms of Oriented Optical Flow, for micro-movement detection.
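Two of the steps named above, the Chi-square distance between per-region 3D HOG histograms and peak detection against an adaptive baseline, can be sketched as follows. The 3D HOG extraction itself is assumed, and the baseline rule shown (mean plus a fraction of the peak-to-mean range) is an illustrative stand-in for the paper's threshold:

```python
# Chi-square temporal difference and adaptive-baseline peak detection.
import numpy as np
from scipy.signal import find_peaks

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def detect_micro_movements(distances, p=0.5):
    """Given a 1D array of per-frame Chi-square distances for one face
    region, flag frames that peak above an adaptive baseline derived
    from the sequence itself."""
    baseline = distances.mean() + p * (distances.max() - distances.mean())
    peaks, _ = find_peaks(distances, height=baseline)
    return peaks
```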
This paper presents a modular lightweight network model for detecting road objects such as cars, pedestrians and cyclists, especially when they are far away from the camera and small in size. Great advances have been made with deep networks, but small-object detection is still a challenging task. To address this problem, the majority of existing methods use complicated networks or larger image sizes, which generally leads to higher computation cost. The proposed network model, referred to as the modular feature fusion detector (MFFD), uses a fast and efficient network architecture for detecting small objects. The contributions lie in the following aspects: 1) Two base modules are designed for efficient computation: the Front module reduces information loss from the raw input images, and the Tinier module decreases model size and computation cost while preserving detection accuracy. 2) By stacking the base modules, we design a context feature fusion framework for multi-scale object detection. 3) The proposed method is efficient in terms of model size and computation cost, making it applicable to resource-limited devices such as embedded systems for advanced driver assistance systems (ADAS). Comparisons with the state of the art on the challenging KITTI dataset reveal the superiority of the proposed method; in particular, 100 fps can be achieved on embedded GPUs such as the Jetson TX2.
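A minimal PyTorch sketch of the two base modules described above. The exact layer widths and the full context feature fusion framework are not specified in the abstract, so the channel counts and layer choices here are illustrative:

```python
# Illustrative Front and Tinier modules in the spirit of the abstract.
import torch.nn as nn

class FrontModule(nn.Module):
    """Early layers meant to preserve information from the raw input:
    a stride-2 conv followed by a 3x3 conv, instead of aggressive pooling."""
    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

class TinierModule(nn.Module):
    """A bottleneck block that cuts channels with a 1x1 conv before a
    3x3 conv, trading model size and FLOPs for little accuracy loss."""
    def __init__(self, in_ch, out_ch, squeeze=4):
        super().__init__()
        mid = max(in_ch // squeeze, 8)
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)

# Stacking a FrontModule and several TinierModules at decreasing
# resolutions yields multi-scale feature maps that a detector could
# fuse for small-object detection, as the abstract describes.
```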
Optimal Depth Recovery using Image Guided TGV with Depth Confidence for High-Quality View Synthesis
Pongsak Lasang, Wuttipong Kumwilaisak, Yazhou Liu and Shengmei Shen
Journal of Visual Communication and Image Representation, 2016.05
This paper describes our proposed method targeting the MSR Image Recognition Challenge MS-Celeb-1M. The challenge is to recognize one million celebrities from their face images captured in the real world. It provides a large-scale dataset crawled from the Web, containing a large number of celebrities with many images per subject. Given a new test image, the challenge requires an identity for the image and a corresponding confidence score. To complete the challenge, we propose a two-stage approach consisting of data cleaning and multi-view deep representation learning. Data cleaning effectively reduces the noise level of the training data and thus improves the performance of deep-learning-based face recognition models. Multi-view representation learning makes the learned face representations more specific and discriminative, so the difficulty of recognizing faces out of a huge number of subjects is substantially relieved. Our proposed method achieves a coverage of 46.1% at 95% precision on the random set and a coverage of 33.0% at 95% precision on the hard set of this challenge.
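The reported metric, coverage at a fixed precision, can be sketched as follows: rank predictions by confidence, then find the largest answered fraction whose precision still meets the target. A minimal sketch:

```python
# Coverage at a fixed precision over confidence-ranked predictions.
import numpy as np

def coverage_at_precision(confidences, correct, target=0.95):
    """Return the largest fraction of queries that can be answered
    (coverage) such that precision over the answered set >= target.
    `correct` holds 1/0 per prediction against the ground truth."""
    order = np.argsort(-np.asarray(confidences))   # most confident first
    hits = np.asarray(correct, dtype=float)[order]
    precision = np.cumsum(hits) / np.arange(1, len(hits) + 1)
    ok = np.where(precision >= target)[0]
    return 0.0 if len(ok) == 0 else (ok[-1] + 1) / len(hits)
```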