A. Lynn Abbott

A. Lynn Abbott

Virginia Polytechnic Institute and State University

H-index: 30

North America, United States

About A. Lynn Abbott

A. Lynn Abbott is a distinguished researcher at Virginia Polytechnic Institute and State University who specializes in Computer Vision, Biometrics, and Machine Learning. He has an exceptional h-index of 30 overall, with a recent h-index of 13 (since 2020).

His recent articles reflect a diverse array of research interests and contributions to the field:

Corrigendum to" Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification"[Behav. Therapy 50 (4)(2019) 828-838]

Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification (vol 50, pg 828, 2019)

A temporal encoder-decoder approach to extracting blood volume pulse signal morphology from face videos

Characterization, detection, and segmentation of work-zone scenes from naturalistic driving data

Camera-based recovery of cardiovascular signals from unconstrained face videos using an attention network

Color invariant skin segmentation

Leveraging SE(3) equivariance for self-supervised category-level object pose estimation from point clouds

A Compositional Approach to Occlusion in Panoptic Segmentation

A. Lynn Abbott Information

University

Virginia Polytechnic Institute and State University

Position

Professor, Electrical & Computer Engineering

Citations (all)

3344

Citations (since 2020)

756

Cited By

2461

hIndex (all)

30

hIndex (since 2020)

13

i10Index (all)

64

i10Index (since 2020)

20

Email

University Profile Page

Virginia Polytechnic Institute and State University

A. Lynn Abbott Skills & Research Interests

Computer Vision

Biometrics

Machine Learning

Top articles of A. Lynn Abbott

Corrigendum to" Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification"[Behav. Therapy 50 (4)(2019) 828-838]

Authors

Nicole N Capriola-Hall, Andrea Trubanova Wieckowski, Deanna Swain, Sherin Aly, Amira Youssef, A Lynn Abbott, Susan W White

Journal

Behavior therapy

Published Date

2024/3

Corrigendum to "Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification" [Behav. Therapy 50(4) (2019) 828-838] Corrigendum to "Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification" [Behav. Therapy 50(4) (2019) 828-838] Behav Ther. 2024 Mar;55(2):430. doi: 10.1016/j.beth.2024.01.003. Epub 2024 Jan 30. Authors Nicole N Capriola-Hall 1 , Andrea Trubanova Wieckowski 2 , Deanna Swain 2 , Sherin Aly 3 , Amira Youssef 4 , A Lynn Abbott 2 , Susan W White 5 Affiliations 1 University of Alabama. Electronic address: nncapriola@crimson.ua.edu. 2 Virginia Tech. 3 Alexandria University. 4 City of Scientific Research and Technological Applications; Virginia Tech. 5 University of Alabama. PMID: 38418052 DOI: 10.1016/j.beth.2024.01.003 No abstract available Publication types Published Erratum …

Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification (vol 50, pg 828, 2019)

Authors

Nicole N Capriola-Hall, Andrea Trubanova Wieckowski, Deanna Swain, Sherin Aly, Amira Youssef, A Lynn Abbott, Susan W White

Journal

Behavior therapy

Published Date

2019/7/1

Effective social communication relies, in part, on accurate nonverbal expression of emotion. To evaluate the nature of facial emotion expression (FEE) deficits in children with autism spectrum disorder (ASD), we compared 20 youths with ASD to a sample of typically developing (TD) youth (n = 20) using a machine-based classifier of FEE. Results indicate group differences in FEE for overall accuracy across emotions. In particular, a significant group difference in accuracy of FEE was observed when participants were prompted by a video of a human expressing an emotion, F(2, 36) = 4.99, p = .032, η² = .12. Specifically, youth with ASD made significantly more errors in FEE relative to TD youth. Findings support continued refinement of machine-based approaches to assess and potentially remediate FEE impairment in youth with ASD.

A temporal encoder-decoder approach to extracting blood volume pulse signal morphology from face videos

Authors

Fulan Li, Surendrabikram Thapa, Shreyas Bhat, Abhijit Sarkar, A. Lynn Abbott

Published Date

2023

This thesis considers methods for extracting blood volume pulse (BVP) representations from video of the human face. Whereas most previous systems have been concerned with estimating vital signs such as average heart rate, this thesis addresses the more difficult problem of recovering BVP signal morphology. We present a new approach that is inspired by temporal encoder-decoder architectures that have been used for audio signal separation. As input, this system accepts a temporal sequence of RGB (red, green, blue) values that have been spatially averaged over a small portion of the face. The output of the system is a temporal sequence that approximates a BVP signal. In order to reduce noise in the recovered signal, a separate processing step extracts individual pulses and performs normalization and outlier removal. After these steps, individual pulse shapes have been extracted that are sufficiently distinct to support biometric authentication. Our findings demonstrate the effectiveness of our approach in extracting BVP signal morphology from facial videos, which presents exciting opportunities for further research in this area. The source code is available at https://github.com/Adleof/CVPM-2023-Temporal-Encoder-Decoder-iPPG
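
As a rough illustration of the architecture the abstract describes, the following sketch maps a spatially averaged RGB trace to a one-channel BVP waveform with a 1-D convolutional encoder-decoder. Layer counts, kernel sizes, and names are illustrative assumptions, not the authors' implementation; the actual code is at the GitHub link above.

import torch
import torch.nn as nn

class TemporalEncoderDecoder(nn.Module):  # hypothetical stand-in for the paper's model
    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        # Encoder: strided 1-D convolutions compress the RGB trace over time.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions upsample back to a single BVP channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=16, stride=2, padding=7),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, 1, kernel_size=16, stride=2, padding=7),
        )

    def forward(self, x):  # x: (batch, 3, T), RGB averaged over a small face patch
        return self.decoder(self.encoder(x))  # (batch, 1, T), approximate BVP signal

model = TemporalEncoderDecoder()
rgb_trace = torch.randn(1, 3, 256)  # stand-in for roughly 8.5 s of 30 fps video
bvp = model(rgb_trace)              # per-pulse normalization and outlier removal would follow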

Characterization, detection, and segmentation of work-zone scenes from naturalistic driving data

Authors

Vaibhav Sundharam, Abhijit Sarkar, Andrei Svetovidov, Jeffrey S Hickman, A Lynn Abbott

Journal

Transportation research record

Published Date

2023/3

This paper elucidates the automatic detection and analysis of work zones (construction zones) in naturalistic roadway images. An underlying motivation is to identify locations that may pose a challenge to advanced driver assistance systems (ADAS) or autonomous vehicle navigation systems. We first present an in-depth characterization of work-zone scenes from a custom data set collected from more than a million miles of naturalistic driving data. We then describe two machine learning algorithms based on the ResNet and U-Net architectures. The first approach works in an image classification framework that classifies an image as a work-zone scene or non-work-zone scene. The second algorithm was developed to identify individual components representing evidence of a work zone (signs, barriers, machines, etc.). These systems achieved an F1 score of 0.951 for the classification task and an F1 score of 0.611 …
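
A minimal fine-tuning sketch for the first of the two algorithms as described (ResNet backbone, binary work-zone vs. non-work-zone head). The label convention, hyperparameters, and dummy data are assumptions; the custom naturalistic-driving dataset is not reproduced here.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: work-zone / non-work-zone

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

images = torch.randn(4, 3, 224, 224)   # stand-in for dashcam frames
labels = torch.tensor([0, 1, 1, 0])    # 1 = work-zone scene (assumed convention)
logits = model(images)
loss = criterion(logits, labels)       # one illustrative training step
loss.backward()
optimizer.step()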

Camera-based recovery of cardiovascular signals from unconstrained face videos using an attention network

Authors

Yogesh Deshpande, Surendrabikram Thapa, Abhijit Sarkar, A Lynn Abbott

Published Date

2023

This paper addresses the problem of recovering the shape morphology of blood volume pulse (BVP) information from a video of a person's face. Video-based remote plethysmography methods have shown promising results in estimating vital signs such as heart rate and breathing rate. However, recovering the instantaneous pulse rate signals is still a challenge for the community. This is due to the fact that most of the previous methods concentrate on capturing the temporal average of the cardiovascular signals. In contrast, we present an approach in which BVP signals are extracted with a focus on the recovery of the signal shape morphology as a generalized form for the computation of physiological metrics. We also place emphasis on allowing natural movements by the subject. Furthermore, our system is capable of extracting individual BVP instances with sufficient signal detail to facilitate candidate re-identification. These improvements have resulted in part from the incorporation of a robust skin-detection module into the overall imaging-based photoplethysmography (iPPG) framework. We present extensive experimental results using the challenging UBFC-Phys dataset and the well-known COHFACE dataset. The source code is available at https://github.com/yogeshd21/CVPM-2023-iPPG-Paper.

Color invariant skin segmentation

Authors

Han Xu, Abhijit Sarkar, A Lynn Abbott

Published Date

2022

This paper addresses the problem of automatically detecting human skin in images without reliance on color information. A primary motivation of the work has been to achieve results that are consistent across the full range of skin tones, even while using a training dataset that is significantly biased toward lighter skin tones. Previous skin-detection methods have used color cues almost exclusively, and we present a new approach that performs well in the absence of such information. A key aspect of the work is dataset repair through augmentation that is applied strategically during training, with the goal of color invariant feature learning to enhance generalization. We have demonstrated the concept using two architectures, and experimental results show improvements in both precision and recall for most Fitzpatrick skin tones in the benchmark ECU dataset. We further tested the system with the RFW dataset to show that the proposed method performs much more consistently across different ethnicities, thereby reducing the chance of bias based on skin color. To demonstrate the effectiveness of our work, extensive experiments were performed on grayscale images as well as images obtained under unconstrained illumination and with artificial filters. Source code will be provided with the final version of this paper.
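
The abstract's key idea, strategic augmentation during training so the network cannot rely on color cues, can be sketched with standard torchvision transforms. The probabilities and jitter strengths below are assumptions for illustration, not the authors' exact recipe.

from torchvision import transforms

train_transform = transforms.Compose([
    # Randomly destroy color information so "skin-colored" pixels are unreliable.
    transforms.RandomApply([transforms.Grayscale(num_output_channels=3)], p=0.5),
    # Randomize the remaining color statistics (hue range 0.5 is the torchvision maximum).
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.5),
    transforms.ToTensor(),
])
# Trained on such inputs, a segmentation network must learn texture and shape cues,
# which is what lets performance transfer across Fitzpatrick skin tones.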

Leveraging SE(3) equivariance for self-supervised category-level object pose estimation from point clouds

Authors

Xiaolong Li, Yijia Weng, Li Yi, Leonidas J Guibas, A Abbott, Shuran Song, He Wang

Journal

Advances in neural information processing systems

Published Date

2021/12/6

Category-level object pose estimation aims to find 6D object poses of previously unseen object instances from known categories without access to object CAD models. To reduce the huge amount of pose annotations needed for category-level learning, we propose for the first time a self-supervised learning framework to estimate category-level 6D object pose from single 3D point clouds. During training, our method assumes no ground-truth pose annotations, no CAD models, and no multi-view supervision. The key to our method is to disentangle shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, empowered by SE(3) equivariant point cloud networks. The invariant shape reconstruction module learns to perform aligned reconstructions, yielding a category-level reference frame without using any annotations. In addition, the equivariant pose estimation module achieves category-level pose estimation accuracy that is comparable to some fully supervised methods. Extensive experiments demonstrate the effectiveness of our approach on both complete and partial depth point clouds from the ModelNet40 benchmark, and on real depth point clouds from the NOCS-REAL275 dataset. The project page with code and visualizations can be found at: dragonlong.github.io/equi-pose.
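
The symmetry being exploited can be stated compactly. Using standard definitions (the symbols f_pose and f_shape are illustrative, not taken from the paper), the pose branch is SE(3)-equivariant while the shape branch is SE(3)-invariant:

\[
f_{\text{pose}}(g \cdot X) = g \cdot f_{\text{pose}}(X),
\qquad
f_{\text{shape}}(g \cdot X) = f_{\text{shape}}(X),
\qquad
\forall\, g = (R, t) \in \mathrm{SE}(3),
\]
where a rigid motion acts on a point cloud $X \in \mathbb{R}^{N \times 3}$ by $g \cdot X = X R^{\top} + \mathbf{1}\, t^{\top}$.

Because the shape features are unchanged by rigid motions, the reconstructions land in a category-level canonical frame, while the equivariant branch carries the pose; this is the disentanglement the abstract describes.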

A Compositional Approach to Occlusion in Panoptic Segmentation

Authors

Ajit Sarkaar, A Lynn Abbott

Published Date

2021/10/6

This paper concerns image segmentation, with emphasis on correctly classifying objects that are partially occluded. We present a novel approach based on compositional modeling that has proven to be effective at classifying separate instances of foreground objects. We demonstrate the efficacy of the approach by replacing the object detection pipeline in UPSNet with a compositional element that utilizes a mixture of distributions to model parts of objects. We also show extensive experimental results for the COCO and Cityscapes datasets. The results show an improvement of 2.6 points in panoptic quality for the top “thing” classes of COCO, and a 3.43% increase in overall recall, using standard UPSNet as a baseline. Moreover, we present qualitative results to demonstrate that improved metrics and datasets are needed for proper characterization of panoptic segmentation systems.

Convolutional neural network-based in-vehicle occupant detection and classification method using second strategic highway research program cabin images

Authors

Ioannis Papakis, Abhijit Sarkar, Andrei Svetovidov, Jeffrey S Hickman, A Lynn Abbott

Journal

Transportation research record

Published Date

2021/8

This paper describes an approach for automatic detection and localization of drivers and passengers in automobiles using in-cabin images. We used a convolutional neural network (CNN) framework and conducted experiments based on the Faster R-CNN and Cascade R-CNN detectors. Training and evaluation were performed using the Second Strategic Highway Research Program (SHRP 2) naturalistic dataset. In SHRP 2, the cabin images have been blurred to maintain privacy. After detecting occupants inside the vehicle, the system classifies each occupant as driver, front-seat passenger, or back-seat passenger. For one SHRP 2 test set, the system detected occupants with an accuracy of 94.5%. Those occupants were correctly classified as front-seat passenger with an accuracy of 97.3%, as driver with 99.5% accuracy, and as back-seat passenger with 94.3% accuracy. The system performed slightly better for …
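
A hedged sketch of the detection stage, using the stock torchvision Faster R-CNN rather than the authors' SHRP 2-trained models (which are not public); the driver/passenger classification step is only indicated by a comment.

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cabin_image = torch.rand(3, 480, 640)      # stand-in for a blurred in-cabin frame
with torch.no_grad():
    detections = model([cabin_image])[0]   # dict with "boxes", "labels", "scores"

# Keep confident "person" detections (COCO class 1). A second stage, e.g.
# seat-position rules or a classifier over box locations, would then label each
# occupant as driver, front-seat passenger, or back-seat passenger.
keep = (detections["labels"] == 1) & (detections["scores"] > 0.8)
occupant_boxes = detections["boxes"][keep]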

COCO-bridge: Structural detail data set for bridge inspections

Authors

Eric Bianchi, Amos Lynn Abbott, Pratap Tokekar, Matthew Hebdon

Journal

Journal of Computing in Civil Engineering

Published Date

2021/5/1

The purpose of this research is to propose a means to address two issues faced by unmanned aerial vehicles (UAVs) during bridge inspection. The first issue is that UAVs have a notoriously difficult time operating near bridges. This is because of the potential for the navigation signal to be lost between the operator and the UAV. Therefore, there is a push to automate or semiautomate the UAV inspection process. One way to improve automation is by improving UAVs’ ability to contextualize their environment through object detection and object avoidance. The second issue is that, to the best of the authors’ knowledge, no method has been developed to automatically contextualize detected defects to a structural bridge detail during or after UAV flight. Significant research has been conducted on UAVs’ ability to detect defects, like cracks and corrosion. However, detecting the presence of a defect alone does not contextualize …

Multi-level generative chaotic recurrent network for image inpainting

Authors

Cong Chen, Amos Abbott, Daniel Stilwell

Published Date

2021

This paper presents a novel multi-level generative chaotic Recurrent Neural Network (RNN) for image inpainting. This technique utilizes a general framework with multiple chaotic RNNs that makes learning the image prior from a single corrupted image more robust and efficient. The proposed network utilizes a randomly initialized process for parameterization, along with a unique quad-directional encoder structure, chaotic state transitions, and adaptive importance for multi-level RNN updating. The efficacy of the approach has been validated through multiple experiments. In spite of a much lower computational load, quantitative comparisons reveal that the proposed approach exceeds the performance of several image restoration benchmarks.

Robust Unsupervised Cleaning of Underwater Bathymetric Point Cloud Data.

Authors

Cong Chen, Abel Gawel, Stephen Krauss, Yuliang Zou, A Lynn Abbott, Daniel J Stilwell

Published Date

2020/9

This paper presents a novel unified one-stage unsupervised learning framework for point cloud cleaning of noisy partial data from underwater side-scan sonars. By combining a swath-based point cloud tensor representation, an adaptive multi-scale feature encoder, and a generative Bayesian framework, the proposed method provides robust sonar point cloud denoising, completion, and outlier removal simultaneously. The condensed swath-based tensor representation preserves the point cloud associated with the underlying three-dimensional geometry by utilizing spatial and temporal correlation of sonar data. The adaptive multi-scale feature encoder identifies noisy partial tensor data without handcrafted feature labeling by utilizing CANDECOMP/PARAFAC tensor factorization. Each local embedded outlier feature under various scales is aggregated into a global context by a generative Bayesian framework. The model is inferred automatically by variational Bayesian inference, without parameter tuning or model pre-training. Extensive experiments on large-scale synthetic and real data demonstrate robustness against environmental perturbation. The proposed algorithm compares favourably with existing methods.
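
The CANDECOMP/PARAFAC step can be illustrated with the tensorly library (an assumed stand-in; the paper's swath encoder and Bayesian aggregation are not reproduced): a low-rank CP reconstruction fits the smooth seabed structure, so large residuals flag likely outlier returns.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

swath = np.random.rand(32, 64, 3)        # stand-in swath tensor (pings x beams x features)
cp = parafac(tl.tensor(swath), rank=4)   # low-rank CP factorization
reconstruction = tl.cp_to_tensor(cp)     # best rank-4 approximation of the swath

# Points poorly explained by the low-rank structure are outlier candidates.
residual = np.abs(swath - reconstruction)
outliers = residual > (residual.mean() + 3 * residual.std())  # crude threshold rule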

Category-level articulated object pose estimation

Authors

Xiaolong Li, He Wang, Li Yi, Leonidas J Guibas, A Lynn Abbott, Shuran Song

Published Date

2020

This paper addresses the task of category-level pose estimation for articulated objects from a single depth image. We present a novel category-level approach that correctly accommodates object instances previously unseen during training. We introduce Articulation-aware Normalized Coordinate Space Hierarchy (ANCSH), a canonical representation for different articulated objects in a given category. As the key to achieving intra-category generalization, the representation constructs a canonical object space as well as a set of canonical part spaces. The canonical object space normalizes the object orientation, scale, and articulations (e.g., joint parameters and states) while each canonical part space further normalizes its part pose and scale. We develop a deep network based on PointNet++ that predicts ANCSH from a single depth point cloud, including part segmentation, normalized coordinates, and joint parameters in the canonical object space. By leveraging the canonicalized joints, we demonstrate: 1) improved performance in part pose and scale estimations using the induced kinematic constraints from joints; 2) high accuracy for joint parameter estimation in camera space.
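
As a small illustration of the canonical-space convention (zero-centered, unit-scale coordinates that are comparable across object instances), here is an assumed centering-and-scaling step; the learned canonical orientation and the part hierarchy that ANCSH adds are beyond this sketch.

import numpy as np

def normalize_to_canonical(points: np.ndarray) -> np.ndarray:
    """Center a point cloud and scale its bounding-box diagonal to 1."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    diagonal = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return (points - center) / diagonal

part = np.random.rand(500, 3) * 0.3 + 1.0   # stand-in for one segmented part
canonical = normalize_to_canonical(part)    # coordinates now lie within [-0.5, 0.5]^3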

Adaptive label smoothing

Authors

Ujwal Krothapalli, A Lynn Abbott

Journal

arXiv preprint arXiv:2009.06432

Published Date

2020/9/14

This paper concerns the use of objectness measures to improve the calibration performance of Convolutional Neural Networks (CNNs). CNNs have proven to be very good classifiers and generally localize objects well; however, the loss functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image. During training on ImageNet-1K, almost all approaches use random crops of the images, and this transformation sometimes provides the CNN with background-only samples. This causes the classifiers to depend on context. Context dependence is harmful for safety-critical applications. We present a novel approach to classification that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image. This causes our approach to produce confidences that are grounded in the size of the object being classified instead of relying on context to make the correct predictions. We present extensive results using ImageNet to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions. We show qualitative results using class activation maps and quantitative results using classification and transfer learning tasks. Our approach is able to produce an order-of-magnitude reduction in confidence when predicting on context-only images when compared to baselines. Using transfer learning, we gain 2.1 mAP on MS COCO compared to the hard-label approach.
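
The mechanism is easy to state concretely. Below is a hypothetical sketch in which the smoothing factor is simply one minus the object's relative area, so background-heavy crops get low-confidence targets; the exact mapping used in the paper may differ.

import torch

def adaptive_smooth_targets(labels, object_area_frac, num_classes):
    """labels: (B,) class ids; object_area_frac: (B,) values in [0, 1]."""
    eps = 1.0 - object_area_frac        # assumed rule: smaller object -> more smoothing
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    eps = eps.unsqueeze(1)
    return (1.0 - eps) * one_hot + eps / num_classes  # rows still sum to 1

labels = torch.tensor([3, 7])
area = torch.tensor([0.9, 0.1])         # object fills 90% vs. 10% of the crop
targets = adaptive_smooth_targets(labels, area, num_classes=10)
# Training then uses a soft cross-entropy:
#   loss = -(targets * logits.log_softmax(dim=1)).sum(dim=1).mean()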


A. Lynn Abbott FAQs

What is A. Lynn Abbott's h-index at Virginia Polytechnic Institute and State University?

A. Lynn Abbott's h-index is 30 overall and 13 since 2020.

What are A. Lynn Abbott's top articles?

The top articles of A. Lynn Abbott at Virginia Polytechnic Institute and State University include:

Corrigendum to" Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification"[Behav. Therapy 50 (4)(2019) 828-838]

Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification (vol 50, pg 828, 2019)

A temporal encoder-decoder approach to extracting blood volume pulse signal morphology from face videos

Characterization, detection, and segmentation of work-zone scenes from naturalistic driving data

Camera-based recovery of cardiovascular signals from unconstrained face videos using an attention network

Color invariant skin segmentation

Leveraging SE(3) equivariance for self-supervised category-level object pose estimation from point clouds

A Compositional Approach to Occlusion in Panoptic Segmentation

...


What are A. Lynn Abbott's research interests?

The research interests of A. Lynn Abbott are Computer Vision, Biometrics, and Machine Learning.

What is A. Lynn Abbott's total number of citations?

A. Lynn Abbott has 3,344 citations in total.

Who are the co-authors of A. Lynn Abbott?

The co-authors of A. Lynn Abbott are Theodore (Ted) S. Rappaport, Narendra Ahuja, Edward Fox, Randolph Hamilton Wynne, Michael Hsiao, and Valerie Thomas.

Co-Authors

H-index: 129
Theodore (Ted) S. Rappaport
New York University

H-index: 89
Narendra Ahuja
University of Illinois at Urbana-Champaign

H-index: 65
Edward Fox
Virginia Polytechnic Institute and State University

H-index: 43
Randolph Hamilton Wynne
Virginia Polytechnic Institute and State University

H-index: 41
Michael Hsiao
Virginia Polytechnic Institute and State University

H-index: 25
Valerie Thomas
Virginia Polytechnic Institute and State University
