Abdelkareem Bedri

Carnegie Mellon University

H-index: 11

United States (North America)

Researcher Information

University

Carnegie Mellon University

Position

PhD Student, Human-Computer Interaction Institute (HCII)

Citations (all)

700

Citations (since 2020)

557

Cited by

331

h-index (all)

11

h-index (since 2020)

11

i10-index (all)

12

i10-index (since 2020)

12

Research & Interests List

Activity Recognition

Mobile Health

Wearable Computing

Assistive Technologies

Ubicomp

Top articles of Abdelkareem Bedri

Secondary Device Presence for Triggering Primary Device Functionality

Ranging between a primary electronic device and a secondary electronic device can be used to control functional states for the electronic devices. One or more functions of a primary electronic device can be controlled based on the location of a secondary electronic device. The secondary electronic device can be a mobile device or a wearable computer, and the secondary device's location can be a proxy for a user's position. The functions that are enabled or disabled can relate to the user's use and enjoyment of the primary device.

Published Date

2024/2/29

Vision-Based Hand Gesture Customization from a Single Demonstration

Hand gesture recognition is becoming a more prevalent mode of human-computer interaction, especially as cameras proliferate across everyday devices. Despite continued progress in this field, gesture customization is often underexplored. Customization is crucial since it enables users to define and demonstrate gestures that are more natural, memorable, and accessible. However, customization requires efficient usage of user-provided data. We introduce a method that enables users to easily design bespoke gestures with a monocular camera from one demonstration. We employ transformers and meta-learning techniques to address few-shot learning challenges. Unlike prior work, our method supports any combination of one-handed, two-handed, static, and dynamic gestures, including different viewpoints. We evaluated our customization method through a user study with 20 gestures collected from 21 participants, achieving up to 97% average recognition accuracy from one demonstration. Our work provides a viable path for vision-based gesture customization, laying the foundation for future advancements in this domain.
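As a rough illustration of the few-shot setting the abstract describes (the function and embeddings below are illustrative assumptions, not the paper's actual transformer-based method), classifying a gesture from a single demonstration per class can be sketched as nearest-prototype matching in an embedding space:

```python
import numpy as np

def prototype_classify(support_embeddings, support_labels, query_embedding):
    """Nearest-prototype few-shot classification: average the support
    embeddings per class, then assign the query to the closest prototype."""
    classes = sorted(set(support_labels))
    prototypes = {
        c: np.mean([e for e, l in zip(support_embeddings, support_labels) if l == c], axis=0)
        for c in classes
    }
    distances = {c: np.linalg.norm(query_embedding - p) for c, p in prototypes.items()}
    return min(distances, key=distances.get)

# One demonstration per hypothetical gesture class (toy 2-D embeddings).
support = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
labels = ["swipe", "pinch"]
print(prototype_classify(support, labels, np.array([0.9, 0.1])))  # pinch
```

In the paper's setting the embeddings would come from a learned encoder; the sketch only shows why one demonstration per class can suffice once the embedding space is discriminative.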

Authors

Soroush Shahi, Cori Tymoszek Park, Richard Kang, Asaf Liberman, Oron Levy, Jun Gong, Abdelkareem Bedri, Gierad Laput

Journal

arXiv preprint arXiv:2402.08420

Published Date

2024/2/13

Moonwalk: Advancing Gait-Based User Recognition on Wearable Devices with Metric Learning

Personal devices have adopted diverse authentication methods, including biometric recognition and passcodes. In contrast, headphones have limited input mechanisms and depend solely on the authentication of connected devices. We present Moonwalk, a novel method for passive user recognition utilizing the built-in headphone accelerometer. Our approach centers on gait recognition: users establish their identity simply by walking for a brief interval, despite the sensor's placement away from the feet. We employ self-supervised metric learning to train a model that yields a highly discriminative representation of a user's 3D acceleration, with no retraining required. We tested our method in a study involving 50 participants, achieving an average F1 score of 92.9% and an equal error rate of 2.3%. We extend our evaluation by assessing performance under various conditions (e.g., shoe types and surfaces). We discuss the opportunities and challenges these variations introduce and propose new directions for advancing passive authentication for wearable devices.
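The "no retraining required" property of metric learning can be illustrated with a minimal enrollment-and-verification sketch (the function names, embeddings, and threshold are assumptions for illustration, not Moonwalk's actual pipeline): a new user is enrolled by averaging embeddings of their walking windows, and verification is a distance check against that template.

```python
import numpy as np

def enroll(embeddings):
    """Average L2-normalized gait embeddings into a single user template."""
    e = np.asarray(embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)
    template = e.mean(axis=0)
    return template / np.linalg.norm(template)

def verify(template, embedding, threshold=0.8):
    """Accept if cosine similarity to the enrolled template exceeds threshold."""
    v = embedding / np.linalg.norm(embedding)
    return float(template @ v) >= threshold

# Toy 2-D embeddings standing in for a learned gait representation.
user = enroll([[1.0, 0.1], [0.9, 0.0]])
print(verify(user, np.array([1.0, 0.05])))   # close to the enrolled gait
print(verify(user, np.array([-0.2, 1.0])))   # far from the enrolled gait
```

Sweeping the threshold trades false accepts against false rejects; the equal error rate reported in the abstract is the operating point where the two rates coincide.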

Authors

Asaf Liberman, Oron Levy, Soroush Shahi, Cori Tymoszek Park, Mike Ralph, Richard Kang, Abdelkareem Bedri, Gierad Laput

Journal

arXiv preprint arXiv:2402.08451

Published Date

2024/2/13

Advancing Location-Invariant and Device-Agnostic Motion Activity Recognition on Wearable Devices

Wearable sensors have permeated people's lives, ushering in impactful applications in interactive systems and activity recognition. However, practitioners face significant obstacles when dealing with sensing heterogeneities, requiring custom models for different platforms. In this paper, we conduct a comprehensive evaluation of the generalizability of motion models across sensor locations. Our analysis highlights this challenge and identifies key on-body locations for building location-invariant models that can be integrated on any device. For this, we introduce the largest multi-location activity dataset (N=50, 200 cumulative hours), which we make publicly available. We also present deployable on-device motion models reaching a 91.41% frame-level F1-score from a single model irrespective of sensor placement. Lastly, we investigate cross-location data synthesis, aiming to alleviate laborious data collection by synthesizing data in one location given data from another. These contributions advance our vision of low-barrier, location-invariant activity recognition systems, catalyzing research in HCI and ubiquitous computing.

Authors

Rebecca Adaimi, Abdelkareem Bedri, Jun Gong, Richard Kang, Joanna Arreaza-Taylor, Gerri-Michelle Pascual, Michael Ralph, Gierad Laput

Journal

arXiv preprint arXiv:2402.03714

Published Date

2024/2/6

Instrumented Footwear Device for Upper-Body Exercise Identification

A computing system configured to perform operations that identify user movement(s) as upper-body exercise(s). The operations can include obtaining sensor data generated by one or more sensors of a footwear device worn by a user, the one or more sensors positioned in one or more positions of the footwear device, the sensor data associated with one or more movements performed by the user. The sensor data can be input into a machine-learned exercise identification model. Exercise identification data can be received as an output of the machine-learned exercise identification model. The exercise identification data can identify the one or more movements performed by the user as one or more upper-body exercises performed by the user, the exercise identification data comprising one or more exercise characteristics associated with each upper-body exercise of the one or more upper-body exercises …

Published Date

2023/4/20

Prism-tracker: A framework for multimodal procedure tracking using wearable sensors and state transition information with user-driven handling of errors and uncertainty

A user often needs training and guidance while performing several daily life procedures, e.g., cooking, setting up a new appliance, or doing a COVID test. Watch-based human activity recognition (HAR) can track users' actions during these procedures. However, out of the box, state-of-the-art HAR struggles with the noisy data and less expressive actions that are often part of daily life tasks. This paper proposes PrISM-Tracker, a procedure-tracking framework that augments existing HAR models with (1) a graph-based procedure representation and (2) a user-interaction module to handle model uncertainty. Specifically, PrISM-Tracker extends the Viterbi algorithm to update state probabilities based on time-series HAR outputs by leveraging the graph representation, which embeds time information as a prior. Moreover, the model identifies moments or classes of uncertainty and asks the user for guidance to improve tracking …
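The core mechanism the abstract names, a Viterbi update that combines noisy HAR outputs with a transition prior from the procedure graph, can be sketched as follows (the matrices and probabilities are toy assumptions, not values from the paper):

```python
import numpy as np

def viterbi_step(prev_log_probs, transition_log_probs, observation_log_likelihoods):
    """One Viterbi update over procedure steps: for each next state, pick the
    best predecessor under the transition prior, then add the (noisy) HAR
    observation log-likelihood for that state."""
    scores = prev_log_probs[:, None] + transition_log_probs  # (prev, next)
    best_prev = scores.argmax(axis=0)                        # backpointers
    return scores.max(axis=0) + observation_log_likelihoods, best_prev

# Two procedure steps; the graph prior favors staying put or moving forward.
trans = np.log(np.array([[0.7, 0.3],
                         [0.1, 0.9]]))
prev = np.log(np.array([0.6, 0.4]))      # current belief over steps
obs = np.log(np.array([0.2, 0.8]))       # HAR output favoring step 1
log_probs, back = viterbi_step(prev, trans, obs)
print(back.tolist())  # [0, 1]
```

Chaining such steps over time, and deferring to the user when the top states' scores are close, captures the interaction pattern the framework describes.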

Authors

Riku Arakawa, Hiromu Yakura, Vimal Mollyn, Suzanne Nie, Emma Russell, Dustin P DeMeo, Haarika A Reddy, Alexander K Maytin, Bryan T Carroll, Jill Fain Lehman, Mayank Goel

Journal

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Published Date

2023/1/11

SilentSpeller: Towards mobile, hands-free, silent speech text entry using electropalatography

Speech is inappropriate in many situations, limiting when voice control can be used. Most unvoiced speech text entry systems cannot be used on the go due to movement artifacts. Using a dental retainer with capacitive touch sensors, SilentSpeller tracks tongue movement, enabling users to type by spelling words without voicing. SilentSpeller achieves an average 97% character accuracy in offline isolated word testing on a 1164-word dictionary. Walking has little effect on accuracy; average offline character accuracy was roughly equivalent on 107 phrases entered while walking (97.5%) or seated (96.5%). To demonstrate extensibility, the system was tested on 100 unseen words, reaching an average 94% accuracy. Live text entry speeds for seven participants averaged 37 words per minute at 87% accuracy. Comparing silent spelling to current practice suggests that SilentSpeller may be a viable alternative …

Authors

Naoki Kimura, Tan Gemicioglu, Jonathan Womack, Richard Li, Yuhui Zhao, Abdelkareem Bedri, Zixiong Su, Alex Olwal, Jun Rekimoto, Thad Starner

Published Date

2022/4/29

Fitnibble: A field study to evaluate the utility and usability of automatic diet monitoring in food journaling using an eyeglasses-based wearable

The ultimate goal of automatic diet monitoring (ADM) systems is to make food journaling as easy as counting steps with a smartwatch. To achieve this goal, it is essential to understand the utility and usability of ADM systems in real-world settings. However, this has been challenging since many ADM systems perform poorly outside research labs. Therefore, one of the main focuses of ADM research has been on improving ecological validity. This paper presents an evaluation of ADM's utility and usability using an end-to-end system, FitNibble. FitNibble is robust to many challenges that real-world settings pose and provides just-in-time notifications to remind users to journal as soon as they start eating. We conducted a long-term field study to compare traditional self-report journaling and journaling with ADM in this evaluation. We recruited 13 participants from various backgrounds and asked them to try each …

Authors

Abdelkareem Bedri, Yuchen Liang, Sudershan Boovaraghavan, Geoff Kaufman, Mayank Goel

Published Date

2022/3/22

Researcher FAQs

What is Abdelkareem Bedri's h-index at Carnegie Mellon University?

Abdelkareem Bedri's h-index is 11, both overall and since 2020.

What are Abdelkareem Bedri's research interests?

The research interests of Abdelkareem Bedri are: Activity Recognition, Mobile Health, Wearable Computing, Assistive Technologies, and Ubicomp.

What is Abdelkareem Bedri's total number of citations?

Abdelkareem Bedri has 700 citations in total.

What are the co-authors of Abdelkareem Bedri?

The co-authors of Abdelkareem Bedri are Thad Starner, Irfan Essa, Omer T. Inan, William Harwin, Mayank Goel, Cheng Zhang.

Co-Authors

Thad Starner, Georgia Institute of Technology (h-index: 88)

Irfan Essa, Georgia Institute of Technology (h-index: 69)

Omer T. Inan, Georgia Institute of Technology (h-index: 44)

William Harwin, University of Reading (h-index: 39)

Mayank Goel, Carnegie Mellon University (h-index: 31)

Cheng Zhang, Cornell University (h-index: 17)
