Browsing by Author "Palipana, Sameera"
Now showing 1 - 6 of 6
- 3D Head Motion Detection Using Millimeter-Wave Doppler Radar
A1 Original article in a scientific journal (2020-01-01) Raja, Muneeba; Vali, Zahra; Palipana, Sameera; Michelson, David G.; Sigg, Stephan
From advanced driver assistance systems to conditional automation systems, monitoring of driver state is vital for predicting the driver's capacity to supervise or maneuver the vehicle in case of unexpected road events, and for facilitating better in-car services. The paper presents a technique that exploits millimeter-wave Doppler radar for 3D head tracking. Identifying the bistatic and monostatic antenna geometries for detecting rotational versus translational movements, the authors propose the biscattering angle for computing a distinctive feature set that isolates dynamic movements via class memberships. Through data reduction and joint time-frequency analysis, movement boundaries are marked to create a simplified, uncorrelated, and highly separable feature set. The authors report a movement-prediction accuracy of 92%. This non-invasive and simplified head tracking has the potential to enhance driver-state monitoring in autonomous vehicles and to aid intelligent car assistants in guaranteeing seamless and safe journeys.
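The joint time-frequency step above is straightforward to illustrate. The following Python sketch computes a two-sided Doppler spectrogram of a synthetic radar baseband signal and marks movement boundaries by thresholding per-frame energy; the sample rate, the 80 Hz Doppler tone, the window sizes, and the threshold factor are illustrative assumptions, not the paper's configuration.

```python
# Minimal joint time-frequency sketch (assumed parameters throughout).
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
fs = 2000.0                      # sample rate in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)  # 4 s observation window

# Synthetic baseband return: a head movement appears as a Doppler-shifted
# burst (80 Hz, from 1.0 s to 2.5 s) on top of receiver noise.
doppler = np.exp(1j * 2 * np.pi * 80.0 * t)
gate = (t > 1.0) & (t < 2.5)
x = doppler * gate + 0.1 * (rng.standard_normal(t.size)
                            + 1j * rng.standard_normal(t.size))

# Two-sided spectrogram: keeping negative frequencies preserves the sign
# of the Doppler shift (approaching vs. receding motion).
f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192,
                             return_onesided=False)

# Mark movement boundaries where per-frame Doppler energy exceeds a
# simple noise-floor estimate (a stand-in for the paper's method).
frame_energy = Sxx.sum(axis=0)
active = frame_energy > 5.0 * np.median(frame_energy)
print("movement detected from %.2f s to %.2f s"
      % (frames[active].min(), frames[active].max()))
```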
- Beamsteering for Training-free Counting of Multiple Humans Performing Distinct Activities
A4 Article in conference proceedings (2020-03) Palipana, Sameera; Malm, Nicolas; Sigg, Stephan
Recognition of human context plays an important role in pervasive applications such as intrusion detection, human-density estimation for heating, ventilation, and air conditioning in smart buildings, and safety guarantees for workers during human-robot interaction. Radio vision can provide these sensing capabilities with low privacy intrusion. A common challenge for current radio-sensing solutions, though, is distinguishing simultaneous movement from multiple subjects. We present an approach that exploits antenna installations, for instance those found in upcoming 5G technology, to detect and extract activities from spatially scattered human targets in an ad-hoc manner, in arbitrary environments, and without prior training of the multi-subject detection. We perform receiver-side beamforming and beam-sweeping over different azimuth angles to detect human presence in each of those regions separately. We characterize the resulting fluctuations in the spatial streams due to human influence in a case study and make the traces publicly available. We demonstrate the potential of this approach through two applications: 1) by feeding the similarities of the resulting spatial streams into a clustering algorithm, we count the humans in a given area without prior training (up to 6 people in a 22.4 m² area, with an accuracy that significantly exceeds the related work); 2) we demonstrate that simultaneously conducted activities and gestures can be extracted from the spatial streams through blind source separation.
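As a rough illustration of application 1), the sketch below feeds pairwise similarities between synthetic per-beam amplitude traces ("spatial streams") into a clustering algorithm and takes the number of non-noise clusters as the people count. The trace model, the 1 − |correlation| distance, and the DBSCAN parameters are assumptions for illustration; the paper's features and clustering pipeline may differ.

```python
# Training-free counting sketch: cluster spatial-stream similarities.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
n_beams, n_samples = 16, 400

# Synthetic beam traces: two people moving in different azimuth sectors
# perturb neighbouring beams with correlated fluctuations.
streams = 0.1 * rng.standard_normal((n_beams, n_samples))
person_a = np.sin(2 * np.pi * 1.2 * np.linspace(0, 4, n_samples))
person_b = np.sign(np.sin(2 * np.pi * 0.4 * np.linspace(0, 4, n_samples)))
streams[2:5] += person_a    # person A influences beams 2-4
streams[10:13] += person_b  # person B influences beams 10-12

# Pairwise similarity between streams -> distance matrix for clustering.
corr = np.corrcoef(streams)
dist = 1.0 - np.abs(corr)

labels = DBSCAN(eps=0.3, min_samples=2,
                metric="precomputed").fit_predict(dist)

# Beams labelled -1 carry no coherent human influence; each remaining
# cluster of mutually similar beams corresponds to one moving person.
n_people = len(set(labels) - {-1})
print("estimated people count:", n_people)
```

Operating on a precomputed similarity matrix mirrors the abstract's formulation: the clustering never sees raw signals, only how alike the spatial streams are.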
- Capturing Human-Machine Interaction Events from Radio Sensors in Industry 4.0 Environments
A4 Article in conference proceedings (2019-01-01) Sigg, Stephan; Palipana, Sameera; Savazzi, Stefano; Kianoush, Sanaz
In manufacturing environments, human workers interact with increasingly autonomous machinery. To ensure workspace safety and production efficiency during human-robot cooperation, continuous and accurate tracking and perception of workers' activities is required. The RadioSense project intends to advance the state of the art in sensing and perception for the next-generation manufacturing workspace. In this paper, we describe our ongoing efforts towards multi-subject recognition cases with multiple persons conducting several simultaneous activities. Perturbations induced by moving bodies and objects on the electromagnetic wavefield can be processed for environmental perception by leveraging next-generation (5G) New Radio (NR) technologies, including MIMO systems, high-performance edge-cloud computing, and novel or custom-designed deep learning tools.
- Extracting Human Context Through Receiver-End Beamforming
A1 Original article in a scientific journal (2019) Palipana, Sameera; Sigg, Stephan
Device-free passive sensing of human targets using wireless signals has attracted much attention in the recent past because of its importance in many applications, including security, heating, ventilation and air conditioning, activity recognition, and elderly care. In this paper, we use receiver-side beamforming to isolate the array response of a human target when the line-of-sight array response is several orders of magnitude stronger than the human response. The solution is implemented in a 5G testbed using a software-defined radio (SDR) platform. As beamforming with SDRs faces the challenge of training the beamformer to different azimuth angles, we present an algorithm that generates the steering vectors for all azimuth angles from a few training directions amidst imprecise prior information on the training steering vectors. We extract the direction of arrival (DoA) of the human target from its array response; conducting experiments in a semi-anechoic chamber, we detect the DoAs of up to four stationary human targets and track the DoAs of up to two walking persons simultaneously.
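The paper's contribution is generating steering vectors for all azimuth angles from a few imprecisely known training directions; the sketch below shows only the textbook baseline it refines: ideal uniform-linear-array (ULA) steering vectors used for receiver-side beam-sweeping and DoA extraction. The element count, spacing, and noise levels are assumptions.

```python
# Beam-sweeping with ideal ULA steering vectors (assumed geometry).
import numpy as np

n_ant = 8      # array elements (assumed)
spacing = 0.5  # element spacing in wavelengths (assumed)

def steering_vector(azimuth_rad):
    """Ideal ULA steering vector for a given azimuth angle."""
    n = np.arange(n_ant)
    return np.exp(-2j * np.pi * spacing * n * np.sin(azimuth_rad))

# Synthetic snapshots: a single target at +20 degrees plus receiver noise.
rng = np.random.default_rng(1)
true_doa = np.deg2rad(20.0)
snapshots = (steering_vector(true_doa)[:, None]
             * rng.standard_normal(200)
             + 0.1 * (rng.standard_normal((n_ant, 200))
                      + 1j * rng.standard_normal((n_ant, 200))))

# Beam-sweep over azimuth: conventional beamformer output power per
# steering direction, DoA estimate at the power maximum.
grid = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.mean(np.abs(steering_vector(a).conj() @ snapshots) ** 2)
         for a in grid]
print("estimated DoA: %.1f deg" % np.rad2deg(grid[int(np.argmax(power))]))
```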
- Motion pattern recognition in 4D point clouds
A4 Article in conference proceedings (2020-09) Salami, Dariush; Palipana, Sameera; Kodali, Manila; Sigg, Stephan
We address an actively discussed problem in signal processing: recognizing patterns from spatial data in motion. In particular, we propose a neural network architecture to recognize motion patterns from 4D point clouds. We demonstrate the feasibility of our approach on point cloud datasets of hand gestures. The architecture, PointGest, feeds directly on unprocessed timelines of point cloud data without any need for voxelization or projection. The model is resilient to noise in the input point cloud through abstraction to lower-density representations, especially for regions of high density. We evaluate the architecture on a benchmark dataset with ten gestures; PointGest achieves an accuracy of 98.8%, outperforming five state-of-the-art point cloud classification models.
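To make the "no voxelization or projection" point concrete, here is a minimal PyTorch sketch of a network that consumes raw 4D points (x, y, z, t) directly. It is a generic PointNet-style baseline for illustration, not the published PointGest architecture; the class name and layer sizes are invented for this example.

```python
# Generic point-set classifier over raw 4D points (illustrative only).
import torch
import torch.nn as nn

class PointGestureNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Shared per-point MLP: each 4D point is embedded independently.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(4, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        # Global max pooling gives order invariance over the point set
        # and some resilience to spurious noise points.
        self.classifier = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, 4, n_points) with channels (x, y, z, t)
        features = self.point_mlp(points)     # (batch, 256, n_points)
        pooled = features.max(dim=2).values   # (batch, 256)
        return self.classifier(pooled)        # (batch, n_classes)

# Usage: a batch of 2 clouds with 512 points each, 10 gesture classes.
logits = PointGestureNet()(torch.randn(2, 4, 512))
print(logits.shape)  # torch.Size([2, 10])
```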
- Tesla-Rapture: A Lightweight Gesture Recognition System from mmWave Radar Sparse Point Clouds
A1 Original article in a scientific journal (2023-08) Salami, Dariush; Hasibi, Ramin; Palipana, Sameera; Popovski, Petar; Michoel, Tom; Sigg, Stephan
We present Tesla-Rapture, a gesture recognition system for sparse point clouds generated by mmWave radars. State-of-the-art gesture recognition models are either too resource-consuming or not sufficiently accurate for integration into real-life scenarios on wearable or constrained equipment such as IoT devices (e.g., Raspberry Pi), XR hardware (e.g., HoloLens), or smartphones. To tackle this issue, we developed Tesla, a message-passing neural network (MPNN) graph convolution approach for mmWave radar point clouds. The model outperforms the state of the art on three datasets in terms of accuracy while reducing computational complexity and, hence, execution time. In particular, the approach is able to predict a gesture almost 8 times faster than the most accurate competitor. Our performance evaluation in different scenarios (environments, angles, distances) shows that Tesla generalizes well and improves accuracy by up to 20% in challenging scenarios, such as a through-wall setting and sensing at extreme angles. Utilizing Tesla, we developed Tesla-Rapture, a real-time implementation using a mmWave radar on a Raspberry Pi 4, and evaluated its accuracy and time complexity. We also publish the source code, the trained models, and the implementation of the model for embedded devices.
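A minimal sketch of the message-passing idea behind Tesla: the points of a sparse mmWave cloud become graph nodes, edges connect each point to its k nearest neighbours, and a layer updates each node from messages computed over its edges (EdgeConv-style). The layer below is an illustrative assumption in plain PyTorch, not the published Tesla model.

```python
# One message-passing layer over a k-NN graph of radar points (sketch).
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        # Message function over (node feature, neighbour - node) pairs.
        self.message_mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_points, in_dim); build a k-NN graph from feature distance.
        dists = torch.cdist(x, x)
        idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]  # drop self
        neighbours = x[idx]                             # (n, k, in_dim)
        centers = x.unsqueeze(1).expand_as(neighbours)  # (n, k, in_dim)
        messages = self.message_mlp(
            torch.cat([centers, neighbours - centers], dim=-1))
        return messages.max(dim=1).values               # (n, out_dim)

# Usage: 64 radar points with (x, y, z, doppler) features.
cloud = torch.randn(64, 4)
layer = MessagePassingLayer(4, 32)
print(layer(cloud).shape)  # torch.Size([64, 32])
```

Max aggregation over neighbour messages keeps the layer insensitive to the highly variable point counts of sparse radar frames, which is one reason graph approaches suit this data better than dense voxel grids.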