Browsing by Author "Chiang, Yi-Han"
Now showing 1 - 3 of 3
Item
Chameleon: Latency and Resolution Aware Task Offloading for Visual-Based Assisted Driving (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019-09)
Authors: Zhu, Chao; Chiang, Yi-Han; Mehrabi, Abbas; Xiao, Yu; Ylä-Jääski, Antti; Ji, Yusheng
Affiliations: Department of Communications and Networking; Department of Computer Science; Mobile Cloud Computing; Helsinki Institute for Information Technology (HIIT); Professorship Ylä-Jääski A.; Computer Science - Computing Systems (ComputingSystems); National Institute of Informatics

Emerging visual-based driving assistance systems involve time-critical and data-intensive computational tasks, such as real-time object recognition and scene understanding. Due to constraints on space and power capacity, it is not feasible to install extra computing devices on all vehicles. To address this problem, various vehicular fog computing scenarios have been proposed, in which computational tasks generated by vehicles are sent to and processed at fog nodes located, for example, at 5G cell towers or on moving buses. In this paper, we propose Chameleon, a novel task offloading solution for visual-based assisted driving. Chameleon takes into account the spatiotemporal variation in service demand and supply, and provides latency and resolution aware task offloading strategies based on a partially observable Markov decision process (POMDP). To evaluate the effectiveness of Chameleon, we simulate the availability of vehicular fog nodes at different times of day based on bus trajectories collected in Helsinki, and use real-world performance measurements of visual data transmission and processing.
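The abstract does not spell out the POMDP, but the flavor of belief tracking over a partially observable fog node, followed by a latency/resolution-aware action choice, can be sketched as follows. The state space, observation and transition models, and the utility weight `alpha` are all invented for illustration; this is not the paper's actual formulation:

```python
# Toy sketch of POMDP-style offloading: track a belief over a fog node's hidden
# load state, then pick the action with the best expected latency/resolution
# trade-off. All states, models, and weights here are invented for illustration.

def belief_update(belief, observation, obs_model, trans_model):
    """One Bayes-filter step: predict through the transition model, then
    reweight by the likelihood of the new observation."""
    predicted = {
        s2: sum(belief[s1] * trans_model[s1][s2] for s1 in belief)
        for s2 in trans_model
    }
    unnorm = {s: p * obs_model[s][observation] for s, p in predicted.items()}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

def choose_action(belief, actions, latency, resolution, alpha=0.5):
    """Pick the action maximizing expected utility, where utility rewards
    resolution and penalizes latency, weighted by alpha."""
    def expected_utility(action):
        return sum(
            belief[s] * (alpha * resolution[s][action]
                         - (1 - alpha) * latency[s][action])
            for s in belief
        )
    return max(actions, key=expected_utility)

# A fog node is either "idle" or "busy"; the vehicle only observes ack timing.
trans = {"idle": {"idle": 0.8, "busy": 0.2}, "busy": {"idle": 0.3, "busy": 0.7}}
obs = {"idle": {"fast_ack": 0.9, "slow_ack": 0.1},
       "busy": {"fast_ack": 0.2, "slow_ack": 0.8}}
latency = {"idle": {"local": 5, "fog": 1}, "busy": {"local": 5, "fog": 10}}
resolution = {"idle": {"local": 1, "fog": 4}, "busy": {"local": 1, "fog": 4}}

belief = belief_update({"idle": 0.5, "busy": 0.5}, "slow_ack", obs, trans)
action = choose_action(belief, ["local", "fog"], latency, resolution)
```

The belief stands in for direct knowledge of the node's load, which the vehicle cannot observe; a slow acknowledgment shifts probability mass toward "busy", and the action choice then falls back to local processing rather than offloading into a long queue.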
Compared with adaptive and random task offloading strategies, the POMDP-based strategies provided by Chameleon shorten the average service latency of task offloading by up to 65% while increasing the average resolution level of processed images by up to 83%.

Item
FlexSensing: A QoI and Latency Aware Task Allocation Scheme for Vehicle-based Visual Crowdsourcing via Deep Q-Network (IEEE, 2021-05-01)
Authors: Zhu, Chao; Chiang, Yi-Han; Xiao, Yu; Ji, Yusheng
Affiliations: Department of Communications and Networking; Mobile Cloud Computing; Osaka Prefecture University; National Institute of Informatics

Vehicle-based visual crowdsourcing is an emerging paradigm in which visual data collected from dash cameras are analyzed to measure phenomena of common interest. Ensuring efficiency in vehicle-based visual crowdsourcing poses at least two technical challenges. First, to maximize the quality of information (QoI), which measures the amount of information extracted from the collected data, the context of data collection (e.g., camera position and orientation) must be taken into account during task allocation. Second, intensive data collection from dense measurement points is key to timely and accurate sensing of the targets of interest, yet there is a trade-off between the amount and rate of data collection and the computing and communication resources required to meet the latency constraint. To address these challenges, we propose gathering and processing the collected data at the edge of the network and design a context-aware task allocation scheme, called FlexSensing, to jointly optimize QoI and processing latency. We target application scenarios where commercial vehicles are turned into vehicular fog nodes (VFNs). These nodes gather and process the visual data collected from other vehicles within their coverage areas.
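The allocation step underlying this setup, matching each sensing vehicle's processing task to a VFN that covers it while balancing VFN workload, can be illustrated with a toy baseline. FlexSensing itself learns the allocation with a deep Q-network; the greedy least-loaded rule below only shows the shape of the problem, and all positions, the coverage radius, and task costs are invented:

```python
# Toy sketch of the VFN assignment problem: each sensing vehicle's processing
# task goes to a vehicular fog node (VFN) within coverage, balancing workload.
# FlexSensing learns this policy with a DQN; this greedy least-loaded rule is
# only a stand-in, with invented positions, radius, and task costs.
import math

def assign_tasks(tasks, vfns, radius=100.0):
    """Assign each task to the least-loaded VFN whose coverage area contains
    the task's vehicle; tasks with no covering VFN get None."""
    load = {name: 0.0 for name in vfns}   # accumulated workload per VFN
    assignment = {}
    for task_id, (pos, cost) in tasks.items():
        candidates = [n for n, p in vfns.items() if math.dist(pos, p) <= radius]
        if not candidates:
            assignment[task_id] = None    # vehicle outside every coverage area
            continue
        best = min(candidates, key=lambda n: load[n])
        load[best] += cost
        assignment[task_id] = best
    return assignment

vfns = {"bus1": (0, 0), "bus2": (150, 0)}   # two buses acting as VFNs
tasks = {"t1": ((10, 0), 5.0),   # only bus1 covers this vehicle
         "t2": ((80, 0), 2.0),   # both cover it; bus2 carries less load
         "t3": ((500, 0), 1.0)}  # no coverage at all
plan = assign_tasks(tasks, vfns)
```

A learned policy improves on this baseline by also weighing the estimated QoI of each vehicle's data and the latency consequences of queueing, which the greedy rule ignores.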
The key idea of FlexSensing is to determine the rate of data collection for each sensing vehicle in the targeted area and to assign processing tasks to VFNs based on the estimated QoI and the workload of the VFNs. Given the excessive computational complexity of task allocation in this context, we formulate task allocation as a Markov decision process and apply a deep Q-network (DQN) to learn task allocation strategies that increase the QoI of the collected data while reducing processing latency. To evaluate the effectiveness of FlexSensing, we simulate the mobility of the vehicles involved at different times of the day based on real-world traffic data collected from the city of Helsinki, and select a real-time object detection application as a case study. Compared with existing task allocation strategies, the DQN-based strategies reduce the average processing latency by up to 51% and increase the QoI of the collected data by up to 34%.

Item
SVP: Sinusoidal Viewport Prediction for 360-Degree Video Streaming (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2020)
Authors: Jiang, Xiaolan; Naas, Si Ahmed; Chiang, Yi-Han; Sigg, Stephan; Ji, Yusheng
Affiliations: National Institute of Informatics; Department of Communications and Networking; Osaka Prefecture University; Ambient Intelligence

The rapid growth of user expectations and advances in network technologies have driven demand for 360-degree video streaming services. Given the unprecedented bitrates required to deliver entire 360-degree videos, tile-based streaming, which delivers viewport and non-viewport tiles at different qualities, has emerged as a promising way to make 360-degree video streaming practical. Existing work on viewport prediction primarily targets prediction accuracy, which can give rise to excessive computational overhead and latency.
In this paper, we propose a sinusoidal viewport prediction (SVP) system for 360-degree video streaming to overcome these issues. In particular, the SVP system leverages 1) sinusoidal values of rotation angles to predict orientation, 2) the relationship among prediction errors, prediction time windows, and head movement velocities to improve prediction accuracy, and 3) the normalized viewing probabilities of tiles to further improve adaptive bitrate (ABR) streaming performance. To evaluate the performance of the SVP system, we conduct extensive simulations based on real-world datasets. Simulation results demonstrate that the SVP system outperforms state-of-the-art schemes under various buffer thresholds and bandwidth settings in terms of viewport prediction accuracy and video quality, revealing its applicability to both live and video-on-demand streaming in practical scenarios.
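The sinusoidal idea in the title can be illustrated on its own: extrapolating head orientation in (sin, cos) space instead of raw degrees avoids the wraparound discontinuity at 0/360 degrees that breaks naive linear prediction. The least-squares fit and sampling layout below are an illustrative reconstruction, not the authors' algorithm:

```python
# Sketch of the sinusoidal trick: fit and extrapolate yaw in (sin, cos) space,
# then map back with atan2, so the 0/360-degree wraparound cannot corrupt a
# linear prediction. Illustrative reconstruction, not the SVP system itself.
import math

def _linear_fit(xs, ys):
    """Least-squares slope k and intercept b for y = k * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return k, my - k * mx

def predict_yaw(samples, horizon):
    """Extrapolate yaw (degrees) `horizon` time units past the last sample.

    samples: list of (time, yaw_degrees) pairs of head-orientation history.
    """
    ts = [t for t, _ in samples]
    # Fit the sinusoidal components, not the raw angle.
    sins = [math.sin(math.radians(a)) for _, a in samples]
    coss = [math.cos(math.radians(a)) for _, a in samples]
    ks, bs = _linear_fit(ts, sins)
    kc, bc = _linear_fit(ts, coss)
    t_pred = ts[-1] + horizon
    # atan2 maps the extrapolated components back to an angle in [0, 360).
    return math.degrees(math.atan2(ks * t_pred + bs, kc * t_pred + bc)) % 360
```

A head sweeping through north (350°, 355°, 0°, 5°) is exactly the case that corrupts a fit on raw degrees; in (sin, cos) space the same motion stays smooth and extrapolates to roughly 10°.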