In-bed motion detection and classification are important techniques that enable an array of applications, among them sleep monitoring and abnormal-movement detection. In this paper, we present a low-cost, low-overhead, and highly robust system for in-bed movement detection and classification that uses low-end load cells. To detect movements, we design a feature, which we refer to as Log-Peak, that can be extracted in an energy-efficient manner from load cell data collected over wireless links. After detection, we aim for precise body-motion classification: we define 9 classes of movements and design machine learning classifiers using Support Vector Machine (SVM), Random Forest, and XGBoost techniques to assign each movement to one of the 9 classes. For every movement, we extract 24 features and use them in our models. We evaluated the detection/classification system on data collected from 40 subjects, each of whom performed 35 predefined movements per experiment. We tuned multiple tree topologies for each technique to obtain its best results and, after examining various combinations, achieved a final classification accuracy of 91.5%. This system can be used conveniently for long-term home monitoring.
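To make the detection stage concrete, here is a minimal sketch of a Log-Peak-style detector. The feature definition used here (log of a window's peak deviation from its mean), the window size, and the threshold are all illustrative assumptions, not the paper's exact design:

```python
import math

def log_peak(window, eps=1e-9):
    """Hypothetical Log-Peak feature: log of the peak deviation of a
    load-cell window from its mean (the paper's exact definition may differ)."""
    mean = sum(window) / len(window)
    peak = max(abs(s - mean) for s in window)
    return math.log(peak + eps)

def detect_movements(samples, win=8, threshold=0.0):
    """Slide a non-overlapping window over load-cell samples; flag windows
    whose Log-Peak exceeds the threshold as candidate movements."""
    events = []
    for i in range(0, len(samples) - win + 1, win):
        if log_peak(samples[i:i + win]) > threshold:
            events.append(i)
    return events

# Quiet baseline with a burst of movement in the middle.
signal = [50.0] * 16 + [50, 58, 65, 55, 48, 60, 52, 50] + [50.0] * 16
print(detect_movements(signal))  # → [16]
```

On a quiet baseline the feature sits near log(eps) and stays below the threshold; the burst in the middle window pushes it well above zero, so only that window is flagged.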
Urban Mobility Models (UMMs) are fundamental tools for estimating the population at urban sites and its spatial movements over time. Most existing UMMs were developed primarily in 2D. However, we argue that people's movements and living patterns involve 3D space, i.e., buildings, which can heavily affect the accuracy of UMMs. In this paper, we conduct, for the first time, a comprehensive study of the impacts of buildings on human movements and their effect on UMMs. We capture these impacts by developing a Semi-absorbing Urban Mobility model (SUM) and theoretically prove how its properties differ from those of previous UMMs. We also show that calibrating our original SUM may require a large number of parameters. As such, we develop two SUM extensions with a substantially reduced number of parameters, making calibration practical. Our evaluation also demonstrates that, as a basis for supporting mobile applications at an intracity, hourly scale, SUM is far superior to previous UMMs. In a case study, we further show that SUM substantially improves the performance of a resource allocation scheme in a cellular network, reducing the packet loss probability by a factor of 3.19.
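To illustrate the semi-absorbing idea, the toy walk below lets a walker be "absorbed" (e.g., by entering a building) for a fixed dwell time before resuming movement. The 1D geometry, absorption probability, and dwell length are made-up parameters for this sketch; SUM itself is defined over real urban maps:

```python
import random

def semi_absorbing_walk(steps, p_absorb=0.3, dwell=5, seed=1):
    """Toy semi-absorbing mobility: a walker moves on a line but, with
    probability p_absorb per step, enters a building and stays put for
    `dwell` steps in total before moving again (parameters and geometry
    are illustrative, not SUM's actual definition)."""
    random.seed(seed)
    pos, trace, stuck = 0, [0], 0
    for _ in range(steps):
        if stuck > 0:
            stuck -= 1                  # still inside a building
        elif random.random() < p_absorb:
            stuck = dwell - 1           # absorbed: this step plus dwell-1 more
        else:
            pos += random.choice([-1, 1])
        trace.append(pos)
    return trace

print(semi_absorbing_walk(10, p_absorb=1.0))  # certain absorption: walker never moves
```

With p_absorb = 0 the model degenerates to an ordinary 2D-style random walk; raising p_absorb concentrates the population in place, which is the effect buildings have on hourly-scale mobility.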
With the development of the Internet of Things (IoT) and the emergence of various new IoT devices, networks face the challenge of serving massive numbers of IoT devices. Fortunately, edge computing can mitigate problems such as delay and connectivity by offloading part of the computational tasks to edge nodes close to the data source. Using this capability, IoT devices can save resources while still maintaining quality of service. However, since computation offloading decisions involve joint and complex resource management, we use multiple Deep Reinforcement Learning (DRL) agents deployed on IoT devices to guide their own decisions. In addition, Federated Learning (FL) is used to train the DRL agents in a distributed fashion, making DRL-based decision making practical and further decreasing the transmission cost between IoT devices and edge nodes. In this paper, we first study the computation offloading optimization problem and prove that it is NP-hard. Then, based on DRL and FL, we propose an offloading algorithm that differs from traditional methods. Finally, we study the effects of various parameters on the performance of the algorithm and verify the effectiveness of both DRL and FL in the IoT system.
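The federated training step can be sketched as FedAvg-style weight averaging: each device trains its DRL agent locally, and the edge node aggregates only model parameters, never raw observations. The flat weight lists and the plain average below are simplifying assumptions about the paper's actual aggregation rule:

```python
def fed_avg(local_weights):
    """Federated averaging: combine per-device model weights into a
    global model by element-wise mean. Only weights travel to the edge
    node, which is what cuts the device-to-edge transmission cost."""
    n = len(local_weights)
    return [sum(ws) / n for ws in zip(*local_weights)]

# Three devices, each with a (toy) 4-parameter policy network.
device_models = [
    [1.0, 2.0, 3.0, 4.0],
    [3.0, 2.0, 1.0, 4.0],
    [2.0, 2.0, 2.0, 4.0],
]
print(fed_avg(device_models))  # → [2.0, 2.0, 2.0, 4.0]
```

In a full system this averaging would run periodically, with the global model pushed back to the devices as the starting point for the next round of local DRL updates.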
Current smartphone-based navigation applications fail to provide lane-level information due to poor GPS accuracy. Detecting and tracking a vehicle's lane position on the road assists in lane-level navigation. For instance, it is important to know whether a vehicle is in the correct lane for safely making a turn, or whether the vehicle's speed complies with a lane-specific speed limit. Recent efforts have used road network information and inertial sensors to estimate lane position. While inertial sensors can detect lane shifts over short windows, they suffer from error accumulation over time. In this paper, we present DeepLane, a system that leverages the back camera of a windshield-mounted smartphone to accurately estimate the vehicle's current lane. We employ a deep-learning-based technique to classify the vehicle's lane position. DeepLane does not depend on any infrastructure support such as lane markings and works even when there are none, a characteristic of many roads in developing regions. We perform an extensive evaluation of DeepLane on real-world datasets collected in developed and developing regions. DeepLane detects the vehicle's lane position with an accuracy of over 90%, and we have implemented it as an Android app.
Patients with respiratory diseases require accurate measurement and frequent monitoring of blood oxygen levels. Existing techniques, however, either need dedicated hardware or fail to predict low saturation levels. To fill this gap, we propose a phone-based oxygen level estimation system, called PhO2, using the camera and flashlight functions readily available on today's off-the-shelf smartphones. Since the phone's camera and flashlight are not made for this purpose, using them for oxygen level estimation poses many difficulties. We introduce a cost-effective add-on together with a set of algorithms for spatial and spectral optical signal modulation that amplify the optical signal of interest while minimizing noise. A near-field-based pressure detection and feedback mechanism is also proposed to mitigate the negative impact of user behavior during the measurement. We also derive a non-linear referencing model with an outlier removal technique that allows PhO2 to accurately estimate the oxygen level from the color intensity ratios produced by the smartphone's camera. An evaluation on COTS smartphones with 6 subjects shows that PhO2 can estimate oxygen saturation within a 3.5% error rate compared to FDA-approved gold-standard pulse oximetry. In addition, our evaluation in hospitals shows a high correlation with ground truth, qualified by a Kendall τ coefficient of 0.83/1.0.
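For intuition, the sketch below uses the textbook pulse-oximetry pipeline: a ratio-of-ratios R from red and infrared channels, followed by a linear calibration SpO2 ≈ a − b·R with crude median-based outlier rejection. The coefficients, the outlier rule, and the linear form are illustrative assumptions; PhO2 fits a non-linear referencing model instead:

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Classic pulse-oximetry ratio R = (AC_red/DC_red) / (AC_ir/DC_ir),
    computed from the pulsatile (AC) and steady (DC) intensity components."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(ratios, a=110.0, b=25.0):
    """Textbook linear calibration SpO2 = a - b*R, after discarding ratios
    far from the median (coefficients and the 0.2 cutoff are illustrative)."""
    ratios = sorted(ratios)
    median = ratios[len(ratios) // 2]
    kept = [r for r in ratios if abs(r - median) < 0.2]
    r_mean = sum(kept) / len(kept)
    return a - b * r_mean

# Three plausible ratios and one motion-artifact outlier (1.5) to reject.
print(round(spo2_estimate([0.5, 0.52, 0.48, 1.5]), 1))  # → 97.5
```

The outlier rejection here plays the same role as PhO2's removal technique: a single bad frame (e.g., from finger pressure changes) would otherwise drag the estimate far from the true saturation.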
Access to large amounts of real-world data has long been a barrier to the development and evaluation of analytics applications for the built environment. Open data sets exist, but they are limited in their span (how much data is available) and context (what kind of data is available and how it is described). Evaluation of such analytics is also limited by how the analytics themselves are implemented, often using hard-coded names of building components, points and locations, or unique input data formats. To advance the methodology for how such analytics are implemented and evaluated, we present Mortar: an open testbed for portable building analytics, currently spanning 90 buildings and containing over 9.1 billion data points. All buildings in the testbed are described using Brick, a recently developed metadata schema, providing rich functional descriptions of building assets and subsystems. We also propose a simple architecture for writing portable analytics applications that are robust to the diversity of buildings and can configure themselves based on context. We demonstrate the utility of Mortar by implementing 11 applications from the literature.
Jamming may become a serious threat in Internet of Things networks of battery-powered nodes, as attackers can disrupt packet delivery and significantly reduce the lifetime of the nodes. In this work, we model an active defense scenario in which an energy-limited node uses power control to defend itself from a malicious attacker, whose energy constraints may not be known to the defender. The interaction between the two nodes is modeled as an asymmetric Bayesian game in which the victim has incomplete information about the attacker. We show how to derive the optimal Bayesian strategies for both the defender and the attacker, which may then serve as guidelines to develop and gauge efficient heuristics that are less computationally expensive than the optimal strategies. For example, we propose a neural network-based learning method that allows the node to effectively defend itself from jamming with a significantly reduced computational load. The outcomes of the ideal strategies highlight the trade-off between node lifetime and communication reliability and the importance of an intelligent defense against jamming attacks.
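The defender's computation under incomplete information can be illustrated as expected-utility maximization over attacker types. The action set, the belief over types, and the payoff numbers below are invented for the example; they are not the actual game parameters:

```python
def bayes_best_response(actions, type_belief, utility):
    """Pick the defender action maximizing expected utility, weighting
    the per-type payoff by the defender's belief over attacker types
    (a toy Bayesian best response with made-up payoffs)."""
    def expected(a):
        return sum(p * utility[(a, t)] for t, p in type_belief.items())
    return max(actions, key=expected)

# Defender chooses low/high transmit power against an attacker whose
# energy budget (constrained vs. unconstrained) is uncertain.
belief = {"constrained": 0.7, "unconstrained": 0.3}
payoff = {
    ("low",  "constrained"):  2.0, ("low",  "unconstrained"): -1.0,
    ("high", "constrained"):  1.0, ("high", "unconstrained"):  1.5,
}
print(bayes_best_response(["low", "high"], belief, payoff))  # → high
```

Here "low" saves energy but loses badly against an unconstrained jammer, so the 30% chance of that type tips the expected utility (1.15 vs. 1.1) toward "high" — the same lifetime-vs-reliability trade-off the optimal strategies expose.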
Lawns, also known as turf, cover an estimated 128,000 square kilometers in North America alone, and landscaping accounts for 30% of freshwater consumed in the residential domain. This consumption creates strong environmental, economic, and social incentives to make turf irrigation systems as efficient as possible. Recent work introduced the concept of distributed control in irrigation systems, but existing control strategies either do not take advantage of the distributed control or don't revise the strategy over time in response to collected data. In this work, we introduce OPTICS, a data-driven control strategy that self-improves over time, adapts to local site conditions and weather changes, and requires virtually no human input in either setup or maintenance, providing a plug-and-play system with minimal pre-deployment effort. In addition to substantial improvements in ease of use, we find across 4 weeks of large-scale irrigation system deployment that OPTICS improves system efficiency by 12.0% compared to the industry best and by 3.3% compared to the academic state of the art. Despite using less water, OPTICS was also found to improve quality of service by a factor of 4.0x compared to the industry best and 2.5x compared to the academic state of the art.
The Internet of Things (IoT) has created a new paradigm of integrated sensing and actuation systems. One viable application is IoT-empowered smart lighting, which provides automated illuminance control tailored to users' preferences. Despite the usefulness of these systems, practical deployment usually precludes the inclusion of sophisticated but costly location-aware sensors capable of mapping out the details of a dynamic environment. Instead, cheap oblivious mobile sensors are often used, which are plagued by uncertainty in the locations of sensors and light bulbs. The presence of these sensors impedes the design of effective smart lighting systems for uncertain indoor environments. In this article, we shed light on adaptive control algorithms for smart lighting systems based on oblivious mobile sensors. We first formulate a general model capturing an oblivious multi-sensor illuminance control problem and a robust adaptive control framework that is agnostic to a dynamic surrounding environment with unknown parameters. With this model, we devise efficient algorithms for an adaptive illuminance control system that minimizes the energy consumption of light bulbs while satisfying users' preferences. We then study our algorithms through extensive empirical evaluation in a proof-of-concept smart lighting testbed with programmable smart light bulbs and mobile light sensors.
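A minimal feedback sketch of the control loop: each sensor's reading is modeled as a weighted sum of bulb outputs, and bulbs are nudged toward the users' target readings. The gain matrix, step size, and iteration count are illustrative assumptions, not the paper's algorithm:

```python
def adapt_brightness(gains, targets, levels, rate=0.5, iters=200):
    """Toy adaptive illuminance control: gains[i][j] is bulb i's (in
    practice uncertain) contribution to sensor j; each pass nudges bulb
    levels, clamped to [0, 1], to shrink each sensor's error toward the
    user's target reading."""
    for j, target in enumerate(targets):
        pass  # targets indexed per sensor; adaptation below
    for _ in range(iters):
        for j, target in enumerate(targets):
            reading = sum(g[j] * lv for g, lv in zip(gains, levels))
            err = target - reading
            for i in range(len(levels)):
                step = rate * err * gains[i][j] / len(targets)
                levels[i] = min(1.0, max(0.0, levels[i] + step))
    return levels

# Two bulbs, two sensors; each sensor dominated by one bulb.
print([round(l, 2) for l in
       adapt_brightness([[1.0, 0.0], [0.0, 1.0]], [0.6, 0.3], [0.0, 0.0])])  # → [0.6, 0.3]
```

Because the update only uses sensor readings, it never needs the bulb or sensor positions explicitly — the flavor of control that oblivious mobile sensors force on the system.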
We present PC-RPL, a transmission power-controlled IPv6 routing protocol for low-power and lossy wireless networks (LLNs) that significantly improves end-to-end packet delivery performance compared to the standard RPL. We show through actual design, implementation, and experiments that a multihop wireless network can achieve better bandwidth and routing stability when transmission power and routing topology are "jointly and adaptively" controlled. Our experiments show that the predominant "fixed and uniform" transmission power strategy with "link quality and hop distance"-based routing topology construction loses significant bandwidth due to hidden-terminal and load-imbalance problems. We design an adaptive and distributed control mechanism for transmission power and routing topology, named PC-RPL, on top of the standard RPL routing protocol for hidden-terminal mitigation and load balancing. We implement PC-RPL on real embedded devices and evaluate its performance on a 49-node multihop testbed. PC-RPL reduces total end-to-end packet losses by a factor of 7 without increasing hop distance compared to RPL with the highest transmission power, resulting in a 17% improvement in aggregate bandwidth and a 64% improvement for the worst-case node by successfully alleviating both hidden-terminal and load-imbalance problems.
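One ingredient of joint power/topology control can be sketched as a per-link power adaptation rule: raise transmit power when loss is high, lower it when the link is comfortably reliable. The thresholds, step size, and dBm bounds below are illustrative and are not PC-RPL's actual control law:

```python
def adjust_tx_power(power, loss_rate, lo=0.1, hi=0.3, step=2, pmin=-16, pmax=8):
    """Toy additive transmit-power adaptation (dBm). High loss raises
    power toward pmax; a very reliable link lowers it, which also
    shrinks the interference footprint that causes hidden terminals.
    All numeric parameters are illustrative assumptions."""
    if loss_rate > hi:
        return min(pmax, power + step)
    if loss_rate < lo:
        return max(pmin, power - step)
    return power

print(adjust_tx_power(0, 0.5))   # lossy link: step power up → 2
print(adjust_tx_power(0, 0.05))  # reliable link: step power down → -2
```

In PC-RPL this kind of adaptation runs alongside routing-topology changes, since lowering one node's power shifts both its interference range and the load its children impose on parents.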
Energy-neutral Internet of Things requires freeing embedded devices from batteries and powering them from ambient energy. Ambient energy is, however, unpredictable and can only power a device intermittently. Therefore, the paradigm of intermittent execution is to save the program state into non-volatile memory frequently to preserve execution progress. In task-based intermittent programming, the state is saved at task transitions. Tasks are fixed at compile time and agnostic to energy conditions; thus, the state may be saved either more often than necessary or not often enough for the program to progress and terminate. To address these challenges, we propose Coala, an adaptive and efficient task-based execution model. Coala progresses on a multi-task scale when energy permits and preserves computation progress on a sub-task scale when necessary. Coala's specialized memory virtualization mechanism ensures that power failures do not leave the program state in non-volatile memory inconsistent. Our evaluation on a real energy-harvesting platform not only shows that Coala reduces run time by up to 54% compared to a state-of-the-art system, but also that it progresses where static systems fail.
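The consistency guarantee at task boundaries can be illustrated with a toy two-copy commit: tasks work on a scratch copy of a committed snapshot, and the snapshot is replaced only when the task completes. The class and commit scheme below are illustrative, not Coala's actual memory virtualization mechanism:

```python
class IntermittentRunner:
    """Toy task-based intermittent execution: tasks read the committed
    snapshot and write a scratch copy that is 'committed' only at task
    boundaries, so a power failure mid-task never leaves the simulated
    non-volatile state inconsistent."""

    def __init__(self, state):
        self.committed = dict(state)    # plays the role of NVM

    def run_task(self, task, fail=False):
        scratch = dict(self.committed)  # task works on a volatile copy
        task(scratch)
        if fail:                        # power failure before commit:
            return                      # partial work is simply discarded
        self.committed = scratch        # atomic task-boundary commit

def increment(state):
    state["count"] += 1

r = IntermittentRunner({"count": 0})
r.run_task(increment)               # commits count = 1
r.run_task(increment, fail=True)    # power fails: progress rolled back
r.run_task(increment)               # re-executes and commits count = 2
print(r.committed["count"])  # → 2
```

A static task-based system is stuck with this fixed granularity; Coala's contribution is adapting it, coalescing tasks when energy is plentiful and checkpointing at sub-task scale when it is not.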
Transportation and distribution (T&D) of fresh food products is a substantial and growing part of economic activity throughout the world. Unfortunately, fresh food T&D suffers not only from significant spoilage and waste but also from dismal efficiency due to tight transit-timing constraints between the availability of harvested food and its delivery to the retailer. Fresh food is also easily contaminated and, together with deteriorated fresh food, is responsible for much of foodborne illness. Logistics operations are undergoing rapid transformation on multiple fronts, including the infusion of information technology into logistics operations, automation in physical product handling, standardization of labeling, addressing, and packaging, and shared logistics operations under third-party logistics (3PL) and related models. In this paper, we discuss how these developments can be exploited to turn fresh food logistics into an intelligent cyberphysical system driven by online monitoring and associated operational control to enhance food freshness and safety, reduce food waste, and increase T&D efficiency. Among the issues discussed in this context are fresh food quality deterioration processes, food quality/contamination sensing technologies, communication technologies for transmitting sensed data through the challenging fresh food media, intelligent management of the T&D pipeline, and other operational issues.