Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment
Andrew Raij, Animikh Ghosh, Santosh Kumar, Mani Srivastava
Presented at CHI 2011, May 7-12, 2011, Vancouver, British Columbia, Canada
Author Bios
- Andrew Raij is a Postdoctoral Fellow in the University of Memphis's Computer Science Department and a member of Dr. Kumar's lab. He is interested in persuasive interfaces.
- Animikh Ghosh is a junior research associate at SETLabs and was a research assistant to Dr. Kumar at the University of Memphis. He is interested in privacy risks from participatory sensing.
- Santosh Kumar is an Associate Professor in the University of Memphis's Computer Science Department. He leads the Wireless Sensors and Mobile Ad Hoc Networks Lab.
- Mani Srivastava is a professor in UCLA's Electrical Engineering Department. He worked at Bell Labs, which he considers to be the driving force behind his interest in mobile and wireless systems.
Summary
Hypothesis
How comfortable are individuals with the possibility that their data may be made public? How can we reduce the risk of sensitive data leakage?
Methods
The authors administered a survey measuring comfort with potential data compromises to three groups: people whose data had been stored (collected while they participated in a concurrent AutoSense study), people with no stored data, and the stored-data group again after being informed of the full extent of the data collected about them; the stored-data participants thus took the survey twice. The participants were college students. The survey measured their level of concern about data disclosure both with and without restrictions and abstractions applied. Participants were shown the data collected about them through the authors' Aha visualization system.
Results
Positively-associated activities, like exercise, were acceptable to share, as was location. The group with no stake in the data and the group who had not yet learned the extent of the data collected about them reported similarly low levels of concern about data storage. After learning about their data, the second group reported higher concern. Some participants noted that they expected physiological states to remain private. Adding temporal context increased concern, while increasing abstraction reduced it; a duration was less worrisome than an exact timestamp. Making data sets more public also concerned users, especially when identity was included in the data. With the exception of location, participants seemed initially naive to the dangers of shared data. The differing levels of concern about particular activities suggest that privacy protections should be tailored to each study.
Contents
Wearable sensors record sensitive physical and physiological data to which machine learning algorithms can be applied. These algorithms reveal a wealth of private information about behavioral states and activities, including stress levels and addictions. The resulting inferences can be shared without the user's permission, potentially revealing private data or identifying the individual; for that reason, data sets produced from tests of wearable sensors cannot be released. Most notably, seemingly innocuous data can be combined to produce informed inferences about a person. Sensor data is hard to anonymize because it is both inherently sensitive and quasi-identifying.
The authors produced a framework that focuses on how to shift the boundary where privacy and publicity are in tension. It covers measurements, behaviors, contexts, restrictions, abstractions, and privacy threats. Behaviors and contexts are derived from measurements, and contexts can be further subdivided into temporal, physical, physiological, and social contexts. Restrictions and abstractions safeguard data: the former removes data from the set entirely, while the latter reduces the precision of what remains.
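The restriction/abstraction distinction can be made concrete with a small sketch. This is not the authors' implementation; the record fields and values below are hypothetical, chosen only to illustrate the two safeguards.

```python
# Hypothetical illustration of the framework's two safeguards.
# Field names and values are made up for this example.

def restrict(record, sensitive_fields):
    """Restriction: remove sensitive fields from the record entirely."""
    return {k: v for k, v in record.items() if k not in sensitive_fields}

def abstract_time(record):
    """Abstraction: replace exact start/end timestamps with a coarser
    duration, reducing exposure without dropping the event itself."""
    out = dict(record)
    if "start" in out and "end" in out:
        out["duration_min"] = (out.pop("end") - out.pop("start")) // 60
    return out

record = {"activity": "smoking", "start": 1_300_000_000,
          "end": 1_300_000_600, "location": "home"}

print(restrict(record, {"location"}))  # location removed outright
print(abstract_time(record))           # timestamps coarsened to a duration
```

The survey result that a duration worried participants less than a timestamp corresponds to the `abstract_time` case: the behavior is still disclosed, but the temporal context is coarser.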
The authors developed the Aha visualization system to provide four visualizations of individual behavior, including daily life and stress.
Discussion
The authors wanted to find out how concerned users are about sensor data being used to make inferences about them, and how to prevent identifiable information from being released. Their survey was well-designed and their framework seems reasonable, so I am convinced that this paper is sound.
I was very interested to see just how much could be determined about a person through seemingly unrelated data points. It was actually extremely disturbing to think that so much information could be inferred through accelerometers and stress meters.
I would be very interested in seeing this survey expanded to cover a variety of demographics. While I would think that college students would be the most knowledgeable about the extent of information they are revealing, I am curious to see what a child or senior citizen might think. Perhaps generation gaps would emerge, or perhaps everyone would prove equally unaware of the dangers. Either way, I would love to see those results.