Democratic Privacy: A Protocol-hidden Perturbation Scheme for Pervasive Computing

Shunsuke Aoki∗† and Kaoru Sezaki†
∗Electrical and Computer Engineering, Carnegie Mellon University
†Center for Spatial Information Science, The University of Tokyo
Email: [email protected], [email protected]

Abstract—Privacy has become a serious issue in mobile participatory sensing, smart grids, location-based services, and intelligent transportation systems. In response, many researchers are tackling the issue with technical approaches. In particular, data perturbation schemes, where all sensor data are processed on the user side, are promising techniques for utilizing personal data collected by embedded systems and sensor-equipped devices. However, existing data perturbation schemes require sharing the protocol between the user side and the server side, even though this can allow malicious attackers to estimate the original sensor data from the perturbed data. In this context, this paper presents a protocol-hidden perturbation framework, called Democratic Privacy, with which a service provider can acquire the original data distribution without sharing the perturbation protocol with each user. In Democratic Privacy, the perturbation protocol is selected dynamically by each user, and the original data are reconstructed based on the selections of the crowd. Consequently, malicious attackers cannot determine individual settings by eavesdropping. In addition, this paper presents Three-level Perturbation as one realization of Democratic Privacy, and evaluates it by means of simulations. The results of our evaluation demonstrate that Democratic Privacy provides a secure platform for pervasive computing environments.

Fig. 1. Concept of Democratic Privacy. Each user sends perturbed data to the Application Server, and the privacy level to the Privacy Level Storage Server. The individual relationship between the perturbed data and the privacy level is hidden.

I. INTRODUCTION

The utilization of sensor data from individual users is a significant component of smart grids [1], crowd-sensing [2], location-based services [3], [4], and transportation management services [5]. However, privacy has become a serious concern in these applications. To preserve privacy, numerous researchers have presented technical approaches. In particular, data perturbation schemes, where the sensor data are processed before transmission, are promising techniques for using personal data collected by embedded systems and sensor-equipped devices [6], [7], [8]. With data perturbation, sensor data are processed on the user side in order to preserve privacy, and only statistical information is reconstructed on the server side.

However, previous data perturbation schemes require sharing the protocol between the user side and the server side, and this can help malicious attackers estimate the original sensor values from the perturbed data. Once malicious attackers know the perturbation protocol, personal data can easily be estimated and reconstructed. To enhance privacy in pervasive computing environments, a novel data perturbation method must be designed such that the service provider does not need to share the perturbation protocol with each user.

In this context, this paper presents a protocol-hidden perturbation scheme, called Democratic Privacy, with which service providers can acquire the data distribution without sharing the perturbation protocol with each user. With our proposed scheme, the perturbation protocol is selected dynamically by each user, and statistics are acquired with the help of a crowd. To the best of our knowledge, Democratic Privacy is the first crowd-based data perturbation technique. With the proposed system, a third party cannot determine individual settings by eavesdropping, because the processed data and the processing schemes are separated on the user side before transmission.

The contributions of this paper are as follows: (1) we show that existing data perturbation techniques cannot prevent eavesdropping attacks; (2) we present the concept of Democratic Privacy, a scheme with which sensor data are processed in line with general users' selections; (3) we present an algorithm for Three-level Perturbation in order to realize Democratic Privacy; and (4) we study data integrity and reliability through theoretical analysis and simulations.

The remainder of this paper is organized as follows. Section II describes previous work related to our research. Section III presents the concept of Democratic Privacy, and Section IV reviews the preliminary data perturbation techniques on which it builds. Section V describes the protocol for Three-level Perturbation. The simulations and evaluations are described in Section VI, and Section VII provides a discussion of the results. Finally, Section VIII concludes the paper and presents future work.

II. RELATED WORK

Research related to privacy in pervasive computing environments falls into two general categories: designing theoretical algorithms, and implementing secure personal database systems.

The theoretical approaches are evaluated on the basis of mathematical and quantitative measures. Algorithmic privacy techniques based on data perturbation, such as Randomized Response [6], [9] and k-anonymity [10], have been developed for data mining. In data perturbation, all raw data are processed to preserve privacy, and the data are utilized statistically. Differential privacy [11] aims to provide maximally accurate responses to queries on a statistical database, while minimizing the ability of the querying users to identify individual records in the database. Encryption techniques [2] have been widely developed to protect user privacy from malicious attackers during transmission.

Database systems for preserving privacy have also improved considerably. Such systems allow users to decide on the granularity and purposes of use for their own sensor data [3]. The design of such systems essentially assumes the presence of a trusted third party, because users must provide all their data to the system server [4]. Existing systems provide user-friendly interfaces enabling users to manage their data superficially, but users are still required to deposit their personal sensor data in the central server.

III. CONCEPT OF DEMOCRATIC PRIVACY

This section presents the concept and system overview of Democratic Privacy. Democratic Privacy preserves privacy in pervasive computing environments with the help of a crowd. A system meeting this concept enables service providers to collect personal data without sharing the protocol with each user, because the data are reconstructed based solely on the statistics of the privacy settings. With Democratic Privacy, each user is able to set their own privacy level, and the statistics of these settings are used to reconstruct the data. Users who set a low privacy level provide more information than users who set a high privacy level.

By contrast, existing systems [6], [7], [8] have been designed to fix the perturbation protocol before collecting the sensor data. As such, it is difficult to change the protocol once an application is opened to the public. Malicious attackers can easily estimate the original data by eavesdropping once they know the perturbation protocol. In addition, attackers might reveal the perturbation protocol itself with a long-term monitoring attack.

To enhance privacy, a novel perturbation framework is needed, with which general users can change the perturbation protocol without notifying the service provider.

The system overview of Democratic Privacy is illustrated in Fig. 1. Each user submits perturbed data to the Application Server (AP server) after data processing on the user side, and separately reports the privacy level to the Privacy Level Storage Server (PLS server). With this system, no one is able to link the processed data with the perturbation protocol after data transmission. The PLS server gathers the privacy level set by each user, and forwards the statistics of the privacy levels, named the Public Privacy Value (PPV), to the AP server. The PPV can be open to the public, because the value is merely statistical. At the same time, the AP server gathers the processed data and reconstructs the original data distributions using the statistics of the privacy levels.

IV. PRELIMINARIES: DATA PERTURBATION

The proposed Democratic Privacy is based on two data perturbation techniques: Randomized Response [9] and Negative Surveys [7], [12]. These existing techniques currently require sharing the protocol between each user and the server administrator. With existing perturbation techniques, all original data are processed for privacy protection before transmission, and the data are reconstructed as statistics. Data perturbation techniques enable us to reveal the statistics while keeping personal data secret [13].

Randomized Response, which was originally designed for categorical data, is one of the most general data perturbation schemes, and it has been extended to various research areas, such as location privacy and privacy-preserving data mining. With Randomized Response, each original value is replaced with another value with a fixed probability. The perturbed value is the same as the original value with a probability $p$, which satisfies $0 < p < 1$. For example, if a participant belongs to group C in a survey with 5 categories, groups A to E, the participant's data can be processed to group A, B, D, or E, each with probability $\frac{1-p}{4}$, and the perturbed data can remain C with probability $p$.

In addition, F. Esponda et al. [14] presented Negative Surveys, where the value of the original data is always different from that of the perturbed data. In the case of a participant belonging to group C, when there are 5 categories from A to E, the participant's data is perturbed to group A, B, D, or E, each with probability $\frac{1}{4}$, and it is never reported as C. As shown in Equation (1), this technique reconstructs the statistical data without revealing each answer:

$$\forall i \mid A_i = N - (\alpha - 1) \cdot Y_i \tag{1}$$

where $A_i$ and $Y_i$ denote the reconstructed count and the reported perturbed count of values in category $i$, respectively, $N$ is the total number of data, and $\alpha$ denotes the number of categories.

Both techniques, Randomized Response and Negative Surveys, have been extended to multidimensional data [6], [7], which contain multiple fields, so as to preserve the data correlation between the dimensions in a dataset.
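To make these primitives concrete, the following is a minimal Python sketch of categorical Randomized Response and Negative Surveys, together with the Equation (1) reconstruction. It assumes categories are encoded as integers $0, \ldots, \alpha - 1$; the function names are ours, chosen for illustration, not taken from the cited works.

```python
import random

def randomized_response(value: int, alpha: int, p: float) -> int:
    """Keep the true category with probability p; otherwise report one of
    the remaining (alpha - 1) categories uniformly at random."""
    if random.random() < p:
        return value
    return random.choice([c for c in range(alpha) if c != value])

def negative_survey(value: int, alpha: int) -> int:
    """Always report a category different from the true one, chosen
    uniformly from the remaining (alpha - 1) categories."""
    return random.choice([c for c in range(alpha) if c != value])

def reconstruct_negative_survey(reported: list) -> list:
    """Equation (1): A_i = N - (alpha - 1) * Y_i, where Y_i are the
    reported per-category counts."""
    n, alpha = sum(reported), len(reported)
    return [n - (alpha - 1) * y for y in reported]
```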

According to existing work [6], Randomized Response is relatively more powerful than Negative Surveys in terms of protecting privacy, because malicious attackers cannot directly infer the original data from the perturbed data. With Negative Surveys, on the other hand, users constantly provide a clue to attackers, owing to the mutually complementary relationship between the original value and the perturbed value.

V. PROTOCOL: THREE-LEVEL PERTURBATION

This section presents a specific protocol, namely Three-level Perturbation, as one method for realizing Democratic Privacy. With this protocol, all sensor data are processed on the user side, in accordance with each user's settings. The privacy settings are reported to the PLS server, whereas the processed data are transmitted to the AP server. After the transmission, the server side reconstructs the statistics of the original data without knowing individual sensor values. The protocol is based on two data perturbation schemes, Randomized Response [6], [9] and Negative Surveys [7], [14]. Users also have the option of reporting raw sensor data to the AP server, in cases where they prefer to contribute actively to applications. Indeed, according to J. Lindqvist et al. [15], general users frequently change their privacy settings depending on their location and situation. For example, users might decide to make a significant contribution to sensing applications while traveling.

In this section, we first present the protocol for single-dimensional data, based on Randomized Response and Negative Surveys. Second, we extend the protocol to multidimensional data, in order to preserve the data correlation of multiple sensor readings.

A. For Single-dimensional Data

First, we present the Three-level Perturbation protocol for single-dimensional data.

1) User-side Protocol: In the design of Democratic Privacy, service providers are unable to know the relationship between an individual's perturbed data and privacy setting. They only need to acquire the statistics of the perturbed data and of the privacy settings. Therefore, all sensor data are processed before transmission, and the data and the settings are reported separately.

Users can select one of three levels: Raw data (Level 1), Negated data (Level 2), and Randomized data (Level 3). When a user sets the privacy level to Level 2, the sensor data are processed with Negative Surveys; at Level 3, the sensor data are processed with Randomized Response; and at Level 1, the data are reported as raw data. The higher the privacy level, the lower the shared information entropy [6]. All processed data are reported to the AP server, and the privacy level set by each user is separately reported to the PLS server. Therefore, no one other than the reporter knows the relationship between the data and the privacy level.

The statistical transition probability is given by Equation (2), which relates the original data value to the processed data value. The original and processed values are denoted $a$ and $b$, respectively. Here, $x$, $y$, and $z$ denote the statistical ratios of users at Level 1, Level 2, and Level 3, respectively. Moreover, $\alpha$ is the number of categories, and $p$ is the probability of retaining the same categorical value in Randomized Response; that is, data processed at Level 3 keep their original value with probability $p$.

$$P(a = b) = x + p \cdot z, \qquad P(a \neq b) = \frac{y + (1 - p) \cdot z}{\alpha - 1} \tag{2}$$

We provide an example of the statistical relationship between the original data and the processed data in Fig. 2. In this example, the number of categories $\alpha$ is 5. Statistically, when the original value is 1, the processed value is 1 with probability $x + p \cdot z$, and is 2, 3, 4, or 5, each with probability $\frac{y + (1-p) \cdot z}{4}$; the transition ratio is assumed to be uniform over the remaining categories. Moreover, the ratios always satisfy Equation (3):

$$x + y + z = 1 \tag{3}$$

Fig. 2. Example of single-dimensional datasets with 5 categories. The original data of A1 are processed to B1 with a probability of $x + p \cdot z$, statistically. The transition probabilities to the other categories are $\frac{y + (1-p) \cdot z}{4}$.
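The user-side step can be sketched as a thin dispatcher over the two primitives from Section IV. The sketch below is ours; the comment reflects the report-splitting described above, not any specific transport.

```python
def perturb(value: int, alpha: int, level: int, p: float) -> int:
    """User-side Three-level Perturbation for one categorical value.
    Level 1: raw data; Level 2: Negative Survey; Level 3: Randomized Response."""
    if level == 1:
        return value
    if level == 2:
        return negative_survey(value, alpha)
    if level == 3:
        return randomized_response(value, alpha, p)
    raise ValueError("privacy level must be 1, 2, or 3")

# The perturbed value is reported to the AP server, and the chosen level is
# reported separately to the PLS server, so the pairing between the two
# never appears in any single message.
```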

2) Server-side Protocol: The PLS server gathers the privacy level set by each user, and forwards the statistics of these levels to the AP server. The AP server, in turn, gathers the processed data from each user, and reconstructs the original data distributions using the statistics of the privacy levels. This reconstruction is given by Equation (4), which is derived from the statistical relationship between the original data and the processed data:

$$\forall i \mid A_i = \frac{N \cdot \left(1 - (x + zp)\right) - (\alpha - 1) \cdot B_i}{1 - \alpha(x + zp)} \tag{4}$$

where $N$ is the total number of sensed values, and $A_i$ and $B_i$ denote the original count and the reported count of values in category $i$, respectively, with $1 \leq i \leq \alpha$. Here, $A = (A_1, \cdots, A_\alpha)$ and $B = (B_1, \cdots, B_\alpha)$ denote the numbers of original and reported values, respectively. The AP server is thus able to reveal the original data distributions, while the relationship between the perturbed data and the privacy level remains secret.
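A direct NumPy transcription of Equation (4) might look as follows; `ppv` is the aggregate $(x, y, z)$ forwarded by the PLS server, and the helper name is ours. Note the denominator vanishes when $\alpha(x + zp) = 1$, so a deployment would need enough reports, and settings away from that point, for stable statistics.

```python
import numpy as np

def reconstruct(B: np.ndarray, ppv: tuple, p: float) -> np.ndarray:
    """Equation (4): recover the original per-category counts A_i from the
    reported counts B_i, using only the aggregate PPV (x, y, z) and p."""
    x, y, z = ppv                # ratios of Levels 1, 2, and 3
    alpha = len(B)               # number of categories
    n = B.sum()                  # total number of reports
    keep = x + z * p             # P(a = b) from Equation (2)
    return (n * (1 - keep) - (alpha - 1) * B) / (1 - alpha * keep)
```

As a sanity check, setting `ppv = (0, 1, 0)` reduces this to Equation (1), and `ppv = (1, 0, 0)` returns `B` unchanged.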

B. For Multidimensional Data

We can also apply the protocol to multidimensional data. Indeed, the data correlation between multiple sensor readings is frequently significant in participatory sensing and mobile crowd-sensing. For instance, participatory sensing applications for urban environments collect the correlation between location information and environmental sensor data. Here, $\alpha_1$ and $\alpha_2$ denote the numbers of categories in each dimension, and $A = (A_{1,1}, \cdots, A_{\alpha_1,\alpha_2})$ and $B = (B_{1,1}, \cdots, B_{\alpha_1,\alpha_2})$ denote the numbers of original and reported values as statistics, respectively.

1) User-side Protocol: With the proposed protocol for multidimensional data, users again select one of three levels: Raw data (Level 1), Negated data (Level 2), and Randomized data (Level 3). We use two existing schemes, Multidimensional Negative Surveys [7] and Multidimensional Randomized Response [6], to deal with multidimensional data, because the data correlation cannot be preserved when each dimension is processed individually.

First, we describe the statistical relationship between the original data and the processed data for multidimensional data. Again, $x$, $y$, and $z$ must satisfy Equation (3). The statistical transition probability to the same value is $x + \frac{z}{(\alpha_1 - 1)(\alpha_2 - 1)}$; the transition probability to a category that has the same value only in the dimension of $\alpha_1$ is $\frac{z(\alpha_1 - 1)(\alpha_2 - 2)}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}$; the transition probability to a category that has the same value only in the dimension of $\alpha_2$ is $\frac{z(\alpha_2 - 1)(\alpha_1 - 2)}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}$; and the transition probability to a category with different values in both dimensions is $\frac{y}{(\alpha_1 - 1)(\alpha_2 - 1)} + \frac{z(\alpha_1 - 2)(\alpha_2 - 2)}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}$.

Fig. 3 illustrates an example of the statistical relationship between the original data and the processed data for multidimensional data, with datasets comprising 4 categories and 6 categories. The transition probabilities are separated into four values: $x + \frac{1}{15}z$, $\frac{4}{75}z$, $\frac{2}{45}z$, and $\frac{1}{15}y + \frac{8}{225}z$. The probabilities are determined by the number of categories in each dimension.

2) Server-side Protocol: As with the protocol for single-dimensional data, all processed multidimensional data are reported to the AP server, and the privacy settings are transmitted to the PLS server. The reconstruction process in the AP server is shown in Equation (5).

Fig. 3. Example of multidimensional datasets with 4 categories and 6 categories. With 2-dimensional data, the transition probabilities contain four values.

$$\forall (i, j) \mid A_{i,j} = \frac{1}{P} \cdot \left( B_{i,j} - \frac{R}{W} \sum_{i'=1}^{\alpha_1} B_{i',j} - \frac{S}{U} \sum_{j'=1}^{\alpha_2} B_{i,j'} + \left( \frac{RV}{W} + \frac{ST}{U} - Q \right) N \right) \tag{5}$$

Here, $P$, $Q$, $R$, $S$, $T$, $U$, $V$, and $W$ are given in Equation (6):

$$\begin{aligned}
P &= x + \frac{1}{(\alpha_1 - 1)(\alpha_2 - 1)}\, y + \frac{1}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}\, z \\
Q &= \frac{1}{(\alpha_1 - 1)(\alpha_2 - 1)}\, y + \frac{(\alpha_1 - 2)(\alpha_2 - 2)}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}\, z \\
R &= -\frac{1}{(\alpha_1 - 1)(\alpha_2 - 1)}\, y + \frac{\alpha_1 - 2}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}\, z \\
S &= -\frac{1}{(\alpha_1 - 1)(\alpha_2 - 1)}\, y + \frac{\alpha_2 - 2}{(\alpha_1 - 1)^2 (\alpha_2 - 1)^2}\, z \\
T &= \frac{1}{\alpha_1 - 1}\, y + \frac{\alpha_1 - 2}{(\alpha_1 - 1)^2}\, z \\
U &= x - \frac{1}{\alpha_1 - 1}\, y + \frac{1}{(\alpha_1 - 1)^2}\, z \\
V &= \frac{1}{\alpha_2 - 1}\, y + \frac{\alpha_2 - 2}{(\alpha_2 - 1)^2}\, z \\
W &= x - \frac{1}{\alpha_2 - 1}\, y + \frac{1}{(\alpha_2 - 1)^2}\, z
\end{aligned} \tag{6}$$

The AP server reconstructs the original data distributions with low computational complexity, because only $B = (B_{1,1}, \cdots, B_{\alpha_1,\alpha_2})$ varies in Equation (5). Through these processes, users are not required to reveal personal data, and our protocols do not rely on a trusted third party, which is an unrealistic assumption for real-world applications.
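The following is a sketch of the multidimensional reconstruction in NumPy, transcribing Equations (5) and (6). We read the sign of the final $Q$ term in Equation (5) as negative, since that is the reading under which the all-Negative-Survey special case ($y = 1$) reduces exactly to the multidimensional analogue of Equation (1); the function name is ours.

```python
import numpy as np

def reconstruct_2d(B: np.ndarray, ppv: tuple) -> np.ndarray:
    """Equations (5)-(6): recover the 2-D original counts A from the
    reported counts B, given only the aggregate PPV (x, y, z)."""
    x, y, z = ppv
    a1, a2 = B.shape                       # alpha_1, alpha_2 categories
    d1, d2 = a1 - 1, a2 - 1
    # Equation (6)
    P = x + y / (d1 * d2) + z / (d1 ** 2 * d2 ** 2)
    Q = y / (d1 * d2) + (a1 - 2) * (a2 - 2) * z / (d1 ** 2 * d2 ** 2)
    R = -y / (d1 * d2) + (a1 - 2) * z / (d1 ** 2 * d2 ** 2)
    S = -y / (d1 * d2) + (a2 - 2) * z / (d1 ** 2 * d2 ** 2)
    T = y / d1 + (a1 - 2) * z / d1 ** 2
    U = x - y / d1 + z / d1 ** 2
    V = y / d2 + (a2 - 2) * z / d2 ** 2
    W = x - y / d2 + z / d2 ** 2
    n = B.sum()
    col = B.sum(axis=0, keepdims=True)     # sums over i, one per column j
    row = B.sum(axis=1, keepdims=True)     # sums over j, one per row i
    # Equation (5), broadcast over all (i, j) cells at once
    return (B - (R / W) * col - (S / U) * row
            + (R * V / W + S * T / U - Q) * n) / P
```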

VI. EVALUATION

This section presents the evaluation of our protocol, Three-level Perturbation, for both single-dimensional and multidimensional data. We evaluate the proposed protocol from the perspective of data integrity.

Our simulation evaluates the data integrity of the reconstructed data distributions using the Jensen-Shannon Divergence (JSD). The JSD measures the similarity between two probability distributions, $X(i)$ and $A(i)$, as follows:

$$JSD(X \| A) = \frac{1}{2} D_{KL}(X \| M) + \frac{1}{2} D_{KL}(A \| M) \tag{7}$$

where $M(i)$ and $D_{KL}$ are derived with the following equations, respectively:

$$M = \frac{1}{2} (X + A) \tag{8}$$

$$D_{KL}(X \| A) = \sum_i X(i) \log \frac{X(i)}{A(i)} \tag{9}$$

Fig. 5. Case study for single-dimensional data with two non-uniform data distributions. The size of each dataset is 30,000. In both cases, the PPV is {0.4, 0.3, 0.3}, and p = 0.2. The JSD in Figure (a) and Figure (b) is 0.000036 and 0.000077, respectively.
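A minimal implementation of Equations (7)-(9), assuming the inputs are normalized probability distributions over the same categories:

```python
import numpy as np

def kl_divergence(x: np.ndarray, a: np.ndarray) -> float:
    """Equation (9); cells with X(i) = 0 contribute nothing, by convention."""
    mask = x > 0
    return float(np.sum(x[mask] * np.log(x[mask] / a[mask])))

def jsd(x: np.ndarray, a: np.ndarray) -> float:
    """Equations (7) and (8): Jensen-Shannon Divergence between X and A."""
    m = 0.5 * (x + a)                      # Equation (8)
    return 0.5 * kl_divergence(x, m) + 0.5 * kl_divergence(a, m)
```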

In our simulations, the Public Privacy Value, PPV, is written as $\{x_{PPV}, y_{PPV}, z_{PPV}\}$, where $x_{PPV}$, $y_{PPV}$, and $z_{PPV}$ are the ratios of participants with privacy settings at Level 1, Level 2, and Level 3, respectively.

A. For Single-dimensional Data

Pervasive computing systems frequently deal with single-dimensional data. For example, data mining is used to monitor traffic in urban areas using location information. Likewise, smart grids collect energy consumption data. To demonstrate the feasibility of our protocol, we first present the simulation results in Fig. 4 and Fig. 5. Fig. 4 shows the results for data with a uniform distribution. Here, the value of $p$ is 0.2 and 0.25, and the PPV is {0.1, 0.2, 0.7}, {0.2, 0.7, 0.1}, and {0.4, 0.3, 0.3}. We evaluate the protocol using datasets with 4 categories and with 7 categories. In addition, Fig. 5 shows two case studies with datasets of 7 categories, where the data distributions are not uniform.

From the results in Fig. 4 and Fig. 5, our protocol maintains data integrity regardless of the original data distribution and of the ratio of user privacy levels; the results are evaluated by calculating the JSD.
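Putting the pieces together, a simulation in the style of this section could be sketched as follows, reusing the `perturb`, `reconstruct`, and `jsd` helpers defined above. The driver itself, including its parameter defaults and the example counts, is our illustration rather than the paper's exact experimental code.

```python
import random
import numpy as np

def simulate(counts, ppv=(0.4, 0.3, 0.3), p=0.2, seed=0):
    """Perturb each value at a level sampled from the PPV, reconstruct the
    counts with Equation (4), and score data integrity with the JSD."""
    random.seed(seed)
    alpha = len(counts)
    reported = np.zeros(alpha)
    for value, count in enumerate(counts):
        for _ in range(count):
            level = random.choices([1, 2, 3], weights=ppv)[0]
            reported[perturb(value, alpha, level, p)] += 1
    est = np.clip(reconstruct(reported, ppv, p), 0, None)
    original = np.asarray(counts) / sum(counts)
    return jsd(original, est / est.sum())

# e.g. simulate([3000, 6000, 9000, 7000, 5000]) returns a small JSD,
# on the order of the values reported in Fig. 4 and Fig. 5.
```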

Fig. 6. Evaluation of the data integrity of multidimensional data with a uniform data distribution. Figure (a) shows the two-dimensional data distributions with 3 and 7 categories, where the size of the dataset is 12,000. The PPV is {0.4, 0.3, 0.3}, and p = 0.2. The JSD is 0.00056. Figure (b) shows the results evaluated according to the JSD.

It is clear from these results that our protocol can accurately reconstruct the original data distributions. In these simulations, we assumed that there was no correlation between the original data and the privacy level set by each user.

B. For Multidimensional Data

Fig. 4. Evaluation of the data integrity of single-dimensional data with a uniform data distribution. Figure (a) shows the data distributions with 7 categories, where the size of the datasets is 30,000. The PPV is {0.4, 0.3, 0.3}, and p = 0.2. The JSD is 0.00072. Figure (b) shows the results evaluated according to the Jensen-Shannon Divergence (JSD).

Crowd-sensing applications must collect multidimensional data and preserve the data correlation between multiple sensor readings. For instance, crowd-sensing to reveal noise pollution utilizes the correlation between location information and the noise level. The results of the simulation are presented in Fig. 6 and Fig. 7. Fig. 6 shows the results for 2-dimensional datasets with 4 and 5 categories, and with 3 and 4 categories, under a uniform data distribution. The value of $p$ is 0.2 and 0.25, and the PPV is {0.4, 0.3, 0.3}. Moreover, Fig. 7 demonstrates the feasibility of our protocol with results from a non-uniform data distribution; this simulation used 2-dimensional datasets with 4 and 5 categories.

From these results, it is clear that our protocol for multidimensional data can successfully reconstruct the statistics while maintaining data integrity. In addition, the server side

acquires the original data distributions without knowing any individual user's privacy protocol.

Fig. 7. Case study for multidimensional data with two-dimensional datasets of 4 and 5 categories. The size of the dataset is 66,000. The PPV is {0.4, 0.3, 0.3}, and the value of p is 0.2. The JSD is 0.002116.

VII. DISCUSSION

The objective of this research is to design a novel data perturbation scheme with which services can learn the data distribution without sharing the protocol with each user. With the proposed scheme, malicious attackers cannot determine the original values by eavesdropping, because it is impossible to reconstruct the relationship between the processed data and the perturbation protocol.

Our Democratic Privacy is designed on the assumption that the original data are not correlated with the participants' privacy preferences. Therefore, it will be valuable to survey the tendencies of general users before implementing Democratic Privacy in practical applications.

With our system, the PLS server collects the users' privacy settings, and these settings can be open to the public as the PPV. The PPV will encourage application administrators to design reliable and secure systems, and to build trust with general users. For general users, on the other hand, the PPV is an interface for expressing their perceptions and responses. Users with low computer literacy can decide on their own privacy settings by checking the PPV. In addition, monitoring the PPV over a long period of time can confer significant benefits on multiple academic fields, including cognitive science and social science. Existing lab-based research has been unable to investigate users' perceptions of privacy at this scale, owing to limitations in time and cost.

VIII. CONCLUSION

This paper presented the novel concept of Democratic Privacy and the Three-level Perturbation scheme for preserving privacy without sharing data perturbation protocols. The proposed scheme does not entail sharing the perturbation protocol between the user side and the server side, and will enable us to preserve privacy in smart grids, crowd-sensing, location-based services, and intelligent transportation systems. With the proposed scheme, malicious attackers are unable to estimate personal data with eavesdropping attacks, since the data and the processing protocol are reported separately to the server administrator.

Our scheme is useful for preserving privacy in applications where statistics are collected. However, individual users cannot retrieve their own personal data, since all of the data are gathered as statistics. Therefore, in future work, we will extend Democratic Privacy to personalized applications such as personal healthcare systems and multimedia targeted advertising. Democratic Privacy is a prospective scheme that will create an environment enabling us to utilize personal data while preserving privacy.

REFERENCES

[1] R. Lu, X. Lin, X. Liang, and X. Shen, "FLIP: An efficient privacy-preserving protocol for finding like-minded vehicles on the road," in Proc. IEEE Global Telecommunications Conference (GLOBECOM 2010), 2010, pp. 1–5.
[2] E. De Cristofaro and C. Soriente, "Extended capabilities for a privacy-enhanced participatory sensing infrastructure (PEPSI)," IEEE Transactions on Information Forensics and Security, vol. 8, no. 12, pp. 2021–2033, Dec. 2013.
[3] H. Choi, S. Chakraborty, Z. M. Charbiwala, and M. B. Srivastava, "SensorSafe: A framework for privacy-preserving management of personal sensory information," in Secure Data Management. Springer, 2011, pp. 85–100.
[4] M. Y. Mun, D. H. Kim, K. Shilton, D. Estrin, M. Hansen, and R. Govindan, "PDVLoc: A personal data vault for controlled location data sharing," ACM Transactions on Sensor Networks, vol. 10, no. 4, pp. 58:1–58:29, Jun. 2014.
[5] J. C. L. Cheung, T. W. Chim, S.-M. Yiu, V. O. Li, and L. C. K. Hui, "Credential-based privacy-preserving power request scheme for smart grid network," in Proc. IEEE Global Telecommunications Conference (GLOBECOM 2011), 2011, pp. 1–5.
[6] S. Aoki and K. Sezaki, "Privacy-preserving community sensing for medical research with duplicated perturbation," in Proc. IEEE International Conference on Communications (ICC), Jun. 2014, pp. 4252–4257.
[7] M. M. Groat, B. Edwards, J. Horey, W. He, and S. Forrest, "Enhancing privacy in participatory sensing applications with multidimensional data," in Proc. IEEE International Conference on Pervasive Computing and Communications (PerCom), 2012, pp. 144–152.
[8] R. K. Ganti, N. Pham, Y.-E. Tsai, and T. F. Abdelzaher, "PoolView: Stream privacy for grassroots participatory sensing," in Proc. 6th ACM Conference on Embedded Network Sensor Systems, 2008, pp. 281–294.
[9] S. L. Warner, "Randomized response: A survey technique for eliminating evasive answer bias," Journal of the American Statistical Association, vol. 60, no. 309, pp. 63–69, 1965.
[10] L. Sweeney, "k-anonymity: A model for protecting privacy," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557–570, 2002.
[11] C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," in Theory of Cryptography. Springer, 2006, pp. 265–284.
[12] S. Aoki, M. Iwai, and K. Sezaki, "Limited negative surveys: Privacy-preserving participatory sensing," in Proc. IEEE 1st International Conference on Cloud Networking (CLOUDNET), Nov. 2012, pp. 158–160.
[13] S. Aoki and K. Sezaki, "Negative surveys with randomized response techniques for privacy-aware participatory sensing," IEICE Transactions on Communications, vol. 97, no. 4, pp. 721–729, 2014.
[14] F. Esponda, "Negative surveys," arXiv preprint math/0608176, 2006.
[15] J. Lindqvist, J. Cranshaw, J. Wiese, J. Hong, and J. Zimmerman, "I'm the mayor of my house: Examining why people use Foursquare - a social-driven location sharing application," in Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI '11). New York, NY, USA: ACM, 2011, pp. 2409–2418.