If you had serious concerns a major corporation were harvesting your personal information for its commercial gain but not your benefit, would you:
a. Take steps to prevent access to your personal information?
b. Close down your links to this corporation?
c. Install a device in your home from the corporation that listens to your conversations?
Obviously, you might think, you would choose option a or b; either would frustrate the corporation's data harvesting. In reality, many people choose option c.
This is not an imaginary question: more than half a billion AI-enabled voice assistants, such as Amazon's Alexa, have been sold internationally by major corporations, despite persistent concerns about their handling of users' personal data.
We conducted research among a small group of young, generally tech-savvy people who have faced this dilemma and, despite voicing serious concerns, chose option c. Why would they do this, when the device is a tech luxury, not a necessity, and the choice seems so counter-intuitive?
A few had integrated voice assistants into their homes, so they could switch on the lights and turn on the oven. But most just used the devices independently, to answer questions and provide voice-activated digital access. Most of those we spoke to did not even purchase the device: they received an Alexa as a gift, or it was provided free, for example through a company give-away or as an Amazon Prime membership perk.
Despite using voice assistants, most in the research group expressed genuine concern about, and negative attitudes towards, Amazon. As our paper puts it: 'Our findings highlight a fundamental lack of trust in Amazon among the participants.'
Yet they all found a way to trust the device, using distinct strategies to overcome their worries about data security and privacy.
A minority had taken active steps to overcome their concerns about Alexa – unplugging or deactivating the assistant when not in use. One said the device was like having a salesman in the room and disposed of it.
But most took a more passive approach. This involved finding a way to rationalise to themselves the use of these smart devices in their own homes. Our research interviews show this mainly takes two forms:
Digital resignation. Many of those interviewed expressed a sense of resignation. They felt there was nothing they could do in the face of data harvesting: if they were not going to give up their phones or laptops, it made no sense to give up the voice assistant.
Anthropomorphism. Even though our respondents rationally knew 'Alexa' is a machine, and referred to it as an 'it', especially when discussing it as one of Amazon's devices, they were not consistent in this framing. A majority of those we interviewed switched between referring to Alexa as an 'it' and a 'she'. When referred to as a 'she', Alexa was more likely to be described as a friend or helper than as an emblem of a tech firm.
People found the voice assistants useful enough to keep using them – although they were not as essential to everyday life as, say, smartphones have become in recent years.
Our research suggests that, despite public distrust of big tech corporations, AI-enabled voice assistants will continue their huge growth in popularity, and users will continue to mitigate their fears and allow the devices to become integrated into their lives.