
Part 1: What should be considered under data privacy law with voice assistants?

Adrian Bieri and Nina Habicht • Nov 29, 2020

Introduction

Voice assistants such as Google Assistant, Alexa or Siri - also called "virtual assistants" - have been criticised in the media for their capacity to "listen" to what users say. While tech providers like Google, Apple and Amazon publish vast amounts of information in privacy policies that are difficult to read and understand, companies still lack clarity on how to proceed with a chatbot or voice assistant project.

In this first blog article, Voicetechhub and Dr. Adrian Bieri, Partner at the Swiss law firm Bratschi AG, give concrete answers to the question of what should be considered from a technical point of view and under data privacy law when implementing digital assistants. The article also offers a deep dive into data protection law (in particular the GDPR) and its impact on digital assistants based on Natural Language Processing.

When to involve legal in voice assistant projects?

Before discussing with your legal team, you must have a clear idea of what your digital assistant should solve. In our experience, companies often do not know what they want to achieve with a chatbot or voice assistant.

A short checklist for voice product managers

During the early stage of your project, you should answer some basic questions before starting discussions with the managers responsible for legal and data protection.


  • What is the goal of the chatbot or voice assistant?
      • There are various methods for answering this, such as mapping the customer journey, developing scenarios and thinking about the pains and needs of your customers.
      • How does the current process for your specific use case look? Think about this process and where a bot best addresses the problems you currently encounter. After that, you can go into a deep dive with conversational design and user story definitions.
      • Finally, you should have a list of goals you want to achieve with the bot.
  • Which systems are needed?
      • During the requirements phase, clarify which systems are needed. Note down any knowledge base, database or system that might serve as an interface to your chatbot, e.g. a CRM system.
  • What kind of data is saved in which system?
      • After sketching the system architecture, write down where you gather which kind of information about your users.
      • Also note whether any personal data is stored somewhere. And: do you authenticate your users (i.e. ask for login data)?
      • Verify with your project team how long the data needs to be stored (see the data-inventory sketch after this list).
  • On which platforms do you deploy the chatbot? Is it a Facebook, Slack or MS Teams bot, or does the assistant run on smart speakers or your website?
      • Depending on the platform, check the platform-specific guidelines with your developers.
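To keep these answers actionable, it can help to maintain a simple, machine-readable data inventory alongside the project documentation. The following is a minimal Python sketch of that idea; the system names, data categories and retention periods are purely illustrative assumptions, not recommendations for any specific setup.

```python
# A minimal, hypothetical data inventory for a chatbot project.
# All entries below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DataRecord:
    system: str           # where the data lives (e.g. CRM, chat logs)
    data_category: str    # what kind of data (personal, sensitive, ...)
    purpose: str          # why it is processed
    retention_days: int   # how long it is kept before deletion

inventory = [
    DataRecord("CRM", "name, e-mail (personal data)", "customer support", 365),
    DataRecord("chat transcript store", "free-text user input", "bot training", 90),
    DataRecord("analytics", "anonymised usage statistics", "product improvement", 730),
]

# Flag entries that hold personal data, so legal can review them first.
for record in inventory:
    if "personal" in record.data_category:
        print(f"Review with legal: {record.system} -> {record.data_category}")
```

Even a small table like this gives your legal and data protection contacts a concrete basis for the discussions described above.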

What has to be considered in terms of data privacy law (GDPR) in chatbot and voice assistant projects?

General data protection law principles 

Data protection laws such as the European Union's General Data Protection Regulation ("GDPR") and the Swiss Federal Act on Data Protection ("FADP") aim to protect personal data. Personal data means any information relating to an identified or identifiable natural person. An identifiable natural person in the sense of the GDPR and the FADP is one who can be identified either directly, by reference to an identifier such as a name, an identification number, location data or an online identifier (such as an IP address or an e-mail address), or indirectly, by combining various pieces of information. Data that does not relate to a specific individual, on the other hand, usually falls outside the scope of data protection legislation.

Chatbots and other applications based on artificial intelligence ("AI") process large amounts of personal data, in particular for training purposes. Therefore, chatbots and similar AI applications (e.g. digital assistants) regularly fall within the scope of data protection law. And if chatbots and digital assistants process biometric data such as facial images or a person's voice (e.g. for voice and face recognition purposes), the stricter data protection rules for processing sensitive personal data (i.e. the processing of special categories of personal data) apply. It should be noted that once personal data is anonymised – which requires irreversibly removing the possibility of identifying an individual through the data in question – it no longer constitutes personal data and is thus exempt from data protection law obligations. This option should be considered where the purpose of the processing does not require a link to an identifiable individual. Besides anonymisation, pseudonymisation can serve as a further means of restricting the applicability of data protection law.
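To make the difference concrete, the sketch below shows one common pseudonymisation technique: replacing a direct identifier with a keyed hash. This is a minimal illustration only – the key handling and field values are assumptions – and, importantly, it is pseudonymisation rather than anonymisation, because whoever holds the key can still re-identify the user.

```python
# Minimal pseudonymisation sketch: replace a direct identifier with a
# keyed hash. This is pseudonymisation, NOT anonymisation -- whoever
# holds the secret key can re-identify the user.
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-secrets-manager"  # illustrative placeholder

def pseudonymise_user_id(user_id: str) -> str:
    """Derive a stable pseudonym from a user identifier."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# The chat log then keeps a stable pseudonym instead of the e-mail
# address, so analytics and bot training never touch the direct identifier.
print(pseudonymise_user_id("alice@example.com"))
```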

Once it is clear that data protection law applies, the question arises as to which data protection obligations must be observed when using chatbots and other AI applications. Data protection law imposes various obligations with respect to the processing of personal data. The central data protection obligations in connection with chatbots certainly include compliance with the information obligations under Art. 13 GDPR and ensuring that there is a legal basis for the processing of personal data according to Art. 6 GDPR.

Other relevant obligations in connection with the processing of personal data by chatbots and similar AI applications include:
  • (i) the implementation of appropriate technical and organisational security measures (Art. 32 GDPR) to protect personal data against accidental or unlawful destruction or loss, alteration, and unauthorised disclosure or access (see the sketch below), as well as
  • (ii) conducting a data protection impact assessment according to Art. 35 GDPR to evaluate and, where necessary, mitigate the risks associated with the application in question. 
Further specific obligations may apply if chatbots and other AI applications (in particular in combination with digital tracking tools) are used to create extensive user profiles, in which case such processing activities may be considered profiling under Art. 22 GDPR. Finally, the GDPR and the FADP impose various documentation obligations and other duties, which must be respected when processing personal data via a chatbot or similar AI applications.
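Returning to the technical and organisational measures under Art. 32 GDPR mentioned above, the following minimal sketch illustrates one such measure – encrypting chat transcripts at rest – using the third-party cryptography package. The key handling is an illustrative assumption (in production the key would come from a secrets manager), and encryption is of course only one measure among many.

```python
# One example of a technical measure under Art. 32 GDPR: encrypting chat
# transcripts at rest. Requires the "cryptography" package
# (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustrative; load from a secrets manager in practice
fernet = Fernet(key)

transcript = "User: I want to change the address on my insurance policy."
ciphertext = fernet.encrypt(transcript.encode("utf-8"))

# Only services holding the key can read the transcript back.
assert fernet.decrypt(ciphertext).decode("utf-8") == transcript
```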

Information obligations and the legal basis for processing in particular

As previously mentioned, providing transparent information about the data processing that takes place via the chatbot and ensuring that there is a legal basis for the processing are paramount considerations.

According to Art. 13 GDPR, the data controller (e.g. the company that wants to provide a chatbot or a voice assistant to its customers; see also the section on controller and processor roles below) shall, at the time when personal data are obtained, provide the chatbot user with various information about the relevant data processing activity. This information includes, among others, the purposes of the processing for which the personal data are intended, the legal basis for the processing and the recipients or categories of recipients of the personal data, if any.

In order to fulfil the information obligations according to Art. 13 GDPR, it is necessary to inform the user at the beginning of the chat, or before they use the digital assistant, about the purpose and scope of the processing. In accordance with Art. 12 GDPR, this must be done in a concise, transparent, intelligible and easily accessible form, using clear and plain language.

In the case of a chatbot, it is advisable to indicate within the chat itself, in a basic form, how personal data are processed and to provide a link to the privacy policy containing further information. This information must also be provided before personal data is processed, ideally before the digital assistant is installed, and must be kept available via the app or website. Informing the user about the purposes of processing is feasible as long as the chatbot's functionalities are limited, thus making it possible to predetermine which topics the chatbot will cover. The situation is, however, different and more complex with regard to the variety of possible voice commands in the field of digital assistants.
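As a rough illustration of the in-chat notice described above, the following sketch shows a bot that displays a short privacy message with a policy link before handling any user input. The function name, session handling and policy URL are hypothetical placeholders rather than a specific framework's API.

```python
# Sketch of a privacy notice shown at the start of a chat, assuming a
# generic webhook-style bot. All names and the URL are placeholders.
PRIVACY_NOTICE = (
    "Hi! I am a support bot. Your messages are processed to answer your "
    "request and may be stored for quality purposes. Details: "
    "https://example.com/privacy"
)

def on_conversation_start(session: dict) -> str:
    """Send the privacy notice once, before any personal data is processed."""
    if not session.get("notice_shown"):
        session["notice_shown"] = True
        return PRIVACY_NOTICE
    return "How can I help you today?"

session: dict = {}
print(on_conversation_start(session))  # first turn: show the notice
print(on_conversation_start(session))  # later turns: normal greeting
```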

Art. 6 GDPR provides that the processing of personal data is lawful only if, and to the extent that, one of the legal bases under Art. 6 GDPR applies to the data processing activity in question. These legal bases include, among others, the user's consent to the processing (Art. 6(1)(a) GDPR) or, as an alternative, the necessity of processing the data to enter into or perform a contract or contractual negotiations (Art. 6(1)(b) GDPR; contractual necessity). Moreover, Art. 6(1)(f) GDPR establishes that a processing activity necessary for the legitimate interests pursued by a data controller (or by a third party) forms a legal basis for processing, insofar as the controller's interests are not overridden by the interests, fundamental rights or freedoms of the affected data subjects. We recommend carrying out as many data processing operations as possible based on Art. 6(1)(b) GDPR, because this legal basis provides neither a possibility of withdrawal – as in the case of consent by the user – nor a possibility of objection.

Since many chatbots are currently still designed for a specific function (e.g. customer support) and the corresponding user input usually contains product- or service-related questions or orders, it is often possible to base the data processing operation on contractual necessity according to Art. 6(1)(b) GDPR. It also seems feasible to argue that the purpose of the data processing is aligned with the contractually agreed provision of a functional digital assistant. However, should case law or regulators reject this approach, contractual necessity (Art. 6(1)(b) GDPR) could in many cases fail to serve as a legal basis for data processing by digital assistants. Given that the legal basis of legitimate interest under Art. 6(1)(f) GDPR involves, depending on the specific individual case, significant legal uncertainties, providers of digital assistants would in many instances be well advised to obtain consent from the user that conforms with the requirements under Art. 6(1)(a) GDPR for legally valid consent.

Moreover, if a user discloses sensitive data to a chatbot or voice assistant and the company offering it relies on the user's consent, Art. 9 GDPR requires the explicit, informed consent of the user. This requires, among other things, that the controller informs the user, before consent is given, which specific data processing activities the consent will cover. A mere voice command to activate a digital assistant, for example, is most likely not sufficient to constitute legally valid consent according to the requirements of Art. 6(1)(a) and Art. 9 GDPR, particularly because the user has not received sufficient prior information about the relevant data processing activities from the data controller. In practice, however, activation often works exactly this way, and chatbot providers frequently address the issue by anonymising the data.
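A minimal sketch of what such a consent gate could look like in code is shown below: no voice data is processed until explicit consent has been recorded. The storage backend, prompt wording and function names are illustrative assumptions, not a blueprint for compliant consent management.

```python
# Hedged sketch of a consent gate: voice input is only processed after
# the user has given explicit consent. All details are illustrative.
consents: dict[str, bool] = {}  # user_id -> explicit consent recorded?

CONSENT_PROMPT = (
    "To answer by voice, we need to process a recording of your voice "
    "(biometric data). Do you agree? Reply 'yes' to consent."
)

def handle_voice_input(user_id: str, audio: bytes) -> str:
    if not consents.get(user_id):
        return CONSENT_PROMPT          # ask first, process nothing yet
    return transcribe_and_answer(audio)

def record_consent(user_id: str, reply: str) -> None:
    if reply.strip().lower() == "yes":
        consents[user_id] = True       # in production: log time and scope

def transcribe_and_answer(audio: bytes) -> str:
    return "…"  # placeholder for the actual speech pipeline

print(handle_voice_input("user-1", b""))   # first contact: consent prompt
record_consent("user-1", "yes")
print(handle_voice_input("user-1", b""))   # now the pipeline may run
```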

Which role do you have: Controller or Processor?

When it comes to the question of who is responsible for fulfilling the various data protection obligations for chatbots or voice assistants, one first has to clarify which of the involved parties (i.e. the company that offers the chatbot or voice assistant to its customers, and the various service providers the company engages to provide it) plays which role from a data protection law perspective.

The primary distinction is between the controller and the processor:
  • The controller is the party that determines the purposes and means of a data processing activity (Art. 5(j) FADP). 
  • The processor processes personal data on behalf of the controller and not for its own purposes. In principle, the controller is responsible for compliance with most of the obligations that the GDPR or the FADP set forth.
  • In the particular scenario where two or more controllers jointly determine the purposes and means of the processing of personal data, they are joint controllers. Joint controllers must, by means of an "arrangement" between them, apportion data protection compliance responsibilities (e.g., it should be agreed which controller shall be responsible for providing clear information to data subjects).  
With respect to digital assistants, the company that offers them to the users is considered a controller and is therefore responsible for compliance with data protection law. However, it cannot be excluded that the company which uses the digital assistant and the company which develops and operates it are to be considered joint controllers.

Legal checklist for voice assistants and chatbots

To summarise, we have developed the following checklist, which helps companies from a data protection law point of view when implementing voice bots or chatbots:

  1. Clarify (i) which kind of data (e.g. non-personal data, personal data, sensitive personal data) is processed by chatbots and digital assistants and (ii) how the flow of such data is structured (e.g. by whom and where is such data processed).
  2. Make sure you have a proper legal basis for the processing, or consider implementing anonymisation or pseudonymisation procedures in order to exclude (at least in part) the applicability of data protection law.
  3. Provide the users in advance with clear information about the associated processing of personal data (e.g. privacy policy). 
  4. Before using chatbots and digital assistants, perform a data protection impact assessment ("DPIA") and consider the additional legal requirements that apply in the event of profiling. 
  5. Verify which role you have, from a data protection law point of view, in a specific digital assistant project, and make sure to conclude the appropriate agreements (e.g. data processing agreement, joint-controller agreement, controller-controller agreement) with all parties that process personal data in the course of a chatbot or voice assistant project.
  6. Comply with the additional legal requirements if personal data processed by digital assistants is transferred to a destination country which does not provide an adequate level of data protection.


Dr. Adrian Bieri

Partner at Bratschi Ltd., Head of PG Intellectual Property, Technology and Privacy

Adrian has in-depth experience in the fields of legal technology, intellectual property and privacy. He works as a lawyer and partner at Bratschi Ltd., a full-service law firm with six offices throughout Switzerland, and is highly engaged in solving new legal issues related to emerging technologies. Adrian holds a degree in law from the University of Zurich.
