By Matt Burt, Director, Business Operations and Business Affairs, EMEA, at R/GA London
By definition, Artificial Intelligence (AI) relies on a multitude of data inputs to deliver results through techniques such as machine learning, whether that's generative adversarial networks creating bespoke content or facial recognition systems mapping faces for security and surveillance. Data is at the heart of AI, which has cemented itself as 'the future' of business strategy and already plays an active role in our daily lives.
Given this, privacy must be at the core of future AI development. Personal data such as your name, mobile number, and home address, along with sensitive information such as health data, are protected under legislation such as the General Data Protection Regulation (GDPR) in the EU. Companies face serious fines for breaches, up to 4% of global turnover, and AI now poses a genuine challenge to how this data is processed and how increasingly multi-jurisdictional regulation is complied with. Public sentiment reflects the stakes: a recent survey by the European Consumer Organisation found that 45-60% of EU citizens agreed that AI will lead to more abuse online.
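To put the scale of those fines in context, here is a back-of-the-envelope sketch. GDPR's higher tier of fines is capped at €20 million or 4% of worldwide annual turnover, whichever is greater; the turnover figure below is purely illustrative.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a higher-tier GDPR fine:
    EUR 20 million or 4% of worldwide annual turnover, whichever is greater."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Illustrative only: a company turning over EUR 2 billion a year
print(f"Maximum fine: EUR {max_gdpr_fine(2_000_000_000):,.0f}")  # EUR 80,000,000
```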
Companies using algorithms that rely on large data sets need to consider the type of data being processed, for example whether it is (directly or indirectly) identifiable data that falls under the scrutiny of privacy regulation. One issue we've seen with identifiable data is a bad actor reverse engineering an AI model to expose the data it was trained on; that identifiable data can then end up in the wrong hands, constituting a data breach. That never ends well.
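To make that leakage risk concrete, the sketch below shows one well-known form of this attack, membership inference: an attacker queries a model and uses its prediction confidence to guess whether a given record was in the training set. The model, data, and threshold here are all hypothetical, and real attacks (and defences such as differential privacy) are considerably more sophisticated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical identifiable records: few rows, many features, so the
# model overfits and effectively memorises its training set
X_train = rng.normal(size=(50, 100))
y_train = rng.integers(0, 2, size=50)
X_unseen = rng.normal(size=(50, 100))

# A deliberately overfit model (weak regularisation via a large C)
model = LogisticRegression(C=1e6, max_iter=5000).fit(X_train, y_train)

def confidence(m, X):
    """The model's confidence in its own prediction for each row."""
    return m.predict_proba(X).max(axis=1)

# Membership inference: guess "this record was in the training data"
# whenever the model is unusually confident. The 0.9 cutoff is illustrative.
print("mean confidence, training rows:", confidence(model, X_train).mean())
print("mean confidence, unseen rows:  ", confidence(model, X_unseen).mean())
is_member_guess = confidence(model, X_unseen) > 0.9
```

The gap between the two confidence scores is exactly what an attacker exploits, which is why overfit models trained on identifiable data are a privacy liability, not just a quality problem.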
Privacy by design needs to be at the forefront of AI development, and one of its key concepts is the mechanism of consent. Do companies have the individual's consent to process their data for the purpose for which it is being processed? Transparency is a fundamental pillar of privacy, and companies not being honest about how they use your data, for example in machine learning, risk not only breaching privacy regulation but also damaging consumer confidence and brand reputation. People are becoming increasingly aware of their privacy rights, and as a result companies' privacy policies, where they should tell you how your data is being used, are under more scrutiny.
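One way to make purpose-specific consent concrete in code is to gate every processing operation on a recorded consent check. The sketch below is a minimal illustration of that idea only; the data model, purpose names, and `process` helper are hypothetical, not a description of any particular company's system.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Purposes an individual has explicitly agreed to (hypothetical model)."""
    user_id: str
    purposes: set[str] = field(default_factory=set)

def process(record: ConsentRecord, purpose: str, data: dict) -> dict:
    """Refuse to process data for any purpose the user has not consented to."""
    if purpose not in record.purposes:
        raise PermissionError(f"No consent from {record.user_id} for '{purpose}'")
    return {"user": record.user_id, "purpose": purpose, **data}

consent = ConsentRecord("user-42", purposes={"avatar_generation"})
process(consent, "avatar_generation", {"photo": "selfie.jpg"})  # permitted

try:
    # Training a model is a different purpose and needs its own consent
    process(consent, "model_training", {"photo": "selfie.jpg"})
except PermissionError as err:
    print(err)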
Lensa AI, the 'magic avatar' generator, is an example of an AI tool currently having its moment on social media. You grant the app access to your camera or camera roll, select a photo, and the AI-powered tool works its magic to create a bespoke avatar. How the app uses the personal data you hand over, such as those photos, is solely in its control; we place our trust in apps like Lensa AI to protect the data we willingly give up to see the outputs of AI generators that have the scope to be brilliant.
With AI shaping the future, privacy by design needs to be front of mind for developers designing AI models and algorithms. That is what will deliver a sustainable AI-powered future, one in which we (hopefully) build more trust, over time, in how companies use our data.