
Why companies using AI must place privacy at the core

by uma

 

By Matt Burt, Director, Business Operations and Business Affairs, EMEA, at R/GA London

By definition, Artificial Intelligence (AI) relies on a multitude of data inputs to deliver desired results through techniques such as machine learning — whether that's generative adversarial networks producing bespoke content, or facial recognition using facial mapping for security and surveillance. Data is at the heart of AI, which has cemented itself as 'the future' in business strategy and already plays an active role in our daily lives.

Given this, privacy must be at the core of future AI development. Personal data such as your name, mobile number, and home address, as well as sensitive information such as your health data, are categories universally recognised through legislation such as the General Data Protection Regulation (GDPR) in the EU. Companies face serious fines, up to 4% of global turnover, for breaching it, and AI now poses a genuine challenge to how this data is processed and how increasingly multi-jurisdictional regulation is complied with. For example, a recent European Consumer Organisation survey found that 45-60% of EU citizens agreed that AI will lead to more abuse online.

Companies using algorithms that rely on large data sets need to consider the type of data being processed — for example, whether it is (directly or indirectly) identifiable data that falls under the scrutiny of privacy regulation. One issue we've seen with using identifiable data is a bad actor reverse engineering an AI model and exposing the data used to build it, meaning identifiable data can end up in the wrong hands and constitute a data breach. That never ends well.
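One common way to reduce that exposure is to pseudonymise direct identifiers before records ever reach a training pipeline, so a recovered training set does not map back to individuals on its own. The sketch below is illustrative only — the field names and the key-handling shown are assumptions, not any specific company's practice:

```python
import hashlib
import hmac

# Hypothetical secret key: in practice this would live in a secrets manager,
# stored separately from the training data, so the mapping cannot be
# reversed from the dataset alone.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier (name, email, phone) with a keyed hash.

    Using HMAC-SHA256 with a secret key (rather than a plain hash) means
    someone who obtains the training records cannot simply brute-force
    common names or emails to recover identities without the key.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative record -- these field names are assumptions for the example.
record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 42.50}

safe_record = {
    # Stable pseudonymous key: lets the pipeline join records, not identify people.
    "user_id": pseudonymise(record["email"]),
    # Non-identifying features can be kept as-is.
    "purchase_total": record["purchase_total"],
}
```

Pseudonymisation is not full anonymisation — under GDPR such data can still be personal data — but it is one concrete expression of privacy by design at the data-preparation stage.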

Privacy by design needs to be at the forefront of AI development, and one of the key concepts is the mechanism of consent. Do companies have the individual's consent to process their data for the purpose for which it is being processed? Transparency is a fundamental pillar of privacy, and companies that are not honest about how they use your data — for example, in machine learning — risk breaching not only privacy regulation but also consumer confidence and brand reputation. People are becoming increasingly aware of their privacy rights, and as a result companies' privacy policies, which should tell you how your data is being used, are under more scrutiny.

Lensa AI, the magic avatar generator, is an example of an AI generator currently having its moment on social media. You grant the app access to your camera or camera roll, select a photo, and the AI-powered tool works its magic to create a bespoke avatar. How the app uses your personal data, such as the photos you grant it access to, is solely in its control. We place our trust in apps such as Lensa AI to protect the data we willingly hand over in exchange for the outputs of AI generators that have the scope to be brilliant.

With AI shaping the future, privacy by design needs to be front of mind for developers when designing AI models and algorithms. This will contribute to a sustainable AI-powered future in which, over time, we will (hopefully) build more trust in how companies use our data.

 
