AI systems have driven both large-scale industry transformations and the smaller day-to-day changes that shape our lives. The benefits are undeniable, but there are also risks associated with their use: before your organisation jumps on the AI bandwagon, it should conduct a thorough risk assessment covering data privacy and other fundamental considerations ✋.
Governance and Accountability
Your organisation must ensure that sufficient data security protocols are in place when using machine learning and other AI systems. This is because large quantities of data are continuously transferred, stored and shared with third parties in order to train machine learning models. AI systems can also be built by third-party providers, which requires inter-organisational transfers of data. Good data governance therefore means documenting the flow of data between the different organisations involved.
Your team must ensure that data movements are documented and that your records are kept up to date. This not only lets you see where you hold data so that you can apply the correct level of safeguarding; it also feeds into the accountability principle and satisfies reporting requirements.
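As a rough sketch of what such documentation might look like in practice, the snippet below models a simple record-of-processing register in Python. The field names are purely illustrative assumptions, not an official ICO schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record-of-processing entry; the fields below are illustrative
# assumptions, not an official ICO or UK GDPR schema.
@dataclass
class DataFlowRecord:
    dataset: str
    source: str            # where the data originates
    recipient: str         # organisation or system receiving it
    purpose: str           # why the transfer happens
    lawful_basis: str
    last_reviewed: date

register = [
    DataFlowRecord(
        dataset="customer_profiles",
        source="CRM",
        recipient="ml-training-pipeline (third party)",
        purpose="model training",
        lawful_basis="legitimate interests",
        last_reviewed=date(2024, 1, 15),
    ),
]

# A stale record is a signal that the register needs reviewing.
def stale(record: DataFlowRecord, today: date, max_age_days: int = 365) -> bool:
    return (today - record.last_reviewed).days > max_age_days

print(stale(register[0], date(2025, 6, 1)))  # True: reviewed over a year ago
```

Even a lightweight register like this makes it far easier to answer "where is this data, and why do we hold it?" when an audit or access request arrives.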
The ICO also advises that any intermediate files used to transfer data, such as temporary zip files, should be deleted as soon as the data has been transferred.
It is also a requirement under the UK GDPR (Article 5(1)(c)) that you store and use only the minimum amount of data needed to fulfil your processing purposes. At first glance this appears to contradict the use of AI systems, which usually require vast amounts of training data to improve their accuracy. However, the ICO recommends various techniques you can deploy to help you stick to the data minimisation principle.
When you are using large data sets to train your machine learning systems, some features of the personal data may not be necessary. For example, training a system to recognise faces does not require data about someone's credit score. If you segment out only the relevant data and process nothing more, you are abiding by this principle.
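The idea can be sketched in a few lines of Python. The records and field names below are invented for illustration; the point is simply that irrelevant features are dropped before the data reaches the training pipeline:

```python
# Data minimisation at training time: keep only the features the task needs.
# Field names and values are made up for illustration.
records = [
    {"face_embedding": [0.1, 0.2], "name": "A", "credit_score": 710},
    {"face_embedding": [0.3, 0.4], "name": "B", "credit_score": 640},
]

# A face-recognition model needs only the face data, not names or finances.
RELEVANT = {"face_embedding"}

minimised = [{k: v for k, v in r.items() if k in RELEVANT} for r in records]
print(minimised[0])  # {'face_embedding': [0.1, 0.2]}
```

Stripping unnecessary fields this early also reduces what you must secure, retain and eventually delete.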
At the application stage of a machine learning system, such as when you are using the system to make a judgement or prediction, there are also techniques to help you minimise personal data. For example, changing the format of the data so it is less readable by humans can help. In facial recognition algorithms, a face is sometimes converted into a 'faceprint' of geometric measurements that capture certain parts of the face, such as the distance between the eyes and the mouth. Only machines can readily interpret this format, so the data is less recognisable to humans and more secure in the process.
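A toy version of that faceprint idea, assuming hypothetical landmark coordinates (real systems typically use learned embeddings, not hand-picked distances):

```python
import math

# Toy 'faceprint': reduce landmark coordinates to a few geometric distances.
# The landmark positions are invented numbers for illustration only.
landmarks = {"left_eye": (30, 40), "right_eye": (70, 40), "mouth": (50, 90)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

faceprint = (
    dist(landmarks["left_eye"], landmarks["right_eye"]),  # eye span
    dist(landmarks["left_eye"], landmarks["mouth"]),      # eye to mouth
    dist(landmarks["right_eye"], landmarks["mouth"]),
)
print(faceprint)  # a tuple of distances, meaningless to a human reader
```

The original image is discarded; what remains is a numeric signature that a matching algorithm can compare but a person cannot casually read.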
It is also worth noting that if a data set is fully anonymised, the UK GDPR no longer applies to it. If, on the other hand, you can only make your data sets pseudonymous, meaning the data can still be attributed to data subjects via re-identification, it still constitutes personal data and remains within the scope of the UK GDPR.
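One common pseudonymisation technique is replacing direct identifiers with a keyed hash, sketched below in Python. Because anyone holding the key can link records back to individuals, the output is still personal data under the UK GDPR; only full anonymisation takes data out of scope:

```python
import hashlib
import hmac

# Pseudonymisation sketch: replace a direct identifier with a keyed hash.
# The key is a placeholder; in practice it must be stored separately and
# securely, because it is what makes re-identification possible.
SECRET_KEY = b"store-separately-and-securely"

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("jane.doe@example.com")
# The same input and key always yield the same token, so records remain
# linkable across data sets. That linkability is exactly why this data is
# pseudonymous rather than anonymous.
assert token == pseudonymise("jane.doe@example.com")
```

This reduces exposure if a data set leaks on its own, but your compliance obligations continue to apply to the pseudonymised data.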
Lack of transparency
Data processing by AI systems still requires a lawful basis and a purpose for processing personal data. If the algorithm processes data for various applications, your team must ensure that each reason is recorded separately as an individual use case. It is your responsibility to identify the lawful basis that best fits your use of personal data and to document it; you cannot simply switch to a different legal basis later, so settle this before going live.
To rely on consent as a basis for processing, you must have a direct relationship with the data subject, and the consent must be freely given, specific, informed and unambiguous, given by a clear affirmative act. To rely on consent properly, you must inform individuals of the specific ways their data will be used and each individual application of it. They must also be able to withdraw their consent whenever they want to.
To rely on performance of a contract, the processing by the AI system must be necessary to fulfil a contract with the individual whose data is in question. This does not mean the individual's data can then be used to train the system, or be stored because it is 'good to have'. If the purpose is not fulfilling the contract or terms of service at hand, this legal basis cannot be used and the data must be deleted.
To rely on a legal obligation, public task or vital interests, there must be a valid legal provision you can point to. As with performance of a contract, this cannot be used as a basis for training AI systems or for continuous development.
To rely on legitimate interests, you must undertake a necessity and proportionality assessment, as this legal basis has the broadest scope. This means conducting a balancing exercise to check that your legitimate interest in the processing is not outweighed by the individual's rights, interests and freedoms, and that processing their data is genuinely necessary to achieve that interest.
Whatever legal basis you choose, it is important to document it so you can show your justification for processing under each. This is where Privasee can help. Our platform documents and tracks the data within your organisation for you by mapping it in a visual, easily understandable format. It helps you identify the relevant legal basis for each data set, keep track of how long each data set has been stored, and manage individuals' subject access requests. By helping you coordinate where consent is given and withdrawn by individuals, it can also help your organisation prevent data breaches and costly fines in the future.
This article does not constitute legal advice in any form and only seeks to break down some of the main points set out by the ICO.
Sources and further resources
For more information, please visit the ICO website for their guidance on the implementation of AI systems.