GDPR Compliance

GDPR Principles

Before delving into the GDPR-related issues of chat-style LLMs, let's review the fundamental principles of the EU-GDPR. Here is a brief overview:

• The EU-GDPR sets regulations for the handling of personal data by controllers and processors in the European Union (EU).

• The GDPR applies to any business, entity, or person that processes the personal data of people in the EU, regardless of where the organisation itself is located.

• Individuals are granted the right to access and manage their own personal data.

• Organisations must have a lawful basis, such as explicit consent, before collecting or using personal data.

• Organisations must protect personal data with appropriate technical and organisational measures.

• In case of a data breach, organisations are required to notify regulatory authorities and those affected.

Public LLMs — what are they, and how do they work?

In November 2022, OpenAI unveiled ChatGPT, a chatbot built on a deep learning method known as the “transformer architecture.” This technique sorts through terabytes of data containing billions of words to produce answers to questions or prompts. ChatGPT is part of the GPT-3 family of large language models and has been fine-tuned with supervised and reinforcement learning techniques. Trained on data from the web, books, and Wikipedia, it can be used free of charge to generate conversational replies.
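For readers who have not used it programmatically, here is a minimal sketch of querying a chat-style model through the OpenAI Python SDK. The model name and prompt are illustrative, and an API key is assumed to be available in the environment:

```python
# Minimal sketch of querying a chat-style LLM via the OpenAI Python SDK.
# The model name and prompt are illustrative; an API key must be set in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarise the GDPR's data minimisation principle."}
    ],
)
print(response.choices[0].message.content)
```

Every prompt sent this way leaves the user's environment, which is precisely why the GDPR questions below matter.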

The development of AI such as GPT-3 can be revolutionary, but we must also consider the risks and drawbacks. A key issue is privacy, as it can be difficult to tell whether one's data has been used in training. Moreover, the legality of using personal data to train machine learning models like GPT-3 varies with the laws and regulations of a given country or region.

ChatGPT can generate text that resembles natural writing and can be used for a variety of tasks, including language translation, text generation for chatbots, and language modeling. With 175 billion parameters, it is one of the largest and most capable AI language models available.

Was ChatGPT trained on personal data?

ChatGPT is a massive language model trained on an extensive amount of web-based information, including individual websites and social media posts.

Just because information is available on the web does not mean that it is not personal data. If a foundational LLM was trained on this type of data, then the system has effectively been trained on personal information. This could have serious repercussions for individuals' privacy, as well as potential legal ramifications.

GDPR Issue 1: Data Collection

There is a strong case to be made that ChatGPT does not comply with GDPR standards on data collection, particularly the principle of data minimisation. Under the GDPR, personal data must be gathered and used lawfully, fairly, and transparently in relation to the data subject. Additionally, the data minimisation rule states that only the minimum of data necessary for the stated purpose should be collected and processed.

OpenAI's privacy guidelines state that all data will remain confidential and be used only for the purposes specified in the contract. Nonetheless, it is unclear whether this extends to data stored in AI models like ChatGPT.
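One practical way to honour data minimisation when user text must be sent to a third-party LLM is to strip obvious identifiers first. Below is a minimal sketch using ad-hoc regular expressions; a real deployment would rely on a dedicated PII-detection tool, and the patterns shown are illustrative only:

```python
import re

# Illustrative regex patterns for common identifiers; a production system
# would use a dedicated PII-detection library rather than ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimise(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before the
    text is sent to an external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(minimise("Contact Jane at jane.doe@example.com or +44 1624 555 0199."))
```

Note that this sketch misses the name "Jane" entirely, which illustrates why regex-only redaction is not sufficient on its own.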

GDPR Issue 2: Data Security

In particular, ChatGPT and other large AI models must be safeguarded from attacks and data breaches. The GDPR mandates that adequate steps be taken to guarantee the safety and confidentiality of personal data.

ChatGPT presents potential security dangers, such as data theft, spam and phishing emails, and malicious software. Additionally, those with bad intentions can alter the code and use it to execute cyberattacks.
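Article 32 of the GDPR explicitly names encryption and pseudonymisation among the appropriate technical measures. A minimal sketch of encrypting a personal-data record at rest with the cryptography library follows; key handling is deliberately simplified, and in practice the key would live in a key-management service:

```python
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, never
# alongside the data; generating it inline is for illustration only.
key = Fernet.generate_key()
fernet = Fernet(key)

record = "user_id=1234; email=jane.doe@example.com"
ciphertext = fernet.encrypt(record.encode())   # store this, not the plaintext
plaintext = fernet.decrypt(ciphertext).decode()

assert plaintext == record
```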

ChatGPT's Privacy Policy raises several doubts about its compliance with the GDPR. Article 3 of the Privacy Policy states that OpenAI may share a user's personal information with third parties “in certain circumstances without further notice” unless the law requires otherwise. This may prove an obstacle to ChatGPT's compliance with GDPR requirements for data security and privacy.

GDPR Issue 3: Fairness & Transparency

The EU-GDPR requires that any decision made by an AI system be both explainable and justifiable: the system's decisions must be fair and its reasoning clear. Nevertheless, it is uncertain whether ChatGPT meets this criterion.

When ChatGPT first made its debut, it gave wrong answers of the kind researchers call “hallucinations”. Fake medical advice was especially concerning. Bogus social media accounts are already a problem, and bots like ChatGPT can make them even harder to spot. Furthermore, misinformation can spread more easily when ChatGPT makes even inaccurate answers sound convincing.
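There is no accepted way to make a generative model fully explain itself, but one mitigation sometimes used against hallucinations is self-consistency sampling: ask the same question several times and flag divergent answers as unverified. A minimal sketch, assuming the OpenAI SDK and an illustrative model name:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_answers(question: str, n: int = 3) -> list[str]:
    """Ask the same question n times at a non-zero temperature."""
    answers = []
    for _ in range(n):
        r = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[{"role": "user", "content": question}],
            temperature=0.8,
        )
        answers.append(r.choices[0].message.content.strip())
    return answers

answers = sample_answers("In which year did the GDPR come into force?")
most_common, count = Counter(answers).most_common(1)[0]
if count < len(answers):
    print("Answers diverge; treat the output as unverified:", answers)
else:
    print("Consistent answer:", most_common)
```

Consistency is only a heuristic: a model can be consistently wrong, so this reduces rather than removes the transparency problem.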

GDPR Issue 4: Accuracy & Reliability

The GDPR stipulates that companies must be open in their handling of personal data and make sure it is accurate and trustworthy. This includes AI programs, which need to be screened, tested and monitored to guarantee accuracy and dependability.

It is not certain whether ChatGPT adheres to Article 17 of the GDPR, which grants individuals the right to be forgotten: on request, their personal data must be removed from the model. Unfortunately, because ChatGPT can reproduce, and even invent, details about individuals unpredictably, it is hard to guarantee that every trace of a person's data has been erased.
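Because data absorbed into a trained model cannot be deleted row by row, organisations sometimes fall back on output-side filtering as a stopgap for erasure requests. A minimal sketch follows; the names are invented, and, crucially, this filters only the model's output, leaving the training data and weights untouched, which is exactly the Article 17 difficulty:

```python
# Names on the erasure list are invented for illustration. Note that this
# screens the model's *output* only; the underlying training data and
# model weights are untouched.
ERASURE_LIST = {"jane doe", "john smith"}

def screen_response(response: str) -> str:
    lowered = response.lower()
    for subject in ERASURE_LIST:
        if subject in lowered:
            return "[Response withheld: it references a data subject who has requested erasure.]"
    return response

print(screen_response("Jane Doe lives in Douglas."))
```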

GDPR Issue 5: Accountability

Under the GDPR, organisations must be able to prove that they have taken appropriate measures to protect people's data and to hold AI systems responsible for their outcomes. They must be able to demonstrate, on request, that these measures have been effective.

The use of LLMs has raised a number of questions about accountability when they give wrong answers concerning individuals' personal data.

The burden of data privacy rests with the users, not the provider. This is concerning, as LLMs can generate wrong responses, some wildly so, leading to the spread of false information and online abuse. Additionally, OpenAI's researchers and developers select the data used to train LLMs, and bias in that data can negatively affect the model's output.
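Demonstrating accountability usually starts with an audit trail. A minimal sketch of a wrapper that records every prompt, response, model identifier, and timestamp is shown below; the log path and field names are illustrative:

```python
import json
import time
from openai import OpenAI

AUDIT_LOG = "llm_audit.jsonl"  # illustrative path
client = OpenAI()

def audited_completion(model: str, prompt: str) -> str:
    """Call the model and append a record of the exchange to an audit log."""
    r = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = r.choices[0].message.content
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({
            "timestamp": time.time(),
            "model": model,
            "prompt": prompt,
            "response": answer,
        }) + "\n")
    return answer
```

An append-only log like this is the bare minimum; a real deployment would also record the lawful basis for each processing operation and protect the log itself.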

FSA Compliance

The financial world has never been static. As with all industries, it evolves with the march of technology. One of the most significant technological advancements impacting the financial sector today is Artificial Intelligence (AI). The Isle of Man Financial Services Authority (IOMFSA), the primary regulatory body of the Isle of Man, is no stranger to this evolution.

Opportunities:

Enhanced Monitoring and Detection: AI can process vast amounts of data at unparalleled speeds. For the IOMFSA, this means a more efficient way of monitoring transactions, identifying unusual patterns, and flagging potential regulatory breaches. Machine learning algorithms, a subset of AI, can be trained to recognise signs of fraudulent activity, money laundering, or other irregularities that might otherwise go unnoticed in manual audits.
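As an illustration of this kind of unsupervised flagging, here is a minimal sketch using scikit-learn's IsolationForest on synthetic transaction amounts; the data and contamination rate are invented for demonstration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: mostly routine amounts plus a few outliers.
amounts = np.concatenate([rng.normal(100, 20, 500), [5_000, 12_000, 9_500]])
X = amounts.reshape(-1, 1)

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks suspected anomalies

print("Flagged amounts:", amounts[flags == -1])
```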

Predictive Analysis: With AI’s ability to analyze trends and make predictions based on historical data, IOMFSA can anticipate potential areas of concern before they become significant issues. This proactive approach can save time, resources, and potentially prevent financial crises.
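As a deliberately simple stand-in for such predictive models, the sketch below extrapolates a linear trend from invented quarterly breach counts:

```python
import numpy as np

# Invented quarterly counts of reporting breaches across supervised firms.
quarters = np.arange(8)
breaches = np.array([4, 5, 5, 7, 8, 9, 11, 12])

# Fit a straight line and project one quarter ahead.
slope, intercept = np.polyfit(quarters, breaches, deg=1)
next_quarter = slope * 8 + intercept
print(f"Projected breaches next quarter: {next_quarter:.1f}")
```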

Automated Reporting: AI can automate the process of generating reports, ensuring they are consistent, accurate, and timely. This reduces the administrative burden on regulated entities and allows the IOMFSA to focus on more strategic tasks.
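A minimal sketch of that kind of automated aggregation with pandas is shown below; the columns and output path are invented:

```python
import pandas as pd

# Invented columns; a real feed would come from the monitoring pipeline.
df = pd.DataFrame({
    "entity": ["BankA", "BankA", "BankB", "BankC"],
    "flagged": [True, False, True, True],
    "amount": [12_000, 150, 9_500, 5_000],
})

# Summarise flagged transactions per entity into a consistent report.
report = (df[df["flagged"]]
          .groupby("entity")["amount"]
          .agg(["count", "sum"])
          .rename(columns={"count": "flagged_txns", "sum": "flagged_value"}))
report.to_csv("quarterly_flags.csv")  # illustrative output path
print(report)
```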

Dangers:

Over-reliance on Technology: While AI can be a potent tool, it shouldn’t replace human judgment entirely. There’s always the danger of becoming too dependent on technology, leading to potential oversights if the AI system misses a crucial detail or misinterprets data.

Data Privacy Concerns: AI systems require substantial amounts of data to function effectively. This accumulation of data can raise concerns about data privacy and protection. If not handled correctly, there’s a risk of data breaches, which can undermine public trust in the IOMFSA.

Lack of Transparency: AI algorithms, especially deep learning models, are often seen as “black boxes.” This means that even experts might struggle to understand how specific decisions or predictions are made. In a regulatory context, this lack of transparency can be problematic when trying to determine the rationale behind certain decisions.

Bias and Fairness: AI models are only as good as the data they are trained on. If the training data contains biases, the AI system might make biased decisions, potentially leading to unfair treatment or discrimination.

Directors' Responsibilities and Risks:

Directors of regulated entities bear significant responsibilities in ensuring compliance with regulatory standards. Under the IOMFSA’s purview, directors have a duty to ensure accurate and timely reporting. Failure to do so can result in substantial fines or even legal consequences.

Additionally, with the integration of AI in regulatory and reporting processes, directors must ensure that AI tools are used responsibly and ethically. Any misuse or over-reliance on AI can be seen as negligence, further exposing directors to individual risks.

It’s worth noting that in many jurisdictions, including the Isle of Man, directors can be held personally liable for breaches of regulatory compliance, making the stakes even higher.

Conclusion:

The integration of AI within the regulatory framework of the IOMFSA presents a tantalizing mix of opportunities and challenges. While AI can undoubtedly enhance the efficiency and effectiveness of regulatory oversight, it’s crucial to approach its implementation with caution. Balancing the power of AI with the nuances of human judgment, ensuring data protection, and maintaining transparency and fairness will be vital for the IOMFSA to harness AI’s potential without compromising its core values and objectives.

ISO Compliance

The International Organization for Standardization (ISO) is known for establishing standards that address various aspects of technology and business processes. ISO/IEC 20000 and ISO/IEC 27001 are two such standards, covering IT service management and information security management respectively. As Artificial Intelligence (AI) becomes more integrated into businesses and technology infrastructures, understanding its implications for these standards is essential.

Benefits of AI in the context of ISO/IEC 20000 (IT Service Management):

1. Automated Incident Management: AI can quickly detect and respond to IT incidents, ensuring faster resolution times. It can also predict potential incidents based on historical data, allowing for proactive measures (a minimal sketch follows this list).

2. Enhanced Customer Experience: AI-driven chatbots and virtual assistants can provide 24/7 support to users, streamlining the service desk function and enhancing user satisfaction.

3. Optimised Resource Allocation: AI can analyse resource usage patterns and optimise allocation, ensuring that IT services are delivered efficiently without wasting resources.

4. Predictive Maintenance: AI can predict when certain components or systems might fail, allowing for timely maintenance and ensuring continuous service delivery.

5. Data-Driven Decision Making: AI-driven analytics can provide insights from large datasets, helping IT managers make informed decisions about service improvements, investments, and more.
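As promised in point 1, here is a minimal sketch of automated incident triage: a toy text classifier built with scikit-learn. The tickets, categories, and the tiny training set are invented, and a real ITSM deployment would train on historical ticket data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training tickets; a real system would train on historical data.
tickets = [
    "cannot log in to email", "password reset not working",
    "server room temperature alarm", "database server unresponsive",
    "printer out of toner", "new monitor request",
]
labels = ["access", "access", "infrastructure", "infrastructure",
          "hardware", "hardware"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tickets, labels)

print(clf.predict(["cannot reset my password"]))  # expected: 'access'
```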

Dangers of AI in the context of ISO/IEC 20000:

1. Over-reliance on Automation: While AI can automate many ITSM processes, over-reliance can lead to reduced human oversight and potential errors.

2. Data Privacy Concerns: AI systems require vast amounts of data for training. This data, if mishandled, can lead to breaches and privacy concerns.

3. Integration Challenges: Integrating AI into existing ITSM processes can be complex and might lead to compatibility issues or disruptions.

4. Bias in Decision Making: AI models can inherit biases from their training data. This can lead to skewed analytics or incorrect incident resolutions.

5. Job Displacements: As AI takes over certain ITSM roles, there’s a potential for job displacements, which can lead to morale and social issues.

Benefits of AI in the context of ISO 27001 (Information Security Management):

1. Threat Detection: AI-driven security solutions can quickly detect anomalies or patterns indicative of a cyberattack, allowing for rapid response.

2. Phishing Detection: AI can analyse emails and detect phishing attempts with high accuracy, protecting organisations from such threats (a toy sketch follows this list).

3. Behavioural Analysis: AI can monitor user behaviour and detect any deviations that might indicate a security compromise.

4. Vulnerability Management: AI can scan systems and identify vulnerabilities, prioritising them based on potential impact.

5. Security Automation: AI can automate routine security tasks, freeing up human resources to focus on more complex issues.
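As promised in point 2, here is a toy phishing classifier built with scikit-learn. The example emails are invented and far too few for real use; production systems train on large labelled corpora and combine text features with header and URL analysis:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented examples; far too small a training set for anything real.
emails = [
    "urgent verify your account password now click here",
    "your invoice is attached please confirm payment details",
    "team meeting moved to thursday afternoon",
    "quarterly report draft attached for review",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["please verify your password by clicking this link"]))
```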

Dangers of AI in the context of ISO 27001:

1. Manipulation by Adversaries: Advanced threat actors can manipulate AI systems to bypass security measures or to create false alarms.

2. Data Privacy: AI systems that analyse security data might inadvertently access or leak sensitive information.

3. Complexity: AI-driven security solutions add another layer of complexity to an organisation’s security posture. If not properly managed, they could introduce new vulnerabilities.

4. Over-reliance: Just as with ISO/IEC 20000, over-reliance on AI for security can lead to reduced human oversight and potential security lapses.

5. Ethical Concerns: Using AI for surveillance or employee monitoring, even for security reasons, can raise ethical and privacy concerns.