AI Governance Policy

1. Purpose and Scope 

1.1 Purpose 

This AI Governance Policy ("Policy") establishes the compliance framework of EZclass OÜ ("EZclass," "we," or "the Company") with Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the "EU AI Act" or "AI Act"). It documents EZclass's risk classification analysis, governance structures, deployer obligations, human oversight mechanisms, and ongoing monitoring commitments in relation to AI systems operated by the Company. 

This Policy is a standalone regulatory compliance document. It is separate from, and complementary to, EZclass's existing data protection documentation, which addresses obligations arising under Regulation (EU) 2016/679 ("GDPR") and related legislation. Where relevant, this Policy cross-references the following published EZclass documents: 

  • Terms & Conditions ("T&C") — in particular T&C §1.2 (AI providers, accuracy disclaimer, biometric clarification) and T&C §10.3 (limitation of liability)
  • Privacy Policy — in particular §7 (retention), §8 (data subject rights), §10 (Automated Processing, Article 22 GDPR position, AI scoring transparency), and §11 (voice data)
  • Cookie Policy
  • Data Processing Agreement ("DPA") — including the sub-processor table
  • Refund Policy
  • Legal Notice — including the Article 22 GDPR disclaimer and AI provider disclosure 

1.2 Scope 

This Policy applies to the following AI system operated by EZclass: 

  • AI English Placement Test hosted at placement.ezclass.io (the "Placement Test" or "System") 

This Policy covers: 

  • The use of third-party AI services (OpenAI Whisper and DeepSeek) as components of the Placement Test processing pipeline
  • EZclass's role as a deployer within the meaning of Article 3(4) of the AI Act
  • Compliance obligations arising from Articles 4, 6, 26, 27, 49, and 50 of the AI Act, as applicable to the Company's risk classification position 

1.3 Out of Scope 

This Policy does not cover: 

  • Internal team AI tools used for productivity (e.g., code assistants, internal chatbots)
  • Marketing tools (e.g., ContentDog) that do not fall within the scope of Annex III of the AI Act
  • Observability and analytics tools (e.g., Datadog, Sentry, BetterStack, Contentsquare, Microsoft Clarity) insofar as they do not constitute AI systems within the meaning of Article 3(1) of the AI Act
  • Google reCAPTCHA v3, which operates as a fraud detection mechanism and is not deployed by EZclass for any purpose listed in Annex III 

1.4 Regulatory Context 

The AI Act entered into force on 1 August 2024. The phased implementation timeline relevant to EZclass is: 

Date Milestone 
2 February 2025 Prohibited practices (Article 5) and AI literacy obligations (Article 4) applicable 
2 August 2025 General-purpose AI model obligations applicable 
2 August 2026 High-risk AI system obligations (Chapter III), deployer obligations (Article 26), registration obligations (Article 49), transparency obligations (Article 50), and FRIA requirements (Article 27) applicable 
2 August 2027 High-risk systems embedded in regulated products — extended transition 

 

2. AI System Description 

2.1 System Overview 

The AI English Placement Test is a web-based assessment tool that evaluates a user's English language proficiency and assigns a level on the Common European Framework of Reference for Languages ("CEFR") scale, ranging from A1 (beginner) to C2 (proficient). The System is accessible at placement.ezclass.io and operates as a component of the broader EZclass platform at ezclass.io

2.2 Processing Pipeline 

The Placement Test follows a sequential processing pipeline: 

Step 1 — User Input Collection 

Users complete a series of written tasks (typed text responses) and spoken tasks (audio recordings captured via the browser). All user interactions occur via the EZclass web interface, hosted on Google Cloud Platform infrastructure located in Frankfurt, Germany and the Netherlands, with edge delivery via Cloudflare. 

Step 2 — Speech-to-Text Transcription 

Audio recordings of spoken responses are transmitted to the OpenAI Whisper API (provided by OpenAI, L.L.C., USA) for automated speech-to-text transcription. The Whisper API returns a text transcription of the user's spoken response. OpenAI processes this data as a data processor under the terms of its API Data Processing Addendum. Audio data is transmitted to the United States; the transfer mechanism is the EU-U.S. Data Privacy Framework ("DPF"). OpenAI's API terms confirm that API inputs are not used for model training. 

Step 3 — AI-Powered Evaluation 

Both the user's written text responses and the transcribed speech are submitted to the DeepSeek API (provided by DeepSeek AI Co., Ltd., Beijing, China; API endpoint: api.deepseek.com; model: deepseek-chat) for evaluation. The AI model assesses the user's responses against CEFR-aligned criteria, including grammar, vocabulary, coherence, fluency indicators, and task completion. DeepSeek processes this data as a data processor. The international transfer mechanism is Standard Contractual Clauses pursuant to EU Commission Implementing Decision (EU) 2021/914, Module 2 (controller to processor). DeepSeek's API terms confirm that API inputs are not used for model training. 

Step 4 — Output Generation 

Based on the AI evaluation, the System produces the following outputs: 

  • CEFR Level: An assigned proficiency level (A1, A2, B1, B2, C1, or C2)
  • Score: A numerical score reflecting the evaluation
  • Diagnostic Feedback: Textual feedback identifying strengths and areas for improvement
  • Certificate: An unofficial EZclass certificate reflecting the placement result 

2.3 Role in Decision-Making 

The Placement Test serves an advisory and indicative function. Specifically: 

  • The output is a suggested starting level for English language learning. It does not constitute a certification, qualification, or binding determination.
  • The result does not determine admission to any educational programme or institution, nor does it grant or deny access to any educational opportunity.
  • Users are free to override the suggested level by selecting a different course level on the EZclass platform.
  • Institutional clients who use EZclass are informed, through contractual terms and documentation, that placement results are advisory only and must not be treated as binding determinations.
  • No legal effects or similarly significant effects within the meaning of Article 22(1) GDPR arise from the Placement Test output. This position is documented in the EZclass Privacy Policy §10 and Legal Notice. 

2.4 Users and Eligibility 

  • Minimum age: 16 years. Users aged 16 to 17 must have parental or institutional supervision.
  • Target users: Individual learners seeking to assess their English proficiency level and/or to be placed in an appropriate English language course on the EZclass platform.
  • Institutional users: Educational institutions or corporate clients that deploy the Placement Test for their students, employees, or members, subject to separate contractual arrangements. 


3. Risk Classification and Legal Basis 

3.1 Annex III Analysis 

The AI English Placement Test falls within the scope of Annex III, Category 3(c) of the AI Act, which covers: 

"AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels." 

The System assesses an individual's English language proficiency and suggests a CEFR level, which may be used to determine the appropriate level of English language instruction the individual receives on the EZclass platform or, in institutional deployments, within an educational institution. 

3.2 Article 6(3) Derogation — Primary Position 

EZclass claims the Article 6(3) derogation from high-risk classification. Under Article 6(3) of the AI Act, an AI system referred to in Annex III shall not be considered high-risk where it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. 

EZclass's position is that the Placement Test satisfies the conditions set out in Article 6(3), specifically Article 6(3)(d): 

"the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III." 

The justification for this position rests on the following cumulative factors: 

(a) Preparatory and advisory nature. The Placement Test performs a preparatory assessment that suggests a starting level for language learning. It does not make a final determination regarding educational access, admission, progression, or certification. The output is one input among several that a user or institution may consider when determining an appropriate course level. 

(b) No binding decision-making authority. The System output does not bind any natural person, educational institution, or other entity to a particular course of action. Users can freely disregard the suggested level and enrol in any course level of their choosing. 

(c) No legal effects or similarly significant effects. The placement result does not produce legal effects within the meaning of Article 22(1) GDPR. It does not determine admission to, or exclusion from, any educational programme. It does not constitute a recognised certification or qualification. This position is documented in T&C §1.2.3 and Privacy Policy §10. 

(d) Human review available. Users may request a human review of their placement result by contacting [email protected]. This provides an effective override mechanism, ensuring that no individual is irremediably bound by the automated assessment. 

(e) User override. Beyond human review, users retain the practical ability to select any course level on the platform regardless of the AI-suggested placement. The System does not enforce the suggested level. 

(f) No profiling. The Placement Test does not perform profiling of natural persons within the meaning of Article 6(3), second subparagraph. The System evaluates language proficiency in a single assessment instance. It does not build a profile of the individual across multiple interactions, behavioural patterns, or personal characteristics for the purpose of predicting future behaviour or making inferences about personal attributes beyond language ability. 

3.3 Article 6(4) Documentation Requirement 

Article 6(4) of the AI Act requires that a provider (or, in EZclass's case as a deployer relying on the derogation, the deploying entity) who considers that an Annex III AI system is not high-risk shall document its assessment before the system is placed on the market or put into service. This Policy, together with the risk classification summary in Appendix A, constitutes that documented assessment. The documentation shall be made available to national competent authorities upon request. 

3.4 Registration Obligation — Article 49(2) 

Notwithstanding the Article 6(3) derogation, Article 49(2) of the AI Act requires that where a provider has concluded that an Annex III system is not high-risk pursuant to Article 6(3), that provider or authorised representative shall register themselves and the system in the EU database referred to in Article 71. EZclass, as the deployer integrating third-party AI services into a system falling within Annex III, will register in the EU AI Act database in compliance with this obligation. The deadline for registration is 2 August 2026. 

As of the effective date of this Policy, registration has not yet been completed. This is identified as a compliance gap in Appendix B and an action item in Appendix C. 

3.5 Conditions for Reclassification to High-Risk 

EZclass acknowledges that the Article 6(3) derogation is contingent on the System's actual use and effects. The following conditions would trigger a reassessment and potential reclassification of the Placement Test as a high-risk AI system: 

  1. Binding institutional decisions. An educational institution or employer uses Placement Test scores as the sole or determinative basis for admission, exclusion, streaming, or progression decisions that materially affect an individual's access to education or employment.
  2. Certification or qualification. The Placement Test output is presented or treated as a formal certification, qualification, or credential recognised by educational authorities or employers.
  3. Removal of human override. The practical ability of users or institutional administrators to override the AI-suggested level is eliminated or materially constrained.
  4. Legal or similarly significant effects. Any deployment scenario in which the Placement Test output produces legal effects or similarly significant effects on natural persons within the meaning of Article 22(1) GDPR.
  5. Profiling. Any modification to the System that introduces profiling of natural persons, including the aggregation of placement results with other personal data to build individual profiles for predictive purposes.
  6. Regulatory guidance. The European Commission, the AI Office, or a national competent authority issues guidance or a decision indicating that systems of the type operated by EZclass should be classified as high-risk notwithstanding the Article 6(3) conditions. 

In the event of reclassification, EZclass will undertake the full set of deployer obligations under Article 26, complete a conformity assessment pathway in cooperation with the relevant AI system providers, and update all documentation accordingly. 

3.6 Dual-Track Compliance Approach 

While EZclass claims the Article 6(3) derogation as its primary regulatory position, the Company adopts a precautionary dual-track approach by voluntarily implementing the majority of high-risk deployer obligations as a matter of best practice. This approach: 

  • Demonstrates good faith compliance and preparedness
  • Reduces the operational impact of any future reclassification
  • Aligns with the expectations of institutional clients, particularly those in the public education sector
  • Reflects the Company's commitment to responsible AI deployment 

The specific obligations adopted on a voluntary basis are identified in the relevant sections of this Policy and include human oversight mechanisms (Section 5), data governance controls (Section 6), evaluation and monitoring procedures (Section 7), incident reporting processes (Section 8), logging and audit trails (Section 11), and risk management measures (Section 14). 

 

4. Governance and Accountability 

4.1 Governance Structure 

EZclass maintains the following governance roles with respect to AI system oversight. Given the Company's size as a startup, a single individual may hold multiple roles. The governance structure shall be reviewed and expanded as the organisation grows. 

4.2 AI System Owner 

Responsibilities: 

  • Overall accountability for the Placement Test's compliance with this Policy and applicable provisions of the AI Act
  • Final decision authority on risk classification, reclassification triggers, and system deployment or suspension
  • Approval of material changes to the AI processing pipeline (e.g., change of AI provider, change of model, introduction of new use cases)
  • Sign-off on the annual Policy review and any interim updates
  • Liaison with national competent authorities and the AI Office in the event of regulatory enquiry
  • Oversight of institutional deployment terms to ensure alignment with the non-high-risk classification position 

4.3 Technical Lead 

Responsibilities: 

  • Day-to-day operational management of the Placement Test processing pipeline
  • Implementation and maintenance of prompt engineering, scoring calibration, and quality controls
  • Monitoring of AI provider performance, model versioning, and API availability
  • Implementation of technical safeguards (input validation, output monitoring, logging)
  • First-responder role for technical incidents (scoring anomalies, provider outages, data pipeline failures)
  • Execution of periodic accuracy benchmarking and drift detection
  • Management of integrations with infrastructure providers (GCP, Cloudflare, Firebase, Datadog, BetterStack, Sentry) 

4.4 Compliance and Privacy Lead 

Contact: [email protected] 

Responsibilities: 

  • Oversight of regulatory compliance across the AI Act, GDPR, and related legislation
  • Maintenance of this Policy and associated documentation
  • Management of international data transfer mechanisms (DPF for OpenAI, SCCs for DeepSeek)
  • Handling of user rights requests, complaints, and escalations related to AI processing
  • Coordination with the Estonian Data Protection Inspectorate (Andmekaitse Inspektsioon) where required
  • Oversight of AI literacy training compliance (Article 4)
  • Completion and maintenance of the FRIA template for institutional deployments
  • Management of the EU AI Act database registration process 

4.5 Decision Authority Matrix 

Decision Authority Consulted Informed 
Risk classification / reclassification AI System Owner Compliance Lead, Technical Lead All staff 
Change of AI provider or model AI System Owner Technical Lead, Compliance Lead All staff 
System suspension due to incident Technical Lead (immediate), AI System Owner (confirmation) Compliance Lead All staff, affected users 
Institutional deployment approval AI System Owner Compliance Lead Technical Lead 
FRIA completion for institutional client Compliance Lead AI System Owner, Technical Lead Institutional client 
Regulatory notification or reporting Compliance Lead AI System Owner Technical Lead 
Policy update or new version AI System Owner (sign-off) Compliance Lead (drafting), Technical Lead (technical review) All staff 
Human review of placement result Support team (execution) Technical Lead (if technical issue) Compliance Lead (if complaint/dispute) 

 

 

5. Human Oversight Framework 

5.1 Principle 

In accordance with the voluntary adoption of Article 26(1) and (2) deployer obligations, EZclass ensures that human oversight is embedded at each stage of the Placement Test lifecycle: pre-deployment, runtime, and post-deployment. 

5.2 Pre-Deployment Oversight 

(a) Prompt engineering review. All prompts and instructions submitted to the DeepSeek API for scoring evaluation are designed, reviewed, and approved by the Technical Lead before deployment. Prompt design is documented and version-controlled. Changes to prompts undergo review for potential bias, scoring accuracy, and alignment with CEFR standards before being deployed to production. 

(b) Scoring calibration. Prior to deployment of any new prompt version or model update, the Technical Lead conducts calibration testing against a set of reference responses with known CEFR levels. The calibration process verifies that AI-generated scores fall within acceptable tolerance ranges relative to expected levels. 

(c) Model selection review. Selection of AI providers and models (currently OpenAI Whisper for STT and DeepSeek deepseek-chat for evaluation) is subject to review by the AI System Owner, considering accuracy, reliability, data protection posture, and compliance with applicable transfer mechanisms. 

5.3 Runtime Oversight 

(a) Human review on request. Any user may request a human review of their placement result by contacting [email protected]. Upon receipt of such a request, a qualified EZclass staff member will review the user's responses and the AI-generated evaluation, and may adjust the assigned CEFR level if the human reviewer determines that the AI assessment is inaccurate. This mechanism is disclosed to users in T&C §1.2 and Privacy Policy §10. 

(b) Support escalation. Where a user disputes the outcome of a human review, or where a systemic issue is identified through user complaints, the matter is escalated to the Compliance and Privacy Lead at [email protected] for further investigation. 

(c) Automated monitoring. Real-time monitoring of API performance, error rates, and response times is maintained via Datadog and BetterStack. Anomalous patterns (e.g., unusually high error rates, response time degradation, unexpected output distributions) trigger alerts to the Technical Lead. 

5.4 Post-Deployment Oversight 

(a) Periodic accuracy audits. The Technical Lead conducts accuracy audits on a quarterly basis (at minimum), comparing a sample of AI-generated placements against human grader assessments. Results are documented and reported to the AI System Owner. 

(b) Output distribution analysis. The Technical Lead monitors the distribution of CEFR levels assigned by the System over time. Significant shifts in distribution patterns (e.g., anomalous clustering at a particular level) are investigated as potential indicators of model drift or scoring bias. 

(c) User feedback analysis. User complaints, feedback, and human review requests related to placement accuracy are aggregated and analysed on a quarterly basis to identify patterns that may indicate systematic scoring issues. 

5.5 Override Mechanisms 

Users have two distinct override mechanisms available: 

  1. Self-service override: Users may select any course level on the EZclass platform regardless of their AI-suggested placement. The System does not enforce the suggested level.
  2. Human review override: Users may request a human review of their placement via [email protected]. If the human reviewer determines the AI assessment to be inaccurate, the CEFR level is adjusted accordingly. 

5.6 Limitations Disclosure 

EZclass discloses to users that: 

  • The Placement Test is an AI-powered assessment and not a human evaluation (T&C §1.2)
  • Results are indicative and not equivalent to formal certifications such as IELTS, TOEFL, or Cambridge English examinations (T&C §1.2.3)
  • AI scoring may not reflect human grader assessments in all cases
  • Input quality (audio clarity, response completeness) affects assessment accuracy
  • Human review is available on request 

 

6. Data Governance and Input Quality 

6.1 Principle 

EZclass implements data governance controls to ensure that input data to the Placement Test is of sufficient quality to support accurate and fair assessment outcomes, in alignment with the voluntary adoption of Article 26(4) deployer obligations regarding input data relevance and representativeness. 

6.2 Audio Input Quality Controls 

The following controls are applied to audio inputs (spoken task responses): 

  • Format validation: Audio recordings are captured in a standardised format (WebM/Opus or WAV) via the browser interface, ensuring compatibility with the Whisper API.
  • Duration constraints: Minimum and maximum recording durations are enforced per task to ensure sufficient speech content for meaningful transcription and evaluation, while preventing excessively long or empty submissions.
  • Noise guidance: Users are provided with guidance to record in a quiet environment. The interface includes a microphone check functionality.
  • Retry logic: Where the Whisper API returns a transcription error or an empty transcription, the System may prompt the user to re-record or flag the response for manual review. 

6.3 Text Input Validation 

The following controls are applied to written text inputs: 

  • Character length constraints: Minimum and maximum character limits per task are enforced to ensure that responses contain sufficient content for meaningful evaluation.
  • Input sanitisation: Text inputs are sanitised to remove code injection attempts, excessive special characters, or non-linguistic content that could interfere with AI evaluation.
  • Language detection: Responses that are not in English (or contain predominantly non-English content) may be flagged. 

6.4 Bias Considerations and Fairness 

EZclass acknowledges the following bias risks in the Placement Test pipeline and implements the corresponding mitigations: 

(a) Accent and dialect bias (Whisper transcription). The Whisper speech-to-text model may exhibit differential transcription accuracy across English accents and dialects (e.g., non-native accents, regional varieties of English). Mitigation measures include: 

  • Periodic testing of transcription accuracy across a range of accent profiles
  • Monitoring of transcription error rates by user-reported language background (where available)
  • Availability of human review for users who believe their spoken responses were inaccurately transcribed 

(b) Non-native speaker patterns (DeepSeek evaluation). The DeepSeek evaluation model may not consistently account for non-native speaker patterns that are linguistically valid but non-standard. Mitigation measures include: 

  • Prompt design that instructs the model to evaluate communicative competence in accordance with CEFR descriptors, rather than penalising non-native patterns that do not impair communication
  • Periodic review of scoring outcomes for users at lower CEFR levels (A1-A2) to identify systematic under- or over-scoring
  • Inclusion of diverse non-native English samples in calibration benchmarks 

(c) Prompt design for fairness. Evaluation prompts submitted to DeepSeek are designed to: 

  • Apply CEFR criteria consistently across all users
  • Avoid penalising cultural references, regional vocabulary, or discourse patterns that reflect linguistic diversity
  • Focus assessment on communicative competence, grammatical range, vocabulary breadth, and task completion 

6.5 Special Categories of Data 

The Placement Test does not process special categories of personal data within the meaning of Article 9(1) GDPR. Specifically: 

  • Voice data: Audio recordings of spoken responses constitute personal data but do not constitute biometric data processed for the purpose of uniquely identifying a natural person. The audio is processed solely for speech-to-text transcription in order to evaluate language proficiency. This position is documented in T&C §1.2 (biometric clarification) and Privacy Policy §11.
  • No data concerning health, racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, or sexual orientation is collected or processed by the Placement Test. 

 

 

7. AI Evaluation and Monitoring 

7.1 Accuracy Benchmarking 

The Technical Lead shall establish and maintain a benchmark dataset consisting of English language responses with known CEFR levels, validated by qualified human assessors. The benchmark dataset shall: 

  • Cover all CEFR levels from A1 to C2
  • Include both written and spoken response types
  • Represent a range of first-language backgrounds and English varieties
  • Be used to evaluate AI scoring accuracy upon initial deployment, after any model or prompt change, and on a quarterly basis thereafter 

Benchmarking results shall be documented, including the percentage of AI-assigned levels that match human assessor levels, the percentage within one level of the human assessment, and the identification of any systematic over- or under-scoring at particular levels. 

Note: As of the effective date of this Policy, the formal CEFR calibration benchmark dataset has not yet been established. This is identified as a compliance gap in Appendix B and an action item in Appendix C. 

7.2 Output Distribution Monitoring 

The Technical Lead shall monitor the distribution of CEFR levels assigned by the System on a monthly basis. Monitoring shall include: 

  • The overall distribution of assigned levels (percentage of users at each CEFR level)
  • Month-over-month changes in distribution
  • Comparison of distribution patterns against expected population-level distributions for the user base
  • Identification of anomalous clustering (e.g., disproportionate assignment to a single level), which may indicate prompt failure, model drift, or scoring bias 

Significant deviations shall be escalated to the AI System Owner for investigation. 

7.3 Provider Model Change Monitoring 

EZclass depends on third-party AI models that may be updated by their providers. The Technical Lead shall: 

  • Monitor DeepSeek API documentation, release notes, and communications for announcements of model updates, deprecations, or changes to the deepseek-chat model
  • Monitor OpenAI API documentation for announcements of Whisper model updates
  • Where possible, pin API requests to specific model versions to prevent unannounced model changes from affecting production scoring
  • Upon notification of a model change, conduct re-benchmarking against the calibration dataset before deploying the updated model in production
  • Document the model version in use at any given time, including the date of any model transitions 

7.4 Comparison with Human Grader Assessments 

On a quarterly basis, the Technical Lead shall select a random sample of completed placement tests and have the corresponding responses independently evaluated by a qualified human assessor. The results shall be compared against the AI-generated placements to: 

  • Calculate inter-rater agreement between the AI system and human assessors
  • Identify response types, CEFR levels, or user profiles where agreement is consistently low
  • Inform prompt refinement and calibration efforts 

7.5 Drift Detection Methodology 

Drift detection shall be conducted through a combination of: 

  • Statistical monitoring: Tracking the mean, median, and standard deviation of numerical scores over time, with alerts triggered when these metrics deviate beyond predefined thresholds
  • Distribution shift analysis: Comparing current CEFR level distributions against historical baselines using appropriate statistical tests
  • Qualitative review: Periodic manual inspection of AI-generated feedback text to identify changes in tone, specificity, or alignment with CEFR descriptors
  • Trigger-based re-evaluation: Automatic re-benchmarking whenever a model version change is detected or when statistical monitoring triggers an alert

 

8. Incident Detection and Reporting 

8.1 Definition of AI Incident 

For the purposes of this Policy, an "AI incident" is defined as any event involving the Placement Test that results in, or has the potential to result in, one or more of the following: 

  • Systematic scoring errors: A pattern of incorrect CEFR level assignments affecting multiple users, as opposed to isolated individual inaccuracies
  • Bias patterns: Evidence that the System systematically scores users of a particular demographic group, language background, accent profile, or other protected characteristic differently from comparable users
  • Provider outage or failure: A failure of the OpenAI Whisper API or DeepSeek API that prevents the Placement Test from functioning, that results in degraded output quality, or that returns erroneous results
  • Data leak or unauthorised access: Any unauthorised disclosure of, or access to, user input data (audio recordings, text responses), AI-generated outputs (scores, feedback), or system logs
  • Prompt injection or manipulation: An event in which a user or third party manipulates the AI evaluation through adversarial inputs, resulting in unreliable outputs
  • Model drift: A significant and sustained change in scoring patterns that is not attributable to changes in the user population, indicating degradation or alteration of the underlying AI model 

8.2 Detection Mechanisms 

EZclass employs the following mechanisms for incident detection: 

Mechanism Description Responsibility 
Automated monitoring (Datadog, BetterStack) Real-time monitoring of API error rates, response times, HTTP status codes, and system health Technical Lead 
Error logging (Sentry) Capture and alerting on application-level exceptions and errors in the Placement Test pipeline Technical Lead 
Output distribution monitoring Monthly analysis of CEFR level distributions and score statistics to detect anomalies Technical Lead 
User complaints Review of support tickets related to placement accuracy, transcription quality, or scoring fairness Support team, escalated to Technical Lead 
Periodic accuracy audits Quarterly comparison of AI outputs against human assessor evaluations Technical Lead 
Provider communications Monitoring of OpenAI and DeepSeek API status pages, release notes, and incident reports Technical Lead 

 

8.3 Escalation Procedure 

Upon detection or credible report of an AI incident, the following escalation procedure shall be followed: 

Level 1 — Technical Lead (Immediate) 

  • Assess the incident scope and severity
  • If necessary, suspend the Placement Test or the affected component to prevent ongoing harm
  • Implement immediate technical mitigation (e.g., fallback to cached results, disabling the affected API, displaying a maintenance notice)
  • Document the incident in the internal incident log 

Level 2 — AI System Owner (Within 24 hours) 

  • Review the Technical Lead's assessment and confirm severity classification
  • Authorise continued suspension or resumption of the System
  • Determine whether affected users should be notified
  • Determine whether institutional clients should be notified 

Level 3 — Compliance and Privacy Lead (Within 48 hours) 

  • Assess whether the incident involves a personal data breach requiring notification under Article 33 GDPR
  • Assess whether the incident constitutes a "serious incident" within the meaning of Article 3(49) of the AI Act
  • If the System is reclassified as high-risk, assess whether reporting to the relevant market surveillance authority is required under Article 26(5) of the AI Act
  • Coordinate notification to the Estonian Data Protection Inspectorate if a personal data breach is confirmed
  • Coordinate notification to the relevant market surveillance authority if required 

8.4 Reporting Timelines 

Scenario Timeline Authority 
Personal data breach (GDPR Article 33) Within 72 hours of awareness Estonian Data Protection Inspectorate 
Serious AI incident (if high-risk reclassification applies) Without undue delay, after first notifying the provider (Article 26(5)) Relevant market surveillance authority 
User notification (data breach with high risk to rights) Without undue delay (GDPR Article 34) Affected users 
Institutional client notification As specified in contract, or within 48 hours Institutional client 

 

8.5 Internal Incident Log 

EZclass shall maintain an internal AI incident log documenting, for each incident: 

  • Date and time of detection
  • Description of the incident
  • Detection mechanism
  • Severity classification (Critical / High / Medium / Low)
  • Affected scope (number of users, time period, specific components)
  • Root cause analysis (where determinable)
  • Mitigation actions taken
  • Resolution date and outcome
  • Lessons learned and preventive measures implemented 

The incident log shall be retained for a minimum of five (5) years and made available for regulatory inspection upon request. 

 

9. User Rights and Safeguards 

9.1 Transparency 

In compliance with Article 50(1) of the AI Act and in alignment with existing GDPR transparency obligations, EZclass ensures that users of the Placement Test are informed that they are interacting with an AI system. This information is provided through: 

  • Terms & Conditions §1.2: Discloses the use of AI providers (OpenAI and DeepSeek) in the assessment pipeline, the advisory nature of results, and the biometric data clarification regarding voice recordings
  • Privacy Policy §10: Discloses the automated processing of user responses for AI-based scoring, the Article 22 GDPR position (not automated decision-making with legal or similarly significant effects), and the transparency of the AI scoring methodology
  • Privacy Policy §11: Discloses the processing of voice data, including the transmission of audio recordings to OpenAI for transcription
  • Legal Notice: Includes the Article 22 GDPR disclaimer and AI provider disclosure
  • In-product notice: Users are informed at the point of test commencement that their responses will be evaluated by AI 

9.2 Right to Challenge and Human Review 

Any user may challenge their placement result and request a human review by contacting [email protected]. The process is as follows: 

  1. The user submits a request for human review, identifying the placement test in question and the basis for the challenge (e.g., belief that the score is inaccurate, concern about transcription quality).
  2. A qualified EZclass staff member reviews the user's original responses (written text and, where relevant, audio/transcription) and the AI-generated evaluation.
  3. The reviewer independently assesses the user's responses against CEFR criteria.
  4. If the reviewer determines that the AI-assigned level is inaccurate, the CEFR level is adjusted and the user is notified of the revised result.
  5. If the reviewer confirms the AI-assigned level, the user is notified with an explanation. 

The human review process shall be completed within a reasonable timeframe, not to exceed fifteen (15) business days from receipt of the request. 

9.3 Dispute Escalation 

Where a user is not satisfied with the outcome of the human review, the user may escalate the matter to the Compliance and Privacy Lead at [email protected]. The Compliance Lead shall: 

  • Review the original AI assessment, the human review outcome, and the user's grounds for dispute
  • Determine whether further investigation is warranted
  • Provide a final written response to the user within fifteen (15) business days of escalation 

9.4 Limitations Disclosure 

EZclass clearly discloses the limitations of the Placement Test to users, as documented in T&C §1.2.3 and T&C §10.3: 

  • Results are indicative only and do not constitute certifications, qualifications, or credentials
  • The Placement Test is not a substitute for official language proficiency examinations (e.g., IELTS, TOEFL, Cambridge English)
  • AI scoring may not reflect human grader assessments in all cases
  • Accuracy depends on the quality of user inputs (clarity of audio, completeness of written responses)
  • No warranty of fitness for any particular purpose is provided 

9.5 GDPR Data Subject Rights 

Users' rights under GDPR are fully preserved and are exercisable in relation to data processed by the Placement Test. These rights are documented in Privacy Policy §8 and include: 

  • Right of access (Article 15 GDPR)
  • Right to rectification (Article 16 GDPR)
  • Right to erasure (Article 17 GDPR)
  • Right to restriction of processing (Article 18 GDPR)
  • Right to data portability (Article 20 GDPR)
  • Right to object (Article 21 GDPR)
  • Right not to be subject to solely automated decision-making (Article 22 GDPR) — noting that EZclass's position is that the Placement Test does not constitute automated decision-making with legal or similarly significant effects 

Data subject rights requests may be submitted to [email protected] and are handled in accordance with the procedures set out in the Privacy Policy. 

 

10. AI Literacy Requirements 

10.1 Legal Basis 

Article 4 of the AI Act requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. This obligation has been applicable since 2 February 2025

EZclass, as a deployer of AI systems, is subject to Article 4. The Company is committed to ensuring that all personnel involved in the operation, maintenance, or oversight of the Placement Test possess the knowledge and skills necessary to fulfil their roles responsibly. 

10.2 Scope of the AI Literacy Obligation 

The following categories of personnel are within scope of EZclass's AI literacy programme: 

Role Category Literacy Requirements 
Technical team (development, operations) Full understanding of the AI pipeline (Whisper STT, DeepSeek evaluation), prompt engineering, bias risks, monitoring procedures, incident escalation 
Support team Understanding of how the Placement Test works at a functional level, awareness of AI limitations, ability to handle human review requests and user complaints about AI scoring 
Management / AI System Owner Understanding of AI Act obligations, risk classification position, governance responsibilities, decision authority for system changes and reclassification 
Compliance / Privacy Lead Comprehensive understanding of AI Act requirements, GDPR intersection, international transfer obligations, FRIA requirements, registration obligations 
Institutional sales / account management Ability to explain the AI system to institutional clients, awareness of limitations and disclaimers, understanding of contractual guardrails for institutional use 

 

10.3 Training Content 

AI literacy training at EZclass shall cover, at a minimum: 

  1. General AI concepts: What AI is, how machine learning models work, the distinction between rule-based systems and generative AI, and the concept of model limitations and uncertainty
  2. EZclass AI pipeline: Detailed explanation of the Placement Test processing pipeline (input collection, Whisper transcription, DeepSeek evaluation, output generation), including the role of each component and the data flows involved
  3. Bias risks and limitations: Understanding of potential biases in speech-to-text transcription (accent bias) and AI evaluation (non-native speaker patterns), and the measures EZclass employs to mitigate these risks
  4. EU AI Act obligations: Overview of the AI Act's risk classification framework, EZclass's position under Article 6(3), the deployer obligations under Article 26 (adopted voluntarily), transparency obligations under Article 50, and the AI literacy obligation under Article 4
  5. GDPR intersection: How the AI Act's requirements interact with GDPR obligations, including the Article 22 position, data subject rights, and international transfer mechanisms
  6. Escalation procedures: How to identify and escalate potential AI incidents, user complaints, and scoring anomalies
  7. User-facing communication: How to explain the AI system to users and institutional clients, including required disclosures and limitation statements 

10.4 Training Cadence 

  • Onboarding: All new personnel within scope receive AI literacy training as part of their onboarding process, to be completed within the first thirty (30) days of engagement.
  • Annual refresher: All personnel within scope receive an annual refresher training, to be completed within Q1 of each calendar year.
  • Ad hoc updates: Where material changes occur to the AI pipeline, risk classification, regulatory framework, or governance structure, targeted training updates are provided within thirty (30) days of the change. 

10.5 Documentation 

EZclass shall maintain records of AI literacy training, including: 

  • Training materials and content (version-controlled)
  • Attendance/completion records for each training session
  • Date of training completion for each individual
  • Training gap analysis (if applicable) 

These records shall be made available to national competent authorities upon request. 

Note: As of the effective date of this Policy, the formal AI literacy training programme has not yet been fully documented, although informal training and knowledge-sharing occurs within the team. Formalisation of the training programme is identified as a compliance gap in Appendix B and an action item in Appendix C. 

 

11. Logging, Retention, and Audit Trail 

11.1 Data Logged 

EZclass logs the following data in connection with the Placement Test: 

Inputs: 

Data Type Description Storage Location 
Audio recordings User's spoken task responses (audio files) GCP (Frankfurt/Netherlands) 
Text responses User's written task responses GCP (Frankfurt/Netherlands) 
AI prompts Evaluation prompts and instructions submitted to DeepSeek GCP (Frankfurt/Netherlands) 

 

Outputs: 

Data Type Description Storage Location 
CEFR level Assigned proficiency level (A1-C2) GCP (Frankfurt/Netherlands) 
Numerical score Score reflecting the evaluation GCP (Frankfurt/Netherlands) 
Diagnostic feedback AI-generated textual feedback GCP (Frankfurt/Netherlands) 
Certificate data Data used to generate the unofficial EZclass certificate GCP (Frankfurt/Netherlands) 

 

System Logs: 

Data Type Description Storage Location 
API call metadata Request/response timestamps, HTTP status codes, response times for Whisper and DeepSeek API calls Datadog, BetterStack 
Error logs Application-level exceptions and error traces Sentry 
Performance metrics System performance data (latency, throughput, error rates) Datadog 
Access logs Authentication and access events Firebase, GCP 

 

11.2 Retention Periods 

Retention periods for Placement Test data are aligned with the Privacy Policy §7: 

Data Category Retention Period Justification 
Audio recordings (raw) 90 days Required for human review requests and quality assurance; thereafter deleted 
Text inputs (raw) 90 days Required for human review requests and quality assurance; thereafter deleted 
AI prompts Retained as system configuration Necessary for audit trail and version control; no personal data 
Test results (CEFR level, score) 3 years Retained to provide users access to their historical results and certificates 
Certificates 3 years Retained to allow users to re-access their certificates 
Diagnostic feedback 3 years Retained as part of the test result record 
System logs (API metadata, errors, performance) As per provider retention policies (Datadog: contract term; Sentry: contract term; BetterStack: contract term) Operational necessity; no raw user content 

 

11.3 Audit Trail 

EZclass maintains an audit trail sufficient to reconstruct the processing of any individual Placement Test for the duration of the retention period. The audit trail includes: 

  • The user's original inputs (audio and text) — available for 90 days
  • The AI prompts used for evaluation — available for the system configuration retention period
  • The AI-generated outputs (CEFR level, score, feedback) — available for 3 years
  • API call metadata (timestamps, model version, response codes) — available for the system log retention period
  • Any human review conducted, including the reviewer identity, date, and outcome 

11.4 Regulatory Access 

Logs and audit trail records shall be made available for inspection by: 

  • National competent authorities (market surveillance authorities) upon lawful request
  • The Estonian Data Protection Inspectorate in the context of GDPR enforcement
  • The AI Office in the context of AI Act enforcement
  • Institutional clients, to the extent specified in contractual arrangements and the DPA 

 

12. Fundamental Rights Impact Assessment (FRIA) 

12.1 Applicability 

Under Article 27 of the AI Act, the obligation to conduct a Fundamental Rights Impact Assessment prior to deploying a high-risk AI system applies to: 

  • Deployers that are bodies governed by public law, or private entities providing public services
  • Deployers of AI systems used for creditworthiness evaluation, credit scoring, or risk assessment and pricing for life and health insurance 

For EZclass's current deployment model: 

  • B2C direct-to-consumer use: Where EZclass offers the Placement Test directly to individual consumers via placement.ezclass.io, a FRIA is not currently required. EZclass is a private company and the Placement Test is not a public service in this deployment context.
  • Institutional deployment (B2B): Where EZclass sells the Placement Test to, or deploys it on behalf of, public educational institutions or entities providing public educational services, a FRIA may be required of the institutional deployer. In such cases, EZclass shall cooperate with the institutional client to complete a FRIA, and shall provide the institutional client with all necessary information about the AI system to support their assessment. 

Additionally, where EZclass itself deploys the System in a manner that constitutes the provision of a public service (e.g., a contract with a government education ministry), EZclass will complete a FRIA as a deployer obligation. 

12.2 Identified Risk Areas 

Regardless of the current FRIA obligation status, EZclass has identified the following fundamental rights risk areas relevant to the Placement Test: 

(a) Non-discrimination (Article 21, EU Charter of Fundamental Rights) 

  • Risk: The AI system may produce systematically different scores for users based on accent, dialect, first language, or other characteristics correlated with protected characteristics (race, ethnic origin, nationality).
  • Affected groups: Non-native English speakers, speakers of non-standard English varieties, speakers with accents underrepresented in AI training data.
  • Mitigation: Periodic bias testing across accent and language backgrounds; human review option; prompt design emphasising communicative competence over native-like accuracy; calibration benchmarking with diverse samples. 

(b) Right to education (Article 14, EU Charter) 

  • Risk: Inaccurate placement may result in a user being directed to a course level that is too easy or too difficult, potentially affecting the quality of their educational experience.
  • Affected groups: All users, particularly those at boundary levels (e.g., B1/B2 boundary) where small scoring differences determine placement.
  • Mitigation: Advisory nature of results; user override available; human review on request; clear disclosure that results are indicative. 

(c) Accessibility (Article 26, EU Charter — Integration of persons with disabilities) 

  • Risk: Users with speech impairments, hearing impairments affecting self-monitoring of speech production, or other disabilities affecting oral language production may receive inaccurate scores on spoken tasks.
  • Affected groups: Users with speech or language disabilities.
  • Mitigation: Human review on request; consideration of alternative assessment pathways for users who disclose disabilities; clear disclosure of system limitations. 

(d) Right to good administration (Article 41, EU Charter) 

  • Risk: In institutional deployments, the Placement Test may be perceived as an administrative decision without adequate explanation or recourse.
  • Affected groups: Users in institutional contexts where they did not voluntarily take the test.
  • Mitigation: Transparency obligations (Section 9.1); human review mechanism (Section 9.2); institutional contractual guardrails requiring that results not be used as binding decisions. 

12.3 FRIA Template 

EZclass shall prepare a FRIA template that can be completed on a per-deployment basis for institutional clients. The template shall follow the structure anticipated by Article 27(1) and shall include: 

  • Description of the AI system and its deployment context
  • Identification of affected groups and categories of persons
  • Mapping of relevant Charter rights to identified risks
  • Severity and likelihood assessment for each identified risk
  • Mitigation measures (technical and organisational)
  • Human oversight structure
  • Monitoring and review plan 

Note: As of the effective date of this Policy, the FRIA template has not yet been completed. This is identified as a compliance gap in Appendix B and an action item in Appendix C. The template must be completed before the first institutional deployment to a public educational institution or entity providing public services. 

 

13. Third-Party Dependencies 

13.1 AI Service Providers 

The Placement Test depends on two third-party AI service providers: 

13.1.1 OpenAI (Whisper API) 

Attribute Detail 
Provider OpenAI, L.L.C. 
Registered office San Francisco, California, USA 
Service Speech-to-text transcription (Whisper API) 
Role Data processor (sub-processor to EZclass) 
Data processed Audio recordings of user spoken responses 
Transfer mechanism EU-U.S. Data Privacy Framework (DPF) 
Training policy API inputs are not used for model training (per API terms) 
Data retention by provider As specified in OpenAI API Data Processing Addendum; no retention beyond API call processing 
Contractual basis OpenAI API Terms of Service and Data Processing Addendum 

 

13.1.2 DeepSeek AI Co., Ltd. 

Attribute Detail 
Provider DeepSeek AI Co., Ltd. 
Registered office Beijing, China 
API endpoint api.deepseek.com 
Model deepseek-chat 
Service AI-powered evaluation of English language proficiency 
Role Data processor (sub-processor to EZclass) 
Data processed Written text responses and transcribed speech 
Transfer mechanism Standard Contractual Clauses (EU Commission Implementing Decision (EU) 2021/914, Module 2: Controller to Processor) 
Training policy API inputs are not used for model training (per API terms) 
Data retention by provider No retention beyond API call processing 
Contractual basis DeepSeek API Terms of Service and SCCs 

 

13.2 EZclass's Role and Obligations as Deployer 

EZclass is a deployer within the meaning of Article 3(4) of the AI Act. EZclass does not develop or train AI models. The provider obligations under the AI Act (Articles 8-25) rest with OpenAI and DeepSeek as providers of the respective AI components. EZclass's obligations as a deployer include: 

  • Use in accordance with instructions: Using the AI services in accordance with the providers' instructions for use and terms of service
  • Performance monitoring: Monitoring the performance of AI components in the context of the Placement Test and identifying degradation, drift, or anomalies
  • Issue reporting: Reporting identified issues, risks, or incidents to the providers in accordance with their terms and the AI Act
  • Input data quality: Ensuring that input data submitted to the AI services is relevant and of sufficient quality (Section 6)
  • Human oversight: Implementing human oversight measures appropriate to the deployment context (Section 5) 

13.3 Contractual Safeguards 

The following contractual safeguards are in place with respect to the AI service providers: 

  • No training on inputs: Both OpenAI and DeepSeek's API terms prohibit the use of API inputs for model training purposes
  • No data retention beyond processing: Both providers process data for the purpose of returning an API response and do not retain input data beyond the API call processing
  • Data Processing Addendum / SCCs: Data protection obligations are governed by the OpenAI Data Processing Addendum (DPF transfer) and the DeepSeek SCCs (Module 2)
  • Sub-processor transparency: Both providers are listed in EZclass's DPA sub-processor table 

13.4 Transfer Impact Assessment 

A Transfer Impact Assessment ("TIA") has been or shall be conducted for each international transfer: 

  • OpenAI (USA): Transfer relies on the EU-U.S. Data Privacy Framework. The adequacy of the DPF is subject to ongoing review. EZclass monitors developments regarding the DPF and maintains awareness of supplementary measures that may be required.
  • DeepSeek (China): Transfer relies on Standard Contractual Clauses (Module 2). China does not benefit from an EU adequacy decision. The TIA for the DeepSeek transfer considers the legal framework in China (including the PIPL and Cybersecurity Law), the nature of data transferred (text responses and transcribed speech, not special category data), the limited scope of processing (API call evaluation, no data retention), and the supplementary measures in place (data minimisation, encryption in transit, no bulk data transfer). 

Note: As of the effective date of this Policy, the execution of SCCs with DeepSeek is still pending formal completion. This is identified as a compliance gap in Appendix B and an action item in Appendix C. 

13.5 Infrastructure and Supporting Services 

In addition to the AI service providers, the Placement Test relies on the following infrastructure and supporting services, which are not AI systems within the scope of this Policy but are relevant to the overall system architecture: 

Service Provider Purpose Location 
Google Cloud Platform (GCP) Google LLC Primary infrastructure (compute, storage, database) Frankfurt, Germany / Netherlands 
Cloudflare Cloudflare, Inc. CDN, DDoS protection, edge delivery Global (EU data processing) 
Firebase Google LLC Authentication, real-time database EU region 
Datadog Datadog, Inc. Application monitoring, alerting EU region 
BetterStack Better Stack, Inc. Uptime monitoring, incident management EU region 
Sentry Functional Software, Inc. Error tracking EU region 
Stripe Stripe, Inc. Payment processing EU/USA 
Zoom Video SDK Zoom Video Communications, Inc. Video classes (not part of Placement Test) Global 
Directus Self-hosted CMS GCP (Frankfurt/Netherlands) 
ZeptoMail Zoho Corporation Transactional email EU/India 
Brevo Brevo (formerly Sendinblue) Marketing email EU 
Contentsquare Contentsquare SAS Digital experience analytics EU 
Microsoft Clarity Microsoft Corporation Session analytics EU/USA 
Google reCAPTCHA v3 Google LLC Bot detection Global 

 

 

14. Risk Management System 

14.1 Purpose 

In voluntary alignment with the risk management approach contemplated by Articles 9 and 26 of the AI Act, EZclass maintains a risk register identifying the principal risks associated with the Placement Test, their severity, likelihood, and corresponding mitigation measures. 

14.2 Risk Register 

Risk 1: Scoring Bias Against Specific Accents or Dialects 

Attribute Detail 
Risk ID RISK-001 
Severity HIGH 
Likelihood Medium 
Description The AI evaluation pipeline (Whisper transcription + DeepSeek scoring) may systematically disadvantage users with certain accents, dialects, or first-language backgrounds, leading to inaccurate CEFR level assignments that correlate with protected characteristics. 
Affected rights Non-discrimination (Article 21 EU Charter); right to education (Article 14 EU Charter) 
Mitigation (1) Periodic calibration testing with diverse accent samples; (2) human review on request; (3) prompt design emphasising communicative competence; (4) monitoring of score distribution by user-reported language background; (5) output distribution monitoring for anomalous patterns 
Residual risk Medium — full elimination of accent bias in third-party models is not within EZclass's direct control 

 

Risk 2: DeepSeek Model Change Degrading Accuracy 

Attribute Detail 
Risk ID RISK-002 
Severity HIGH 
Likelihood Medium 
Description DeepSeek may update the deepseek-chat model without prior notice or with insufficient notice, causing a change in scoring behaviour that degrades accuracy, introduces new biases, or produces outputs inconsistent with CEFR standards. 
Affected rights Right to education (Article 14 EU Charter); data quality principles 
Mitigation (1) Where available, version pinning of the DeepSeek model; (2) monitoring of DeepSeek release communications; (3) pre-deployment benchmarking against calibration dataset upon any detected model change; (4) automated output distribution monitoring to detect scoring shifts; (5) ability to suspend the System and revert to a previous configuration if necessary 
Residual risk Medium — dependency on third-party provider's release practices 

 

Risk 3: Whisper Transcription Errors Affecting Evaluation 

Attribute Detail 
Risk ID RISK-003 
Severity MEDIUM 
Likelihood Medium 
Description The Whisper speech-to-text model may produce inaccurate transcriptions due to poor audio quality, unusual accents, background noise, or model limitations, resulting in the DeepSeek evaluation model receiving incorrect input text. 
Affected rights Accuracy of assessment; fairness 
Mitigation (1) Audio quality checks and user guidance; (2) retry logic for failed or empty transcriptions; (3) human review on request, where the reviewer can listen to the original audio; (4) monitoring of transcription quality metrics 
Residual risk Low-Medium — audio quality issues are partially within user control 

 

Risk 4: International Transfer Risk (China) 

Attribute Detail 
Risk ID RISK-004 
Severity MEDIUM 
Likelihood Low 
Description The transfer of text responses and transcribed speech to DeepSeek (China) for evaluation creates international transfer risk, including potential access by Chinese authorities under the PIPL, Cybersecurity Law, or National Security Law. 
Affected rights Right to privacy (Article 7 EU Charter); protection of personal data (Article 8 EU Charter) 
Mitigation (1) Standard Contractual Clauses (Module 2); (2) Transfer Impact Assessment; (3) data minimisation (only text data transferred, not audio); (4) no data retention by DeepSeek beyond API call; (5) no special category data transferred; (6) encryption in transit; (7) ongoing monitoring of regulatory developments regarding EU-China data transfers 
Residual risk Medium — legal framework in China presents inherent risks that cannot be fully eliminated through contractual measures 

 

Risk 5: User Manipulation of Test Inputs 

Attribute Detail 
Risk ID RISK-005 
Severity LOW 
Likelihood Medium 
Description Users may attempt to manipulate the Placement Test by using AI-generated responses, copying text from external sources, having another person complete the test, or injecting adversarial prompts into text responses. 
Affected rights Integrity of assessment; fairness to other users 
Mitigation (1) Time limits on task completion; (2) behavioural analysis (e.g., paste detection, typing pattern analysis); (3) prompt injection defences in the evaluation prompt; (4) advisory nature of results limits harm from manipulation (user harms primarily themselves); (5) institutional clients may implement additional proctoring 
Residual risk Low — the advisory nature of results means that manipulation does not produce legal or similarly significant effects 

 

Risk 6: Over-Reliance on AI Scores by Institutional Clients 

Attribute Detail 
Risk ID RISK-006 
Severity LOW (escalates to HIGH if mitigation fails) 
Likelihood Medium 
Description Institutional clients may treat Placement Test results as binding determinations rather than advisory suggestions, using scores as the sole basis for admission, streaming, or progression decisions without human review. This would fundamentally alter the risk classification analysis and could trigger reclassification to high-risk. 
Affected rights Right to education (Article 14 EU Charter); non-discrimination (Article 21 EU Charter); right to good administration (Article 41 EU Charter) 
Mitigation (1) Contractual guardrails requiring institutional clients to treat results as advisory only; (2) clear disclaimers in all institutional-facing documentation; (3) prohibition on using results as sole basis for binding decisions in institutional terms; (4) periodic review of institutional client use patterns; (5) reclassification trigger monitoring (Section 3.5) 
Residual risk Low — contractual and documentary safeguards in place; risk escalates if safeguards are not respected 

 

14.3 Risk Review 

The risk register shall be reviewed: 

  • At least annually as part of the Policy review cycle (Section 16)
  • Upon any material change to the AI pipeline, provider, model, or deployment context
  • Upon receipt of regulatory guidance or enforcement action relevant to EZclass's risk classification
  • Following any AI incident classified as High or Critical severity 

 

15. Limitations and Disclaimers 

15.1 Nature of the Assessment 

The AI English Placement Test is an indicative assessment tool. It provides a suggested CEFR level based on AI evaluation of user responses. The following limitations apply: 

15.2 Not a Certification or Qualification 

The Placement Test result, including the CEFR level, score, diagnostic feedback, and the unofficial EZclass certificate, does not constitute: 

  • A formal language proficiency certification
  • A qualification recognised by educational authorities, governments, or regulatory bodies
  • An accredited assessment equivalent to IELTS, TOEFL, Cambridge English (FCE, CAE, CPE), DELF/DALF, or any other officially recognised examination
  • A credential suitable for immigration, academic admission, professional licensing, or employment eligibility purposes 

This limitation is disclosed in T&C §1.2.3. 

15.3 Indicative Results Only 

Results should be treated as guidance for educational purposes. They are designed to suggest an appropriate starting level for English language learning on the EZclass platform and should not substitute for professional assessment or official examinations where formal proof of language proficiency is required. 

15.4 AI Scoring Limitations 

  • AI scoring may not reflect human grader assessments in all cases
  • The evaluation model may not fully account for non-standard English varieties, dialects, or non-native speaker patterns that are linguistically valid but atypical
  • Speech-to-text transcription accuracy varies depending on audio quality, accent, background noise, and the Whisper model's performance on specific language varieties
  • The AI model may exhibit scoring inconsistencies at CEFR level boundaries (e.g., between B1 and B2)
  • Results may vary if the same user takes the test multiple times under different conditions 

15.5 Input Quality Dependency 

Assessment accuracy depends on the quality of user inputs: 

  • Audio recordings with excessive background noise, low microphone volume, or poor connectivity may produce inaccurate transcriptions and, consequently, inaccurate evaluations
  • Incomplete written responses, responses of insufficient length, or responses not addressing the task prompt may not provide the AI model with sufficient information for accurate assessment
  • Technical issues (browser compatibility, microphone access, network interruptions) may affect input quality 

15.6 No Warranty 

The Placement Test is provided on an "as is" basis. EZclass makes no warranty, express or implied, regarding the accuracy, reliability, or fitness for any particular purpose of the assessment results. This limitation is documented in T&C §10.3. 

15.7 Cross-References 

  • Terms & Conditions §1.2.3 (limitations and accuracy disclaimer)
  • Terms & Conditions §10.3 (limitation of liability)
  • Privacy Policy §10 (automated processing transparency)
  • Legal Notice (AI provider disclosure and Article 22 disclaimer) 

 

16. Review and Update Mechanism 

16.1 Review Cadence 

This Policy shall be reviewed at least annually. The annual review shall be completed within Q1 of each calendar year, beginning in Q1 2027. 

16.2 Trigger Events 

In addition to the annual review, this Policy shall be reviewed and, where necessary, updated upon the occurrence of any of the following trigger events: 

  1. New AI provider or model: Introduction of a new AI service provider, or migration to a different model from an existing provider (e.g., change from deepseek-chat to a different DeepSeek model, change from Whisper to an alternative STT provider)
  2. Model version change: A significant update to the DeepSeek or Whisper model that alters scoring behaviour, as detected through benchmarking or output monitoring
  3. New use case: Extension of the Placement Test to a new use case not currently covered by this Policy (e.g., assessment of languages other than English, assessment for a different purpose)
  4. Institutional deployment: Deployment of the Placement Test to or on behalf of a public educational institution or entity providing public services, triggering FRIA obligations
  5. Regulatory guidance: Issuance of guidelines, opinions, or decisions by the European Commission, the AI Office, a national competent authority, or the AI Board that affect EZclass's risk classification position, deployer obligations, or transparency requirements
  6. Reclassification: Any event described in Section 3.5 that triggers reclassification of the Placement Test as a high-risk AI system
  7. AI incident: Any AI incident classified as High or Critical severity, as described in Section 8
  8. Legal or contractual change: Material changes to the terms of service, data processing agreements, or API terms of OpenAI or DeepSeek
  9. International transfer mechanism change: Any development affecting the validity of the DPF or SCCs relied upon for international transfers
  10. Legislative amendment: Amendment to the AI Act, including through delegated acts or implementing acts 

16.3 Version Control 

This Policy is maintained under version control. Each version shall be identified by a version number, effective date, and a summary of changes made. 

Version Date Summary of Changes 
1.0 11 April 2026 Initial version 

 

16.4 Approval 

Each new version of this Policy requires sign-off from the AI System Owner, with review and input from the Compliance and Privacy Lead and the Technical Lead

16.5 Distribution 

This Policy is classified as CONFIDENTIAL. It is made available to: 

  • All EZclass personnel within the scope of the AI literacy programme (Section 10)
  • National competent authorities upon lawful request
  • Institutional clients under NDA, where required for compliance purposes
  • Legal counsel and auditors engaged by EZclass 

Portions of this Policy may be disclosed publicly (e.g., in summary form on the EZclass website) where necessary to meet transparency obligations under Article 50 of the AI Act or in response to user enquiries. 

Appendix A: Risk Classification Summary 

Executive Summary 

Classification position: EZclass claims the Article 6(3) derogation. The AI English Placement Test at placement.ezclass.io is not classified as high-risk under the EU AI Act. 

Annex III Applicability 

The Placement Test falls within the scope of Annex III, Category 3(c) (AI systems used for assessing the appropriate level of education). However, EZclass invokes the Article 6(3) derogation on the grounds that the System does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. 

Justification 

The derogation is justified by the following factors: 

  • Preparatory task (Article 6(3)(d)). The System performs a preparatory task to a broader educational assessment and placement process. It suggests a starting level; it does not determine educational access or outcomes.
  • Advisory output only. Results are explicitly advisory. No binding decisions are made by the System. Users and institutional clients are repeatedly informed that results are indicative.
  • No legal or similarly significant effects. The placement result does not produce legal effects or similarly significant effects within the meaning of Article 22(1) GDPR. This is documented across multiple EZclass policy documents.
  • Human override available. Users can request human review ([email protected]) and can freely select any course level regardless of the AI suggestion.
  • No profiling. The System does not perform profiling of natural persons.
  • Limited autonomy. The System does not operate autonomously in decision-making. It generates a recommendation that requires user acceptance (explicit course selection) to have any practical effect. 

Conditions for Reclassification 

The classification would change to high-risk if: 

  • Institutional clients use scores for binding admission or exclusion decisions
  • The output is treated as a formal certification or qualification
  • The human override mechanism is removed or materially constrained
  • Regulatory guidance indicates that the derogation does not apply
  • The System begins performing profiling of natural persons 

Precautionary Measures 

Notwithstanding the non-high-risk classification, EZclass voluntarily adopts the majority of deployer obligations applicable to high-risk systems (Article 26) as a matter of best practice, as detailed throughout this Policy. 

Appendix B: Compliance Gaps 

EZclass identifies the following compliance gaps as of the effective date of this Policy (11 April 2026). These gaps are documented in good faith to demonstrate awareness and to support a credible compliance roadmap. 

Gap 1: Article 49(2) Registration Not Yet Completed 

  • Requirement: Article 49(2) requires registration in the EU AI Act database for Annex III systems for which the provider/deployer has concluded that the system is not high-risk pursuant to Article 6(3).
  • Deadline: 2 August 2026
  • Status: Registration has not yet been completed. EZclass will register before the deadline.
  • Risk: Non-compliance with Article 49(2) after the deadline. 

Gap 2: FRIA Template Not Yet Completed 

  • Requirement: Article 27 requires a FRIA before deploying a high-risk AI system for public bodies or public service providers. While the Placement Test is not classified as high-risk, institutional deployments to public educational institutions may trigger FRIA obligations.
  • Deadline: Before first institutional deployment to a public educational institution or entity providing public services.
  • Status: The FRIA template has not yet been completed.
  • Risk: Inability to support institutional clients' compliance obligations; delay in institutional deployments. 

Gap 3: Formal AI Literacy Training Programme Not Yet Documented 

  • Requirement: Article 4 requires providers and deployers to take measures to ensure sufficient AI literacy among staff. This obligation has been applicable since 2 February 2025.
  • Deadline: Already applicable.
  • Status: Informal training and knowledge-sharing occurs within the team. A formal, documented training programme with materials, attendance records, and completion tracking has not yet been established.
  • Risk: Inability to demonstrate compliance with Article 4 upon regulatory enquiry; potential aggravating factor in enforcement proceedings for other AI Act violations. 

Gap 4: Scoring Calibration Benchmark Dataset Not Yet Established 

  • Requirement: Voluntary best practice (aligned with Article 9 risk management and Article 15 accuracy requirements for high-risk systems). A benchmark dataset of responses with known CEFR levels is needed for accuracy validation and drift detection.
  • Deadline: No regulatory deadline (voluntary measure), but operationally needed for credible monitoring.
  • Status: No formal benchmark dataset exists. Ad hoc testing is conducted but not against a structured, validated dataset.
  • Risk: Limited ability to systematically validate AI scoring accuracy, detect drift, or demonstrate monitoring rigour to regulators or institutional clients. 

Gap 5: DeepSeek SCC Execution Still Pending 

  • Requirement: International transfers of personal data to China (DeepSeek) require a valid transfer mechanism under Chapter V GDPR. Standard Contractual Clauses (Module 2) are the identified mechanism.
  • Deadline: Already applicable (transfers are ongoing).
  • Status: SCC execution with DeepSeek is still pending formal completion. Transfer is currently conducted under DeepSeek's API terms, which incorporate data processing commitments, but the formal SCC instrument has not been fully executed.
  • Risk: Non-compliance with GDPR Chapter V; potential enforcement action by the Estonian Data Protection Inspectorate; risk to the lawfulness of personal data transfers to China. 

Appendix C: Recommended Next Steps 

The following actions are prioritised by urgency and regulatory deadline: 

Priority 1 — Immediate (Q2 2026) 

# Action Owner Deadline Notes 
Complete DeepSeek SCC execution Compliance Lead Immediate Highest priority. Ongoing data transfers require a valid transfer mechanism. Engage DeepSeek to execute the EU SCCs (Module 2) and complete the associated Transfer Impact Assessment. 
Establish AI literacy training programme Compliance Lead Q2 2026 Article 4 obligation already enforceable. Document training materials, establish onboarding and annual refresher schedule, implement completion tracking. 
Register in EU AI Act database Compliance Lead Before 2 August 2026 Article 49(2) registration for non-high-risk Annex III system. Monitor the opening of the EU database registration portal and complete registration before the deadline. 

 

Priority 2 — Medium-Term (Q2-Q3 2026) 

# Action Owner Deadline Notes 
Build CEFR calibration benchmark dataset Technical Lead Q2-Q3 2026 Assemble a dataset of English language responses across CEFR levels A1-C2, validated by qualified human assessors. Include diverse accents and first-language backgrounds. Use for accuracy benchmarking and drift detection. 
Prepare FRIA template for institutional clients Compliance Lead Q3 2026 Complete before first deployment to a public educational institution. Template should follow Article 27(1) structure and be adaptable per-deployment. 
Conduct initial accuracy benchmark Technical Lead Q3 2026 (after benchmark dataset completion) Run the Placement Test evaluation against the benchmark dataset and document results. Identify any systematic biases or accuracy gaps. 

 

Priority 3 — Ongoing 

# Action Owner Cadence Notes 
Update Legal Notice regulatory framework Compliance Lead Q2 2026 Add the EU AI Act to the regulatory framework table in the Legal Notice, alongside existing GDPR and ePrivacy references. 
Annual Policy review AI System Owner Q1 each year First review: Q1 2027. Incorporate regulatory developments, model changes, and lessons from monitoring. 
Quarterly accuracy audits Technical Lead Quarterly Compare AI placements against human assessor evaluations. Document and report results. 
10 Monitor regulatory developments Compliance Lead Ongoing Track EU Commission guidelines, AI Office communications, national competent authority guidance, and Digital Omnibus developments that may affect EZclass's obligations or classification position. 
11 Monitor DPF adequacy status Compliance Lead Ongoing Track CJEU proceedings, EDPB opinions, and EU Commission adequacy reviews that may affect the validity of the DPF as a transfer mechanism for OpenAI. 

 

End of AI Governance Policy — Version 1.0 

EZclass OÜ 

Registry Code: 16802842 

Tornimäe tn 5, 10145 Tallinn, Estonia 

[email protected] | [email protected] 

CONFIDENTIAL 

Copyright © 2026 EZclass OÜ. All rights reserved.EZclass OÜ · Registry code: 16985909 · Tallinn, Estonia