AI Code of Ethics

SCANDICDEV's AI Code of Ethics powered by "Scandic Pilot" Library – Key Areas

1. Preamble & Scope
2. Core Values & Guiding Principles
3. Governance & Responsibilities (AI Ethics Board, RACI)
4. Legal & Regulatory Framework (EU AI Act, GDPR, DSA, Copyright Law, Commercial Law)
5. Risk Classification & AI Impact Assessment (AIIA)
6. Data Ethics & Data Protection (Legal Basis, DPIA, Cookies, Third Countries)
7. Model & Data Lifecycle (ML Lifecycle, Data Cards, Model Cards)
8. Transparency, Explainability & User Notices
9. Human-in-the-Loop & Supervisory Obligations
10. Security, Robustness & Red Teaming (Prompt Injection, Jailbreaks)
11. Supply Chain, Human Rights & Fair Labour (Modern Slavery, LkSG-Analogous)
12. Bias Management, Fairness & Inclusion (Vulnerable Customers, Accessibility)
13. Generative AI, Proof of Origin & Labelling (C2PA, Watermarks)
14. Content, Moderation & DSA Processes (Reporting, Complaints, Transparency)
15. Domain-Specific Applications (News, Data, Health, Aviation, Yachts, Estate, Pay/Trade/Trust/Coin, Cars)
16. Third Parties, Procurement & Vendor Risk Management
17. Operations, Observability, Contingency & Recovery Plans
18. Incidents & Remedial Action (Ethics, Data Protection, Security)
19. Metrics, KPIs & Assurance (Internal/External)
20. Training, Awareness & Cultural Change
21. Implementation & Roadmap (0–6 / 6–12 / 12–24 Months)
22. Roles & RACI Matrix (Excerpt)
23. Checklists (AIIA Summary, Data Release, Go-Live Gate)
24. Forms & Templates (Model Card, Data Card, Incident Report)
25. Glossary (Excerpt) & References

1. Preamble & Scope

This Code, issued by SCANDIC DATA, sets out binding principles, processes and controls for the development, procurement, operation and use of AI within the LEGIER Group. It applies across the Group to employees, managers, data processors, suppliers and partners. It integrates existing group policies (data protection, digital services processes, corporate governance, sustainability, human rights policy, modern slavery statement) and expands them to include AI-specific requirements. The aim is to enable benefits and innovation, make risks manageable, and safeguard the rights of users, customers and the general public.

2. Core Values & Guiding Principles

Human dignity and fundamental rights take precedence over economic efficiency. AI serves people – never the other way round.

Legal compliance: Adherence to the EU AI Act, GDPR, DSA and sector-specific standards. No use of prohibited practices.

Responsibility & Accountability: A responsible owner is designated for every AI system; decisions are transparent and subject to challenge.

Proportionality: Balance between purpose, risk, intensity of intervention and societal impact.

Transparency & explainability: Appropriate guidance, documentation and communication channels regarding how the system works, data sources and limitations.

Fairness & Inclusion: Systematic bias testing, protection of vulnerable groups, accessibility and multilingualism.

Security & Resilience: Security by design, defence in depth, continuous hardening and monitoring.

Sustainability: Efficiency of models and data centres (energy, PUE/CFE), lifecycle perspective on data/models.

3. Governance & Responsibilities (AI Ethics Board, RACI)

AI Ethics Board (AIEB): Interdisciplinary (Tech, Legal/Compliance, Data Protection, Security, Editorial/Product, People). Responsibilities: Updating principles, granting approvals (particularly for high-risk cases), resolving conflicts, monitoring reports.

Roles: Use Case Owner, Model Owner, Data Steward, DPO, Security Lead, Responsible Editor, Service Owner, Procurement Lead.

Committees & Gateways: AIIA approval prior to go-live; Change Advisory Board for material changes; annual management reviews.

RACI principle: Clear assignment of responsibility for each activity (Responsible, Accountable, Consulted, Informed).

4. Legal & Regulatory Framework (EU AI Act, GDPR, DSA, Copyright Law, Commercial Law)

EU AI Act: A risk-based framework with prohibitions, obligations for high-risk systems, documentation, logging, governance and transparency requirements; phased implementation from 2025/2026.

GDPR: Legal bases (Art. 6/9), data subjects' rights, privacy by design/default, data protection impact assessment (DPIA), transfers to third countries (Art. 44 et seq.).

DSA: Platform processes for reporting, complaints, transparency reports, risk assessments of large platforms.

Copyright & related rights / personality rights: Clear licence chains, image/name rights, third-party property rights.

Industry-specific requirements (e.g. aviation, maritime and health law) must also be complied with.

5. Risk Classification & AI Impact Assessment (AIIA)

Classification: (a) Prohibited practices (not permitted), (b) High-risk systems (strict obligations), (c) Limited risk (transparency), (d) Minimal risk.

AIIA process: Description of purpose/scope, stakeholders, legal basis, data sources; risk analysis (legal, ethical, security, bias, environmental impact); mitigation plan; decision (AIEB approval).

Re-assessments: In the event of material changes; annually for high-risk cases; documentation in the central register.

6. Data Ethics & Data Protection (Legal Basis, DPIA, Cookies, Third Countries)

Data minimisation & purpose limitation; pseudonymisation/anonymisation preferred.

Transparency: privacy notices, procedures for access and erasure; portability; options to object.

Cookies/tracking: Consent management; withdrawal; IP anonymisation; only approved tools.

Transfers to third countries: Only with appropriate safeguards (SCCs/adequacy); regular audits of sub-processors.

DPIA: Mandatory for high-risk processing; document technical and organisational measures (TOMs).

7. Model & Data Lifecycle (ML Lifecycle, Data Cards, Model Cards)

Data Lifecycle: Acquisition, Curation, Labelling, Quality Gates, Versioning, Retention/Deletion.

Model Lifecycle: Problem definition, Architecture selection, Training/Fine-tuning, Evaluation (offline/online), Release, Operation, Monitoring, Retraining/Retirement.

Data Cards: Origin, representativeness, quality, bias findings, usage restrictions.

Model Cards: Purpose, training data, benchmarks, metrics, limitations, expected error patterns, dos and don'ts.

Provenance & reproducibility: hashes, data/model versions, pipeline records.
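
For illustration, a provenance record of the kind described above can be produced by hashing each artefact and linking the hash to data and model versions. A minimal sketch (file paths, field names and the JSON layout are illustrative, not prescribed by this Code):

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Content hash of a data or model artefact, computed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(artefact_path: str, data_version: str, model_version: str) -> str:
    """JSON record linking an artefact hash to data/model versions and a timestamp."""
    return json.dumps({
        "artefact": artefact_path,
        "sha256": sha256_of_file(artefact_path),
        "data_version": data_version,
        "model_version": model_version,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)
```

Storing such records alongside pipeline logs makes a given model output reproducible back to the exact data and model versions that produced it.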

8. Transparency, Explainability & User Notices

Labelling for AI interaction and AI-generated content.

Explainability: Explanations tailored to the use case and understandable to the general public, at both local (individual decision) and global (overall system behaviour) level.

User notices: Purpose, main influencing factors, limitations; feedback and correction channels.

9. Human-in-the-Loop & Supervisory Obligations

Human oversight as standard for relevant decisions (especially high-risk).

Dual-control principle for editorially or socially sensitive applications.

Override/cancellation functions; escalation procedures; documentation.

10. Security, Robustness & Red Teaming (Prompt Injection, Jailbreaks)

Threat modelling (STRIDE + AI-specific): prompt injection, training data poisoning, model theft, data leakage.

Red teaming & adversarial testing; jailbreak prevention; rate limiting; output filters; secret scanning.

Robustness: fallback prompts, guardrails, rollback plans; canary releases; chaos testing for resilience.
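
As one illustration of the output filters listed above, a pattern screen can withhold model responses that leak credential-shaped strings or system-prompt text. This is a deliberately naive sketch; the patterns, function name and fallback message are hypothetical examples, not a prescribed control:

```python
import re

# Hypothetical deny patterns: secret-like strings and system-prompt leakage.
DENY_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),        # credential-shaped output
    re.compile(r"(?i)begin (rsa|openssh) private key"),  # leaked key material
    re.compile(r"(?i)you are a helpful assistant"),      # leaked system-prompt text
]

def filter_output(text: str, fallback: str = "Response withheld by safety filter.") -> str:
    """Return the model output unchanged, or a fallback if a deny pattern matches."""
    if any(p.search(text) for p in DENY_PATTERNS):
        return fallback
    return text
```

In practice such static filters complement, rather than replace, red teaming and model-side guardrails.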

11. Supply Chain, Human Rights & Fair Labour (Modern Slavery, LkSG-Analogous)

Human rights due diligence: risk analysis, supplier code of conduct, contractual commitments, audits, remedial action.

Modern slavery: annual statement, awareness-raising, reporting channels.

Labour standards: Fair pay, working hours, health and safety; protection of whistleblowers.

12. Bias Management, Fairness & Inclusion (Vulnerable Customers, Accessibility)

Bias assessments: Data set analyses, balancing, diverse test groups, fairness metrics; documented mitigation.

Vulnerable customers: Protection objectives, alternative channels, plain language; no exploitation of cognitive impairments.

Accessibility: WCAG compliance; multilingualism; inclusive language.

13. Generative AI, Proof of Origin & Labelling (C2PA, Watermarks)

Labelling: Visible labels/metadata for AI content; notification during interactions.

Proof of origin: C2PA context, signatures/watermarks where technically feasible.

Copyright/neighbouring rights: Clarify licences; training data compliance; document rights chain.

14. Content, Moderation & DSA Processes (Reporting, Complaints, Transparency)

Reporting channels: Low-threshold user reporting; prioritised handling of illegal content.

Complaints procedures: Transparent reasoning, appeal, escalation.

Transparency reports: Periodic publication of relevant key figures and measures.

15. Domain-Specific Applications (News, Data, Health, Aviation, Yachts, Estate, Pay/Trade/Trust/Coin, Cars)

News/Publishing: Research assistance, translation, moderation; clear labelling of generative content.

SCANDIC DATA: Secure AI/HPC infrastructure, client isolation, HSM/KMS, observability, compliance artefacts.

Health: Evidence-based use, human final decision, no untested diagnoses.

Aviation/Yachts: Safety processes, human oversight, emergency procedures.

Estate: Valuation models with fairness checks; ESG integration.

Pay/Trade/Trust/Coin: Fraud prevention, KYC/AML, market surveillance, explainable decisions.

Cars: Personalised services with strict data protection.

16. Third Parties, Procurement & Vendor Risk Management

Due diligence prior to onboarding: security/data protection levels, data locations, sub-processors, certificates.

Contracts: Audit rights, transparency and remedy clauses, SLA/OLA metrics.

Monitoring: Performance KPIs, findings/incident sharing, exit plans.

17. Operations, Observability, Contingency & Recovery Plans

Operations: Observability (logs, metrics, traces), SLO/SLI management, capacity planning.
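
By way of illustration, SLO management usually tracks an error budget: the downtime a service may accrue before its SLO is breached. A minimal sketch of the arithmetic (the 99.9% figure in the usage note is an example, not a mandated target):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Downtime allowed by an availability SLO over a rolling window, in minutes."""
    if not 0.0 < slo <= 1.0:
        raise ValueError("slo must be in (0, 1]")
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Minutes of budget left after observed downtime (negative means SLO breached)."""
    return error_budget_minutes(slo, window_days) - downtime_minutes
```

For example, a 99.9% availability SLO over a 30-day window permits roughly 43 minutes of downtime; alerting on a shrinking remaining budget is a common operational practice.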

Emergency: Runbooks, DR tests, recovery times, communication plans.

Configuration/secret management: least privilege, rotations, hardening.

18. Incidents & Remedial Action (Ethics, Data Protection, Security)

Ethics incidents: Unwanted discrimination, disinformation, unclear origin – immediate measures and AIEB review.

Data protection incidents: Reporting processes to DPO/supervisory authority; information for data subjects; root cause analysis.

Security incidents: CSIRT procedures, forensics, lessons learnt, preventive measures.

19. Metrics, KPIs & Assurance (internal/external)

Mandatory KPIs: 100% AIIA coverage of productive AI use cases; <14 days median processing time for complaints; >95% training rate; 0 open critical audit findings.
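
The thresholds above lend themselves to an automated status check. A sketch (the function and field names are illustrative, not part of the Code):

```python
def kpi_status(aiia_coverage: float,
               complaint_median_days: float,
               training_rate: float,
               open_critical_findings: int) -> dict:
    """Evaluate the mandatory KPIs against the thresholds defined in this Code."""
    return {
        "aiia_coverage_100pct": aiia_coverage >= 1.0,
        "complaints_under_14d": complaint_median_days < 14,
        "training_rate_over_95pct": training_rate > 0.95,
        "no_open_critical_findings": open_critical_findings == 0,
    }

def all_kpis_met(status: dict) -> bool:
    """True only when every mandatory KPI meets its threshold."""
    return all(status.values())
```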

Fairness metrics: Disparate impact, equalised odds (use-case specific).
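
The disparate impact metric named above is simply the ratio of favourable-outcome rates between a protected group and a reference group. A minimal sketch; the 0.8 threshold is the commonly cited "four-fifths rule", used here for illustration rather than as a binding limit:

```python
def disparate_impact(positive_rate_protected: float, positive_rate_reference: float) -> float:
    """Ratio of favourable-outcome rates: protected group vs reference group."""
    if positive_rate_reference == 0:
        raise ValueError("reference group rate must be non-zero")
    return positive_rate_protected / positive_rate_reference

def passes_four_fifths_rule(ratio: float, threshold: float = 0.8) -> bool:
    """Common screening heuristic: a ratio below the threshold warrants review."""
    return ratio >= threshold
```

Which metric applies, and at which threshold, remains a use-case-specific decision documented in the AIIA.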

Sustainability: Energy/PUE/carbon metrics of data centres; model efficiency.

20. Training, Awareness & Cultural Change

Mandatory training (annual): AI ethics, data protection, security, media ethics; target group-specific modules.

Awareness campaigns: guidelines, brown-bag sessions, consultation hours; internal communities of practice.

Culture: Leadership setting an example, culture of learning from mistakes, rewarding responsible behaviour.

21. Implementation & Roadmap (0–6 / 6–12 / 12–24 months)

0–6 months: Inventory of AI use cases; AIIA process; minimum controls; training programme; supplier screening.

6–12 months: Roll out red teaming; first transparency reports; energy programme; finalise RACI.

12–24 months: Alignment with ISO/IEC 42001; limited assurance; continuous improvement; CSRD/ESRS preparation (where applicable).

22. Roles & RACI Matrix (Excerpt)

Use Case Owner (A): Purpose, benefits, KPIs, budget, reassessments.

Model Owner (R): Data/training/evaluation, model card, drift monitoring.

DPO (C/A for data protection): Legal basis, DPIA, data subject rights.

Security Lead (C): Threat modelling, red teaming, TOMs.

Responsible Editor (C): Media ethics, labelling, correction register.

Service Owner (R): Operations, SLO, incident management.

Procurement Lead (R/C): Third parties, contracts, exit plans.

23. Checklists (AIIA Summary, Data Release, Go-Live Gate)

AIIA Quick Check: Purpose? Legal basis? Data subjects? Risks (legal/ethical/security/bias/environmental)? Mitigation? HIL controls?

Data release: Is the source lawful? Minimisation? Retention? Access? Third country?

Go-Live Gate: Are artefacts complete (Data/Model Cards, Logs)? Have Red Team results been addressed? Has monitoring/DR been set up?

24. Forms & Templates (Model Card, Data Card, Incident Report)

Model Card Template: Purpose, data, training, benchmarks, limitations, risks, responsible parties, contact.
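
For illustration, the template above can be captured as a structured record so that completeness can be checked at the go-live gate. A minimal Python sketch; field names mirror the bullet above and all values are placeholders:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Illustrative structure mirroring the Model Card template fields."""
    purpose: str
    data: str
    training: str
    benchmarks: list = field(default_factory=list)
    limitations: str = ""
    risks: str = ""
    responsible_parties: list = field(default_factory=list)
    contact: str = ""

# Placeholder example entry.
card = ModelCard(
    purpose="Summarise editorial drafts for internal review",
    data="Licensed news corpus (placeholder)",
    training="Fine-tuned from a licensed base model (placeholder)",
    benchmarks=["held-out evaluation set"],
    limitations="Not validated for legal or medical text",
    risks="May reproduce source bias; outputs require editorial review",
    responsible_parties=["Model Owner", "Use Case Owner"],
    contact="ai-ethics-board@example.com",
)
```

Serialising such records (e.g. via `asdict`) keeps model cards machine-readable alongside the human-readable document.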

Data Card Template: Origin, licence, quality, representativeness, bias checks, usage restrictions.

Incident Report Template: Event, Impact, Affected Parties, Immediate Actions, Root Cause, Remedial Action, Lessons Learned.

25. Glossary (Excerpt) & References

Glossary: AI system, generative AI, high-risk system, AIIA, HIL, C2PA, red-teaming, DPIA, RACI, SLO/SLI.

References: EU AI Act, GDPR, DSA, OECD AI Principles, NIST AI RMF, ISO/IEC 42001, internal guidelines (data protection, DSA processes, modern slavery, sustainability).

Date: 1 January 2026