AI Policy

Ethical AI Principles for Working Career

At Working Career, we believe AI can enhance coaching when used responsibly, transparently, and with human oversight. These ethical principles guide how we integrate AI tools such as ChatGPT into our practice to ensure they align with professional standards, client well-being, and our core values.

1. Human-Centred Coaching

AI supports, but never replaces, the coach. Clients always have access to a human coach for discussion, clarification, or support. In line with EMCC (2023), human oversight remains essential to ethical practice. If a client experiences distress or raises sensitive issues, a human coach will immediately take over to provide guidance and care.

2. Transparency and Informed Consent

We are open about how and when AI is used. Clients are told when AI contributes to their materials, such as draft interview questions or reflection exercises. No AI process is used without the explicit consent of the client (ICF, 2024). Clear explanations are given about what each tool does, its benefits, and its limitations. Transparency builds trust and allows clients to make informed decisions about their participation.

3. Privacy and Data Protection

Client confidentiality is a cornerstone of our work. Working Career complies fully with GDPR and best-practice data standards. Personal or identifiable information is never entered into public AI platforms. All internal data is encrypted during storage and transmission (ICF, 2024). Clients are informed about how their data is collected, stored, and used, and must provide written consent before any AI tool processes their information. We never share data with third parties without explicit permission.

4. Fairness and Bias Mitigation

AI tools can reflect societal bias if left unchecked. To prevent this, outputs are reviewed for fairness, inclusivity, and representational accuracy. We use data and prompts designed to reflect diverse backgrounds and perspectives (AC, 2023; ICF, 2024). Clients are encouraged to question and challenge any AI suggestions that seem biased or inappropriate. This shared reflection supports fairness and ethical awareness.

5. Client Safety and Psychological Boundaries

AI tools can offer structure and guidance but are not a substitute for therapy or crisis support. Following CDI (2023) recommendations, all AI systems used in coaching include clear e-safety policies and human fallback options. If AI-generated exercises raise emotional concerns, the coach intervenes. Clients may pause or discontinue their use of AI at any time. Client safety and autonomy remain our top priorities.

6. Accountability and Quality Control

Working Career takes full responsibility for all AI-assisted content. Coaches are accountable for the quality, accuracy, and relevance of every AI-generated suggestion. Tools are regularly checked for accuracy and potential harm (EMCC, 2023). Clients can raise any AI-related concerns, which will be reviewed and resolved by a human coach. We also continue to educate ourselves on the ethical use of AI through academic and professional studies.

7. Professional Integrity and Ongoing Learning

Our approach to AI is grounded in professional ethics from AC, EMCC, ICF, and BPS. We also integrate current insights from the University of Oxford’s Artificial Intelligence Programme (2025). We review these principles annually as AI continues to evolve, ensuring our practices remain safe, inclusive, and human-led. AI may enhance reflection and creativity, but professional judgement, empathy, and integrity will always guide our work.