Belgium
With the ever-growing impact of AI solutions on business processes comes an equally important need for companies to manage risk and ensure compliance with AI governance frameworks. We are looking for a consultant who can guide clients towards safe, compliant and impactful use of AI within their business.
Are you passionate about AI and aware of the risks companies need to manage in order to unlock its full potential? Do you like engaging with stakeholders at different levels to create, implement and sustain a coherent vision for responsible AI usage?
You are neither a lawyer nor a developer? Good! We are looking for an experienced consultant with a broad understanding of how AI is used within a company, who can shape processes for responsible and compliant AI use at a strategic level, but who can also advise on specific situations at an operational level.
Your responsibilities
Client projects & advisory:
Conduct comprehensive AI risk assessments and ethical evaluations for client systems
Design and implement Responsible AI governance processes and frameworks
Support sales activities by articulating the value of Responsible AI in client conversations
Engage with clients as a trusted advisor on emerging AI regulations and ethical considerations
Translate complex technical concepts for non-technical stakeholders
Subject matter expertise:
Serve as the Responsible AI subject matter expert for internal development teams
Stay current with evolving regulations, standards, and best practices in AI ethics and governance
Develop methodologies for AI risk management and mitigation strategies
Create policies and standards for responsible AI development and deployment
Bridge the gap between AI/data teams, controlling functions, and business units
Your qualifications
Required experience & skills:
3-5 years of consulting experience, preferably in technology, risk, or compliance, with a proven ability to develop policies and governance processes
Experience in stakeholder management: pragmatically engaging with stakeholders at different levels and communicating seamlessly with both technical and non-technical audiences
Strong understanding of AI and machine learning fundamentals, including large language models
Experience identifying and evaluating AI-related risks (bias, privacy, security, explainability)
Knowledge of emerging AI governance frameworks and regulations (EU AI Act, GDPR etc.)
Excellent communication skills with the ability to translate complex concepts for diverse audiences
Fluent in English. Dutch is a plus
Desired qualifications:
Practical background in data ethics, data protection impact assessments (DPIA), or algorithmic impact assessments
Knowledge of emerging AI frameworks (e.g., NIST, ISO, EU AI Act)
Familiarity with relevant regulations (GDPR, EU AI Act, Digital Services Act)
Certifications in relevant areas (data privacy, risk management, etc.)
Experience with AI documentation tools or methodologies (Model Cards, Datasheets for Datasets)
Understanding of responsible AI tooling (explainability tools, fairness metrics, bias detection)
Optional: experience in financial services, pharma, or human resources is a plus
Our offer
A highly entrepreneurial team;
A no-nonsense working environment with a focus on high quality;
Excellent terms of employment (including company car, fuel card, meal vouchers, bonus,...);
Fringe benefits including continuous training that builds and extends professional, technical and management skills in all areas;
The opportunity to grow together with Datashift in terms of capabilities as well as financial benefits;
With our offices in Mechelen, Leuven and Gent, you can work from several locations.
Interested?
Don't hesitate to apply via the link below!
Apply