AI Engineer
EFG International AG · Geneva (GE)
Role overview
- At EFG we want to create value from data. Within our Data Office department, we're seeking a highly skilled and motivated AI Software Engineer to join our Machine Learning & GenAI team. You will help build a new, scalable AI platform and design, build, and deploy AI-driven systems that deliver measurable business impact.
- This is an excellent opportunity to make a significant impact in a growing organization committed to delivering an outstanding digital banking experience for our clients.
Application process
- To apply for this position, please use the application link provided.
Main responsibilities
1) Platform and Architecture
- Design and build a hybrid (on-prem / on-cloud) AI/ML platform to run AI use cases at scale (feature stores, model registry, experimentation, evaluation, observability).
- Define and implement secure, reliable inference and training architectures, including vector search and RAG components where applicable.
- Provide platform support for embeddings, vector databases, and AI agentic communication protocols to enable grounded, interoperable AI workflows.
- Document machine learning processes, system architecture, and operational runbooks for reproducibility and knowledge sharing.
2) Model Development & Evaluation
- Collaborate on training, fine-tuning, and optimizing models (LLMs, NLP, recommendations), including LoRA/PEFT when relevant.
- Implement guardrails and prompt strategies to reduce hallucinations and improve safety and consistency, and support agentic workflows.
1) Education
- Advanced degree in Computer Science, Data Science, Mathematics, Statistics, Physics, or related.
2) Must-Have
- Extensive knowledge of ML/AI frameworks (PyTorch or TensorFlow; the Hugging Face ecosystem; LangChain/LlamaIndex or equivalent for orchestration), plus strong fundamentals in data structures, data modeling, and software architecture.
- Practical LLM experience: prompt engineering, fine-tuning/LoRA, embeddings, vector databases (FAISS, Pinecone, Weaviate), RAG patterns.
- Solid programming skills in Python, R, or Java/Scala; hands-on experience with SQL, ETL tools, and Linux. Control-M and Terraform knowledge is a plus.
- Prior experience deploying applications on cloud environments (Azure); familiarity with hybrid on‑prem/cloud setups.
- Experience building production-grade services and APIs (REST/gRPC), cloud-native development (AWS/GCP/Azure), containers (Docker), and orchestration (Kubernetes, OpenShift).
- MLOps foundations: experiment tracking (MLflow/W&B), model registries, CI/CD, model monitoring, feature stores.
- Ability to monitor, debug, and maintain CI/CD pipelines that feed into production deployments (GitHub Actions/GitLab CI/Azure DevOps).
- Data engineering proficiency: SQL, data modeling, ETL/ELT, and working with warehouses/lakes (Snowflake, BigQuery, S3/Delta).
- Ability to work in a SCRUM/Agile environment with a focus on delivery and stakeholder collaboration.
- Excellent analytical and problem-solving abilities; results- and detail-oriented with strong written and verbal communication.
3) Nice-to-Have
- Experience deploying and optimizing open-source models (Llama, Mistral, Mixtral) on GPUs; quantization (INT8/INT4), tensor parallelism, and FlashAttention.
- Knowledge of retrieval systems (BM25, hybrid search), semantic caching, and structured tool use/agents.
- Evaluation expertise for LLMs: rubric-based grading, golden sets, adversarial testing, and A/B experimentation.
- Security practices for AI applications: prompt injection defenses, output filtering, content moderation, and red-teaming.
- Contributions to open-source AI projects or published work.
Our Values
- Accountability: Taking ownership of tasks and challenges, and seeking continuous improvement
- Hands-on: Being proactive to rapidly deliver high-quality results
- Passionate: Being committed and striving for excellence
- Solution-driven: Focusing on client outcomes and treating clients fairly with a risk-aware mindset
- Partnership-oriented: Promoting collaboration and teamwork, working together with an entrepreneurial spirit