
Machine Learning Ops & AI Infrastructure, Technical Referent – Madrid / Barcelona

  • Full Time
  • Anywhere (Remote)
  • Posted on March 13, 2026
dLocal

last updated March 13, 2026 20:06 UTC

HQ: Hybrid

OFF: Madrid / Barcelona
Full-Time
Sales and Marketing
Why should you join dLocal?
dLocal enables the biggest companies in the world to collect payments in 40 countries in emerging markets. Global brands rely on us to increase conversion rates and simplify payment expansion. As both a payments processor and a merchant of record where we operate, we make it possible for our merchants to make inroads into the world’s fastest-growing emerging markets.
By joining us you will be a part of an amazing global team that makes it all happen. Being a part of dLocal means working with 1000+ teammates from 30+ different nationalities and developing an international career that impacts millions of people’s daily lives. We are builders, we never run from a challenge, we are customer-centric, and if this sounds like you, we know you will thrive in our team.
What’s the opportunity?
As an MLOps & AI Infrastructure Technical Referent at dLocal, you will be the senior technical reference for how we build, operate and evolve our ML and AI infrastructure.

Your mission is to enable Data Science and AI teams to take models and AI-powered services from idea to production in a reliable, observable and compliant way. You will own the technical direction of our MLOps stack, introduce AI safely into our engineering workflows, and help the team scale its impact as usage and complexity grow.

A core part of this role is to use agents and AI services to automate as much as possible of what we do in MLOps — from feature store and platform operations to fraud/anomaly workflows and ML cost optimization — working side by side with the AI team.

This is a hands-on architecture and leadership role: you won’t own product models yourself, but you will deeply influence how every model and AI component is trained, deployed, monitored and run in production.

What will I be doing?
1. Technical strategy & architecture (MLOps)
– Define and evolve the end-to-end ML platform architecture (data, training, registry, serving, monitoring, governance) used by multiple squads.

– Design standard patterns for:
– Reproducible training pipelines and experiment tracking.
– Model packaging, versioning and promotion flows (dev → staging → production).
– Online and batch inference, with safe rollout strategies (canary, shadow, rollback).

– Balance reliability, performance and cost for ML workloads, working closely with SRE/Infra and Finance/FinOps.
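To make the promotion and rollout patterns above concrete, here is a minimal, purely illustrative sketch of a canary promotion gate of the kind this role would standardize. All names, thresholds, and traffic numbers are hypothetical, not dLocal's actual implementation:

```python
# Hypothetical canary gate: route a small share of traffic to the candidate
# model, compare its error rate to the baseline's, then promote, roll back,
# or keep waiting for more canary traffic. Thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class CanaryStats:
    requests: int
    errors: int

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0


def canary_decision(baseline: CanaryStats, candidate: CanaryStats,
                    max_relative_degradation: float = 0.10,
                    min_requests: int = 1000) -> str:
    """Return 'promote', 'rollback', or 'wait' for the candidate model."""
    if candidate.requests < min_requests:
        return "wait"  # not enough canary traffic to judge yet
    allowed = baseline.error_rate * (1 + max_relative_degradation)
    return "promote" if candidate.error_rate <= allowed else "rollback"


# Candidate below the allowed error budget is promoted; above it, rolled back.
print(canary_decision(CanaryStats(10_000, 120), CanaryStats(2_000, 20)))
print(canary_decision(CanaryStats(10_000, 120), CanaryStats(2_000, 60)))
```

In practice such a gate would sit inside the promotion flow (dev → staging → production), with shadow traffic and automated rollback wired into the serving layer rather than a standalone function.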

2. Day‑to‑day MLOps enablement & operations
– Act as the go‑to person for complex MLOps questions: how to structure pipelines, choose serving patterns, or design monitoring and rollback.

– Review and challenge designs and deployments for new models and data pipelines, ensuring they follow platform standards and non‑functional requirements.

– Partner with Fraud, Anomaly and other product squads to ensure:
– Clear SLAs/SLOs for ML components.
– Proper logging, metrics and alerts for incidents and regressions.

– Contribute to on‑call readiness: playbooks, dashboards, incident reviews and continuous improvement of our operational posture.

3. AI infrastructure & AI‑assisted operations
– Define infrastructure, contracts and guardrails so that we can safely consume agents and AI services built by the AI team, and extend them when needed from MLOps.

– Design patterns and tooling so that AI and agents automate as much as possible of what we do in MLOps, for example:
– Feature platform operations (feature store pipelines, backfills, parity checks, DQ/drift monitoring).
– MLOps platform workflows (training/eval pipelines, promotion gates, rollbacks, documentation and runbook generation).
– Operational flows in Fraud / Anomaly (triage of alerts, log/metric analysis, enrichment of incident context).
– Platform FinOps & cost optimization (suggesting right‑sizing, schedule changes, decommissioning opportunities).

– Contribute to evaluation, observability and safety for these AI‑powered automations (e.g. prompts, policies, redaction, auditability), in close collaboration with dedicated AI teams.
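As an illustration of the drift and data-quality checks such automations might run, here is a hypothetical sketch computing the Population Stability Index (PSI) between a training-time and a serving-time feature sample. The bin count, the 0.2 alert threshold, and the synthetic data are assumptions for the example only:

```python
# Illustrative feature-drift check: compute PSI between a training sample and
# a serving sample of one feature, and flag drift above a common threshold.

import math
import random


def psi(expected, actual, bins=10):
    """Population Stability Index between two 1-D numeric samples."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = hi + 1e-9  # make the last bin include the max value

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
serve = [random.gauss(1.0, 1.0) for _ in range(5000)]  # shifted mean -> drift
score = psi(train, serve)
print(f"PSI={score:.3f}, drift={'yes' if score > 0.2 else 'no'}")
```

An agent-driven version of this would pull the serving sample from the feature store, run the check on a schedule, and open an alert or incident ticket when the threshold is crossed.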

4. Governance, security & compliance
– Set and maintain technical standards for:
– Model and data access control, PII handling and redaction.
– Auditability of model changes, deployments and runtime behavior.
– Environment separation and change management for ML/AI workloads.

– Work with InfoSec and Architecture to ensure the platform aligns with regulatory and internal requirements while remaining practical for engineers and data scientists.

5. Leadership, mentoring & collaboration
– Mentor MLOps and Data/ML engineers on:
– System design, reliability and observability.
– Good practices for CI/CD, testing and rollback in ML systems.
– Lead design and architecture reviews, helping teams de‑risk decisions and converge on simple, robust solutions.

– Collaborate closely with:
– Data Science squads and the AI Team (to understand needs and shape the platform).
– SRE/Infra (for capacity, reliability, networking and security).
– Product/Engineering leaders (to align roadmap, trade‑offs and priorities).

What skills do I need?
Must‑haves
– Solid experience owning or designing MLOps platforms or ML infrastructure used by multiple teams.
– Strong background in distributed systems and data/stream processing (e.g. Spark, Flink, or similar technologies).
– Experience building production‑grade ML pipelines:
– Experiment tracking, reproducible training and model registry.
– CI/CD for models and data pipelines.
– Online and batch inference at scale.
– Familiarity with cloud‑based ML platforms (e.g. Databricks, SageMaker, Vertex AI, or equivalent) and container‑based deployments.
– Strong understanding of observability for ML systems:
– Metrics, logs and traces.
– Data and model drift, freshness and quality checks.
– Ability to communicate clearly with both technical and non‑technical stakeholders, translating infra and AI/ML trade‑offs into business language.
Nice to have
– Experience rolling out AI assistants (code or infra copilots, AI log analysis, etc.) inside engineering organizations, including policies and best practices.
– Exposure to LLM and AI infrastructure (gateways, vector stores, evaluation harnesses), even if not as a core focus.
– Prior responsibilities as Technical Referent / Tech Lead / Architect for platforms or shared services.
– Contributions to internal standards, RFCs, guilds or tech communities.

What do we offer?
Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:
– Flexibility: we have flexible schedules and we are driven by performance.
– Fintech industry: work in a dynamic and ever-evolving environment, with plenty to build and boost your creativity.
– Referral bonus program: our internal talents are the best recruiters – refer someone ideal for a role and get rewarded.
– Learning & development: get access to a Premium Coursera subscription.
– Language classes: we provide free English, Spanish, or Portuguese classes.
– Social budget: you’ll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!
– dLocal Houses: want to rent a house to spend one week anywhere in the world coworking with your team? We’ve got your back!
Flexibility in how you work: We focus on impact and productivity over fixed hours. This means our teams have flexible schedules and, depending on your role and location, you will combine self‑managed focus time with moments of in‑person connection in our collaboration hubs.
What happens after you apply?
Our Talent Acquisition team is invested in creating the best candidate experience possible, so don’t worry, you will definitely hear from us. We will review your CV and keep you posted by email at every step of the process!

To apply for this job please visit jobs.lever.co.

© yeweyewe.com 2026