Xsolla
Software Engineer
Xsolla is a global commerce company with robust tools and services that help developers solve the inherent challenges of the video game industry. From indie to AAA, companies partner with Xsolla to fund, distribute, market, and monetize their games. Grounded in a belief in the future of video games, Xsolla is resolute in its mission to bring opportunities together and continually make new resources available to creators. Headquartered and incorporated in Los Angeles, California, Xsolla operates as the merchant of record and has helped more than 1,500 game developers reach more players and grow their businesses around the world. With more paths to profit and ways to win, developers have everything they need to enjoy the game.
This role will lead the data engineering efforts for the User Platform (CDP) and Recommendation Engine, ensuring data accuracy, performance, and security across pipelines connecting Snowflake, Postgres, Kafka, and API Gateway services.
You’ll collaborate with ML engineers, backend teams, and business stakeholders to build reliable, high-performance data systems that support insights, automation, and machine learning use cases.
$90,000 – $110,000 a year
Key Responsibilities
1. Development
Build and optimize data pipelines, data dictionaries, and ETL workflows in Snowflake using Snowpark, Streams/Tasks, and Snowpipe (see the sketch after this list).
Develop scalable data models supporting user 360 views, churn prediction, and recommendation engine inputs.
Support integration across data sources: MySQL, BigQuery, Redis, Kafka, GCP Storage, and API Gateway.
Implement CI/CD for data pipelines using Git, dbt, and automated testing.
Define data quality checks and auditing pipelines for ingestion and transformation layers.
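For illustration, a minimal sketch of an incremental Streams/Tasks pipeline step via Snowpark, assuming the snowflake-snowpark-python package; the connection config and all table, stream, and task names are hypothetical.

```python
# A minimal sketch, assuming the snowflake-snowpark-python package;
# the connection config and all object names are hypothetical.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

def build_session(config: dict) -> Session:
    # config holds account, user, authenticator, warehouse, database, schema.
    return Session.builder.configs(config).create()

def refresh_user_events(session: Session) -> None:
    # A stream exposes only the rows that changed since the last read,
    # so each refresh processes an incremental delta, not the full table.
    session.sql(
        "CREATE STREAM IF NOT EXISTS raw_events_stream ON TABLE raw.events"
    ).collect()
    delta = session.table("raw_events_stream").filter(
        col("event_type").is_not_null()
    )
    delta.write.mode("append").save_as_table("curated.user_events")

def schedule_refresh(session: Session) -> None:
    # A task re-runs the same incremental load on a fixed cadence,
    # consuming the stream each time it fires.
    session.sql(
        """
        CREATE TASK IF NOT EXISTS refresh_user_events_task
          SCHEDULE = '5 MINUTE'
        AS
          INSERT INTO curated.user_events
          SELECT * FROM raw_events_stream WHERE event_type IS NOT NULL
        """
    ).collect()
```

Because the stream only surfaces changed rows, each scheduled run processes new data rather than rescanning the raw table.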
2. Performance & Scalability
Tune warehouse performance and cost efficiency via query optimization, caching, and cluster sizing.
Establish data partitioning, clustering, and materialized views for fast query execution (see the sketch after this list).
Build dashboards and monitors for pipeline health, job success, and data latency metrics (e.g., via Looker, Tableau, or Snowsight).
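As one hedged example of the clustering and materialized-view work above, a sketch using the snowflake-connector-python package; the table, view, and column names are hypothetical.

```python
# A minimal sketch, assuming the snowflake-connector-python package;
# the table, view, and column names are hypothetical.
import snowflake.connector

def apply_performance_settings(conn) -> None:
    cur = conn.cursor()
    try:
        # Cluster the fact table on the columns most queries filter by,
        # so micro-partition pruning keeps scans small.
        cur.execute(
            "ALTER TABLE curated.user_events CLUSTER BY (event_date, user_id)"
        )
        # Precompute a hot aggregate so dashboards read a small result set
        # instead of scanning raw events; APPROX_COUNT_DISTINCT is used
        # because Snowflake materialized views do not accept DISTINCT aggregates.
        cur.execute(
            """
            CREATE MATERIALIZED VIEW IF NOT EXISTS curated.daily_active_users AS
            SELECT event_date, APPROX_COUNT_DISTINCT(user_id) AS approx_dau
            FROM curated.user_events
            GROUP BY event_date
            """
        )
    finally:
        cur.close()
```

Choosing the clustering key to match the dominant filter columns is what lets Snowflake prune micro-partitions and cut both latency and credit spend.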
3. Governance & Best Practices
Establish and enforce naming conventions, data lineage, and metadata standards across schemas (a sketch follows this list).
Contribute to the company’s evolving data mesh and streaming architecture vision.
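For the naming-convention work above, a minimal audit sketch against information_schema, again assuming the snowflake-connector-python package; the prefix rules are hypothetical.

```python
# A minimal sketch of a naming-convention audit, assuming the
# snowflake-connector-python package; the prefix rules are hypothetical.
import snowflake.connector

ALLOWED_PREFIXES = ("raw_", "stg_", "dim_", "fct_", "mart_")

def find_nonconforming_tables(conn, schema: str) -> list[str]:
    # List every table in the schema and flag names that do not start
    # with one of the agreed prefixes, so drift is caught early.
    cur = conn.cursor()
    try:
        cur.execute(
            "SELECT table_name FROM information_schema.tables "
            "WHERE table_schema = %s",
            (schema.upper(),),
        )
        return [
            name
            for (name,) in cur.fetchall()
            if not name.lower().startswith(ALLOWED_PREFIXES)
        ]
    finally:
        cur.close()
```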
Requirements
0–3 years of experience in Data Engineering within the database ecosystem.
Strong SQL and Python skills, with proven experience building ETL/ELT pipelines at scale.
Understanding of Snowflake performance tuning, query optimization, and warehouse orchestration.
Understanding of data modeling (Kimball, Data Vault, or hybrid).
Familiarity with API-based data integration and microservice architectures.
Preferred
Excellent cross-functional communication — can translate between engineering and business.
Hands-on problem solver who balances velocity with reliability.
To apply for this job, please visit jobs.lever.co.