Figment
Senior Data Engineer
Figment is the world’s leading provider of blockchain infrastructure. We provide the most comprehensive staking solution for our more than 200 institutional clients, including exchanges, wallets, foundations, custodians, and large token holders, helping them earn rewards on their crypto assets. These clients rely on Figment’s institutional staking services, including rewards optimization, rapid API development, rewards reporting, partner integrations, governance, and slashing protection. Figment is backed by industry experts, financial institutions, and our global team across twenty-three countries. All of this serves our mission to support the adoption, growth, and long-term success of the Web3 ecosystem.
We are a growth-stage technology company looking for people who are builders and doers: people who are comfortable plotting their course through ambiguity and uncertainty to drive impact, and who are excited to work in new ways and empower a generative company culture.
About the Role
Join Figment and help it become the world’s leading staking services provider. Figment currently has over $15B in assets under stake, and that number is growing. This role combines data engineering and software development, focusing on data pipelines and cloud infrastructure. The position requires building custom tools and automating data processes in a highly secure and scalable environment.
Responsibilities
Implement and maintain reliable data pipelines and data storage solutions.
Implement data modeling and integrate technologies according to project needs.
Manage specific data pipelines and oversee the technical aspects of data operations.
Ensure data processes are optimized and aligned with business requirements.
Identify areas for process improvement and suggest tools and technologies to enhance efficiency.
Continuously improve data infrastructure automation, ensuring reliable and efficient data processing.
Develop and maintain data pipelines and ETL processes using technologies such as Dagster and DBT to ensure efficient data flow and processing.
Automate data ingestion, transformation, and loading processes to support blockchain data analytics and reporting.
Utilize Snowflake data warehousing solutions to manage and optimize data storage and retrieval.
Collaborate with Engineering Leadership and Product teams to articulate data strategies and progress.
Promote best practices in data engineering, cloud infrastructure, networking, and security.
Qualifications
Extensive experience with data engineering, including building and managing data pipelines and ETL processes.
Proficiency in programming languages such as Golang and Python.
Strong foundation in data networking, storage, and security best practices.
Experience developing CI/CD pipelines for automated data infrastructure provisioning and application deployment.
Familiarity with a data orchestration tool (Dagster, Airflow, Mage, etc.).
Familiarity with a data transformation tool (DBT, Beam, Dataform, Talend, etc.).
Experience with data warehousing solutions like Snowflake or similar technologies.
Experience in managing infrastructure across multiple cloud providers (AWS, GCP), with a focus on performance and security.
Nice to Have
Experience with the following: Snowflake, Dagster, DBT, Python, Golang, AWS, Temporal, and Monte Carlo
Knowledge of decentralized consensus mechanisms, including Proof-of-Work and Proof-of-Stake.
Experience in developing custom Terraform modules for data infrastructure.
This role is designed for an individual who is passionate about leveraging their technical expertise to support and advance staking data solutions, working at the cutting edge of technology. If you thrive in a fast-paced, innovative environment and are driven by high standards of security and reliability, we encourage you to apply.
To apply for this job, please visit boards.greenhouse.io.