
Base City: 

Remote (Canada)

Salary: 

$170k to $190k

Position Type: 

Full-time

Other Experience: 

  • Experience with the data transformation tool DBT
    • Having led the design and implementation of complex data transformations using advanced DBT models, materializations, and configurations to streamline data workflows and improve performance.
    • Having led the optimization and troubleshooting of DBT pipelines for scale, ensuring that transformations run efficiently in production environments, handling large datasets without issues.
  • Experience programming in Python
    • Having led the design and implementation of scalable, high-performance applications by leveraging Python's advanced libraries and frameworks (e.g., Pandas, FastAPI, asyncio), ensuring clean code, modularity, and maintainability.
    • Having led the optimization of code for performance and memory usage through profiling and refactoring, ensuring efficient execution, particularly when handling large datasets or real-time processing tasks (a chunked-processing sketch follows this list).
  • Experience with a data orchestration tool such as Dagster or Airflow
    • Having led the design and orchestration of complex DAGs to manage dependencies, triggers, and retries for data workflows, ensuring reliable and efficient pipeline execution (a Dagster sketch follows this list).
    • Having led the implementation of monitoring and alerting for pipeline failures or performance bottlenecks, using observability tools integrated with Dagster/Airflow to maintain robust pipeline health.
  • Experience with a data warehousing solution such as Snowflake, BigQuery, Redshift, or Databricks.
    • Having led the architecture and optimization of warehouse environments for performance, including designing partitioning strategies, clustering keys, and storage optimizations for cost-effective scaling.
    • Having led the implementation of security and governance policies within the warehouse, including data encryption, access control, and audit logging to meet compliance and security best practices.
  • Extensive data engineering experience, including building and managing data pipelines and ETL processes.
  • Experience developing CI/CD pipelines for automated data infrastructure provisioning and application deployment.
  • Experience in managing infrastructure across AWS, with a focus on performance and security.
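
As a rough illustration of the memory-efficiency expectation above, here is a minimal Python sketch that transforms a large CSV in bounded-memory chunks with Pandas. The file paths and column names (amount, fx_rate) are hypothetical, not part of this posting.

```python
# Chunked transformation sketch: keeps memory bounded on large files.
# The paths and column names (amount, fx_rate) are hypothetical.
import pandas as pd


def convert_to_usd(in_path: str, out_path: str, chunksize: int = 100_000) -> None:
    first = True
    for chunk in pd.read_csv(in_path, chunksize=chunksize):
        # Transform each chunk independently instead of loading the whole file.
        chunk["amount_usd"] = chunk["amount"] * chunk["fx_rate"]
        # Write the header only for the first chunk, then append.
        chunk.to_csv(out_path, mode="w" if first else "a", header=first, index=False)
        first = False
```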
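
In the same spirit, here is a minimal sketch of the orchestration expectation, assuming Dagster: two dependent assets with an explicit retry policy. The asset names and data are hypothetical, and a production pipeline would add schedules, sensors, and alerting on top.

```python
# Minimal Dagster sketch: two dependent assets with a retry policy.
# Asset names and the data involved are hypothetical illustrations.
import pandas as pd
from dagster import RetryPolicy, asset, materialize


@asset(retry_policy=RetryPolicy(max_retries=3, delay=30))
def raw_events() -> pd.DataFrame:
    # In a real pipeline this would pull from an API or object store;
    # the retry policy covers transient upstream failures.
    return pd.DataFrame({"chain": ["eth", "eth", "sol"], "rewards": [1.0, 2.0, 3.0]})


@asset
def rewards_by_chain(raw_events: pd.DataFrame) -> pd.DataFrame:
    # Declaring raw_events as a parameter makes the dependency explicit,
    # so Dagster orders the graph and applies retries for us.
    return raw_events.groupby("chain", as_index=False)["rewards"].sum()


if __name__ == "__main__":
    # Materialize the whole graph locally; in production this would be
    # wired into Definitions with schedules and alerting instead.
    materialize([raw_events, rewards_by_chain])
```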

Nice to Have

  • Experience with Golang, Temporal, and Monte Carlo.
  • Knowledge of decentralized consensus mechanisms, including Proof-of-Work and Proof-of-Stake.
  • Experience in developing custom Terraform modules for data infrastructure.

About the Job: 

Figment is the world’s leading provider of blockchain infrastructure. We provide the most comprehensive staking solution for our more than 500 institutional clients, including exchanges, wallets, foundations, custodians, and large token holders, helping them earn rewards on their crypto assets. These clients rely on Figment’s institutional staking service, including rewards optimization, rapid API development, rewards reporting, partner integrations, governance, and slashing protection. Figment is backed by industry experts, financial institutions, and our global team across twenty-three countries. This all leads to our mission to support the adoption, growth, and long-term success of the Web3 ecosystem.

We are a growth-stage technology company looking for people who are builders and doers: people who are comfortable plotting their course through ambiguity and uncertainty to drive impact, and who are excited to work in new ways and empower a generative company culture.

What they want you to do: 

  • Lead the design and implementation of reliable data pipelines and data storage solutions.
  • Lead the implementation of data modeling and integrate technologies according to project needs.
  • Manage specific data pipelines and oversee the technical aspects of data operations.
  • Ensure data processes are optimized and align with business requirements.
  • Identify areas for process improvement and suggest tools and technologies to enhance efficiency.
  • Continuously improve data infrastructure automation, ensuring reliable and efficient data processing.
  • Lead the development and maintenance of data pipelines and ETL processes using technologies such as Dagster and DBT to ensure efficient data flow and processing.
  • Automate data ingestion, transformation, and loading processes to support blockchain data analytics and reporting.
  • Utilize Snowflake data warehousing solutions to manage and optimize data storage and retrieval (a Snowflake loading sketch follows this list).
  • Collaborate with Engineering Leadership and Product teams to articulate data strategies and progress.
  • Promote best practices in data engineering, cloud infrastructure, networking, and security.
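
As a rough illustration of the Snowflake-facing responsibilities above, the sketch below loads a DataFrame with snowflake-connector-python's write_pandas helper and applies a clustering key. The connection details, table, and columns are placeholders, not Figment's actual schema.

```python
# Snowflake loading sketch: write a DataFrame and set a clustering key.
# Credentials, table, and column names are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.DataFrame(
    {
        "CHAIN": ["eth", "sol"],
        "REWARD_DATE": ["2024-01-01", "2024-01-01"],
        "REWARDS": [1.5, 0.7],
    }
)

conn = snowflake.connector.connect(
    account="my_account",  # placeholder credentials
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAKING",
)
try:
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS DAILY_REWARDS "
        "(CHAIN STRING, REWARD_DATE STRING, REWARDS FLOAT)"
    )
    # write_pandas stages the frame and COPYs it into the table.
    write_pandas(conn, df, "DAILY_REWARDS")
    # Cluster large tables on the common filter column so Snowflake can
    # prune micro-partitions instead of scanning the whole table.
    cur.execute("ALTER TABLE DAILY_REWARDS CLUSTER BY (REWARD_DATE)")
finally:
    conn.close()
```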
