
Base City: Remote (Canada)

Salary: $140k to $170k

Position Type: Full-time

Other Experience: 

  • Experience with the data transformation tool DBT
    • Designing and implementing complex data transformations using advanced DBT models, materializations, and configurations to streamline data workflows and improve performance.
    • Optimizing and troubleshooting DBT pipelines for scale, ensuring that transformations run efficiently in production environments and handle large datasets without issues.
  • Experience programming in Python (a rough transformation sketch follows this list)
    • Designing and implementing scalable, high-performance applications by leveraging Python's advanced libraries and frameworks (e.g., Pandas, FastAPI, asyncio), ensuring clean code, modularity, and maintainability.
    • Optimizing code for performance and memory usage through profiling and refactoring, ensuring efficient execution, particularly when handling large datasets or real-time processing tasks.
  • Experience with a data orchestration tool such as Dagster or Airflow (see the orchestration sketch after this list)
    • Designing and orchestrating complex DAGs to manage dependencies, triggers, and retries for data workflows, ensuring reliable and efficient pipeline execution.
    • Implementing monitoring and alerting for pipeline failures or performance bottlenecks, using observability tools integrated with Dagster/Airflow to maintain robust pipeline health.
  • Experience with a data warehousing solution such as Snowflake, BigQuery, Redshift, or Databricks.
    • Architecting and optimizing warehouse environments for performance, including designing partitioning strategies, clustering keys, and storage optimizations for cost-effective scaling.
    • Implementing security and governance policies within the warehouse, including data encryption, access control, and audit logging to meet compliance and security best practices.
  • Extensive data engineering experience, including building and managing data pipelines and ETL processes.
  • Experience developing CI/CD pipelines for automated data infrastructure provisioning and application deployment.
  • Experience in managing infrastructure across AWS, with a focus on performance and security.
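
To make the Python expectation above concrete, here is a minimal transformation sketch using Pandas. It is illustrative only: the summarize_rewards helper, the column names (validator, timestamp, reward), and the sample data are assumptions, not details from the posting.

    import pandas as pd

    def summarize_rewards(df: pd.DataFrame) -> pd.DataFrame:
        # Roll raw reward events up to one row per validator per day.
        # Column names here are hypothetical placeholders.
        return (
            df.assign(day=pd.to_datetime(df["timestamp"]).dt.date)
              .groupby(["validator", "day"], as_index=False)["reward"]
              .sum()
        )

    events = pd.DataFrame({
        "validator": ["v1", "v1", "v2"],
        "timestamp": ["2024-01-01T10:00", "2024-01-01T18:00", "2024-01-01T12:00"],
        "reward": [1.5, 2.0, 0.75],
    })
    print(summarize_rewards(events))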
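
Similarly, for the orchestration requirement, the following is a minimal sketch assuming Dagster (one of the two tools named above). The asset names, retry settings, and sample data are hypothetical; a real pipeline would read from actual sources and add schedules, monitoring, and alerting.

    from dagster import Definitions, RetryPolicy, asset

    # A retry policy on the upstream asset illustrates the retry handling
    # mentioned above; the values are placeholders.
    @asset(retry_policy=RetryPolicy(max_retries=3, delay=30))
    def raw_rewards():
        # Stand-in for an extraction step from an upstream source.
        return [{"validator": "v1", "reward": 1.2}, {"validator": "v1", "reward": 0.8}]

    @asset
    def daily_reward_totals(raw_rewards):
        # Dagster infers the dependency on raw_rewards from the argument name.
        totals = {}
        for event in raw_rewards:
            totals[event["validator"]] = totals.get(event["validator"], 0.0) + event["reward"]
        return totals

    # Register both assets so they can be materialized from the Dagster UI or daemon.
    defs = Definitions(assets=[raw_rewards, daily_reward_totals])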

Nice to Have

  • Experience with Golang, Temporal, Monte Carlo.
  • Knowledge of decentralized consensus mechanisms, including Proof-of-Work and Proof-of-Stake.
  • Experience in developing custom Terraform modules for data infrastructure.

About the Job: 

Figment is the world’s leading provider of blockchain infrastructure. We provide the most comprehensive staking solution for our more than 500 institutional clients, including exchanges, wallets, foundations, custodians, and large token holders, helping them earn rewards on their crypto assets. These clients rely on Figment’s institutional staking services, including rewards optimization, rapid API development, rewards reporting, partner integrations, governance, and slashing protection. Figment is backed by industry experts, financial institutions, and our global team across twenty-three countries. All of this serves our mission to support the adoption, growth, and long-term success of the Web3 ecosystem.

We are a growth-stage technology company looking for people who are builders and doers: people who are comfortable plotting their course through ambiguity and uncertainty to drive impact, and who are excited to work in new ways and empower a generative company culture.

What they want you to do: 

Join Figment and help it become the world’s leading staking services provider. Figment currently has over $15B in assets under stake, and that figure is growing. This role combines data engineering practices and software development, focusing on data pipelines and cloud infrastructure. The position requires building custom tools and automating data processes in a highly secure and scalable environment.

Responsibilities

  • Implement and maintain reliable data pipelines and data storage solutions.
  • Implement data modeling and integrate technologies according to project needs.
  • Manage specific data pipelines and oversee the technical aspects of data operations.
  • Ensure data processes are optimized and align with business requirements.
  • Identify areas for process improvement and suggest tools and technologies to enhance efficiency.
  • Continuously improve data infrastructure automation, ensuring reliable and efficient data processing.
  • Develop and maintain data pipelines and ETL processes using technologies such as Dagster and DBT to ensure efficient data flow and processing.
  • Automate data ingestion, transformation, and loading processes to support blockchain data analytics and reporting.
  • Utilize Snowflake data warehousing solutions to manage and optimize data storage and retrieval (a connection sketch follows this list).
  • Collaborate with Engineering Leadership and Product teams to articulate data strategies and progress.
  • Promote best practices in data engineering, cloud infrastructure, networking, and security.
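
For the Snowflake responsibility above, here is a minimal retrieval sketch using the snowflake-connector-python package. The connection parameters, table, and column names are placeholders chosen for illustration, not details taken from the posting.

    import snowflake.connector

    # All credentials and object names below are hypothetical placeholders.
    conn = snowflake.connector.connect(
        account="<account>",
        user="<user>",
        password="<password>",
        warehouse="ANALYTICS_WH",
        database="STAKING",
        schema="PUBLIC",
    )
    cur = conn.cursor()
    try:
        # Aggregate a hypothetical rewards table per validator.
        cur.execute(
            "SELECT validator, SUM(reward) AS total_reward "
            "FROM rewards GROUP BY validator"
        )
        for validator, total_reward in cur.fetchall():
            print(validator, total_reward)
    finally:
        cur.close()
        conn.close()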
