Velir

Senior Data Engineer

Job Locations: US
Job ID: 2024-1435
# of Openings: 1
Category: Analytics & Data

Overview

Senior Data Engineers (SDEs) play a crucial role in helping our client organizations manage and leverage their data effectively. Their responsibilities include: Data Architecture Design; Data Warehousing; Data Quality Assurance; Scalability and Performance Optimization; and Data Security. As senior-level individual contributors, SDEs are also responsible for recommending and implementing the data engineering tools and technologies that best suit the client’s needs.

 

Because our clients are mostly US-based organizations, we look for candidates who can communicate with professional proficiency in English, both verbally and in writing.

Responsibilities

Key Responsibilities

  • Data Engineering Expertise: You are responsible for building the infrastructure that supports the storage and movement of data, so that it can be prepared by analytics engineers and ultimately interpreted by analysts. Your job is to inform, develop, and implement data architecture solutions that enable our clients to use data to make decisions, create data products, and create value. As a senior member of the Data Engineering function, you also serve as a mentor to other engineers, both individually and in group settings.
  • Cross-Team Collaboration: You are responsible for collaborating with peers and other functional departments to understand client needs and to develop and implement data engineering strategies and approaches that support engagement goals.
  • Project Delivery: You are responsible for ensuring that large and/or more complex data engineering projects are delivered on time, within scope, and within budget.

Responsibilities Breakdown

Data Engineering Expertise

  • Consistently seeks out and delivers on engagement-level vision, tasks, and problems.
  • Actively assists in scoping and executing the most impactful work for the team.
  • Regularly delivers large features and product improvements that have a meaningful impact on clients’ data infrastructure and capabilities.
  • Works autonomously and may direct or coach less experienced engineers.

 

Cross-Team Collaboration

  • Collaborate with clients and functional managers to plan for data engineering needs for a product or feature launch.
  • Pair with a teammate or with someone at a client on strategies for solving a data engineering problem.
  • Create a process or reporting template that helps cross-functional teams solve for common data engineering problems.
  • Take initiative to identify and solve important problems, and coordinate with others on cross-cutting technical issues.

 

Project Delivery

  • Architects and designs services/systems using design patterns that allow for iterative delivery and future scaling.
  • Proactively identifies and tackles technical debt by planning the work and aligning the team, while carefully weighing the additional development cost.
  • Optimizes for the predictability and regular cadence of deliverables.

Skills & Qualifications

Tools & Technologies

  • Programming languages (e.g., SQL, Python)
  • Data processing (e.g., Apache Spark, dbt)
  • Cloud-based data warehouses (e.g., Snowflake, Google BigQuery)
  • Data orchestration (e.g., Apache Airflow, Azure Data Factory, Dagster)

Technical Skills

  • Data Movement: You can optimize the latency of end-to-end pipelines through intelligent data orchestration as well as approaches such as incremental loading or streaming, and you have strong knowledge of common data integration patterns (CDC, ELT, etc.); see the sketch after this list for a simple example of the incremental pattern.
  • Data Warehousing: You are highly proficient in warehousing, including working knowledge of common ingestion SaaS platforms (e.g., Fivetran) and/or frameworks (e.g., Meltano, Airbyte), the ability to configure warehouse-specific ingestion features (e.g., Snowpipe), and the ability to provision, maintain, and optimize at least one cloud data warehouse (e.g., Snowflake).
  • Programming: You are a highly proficient programmer who approaches code holistically and routinely achieves a high standard. You can optimize performance for large workloads and troubleshoot complex queries and functions. Proficiency in Python is required.
  • Domain Expertise: You have a strong foundation of knowledge in the domains in which you work, and you can relate how the business works to the goals of the immediate team.
  • Technical Management: You display clear technical confidence and understanding and, for the most part, can use organizational- and team-specific tools independently.
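To make the Data Movement bullet above concrete, here is a minimal, hypothetical sketch of the watermark-based incremental load pattern it refers to. It uses SQLite from the Python standard library purely as a stand-in for a source system and a warehouse; the orders table, column names, and dates are illustrative assumptions, and a production pipeline would more likely pair an orchestrator with a warehouse-native MERGE or a tool such as dbt.

```python
import sqlite3

# Stand-ins for a source database and a warehouse (illustrative only).
source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

source.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
    INSERT INTO orders VALUES (1, 10.0, '2024-01-01'), (2, 25.5, '2024-01-02');
""")
warehouse.execute(
    "CREATE TABLE stg_orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)"
)

def incremental_load() -> int:
    """Copy only rows changed since the last load (high-water-mark pattern)."""
    # 1. Find the newest updated_at already present in the warehouse.
    (watermark,) = warehouse.execute(
        "SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM stg_orders"
    ).fetchone()

    # 2. Extract only rows the source has changed since that watermark.
    rows = source.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()

    # 3. Upsert into the warehouse table (SQLite's stand-in for MERGE).
    warehouse.executemany(
        "INSERT INTO stg_orders (id, amount, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, "
        "updated_at = excluded.updated_at",
        rows,
    )
    warehouse.commit()
    return len(rows)

print(incremental_load())  # 2 -- both rows are newer than the initial watermark
print(incremental_load())  # 0 -- nothing has changed, so nothing is moved
```

The second call returns 0 because the high-water mark now excludes already-loaded rows; moving only what changed is what keeps end-to-end latency and warehouse cost low.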

 

Bonus points for:

  • Data Modeling & Transformation: You have high proficiency with data transformation tools such as dbt and expert proficiency in data modeling approaches and philosophies (Kimball, OBT).
  • Data Orchestration: You’re familiar with at least one data orchestration platform (Azure Data Factory, Apache Airflow, Dagster, etc.); a minimal example appears after this list.
  • Data Infrastructure: You understand more complex infrastructure approaches, including the implications and suitability of different deployment options and how to deploy self-hosted applications for clients with high security requirements.
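For the orchestration platforms mentioned above, the following is a minimal sketch of a two-task Apache Airflow DAG, assuming Airflow 2.x is installed; the DAG id, schedule, and the extract/load callables are illustrative placeholders rather than anything specific to this role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder: pull changed rows from the source system.
    print("extracting changed rows")


def load_orders():
    # Placeholder: upsert the extracted rows into the warehouse.
    print("loading rows into the warehouse")


with DAG(
    dag_id="orders_incremental",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)

    extract >> load  # load runs only after extract succeeds
```

The `extract >> load` dependency is the core idea: the orchestrator, not the tasks themselves, owns ordering, scheduling, and retries.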

Qualifications 

  • Proven experience as a Data Engineer or related role, with a focus on designing and developing data pipelines.
  • Strong programming skills in Python and SQL. Experience with Scala and Rust is a plus but not required.
  • Deep knowledge of data warehousing and ETL/ELT processes.
  • Intermediate to expert proficiency with common data integration and orchestration platforms (e.g., Fivetran, Azure Data Factory, Apache Airflow).
  • Hands-on experience with data warehouses like Snowflake, BigQuery, Databricks, or similar.
  • Experience with streaming solutions such as Spark Streaming, Kafka, or Flink is desirable but not required.
  • Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
  • Familiarity with machine learning operations (MLOps) techniques and platforms is a plus but not required.
  • Experience mentoring and advising other engineers.

 

Compensation Range: $125,000 - $145,000 annually.

 

Please note that compensation packages are finalized after the interview process has concluded. We take a competency-based approach to base pay, which means pay is based on the competencies and skills demonstrated for this role.

 

Core Company Values

Velir is an established mid-sized agency with a top-tier portfolio of clients, ranging from the world’s largest non-profits to Fortune 500 brands. In 2023, Velir acquired Brooklyn Data Company, a premier data and analytics consultancy focused on leadership, process improvement, implementation, and advanced analytics. Collectively, we pride ourselves on a people-first culture and a low-ego workplace that embraces experimentation, collaboration, and continuous improvement. We offer flexible work locations, competitive pay, and excellent benefits.

  • Take the Long View - Ensure the company is built to last
  • Be Courageous - Make the right decisions even when they aren't the easiest decisions
  • Be Genuine - Bring honesty and authenticity to all that you do
  • Work with Focus + Passion - Display purpose and pride in your work and never stop learning

As an equal opportunity employer, we are firmly committed to diversity, equity, and inclusion in our hiring efforts. We recognize that we need team members from all backgrounds and experiences to successfully shape a positive employee experience as well as deliver our product and service solutions. To that end, we actively seek candidates who can bring diverse experiences and backgrounds to our team. We know that complex factors and systemic bias can get in the way of us meeting strong candidates, so please don't hesitate to apply even if you're not 100% sure.

 

At this time, Velir does not sponsor candidates and unfortunately cannot accept those on OPT or CPT.

 

#LI-Remote
