About
Since its founding 13 years ago, Great Day Improvements, LLC has grown rapidly toward its vision of becoming one of the largest home improvement companies in the U.S. Headquartered in Twinsburg, Ohio, Great Day Improvements is a $1.5 billion, vertically integrated, direct-to-consumer provider of premium home improvement products.
The company’s family of brands includes Patio Enclosures®, Champion Windows and Home Exteriors®, Universal Windows Direct®, Apex Energy Solutions®, Stanek Windows®, Hartshorn Custom Contracting, Your Home Improvement Company, K Designers, Leafguard®, Englert®, and The Bath Authority.
With an expanding workforce of over 4,800 employees across 130 metropolitan markets throughout the U.S. and Canada, Great Day Improvements continues to rank among the top home improvement companies nationwide and is one of the fastest-growing private companies in America.
Job Summary
The Senior Data Engineer spearheads digital transformation initiatives at Great Day Improvements, liaising with system architects and integrating data from various sources. This role is centered on the Databricks Lakehouse platform and requires deep expertise in Unity Catalog for data governance and access control, Delta Live Tables (DLT) for pipeline orchestration, and metadata-driven development patterns that maximize reuse, configurability, and maintainability across the data estate.
The ideal candidate will have proven experience building and managing ETL, migrations, and data management for both RDBMS and NoSQL systems, with a focus on developing optimized data architecture. The Senior Data Engineer provides critical support to software developers, data analysts, and data scientists on data-related initiatives, ensuring that the data delivery architecture is maintained optimally across all ongoing projects.
This role requires an individual who is self-motivated, embraces AI-assisted development workflows, and is adept at addressing the data needs of multiple teams, systems, and products. The ideal candidate will be enthusiastic about contributing to the design and enhancement of the data infrastructure to support the continuous growth of Great Day’s portfolio of brands.
Location:
Twinsburg, OH (Hybrid)
Pay:
$160,000 per year
Responsibilities
Data Pipeline Design & Development
Design, develop, and maintain scalable and reliable data pipelines that integrate data from multiple sources (CRM, ERP, etc.) into a cohesive data ecosystem
Collaborate with stakeholders to understand data requirements and deliver comprehensive data models that support business needs
Build processes supporting data transformation, data structures, metadata, dependency, and workload management
Architecture & Standards
Analyze and improve existing data architectures to enhance performance and scalability within the Databricks Lakehouse platform
Build and maintain metadata-driven pipeline frameworks that use external configuration (tables, JSON, YAML) to control pipeline behavior, schema mappings, transformations, and data flow, minimizing hardcoded logic and maximizing reusability (a sketch of this pattern follows this list)
Contribute to developing and documenting internal and external standards for pipeline configurations, naming conventions, partitioning strategies, and more
Stay abreast of industry trends and technologies to drive innovation within the data management space
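To make the metadata-driven framework item above concrete, here is a minimal sketch of a config-driven ingestion loop in PySpark. The config schema, source paths, column mappings, and table names are illustrative assumptions, not a description of Great Day's actual framework:

```python
# Minimal sketch of a config-driven ingestion step. The config schema,
# paths, and table names below are hypothetical examples.
import json

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One entry per source: onboarding a new source means adding a config
# entry (or a row in a config table), not writing new pipeline code.
config = json.loads("""
[
  {
    "source_path": "/mnt/raw/crm/accounts",
    "target_table": "bronze.crm_accounts",
    "format": "parquet",
    "column_map": {"AcctId": "account_id", "AcctName": "account_name"}
  }
]
""")

for entry in config:
    df = spark.read.format(entry["format"]).load(entry["source_path"])
    # Apply the configured schema mapping instead of hardcoded selects.
    for src_col, tgt_col in entry["column_map"].items():
        df = df.withColumnRenamed(src_col, tgt_col)
    df.write.mode("append").saveAsTable(entry["target_table"])
```

The loop itself never changes; behavior lives entirely in the configuration, which is what keeps the framework reusable.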
Data Governance & Quality
Develop and enforce data governance policies and procedures to ensure data integrity and security
Configure and maintain Unity Catalog securables (catalogs, schemas, tables, volumes) with appropriate grants and privilege hierarchies to enforce least-privilege access (see the example after this list)
Ensure high operational efficiency and quality of data platform datasets, supporting project reliability and accuracy through DLT expectations, data quality checks, and monitoring
Implement and manage Master Data Management (MDM) strategies and solutions to ensure data accuracy, completeness, and consistency across the organization
Leverage Unity Catalog’s data lineage and audit logging capabilities to support compliance, impact analysis, and operational transparency
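As an illustration of the least-privilege grant pattern described above, a minimal sketch issued from a Databricks notebook; the catalog, schema, table, and group names are hypothetical:

```python
# Minimal sketch of least-privilege Unity Catalog grants. All catalog,
# schema, table, and group names here are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Readers need USE on each container above the table plus SELECT on the
# table itself; nothing here lets them modify data or browse other schemas.
grants = [
    "GRANT USE CATALOG ON CATALOG analytics TO `data-analysts`",
    "GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data-analysts`",
    "GRANT SELECT ON TABLE analytics.sales.orders TO `data-analysts`",
    # Engineers additionally get write access and table creation rights.
    "GRANT MODIFY ON TABLE analytics.sales.orders TO `data-engineers`",
    "GRANT CREATE TABLE ON SCHEMA analytics.sales TO `data-engineers`",
]
for stmt in grants:
    spark.sql(stmt)
```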
Qualifications
Required
5+ years of experience in a data engineer role, with a graduate degree in computer science, statistics, informatics, information systems, or another quantitative field
Proven experience in data engineering, data integration, and data architecture
Strong proficiency in SQL and experience with relational and NoSQL databases
Advanced working SQL knowledge including query authoring and familiarity with various databases (MS SQL, PostgreSQL, MySQL, Oracle, etc.)
Experience building and optimizing big data pipelines, architectures, and data sets
A successful history of manipulating, processing, and extracting value from large, disconnected datasets
Skills in one or more languages such as SQL (Required), Python, C#, Java, Kotlin, Scala, R, and JavaScript
Excellent problem-solving, analytical, and communication skills
Preferred
3+ years of hands-on experience with the Databricks Lakehouse platform, including Delta Lake, DLT pipelines, and the Databricks SQL and notebook environments
Experience implementing Unity Catalog across multiple workspaces with centralized governance patterns
Proven track record of building parameterized, config-driven DLT pipelines that can onboard new data sources with minimal code changes (a combined sketch follows this list)
Experience with Databricks Auto Loader, Structured Streaming, and Change Data Capture (CDC) patterns
Experience with CRM and ERP systems integration
Knowledge of index optimization, data replication/clustering, and archive strategies
Experience with SQL Server linked servers, triggers, constraints, synonyms, views, UDFs, and stored procedures
Experience with scheduling and process execution tools such as SQLCMD, BCP, DTExec, mysqldump, mongodump, mongosh, bash, PowerShell, and Python
Experience documenting complex data architecture in GitHub Flavored Markdown
Understanding of data concepts including CRDTs (conflict-free replicated data types), event sourcing, and the “Big Vs” of data in a polyglot hybrid cloud environment
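To ground the DLT, config-driven pipeline, and Auto Loader items above, a minimal sketch of a parameterized DLT pipeline with a data-quality expectation; the parameter keys, source path, and table name are hypothetical and would be supplied through the pipeline's configuration:

```python
# Minimal sketch of a parameterized DLT pipeline using Auto Loader and a
# data-quality expectation. Parameter keys and names are hypothetical.
import dlt
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read pipeline parameters instead of hardcoding the source, so a new
# source can be onboarded by cloning the pipeline configuration.
source_path = spark.conf.get("ingest.source_path")  # e.g. a cloud storage URI
source_format = spark.conf.get("ingest.format", "json")

@dlt.table(name="bronze_crm_accounts",
           comment="Raw CRM accounts ingested incrementally via Auto Loader")
@dlt.expect_or_drop("valid_account_id", "account_id IS NOT NULL")
def bronze_crm_accounts():
    # Auto Loader ("cloudFiles") incrementally picks up newly arrived files.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", source_format)
        .load(source_path)
    )
```

For CDC sources, a stream like this would typically feed dlt.apply_changes() to merge keyed, sequenced change records into a silver table.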
Competencies
Strong analytical skills related to working with structured and unstructured datasets
Ability to quickly understand complex systems and data flows
Problem-solving orientation and technical communication
Self-motivated with ability to address data needs across multiple teams, systems, and products
Passion for data infrastructure design, metadata-driven development patterns, and continuous improvement
Enthusiasm for leveraging AI tools and assistants to enhance engineering productivity and code quality
Success Measures
Success in this role will be measured by:
Data pipelines are reliable, performant, and deliver data on time to downstream consumers
Data architecture supports current and future business needs with minimal rework
Metadata-driven frameworks are adopted, and onboarding a new data source requires configuration changes rather than new pipeline code
Data governance standards are documented, enforced, and adopted across the organization
Data quality issues are identified proactively through DLT expectations and monitoring, and resolved efficiently
Cross-functional teams (analysts, data scientists, developers) are effectively supported with quality data
MDM strategies are implemented and drive consistency across the organization
GDI is an Equal Employment Opportunity Employer
Languages
- English