1. Professional Summary
“Data engineer with 5 years building and maintaining scalable data pipelines and warehousing solutions. Expert in Spark, Airflow, and cloud data platforms (Snowflake, BigQuery) with a focus on data quality, pipeline reliability, and enabling self-service analytics for business teams.”
2. Key Skills
Python, SQL, Apache Spark, Airflow, Snowflake, AWS (S3, Glue, Redshift), Kafka, dbt, Data Modeling, ETL/ELT, Delta Lake
3. Sample Experience Bullets
- Built a real-time data pipeline with Kafka and Spark Streaming that processes 5TB+ daily from 30+ source systems (see the streaming sketch after this list)
- Migrated legacy ETL to dbt on Snowflake, cutting transformation time from 6 hours to 25 minutes
- Set up a data quality framework with Great Expectations: 500+ rules monitoring pipeline health at 99.5% reliability (see the data quality sketch after this list)
- Implemented a medallion architecture (bronze/silver/gold) in Delta Lake, enabling self-service analytics for 100+ business users
- Cut Snowflake compute costs by 40% through query tuning, warehouse scheduling, and adding materialized views for heavy reports
- Owned roughly 200 Airflow DAGs orchestrating nightly batch processing across all data pipelines (see the DAG sketch after this list)
- Worked with the analytics team to understand their data needs and build clean, well-documented tables they could query directly
- Wrote Python scripts for data ingestion from REST APIs, SFTP drops, and database CDC streams (see the ingestion sketch after this list)
- Maintained schema documentation in the data catalog, ensuring every production table had descriptions, owners, and freshness SLAs
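To make the streaming bullet concrete for interview prep, here is a minimal sketch of the Kafka-to-Spark-Structured-Streaming pattern it describes. The broker address, topic name, event schema, and S3 paths are all invented for illustration, and the job assumes the Kafka and Delta Lake connector packages are available; a production pipeline would add schema evolution, dead-letter handling, and monitoring.

```python
# Minimal sketch: consume JSON events from Kafka and append them to a Delta table.
# Broker, topic, schema, and paths are placeholders, not real systems.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Assumed event schema, for illustration only.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                      # placeholder topic
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse it as JSON.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "s3://lake/checkpoints/orders")  # placeholder path
    .outputMode("append")
    .start("s3://lake/bronze/orders")                              # placeholder path
)
query.awaitTermination()
```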
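The data quality bullet refers to rules managed in Great Expectations. Rather than reproduce that library's API, the sketch below shows the kind of rule such a framework encodes, written as plain pandas checks; the table, column names, and allowed values are hypothetical.

```python
# Sketch of typical data-quality rules (not Great Expectations' actual API).
# Column names and allowed values are invented for illustration.
import pandas as pd


def check_orders(df: pd.DataFrame) -> dict:
    """Return a pass/fail map for a few illustrative data-quality rules."""
    return {
        "order_id_not_null": df["order_id"].notna().all(),
        "order_id_unique": df["order_id"].is_unique,
        "amount_non_negative": (df["amount"] >= 0).all(),
        "status_in_allowed_set": df["status"].isin({"new", "paid", "shipped"}).all(),
    }


if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "order_id": [1, 2, 3],
            "amount": [10.0, 5.5, 0.0],
            "status": ["new", "paid", "shipped"],
        }
    )
    results = check_orders(sample)
    failed = [name for name, ok in results.items() if not ok]
    print("all checks passed" if not failed else f"failed: {failed}")
```

In a real framework these checks run inside the pipeline and feed the reliability metric the bullet quotes, rather than being called by hand.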
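The Airflow bullet describes nightly batch orchestration. Below is a minimal sketch of one such DAG; the dag_id, schedule, and task callables are placeholders, and the `schedule` argument assumes Airflow 2.4 or later (older releases use `schedule_interval`).

```python
# Minimal sketch of a nightly batch DAG. Task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull increments from source systems")  # placeholder


def transform():
    print("run dbt / Spark transformations")  # placeholder


def publish():
    print("refresh reporting tables and catalog metadata")  # placeholder


with DAG(
    dag_id="nightly_orders_batch",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # 02:00 daily; use schedule_interval on older Airflow
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    t_extract >> t_transform >> t_publish
```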
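Finally, the ingestion bullet mentions Python scripts pulling from REST APIs. Here is a hedged sketch of a paginated REST pull landed to S3 as raw JSON; the endpoint, pagination scheme, bucket, and key layout are all assumptions.

```python
# Sketch: paginated REST ingestion landed to S3 as raw JSON.
# Endpoint, pagination style, bucket, and key layout are invented.
import json
from datetime import datetime, timezone

import boto3
import requests

API_URL = "https://api.example.com/v1/orders"  # placeholder endpoint
BUCKET = "company-data-lake"                   # placeholder bucket


def fetch_all(url: str) -> list[dict]:
    """Follow page-based pagination until the API returns an empty page."""
    records, page = [], 1
    while True:
        resp = requests.get(url, params={"page": page, "per_page": 500}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return records
        records.extend(batch)
        page += 1


def land_to_s3(records: list[dict]) -> str:
    """Write the raw payload to a date-partitioned key in the landing zone."""
    run_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    key = f"landing/orders/dt={run_date}/orders.json"
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8")
    )
    return key


if __name__ == "__main__":
    print(land_to_s3(fetch_all(API_URL)))
```

Landing the raw payload unchanged before any transformation keeps the pipeline replayable, which is the usual rationale for a bronze or landing layer like the one in the medallion bullet above.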
4. ATS Keywords
Include these keywords in your resume to pass Applicant Tracking Systems.
data engineer, ETL pipeline, data warehouse, Spark, Airflow, data pipeline, Snowflake, data modeling, data lake, batch processing, streaming
5. Recommended Certifications
- Databricks Certified Data Engineer Associate
- AWS Certified Data Analytics - Specialty