I am a results-driven Data Engineer with expertise in designing, building, and optimizing data pipelines to enable efficient data flow, storage, and analytics at scale. I’m passionate about continuous learning and enjoy exploring new tools and approaches that help teams make better decisions with data. My background includes hands-on experience across the full Software Development Life Cycle (SDLC), with a strong focus on Extract, Transform, Load (ETL) processes, relational databases, and cloud-native solutions.
In my current role at the Shoprite Group, I develop and optimize scalable data solutions using Python, PySpark, and AWS. To ensure accuracy and reliability, I leverage SQL for unit testing and validation. I also apply Infrastructure as Code (IaC) principles with Terraform to provision and manage AWS infrastructure, creating secure, cost-effective, and reliable cloud environments.
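To give a flavor of what that SQL-based validation looks like in practice, here is a minimal PySpark sketch; the table, column, and path names (orders, order_id, the S3 location) are illustrative placeholders, not details of any production system:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("validation-sketch").getOrCreate()

    # Hypothetical input: a raw extract loaded earlier in the pipeline.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")
    orders.createOrReplaceTempView("orders")

    # SQL checks: null keys and duplicate primary keys in one pass.
    checks = spark.sql("""
        SELECT
            COALESCE(SUM(CASE WHEN order_id IS NULL THEN 1 ELSE 0 END), 0) AS null_keys,
            COUNT(*) - COUNT(DISTINCT order_id) AS duplicate_keys
        FROM orders
    """).first()

    # Fail fast so bad data never reaches downstream consumers.
    assert checks.null_keys == 0, f"{checks.null_keys} rows missing order_id"
    assert checks.duplicate_keys == 0, f"{checks.duplicate_keys} duplicate order_id values"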
I began my journey as a Data Engineering Apprentice, gaining practical experience in building and maintaining data workflows. Since then, I have advanced into a Data Engineer role, where I continue to solve complex data challenges, improve processes, and contribute to innovative, data-driven strategies. My experience extends to implementing CI/CD pipelines and automation with Git, Bitbucket, and Microsoft Azure, which streamline deployments and improve delivery speed.
Skills:
• ETL Processes
• Data Analysis
• IT Systems Development Processes (SDLC)
• Standards and Governance
• Relational Databases
• AWS
• API Data Extraction
• Performance Tuning
• Critical thinking
• Communication
• Analytical skills
• Problem-solving
• Python
• PySpark
• SQL
• Terraform
• CI/CD Pipelines
• Git