Hi — I'm Amir, a Data Engineer

I build reliable data platforms.

See projects · CV

About

I design data architectures and pipelines that move quickly from prototype to production. My work focuses on reliability, observability, and cost-efficient processing at scale.

  • Experience with batch & streaming systems
  • Infrastructure as code, monitoring, and testing
  • Strong collaboration with analysts, ML engineers and product teams

Skills

Python · SQL · Airflow / Prefect · Spark · Kafka · GCP / AWS · dbt · Docker / Kubernetes · Monitoring (Prometheus, Grafana)

Selected Projects

Run/Walk Data Pipeline (GitHub)

Batch pipeline: CSV → Parquet → DuckDB with SQL transformations and materialized tables. Orchestrated with Apache Airflow, with a Streamlit dashboard and Flask API for validation and reporting.

Python, Airflow, DuckDB, Streamlit, Flask

Analytics Platform (GitHub)

End-to-end data platform with ETL pipelines, DuckDB storage, FastAPI, and Streamlit dashboards. Normalized JSONL telemetry into analytical tables and built APIs for reporting.

Python, FastAPI, DuckDB, Streamlit, ETL
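A sketch of the JSONL-normalization step mentioned above, using only the standard library. The record shape (`device`, `ts`, nested `metrics`) is a made-up example; the actual telemetry schema is not shown here.

```python
import json

# Hypothetical JSONL telemetry records (field names are illustrative).
raw = [
    '{"device": "a1", "ts": "2024-01-01T00:00:00", "metrics": {"cpu": 0.4, "mem": 0.7}}',
    '{"device": "a1", "ts": "2024-01-01T00:01:00", "metrics": {"cpu": 0.5, "mem": 0.6}}',
]

def normalize(line: str) -> dict:
    """Flatten one nested JSONL record into a flat row for an analytical table."""
    rec = json.loads(line)
    row = {"device": rec["device"], "ts": rec["ts"]}
    # Promote nested metrics to top-level columns (metric_cpu, metric_mem, ...).
    for name, value in rec.get("metrics", {}).items():
        row[f"metric_{name}"] = value
    return row

rows = [normalize(line) for line in raw]
print(rows[0])
```

Flat rows like these can then be bulk-inserted into a DuckDB table and served through the FastAPI reporting endpoints.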

Experience

  1. Backend / Data Engineer — 08/2019 – Present
    Multiple companies: Hiretodo, Ardakan Glass, Paziresh 24, Mahya Pardaz
    • Built ETL pipelines for collecting, transforming, and validating data.
    • Wrote complex SQL queries (joins, aggregations, window functions) for analysis and reporting.
    • Performed data cleaning and quality checks to ensure accuracy and consistency.
    • Designed optimized MySQL schemas for large datasets.
    • Investigated and resolved data-related issues.
    • Worked in Linux environments using command-line tools for data processing.

Contact

Interested in hiring or collaborating? Email me at amirrezazare59@gmail.com, find my work on GitHub, or connect on LinkedIn.