- Company size:
- Business type:
- Website:
• Collaborate with Agile product teams (mobile, digital banking, corporate banking, and customer insight) to design, develop, test, implement, and support full-stack data solutions that directly impact millions of mobile banking users
• Design and evolve our enterprise Data Platform with Data Vault 2.0 as the foundational modeling methodology while delivering high-quality, trusted data layers in Google Cloud
• Build high-throughput, low-latency real-time and batch data pipelines to power behavioral analytics, next-best-action recommendations, and seamless customer experiences
• Create scalable ingestion, transformation, quality enforcement, and serving layers that feed our Next-Gen Mobile Banking App and other Digital Banking platform products
• Ensure enterprise-grade data quality, governance, lineage, security, and compliance in a highly regulated environment
• Optimize pipelines for performance, cost-efficiency, reliability, and scalability on Google Cloud Platform (GCP)
• Mentor junior and mid-level data engineers, conduct design reviews, and leverage DataOps best practices (CI/CD, automated testing, observability)
• Partner closely with mobile app teams, data scientists, and business stakeholders to translate digital banking requirements into production-grade data products
• Bachelor’s Degree in Computer Science, Engineering, or a related field
• Solid experience (typically 5+ years or equivalent) in data engineering using Python, SQL, and Spark/PySpark
• Solid understanding of digital banking data domains (T24 data, mobile transactions, banking products, app events, customer behavior, corporate banking flows)
• Proven hands-on experience with Google Cloud Platform (GCP), particularly BigQuery as a core data warehouse/lakehouse solution
• Strong background in building and operating big data pipelines, including real-time/near-real-time processing with Spark, Spark Streaming, or equivalent frameworks
• Practical experience with real-time streaming technologies (Spark Streaming, Flink, Kafka Streams) and workflow orchestration (Airflow)
• Proficiency in dbt for data transformations, Python/PySpark development, and modern data warehouse/lakehouse approaches
• Expertise in scalable data modeling, with Data Vault 2.0 strongly preferred (or demonstrated success delivering agile, auditable enterprise data models such as medallion or lakehouse patterns)
• Familiarity with data governance, lineage practices, and compliance requirements in a regulated banking environment
• Experience designing, developing, and optimizing RESTful APIs and/or GraphQL APIs (including schema design, performance tuning, versioning, security, rate limiting, and integration with mobile/web frontends or microservices)
• Experience leveraging AI copilots and hands-on familiarity with AI agents (e.g., GitHub Copilot, Claude Code, OpenClaw, ...) to accelerate development, code review, and pipeline automation
N/A
$ Negotiable