People on the move deserve a bank that moves with them. Since 2022, Aspora has been building a borderless financial operating system that makes money as mobile and transparent as its users.
We're backed by leading venture firms including Sequoia Capital, Greylock Partners, Hummingbird Ventures, Y Combinator, and Global Founders Capital. We're a team of 75+ across India, the UK, the UAE, the EU, and the US, working with extreme ownership, radical candour, and an obsession with customer impact.
We celebrate builders who question assumptions, ship fast, and turn regulatory complexity into elegant solutions. If you’re driven to redefine what global banking can be, we’d love to build the future with you.
About the role
You'll join our Data Platform team and get hands-on experience building and improving the infrastructure that powers analytics and machine learning across the company. You'll work alongside senior engineers on real problems — not toy projects.
What you'll work on
Pipeline development
Contribute to ETL/ELT pipelines using Python and SQL. Learn to write idempotent, testable pipeline code with guidance from your team.
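To give a flavour of what "idempotent pipeline code" means in practice, here is a minimal sketch: a load step that can be safely retried because it upserts on a primary key. The table, columns, and data are hypothetical, and SQLite stands in for a real warehouse.

```python
import sqlite3

# Hypothetical example: an idempotent load keyed on a primary key,
# so re-running the same batch never duplicates rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (id TEXT PRIMARY KEY, amount REAL, status TEXT)")

def load(rows):
    # Upsert: a retry with the same batch leaves the table unchanged.
    conn.executemany(
        "INSERT INTO transfers (id, amount, status) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, status = excluded.status",
        rows,
    )
    conn.commit()

batch = [("t1", 100.0, "settled"), ("t2", 250.0, "pending")]
load(batch)
load(batch)  # safe to retry after a partial failure
count = conn.execute("SELECT COUNT(*) FROM transfers").fetchone()[0]
print(count)  # 2, not 4
```

The same idea carries over to warehouse MERGE statements or partition-overwrite writes: the pipeline's output depends only on its input, not on how many times it ran.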
Spark & Databricks exploration
Run and optimise PySpark jobs, explore execution plans, and understand how data moves through our lakehouse (Delta Lake).
Orchestration & scheduling
Write and debug Airflow DAGs. Learn dependency management, alerting patterns, and how SLAs are enforced in production.
Data quality & observability
Help build automated data quality checks and learn how the team monitors pipeline health and responds to incidents.
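As a taste of what an automated quality check looks like, here is a minimal sketch: a function that validates a batch before it is published downstream. The field names and rules are illustrative, not our actual checks.

```python
# Hypothetical example: row-level quality checks run before a batch
# is published downstream. Empty result means the batch is clean.
def check_batch(rows):
    """Return a list of failed-check messages for a batch of records."""
    failures = []
    if not rows:
        failures.append("batch is empty")
    ids = [r["id"] for r in rows]
    if len(ids) != len(set(ids)):
        failures.append("duplicate ids")
    for r in rows:
        if r.get("amount") is None or r["amount"] < 0:
            failures.append(f"bad amount for id={r['id']}")
    return failures

clean = [{"id": "t1", "amount": 100.0}, {"id": "t2", "amount": 250.0}]
dirty = [{"id": "t1", "amount": 100.0}, {"id": "t1", "amount": -5.0}]
print(check_batch(clean))  # []
print(check_batch(dirty))  # ["duplicate ids", "bad amount for id=t1"]
```

In production these checks typically run as a pipeline task whose failure blocks downstream consumers and pages the team, rather than as inline assertions.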
Developer tooling
Contribute to internal libraries and pipeline templates that make the broader data engineering team more productive.
What we're looking for
Currently pursuing a degree in Computer Science, Data Engineering, Software Engineering, or a related field
Comfortable writing Python and SQL — you've used them in coursework, projects, or internships
Familiar with core data concepts — relational databases, querying, and how data flows between systems
Basic understanding of version control (Git) and how software is developed collaboratively
Curious, self-directed, and comfortable asking questions when stuck
Bonus points
Any exposure to distributed computing or big data tools (Spark, Hadoop, Kafka) — even from a class or online course
Experience with cloud platforms (AWS, GCP, or Azure) — even personal or project-level use counts
Projects (personal, academic, or open source) that show you enjoy working with data at scale