Be Part Of A High-Performing Team:
Join a globally recognized financial institution operating at the forefront of banking and financial services. The organization is known for its strong commitment to innovation, operational excellence, and data-driven decision-making. The technology division plays a critical role in enabling enterprise-wide transformation initiatives, with a focus on modernizing data platforms and enhancing analytics capabilities. Teams operate in a collaborative, fast-paced environment, actively leveraging cutting-edge cloud technologies and large-scale data solutions to drive business impact.
What’s In Store For You:
- Engagement: W2 only (no C2C/1099)
- Opportunity to work on enterprise-scale cloud data platforms within a leading financial services environment
- Exposure to modern Azure data ecosystem tools and large-scale data engineering initiatives
- Direct involvement in high-impact data transformation and analytics programs
How You Will Make An Impact:
- Design, build, and optimize scalable data pipelines using Azure cloud technologies
- Implement data solutions leveraging Azure Data Factory, Azure Data Lake, and Azure Databricks
- Develop and maintain data ingestion frameworks for structured and unstructured data
- Optimize performance of data workflows and Databricks jobs for efficiency and scalability
- Collaborate with cross-functional teams to support data analytics and reporting initiatives
- Ensure data quality, integrity, and governance across enterprise data platforms
- Contribute to Agile delivery processes, including sprint planning and Jira tracking
Are you a proven Azure Data Engineering expert ready to drive enterprise data solutions?
- 10+ years of experience in data engineering or related roles
- Strong hands-on experience with Azure Data Lake (ADLS), Azure Data Factory, Azure Databricks, and Azure SQL/Azure SQL Data Warehouse (now Azure Synapse Analytics)
- Proven expertise in building and deploying cloud-based data solutions on Azure
- Advanced knowledge of SQL, T-SQL, and/or PL/SQL
- Experience with Python, PySpark, and Spark SQL for big data processing
- Strong experience with performance tuning and optimization in Databricks environments
- Experience working in Agile environments with tools such as Jira
- Background in designing and implementing data ingestion pipelines in cloud environments
- Strong analytical and problem-solving skills, especially within large-scale data ecosystems
- Excellent communication and collaboration skills