Description:
Local candidates who are fully vaccinated against COVID-19 only.
Business Background
This position is for a role in the TEDRA department.
TEDRA (Trade Enrichment Data Reporting & Allocations) is part of the Institutional Securities Technology (IST) Division. It is responsible for maintaining, distributing, and reporting on trading, revenue, risk, and reference data (client, product, and pricing). As the authoritative source of key data sets, we are at the forefront of database technology and are heavily involved in data engineering, data science, data visualization, and machine learning efforts across the Firm.
Position Introduction
This is a data engineer role on the team responsible for developing the firm's Trade Capture data stores, which hold transactional big data for real-time and archive processing and feed it into the firm's archives and data lake.
The global team consists of highly technical members who are adaptable to both hands-on development and project management. We deliver multiple projects for multiple business areas in parallel. The business owners and subject matter experts are globally distributed, making clear communication and a proactive approach essential. You will be expected to work closely with our operations partners on project requirements.
Development follows an agile methodology based on Scrum (time boxing, daily scrum meetings, retrospectives, etc.) and XP (continuous integration, refactoring, unit testing, etc.) best practices. Candidates must therefore be able to work collaboratively, demonstrate strong ownership, and work well in teams.
Primary responsibilities include:
1. Translate business requirements into queries against a set of relational tables and produce reporting based on those requirements.
2. Design and build a reporting layer from different data sources and act as a single point of contact (SPOC) for user queries.
3. Develop databases and ETL pipelines, including stored procedures, queries, and performance tuning, using Python, SQL, and ETL tools such as Informatica.
4. Write efficient, clean automation scripts (e.g., in Python) as part of the ETL process.
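As a rough illustration of the kind of work items 1, 3, and 4 describe, the sketch below shows a tiny extract-transform-load step in Python against an in-memory SQLite database. All table and column names (trades, position_report, etc.) are hypothetical examples, not the firm's actual schema or tooling:

```python
# Minimal ETL sketch: load raw trades, aggregate into a reporting table.
# Table/column names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source table standing in for a trade-capture store.
cur.execute("CREATE TABLE trades (trade_id INTEGER, symbol TEXT, qty INTEGER, price REAL)")
cur.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [(1, "IBM", 100, 140.5), (2, "IBM", -40, 141.0), (3, "AAPL", 10, 190.0)],
)

# Transform + load: aggregate per symbol, a typical reporting requirement.
cur.execute("CREATE TABLE position_report (symbol TEXT, net_qty INTEGER, notional REAL)")
cur.execute(
    """
    INSERT INTO position_report
    SELECT symbol, SUM(qty), SUM(qty * price)
    FROM trades
    GROUP BY symbol
    """
)
conn.commit()

for row in cur.execute("SELECT * FROM position_report ORDER BY symbol"):
    print(row)  # prints ('AAPL', 10, 1900.0) then ('IBM', 60, 8410.0)
```

In practice such a step would run inside an ETL tool or scheduled job against production databases rather than SQLite; the pattern of extracting, aggregating, and loading into a reporting layer is the same.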
The current global team members are all highly skilled in domain modeling, database design, big data, Java, and messaging, so this is an excellent opportunity to play a key role in a growing team.
Technical Skills Requirement
* Strong relational database skills, especially with DB2, Sybase, and/or Greenplum.
* Knowledge of Hadoop/Spark/Snowflake is desirable.
* Ability to create high-quality, optimized stored procedures and queries.
* Experience with PowerDesigner or a similar data modeling tool.
* Strong scripting skills in languages such as Python and Unix shell / KornShell (ksh).
* Strong knowledge of relational database performance and tuning, including proper use of indexes, database statistics and reorgs, and denormalization concepts.
* Experience with data mining is a big plus.
* Familiarity with the lifecycle of a trade and the flows of data in an investment banking operation.
* Experience with Agile development processes.