JOB ROLE: Data Engineer
Location - Bangalore
The role will be part of the Data and Analytics Team, which is responsible for expanding and optimizing AECOM’s data and data pipeline architecture, as well as data flow and collection for cross-functional teams. The role will support software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure a consistent, optimal data delivery architecture across ongoing projects.
Responsibilities & Duties
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Keep AECOM’s data separated and secure across national boundaries through multiple data centres and regions.
Create data tools for analytics and data science team members to assist them in building and optimizing our product into an innovative industry leader.
Qualifications & Skills
Bachelor’s degree in Computer Science, Statistics, Informatics, Information Systems, or another relevant quantitative discipline.
Advanced SQL knowledge and experience working with relational databases, including query authoring, as well as working familiarity with a variety of databases.
Advanced working knowledge of SQL/NoSQL, ADLS, Databricks, ADF, and Azure DevOps.
Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytical skills related to working with unstructured datasets.
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
Demonstrated ability to manipulate, process, and extract value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Experience with AWS cloud services: EC2, EMR, RDS, Redshift, etc.
Experience with stream-processing systems: Storm, Spark Streaming, etc.
Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.