
POSHMARK is hiring for Software Engineer II - Data Engineering | Apply Now

Role:

Software Engineer II Data Engineering

Location:

Chennai, Tamil Nadu, India

JOB DESCRIPTION:

The Data Engineering team at Poshmark is looking for an experienced software engineer to own Poshmark's growth data, ensuring real-time access to quality data for all stakeholders. The role requires a strong understanding of software engineering best practices and excellent software development skills to build and maintain real-time and batch data pipelines with a focus on scalability and optimization. In addition, the role requires collaborating with the Data Science, Analytics, and other Engineering teams to build new ETL pipelines that analyze terabytes of data.

What you will do

    • Design, develop, and maintain growth data pipelines, and integrate paid media sources such as Facebook and Google to drive business insights.
    • Build highly scalable, available, fault-tolerant data processing systems using AWS technologies, Kafka, Spark, and other big data technologies. These systems handle batch and real-time processing of hundreds of terabytes of data ingested every day and a petabyte-scale data warehouse (a minimal sketch of such a pipeline follows this list).
    • Architect, design, and develop critical data pipelines at Poshmark.
    • Productionize ML models in collaboration with the Data Science and Engineering teams.
    • Maintain and support existing platforms and evolve them toward newer technology stacks and architectures.
    • Participate in and contribute to continuously improving development best practices.
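
The pipeline responsibilities above center on Kafka and Spark. As a rough, hypothetical illustration only (not part of the posting), the Scala snippet below sketches a minimal Spark Structured Streaming job that reads events from a Kafka topic and lands micro-batches in object storage; the application name, topic, broker address, and S3 paths are invented placeholders.

    // Minimal sketch, assuming Spark Structured Streaming with the Kafka source connector.
    // The topic, broker address, and S3 paths below are hypothetical placeholders.
    import org.apache.spark.sql.SparkSession

    object GrowthEventsPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("growth-events-pipeline")
          .getOrCreate()

        // Read the raw event stream from Kafka (placeholder topic and brokers).
        val rawEvents = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "growth-events")
          .load()

        // Kafka delivers the payload as binary; cast it to a string for downstream parsing.
        val events = rawEvents.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

        // Write micro-batches to Parquet with checkpointing for fault tolerance.
        val query = events.writeStream
          .format("parquet")
          .option("path", "s3://example-bucket/growth-events/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/growth-events/")
          .start()

        query.awaitTermination()
      }
    }

A production job would typically parse the payload against a schema and partition the output, but the basic shape (streaming source, transformation, checkpointed sink) is the core of the batch and real-time pipelines described above.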

Education Required:

  • GRADUATE

Skills:

  • Excellent technical problem-solving using data structures and algorithms, with an emphasis on optimization and code quality.
  • 2+ years of relevant software engineering experience with object-oriented programming languages such as Scala, Java, Ruby, Python, or C++.
  • Expertise in architecting and building large-scale data processing systems using big data technologies such as Spark, Hadoop, EMR, Kafka/Kinesis, Flink, and Druid.
  • Expertise in SQL, with knowledge of a data warehouse technology such as Redshift.
  • Expertise in Google Apps Script, Databricks, or API integrations is a plus.
  • Be self-driven, take complete ownership of initiatives, make pragmatic technical decisions, and collaborate with cross-functional teams.

Get instant updates on the latest jobs! Join our WhatsApp and Telegram groups.

SALARY:

AS PER COMPANY NORMS

HOW TO CREATE YOUR RESUME