
Protect yourself against recruitment fraud. We are aware that unauthorized individuals may impersonate Cargill recruiters, making contact about job opportunities and extending job offers via text message, instant message, or chat rooms. To confirm that a job posting is legitimate, it must be listed on Cargill.com/Careers. Learn how to protect yourself from recruitment fraud.

Data Engineer, Data Analytics & Reporting - Ag & Trading

Apply now
Job ID 319943 | Date posted 01/12/2026 | Location: Bangalore, India | Category: Digital Data & Technology | Job Status: Salaried Full Time

Job Purpose and Impact

  • The Professional, Data Engineering job designs, builds and maintains moderately complex data systems that enable data analysis and reporting. With limited supervision, the role holder collaborates to ensure that large sets of data are efficiently processed and made accessible for decision making.

Key Accountabilities

  • DATA & ANALYTICAL SOLUTIONS: Develops moderately complex data products and solutions using advanced data engineering and cloud based technologies, ensuring they are designed and built to be scalable, sustainable and robust.
  • DATA PIPELINES: Maintains and supports the development of streaming and batch data pipelines that facilitate the seamless ingestion of data from various sources, transform the data into information, and move it into data stores such as data lakes and data warehouses; a minimal batch sketch follows this list.
  • DATA SYSTEMS: Reviews existing data systems and architectures to implement the identified areas for improvement and optimization.
  • DATA INFRASTRUCTURE: Helps prepare data infrastructure to support the efficient storage and retrieval of data.
  • DATA FORMATS: Implements appropriate data formats to improve data usability and accessibility across the organization.
  • STAKEHOLDER MANAGEMENT: Partners with multi-functional data and advanced analytic teams to collect requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
  • DATA FRAMEWORKS: Builds moderately complex prototypes to test new concepts and implements data engineering frameworks and architectures to support the improvement of data processing capabilities and advanced analytics initiatives.
  • AUTOMATED DEPLOYMENT PIPELINES: Implements automated deployment pipelines to improve the efficiency of code deployments with fit-for-purpose governance.
  • DATA MODELING: Performs moderately complex data modeling aligned with the datastore technology to ensure sustainable performance and accessibility.
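
To make the pipeline and data-lake accountabilities concrete, here is a minimal batch sketch in PySpark. It is illustrative only: the S3 paths, table layout, and column names (trade_id, quantity, price, trade_date) are assumptions, not a description of Cargill's systems.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical batch job: ingest raw trade CSVs, standardize types,
    # and land partitioned Parquet in a curated data-lake zone.
    spark = SparkSession.builder.appName("trades-batch-ingest").getOrCreate()

    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("s3://example-raw-zone/trades/")          # placeholder source path
    )

    curated = (
        raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
           .withColumn("notional", F.col("quantity") * F.col("price"))
           .dropDuplicates(["trade_id"])               # keep re-runs idempotent
    )

    (
        curated.write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("s3://example-curated-zone/trades/")  # placeholder lake path
    )

Partitioning on trade_date lets downstream queries prune their scans to only the dates they touch, one common way to keep such jobs scalable and sustainable.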

Qualifications

  • Minimum of 3 years of relevant work experience.
  • Big Data Technologies: Hands-on experience with the Hadoop ecosystem (HDFS, Hive, MapReduce) and distributed processing frameworks like Apache Spark (including PySpark and Spark SQL) for large-scale batch and streaming workloads.
  • Programming Expertise: Strong proficiency in Python (data manipulation, orchestration, and automation), Scala (Spark-based development), and advanced SQL (window functions, CTEs, query optimization) for high-volume analytical queries; a SQL sketch follows this list.
  • Data Pipeline Development: Proven ability to design, build, and optimize ETL/ELT pipelines for batch and real-time ingestion using tools/frameworks such as Spark Structured Streaming, Kafka Connect, Airflow/Azure Data Factory, or Glue, with robust error handling, observability, and SLAs; a streaming sketch follows this list.
  • Cloud & Data Warehousing: Hands-on experience with modern data warehouses such as Snowflake and with lakehouse architectures.
  • Transactional Data Systems: Experience with transaction management (isolation levels, locking, concurrency), backup/restore, replication (logical/physical), and high availability (Patroni, PgBouncer, read replicas); a transaction-isolation sketch follows this list.
  • Data Governance & Security: Understanding and implementation of data quality frameworks (DQ checks, Great Expectations/Deequ), metadata management (Glue/Azure Purview), role-based access control and row/column-level security, encryption, and compliance-aligned data handling (PII masking, auditability); a data-quality sketch follows this list.
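
As one hedged illustration of the advanced SQL expectation above, the sketch below runs a CTE plus a window function through Spark SQL; the daily_prices view and its rows are invented for the example.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-window-demo").getOrCreate()

    # Invented in-memory table standing in for a warehouse table.
    spark.createDataFrame(
        [("corn", "2026-01-10", 410.5), ("corn", "2026-01-11", 412.0),
         ("wheat", "2026-01-10", 598.2), ("wheat", "2026-01-11", 595.7)],
        ["commodity", "trade_date", "close_price"],
    ).createOrReplaceTempView("daily_prices")

    # CTE + LAG window function: day-over-day price change per commodity.
    spark.sql("""
        WITH ordered AS (
            SELECT commodity,
                   trade_date,
                   close_price,
                   LAG(close_price) OVER (
                       PARTITION BY commodity ORDER BY trade_date
                   ) AS prev_close
            FROM daily_prices
        )
        SELECT commodity, trade_date, close_price - prev_close AS day_change
        FROM ordered
        WHERE prev_close IS NOT NULL
    """).show()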
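
For the real-time ingestion bullet, a minimal Spark Structured Streaming sketch reading from Kafka might look like the following. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and message schema are all placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

    # Assumed JSON message schema for the placeholder "trades" topic.
    schema = StructType([
        StructField("trade_id", StringType()),
        StructField("commodity", StringType()),
        StructField("price", DoubleType()),
    ])

    stream = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "trades")                     # placeholder topic
        .load()
        .select(F.from_json(F.col("value").cast("string"), schema).alias("m"))
        .select("m.*")
    )

    # Land micro-batches as Parquet; the checkpoint makes restarts safe.
    query = (
        stream.writeStream
        .format("parquet")
        .option("path", "s3://example-curated-zone/trades_stream/")
        .option("checkpointLocation", "s3://example-checkpoints/trades_stream/")
        .outputMode("append")
        .start()
    )
    query.awaitTermination()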
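
On the transactional side, the sketch below shows one way to run a unit of work under SERIALIZABLE isolation with psycopg2; the connection string and the accounts table are hypothetical, and a reachable PostgreSQL instance is assumed.

    import psycopg2
    from psycopg2 import extensions

    # Placeholder DSN; assumes a local PostgreSQL instance.
    conn = psycopg2.connect("dbname=example user=example host=localhost")
    conn.set_session(isolation_level=extensions.ISOLATION_LEVEL_SERIALIZABLE)

    try:
        with conn:                      # commits on success, rolls back on error
            with conn.cursor() as cur:
                # Hypothetical two-step transfer; under SERIALIZABLE, a
                # conflicting concurrent transaction raises a serialization
                # failure that the caller should catch and retry.
                cur.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (100, 1),
                )
                cur.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (100, 2),
                )
    finally:
        conn.close()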
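
Frameworks such as Great Expectations and Deequ formalize declarative checks; rather than pin either framework's API, here is a hand-rolled PySpark sketch of the same idea (completeness, uniqueness, and a range check) over a placeholder dataset.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-checks-demo").getOrCreate()
    df = spark.read.parquet("s3://example-curated-zone/trades/")  # placeholder

    # Assertions of the shape a DQ framework would declare.
    total = df.count()
    checks = {
        "trade_id_not_null": df.filter(F.col("trade_id").isNull()).count() == 0,
        "trade_id_unique": df.select("trade_id").distinct().count() == total,
        "price_positive": df.filter(F.col("price") <= 0).count() == 0,
    }

    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        # Fail the run so bad data never reaches downstream consumers.
        raise ValueError(f"Data quality checks failed: {failed}")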

Preferred Skills

  • Experience with Apache Kafka or similar platforms for real-time data streaming.
  • Exposure to CI/CD pipelines, containerization (Docker), and orchestration tools (Kubernetes) for data workflows.
  • Understanding of supply chain analytics, commodity trading data flows, and risk management metrics (ideal for agri commodities industry).
  • Ability to collaborate with data scientists on predictive modeling and machine learning pipelines.
Apply now

LinkedIn job matching

Find your place at Cargill. Connect your LinkedIn profile to find jobs that match your skills and experience.

Find your ideal fit

Sustainable cocoa

The Cargill Cocoa Promise is committed to securing a thriving cocoa sector for generations to come.

Learn more

Diversity, Equity and Inclusion

Our culture of inclusion helps us shape the future of the world.

Learn more

Life at Cargill

Discover how you can reach your highest purpose with a career at Cargill.

Learn more

View all of our available opportunities
