Sr. Data Engineer
Job Purpose and Impact
The Sr. Data Engineer will design, build and operate high-performance, data-centric solutions using the comprehensive big data capabilities of the company's data platform environment. In this role, you will act as an authority on data access pathways and techniques, working with analysts within the functional data analytics team. You will design data structures and pipelines to collect data, and design and implement data transformations, combinations and aggregations.
Key Accountabilities
Collaborate with businesses, application and process owners, and product team members to define requirements and design solutions for the company's big data and analytics solutions.
Participate in the decision-making process related to architecting solutions.
Develop technical solutions utilizing big data and cloud-based technologies, ensuring they are designed and built to be sustainable and robust.
Perform data modeling and prepare data in databases for use in various analytics tools, and configure and develop data pipelines to move and optimize data assets.
Provide necessary technical support through all phases of the solution life cycle.
Build prototypes to test new concepts and be a key contributor of ideas and code that improve the core software infrastructure, patterns and standards.
Help drive the adoption of new technologies and methods within the functional data and analytics team and be a role model and mentor for data engineers.
Independently handle complex issues with minimal supervision, while escalating only the most complex issues to appropriate staff.
Other duties as assigned.
Qualifications
Minimum Qualifications
Bachelor's degree in a related field or equivalent experience.
Minimum of four years of related work experience.
Advanced English skills, both oral and written.
Preferred Qualifications
Experience with data collection and ingestion capabilities, such as AWS Glue, Kafka Connect, Flink and others.
Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging, with tools such as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu or others.
Experience with transformation and modeling tools, including SQL-based transformation, orchestration and quality frameworks such as dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie and others.
Experience working in big data environments, including tools such as Hadoop and Spark.
Experience working in cloud platforms such as AWS, GCP or Azure.
Experience with streaming and stream-integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis.
Strong programming knowledge of SQL, Python, R, Java, Scala or equivalent.
Proficiency with engineering tooling such as Docker, Git, and container orchestration services.
Strong experience working in DevOps models, with a demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies.
Experience with and knowledge of data governance considerations, including quality, privacy, and security implications for data product development and consumption.