At DB Schenker, you are part of a global logistics network that connects the world. A network that allows you to shape your career by encouraging you to contribute and truly make a difference. With more than 76,000 colleagues worldwide, we welcome diversity and thrive on individual backgrounds, perspectives and skills. Together as one team, we are Here to move.
The Global Data & AI department is on a mission to turn DB Schenker into a data-driven company. Our Data Engineering team focuses on designing advanced analytics solutions to solve potential business use cases in logistics using Machine Learning, AI techniques, Data Visualization tools and Big Data technologies in cooperation with our Software Engineering team, internal business units and global IT. We are looking for a talented Data Engineer who can contribute to our projects with solid data intuition, hands-on problem-solving skills, engineering mindset and eagerness to learn about our logistics business data.
Your tasks:
- Analyze data and design, code, test, debug, automate, document, and maintain data solutions
- Support building, improving, and maintaining a cloud-native data lake platform
- Integrate machine learning and operations research solutions into the DB Schenker system landscape
- Support and interact with data scientists, operations research specialists, and business consultants in all their data-related activities
Your profile:
- Experience working with distributed computing tools (Spark, Flink, Hadoop, etc.)
- Knowledge of cloud platforms (ideally Azure, alternatively AWS or GCP) and their data and machine learning services
- Fluency in at least one programming language such as Python or Scala
- Experience with relational databases (Oracle, PostgreSQL) and good knowledge of SQL
- Fluency in POSIX systems (e.g., Linux, macOS) and the command-line terminal
- Experience with orchestration / data pipelining tools like Argo, Azure Data Factory, Fivetran, Airflow etc.
- Experience in delivering software and of the software development life cycle: source code repositories (Git) and versioning/branching/peer reviewing, continuous integration (e.g., GitLab CI, Jenkins, Travis), deployment/release (e.g., artifact building and repositories), maintenance
- Experience with Machine Learning Operations processes (data versioning, model versioning, ...) and tools (MLFlow, DVC, Azure Machine Learning, SageMaker)
- Familiarity with cloud data warehouse and/or data lakehouse approaches (Databricks, Azure Synapse, Snowflake)
- Knowledge of front-end analysis tools (e.g., Microsoft Power BI, Tableau)
- Good knowledge of an analysis framework such as pandas or Spark DataFrames
- Experience with advanced data acquisition methods (Change Data Capture, streams, APIs)
- Experience in metadata management and data cataloging
Benefits:
- With seminars, training courses, and further qualifications, we offer our employees (m/f/d) individual and long-term development and career opportunities at the professional, project, and management levels.
- We follow the principle of trust-based working time and, by agreement, also offer regular work from home.
- We take social responsibility for our employees (m/f/d). DB Schenker offers comprehensive workplace health management with programs for health promotion and prevention.