As a Data Engineering Intern, you will learn about data architecture and tools while assisting with tasks such as data entry, pipeline maintenance, and documentation. You'll also gain hands-on experience through workshops and collaborative projects.
Requirements
- Recently completed a degree in Data Science
- Relevant certifications in Data Engineering preferred
- Strong interest in data engineering and analytics
- Familiarity with SQL and Python programming languages
- Knowledge of PySpark or dbt is an advantage
- Interest in Azure Databricks and Azure Data Factory
- Familiarity with Azure DevOps or GitHub versioning tools
- Eager to learn about Data Mesh approaches
- Strong analytical and problem-solving skills
- Fluency in English; additional languages preferred
Your Tasks
- Familiarize yourself with Sika’s data architecture.
- Shadow team members to learn workflows.
- Assist with data extraction and entry.
- Get certified in Azure, Databricks, and dbt.
- Learn data engineering tools through tutorials.
- Help maintain and update ETL/ELT pipelines.
- Monitor data quality and perform validations.
- Support data preparation for analysis and reporting.
- Create and update documentation and templates.
- Participate in troubleshooting data pipeline issues.
- Attend workshops and collaborate on projects.
Original Description
## Job Description
Sika is expanding rapidly, and our data requirements are growing just as fast. To meet these needs, we are shaping a global strategy for how Sika manages and leverages analytical data, and we are building a state-of-the-art cloud-based data analytics platform.
Join our Central Data & Analytics Team as a Data Engineering Intern in Zürich, where you will have the opportunity to learn from experienced professionals and contribute to innovative solutions for data engineering and analytics. In this internship, you will gain hands-on experience with modern data technologies and help set the standard for how Sika utilizes data to drive business value.
What to Expect in Your First 6 Months
Month 1-2:
* Get familiar with Sika’s data architecture, technology stack, and main data sources (e.g., SAP S/4HANA, Salesforce, SAP ByDesign).
* Shadow team members to understand daily workflows and best practices in data engineering.
* Support the team with basic data extraction, data entry, and documentation tasks.
* Get certified in technologies such as Azure, Databricks, and dbt.
Month 3-4:
* Learn to use key data engineering tools such as Azure Databricks, dbt, and Azure Data Factory through guided tutorials and hands-on exercises.
* Assist in maintaining and updating simple ETL/ELT pipelines under supervision.
* Help monitor data quality and perform basic data validation checks.
Month 5 onwards:
* Support teammates in preparing data for analytics and reporting use cases.
* Help create and update documentation, templates, and user guides for data processes.
* Participate in troubleshooting sessions and learn how to resolve common data pipeline issues.
* Continue to build your skills by attending internal workshops and collaborating on small-scale data projects with guidance from mentors.
## Qualifications
* Recently completed a degree in Data Science, Computer Science, Management of Information Systems, or a related field; relevant certifications in Data Engineering are a plus.
* Strong interest in data engineering and analytics; relevant coursework or project experience is a plus.
* Familiarity with programming languages such as SQL and Python; knowledge of PySpark or dbt is an advantage.
* Interest in cloud platforms and services (e.g., Azure Databricks, Azure Data Factory); prior experience is a plus but not required.
* Familiarity with code versioning tools such as Azure DevOps or GitHub is a plus.
* Eager to learn about domain-oriented data management approaches, such as Data Mesh.
* Strong analytical and problem-solving skills.
* Fluency in English; proficiency in additional languages is a plus.