Senior Data Engineer (m/w/x)
Architecting and scaling Databricks lakehouse solutions with Delta Lake and the Medallion architecture for reputational risk data. Proven dimensional data modelling and Data Vault experience required. Flexible working hours and state-of-the-art open-source technologies.
Requirements
- Bachelor’s degree in computer science or a related STEM field
- 5+ years hands-on Data Engineering experience
- Strong Python and SQL proficiency
- Solid batch and stream processing experience
- Proven dimensional data modelling and Data Vault experience
- Experience with data orchestration tools (Airflow or Dagster)
- Familiarity with data quality and validation frameworks
- Experience integrating metadata tools (Collibra, OpenMetadata)
- Strong understanding of Git and CI/CD pipelines
- Experience with cloud platforms (AWS preferred)
- Practical data lakehouse experience (Databricks, Snowflake)
- Proactive mindset with strong ownership, initiative, and drive
- Strong communication skills and professional English proficiency
- Experience configuring workflows in BPM software (Camunda)
- Experience working with ML teams and familiarity with ML/DL/NLP concepts
- Valid work permit
Responsibilities
- Architect, build, and scale modern data platforms
- Lead design and delivery of enterprise data infrastructure
- Implement Databricks lakehouse solutions using Delta Lake, Unity Catalog, and the Medallion architecture (Bronze/Silver/Gold)
- Design, build, and maintain scalable ELT pipelines with Apache Spark, Databricks Workflows, and Delta Live Tables
- Develop and optimize high-throughput streaming and batch pipelines using Spark Structured Streaming and Auto Loader (a minimal sketch follows this list)
- Tune the performance and optimize the costs of the Databricks data platform
- Govern Databricks cluster and compute resources
- Define and enforce data contracts and schemas
- Enforce data governance standards through Unity Catalog and Delta Lake
- Ensure data quality, observability, and lineage using Databricks observability tooling and Great Expectations (see the validation sketch after the tools list)
- Collaborate with data scientists, analysts, and platform teams
- Deliver reliable, self-serve data products
- Establish and champion data engineering best practices, standards, and reusable frameworks
- Stay current with the Databricks ecosystem, lakehouse architecture trends, and emerging data engineering patterns
- Participate in code reviews, maintaining high standards for code quality, performance, and security
- Engage in Agile/Scrum ceremonies, contributing architectural insights and technical direction to the team
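The duties above center on Auto Loader, Spark Structured Streaming, and Delta Lake arranged in a Medallion layout. A minimal sketch of that flow, assuming a Databricks runtime (ambient spark session, "cloudFiles" source available); all paths, table names, and columns (risk_events, event_id) are hypothetical:

```python
# Minimal Medallion-style sketch (hypothetical paths, tables, and columns).
# Assumes a Databricks runtime, where the `spark` session and the Auto Loader
# ("cloudFiles") source are available.
from pyspark.sql import functions as F

# Bronze: incrementally ingest raw JSON files with Auto Loader into Delta.
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/chk/risk_events/schema")
    .load("/mnt/raw/risk_events/")  # hypothetical landing zone
)

query = (
    bronze_stream
    .withColumn("ingested_at", F.current_timestamp())
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/risk_events/bronze")
    .trigger(availableNow=True)  # process all available data, then stop
    .toTable("lakehouse.bronze.risk_events")
)
query.awaitTermination()  # wait for the incremental run to finish

# Silver: deduplicate and enforce a basic contract on top of Bronze.
(
    spark.read.table("lakehouse.bronze.risk_events")
    .dropDuplicates(["event_id"])
    .filter(F.col("event_id").isNotNull())
    .write.format("delta")
    .mode("overwrite")
    .saveAsTable("lakehouse.silver.risk_events")
)
```

In practice these steps would be scheduled through Databricks Workflows or expressed as Delta Live Tables rather than run as a single script; the sketch only shows the Bronze/Silver hand-off the bullet points describe.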
Professional Experience
- 5 years
Education
- Bachelor’s degree
Languages
- English – full professional proficiency
Tools & Technologies
- Python
- SQL
- AWS Glue
- dbt
- Kafka
- Airflow
- Dagster
- Great Expectations
- SODA
- Collibra
- OpenMetadata
- Git
- CI/CD pipelines
- AWS
- Databricks
- Snowflake
- Camunda
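Great Expectations and SODA appear in both the duties and this tools list. A small validation sketch, assuming the classic (pre-1.0) pandas-based Great Expectations API and invented sample data; newer releases expose a different, context-based API:

```python
# Column-level checks with the classic (pre-1.0) Great Expectations API.
import pandas as pd
import great_expectations as ge

# Hypothetical extract of a Silver-layer table.
df = pd.DataFrame({
    "event_id": ["e1", "e2", "e3"],
    "severity": [1, 2, 3],
})

ge_df = ge.from_pandas(df)

# Contract-style expectations: keys are present, severity stays in range.
ge_df.expect_column_values_to_not_be_null("event_id")
ge_df.expect_column_values_to_be_between("severity", min_value=1, max_value=3)

result = ge_df.validate()
print(result.success)  # True when every expectation passes
```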
Benefits
Flexible Working
- Flexible working hours
Training Opportunities
- Skill development
Other Benefits
- Team support
Startup Atmosphere
- Agile development ecosystem
Modern Tech Equipment
- State-of-the-art open-source technologies
Relaxed Company Culture
- Entrepreneurial, international, and dynamic work environment
- Diverse company culture
Meaningful Work
- Shared mission
About the Company
RepRisk AG
Industry
Financial Services
Description
RepRisk is the world’s most respected Data as a Service (DaaS) company for reputational risks and responsible business conduct.