Senior Data Pipeline Engineer (m/w/x)
Build and scale the data backbone of an AI-driven music label operations platform. 5+ years of experience with Python web crawling and pipeline orchestration required. Remote-friendly with flexible hours.
Requirements
- 5+ years building data pipelines/web crawling infrastructure
- Strong Python skills (Scrapy, BeautifulSoup, Selenium, Playwright, or equivalent)
- Experience with pipeline orchestration tools (Airflow, Prefect, or Dagster)
- Solid understanding of databases (PostgreSQL, MongoDB, or similar) and data modeling
- Experience handling messy, inconsistent data and building validation layers (a concrete sketch follows this list)
- Comfort with cloud infrastructure (AWS, GCP, or Azure)
- Track record of building reliable, self-monitoring systems that run unattended
- Deep familiarity with AI tools and workflows
- Experience with headless browsers and anti-detection techniques at scale
- Familiarity with music industry, streaming platforms, or media data
- Experience with data warehousing (BigQuery, Snowflake, Redshift)
- Exposure to message queues (Kafka, RabbitMQ, Redis)
- Experience building internal tooling and dashboards
- Systems thinking for crawling/data ingestion infrastructure
- Experience dealing with changing site layouts, anti-bot systems, rate limits, inconsistent APIs, and missing data
- End-to-end pipeline ownership
- Self-directed with architectural decision-making
- Obsessive focus on data quality and validation checks
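As a rough illustration of the crawling-and-validation work these requirements describe (not part of the posting itself): a minimal Scrapy spider paired with a validation pipeline that drops incomplete records before they flow downstream. The spider name, URL, CSS selectors, and field names are hypothetical placeholders.

```python
# Minimal sketch: a Scrapy spider plus a validation pipeline.
# ReleaseSpider, the URL, the selectors, and the field names are
# hypothetical placeholders, not details from this posting.
import scrapy
from scrapy.exceptions import DropItem


class ReleaseSpider(scrapy.Spider):
    name = "releases"
    start_urls = ["https://example.com/releases"]

    def parse(self, response):
        # Yield one record per listing row.
        for row in response.css("div.release"):
            yield {
                "title": row.css("h2::text").get(),
                "artist": row.css(".artist::text").get(),
                "release_date": row.css(".date::text").get(),
            }
        # Follow pagination until the site runs out of pages.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)


class ValidationPipeline:
    """Drop records with missing required fields so inconsistent
    data never reaches storage. Register in ITEM_PIPELINES."""

    required = ("title", "artist")

    def process_item(self, item, spider):
        missing = [f for f in self.required if not item.get(f)]
        if missing:
            raise DropItem(f"missing fields: {missing}")
        return item
```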
Tasks
- Build, inherit, and own the company's data ingestion infrastructure end to end
- Scale the existing data infrastructure and improve its reliability
- Expand data coverage by adding new data sources
- Evolve the data architecture; design and build new pipelines (see the orchestration sketch after this list)
- Shape how external data is collected and processed
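To illustrate the orchestration side of these tasks (again, not from the posting): a skeletal Airflow DAG wiring a crawl → validate → load pipeline. The DAG id, schedule, and stub callables are hypothetical, and this assumes Airflow 2.4+ for the `schedule` argument; Prefect or Dagster would express the same dependency graph with their own primitives.

```python
# Skeletal Airflow DAG: crawl, validate, load. Only the
# DAG/operator wiring is standard Airflow; everything named
# here is a hypothetical placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def crawl():
    """Run the spiders and land raw records in staging."""


def validate():
    """Enforce the schema and quarantine bad rows."""


def load():
    """Upsert clean records into the warehouse."""


with DAG(
    dag_id="music_data_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    crawl_task = PythonOperator(task_id="crawl", python_callable=crawl)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Fail fast: validation gates the load step.
    crawl_task >> validate_task >> load_task
```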
Work Experience
- 5–7 years
Education
- Bachelor's degree OR
- Master's degree
Languages
- English – Business Fluent
Tools & Technologies
- Python
- Scrapy
- BeautifulSoup
- Selenium
- Playwright
- Airflow
- Prefect
- Dagster
- PostgreSQL
- MongoDB
- AWS
- GCP
- Azure
- BigQuery
- Snowflake
- Redshift
- Kafka
- RabbitMQ
- Redis
Benefits
Flexible Working
- Remote-friendly
- Flexible hours
About the Company
iGroove AG
Industry
Entertainment
Description
The company is revolutionizing the music distribution landscape by empowering artists to reach global audiences with ease and efficiency.