Senior Data Pipeline Engineer (m/w/x)
Build and scale the data backbone of an AI-driven music label operations platform. 5+ years of experience with Python web crawling and pipeline orchestration required. Remote-friendly with flexible hours.
Requirements
- 5+ years building data pipelines and web crawling infrastructure
- Strong Python skills (Scrapy, BeautifulSoup, Selenium, Playwright, or equivalent)
- Experience with pipeline orchestration tools (Airflow, Prefect, or Dagster)
- Solid understanding of databases (PostgreSQL, MongoDB, or similar) and data modeling
- Experience handling messy, inconsistent data and building validation layers
- Comfort with cloud infrastructure (AWS, GCP, or Azure)
- Track record of building reliable, self-supervised systems
- Deep familiarity with AI tools and workflows
- Experience with headless browsers and anti-detection techniques at scale
- Familiarity with music industry, streaming platforms, or media data
- Experience with data warehousing (BigQuery, Snowflake, Redshift)
- Exposure to message queues (Kafka, RabbitMQ, Redis)
- Experience building internal tooling and dashboards
- Systems thinking for crawling/data ingestion infrastructure
- Experience handling sites with changing layouts, anti-bot systems, rate limits, inconsistent APIs, and missing data
- End-to-end pipeline ownership
- Self-directed with architectural decision-making
- Obsessive focus on data quality and validation checks
Responsibilities
- Inherit, own, and scale the company's data ingestion infrastructure and data backbone
- Improve the reliability of existing data infrastructure
- Expand data coverage by adding new data sources
- Design and build new data pipelines and evolve the data architecture
- Shape how external data is collected and processed
Work experience
- 5–7 years
Education
- Bachelor's degree OR
- Master's degree
Languages
- English – business fluent
Tools & Technologies
- Python
- Scrapy
- BeautifulSoup
- Selenium
- Playwright
- Airflow
- Prefect
- Dagster
- PostgreSQL
- MongoDB
- AWS
- GCP
- Azure
- BigQuery
- Snowflake
- Redshift
- Kafka
- RabbitMQ
- Redis
Benefits
Flexible working
- Remote-friendly
- Flexible hours
About the company
iGroove AG
Industry
Entertainment
Description
The company is revolutionizing the music distribution landscape by empowering artists to reach global audiences with ease and efficiency.