Software Engineer, AI Data Infrastructure (m/w/x)
Description
In this role, you will maintain and evolve a robust data platform for AI pipelines. Your work will involve designing data ingestion processes, building APIs, and ensuring data integrity while collaborating with research teams.
Requirements
- 4–6 years of professional experience with Python and Go
- Comfort with containerization and orchestration tools like Docker and Kubernetes
- Experience with AWS services (S3, KMS, IAM) and Terraform
- Skilled in designing and operating data ingestion and transformation workflows
- Familiarity with CI/CD pipelines and version control practices, ideally using GitHub Actions
- Commitment to building secure and observable systems
- Ability to build lightweight internal dashboards using frameworks like Streamlit or Next.js
- Obsession with reliability, observability, and data governance
- Strong fundamentals in data modeling and schema design
- Comfortable working with NoSQL systems like MongoDB
- Experience with event-driven data pipelines using SQS, SNS, Lambda, or Step Functions (a minimal sketch of this pattern follows the list)
- Knowledge of data partitioning strategies and large-scale dataset optimization
- Familiarity with metadata management and dataset versioning
- Exposure to monitoring and alerting stacks such as Datadog or Prometheus
- Proficiency in Rust, or interest in learning it
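Several of these requirements converge on event-driven ingestion. As a minimal sketch of that pattern (an illustration, not code from this role; the bucket name, key prefix, and required field are hypothetical), here is an SQS-triggered AWS Lambda handler in Python that validates each record and lands it in S3:

```python
import json

import boto3

s3 = boto3.client("s3")

BUCKET = "example-raw-data"  # hypothetical bucket name
PREFIX = "ingest/"           # hypothetical key prefix


def handler(event, context):
    """Validate each SQS record in the batch and write its payload to S3."""
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # Minimal integrity check before landing the record.
        if "id" not in payload:
            raise ValueError("record missing required 'id' field")
        key = f"{PREFIX}{payload['id']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload))
```

In a typical setup of this kind, the queue, bucket, and the Lambda's IAM permissions would be declared in Terraform rather than created by hand.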
Work Experience
4–6 years
Tasks
- Design and maintain ingestion pipelines for data transfer
- Develop transformation processes for clean, structured tables
- Build internal APIs and backend services for data access
- Instrument systems with metrics and automated checks
- Contribute to MLOps tooling for dataset monitoring
- Improve CI/CD pipelines and integration tests
- Build lightweight dashboards for internal data access (see the sketch after this list)
- Design scalable, fault-tolerant real-time data pipelines
- Own the lifecycle of critical data assets with tracking and control
- Work with structured and semi-structured data sources
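For the dashboarding task above, a hedged sketch of what "lightweight" can mean in practice: a single-file Streamlit page that loads a dataset and shows a couple of summary views. The Parquet path is a stand-in for whatever internal data-access API the team exposes.

```python
import pandas as pd
import streamlit as st

st.title("Dataset overview")

# Stand-in for an internal data-access service; a local Parquet file here.
df = pd.read_parquet("datasets/events.parquet")  # hypothetical path

st.metric("Rows", f"{len(df):,}")
st.dataframe(df.head(100))  # quick browse of the first rows
```

Saved as `dashboard.py`, this runs with `streamlit run dashboard.py` and needs only `pandas`, `pyarrow`, and `streamlit` installed.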
Languages
English – Business Fluent
Benefits
Healthcare & Fitness
- Healthcare
- Dental
Other Benefits
- Vision
- Life insurance
Retirement Plans
- 401(k) plan and match
Flexible Working
- Flexible time off
Parking & Commuter Benefits
- Commuter benefits
About the Company
Tools for Humanity
Industry
IT
Description
The company is built on privacy-preserving proof-of-human technology and enables the free flow of digital assets for all.