You will develop and optimize fine-tuning methodologies for AI models, curating effective training data and collaborating across teams to ensure successful deployment and ongoing improvement.
Requirements
- Degree in Computer Science or a related field
- PhD in NLP or Machine Learning preferred
- Hands-on experience with fine-tuning experiments
- Deep understanding of fine-tuning methodologies
- Strong expertise in PyTorch and Hugging Face
- Ability to apply empirical research to fine-tuning
Your Responsibilities
- Develop and implement fine-tuning methodologies.
- Build, run, and monitor fine-tuning experiments.
- Document results and compare against benchmarks.
- Identify and process high-quality datasets.
- Set criteria for data curation impact.
- Debug and optimize the fine-tuning process.
- Analyze computational and model performance metrics.
- Collaborate with teams to deploy models.
- Define success metrics and monitor improvements.
Original Description
## AI Research Engineer (Fine-tuning)
**About the job:**
As a member of the AI model team, you will drive innovation in supervised fine-tuning methodologies for advanced models. Your work will refine pre-trained models so that they deliver enhanced intelligence, optimized performance, and domain-specific capabilities designed for real-world challenges. You will work on a wide spectrum of systems, ranging from streamlined, resource-efficient models that run on limited hardware to complex multi-modal architectures that integrate data such as text, images, and audio.
We expect you to have deep expertise in large language model architectures and substantial experience in fine-tuning optimization. You will adopt a hands-on, research-driven approach to developing, testing, and implementing new fine-tuning techniques and algorithms. Your responsibilities include curating specialized data, strengthening baseline performance, and identifying as well as resolving bottlenecks in the fine-tuning process. The goal is to unlock superior domain-adapted AI performance and push the limits of what these models can achieve.
**Responsibilities**:
* Develop and implement novel, state-of-the-art fine-tuning methodologies for pre-trained models against clear performance targets.
* Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare against benchmark datasets.
* Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks.
* Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics.
* Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation.
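The experiment loop outlined above (build, run, monitor, document iterative results) can be sketched in plain PyTorch. This is a minimal illustration, not the team's actual setup: the toy regression task, the `run_experiment` helper, and the loss-per-epoch KPI are all assumptions standing in for a real pre-trained checkpoint and a domain-specific benchmark.

```python
# Minimal sketch of a controlled fine-tuning experiment in plain PyTorch.
# All names and the toy task are illustrative; a real pipeline would load
# a pre-trained checkpoint and evaluate against a benchmark dataset.
import torch
import torch.nn as nn

def run_experiment(model, dataset, epochs=20, lr=1e-2):
    """Fine-tune `model` on `dataset`, tracking loss per epoch (a KPI)."""
    xs, ys = dataset
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    history = []
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        opt.step()
        history.append(loss.item())  # document iterative results
    return history

torch.manual_seed(0)
# Stand-in for a "pre-trained" model: a small linear regressor.
model = nn.Linear(4, 1)
xs = torch.randn(64, 4)
ys = xs @ torch.tensor([[1.0], [-2.0], [0.5], [3.0]]) + 0.1 * torch.randn(64, 1)
history = run_experiment(model, (xs, ys))
print(f"loss: {history[0]:.3f} -> {history[-1]:.3f}")
```

In practice the logged history would feed an experiment tracker so runs can be compared against benchmark baselines rather than inspected by hand.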
## Requirements
* A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A\* conferences).
* Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance.
* Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows.
* Strong expertise in PyTorch and Hugging Face libraries with practical experience in developing fine-tuning pipelines, continuously adapting models to new data, and deploying these refined models in production on target platforms.
* Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks. You should be comfortable designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance.
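The last requirement mentions designing evaluation frameworks to overcome fine-tuning bottlenecks. A minimal sketch of such a framework, comparing model variants against a shared benchmark, might look like the following; the `evaluate`/`pick_best` helpers, the accuracy-only scoring, and the toy arithmetic benchmark are illustrative assumptions.

```python
# Hedged sketch of a tiny evaluation framework for comparing fine-tuned
# model variants on a fixed benchmark. Metric choice (accuracy only) and
# all names are assumptions, not a prescribed design.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class EvalResult:
    name: str
    accuracy: float

def evaluate(name: str,
             predict: Callable[[str], str],
             benchmark: List[Tuple[str, str]]) -> EvalResult:
    """Score one model variant on (input, expected) benchmark pairs."""
    correct = sum(predict(x) == y for x, y in benchmark)
    return EvalResult(name, correct / len(benchmark))

def pick_best(results: List[EvalResult]) -> EvalResult:
    """Select the variant to iterate on in the next fine-tuning round."""
    return max(results, key=lambda r: r.accuracy)

# Toy benchmark and two stand-in "model variants".
bench = [("2+2", "4"), ("3+3", "6"), ("5+1", "6")]
baseline = lambda q: "4"            # weak baseline: constant answer
finetuned = lambda q: str(eval(q))  # toy stand-in that solves the task
results = [evaluate("baseline", baseline, bench),
           evaluate("finetuned", finetuned, bench)]
best = pick_best(results)
print(best.name, best.accuracy)
```

A real framework would add per-domain metric breakdowns and statistical significance checks before declaring one variant the winner, which is where the "iterating on algorithmic improvements" loop comes in.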