Responsibilities
Architect and implement end-to-end AI pipelines, ensuring robust data ingestion, model training, and deployment.
Design scalable infrastructure using containers (Docker, Kubernetes) and cloud platforms (AWS, GCP, or Azure).
Automate CI/CD pipelines for efficient testing, deployment, and monitoring of ML models.
Collaborate with cross-functional teams (Data Science, DevOps, Product) to deliver impactful AI solutions.
Stay current on emerging AI trends and best practices (GPU/TPU acceleration, distributed computing, etc.).
Qualifications
Bachelor’s or Master’s in Computer Science, Engineering, or related field.
Proficiency with ML frameworks (TensorFlow, PyTorch, Scikit-learn) and programming languages like Python or Golang.
Experience with MLOps tools (Kubeflow, MLflow, Jenkins) and version control (Git).
Familiarity with big data tools (Spark, Hadoop, Kafka) and both SQL and NoSQL databases.
Strong problem-solving skills and passion for large-scale AI challenges.
Why Join
Contribute to innovative AI projects shaping multiple industries.
Enjoy competitive compensation and flexible engagement options.
Access a global network of like-minded professionals and continuous learning opportunities.
Sherlock loves to share $1,000 referral success bonuses!