Research Engineer
Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.
About the Role
As a Research Engineer, you'll work across the full foundation-model stack: pretraining and scaling, post-training and reinforcement learning, sandbox environments for evaluation and agentic learning, and deployment and inference optimization. You'll build and iterate quickly on research ideas, contribute production-grade infrastructure, and help deliver models that can serve real-world use cases at scale.
What You'll Work On
This role spans multiple tracks; candidates may focus on one or contribute across several. Examples include:
Pretraining & Scaling
- Train large byte-native foundation models across massive, heterogeneous corpora
- Design stable training recipes and scaling laws for novel architectures
- Improve throughput, memory efficiency, and utilization on large GPU clusters
- Build and maintain distributed training infrastructure and fault-tolerant pipelines
Post-training & RL
- Develop post-training pipelines (SFT, preference optimization, RLHF / RLAIF, RL)
- Curate and generate targeted datasets to improve specific model capabilities
- Build reward models and evaluation frameworks to drive iterative improvement
- Explore inference-time learning and compute techniques to enhance performance
Sandbox Environments & Evaluation
- Build scalable sandbox environments for agent evaluation and learning
- Create realistic, high-signal automated evals for reasoning, tool use, and safety
- Design offline and online environments that support RL-style training at scale
- Instrument environments for observability, reproducibility, and iteration speed
Deployment & Inference Optimization
- Optimize inference throughput and latency for byte-native architectures
- Build high-performance serving pipelines (KV caching, batching, quantization, etc.)
- Improve end-to-end model efficiency, cost, and reliability in production
- Profile and optimize GPU kernels, runtime bottlenecks, and memory behavior
Ideal Candidate Credentials
Technical strength
- Strong general software engineering skills (writing robust, performant systems)
- Experience with training or serving large neural networks (LLMs or similar)
- Solid grasp of deep learning fundamentals and modern literature
- Comfort working in high-performance environments (GPU, distributed systems, etc.)
Relevant experience (one or more)
- Pretraining / large-scale distributed training (FSDP, ZeRO, or Megatron-style systems)
- Post-training pipelines (SFT, RLHF / RLAIF, preference optimization, eval loops)
- Building RL environments, simulators, or agent frameworks
- Inference optimization, model compression, quantization, kernel-level profiling
- Building large ETL pipelines for internet-scale data ingestion and cleaning
- Owning end-to-end production ML systems with monitoring and reliability
Research orientation
- Ability to propose and evaluate research ideas quickly
- Strong experimental hygiene: ablations, metrics, reproducibility, analysis
- Bias toward building: you can turn ideas into working code and results
Education
- MS or PhD in Computer Science, Machine Learning, AI, Mathematics, or a related field
Benefits Include
- Medical, dental, and vision insurance
- 401k plan
- Daily lunch, snacks, and beverages
- Flexible time off
- Competitive salary and equity
Equal Opportunity
Sciforium is an equal opportunity employer. All applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status.