On-device ML Infrastructure Engineer (ML Compiler)
Apple
Onsite (Cupertino, California)
Mid Level
Posted 3 weeks ago
Skills
MLIR
C++
GPU Architecture
Compiler Optimizations
System Software Engineering
PyTorch
Kernel Writing
Debugging Toolchains
Model Compression
Acceleration
Runtime Development
Profiling
Transformation
Neural Engine
CPU
About the Role
Imagine being at the forefront of an evolution where innovative AI meets the elegance of Apple silicon. The On-Device Machine Learning team transforms groundbreaking research into practical applications, enabling billions of Apple devices to run powerful AI models locally, privately, and efficiently. We stand at the unique intersection of research, software engineering, hardware engineering, and product development, making Apple the leading destination for machine learning innovation.
Our team builds the essential infrastructure that enables machine learning at scale on Apple devices. This involves onboarding powerful architectures to embedded systems, developing optimization toolkits for model compression and acceleration, building ML compilers and runtimes for efficient execution, and creating comprehensive benchmarking and debugging toolchains. This infrastructure forms the backbone of Apple’s machine learning workflows across Camera, Siri, Health, Vision, and other core experiences, contributing to the overall Apple Intelligence ecosystem.
If you are passionate about the technical challenges of running sophisticated ML models across all devices, from resource-constrained devices to powerful clusters, and eager to directly impact how machine learning operates across the Apple ecosystem, this role presents a great opportunity to work on the next generation of intelligent experiences on Apple platforms.
Our group is looking for an ML Infrastructure Engineer with a focus on model compilation. The role entails working closely with model authoring, runtime, and performance teams to ensure that models take full advantage of the hardware's capabilities.
Description
We’re building an end-to-end developer experience for machine learning that leverages Apple’s vertical integration, allowing developers to iterate on model authoring, optimization, transformation, execution, debugging, profiling, and analysis. This role focuses on the core runtime for execution across a wide variety of devices and use cases. We’re seeking a highly motivated software engineer who is creative, versatile, and passionate about machine learning, compiler optimizations, and system software engineering in this fast-paced and dynamic field. We have an MLIR-based compiler stack and use it to target the Neural Engine, GPU, and CPU in order to harness the full capabilities of the system for ML workflows and execution.
Minimum Qualifications
Knowledge of GPU architecture and programming paradigms (e.g., CUDA, Triton, or equivalent)
3-5 years working on MLIR-based compilers
Familiarity with common ML model architectures, execution schemes, and operations
Fluency in C++
Familiarity with PyTorch or related training frameworks
Preferred Qualifications
Knowledge of other ML frameworks and ML pipelines
Familiarity with Swift
Familiarity with programming paradigms for the GPU, CPU, and Neural Engine
Familiarity with writing kernels for ML model execution