High-performance AI compute engineer
- Location: San Jose, California, US
- Area of Interest: Engineer - Software
- Compensation Range: 279000 USD - 354600 USD
- Job Type: Professional
- Technology Interest: AI or Artificial Intelligence, Internet & Mass Scale Infrastructure
- Job Id: 1445895
Meet the Team
We are an innovation team on a mission to transform how enterprises harness AI. Operating with the agility of a startup and the focus of an incubator, we’re building a tight-knit group of AI and infrastructure experts driven by bold ideas and a shared goal: to rethink systems from the ground up and deliver breakthrough solutions that redefine what's possible — faster, leaner, and smarter.
We thrive in a fast-paced, experimentation-rich environment where new technologies aren’t just welcome — they’re expected. Here, you'll work side-by-side with seasoned engineers, architects, and thinkers to craft the kind of iconic products that can reshape industries and unlock entirely new models of operation for the enterprise.
If you're energized by the challenge of solving hard problems, love working at the edge of what's possible, and want to help shape the future of AI infrastructure — we'd love to meet you.
Impact
As a High-performance AI compute engineer, you will be instrumental in defining and delivering the next generation of enterprise-grade AI infrastructure. As a principal engineer within our GPU and CUDA Runtime team, you will play a critical role in shaping the future of high-performance compute infrastructure. Your contributions will directly influence the performance, reliability, and scalability of large-scale GPU-accelerated workloads, powering mission-critical applications across AI/ML, scientific computing, and real-time simulation.
You will be responsible for developing low-level components that bridge user space and kernel space, optimizing memory and data transfer paths, and enabling cutting-edge interconnect technologies like NVLink and RDMA. Your work will ensure that systems efficiently utilize GPU hardware to its full potential, minimizing latency, maximizing throughput, and improving developer experience at scale.
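By way of illustration only, and not as a description of this team's actual code, a minimal CUDA sketch of one such transfer-path optimization (pinned host memory feeding an asynchronous host-to-device copy) might look like the following; the buffer size, stream usage, and omitted error handling are assumptions made for brevity.

    // Illustrative sketch only: pinned host memory + asynchronous H2D copy.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const size_t bytes = 1 << 20;  // 1 MiB, chosen arbitrarily for the sketch
        float *h_buf = nullptr, *d_buf = nullptr;

        // Pinned (page-locked) host memory lets the DMA engine copy directly,
        // without an intermediate staging buffer, improving transfer bandwidth.
        cudaHostAlloc((void **)&h_buf, bytes, cudaHostAllocDefault);
        cudaMalloc((void **)&d_buf, bytes);

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // An asynchronous copy on a stream can overlap with compute on the device.
        cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);

        cudaStreamDestroy(stream);
        cudaFree(d_buf);
        cudaFreeHost(h_buf);
        printf("pinned-memory async copy finished\n");
        return 0;
    }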
This role offers the opportunity to impact both open and proprietary systems, working at the intersection of device driver innovation, runtime system design, and platform integration.
Key Responsibilities
- Design, develop, and maintain device drivers and runtime components for the systems' GPU and network subsystems.
- Work with kernel and platform components to build efficient memory-management paths using pinned memory, peer-to-peer transfers, and unified memory (an illustrative sketch follows this list).
- Optimize data movement using high-speed interconnects such as RDMA, InfiniBand, NVLink, and PCIe, with a focus on reducing latency and increasing bandwidth.
- Implement and fine-tune GPU memory copy paths with awareness of NUMA topologies and hardware coherency.
- Develop instrumentation and telemetry collection mechanisms to monitor GPU and memory performance without impacting runtime workloads.
- Contribute to internal tools and libraries for GPU system introspection, profiling, and debugging.
- Provide technical mentorship and peer reviews, and guide junior engineers on best practices for low-level GPU development.
- Stay current with evolving GPU architectures, memory technologies, and industry standards.
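As a purely illustrative sketch of the peer-to-peer transfer path referenced above (device IDs, buffer size, and omitted error handling are assumptions, not details of the role), a minimal CUDA example might look like:

    // Illustrative sketch only: checking and enabling peer-to-peer (P2P) access
    // between two GPUs, then copying device-to-device.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int devCount = 0;
        cudaGetDeviceCount(&devCount);
        if (devCount < 2) {
            printf("this sketch needs at least two GPUs\n");
            return 0;
        }

        // Can device 0 directly address device 1's memory (e.g., over NVLink/PCIe)?
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);

        const size_t bytes = 1 << 20;  // buffer size chosen arbitrarily
        void *buf0 = nullptr, *buf1 = nullptr;

        cudaSetDevice(0);
        if (canAccess) cudaDeviceEnablePeerAccess(1, 0);  // map peer memory into device 0
        cudaMalloc(&buf0, bytes);

        cudaSetDevice(1);
        cudaMalloc(&buf1, bytes);

        // Device-to-device copy; with P2P enabled this avoids staging through host memory.
        cudaMemcpyPeer(buf0, 0, buf1, 1, bytes);
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }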
Minimum Qualifications:
- 18+ years of experience in systems programming, ideally with 5+ years focused on CUDA/GPU driver and runtime internals.
- 5+ years of experience with kernel-space development, ideally in Linux kernel modules, device drivers, or GPU runtime libraries (e.g., CUDA, ROCm, or OpenCL runtimes).
- Direct experience working with NVIDIA GPU architecture, CUDA toolchains, and performance tools (Nsight, CUPTI, etc.).
- Experience optimizing for NVLink, PCIe, Unified Memory (UM), and NUMA architectures.
- Strong grasp of RDMA, InfiniBand, and GPUDirect technologies and their use in frameworks such as UCX.
- 8+ years of experience programming in C/C++ with low-level systems proficiency (memory management, synchronization, cache coherence).
- Strong understanding of multi-threaded and asynchronous programming models.
- Deep understanding of HPC workloads, performance bottlenecks, and compute/memory tradeoffs.
- Expertise in zero-copy memory access, pinned memory, peer-to-peer memory copy, and device memory lifetimes.
Preferred Qualifications
- Familiarity with Python and AI frameworks such as PyTorch.
- Familiarity with assembly or PTX/SASS for debugging or optimizing CUDA kernels.
- Familiarity with NVMe storage offloads, IOAT/DPDK, or other DMA-based acceleration methods.
- Familiarity with Valgrind, cuda-memcheck, gdb, and profiling with Nsight Compute/Systems.
- Proficiency with perf, ftrace, eBPF, and other Linux tracing tools.
#WeAreCisco
#WeAreCisco where every individual brings their unique skills and perspectives together to pursue our purpose of powering an inclusive future for all.
Our passion is connection—we celebrate our employees’ diverse set of backgrounds and focus on unlocking potential. Cisconians often experience one company, many careers where learning and development are encouraged and supported at every stage. Our technology, tools, and culture pioneered hybrid work trends, allowing all to not only give their best, but be their best.
We understand our outstanding opportunity to bring communities together and at the heart of that is our people. One-third of Cisconians collaborate in our 30 employee resource organizations, called Inclusive Communities, to connect, foster belonging, learn to be informed allies, and make a difference. Dedicated paid time off to volunteer—80 hours each year—allows us to give back to causes we are passionate about, and nearly 86% do!
Our purpose, driven by our people, is what makes us the worldwide leader in technology that powers the internet. Helping our customers reimagine their applications, secure their enterprise, transform their infrastructure, and meet their sustainability goals is what we do best. We ensure that every step we take is a step towards a more inclusive future for all. Take your next step and be you, with us!
When available, the salary range posted for this position reflects the projected hiring range for new hire, full-time salaries in U.S. and/or Canada locations, not including equity or benefits. For non-sales roles the hiring ranges reflect base salary only; employees are also eligible to receive annual bonuses. Hiring ranges for sales positions include base and incentive compensation target. Individual pay is determined by the candidate's hiring location and additional factors, including but not limited to skillset, experience, and relevant education, certifications, or training. Applicants may not be eligible for the full salary range based on their U.S. or Canada hiring location. The recruiter can share more details about compensation for the role in your location during the hiring process.
U.S. employees have access to quality medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, short and long-term disability coverage, basic life insurance and numerous wellbeing offerings.
Employees receive up to twelve paid holidays per calendar year, which includes one floating holiday (for non-exempt employees), plus a day off for their birthday. Non-Exempt new hires accrue up to 16 days of vacation time off each year, at a rate of 4.92 hours per pay period. Exempt new hires participate in Cisco’s flexible Vacation Time Off policy, which does not place a defined limit on how much vacation time eligible employees may use, but is subject to availability and some business limitations. All new hires are eligible for Sick Time Off subject to Cisco’s Sick Time Off Policy and will have eighty (80) hours of sick time off provided on their hire date and on January 1st of each year thereafter. Up to 80 hours of unused sick time will be carried forward from one calendar year to the next such that the maximum number of sick time hours an employee may have available is 160 hours. Employees in Illinois have a unique time off program designed specifically with local requirements in mind. All employees also have access to paid time away to deal with critical or emergency issues. We offer additional paid time to volunteer and give back to the community.
Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components. For quota-based incentive pay, Cisco typically pays as follows:
- 0.75% of incentive target for each 1% of revenue attainment up to 50% of quota;
- 1.5% of incentive target for each 1% of attainment between 50% and 75%;
- 1% of incentive target for each 1% of attainment between 75% and 100%; and once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation.
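For illustration only, assuming attainment of exactly 100% of quota, these tiers sum to the full target: 50 × 0.75% + 25 × 1.5% + 25 × 1% = 37.5% + 37.5% + 25% = 100% of the incentive target.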
For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.