RTX 5090 Outperforms Expensive AI GPUs at Password Cracking

by Priyanka Patel

In the current arms race for artificial intelligence supremacy, the most expensive hardware isn’t always the most capable. New benchmark data reveals a surprising gap in versatility: the ultra-premium GPUs powering the world’s largest data centers are remarkably inefficient at the foundational task of password cracking, often losing to consumer-grade gaming hardware.

Research conducted by Specops indicates that high-end AI accelerators, some costing upwards of $30,000, are outperformed by the Nvidia RTX 5090. The study sought to determine if the massive compute power of data center GPUs could be repurposed for password recovery and hacking—a theoretical “second job” for the hardware should the current AI investment bubble shift.

The results suggest a stark divide between "general purpose" compute and the highly specialized architecture of modern AI chips. While the RTX 5090 is designed for the varied workloads of a consumer desktop, the Nvidia H200 and AMD MI300X are so streamlined for machine learning that they struggle with the specific mathematical operations required to break encrypted passwords.

The disparity is most evident when comparing the RTX 5090 to the H200. On average, the consumer gaming card was 63.7% faster than the H200 across various tests. Even when compared to the AMD MI300X, the RTX 5090 maintained a lead, performing roughly 20% faster on average.


The Architecture Gap: Why AI GPUs Fail at Cracking

To understand why a $30,000 chip loses to a gaming card, one must look at the underlying math. Password cracking—the process of guessing a password by repeatedly hashing a string and comparing it to a stored hash—relies heavily on 32-bit integer (INT32) operations. This is a compute-intensive process that requires raw, linear processing power.

Modern AI GPUs, however, are built for a different world. Machine learning workloads prioritize different instruction types, such as FP8, FP4, BF16, and INT8. These “lower precision” formats allow AI models to process massive amounts of data more quickly and with less energy, which is essential for training Large Language Models (LLMs).

Because of this specialization, data center GPUs sacrifice general-purpose integer cores to make room for Tensor cores. The Nvidia H200, for example, possesses only half as many INT32 cores as FP32 cores. In contrast, the RTX 5090 maintains a higher count of these cores, making it far more effective for the specific needs of a tool like Hashcat.
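For a concrete sense of what those INT32 cores spend their time on, here is a sketch of a 32-bit left rotate, one of the bitwise primitives at the heart of MD5 and SHA round functions. The input values are illustrative, not drawn from any particular hash.

```python
def rotl32(x: int, n: int) -> int:
    """32-bit left rotate, a core primitive in MD5/SHA rounds.
    Pure integer bit manipulation: exactly the INT32 work that
    Tensor-core-heavy AI accelerators are short on."""
    x &= 0xFFFFFFFF
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

print(hex(rotl32(0x80000001, 1)))  # 0x3: the top bit wraps around
```

Operations like this cannot be expressed in the FP8/FP4/BF16 formats Tensor cores accelerate, so they fall entirely on the GPU's general-purpose integer pipeline.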

The Role of Software Optimization

The hardware isn’t the only factor. The research team used Hashcat, a widely used password recovery tool that is also frequently employed by malicious actors to automate attacks. The data showed that while the AMD MI300X actually possesses superior raw INT32 performance compared to the RTX 5090, it still trailed in the actual benchmarks.

This discrepancy is attributed to the deep software optimizations Nvidia has baked into the Hashcat code. Because Hashcat is heavily optimized for Nvidia’s CUDA architecture, the RTX 5090 can extract more efficiency from its hardware than the AMD equivalent can from its own, despite the MI300X’s theoretical power.

Performance Breakdown by Algorithm

The Specops team benchmarked the GPUs across five common hashing algorithms: MD5, NTLM, bcrypt, SHA-256, and SHA-512. The results were consistent: the gaming GPU led in every category, with the most dramatic difference appearing in SHA-512, where the RTX 5090 was 93.5% faster than the H200.

Password Cracking Performance Comparison (Hash Rates)

Algorithm    Nvidia H200      AMD MI300X       Nvidia RTX 5090
MD5          124.4 GH/s       164.1 GH/s       219.5 GH/s
NTLM         218.2 GH/s       268.5 GH/s       340.1 GH/s
bcrypt       275.3 kH/s       142.3 kH/s       304.8 kH/s
SHA-256      15,092.3 MH/s    24,673.6 MH/s    27,681.6 MH/s
SHA-512      5,173.6 MH/s     8,771.4 MH/s     10,014.2 MH/s
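The percentage leads quoted in the article can be reproduced directly from the table. The quick sketch below normalizes every row to GH/s by hand; the computed SHA-512 lead over the H200 lands near the article's 93.5% figure (small differences come from rounding in the published rates).

```python
# Hash rates from the table above, normalized to GH/s.
rates = {
    "MD5":     {"H200": 124.4,    "MI300X": 164.1,    "RTX5090": 219.5},
    "NTLM":    {"H200": 218.2,    "MI300X": 268.5,    "RTX5090": 340.1},
    "bcrypt":  {"H200": 275.3e-6, "MI300X": 142.3e-6, "RTX5090": 304.8e-6},  # kH/s -> GH/s
    "SHA-256": {"H200": 15.0923,  "MI300X": 24.6736,  "RTX5090": 27.6816},   # MH/s -> GH/s
    "SHA-512": {"H200": 5.1736,   "MI300X": 8.7714,   "RTX5090": 10.0142},   # MH/s -> GH/s
}

for algo, r in rates.items():
    lead_h200 = (r["RTX5090"] / r["H200"] - 1) * 100
    lead_mi300x = (r["RTX5090"] / r["MI300X"] - 1) * 100
    print(f"{algo}: RTX 5090 leads H200 by {lead_h200:.1f}%, MI300X by {lead_mi300x:.1f}%")
```

Note that bcrypt is the outlier: it is deliberately designed to be slow and memory-hard, which is why its rates are measured in kH/s rather than GH/s and why the MI300X falls furthest behind there.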

What This Means for Cybersecurity

For security professionals and organizations, these findings provide a nuanced view of the threat landscape. While the narrative often focuses on the “infinite” power of AI-driven data centers, the actual tools used for credential cracking remain rooted in consumer-grade hardware. A cluster of gaming GPUs is currently a more potent weapon for a password-cracking campaign than a cluster of AI accelerators.

This highlights a critical trend in semiconductor design: extreme specialization. As GPUs become “AI chips,” they lose the versatility that once made them the gold standard for any parallel processing task. The H200 and MI300X are world-class at their intended roles, but they are effectively “blind” to the requirements of 32-bit integer operations.

As long as password hashing algorithms rely on these specific compute patterns, consumer desktop GPUs will likely remain the fastest option for both legitimate password recovery and illegal cracking attempts.

The industry continues to watch how Nvidia and AMD balance this trade-off in future iterations. The next major checkpoint will be the release of updated architectural whitepapers and the subsequent rollout of next-generation data center chips, which may either lean further into AI specialization or attempt to reclaim some general-purpose utility.

Do you think the move toward hyper-specialized AI hardware leaves a gap in general-purpose computing? Share your thoughts in the comments below.
