Artificial intelligence has revolutionized countless industries, but password generation isn’t one of them. Security researchers have exposed a critical flaw in AI-driven password creation: the systems behind tools like ChatGPT, Google’s Gemini, and Anthropic’s Claude don’t produce truly random passwords—they generate patterns that can be cracked with alarming ease.

What changed? Until recently, many users assumed AI could solve the age-old problem of weak passwords. Instead, security experts at Irregular discovered that AI models produce passwords with structured predictability, meaning they follow detectable rules rather than randomness. For example, every password tested began with an uppercase letter—often the letter G—and relied on a fixed set of characters (L, 9, m, 2, $, #). Worse, some passwords were identical, with one example—G7$kL9#mQ2&xP4!w—appearing 18 times across 50 generated samples.
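The duplication the researchers describe is easy to check for yourself. A minimal sketch, using a hypothetical batch of samples (the strings below are illustrative stand-ins echoing the repeated example, not captured model output):

```python
from collections import Counter

# Hypothetical batch of AI-generated passwords (illustrative stand-ins,
# echoing the repeated example reported by the researchers).
samples = [
    "G7$kL9#mQ2&xP4!w",
    "G3$mL9#kQ2&xR4!t",
    "G7$kL9#mQ2&xP4!w",
    "G9$wL2#mQ7&xK4!p",
    "G7$kL9#mQ2&xP4!w",
]

# Verbatim repeats: a cryptographically random generator would almost
# never produce the same 16-character string twice.
counts = Counter(samples)
duplicates = {pw: n for pw, n in counts.items() if n > 1}

# Structural fingerprint: every sample starts with an uppercase letter.
all_start_upper = all(pw[0].isupper() for pw in samples)

print(duplicates, all_start_upper)
# → {'G7$kL9#mQ2&xP4!w': 3} True
```

Even this toy check surfaces both findings at once: exact repeats and a shared structural signature.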

How it works—and why it fails. Unlike a dedicated password manager, which uses cryptographic randomness to create unique, unguessable strings, AI models rely on probability-based algorithms trained on existing data. This means they favor patterns they’ve seen before, even if those patterns are insecure. The absence of duplicate characters in the tested passwords, for instance, wasn’t a security feature; it was an artificial constraint that makes the output look random while actually shrinking the space an attacker has to search. The result? Passwords that look complex on the surface but are mathematically weak.
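The gap can be quantified in bits of entropy. A back-of-the-envelope sketch (the 6-symbol alphabet is an illustrative reading of the small recurring character set the researchers describe, not an exact measurement):

```python
import math

length = 16

# True randomness: each of 16 characters drawn independently from the
# ~94 printable ASCII characters.
full_keyspace_bits = length * math.log2(94)   # ≈ 104.9 bits

# Patterned output: first character effectively fixed, remaining characters
# drawn from a small recurring set (illustrative size of 6 symbols).
patterned_bits = (length - 1) * math.log2(6)  # ≈ 38.8 bits

print(round(full_keyspace_bits, 1), round(patterned_bits, 1))
# → 104.9 38.8
```

Roughly 39 bits is within reach of commodity brute-force hardware; roughly 105 bits is not, which is the whole difference between a password that looks complex and one that is.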

**AI Passwords Are a Hacker’s Goldmine—Here’s Why You Should Never Use Them**

The consequences are already playing out in the wild. Researchers found traces of these AI-generated patterns in open-source code repositories like GitHub, suggesting developers may have unknowingly embedded vulnerable credentials into projects. For everyday users, the risk is just as real: a hacker armed with a brute-force tool could exploit these patterns to gain access to accounts, potentially compromising everything from email to banking.
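Part of what makes such credentials dangerous is how easy they are to spot at scale. A hedged sketch of the idea (the regex, character set, and code snippet are hypothetical approximations of the fingerprint, not the researchers' actual tooling):

```python
import re

# Illustrative fingerprint: an uppercase first character followed by
# 11-19 characters drawn from a limited recurring set (approximated here;
# the article reports a small fixed alphabet, e.g. L, 9, m, 2, $, #).
SUSPECT = re.compile(r'"([A-Z][A-Za-z0-9$#&!]{11,19})"')

# Hypothetical line of committed source code.
snippet = 'API_KEY = "G7$kL9#mQ2&xP4!w"  # hard-coded by mistake'
found = [m.group(1) for m in SUSPECT.finditer(snippet)]
print(found)  # → ['G7$kL9#mQ2&xP4!w']
```

An attacker scanning public repositories can run exactly this kind of cheap pattern match, which is why a predictable generator leaks more than any single password.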

Why this matters—and what to do instead. The core issue isn’t just that AI passwords are crackable; it’s that they advertise their own weaknesses. Some AI tools, like Gemini, now include warnings against using their password generators, citing server-side processing as a security risk. But the deeper problem is one of trust. Relying on AI for passwords assumes the system understands security as well as humans do—it doesn’t.

The solution is straightforward: use a password manager with a built-in random generator. These tools create passwords with true entropy, ensuring each character is independent of the last. They also store credentials securely, eliminating the need to remember (or generate) passwords manually. For those who need recommendations, options like Dashlane offer tiered pricing—from free basic plans to premium features starting at $4.99 per month—with additional protections like two-factor authentication and breach monitoring.
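The property a password manager relies on is also available directly in most standard libraries. A minimal sketch in Python using the `secrets` module, which draws from the operating system's cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a password where every character is an independent,
    cryptographically secure draw from the full printable set."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different, unpredictable output every run
```

Unlike a language model's sampled text, each call here is statistically independent of any training data and of every previous output, which is precisely the guarantee the AI generators fail to provide.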

In an era where data breaches are increasingly sophisticated, the last thing users need is another layer of predictable vulnerability. The message is clear: AI may be brilliant at many things, but password security isn’t one of them.