Don’t trust AI with this security essential

Let me start with a question: If you needed a strong password, would you ask AI to generate one for you?

It sounds reasonable enough. 

Tools like ChatGPT and Copilot can write reports, draft emails and even create bits of code. Asking them for a 16-character password packed with symbols and numbers feels like a smart shortcut.

But you might want to rethink that. 

Researchers recently tested AI tools by asking them to generate secure passwords. 

On the surface, the results looked great. Long strings of mixed-case letters, numbers and symbols. 

When checked using online password strength meters, they scored highly. Some tools even suggested it would take centuries to crack them.

But when those passwords were analyzed properly, a different picture emerged.

AI systems are powered by something called a large language model, or LLM. That means they’re trained to predict what text should come next. They’re brilliant at producing text that looks natural and plausible.

What they are not designed to do is create true randomness.

And strong passwords rely on randomness.

When researchers examined dozens of AI-generated passwords, they found repeating patterns. Some passwords were duplicates. Many followed very similar structures. 

Interestingly, none of them contained repeating characters. 

That might sound like a good thing, but real randomness often includes repetition. The absence of it suggests the password is following learned rules rather than being generated unpredictably.

The researchers measured something called “entropy”, which is a technical way of describing how unpredictable something is. 

AI-generated passwords scored far lower than a genuinely random 16-character password should. 

That means they could be much easier to crack using a brute-force attack, where attackers try huge numbers of combinations very quickly.
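To see why lower entropy matters, here's a small illustrative calculation. The character-set size and the "effective choices" figure are assumptions for the sake of the example, not measurements from the research:

```python
import math

# A truly random 16-character password drawn uniformly from the
# 94 printable ASCII characters (excluding space) has this much entropy:
charset_size = 94
length = 16
random_bits = length * math.log2(charset_size)  # roughly 105 bits

# If hidden patterns mean each position effectively comes from a much
# smaller pool (30 is an illustrative assumption), entropy falls fast:
effective_choices = 30
patterned_bits = length * math.log2(effective_choices)  # roughly 79 bits

# Every bit of entropy lost halves the brute-force work required.
attack_speedup = 2 ** (random_bits - patterned_bits)
```

Even a modest-looking drop in entropy translates into an attack that is millions of times faster.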

Online password checkers don’t catch this because they only look at visible complexity. 

They see symbols and numbers and assume it’s secure. They don’t account for the hidden patterns created by AI.

Even newer models like Gemini 3 Pro have issued warnings when asked to generate passwords, advising people not to rely on chat-generated credentials for sensitive accounts. 

That should tell you something.

If you want properly secure passwords, use a password manager with a built-in generator. 

These use cryptographic randomness: mathematical processes specifically designed to produce unpredictable results.
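For the curious, this is the kind of thing a proper generator does under the hood. The sketch below uses Python's standard `secrets` module, which draws from the operating system's cryptographic random source (the function name and length are just example choices):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password from the OS's cryptographic random source,
    not from a predictive model."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
```

The key difference from an LLM is the source of the characters: `secrets` is designed to be unpredictable, while a language model is designed to be plausible.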

AI is an excellent productivity tool. But when it comes to security essentials like passwords, it’s the wrong tool for the job.

If you’d like help choosing the right password manager for your business, get in touch.