I Am a Newspaper Junkie
I have subscriptions to the LA Times, Washington Post, NY Times, and — for my writing — Newspapers.com, so I can dig through back issues. It’s one of my favorite habits.
So last week I came across a Washington Post headline that’s hard to argue with: “Don’t tell your AI chatbot these 5 things to keep your money safe.” Names. Addresses. Social Security numbers. Specific debts. Tax returns. Don’t share any of it. The advice is sound. To be honest, I don’t always follow it — but maybe I should think more carefully about that.
What stopped me, though, wasn’t the headline. It was a number buried in the middle of the column: 77% of Gen Z and 72% of millennials are using AI for financial guidance. A Cisco survey found 29% of global AI users have already entered personal or confidential information into chatbots. The advice is “don’t.” The behavior is “already did.”
When seven out of ten people are doing something, “stop” isn’t advice. It’s denial.
This is something educators understand. You don’t tell a student who’s three grade levels behind to “just catch up.” You meet them where they are and build forward. You don’t tell a parent who already missed the IEP deadline that they should have been on time. You help them figure out what comes next. Good advice isn’t about being right. It’s about being useful to the person standing in front of you.
The column was right about the risk. It just wasn’t useful to the seven in ten who’d already taken it.
There’s something else in that article I’ve been turning over for a few days.
The whole piece rests on a quiet assumption: that AI is the risky, untrustworthy place to put your financial information, and that humans and institutions are the trustworthy alternative. Don’t tell the chatbot. Tell a real advisor. Use a real bank. Trust the system.
I want to gently push back on that.
Ken Lay and Jeffrey Skilling were humans. They ran Enron. Twenty thousand employees lost their pensions and 401(k)s — billions of dollars of ordinary people’s retirement money — because credentialed executives at a Fortune 500 company lied to them. The auditors, also humans, signed off on it.
Bernie Madoff was a human. Former chairman of NASDAQ. He ran the largest Ponzi scheme in history — $65 billion — for decades. The SEC investigated him multiple times and cleared him. Humans investigating humans, blessing the fraud.
The 2008 financial crisis was engineered by humans at the most trusted institutions in the country, selling mortgage-backed securities they privately knew were toxic. Wells Fargo employees opened millions of fake accounts in customers’ names. Equifax — a credit bureau whose entire job is protecting your data — lost the personal information of 147 million Americans in a single breach.
I’m not saying AI is safer than humans. I’m saying the math is more complicated than the article admits.
The honest version of the advice isn’t “AI is risky, humans are safe.” It’s “trust is hard, and you have to think carefully about who gets what information and why — whether it’s a chatbot, a financial advisor, a credit bureau, or your dentist’s receptionist asking for your Social Security number.”
That’s not a comfortable headline. But it’s a more honest one.
So if you’re in the 70% — if you’ve already shared more than the article says you should have — what can we do now?