The AI-Washing Problem in Finance: When “AI-Powered” Means Nothing

In March 2024, the Securities and Exchange Commission settled its first enforcement actions for “AI washing” — making false or misleading claims about artificial intelligence capabilities. The targets were two investment advisory firms, Delphia (USA) Inc. and Global Predictions Inc., both accused of fabricating claims about using AI in their investment processes. Delphia paid $225,000 in civil penalties. Global Predictions paid $175,000.

These were small penalties. The signal they sent was not.

Since those initial actions, the SEC has escalated. In January 2025, the agency charged Presto Automation, a formerly Nasdaq-listed company, marking the first AI-washing enforcement against a public company. In April 2025, the SEC and Department of Justice jointly charged the founder of Nate Inc. with fraudulently raising over $42 million by claiming his shopping app used AI to process transactions. The reality: the company relied on manual workers to complete purchases. The claimed automation rate was “above 90%.” The actual rate was essentially zero.

The SEC has established a dedicated Cyber and Emerging Technologies Unit (CETU) with AI-washing as an explicit enforcement priority. Securities class actions targeting AI misrepresentations doubled between 2023 and 2024. The agency’s Division of Examinations incorporated AI-washing into its 2024 and 2025 examination priorities, specifically reviewing investment advisors’ AI claims for accuracy.

The regulatory message is clear: claiming your product uses AI when it doesn’t is fraud. But the regulatory actions address only the most egregious cases — companies that fabricated AI capabilities entirely. The more pervasive problem is the vast grey area between genuine AI and pure fiction, where “AI-powered” has become a marketing label applied to products that use no artificial intelligence whatsoever.

The Spectrum of AI Claims

Not every “AI-powered” financial product is lying. Some are genuinely innovative. Some use basic automation dressed up in AI terminology. Some are pure marketing. Understanding the spectrum helps you evaluate claims critically.

Genuine Machine Learning

At one end of the spectrum: products built on genuine machine learning models that are trained on data, learn from outcomes, and improve over time.

Stripe Radar analyses hundreds of signals per transaction — behavioural patterns, device fingerprints, network-level fraud signals — to make real-time approve/decline decisions. The models learn from billions of transactions and adapt to new fraud patterns. This is genuine AI that demonstrably improves outcomes.

Upstart’s credit scoring uses machine learning models trained on alternative data to assess creditworthiness beyond FICO scores, reportedly approving 27% more borrowers at the same loss rate as traditional models. The models incorporate patterns across hundreds of variables that no rule-based system could identify.

Wealthfront’s tax-loss harvesting continuously scans portfolios for harvesting opportunities using algorithms that evaluate wash-sale rules, lot-level tax implications, and portfolio-wide optimisation. The system learns from market conditions and execution patterns.
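To make the mechanics concrete, here is a minimal sketch of the wash-sale check such an engine must perform before realising a loss. This illustrates the IRS 30-day rule only; it is not Wealthfront’s actual implementation, and the function and data below are our own hypothetical illustration:

```python
from datetime import date, timedelta

WASH_SALE_WINDOW = timedelta(days=30)  # IRS rule: 30 days before or after the sale

def violates_wash_sale(sale_date: date, purchases: list[date]) -> bool:
    """Return True if any purchase of a substantially identical security
    falls within 30 days of a sale realised at a loss."""
    return any(abs(p - sale_date) <= WASH_SALE_WINDOW for p in purchases)

# A harvesting engine only realises a loss when the trade is clean:
sale = date(2024, 3, 15)
recent_buys = [date(2024, 3, 1)]              # bought 14 days before the sale
print(violates_wash_sale(sale, recent_buys))  # True -> skip this harvest
```

A production system layers lot-level tax maths and portfolio-wide optimisation on top, but even this toy version shows why the logic must run continuously rather than once a year.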

These products use real machine learning. They learn. They improve. They make decisions that rule-based systems can’t. When they say “AI-powered,” the claim has substance.

Rule-Based Automation Labelled as AI

In the middle: products that use traditional automation — if-then rules, decision trees, keyword matching — and call it AI.

A budgeting app that categorises your transactions as “food,” “transport,” or “entertainment” based on merchant category codes isn’t using AI. It’s reading a database of merchant codes and applying labels. This is useful automation, but it doesn’t learn, doesn’t improve, and doesn’t make predictions. It’s a lookup table with a modern interface.
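To see how little is under the bonnet, here is the whole “categorisation engine” as a sketch. The merchant category codes shown are real MCC values, but the mapping and function are our own hypothetical illustration:

```python
# Hypothetical merchant-category-code (MCC) lookup -- no model, no learning.
MCC_LABELS = {
    "5812": "food",           # restaurants
    "4121": "transport",      # taxis and ride-hailing
    "7832": "entertainment",  # cinemas
}

def categorise(mcc: str) -> str:
    """A static dictionary lookup: identical output every time,
    regardless of how many transactions it has processed."""
    return MCC_LABELS.get(mcc, "uncategorised")

print(categorise("5812"))  # "food"
```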

A chatbot that matches your question to a library of pre-written responses isn’t AI in any meaningful sense. It’s keyword matching — the same technology that powered customer service phone trees in the 2000s, now rendered in a text interface.
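Stripped of its chat interface, the mechanism looks like this (a minimal sketch; the keywords and responses are invented):

```python
# Keyword matching over pre-written answers -- the phone-tree pattern in text form.
CANNED_RESPONSES = {
    "balance": "Your current balance is shown on the home screen.",
    "fee": "Our fee schedule is available under Settings > Pricing.",
    "card": "To freeze your card, open the Cards tab and tap Freeze.",
}

def reply(question: str) -> str:
    """Return the first canned answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, answer in CANNED_RESPONSES.items():
        if keyword in q:
            return answer
    return "Sorry, I didn't understand. Please contact support."

print(reply("Why was I charged a fee?"))  # fee schedule answer
```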

A robo-advisor that assigns you to one of five portfolio templates based on a risk questionnaire isn’t using AI for portfolio management. It’s using a decision tree: if risk tolerance = aggressive, assign Portfolio E. This is valuable automation (it removes bias and ensures consistency), but calling it “AI” implies a sophistication that doesn’t exist.
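The entire “engine” fits in a handful of branches. A sketch with hypothetical score bands (real questionnaires are longer, but the structure is the same):

```python
# A five-template robo-advisor "engine": a plain decision tree on a questionnaire score.
def assign_portfolio(risk_score: int) -> str:
    """Map a questionnaire score (0-100, hypothetical bands) to a fixed template."""
    if risk_score < 20:
        return "Portfolio A (conservative)"
    elif risk_score < 40:
        return "Portfolio B"
    elif risk_score < 60:
        return "Portfolio C (balanced)"
    elif risk_score < 80:
        return "Portfolio D"
    return "Portfolio E (aggressive)"

print(assign_portfolio(85))  # "Portfolio E (aggressive)" -- deterministic, never learns
```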

This grey area is where most “AI-powered” financial products live. The underlying technology is legitimate automation that provides genuine value. The AI labelling is marketing. The products work; the description is inflated.

Pure Marketing Fiction

At the far end: products that claim AI capabilities they simply don’t have. This is where the SEC has focused its enforcement.

Nate Inc. is the most dramatic example. The company raised over $42 million from investors by claiming its e-commerce app used AI to complete purchases automatically. Investors were told the AI could navigate websites, fill in checkout forms, and process transactions autonomously. In reality, the company employed hundreds of workers in the Philippines to manually complete purchases. The “AI” was people.

Delphia claimed to use AI and machine learning that “incorporated client data” into its investment process. The SEC found these claims were false — the described AI capabilities did not exist.

Global Predictions marketed itself as the “first regulated AI financial advisor” and claimed to use AI-driven forecasting. The SEC found these statements misleading.

These cases are clear fraud. The products described capabilities that didn’t exist to attract investment or customers. But they represent a tiny fraction of the AI-washing problem. The vast majority of AI inflation occurs in the grey middle — useful rule-based products marketed as AI, where the line between acceptable simplification and misleading exaggeration is genuinely unclear.

Why This Matters for Consumers

If you choose a financial product because it claims to be “AI-powered,” you’re making a decision based on a description that may mean anything from “genuine machine learning that improves your outcomes” to “we added a chatbot to the interface.”

The AI label creates an expectation of sophistication that influences how much trust you place in the product’s recommendations, how much you’re willing to pay, and how thoroughly you evaluate alternatives. A product labelled “AI-powered” implies it’s doing something a non-AI product can’t — learning from your behaviour, identifying patterns humans miss, improving over time. If the product is actually using keyword matching and decision trees, that expectation is misplaced.

This doesn’t mean rule-based products are bad. Automatic transaction categorisation is useful regardless of whether it uses AI or a lookup table. A risk-questionnaire-based portfolio allocation works well regardless of whether it uses machine learning or a decision tree. The product’s value doesn’t depend on the technology label. But the label influences your evaluation, and that influence is exactly what the marketing intends.

How to Evaluate AI Claims

When a financial product claims to be “AI-powered,” ask three questions:

What specific task does the AI perform? A legitimate AI product can describe what the model does: “analyses transaction patterns to detect fraud,” “evaluates alternative data to assess creditworthiness,” “identifies tax-loss harvesting opportunities.” Vague claims — “AI-powered insights,” “AI-driven recommendations,” “powered by artificial intelligence” — with no specific task description are marketing labels.

Does it learn and improve? Genuine machine learning improves with more data and feedback. If the product performs the same way regardless of how much you use it, it’s likely rule-based automation, not AI (the sketch after these three questions makes the difference concrete).

Would it work the same without the AI label? If removing the word “AI” from the product description doesn’t change what the product actually does, the label is marketing. A budgeting app that categorises transactions is a budgeting app, regardless of whether you call the categorisation “AI” or “automatic.”
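The learning test is easy to see in code. A toy learning categoriser changes its answers after user feedback; the lookup table sketched earlier never will. The update rule below is deliberately simplistic and purely illustrative:

```python
from collections import Counter, defaultdict

class LearningCategoriser:
    """Toy 'learns from corrections' categoriser: majority vote over user feedback."""
    def __init__(self):
        self.votes = defaultdict(Counter)

    def correct(self, merchant: str, label: str):
        self.votes[merchant][label] += 1  # feedback changes future behaviour

    def categorise(self, merchant: str) -> str:
        counts = self.votes[merchant]
        return counts.most_common(1)[0][0] if counts else "uncategorised"

model = LearningCategoriser()
print(model.categorise("BLUE BOTTLE"))  # "uncategorised"
model.correct("BLUE BOTTLE", "food")    # user relabels once
print(model.categorise("BLUE BOTTLE"))  # "food" -- output changed with use
```

Real machine learning is far more sophisticated than a vote counter, but the behavioural signature is the same: use changes output. A static product fails this test no matter what its marketing says.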

The “Agentic AI” Warning

The newest iteration of AI marketing in finance is “agentic AI” — AI that acts autonomously on your behalf. Some agentic AI applications are real (fraud detection systems that make autonomous approve/decline decisions, portfolio rebalancing that executes without human approval). Many are aspirational.

Expect the same pattern that played out with “AI-powered” to repeat with “agentic AI.” Early genuine applications will be followed by a flood of products relabelling existing automation as “agentic.” The critical evaluation framework remains the same: what specifically does the agent do, does it learn, and would the product work the same without the buzzword?

For our assessment of which AI financial tools deliver genuine value versus which are marketing, see our honest AI tools review. For the evidence-based question of whether robo-advisors — the original “AI” financial products — actually deliver on their promises, see do robo-advisors actually work.


FinTech Essential does not earn commissions from products mentioned in this article. Our analysis is editorially independent and funded by advertising, not affiliate relationships.

SEC enforcement information sourced from official SEC press releases and filings. References to specific companies reflect publicly documented regulatory actions. This article is for informational purposes only and does not constitute legal or investment advice.