The neat thing about all this is that you don’t get a choice!
Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.
I do have a choice: I just stop using the product. When Messenger added AI assistants, I switched to WhatsApp. Then WhatsApp got one too, so now I'm using Signal. My wife brought home a Windows 11 laptop, didn't like the cheeky AI integration, and now it runs Linux.
Sadly, almost none of my friends or family care or understand; they're mostly older relatives or non-tech people. If I tried to convince them to move to Signal because of my disdain for AI profiteering, they'd react as if I were trying to get them to join a church.
Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.
Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?
If by "traditional solution" you mean that a bunch of data is fed into training an ML model, your individual transaction is fed into that model, and it spits out a fraud score, then no, they're not using LLMs. But at this high a level, what's the difference? Whether or not their ML model uses a transformer-based architecture, what difference does it make?
Traditional fraud-detection models have quantified type I/II error rates, and somebody typically chooses parameters such that those errors stay within acceptable bounds. If somebody decided to use a transformer-based architecture in roughly the same setup as before, there would be no issue. But if somebody listened to some exec's harebrained idea to "let the AI look for fraud" and just came up with a prompt/API wrapper around a modern LLM, there would be huge issues.
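The setup described above, where a model emits a score and a human picks the cutoff so both error rates stay quantified and bounded, can be sketched roughly like this (the function and the numbers are made up for illustration, not any real processor's pipeline):

```python
# Illustrative sketch only: a "traditional" fraud model emits a score per
# transaction, and the decision threshold is chosen so the false-positive
# rate (type I error: good transactions declined) stays within an agreed
# bound, with the resulting false-negative rate measured and known.

def pick_threshold(scores_legit, scores_fraud, max_false_positive_rate):
    """Return the lowest score cutoff whose measured FPR on held-out
    legitimate transactions is within the acceptable bound, plus the
    quantified FPR and FNR at that cutoff."""
    candidates = sorted(set(scores_legit) | set(scores_fraud))
    for t in candidates:
        # Fraction of good transactions the model would decline at cutoff t.
        fpr = sum(s >= t for s in scores_legit) / len(scores_legit)
        if fpr <= max_false_positive_rate:
            # Fraction of fraudulent transactions the model would miss.
            fnr = sum(s < t for s in scores_fraud) / len(scores_fraud)
            return t, fpr, fnr
    raise ValueError("no threshold meets the false-positive bound")

# Toy held-out scores: legitimate transactions mostly score low, fraud high.
legit = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.95]
fraud = [0.55, 0.8, 0.85, 0.9, 0.97]

threshold, fpr, fnr = pick_threshold(legit, fraud, max_false_positive_rate=0.1)
```

The point of the sketch is that with this setup both error rates are measurable numbers someone signed off on, whereas a prompt wrapped around an LLM gives you no equivalent dial to turn, and no quantified error rates to hold anyone to.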
I run a small online software business and I am continually getting cards refused for blue chip customers (big companies, universities etc). My payment processor (2Checkout/Verifone) say it is 3DS authentication failures and not their fault. The customers tell me that their banks say it isn't the bank's fault. The problem is particularly acute for UK customers. It is costing me sales. It has happened before as well:
I've recently found myself having to pay for a few things online with bitcoin, not because they have anything to do with bitcoin, but because bitcoin payments actually worked and Visa/MC didn't!
For all the talk in the early days of Bitcoin comparing it to Visa and how it couldn't reach the scale of Visa, I never thought it would be that Visa just decided to place itself lower than Bitcoin.
Kind of the same as Windows getting so bad it got worse than Linux, actually...
Even if my favorite service is irreplaceable, I can still use it without touching the AI part of it. If the majority of users of a popular service never touch its AI features, that will inevitably send a message to the owner one way or another: you are wasting money on AI.
Nah, the owner will get a filtered truth from the middle managers, who present them with information that everything's going great with AI, and that the lost money is actually because of those greedy low-level employees drinking up all the profit by working from home! The entire software industry has a massive truth-to-power problem that just keeps getting worse. I'd say the software industry in this day and age feels like Lord of the Flies, but honestly even that feels too kind.
Exactly this. "AI usage is 20% of our customer base." "AI usage has increased 5% this quarter." "Due to our xyz campaign, AI usage has increased 10%."
It writes a narrative of success even if it's embellished. Managers respond to data and the people collecting the data are incentivised to indicate success.
It’s happening, with you or to you.