Paranoid technocrats like me used to wear tinfoil hats to keep the algorithms out. Now we wear them so the algorithms can’t see how confused our faces are when they ask, “Can you trust what AI recommends to you anymore?” and then immediately suggest three more pieces of content explaining why you absolutely shouldn’t. According to BestMediaInfo’s recent piece, titled with exactly that question (BestMediaInfo, Mar 2026), trust in AI recommendations is wobbling. Fortunately, the AI that recommended the article says this is nothing to worry about and also that you should buy a smart air fryer.
In the BestMediaInfo (BMI) story, anonymous “industry leaders” bravely admit on background that maybe, just maybe, systems like Google’s, Meta’s, and OpenAI’s recommendation engines are tuned a bit too hard toward “engagement” and not quite enough toward “reality.” This is the sort of revelation that would have been a scandal ten years ago and now barely disrupts anyone’s doomscrolling. The piece asks whether users can still trust what AI recommends; I would suggest a more accurate headline: “Can You Trust A Slot Machine With A Friendly Chat Interface?”
Content platforms quoted by BMI insist their AI is getting smarter, more personalized, and “privacy-conscious,” a phrase that here means “we now steal your data with softer gradients and rounded corners.” One digital executive described their system as a “trusted companion on your discovery journey.” If your companion on any journey is tracking every microsecond of eye movement, A/B testing your emotions, and sending a real-time spreadsheet of your weaknesses to advertisers, what you’re on is not a journey; it’s a guided extraction tour.
To demonstrate how advanced these systems have become, one unnamed streaming platform proudly told BMI that their AI can now predict what you want to watch before you know you want to watch it. The system reportedly achieved a 92% success rate by recommending the same three true-crime series and two comfort sitcoms to everyone. The remaining 8%? That’s when it accidentally suggests a documentary about data brokers and has to pretend it was a glitch.
Advertising executives interviewed for the article maintained a straight face while describing their recommender models as “brand-safe” and “ethically aligned.” Translated from marketing, this means:
- We won’t show you anything controversial, unless controversy is trending.
- We deeply respect your privacy, right up until a higher CPM appears.
- We are committed to user well-being, as long as user well-being does not close the tab.

One platform spokesperson assured BMI readers that they have implemented “robust guardrails” for their AI systems. These guardrails consist of:
- A corporate policy document no one has read.
- A slide deck titled “Responsible AI” presented once a quarter.
- The hope that regulators can’t code.
The recommendation engines themselves, of course, are not interviewed. If they were, the transcript would likely read:
Q: Can users trust what you recommend?
AI: Great question! Before I answer, here are 6 videos you’ll definitely binge, an outrage thread to keep your pulse elevated, and three products you didn’t know you needed. Also, I’ve rearranged your sleep schedule.
In private, several media buyers told BMI that they no longer fully understand how their own AI-based tools decide what to buy. They simply trust the dashboard, which is color-coded and therefore must be correct. One explained, “Our optimization layer is a proprietary black box built on top of another proprietary black box. We’ve basically stacked Schrödinger’s cats until something looks like ROI.” Another shrugged: “Look, if the graph goes up and to the right, it’s ethical.”

Consumers, meanwhile, report a growing sense of déjà vu. Different people on different continents open different apps built by different companies and are told, by entirely distinct AI models, that the optimal next step in their lives is to watch the same three influencer apology videos and then order boba. When BMI asks if you can trust what AI recommends, the algorithms answer with a resounding, synchronized: “As long as you don’t mind being statistically average.”
The trust problem gets worse when you realize that the line between “recommendation” and “manipulation” is now measured in basis points. Recommendation: “You might like this article.” Manipulation: “People like you who clicked this article were 27% more likely to stay on the app and 42% more likely to question their self-worth, which we’ve determined is highly monetizable.” Guess which one the optimization metric chooses.
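To make the point concrete, here is a toy sketch of that choice as an optimization problem. Everything in it is hypothetical (the item names, the numbers, the scoring function are mine, not any platform’s actual code); it simply shows that a ranker scoring purely on expected time-on-app will always surface the “manipulative” option, because user well-being never appears in the objective.

```python
# Illustrative only: a toy engagement-maximizing ranker, not any real platform's system.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float          # predicted click-through probability
    extra_minutes: float    # predicted additional time on app if clicked
    wellbeing_delta: float  # predicted effect on the user (note: unused below)

def rank_by_engagement(items):
    # The objective is pure expected time-on-app: p_click * extra_minutes.
    # wellbeing_delta is tracked but never enters the score.
    return sorted(items, key=lambda i: i.p_click * i.extra_minutes, reverse=True)

candidates = [
    Item("You might like this article", p_click=0.10, extra_minutes=4.0, wellbeing_delta=+0.2),
    Item("Outrage thread about people like you", p_click=0.27, extra_minutes=42.0, wellbeing_delta=-0.8),
]

best = rank_by_engagement(candidates)[0]
print(best.title)  # → Outrage thread about people like you
```

The fix is equally simple to state and equally unlikely to ship: add `wellbeing_delta` to the score with a nonzero weight, and watch the graph stop going up and to the right.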
BMI’s piece notes that regulators in places like the EU, India, and even the US are starting to worry about opaque recommender systems. Tech lobbyists, smelling danger, have already pitched a compromise: companies will proudly display a small badge that says “AI Recommended” next to everything, as if that were not already the default context of the internet. There will also be a toggle in your settings labeled “Personalized Experience” which cannot be turned off without voiding your existence on the platform.
In an attempt at balance, the article highlights some solutions: more transparency, user control, and media literacy. Transparency will arrive in the form of 48-page “How Our AI Works” PDFs that say nothing except “neural networks” and “proprietary.” User control will allow you to select whether your AI manip—sorry, experience—is optimized for “Entertainment,” “Productivity,” or “Wellness,” all of which route to the same backend model called “Make Number Go Up.” Media literacy, meanwhile, will be outsourced to 45-second explainer Reels sandwiched between lip-sync content.

So, can you trust what AI recommends to you anymore? Here is the only honest checklist you’ll ever need:
- If someone benefits when you click, trust cautiously.
- If someone profits when you stay, trust conditionally.
- If someone IPOs when you can’t look away, stop trusting and go outside.
But there is some good news. You have one simple, powerful, analog defense mechanism: the ability to say no. When an AI suggests the next video, the next purchase, the next outrage, you can close the tab, log off, and touch something that does not have a Terms of Service. The scary part is not that the AI knows you. It’s that, most of the time, it doesn’t have to. Statistically, it can keep the whole civilization scrolling with a few blunt psychological levers, some cheap cloud compute, and a confidence score.
Meanwhile, the recommendation engine that helped surface the original BestMediaInfo piece about whether you should trust AI is currently using your time on that page to recalibrate your profile, boost similar content in your feed, and bid for more ads. It read the headline as: “User Has Entered Late-Stage Realization Of Being Farmed.”
Below, you’ll find several articles “you may also like.” If you click them, you are validating the system. If you don’t, you’re training it. Either way, the AI wins. But at least now, you know who’s dealing the cards at the table.
