Wave
Oct 7, 2025

The Wave of AI Slop TikToks - Sora 2 and More

AI content is tolerated, not sought out.

Most users don’t want “AI videos” as a category. They want funny, surprising, emotional content, and if it happens to be AI‑made, fine. But no one opens an app thinking “I want to watch AI slop today.” Some AI creations aren’t slop, but by its nature most of what gets generated will come from people farming views and baiting reactions: just more slop content (plenty of traditional slop exists already, too).

  • Curated demos: Companies showcase the best‑case generations, often cherry‑picked, sometimes run with higher compute budgets, longer sampling schedules, or hand‑selected prompts. A recurring theme is that the demo videos are full of content very similar to what is in the training data, and the best examples you see, like the anime ones, are lifted almost directly from real, heavily reposted anime clips (the Attack on Titan neck‑cutting sequences), even South Park (legal issues incoming…). Real user generations can also be upscaled for far better fidelity than the model itself delivers, with many generations to cherry‑pick from.

  • Influencer partnerships: Paying influencers to “show off” the tech is a way to seed hype, but those influencers are usually given studio‑quality inputs (lighting, cameras, carefully scripted prompts) and sometimes even special access to higher‑tier inference settings.

  • Everyday usage: Regular users don’t have those conditions. They get noisier results, more artifacts, and slower runtimes. That’s why the gap feels so stark between the glossy launch reel and the “real generations” people share online.

  • Purpose of the Sora app: entertainment, novelty, social sharing. Sora itself is impressive, once again a nice leap forward in physics.

  • Strength: Physics‑aware short clips, creative exaggerations, meme‑style content.


Limitations:

  • Not personal: head motions, expressions, and emotions are generic priors (even arm motions, when prompted to speak naturally, come out the same every time). The app also places you in a totally made‑up body… you can insert a full‑body picture of yourself as an item, like most generative video AI, though their demos show employees full‑body a lot, likely with special internal tuning.

  • Caricatured style (influencer defaults, smoothed skin and teeth, uncanny valley). Sam is their best‑tuned case, heavily represented internally and in the training data: https://www.reddit.com/r/ChatGPT/s/u60UB2yfAh. And still his teeth come out… unnaturally flat and even, unlike his real photos, and that is after they upscale for the demo. The voice is bland, and the face holds a constant faint‑smile expression.


Celebrities are worse. Unconsented use, especially “reviving” the dead, is ethically horrendous: no consent, just people’s likenesses used to make ‘hahas’ or to push a message, from Martin Luther King Jr. and Mr. Rogers to Michael Jackson and Robin Williams.

  • Branding landmines galore. I thought the focus was on AGI. Overall, mixed feelings about AI art and videos: slop, but it can be funny content. Sam Altman caricatures (his facial structure and looks are slightly distorted much of the time, the teeth especially are bad). Animals. But it can’t do the real things we need to actually be real: sports, stunts and action sports, cooking, people, street interviews, influencers. I like news from Philip D. 300,000 years of biological

  • Knowing a human acted out or drew an animation makes it land differently. Anime could become AI adaptations of novels.



    • If SAG‑AFTRA forbids it, models will plateau at a “generic influencer / caricature” style. Even then the data might be too limited, and studios would need actors back in for days at a time.

      • It’s currently a minefield of legal issues, because you can get celebrities to appear thanks to all the influencer data, YouTube videos, etc.

  • That’s fine for memes, ads, or background extras — but not for cinema.



    • Slow and GPU‑intensive: roughly 10¢ to 50¢ per second of video, so a ten‑second clip runs about $1 to $5. Crazy. Not great for the environment, or for people’s energy bills.


  • Fun for skits, but not built for intimacy or continuity. It can’t produce a talking clip longer than about three seconds without resorting to lots of cuts, and it exaggerates heavily.

    • The cameo only uses a short ‘say three numbers and look up and down’ clip to extract reference images, which feed the conditioning cross‑attention when generating a video containing that ‘cameo’ (a toy sketch of this kind of conditioning follows this list). Even if users had a ten‑minute video of themselves speaking, it wouldn’t help: this is a model with hundreds of billions of parameters, likely needing days of footage before a LoRA becomes effective at all (a heuristic extrapolated from the hours required for smaller 10–20B models to start learning a style, with tens of hours being optimal there), while full fine‑tuning would be instant catastrophic overfitting, since the base model was trained on hundreds of millions of clips.

    • It’s a fundamental architecture and scale difference: to avoid overfitting, you need proportionally more training data as the model grows, and generation is not remotely close to real‑time on a consumer GPU.
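To make the cameo mechanism above concrete, here is a minimal, purely illustrative sketch of reference‑image cross‑attention conditioning in a diffusion‑transformer‑style video model. Every class name, shape, and dimension is my own assumption for illustration; none of it is Sora’s actual architecture.

```python
# Illustrative toy only, NOT Sora's real code. Assumption: a few reference frames
# from the short verification clip are encoded into identity embeddings and injected
# into each denoising block via cross-attention, alongside the text-prompt embeddings.
import torch
import torch.nn as nn

class CameoCrossAttention(nn.Module):
    """Video latent tokens (queries) attend to text + reference-identity embeddings."""
    def __init__(self, dim: int = 1024, n_heads: int = 16):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, video_tokens, text_emb, face_emb):
        # Concatenate prompt and identity embeddings into one conditioning sequence,
        # so the same cross-attention reads both "what should happen" and "who it is".
        cond = torch.cat([text_emb, face_emb], dim=1)
        out, _ = self.attn(query=self.norm(video_tokens), key=cond, value=cond)
        return video_tokens + out  # residual connection, as in standard DiT blocks

# Toy shapes: 1 clip, 2048 video latent tokens, 77 text tokens, and 8 identity
# embeddings extracted from the 'say three numbers and look up and down' capture.
video_tokens = torch.randn(1, 2048, 1024)
text_emb = torch.randn(1, 77, 1024)
face_emb = torch.randn(1, 8, 1024)
print(CameoCrossAttention()(video_tokens, text_emb, face_emb).shape)  # (1, 2048, 1024)
```

The point of the sketch: in this kind of setup the model never learns you; it only reads a handful of frozen identity embeddings at generation time, which is why a longer upload wouldn’t improve likeness without actual per‑user fine‑tuning.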


Fun to play around with, but it's a novelty.

Putting myself in situations like drinking ketchup or skibidi toilet memes, people bullying each other, skits. The app is rated 12+... children under 18 share likenesses without consent a lot, and a lot of bad things can happen: exports are easy, and most of the user videos I see posted on Reddit contain no watermarks… if you’re given access to someone’s likeness you can make it do anything, then crop out the watermark or just screen‑record in the app. The deeper issue is the concept of using even a friend’s likeness: it isn’t your own, yet you can control it to do whatever you want. That directly touches the heartstring of ‘control’ that most humans have (unlike Grow, where you control everything; no one can change your Grow to respond differently, look different, do something else, etc.).

Think of the Sora app as an “AI TikTok filter on steroids.” The brand is instantly associated with a popular negative, AI slop, and it is fighting against that.