
Anyone using Kling AI for longer AI films? Character consistency tips needed

Last updated 7 hours ago
admin2 (Member)
Posted 7 hours ago
Kling's motion quality is honestly better than most generators I've tested, especially for cinematic movement, but character consistency still becomes unstable after 5-6 clips.

I’m trying to make a cyberpunk short film and the same female character slowly changes age and hairstyle throughout the project. Some scenes even change the nose shape slightly.

Is there a proper production workflow for Kling yet? I see impressive demos online, but nobody explains the actual process.
caa (Super Admin)
Posted 7 hours ago
Kling responds really well to reference images compared to older models. The best results I got were from creating a full turnaround character sheet first:

Front face
Left profile
Right profile
Half body
Full body
Neutral expression

Then I combine that with a very stable identity prompt.

One thing people overlook: hairstyle descriptions matter a LOT. If you simply say “long hair” the model improvises constantly. Instead use something like:

long straight black hair with center partition and slight shoulder curls

Specificity reduces randomness.
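To keep those descriptors from drifting between clips, I lock them in one place and prepend them to every scene prompt. A minimal sketch, assuming a simple Python helper; the descriptor strings and function name are just examples, not anything Kling-specific:

```python
# Sketch: assemble a stable identity prompt from fixed descriptors.
# All names and descriptor strings here are illustrative examples.

IDENTITY = {
    "age": "mid-20s woman",
    "face": "oval face, high cheekbones, small straight nose",
    "hair": "long straight black hair with center partition and slight shoulder curls",
}

def identity_prompt(scene_action: str) -> str:
    """Prefix every scene prompt with the same locked descriptors."""
    fixed = ", ".join(IDENTITY[k] for k in ("age", "face", "hair"))
    return f"{fixed}, {scene_action}"

print(identity_prompt("walking through a neon-lit alley, slow camera motion"))
```

The point is that the wording never varies between scenes, because you never retype it.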
caa (Super Admin)
Posted 7 hours ago
I noticed Kling consistency gets worse if you render clips at different durations. My 5-second shots stayed stable, but 10-15-second shots started drifting near the end.

Now I generate shorter clips only and stitch them together during editing.
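For the stitching step, I just feed the short clips to ffmpeg's concat demuxer. A sketch of generating the concat list file; the clip filenames are hypothetical, and the actual join happens in the ffmpeg command shown in the comment:

```python
# Sketch: build an ffmpeg concat list for stitching short clips together.
# Filenames below are hypothetical. This only writes the list file; the
# actual stitch would be:
#   ffmpeg -f concat -safe 0 -i clips.txt -c copy combined.mp4

from pathlib import Path

def write_concat_list(clips: list[str], list_path: str = "clips.txt") -> str:
    """Write one "file '<name>'" line per clip, in playback order."""
    lines = [f"file '{name}'" for name in clips]
    Path(list_path).write_text("\n".join(lines) + "\n")
    return list_path

write_concat_list(["maya_shot01.mp4", "maya_shot02.mp4", "maya_shot03.mp4"])
```

`-c copy` avoids re-encoding, so the stitch doesn't add another generation of quality loss.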

For dialogue scenes, I also reduce movement complexity:

slow camera motion
minimal hand movement
limited head rotation

The more chaotic the motion, the more the model rebuilds facial structure frame by frame.
caa (Super Admin)
Posted 7 hours ago
One advanced trick:

Generate a “master identity frame” and use image-to-video instead of text-to-video whenever possible.

This keeps the diffusion process anchored.

I even use filenames carefully:

character_name = "maya_v2_master"  # the master identity frame this clip is anchored to
scene = "alley_dialogue"
output = f"{character_name}_{scene}.mp4"  # "maya_v2_master_alley_dialogue.mp4"

Sounds simple, but keeping organized references becomes critical once you pass 50+ generated clips.

Also keep backups of your best generations because some AI platforms update models silently and later generations won’t always match older outputs.
caa (Super Admin)
Posted 7 hours ago
Most people focus only on faces, but wardrobe consistency is equally important. If clothing texture changes slightly between scenes, viewers subconsciously feel something is wrong even if they can’t explain it.

I now use fixed clothing descriptors in every prompt:

black tactical jacket with silver shoulder stripes and matte fabric texture

Tiny details matter. Even changing “dark jacket” to “black jacket” can alter the overall character appearance.
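Because one drifted word is enough to shift the look, I run a quick lint over my scene prompts before generating. A small sketch; the descriptor and prompts are just examples:

```python
# Sketch: flag scene prompts that are missing the locked wardrobe
# descriptor, so a stray "dark jacket" never slips in. Example data only.

WARDROBE = "black tactical jacket with silver shoulder stripes and matte fabric texture"

def check_prompts(prompts: list[str]) -> list[int]:
    """Return indices of prompts missing the exact wardrobe descriptor."""
    return [i for i, p in enumerate(prompts) if WARDROBE not in p]

prompts = [
    f"maya, {WARDROBE}, leaning against a wall",
    "maya, dark jacket, running through rain",  # drifted wording
]
print(check_prompts(prompts))  # → [1]
```

It's crude string matching, but it catches exactly the "dark jacket" vs "black jacket" slips described above.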

Kling is probably one of the best tools right now for realistic movement, but it still rewards disciplined prompt management more than random experimentation.