SpeechBlend provides accurate, real-time lip syncing in Unity.
SpeechBlend works by analyzing the audio coming from any Audio Source and using machine learning to predict realistic mouth shapes (visemes).
Currently the following viseme blendshape sets are supported:
- Daz Studio (Genesis 2/3/8)
- Character Creator 3/4
- iClone (v5.x/v6.x)
- Adobe Fuse (Mixamo Rigging)
- Apple AR Face
- Any character model with similar blendshapes to the above
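SpeechBlend's own components aren't documented on this page, but a quick way to check whether your model fits that last category is to list the blendshape names on its head mesh and compare them with the sets above. This is a minimal sketch using only standard Unity APIs; the `headMesh` field is an illustrative placeholder you would assign in the Inspector.

```csharp
using UnityEngine;

// Prints every blendshape name on a character's head mesh so you can check
// whether it exposes viseme shapes similar to the supported sets listed above.
public class BlendshapeLister : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer headMesh; // assign your head mesh in the Inspector

    private void Start()
    {
        Mesh mesh = headMesh.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            Debug.Log($"Blendshape {i}: {mesh.GetBlendShapeName(i)}");
        }
    }
}
```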
Now with WebGL Support!
To use SpeechBlend, just drop the component onto your character, select the voice Audio Source and head mesh blendshapes, and you're ready to go!
SpeechBlend can be used with just a single jaw joint or "mouth open" blendshape for simple mouth tracking with your audio file. For realistic lip syncing, it's recommended to use a character model with viseme blendshapes available.
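As a rough illustration of that simple mode (not SpeechBlend's actual viseme prediction, which uses machine learning), the sketch below estimates the loudness of the playing Audio Source each frame and maps it onto a single "mouth open" blendshape. The blendshape index and the sensitivity value are assumptions you would tune for your own character.

```csharp
using UnityEngine;

// Illustrative amplitude-driven "mouth open" tracking: sample the audio that is
// currently playing, estimate its loudness (RMS), and convert that into a
// 0-100 blendshape weight on the head mesh.
public class SimpleMouthOpen : MonoBehaviour
{
    [SerializeField] private AudioSource voiceSource;
    [SerializeField] private SkinnedMeshRenderer headMesh;
    [SerializeField] private int mouthOpenIndex = 0;    // index of your "mouth open" blendshape (assumption)
    [SerializeField] private float sensitivity = 400f;  // scales RMS into the 0-100 weight range (tune by ear)

    private readonly float[] samples = new float[256];

    private void Update()
    {
        voiceSource.GetOutputData(samples, 0); // most recent output samples, channel 0

        float sum = 0f;
        foreach (float s in samples)
        {
            sum += s * s;
        }
        float rms = Mathf.Sqrt(sum / samples.Length);

        headMesh.SetBlendShapeWeight(mouthOpenIndex, Mathf.Clamp(rms * sensitivity, 0f, 100f));
    }
}
```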
You can even lip-sync your own voice live with microphone input! Check out the included demo to see how.
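If you want to see how live input can reach the lip sync outside the included demo, the sketch below uses Unity's standard Microphone API to route the default microphone into the Audio Source that drives the lip sync. The 10-second looping buffer and 44.1 kHz sample rate are arbitrary choices for the sketch.

```csharp
using UnityEngine;

// Routes the default microphone into an AudioSource so that any lip-sync
// component listening to that source reacts to your live voice.
public class MicrophoneFeed : MonoBehaviour
{
    [SerializeField] private AudioSource voiceSource;

    private void Start()
    {
        // null = default microphone; record into a 10-second looping clip at 44.1 kHz
        voiceSource.clip = Microphone.Start(null, true, 10, 44100);
        voiceSource.loop = true;

        // Wait for the microphone to deliver its first samples before playback
        while (Microphone.GetPosition(null) <= 0) { }

        voiceSource.Play();
    }
}
```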
Many options are available to tweak the viseme prediction, so you can balance the realism of the result against its performance cost.
SpeechBlend LipSync
Price: $14.99
License agreement: Standard Unity Asset Store EULA
File size: 24.8 MB
Latest version: 1.2
Latest release date: Aug 14, 2023
Original Unity version: 2022.3.7