
aimodels-fyi

Posted on • Originally published at aimodels.fyi

A beginner's guide to the Live-Portrait model by Mbukerepo on Replicate

This is a simplified guide to an AI model called Live-Portrait, maintained by Mbukerepo. If you like this kind of analysis, you should join AImodels.fyi or follow us on Twitter.

Model overview

The live-portrait model, created by maintainer mbukerepo, is an efficient portrait animation system that allows users to animate a portrait image using a driving video. The model builds upon previous work like LivePortrait, AniPortrait, and Live Speech Portraits, providing a simplified and optimized approach to portrait animation.

Model inputs and outputs

The live-portrait model takes two main inputs: a portrait image and a driving video. The output is a generated animation in which the portrait follows the motion and facial expressions of the driving video.

Inputs

  • Input Image Path: A portrait image to be animated
  • Input Video Path: A driving video that will control the animation
  • Flag Do Crop Input: A boolean flag to determine whether the input image should be cropped
  • Flag Relative Input: A boolean flag to control whether the input motion is relative
  • Flag Pasteback: A boolean flag to control whether the generated animation should be pasted back onto the input image

Outputs

  • Output: The generated animation of the portrait image
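To make the input schema concrete, here is a minimal sketch of how these inputs might be assembled and passed to the model through the Replicate Python client. The snake_case parameter names are assumptions inferred from the input labels above, and the exact model slug and version should be checked on the model's Replicate page:

```python
# Hypothetical input payload for the live-portrait model on Replicate.
# Field names are assumptions derived from the documented inputs;
# consult the model page for the authoritative schema.
model_input = {
    "input_image_path": "https://example.com/portrait.jpg",   # portrait to animate
    "input_video_path": "https://example.com/driving.mp4",    # motion source
    "flag_do_crop_input": True,    # crop the portrait before animating
    "flag_relative_input": True,   # treat driving motion as relative
    "flag_pasteback": True,        # paste the animation back onto the input image
}

# With the client installed (pip install replicate) and REPLICATE_API_TOKEN
# set in the environment, the call would look roughly like:
#
#   import replicate
#   output = replicate.run("mbukerepo/live-portrait", input=model_input)
#   print(output)  # URL of the generated animation

print(sorted(model_input))
```

The three boolean flags map directly onto the options listed above, so toggling them in this dictionary is how you would control cropping, relative motion, and pasteback behavior.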

Capabilities

The live-portrait model is capable o...

Click here to read the full guide to Live-Portrait
