## Generate lip sync from uploaded media

`lipsync.generate_with_media(**kwargs: LipsyncGenerateWithMediaParams) -> LipsyncGenerate`

**post** `/v1/lipsync/generate-with-media`

Starts a lip sync job by uploading one video file and one audio file as multipart form-data.

### Parameters

- `audio: FileTypes`

  Target audio file.

- `video: FileTypes`

  Source video file.

- `disable_active_speaker_detection: Optional[bool]`

  Disable active speaker detection and use max-face lipsync preprocessing.

- `model: Optional[Literal["lipsync-2"]]`

  Optional model selector.

  - `"lipsync-2"`

- `reference_id: Optional[str]`

  Optional client-provided identifier for later retrieval.

### Returns

- `class LipsyncGenerate: …`

  - `request_id: str`

    Identifier of the created lip sync request.

  - `status: Literal["success"]`

    Current state of the newly created request.

    - `"success"`

### Example

```python
import os

from chamelaion import Chamelaion

client = Chamelaion(
    api_key=os.environ.get("CHAMELAION_API_KEY"),  # This is the default and can be omitted
)

lipsync_generate = client.lipsync.generate_with_media(
    audio=b"(binary)",
    video=b"(binary)",
    disable_active_speaker_detection=False,
    model="lipsync-2",
    reference_id="upload-demo-01",
)
print(lipsync_generate.request_id)
```

#### Response

```json
{
  "status": "success",
  "request_id": "3b7df3e8-f578-44de-b4f5-5f58dd6b89e0"
}
```
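
For environments where the SDK is unavailable, the same call can be sketched as a raw `multipart/form-data` POST using only the Python standard library. This is a minimal sketch, not the SDK's actual implementation: the base URL (`https://api.chamelaion.example`), the `Bearer` authorization scheme, and the filenames/content types are assumptions; only the path `/v1/lipsync/generate-with-media` and the field names come from this reference.

```python
import io
import os
import urllib.request
import uuid


def build_multipart(fields: dict, files: dict) -> tuple[str, bytes]:
    """Encode plain form fields and file parts as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(f'Content-Disposition: form-data; name="{name}"\r\n\r\n'.encode())
        buf.write(str(value).encode() + b"\r\n")
    for name, (filename, data, content_type) in files.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(
            f'Content-Disposition: form-data; name="{name}"; '
            f'filename="{filename}"\r\n'.encode()
        )
        buf.write(f"Content-Type: {content_type}\r\n\r\n".encode())
        buf.write(data + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())  # closing boundary
    return boundary, buf.getvalue()


boundary, body = build_multipart(
    fields={"model": "lipsync-2", "reference_id": "upload-demo-01"},
    files={
        # Filenames and content types here are placeholders.
        "video": ("input.mp4", b"(binary)", "video/mp4"),
        "audio": ("input.wav", b"(binary)", "audio/wav"),
    },
)

req = urllib.request.Request(
    # Base URL is an assumption; only the path is documented above.
    "https://api.chamelaion.example/v1/lipsync/generate-with-media",
    data=body,
    headers={
        "Authorization": f"Bearer {os.environ.get('CHAMELAION_API_KEY', '')}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted in this sketch.
```

A real call would read the JSON response body and extract `request_id`, mirroring the SDK example above.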