Merged
30 changes: 16 additions & 14 deletions README.md
@@ -23,7 +23,7 @@ For complete documentation, guides, and examples, visit:
 
 ## Quick Start
 
-### Process Files
+### Image Editing (Process API)
 
 ```python
 import asyncio
@@ -32,29 +32,30 @@ from decart import DecartClient, models
 
 async def main():
     async with DecartClient(api_key=os.getenv("DECART_API_KEY")) as client:
-        # Generate a video from text
+        # Edit an image
         result = await client.process({
-            "model": models.video("lucy-pro-t2v"),
-            "prompt": "A cat walking in a lego world",
+            "model": models.image("lucy-pro-i2i"),
+            "prompt": "Apply a painterly oil-on-canvas look while preserving the composition",
+            "data": open("input.png", "rb"),
         })
 
         # Save the result
-        with open("output.mp4", "wb") as f:
+        with open("output.png", "wb") as f:
             f.write(result)
 
 asyncio.run(main())
 ```
 
-### Async Processing (Queue API)
+### Video Editing (Queue API)
 
-For video generation jobs, use the queue API to submit jobs and poll for results:
+For video editing jobs, use the queue API to submit jobs and poll for results:
 
 ```python
 async with DecartClient(api_key=os.getenv("DECART_API_KEY")) as client:
     # Submit and poll automatically
     result = await client.queue.submit_and_poll({
-        "model": models.video("lucy-pro-t2v"),
-        "prompt": "A cat playing piano",
+        "model": models.video("lucy-pro-v2v"),
+        "prompt": "Restyle this footage with anime shading and vibrant neon highlights",
+        "data": open("input.mp4", "rb"),
        "on_status_change": lambda job: print(f"Status: {job.status}"),
     })
 
@@ -71,8 +72,9 @@ Or manage the polling manually:
 async with DecartClient(api_key=os.getenv("DECART_API_KEY")) as client:
     # Submit the job
     job = await client.queue.submit({
-        "model": models.video("lucy-pro-t2v"),
-        "prompt": "A cat playing piano",
+        "model": models.video("lucy-pro-v2v"),
+        "prompt": "Add cinematic teal-and-orange grading and gentle film grain",
+        "data": open("input.mp4", "rb"),
     })
     print(f"Job ID: {job.job_id}")
 
@@ -147,8 +149,8 @@ python test_ui.py
 Then open http://localhost:7860 in your browser.
 
 The UI provides tabs for:
-- **Image Generation** - Text-to-image and image-to-image transformations
-- **Video Generation** - Text-to-video, image-to-video, and video-to-video
+- **Image Editing** - Image-to-image edits
+- **Video Editing** - Video-to-video edits
 - **Video Restyle** - Restyle videos using text prompts or reference images
 - **Tokens** - Create short-lived client tokens
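The "manage the polling manually" flow above boils down to a generic poll-until-terminal loop. A minimal, self-contained sketch of that pattern (`poll_until_done`, `make_fake_status`, and the status names are illustrative stand-ins, not the SDK's API; real code would pass `client.queue.status` wrapped for a concrete `job_id` and a multi-second interval):

```python
import asyncio


async def poll_until_done(get_status, interval=2.0, terminal=("completed", "failed")):
    """Call get_status() repeatedly until it returns a terminal status."""
    while True:
        status = await get_status()
        if status in terminal:
            return status
        await asyncio.sleep(interval)


def make_fake_status():
    """Simulate a job that completes on the third poll."""
    states = iter(["queued", "processing", "completed"])

    async def status():
        return next(states)

    return status


print(asyncio.run(poll_until_done(make_fake_status(), interval=0)))  # → completed
```

The terminal-status tuple is the key design choice: it lets the same loop stop on failure as well as success instead of spinning forever.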
29 changes: 16 additions & 13 deletions decart/client.py
@@ -19,7 +19,7 @@
 
 class DecartClient:
     """
-    Decart API client for video and image generation/transformation.
+    Decart API client for image editing, video editing, and realtime workflows.
 
     Args:
         api_key: Your Decart API key. Defaults to the DECART_API_KEY environment variable.
@@ -35,16 +35,18 @@ class DecartClient:
     # Option 2: Using DECART_API_KEY environment variable
     client = DecartClient()
 
-    # Image generation (sync) - use process()
+    # Image editing (sync) - use process()
     image = await client.process({
-        "model": models.image("lucy-pro-t2i"),
-        "prompt": "A serene lake at sunset",
+        "model": models.image("lucy-pro-i2i"),
+        "prompt": "Apply a painterly oil-on-canvas look while preserving the composition",
+        "data": open("input.png", "rb"),
     })
 
-    # Video generation (async) - use queue
+    # Video editing (async) - use queue
     result = await client.queue.submit_and_poll({
-        "model": models.video("lucy-pro-t2v"),
-        "prompt": "A serene lake at sunset",
+        "model": models.video("lucy-pro-v2v"),
+        "prompt": "Restyle this footage with anime shading and vibrant neon highlights",
+        "data": open("input.mp4", "rb"),
     })
     ```
     """
@@ -75,15 +77,16 @@ def __init__(
     @property
     def queue(self) -> QueueClient:
         """
-        Queue client for async job-based video generation.
+        Queue client for async video editing jobs.
         Only video models support the queue API.
 
         Example:
            ```python
            # Submit and poll automatically
            result = await client.queue.submit_and_poll({
-                "model": models.video("lucy-pro-t2v"),
-                "prompt": "A cat playing piano",
+                "model": models.video("lucy-pro-v2v"),
+                "prompt": "Restyle this footage with anime shading and vibrant neon highlights",
+                "data": open("input.mp4", "rb"),
            })
 
            # Or submit and poll manually
@@ -135,16 +138,16 @@ async def __aexit__(self, exc_type, exc_val, exc_tb):
 
     async def process(self, options: dict[str, Any]) -> bytes:
         """
-        Process image generation/transformation synchronously.
+        Process image editing synchronously.
         Only image models support the process API.
 
-        For video generation, use the queue API instead:
+        For video editing, use the queue API instead:
            result = await client.queue.submit_and_poll({...})
 
        Args:
            options: Processing options including model and inputs
                - model: ImageModelDefinition from models.image()
-                - prompt: Text prompt for generation
+                - prompt: Text instructions describing the requested edit
                - Additional model-specific inputs
 
        Returns:
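The `__aexit__` hunk above belongs to the async context-manager protocol that makes `async with DecartClient(...)` release its resources on exit. A stand-alone sketch of that shape (`FakeClient` is illustrative, not the SDK class; the real client closes an HTTP session rather than flipping a flag):

```python
import asyncio


class FakeClient:
    """Illustrative async context manager mirroring the DecartClient shape."""

    def __init__(self):
        self.closed = False

    async def __aenter__(self):
        # A real client would lazily create its HTTP session here
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        # Runs on normal exit and on exceptions, so cleanup always happens
        self.closed = True


async def main():
    async with FakeClient() as client:
        assert not client.closed  # still open inside the block
    return client.closed


print(asyncio.run(main()))  # → True
```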
80 changes: 1 addition & 79 deletions decart/models.py
@@ -6,16 +6,12 @@
 
 RealTimeModels = Literal["mirage", "mirage_v2", "lucy_v2v_720p_rt", "lucy_2_rt", "live_avatar"]
 VideoModels = Literal[
-    "lucy-dev-i2v",
-    "lucy-fast-v2v",
-    "lucy-pro-t2v",
-    "lucy-pro-i2v",
     "lucy-pro-v2v",
     "lucy-motion",
     "lucy-restyle-v2v",
     "lucy-2-v2v",
 ]
-ImageModels = Literal["lucy-pro-t2i", "lucy-pro-i2i"]
+ImageModels = Literal["lucy-pro-i2i"]
 Model = Literal[RealTimeModels, VideoModels, ImageModels]
 
 # Type variable for model name
@@ -46,24 +42,6 @@ class ModelDefinition(DecartBaseModel, Generic[ModelT]):
     """Type alias for model definitions that support realtime streaming."""
 
 
-class TextToVideoInput(BaseModel):
-    prompt: str = Field(..., min_length=1, max_length=1000)
-    seed: Optional[int] = None
-    resolution: Optional[str] = None
-    orientation: Optional[str] = None
-
-
-class ImageToVideoInput(DecartBaseModel):
-    prompt: str = Field(
-        ...,
-        min_length=1,
-        max_length=1000,
-    )
-    data: FileInput
-    seed: Optional[int] = None
-    resolution: Optional[str] = None
-
-
 class VideoToVideoInput(DecartBaseModel):
     prompt: str = Field(
         ...,
@@ -128,17 +106,6 @@ class VideoEdit2Input(DecartBaseModel):
     enhance_prompt: Optional[bool] = None
 
 
-class TextToImageInput(BaseModel):
-    prompt: str = Field(
-        ...,
-        min_length=1,
-        max_length=1000,
-    )
-    seed: Optional[int] = None
-    resolution: Optional[str] = None
-    orientation: Optional[str] = None
-
-
 class ImageToImageInput(DecartBaseModel):
     prompt: str = Field(
         ...,
@@ -195,38 +162,6 @@ class ImageToImageInput(DecartBaseModel):
         ),
     },
     "video": {
-        "lucy-dev-i2v": ModelDefinition(
-            name="lucy-dev-i2v",
-            url_path="/v1/generate/lucy-dev-i2v",
-            fps=25,
-            width=1280,
-            height=704,
-            input_schema=ImageToVideoInput,
-        ),
-        "lucy-fast-v2v": ModelDefinition(
-            name="lucy-fast-v2v",
-            url_path="/v1/generate/lucy-fast-v2v",
-            fps=25,
-            width=1280,
-            height=704,
-            input_schema=VideoToVideoInput,
-        ),
-        "lucy-pro-t2v": ModelDefinition(
-            name="lucy-pro-t2v",
-            url_path="/v1/generate/lucy-pro-t2v",
-            fps=25,
-            width=1280,
-            height=704,
-            input_schema=TextToVideoInput,
-        ),
-        "lucy-pro-i2v": ModelDefinition(
-            name="lucy-pro-i2v",
-            url_path="/v1/generate/lucy-pro-i2v",
-            fps=25,
-            width=1280,
-            height=704,
-            input_schema=ImageToVideoInput,
-        ),
         "lucy-pro-v2v": ModelDefinition(
             name="lucy-pro-v2v",
             url_path="/v1/generate/lucy-pro-v2v",
@@ -261,14 +196,6 @@ class ImageToImageInput(DecartBaseModel):
         ),
     },
     "image": {
-        "lucy-pro-t2i": ModelDefinition(
-            name="lucy-pro-t2i",
-            url_path="/v1/generate/lucy-pro-t2i",
-            fps=25,
-            width=1280,
-            height=704,
-            input_schema=TextToImageInput,
-        ),
         "lucy-pro-i2i": ModelDefinition(
             name="lucy-pro-i2i",
             url_path="/v1/generate/lucy-pro-i2i",
@@ -297,11 +224,7 @@ def video(model: VideoModels) -> VideoModelDefinition:
     Video models only support the queue API.
 
     Available models:
-    - "lucy-pro-t2v" - Text-to-video
-    - "lucy-pro-i2v" - Image-to-video
     - "lucy-pro-v2v" - Video-to-video
-    - "lucy-dev-i2v" - Image-to-video (Dev quality)
-    - "lucy-fast-v2v" - Video-to-video (Fast quality)
     - "lucy-motion" - Image-to-motion-video
     - "lucy-restyle-v2v" - Video-to-video with prompt or reference image
     - "lucy-2-v2v" - Video-to-video editing (long-form, 720p)
@@ -318,7 +241,6 @@ def image(model: ImageModels) -> ImageModelDefinition:
     Image models only support the process (sync) API.
 
     Available models:
-    - "lucy-pro-t2i" - Text-to-image
     - "lucy-pro-i2i" - Image-to-image
     """
     try:
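Every input schema that survives the diff constrains `prompt` to 1-1000 characters via a pydantic `Field(..., min_length=1, max_length=1000)`. A dependency-free sketch of the same rule (the `EditInput` class name and error message are illustrative, not part of the SDK):

```python
from dataclasses import dataclass


@dataclass
class EditInput:
    """Stdlib stand-in for the pydantic input schemas shown in the diff."""

    prompt: str

    def __post_init__(self):
        # Mirrors Field(..., min_length=1, max_length=1000)
        if not 1 <= len(self.prompt) <= 1000:
            raise ValueError("prompt must be between 1 and 1000 characters")


EditInput(prompt="Add film grain")  # accepted
try:
    EditInput(prompt="")  # rejected: empty prompt
except ValueError as e:
    print(e)  # → prompt must be between 1 and 1000 characters
```

Validating at construction time, as pydantic does, means an over-long or empty prompt fails locally before any bytes are uploaded.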
16 changes: 9 additions & 7 deletions decart/queue/client.py
@@ -25,7 +25,7 @@
 
 class QueueClient:
     """
-    Queue client for async job-based video generation.
+    Queue client for async job-based video editing.
     Only video models support the queue API.
 
     Jobs are submitted and processed asynchronously, allowing you to
@@ -37,15 +37,17 @@ class QueueClient:
 
        # Option 1: Submit and poll automatically
        result = await client.queue.submit_and_poll({
-            "model": models.video("lucy-pro-t2v"),
-            "prompt": "A cat playing piano",
+            "model": models.video("lucy-pro-v2v"),
+            "prompt": "Restyle this clip with anime shading and saturated colors",
+            "data": open("input.mp4", "rb"),
            "on_status_change": lambda job: print(f"Status: {job.status}"),
        })
 
        # Option 2: Submit and poll manually
        job = await client.queue.submit({
-            "model": models.video("lucy-pro-t2v"),
-            "prompt": "A cat playing piano",
+            "model": models.video("lucy-pro-v2v"),
+            "prompt": "Add cinematic teal-and-orange grading and subtle film grain",
+            "data": open("input.mp4", "rb"),
        })
        status = await client.queue.status(job.job_id)
        result = await client.queue.result(job.job_id)
@@ -60,14 +62,14 @@ async def _get_session(self) -> aiohttp.ClientSession:
 
     async def submit(self, options: dict[str, Any]) -> JobSubmitResponse:
         """
-        Submit a video generation job to the queue for async processing.
+        Submit a video editing job to the queue for async processing.
        Only video models are supported.
        Returns immediately with job_id and initial status.
 
        Args:
            options: Submit options including model and inputs
                - model: VideoModelDefinition from models.video()
-                - prompt: Text instructions describing the requested edit
+                - prompt: Text instructions describing the requested edit
                - Additional model-specific inputs
 
        Returns:
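The submit/status/result contract documented above can be exercised end to end against a stand-in queue. `FakeQueue`, its status values, and the returned bytes are all illustrative, not the SDK's real behavior; the point is the three-call shape of Option 2:

```python
import asyncio
from types import SimpleNamespace


class FakeQueue:
    """Stand-in for client.queue: the job completes after two status polls."""

    def __init__(self):
        self._polls = 0

    async def submit(self, options):
        # Returns immediately with a job handle, like QueueClient.submit
        return SimpleNamespace(job_id="job-123", status="queued")

    async def status(self, job_id):
        self._polls += 1
        return "completed" if self._polls >= 2 else "processing"

    async def result(self, job_id):
        return b"edited-video-bytes"


async def run_job():
    queue = FakeQueue()
    job = await queue.submit({"prompt": "Add film grain"})
    # Manual polling loop; real code would sleep a few seconds between polls
    while await queue.status(job.job_id) != "completed":
        await asyncio.sleep(0)
    return await queue.result(job.job_id)


print(asyncio.run(run_job()))  # prints b'edited-video-bytes'
```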
15 changes: 11 additions & 4 deletions examples/README.md
@@ -20,9 +20,10 @@ export DECART_API_KEY="your-api-key-here"
 
 ### Process API
 
-- **`process_video.py`** - Generate and transform videos
-- **`process_image.py`** - Generate and transform images
+- **`process_video.py`** - Edit a local video with `lucy-pro-v2v`
+- **`process_image.py`** - Edit the bundled example image with `lucy-pro-i2i`
 - **`process_url.py`** - Transform videos from URLs
+- **`queue_image_example.py`** - Turn the bundled example image into motion with `lucy-motion`
 
 ### Realtime API
 
@@ -37,13 +38,19 @@ pip install decart[realtime]
 
 ### Running Examples
 
+`process_image.py` and `queue_image_example.py` use the bundled `examples/files/image.png` asset.
+`process_video.py` expects you to place a local video at `examples/assets/example_video.mp4` first.
+
 ```bash
-# Generate and transform videos
+# Edit a local video (requires examples/assets/example_video.mp4)
 python examples/process_video.py
 
-# Generate and transform images
+# Edit the bundled example image
 python examples/process_image.py
 
+# Turn the bundled example image into motion
+python examples/queue_image_example.py
+
 # Transform video from URL
 python examples/process_url.py
 