diff --git a/common_x/README.md b/common_x/README.md
deleted file mode 100644
index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000
--- a/common_x/README.md
+++ /dev/null
@@ -1,138 +0,0 @@
-# 🛠️ helpers/ - Third-Party AI Tools Adapted for ADUC-SDR
-
-This folder contains adapted implementations of third-party AI models and utilities that serve as low-level "specialists" or "tools" for the ADUC-SDR architecture.
-
-**IMPORTANT:** The content of this folder is authored by its respective original creators and developers. This folder is **NOT PART** of the main ADUC-SDR project in terms of its novel architecture. It serves as a repository for the **direct, modified dependencies** that the `DeformesXDEngines` (the stages of the ADUC-SDR "rocket") invoke to perform specific tasks (image, video, and audio generation).
-
-The modifications made to the files in this folder mainly aim at:
-1. **Interface Adaptation:** Standardizing the interfaces so they fit the ADUC-SDR orchestration flow.
-2. **Resource Management:** Integrating model loading/unloading logic (GPU management) and configuration via YAML files.
-3. **Pipeline Optimization:** Adjusting the pipelines to accept more efficient input formats (e.g., pre-encoded tensors instead of media paths, skipping redundant encode/decode steps).
-
----
-
-## 📄 Licensing
-
-The original content of the projects listed below is licensed under the **Apache 2.0 License**, or another license specified by the original authors. All modifications and the use of these files within the `helpers/` structure of the ADUC-SDR project comply with the terms of the **Apache 2.0 License**.
-
-The original project licenses can be found at their respective sources or in the `incl_licenses/` subdirectories inside each adapted module.
-
----
-
-## 🛠️ Helper APIs and Usage Guide
-
-This section details how each helper (specialist agent) should be used within the ADUC-SDR ecosystem. All agents are instantiated as **singletons** in `hardware_manager.py` to guarantee centralized management of GPU resources.
-
-### **gemini_helpers.py (GeminiAgent)**
-
-* **Purpose:** Acts as the "Adaptive Synthesis Oracle", responsible for all natural-language processing tasks such as storyboard creation, prompt generation, and narrative decision-making.
-* **Singleton Instance:** `gemini_agent_singleton`
-* **Constructor:** `GeminiAgent()`
-    * Reads `configs/gemini_config.yaml` for the model name, inference parameters, and prompt-template paths. The API key is read from the `GEMINI_API_KEY` environment variable.
-* **Public Methods:**
-    * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])`
-        * **Inputs:**
-            * `prompt`: The overall idea of the film (string).
-            * `num_keyframes`: The number of scenes to generate (int).
-            * `ref_image_paths`: List of paths to the reference images (list[str]).
-        * **Output:** `tuple[list[str], str]` (A tuple containing the list of storyboard strings and a textual report of the operation).
-    * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])`
-        * **Inputs:**
-            * `storyboard`: The list of generated storyboard strings.
-            * `base_image_paths`: Base reference images (list[str]).
-            * `pool_image_paths`: The "image pool" to select from (list[str]).
-        * **Output:** `tuple[list[str], str]` (A tuple containing the list of selected image paths and a textual report).
-    * `get_anticipatory_keyframe_prompt(...)`
-        * **Inputs:** Narrative and visual context used to generate an image prompt.
-        * **Output:** `tuple[str, str]` (A tuple containing the prompt generated for the image model and a textual report).
-    * `get_initial_motion_prompt(...)`
-        * **Inputs:** Narrative and visual context for the first video transition.
-        * **Output:** `tuple[str, str]` (A tuple containing the generated motion prompt and a textual report).
-    * `get_transition_decision(...)`
-        * **Inputs:** Narrative and visual context for an intermediate video transition.
-        * **Output:** `tuple[dict, str]` (A tuple containing a dictionary `{"transition_type": "...", "motion_prompt": "..."}` and a textual report).
-    * `generate_audio_prompts(...)`
-        * **Inputs:** Global narrative context.
-        * **Output:** `tuple[dict, str]` (A tuple containing a dictionary `{"music_prompt": "...", "sfx_prompt": "..."}` and a textual report).
-
-### **flux_kontext_helpers.py (FluxPoolManager)**
-
-* **Purpose:** Specialist in high-quality image (keyframe) generation using the FluxKontext pipeline. Manages a pool of workers to optimize the use of multiple GPUs.
-* **Singleton Instance:** `flux_kontext_singleton`
-* **Constructor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)`
-    * Reads `configs/flux_config.yaml`.
-* **Public Method:**
-    * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)`
-        * **Inputs:**
-            * `prompt`: Text prompt guiding the generation (string).
-            * `reference_images`: List of `PIL.Image` objects used as visual reference.
-            * `width`, `height`: Output image dimensions (int).
-            * `seed`: Seed for reproducibility (int).
-            * `callback`: Optional callback function for monitoring progress.
-        * **Output:** `PIL.Image.Image` (The generated image object).
-
-### **dreamo_helpers.py (DreamOAgent)**
-
-* **Purpose:** Specialist in high-quality image (keyframe) generation using the DreamO pipeline, with advanced reference-based editing and styling capabilities.
-* **Singleton Instance:** `dreamo_agent_singleton`
-* **Constructor:** `DreamOAgent(device_id: str = None)`
-    * Reads `configs/dreamo_config.yaml`.
-* **Public Method:**
-    * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)`
-        * **Inputs:**
-            * `prompt`: Text prompt guiding the generation (string).
-            * `reference_images`: List of `PIL.Image` objects used as visual reference. The internal logic assigns the first image as `style` and the remaining ones as `ip`.
-            * `width`, `height`: Output image dimensions (int).
-        * **Output:** `PIL.Image.Image` (The generated image object).
-
-### **ltx_manager_helpers.py (LtxPoolManager)**
-
-* **Purpose:** Specialist in generating video fragments in latent space using the LTX-Video pipeline. Manages a pool of workers to optimize the use of multiple GPUs.
-* **Singleton Instance:** `ltx_manager_singleton`
-* **Constructor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)`
-    * Reads `ltx_global_config_file` and `ltx_model_config_file` to configure the pipeline.
-* **Public Method:**
-    * `generate_latent_fragment(**kwargs)`
-        * **Inputs:** Dictionary of keyword arguments (`kwargs`) containing all LTX pipeline parameters, including:
-            * `height`, `width`: Video dimensions (int).
-            * `video_total_frames`: Total number of frames to generate (int).
-            * `video_fps`: Frames per second (int).
-            * `motion_prompt`: Motion prompt (string).
-            * `conditioning_items_data`: List of `LatentConditioningItem` objects holding the conditioning latent tensors.
-            * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc.
-        * **Output:** `tuple[torch.Tensor, tuple]` (A tuple containing the generated latent tensor and the padding values used).
-
-### **mmaudio_helper.py (MMAudioAgent)**
-
-* **Purpose:** Specialist in generating audio for a given video fragment.
-* **Singleton Instance:** `mmaudio_agent_singleton`
-* **Constructor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)`
-    * Reads `configs/mmaudio_config.yaml`.
-* **Public Method:**
-    * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)`
-        * **Inputs:**
-            * `video_path`: Path to the silent video file (string).
-            * `prompt`: Text prompt guiding the audio generation (string).
-            * `negative_prompt`: Negative prompt for the audio (string).
-            * `duration_seconds`: Exact duration of the video (float).
-        * **Output:** `str` (The path to the new video file with the audio track merged in).
-
-
-### SeedVR2: https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main
-
----
-
-## 🔗 Original Projects and Attributions
-(The attributions and licenses section remains the same as previously defined.)
-
-### DreamO
-* **Original Repository:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO)
-...
-
-### LTX-Video
-* **Original Repository:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video)
-...
-
-### MMAudio
-* **Original Repository:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio)
-...
\ No newline at end of file
diff --git a/common_x/__init__.py b/common_x/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/common_x/cache.py b/common_x/cache.py
deleted file mode 100644
index 89592fe8747a0b68b8553729abe908c6f06a5aa5..0000000000000000000000000000000000000000
--- a/common_x/cache.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
-# //
-# // Licensed under the Apache License, Version 2.0 (the "License");
-# // you may not use this file except in compliance with the License.
-# // You may obtain a copy of the License at
-# //
-# // http://www.apache.org/licenses/LICENSE-2.0
-# //
-# // Unless required by applicable law or agreed to in writing, software
-# // distributed under the License is distributed on an "AS IS" BASIS,
-# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# // See the License for the specific language governing permissions and
-# // limitations under the License.
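As a usage illustration for the helper APIs documented in the README above, here is a minimal, hypothetical orchestration sketch. The `hardware_manager` import path, the file names under `refs/`, and the shortcut of reusing each storyboard line as an image prompt are assumptions for illustration; only the method signatures come from the README.

```python
from hardware_manager import gemini_agent_singleton, flux_kontext_singleton  # assumed import path
from PIL import Image

ref_paths = ["refs/keeper.png", "refs/lighthouse.png"]   # hypothetical reference images
references = [Image.open(p) for p in ref_paths]

# 1) Ask the GeminiAgent for a storyboard: one scene description per keyframe.
storyboard, report = gemini_agent_singleton.generate_storyboard(
    prompt="A lighthouse keeper discovers a glowing shell on the rocks",
    num_keyframes=4,
    ref_image_paths=ref_paths,
)

# 2) Render one keyframe per scene with the FluxKontext worker pool.
#    (In the real flow, get_anticipatory_keyframe_prompt would refine each prompt first.)
keyframes = [
    flux_kontext_singleton.generate_image(
        prompt=scene, reference_images=references, width=1024, height=576, seed=42
    )
    for scene in storyboard
]
```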
- -from typing import Callable - - -class Cache: - """Caching reusable args for faster inference""" - - def __init__(self, disable=False, prefix="", cache=None): - self.cache = cache if cache is not None else {} - self.disable = disable - self.prefix = prefix - - def __call__(self, key: str, fn: Callable): - if self.disable: - return fn() - - key = self.prefix + key - try: - result = self.cache[key] - except KeyError: - result = fn() - self.cache[key] = result - return result - - def namespace(self, namespace: str): - return Cache( - disable=self.disable, - prefix=self.prefix + namespace + ".", - cache=self.cache, - ) - - def get(self, key: str): - key = self.prefix + key - return self.cache[key] diff --git a/common_x/config.py b/common_x/config.py deleted file mode 100644 index f963e8229b8352ef514422609bcbaf9b8c761b15..0000000000000000000000000000000000000000 --- a/common_x/config.py +++ /dev/null @@ -1,110 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Configuration utility functions -""" - -import importlib -from typing import Any, Callable, List, Union -from omegaconf import DictConfig, ListConfig, OmegaConf - -OmegaConf.register_new_resolver("eval", eval) - - -def load_config(path: str, argv: List[str] = None) -> Union[DictConfig, ListConfig]: - """ - Load a configuration. Will resolve inheritance. - """ - config = OmegaConf.load(path) - if argv is not None: - config_argv = OmegaConf.from_dotlist(argv) - config = OmegaConf.merge(config, config_argv) - config = resolve_recursive(config, resolve_inheritance) - return config - - -def resolve_recursive( - config: Any, - resolver: Callable[[Union[DictConfig, ListConfig]], Union[DictConfig, ListConfig]], -) -> Any: - config = resolver(config) - if isinstance(config, DictConfig): - for k in config.keys(): - v = config.get(k) - if isinstance(v, (DictConfig, ListConfig)): - config[k] = resolve_recursive(v, resolver) - if isinstance(config, ListConfig): - for i in range(len(config)): - v = config.get(i) - if isinstance(v, (DictConfig, ListConfig)): - config[i] = resolve_recursive(v, resolver) - return config - - -def resolve_inheritance(config: Union[DictConfig, ListConfig]) -> Any: - """ - Recursively resolve inheritance if the config contains: - __inherit__: path/to/parent.yaml or a ListConfig of such paths. - """ - if isinstance(config, DictConfig): - inherit = config.pop("__inherit__", None) - - if inherit: - inherit_list = inherit if isinstance(inherit, ListConfig) else [inherit] - - parent_config = None - for parent_path in inherit_list: - assert isinstance(parent_path, str) - parent_config = ( - load_config(parent_path) - if parent_config is None - else OmegaConf.merge(parent_config, load_config(parent_path)) - ) - - if len(config.keys()) > 0: - config = OmegaConf.merge(parent_config, config) - else: - config = parent_config - return config - - -def import_item(path: str, name: str) -> Any: - """ - Import a python item. 
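The `Cache` class above is a small string-keyed memoizer with namespacing. A minimal sketch of its behavior (the `common_x.cache` import path simply mirrors the deleted file's location):

```python
from common_x.cache import Cache

cache = Cache()

def expensive_setup():
    print("computing...")
    return {"weights": [1, 2, 3]}

a = cache("setup", expensive_setup)   # runs expensive_setup and stores the result
b = cache("setup", expensive_setup)   # cache hit: expensive_setup is not called again
assert a is b

# namespace() prefixes keys while sharing the same underlying dict.
text_cache = cache.namespace("text_encoder")
text_cache("tokens", lambda: [101, 102])
assert cache.get("text_encoder.tokens") == [101, 102]

# A disabled cache always recomputes (handy for tests or debugging).
Cache(disable=True)("setup", expensive_setup)  # prints "computing..." again
```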
Example: import_item("path.to.file", "MyClass") -> MyClass - """ - return getattr(importlib.import_module(path), name) - - -def create_object(config: DictConfig) -> Any: - """ - Create an object from config. - The config is expected to contains the following: - __object__: - path: path.to.module - name: MyClass - args: as_config | as_params (default to as_config) - """ - item = import_item( - path=config.__object__.path, - name=config.__object__.name, - ) - args = config.__object__.get("args", "as_config") - if args == "as_config": - return item(config) - if args == "as_params": - config = OmegaConf.to_object(config) - config.pop("__object__") - return item(**config) - raise NotImplementedError(f"Unknown args type: {args}") \ No newline at end of file diff --git a/common_x/decorators.py b/common_x/decorators.py deleted file mode 100644 index 332a32d7b838cf7f8be902b9ae4895bad5edcd2e..0000000000000000000000000000000000000000 --- a/common_x/decorators.py +++ /dev/null @@ -1,147 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Decorators. -""" - -import functools -import threading -import time -from typing import Callable -import torch - -from common.distributed import barrier_if_distributed, get_global_rank, get_local_rank -from common.logger import get_logger - -logger = get_logger(__name__) - - -def log_on_entry(func: Callable) -> Callable: - """ - Functions with this decorator will log the function name at entry. - When using multiple decorators, this must be applied innermost to properly capture the name. - """ - - def log_on_entry_wrapper(*args, **kwargs): - logger.info(f"Entering {func.__name__}") - return func(*args, **kwargs) - - return log_on_entry_wrapper - - -def barrier_on_entry(func: Callable) -> Callable: - """ - Functions with this decorator will start executing when all ranks are ready to enter. - """ - - def barrier_on_entry_wrapper(*args, **kwargs): - barrier_if_distributed() - return func(*args, **kwargs) - - return barrier_on_entry_wrapper - - -def _conditional_execute_wrapper_factory(execute: bool, func: Callable) -> Callable: - """ - Helper function for local_rank_zero_only and global_rank_zero_only. - """ - - def conditional_execute_wrapper(*args, **kwargs): - # Only execute if needed. - result = func(*args, **kwargs) if execute else None - # All GPUs must wait. - barrier_if_distributed() - # Return results. - return result - - return conditional_execute_wrapper - - -def _asserted_wrapper_factory(condition: bool, func: Callable, err_msg: str = "") -> Callable: - """ - Helper function for some functions with special constraints, - especially functions called by other global_rank_zero_only / local_rank_zero_only ones, - in case they are wrongly invoked in other scenarios. 
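`import_item` and `create_object` give a config-driven way to instantiate classes. A small sketch, using a standard-library class so it runs standalone (the `common_x.config` path mirrors the deleted file):

```python
from omegaconf import OmegaConf
from common_x.config import create_object, import_item

# import_item resolves "module.path" + attribute name to a Python object.
Counter = import_item("collections", "Counter")

# create_object instantiates whatever the __object__ block points at.
# With args: as_params, every remaining key is passed as a keyword argument.
cfg = OmegaConf.create({
    "__object__": {"path": "collections", "name": "Counter", "args": "as_params"},
    "a": 2,
    "b": 1,
})
print(create_object(cfg))  # Counter({'a': 2, 'b': 1})

# With the default args: as_config, the constructor receives the whole DictConfig,
# which is how the model classes in this codebase appear to consume their configs.
```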
- """ - - def asserted_execute_wrapper(*args, **kwargs): - assert condition, err_msg - result = func(*args, **kwargs) - return result - - return asserted_execute_wrapper - - -def local_rank_zero_only(func: Callable) -> Callable: - """ - Functions with this decorator will only execute on local rank zero. - """ - return _conditional_execute_wrapper_factory(get_local_rank() == 0, func) - - -def global_rank_zero_only(func: Callable) -> Callable: - """ - Functions with this decorator will only execute on global rank zero. - """ - return _conditional_execute_wrapper_factory(get_global_rank() == 0, func) - - -def assert_only_global_rank_zero(func: Callable) -> Callable: - """ - Functions with this decorator are only accessible to processes with global rank zero. - """ - return _asserted_wrapper_factory( - get_global_rank() == 0, func, err_msg="Not accessible to processes with global_rank != 0" - ) - - -def assert_only_local_rank_zero(func: Callable) -> Callable: - """ - Functions with this decorator are only accessible to processes with local rank zero. - """ - return _asserted_wrapper_factory( - get_local_rank() == 0, func, err_msg="Not accessible to processes with local_rank != 0" - ) - - -def new_thread(func: Callable) -> Callable: - """ - Functions with this decorator will run in a new thread. - The function will return the thread, which can be joined to wait for completion. - """ - - def new_thread_wrapper(*args, **kwargs): - thread = threading.Thread(target=func, args=args, kwargs=kwargs) - thread.start() - return thread - - return new_thread_wrapper - - -def log_runtime(func: Callable) -> Callable: - """ - Functions with this decorator will logging the runtime. - """ - - @functools.wraps(func) - def wrapped(*args, **kwargs): - torch.distributed.barrier() - start = time.perf_counter() - result = func(*args, **kwargs) - torch.distributed.barrier() - logger.info(f"Completed {func.__name__} in {time.perf_counter() - start:.3f} seconds.") - return result - - return wrapped diff --git a/common_x/diffusion/__init__.py b/common_x/diffusion/__init__.py deleted file mode 100644 index 034e36ef7f9eb0b3ae94280165e622a362e9fc1e..0000000000000000000000000000000000000000 --- a/common_x/diffusion/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Diffusion package. 
-""" - -from .config import ( - create_sampler_from_config, - create_sampling_timesteps_from_config, - create_schedule_from_config, -) -from .samplers.base import Sampler -from .samplers.euler import EulerSampler -from .schedules.base import Schedule -from .schedules.lerp import LinearInterpolationSchedule -from .timesteps.base import SamplingTimesteps, Timesteps -from .timesteps.sampling.trailing import UniformTrailingSamplingTimesteps -from .types import PredictionType, SamplingDirection -from .utils import classifier_free_guidance, classifier_free_guidance_dispatcher, expand_dims - -__all__ = [ - # Configs - "create_sampler_from_config", - "create_sampling_timesteps_from_config", - "create_schedule_from_config", - # Schedules - "Schedule", - "DiscreteVariancePreservingSchedule", - "LinearInterpolationSchedule", - # Samplers - "Sampler", - "EulerSampler", - # Timesteps - "Timesteps", - "SamplingTimesteps", - # Types - "PredictionType", - "SamplingDirection", - "UniformTrailingSamplingTimesteps", - # Utils - "classifier_free_guidance", - "classifier_free_guidance_dispatcher", - "expand_dims", -] diff --git a/common_x/diffusion/config.py b/common_x/diffusion/config.py deleted file mode 100644 index f1d0468d88b5dd5f0d787c75ed3df06742d0a483..0000000000000000000000000000000000000000 --- a/common_x/diffusion/config.py +++ /dev/null @@ -1,74 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Utility functions for creating schedules and samplers from config. -""" - -import torch -from omegaconf import DictConfig - -from .samplers.base import Sampler -from .samplers.euler import EulerSampler -from .schedules.base import Schedule -from .schedules.lerp import LinearInterpolationSchedule -from .timesteps.base import SamplingTimesteps -from .timesteps.sampling.trailing import UniformTrailingSamplingTimesteps - - -def create_schedule_from_config( - config: DictConfig, - device: torch.device, - dtype: torch.dtype = torch.float32, -) -> Schedule: - """ - Create a schedule from configuration. - """ - if config.type == "lerp": - return LinearInterpolationSchedule(T=config.get("T", 1.0)) - - raise NotImplementedError - - -def create_sampler_from_config( - config: DictConfig, - schedule: Schedule, - timesteps: SamplingTimesteps, -) -> Sampler: - """ - Create a sampler from configuration. 
- """ - if config.type == "euler": - return EulerSampler( - schedule=schedule, - timesteps=timesteps, - prediction_type=config.prediction_type, - ) - raise NotImplementedError - - -def create_sampling_timesteps_from_config( - config: DictConfig, - schedule: Schedule, - device: torch.device, - dtype: torch.dtype = torch.float32, -) -> SamplingTimesteps: - if config.type == "uniform_trailing": - return UniformTrailingSamplingTimesteps( - T=schedule.T, - steps=config.steps, - shift=config.get("shift", 1.0), - device=device, - ) - raise NotImplementedError \ No newline at end of file diff --git a/common_x/diffusion/samplers/base.py b/common_x/diffusion/samplers/base.py deleted file mode 100644 index 8e65f19896b6d5844e769762e76d699b96abc733..0000000000000000000000000000000000000000 --- a/common_x/diffusion/samplers/base.py +++ /dev/null @@ -1,108 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Sampler base class. -""" - -from abc import ABC, abstractmethod -from dataclasses import dataclass -from typing import Callable -import torch -from tqdm import tqdm - -from ..schedules.base import Schedule -from ..timesteps.base import SamplingTimesteps -from ..types import PredictionType, SamplingDirection -from ..utils import assert_schedule_timesteps_compatible - - -@dataclass -class SamplerModelArgs: - x_t: torch.Tensor - t: torch.Tensor - i: int - - -class Sampler(ABC): - """ - Samplers are ODE/SDE solvers. - """ - - def __init__( - self, - schedule: Schedule, - timesteps: SamplingTimesteps, - prediction_type: PredictionType, - return_endpoint: bool = True, - ): - assert_schedule_timesteps_compatible( - schedule=schedule, - timesteps=timesteps, - ) - self.schedule = schedule - self.timesteps = timesteps - self.prediction_type = prediction_type - self.return_endpoint = return_endpoint - - @abstractmethod - def sample( - self, - x: torch.Tensor, - f: Callable[[SamplerModelArgs], torch.Tensor], - ) -> torch.Tensor: - """ - Generate a new sample given the the intial sample x and score function f. - """ - - def get_next_timestep( - self, - t: torch.Tensor, - ) -> torch.Tensor: - """ - Get the next sample timestep. - Support multiple different timesteps t in a batch. - If no more steps, return out of bound value -1 or T+1. - """ - T = self.timesteps.T - steps = len(self.timesteps) - curr_idx = self.timesteps.index(t) - next_idx = curr_idx + 1 - bound = -1 if self.timesteps.direction == SamplingDirection.backward else T + 1 - - s = self.timesteps[next_idx.clamp_max(steps - 1)] - s = s.where(next_idx < steps, bound) - return s - - def get_endpoint( - self, - pred: torch.Tensor, - x_t: torch.Tensor, - t: torch.Tensor, - ) -> torch.Tensor: - """ - Get to the endpoint of the probability flow. 
- """ - x_0, x_T = self.schedule.convert_from_pred(pred, self.prediction_type, x_t, t) - return x_0 if self.timesteps.direction == SamplingDirection.backward else x_T - - def get_progress_bar(self): - """ - Get progress bar for sampling. - """ - return tqdm( - iterable=range(len(self.timesteps) - (0 if self.return_endpoint else 1)), - dynamic_ncols=True, - desc=self.__class__.__name__, - ) diff --git a/common_x/diffusion/samplers/euler.py b/common_x/diffusion/samplers/euler.py deleted file mode 100644 index 5994979a43658b7ebb75316cefea737d1c54681b..0000000000000000000000000000000000000000 --- a/common_x/diffusion/samplers/euler.py +++ /dev/null @@ -1,89 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - - -""" -Euler ODE solver. -""" - -from typing import Callable -import torch -from einops import rearrange -from torch.nn import functional as F - -from models.dit_v2 import na - -from ..types import PredictionType -from ..utils import expand_dims -from .base import Sampler, SamplerModelArgs - - -class EulerSampler(Sampler): - """ - The Euler method is the simplest ODE solver. - - """ - - def sample( - self, - x: torch.Tensor, - f: Callable[[SamplerModelArgs], torch.Tensor], - ) -> torch.Tensor: - timesteps = self.timesteps.timesteps - progress = self.get_progress_bar() - i = 0 - for t, s in zip(timesteps[:-1], timesteps[1:]): - pred = f(SamplerModelArgs(x, t, i)) - x = self.step_to(pred, x, t, s) - i += 1 - progress.update() - - if self.return_endpoint: - t = timesteps[-1] - pred = f(SamplerModelArgs(x, t, i)) - x = self.get_endpoint(pred, x, t) - progress.update() - return x - - def step( - self, - pred: torch.Tensor, - x_t: torch.Tensor, - t: torch.Tensor, - ) -> torch.Tensor: - """ - Step to the next timestep. - """ - return self.step_to(pred, x_t, t, self.get_next_timestep(t)) - - def step_to( - self, - pred: torch.Tensor, - x_t: torch.Tensor, - t: torch.Tensor, - s: torch.Tensor, - ) -> torch.Tensor: - """ - Steps from x_t at timestep t to x_s at timestep s. Returns x_s. - """ - t = expand_dims(t, x_t.ndim) - s = expand_dims(s, x_t.ndim) - T = self.schedule.T - # Step from x_t to x_s. - pred_x_0, pred_x_T = self.schedule.convert_from_pred(pred, self.prediction_type, x_t, t) - pred_x_s = self.schedule.forward(pred_x_0, pred_x_T, s.clamp(0, T)) - # Clamp x_s to x_0 and x_T if s is out of bound. - pred_x_s = pred_x_s.where(s >= 0, pred_x_0) - pred_x_s = pred_x_s.where(s <= T, pred_x_T) - return pred_x_s diff --git a/common_x/diffusion/schedules/base.py b/common_x/diffusion/schedules/base.py deleted file mode 100644 index bcf6c6b6460977c6e2687e225c5c913a928bf812..0000000000000000000000000000000000000000 --- a/common_x/diffusion/schedules/base.py +++ /dev/null @@ -1,131 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Schedule base class. -""" - -from abc import ABC, abstractmethod, abstractproperty -from typing import Tuple, Union -import torch - -from ..types import PredictionType -from ..utils import expand_dims - - -class Schedule(ABC): - """ - Diffusion schedules are uniquely defined by T, A, B: - - x_t = A(t) * x_0 + B(t) * x_T, where t in [0, T] - - Schedules can be continuous or discrete. - """ - - @abstractproperty - def T(self) -> Union[int, float]: - """ - Maximum timestep inclusive. - Schedule is continuous if float, discrete if int. - """ - - @abstractmethod - def A(self, t: torch.Tensor) -> torch.Tensor: - """ - Interpolation coefficient A. - Returns tensor with the same shape as t. - """ - - @abstractmethod - def B(self, t: torch.Tensor) -> torch.Tensor: - """ - Interpolation coefficient B. - Returns tensor with the same shape as t. - """ - - # ---------------------------------------------------- - - def snr(self, t: torch.Tensor) -> torch.Tensor: - """ - Signal to noise ratio. - Returns tensor with the same shape as t. - """ - return (self.A(t) ** 2) / (self.B(t) ** 2) - - def isnr(self, snr: torch.Tensor) -> torch.Tensor: - """ - Inverse signal to noise ratio. - Returns tensor with the same shape as snr. - Subclass may implement. - """ - raise NotImplementedError - - # ---------------------------------------------------- - - def is_continuous(self) -> bool: - """ - Whether the schedule is continuous. - """ - return isinstance(self.T, float) - - def forward(self, x_0: torch.Tensor, x_T: torch.Tensor, t: torch.Tensor) -> torch.Tensor: - """ - Diffusion forward function. - """ - t = expand_dims(t, x_0.ndim) - return self.A(t) * x_0 + self.B(t) * x_T - - def convert_from_pred( - self, pred: torch.Tensor, pred_type: PredictionType, x_t: torch.Tensor, t: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Convert from prediction. Return predicted x_0 and x_T. - """ - t = expand_dims(t, x_t.ndim) - A_t = self.A(t) - B_t = self.B(t) - - if pred_type == PredictionType.x_T: - pred_x_T = pred - pred_x_0 = (x_t - B_t * pred_x_T) / A_t - elif pred_type == PredictionType.x_0: - pred_x_0 = pred - pred_x_T = (x_t - A_t * pred_x_0) / B_t - elif pred_type == PredictionType.v_cos: - pred_x_0 = A_t * x_t - B_t * pred - pred_x_T = A_t * pred + B_t * x_t - elif pred_type == PredictionType.v_lerp: - pred_x_0 = (x_t - B_t * pred) / (A_t + B_t) - pred_x_T = (x_t + A_t * pred) / (A_t + B_t) - else: - raise NotImplementedError - - return pred_x_0, pred_x_T - - def convert_to_pred( - self, x_0: torch.Tensor, x_T: torch.Tensor, t: torch.Tensor, pred_type: PredictionType - ) -> torch.FloatTensor: - """ - Convert to prediction target given x_0 and x_T. 
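A quick numerical check of the `convert_from_pred` algebra, using the linear-interpolation schedule defined just below and the `v_lerp` prediction type (import paths mirror the deleted files and assume the package imports resolve):

```python
import torch
from common_x.diffusion.schedules.lerp import LinearInterpolationSchedule
from common_x.diffusion.types import PredictionType

schedule = LinearInterpolationSchedule(T=1.0)    # A(t) = 1 - t, B(t) = t
x_0, x_T = torch.randn(2, 4), torch.randn(2, 4)  # data and noise samples
t = torch.full((2,), 0.3)

x_t = schedule.forward(x_0, x_T, t)                                  # x_t = A(t) x_0 + B(t) x_T
pred = schedule.convert_to_pred(x_0, x_T, t, PredictionType.v_lerp)  # x_T - x_0
rec_x_0, rec_x_T = schedule.convert_from_pred(pred, PredictionType.v_lerp, x_t, t)

assert torch.allclose(rec_x_0, x_0, atol=1e-5)
assert torch.allclose(rec_x_T, x_T, atol=1e-5)
```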
- """ - if pred_type == PredictionType.x_T: - return x_T - if pred_type == PredictionType.x_0: - return x_0 - if pred_type == PredictionType.v_cos: - t = expand_dims(t, x_0.ndim) - return self.A(t) * x_T - self.B(t) * x_0 - if pred_type == PredictionType.v_lerp: - return x_T - x_0 - raise NotImplementedError diff --git a/common_x/diffusion/schedules/lerp.py b/common_x/diffusion/schedules/lerp.py deleted file mode 100644 index 56b42bc17538b3217b2209234fc723ac3f58a746..0000000000000000000000000000000000000000 --- a/common_x/diffusion/schedules/lerp.py +++ /dev/null @@ -1,55 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Linear interpolation schedule (lerp). -""" - -from typing import Union -import torch - -from .base import Schedule - - -class LinearInterpolationSchedule(Schedule): - """ - Linear interpolation schedule (lerp) is proposed by flow matching and rectified flow. - It leads to straighter probability flow theoretically. It is also used by Stable Diffusion 3. - - - - x_t = (1 - t) * x_0 + t * x_T - - Can be either continuous or discrete. - """ - - def __init__(self, T: Union[int, float] = 1.0): - self._T = T - - @property - def T(self) -> Union[int, float]: - return self._T - - def A(self, t: torch.Tensor) -> torch.Tensor: - return 1 - (t / self.T) - - def B(self, t: torch.Tensor) -> torch.Tensor: - return t / self.T - - # ---------------------------------------------------- - - def isnr(self, snr: torch.Tensor) -> torch.Tensor: - t = self.T / (1 + snr**0.5) - t = t if self.is_continuous() else t.round().int() - return t diff --git a/common_x/diffusion/timesteps/base.py b/common_x/diffusion/timesteps/base.py deleted file mode 100644 index d1a598103547694d5ef4dc5db0be1e5be2deb60c..0000000000000000000000000000000000000000 --- a/common_x/diffusion/timesteps/base.py +++ /dev/null @@ -1,72 +0,0 @@ -from abc import ABC, abstractmethod -from typing import Sequence, Union -import torch - -from ..types import SamplingDirection - - -class Timesteps(ABC): - """ - Timesteps base class. - """ - - def __init__(self, T: Union[int, float]): - assert T > 0 - self._T = T - - @property - def T(self) -> Union[int, float]: - """ - Maximum timestep inclusive. - int if discrete, float if continuous. - """ - return self._T - - def is_continuous(self) -> bool: - """ - Whether the schedule is continuous. - """ - return isinstance(self.T, float) - - -class SamplingTimesteps(Timesteps): - """ - Sampling timesteps. - It defines the discretization of sampling steps. - """ - - def __init__( - self, - T: Union[int, float], - timesteps: torch.Tensor, - direction: SamplingDirection, - ): - assert timesteps.ndim == 1 - super().__init__(T) - self.timesteps = timesteps - self.direction = direction - - def __len__(self) -> int: - """ - Number of sampling steps. 
- """ - return len(self.timesteps) - - def __getitem__(self, idx: Union[int, torch.IntTensor]) -> torch.Tensor: - """ - The timestep at the sampling step. - Returns a scalar tensor if idx is int, - or tensor of the same size if idx is a tensor. - """ - return self.timesteps[idx] - - def index(self, t: torch.Tensor) -> torch.Tensor: - """ - Find index by t. - Return index of the same shape as t. - Index is -1 if t not found in timesteps. - """ - i, j = t.reshape(-1, 1).eq(self.timesteps).nonzero(as_tuple=True) - idx = torch.full_like(t, fill_value=-1, dtype=torch.int) - idx.view(-1)[i] = j.int() - return idx diff --git a/common_x/diffusion/timesteps/sampling/trailing.py b/common_x/diffusion/timesteps/sampling/trailing.py deleted file mode 100644 index 248d986aedaaff8f417c32a42e9d9e3a61012f58..0000000000000000000000000000000000000000 --- a/common_x/diffusion/timesteps/sampling/trailing.py +++ /dev/null @@ -1,49 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import torch - -from ...types import SamplingDirection -from ..base import SamplingTimesteps - - -class UniformTrailingSamplingTimesteps(SamplingTimesteps): - """ - Uniform trailing sampling timesteps. - Defined in (https://arxiv.org/abs/2305.08891) - - Shift is proposed in SD3 for RF schedule. - Defined in (https://arxiv.org/pdf/2403.03206) eq.23 - """ - - def __init__( - self, - T: int, - steps: int, - shift: float = 1.0, - device: torch.device = "cpu", - ): - # Create trailing timesteps. - timesteps = torch.arange(1.0, 0.0, -1.0 / steps, device=device) - - # Shift timesteps. - timesteps = shift * timesteps / (1 + (shift - 1) * timesteps) - - # Scale to T range. - if isinstance(T, float): - timesteps = timesteps * T - else: - timesteps = timesteps.mul(T + 1).sub(1).round().int() - - super().__init__(T=T, timesteps=timesteps, direction=SamplingDirection.backward) diff --git a/common_x/diffusion/types.py b/common_x/diffusion/types.py deleted file mode 100644 index 076295f2be24dadc79da20a5f335b391eb9543bb..0000000000000000000000000000000000000000 --- a/common_x/diffusion/types.py +++ /dev/null @@ -1,59 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Type definitions. -""" - -from enum import Enum - - -class PredictionType(str, Enum): - """ - x_0: - Predict data sample. - x_T: - Predict noise sample. 
- Proposed by DDPM (https://arxiv.org/abs/2006.11239) - Proved problematic by zsnr paper (https://arxiv.org/abs/2305.08891) - v_cos: - Predict velocity dx/dt based on the cosine schedule (A_t * x_T - B_t * x_0). - Proposed by progressive distillation (https://arxiv.org/abs/2202.00512) - v_lerp: - Predict velocity dx/dt based on the lerp schedule (x_T - x_0). - Proposed by rectified flow (https://arxiv.org/abs/2209.03003) - """ - - x_0 = "x_0" - x_T = "x_T" - v_cos = "v_cos" - v_lerp = "v_lerp" - - -class SamplingDirection(str, Enum): - """ - backward: Sample from x_T to x_0 for data generation. - forward: Sample from x_0 to x_T for noise inversion. - """ - - backward = "backward" - forward = "forward" - - @staticmethod - def reverse(direction): - if direction == SamplingDirection.backward: - return SamplingDirection.forward - if direction == SamplingDirection.forward: - return SamplingDirection.backward - raise NotImplementedError diff --git a/common_x/diffusion/utils.py b/common_x/diffusion/utils.py deleted file mode 100644 index 69d4aec34f59b293e2354744a4329008063a30e3..0000000000000000000000000000000000000000 --- a/common_x/diffusion/utils.py +++ /dev/null @@ -1,84 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Utility functions. -""" - -from typing import Callable -import torch - - -def expand_dims(tensor: torch.Tensor, ndim: int): - """ - Expand tensor to target ndim. New dims are added to the right. - For example, if the tensor shape was (8,), target ndim is 4, return (8, 1, 1, 1). - """ - shape = tensor.shape + (1,) * (ndim - tensor.ndim) - return tensor.reshape(shape) - - -def assert_schedule_timesteps_compatible(schedule, timesteps): - """ - Check if schedule and timesteps are compatible. - """ - if schedule.T != timesteps.T: - raise ValueError("Schedule and timesteps must have the same T.") - if schedule.is_continuous() != timesteps.is_continuous(): - raise ValueError("Schedule and timesteps must have the same continuity.") - - -def classifier_free_guidance( - pos: torch.Tensor, - neg: torch.Tensor, - scale: float, - rescale: float = 0.0, -): - """ - Apply classifier-free guidance. - """ - # Classifier-free guidance (https://arxiv.org/abs/2207.12598) - cfg = neg + scale * (pos - neg) - - # Classifier-free guidance rescale (https://arxiv.org/pdf/2305.08891.pdf) - if rescale != 0.0: - pos_std = pos.std(dim=list(range(1, pos.ndim)), keepdim=True) - cfg_std = cfg.std(dim=list(range(1, cfg.ndim)), keepdim=True) - factor = pos_std / cfg_std - factor = rescale * factor + (1 - rescale) - cfg *= factor - - return cfg - - -def classifier_free_guidance_dispatcher( - pos: Callable, - neg: Callable, - scale: float, - rescale: float = 0.0, -): - """ - Optionally execute models depending on classifer-free guidance scale. - """ - # If scale is 1, no need to execute neg model. - if scale == 1.0: - return pos() - - # Otherwise, execute both pos nad neg models and apply cfg. 
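To make the guidance utilities concrete, a small sketch of `classifier_free_guidance` and of the dispatcher's short-circuit when `scale == 1` (the import path mirrors the deleted file and assumes the package imports resolve):

```python
import torch
from common_x.diffusion.utils import (
    classifier_free_guidance,
    classifier_free_guidance_dispatcher,
)

pos = torch.randn(2, 4)   # conditional model output
neg = torch.randn(2, 4)   # unconditional model output

# cfg = neg + scale * (pos - neg); rescale > 0 pulls the result's std back toward pos's.
guided = classifier_free_guidance(pos, neg, scale=7.5, rescale=0.7)

def never_called():
    raise AssertionError("the negative branch is skipped when scale == 1")

# The dispatcher takes callables, so the unconditional pass is skipped entirely at scale 1.
assert torch.equal(
    classifier_free_guidance_dispatcher(pos=lambda: pos, neg=never_called, scale=1.0), pos
)
```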
- return classifier_free_guidance( - pos=pos(), - neg=neg(), - scale=scale, - rescale=rescale, - ) diff --git a/common_x/distributed/__init__.py b/common_x/distributed/__init__.py deleted file mode 100644 index a5b4f873ae3e5524c88942bb27ec98ac98c3b5b5..0000000000000000000000000000000000000000 --- a/common_x/distributed/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Distributed package. -""" - -from .basic import ( - barrier_if_distributed, - convert_to_ddp, - get_device, - get_global_rank, - get_local_rank, - get_world_size, - init_torch, -) - -__all__ = [ - "barrier_if_distributed", - "convert_to_ddp", - "get_device", - "get_global_rank", - "get_local_rank", - "get_world_size", - "init_torch", -] diff --git a/common_x/distributed/advanced.py b/common_x/distributed/advanced.py deleted file mode 100644 index f55fe20ab45494c96124b072d628273d49def1fa..0000000000000000000000000000000000000000 --- a/common_x/distributed/advanced.py +++ /dev/null @@ -1,208 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Advanced distributed functions for sequence parallel. -""" - -from typing import Optional, List -import torch -import torch.distributed as dist -from torch.distributed.device_mesh import DeviceMesh, init_device_mesh -from torch.distributed.fsdp import ShardingStrategy - -from .basic import get_global_rank, get_world_size - - -_DATA_PARALLEL_GROUP = None -_SEQUENCE_PARALLEL_GROUP = None -_SEQUENCE_PARALLEL_CPU_GROUP = None -_MODEL_SHARD_CPU_INTER_GROUP = None -_MODEL_SHARD_CPU_INTRA_GROUP = None -_MODEL_SHARD_INTER_GROUP = None -_MODEL_SHARD_INTRA_GROUP = None -_SEQUENCE_PARALLEL_GLOBAL_RANKS = None - - -def get_data_parallel_group() -> Optional[dist.ProcessGroup]: - """ - Get data parallel process group. - """ - return _DATA_PARALLEL_GROUP - - -def get_sequence_parallel_group() -> Optional[dist.ProcessGroup]: - """ - Get sequence parallel process group. - """ - return _SEQUENCE_PARALLEL_GROUP - - -def get_sequence_parallel_cpu_group() -> Optional[dist.ProcessGroup]: - """ - Get sequence parallel CPU process group. - """ - return _SEQUENCE_PARALLEL_CPU_GROUP - - -def get_data_parallel_rank() -> int: - """ - Get data parallel rank. 
- """ - group = get_data_parallel_group() - return dist.get_rank(group) if group else get_global_rank() - - -def get_data_parallel_world_size() -> int: - """ - Get data parallel world size. - """ - group = get_data_parallel_group() - return dist.get_world_size(group) if group else get_world_size() - - -def get_sequence_parallel_rank() -> int: - """ - Get sequence parallel rank. - """ - group = get_sequence_parallel_group() - return dist.get_rank(group) if group else 0 - - -def get_sequence_parallel_world_size() -> int: - """ - Get sequence parallel world size. - """ - group = get_sequence_parallel_group() - return dist.get_world_size(group) if group else 1 - - -def get_model_shard_cpu_intra_group() -> Optional[dist.ProcessGroup]: - """ - Get the CPU intra process group of model sharding. - """ - return _MODEL_SHARD_CPU_INTRA_GROUP - - -def get_model_shard_cpu_inter_group() -> Optional[dist.ProcessGroup]: - """ - Get the CPU inter process group of model sharding. - """ - return _MODEL_SHARD_CPU_INTER_GROUP - - -def get_model_shard_intra_group() -> Optional[dist.ProcessGroup]: - """ - Get the GPU intra process group of model sharding. - """ - return _MODEL_SHARD_INTRA_GROUP - - -def get_model_shard_inter_group() -> Optional[dist.ProcessGroup]: - """ - Get the GPU inter process group of model sharding. - """ - return _MODEL_SHARD_INTER_GROUP - - -def init_sequence_parallel(sequence_parallel_size: int): - """ - Initialize sequence parallel. - """ - global _DATA_PARALLEL_GROUP - global _SEQUENCE_PARALLEL_GROUP - global _SEQUENCE_PARALLEL_CPU_GROUP - global _SEQUENCE_PARALLEL_GLOBAL_RANKS - assert dist.is_initialized() - world_size = dist.get_world_size() - rank = dist.get_rank() - data_parallel_size = world_size // sequence_parallel_size - for i in range(data_parallel_size): - start_rank = i * sequence_parallel_size - end_rank = (i + 1) * sequence_parallel_size - ranks = range(start_rank, end_rank) - group = dist.new_group(ranks) - cpu_group = dist.new_group(ranks, backend="gloo") - if rank in ranks: - _SEQUENCE_PARALLEL_GROUP = group - _SEQUENCE_PARALLEL_CPU_GROUP = cpu_group - _SEQUENCE_PARALLEL_GLOBAL_RANKS = list(ranks) - - -def init_model_shard_group( - *, - sharding_strategy: ShardingStrategy, - device_mesh: Optional[DeviceMesh] = None, -): - """ - Initialize process group of model sharding. 
- """ - global _MODEL_SHARD_INTER_GROUP - global _MODEL_SHARD_INTRA_GROUP - global _MODEL_SHARD_CPU_INTER_GROUP - global _MODEL_SHARD_CPU_INTRA_GROUP - assert dist.is_initialized() - world_size = dist.get_world_size() - if device_mesh is not None: - num_shards_per_group = device_mesh.shape[1] - elif sharding_strategy == ShardingStrategy.NO_SHARD: - num_shards_per_group = 1 - elif sharding_strategy in [ - ShardingStrategy.HYBRID_SHARD, - ShardingStrategy._HYBRID_SHARD_ZERO2, - ]: - num_shards_per_group = torch.cuda.device_count() - else: - num_shards_per_group = world_size - num_groups = world_size // num_shards_per_group - device_mesh = (num_groups, num_shards_per_group) - - gpu_mesh_2d = init_device_mesh("cuda", device_mesh, mesh_dim_names=("inter", "intra")) - cpu_mesh_2d = init_device_mesh("cpu", device_mesh, mesh_dim_names=("inter", "intra")) - - _MODEL_SHARD_INTER_GROUP = gpu_mesh_2d.get_group("inter") - _MODEL_SHARD_INTRA_GROUP = gpu_mesh_2d.get_group("intra") - _MODEL_SHARD_CPU_INTER_GROUP = cpu_mesh_2d.get_group("inter") - _MODEL_SHARD_CPU_INTRA_GROUP = cpu_mesh_2d.get_group("intra") - -def get_sequence_parallel_global_ranks() -> List[int]: - """ - Get all global ranks of the sequence parallel process group - that the caller rank belongs to. - """ - if _SEQUENCE_PARALLEL_GLOBAL_RANKS is None: - return [dist.get_rank()] - return _SEQUENCE_PARALLEL_GLOBAL_RANKS - - -def get_next_sequence_parallel_rank() -> int: - """ - Get the next global rank of the sequence parallel process group - that the caller rank belongs to. - """ - sp_global_ranks = get_sequence_parallel_global_ranks() - sp_rank = get_sequence_parallel_rank() - sp_size = get_sequence_parallel_world_size() - return sp_global_ranks[(sp_rank + 1) % sp_size] - - -def get_prev_sequence_parallel_rank() -> int: - """ - Get the previous global rank of the sequence parallel process group - that the caller rank belongs to. - """ - sp_global_ranks = get_sequence_parallel_global_ranks() - sp_rank = get_sequence_parallel_rank() - sp_size = get_sequence_parallel_world_size() - return sp_global_ranks[(sp_rank + sp_size - 1) % sp_size] \ No newline at end of file diff --git a/common_x/distributed/basic.py b/common_x/distributed/basic.py deleted file mode 100644 index f829aec01eba2cc44d7274b6a0430155c6d42af6..0000000000000000000000000000000000000000 --- a/common_x/distributed/basic.py +++ /dev/null @@ -1,84 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Distributed basic functions. -""" - -import os -from datetime import timedelta -import torch -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel - - -def get_global_rank() -> int: - """ - Get the global rank, the global index of the GPU. - """ - return int(os.environ.get("RANK", "0")) - - -def get_local_rank() -> int: - """ - Get the local rank, the local index of the GPU. 
- """ - return int(os.environ.get("LOCAL_RANK", "0")) - - -def get_world_size() -> int: - """ - Get the world size, the total amount of GPUs. - """ - return int(os.environ.get("WORLD_SIZE", "1")) - - -def get_device() -> torch.device: - """ - Get current rank device. - """ - return torch.device("cuda", get_local_rank()) - - -def barrier_if_distributed(*args, **kwargs): - """ - Synchronizes all processes if under distributed context. - """ - if dist.is_initialized(): - return dist.barrier(*args, **kwargs) - - -def init_torch(cudnn_benchmark=True, timeout=timedelta(seconds=600)): - """ - Common PyTorch initialization configuration. - """ - torch.backends.cuda.matmul.allow_tf32 = True - torch.backends.cudnn.allow_tf32 = True - torch.backends.cudnn.benchmark = cudnn_benchmark - torch.cuda.set_device(get_local_rank()) - dist.init_process_group( - backend="nccl", - rank=get_global_rank(), - world_size=get_world_size(), - timeout=timeout, - ) - - -def convert_to_ddp(module: torch.nn.Module, **kwargs) -> DistributedDataParallel: - return DistributedDataParallel( - module=module, - device_ids=[get_local_rank()], - output_device=get_local_rank(), - **kwargs, - ) diff --git a/common_x/distributed/meta_init_utils.py b/common_x/distributed/meta_init_utils.py deleted file mode 100644 index 794cd0b8162de596064e494c0b8140a04b9c36a0..0000000000000000000000000000000000000000 --- a/common_x/distributed/meta_init_utils.py +++ /dev/null @@ -1,41 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import torch -from rotary_embedding_torch import RotaryEmbedding -from torch import nn -from torch.distributed.fsdp._common_utils import _is_fsdp_flattened - -__all__ = ["meta_non_persistent_buffer_init_fn"] - - -def meta_non_persistent_buffer_init_fn(module: nn.Module) -> nn.Module: - """ - Used for materializing `non-persistent tensor buffers` while model resuming. - - Since non-persistent tensor buffers are not saved in state_dict, - when initializing model with meta device, user should materialize those buffers manually. - - Currently, only `rope.dummy` is this special case. - """ - with torch.no_grad(): - for submodule in module.modules(): - if not isinstance(submodule, RotaryEmbedding): - continue - for buffer_name, buffer in submodule.named_buffers(recurse=False): - if buffer.is_meta and "dummy" in buffer_name: - materialized_buffer = torch.zeros_like(buffer, device="cpu") - setattr(submodule, buffer_name, materialized_buffer) - assert not any(b.is_meta for n, b in module.named_buffers()) - return module diff --git a/common_x/distributed/ops.py b/common_x/distributed/ops.py deleted file mode 100644 index 9b2ae02a6f77de3a8a31d217e0e1f2a6b359c3be..0000000000000000000000000000000000000000 --- a/common_x/distributed/ops.py +++ /dev/null @@ -1,494 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. 
and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Distributed ops for supporting sequence parallel. -""" - -from collections import defaultdict -from typing import Any, Callable, Dict, List, Optional, Tuple, Union -import torch -import torch.distributed as dist -from torch import Tensor - -from common.cache import Cache -from common.distributed.advanced import ( - get_sequence_parallel_group, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, -) - -from .basic import get_device - -_SEQ_DATA_BUF = defaultdict(lambda: [None, None, None]) -_SEQ_DATA_META_SHAPES = defaultdict() -_SEQ_DATA_META_DTYPES = defaultdict() -_SEQ_DATA_ASYNC_COMMS = defaultdict(list) -_SYNC_BUFFER = defaultdict(dict) - - -def single_all_to_all( - local_input: Tensor, - scatter_dim: int, - gather_dim: int, - group: dist.ProcessGroup, - async_op: bool = False, -): - """ - A function to do all-to-all on a tensor - """ - seq_world_size = dist.get_world_size(group) - prev_scatter_dim = scatter_dim - if scatter_dim != 0: - local_input = local_input.transpose(0, scatter_dim) - if gather_dim == 0: - gather_dim = scatter_dim - scatter_dim = 0 - - inp_shape = list(local_input.shape) - inp_shape[scatter_dim] = inp_shape[scatter_dim] // seq_world_size - input_t = local_input.reshape( - [seq_world_size, inp_shape[scatter_dim]] + inp_shape[scatter_dim + 1 :] - ).contiguous() - output = torch.empty_like(input_t) - comm = dist.all_to_all_single(output, input_t, group=group, async_op=async_op) - if async_op: - # let user's code transpose & reshape - return output, comm, prev_scatter_dim - - # first dim is seq_world_size, so we can split it directly - output = torch.cat(output.split(1), dim=gather_dim + 1).squeeze(0) - if prev_scatter_dim: - output = output.transpose(0, prev_scatter_dim).contiguous() - return output - - -def _all_to_all( - local_input: Tensor, - scatter_dim: int, - gather_dim: int, - group: dist.ProcessGroup, -): - seq_world_size = dist.get_world_size(group) - input_list = [ - t.contiguous() for t in torch.tensor_split(local_input, seq_world_size, scatter_dim) - ] - output_list = [torch.empty_like(input_list[0]) for _ in range(seq_world_size)] - dist.all_to_all(output_list, input_list, group=group) - return torch.cat(output_list, dim=gather_dim).contiguous() - - -class SeqAllToAll(torch.autograd.Function): - @staticmethod - def forward( - ctx: Any, - group: dist.ProcessGroup, - local_input: Tensor, - scatter_dim: int, - gather_dim: int, - async_op: bool, - ) -> Tensor: - ctx.group = group - ctx.scatter_dim = scatter_dim - ctx.gather_dim = gather_dim - ctx.async_op = async_op - if async_op: - output, comm, prev_scatter_dim = single_all_to_all( - local_input, scatter_dim, gather_dim, group, async_op=async_op - ) - ctx.prev_scatter_dim = prev_scatter_dim - return output, comm - - return _all_to_all(local_input, scatter_dim, gather_dim, group) - - @staticmethod - def backward(ctx: Any, *grad_output: Tensor) -> Tuple[None, Tensor, None, None]: - if 
ctx.async_op: - input_t = torch.cat(grad_output[0].split(1), dim=ctx.gather_dim + 1).squeeze(0) - if ctx.prev_scatter_dim: - input_t = input_t.transpose(0, ctx.prev_scatter_dim) - else: - input_t = grad_output[0] - return ( - None, - _all_to_all(input_t, ctx.gather_dim, ctx.scatter_dim, ctx.group), - None, - None, - None, - ) - - -class Slice(torch.autograd.Function): - @staticmethod - def forward(ctx: Any, group: dist.ProcessGroup, local_input: Tensor, dim: int) -> Tensor: - ctx.group = group - ctx.rank = dist.get_rank(group) - seq_world_size = dist.get_world_size(group) - ctx.seq_world_size = seq_world_size - ctx.dim = dim - dim_size = local_input.shape[dim] - return local_input.split(dim_size // seq_world_size, dim=dim)[ctx.rank].contiguous() - - @staticmethod - def backward(ctx: Any, grad_output: Tensor) -> Tuple[None, Tensor, None]: - dim_size = list(grad_output.size()) - split_size = dim_size[0] - dim_size[0] = dim_size[0] * ctx.seq_world_size - output = torch.empty(dim_size, dtype=grad_output.dtype, device=torch.cuda.current_device()) - dist._all_gather_base(output, grad_output, group=ctx.group) - return (None, torch.cat(output.split(split_size), dim=ctx.dim), None) - - -class Gather(torch.autograd.Function): - @staticmethod - def forward( - ctx: Any, - group: dist.ProcessGroup, - local_input: Tensor, - dim: int, - grad_scale: Optional[bool] = False, - ) -> Tensor: - ctx.group = group - ctx.rank = dist.get_rank(group) - ctx.dim = dim - ctx.grad_scale = grad_scale - seq_world_size = dist.get_world_size(group) - ctx.seq_world_size = seq_world_size - dim_size = list(local_input.size()) - split_size = dim_size[0] - ctx.part_size = dim_size[dim] - dim_size[0] = dim_size[0] * seq_world_size - output = torch.empty(dim_size, dtype=local_input.dtype, device=torch.cuda.current_device()) - dist._all_gather_base(output, local_input.contiguous(), group=ctx.group) - return torch.cat(output.split(split_size), dim=dim) - - @staticmethod - def backward(ctx: Any, grad_output: Tensor) -> Tuple[None, Tensor]: - if ctx.grad_scale: - grad_output = grad_output * ctx.seq_world_size - return ( - None, - grad_output.split(ctx.part_size, dim=ctx.dim)[ctx.rank].contiguous(), - None, - None, - ) - - -def gather_seq_scatter_heads_qkv( - qkv_tensor: Tensor, - *, - seq_dim: int, - qkv_shape: Optional[Tensor] = None, - cache: Cache = Cache(disable=True), - restore_shape: bool = True, -): - """ - A func to sync splited qkv tensor - qkv_tensor: the tensor we want to do alltoall with. The last dim must - be the projection_idx, which we will split into 3 part. 
After - spliting, the gather idx will be projecttion_idx + 1 - seq_dim: gather_dim for all2all comm - restore_shape: if True, output will has the same shape length as input - """ - group = get_sequence_parallel_group() - if not group: - return qkv_tensor - world = get_sequence_parallel_world_size() - orig_shape = qkv_tensor.shape - scatter_dim = qkv_tensor.dim() - bef_all2all_shape = list(orig_shape) - qkv_proj_dim = bef_all2all_shape[-1] - bef_all2all_shape = bef_all2all_shape[:-1] + [3, qkv_proj_dim // 3] - qkv_tensor = qkv_tensor.view(bef_all2all_shape) - qkv_tensor = SeqAllToAll.apply(group, qkv_tensor, scatter_dim, seq_dim, False) - if restore_shape: - out_shape = list(orig_shape) - out_shape[seq_dim] *= world - out_shape[-1] = qkv_proj_dim // world - qkv_tensor = qkv_tensor.view(out_shape) - - # remove padding - if qkv_shape is not None: - unpad_dim_size = cache( - "unpad_dim_size", lambda: torch.sum(torch.prod(qkv_shape, dim=-1)).item() - ) - if unpad_dim_size % world != 0: - padding_size = qkv_tensor.size(seq_dim) - unpad_dim_size - qkv_tensor = _unpad_tensor(qkv_tensor, seq_dim, padding_size) - return qkv_tensor - - -def slice_inputs(x: Tensor, dim: int, padding: bool = True): - """ - A func to slice the input sequence in sequence parallel - """ - group = get_sequence_parallel_group() - if group is None: - return x - sp_rank = get_sequence_parallel_rank() - sp_world = get_sequence_parallel_world_size() - dim_size = x.shape[dim] - unit = (dim_size + sp_world - 1) // sp_world - if padding and dim_size % sp_world: - padding_size = sp_world - (dim_size % sp_world) - x = _pad_tensor(x, dim, padding_size) - slc = [slice(None)] * len(x.shape) - slc[dim] = slice(unit * sp_rank, unit * (sp_rank + 1)) - return x[slc] - - -def remove_seqeunce_parallel_padding(x: Tensor, dim: int, unpad_dim_size: int): - """ - A func to remove the padding part of the tensor based on its original shape - """ - group = get_sequence_parallel_group() - if group is None: - return x - sp_world = get_sequence_parallel_world_size() - if unpad_dim_size % sp_world == 0: - return x - padding_size = sp_world - (unpad_dim_size % sp_world) - assert (padding_size + unpad_dim_size) % sp_world == 0 - return _unpad_tensor(x, dim=dim, padding_size=padding_size) - - -def gather_heads_scatter_seq(x: Tensor, head_dim: int, seq_dim: int) -> Tensor: - """ - A func to sync attention result with alltoall in sequence parallel - """ - group = get_sequence_parallel_group() - if not group: - return x - dim_size = x.size(seq_dim) - sp_world = get_sequence_parallel_world_size() - if dim_size % sp_world != 0: - padding_size = sp_world - (dim_size % sp_world) - x = _pad_tensor(x, seq_dim, padding_size) - return SeqAllToAll.apply(group, x, seq_dim, head_dim, False) - - -def gather_seq_scatter_heads(x: Tensor, seq_dim: int, head_dim: int) -> Tensor: - """ - A func to sync embedding input with alltoall in sequence parallel - """ - group = get_sequence_parallel_group() - if not group: - return x - return SeqAllToAll.apply(group, x, head_dim, seq_dim, False) - - -def scatter_heads(x: Tensor, dim: int) -> Tensor: - """ - A func to split heads before attention in sequence parallel - """ - group = get_sequence_parallel_group() - if not group: - return x - return Slice.apply(group, x, dim) - - -def gather_heads(x: Tensor, dim: int, grad_scale: Optional[bool] = False) -> Tensor: - """ - A func to gather heads for the attention result in sequence parallel - """ - group = get_sequence_parallel_group() - if not group: - return x - return 
Gather.apply(group, x, dim, grad_scale) - - -def gather_outputs( - x: Tensor, - *, - gather_dim: int, - padding_dim: Optional[int] = None, - unpad_shape: Optional[Tensor] = None, - cache: Cache = Cache(disable=True), - scale_grad=True, -): - """ - A func to gather the outputs for the model result in sequence parallel - """ - group = get_sequence_parallel_group() - if not group: - return x - x = Gather.apply(group, x, gather_dim, scale_grad) - if padding_dim is not None: - unpad_dim_size = cache( - "unpad_dim_size", lambda: torch.sum(torch.prod(unpad_shape, dim=1)).item() - ) - x = remove_seqeunce_parallel_padding(x, padding_dim, unpad_dim_size) - return x - - -def _pad_tensor(x: Tensor, dim: int, padding_size: int): - shape = list(x.shape) - shape[dim] = padding_size - pad = torch.zeros(shape, dtype=x.dtype, device=x.device) - return torch.cat([x, pad], dim=dim) - - -def _unpad_tensor(x: Tensor, dim: int, padding_size): - slc = [slice(None)] * len(x.shape) - slc[dim] = slice(0, -padding_size) - return x[slc] - - -def _broadcast_data(data, shape, dtype, src, group, async_op): - comms = [] - if isinstance(data, (list, tuple)): - for i, sub_shape in enumerate(shape): - comms += _broadcast_data(data[i], sub_shape, dtype[i], src, group, async_op) - elif isinstance(data, dict): - for key, sub_data in data.items(): - comms += _broadcast_data(sub_data, shape[key], dtype[key], src, group, async_op) - elif isinstance(data, Tensor): - comms.append(dist.broadcast(data, src=src, group=group, async_op=async_op)) - return comms - - -def _traverse(data: Any, op: Callable) -> Union[None, List, Dict, Any]: - if isinstance(data, (list, tuple)): - return [_traverse(sub_data, op) for sub_data in data] - elif isinstance(data, dict): - return {key: _traverse(sub_data, op) for key, sub_data in data.items()} - elif isinstance(data, Tensor): - return op(data) - else: - return None - - -def _get_shapes(data): - return _traverse(data, op=lambda x: x.shape) - - -def _get_dtypes(data): - return _traverse(data, op=lambda x: x.dtype) - - -def _construct_broadcast_buffer(shapes, dtypes, device): - if isinstance(shapes, torch.Size): - return torch.empty(shapes, dtype=dtypes, device=device) - - if isinstance(shapes, (list, tuple)): - buffer = [] - for i, sub_shape in enumerate(shapes): - buffer.append(_construct_broadcast_buffer(sub_shape, dtypes[i], device)) - elif isinstance(shapes, dict): - buffer = {} - for key, sub_shape in shapes.items(): - buffer[key] = _construct_broadcast_buffer(sub_shape, dtypes[key], device) - else: - return None - return buffer - - -class SPDistForward: - """A forward tool to sync different result across sp group - - Args: - module: a function or module to process users input - sp_step: current training step to judge which rank to broadcast its result to all - name: a distinct str to save meta and async comm - comm_shape: if different ranks have different shape, mark this arg to True - device: the device for current rank, can be empty - """ - - def __init__( - self, - name: str, - comm_shape: bool, - device: torch.device = None, - ): - self.name = name - self.comm_shape = comm_shape - if device: - self.device = device - else: - self.device = get_device() - - def __call__(self, inputs) -> Any: - group = get_sequence_parallel_group() - if not group: - yield inputs - else: - device = self.device - sp_world = get_sequence_parallel_world_size() - sp_rank = get_sequence_parallel_rank() - for local_step in range(sp_world): - src_rank = dist.get_global_rank(group, local_step) - is_src = sp_rank == 
local_step - local_shapes = [] - local_dtypes = [] - if local_step == 0: - local_result = inputs - _SEQ_DATA_BUF[self.name][-1] = local_result - local_shapes = _get_shapes(local_result) - local_dtypes = _get_dtypes(local_result) - if self.comm_shape: - group_shapes_lists = [None] * sp_world - dist.all_gather_object(group_shapes_lists, local_shapes, group=group) - _SEQ_DATA_META_SHAPES[self.name] = group_shapes_lists - else: - _SEQ_DATA_META_SHAPES[self.name] = [local_shapes] * sp_world - _SEQ_DATA_META_DTYPES[self.name] = local_dtypes - shapes = _SEQ_DATA_META_SHAPES[self.name][local_step] - dtypes = _SEQ_DATA_META_DTYPES[self.name] - buf_id = local_step % 2 - if local_step == 0: - sync_data = ( - local_result - if is_src - else _construct_broadcast_buffer(shapes, dtypes, device) - ) - _broadcast_data(sync_data, shapes, dtypes, src_rank, group, False) - _SEQ_DATA_BUF[self.name][buf_id] = sync_data - - # wait for async comm ops - if _SEQ_DATA_ASYNC_COMMS[self.name]: - for comm in _SEQ_DATA_ASYNC_COMMS[self.name]: - comm.wait() - # before return the sync result, do async broadcast for next batch - if local_step < sp_world - 1: - next_buf_id = 1 - buf_id - shapes = _SEQ_DATA_META_SHAPES[self.name][local_step + 1] - src_rank = dist.get_global_rank(group, local_step + 1) - is_src = sp_rank == local_step + 1 - next_sync_data = ( - _SEQ_DATA_BUF[self.name][-1] - if is_src - else _construct_broadcast_buffer(shapes, dtypes, device) - ) - _SEQ_DATA_ASYNC_COMMS[self.name] = _broadcast_data( - next_sync_data, shapes, dtypes, src_rank, group, True - ) - _SEQ_DATA_BUF[self.name][next_buf_id] = next_sync_data - yield _SEQ_DATA_BUF[self.name][buf_id] - - -sync_inputs = SPDistForward(name="bef_fwd", comm_shape=True) - - -def sync_data(data, sp_idx, name="tmp"): - group = get_sequence_parallel_group() - if group is None: - return data - # if sp_idx in _SYNC_BUFFER[name]: - # return _SYNC_BUFFER[name][sp_idx] - sp_rank = get_sequence_parallel_rank() - src_rank = dist.get_global_rank(group, sp_idx) - objects = [data] if sp_rank == sp_idx else [None] - dist.broadcast_object_list(objects, src=src_rank, group=group) - # _SYNC_BUFFER[name] = {sp_idx: objects[0]} - return objects[0] diff --git a/common_x/logger.py b/common_x/logger.py deleted file mode 100644 index faf795f0aecb2b16471c99802f2240880d701830..0000000000000000000000000000000000000000 --- a/common_x/logger.py +++ /dev/null @@ -1,44 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Logging utility functions. 
-""" - -import logging -import sys -from typing import Optional - -from common.distributed import get_global_rank, get_local_rank, get_world_size - -_default_handler = logging.StreamHandler(sys.stdout) -_default_handler.setFormatter( - logging.Formatter( - "%(asctime)s " - + (f"[Rank:{get_global_rank()}]" if get_world_size() > 1 else "") - + (f"[LocalRank:{get_local_rank()}]" if get_world_size() > 1 else "") - + "[%(threadName).12s][%(name)s][%(levelname).5s] " - + "%(message)s" - ) -) - - -def get_logger(name: Optional[str] = None) -> logging.Logger: - """ - Get a logger. - """ - logger = logging.getLogger(name) - logger.addHandler(_default_handler) - logger.setLevel(logging.INFO) - return logger diff --git a/common_x/partition.py b/common_x/partition.py deleted file mode 100644 index 648c87fe2a61294c09704b9af3e47f5a8570c215..0000000000000000000000000000000000000000 --- a/common_x/partition.py +++ /dev/null @@ -1,59 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -""" -Partition utility functions. -""" - -from typing import Any, List - - -def partition_by_size(data: List[Any], size: int) -> List[List[Any]]: - """ - Partition a list by size. - When indivisible, the last group contains fewer items than the target size. - - Examples: - - data: [1,2,3,4,5] - - size: 2 - - return: [[1,2], [3,4], [5]] - """ - assert size > 0 - return [data[i : (i + size)] for i in range(0, len(data), size)] - - -def partition_by_groups(data: List[Any], groups: int) -> List[List[Any]]: - """ - Partition a list by groups. - When indivisible, some groups may have more items than others. - - Examples: - - data: [1,2,3,4,5] - - groups: 2 - - return: [[1,3,5], [2,4]] - """ - assert groups > 0 - return [data[i::groups] for i in range(groups)] - - -def shift_list(data: List[Any], n: int) -> List[Any]: - """ - Rotate a list by n elements. - - Examples: - - data: [1,2,3,4,5] - - n: 3 - - return: [4,5,1,2,3] - """ - return data[(n % len(data)) :] + data[: (n % len(data))] diff --git a/common_x/seed.py b/common_x/seed.py deleted file mode 100644 index 52866de72fcf98f4a2ceff51a55986780a8b701a..0000000000000000000000000000000000000000 --- a/common_x/seed.py +++ /dev/null @@ -1,30 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
- -import random -from typing import Optional -import numpy as np -import torch - -from common.distributed import get_global_rank - - -def set_seed(seed: Optional[int], same_across_ranks: bool = False): - """Function that sets the seed for pseudo-random number generators.""" - if seed is not None: - seed += get_global_rank() if not same_across_ranks else 0 - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - diff --git a/common_x/utils.py b/common_x/utils.py deleted file mode 100644 index f2090852bf8371aa758c2c443ba3fc112055d67f..0000000000000000000000000000000000000000 --- a/common_x/utils.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -import re - -import cv2 -import numpy as np -import torch -from torchvision.utils import make_grid - - -# from basicsr -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. - - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. - """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - return _totensor(imgs, bgr2rgb, float32) - - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. - - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. 
- """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1: - result = result[0] - return result - - -def resize_numpy_image_area(image, area=512 * 512): - h, w = image.shape[:2] - k = math.sqrt(area / (h * w)) - h = int(h * k) - (int(h * k) % 16) - w = int(w * k) - (int(w * k) % 16) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - return image - -def resize_numpy_image_long(image, long_edge=768): - h, w = image.shape[:2] - if max(h, w) <= long_edge: - return image - k = long_edge / max(h, w) - h = int(h * k) - w = int(w * k) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - return image - - -# reference: https://github.com/huggingface/diffusers/pull/9295/files -def convert_flux_lora_to_diffusers(old_state_dict): - new_state_dict = {} - orig_keys = list(old_state_dict.keys()) - - def handle_qkv(sds_sd, ait_sd, sds_key, ait_keys, dims=None): - down_weight = sds_sd.pop(sds_key) - up_weight = sds_sd.pop(sds_key.replace(".down.weight", ".up.weight")) - - # calculate dims if not provided - num_splits = len(ait_keys) - if dims is None: - dims = [up_weight.shape[0] // num_splits] * num_splits - else: - assert sum(dims) == up_weight.shape[0] - - # make ai-toolkit weight - ait_down_keys = [k + ".lora_A.weight" for k in ait_keys] - ait_up_keys = [k + ".lora_B.weight" for k in ait_keys] - - # down_weight is copied to each split - ait_sd.update({k: down_weight for k in ait_down_keys}) - - # up_weight is split to each split - ait_sd.update({k: v for k, v in zip(ait_up_keys, torch.split(up_weight, dims, dim=0))}) # noqa: C416 - - for old_key in orig_keys: - # Handle double_blocks - if 'double_blocks' in old_key: - block_num = re.search(r"double_blocks_(\d+)", old_key).group(1) - new_key = f"transformer.transformer_blocks.{block_num}" - - if "proj_lora1" in old_key: - new_key += ".attn.to_out.0" - elif "proj_lora2" in old_key: - new_key += ".attn.to_add_out" - elif "qkv_lora2" in old_key and "up" not in old_key: - handle_qkv( - old_state_dict, - new_state_dict, - old_key, - [ - f"transformer.transformer_blocks.{block_num}.attn.add_q_proj", - f"transformer.transformer_blocks.{block_num}.attn.add_k_proj", - f"transformer.transformer_blocks.{block_num}.attn.add_v_proj", - ], - ) - # continue - elif "qkv_lora1" in old_key and "up" not in old_key: - handle_qkv( - old_state_dict, - new_state_dict, - old_key, - [ - 
f"transformer.transformer_blocks.{block_num}.attn.to_q", - f"transformer.transformer_blocks.{block_num}.attn.to_k", - f"transformer.transformer_blocks.{block_num}.attn.to_v", - ], - ) - # continue - - if "down" in old_key: - new_key += ".lora_A.weight" - elif "up" in old_key: - new_key += ".lora_B.weight" - - # Handle single_blocks - elif 'single_blocks' in old_key: - block_num = re.search(r"single_blocks_(\d+)", old_key).group(1) - new_key = f"transformer.single_transformer_blocks.{block_num}" - - if "proj_lora" in old_key: - new_key += ".proj_out" - elif "qkv_lora" in old_key and "up" not in old_key: - handle_qkv( - old_state_dict, - new_state_dict, - old_key, - [ - f"transformer.single_transformer_blocks.{block_num}.attn.to_q", - f"transformer.single_transformer_blocks.{block_num}.attn.to_k", - f"transformer.single_transformer_blocks.{block_num}.attn.to_v", - ], - ) - - if "down" in old_key: - new_key += ".lora_A.weight" - elif "up" in old_key: - new_key += ".lora_B.weight" - - else: - # Handle other potential key patterns here - new_key = old_key - - # Since we already handle qkv above. - if "qkv" not in old_key and 'embedding' not in old_key: - new_state_dict[new_key] = old_state_dict.pop(old_key) - - # if len(old_state_dict) > 0: - # raise ValueError(f"`old_state_dict` should be at this point but has: {list(old_state_dict.keys())}.") - - return new_state_dict diff --git a/configs_3b_x/LICENSE.txt b/configs_3b_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/configs_3b_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). 
- - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/configs_3b_x/README.md b/configs_3b_x/README.md deleted file mode 100644 index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000 --- a/configs_3b_x/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. 
**Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). - * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. 
- * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. -* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. 
-* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - - -### https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... \ No newline at end of file diff --git a/configs_3b_x/main.yaml b/configs_3b_x/main.yaml deleted file mode 100644 index 78579065f27852990354bd565b5375a679a76035..0000000000000000000000000000000000000000 --- a/configs_3b_x/main.yaml +++ /dev/null @@ -1,88 +0,0 @@ -__object__: - path: projects.video_diffusion_sr.train - name: VideoDiffusionTrainer - -dit: - model: - __object__: - path: models.dit_v2.nadit - name: NaDiT - args: as_params - vid_in_channels: 33 - vid_out_channels: 16 - vid_dim: 2560 - vid_out_norm: fusedrms - txt_in_dim: 5120 - txt_in_norm: fusedln - txt_dim: ${.vid_dim} - emb_dim: ${eval:'6 * ${.vid_dim}'} - heads: 20 - head_dim: 128 # llm-like - expand_ratio: 4 - norm: fusedrms - norm_eps: 1.0e-05 - ada: single - qk_bias: False - qk_norm: fusedrms - patch_size: [ 1,2,2 ] - num_layers: 32 # llm-like - mm_layers: 10 - mlp_type: swiglu - msa_type: None - block_type: ${eval:'${.num_layers} * ["mmdit_sr"]'} # space-full - window: ${eval:'${.num_layers} * [(4,3,3)]'} # space-full - window_method: ${eval:'${.num_layers} // 2 * ["720pwin_by_size_bysize","720pswin_by_size_bysize"]'} # space-full - rope_type: mmrope3d - rope_dim: 128 - compile: False - gradient_checkpoint: True - fsdp: - sharding_strategy: _HYBRID_SHARD_ZERO2 - -ema: - decay: 0.9998 - -vae: - model: - __inherit__: models/video_vae_v3/s8_c16_t4_inflation_sd3.yaml - freeze_encoder: False - # gradient_checkpoint: True - slicing: - split_size: 4 - memory_device: same - memory_limit: - conv_max_mem: 0.5 - norm_max_mem: 0.5 - checkpoint: ./ckpts/ema_vae.pth - scaling_factor: 0.9152 - compile: False - grouping: False - dtype: bfloat16 - -diffusion: - schedule: - type: lerp - T: 1000.0 - sampler: - type: euler - prediction_type: v_lerp - timesteps: - training: - type: logitnormal - loc: 0.0 - scale: 1.0 - sampling: - type: uniform_trailing - steps: 50 - transform: True - loss: - type: v_lerp - cfg: - scale: 7.5 - rescale: 0 - -condition: - i2v: 0.0 - v2v: 0.0 - sr: 1.0 - noise_scale: 0.25 diff --git a/configs_7b_x/LICENSE.txt b/configs_7b_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/configs_7b_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. 
- - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
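
The `configs_3b_x/main.yaml` removed above leans on two conventions: `__object__` blocks (a dotted module `path` plus a class `name`, with the sibling keys acting as parameters when `args: as_params`) and `${eval:'...'}` interpolations for derived values such as `emb_dim` and `block_type`. The config loader itself is not part of this diff, so the snippet below is only a minimal sketch, assuming an OmegaConf-style loader with a custom `eval` resolver and a hypothetical `instantiate` helper; the project's real resolver and instantiation logic may differ.

```python
import importlib

from omegaconf import OmegaConf

# Assumed resolver for the ${eval:'...'} entries (e.g. emb_dim, block_type).
# Plain eval() is for illustration only; the real loader may restrict or sandbox this.
OmegaConf.register_new_resolver("eval", lambda expr: eval(expr))


def instantiate(node):
    """Hypothetical helper: build the object described by an `__object__` block."""
    spec = node["__object__"]
    cls = getattr(importlib.import_module(spec["path"]), spec["name"])
    if spec.get("args") == "as_params":
        # Sibling keys of `__object__` are passed as constructor keyword arguments.
        kwargs = {k: v for k, v in node.items() if k != "__object__"}
        return cls(**kwargs)
    return cls(node)


cfg = OmegaConf.load("configs_3b_x/main.yaml")
# Resolving turns e.g. emb_dim=${eval:'6 * ${.vid_dim}'} into 15360 for vid_dim=2560.
print(OmegaConf.to_container(cfg.dit.model, resolve=True))
```

The `__inherit__` key under `vae.model` presumably merges a base YAML before these overrides are applied; it is left out of the sketch because its semantics are not visible in this diff.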
diff --git a/configs_7b_x/README.md b/configs_7b_x/README.md deleted file mode 100644 index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000 --- a/configs_7b_x/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. **Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). 
- * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. 
-* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. -* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - - -### https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... 
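The helper API summarized in the README above lends itself to a short usage illustration. The sketch below is hypothetical: only the singleton names, method names, and parameter names come from the documentation; import paths, file names, and concrete values are assumptions.

```python
# Hypothetical usage sketch based only on the signatures documented in the
# README above. Import paths, file names, and parameter values are assumptions;
# singleton names, method names, and parameter names come from the text.
from ltx_manager_helpers import ltx_manager_singleton
from mmaudio_helper import mmaudio_agent_singleton

# 1) Generate a latent video fragment (generate_latent_fragment accepts **kwargs).
latents, padding = ltx_manager_singleton.generate_latent_fragment(
    height=512,
    width=768,
    video_total_frames=121,
    video_fps=24,
    motion_prompt="slow dolly-in towards the lighthouse",
    conditioning_items_data=[],   # would normally hold LatentConditioningItem objects
    guidance_scale=3.0,
    num_inference_steps=30,
)
# `latents` would normally be handed to the decode/upscale stage of the pipeline.

# 2) Once the fragment has been decoded and written to disk elsewhere in the
#    pipeline, attach an audio track to the silent video.
video_with_audio = mmaudio_agent_singleton.generate_audio_for_video(
    video_path="fragment_0001.mp4",          # assumed path to the silent fragment
    prompt="waves crashing, distant gulls",
    negative_prompt="speech, music",
    duration_seconds=5.0,
)
print(video_with_audio)
```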
\ No newline at end of file diff --git a/configs_7b_x/main.yaml b/configs_7b_x/main.yaml deleted file mode 100644 index 51c5eaf880788ff941bcce84b2548e3f21646339..0000000000000000000000000000000000000000 --- a/configs_7b_x/main.yaml +++ /dev/null @@ -1,85 +0,0 @@ -__object__: - path: projects.video_diffusion_sr.train - name: VideoDiffusionTrainer - -dit: - model: - __object__: - path: models.dit.nadit - name: NaDiT - args: as_params - vid_in_channels: 33 - vid_out_channels: 16 - vid_dim: 3072 - txt_in_dim: 5120 - txt_dim: ${.vid_dim} - emb_dim: ${eval:'6 * ${.vid_dim}'} - heads: 24 - head_dim: 128 # llm-like - expand_ratio: 4 - norm: fusedrms - norm_eps: 1e-5 - ada: single - qk_bias: False - qk_rope: True - qk_norm: fusedrms - patch_size: [ 1,2,2 ] - num_layers: 36 # llm-like - shared_mlp: False - shared_qkv: False - mlp_type: normal - block_type: ${eval:'${.num_layers} * ["mmdit_sr"]'} # space-full - window: ${eval:'${.num_layers} * [(4,3,3)]'} # space-full - window_method: ${eval:'${.num_layers} // 2 * ["720pwin_by_size_bysize","720pswin_by_size_bysize"]'} # space-full - compile: False - gradient_checkpoint: True - fsdp: - sharding_strategy: _HYBRID_SHARD_ZERO2 - -ema: - decay: 0.9998 - -vae: - model: - __inherit__: models/video_vae_v3/s8_c16_t4_inflation_sd3.yaml - freeze_encoder: False - # gradient_checkpoint: True - slicing: - split_size: 4 - memory_device: same - memory_limit: - conv_max_mem: 0.5 - norm_max_mem: 0.5 - checkpoint: ./ckpts/ema_vae.pth - scaling_factor: 0.9152 - compile: False - grouping: False - dtype: bfloat16 - -diffusion: - schedule: - type: lerp - T: 1000.0 - sampler: - type: euler - prediction_type: v_lerp - timesteps: - training: - type: logitnormal - loc: 0.0 - scale: 1.0 - sampling: - type: uniform_trailing - steps: 50 - transform: True - loss: - type: v_lerp - cfg: - scale: 7.5 - rescale: 0 - -condition: - i2v: 0.0 - v2v: 0.0 - sr: 1.0 - noise_scale: 0.25 diff --git a/configs_x/LICENSE.txt b/configs_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/configs_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. 
- - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/configs_x/README.md b/configs_x/README.md deleted file mode 100644 index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000 --- a/configs_x/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. 
**Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). - * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. 
- * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. -* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. 
-* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - - -### https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... \ No newline at end of file diff --git a/configs_x/ltxv-13b-0.9.7-dev.yaml b/configs_x/ltxv-13b-0.9.7-dev.yaml deleted file mode 100644 index ae548253526c1de5804bb430407850573305cd14..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-13b-0.9.7-dev.yaml +++ /dev/null @@ -1,34 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-13b-0.9.7-dev.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.7.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - guidance_scale: [1, 1, 6, 8, 6, 1, 1] - stg_scale: [0, 0, 4, 4, 4, 2, 1] - rescaling_scale: [1, 1, 0.5, 0.5, 1, 1, 1] - guidance_timesteps: [1.0, 0.996, 0.9933, 0.9850, 0.9767, 0.9008, 0.6180] - skip_block_list: [[], [11, 25, 35, 39], [22, 35, 39], [28], [28], [28], [28]] - num_inference_steps: 30 - skip_final_inference_steps: 3 - cfg_star_rescale: true - -second_pass: - guidance_scale: [1] - stg_scale: [1] - rescaling_scale: [1] - guidance_timesteps: [1.0] - skip_block_list: [27] - num_inference_steps: 30 - skip_initial_inference_steps: 17 - cfg_star_rescale: true \ No newline at end of file diff --git a/configs_x/ltxv-13b-0.9.7-distilled.yaml b/configs_x/ltxv-13b-0.9.7-distilled.yaml deleted file mode 100644 index 9df17bb001b39d6d12c7013cb823c44b85d28aea..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-13b-0.9.7-distilled.yaml +++ /dev/null @@ -1,28 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-13b-0.9.7-distilled.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.7.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", 
"linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - timesteps: [1.0000, 0.9937, 0.9875, 0.9812, 0.9750, 0.9094, 0.7250] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - -second_pass: - timesteps: [0.9094, 0.7250, 0.4219] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] diff --git a/configs_x/ltxv-13b-0.9.8-dev-fp8.yaml b/configs_x/ltxv-13b-0.9.8-dev-fp8.yaml deleted file mode 100644 index 76b25f1373061a873a3134d471b927b66c37aa54..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-13b-0.9.8-dev-fp8.yaml +++ /dev/null @@ -1,34 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-13b-0.9.8-dev-fp8.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.8.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "float8_e4m3fn" # options: "float8_e4m3fn", "bfloat16", "mixed_precision" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - guidance_scale: [1, 1, 6, 8, 6, 1, 1] - stg_scale: [0, 0, 4, 4, 4, 2, 1] - rescaling_scale: [1, 1, 0.5, 0.5, 1, 1, 1] - guidance_timesteps: [1.0, 0.996, 0.9933, 0.9850, 0.9767, 0.9008, 0.6180] - skip_block_list: [[], [11, 25, 35, 39], [22, 35, 39], [28], [28], [28], [28]] - num_inference_steps: 30 - skip_final_inference_steps: 3 - cfg_star_rescale: true - -second_pass: - guidance_scale: [1] - stg_scale: [1] - rescaling_scale: [1] - guidance_timesteps: [1.0] - skip_block_list: [27] - num_inference_steps: 30 - skip_initial_inference_steps: 17 - cfg_star_rescale: true diff --git a/configs_x/ltxv-13b-0.9.8-dev.yaml b/configs_x/ltxv-13b-0.9.8-dev.yaml deleted file mode 100644 index 0c22e9e5b3704146d521e7c60a841c043373c66e..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-13b-0.9.8-dev.yaml +++ /dev/null @@ -1,34 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-13b-0.9.8-dev.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.8.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - guidance_scale: [1, 1, 6, 8, 6, 1, 1] - stg_scale: [0, 0, 4, 4, 4, 2, 1] - rescaling_scale: [1, 1, 0.5, 0.5, 1, 1, 1] - guidance_timesteps: [1.0, 0.996, 0.9933, 0.9850, 0.9767, 0.9008, 0.6180] - skip_block_list: [[], [11, 25, 35, 39], [22, 
35, 39], [28], [28], [28], [28]] - num_inference_steps: 30 - skip_final_inference_steps: 3 - cfg_star_rescale: true - -second_pass: - guidance_scale: [1] - stg_scale: [1] - rescaling_scale: [1] - guidance_timesteps: [1.0] - skip_block_list: [27] - num_inference_steps: 30 - skip_initial_inference_steps: 17 - cfg_star_rescale: true \ No newline at end of file diff --git a/configs_x/ltxv-13b-0.9.8-distilled-fp8.yaml b/configs_x/ltxv-13b-0.9.8-distilled-fp8.yaml deleted file mode 100644 index 444718bacbaa698c6b3df9cff6c89c9a2f95923c..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-13b-0.9.8-distilled-fp8.yaml +++ /dev/null @@ -1,29 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-13b-0.9.8-distilled-fp8.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.8.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "float8_e4m3fn" # options: "float8_e4m3fn", "bfloat16", "mixed_precision" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - timesteps: [1.0000, 0.9937, 0.9875, 0.9812, 0.9750, 0.9094, 0.7250] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - -second_pass: - timesteps: [0.9094, 0.7250, 0.4219] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - tone_map_compression_ratio: 0.6 diff --git a/configs_x/ltxv-13b-0.9.8-distilled.yaml b/configs_x/ltxv-13b-0.9.8-distilled.yaml deleted file mode 100644 index a1ac7239f3c3ecf0a8e4e03c3a1415a8b257dbf0..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-13b-0.9.8-distilled.yaml +++ /dev/null @@ -1,29 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-13b-0.9.8-distilled.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.8.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - timesteps: [1.0000, 0.9937, 0.9875, 0.9812, 0.9750, 0.9094, 0.7250] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - -second_pass: - timesteps: [0.9094, 0.7250, 0.4219] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - tone_map_compression_ratio: 0.6 diff --git a/configs_x/ltxv-2b-0.9.1.yaml b/configs_x/ltxv-2b-0.9.1.yaml deleted file mode 100644 index 6e888de3fb5ff258cd4caf52453eb707a3941761..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.1.yaml +++ /dev/null @@ -1,17 +0,0 @@ -pipeline_type: base -checkpoint_path: "ltx-video-2b-v0.9.1.safetensors" -guidance_scale: 3 
-stg_scale: 1 -rescaling_scale: 0.7 -skip_block_list: [19] -num_inference_steps: 40 -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false \ No newline at end of file diff --git a/configs_x/ltxv-2b-0.9.5.yaml b/configs_x/ltxv-2b-0.9.5.yaml deleted file mode 100644 index 5998c6040bdbc3b4b0f6838bb7b61b58d0b58b5d..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.5.yaml +++ /dev/null @@ -1,17 +0,0 @@ -pipeline_type: base -checkpoint_path: "ltx-video-2b-v0.9.5.safetensors" -guidance_scale: 3 -stg_scale: 1 -rescaling_scale: 0.7 -skip_block_list: [19] -num_inference_steps: 40 -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false \ No newline at end of file diff --git a/configs_x/ltxv-2b-0.9.6-dev.yaml b/configs_x/ltxv-2b-0.9.6-dev.yaml deleted file mode 100644 index 487f99708e0672dd17b5bd78424f25261163f7dc..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.6-dev.yaml +++ /dev/null @@ -1,17 +0,0 @@ -pipeline_type: base -checkpoint_path: "ltxv-2b-0.9.6-dev-04-25.safetensors" -guidance_scale: 3 -stg_scale: 1 -rescaling_scale: 0.7 -skip_block_list: [19] -num_inference_steps: 40 -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false \ No newline at end of file diff --git a/configs_x/ltxv-2b-0.9.6-distilled.yaml b/configs_x/ltxv-2b-0.9.6-distilled.yaml deleted file mode 100644 index 328d9291613f16ba191cb56f97340f3bfa4d341d..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.6-distilled.yaml +++ /dev/null @@ -1,16 +0,0 @@ -pipeline_type: base -checkpoint_path: "ltxv-2b-0.9.6-distilled-04-25.safetensors" -guidance_scale: 1 -stg_scale: 0 -rescaling_scale: 1 -num_inference_steps: 8 -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", 
"from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: true \ No newline at end of file diff --git a/configs_x/ltxv-2b-0.9.8-distilled-fp8.yaml b/configs_x/ltxv-2b-0.9.8-distilled-fp8.yaml deleted file mode 100644 index c02b2057cb2050ea8f277697a3d741ce1ed03403..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.8-distilled-fp8.yaml +++ /dev/null @@ -1,28 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-2b-0.9.8-distilled-fp8.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.8.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "float8_e4m3fn" # options: "float8_e4m3fn", "bfloat16", "mixed_precision" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - timesteps: [1.0000, 0.9937, 0.9875, 0.9812, 0.9750, 0.9094, 0.7250] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - -second_pass: - timesteps: [0.9094, 0.7250, 0.4219] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] diff --git a/configs_x/ltxv-2b-0.9.8-distilled.yaml b/configs_x/ltxv-2b-0.9.8-distilled.yaml deleted file mode 100644 index 9e24b0eb46b7113e2fe52b3d86d8f0eb4adae8de..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.8-distilled.yaml +++ /dev/null @@ -1,28 +0,0 @@ -pipeline_type: multi-scale -checkpoint_path: "ltxv-2b-0.9.8-distilled.safetensors" -downscale_factor: 0.6666666 -spatial_upscaler_model_path: "ltxv-spatial-upscaler-0.9.8.safetensors" -stg_mode: "attention_values" # options: "attention_values", "attention_skip", "residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false - -first_pass: - timesteps: [1.0000, 0.9937, 0.9875, 0.9812, 0.9750, 0.9094, 0.7250] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] - -second_pass: - timesteps: [0.9094, 0.7250, 0.4219] - guidance_scale: 1 - stg_scale: 0 - rescaling_scale: 1 - skip_block_list: [42] diff --git a/configs_x/ltxv-2b-0.9.yaml b/configs_x/ltxv-2b-0.9.yaml deleted file mode 100644 index f501ca62c24085192cebe10c87261fba38c930bc..0000000000000000000000000000000000000000 --- a/configs_x/ltxv-2b-0.9.yaml +++ /dev/null @@ -1,17 +0,0 @@ -pipeline_type: base -checkpoint_path: "ltx-video-2b-v0.9.safetensors" -guidance_scale: 3 -stg_scale: 1 -rescaling_scale: 0.7 -skip_block_list: [19] -num_inference_steps: 40 -stg_mode: "attention_values" # options: "attention_values", "attention_skip", 
"residual", "transformer_block" -decode_timestep: 0.05 -decode_noise_scale: 0.025 -text_encoder_model_name_or_path: "PixArt-alpha/PixArt-XL-2-1024-MS" -precision: "bfloat16" -sampler: "from_checkpoint" # options: "uniform", "linear-quadratic", "from_checkpoint" -prompt_enhancement_words_threshold: 120 -prompt_enhancer_image_caption_model_name_or_path: "MiaoshouAI/Florence-2-large-PromptGen-v2.0" -prompt_enhancer_llm_model_name_or_path: "unsloth/Llama-3.2-3B-Instruct" -stochastic_sampling: false \ No newline at end of file diff --git a/data_x/LICENSE.txt b/data_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/data_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/data_x/README.md b/data_x/README.md deleted file mode 100644 index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000 --- a/data_x/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. **Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. 
- -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). - * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). 
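
Below is a minimal usage sketch for the two helpers documented above. It assumes the singletons can be imported from `hardware_manager.py` (where this README says they are instantiated); the import path, file names, and prompt values are illustrative and not part of the adapted code itself.

```python
from PIL import Image
# Assumed import location: the README states the singletons live in hardware_manager.py.
from hardware_manager import gemini_agent_singleton, flux_kontext_singleton

ref_paths = ["refs/hero.png", "refs/city.png"]      # hypothetical reference images
ref_images = [Image.open(p) for p in ref_paths]     # FluxPoolManager expects PIL images

# 1) Storyboard: returns (list of scene descriptions, textual report).
storyboard, report = gemini_agent_singleton.generate_storyboard(
    prompt="A lone astronaut explores an abandoned city at dusk.",
    num_keyframes=4,
    ref_image_paths=ref_paths,
)

# 2) Keyframe: in the real flow a Gemini-generated image prompt would be used
#    (get_anticipatory_keyframe_prompt); here the first storyboard line stands in for it.
keyframe = flux_kontext_singleton.generate_image(
    prompt=storyboard[0],
    reference_images=ref_images,
    width=1024,
    height=576,
    seed=42,
)
keyframe.save("keyframe_01.png")
```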
- -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. -* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. -* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - - -### https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... 
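
A companion sketch for the video and audio helpers above, under the same assumptions (singletons importable from `hardware_manager.py`, illustrative parameter values). Building the `LatentConditioningItem` list and decoding the latent fragment to pixels are handled elsewhere in the pipeline and are omitted here.

```python
# Assumed import location, as described in this README.
from hardware_manager import ltx_manager_singleton, mmaudio_agent_singleton

# Conditioning latents (LatentConditioningItem objects) are prepared upstream by the
# Deformes engines; their construction is out of scope for this sketch.
conditioning_items = []

# 1) Generate a latent video fragment; returns (latent tensor, padding values).
latents, padding = ltx_manager_singleton.generate_latent_fragment(
    height=576,
    width=1024,
    video_total_frames=121,
    video_fps=24,
    motion_prompt="slow dolly-in through the ruined plaza",
    conditioning_items_data=conditioning_items,
    guidance_scale=3.0,
    stg_scale=1.0,
    num_inference_steps=30,
)
print(latents.shape)  # latent-space tensor; decoded to pixels by the VAE elsewhere

# 2) After the fragment has been decoded and written to disk as a silent video,
#    attach an audio track with MMAudio.
final_path = mmaudio_agent_singleton.generate_audio_for_video(
    video_path="fragment_01_silent.mp4",
    prompt="distant wind, echoing footsteps on concrete",
    negative_prompt="music",
    duration_seconds=5.0,
)
print(final_path)  # path to the muxed video with the generated audio track
```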
\ No newline at end of file diff --git a/data_x/image/transforms/area_resize.py b/data_x/image/transforms/area_resize.py deleted file mode 100644 index 9f621dae1b0af40f58e090405db1ac7338110980..0000000000000000000000000000000000000000 --- a/data_x/image/transforms/area_resize.py +++ /dev/null @@ -1,135 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import math -import random -from typing import Union -import torch -from PIL import Image -from torchvision.transforms import functional as TVF -from torchvision.transforms.functional import InterpolationMode - - -class AreaResize: - def __init__( - self, - max_area: float, - downsample_only: bool = False, - interpolation: InterpolationMode = InterpolationMode.BICUBIC, - ): - self.max_area = max_area - self.downsample_only = downsample_only - self.interpolation = interpolation - - def __call__(self, image: Union[torch.Tensor, Image.Image]): - - if isinstance(image, torch.Tensor): - height, width = image.shape[-2:] - elif isinstance(image, Image.Image): - width, height = image.size - else: - raise NotImplementedError - - scale = math.sqrt(self.max_area / (height * width)) - - # keep original height and width for small pictures. - scale = 1 if scale >= 1 and self.downsample_only else scale - - resized_height, resized_width = round(height * scale), round(width * scale) - - return TVF.resize( - image, - size=(resized_height, resized_width), - interpolation=self.interpolation, - ) - - -class AreaRandomCrop: - def __init__( - self, - max_area: float, - ): - self.max_area = max_area - - def get_params(self, input_size, output_size): - """Get parameters for ``crop`` for a random crop. - - Args: - img (PIL Image): Image to be cropped. - output_size (tuple): Expected output size of the crop. - - Returns: - tuple: params (i, j, h, w) to be passed to ``crop`` for random crop. 
- """ - # w, h = _get_image_size(img) - h, w = input_size - th, tw = output_size - if w <= tw and h <= th: - return 0, 0, h, w - - i = random.randint(0, h - th) - j = random.randint(0, w - tw) - return i, j, th, tw - - def __call__(self, image: Union[torch.Tensor, Image.Image]): - if isinstance(image, torch.Tensor): - height, width = image.shape[-2:] - elif isinstance(image, Image.Image): - width, height = image.size - else: - raise NotImplementedError - - resized_height = math.sqrt(self.max_area / (width / height)) - resized_width = (width / height) * resized_height - - # print('>>>>>>>>>>>>>>>>>>>>>') - # print((height, width)) - # print( (resized_height, resized_width)) - - resized_height, resized_width = round(resized_height), round(resized_width) - i, j, h, w = self.get_params((height, width), (resized_height, resized_width)) - image = TVF.crop(image, i, j, h, w) - return image - -class ScaleResize: - def __init__( - self, - scale: float, - ): - self.scale = scale - - def __call__(self, image: Union[torch.Tensor, Image.Image]): - if isinstance(image, torch.Tensor): - height, width = image.shape[-2:] - interpolation_mode = InterpolationMode.BILINEAR - antialias = True if image.ndim == 4 else "warn" - elif isinstance(image, Image.Image): - width, height = image.size - interpolation_mode = InterpolationMode.LANCZOS - antialias = "warn" - else: - raise NotImplementedError - - scale = self.scale - - # keep original height and width for small pictures - - resized_height, resized_width = round(height * scale), round(width * scale) - image = TVF.resize( - image, - size=(resized_height, resized_width), - interpolation=interpolation_mode, - antialias=antialias, - ) - return image diff --git a/data_x/image/transforms/divisible_crop.py b/data_x/image/transforms/divisible_crop.py deleted file mode 100644 index d1815b03ee1ce99486143aca24b9023ab0b3973c..0000000000000000000000000000000000000000 --- a/data_x/image/transforms/divisible_crop.py +++ /dev/null @@ -1,40 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
- -from typing import Union -import torch -from PIL import Image -from torchvision.transforms import functional as TVF - - -class DivisibleCrop: - def __init__(self, factor): - if not isinstance(factor, tuple): - factor = (factor, factor) - - self.height_factor, self.width_factor = factor[0], factor[1] - - def __call__(self, image: Union[torch.Tensor, Image.Image]): - if isinstance(image, torch.Tensor): - height, width = image.shape[-2:] - elif isinstance(image, Image.Image): - width, height = image.size - else: - raise NotImplementedError - - cropped_height = height - (height % self.height_factor) - cropped_width = width - (width % self.width_factor) - - image = TVF.center_crop(img=image, output_size=(cropped_height, cropped_width)) - return image diff --git a/data_x/image/transforms/na_resize.py b/data_x/image/transforms/na_resize.py deleted file mode 100644 index d230e25e3ca1710ad6261d8e14541a97732b9a30..0000000000000000000000000000000000000000 --- a/data_x/image/transforms/na_resize.py +++ /dev/null @@ -1,50 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Literal -from torchvision.transforms import CenterCrop, Compose, InterpolationMode, Resize - -from .area_resize import AreaResize -from .side_resize import SideResize - - -def NaResize( - resolution: int, - mode: Literal["area", "side"], - downsample_only: bool, - interpolation: InterpolationMode = InterpolationMode.BICUBIC, -): - if mode == "area": - return AreaResize( - max_area=resolution**2, - downsample_only=downsample_only, - interpolation=interpolation, - ) - if mode == "side": - return SideResize( - size=resolution, - downsample_only=downsample_only, - interpolation=interpolation, - ) - if mode == "square": - return Compose( - [ - Resize( - size=resolution, - interpolation=interpolation, - ), - CenterCrop(resolution), - ] - ) - raise ValueError(f"Unknown resize mode: {mode}") diff --git a/data_x/image/transforms/side_resize.py b/data_x/image/transforms/side_resize.py deleted file mode 100644 index 6e07402b2187a048b99d995d68ead12f790f5724..0000000000000000000000000000000000000000 --- a/data_x/image/transforms/side_resize.py +++ /dev/null @@ -1,54 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
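
The `NaResize` factory and `DivisibleCrop` above are typically chained: first resize to a pixel budget, then center-crop so both sides are divisible by the model's spatial downscale factor. A minimal sketch, again assuming the import paths:

```python
import torch
from torchvision.transforms import Compose

# Assumed import paths; adjust to the actual package layout of the checkout.
from data_x.image.transforms.na_resize import NaResize
from data_x.image.transforms.divisible_crop import DivisibleCrop

# Resize to roughly 512*512 pixels, then crop so height and width are multiples of 16.
preprocess = Compose([
    NaResize(resolution=512, mode="area", downsample_only=False),
    DivisibleCrop(16),
])

x = torch.rand(3, 1080, 1920)
y = preprocess(x)
print(y.shape)   # torch.Size([3, 384, 672]); 683 is center-cropped down to 672 (a multiple of 16)
```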
- -from typing import Union -import torch -from PIL import Image -from torchvision.transforms import InterpolationMode -from torchvision.transforms import functional as TVF - - -class SideResize: - def __init__( - self, - size: int, - downsample_only: bool = False, - interpolation: InterpolationMode = InterpolationMode.BICUBIC, - ): - self.size = size - self.downsample_only = downsample_only - self.interpolation = interpolation - - def __call__(self, image: Union[torch.Tensor, Image.Image]): - """ - Args: - image (PIL Image or Tensor): Image to be scaled. - - Returns: - PIL Image or Tensor: Rescaled image. - """ - if isinstance(image, torch.Tensor): - height, width = image.shape[-2:] - elif isinstance(image, Image.Image): - width, height = image.size - else: - raise NotImplementedError - - if self.downsample_only and min(width, height) < self.size: - # keep original height and width for small pictures. - size = min(width, height) - else: - size = self.size - - return TVF.resize(image, size, self.interpolation) diff --git a/data_x/video/transforms/rearrange.py b/data_x/video/transforms/rearrange.py deleted file mode 100644 index 895347991d71043742777f103d32b62c80284660..0000000000000000000000000000000000000000 --- a/data_x/video/transforms/rearrange.py +++ /dev/null @@ -1,24 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from einops import rearrange - - -class Rearrange: - def __init__(self, pattern: str, **kwargs): - self.pattern = pattern - self.kwargs = kwargs - - def __call__(self, x): - return rearrange(x, self.pattern, **self.kwargs) diff --git a/ltx_video_x/LICENSE.txt b/ltx_video_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/ltx_video_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. 
- - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. 
If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. 
Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/ltx_video_x/README.md b/ltx_video_x/README.md deleted file mode 100644 index 964c76ea7d615f5287e9283e23df316dad8bfdd4..0000000000000000000000000000000000000000 --- a/ltx_video_x/README.md +++ /dev/null @@ -1,135 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. **Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). 
- * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. 
-* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. -* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... 
\ No newline at end of file diff --git a/ltx_video_x/__init__.py b/ltx_video_x/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/models/__init__.py b/ltx_video_x/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/models/autoencoders/__init__.py b/ltx_video_x/models/autoencoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/models/autoencoders/causal_conv3d.py b/ltx_video_x/models/autoencoders/causal_conv3d.py deleted file mode 100644 index 98249c2f5ffe52eead83b38476e034c4f03bdccd..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/causal_conv3d.py +++ /dev/null @@ -1,63 +0,0 @@ -from typing import Tuple, Union - -import torch -import torch.nn as nn - - -class CausalConv3d(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size: int = 3, - stride: Union[int, Tuple[int]] = 1, - dilation: int = 1, - groups: int = 1, - spatial_padding_mode: str = "zeros", - **kwargs, - ): - super().__init__() - - self.in_channels = in_channels - self.out_channels = out_channels - - kernel_size = (kernel_size, kernel_size, kernel_size) - self.time_kernel_size = kernel_size[0] - - dilation = (dilation, 1, 1) - - height_pad = kernel_size[1] // 2 - width_pad = kernel_size[2] // 2 - padding = (0, height_pad, width_pad) - - self.conv = nn.Conv3d( - in_channels, - out_channels, - kernel_size, - stride=stride, - dilation=dilation, - padding=padding, - padding_mode=spatial_padding_mode, - groups=groups, - ) - - def forward(self, x, causal: bool = True): - if causal: - first_frame_pad = x[:, :, :1, :, :].repeat( - (1, 1, self.time_kernel_size - 1, 1, 1) - ) - x = torch.concatenate((first_frame_pad, x), dim=2) - else: - first_frame_pad = x[:, :, :1, :, :].repeat( - (1, 1, (self.time_kernel_size - 1) // 2, 1, 1) - ) - last_frame_pad = x[:, :, -1:, :, :].repeat( - (1, 1, (self.time_kernel_size - 1) // 2, 1, 1) - ) - x = torch.concatenate((first_frame_pad, x, last_frame_pad), dim=2) - x = self.conv(x) - return x - - @property - def weight(self): - return self.conv.weight diff --git a/ltx_video_x/models/autoencoders/causal_video_autoencoder.py b/ltx_video_x/models/autoencoders/causal_video_autoencoder.py deleted file mode 100644 index 736c96a3c65e22a7ada0bb20535e0e15bc47b123..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/causal_video_autoencoder.py +++ /dev/null @@ -1,1398 +0,0 @@ -import json -import os -from functools import partial -from types import SimpleNamespace -from typing import Any, Mapping, Optional, Tuple, Union, List -from pathlib import Path - -import torch -import numpy as np -from einops import rearrange -from torch import nn -from diffusers.utils import logging -import torch.nn.functional as F -from diffusers.models.embeddings import PixArtAlphaCombinedTimestepSizeEmbeddings -from safetensors import safe_open - - -from ltx_video.models.autoencoders.conv_nd_factory import make_conv_nd, make_linear_nd -from ltx_video.models.autoencoders.pixel_norm import PixelNorm -from ltx_video.models.autoencoders.pixel_shuffle import PixelShuffleND -from ltx_video.models.autoencoders.vae import AutoencoderKLWrapper -from ltx_video.models.transformers.attention import Attention -from ltx_video.utils.diffusers_config_mapping import ( 
- diffusers_and_ours_config_mapping, - make_hashable_key, - VAE_KEYS_RENAME_DICT, -) - -PER_CHANNEL_STATISTICS_PREFIX = "per_channel_statistics." -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class CausalVideoAutoencoder(AutoencoderKLWrapper): - @classmethod - def from_pretrained( - cls, - pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], - *args, - **kwargs, - ): - pretrained_model_name_or_path = Path(pretrained_model_name_or_path) - if ( - pretrained_model_name_or_path.is_dir() - and (pretrained_model_name_or_path / "autoencoder.pth").exists() - ): - config_local_path = pretrained_model_name_or_path / "config.json" - config = cls.load_config(config_local_path, **kwargs) - - model_local_path = pretrained_model_name_or_path / "autoencoder.pth" - state_dict = torch.load(model_local_path, map_location=torch.device("cpu")) - - statistics_local_path = ( - pretrained_model_name_or_path / "per_channel_statistics.json" - ) - if statistics_local_path.exists(): - with open(statistics_local_path, "r") as file: - data = json.load(file) - transposed_data = list(zip(*data["data"])) - data_dict = { - col: torch.tensor(vals) - for col, vals in zip(data["columns"], transposed_data) - } - std_of_means = data_dict["std-of-means"] - mean_of_means = data_dict.get( - "mean-of-means", torch.zeros_like(data_dict["std-of-means"]) - ) - state_dict[f"{PER_CHANNEL_STATISTICS_PREFIX}std-of-means"] = ( - std_of_means - ) - state_dict[f"{PER_CHANNEL_STATISTICS_PREFIX}mean-of-means"] = ( - mean_of_means - ) - - elif pretrained_model_name_or_path.is_dir(): - config_path = pretrained_model_name_or_path / "vae" / "config.json" - with open(config_path, "r") as f: - config = make_hashable_key(json.load(f)) - - assert config in diffusers_and_ours_config_mapping, ( - "Provided diffusers checkpoint config for VAE is not suppported. " - "We only support diffusers configs found in Lightricks/LTX-Video." 
- ) - - config = diffusers_and_ours_config_mapping[config] - - state_dict_path = ( - pretrained_model_name_or_path - / "vae" - / "diffusion_pytorch_model.safetensors" - ) - - state_dict = {} - with safe_open(state_dict_path, framework="pt", device="cpu") as f: - for k in f.keys(): - state_dict[k] = f.get_tensor(k) - for key in list(state_dict.keys()): - new_key = key - for replace_key, rename_key in VAE_KEYS_RENAME_DICT.items(): - new_key = new_key.replace(replace_key, rename_key) - - state_dict[new_key] = state_dict.pop(key) - - elif pretrained_model_name_or_path.is_file() and str( - pretrained_model_name_or_path - ).endswith(".safetensors"): - state_dict = {} - with safe_open( - pretrained_model_name_or_path, framework="pt", device="cpu" - ) as f: - metadata = f.metadata() - for k in f.keys(): - state_dict[k] = f.get_tensor(k) - configs = json.loads(metadata["config"]) - config = configs["vae"] - - video_vae = cls.from_config(config) - if "torch_dtype" in kwargs: - video_vae.to(kwargs["torch_dtype"]) - video_vae.load_state_dict(state_dict) - return video_vae - - @staticmethod - def from_config(config): - assert ( - config["_class_name"] == "CausalVideoAutoencoder" - ), "config must have _class_name=CausalVideoAutoencoder" - if isinstance(config["dims"], list): - config["dims"] = tuple(config["dims"]) - - assert config["dims"] in [2, 3, (2, 1)], "dims must be 2, 3 or (2, 1)" - - double_z = config.get("double_z", True) - latent_log_var = config.get( - "latent_log_var", "per_channel" if double_z else "none" - ) - use_quant_conv = config.get("use_quant_conv", True) - normalize_latent_channels = config.get("normalize_latent_channels", False) - - if use_quant_conv and latent_log_var in ["uniform", "constant"]: - raise ValueError( - f"latent_log_var={latent_log_var} requires use_quant_conv=False" - ) - - encoder = Encoder( - dims=config["dims"], - in_channels=config.get("in_channels", 3), - out_channels=config["latent_channels"], - blocks=config.get("encoder_blocks", config.get("blocks")), - patch_size=config.get("patch_size", 1), - latent_log_var=latent_log_var, - norm_layer=config.get("norm_layer", "group_norm"), - base_channels=config.get("encoder_base_channels", 128), - spatial_padding_mode=config.get("spatial_padding_mode", "zeros"), - ) - - decoder = Decoder( - dims=config["dims"], - in_channels=config["latent_channels"], - out_channels=config.get("out_channels", 3), - blocks=config.get("decoder_blocks", config.get("blocks")), - patch_size=config.get("patch_size", 1), - norm_layer=config.get("norm_layer", "group_norm"), - causal=config.get("causal_decoder", False), - timestep_conditioning=config.get("timestep_conditioning", False), - base_channels=config.get("decoder_base_channels", 128), - spatial_padding_mode=config.get("spatial_padding_mode", "zeros"), - ) - - dims = config["dims"] - return CausalVideoAutoencoder( - encoder=encoder, - decoder=decoder, - latent_channels=config["latent_channels"], - dims=dims, - use_quant_conv=use_quant_conv, - normalize_latent_channels=normalize_latent_channels, - ) - - @property - def config(self): - return SimpleNamespace( - _class_name="CausalVideoAutoencoder", - dims=self.dims, - in_channels=self.encoder.conv_in.in_channels // self.encoder.patch_size**2, - out_channels=self.decoder.conv_out.out_channels - // self.decoder.patch_size**2, - latent_channels=self.decoder.conv_in.in_channels, - encoder_blocks=self.encoder.blocks_desc, - decoder_blocks=self.decoder.blocks_desc, - scaling_factor=1.0, - norm_layer=self.encoder.norm_layer, - 
patch_size=self.encoder.patch_size, - latent_log_var=self.encoder.latent_log_var, - use_quant_conv=self.use_quant_conv, - causal_decoder=self.decoder.causal, - timestep_conditioning=self.decoder.timestep_conditioning, - normalize_latent_channels=self.normalize_latent_channels, - ) - - @property - def is_video_supported(self): - """ - Check if the model supports video inputs of shape (B, C, F, H, W). Otherwise, the model only supports 2D images. - """ - return self.dims != 2 - - @property - def spatial_downscale_factor(self): - return ( - 2 - ** len( - [ - block - for block in self.encoder.blocks_desc - if block[0] - in [ - "compress_space", - "compress_all", - "compress_all_res", - "compress_space_res", - ] - ] - ) - * self.encoder.patch_size - ) - - @property - def temporal_downscale_factor(self): - return 2 ** len( - [ - block - for block in self.encoder.blocks_desc - if block[0] - in [ - "compress_time", - "compress_all", - "compress_all_res", - "compress_time_res", - ] - ] - ) - - def to_json_string(self) -> str: - import json - - return json.dumps(self.config.__dict__) - - def load_state_dict(self, state_dict: Mapping[str, Any], strict: bool = True): - if any([key.startswith("vae.") for key in state_dict.keys()]): - state_dict = { - key.replace("vae.", ""): value - for key, value in state_dict.items() - if key.startswith("vae.") - } - ckpt_state_dict = { - key: value - for key, value in state_dict.items() - if not key.startswith(PER_CHANNEL_STATISTICS_PREFIX) - } - - model_keys = set(name for name, _ in self.named_modules()) - - key_mapping = { - ".resnets.": ".res_blocks.", - "downsamplers.0": "downsample", - "upsamplers.0": "upsample", - } - converted_state_dict = {} - for key, value in ckpt_state_dict.items(): - for k, v in key_mapping.items(): - key = key.replace(k, v) - - key_prefix = ".".join(key.split(".")[:-1]) - if "norm" in key and key_prefix not in model_keys: - logger.info( - f"Removing key {key} from state_dict as it is not present in the model" - ) - continue - - converted_state_dict[key] = value - - super().load_state_dict(converted_state_dict, strict=strict) - - data_dict = { - key.removeprefix(PER_CHANNEL_STATISTICS_PREFIX): value - for key, value in state_dict.items() - if key.startswith(PER_CHANNEL_STATISTICS_PREFIX) - } - if len(data_dict) > 0: - self.register_buffer("std_of_means", data_dict["std-of-means"]) - self.register_buffer( - "mean_of_means", - data_dict.get( - "mean-of-means", torch.zeros_like(data_dict["std-of-means"]) - ), - ) - - def last_layer(self): - if hasattr(self.decoder, "conv_out"): - if isinstance(self.decoder.conv_out, nn.Sequential): - last_layer = self.decoder.conv_out[-1] - else: - last_layer = self.decoder.conv_out - else: - last_layer = self.decoder.layers[-1] - return last_layer - - def set_use_tpu_flash_attention(self): - for block in self.decoder.up_blocks: - if isinstance(block, UNetMidBlock3D) and block.attention_blocks: - for attention_block in block.attention_blocks: - attention_block.set_use_tpu_flash_attention() - - -class Encoder(nn.Module): - r""" - The `Encoder` layer of a variational autoencoder that encodes its input into a latent representation. - - Args: - dims (`int` or `Tuple[int, int]`, *optional*, defaults to 3): - The number of dimensions to use in convolutions. - in_channels (`int`, *optional*, defaults to 3): - The number of input channels. - out_channels (`int`, *optional*, defaults to 3): - The number of output channels. 
- blocks (`List[Tuple[str, int]]`, *optional*, defaults to `[("res_x", 1)]`): - The blocks to use. Each block is a tuple of the block name and the number of layers. - base_channels (`int`, *optional*, defaults to 128): - The number of output channels for the first convolutional layer. - norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups for normalization. - patch_size (`int`, *optional*, defaults to 1): - The patch size to use. Should be a power of 2. - norm_layer (`str`, *optional*, defaults to `group_norm`): - The normalization layer to use. Can be either `group_norm` or `pixel_norm`. - latent_log_var (`str`, *optional*, defaults to `per_channel`): - The number of channels for the log variance. Can be either `per_channel`, `uniform`, `constant` or `none`. - """ - - def __init__( - self, - dims: Union[int, Tuple[int, int]] = 3, - in_channels: int = 3, - out_channels: int = 3, - blocks: List[Tuple[str, int | dict]] = [("res_x", 1)], - base_channels: int = 128, - norm_num_groups: int = 32, - patch_size: Union[int, Tuple[int]] = 1, - norm_layer: str = "group_norm", # group_norm, pixel_norm - latent_log_var: str = "per_channel", - spatial_padding_mode: str = "zeros", - ): - super().__init__() - self.patch_size = patch_size - self.norm_layer = norm_layer - self.latent_channels = out_channels - self.latent_log_var = latent_log_var - self.blocks_desc = blocks - - in_channels = in_channels * patch_size**2 - output_channel = base_channels - - self.conv_in = make_conv_nd( - dims=dims, - in_channels=in_channels, - out_channels=output_channel, - kernel_size=3, - stride=1, - padding=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - self.down_blocks = nn.ModuleList([]) - - for block_name, block_params in blocks: - input_channel = output_channel - if isinstance(block_params, int): - block_params = {"num_layers": block_params} - - if block_name == "res_x": - block = UNetMidBlock3D( - dims=dims, - in_channels=input_channel, - num_layers=block_params["num_layers"], - resnet_eps=1e-6, - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "res_x_y": - output_channel = block_params.get("multiplier", 2) * output_channel - block = ResnetBlock3D( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - eps=1e-6, - groups=norm_num_groups, - norm_layer=norm_layer, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_time": - block = make_conv_nd( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - kernel_size=3, - stride=(2, 1, 1), - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_space": - block = make_conv_nd( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - kernel_size=3, - stride=(1, 2, 2), - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_all": - block = make_conv_nd( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - kernel_size=3, - stride=(2, 2, 2), - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_all_x_y": - output_channel = block_params.get("multiplier", 2) * output_channel - block = make_conv_nd( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - kernel_size=3, - stride=(2, 2, 2), - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_all_res": - output_channel = 
block_params.get("multiplier", 2) * output_channel - block = SpaceToDepthDownsample( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - stride=(2, 2, 2), - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_space_res": - output_channel = block_params.get("multiplier", 2) * output_channel - block = SpaceToDepthDownsample( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - stride=(1, 2, 2), - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_time_res": - output_channel = block_params.get("multiplier", 2) * output_channel - block = SpaceToDepthDownsample( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - stride=(2, 1, 1), - spatial_padding_mode=spatial_padding_mode, - ) - else: - raise ValueError(f"unknown block: {block_name}") - - self.down_blocks.append(block) - - # out - if norm_layer == "group_norm": - self.conv_norm_out = nn.GroupNorm( - num_channels=output_channel, num_groups=norm_num_groups, eps=1e-6 - ) - elif norm_layer == "pixel_norm": - self.conv_norm_out = PixelNorm() - elif norm_layer == "layer_norm": - self.conv_norm_out = LayerNorm(output_channel, eps=1e-6) - - self.conv_act = nn.SiLU() - - conv_out_channels = out_channels - if latent_log_var == "per_channel": - conv_out_channels *= 2 - elif latent_log_var == "uniform": - conv_out_channels += 1 - elif latent_log_var == "constant": - conv_out_channels += 1 - elif latent_log_var != "none": - raise ValueError(f"Invalid latent_log_var: {latent_log_var}") - self.conv_out = make_conv_nd( - dims, - output_channel, - conv_out_channels, - 3, - padding=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - self.gradient_checkpointing = False - - def forward(self, sample: torch.FloatTensor) -> torch.FloatTensor: - r"""The forward method of the `Encoder` class.""" - - sample = patchify(sample, patch_size_hw=self.patch_size, patch_size_t=1) - sample = self.conv_in(sample) - - checkpoint_fn = ( - partial(torch.utils.checkpoint.checkpoint, use_reentrant=False) - if self.gradient_checkpointing and self.training - else lambda x: x - ) - - for down_block in self.down_blocks: - sample = checkpoint_fn(down_block)(sample) - - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if self.latent_log_var == "uniform": - last_channel = sample[:, -1:, ...] - num_dims = sample.dim() - - if num_dims == 4: - # For shape (B, C, H, W) - repeated_last_channel = last_channel.repeat( - 1, sample.shape[1] - 2, 1, 1 - ) - sample = torch.cat([sample, repeated_last_channel], dim=1) - elif num_dims == 5: - # For shape (B, C, F, H, W) - repeated_last_channel = last_channel.repeat( - 1, sample.shape[1] - 2, 1, 1, 1 - ) - sample = torch.cat([sample, repeated_last_channel], dim=1) - else: - raise ValueError(f"Invalid input shape: {sample.shape}") - elif self.latent_log_var == "constant": - sample = sample[:, :-1, ...] - approx_ln_0 = ( - -30 - ) # this is the minimal clamp value in DiagonalGaussianDistribution objects - sample = torch.cat( - [sample, torch.ones_like(sample, device=sample.device) * approx_ln_0], - dim=1, - ) - - return sample - - -class Decoder(nn.Module): - r""" - The `Decoder` layer of a variational autoencoder that decodes its latent representation into an output sample. - - Args: - dims (`int` or `Tuple[int, int]`, *optional*, defaults to 3): - The number of dimensions to use in convolutions. 
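# Illustrative sketch, not part of the original diff: how the encoder's
# latent_log_var options above change the number of output channels, and how the
# "uniform" case is broadcast back to per-channel log-variances in forward().
# Shapes are hypothetical; only torch is assumed.
import torch

latent_channels = 4
# "per_channel" -> 2 * latent_channels outputs (mean + log-var per channel)
# "uniform" / "constant" -> latent_channels + 1 outputs (one shared log-var channel)
sample = torch.randn(1, latent_channels + 1, 2, 8, 8)   # "uniform" encoder output

last_channel = sample[:, -1:, ...]                       # the shared log-var channel
repeated = last_channel.repeat(1, sample.shape[1] - 2, 1, 1, 1)
moments = torch.cat([sample, repeated], dim=1)           # 2 * latent_channels channels
print(moments.shape)                                     # torch.Size([1, 8, 2, 8, 8])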
- in_channels (`int`, *optional*, defaults to 3): - The number of input channels. - out_channels (`int`, *optional*, defaults to 3): - The number of output channels. - blocks (`List[Tuple[str, int]]`, *optional*, defaults to `[("res_x", 1)]`): - The blocks to use. Each block is a tuple of the block name and the number of layers. - base_channels (`int`, *optional*, defaults to 128): - The number of output channels for the first convolutional layer. - norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups for normalization. - patch_size (`int`, *optional*, defaults to 1): - The patch size to use. Should be a power of 2. - norm_layer (`str`, *optional*, defaults to `group_norm`): - The normalization layer to use. Can be either `group_norm` or `pixel_norm`. - causal (`bool`, *optional*, defaults to `True`): - Whether to use causal convolutions or not. - """ - - def __init__( - self, - dims, - in_channels: int = 3, - out_channels: int = 3, - blocks: List[Tuple[str, int | dict]] = [("res_x", 1)], - base_channels: int = 128, - layers_per_block: int = 2, - norm_num_groups: int = 32, - patch_size: int = 1, - norm_layer: str = "group_norm", - causal: bool = True, - timestep_conditioning: bool = False, - spatial_padding_mode: str = "zeros", - ): - super().__init__() - self.patch_size = patch_size - self.layers_per_block = layers_per_block - out_channels = out_channels * patch_size**2 - self.causal = causal - self.blocks_desc = blocks - - # Compute output channel to be product of all channel-multiplier blocks - output_channel = base_channels - for block_name, block_params in list(reversed(blocks)): - block_params = block_params if isinstance(block_params, dict) else {} - if block_name == "res_x_y": - output_channel = output_channel * block_params.get("multiplier", 2) - if block_name.startswith("compress"): - output_channel = output_channel * block_params.get("multiplier", 1) - - self.conv_in = make_conv_nd( - dims, - in_channels, - output_channel, - kernel_size=3, - stride=1, - padding=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - self.up_blocks = nn.ModuleList([]) - - for block_name, block_params in list(reversed(blocks)): - input_channel = output_channel - if isinstance(block_params, int): - block_params = {"num_layers": block_params} - - if block_name == "res_x": - block = UNetMidBlock3D( - dims=dims, - in_channels=input_channel, - num_layers=block_params["num_layers"], - resnet_eps=1e-6, - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - inject_noise=block_params.get("inject_noise", False), - timestep_conditioning=timestep_conditioning, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "attn_res_x": - block = UNetMidBlock3D( - dims=dims, - in_channels=input_channel, - num_layers=block_params["num_layers"], - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - inject_noise=block_params.get("inject_noise", False), - timestep_conditioning=timestep_conditioning, - attention_head_dim=block_params["attention_head_dim"], - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "res_x_y": - output_channel = output_channel // block_params.get("multiplier", 2) - block = ResnetBlock3D( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - eps=1e-6, - groups=norm_num_groups, - norm_layer=norm_layer, - inject_noise=block_params.get("inject_noise", False), - timestep_conditioning=False, - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_time": - block = 
DepthToSpaceUpsample( - dims=dims, - in_channels=input_channel, - stride=(2, 1, 1), - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_space": - block = DepthToSpaceUpsample( - dims=dims, - in_channels=input_channel, - stride=(1, 2, 2), - spatial_padding_mode=spatial_padding_mode, - ) - elif block_name == "compress_all": - output_channel = output_channel // block_params.get("multiplier", 1) - block = DepthToSpaceUpsample( - dims=dims, - in_channels=input_channel, - stride=(2, 2, 2), - residual=block_params.get("residual", False), - out_channels_reduction_factor=block_params.get("multiplier", 1), - spatial_padding_mode=spatial_padding_mode, - ) - else: - raise ValueError(f"unknown layer: {block_name}") - - self.up_blocks.append(block) - - if norm_layer == "group_norm": - self.conv_norm_out = nn.GroupNorm( - num_channels=output_channel, num_groups=norm_num_groups, eps=1e-6 - ) - elif norm_layer == "pixel_norm": - self.conv_norm_out = PixelNorm() - elif norm_layer == "layer_norm": - self.conv_norm_out = LayerNorm(output_channel, eps=1e-6) - - self.conv_act = nn.SiLU() - self.conv_out = make_conv_nd( - dims, - output_channel, - out_channels, - 3, - padding=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - self.gradient_checkpointing = False - - self.timestep_conditioning = timestep_conditioning - - if timestep_conditioning: - self.timestep_scale_multiplier = nn.Parameter( - torch.tensor(1000.0, dtype=torch.float32) - ) - self.last_time_embedder = PixArtAlphaCombinedTimestepSizeEmbeddings( - output_channel * 2, 0 - ) - self.last_scale_shift_table = nn.Parameter( - torch.randn(2, output_channel) / output_channel**0.5 - ) - - def forward( - self, - sample: torch.FloatTensor, - target_shape, - timestep: Optional[torch.Tensor] = None, - ) -> torch.FloatTensor: - r"""The forward method of the `Decoder` class.""" - assert target_shape is not None, "target_shape must be provided" - batch_size = sample.shape[0] - - sample = self.conv_in(sample, causal=self.causal) - - upscale_dtype = next(iter(self.up_blocks.parameters())).dtype - - checkpoint_fn = ( - partial(torch.utils.checkpoint.checkpoint, use_reentrant=False) - if self.gradient_checkpointing and self.training - else lambda x: x - ) - - sample = sample.to(upscale_dtype) - - if self.timestep_conditioning: - assert ( - timestep is not None - ), "should pass timestep with timestep_conditioning=True" - scaled_timestep = timestep * self.timestep_scale_multiplier - - for up_block in self.up_blocks: - if self.timestep_conditioning and isinstance(up_block, UNetMidBlock3D): - sample = checkpoint_fn(up_block)( - sample, causal=self.causal, timestep=scaled_timestep - ) - else: - sample = checkpoint_fn(up_block)(sample, causal=self.causal) - - sample = self.conv_norm_out(sample) - - if self.timestep_conditioning: - embedded_timestep = self.last_time_embedder( - timestep=scaled_timestep.flatten(), - resolution=None, - aspect_ratio=None, - batch_size=sample.shape[0], - hidden_dtype=sample.dtype, - ) - embedded_timestep = embedded_timestep.view( - batch_size, embedded_timestep.shape[-1], 1, 1, 1 - ) - ada_values = self.last_scale_shift_table[ - None, ..., None, None, None - ] + embedded_timestep.reshape( - batch_size, - 2, - -1, - embedded_timestep.shape[-3], - embedded_timestep.shape[-2], - embedded_timestep.shape[-1], - ) - shift, scale = ada_values.unbind(dim=1) - sample = sample * (1 + scale) + shift - - sample = self.conv_act(sample) - sample = self.conv_out(sample, causal=self.causal) - - sample = 
unpatchify(sample, patch_size_hw=self.patch_size, patch_size_t=1) - - return sample - - -class UNetMidBlock3D(nn.Module): - """ - A 3D UNet mid-block [`UNetMidBlock3D`] with multiple residual blocks. - - Args: - in_channels (`int`): The number of input channels. - dropout (`float`, *optional*, defaults to 0.0): The dropout rate. - num_layers (`int`, *optional*, defaults to 1): The number of residual blocks. - resnet_eps (`float`, *optional*, 1e-6 ): The epsilon value for the resnet blocks. - resnet_groups (`int`, *optional*, defaults to 32): - The number of groups to use in the group normalization layers of the resnet blocks. - norm_layer (`str`, *optional*, defaults to `group_norm`): - The normalization layer to use. Can be either `group_norm` or `pixel_norm`. - inject_noise (`bool`, *optional*, defaults to `False`): - Whether to inject noise into the hidden states. - timestep_conditioning (`bool`, *optional*, defaults to `False`): - Whether to condition the hidden states on the timestep. - attention_head_dim (`int`, *optional*, defaults to -1): - The dimension of the attention head. If -1, no attention is used. - - Returns: - `torch.FloatTensor`: The output of the last residual block, which is a tensor of shape `(batch_size, - in_channels, height, width)`. - - """ - - def __init__( - self, - dims: Union[int, Tuple[int, int]], - in_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_groups: int = 32, - norm_layer: str = "group_norm", - inject_noise: bool = False, - timestep_conditioning: bool = False, - attention_head_dim: int = -1, - spatial_padding_mode: str = "zeros", - ): - super().__init__() - resnet_groups = ( - resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - ) - self.timestep_conditioning = timestep_conditioning - - if timestep_conditioning: - self.time_embedder = PixArtAlphaCombinedTimestepSizeEmbeddings( - in_channels * 4, 0 - ) - - self.res_blocks = nn.ModuleList( - [ - ResnetBlock3D( - dims=dims, - in_channels=in_channels, - out_channels=in_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - norm_layer=norm_layer, - inject_noise=inject_noise, - timestep_conditioning=timestep_conditioning, - spatial_padding_mode=spatial_padding_mode, - ) - for _ in range(num_layers) - ] - ) - - self.attention_blocks = None - - if attention_head_dim > 0: - if attention_head_dim > in_channels: - raise ValueError( - "attention_head_dim must be less than or equal to in_channels" - ) - - self.attention_blocks = nn.ModuleList( - [ - Attention( - query_dim=in_channels, - heads=in_channels // attention_head_dim, - dim_head=attention_head_dim, - bias=True, - out_bias=True, - qk_norm="rms_norm", - residual_connection=True, - ) - for _ in range(num_layers) - ] - ) - - def forward( - self, - hidden_states: torch.FloatTensor, - causal: bool = True, - timestep: Optional[torch.Tensor] = None, - ) -> torch.FloatTensor: - timestep_embed = None - if self.timestep_conditioning: - assert ( - timestep is not None - ), "should pass timestep with timestep_conditioning=True" - batch_size = hidden_states.shape[0] - timestep_embed = self.time_embedder( - timestep=timestep.flatten(), - resolution=None, - aspect_ratio=None, - batch_size=batch_size, - hidden_dtype=hidden_states.dtype, - ) - timestep_embed = timestep_embed.view( - batch_size, timestep_embed.shape[-1], 1, 1, 1 - ) - - if self.attention_blocks: - for resnet, attention in zip(self.res_blocks, self.attention_blocks): - hidden_states = resnet( - hidden_states, 
causal=causal, timestep=timestep_embed - ) - - # Reshape the hidden states to be (batch_size, frames * height * width, channel) - batch_size, channel, frames, height, width = hidden_states.shape - hidden_states = hidden_states.view( - batch_size, channel, frames * height * width - ).transpose(1, 2) - - if attention.use_tpu_flash_attention: - # Pad the second dimension to be divisible by block_k_major (block in flash attention) - seq_len = hidden_states.shape[1] - block_k_major = 512 - pad_len = (block_k_major - seq_len % block_k_major) % block_k_major - if pad_len > 0: - hidden_states = F.pad( - hidden_states, (0, 0, 0, pad_len), "constant", 0 - ) - - # Create a mask with ones for the original sequence length and zeros for the padded indexes - mask = torch.ones( - (hidden_states.shape[0], seq_len), - device=hidden_states.device, - dtype=hidden_states.dtype, - ) - if pad_len > 0: - mask = F.pad(mask, (0, pad_len), "constant", 0) - - hidden_states = attention( - hidden_states, - attention_mask=( - None if not attention.use_tpu_flash_attention else mask - ), - ) - - if attention.use_tpu_flash_attention: - # Remove the padding - if pad_len > 0: - hidden_states = hidden_states[:, :-pad_len, :] - - # Reshape the hidden states back to (batch_size, channel, frames, height, width, channel) - hidden_states = hidden_states.transpose(-1, -2).reshape( - batch_size, channel, frames, height, width - ) - else: - for resnet in self.res_blocks: - hidden_states = resnet( - hidden_states, causal=causal, timestep=timestep_embed - ) - - return hidden_states - - -class SpaceToDepthDownsample(nn.Module): - def __init__(self, dims, in_channels, out_channels, stride, spatial_padding_mode): - super().__init__() - self.stride = stride - self.group_size = in_channels * np.prod(stride) // out_channels - self.conv = make_conv_nd( - dims=dims, - in_channels=in_channels, - out_channels=out_channels // np.prod(stride), - kernel_size=3, - stride=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - def forward(self, x, causal: bool = True): - if self.stride[0] == 2: - x = torch.cat( - [x[:, :, :1, :, :], x], dim=2 - ) # duplicate first frames for padding - - # skip connection - x_in = rearrange( - x, - "b c (d p1) (h p2) (w p3) -> b (c p1 p2 p3) d h w", - p1=self.stride[0], - p2=self.stride[1], - p3=self.stride[2], - ) - x_in = rearrange(x_in, "b (c g) d h w -> b c g d h w", g=self.group_size) - x_in = x_in.mean(dim=2) - - # conv - x = self.conv(x, causal=causal) - x = rearrange( - x, - "b c (d p1) (h p2) (w p3) -> b (c p1 p2 p3) d h w", - p1=self.stride[0], - p2=self.stride[1], - p3=self.stride[2], - ) - - x = x + x_in - - return x - - -class DepthToSpaceUpsample(nn.Module): - def __init__( - self, - dims, - in_channels, - stride, - residual=False, - out_channels_reduction_factor=1, - spatial_padding_mode="zeros", - ): - super().__init__() - self.stride = stride - self.out_channels = ( - np.prod(stride) * in_channels // out_channels_reduction_factor - ) - self.conv = make_conv_nd( - dims=dims, - in_channels=in_channels, - out_channels=self.out_channels, - kernel_size=3, - stride=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - self.pixel_shuffle = PixelShuffleND(dims=dims, upscale_factors=stride) - self.residual = residual - self.out_channels_reduction_factor = out_channels_reduction_factor - - def forward(self, x, causal: bool = True): - if self.residual: - # Reshape and duplicate the input to match the output shape - x_in = self.pixel_shuffle(x) - num_repeat = np.prod(self.stride) // 
self.out_channels_reduction_factor - x_in = x_in.repeat(1, num_repeat, 1, 1, 1) - if self.stride[0] == 2: - x_in = x_in[:, :, 1:, :, :] - x = self.conv(x, causal=causal) - x = self.pixel_shuffle(x) - if self.stride[0] == 2: - x = x[:, :, 1:, :, :] - if self.residual: - x = x + x_in - return x - - -class LayerNorm(nn.Module): - def __init__(self, dim, eps, elementwise_affine=True) -> None: - super().__init__() - self.norm = nn.LayerNorm(dim, eps=eps, elementwise_affine=elementwise_affine) - - def forward(self, x): - x = rearrange(x, "b c d h w -> b d h w c") - x = self.norm(x) - x = rearrange(x, "b d h w c -> b c d h w") - return x - - -class ResnetBlock3D(nn.Module): - r""" - A Resnet block. - - Parameters: - in_channels (`int`): The number of channels in the input. - out_channels (`int`, *optional*, default to be `None`): - The number of output channels for the first conv layer. If None, same as `in_channels`. - dropout (`float`, *optional*, defaults to `0.0`): The dropout probability to use. - groups (`int`, *optional*, default to `32`): The number of groups to use for the first normalization layer. - eps (`float`, *optional*, defaults to `1e-6`): The epsilon to use for the normalization. - """ - - def __init__( - self, - dims: Union[int, Tuple[int, int]], - in_channels: int, - out_channels: Optional[int] = None, - dropout: float = 0.0, - groups: int = 32, - eps: float = 1e-6, - norm_layer: str = "group_norm", - inject_noise: bool = False, - timestep_conditioning: bool = False, - spatial_padding_mode: str = "zeros", - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.inject_noise = inject_noise - - if norm_layer == "group_norm": - self.norm1 = nn.GroupNorm( - num_groups=groups, num_channels=in_channels, eps=eps, affine=True - ) - elif norm_layer == "pixel_norm": - self.norm1 = PixelNorm() - elif norm_layer == "layer_norm": - self.norm1 = LayerNorm(in_channels, eps=eps, elementwise_affine=True) - - self.non_linearity = nn.SiLU() - - self.conv1 = make_conv_nd( - dims, - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - if inject_noise: - self.per_channel_scale1 = nn.Parameter(torch.zeros((in_channels, 1, 1))) - - if norm_layer == "group_norm": - self.norm2 = nn.GroupNorm( - num_groups=groups, num_channels=out_channels, eps=eps, affine=True - ) - elif norm_layer == "pixel_norm": - self.norm2 = PixelNorm() - elif norm_layer == "layer_norm": - self.norm2 = LayerNorm(out_channels, eps=eps, elementwise_affine=True) - - self.dropout = torch.nn.Dropout(dropout) - - self.conv2 = make_conv_nd( - dims, - out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - causal=True, - spatial_padding_mode=spatial_padding_mode, - ) - - if inject_noise: - self.per_channel_scale2 = nn.Parameter(torch.zeros((in_channels, 1, 1))) - - self.conv_shortcut = ( - make_linear_nd( - dims=dims, in_channels=in_channels, out_channels=out_channels - ) - if in_channels != out_channels - else nn.Identity() - ) - - self.norm3 = ( - LayerNorm(in_channels, eps=eps, elementwise_affine=True) - if in_channels != out_channels - else nn.Identity() - ) - - self.timestep_conditioning = timestep_conditioning - - if timestep_conditioning: - self.scale_shift_table = nn.Parameter( - torch.randn(4, in_channels) / in_channels**0.5 - ) - - def _feed_spatial_noise( - self, hidden_states: torch.FloatTensor, 
per_channel_scale: torch.FloatTensor - ) -> torch.FloatTensor: - spatial_shape = hidden_states.shape[-2:] - device = hidden_states.device - dtype = hidden_states.dtype - - # similar to the "explicit noise inputs" method in style-gan - spatial_noise = torch.randn(spatial_shape, device=device, dtype=dtype)[None] - scaled_noise = (spatial_noise * per_channel_scale)[None, :, None, ...] - hidden_states = hidden_states + scaled_noise - - return hidden_states - - def forward( - self, - input_tensor: torch.FloatTensor, - causal: bool = True, - timestep: Optional[torch.Tensor] = None, - ) -> torch.FloatTensor: - hidden_states = input_tensor - batch_size = hidden_states.shape[0] - - hidden_states = self.norm1(hidden_states) - if self.timestep_conditioning: - assert ( - timestep is not None - ), "should pass timestep with timestep_conditioning=True" - ada_values = self.scale_shift_table[ - None, ..., None, None, None - ] + timestep.reshape( - batch_size, - 4, - -1, - timestep.shape[-3], - timestep.shape[-2], - timestep.shape[-1], - ) - shift1, scale1, shift2, scale2 = ada_values.unbind(dim=1) - - hidden_states = hidden_states * (1 + scale1) + shift1 - - hidden_states = self.non_linearity(hidden_states) - - hidden_states = self.conv1(hidden_states, causal=causal) - - if self.inject_noise: - hidden_states = self._feed_spatial_noise( - hidden_states, self.per_channel_scale1 - ) - - hidden_states = self.norm2(hidden_states) - - if self.timestep_conditioning: - hidden_states = hidden_states * (1 + scale2) + shift2 - - hidden_states = self.non_linearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - - hidden_states = self.conv2(hidden_states, causal=causal) - - if self.inject_noise: - hidden_states = self._feed_spatial_noise( - hidden_states, self.per_channel_scale2 - ) - - input_tensor = self.norm3(input_tensor) - - batch_size = input_tensor.shape[0] - - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = input_tensor + hidden_states - - return output_tensor - - -def patchify(x, patch_size_hw, patch_size_t=1): - if patch_size_hw == 1 and patch_size_t == 1: - return x - if x.dim() == 4: - x = rearrange( - x, "b c (h q) (w r) -> b (c r q) h w", q=patch_size_hw, r=patch_size_hw - ) - elif x.dim() == 5: - x = rearrange( - x, - "b c (f p) (h q) (w r) -> b (c p r q) f h w", - p=patch_size_t, - q=patch_size_hw, - r=patch_size_hw, - ) - else: - raise ValueError(f"Invalid input shape: {x.shape}") - - return x - - -def unpatchify(x, patch_size_hw, patch_size_t=1): - if patch_size_hw == 1 and patch_size_t == 1: - return x - - if x.dim() == 4: - x = rearrange( - x, "b (c r q) h w -> b c (h q) (w r)", q=patch_size_hw, r=patch_size_hw - ) - elif x.dim() == 5: - x = rearrange( - x, - "b (c p r q) f h w -> b c (f p) (h q) (w r)", - p=patch_size_t, - q=patch_size_hw, - r=patch_size_hw, - ) - - return x - - -def create_video_autoencoder_demo_config( - latent_channels: int = 64, -): - encoder_blocks = [ - ("res_x", {"num_layers": 2}), - ("compress_space_res", {"multiplier": 2}), - ("compress_time_res", {"multiplier": 2}), - ("compress_all_res", {"multiplier": 2}), - ("compress_all_res", {"multiplier": 2}), - ("res_x", {"num_layers": 1}), - ] - decoder_blocks = [ - ("res_x", {"num_layers": 2, "inject_noise": False}), - ("compress_all", {"residual": True, "multiplier": 2}), - ("compress_all", {"residual": True, "multiplier": 2}), - ("compress_all", {"residual": True, "multiplier": 2}), - ("res_x", {"num_layers": 2, "inject_noise": False}), - ] - return { - "_class_name": 
"CausalVideoAutoencoder", - "dims": 3, - "encoder_blocks": encoder_blocks, - "decoder_blocks": decoder_blocks, - "latent_channels": latent_channels, - "norm_layer": "pixel_norm", - "patch_size": 4, - "latent_log_var": "uniform", - "use_quant_conv": False, - "causal_decoder": False, - "timestep_conditioning": True, - "spatial_padding_mode": "replicate", - } - - -def test_vae_patchify_unpatchify(): - import torch - - x = torch.randn(2, 3, 8, 64, 64) - x_patched = patchify(x, patch_size_hw=4, patch_size_t=4) - x_unpatched = unpatchify(x_patched, patch_size_hw=4, patch_size_t=4) - assert torch.allclose(x, x_unpatched) - - -def demo_video_autoencoder_forward_backward(): - # Configuration for the VideoAutoencoder - config = create_video_autoencoder_demo_config() - - # Instantiate the VideoAutoencoder with the specified configuration - video_autoencoder = CausalVideoAutoencoder.from_config(config) - - print(video_autoencoder) - video_autoencoder.eval() - # Print the total number of parameters in the video autoencoder - total_params = sum(p.numel() for p in video_autoencoder.parameters()) - print(f"Total number of parameters in VideoAutoencoder: {total_params:,}") - - # Create a mock input tensor simulating a batch of videos - # Shape: (batch_size, channels, depth, height, width) - # E.g., 4 videos, each with 3 color channels, 16 frames, and 64x64 pixels per frame - input_videos = torch.randn(2, 3, 17, 64, 64) - - # Forward pass: encode and decode the input videos - latent = video_autoencoder.encode(input_videos).latent_dist.mode() - print(f"input shape={input_videos.shape}") - print(f"latent shape={latent.shape}") - - timestep = torch.ones(input_videos.shape[0]) * 0.1 - reconstructed_videos = video_autoencoder.decode( - latent, target_shape=input_videos.shape, timestep=timestep - ).sample - - print(f"reconstructed shape={reconstructed_videos.shape}") - - # Validate that single image gets treated the same way as first frame - input_image = input_videos[:, :, :1, :, :] - image_latent = video_autoencoder.encode(input_image).latent_dist.mode() - _ = video_autoencoder.decode( - image_latent, target_shape=image_latent.shape, timestep=timestep - ).sample - - first_frame_latent = latent[:, :, :1, :, :] - - assert torch.allclose(image_latent, first_frame_latent, atol=1e-6) - # assert torch.allclose(reconstructed_image, reconstructed_videos[:, :, :1, :, :], atol=1e-6) - # assert torch.allclose(image_latent, first_frame_latent, atol=1e-6) - # assert (reconstructed_image == reconstructed_videos[:, :, :1, :, :]).all() - - # Calculate the loss (e.g., mean squared error) - loss = torch.nn.functional.mse_loss(input_videos, reconstructed_videos) - - # Perform backward pass - loss.backward() - - print(f"Demo completed with loss: {loss.item()}") - - -# Ensure to call the demo function to execute the forward and backward pass -if __name__ == "__main__": - demo_video_autoencoder_forward_backward() diff --git a/ltx_video_x/models/autoencoders/conv_nd_factory.py b/ltx_video_x/models/autoencoders/conv_nd_factory.py deleted file mode 100644 index 718c69befd959c7466c4a57d71e46bb80bfe9fba..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/conv_nd_factory.py +++ /dev/null @@ -1,90 +0,0 @@ -from typing import Tuple, Union - -import torch - -from ltx_video.models.autoencoders.dual_conv3d import DualConv3d -from ltx_video.models.autoencoders.causal_conv3d import CausalConv3d - - -def make_conv_nd( - dims: Union[int, Tuple[int, int]], - in_channels: int, - out_channels: int, - kernel_size: int, - 
stride=1, - padding=0, - dilation=1, - groups=1, - bias=True, - causal=False, - spatial_padding_mode="zeros", - temporal_padding_mode="zeros", -): - if not (spatial_padding_mode == temporal_padding_mode or causal): - raise NotImplementedError("spatial and temporal padding modes must be equal") - if dims == 2: - return torch.nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias, - padding_mode=spatial_padding_mode, - ) - elif dims == 3: - if causal: - return CausalConv3d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias, - spatial_padding_mode=spatial_padding_mode, - ) - return torch.nn.Conv3d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias, - padding_mode=spatial_padding_mode, - ) - elif dims == (2, 1): - return DualConv3d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=bias, - padding_mode=spatial_padding_mode, - ) - else: - raise ValueError(f"unsupported dimensions: {dims}") - - -def make_linear_nd( - dims: int, - in_channels: int, - out_channels: int, - bias=True, -): - if dims == 2: - return torch.nn.Conv2d( - in_channels=in_channels, out_channels=out_channels, kernel_size=1, bias=bias - ) - elif dims == 3 or dims == (2, 1): - return torch.nn.Conv3d( - in_channels=in_channels, out_channels=out_channels, kernel_size=1, bias=bias - ) - else: - raise ValueError(f"unsupported dimensions: {dims}") diff --git a/ltx_video_x/models/autoencoders/dual_conv3d.py b/ltx_video_x/models/autoencoders/dual_conv3d.py deleted file mode 100644 index dcf889296750d3d7e553af37ecf77d1b10245af3..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/dual_conv3d.py +++ /dev/null @@ -1,217 +0,0 @@ -import math -from typing import Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange - - -class DualConv3d(nn.Module): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - stride: Union[int, Tuple[int, int, int]] = 1, - padding: Union[int, Tuple[int, int, int]] = 0, - dilation: Union[int, Tuple[int, int, int]] = 1, - groups=1, - bias=True, - padding_mode="zeros", - ): - super(DualConv3d, self).__init__() - - self.in_channels = in_channels - self.out_channels = out_channels - self.padding_mode = padding_mode - # Ensure kernel_size, stride, padding, and dilation are tuples of length 3 - if isinstance(kernel_size, int): - kernel_size = (kernel_size, kernel_size, kernel_size) - if kernel_size == (1, 1, 1): - raise ValueError( - "kernel_size must be greater than 1. Use make_linear_nd instead." 
- ) - if isinstance(stride, int): - stride = (stride, stride, stride) - if isinstance(padding, int): - padding = (padding, padding, padding) - if isinstance(dilation, int): - dilation = (dilation, dilation, dilation) - - # Set parameters for convolutions - self.groups = groups - self.bias = bias - - # Define the size of the channels after the first convolution - intermediate_channels = ( - out_channels if in_channels < out_channels else in_channels - ) - - # Define parameters for the first convolution - self.weight1 = nn.Parameter( - torch.Tensor( - intermediate_channels, - in_channels // groups, - 1, - kernel_size[1], - kernel_size[2], - ) - ) - self.stride1 = (1, stride[1], stride[2]) - self.padding1 = (0, padding[1], padding[2]) - self.dilation1 = (1, dilation[1], dilation[2]) - if bias: - self.bias1 = nn.Parameter(torch.Tensor(intermediate_channels)) - else: - self.register_parameter("bias1", None) - - # Define parameters for the second convolution - self.weight2 = nn.Parameter( - torch.Tensor( - out_channels, intermediate_channels // groups, kernel_size[0], 1, 1 - ) - ) - self.stride2 = (stride[0], 1, 1) - self.padding2 = (padding[0], 0, 0) - self.dilation2 = (dilation[0], 1, 1) - if bias: - self.bias2 = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter("bias2", None) - - # Initialize weights and biases - self.reset_parameters() - - def reset_parameters(self): - nn.init.kaiming_uniform_(self.weight1, a=math.sqrt(5)) - nn.init.kaiming_uniform_(self.weight2, a=math.sqrt(5)) - if self.bias: - fan_in1, _ = nn.init._calculate_fan_in_and_fan_out(self.weight1) - bound1 = 1 / math.sqrt(fan_in1) - nn.init.uniform_(self.bias1, -bound1, bound1) - fan_in2, _ = nn.init._calculate_fan_in_and_fan_out(self.weight2) - bound2 = 1 / math.sqrt(fan_in2) - nn.init.uniform_(self.bias2, -bound2, bound2) - - def forward(self, x, use_conv3d=False, skip_time_conv=False): - if use_conv3d: - return self.forward_with_3d(x=x, skip_time_conv=skip_time_conv) - else: - return self.forward_with_2d(x=x, skip_time_conv=skip_time_conv) - - def forward_with_3d(self, x, skip_time_conv): - # First convolution - x = F.conv3d( - x, - self.weight1, - self.bias1, - self.stride1, - self.padding1, - self.dilation1, - self.groups, - padding_mode=self.padding_mode, - ) - - if skip_time_conv: - return x - - # Second convolution - x = F.conv3d( - x, - self.weight2, - self.bias2, - self.stride2, - self.padding2, - self.dilation2, - self.groups, - padding_mode=self.padding_mode, - ) - - return x - - def forward_with_2d(self, x, skip_time_conv): - b, c, d, h, w = x.shape - - # First 2D convolution - x = rearrange(x, "b c d h w -> (b d) c h w") - # Squeeze the depth dimension out of weight1 since it's 1 - weight1 = self.weight1.squeeze(2) - # Select stride, padding, and dilation for the 2D convolution - stride1 = (self.stride1[1], self.stride1[2]) - padding1 = (self.padding1[1], self.padding1[2]) - dilation1 = (self.dilation1[1], self.dilation1[2]) - x = F.conv2d( - x, - weight1, - self.bias1, - stride1, - padding1, - dilation1, - self.groups, - padding_mode=self.padding_mode, - ) - - _, _, h, w = x.shape - - if skip_time_conv: - x = rearrange(x, "(b d) c h w -> b c d h w", b=b) - return x - - # Second convolution which is essentially treated as a 1D convolution across the 'd' dimension - x = rearrange(x, "(b d) c h w -> (b h w) c d", b=b) - - # Reshape weight2 to match the expected dimensions for conv1d - weight2 = self.weight2.squeeze(-1).squeeze(-1) - # Use only the relevant dimension for stride, padding, and 
dilation for the 1D convolution - stride2 = self.stride2[0] - padding2 = self.padding2[0] - dilation2 = self.dilation2[0] - x = F.conv1d( - x, - weight2, - self.bias2, - stride2, - padding2, - dilation2, - self.groups, - padding_mode=self.padding_mode, - ) - x = rearrange(x, "(b h w) c d -> b c d h w", b=b, h=h, w=w) - - return x - - @property - def weight(self): - return self.weight2 - - -def test_dual_conv3d_consistency(): - # Initialize parameters - in_channels = 3 - out_channels = 5 - kernel_size = (3, 3, 3) - stride = (2, 2, 2) - padding = (1, 1, 1) - - # Create an instance of the DualConv3d class - dual_conv3d = DualConv3d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=True, - ) - - # Example input tensor - test_input = torch.randn(1, 3, 10, 10, 10) - - # Perform forward passes with both 3D and 2D settings - output_conv3d = dual_conv3d(test_input, use_conv3d=True) - output_2d = dual_conv3d(test_input, use_conv3d=False) - - # Assert that the outputs from both methods are sufficiently close - assert torch.allclose( - output_conv3d, output_2d, atol=1e-6 - ), "Outputs are not consistent between 3D and 2D convolutions." diff --git a/ltx_video_x/models/autoencoders/latent_upsampler.py b/ltx_video_x/models/autoencoders/latent_upsampler.py deleted file mode 100644 index 4a76bc21d1a503d61dec673cf5cb980bb6d703fd..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/latent_upsampler.py +++ /dev/null @@ -1,203 +0,0 @@ -from typing import Optional, Union -from pathlib import Path -import os -import json - -import torch -import torch.nn as nn -from einops import rearrange -from diffusers import ConfigMixin, ModelMixin -from safetensors.torch import safe_open - -from ltx_video.models.autoencoders.pixel_shuffle import PixelShuffleND - - -class ResBlock(nn.Module): - def __init__( - self, channels: int, mid_channels: Optional[int] = None, dims: int = 3 - ): - super().__init__() - if mid_channels is None: - mid_channels = channels - - Conv = nn.Conv2d if dims == 2 else nn.Conv3d - - self.conv1 = Conv(channels, mid_channels, kernel_size=3, padding=1) - self.norm1 = nn.GroupNorm(32, mid_channels) - self.conv2 = Conv(mid_channels, channels, kernel_size=3, padding=1) - self.norm2 = nn.GroupNorm(32, channels) - self.activation = nn.SiLU() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - residual = x - x = self.conv1(x) - x = self.norm1(x) - x = self.activation(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.activation(x + residual) - return x - - -class LatentUpsampler(ModelMixin, ConfigMixin): - """ - Model to spatially upsample VAE latents. 
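# Illustrative sketch, not part of the original diff: the parameter saving behind
# DualConv3d's factorization above (a 1*k*k spatial convolution followed by a
# k*1*1 temporal convolution instead of one dense k*k*k kernel). Channel counts
# are hypothetical; plain Python only.
c_in = c_out = 128
k = 3
intermediate = max(c_in, c_out)                       # same value as the intermediate_channels rule above
dense_3d = c_out * c_in * k * k * k                   # 442,368 weights for one full 3D kernel
factored = intermediate * c_in * k * k + c_out * intermediate * k   # 196,608 weights for the two passes
print(dense_3d, factored)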
- - Args: - in_channels (`int`): Number of channels in the input latent - mid_channels (`int`): Number of channels in the middle layers - num_blocks_per_stage (`int`): Number of ResBlocks to use in each stage (pre/post upsampling) - dims (`int`): Number of dimensions for convolutions (2 or 3) - spatial_upsample (`bool`): Whether to spatially upsample the latent - temporal_upsample (`bool`): Whether to temporally upsample the latent - """ - - def __init__( - self, - in_channels: int = 128, - mid_channels: int = 512, - num_blocks_per_stage: int = 4, - dims: int = 3, - spatial_upsample: bool = True, - temporal_upsample: bool = False, - ): - super().__init__() - - self.in_channels = in_channels - self.mid_channels = mid_channels - self.num_blocks_per_stage = num_blocks_per_stage - self.dims = dims - self.spatial_upsample = spatial_upsample - self.temporal_upsample = temporal_upsample - - Conv = nn.Conv2d if dims == 2 else nn.Conv3d - - self.initial_conv = Conv(in_channels, mid_channels, kernel_size=3, padding=1) - self.initial_norm = nn.GroupNorm(32, mid_channels) - self.initial_activation = nn.SiLU() - - self.res_blocks = nn.ModuleList( - [ResBlock(mid_channels, dims=dims) for _ in range(num_blocks_per_stage)] - ) - - if spatial_upsample and temporal_upsample: - self.upsampler = nn.Sequential( - nn.Conv3d(mid_channels, 8 * mid_channels, kernel_size=3, padding=1), - PixelShuffleND(3), - ) - elif spatial_upsample: - self.upsampler = nn.Sequential( - nn.Conv2d(mid_channels, 4 * mid_channels, kernel_size=3, padding=1), - PixelShuffleND(2), - ) - elif temporal_upsample: - self.upsampler = nn.Sequential( - nn.Conv3d(mid_channels, 2 * mid_channels, kernel_size=3, padding=1), - PixelShuffleND(1), - ) - else: - raise ValueError( - "Either spatial_upsample or temporal_upsample must be True" - ) - - self.post_upsample_res_blocks = nn.ModuleList( - [ResBlock(mid_channels, dims=dims) for _ in range(num_blocks_per_stage)] - ) - - self.final_conv = Conv(mid_channels, in_channels, kernel_size=3, padding=1) - - def forward(self, latent: torch.Tensor) -> torch.Tensor: - b, c, f, h, w = latent.shape - - if self.dims == 2: - x = rearrange(latent, "b c f h w -> (b f) c h w") - x = self.initial_conv(x) - x = self.initial_norm(x) - x = self.initial_activation(x) - - for block in self.res_blocks: - x = block(x) - - x = self.upsampler(x) - - for block in self.post_upsample_res_blocks: - x = block(x) - - x = self.final_conv(x) - x = rearrange(x, "(b f) c h w -> b c f h w", b=b, f=f) - else: - x = self.initial_conv(latent) - x = self.initial_norm(x) - x = self.initial_activation(x) - - for block in self.res_blocks: - x = block(x) - - if self.temporal_upsample: - x = self.upsampler(x) - x = x[:, :, 1:, :, :] - else: - x = rearrange(x, "b c f h w -> (b f) c h w") - x = self.upsampler(x) - x = rearrange(x, "(b f) c h w -> b c f h w", b=b, f=f) - - for block in self.post_upsample_res_blocks: - x = block(x) - - x = self.final_conv(x) - - return x - - @classmethod - def from_config(cls, config): - return cls( - in_channels=config.get("in_channels", 4), - mid_channels=config.get("mid_channels", 128), - num_blocks_per_stage=config.get("num_blocks_per_stage", 4), - dims=config.get("dims", 2), - spatial_upsample=config.get("spatial_upsample", True), - temporal_upsample=config.get("temporal_upsample", False), - ) - - def config(self): - return { - "_class_name": "LatentUpsampler", - "in_channels": self.in_channels, - "mid_channels": self.mid_channels, - "num_blocks_per_stage": self.num_blocks_per_stage, - "dims": self.dims, - 
"spatial_upsample": self.spatial_upsample, - "temporal_upsample": self.temporal_upsample, - } - - @classmethod - def from_pretrained( - cls, - pretrained_model_path: Optional[Union[str, os.PathLike]], - *args, - **kwargs, - ): - pretrained_model_path = Path(pretrained_model_path) - if pretrained_model_path.is_file() and str(pretrained_model_path).endswith( - ".safetensors" - ): - state_dict = {} - with safe_open(pretrained_model_path, framework="pt", device="cpu") as f: - metadata = f.metadata() - for k in f.keys(): - state_dict[k] = f.get_tensor(k) - config = json.loads(metadata["config"]) - with torch.device("meta"): - latent_upsampler = LatentUpsampler.from_config(config) - latent_upsampler.load_state_dict(state_dict, assign=True) - return latent_upsampler - - -if __name__ == "__main__": - latent_upsampler = LatentUpsampler(num_blocks_per_stage=4, dims=3) - print(latent_upsampler) - total_params = sum(p.numel() for p in latent_upsampler.parameters()) - print(f"Total number of parameters: {total_params:,}") - latent = torch.randn(1, 128, 9, 16, 16) - upsampled_latent = latent_upsampler(latent) - print(f"Upsampled latent shape: {upsampled_latent.shape}") diff --git a/ltx_video_x/models/autoencoders/pixel_norm.py b/ltx_video_x/models/autoencoders/pixel_norm.py deleted file mode 100644 index 9bc3ea60e8a6453e7e12a7fb5aca4de3958a2567..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/pixel_norm.py +++ /dev/null @@ -1,12 +0,0 @@ -import torch -from torch import nn - - -class PixelNorm(nn.Module): - def __init__(self, dim=1, eps=1e-8): - super(PixelNorm, self).__init__() - self.dim = dim - self.eps = eps - - def forward(self, x): - return x / torch.sqrt(torch.mean(x**2, dim=self.dim, keepdim=True) + self.eps) diff --git a/ltx_video_x/models/autoencoders/pixel_shuffle.py b/ltx_video_x/models/autoencoders/pixel_shuffle.py deleted file mode 100644 index 4e79ae28483d5ad684ea68092bc955ef025722e6..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/pixel_shuffle.py +++ /dev/null @@ -1,33 +0,0 @@ -import torch.nn as nn -from einops import rearrange - - -class PixelShuffleND(nn.Module): - def __init__(self, dims, upscale_factors=(2, 2, 2)): - super().__init__() - assert dims in [1, 2, 3], "dims must be 1, 2, or 3" - self.dims = dims - self.upscale_factors = upscale_factors - - def forward(self, x): - if self.dims == 3: - return rearrange( - x, - "b (c p1 p2 p3) d h w -> b c (d p1) (h p2) (w p3)", - p1=self.upscale_factors[0], - p2=self.upscale_factors[1], - p3=self.upscale_factors[2], - ) - elif self.dims == 2: - return rearrange( - x, - "b (c p1 p2) h w -> b c (h p1) (w p2)", - p1=self.upscale_factors[0], - p2=self.upscale_factors[1], - ) - elif self.dims == 1: - return rearrange( - x, - "b (c p1) f h w -> b c (f p1) h w", - p1=self.upscale_factors[0], - ) diff --git a/ltx_video_x/models/autoencoders/vae.py b/ltx_video_x/models/autoencoders/vae.py deleted file mode 100644 index 5b22217c158eb26bca45b2b6a5e475e8a71b8181..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/vae.py +++ /dev/null @@ -1,380 +0,0 @@ -from typing import Optional, Union - -import torch -import inspect -import math -import torch.nn as nn -from diffusers import ConfigMixin, ModelMixin -from diffusers.models.autoencoders.vae import ( - DecoderOutput, - DiagonalGaussianDistribution, -) -from diffusers.models.modeling_outputs import AutoencoderKLOutput -from ltx_video.models.autoencoders.conv_nd_factory import make_conv_nd - - -class 
AutoencoderKLWrapper(ModelMixin, ConfigMixin): - """Variational Autoencoder (VAE) model with KL loss. - - VAE from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. - This model is a wrapper around an encoder and a decoder, and it adds a KL loss term to the reconstruction loss. - - Args: - encoder (`nn.Module`): - Encoder module. - decoder (`nn.Module`): - Decoder module. - latent_channels (`int`, *optional*, defaults to 4): - Number of latent channels. - """ - - def __init__( - self, - encoder: nn.Module, - decoder: nn.Module, - latent_channels: int = 4, - dims: int = 2, - sample_size=512, - use_quant_conv: bool = True, - normalize_latent_channels: bool = False, - ): - super().__init__() - - # pass init params to Encoder - self.encoder = encoder - self.use_quant_conv = use_quant_conv - self.normalize_latent_channels = normalize_latent_channels - - # pass init params to Decoder - quant_dims = 2 if dims == 2 else 3 - self.decoder = decoder - if use_quant_conv: - self.quant_conv = make_conv_nd( - quant_dims, 2 * latent_channels, 2 * latent_channels, 1 - ) - self.post_quant_conv = make_conv_nd( - quant_dims, latent_channels, latent_channels, 1 - ) - else: - self.quant_conv = nn.Identity() - self.post_quant_conv = nn.Identity() - - if normalize_latent_channels: - if dims == 2: - self.latent_norm_out = nn.BatchNorm2d(latent_channels, affine=False) - else: - self.latent_norm_out = nn.BatchNorm3d(latent_channels, affine=False) - else: - self.latent_norm_out = nn.Identity() - self.use_z_tiling = False - self.use_hw_tiling = False - self.dims = dims - self.z_sample_size = 1 - - self.decoder_params = inspect.signature(self.decoder.forward).parameters - - # only relevant if vae tiling is enabled - self.set_tiling_params(sample_size=sample_size, overlap_factor=0.25) - - def set_tiling_params(self, sample_size: int = 512, overlap_factor: float = 0.25): - self.tile_sample_min_size = sample_size - num_blocks = len(self.encoder.down_blocks) - self.tile_latent_min_size = int(sample_size / (2 ** (num_blocks - 1))) - self.tile_overlap_factor = overlap_factor - - def enable_z_tiling(self, z_sample_size: int = 8): - r""" - Enable tiling during VAE decoding. - - When this option is enabled, the VAE will split the input tensor in tiles to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.use_z_tiling = z_sample_size > 1 - self.z_sample_size = z_sample_size - assert ( - z_sample_size % 8 == 0 or z_sample_size == 1 - ), f"z_sample_size must be a multiple of 8 or 1. Got {z_sample_size}." - - def disable_z_tiling(self): - r""" - Disable tiling during VAE decoding. If `use_tiling` was previously invoked, this method will go back to computing - decoding in one step. - """ - self.use_z_tiling = False - - def enable_hw_tiling(self): - r""" - Enable tiling during VAE decoding along the height and width dimension. - """ - self.use_hw_tiling = True - - def disable_hw_tiling(self): - r""" - Disable tiling during VAE decoding along the height and width dimension. - """ - self.use_hw_tiling = False - - def _hw_tiled_encode(self, x: torch.FloatTensor, return_dict: bool = True): - overlap_size = int(self.tile_sample_min_size * (1 - self.tile_overlap_factor)) - blend_extent = int(self.tile_latent_min_size * self.tile_overlap_factor) - row_limit = self.tile_latent_min_size - blend_extent - - # Split the image into 512x512 tiles and encode them separately. 
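# Illustrative sketch, not part of the original diff: the tile/overlap bookkeeping
# behind set_tiling_params and the HW-tiled encode above, for a hypothetical
# 512-pixel tile, 4 encoder down blocks and a 0.25 overlap factor. Plain Python only.
sample_size, num_down_blocks, overlap_factor = 512, 4, 0.25

tile_sample_min_size = sample_size
tile_latent_min_size = int(sample_size / (2 ** (num_down_blocks - 1)))  # 64 latent pixels per tile
overlap_size = int(tile_sample_min_size * (1 - overlap_factor))         # step of 384 pixels between tiles
blend_extent = int(tile_latent_min_size * overlap_factor)               # 16 latent rows/cols cross-faded
row_limit = tile_latent_min_size - blend_extent                         # 48 latent rows/cols kept per tile
print(tile_latent_min_size, overlap_size, blend_extent, row_limit)      # 64 384 16 48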
- rows = [] - for i in range(0, x.shape[3], overlap_size): - row = [] - for j in range(0, x.shape[4], overlap_size): - tile = x[ - :, - :, - :, - i : i + self.tile_sample_min_size, - j : j + self.tile_sample_min_size, - ] - tile = self.encoder(tile) - tile = self.quant_conv(tile) - row.append(tile) - rows.append(row) - result_rows = [] - for i, row in enumerate(rows): - result_row = [] - for j, tile in enumerate(row): - # blend the above tile and the left tile - # to the current tile and add the current tile to the result row - if i > 0: - tile = self.blend_v(rows[i - 1][j], tile, blend_extent) - if j > 0: - tile = self.blend_h(row[j - 1], tile, blend_extent) - result_row.append(tile[:, :, :, :row_limit, :row_limit]) - result_rows.append(torch.cat(result_row, dim=4)) - - moments = torch.cat(result_rows, dim=3) - return moments - - def blend_z( - self, a: torch.Tensor, b: torch.Tensor, blend_extent: int - ) -> torch.Tensor: - blend_extent = min(a.shape[2], b.shape[2], blend_extent) - for z in range(blend_extent): - b[:, :, z, :, :] = a[:, :, -blend_extent + z, :, :] * ( - 1 - z / blend_extent - ) + b[:, :, z, :, :] * (z / blend_extent) - return b - - def blend_v( - self, a: torch.Tensor, b: torch.Tensor, blend_extent: int - ) -> torch.Tensor: - blend_extent = min(a.shape[3], b.shape[3], blend_extent) - for y in range(blend_extent): - b[:, :, :, y, :] = a[:, :, :, -blend_extent + y, :] * ( - 1 - y / blend_extent - ) + b[:, :, :, y, :] * (y / blend_extent) - return b - - def blend_h( - self, a: torch.Tensor, b: torch.Tensor, blend_extent: int - ) -> torch.Tensor: - blend_extent = min(a.shape[4], b.shape[4], blend_extent) - for x in range(blend_extent): - b[:, :, :, :, x] = a[:, :, :, :, -blend_extent + x] * ( - 1 - x / blend_extent - ) + b[:, :, :, :, x] * (x / blend_extent) - return b - - def _hw_tiled_decode(self, z: torch.FloatTensor, target_shape): - overlap_size = int(self.tile_latent_min_size * (1 - self.tile_overlap_factor)) - blend_extent = int(self.tile_sample_min_size * self.tile_overlap_factor) - row_limit = self.tile_sample_min_size - blend_extent - tile_target_shape = ( - *target_shape[:3], - self.tile_sample_min_size, - self.tile_sample_min_size, - ) - # Split z into overlapping 64x64 tiles and decode them separately. - # The tiles have an overlap to avoid seams between tiles. 
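# Illustrative sketch, not part of the original diff: the linear cross-fade applied
# by blend_h / blend_v / blend_z above, written out in 1D so the weighting is easy
# to see. Sizes are hypothetical; only torch is assumed.
import torch

blend_extent = 4
a = torch.ones(8)      # trailing edge of the previous tile
b = torch.zeros(8)     # leading edge of the current tile
for x in range(blend_extent):
    b[x] = a[-blend_extent + x] * (1 - x / blend_extent) + b[x] * (x / blend_extent)
print(b)               # tensor([1.0000, 0.7500, 0.5000, 0.2500, 0.0000, 0.0000, 0.0000, 0.0000])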
- rows = [] - for i in range(0, z.shape[3], overlap_size): - row = [] - for j in range(0, z.shape[4], overlap_size): - tile = z[ - :, - :, - :, - i : i + self.tile_latent_min_size, - j : j + self.tile_latent_min_size, - ] - tile = self.post_quant_conv(tile) - decoded = self.decoder(tile, target_shape=tile_target_shape) - row.append(decoded) - rows.append(row) - result_rows = [] - for i, row in enumerate(rows): - result_row = [] - for j, tile in enumerate(row): - # blend the above tile and the left tile - # to the current tile and add the current tile to the result row - if i > 0: - tile = self.blend_v(rows[i - 1][j], tile, blend_extent) - if j > 0: - tile = self.blend_h(row[j - 1], tile, blend_extent) - result_row.append(tile[:, :, :, :row_limit, :row_limit]) - result_rows.append(torch.cat(result_row, dim=4)) - - dec = torch.cat(result_rows, dim=3) - return dec - - def encode( - self, z: torch.FloatTensor, return_dict: bool = True - ) -> Union[DecoderOutput, torch.FloatTensor]: - if self.use_z_tiling and z.shape[2] > self.z_sample_size > 1: - num_splits = z.shape[2] // self.z_sample_size - sizes = [self.z_sample_size] * num_splits - sizes = ( - sizes + [z.shape[2] - sum(sizes)] - if z.shape[2] - sum(sizes) > 0 - else sizes - ) - tiles = z.split(sizes, dim=2) - moments_tiles = [ - ( - self._hw_tiled_encode(z_tile, return_dict) - if self.use_hw_tiling - else self._encode(z_tile) - ) - for z_tile in tiles - ] - moments = torch.cat(moments_tiles, dim=2) - - else: - moments = ( - self._hw_tiled_encode(z, return_dict) - if self.use_hw_tiling - else self._encode(z) - ) - - posterior = DiagonalGaussianDistribution(moments) - if not return_dict: - return (posterior,) - - return AutoencoderKLOutput(latent_dist=posterior) - - def _normalize_latent_channels(self, z: torch.FloatTensor) -> torch.FloatTensor: - if isinstance(self.latent_norm_out, nn.BatchNorm3d): - _, c, _, _, _ = z.shape - z = torch.cat( - [ - self.latent_norm_out(z[:, : c // 2, :, :, :]), - z[:, c // 2 :, :, :, :], - ], - dim=1, - ) - elif isinstance(self.latent_norm_out, nn.BatchNorm2d): - raise NotImplementedError("BatchNorm2d not supported") - return z - - def _unnormalize_latent_channels(self, z: torch.FloatTensor) -> torch.FloatTensor: - if isinstance(self.latent_norm_out, nn.BatchNorm3d): - running_mean = self.latent_norm_out.running_mean.view(1, -1, 1, 1, 1) - running_var = self.latent_norm_out.running_var.view(1, -1, 1, 1, 1) - eps = self.latent_norm_out.eps - - z = z * torch.sqrt(running_var + eps) + running_mean - elif isinstance(self.latent_norm_out, nn.BatchNorm3d): - raise NotImplementedError("BatchNorm2d not supported") - return z - - def _encode(self, x: torch.FloatTensor) -> AutoencoderKLOutput: - h = self.encoder(x) - moments = self.quant_conv(h) - moments = self._normalize_latent_channels(moments) - return moments - - def _decode( - self, - z: torch.FloatTensor, - target_shape=None, - timestep: Optional[torch.Tensor] = None, - ) -> Union[DecoderOutput, torch.FloatTensor]: - z = self._unnormalize_latent_channels(z) - z = self.post_quant_conv(z) - if "timestep" in self.decoder_params: - dec = self.decoder(z, target_shape=target_shape, timestep=timestep) - else: - dec = self.decoder(z, target_shape=target_shape) - return dec - - def decode( - self, - z: torch.FloatTensor, - return_dict: bool = True, - target_shape=None, - timestep: Optional[torch.Tensor] = None, - ) -> Union[DecoderOutput, torch.FloatTensor]: - assert target_shape is not None, "target_shape must be provided for decoding" - if self.use_z_tiling and 
z.shape[2] > self.z_sample_size > 1: - reduction_factor = int( - self.encoder.patch_size_t - * 2 - ** ( - len(self.encoder.down_blocks) - - 1 - - math.sqrt(self.encoder.patch_size) - ) - ) - split_size = self.z_sample_size // reduction_factor - num_splits = z.shape[2] // split_size - - # copy target shape, and divide frame dimension (=2) by the context size - target_shape_split = list(target_shape) - target_shape_split[2] = target_shape[2] // num_splits - - decoded_tiles = [ - ( - self._hw_tiled_decode(z_tile, target_shape_split) - if self.use_hw_tiling - else self._decode(z_tile, target_shape=target_shape_split) - ) - for z_tile in torch.tensor_split(z, num_splits, dim=2) - ] - decoded = torch.cat(decoded_tiles, dim=2) - else: - decoded = ( - self._hw_tiled_decode(z, target_shape) - if self.use_hw_tiling - else self._decode(z, target_shape=target_shape, timestep=timestep) - ) - - if not return_dict: - return (decoded,) - - return DecoderOutput(sample=decoded) - - def forward( - self, - sample: torch.FloatTensor, - sample_posterior: bool = False, - return_dict: bool = True, - generator: Optional[torch.Generator] = None, - ) -> Union[DecoderOutput, torch.FloatTensor]: - r""" - Args: - sample (`torch.FloatTensor`): Input sample. - sample_posterior (`bool`, *optional*, defaults to `False`): - Whether to sample from the posterior. - return_dict (`bool`, *optional*, defaults to `True`): - Whether to return a [`DecoderOutput`] instead of a plain tuple. - generator (`torch.Generator`, *optional*): - Generator used to sample from the posterior. - """ - x = sample - posterior = self.encode(x).latent_dist - if sample_posterior: - z = posterior.sample(generator=generator) - else: - z = posterior.mode() - dec = self.decode(z, target_shape=sample.shape).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) diff --git a/ltx_video_x/models/autoencoders/vae_encode.py b/ltx_video_x/models/autoencoders/vae_encode.py deleted file mode 100644 index bfc97f6720ecbef51711cb47cd759532d8813128..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/vae_encode.py +++ /dev/null @@ -1,247 +0,0 @@ -from typing import Tuple -import torch -from diffusers import AutoencoderKL -from einops import rearrange -from torch import Tensor - - -from ltx_video.models.autoencoders.causal_video_autoencoder import ( - CausalVideoAutoencoder, -) -from ltx_video.models.autoencoders.video_autoencoder import ( - Downsample3D, - VideoAutoencoder, -) - -try: - import torch_xla.core.xla_model as xm -except ImportError: - xm = None - - -def vae_encode( - media_items: Tensor, - vae: AutoencoderKL, - split_size: int = 1, - vae_per_channel_normalize=False, -) -> Tensor: - """ - Encodes media items (images or videos) into latent representations using a specified VAE model. - The function supports processing batches of images or video frames and can handle the processing - in smaller sub-batches if needed. - - Args: - media_items (Tensor): A torch Tensor containing the media items to encode. The expected - shape is (batch_size, channels, height, width) for images or (batch_size, channels, - frames, height, width) for videos. - vae (AutoencoderKL): An instance of the `AutoencoderKL` class from the `diffusers` library, - pre-configured and loaded with the appropriate model weights. - split_size (int, optional): The number of sub-batches to split the input batch into for encoding. - If set to more than 1, the input media items are processed in smaller batches according to - this value. 
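# Illustrative sketch, not part of the original diff: the sub-batching behind the
# split_size argument described above -- the batch is encoded in split_size chunks
# and the resulting latents are concatenated again, mirroring the divisibility check
# in the function body. The VAE and tensor sizes are hypothetical; only torch is assumed.
import torch

media_items = torch.randn(8, 3, 256, 256)    # hypothetical image batch
split_size = 4
assert len(media_items) % split_size == 0    # same divisibility requirement as vae_encode
encode_bs = len(media_items) // split_size   # 2 items per sub-batch
sub_batches = media_items.split(encode_bs)   # 4 chunks of shape (2, 3, 256, 256)
print([tuple(chunk.shape) for chunk in sub_batches])
# with a real VAE: latents = torch.cat([vae.encode(x).latent_dist.sample() for x in sub_batches], dim=0)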
Defaults to 1, which processes all items in a single batch. - - Returns: - Tensor: A torch Tensor of the encoded latent representations. The shape of the tensor is adjusted - to match the input shape, scaled by the model's configuration. - - Examples: - >>> import torch - >>> from diffusers import AutoencoderKL - >>> vae = AutoencoderKL.from_pretrained('your-model-name') - >>> images = torch.rand(10, 3, 8, 256, 256) # Example tensor with 10 videos of 8 frames. - >>> latents = vae_encode(images, vae) - >>> print(latents.shape) # Output shape will depend on the model's latent configuration. - - Note: - In case of a video, the function encodes the media item frame by frame. - """ - is_video_shaped = media_items.dim() == 5 - batch_size, channels = media_items.shape[0:2] - - if channels != 3: - raise ValueError(f"Expects tensors with 3 channels, got {channels}.") - - if is_video_shaped and not isinstance( - vae, (VideoAutoencoder, CausalVideoAutoencoder) - ): - media_items = rearrange(media_items, "b c n h w -> (b n) c h w") - if split_size > 1: - if len(media_items) % split_size != 0: - raise ValueError( - "Error: The batch size must be divisible by 'train.vae_bs_split'" - ) - encode_bs = len(media_items) // split_size - # latents = [vae.encode(image_batch).latent_dist.sample() for image_batch in media_items.split(encode_bs)] - latents = [] - if media_items.device.type == "xla": - xm.mark_step() - for image_batch in media_items.split(encode_bs): - latents.append(vae.encode(image_batch).latent_dist.sample()) - if media_items.device.type == "xla": - xm.mark_step() - latents = torch.cat(latents, dim=0) - else: - latents = vae.encode(media_items).latent_dist.sample() - - latents = normalize_latents(latents, vae, vae_per_channel_normalize) - if is_video_shaped and not isinstance( - vae, (VideoAutoencoder, CausalVideoAutoencoder) - ): - latents = rearrange(latents, "(b n) c h w -> b c n h w", b=batch_size) - return latents - - - def vae_decode( - latents: Tensor, - vae: AutoencoderKL, - is_video: bool = True, - split_size: int = 1, - vae_per_channel_normalize=False, - timestep=None, - ) -> Tensor: - is_video_shaped = latents.dim() == 5 - batch_size = latents.shape[0] - - if is_video_shaped and not isinstance( - vae, (VideoAutoencoder, CausalVideoAutoencoder) - ): - latents = rearrange(latents, "b c n h w -> (b n) c h w") - if split_size > 1: - if len(latents) % split_size != 0: - raise ValueError( - "Error: The batch size must be divisible by 'train.vae_bs_split'" - ) - encode_bs = len(latents) // split_size - image_batch = [ - _run_decoder( - latent_batch, vae, is_video, vae_per_channel_normalize, timestep - ) - for latent_batch in latents.split(encode_bs) - ] - images = torch.cat(image_batch, dim=0) - else: - images = _run_decoder( - latents, vae, is_video, vae_per_channel_normalize, timestep - ) - - if is_video_shaped and not isinstance( - vae, (VideoAutoencoder, CausalVideoAutoencoder) - ): - images = rearrange(images, "(b n) c h w -> b c n h w", b=batch_size) - return images - - - def _run_decoder( - latents: Tensor, - vae: AutoencoderKL, - is_video: bool, - vae_per_channel_normalize=False, - timestep=None, - ) -> Tensor: - if isinstance(vae, (VideoAutoencoder, CausalVideoAutoencoder)): - *_, fl, hl, wl = latents.shape - temporal_scale, spatial_scale, _ = get_vae_size_scale_factor(vae) - latents = latents.to(vae.dtype) - vae_decode_kwargs = {} - if timestep is not None: - vae_decode_kwargs["timestep"] = timestep - image = vae.decode( - un_normalize_latents(latents, vae, vae_per_channel_normalize), -
return_dict=False, - target_shape=( - 1, - 3, - fl * temporal_scale if is_video else 1, - hl * spatial_scale, - wl * spatial_scale, - ), - **vae_decode_kwargs, - )[0] - else: - image = vae.decode( - un_normalize_latents(latents, vae, vae_per_channel_normalize), - return_dict=False, - )[0] - return image - - -def get_vae_size_scale_factor(vae: AutoencoderKL) -> float: - if isinstance(vae, CausalVideoAutoencoder): - spatial = vae.spatial_downscale_factor - temporal = vae.temporal_downscale_factor - else: - down_blocks = len( - [ - block - for block in vae.encoder.down_blocks - if isinstance(block.downsample, Downsample3D) - ] - ) - spatial = vae.config.patch_size * 2**down_blocks - temporal = ( - vae.config.patch_size_t * 2**down_blocks - if isinstance(vae, VideoAutoencoder) - else 1 - ) - - return (temporal, spatial, spatial) - - -def latent_to_pixel_coords( - latent_coords: Tensor, vae: AutoencoderKL, causal_fix: bool = False -) -> Tensor: - """ - Converts latent coordinates to pixel coordinates by scaling them according to the VAE's - configuration. - - Args: - latent_coords (Tensor): A tensor of shape [batch_size, 3, num_latents] - containing the latent corner coordinates of each token. - vae (AutoencoderKL): The VAE model - causal_fix (bool): Whether to take into account the different temporal scale - of the first frame. Default = False for backwards compatibility. - Returns: - Tensor: A tensor of pixel coordinates corresponding to the input latent coordinates. - """ - - scale_factors = get_vae_size_scale_factor(vae) - causal_fix = isinstance(vae, CausalVideoAutoencoder) and causal_fix - pixel_coords = latent_to_pixel_coords_from_factors( - latent_coords, scale_factors, causal_fix - ) - return pixel_coords - - -def latent_to_pixel_coords_from_factors( - latent_coords: Tensor, scale_factors: Tuple, causal_fix: bool = False -) -> Tensor: - pixel_coords = ( - latent_coords - * torch.tensor(scale_factors, device=latent_coords.device)[None, :, None] - ) - if causal_fix: - # Fix temporal scale for first frame to 1 due to causality - pixel_coords[:, 0] = (pixel_coords[:, 0] + 1 - scale_factors[0]).clamp(min=0) - return pixel_coords - - -def normalize_latents( - latents: Tensor, vae: AutoencoderKL, vae_per_channel_normalize: bool = False -) -> Tensor: - return ( - (latents - vae.mean_of_means.to(latents.dtype).view(1, -1, 1, 1, 1)) - / vae.std_of_means.to(latents.dtype).view(1, -1, 1, 1, 1) - if vae_per_channel_normalize - else latents * vae.config.scaling_factor - ) - - -def un_normalize_latents( - latents: Tensor, vae: AutoencoderKL, vae_per_channel_normalize: bool = False -) -> Tensor: - return ( - latents * vae.std_of_means.to(latents.dtype).view(1, -1, 1, 1, 1) - + vae.mean_of_means.to(latents.dtype).view(1, -1, 1, 1, 1) - if vae_per_channel_normalize - else latents / vae.config.scaling_factor - ) diff --git a/ltx_video_x/models/autoencoders/video_autoencoder.py b/ltx_video_x/models/autoencoders/video_autoencoder.py deleted file mode 100644 index 3c7926c1d3afb8188221b2e569aaaf89f7271bce..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/autoencoders/video_autoencoder.py +++ /dev/null @@ -1,1045 +0,0 @@ -import json -import os -from functools import partial -from types import SimpleNamespace -from typing import Any, Mapping, Optional, Tuple, Union - -import torch -from einops import rearrange -from torch import nn -from torch.nn import functional - -from diffusers.utils import logging - -from ltx_video.utils.torch_utils import Identity -from 
ltx_video.models.autoencoders.conv_nd_factory import make_conv_nd, make_linear_nd -from ltx_video.models.autoencoders.pixel_norm import PixelNorm -from ltx_video.models.autoencoders.vae import AutoencoderKLWrapper - -logger = logging.get_logger(__name__) - - -class VideoAutoencoder(AutoencoderKLWrapper): - @classmethod - def from_pretrained( - cls, - pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], - *args, - **kwargs, - ): - config_local_path = pretrained_model_name_or_path / "config.json" - config = cls.load_config(config_local_path, **kwargs) - video_vae = cls.from_config(config) - video_vae.to(kwargs["torch_dtype"]) - - model_local_path = pretrained_model_name_or_path / "autoencoder.pth" - ckpt_state_dict = torch.load(model_local_path) - video_vae.load_state_dict(ckpt_state_dict) - - statistics_local_path = ( - pretrained_model_name_or_path / "per_channel_statistics.json" - ) - if statistics_local_path.exists(): - with open(statistics_local_path, "r") as file: - data = json.load(file) - transposed_data = list(zip(*data["data"])) - data_dict = { - col: torch.tensor(vals) - for col, vals in zip(data["columns"], transposed_data) - } - video_vae.register_buffer("std_of_means", data_dict["std-of-means"]) - video_vae.register_buffer( - "mean_of_means", - data_dict.get( - "mean-of-means", torch.zeros_like(data_dict["std-of-means"]) - ), - ) - - return video_vae - - @staticmethod - def from_config(config): - assert ( - config["_class_name"] == "VideoAutoencoder" - ), "config must have _class_name=VideoAutoencoder" - if isinstance(config["dims"], list): - config["dims"] = tuple(config["dims"]) - - assert config["dims"] in [2, 3, (2, 1)], "dims must be 2, 3 or (2, 1)" - - double_z = config.get("double_z", True) - latent_log_var = config.get( - "latent_log_var", "per_channel" if double_z else "none" - ) - use_quant_conv = config.get("use_quant_conv", True) - - if use_quant_conv and latent_log_var == "uniform": - raise ValueError("uniform latent_log_var requires use_quant_conv=False") - - encoder = Encoder( - dims=config["dims"], - in_channels=config.get("in_channels", 3), - out_channels=config["latent_channels"], - block_out_channels=config["block_out_channels"], - patch_size=config.get("patch_size", 1), - latent_log_var=latent_log_var, - norm_layer=config.get("norm_layer", "group_norm"), - patch_size_t=config.get("patch_size_t", config.get("patch_size", 1)), - add_channel_padding=config.get("add_channel_padding", False), - ) - - decoder = Decoder( - dims=config["dims"], - in_channels=config["latent_channels"], - out_channels=config.get("out_channels", 3), - block_out_channels=config["block_out_channels"], - patch_size=config.get("patch_size", 1), - norm_layer=config.get("norm_layer", "group_norm"), - patch_size_t=config.get("patch_size_t", config.get("patch_size", 1)), - add_channel_padding=config.get("add_channel_padding", False), - ) - - dims = config["dims"] - return VideoAutoencoder( - encoder=encoder, - decoder=decoder, - latent_channels=config["latent_channels"], - dims=dims, - use_quant_conv=use_quant_conv, - ) - - @property - def config(self): - return SimpleNamespace( - _class_name="VideoAutoencoder", - dims=self.dims, - in_channels=self.encoder.conv_in.in_channels - // (self.encoder.patch_size_t * self.encoder.patch_size**2), - out_channels=self.decoder.conv_out.out_channels - // (self.decoder.patch_size_t * self.decoder.patch_size**2), - latent_channels=self.decoder.conv_in.in_channels, - block_out_channels=[ - 
self.encoder.down_blocks[i].res_blocks[-1].conv1.out_channels - for i in range(len(self.encoder.down_blocks)) - ], - scaling_factor=1.0, - norm_layer=self.encoder.norm_layer, - patch_size=self.encoder.patch_size, - latent_log_var=self.encoder.latent_log_var, - use_quant_conv=self.use_quant_conv, - patch_size_t=self.encoder.patch_size_t, - add_channel_padding=self.encoder.add_channel_padding, - ) - - @property - def is_video_supported(self): - """ - Check if the model supports video inputs of shape (B, C, F, H, W). Otherwise, the model only supports 2D images. - """ - return self.dims != 2 - - @property - def downscale_factor(self): - return self.encoder.downsample_factor - - def to_json_string(self) -> str: - import json - - return json.dumps(self.config.__dict__) - - def load_state_dict(self, state_dict: Mapping[str, Any], strict: bool = True): - model_keys = set(name for name, _ in self.named_parameters()) - - key_mapping = { - ".resnets.": ".res_blocks.", - "downsamplers.0": "downsample", - "upsamplers.0": "upsample", - } - - converted_state_dict = {} - for key, value in state_dict.items(): - for k, v in key_mapping.items(): - key = key.replace(k, v) - - if "norm" in key and key not in model_keys: - logger.info( - f"Removing key {key} from state_dict as it is not present in the model" - ) - continue - - converted_state_dict[key] = value - - super().load_state_dict(converted_state_dict, strict=strict) - - def last_layer(self): - if hasattr(self.decoder, "conv_out"): - if isinstance(self.decoder.conv_out, nn.Sequential): - last_layer = self.decoder.conv_out[-1] - else: - last_layer = self.decoder.conv_out - else: - last_layer = self.decoder.layers[-1] - return last_layer - - -class Encoder(nn.Module): - r""" - The `Encoder` layer of a variational autoencoder that encodes its input into a latent representation. - - Args: - in_channels (`int`, *optional*, defaults to 3): - The number of input channels. - out_channels (`int`, *optional*, defaults to 3): - The number of output channels. - block_out_channels (`Tuple[int, ...]`, *optional*, defaults to `(64,)`): - The number of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): - The number of layers per block. - norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups for normalization. - patch_size (`int`, *optional*, defaults to 1): - The patch size to use. Should be a power of 2. - norm_layer (`str`, *optional*, defaults to `group_norm`): - The normalization layer to use. Can be either `group_norm` or `pixel_norm`. - latent_log_var (`str`, *optional*, defaults to `per_channel`): - The number of channels for the log variance. Can be either `per_channel`, `uniform`, or `none`. - """ - - def __init__( - self, - dims: Union[int, Tuple[int, int]] = 3, - in_channels: int = 3, - out_channels: int = 3, - block_out_channels: Tuple[int, ...] 
= (64,), - layers_per_block: int = 2, - norm_num_groups: int = 32, - patch_size: Union[int, Tuple[int]] = 1, - norm_layer: str = "group_norm", # group_norm, pixel_norm - latent_log_var: str = "per_channel", - patch_size_t: Optional[int] = None, - add_channel_padding: Optional[bool] = False, - ): - super().__init__() - self.patch_size = patch_size - self.patch_size_t = patch_size_t if patch_size_t is not None else patch_size - self.add_channel_padding = add_channel_padding - self.layers_per_block = layers_per_block - self.norm_layer = norm_layer - self.latent_channels = out_channels - self.latent_log_var = latent_log_var - if add_channel_padding: - in_channels = in_channels * self.patch_size**3 - else: - in_channels = in_channels * self.patch_size_t * self.patch_size**2 - self.in_channels = in_channels - output_channel = block_out_channels[0] - - self.conv_in = make_conv_nd( - dims=dims, - in_channels=in_channels, - out_channels=output_channel, - kernel_size=3, - stride=1, - padding=1, - ) - - self.down_blocks = nn.ModuleList([]) - - for i in range(len(block_out_channels)): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = DownEncoderBlock3D( - dims=dims, - in_channels=input_channel, - out_channels=output_channel, - num_layers=self.layers_per_block, - add_downsample=not is_final_block and 2**i >= patch_size, - resnet_eps=1e-6, - downsample_padding=0, - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - ) - self.down_blocks.append(down_block) - - self.mid_block = UNetMidBlock3D( - dims=dims, - in_channels=block_out_channels[-1], - num_layers=self.layers_per_block, - resnet_eps=1e-6, - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - ) - - # out - if norm_layer == "group_norm": - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[-1], - num_groups=norm_num_groups, - eps=1e-6, - ) - elif norm_layer == "pixel_norm": - self.conv_norm_out = PixelNorm() - self.conv_act = nn.SiLU() - - conv_out_channels = out_channels - if latent_log_var == "per_channel": - conv_out_channels *= 2 - elif latent_log_var == "uniform": - conv_out_channels += 1 - elif latent_log_var != "none": - raise ValueError(f"Invalid latent_log_var: {latent_log_var}") - self.conv_out = make_conv_nd( - dims, block_out_channels[-1], conv_out_channels, 3, padding=1 - ) - - self.gradient_checkpointing = False - - @property - def downscale_factor(self): - return ( - 2 - ** len( - [ - block - for block in self.down_blocks - if isinstance(block.downsample, Downsample3D) - ] - ) - * self.patch_size - ) - - def forward( - self, sample: torch.FloatTensor, return_features=False - ) -> torch.FloatTensor: - r"""The forward method of the `Encoder` class.""" - - downsample_in_time = sample.shape[2] != 1 - - # patchify - patch_size_t = self.patch_size_t if downsample_in_time else 1 - sample = patchify( - sample, - patch_size_hw=self.patch_size, - patch_size_t=patch_size_t, - add_channel_padding=self.add_channel_padding, - ) - - sample = self.conv_in(sample) - - checkpoint_fn = ( - partial(torch.utils.checkpoint.checkpoint, use_reentrant=False) - if self.gradient_checkpointing and self.training - else lambda x: x - ) - - if return_features: - features = [] - for down_block in self.down_blocks: - sample = checkpoint_fn(down_block)( - sample, downsample_in_time=downsample_in_time - ) - if return_features: - features.append(sample) - - sample = checkpoint_fn(self.mid_block)(sample) - - # post-process - sample = 
self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if self.latent_log_var == "uniform": - last_channel = sample[:, -1:, ...] - num_dims = sample.dim() - - if num_dims == 4: - # For shape (B, C, H, W) - repeated_last_channel = last_channel.repeat( - 1, sample.shape[1] - 2, 1, 1 - ) - sample = torch.cat([sample, repeated_last_channel], dim=1) - elif num_dims == 5: - # For shape (B, C, F, H, W) - repeated_last_channel = last_channel.repeat( - 1, sample.shape[1] - 2, 1, 1, 1 - ) - sample = torch.cat([sample, repeated_last_channel], dim=1) - else: - raise ValueError(f"Invalid input shape: {sample.shape}") - - if return_features: - features.append(sample[:, : self.latent_channels, ...]) - return sample, features - return sample - - -class Decoder(nn.Module): - r""" - The `Decoder` layer of a variational autoencoder that decodes its latent representation into an output sample. - - Args: - in_channels (`int`, *optional*, defaults to 3): - The number of input channels. - out_channels (`int`, *optional*, defaults to 3): - The number of output channels. - block_out_channels (`Tuple[int, ...]`, *optional*, defaults to `(64,)`): - The number of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): - The number of layers per block. - norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups for normalization. - patch_size (`int`, *optional*, defaults to 1): - The patch size to use. Should be a power of 2. - norm_layer (`str`, *optional*, defaults to `group_norm`): - The normalization layer to use. Can be either `group_norm` or `pixel_norm`. - """ - - def __init__( - self, - dims, - in_channels: int = 3, - out_channels: int = 3, - block_out_channels: Tuple[int, ...] = (64,), - layers_per_block: int = 2, - norm_num_groups: int = 32, - patch_size: int = 1, - norm_layer: str = "group_norm", - patch_size_t: Optional[int] = None, - add_channel_padding: Optional[bool] = False, - ): - super().__init__() - self.patch_size = patch_size - self.patch_size_t = patch_size_t if patch_size_t is not None else patch_size - self.add_channel_padding = add_channel_padding - self.layers_per_block = layers_per_block - if add_channel_padding: - out_channels = out_channels * self.patch_size**3 - else: - out_channels = out_channels * self.patch_size_t * self.patch_size**2 - self.out_channels = out_channels - - self.conv_in = make_conv_nd( - dims, - in_channels, - block_out_channels[-1], - kernel_size=3, - stride=1, - padding=1, - ) - - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - self.mid_block = UNetMidBlock3D( - dims=dims, - in_channels=block_out_channels[-1], - num_layers=self.layers_per_block, - resnet_eps=1e-6, - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - ) - - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i in range(len(reversed_block_out_channels)): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - - up_block = UpDecoderBlock3D( - dims=dims, - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - add_upsample=not is_final_block - and 2 ** (len(block_out_channels) - i - 1) > patch_size, - resnet_eps=1e-6, - resnet_groups=norm_num_groups, - norm_layer=norm_layer, - ) - self.up_blocks.append(up_block) - - if norm_layer == "group_norm": - self.conv_norm_out = nn.GroupNorm( - 
num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6 - ) - elif norm_layer == "pixel_norm": - self.conv_norm_out = PixelNorm() - - self.conv_act = nn.SiLU() - self.conv_out = make_conv_nd( - dims, block_out_channels[0], out_channels, 3, padding=1 - ) - - self.gradient_checkpointing = False - - def forward(self, sample: torch.FloatTensor, target_shape) -> torch.FloatTensor: - r"""The forward method of the `Decoder` class.""" - assert target_shape is not None, "target_shape must be provided" - upsample_in_time = sample.shape[2] < target_shape[2] - - sample = self.conv_in(sample) - - upscale_dtype = next(iter(self.up_blocks.parameters())).dtype - - checkpoint_fn = ( - partial(torch.utils.checkpoint.checkpoint, use_reentrant=False) - if self.gradient_checkpointing and self.training - else lambda x: x - ) - - sample = checkpoint_fn(self.mid_block)(sample) - sample = sample.to(upscale_dtype) - - for up_block in self.up_blocks: - sample = checkpoint_fn(up_block)(sample, upsample_in_time=upsample_in_time) - - # post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - # un-patchify - patch_size_t = self.patch_size_t if upsample_in_time else 1 - sample = unpatchify( - sample, - patch_size_hw=self.patch_size, - patch_size_t=patch_size_t, - add_channel_padding=self.add_channel_padding, - ) - - return sample - - -class DownEncoderBlock3D(nn.Module): - def __init__( - self, - dims: Union[int, Tuple[int, int]], - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_groups: int = 32, - add_downsample: bool = True, - downsample_padding: int = 1, - norm_layer: str = "group_norm", - ): - super().__init__() - res_blocks = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - res_blocks.append( - ResnetBlock3D( - dims=dims, - in_channels=in_channels, - out_channels=out_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - norm_layer=norm_layer, - ) - ) - - self.res_blocks = nn.ModuleList(res_blocks) - - if add_downsample: - self.downsample = Downsample3D( - dims, - out_channels, - out_channels=out_channels, - padding=downsample_padding, - ) - else: - self.downsample = Identity() - - def forward( - self, hidden_states: torch.FloatTensor, downsample_in_time - ) -> torch.FloatTensor: - for resnet in self.res_blocks: - hidden_states = resnet(hidden_states) - - hidden_states = self.downsample( - hidden_states, downsample_in_time=downsample_in_time - ) - - return hidden_states - - -class UNetMidBlock3D(nn.Module): - """ - A 3D UNet mid-block [`UNetMidBlock3D`] with multiple residual blocks. - - Args: - in_channels (`int`): The number of input channels. - dropout (`float`, *optional*, defaults to 0.0): The dropout rate. - num_layers (`int`, *optional*, defaults to 1): The number of residual blocks. - resnet_eps (`float`, *optional*, 1e-6 ): The epsilon value for the resnet blocks. - resnet_groups (`int`, *optional*, defaults to 32): - The number of groups to use in the group normalization layers of the resnet blocks. - - Returns: - `torch.FloatTensor`: The output of the last residual block, which is a tensor of shape `(batch_size, - in_channels, height, width)`. 
- - """ - - def __init__( - self, - dims: Union[int, Tuple[int, int]], - in_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_groups: int = 32, - norm_layer: str = "group_norm", - ): - super().__init__() - resnet_groups = ( - resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - ) - - self.res_blocks = nn.ModuleList( - [ - ResnetBlock3D( - dims=dims, - in_channels=in_channels, - out_channels=in_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - norm_layer=norm_layer, - ) - for _ in range(num_layers) - ] - ) - - def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor: - for resnet in self.res_blocks: - hidden_states = resnet(hidden_states) - - return hidden_states - - -class UpDecoderBlock3D(nn.Module): - def __init__( - self, - dims: Union[int, Tuple[int, int]], - in_channels: int, - out_channels: int, - resolution_idx: Optional[int] = None, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_groups: int = 32, - add_upsample: bool = True, - norm_layer: str = "group_norm", - ): - super().__init__() - res_blocks = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - res_blocks.append( - ResnetBlock3D( - dims=dims, - in_channels=input_channels, - out_channels=out_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - norm_layer=norm_layer, - ) - ) - - self.res_blocks = nn.ModuleList(res_blocks) - - if add_upsample: - self.upsample = Upsample3D( - dims=dims, channels=out_channels, out_channels=out_channels - ) - else: - self.upsample = Identity() - - self.resolution_idx = resolution_idx - - def forward( - self, hidden_states: torch.FloatTensor, upsample_in_time=True - ) -> torch.FloatTensor: - for resnet in self.res_blocks: - hidden_states = resnet(hidden_states) - - hidden_states = self.upsample(hidden_states, upsample_in_time=upsample_in_time) - - return hidden_states - - -class ResnetBlock3D(nn.Module): - r""" - A Resnet block. - - Parameters: - in_channels (`int`): The number of channels in the input. - out_channels (`int`, *optional*, default to be `None`): - The number of output channels for the first conv layer. If None, same as `in_channels`. - dropout (`float`, *optional*, defaults to `0.0`): The dropout probability to use. - groups (`int`, *optional*, default to `32`): The number of groups to use for the first normalization layer. - eps (`float`, *optional*, defaults to `1e-6`): The epsilon to use for the normalization. 
- """ - - def __init__( - self, - dims: Union[int, Tuple[int, int]], - in_channels: int, - out_channels: Optional[int] = None, - conv_shortcut: bool = False, - dropout: float = 0.0, - groups: int = 32, - eps: float = 1e-6, - norm_layer: str = "group_norm", - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - if norm_layer == "group_norm": - self.norm1 = torch.nn.GroupNorm( - num_groups=groups, num_channels=in_channels, eps=eps, affine=True - ) - elif norm_layer == "pixel_norm": - self.norm1 = PixelNorm() - - self.non_linearity = nn.SiLU() - - self.conv1 = make_conv_nd( - dims, in_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - - if norm_layer == "group_norm": - self.norm2 = torch.nn.GroupNorm( - num_groups=groups, num_channels=out_channels, eps=eps, affine=True - ) - elif norm_layer == "pixel_norm": - self.norm2 = PixelNorm() - - self.dropout = torch.nn.Dropout(dropout) - - self.conv2 = make_conv_nd( - dims, out_channels, out_channels, kernel_size=3, stride=1, padding=1 - ) - - self.conv_shortcut = ( - make_linear_nd( - dims=dims, in_channels=in_channels, out_channels=out_channels - ) - if in_channels != out_channels - else nn.Identity() - ) - - def forward( - self, - input_tensor: torch.FloatTensor, - ) -> torch.FloatTensor: - hidden_states = input_tensor - - hidden_states = self.norm1(hidden_states) - - hidden_states = self.non_linearity(hidden_states) - - hidden_states = self.conv1(hidden_states) - - hidden_states = self.norm2(hidden_states) - - hidden_states = self.non_linearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - - hidden_states = self.conv2(hidden_states) - - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = input_tensor + hidden_states - - return output_tensor - - -class Downsample3D(nn.Module): - def __init__( - self, - dims, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - padding: int = 1, - ): - super().__init__() - stride: int = 2 - self.padding = padding - self.in_channels = in_channels - self.dims = dims - self.conv = make_conv_nd( - dims=dims, - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - ) - - def forward(self, x, downsample_in_time=True): - conv = self.conv - if self.padding == 0: - if self.dims == 2: - padding = (0, 1, 0, 1) - else: - padding = (0, 1, 0, 1, 0, 1 if downsample_in_time else 0) - - x = functional.pad(x, padding, mode="constant", value=0) - - if self.dims == (2, 1) and not downsample_in_time: - return conv(x, skip_time_conv=True) - - return conv(x) - - -class Upsample3D(nn.Module): - """ - An upsampling layer for 3D tensors of shape (B, C, D, H, W). - - :param channels: channels in the inputs and outputs. 
- """ - - def __init__(self, dims, channels, out_channels=None): - super().__init__() - self.dims = dims - self.channels = channels - self.out_channels = out_channels or channels - self.conv = make_conv_nd( - dims, channels, out_channels, kernel_size=3, padding=1, bias=True - ) - - def forward(self, x, upsample_in_time): - if self.dims == 2: - x = functional.interpolate( - x, (x.shape[2] * 2, x.shape[3] * 2), mode="nearest" - ) - else: - time_scale_factor = 2 if upsample_in_time else 1 - # print("before:", x.shape) - b, c, d, h, w = x.shape - x = rearrange(x, "b c d h w -> (b d) c h w") - # height and width interpolate - x = functional.interpolate( - x, (x.shape[2] * 2, x.shape[3] * 2), mode="nearest" - ) - _, _, h, w = x.shape - - if not upsample_in_time and self.dims == (2, 1): - x = rearrange(x, "(b d) c h w -> b c d h w ", b=b, h=h, w=w) - return self.conv(x, skip_time_conv=True) - - # Second ** upsampling ** which is essentially treated as a 1D convolution across the 'd' dimension - x = rearrange(x, "(b d) c h w -> (b h w) c 1 d", b=b) - - # (b h w) c 1 d - new_d = x.shape[-1] * time_scale_factor - x = functional.interpolate(x, (1, new_d), mode="nearest") - # (b h w) c 1 new_d - x = rearrange( - x, "(b h w) c 1 new_d -> b c new_d h w", b=b, h=h, w=w, new_d=new_d - ) - # b c d h w - - # x = functional.interpolate( - # x, (x.shape[2] * time_scale_factor, x.shape[3] * 2, x.shape[4] * 2), mode="nearest" - # ) - # print("after:", x.shape) - - return self.conv(x) - - -def patchify(x, patch_size_hw, patch_size_t=1, add_channel_padding=False): - if patch_size_hw == 1 and patch_size_t == 1: - return x - if x.dim() == 4: - x = rearrange( - x, "b c (h q) (w r) -> b (c r q) h w", q=patch_size_hw, r=patch_size_hw - ) - elif x.dim() == 5: - x = rearrange( - x, - "b c (f p) (h q) (w r) -> b (c p r q) f h w", - p=patch_size_t, - q=patch_size_hw, - r=patch_size_hw, - ) - else: - raise ValueError(f"Invalid input shape: {x.shape}") - - if ( - (x.dim() == 5) - and (patch_size_hw > patch_size_t) - and (patch_size_t > 1 or add_channel_padding) - ): - channels_to_pad = x.shape[1] * (patch_size_hw // patch_size_t) - x.shape[1] - padding_zeros = torch.zeros( - x.shape[0], - channels_to_pad, - x.shape[2], - x.shape[3], - x.shape[4], - device=x.device, - dtype=x.dtype, - ) - x = torch.cat([padding_zeros, x], dim=1) - - return x - - -def unpatchify(x, patch_size_hw, patch_size_t=1, add_channel_padding=False): - if patch_size_hw == 1 and patch_size_t == 1: - return x - - if ( - (x.dim() == 5) - and (patch_size_hw > patch_size_t) - and (patch_size_t > 1 or add_channel_padding) - ): - channels_to_keep = int(x.shape[1] * (patch_size_t / patch_size_hw)) - x = x[:, :channels_to_keep, :, :, :] - - if x.dim() == 4: - x = rearrange( - x, "b (c r q) h w -> b c (h q) (w r)", q=patch_size_hw, r=patch_size_hw - ) - elif x.dim() == 5: - x = rearrange( - x, - "b (c p r q) f h w -> b c (f p) (h q) (w r)", - p=patch_size_t, - q=patch_size_hw, - r=patch_size_hw, - ) - - return x - - -def create_video_autoencoder_config( - latent_channels: int = 4, -): - config = { - "_class_name": "VideoAutoencoder", - "dims": ( - 2, - 1, - ), # 2 for Conv2, 3 for Conv3d, (2, 1) for Conv2d followed by Conv1d - "in_channels": 3, # Number of input color channels (e.g., RGB) - "out_channels": 3, # Number of output color channels - "latent_channels": latent_channels, # Number of channels in the latent space representation - "block_out_channels": [ - 128, - 256, - 512, - 512, - ], # Number of output channels of each encoder / decoder inner block - 
"patch_size": 1, - } - - return config - - -def create_video_autoencoder_pathify4x4x4_config( - latent_channels: int = 4, -): - config = { - "_class_name": "VideoAutoencoder", - "dims": ( - 2, - 1, - ), # 2 for Conv2, 3 for Conv3d, (2, 1) for Conv2d followed by Conv1d - "in_channels": 3, # Number of input color channels (e.g., RGB) - "out_channels": 3, # Number of output color channels - "latent_channels": latent_channels, # Number of channels in the latent space representation - "block_out_channels": [512] - * 4, # Number of output channels of each encoder / decoder inner block - "patch_size": 4, - "latent_log_var": "uniform", - } - - return config - - -def create_video_autoencoder_pathify4x4_config( - latent_channels: int = 4, -): - config = { - "_class_name": "VideoAutoencoder", - "dims": 2, # 2 for Conv2, 3 for Conv3d, (2, 1) for Conv2d followed by Conv1d - "in_channels": 3, # Number of input color channels (e.g., RGB) - "out_channels": 3, # Number of output color channels - "latent_channels": latent_channels, # Number of channels in the latent space representation - "block_out_channels": [512] - * 4, # Number of output channels of each encoder / decoder inner block - "patch_size": 4, - "norm_layer": "pixel_norm", - } - - return config - - -def test_vae_patchify_unpatchify(): - import torch - - x = torch.randn(2, 3, 8, 64, 64) - x_patched = patchify(x, patch_size_hw=4, patch_size_t=4) - x_unpatched = unpatchify(x_patched, patch_size_hw=4, patch_size_t=4) - assert torch.allclose(x, x_unpatched) - - -def demo_video_autoencoder_forward_backward(): - # Configuration for the VideoAutoencoder - config = create_video_autoencoder_pathify4x4x4_config() - - # Instantiate the VideoAutoencoder with the specified configuration - video_autoencoder = VideoAutoencoder.from_config(config) - - print(video_autoencoder) - - # Print the total number of parameters in the video autoencoder - total_params = sum(p.numel() for p in video_autoencoder.parameters()) - print(f"Total number of parameters in VideoAutoencoder: {total_params:,}") - - # Create a mock input tensor simulating a batch of videos - # Shape: (batch_size, channels, depth, height, width) - # E.g., 4 videos, each with 3 color channels, 16 frames, and 64x64 pixels per frame - input_videos = torch.randn(2, 3, 8, 64, 64) - - # Forward pass: encode and decode the input videos - latent = video_autoencoder.encode(input_videos).latent_dist.mode() - print(f"input shape={input_videos.shape}") - print(f"latent shape={latent.shape}") - reconstructed_videos = video_autoencoder.decode( - latent, target_shape=input_videos.shape - ).sample - - print(f"reconstructed shape={reconstructed_videos.shape}") - - # Calculate the loss (e.g., mean squared error) - loss = torch.nn.functional.mse_loss(input_videos, reconstructed_videos) - - # Perform backward pass - loss.backward() - - print(f"Demo completed with loss: {loss.item()}") - - -# Ensure to call the demo function to execute the forward and backward pass -if __name__ == "__main__": - demo_video_autoencoder_forward_backward() diff --git a/ltx_video_x/models/transformers/__init__.py b/ltx_video_x/models/transformers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/models/transformers/attention.py b/ltx_video_x/models/transformers/attention.py deleted file mode 100644 index bee0839ad78bfc33d2e940818edec2701ece99c7..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/transformers/attention.py +++ 
/dev/null @@ -1,1264 +0,0 @@ -import inspect -from importlib import import_module -from typing import Any, Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from diffusers.models.activations import GEGLU, GELU, ApproximateGELU -from diffusers.models.attention import _chunked_feed_forward -from diffusers.models.attention_processor import ( - LoRAAttnAddedKVProcessor, - LoRAAttnProcessor, - LoRAAttnProcessor2_0, - LoRAXFormersAttnProcessor, - SpatialNorm, -) -from diffusers.models.lora import LoRACompatibleLinear -from diffusers.models.normalization import RMSNorm -from diffusers.utils import deprecate, logging -from diffusers.utils.torch_utils import maybe_allow_in_graph -from einops import rearrange -from torch import nn - -from ltx_video.utils.skip_layer_strategy import SkipLayerStrategy - -try: - from torch_xla.experimental.custom_kernel import flash_attention -except ImportError: - # workaround for automatic tests. Currently this function is manually patched - # to the torch_xla lib on setup of container - pass - -# code adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py - -logger = logging.get_logger(__name__) - - -@maybe_allow_in_graph -class BasicTransformerBlock(nn.Module): - r""" - A basic Transformer block. - - Parameters: - dim (`int`): The number of channels in the input and output. - num_attention_heads (`int`): The number of heads to use for multi-head attention. - attention_head_dim (`int`): The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The size of the encoder_hidden_states vector for cross attention. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm (: - obj: `int`, *optional*): The number of diffusion steps used during training. See `Transformer2DModel`. - attention_bias (: - obj: `bool`, *optional*, defaults to `False`): Configure if the attentions should contain a bias parameter. - only_cross_attention (`bool`, *optional*): - Whether to use only cross-attention layers. In this case two cross attention layers are used. - double_self_attention (`bool`, *optional*): - Whether to use two self-attention layers. In this case no cross attention layers are used. - upcast_attention (`bool`, *optional*): - Whether to upcast the attention computation to float32. This is useful for mixed precision training. - norm_elementwise_affine (`bool`, *optional*, defaults to `True`): - Whether to use learnable elementwise affine parameters for normalization. - qk_norm (`str`, *optional*, defaults to None): - Set to 'layer_norm' or `rms_norm` to perform query and key normalization. - adaptive_norm (`str`, *optional*, defaults to `"single_scale_shift"`): - The type of adaptive norm to use. Can be `"single_scale_shift"`, `"single_scale"` or "none". - standardization_norm (`str`, *optional*, defaults to `"layer_norm"`): - The type of pre-normalization to use. Can be `"layer_norm"` or `"rms_norm"`. - final_dropout (`bool` *optional*, defaults to False): - Whether to apply a final dropout after the last feed-forward layer. - attention_type (`str`, *optional*, defaults to `"default"`): - The type of attention to use. Can be `"default"` or `"gated"` or `"gated-text-image"`. - positional_embeddings (`str`, *optional*, defaults to `None`): - The type of positional embeddings to apply to. 
- num_positional_embeddings (`int`, *optional*, defaults to `None`): - The maximum number of positional embeddings to apply. - """ - - def __init__( - self, - dim: int, - num_attention_heads: int, - attention_head_dim: int, - dropout=0.0, - cross_attention_dim: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, # pylint: disable=unused-argument - attention_bias: bool = False, - only_cross_attention: bool = False, - double_self_attention: bool = False, - upcast_attention: bool = False, - norm_elementwise_affine: bool = True, - adaptive_norm: str = "single_scale_shift", # 'single_scale_shift', 'single_scale' or 'none' - standardization_norm: str = "layer_norm", # 'layer_norm' or 'rms_norm' - norm_eps: float = 1e-5, - qk_norm: Optional[str] = None, - final_dropout: bool = False, - attention_type: str = "default", # pylint: disable=unused-argument - ff_inner_dim: Optional[int] = None, - ff_bias: bool = True, - attention_out_bias: bool = True, - use_tpu_flash_attention: bool = False, - use_rope: bool = False, - ): - super().__init__() - self.only_cross_attention = only_cross_attention - self.use_tpu_flash_attention = use_tpu_flash_attention - self.adaptive_norm = adaptive_norm - - assert standardization_norm in ["layer_norm", "rms_norm"] - assert adaptive_norm in ["single_scale_shift", "single_scale", "none"] - - make_norm_layer = ( - nn.LayerNorm if standardization_norm == "layer_norm" else RMSNorm - ) - - # Define 3 blocks. Each block has its own normalization layer. - # 1. Self-Attn - self.norm1 = make_norm_layer( - dim, elementwise_affine=norm_elementwise_affine, eps=norm_eps - ) - - self.attn1 = Attention( - query_dim=dim, - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - cross_attention_dim=cross_attention_dim if only_cross_attention else None, - upcast_attention=upcast_attention, - out_bias=attention_out_bias, - use_tpu_flash_attention=use_tpu_flash_attention, - qk_norm=qk_norm, - use_rope=use_rope, - ) - - # 2. Cross-Attn - if cross_attention_dim is not None or double_self_attention: - self.attn2 = Attention( - query_dim=dim, - cross_attention_dim=( - cross_attention_dim if not double_self_attention else None - ), - heads=num_attention_heads, - dim_head=attention_head_dim, - dropout=dropout, - bias=attention_bias, - upcast_attention=upcast_attention, - out_bias=attention_out_bias, - use_tpu_flash_attention=use_tpu_flash_attention, - qk_norm=qk_norm, - use_rope=use_rope, - ) # is self-attn if encoder_hidden_states is none - - if adaptive_norm == "none": - self.attn2_norm = make_norm_layer( - dim, norm_eps, norm_elementwise_affine - ) - else: - self.attn2 = None - self.attn2_norm = None - - self.norm2 = make_norm_layer(dim, norm_eps, norm_elementwise_affine) - - # 3. Feed-forward - self.ff = FeedForward( - dim, - dropout=dropout, - activation_fn=activation_fn, - final_dropout=final_dropout, - inner_dim=ff_inner_dim, - bias=ff_bias, - ) - - # 5. Scale-shift for PixArt-Alpha. - if adaptive_norm != "none": - num_ada_params = 4 if adaptive_norm == "single_scale" else 6 - self.scale_shift_table = nn.Parameter( - torch.randn(num_ada_params, dim) / dim**0.5 - ) - - # let chunk size default to None - self._chunk_size = None - self._chunk_dim = 0 - - def set_use_tpu_flash_attention(self): - r""" - Function sets the flag in this object and propagates down the children. The flag will enforce the usage of TPU - attention kernel. 
- """ - self.use_tpu_flash_attention = True - self.attn1.set_use_tpu_flash_attention() - self.attn2.set_use_tpu_flash_attention() - - def set_chunk_feed_forward(self, chunk_size: Optional[int], dim: int = 0): - # Sets chunk feed-forward - self._chunk_size = chunk_size - self._chunk_dim = dim - - def forward( - self, - hidden_states: torch.FloatTensor, - freqs_cis: Optional[Tuple[torch.FloatTensor, torch.FloatTensor]] = None, - attention_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - timestep: Optional[torch.LongTensor] = None, - cross_attention_kwargs: Dict[str, Any] = None, - class_labels: Optional[torch.LongTensor] = None, - skip_layer_mask: Optional[torch.Tensor] = None, - skip_layer_strategy: Optional[SkipLayerStrategy] = None, - ) -> torch.FloatTensor: - if cross_attention_kwargs is not None: - if cross_attention_kwargs.get("scale", None) is not None: - logger.warning( - "Passing `scale` to `cross_attention_kwargs` is depcrecated. `scale` will be ignored." - ) - - # Notice that normalization is always applied before the real computation in the following blocks. - # 0. Self-Attention - batch_size = hidden_states.shape[0] - - original_hidden_states = hidden_states - - norm_hidden_states = self.norm1(hidden_states) - - # Apply ada_norm_single - if self.adaptive_norm in ["single_scale_shift", "single_scale"]: - assert timestep.ndim == 3 # [batch, 1 or num_tokens, embedding_dim] - num_ada_params = self.scale_shift_table.shape[0] - ada_values = self.scale_shift_table[None, None] + timestep.reshape( - batch_size, timestep.shape[1], num_ada_params, -1 - ) - if self.adaptive_norm == "single_scale_shift": - shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = ( - ada_values.unbind(dim=2) - ) - norm_hidden_states = norm_hidden_states * (1 + scale_msa) + shift_msa - else: - scale_msa, gate_msa, scale_mlp, gate_mlp = ada_values.unbind(dim=2) - norm_hidden_states = norm_hidden_states * (1 + scale_msa) - elif self.adaptive_norm == "none": - scale_msa, gate_msa, scale_mlp, gate_mlp = None, None, None, None - else: - raise ValueError(f"Unknown adaptive norm type: {self.adaptive_norm}") - - norm_hidden_states = norm_hidden_states.squeeze( - 1 - ) # TODO: Check if this is needed - - # 1. Prepare GLIGEN inputs - cross_attention_kwargs = ( - cross_attention_kwargs.copy() if cross_attention_kwargs is not None else {} - ) - - attn_output = self.attn1( - norm_hidden_states, - freqs_cis=freqs_cis, - encoder_hidden_states=( - encoder_hidden_states if self.only_cross_attention else None - ), - attention_mask=attention_mask, - skip_layer_mask=skip_layer_mask, - skip_layer_strategy=skip_layer_strategy, - **cross_attention_kwargs, - ) - if gate_msa is not None: - attn_output = gate_msa * attn_output - - hidden_states = attn_output + hidden_states - if hidden_states.ndim == 4: - hidden_states = hidden_states.squeeze(1) - - # 3. Cross-Attention - if self.attn2 is not None: - if self.adaptive_norm == "none": - attn_input = self.attn2_norm(hidden_states) - else: - attn_input = hidden_states - attn_output = self.attn2( - attn_input, - freqs_cis=freqs_cis, - encoder_hidden_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - **cross_attention_kwargs, - ) - hidden_states = attn_output + hidden_states - - # 4. 
Feed-forward - norm_hidden_states = self.norm2(hidden_states) - if self.adaptive_norm == "single_scale_shift": - norm_hidden_states = norm_hidden_states * (1 + scale_mlp) + shift_mlp - elif self.adaptive_norm == "single_scale": - norm_hidden_states = norm_hidden_states * (1 + scale_mlp) - elif self.adaptive_norm == "none": - pass - else: - raise ValueError(f"Unknown adaptive norm type: {self.adaptive_norm}") - - if self._chunk_size is not None: - # "feed_forward_chunk_size" can be used to save memory - ff_output = _chunked_feed_forward( - self.ff, norm_hidden_states, self._chunk_dim, self._chunk_size - ) - else: - ff_output = self.ff(norm_hidden_states) - if gate_mlp is not None: - ff_output = gate_mlp * ff_output - - hidden_states = ff_output + hidden_states - if hidden_states.ndim == 4: - hidden_states = hidden_states.squeeze(1) - - if ( - skip_layer_mask is not None - and skip_layer_strategy == SkipLayerStrategy.TransformerBlock - ): - skip_layer_mask = skip_layer_mask.view(-1, 1, 1) - hidden_states = hidden_states * skip_layer_mask + original_hidden_states * ( - 1.0 - skip_layer_mask - ) - - return hidden_states - - -@maybe_allow_in_graph -class Attention(nn.Module): - r""" - A cross attention layer. - - Parameters: - query_dim (`int`): - The number of channels in the query. - cross_attention_dim (`int`, *optional*): - The number of channels in the encoder_hidden_states. If not given, defaults to `query_dim`. - heads (`int`, *optional*, defaults to 8): - The number of heads to use for multi-head attention. - dim_head (`int`, *optional*, defaults to 64): - The number of channels in each head. - dropout (`float`, *optional*, defaults to 0.0): - The dropout probability to use. - bias (`bool`, *optional*, defaults to False): - Set to `True` for the query, key, and value linear layers to contain a bias parameter. - upcast_attention (`bool`, *optional*, defaults to False): - Set to `True` to upcast the attention computation to `float32`. - upcast_softmax (`bool`, *optional*, defaults to False): - Set to `True` to upcast the softmax computation to `float32`. - cross_attention_norm (`str`, *optional*, defaults to `None`): - The type of normalization to use for the cross attention. Can be `None`, `layer_norm`, or `group_norm`. - cross_attention_norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups to use for the group norm in the cross attention. - added_kv_proj_dim (`int`, *optional*, defaults to `None`): - The number of channels to use for the added key and value projections. If `None`, no projection is used. - norm_num_groups (`int`, *optional*, defaults to `None`): - The number of groups to use for the group norm in the attention. - spatial_norm_dim (`int`, *optional*, defaults to `None`): - The number of channels to use for the spatial normalization. - out_bias (`bool`, *optional*, defaults to `True`): - Set to `True` to use a bias in the output linear layer. - scale_qk (`bool`, *optional*, defaults to `True`): - Set to `True` to scale the query and key by `1 / sqrt(dim_head)`. - qk_norm (`str`, *optional*, defaults to None): - Set to 'layer_norm' or `rms_norm` to perform query and key normalization. - only_cross_attention (`bool`, *optional*, defaults to `False`): - Set to `True` to only use cross attention and not added_kv_proj_dim. Can only be set to `True` if - `added_kv_proj_dim` is not `None`. - eps (`float`, *optional*, defaults to 1e-5): - An additional value added to the denominator in group normalization that is used for numerical stability. 
- rescale_output_factor (`float`, *optional*, defaults to 1.0): - A factor to rescale the output by dividing it with this value. - residual_connection (`bool`, *optional*, defaults to `False`): - Set to `True` to add the residual connection to the output. - _from_deprecated_attn_block (`bool`, *optional*, defaults to `False`): - Set to `True` if the attention block is loaded from a deprecated state dict. - processor (`AttnProcessor`, *optional*, defaults to `None`): - The attention processor to use. If `None`, defaults to `AttnProcessor2_0` if `torch 2.x` is used and - `AttnProcessor` otherwise. - """ - - def __init__( - self, - query_dim: int, - cross_attention_dim: Optional[int] = None, - heads: int = 8, - dim_head: int = 64, - dropout: float = 0.0, - bias: bool = False, - upcast_attention: bool = False, - upcast_softmax: bool = False, - cross_attention_norm: Optional[str] = None, - cross_attention_norm_num_groups: int = 32, - added_kv_proj_dim: Optional[int] = None, - norm_num_groups: Optional[int] = None, - spatial_norm_dim: Optional[int] = None, - out_bias: bool = True, - scale_qk: bool = True, - qk_norm: Optional[str] = None, - only_cross_attention: bool = False, - eps: float = 1e-5, - rescale_output_factor: float = 1.0, - residual_connection: bool = False, - _from_deprecated_attn_block: bool = False, - processor: Optional["AttnProcessor"] = None, - out_dim: int = None, - use_tpu_flash_attention: bool = False, - use_rope: bool = False, - ): - super().__init__() - self.inner_dim = out_dim if out_dim is not None else dim_head * heads - self.query_dim = query_dim - self.use_bias = bias - self.is_cross_attention = cross_attention_dim is not None - self.cross_attention_dim = ( - cross_attention_dim if cross_attention_dim is not None else query_dim - ) - self.upcast_attention = upcast_attention - self.upcast_softmax = upcast_softmax - self.rescale_output_factor = rescale_output_factor - self.residual_connection = residual_connection - self.dropout = dropout - self.fused_projections = False - self.out_dim = out_dim if out_dim is not None else query_dim - self.use_tpu_flash_attention = use_tpu_flash_attention - self.use_rope = use_rope - - # we make use of this private variable to know whether this class is loaded - # with an deprecated state dict so that we can convert it on the fly - self._from_deprecated_attn_block = _from_deprecated_attn_block - - self.scale_qk = scale_qk - self.scale = dim_head**-0.5 if self.scale_qk else 1.0 - - if qk_norm is None: - self.q_norm = nn.Identity() - self.k_norm = nn.Identity() - elif qk_norm == "rms_norm": - self.q_norm = RMSNorm(dim_head * heads, eps=1e-5) - self.k_norm = RMSNorm(dim_head * heads, eps=1e-5) - elif qk_norm == "layer_norm": - self.q_norm = nn.LayerNorm(dim_head * heads, eps=1e-5) - self.k_norm = nn.LayerNorm(dim_head * heads, eps=1e-5) - else: - raise ValueError(f"Unsupported qk_norm method: {qk_norm}") - - self.heads = out_dim // dim_head if out_dim is not None else heads - # for slice_size > 0 the attention score computation - # is split across the batch axis to save memory - # You can set slice_size with `set_attention_slice` - self.sliceable_head_dim = heads - - self.added_kv_proj_dim = added_kv_proj_dim - self.only_cross_attention = only_cross_attention - - if self.added_kv_proj_dim is None and self.only_cross_attention: - raise ValueError( - "`only_cross_attention` can only be set to True if `added_kv_proj_dim` is not None. Make sure to set either `only_cross_attention=False` or define `added_kv_proj_dim`." 
- ) - - if norm_num_groups is not None: - self.group_norm = nn.GroupNorm( - num_channels=query_dim, num_groups=norm_num_groups, eps=eps, affine=True - ) - else: - self.group_norm = None - - if spatial_norm_dim is not None: - self.spatial_norm = SpatialNorm( - f_channels=query_dim, zq_channels=spatial_norm_dim - ) - else: - self.spatial_norm = None - - if cross_attention_norm is None: - self.norm_cross = None - elif cross_attention_norm == "layer_norm": - self.norm_cross = nn.LayerNorm(self.cross_attention_dim) - elif cross_attention_norm == "group_norm": - if self.added_kv_proj_dim is not None: - # The given `encoder_hidden_states` are initially of shape - # (batch_size, seq_len, added_kv_proj_dim) before being projected - # to (batch_size, seq_len, cross_attention_dim). The norm is applied - # before the projection, so we need to use `added_kv_proj_dim` as - # the number of channels for the group norm. - norm_cross_num_channels = added_kv_proj_dim - else: - norm_cross_num_channels = self.cross_attention_dim - - self.norm_cross = nn.GroupNorm( - num_channels=norm_cross_num_channels, - num_groups=cross_attention_norm_num_groups, - eps=1e-5, - affine=True, - ) - else: - raise ValueError( - f"unknown cross_attention_norm: {cross_attention_norm}. Should be None, 'layer_norm' or 'group_norm'" - ) - - linear_cls = nn.Linear - - self.linear_cls = linear_cls - self.to_q = linear_cls(query_dim, self.inner_dim, bias=bias) - - if not self.only_cross_attention: - # only relevant for the `AddedKVProcessor` classes - self.to_k = linear_cls(self.cross_attention_dim, self.inner_dim, bias=bias) - self.to_v = linear_cls(self.cross_attention_dim, self.inner_dim, bias=bias) - else: - self.to_k = None - self.to_v = None - - if self.added_kv_proj_dim is not None: - self.add_k_proj = linear_cls(added_kv_proj_dim, self.inner_dim) - self.add_v_proj = linear_cls(added_kv_proj_dim, self.inner_dim) - - self.to_out = nn.ModuleList([]) - self.to_out.append(linear_cls(self.inner_dim, self.out_dim, bias=out_bias)) - self.to_out.append(nn.Dropout(dropout)) - - # set attention processor - # We use the AttnProcessor2_0 by default when torch 2.x is used which uses - # torch.nn.functional.scaled_dot_product_attention for native Flash/memory_efficient_attention - # but only if it has the default `scale` argument. TODO remove scale_qk check when we move to torch 2.1 - if processor is None: - processor = AttnProcessor2_0() - self.set_processor(processor) - - def set_use_tpu_flash_attention(self): - r""" - Function sets the flag in this object. The flag will enforce the usage of TPU attention kernel. - """ - self.use_tpu_flash_attention = True - - def set_processor(self, processor: "AttnProcessor") -> None: - r""" - Set the attention processor to use. - - Args: - processor (`AttnProcessor`): - The attention processor to use. - """ - # if current processor is in `self._modules` and if passed `processor` is not, we need to - # pop `processor` from `self._modules` - if ( - hasattr(self, "processor") - and isinstance(self.processor, torch.nn.Module) - and not isinstance(processor, torch.nn.Module) - ): - logger.info( - f"You are removing possibly trained weights of {self.processor} with {processor}" - ) - self._modules.pop("processor") - - self.processor = processor - - def get_processor( - self, return_deprecated_lora: bool = False - ) -> "AttentionProcessor": # noqa: F821 - r""" - Get the attention processor in use. 
- - Args: - return_deprecated_lora (`bool`, *optional*, defaults to `False`): - Set to `True` to return the deprecated LoRA attention processor. - - Returns: - "AttentionProcessor": The attention processor in use. - """ - if not return_deprecated_lora: - return self.processor - - # TODO(Sayak, Patrick). The rest of the function is needed to ensure backwards compatible - # serialization format for LoRA Attention Processors. It should be deleted once the integration - # with PEFT is completed. - is_lora_activated = { - name: module.lora_layer is not None - for name, module in self.named_modules() - if hasattr(module, "lora_layer") - } - - # 1. if no layer has a LoRA activated we can return the processor as usual - if not any(is_lora_activated.values()): - return self.processor - - # If doesn't apply LoRA do `add_k_proj` or `add_v_proj` - is_lora_activated.pop("add_k_proj", None) - is_lora_activated.pop("add_v_proj", None) - # 2. else it is not posssible that only some layers have LoRA activated - if not all(is_lora_activated.values()): - raise ValueError( - f"Make sure that either all layers or no layers have LoRA activated, but have {is_lora_activated}" - ) - - # 3. And we need to merge the current LoRA layers into the corresponding LoRA attention processor - non_lora_processor_cls_name = self.processor.__class__.__name__ - lora_processor_cls = getattr( - import_module(__name__), "LoRA" + non_lora_processor_cls_name - ) - - hidden_size = self.inner_dim - - # now create a LoRA attention processor from the LoRA layers - if lora_processor_cls in [ - LoRAAttnProcessor, - LoRAAttnProcessor2_0, - LoRAXFormersAttnProcessor, - ]: - kwargs = { - "cross_attention_dim": self.cross_attention_dim, - "rank": self.to_q.lora_layer.rank, - "network_alpha": self.to_q.lora_layer.network_alpha, - "q_rank": self.to_q.lora_layer.rank, - "q_hidden_size": self.to_q.lora_layer.out_features, - "k_rank": self.to_k.lora_layer.rank, - "k_hidden_size": self.to_k.lora_layer.out_features, - "v_rank": self.to_v.lora_layer.rank, - "v_hidden_size": self.to_v.lora_layer.out_features, - "out_rank": self.to_out[0].lora_layer.rank, - "out_hidden_size": self.to_out[0].lora_layer.out_features, - } - - if hasattr(self.processor, "attention_op"): - kwargs["attention_op"] = self.processor.attention_op - - lora_processor = lora_processor_cls(hidden_size, **kwargs) - lora_processor.to_q_lora.load_state_dict(self.to_q.lora_layer.state_dict()) - lora_processor.to_k_lora.load_state_dict(self.to_k.lora_layer.state_dict()) - lora_processor.to_v_lora.load_state_dict(self.to_v.lora_layer.state_dict()) - lora_processor.to_out_lora.load_state_dict( - self.to_out[0].lora_layer.state_dict() - ) - elif lora_processor_cls == LoRAAttnAddedKVProcessor: - lora_processor = lora_processor_cls( - hidden_size, - cross_attention_dim=self.add_k_proj.weight.shape[0], - rank=self.to_q.lora_layer.rank, - network_alpha=self.to_q.lora_layer.network_alpha, - ) - lora_processor.to_q_lora.load_state_dict(self.to_q.lora_layer.state_dict()) - lora_processor.to_k_lora.load_state_dict(self.to_k.lora_layer.state_dict()) - lora_processor.to_v_lora.load_state_dict(self.to_v.lora_layer.state_dict()) - lora_processor.to_out_lora.load_state_dict( - self.to_out[0].lora_layer.state_dict() - ) - - # only save if used - if self.add_k_proj.lora_layer is not None: - lora_processor.add_k_proj_lora.load_state_dict( - self.add_k_proj.lora_layer.state_dict() - ) - lora_processor.add_v_proj_lora.load_state_dict( - self.add_v_proj.lora_layer.state_dict() - ) - else: - 
lora_processor.add_k_proj_lora = None - lora_processor.add_v_proj_lora = None - else: - raise ValueError(f"{lora_processor_cls} does not exist.") - - return lora_processor - - def forward( - self, - hidden_states: torch.FloatTensor, - freqs_cis: Optional[Tuple[torch.FloatTensor, torch.FloatTensor]] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - skip_layer_mask: Optional[torch.Tensor] = None, - skip_layer_strategy: Optional[SkipLayerStrategy] = None, - **cross_attention_kwargs, - ) -> torch.Tensor: - r""" - The forward method of the `Attention` class. - - Args: - hidden_states (`torch.Tensor`): - The hidden states of the query. - encoder_hidden_states (`torch.Tensor`, *optional*): - The hidden states of the encoder. - attention_mask (`torch.Tensor`, *optional*): - The attention mask to use. If `None`, no mask is applied. - skip_layer_mask (`torch.Tensor`, *optional*): - The skip layer mask to use. If `None`, no mask is applied. - skip_layer_strategy (`SkipLayerStrategy`, *optional*, defaults to `None`): - Controls which layers to skip for spatiotemporal guidance. - **cross_attention_kwargs: - Additional keyword arguments to pass along to the cross attention. - - Returns: - `torch.Tensor`: The output of the attention layer. - """ - # The `Attention` class can call different attention processors / attention functions - # here we simply pass along all tensors to the selected processor class - # For standard processors that are defined here, `**cross_attention_kwargs` is empty - - attn_parameters = set( - inspect.signature(self.processor.__call__).parameters.keys() - ) - unused_kwargs = [ - k for k, _ in cross_attention_kwargs.items() if k not in attn_parameters - ] - if len(unused_kwargs) > 0: - logger.warning( - f"cross_attention_kwargs {unused_kwargs} are not expected by" - f" {self.processor.__class__.__name__} and will be ignored." - ) - cross_attention_kwargs = { - k: w for k, w in cross_attention_kwargs.items() if k in attn_parameters - } - - return self.processor( - self, - hidden_states, - freqs_cis=freqs_cis, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - skip_layer_mask=skip_layer_mask, - skip_layer_strategy=skip_layer_strategy, - **cross_attention_kwargs, - ) - - def batch_to_head_dim(self, tensor: torch.Tensor) -> torch.Tensor: - r""" - Reshape the tensor from `[batch_size, seq_len, dim]` to `[batch_size // heads, seq_len, dim * heads]`. `heads` - is the number of heads initialized while constructing the `Attention` class. - - Args: - tensor (`torch.Tensor`): The tensor to reshape. - - Returns: - `torch.Tensor`: The reshaped tensor. - """ - head_size = self.heads - batch_size, seq_len, dim = tensor.shape - tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim) - tensor = tensor.permute(0, 2, 1, 3).reshape( - batch_size // head_size, seq_len, dim * head_size - ) - return tensor - - def head_to_batch_dim(self, tensor: torch.Tensor, out_dim: int = 3) -> torch.Tensor: - r""" - Reshape the tensor from `[batch_size, seq_len, dim]` to `[batch_size, seq_len, heads, dim // heads]` `heads` is - the number of heads initialized while constructing the `Attention` class. - - Args: - tensor (`torch.Tensor`): The tensor to reshape. - out_dim (`int`, *optional*, defaults to `3`): The output dimension of the tensor. If `3`, the tensor is - reshaped to `[batch_size * heads, seq_len, dim // heads]`. - - Returns: - `torch.Tensor`: The reshaped tensor. 
- """ - - head_size = self.heads - if tensor.ndim == 3: - batch_size, seq_len, dim = tensor.shape - extra_dim = 1 - else: - batch_size, extra_dim, seq_len, dim = tensor.shape - tensor = tensor.reshape( - batch_size, seq_len * extra_dim, head_size, dim // head_size - ) - tensor = tensor.permute(0, 2, 1, 3) - - if out_dim == 3: - tensor = tensor.reshape( - batch_size * head_size, seq_len * extra_dim, dim // head_size - ) - - return tensor - - def get_attention_scores( - self, - query: torch.Tensor, - key: torch.Tensor, - attention_mask: torch.Tensor = None, - ) -> torch.Tensor: - r""" - Compute the attention scores. - - Args: - query (`torch.Tensor`): The query tensor. - key (`torch.Tensor`): The key tensor. - attention_mask (`torch.Tensor`, *optional*): The attention mask to use. If `None`, no mask is applied. - - Returns: - `torch.Tensor`: The attention probabilities/scores. - """ - dtype = query.dtype - if self.upcast_attention: - query = query.float() - key = key.float() - - if attention_mask is None: - baddbmm_input = torch.empty( - query.shape[0], - query.shape[1], - key.shape[1], - dtype=query.dtype, - device=query.device, - ) - beta = 0 - else: - baddbmm_input = attention_mask - beta = 1 - - attention_scores = torch.baddbmm( - baddbmm_input, - query, - key.transpose(-1, -2), - beta=beta, - alpha=self.scale, - ) - del baddbmm_input - - if self.upcast_softmax: - attention_scores = attention_scores.float() - - attention_probs = attention_scores.softmax(dim=-1) - del attention_scores - - attention_probs = attention_probs.to(dtype) - - return attention_probs - - def prepare_attention_mask( - self, - attention_mask: torch.Tensor, - target_length: int, - batch_size: int, - out_dim: int = 3, - ) -> torch.Tensor: - r""" - Prepare the attention mask for the attention computation. - - Args: - attention_mask (`torch.Tensor`): - The attention mask to prepare. - target_length (`int`): - The target length of the attention mask. This is the length of the attention mask after padding. - batch_size (`int`): - The batch size, which is used to repeat the attention mask. - out_dim (`int`, *optional*, defaults to `3`): - The output dimension of the attention mask. Can be either `3` or `4`. - - Returns: - `torch.Tensor`: The prepared attention mask. - """ - head_size = self.heads - if attention_mask is None: - return attention_mask - - current_length: int = attention_mask.shape[-1] - if current_length != target_length: - if attention_mask.device.type == "mps": - # HACK: MPS: Does not support padding by greater than dimension of input tensor. - # Instead, we can manually construct the padding tensor. 
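The `baddbmm` call in `get_attention_scores` above is equivalent to a scaled, biased matmul followed by a softmax; a small stand-alone check (the upcast flags are ignored for brevity, shapes are illustrative):

```python
import torch

q = torch.randn(4, 10, 64)      # (batch * heads, query_tokens, head_dim)
k = torch.randn(4, 12, 64)      # (batch * heads, key_tokens, head_dim)
bias = torch.zeros(4, 10, 12)   # additive attention bias (0 = keep)
scale = 64 ** -0.5

# baddbmm(bias, q, k^T, beta=1, alpha=scale) == q @ k^T * scale + bias
scores = torch.baddbmm(bias, q, k.transpose(-1, -2), beta=1, alpha=scale)
probs = scores.softmax(dim=-1)

reference = ((q @ k.transpose(-1, -2)) * scale + bias).softmax(dim=-1)
assert torch.allclose(probs, reference, atol=1e-6)
```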
- padding_shape = ( - attention_mask.shape[0], - attention_mask.shape[1], - target_length, - ) - padding = torch.zeros( - padding_shape, - dtype=attention_mask.dtype, - device=attention_mask.device, - ) - attention_mask = torch.cat([attention_mask, padding], dim=2) - else: - # TODO: for pipelines such as stable-diffusion, padding cross-attn mask: - # we want to instead pad by (0, remaining_length), where remaining_length is: - # remaining_length: int = target_length - current_length - # TODO: re-enable tests/models/test_models_unet_2d_condition.py#test_model_xattn_padding - attention_mask = F.pad(attention_mask, (0, target_length), value=0.0) - - if out_dim == 3: - if attention_mask.shape[0] < batch_size * head_size: - attention_mask = attention_mask.repeat_interleave(head_size, dim=0) - elif out_dim == 4: - attention_mask = attention_mask.unsqueeze(1) - attention_mask = attention_mask.repeat_interleave(head_size, dim=1) - - return attention_mask - - def norm_encoder_hidden_states( - self, encoder_hidden_states: torch.Tensor - ) -> torch.Tensor: - r""" - Normalize the encoder hidden states. Requires `self.norm_cross` to be specified when constructing the - `Attention` class. - - Args: - encoder_hidden_states (`torch.Tensor`): Hidden states of the encoder. - - Returns: - `torch.Tensor`: The normalized encoder hidden states. - """ - assert ( - self.norm_cross is not None - ), "self.norm_cross must be defined to call self.norm_encoder_hidden_states" - - if isinstance(self.norm_cross, nn.LayerNorm): - encoder_hidden_states = self.norm_cross(encoder_hidden_states) - elif isinstance(self.norm_cross, nn.GroupNorm): - # Group norm norms along the channels dimension and expects - # input to be in the shape of (N, C, *). In this case, we want - # to norm along the hidden dimension, so we need to move - # (batch_size, sequence_length, hidden_size) -> - # (batch_size, hidden_size, sequence_length) - encoder_hidden_states = encoder_hidden_states.transpose(1, 2) - encoder_hidden_states = self.norm_cross(encoder_hidden_states) - encoder_hidden_states = encoder_hidden_states.transpose(1, 2) - else: - assert False - - return encoder_hidden_states - - @staticmethod - def apply_rotary_emb( - input_tensor: torch.Tensor, - freqs_cis: Tuple[torch.FloatTensor, torch.FloatTensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - cos_freqs = freqs_cis[0] - sin_freqs = freqs_cis[1] - - t_dup = rearrange(input_tensor, "... (d r) -> ... d r", r=2) - t1, t2 = t_dup.unbind(dim=-1) - t_dup = torch.stack((-t2, t1), dim=-1) - input_tensor_rot = rearrange(t_dup, "... d r -> ... (d r)") - - out = input_tensor * cos_freqs + input_tensor_rot * sin_freqs - - return out - - -class AttnProcessor2_0: - r""" - Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). - """ - - def __init__(self): - pass - - def __call__( - self, - attn: Attention, - hidden_states: torch.FloatTensor, - freqs_cis: Tuple[torch.FloatTensor, torch.FloatTensor], - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - temb: Optional[torch.FloatTensor] = None, - skip_layer_mask: Optional[torch.FloatTensor] = None, - skip_layer_strategy: Optional[SkipLayerStrategy] = None, - *args, - **kwargs, - ) -> torch.FloatTensor: - if len(args) > 0 or kwargs.get("scale", None) is not None: - deprecation_message = "The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. 
`scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`." - deprecate("scale", "1.0.0", deprecation_message) - - residual = hidden_states - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view( - batch_size, channel, height * width - ).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape - if encoder_hidden_states is None - else encoder_hidden_states.shape - ) - - if skip_layer_mask is not None: - skip_layer_mask = skip_layer_mask.reshape(batch_size, 1, 1) - - if (attention_mask is not None) and (not attn.use_tpu_flash_attention): - attention_mask = attn.prepare_attention_mask( - attention_mask, sequence_length, batch_size - ) - # scaled_dot_product_attention expects attention_mask shape to be - # (batch, heads, source_length, target_length) - attention_mask = attention_mask.view( - batch_size, attn.heads, -1, attention_mask.shape[-1] - ) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose( - 1, 2 - ) - - query = attn.to_q(hidden_states) - query = attn.q_norm(query) - - if encoder_hidden_states is not None: - if attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states( - encoder_hidden_states - ) - key = attn.to_k(encoder_hidden_states) - key = attn.k_norm(key) - else: # if no context provided do self-attention - encoder_hidden_states = hidden_states - key = attn.to_k(hidden_states) - key = attn.k_norm(key) - if attn.use_rope: - key = attn.apply_rotary_emb(key, freqs_cis) - query = attn.apply_rotary_emb(query, freqs_cis) - - value = attn.to_v(encoder_hidden_states) - value_for_stg = value - - inner_dim = key.shape[-1] - head_dim = inner_dim // attn.heads - - query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) - - # the output of sdp = (batch, num_heads, seq_len, head_dim) - - if attn.use_tpu_flash_attention: # use tpu attention offload 'flash attention' - q_segment_indexes = None - if ( - attention_mask is not None - ): # if mask is required need to tune both segmenIds fields - # attention_mask = torch.squeeze(attention_mask).to(torch.float32) - attention_mask = attention_mask.to(torch.float32) - q_segment_indexes = torch.ones( - batch_size, query.shape[2], device=query.device, dtype=torch.float32 - ) - assert ( - attention_mask.shape[1] == key.shape[2] - ), f"ERROR: KEY SHAPE must be same as attention mask [{key.shape[2]}, {attention_mask.shape[1]}]" - - assert ( - query.shape[2] % 128 == 0 - ), f"ERROR: QUERY SHAPE must be divisible by 128 (TPU limitation) [{query.shape[2]}]" - assert ( - key.shape[2] % 128 == 0 - ), f"ERROR: KEY SHAPE must be divisible by 128 (TPU limitation) [{key.shape[2]}]" - - # run the TPU kernel implemented in jax with pallas - hidden_states_a = flash_attention( - q=query, - k=key, - v=value, - q_segment_ids=q_segment_indexes, - kv_segment_ids=attention_mask, - sm_scale=attn.scale, - ) - else: - hidden_states_a = F.scaled_dot_product_attention( - query, - key, - value, - attn_mask=attention_mask, - dropout_p=0.0, - is_causal=False, - ) - - hidden_states_a = hidden_states_a.transpose(1, 2).reshape( - batch_size, -1, attn.heads * head_dim - ) - 
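A stand-alone shape walk-through of the head split/merge around `scaled_dot_product_attention` above (dimensions are illustrative, not tied to this class):

```python
import torch
import torch.nn.functional as F

batch, tokens, heads, head_dim = 2, 16, 8, 64
q, k, v = (torch.randn(batch, tokens, heads * head_dim) for _ in range(3))

# (b, t, h*d) -> (b, h, t, d) for SDPA, then back to (b, t, h*d) afterwards.
q, k, v = (x.view(batch, -1, heads, head_dim).transpose(1, 2) for x in (q, k, v))
out = F.scaled_dot_product_attention(q, k, v)                    # (b, h, t, d)
out = out.transpose(1, 2).reshape(batch, -1, heads * head_dim)   # (b, t, h*d)
print(out.shape)                                                 # torch.Size([2, 16, 512])
```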
hidden_states_a = hidden_states_a.to(query.dtype) - - if ( - skip_layer_mask is not None - and skip_layer_strategy == SkipLayerStrategy.AttentionSkip - ): - hidden_states = hidden_states_a * skip_layer_mask + hidden_states * ( - 1.0 - skip_layer_mask - ) - elif ( - skip_layer_mask is not None - and skip_layer_strategy == SkipLayerStrategy.AttentionValues - ): - hidden_states = hidden_states_a * skip_layer_mask + value_for_stg * ( - 1.0 - skip_layer_mask - ) - else: - hidden_states = hidden_states_a - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape( - batch_size, channel, height, width - ) - if ( - skip_layer_mask is not None - and skip_layer_strategy == SkipLayerStrategy.Residual - ): - skip_layer_mask = skip_layer_mask.reshape(batch_size, 1, 1, 1) - - if attn.residual_connection: - if ( - skip_layer_mask is not None - and skip_layer_strategy == SkipLayerStrategy.Residual - ): - hidden_states = hidden_states + residual * skip_layer_mask - else: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class AttnProcessor: - r""" - Default processor for performing attention-related computations. - """ - - def __call__( - self, - attn: Attention, - hidden_states: torch.FloatTensor, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - temb: Optional[torch.FloatTensor] = None, - *args, - **kwargs, - ) -> torch.Tensor: - if len(args) > 0 or kwargs.get("scale", None) is not None: - deprecation_message = "The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`." 
- deprecate("scale", "1.0.0", deprecation_message) - - residual = hidden_states - - if attn.spatial_norm is not None: - hidden_states = attn.spatial_norm(hidden_states, temb) - - input_ndim = hidden_states.ndim - - if input_ndim == 4: - batch_size, channel, height, width = hidden_states.shape - hidden_states = hidden_states.view( - batch_size, channel, height * width - ).transpose(1, 2) - - batch_size, sequence_length, _ = ( - hidden_states.shape - if encoder_hidden_states is None - else encoder_hidden_states.shape - ) - attention_mask = attn.prepare_attention_mask( - attention_mask, sequence_length, batch_size - ) - - if attn.group_norm is not None: - hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose( - 1, 2 - ) - - query = attn.to_q(hidden_states) - - if encoder_hidden_states is None: - encoder_hidden_states = hidden_states - elif attn.norm_cross: - encoder_hidden_states = attn.norm_encoder_hidden_states( - encoder_hidden_states - ) - - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - query = attn.q_norm(query) - key = attn.k_norm(key) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - if input_ndim == 4: - hidden_states = hidden_states.transpose(-1, -2).reshape( - batch_size, channel, height, width - ) - - if attn.residual_connection: - hidden_states = hidden_states + residual - - hidden_states = hidden_states / attn.rescale_output_factor - - return hidden_states - - -class FeedForward(nn.Module): - r""" - A feed-forward layer. - - Parameters: - dim (`int`): The number of channels in the input. - dim_out (`int`, *optional*): The number of channels in the output. If not given, defaults to `dim`. - mult (`int`, *optional*, defaults to 4): The multiplier to use for the hidden dimension. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - final_dropout (`bool` *optional*, defaults to False): Apply a final dropout. - bias (`bool`, defaults to True): Whether to use a bias in the linear layer. 
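The default `"geglu"` path described above projects the input to `2 * inner_dim` and gates one half with the GELU of the other; a simplified stand-alone sketch (the in-tree `GEGLU` class comes from the diffusers-style activations, not from this snippet):

```python
import torch
import torch.nn.functional as F
from torch import nn

dim, inner_dim = 256, 1024
proj = nn.Linear(dim, inner_dim * 2)

x = torch.randn(2, 32, dim)
hidden, gate = proj(x).chunk(2, dim=-1)
out = hidden * F.gelu(gate)        # GEGLU: gated GELU
print(out.shape)                   # torch.Size([2, 32, 1024])
```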
- """ - - def __init__( - self, - dim: int, - dim_out: Optional[int] = None, - mult: int = 4, - dropout: float = 0.0, - activation_fn: str = "geglu", - final_dropout: bool = False, - inner_dim=None, - bias: bool = True, - ): - super().__init__() - if inner_dim is None: - inner_dim = int(dim * mult) - dim_out = dim_out if dim_out is not None else dim - linear_cls = nn.Linear - - if activation_fn == "gelu": - act_fn = GELU(dim, inner_dim, bias=bias) - elif activation_fn == "gelu-approximate": - act_fn = GELU(dim, inner_dim, approximate="tanh", bias=bias) - elif activation_fn == "geglu": - act_fn = GEGLU(dim, inner_dim, bias=bias) - elif activation_fn == "geglu-approximate": - act_fn = ApproximateGELU(dim, inner_dim, bias=bias) - else: - raise ValueError(f"Unsupported activation function: {activation_fn}") - - self.net = nn.ModuleList([]) - # project in - self.net.append(act_fn) - # project dropout - self.net.append(nn.Dropout(dropout)) - # project out - self.net.append(linear_cls(inner_dim, dim_out, bias=bias)) - # FF as used in Vision Transformer, MLP-Mixer, etc. have a final dropout - if final_dropout: - self.net.append(nn.Dropout(dropout)) - - def forward(self, hidden_states: torch.Tensor, scale: float = 1.0) -> torch.Tensor: - compatible_cls = (GEGLU, LoRACompatibleLinear) - for module in self.net: - if isinstance(module, compatible_cls): - hidden_states = module(hidden_states, scale) - else: - hidden_states = module(hidden_states) - return hidden_states diff --git a/ltx_video_x/models/transformers/embeddings.py b/ltx_video_x/models/transformers/embeddings.py deleted file mode 100644 index a30d6be16b4f3fe709cf24465e06eb798889ba66..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/transformers/embeddings.py +++ /dev/null @@ -1,129 +0,0 @@ -# Adapted from: https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/embeddings.py -import math - -import numpy as np -import torch -from einops import rearrange -from torch import nn - - -def get_timestep_embedding( - timesteps: torch.Tensor, - embedding_dim: int, - flip_sin_to_cos: bool = False, - downscale_freq_shift: float = 1, - scale: float = 1, - max_period: int = 10000, -): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param embedding_dim: the dimension of the output. :param max_period: controls the minimum frequency of the - embeddings. :return: an [N x dim] Tensor of positional embeddings. 
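A compact restatement of the sinusoidal embedding described above, assuming the defaults `flip_sin_to_cos=False`, `downscale_freq_shift=1`, `scale=1` and an even `embedding_dim` (so no zero padding):

```python
import math
import torch

def sinusoidal_embedding(timesteps: torch.Tensor, dim: int, max_period: int = 10000):
    half = dim // 2
    freqs = torch.exp(
        -math.log(max_period) * torch.arange(half, dtype=torch.float32) / (half - 1)
    )
    args = timesteps[:, None].float() * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)   # (N, dim)

emb = sinusoidal_embedding(torch.tensor([0, 10, 500]), dim=128)
print(emb.shape)   # torch.Size([3, 128])
```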
- """ - assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array" - - half_dim = embedding_dim // 2 - exponent = -math.log(max_period) * torch.arange( - start=0, end=half_dim, dtype=torch.float32, device=timesteps.device - ) - exponent = exponent / (half_dim - downscale_freq_shift) - - emb = torch.exp(exponent) - emb = timesteps[:, None].float() * emb[None, :] - - # scale embeddings - emb = scale * emb - - # concat sine and cosine embeddings - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1) - - # flip sine and cosine embeddings - if flip_sin_to_cos: - emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1) - - # zero pad - if embedding_dim % 2 == 1: - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - - -def get_3d_sincos_pos_embed(embed_dim, grid, w, h, f): - """ - grid_size: int of the grid height and width return: pos_embed: [grid_size*grid_size, embed_dim] or - [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid = rearrange(grid, "c (f h w) -> c f h w", h=h, w=w) - grid = rearrange(grid, "c f h w -> c h w f", h=h, w=w) - grid = grid.reshape([3, 1, w, h, f]) - pos_embed = get_3d_sincos_pos_embed_from_grid(embed_dim, grid) - pos_embed = pos_embed.transpose(1, 0, 2, 3) - return rearrange(pos_embed, "h w f c -> (f h w) c") - - -def get_3d_sincos_pos_embed_from_grid(embed_dim, grid): - if embed_dim % 3 != 0: - raise ValueError("embed_dim must be divisible by 3") - - # use half of dimensions to encode grid_h - emb_f = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[0]) # (H*W*T, D/3) - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[1]) # (H*W*T, D/3) - emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 3, grid[2]) # (H*W*T, D/3) - - emb = np.concatenate([emb_h, emb_w, emb_f], axis=-1) # (H*W*T, D) - return emb - - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for each position pos: a list of positions to be encoded: size (M,) out: (M, D) - """ - if embed_dim % 2 != 0: - raise ValueError("embed_dim must be divisible by 2") - - omega = np.arange(embed_dim // 2, dtype=np.float64) - omega /= embed_dim / 2.0 - omega = 1.0 / 10000**omega # (D/2,) - - pos_shape = pos.shape - - pos = pos.reshape(-1) - out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product - out = out.reshape([*pos_shape, -1])[0] - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=-1) # (M, D) - return emb - - -class SinusoidalPositionalEmbedding(nn.Module): - """Apply positional information to a sequence of embeddings. - - Takes in a sequence of embeddings with shape (batch_size, seq_length, embed_dim) and adds positional embeddings to - them - - Args: - embed_dim: (int): Dimension of the positional embedding. 
- max_seq_length: Maximum sequence length to apply positional embeddings - - """ - - def __init__(self, embed_dim: int, max_seq_length: int = 32): - super().__init__() - position = torch.arange(max_seq_length).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, embed_dim, 2) * (-math.log(10000.0) / embed_dim) - ) - pe = torch.zeros(1, max_seq_length, embed_dim) - pe[0, :, 0::2] = torch.sin(position * div_term) - pe[0, :, 1::2] = torch.cos(position * div_term) - self.register_buffer("pe", pe) - - def forward(self, x): - _, seq_length, _ = x.shape - x = x + self.pe[:, :seq_length] - return x diff --git a/ltx_video_x/models/transformers/symmetric_patchifier.py b/ltx_video_x/models/transformers/symmetric_patchifier.py deleted file mode 100644 index 2eca32033eef03c0dbffd7a25cca993bbda57ded..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/transformers/symmetric_patchifier.py +++ /dev/null @@ -1,84 +0,0 @@ -from abc import ABC, abstractmethod -from typing import Tuple - -import torch -from diffusers.configuration_utils import ConfigMixin -from einops import rearrange -from torch import Tensor - - -class Patchifier(ConfigMixin, ABC): - def __init__(self, patch_size: int): - super().__init__() - self._patch_size = (1, patch_size, patch_size) - - @abstractmethod - def patchify(self, latents: Tensor) -> Tuple[Tensor, Tensor]: - raise NotImplementedError("Patchify method not implemented") - - @abstractmethod - def unpatchify( - self, - latents: Tensor, - output_height: int, - output_width: int, - out_channels: int, - ) -> Tuple[Tensor, Tensor]: - pass - - @property - def patch_size(self): - return self._patch_size - - def get_latent_coords( - self, latent_num_frames, latent_height, latent_width, batch_size, device - ): - """ - Return a tensor of shape [batch_size, 3, num_patches] containing the - top-left corner latent coordinates of each latent patch. - The tensor is repeated for each batch element. 
- """ - latent_sample_coords = torch.meshgrid( - torch.arange(0, latent_num_frames, self._patch_size[0], device=device), - torch.arange(0, latent_height, self._patch_size[1], device=device), - torch.arange(0, latent_width, self._patch_size[2], device=device), - ) - latent_sample_coords = torch.stack(latent_sample_coords, dim=0) - latent_coords = latent_sample_coords.unsqueeze(0).repeat(batch_size, 1, 1, 1, 1) - latent_coords = rearrange( - latent_coords, "b c f h w -> b c (f h w)", b=batch_size - ) - return latent_coords - - -class SymmetricPatchifier(Patchifier): - def patchify(self, latents: Tensor) -> Tuple[Tensor, Tensor]: - b, _, f, h, w = latents.shape - latent_coords = self.get_latent_coords(f, h, w, b, latents.device) - latents = rearrange( - latents, - "b c (f p1) (h p2) (w p3) -> b (f h w) (c p1 p2 p3)", - p1=self._patch_size[0], - p2=self._patch_size[1], - p3=self._patch_size[2], - ) - return latents, latent_coords - - def unpatchify( - self, - latents: Tensor, - output_height: int, - output_width: int, - out_channels: int, - ) -> Tuple[Tensor, Tensor]: - output_height = output_height // self._patch_size[1] - output_width = output_width // self._patch_size[2] - latents = rearrange( - latents, - "b (f h w) (c p q) -> b c f (h p) (w q)", - h=output_height, - w=output_width, - p=self._patch_size[1], - q=self._patch_size[2], - ) - return latents diff --git a/ltx_video_x/models/transformers/transformer3d.py b/ltx_video_x/models/transformers/transformer3d.py deleted file mode 100644 index 3dc08d8e3f1669287bca04135fd63498385d014d..0000000000000000000000000000000000000000 --- a/ltx_video_x/models/transformers/transformer3d.py +++ /dev/null @@ -1,507 +0,0 @@ -# Adapted from: https://github.com/huggingface/diffusers/blob/v0.26.3/src/diffusers/models/transformers/transformer_2d.py -import math -from dataclasses import dataclass -from typing import Any, Dict, List, Optional, Union -import os -import json -import glob -from pathlib import Path - -import torch -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.models.embeddings import PixArtAlphaTextProjection -from diffusers.models.modeling_utils import ModelMixin -from diffusers.models.normalization import AdaLayerNormSingle -from diffusers.utils import BaseOutput, is_torch_version -from diffusers.utils import logging -from torch import nn -from safetensors import safe_open - - -from ltx_video.models.transformers.attention import BasicTransformerBlock -from ltx_video.utils.skip_layer_strategy import SkipLayerStrategy - -from ltx_video.utils.diffusers_config_mapping import ( - diffusers_and_ours_config_mapping, - make_hashable_key, - TRANSFORMER_KEYS_RENAME_DICT, -) - - -logger = logging.get_logger(__name__) - - -@dataclass -class Transformer3DModelOutput(BaseOutput): - """ - The output of [`Transformer2DModel`]. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete): - The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability - distributions for the unnoised latent pixels. 
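A minimal round-trip sketch for the `SymmetricPatchifier` above with `patch_size=1` (each latent voxel becomes one token); the import path follows the other `ltx_video` modules and is assumed:

```python
import torch
from ltx_video.models.transformers.symmetric_patchifier import SymmetricPatchifier

patchifier = SymmetricPatchifier(patch_size=1)
latents = torch.randn(1, 128, 9, 16, 24)        # (b, c, f, h, w)

tokens, latent_coords = patchifier.patchify(latents)
print(tokens.shape)          # torch.Size([1, 3456, 128])  -> 9 * 16 * 24 tokens
print(latent_coords.shape)   # torch.Size([1, 3, 3456])    -> (frame, y, x) index per token

restored = patchifier.unpatchify(tokens, output_height=16, output_width=24, out_channels=128)
assert torch.equal(restored, latents)           # pure reshape, exact round trip
```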
- """ - - sample: torch.FloatTensor - - -class Transformer3DModel(ModelMixin, ConfigMixin): - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - out_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - num_vector_embeds: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - use_linear_projection: bool = False, - only_cross_attention: bool = False, - double_self_attention: bool = False, - upcast_attention: bool = False, - adaptive_norm: str = "single_scale_shift", # 'single_scale_shift' or 'single_scale' - standardization_norm: str = "layer_norm", # 'layer_norm' or 'rms_norm' - norm_elementwise_affine: bool = True, - norm_eps: float = 1e-5, - attention_type: str = "default", - caption_channels: int = None, - use_tpu_flash_attention: bool = False, # if True uses the TPU attention offload ('flash attention') - qk_norm: Optional[str] = None, - positional_embedding_type: str = "rope", - positional_embedding_theta: Optional[float] = None, - positional_embedding_max_pos: Optional[List[int]] = None, - timestep_scale_multiplier: Optional[float] = None, - causal_temporal_positioning: bool = False, # For backward compatibility, will be deprecated - ): - super().__init__() - self.use_tpu_flash_attention = ( - use_tpu_flash_attention # FIXME: push config down to the attention modules - ) - self.use_linear_projection = use_linear_projection - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - self.inner_dim = inner_dim - self.patchify_proj = nn.Linear(in_channels, inner_dim, bias=True) - self.positional_embedding_type = positional_embedding_type - self.positional_embedding_theta = positional_embedding_theta - self.positional_embedding_max_pos = positional_embedding_max_pos - self.use_rope = self.positional_embedding_type == "rope" - self.timestep_scale_multiplier = timestep_scale_multiplier - - if self.positional_embedding_type == "absolute": - raise ValueError("Absolute positional embedding is no longer supported") - elif self.positional_embedding_type == "rope": - if positional_embedding_theta is None: - raise ValueError( - "If `positional_embedding_type` type is rope, `positional_embedding_theta` must also be defined" - ) - if positional_embedding_max_pos is None: - raise ValueError( - "If `positional_embedding_type` type is rope, `positional_embedding_max_pos` must also be defined" - ) - - # 3. Define transformers blocks - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - attention_bias=attention_bias, - only_cross_attention=only_cross_attention, - double_self_attention=double_self_attention, - upcast_attention=upcast_attention, - adaptive_norm=adaptive_norm, - standardization_norm=standardization_norm, - norm_elementwise_affine=norm_elementwise_affine, - norm_eps=norm_eps, - attention_type=attention_type, - use_tpu_flash_attention=use_tpu_flash_attention, - qk_norm=qk_norm, - use_rope=self.use_rope, - ) - for d in range(num_layers) - ] - ) - - # 4. 
Define output layers - self.out_channels = in_channels if out_channels is None else out_channels - self.norm_out = nn.LayerNorm(inner_dim, elementwise_affine=False, eps=1e-6) - self.scale_shift_table = nn.Parameter( - torch.randn(2, inner_dim) / inner_dim**0.5 - ) - self.proj_out = nn.Linear(inner_dim, self.out_channels) - - self.adaln_single = AdaLayerNormSingle( - inner_dim, use_additional_conditions=False - ) - if adaptive_norm == "single_scale": - self.adaln_single.linear = nn.Linear(inner_dim, 4 * inner_dim, bias=True) - - self.caption_projection = None - if caption_channels is not None: - self.caption_projection = PixArtAlphaTextProjection( - in_features=caption_channels, hidden_size=inner_dim - ) - - self.gradient_checkpointing = False - - def set_use_tpu_flash_attention(self): - r""" - Function sets the flag in this object and propagates down the children. The flag will enforce the usage of TPU - attention kernel. - """ - logger.info("ENABLE TPU FLASH ATTENTION -> TRUE") - self.use_tpu_flash_attention = True - # push config down to the attention modules - for block in self.transformer_blocks: - block.set_use_tpu_flash_attention() - - def create_skip_layer_mask( - self, - batch_size: int, - num_conds: int, - ptb_index: int, - skip_block_list: Optional[List[int]] = None, - ): - if skip_block_list is None or len(skip_block_list) == 0: - return None - num_layers = len(self.transformer_blocks) - mask = torch.ones( - (num_layers, batch_size * num_conds), device=self.device, dtype=self.dtype - ) - for block_idx in skip_block_list: - mask[block_idx, ptb_index::num_conds] = 0 - return mask - - def _set_gradient_checkpointing(self, module, value=False): - if hasattr(module, "gradient_checkpointing"): - module.gradient_checkpointing = value - - def get_fractional_positions(self, indices_grid): - fractional_positions = torch.stack( - [ - indices_grid[:, i] / self.positional_embedding_max_pos[i] - for i in range(3) - ], - dim=-1, - ) - return fractional_positions - - def precompute_freqs_cis(self, indices_grid, spacing="exp"): - dtype = torch.float32 # We need full precision in the freqs_cis computation. 
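For reference while reading `precompute_freqs_cis`, this is what `get_fractional_positions` above produces: per-token `(frame, y, x)` indices normalized by `positional_embedding_max_pos`, so RoPE operates on positions in `[0, 1]` (the grid and maxima below are illustrative):

```python
import torch

positional_embedding_max_pos = [20, 2048, 2048]     # max (frames, height, width), illustrative
indices_grid = torch.tensor(                        # (batch, 3, num_tokens)
    [[[0.0, 1.0, 2.0],                              # frame index per token
      [0.0, 0.0, 32.0],                             # y index per token
      [0.0, 64.0, 64.0]]]                           # x index per token
)

fractional_positions = torch.stack(
    [indices_grid[:, i] / positional_embedding_max_pos[i] for i in range(3)],
    dim=-1,
)
print(fractional_positions.shape)   # torch.Size([1, 3, 3]) -> (batch, num_tokens, 3 axes)
```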
- dim = self.inner_dim - theta = self.positional_embedding_theta - - fractional_positions = self.get_fractional_positions(indices_grid) - - start = 1 - end = theta - device = fractional_positions.device - if spacing == "exp": - indices = theta ** ( - torch.linspace( - math.log(start, theta), - math.log(end, theta), - dim // 6, - device=device, - dtype=dtype, - ) - ) - indices = indices.to(dtype=dtype) - elif spacing == "exp_2": - indices = 1.0 / theta ** (torch.arange(0, dim, 6, device=device) / dim) - indices = indices.to(dtype=dtype) - elif spacing == "linear": - indices = torch.linspace(start, end, dim // 6, device=device, dtype=dtype) - elif spacing == "sqrt": - indices = torch.linspace( - start**2, end**2, dim // 6, device=device, dtype=dtype - ).sqrt() - - indices = indices * math.pi / 2 - - if spacing == "exp_2": - freqs = ( - (indices * fractional_positions.unsqueeze(-1)) - .transpose(-1, -2) - .flatten(2) - ) - else: - freqs = ( - (indices * (fractional_positions.unsqueeze(-1) * 2 - 1)) - .transpose(-1, -2) - .flatten(2) - ) - - cos_freq = freqs.cos().repeat_interleave(2, dim=-1) - sin_freq = freqs.sin().repeat_interleave(2, dim=-1) - if dim % 6 != 0: - cos_padding = torch.ones_like(cos_freq[:, :, : dim % 6]) - sin_padding = torch.zeros_like(cos_freq[:, :, : dim % 6]) - cos_freq = torch.cat([cos_padding, cos_freq], dim=-1) - sin_freq = torch.cat([sin_padding, sin_freq], dim=-1) - return cos_freq.to(self.dtype), sin_freq.to(self.dtype) - - def load_state_dict( - self, - state_dict: Dict, - *args, - **kwargs, - ): - if any([key.startswith("model.diffusion_model.") for key in state_dict.keys()]): - state_dict = { - key.replace("model.diffusion_model.", ""): value - for key, value in state_dict.items() - if key.startswith("model.diffusion_model.") - } - super().load_state_dict(state_dict, *args, **kwargs) - - @classmethod - def from_pretrained( - cls, - pretrained_model_path: Optional[Union[str, os.PathLike]], - *args, - **kwargs, - ): - pretrained_model_path = Path(pretrained_model_path) - if pretrained_model_path.is_dir(): - config_path = pretrained_model_path / "transformer" / "config.json" - with open(config_path, "r") as f: - config = make_hashable_key(json.load(f)) - - assert config in diffusers_and_ours_config_mapping, ( - "Provided diffusers checkpoint config for transformer is not suppported. " - "We only support diffusers configs found in Lightricks/LTX-Video." 
- ) - - config = diffusers_and_ours_config_mapping[config] - state_dict = {} - ckpt_paths = ( - pretrained_model_path - / "transformer" - / "diffusion_pytorch_model*.safetensors" - ) - dict_list = glob.glob(str(ckpt_paths)) - for dict_path in dict_list: - part_dict = {} - with safe_open(dict_path, framework="pt", device="cpu") as f: - for k in f.keys(): - part_dict[k] = f.get_tensor(k) - state_dict.update(part_dict) - - for key in list(state_dict.keys()): - new_key = key - for replace_key, rename_key in TRANSFORMER_KEYS_RENAME_DICT.items(): - new_key = new_key.replace(replace_key, rename_key) - state_dict[new_key] = state_dict.pop(key) - - with torch.device("meta"): - transformer = cls.from_config(config) - transformer.load_state_dict(state_dict, assign=True, strict=True) - elif pretrained_model_path.is_file() and str(pretrained_model_path).endswith( - ".safetensors" - ): - comfy_single_file_state_dict = {} - with safe_open(pretrained_model_path, framework="pt", device="cpu") as f: - metadata = f.metadata() - for k in f.keys(): - comfy_single_file_state_dict[k] = f.get_tensor(k) - configs = json.loads(metadata["config"]) - transformer_config = configs["transformer"] - with torch.device("meta"): - transformer = Transformer3DModel.from_config(transformer_config) - transformer.load_state_dict(comfy_single_file_state_dict, assign=True) - return transformer - - def forward( - self, - hidden_states: torch.Tensor, - indices_grid: torch.Tensor, - encoder_hidden_states: Optional[torch.Tensor] = None, - timestep: Optional[torch.LongTensor] = None, - class_labels: Optional[torch.LongTensor] = None, - cross_attention_kwargs: Dict[str, Any] = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - skip_layer_mask: Optional[torch.Tensor] = None, - skip_layer_strategy: Optional[SkipLayerStrategy] = None, - return_dict: bool = True, - ): - """ - The [`Transformer2DModel`] forward method. - - Args: - hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous): - Input `hidden_states`. - indices_grid (`torch.LongTensor` of shape `(batch size, 3, num latent pixels)`): - encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.LongTensor`, *optional*): - Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`. - class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*): - Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in - `AdaLayerZeroNorm`. - cross_attention_kwargs ( `Dict[str, Any]`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). - attention_mask ( `torch.Tensor`, *optional*): - An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask - is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large - negative values to the attention scores corresponding to "discard" tokens. 
- encoder_attention_mask ( `torch.Tensor`, *optional*): - Cross-attention mask applied to `encoder_hidden_states`. Two formats supported: - - * Mask `(batch, sequence_length)` True = keep, False = discard. - * Bias `(batch, 1, sequence_length)` 0 = keep, -10000 = discard. - - If `ndim == 2`: will be interpreted as a mask, then converted into a bias consistent with the format - above. This bias will be added to the cross-attention scores. - skip_layer_mask ( `torch.Tensor`, *optional*): - A mask of shape `(num_layers, batch)` that indicates which layers to skip. `0` at position - `layer, batch_idx` indicates that the layer should be skipped for the corresponding batch index. - skip_layer_strategy ( `SkipLayerStrategy`, *optional*, defaults to `None`): - Controls which layers are skipped when calculating a perturbed latent for spatiotemporal guidance. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unets.unet_2d_condition.UNet2DConditionOutput`] instead of a plain - tuple. - - Returns: - If `return_dict` is True, an [`~models.transformer_2d.Transformer2DModelOutput`] is returned, otherwise a - `tuple` where the first element is the sample tensor. - """ - # for tpu attention offload 2d token masks are used. No need to transform. - if not self.use_tpu_flash_attention: - # ensure attention_mask is a bias, and give it a singleton query_tokens dimension. - # we may have done this conversion already, e.g. if we came here via UNet2DConditionModel#forward. - # we can tell by counting dims; if ndim == 2: it's a mask rather than a bias. - # expects mask of shape: - # [batch, key_tokens] - # adds singleton query_tokens dimension: - # [batch, 1, key_tokens] - # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes: - # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn) - # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn) - if attention_mask is not None and attention_mask.ndim == 2: - # assume that mask is expressed as: - # (1 = keep, 0 = discard) - # convert mask into a bias that can be added to attention scores: - # (keep = +0, discard = -10000.0) - attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # convert encoder_attention_mask to a bias the same way we do for attention_mask - if encoder_attention_mask is not None and encoder_attention_mask.ndim == 2: - encoder_attention_mask = ( - 1 - encoder_attention_mask.to(hidden_states.dtype) - ) * -10000.0 - encoder_attention_mask = encoder_attention_mask.unsqueeze(1) - - # 1. Input - hidden_states = self.patchify_proj(hidden_states) - - if self.timestep_scale_multiplier: - timestep = self.timestep_scale_multiplier * timestep - - freqs_cis = self.precompute_freqs_cis(indices_grid) - - batch_size = hidden_states.shape[0] - timestep, embedded_timestep = self.adaln_single( - timestep.flatten(), - {"resolution": None, "aspect_ratio": None}, - batch_size=batch_size, - hidden_dtype=hidden_states.dtype, - ) - # Second dimension is 1 or number of tokens (if timestep_per_token) - timestep = timestep.view(batch_size, -1, timestep.shape[-1]) - embedded_timestep = embedded_timestep.view( - batch_size, -1, embedded_timestep.shape[-1] - ) - - # 2. 
Blocks - if self.caption_projection is not None: - batch_size = hidden_states.shape[0] - encoder_hidden_states = self.caption_projection(encoder_hidden_states) - encoder_hidden_states = encoder_hidden_states.view( - batch_size, -1, hidden_states.shape[-1] - ) - - for block_idx, block in enumerate(self.transformer_blocks): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - ckpt_kwargs: Dict[str, Any] = ( - {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {} - ) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(block), - hidden_states, - freqs_cis, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - timestep, - cross_attention_kwargs, - class_labels, - ( - skip_layer_mask[block_idx] - if skip_layer_mask is not None - else None - ), - skip_layer_strategy, - **ckpt_kwargs, - ) - else: - hidden_states = block( - hidden_states, - freqs_cis=freqs_cis, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - class_labels=class_labels, - skip_layer_mask=( - skip_layer_mask[block_idx] - if skip_layer_mask is not None - else None - ), - skip_layer_strategy=skip_layer_strategy, - ) - - # 3. Output - scale_shift_values = ( - self.scale_shift_table[None, None] + embedded_timestep[:, :, None] - ) - shift, scale = scale_shift_values[:, :, 0], scale_shift_values[:, :, 1] - hidden_states = self.norm_out(hidden_states) - # Modulation - hidden_states = hidden_states * (1 + scale) + shift - hidden_states = self.proj_out(hidden_states) - if not return_dict: - return (hidden_states,) - - return Transformer3DModelOutput(sample=hidden_states) diff --git a/ltx_video_x/pipelines/__init__.py b/ltx_video_x/pipelines/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/pipelines/crf_compressor.py b/ltx_video_x/pipelines/crf_compressor.py deleted file mode 100644 index 9b9380afb7f92e0a2379c9db4cf5ce9f5a20942c..0000000000000000000000000000000000000000 --- a/ltx_video_x/pipelines/crf_compressor.py +++ /dev/null @@ -1,50 +0,0 @@ -import av -import torch -import io -import numpy as np - - -def _encode_single_frame(output_file, image_array: np.ndarray, crf): - container = av.open(output_file, "w", format="mp4") - try: - stream = container.add_stream( - "libx264", rate=1, options={"crf": str(crf), "preset": "veryfast"} - ) - stream.height = image_array.shape[0] - stream.width = image_array.shape[1] - av_frame = av.VideoFrame.from_ndarray(image_array, format="rgb24").reformat( - format="yuv420p" - ) - container.mux(stream.encode(av_frame)) - container.mux(stream.encode()) - finally: - container.close() - - -def _decode_single_frame(video_file): - container = av.open(video_file) - try: - stream = next(s for s in container.streams if s.type == "video") - frame = next(container.decode(stream)) - finally: - container.close() - return frame.to_ndarray(format="rgb24") - - -def compress(image: torch.Tensor, crf=29): - if crf == 0: - return image - - image_array = ( - (image[: (image.shape[0] // 2) * 2, : (image.shape[1] // 2) * 2] * 255.0) - .byte() - .cpu() - .numpy() - ) - with io.BytesIO() as 
output_file: - _encode_single_frame(output_file, image_array, crf) - video_bytes = output_file.getvalue() - with io.BytesIO(video_bytes) as video_file: - image_array = _decode_single_frame(video_file) - tensor = torch.tensor(image_array, dtype=image.dtype, device=image.device) / 255.0 - return tensor diff --git a/ltx_video_x/pipelines/pipeline_ltx_video.py b/ltx_video_x/pipelines/pipeline_ltx_video.py deleted file mode 100644 index 6df3926c8ef96eec3c41f0ef9f09fdee662f47fe..0000000000000000000000000000000000000000 --- a/ltx_video_x/pipelines/pipeline_ltx_video.py +++ /dev/null @@ -1,2291 +0,0 @@ -# Adapted from: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pixart_alpha/pipeline_pixart_alpha.py -import copy -import inspect -import math -import re -from contextlib import nullcontext -from dataclasses import dataclass -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -import torch -import torch.nn.functional as F -from diffusers.image_processor import VaeImageProcessor -from diffusers.models import AutoencoderKL -from diffusers.pipelines.pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from diffusers.schedulers import DPMSolverMultistepScheduler -from diffusers.utils import deprecate, logging -from diffusers.utils.torch_utils import randn_tensor -from einops import rearrange -from transformers import ( - T5EncoderModel, - T5Tokenizer, - AutoModelForCausalLM, - AutoProcessor, - AutoTokenizer, -) - -from ltx_video.models.autoencoders.causal_video_autoencoder import ( - CausalVideoAutoencoder, -) -from ltx_video.models.autoencoders.vae_encode import ( - get_vae_size_scale_factor, - latent_to_pixel_coords, - vae_decode, - vae_encode, -) -from ltx_video.models.transformers.symmetric_patchifier import Patchifier -from ltx_video.models.transformers.transformer3d import Transformer3DModel -from ltx_video.schedulers.rf import TimestepShifter -from ltx_video.utils.skip_layer_strategy import SkipLayerStrategy -from ltx_video.utils.prompt_enhance_utils import generate_cinematic_prompt -from ltx_video.models.autoencoders.latent_upsampler import LatentUpsampler -from ltx_video.models.autoencoders.vae_encode import ( - un_normalize_latents, - normalize_latents, -) - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -ASPECT_RATIO_1024_BIN = { - "0.25": [512.0, 2048.0], - "0.28": [512.0, 1856.0], - "0.32": [576.0, 1792.0], - "0.33": [576.0, 1728.0], - "0.35": [576.0, 1664.0], - "0.4": [640.0, 1600.0], - "0.42": [640.0, 1536.0], - "0.48": [704.0, 1472.0], - "0.5": [704.0, 1408.0], - "0.52": [704.0, 1344.0], - "0.57": [768.0, 1344.0], - "0.6": [768.0, 1280.0], - "0.68": [832.0, 1216.0], - "0.72": [832.0, 1152.0], - "0.78": [896.0, 1152.0], - "0.82": [896.0, 1088.0], - "0.88": [960.0, 1088.0], - "0.94": [960.0, 1024.0], - "1.0": [1024.0, 1024.0], - "1.07": [1024.0, 960.0], - "1.13": [1088.0, 960.0], - "1.21": [1088.0, 896.0], - "1.29": [1152.0, 896.0], - "1.38": [1152.0, 832.0], - "1.46": [1216.0, 832.0], - "1.67": [1280.0, 768.0], - "1.75": [1344.0, 768.0], - "2.0": [1408.0, 704.0], - "2.09": [1472.0, 704.0], - "2.4": [1536.0, 640.0], - "2.5": [1600.0, 640.0], - "3.0": [1728.0, 576.0], - "4.0": [2048.0, 512.0], -} - -ASPECT_RATIO_512_BIN = { - "0.25": [256.0, 1024.0], - "0.28": [256.0, 928.0], - "0.32": [288.0, 896.0], - "0.33": [288.0, 864.0], - "0.35": [288.0, 832.0], - "0.4": [320.0, 800.0], - "0.42": [320.0, 768.0], - "0.48": [352.0, 736.0], - "0.5": [352.0, 704.0], - "0.52": [352.0, 672.0], - "0.57": [384.0, 672.0], - 
"0.6": [384.0, 640.0], - "0.68": [416.0, 608.0], - "0.72": [416.0, 576.0], - "0.78": [448.0, 576.0], - "0.82": [448.0, 544.0], - "0.88": [480.0, 544.0], - "0.94": [480.0, 512.0], - "1.0": [512.0, 512.0], - "1.07": [512.0, 480.0], - "1.13": [544.0, 480.0], - "1.21": [544.0, 448.0], - "1.29": [576.0, 448.0], - "1.38": [576.0, 416.0], - "1.46": [608.0, 416.0], - "1.67": [640.0, 384.0], - "1.75": [672.0, 384.0], - "2.0": [704.0, 352.0], - "2.09": [736.0, 352.0], - "2.4": [768.0, 320.0], - "2.5": [800.0, 320.0], - "3.0": [864.0, 288.0], - "4.0": [1024.0, 256.0], -} - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps -def retrieve_timesteps( - scheduler, - num_inference_steps: Optional[int] = None, - device: Optional[Union[str, torch.device]] = None, - timesteps: Optional[List[int]] = None, - skip_initial_inference_steps: int = 0, - skip_final_inference_steps: int = 0, - **kwargs, -): - """ - Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles - custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. - - Args: - scheduler (`SchedulerMixin`): - The scheduler to get timesteps from. - num_inference_steps (`int`): - The number of diffusion steps used when generating samples with a pre-trained model. If used, - `timesteps` must be `None`. - device (`str` or `torch.device`, *optional*): - The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - timesteps (`List[int]`, *optional*): - Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default - timestep spacing strategy of the scheduler is used. If `timesteps` is passed, `num_inference_steps` - must be `None`. - max_timestep ('float', *optional*, defaults to 1.0): - The initial noising level for image-to-image/video-to-video. The list if timestamps will be - truncated to start with a timestamp greater or equal to this. - - Returns: - `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the - second element is the number of inference steps. - """ - if timesteps is not None: - accepts_timesteps = "timesteps" in set( - inspect.signature(scheduler.set_timesteps).parameters.keys() - ) - if not accepts_timesteps: - raise ValueError( - f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" - f" timestep schedules. Please check whether you are using the correct scheduler." 
- ) - scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) - timesteps = scheduler.timesteps - num_inference_steps = len(timesteps) - else: - scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) - timesteps = scheduler.timesteps - - if ( - skip_initial_inference_steps < 0 - or skip_final_inference_steps < 0 - or skip_initial_inference_steps + skip_final_inference_steps - >= num_inference_steps - ): - raise ValueError( - "invalid skip inference step values: must be non-negative and the sum of skip_initial_inference_steps and skip_final_inference_steps must be less than the number of inference steps" - ) - - timesteps = timesteps[ - skip_initial_inference_steps : len(timesteps) - skip_final_inference_steps - ] - scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) - num_inference_steps = len(timesteps) - - return timesteps, num_inference_steps - - -@dataclass -class ConditioningItem: - """ - Defines a single frame-conditioning item - a single frame or a sequence of frames. - - Attributes: - media_item (torch.Tensor): shape=(b, 3, f, h, w). The media item to condition on. - media_frame_number (int): The start-frame number of the media item in the generated video. - conditioning_strength (float): The strength of the conditioning (1.0 = full conditioning). - media_x (Optional[int]): Optional left x coordinate of the media item in the generated frame. - media_y (Optional[int]): Optional top y coordinate of the media item in the generated frame. - """ - - media_item: torch.Tensor - media_frame_number: int - conditioning_strength: float - media_x: Optional[int] = None - media_y: Optional[int] = None - - - -@dataclass -class LatentConditioningItem: - latent_tensor: torch.Tensor - media_frame_number: int - conditioning_strength: float - - -class LTXVideoPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using LTX-Video. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`T5EncoderModel`]): - Frozen text-encoder. This uses - [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel), specifically the - [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant. - tokenizer (`T5Tokenizer`): - Tokenizer of class - [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer). - transformer ([`Transformer2DModel`]): - A text conditioned `Transformer2DModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `transformer` to denoise the encoded image latents. 
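A hedged usage sketch for the `retrieve_timesteps` helper defined above; the scheduler here is a stub that simply records whatever schedule it is given, since the trimming path requires a scheduler whose `set_timesteps` accepts a custom `timesteps` list:

```python
import torch

class _StubScheduler:
    def set_timesteps(self, num_inference_steps=None, device=None, timesteps=None):
        if timesteps is not None:
            self.timesteps = torch.as_tensor(timesteps, device=device)
        else:
            self.timesteps = torch.linspace(1000.0, 1.0, num_inference_steps, device=device)

timesteps, num_steps = retrieve_timesteps(
    _StubScheduler(),
    num_inference_steps=40,
    device="cpu",
    skip_initial_inference_steps=8,   # drop the 8 noisiest steps
    skip_final_inference_steps=2,     # and the 2 cleanest ones
)
print(num_steps, len(timesteps))      # 30 30
```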
- """ - - bad_punct_regex = re.compile( - r"[" - + "#®•©™&@·º½¾¿¡§~" - + r"\)" - + r"\(" - + r"\]" - + r"\[" - + r"\}" - + r"\{" - + r"\|" - + "\\" - + r"\/" - + r"\*" - + r"]{1,}" - ) # noqa - - _optional_components = [ - "tokenizer", - "text_encoder", - "prompt_enhancer_image_caption_model", - "prompt_enhancer_image_caption_processor", - "prompt_enhancer_llm_model", - "prompt_enhancer_llm_tokenizer", - ] - model_cpu_offload_seq = "prompt_enhancer_image_caption_model->prompt_enhancer_llm_model->text_encoder->transformer->vae" - - def __init__( - self, - tokenizer: T5Tokenizer, - text_encoder: T5EncoderModel, - vae: AutoencoderKL, - transformer: Transformer3DModel, - scheduler: DPMSolverMultistepScheduler, - patchifier: Patchifier, - prompt_enhancer_image_caption_model: AutoModelForCausalLM, - prompt_enhancer_image_caption_processor: AutoProcessor, - prompt_enhancer_llm_model: AutoModelForCausalLM, - prompt_enhancer_llm_tokenizer: AutoTokenizer, - allowed_inference_steps: Optional[List[float]] = None, - ): - super().__init__() - - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - vae=vae, - transformer=transformer, - scheduler=scheduler, - patchifier=patchifier, - prompt_enhancer_image_caption_model=prompt_enhancer_image_caption_model, - prompt_enhancer_image_caption_processor=prompt_enhancer_image_caption_processor, - prompt_enhancer_llm_model=prompt_enhancer_llm_model, - prompt_enhancer_llm_tokenizer=prompt_enhancer_llm_tokenizer, - ) - - self.video_scale_factor, self.vae_scale_factor, _ = get_vae_size_scale_factor( - self.vae - ) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - - self.allowed_inference_steps = allowed_inference_steps - - def mask_text_embeddings(self, emb, mask): - if emb.shape[0] == 1: - keep_index = mask.sum().item() - return emb[:, :, :keep_index, :], keep_index - else: - masked_feature = emb * mask[:, None, :, None] - return masked_feature, emb.shape[2] - - # Adapted from diffusers.pipelines.deepfloyd_if.pipeline_if.encode_prompt - def encode_prompt( - self, - prompt: Union[str, List[str]], - do_classifier_free_guidance: bool = True, - negative_prompt: str = "", - num_images_per_prompt: int = 1, - device: Optional[torch.device] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - prompt_attention_mask: Optional[torch.FloatTensor] = None, - negative_prompt_attention_mask: Optional[torch.FloatTensor] = None, - text_encoder_max_tokens: int = 256, - **kwargs, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - negative_prompt (`str` or `List[str]`, *optional*): - The prompt not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` - instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). For - This should be "". - do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): - whether to use classifier free guidance or not - num_images_per_prompt (`int`, *optional*, defaults to 1): - number of images that should be generated per prompt - device: (`torch.device`, *optional*): - torch device to place the resulting embeddings on - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. 
- negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. - """ - - if "mask_feature" in kwargs: - deprecation_message = "The use of `mask_feature` is deprecated. It is no longer used in any computation and that doesn't affect the end results. It will be removed in a future version." - deprecate("mask_feature", "1.0.0", deprecation_message, standard_warn=False) - - if device is None: - device = self._execution_device - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # See Section 3.1. of the paper. - max_length = ( - text_encoder_max_tokens # TPU supports only lengths multiple of 128 - ) - if prompt_embeds is None: - assert ( - self.text_encoder is not None - ), "You should provide either prompt_embeds or self.text_encoder should not be None," - text_enc_device = next(self.text_encoder.parameters()).device - prompt = self._text_preprocessing(prompt) - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=max_length, - truncation=True, - add_special_tokens=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer( - prompt, padding="longest", return_tensors="pt" - ).input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[ - -1 - ] and not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {max_length} tokens: {removed_text}" - ) - - prompt_attention_mask = text_inputs.attention_mask - prompt_attention_mask = prompt_attention_mask.to(text_enc_device) - prompt_attention_mask = prompt_attention_mask.to(device) - - prompt_embeds = self.text_encoder( - text_input_ids.to(text_enc_device), attention_mask=prompt_attention_mask - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - dtype = self.text_encoder.dtype - elif self.transformer is not None: - dtype = self.transformer.dtype - else: - dtype = None - - prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings and attention mask for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view( - bs_embed * num_images_per_prompt, seq_len, -1 - ) - prompt_attention_mask = prompt_attention_mask.repeat(1, num_images_per_prompt) - prompt_attention_mask = prompt_attention_mask.view( - bs_embed * num_images_per_prompt, -1 - ) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens = self._text_preprocessing(negative_prompt) - uncond_tokens = uncond_tokens * batch_size - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - negative_prompt_attention_mask = uncond_input.attention_mask - negative_prompt_attention_mask = negative_prompt_attention_mask.to( - text_enc_device - ) - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(text_enc_device), - 
attention_mask=negative_prompt_attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to( - dtype=dtype, device=device - ) - - negative_prompt_embeds = negative_prompt_embeds.repeat( - 1, num_images_per_prompt, 1 - ) - negative_prompt_embeds = negative_prompt_embeds.view( - batch_size * num_images_per_prompt, seq_len, -1 - ) - - negative_prompt_attention_mask = negative_prompt_attention_mask.repeat( - 1, num_images_per_prompt - ) - negative_prompt_attention_mask = negative_prompt_attention_mask.view( - bs_embed * num_images_per_prompt, -1 - ) - else: - negative_prompt_embeds = None - negative_prompt_attention_mask = None - - return ( - prompt_embeds, - prompt_attention_mask, - negative_prompt_embeds, - negative_prompt_attention_mask, - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set( - inspect.signature(self.scheduler.step).parameters.keys() - ) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set( - inspect.signature(self.scheduler.step).parameters.keys() - ) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - height, - width, - negative_prompt, - prompt_embeds=None, - negative_prompt_embeds=None, - prompt_attention_mask=None, - negative_prompt_attention_mask=None, - enhance_prompt=False, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError( - f"`height` and `width` have to be divisible by 8 but are {height} and {width}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and ( - not isinstance(prompt, str) and not isinstance(prompt, list) - ): - raise ValueError( - f"`prompt` has to be of type `str` or `list` but is {type(prompt)}" - ) - - if prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and prompt_attention_mask is None: - raise ValueError( - "Must provide `prompt_attention_mask` when specifying `prompt_embeds`." 
- ) - - if ( - negative_prompt_embeds is not None - and negative_prompt_attention_mask is None - ): - raise ValueError( - "Must provide `negative_prompt_attention_mask` when specifying `negative_prompt_embeds`." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - if prompt_attention_mask.shape != negative_prompt_attention_mask.shape: - raise ValueError( - "`prompt_attention_mask` and `negative_prompt_attention_mask` must have the same shape when passed directly, but" - f" got: `prompt_attention_mask` {prompt_attention_mask.shape} != `negative_prompt_attention_mask`" - f" {negative_prompt_attention_mask.shape}." - ) - - if enhance_prompt: - assert ( - self.prompt_enhancer_image_caption_model is not None - ), "Image caption model must be initialized if enhance_prompt is True" - assert ( - self.prompt_enhancer_image_caption_processor is not None - ), "Image caption processor must be initialized if enhance_prompt is True" - assert ( - self.prompt_enhancer_llm_model is not None - ), "Text prompt enhancer model must be initialized if enhance_prompt is True" - assert ( - self.prompt_enhancer_llm_tokenizer is not None - ), "Text prompt enhancer tokenizer must be initialized if enhance_prompt is True" - - def _text_preprocessing(self, text): - if not isinstance(text, (tuple, list)): - text = [text] - - def process(text: str): - text = text.strip() - return text - - return [process(t) for t in text] - - @staticmethod - def add_noise_to_image_conditioning_latents( - t: float, - init_latents: torch.Tensor, - latents: torch.Tensor, - noise_scale: float, - conditioning_mask: torch.Tensor, - generator, - eps=1e-6, - ): - """ - Add timestep-dependent noise to the hard-conditioning latents. - This helps with motion continuity, especially when conditioned on a single frame. - """ - noise = randn_tensor( - latents.shape, - generator=generator, - device=latents.device, - dtype=latents.dtype, - ) - # Add noise only to hard-conditioning latents (conditioning_mask = 1.0) - need_to_noise = (conditioning_mask > 1.0 - eps).unsqueeze(-1) - noised_latents = init_latents + noise_scale * noise * (t**2) - latents = torch.where(need_to_noise, noised_latents, latents) - return latents - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents( - self, - latents: torch.Tensor | None, - media_items: torch.Tensor | None, - timestep: float, - latent_shape: torch.Size | Tuple[Any, ...], - dtype: torch.dtype, - device: torch.device, - generator: torch.Generator | List[torch.Generator], - vae_per_channel_normalize: bool = True, - ): - """ - Prepare the initial latent tensor to be denoised. - The latents are either pure noise or a noised version of the encoded media items. - Args: - latents (`torch.FloatTensor` or `None`): - The latents to use (provided by the user) or `None` to create new latents. - media_items (`torch.FloatTensor` or `None`): - An image or video to be updated using img2img or vid2vid. The media item is encoded and noised. - timestep (`float`): - The timestep to noise the encoded media_items to. - latent_shape (`torch.Size`): - The target latent shape. - dtype (`torch.dtype`): - The target dtype. 
- device (`torch.device`): - The target device. - generator (`torch.Generator` or `List[torch.Generator]`): - Generator(s) to be used for the noising process. - vae_per_channel_normalize ('bool'): - When encoding the media_items, whether to normalize the latents per-channel. - Returns: - `torch.FloatTensor`: The latents to be used for the denoising process. This is a tensor of shape - (batch_size, num_channels, height, width). - """ - if isinstance(generator, list) and len(generator) != latent_shape[0]: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {latent_shape[0]}. Make sure the batch size matches the length of the generators." - ) - - # Initialize the latents with the given latents or encoded media item, if provided - assert ( - latents is None or media_items is None - ), "Cannot provide both latents and media_items. Please provide only one of the two." - - #aduc carlex patch - #assert ( - # latents is None and media_items is None or timestep < 1.0 - #), "Input media_item or latents are provided, but they will be replaced with noise." - - if media_items is not None: - latents = vae_encode( - media_items.to(dtype=self.vae.dtype, device=self.vae.device), - self.vae, - vae_per_channel_normalize=vae_per_channel_normalize, - ) - if latents is not None: - assert ( - latents.shape == latent_shape - ), f"Latents have to be of shape {latent_shape} but are {latents.shape}." - latents = latents.to(device=device, dtype=dtype) - - # For backward compatibility, generate in the "patchified" shape and rearrange - b, c, f, h, w = latent_shape - noise = randn_tensor( - (b, f * h * w, c), generator=generator, device=device, dtype=dtype - ) - noise = rearrange(noise, "b (f h w) c -> b c f h w", f=f, h=h, w=w) - - # scale the initial noise by the standard deviation required by the scheduler - noise = noise * self.scheduler.init_noise_sigma - - if latents is None: - latents = noise - else: - # Noise the latents to the required (first) timestep - latents = timestep * noise + (1 - timestep) * latents - - return latents - - @staticmethod - def classify_height_width_bin( - height: int, width: int, ratios: dict - ) -> Tuple[int, int]: - """Returns binned height and width.""" - ar = float(height / width) - closest_ratio = min(ratios.keys(), key=lambda ratio: abs(float(ratio) - ar)) - default_hw = ratios[closest_ratio] - return int(default_hw[0]), int(default_hw[1]) - - @staticmethod - def resize_and_crop_tensor( - samples: torch.Tensor, new_width: int, new_height: int - ) -> torch.Tensor: - n_frames, orig_height, orig_width = samples.shape[-3:] - - # Check if resizing is needed - if orig_height != new_height or orig_width != new_width: - ratio = max(new_height / orig_height, new_width / orig_width) - resized_width = int(orig_width * ratio) - resized_height = int(orig_height * ratio) - - # Resize - samples = LTXVideoPipeline.resize_tensor( - samples, resized_height, resized_width - ) - - # Center Crop - start_x = (resized_width - new_width) // 2 - end_x = start_x + new_width - start_y = (resized_height - new_height) // 2 - end_y = start_y + new_height - samples = samples[..., start_y:end_y, start_x:end_x] - - return samples - - @staticmethod - def resize_tensor(media_items, height, width): - n_frames = media_items.shape[2] - if media_items.shape[-2:] != (height, width): - media_items = rearrange(media_items, "b c n h w -> (b n) c h w") - media_items = F.interpolate( - media_items, - size=(height, width), - mode="bilinear", - 
align_corners=False, - ) - media_items = rearrange(media_items, "(b n) c h w -> b c n h w", n=n_frames) - return media_items - - @torch.no_grad() - def __call__( - self, - height: int, - width: int, - num_frames: int, - frame_rate: float, - prompt: Union[str, List[str]] = None, - negative_prompt: str = "", - num_inference_steps: int = 20, - skip_initial_inference_steps: int = 0, - skip_final_inference_steps: int = 0, - timesteps: List[int] = None, - guidance_scale: Union[float, List[float]] = 4.5, - cfg_star_rescale: bool = False, - skip_layer_strategy: Optional[SkipLayerStrategy] = None, - skip_block_list: Optional[Union[List[List[int]], List[int]]] = None, - stg_scale: Union[float, List[float]] = 1.0, - rescaling_scale: Union[float, List[float]] = 0.7, - guidance_timesteps: Optional[List[int]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - prompt_attention_mask: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_attention_mask: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, - conditioning_items: Optional[List[ConditioningItem]] = None, - decode_timestep: Union[List[float], float] = 0.0, - decode_noise_scale: Optional[List[float]] = None, - mixed_precision: bool = False, - offload_to_cpu: bool = False, - enhance_prompt: bool = False, - text_encoder_max_tokens: int = 256, - stochastic_sampling: bool = False, - media_items: Optional[torch.Tensor] = None, - tone_map_compression_ratio: float = 0.0, - **kwargs, - ) -> Union[ImagePipelineOutput, Tuple]: - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. If `timesteps` is provided, this parameter is ignored. - skip_initial_inference_steps (`int`, *optional*, defaults to 0): - The number of initial timesteps to skip. After calculating the timesteps, this number of timesteps will - be removed from the beginning of the timesteps list. Meaning the highest-timesteps values will not run. - skip_final_inference_steps (`int`, *optional*, defaults to 0): - The number of final timesteps to skip. After calculating the timesteps, this number of timesteps will - be removed from the end of the timesteps list. Meaning the lowest-timesteps values will not run. - timesteps (`List[int]`, *optional*): - Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps` - timesteps are used. Must be in descending order. - guidance_scale (`float`, *optional*, defaults to 4.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). 
-                `guidance_scale` is defined as `w` of equation 2 of the [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
-                usually at the expense of lower image quality.
-            cfg_star_rescale (`bool`, *optional*, defaults to `False`):
-                If set to `True`, applies the CFG star rescale. Scales the negative prediction according to the dot
-                product between the positive and negative predictions.
-            num_images_per_prompt (`int`, *optional*, defaults to 1):
-                The number of images to generate per prompt.
-            height (`int`, *optional*, defaults to self.unet.config.sample_size):
-                The height in pixels of the generated image.
-            width (`int`, *optional*, defaults to self.unet.config.sample_size):
-                The width in pixels of the generated image.
-            eta (`float`, *optional*, defaults to 0.0):
-                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
-                [`schedulers.DDIMScheduler`], will be ignored for others.
-            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
-                to make generation deterministic.
-            latents (`torch.FloatTensor`, *optional*):
-                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
-                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
-            prompt_embeds (`torch.FloatTensor`, *optional*):
-                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
-                provided, text embeddings will be generated from the `prompt` input argument.
-            prompt_attention_mask (`torch.FloatTensor`, *optional*): Pre-generated attention mask for text embeddings.
-            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
-                Pre-generated negative text embeddings. This negative prompt should be "". If not
-                provided, negative_prompt_embeds will be generated from the `negative_prompt` input argument.
-            negative_prompt_attention_mask (`torch.FloatTensor`, *optional*):
-                Pre-generated attention mask for negative text embeddings.
-            output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
-                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
-            return_dict (`bool`, *optional*, defaults to `True`):
-                Whether to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple.
-            callback_on_step_end (`Callable`, *optional*):
-                A function that is called at the end of each denoising step during inference. The function is called
-                with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
-                callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by
-                `callback_on_step_end_tensor_inputs`.
-            use_resolution_binning (`bool`, defaults to `True`):
-                If set to `True`, the requested height and width are first mapped to the closest resolutions using
-                `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
-                the requested resolution. Useful for generating non-square images.
-            enhance_prompt (`bool`, *optional*, defaults to `False`):
-                If set to `True`, the prompt is enhanced using an LLM model.
- text_encoder_max_tokens (`int`, *optional*, defaults to `256`): - The maximum number of tokens to use for the text encoder. - stochastic_sampling (`bool`, *optional*, defaults to `False`): - If set to `True`, the sampling is stochastic. If set to `False`, the sampling is deterministic. - media_items ('torch.Tensor', *optional*): - The input media item used for image-to-image / video-to-video. - tone_map_compression_ratio: compression ratio for tone mapping, defaults to 0.0. - If set to 0.0, no tone mapping is applied. If set to 1.0 - full compression is applied. - Examples: - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is - returned where the first element is a list with the generated images - """ - if "mask_feature" in kwargs: - deprecation_message = "The use of `mask_feature` is deprecated. It is no longer used in any computation and that doesn't affect the end results. It will be removed in a future version." - deprecate("mask_feature", "1.0.0", deprecation_message, standard_warn=False) - - is_video = kwargs.get("is_video", False) - self.check_inputs( - prompt, - height, - width, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - prompt_attention_mask, - negative_prompt_attention_mask, - ) - - # 2. Default height and width to transformer - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - - self.video_scale_factor = self.video_scale_factor if is_video else 1 - vae_per_channel_normalize = kwargs.get("vae_per_channel_normalize", True) - image_cond_noise_scale = kwargs.get("image_cond_noise_scale", 0.0) - - latent_height = height // self.vae_scale_factor - latent_width = width // self.vae_scale_factor - latent_num_frames = num_frames // self.video_scale_factor - if isinstance(self.vae, CausalVideoAutoencoder) and is_video: - latent_num_frames += 1 - latent_shape = ( - batch_size * num_images_per_prompt, - self.transformer.config.in_channels, - latent_num_frames, - latent_height, - latent_width, - ) - - # Prepare the list of denoising time-steps - - retrieve_timesteps_kwargs = {} - if isinstance(self.scheduler, TimestepShifter): - retrieve_timesteps_kwargs["samples_shape"] = latent_shape - - assert ( - skip_initial_inference_steps == 0 - or latents is not None - or media_items is not None - ), ( - f"skip_initial_inference_steps ({skip_initial_inference_steps}) is used for image-to-image/video-to-video - " - "media_item or latents should be provided." - ) - - timesteps, num_inference_steps = retrieve_timesteps( - self.scheduler, - num_inference_steps, - device, - timesteps, - skip_initial_inference_steps=skip_initial_inference_steps, - skip_final_inference_steps=skip_final_inference_steps, - **retrieve_timesteps_kwargs, - ) - - if self.allowed_inference_steps is not None: - for timestep in [round(x, 4) for x in timesteps.tolist()]: - assert ( - timestep in self.allowed_inference_steps - ), f"Invalid inference timestep {timestep}. Allowed timesteps are {self.allowed_inference_steps}." 
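Right above, the requested pixel dimensions and frame count are reduced to the latent grid that the transformer actually denoises. A small sketch of that arithmetic; the scale factors and channel count are assumed example values, since the real ones come from `get_vae_size_scale_factor(self.vae)` and `self.transformer.config` at runtime:

```python
# Illustrative sketch of the latent-shape computation performed above.
def latent_shape_for(
    height: int,
    width: int,
    num_frames: int,
    batch_size: int = 1,
    num_images_per_prompt: int = 1,
    vae_scale_factor: int = 32,    # assumed spatial downscale factor
    video_scale_factor: int = 8,   # assumed temporal downscale factor
    in_channels: int = 128,        # assumed transformer input channels
    is_causal_video: bool = True,
) -> tuple:
    latent_height = height // vae_scale_factor
    latent_width = width // vae_scale_factor
    latent_num_frames = num_frames // video_scale_factor
    if is_causal_video:
        # A causal video VAE keeps one extra leading latent frame.
        latent_num_frames += 1
    return (
        batch_size * num_images_per_prompt,
        in_channels,
        latent_num_frames,
        latent_height,
        latent_width,
    )


# e.g. a 768x512 clip with 121 frames -> (1, 128, 16, 16, 24)
print(latent_shape_for(height=512, width=768, num_frames=121))
```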
-
-        if guidance_timesteps:
-            guidance_mapping = []
-            for timestep in timesteps:
-                indices = [
-                    i for i, val in enumerate(guidance_timesteps) if val <= timestep
-                ]
-                # assert len(indices) > 0, f"No guidance timestep found for {timestep}"
-                guidance_mapping.append(
-                    indices[0] if len(indices) > 0 else (len(guidance_timesteps) - 1)
-                )
-
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
-        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
-        # corresponds to doing no classifier free guidance.
-        if not isinstance(guidance_scale, List):
-            guidance_scale = [guidance_scale] * len(timesteps)
-        else:
-            guidance_scale = [
-                guidance_scale[guidance_mapping[i]] for i in range(len(timesteps))
-            ]
-
-        if not isinstance(stg_scale, List):
-            stg_scale = [stg_scale] * len(timesteps)
-        else:
-            stg_scale = [stg_scale[guidance_mapping[i]] for i in range(len(timesteps))]
-
-        if not isinstance(rescaling_scale, List):
-            rescaling_scale = [rescaling_scale] * len(timesteps)
-        else:
-            rescaling_scale = [
-                rescaling_scale[guidance_mapping[i]] for i in range(len(timesteps))
-            ]
-
-        # Normalize skip_block_list to always be None or a list of lists matching timesteps
-        if skip_block_list is not None:
-            # Convert single list to list of lists if needed
-            if len(skip_block_list) == 0 or not isinstance(skip_block_list[0], list):
-                skip_block_list = [skip_block_list] * len(timesteps)
-            else:
-                new_skip_block_list = []
-                for i, timestep in enumerate(timesteps):
-                    new_skip_block_list.append(skip_block_list[guidance_mapping[i]])
-                skip_block_list = new_skip_block_list
-
-        if enhance_prompt:
-            self.prompt_enhancer_image_caption_model = (
-                self.prompt_enhancer_image_caption_model.to(self._execution_device)
-            )
-            self.prompt_enhancer_llm_model = self.prompt_enhancer_llm_model.to(
-                self._execution_device
-            )
-
-            prompt = generate_cinematic_prompt(
-                self.prompt_enhancer_image_caption_model,
-                self.prompt_enhancer_image_caption_processor,
-                self.prompt_enhancer_llm_model,
-                self.prompt_enhancer_llm_tokenizer,
-                prompt,
-                conditioning_items,
-                max_new_tokens=text_encoder_max_tokens,
-            )
-
-            # --- [OUR SECRET WIRETAP HERE] ---
-            print("--- [ASSISTANT DIRECTOR LOG (PROMPT ENHANCER)] ---")
-            print("Original prompt from the Maestro:", kwargs.get("original_prompt_for_logging", "N/A"))  # We need to pass this in
-            print("FINAL ENHANCED PROMPT (sent to LTX):", prompt)
-            print("--- [END OF ASSISTANT DIRECTOR LOG] ---")
-            # --- [END OF WIRETAP] ---
-
-
-        # 3.
Encode input prompt - if self.text_encoder is not None: - self.text_encoder = self.text_encoder.to(self._execution_device) - - ( - prompt_embeds, - prompt_attention_mask, - negative_prompt_embeds, - negative_prompt_attention_mask, - ) = self.encode_prompt( - prompt, - True, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - prompt_attention_mask=prompt_attention_mask, - negative_prompt_attention_mask=negative_prompt_attention_mask, - text_encoder_max_tokens=text_encoder_max_tokens, - ) - - if offload_to_cpu and self.text_encoder is not None: - self.text_encoder = self.text_encoder.cpu() - - self.transformer = self.transformer.to(self._execution_device) - - prompt_embeds_batch = prompt_embeds - prompt_attention_mask_batch = prompt_attention_mask - negative_prompt_embeds = ( - torch.zeros_like(prompt_embeds) - if negative_prompt_embeds is None - else negative_prompt_embeds - ) - negative_prompt_attention_mask = ( - torch.zeros_like(prompt_attention_mask) - if negative_prompt_attention_mask is None - else negative_prompt_attention_mask - ) - - prompt_embeds_batch = torch.cat( - [negative_prompt_embeds, prompt_embeds, prompt_embeds], dim=0 - ) - prompt_attention_mask_batch = torch.cat( - [ - negative_prompt_attention_mask, - prompt_attention_mask, - prompt_attention_mask, - ], - dim=0, - ) - # 4. Prepare the initial latents using the provided media and conditioning items - - # Prepare the initial latents tensor, shape = (b, c, f, h, w) - latents = self.prepare_latents( - latents=latents, - media_items=media_items, - timestep=timesteps[0], - latent_shape=latent_shape, - dtype=prompt_embeds.dtype, - device=device, - generator=generator, - vae_per_channel_normalize=vae_per_channel_normalize, - ) - - # Update the latents with the conditioning items and patchify them into (b, n, c) - latents, pixel_coords, conditioning_mask, num_cond_latents = ( - self.prepare_conditioning( - conditioning_items=conditioning_items, - init_latents=latents, - num_frames=num_frames, - height=height, - width=width, - vae_per_channel_normalize=vae_per_channel_normalize, - generator=generator, - ) - ) - init_latents = latents.clone() # Used for image_cond_noise_update - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = max( - len(timesteps) - num_inference_steps * self.scheduler.order, 0 - ) - - orig_conditioning_mask = conditioning_mask - - # Befor compiling this code please be aware: - # This code might generate different input shapes if some timesteps have no STG or CFG. - # This means that the codes might need to be compiled mutliple times. - # To avoid that, use the same STG and CFG values for all timesteps. 
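The denoising loop that follows evaluates the transformer on a stacked batch of [negative, text, text] inputs whenever classifier-free guidance and spatio-temporal guidance (STG) are both active, then recombines the three predictions. A compact, standalone sketch of that recombination with illustrative shapes; the in-loop version additionally handles the CFG-only and STG-only cases and the `cfg_star_rescale` projection:

```python
# Sketch only: combining [unconditional, text, text-with-skipped-blocks] predictions.
import torch


def combine_guidance(
    noise_pred: torch.Tensor,   # (3 * b, n_tokens, c), ordered [uncond, text, text_perturb]
    guidance_scale: float,
    stg_scale: float,
    rescaling_scale: float,
) -> torch.Tensor:
    uncond, text, text_perturb = noise_pred.chunk(3)

    # Classifier-free guidance: push the prediction away from the unconditional branch.
    pred = uncond + guidance_scale * (text - uncond)

    # STG: push away from the prediction obtained with some transformer blocks skipped.
    pred = pred + stg_scale * (text - text_perturb)

    # Optional rescaling toward the per-sample std of the text prediction.
    if rescaling_scale != 1.0 and stg_scale > 0.0:
        b = text.shape[0]
        factor = text.view(b, -1).std(dim=1, keepdim=True) / pred.view(b, -1).std(dim=1, keepdim=True)
        factor = rescaling_scale * factor + (1 - rescaling_scale)
        pred = pred * factor.view(b, 1, 1)
    return pred


# Toy check with random tensors.
combined = combine_guidance(torch.randn(3, 4, 8), guidance_scale=4.5, stg_scale=1.0, rescaling_scale=0.7)
print(combined.shape)  # torch.Size([1, 4, 8])
```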
- - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - do_classifier_free_guidance = guidance_scale[i] > 1.0 - do_spatio_temporal_guidance = stg_scale[i] > 0 - do_rescaling = rescaling_scale[i] != 1.0 - - num_conds = 1 - if do_classifier_free_guidance: - num_conds += 1 - if do_spatio_temporal_guidance: - num_conds += 1 - - if do_classifier_free_guidance and do_spatio_temporal_guidance: - indices = slice(batch_size * 0, batch_size * 3) - elif do_classifier_free_guidance: - indices = slice(batch_size * 0, batch_size * 2) - elif do_spatio_temporal_guidance: - indices = slice(batch_size * 1, batch_size * 3) - else: - indices = slice(batch_size * 1, batch_size * 2) - - # Prepare skip layer masks - skip_layer_mask: Optional[torch.Tensor] = None - if do_spatio_temporal_guidance: - if skip_block_list is not None: - skip_layer_mask = self.transformer.create_skip_layer_mask( - batch_size, num_conds, num_conds - 1, skip_block_list[i] - ) - - batch_pixel_coords = torch.cat([pixel_coords] * num_conds) - conditioning_mask = orig_conditioning_mask - if conditioning_mask is not None and is_video: - assert num_images_per_prompt == 1 - conditioning_mask = torch.cat([conditioning_mask] * num_conds) - fractional_coords = batch_pixel_coords.to(torch.float32) - fractional_coords[:, 0] = fractional_coords[:, 0] * (1.0 / frame_rate) - - if conditioning_mask is not None and image_cond_noise_scale > 0.0: - latents = self.add_noise_to_image_conditioning_latents( - t, - init_latents, - latents, - image_cond_noise_scale, - orig_conditioning_mask, - generator, - ) - - latent_model_input = ( - torch.cat([latents] * num_conds) if num_conds > 1 else latents - ) - latent_model_input = self.scheduler.scale_model_input( - latent_model_input, t - ) - - current_timestep = t - if not torch.is_tensor(current_timestep): - # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can - # This would be a good case for the `match` statement (Python 3.10+) - is_mps = latent_model_input.device.type == "mps" - if isinstance(current_timestep, float): - dtype = torch.float32 if is_mps else torch.float64 - else: - dtype = torch.int32 if is_mps else torch.int64 - current_timestep = torch.tensor( - [current_timestep], - dtype=dtype, - device=latent_model_input.device, - ) - elif len(current_timestep.shape) == 0: - current_timestep = current_timestep[None].to( - latent_model_input.device - ) - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - current_timestep = current_timestep.expand( - latent_model_input.shape[0] - ).unsqueeze(-1) - - if conditioning_mask is not None: - # Conditioning latents have an initial timestep and noising level of (1.0 - conditioning_mask) - # and will start to be denoised when the current timestep is lower than their conditioning timestep. 
- current_timestep = torch.min( - current_timestep, 1.0 - conditioning_mask - ) - - # Choose the appropriate context manager based on `mixed_precision` - if mixed_precision: - context_manager = torch.autocast(device.type, dtype=torch.bfloat16) - else: - context_manager = nullcontext() # Dummy context manager - - # predict noise model_output - with context_manager: - noise_pred = self.transformer( - latent_model_input.to(self.transformer.dtype), - indices_grid=fractional_coords, - encoder_hidden_states=prompt_embeds_batch[indices].to( - self.transformer.dtype - ), - encoder_attention_mask=prompt_attention_mask_batch[indices], - timestep=current_timestep, - skip_layer_mask=skip_layer_mask, - skip_layer_strategy=skip_layer_strategy, - return_dict=False, - )[0] - - # perform guidance - if do_spatio_temporal_guidance: - noise_pred_text, noise_pred_text_perturb = noise_pred.chunk( - num_conds - )[-2:] - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(num_conds)[:2] - - if cfg_star_rescale: - # Rescales the unconditional noise prediction using the projection of the conditional prediction onto it: - # α = (⟨ε_text, ε_uncond⟩ / ||ε_uncond||²), then ε_uncond ← α * ε_uncond - # where ε_text is the conditional noise prediction and ε_uncond is the unconditional one. - positive_flat = noise_pred_text.view(batch_size, -1) - negative_flat = noise_pred_uncond.view(batch_size, -1) - dot_product = torch.sum( - positive_flat * negative_flat, dim=1, keepdim=True - ) - squared_norm = ( - torch.sum(negative_flat**2, dim=1, keepdim=True) + 1e-8 - ) - alpha = dot_product / squared_norm - noise_pred_uncond = alpha * noise_pred_uncond - - noise_pred = noise_pred_uncond + guidance_scale[i] * ( - noise_pred_text - noise_pred_uncond - ) - elif do_spatio_temporal_guidance: - noise_pred = noise_pred_text - if do_spatio_temporal_guidance: - noise_pred = noise_pred + stg_scale[i] * ( - noise_pred_text - noise_pred_text_perturb - ) - if do_rescaling and stg_scale[i] > 0.0: - noise_pred_text_std = noise_pred_text.view(batch_size, -1).std( - dim=1, keepdim=True - ) - noise_pred_std = noise_pred.view(batch_size, -1).std( - dim=1, keepdim=True - ) - - factor = noise_pred_text_std / noise_pred_std - factor = rescaling_scale[i] * factor + (1 - rescaling_scale[i]) - - noise_pred = noise_pred * factor.view(batch_size, 1, 1) - - current_timestep = current_timestep[:1] - # learned sigma - if ( - self.transformer.config.out_channels // 2 - == self.transformer.config.in_channels - ): - noise_pred = noise_pred.chunk(2, dim=1)[0] - - # compute previous image: x_t -> x_t-1 - latents = self.denoising_step( - latents, - noise_pred, - current_timestep, - orig_conditioning_mask, - t, - extra_step_kwargs, - stochastic_sampling=stochastic_sampling, - ) - - # call the callback, if provided - if i == len(timesteps) - 1 or ( - (i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0 - ): - progress_bar.update() - - if callback_on_step_end is not None: - callback_on_step_end(self, i, t, {}) - - if offload_to_cpu: - self.transformer = self.transformer.cpu() - if self._execution_device == "cuda": - torch.cuda.empty_cache() - - # Remove the added conditioning latents - latents = latents[:, num_cond_latents:] - - latents = self.patchifier.unpatchify( - latents=latents, - output_height=latent_height, - output_width=latent_width, - out_channels=self.transformer.in_channels - // math.prod(self.patchifier.patch_size), - ) - if output_type != "latent": - if self.vae.decoder.timestep_conditioning: - noise = 
torch.randn_like(latents) - if not isinstance(decode_timestep, list): - decode_timestep = [decode_timestep] * latents.shape[0] - if decode_noise_scale is None: - decode_noise_scale = decode_timestep - elif not isinstance(decode_noise_scale, list): - decode_noise_scale = [decode_noise_scale] * latents.shape[0] - - decode_timestep = torch.tensor(decode_timestep).to(latents.device) - decode_noise_scale = torch.tensor(decode_noise_scale).to( - latents.device - )[:, None, None, None, None] - latents = ( - latents * (1 - decode_noise_scale) + noise * decode_noise_scale - ) - else: - decode_timestep = None - latents = self.tone_map_latents(latents, tone_map_compression_ratio) - image = vae_decode( - latents, - self.vae, - is_video, - vae_per_channel_normalize=kwargs["vae_per_channel_normalize"], - timestep=decode_timestep, - ) - - image = self.image_processor.postprocess(image, output_type=output_type) - - else: - image = latents - - # Offload all models - self.maybe_free_model_hooks() - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) - - def denoising_step( - self, - latents: torch.Tensor, - noise_pred: torch.Tensor, - current_timestep: torch.Tensor, - conditioning_mask: torch.Tensor, - t: float, - extra_step_kwargs, - t_eps=1e-6, - stochastic_sampling=False, - ): - """ - Perform the denoising step for the required tokens, based on the current timestep and - conditioning mask: - Conditioning latents have an initial timestep and noising level of (1.0 - conditioning_mask) - and will start to be denoised when the current timestep is equal or lower than their - conditioning timestep. - (hard-conditioning latents with conditioning_mask = 1.0 are never denoised) - """ - # Denoise the latents using the scheduler - denoised_latents = self.scheduler.step( - noise_pred, - t if current_timestep is None else current_timestep, - latents, - **extra_step_kwargs, - return_dict=False, - stochastic_sampling=stochastic_sampling, - )[0] - - if conditioning_mask is None: - return denoised_latents - - tokens_to_denoise_mask = (t - t_eps < (1.0 - conditioning_mask)).unsqueeze(-1) - return torch.where(tokens_to_denoise_mask, denoised_latents, latents) - - - - #patch carlex deforms - # ltx_video/pipelines/pipeline_ltx_video.py (Versão com Indentação Corrigida) - - def prepare_conditioning( - self, - conditioning_items: Optional[List[Union[ConditioningItem, "LatentConditioningItem"]]], - init_latents: torch.Tensor, - num_frames: int, - height: int, - width: int, - vae_per_channel_normalize: bool = False, - generator=None, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, int]: - """ - [MODIFICADO] Lida corretamente com ConditioningItem (pixels) e LatentConditioningItem com caminhos lógicos separados. - """ - assert isinstance(self.vae, CausalVideoAutoencoder) - - if not conditioning_items: - # Se não houver itens, apenas patchify e retorna. 
- init_latents, init_latent_coords = self.patchifier.patchify(latents=init_latents) - init_pixel_coords = latent_to_pixel_coords( - init_latent_coords, self.vae, causal_fix=self.transformer.config.causal_temporal_positioning - ) - return init_latents, init_pixel_coords, None, 0 - - init_conditioning_mask = torch.zeros( - init_latents[:, 0, :, :, :].shape, dtype=torch.float32, device=init_latents.device - ) - extra_conditioning_latents = [] - extra_conditioning_pixel_coords = [] - extra_conditioning_mask = [] - extra_conditioning_num_latents = 0 - - # --- [INÍCIO DA CORREÇÃO] --- - # Verifica o tipo do primeiro item para decidir o modo de processamento. - is_latent_mode = hasattr(conditioning_items[0], 'latent_tensor') - - if is_latent_mode: - # --- CAMINHO 1: Processamento exclusivo para LatentConditioningItem --- - for item in conditioning_items: - media_item_latents = item.latent_tensor.to(dtype=init_latents.dtype, device=init_latents.device) - media_frame_number = item.media_frame_number - strength = item.conditioning_strength - n_latent_frames = media_item_latents.shape[2] - - if media_frame_number == 0: - # Para latentes, assumimos que eles preenchem o quadro, sem posicionamento espacial. - f_l, h_l, w_l = media_item_latents.shape[-3:] - init_latents[:, :, :f_l, :h_l, :w_l] = torch.lerp(init_latents[:, :, :f_l, :h_l, :w_l], media_item_latents, strength) - init_conditioning_mask[:, :f_l, :h_l, :w_l] = strength - else: - # Lógica simplificada para frames não-iniciais de latentes - noise = randn_tensor(media_item_latents.shape, generator=generator, device=media_item_latents.device, dtype=media_item_latents.dtype) - media_item_latents = torch.lerp(noise, media_item_latents, strength) - - patched_latents, latent_coords = self.patchifier.patchify(latents=media_item_latents) - pixel_coords = latent_to_pixel_coords(latent_coords, self.vae, causal_fix=self.transformer.config.causal_temporal_positioning) - pixel_coords[:, 0] += media_frame_number - extra_conditioning_num_latents += patched_latents.shape[1] - - new_mask = torch.full(patched_latents.shape[:2], strength, dtype=torch.float32, device=init_latents.device) - - extra_conditioning_latents.append(patched_latents) - extra_conditioning_pixel_coords.append(pixel_coords) - extra_conditioning_mask.append(new_mask) - - else: - # --- CAMINHO 2: Processamento exclusivo para ConditioningItem (pixels) --- - for item in conditioning_items: - if not isinstance(item, ConditioningItem): continue - - item = self._resize_conditioning_item(item, height, width) - media_item_latents = vae_encode( - item.media_item.to(dtype=self.vae.dtype, device=self.vae.device), - self.vae, vae_per_channel_normalize=vae_per_channel_normalize - ).to(dtype=init_latents.dtype) - - media_frame_number = item.media_frame_number - strength = item.conditioning_strength - n_pixel_frames = item.media_item.shape[2] - - if media_frame_number == 0: - media_item_latents, l_x, l_y = self._get_latent_spatial_position(media_item_latents, item, height, width, strip_latent_border=True) - f_l, h_l, w_l = media_item_latents.shape[-3:] - init_latents[:, :, :f_l, l_y:l_y+h_l, l_x:l_x+w_l] = torch.lerp(init_latents[:, :, :f_l, l_y:l_y+h_l, l_x:l_x+w_l], media_item_latents, strength) - init_conditioning_mask[:, :f_l, l_y:l_y+h_l, l_x:l_x+w_l] = strength - else: - if n_pixel_frames > 1: - (init_latents, init_conditioning_mask, media_item_latents) = self._handle_non_first_conditioning_sequence( - init_latents, init_conditioning_mask, media_item_latents, media_frame_number, strength - ) - if 
media_item_latents is not None: - noise = randn_tensor(media_item_latents.shape, generator=generator, device=media_item_latents.device, dtype=media_item_latents.dtype) - media_item_latents = torch.lerp(noise, media_item_latents, strength) - patched_latents, latent_coords = self.patchifier.patchify(latents=media_item_latents) - pixel_coords = latent_to_pixel_coords(latent_coords, self.vae, causal_fix=self.transformer.config.causal_temporal_positioning) - pixel_coords[:, 0] += media_frame_number - extra_conditioning_num_latents += patched_latents.shape[1] - new_mask = torch.full(patched_latents.shape[:2], strength, dtype=torch.float32, device=init_latents.device) - extra_conditioning_latents.append(patched_latents) - extra_conditioning_pixel_coords.append(pixel_coords) - extra_conditioning_mask.append(new_mask) - - # --- [FIM DA CORREÇÃO] --- - - # O resto da função (patchify final e concatenação) permanece o mesmo - init_latents, init_latent_coords = self.patchifier.patchify(latents=init_latents) - init_pixel_coords = latent_to_pixel_coords( - init_latent_coords, self.vae, causal_fix=self.transformer.config.causal_temporal_positioning - ) - init_conditioning_mask, _ = self.patchifier.patchify(latents=init_conditioning_mask.unsqueeze(1)) - init_conditioning_mask = init_conditioning_mask.squeeze(-1) - if extra_conditioning_latents: - init_latents = torch.cat([*extra_conditioning_latents, init_latents], dim=1) - init_pixel_coords = torch.cat([*extra_conditioning_pixel_coords, init_pixel_coords], dim=2) - init_conditioning_mask = torch.cat([*extra_conditioning_mask, init_conditioning_mask], dim=1) - if self.transformer.use_tpu_flash_attention: - init_latents = init_latents[:, :-extra_conditioning_num_latents] - init_pixel_coords = init_pixel_coords[:, :, :-extra_conditioning_num_latents] - init_conditioning_mask = init_conditioning_mask[:, :-extra_conditioning_num_latents] - - return init_latents, init_pixel_coords, init_conditioning_mask, extra_conditioning_num_latents - - # --- [INÍCIO DA SEÇÃO CORRIGIDA] --- - def prepare_conditioning12( - self, - conditioning_items: Optional[List[Union[ConditioningItem, "LatentConditioningItem"]]], - init_latents: torch.Tensor, - num_frames: int, - height: int, - width: int, - vae_per_channel_normalize: bool = False, - generator=None, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, int]: - """ - Prepara os tokens de condicionamento. - [MODIFICADO] Lida corretamente com ConditioningItem (pixels) e LatentConditioningItem. - """ - assert isinstance(self.vae, CausalVideoAutoencoder) - - if conditioning_items: - init_conditioning_mask = torch.zeros( - init_latents[:, 0, :, :, :].shape, - dtype=torch.float32, - device=init_latents.device, - ) - extra_conditioning_latents = [] - extra_conditioning_pixel_coords = [] - extra_conditioning_mask = [] - extra_conditioning_num_latents = 0 - - for conditioning_item in conditioning_items: - media_item_latents = None - - # Usamos hasattr para evitar importação circular. - is_latent_item = hasattr(conditioning_item, 'latent_tensor') - - if is_latent_item: - # Se for um item latente, pulamos o pré-processamento de pixels. - media_item_latents = conditioning_item.latent_tensor.to(dtype=init_latents.dtype, device=init_latents.device) - media_frame_number = conditioning_item.media_frame_number - strength = conditioning_item.conditioning_strength - n_frames = media_item_latents.shape[2] - - elif isinstance(conditioning_item, ConditioningItem): - # Se for um item de pixel, seguimos o caminho original. 
- conditioning_item = self._resize_conditioning_item(conditioning_item, height, width) - media_item = conditioning_item.media_item - media_frame_number = conditioning_item.media_frame_number - strength = conditioning_item.conditioning_strength - b, c, n_frames, h, w = media_item.shape - - media_item_latents = vae_encode( - media_item.to(dtype=self.vae.dtype, device=self.vae.device), - self.vae, - vae_per_channel_normalize=vae_per_channel_normalize, - ).to(dtype=init_latents.dtype) - - else: # CORREÇÃO: Adiciona um indented block aqui - continue # Pula itens que não são de nenhum tipo conhecido - - if media_item_latents is None: - continue - - # A lógica unificada a partir daqui - if media_frame_number == 0: - pos_item = None if is_latent_item else conditioning_item - - media_item_latents, l_x, l_y = self._get_latent_spatial_position( - media_item_latents, - pos_item, - height, - width, - strip_latent_border=True, - ) - b, c_l, f_l, h_l, w_l = media_item_latents.shape - - init_latents[:, :, :f_l, l_y : l_y + h_l, l_x : l_x + w_l] = ( - torch.lerp( - init_latents[:, :, :f_l, l_y : l_y + h_l, l_x : l_x + w_l], - media_item_latents, - strength, - ) - ) - init_conditioning_mask[ - :, :f_l, l_y : l_y + h_l, l_x : l_x + w_l - ] = strength - else: - if n_frames > 1: - ( - init_latents, - init_conditioning_mask, - media_item_latents, - ) = self._handle_non_first_conditioning_sequence( - init_latents, - init_conditioning_mask, - media_item_latents, - media_frame_number, - strength, - ) - - if media_item_latents is not None: - noise = randn_tensor( - media_item_latents.shape, - generator=generator, - device=media_item_latents.device, - dtype=media_item_latents.dtype, - ) - media_item_latents = torch.lerp( - noise, media_item_latents, strength - ) - media_item_latents, latent_coords = self.patchifier.patchify( - latents=media_item_latents - ) - pixel_coords = latent_to_pixel_coords( - latent_coords, - self.vae, - causal_fix=self.transformer.config.causal_temporal_positioning, - ) - pixel_coords[:, 0] += media_frame_number - extra_conditioning_num_latents += media_item_latents.shape[1] - conditioning_mask = torch.full( - media_item_latents.shape[:2], - strength, - dtype=torch.float32, - device=init_latents.device, - ) - extra_conditioning_latents.append(media_item_latents) - extra_conditioning_pixel_coords.append(pixel_coords) - extra_conditioning_mask.append(conditioning_mask) - - init_latents, init_latent_coords = self.patchifier.patchify(latents=init_latents) - init_pixel_coords = latent_to_pixel_coords( - init_latent_coords, - self.vae, - causal_fix=self.transformer.config.causal_temporal_positioning, - ) - - if not conditioning_items: - return init_latents, init_pixel_coords, None, 0 - - init_conditioning_mask, _ = self.patchifier.patchify(latents=init_conditioning_mask.unsqueeze(1)) - init_conditioning_mask = init_conditioning_mask.squeeze(-1) - - if extra_conditioning_latents: - init_latents = torch.cat([*extra_conditioning_latents, init_latents], dim=1) - init_pixel_coords = torch.cat([*extra_conditioning_pixel_coords, init_pixel_coords], dim=2) - init_conditioning_mask = torch.cat([*extra_conditioning_mask, init_conditioning_mask], dim=1) - if self.transformer.use_tpu_flash_attention: - init_latents = init_latents[:, :-extra_conditioning_num_latents] - init_pixel_coords = init_pixel_coords[:, :, :-extra_conditioning_num_latents] - init_conditioning_mask = init_conditioning_mask[:, :-extra_conditioning_num_latents] - - return init_latents, init_pixel_coords, init_conditioning_mask, 
extra_conditioning_num_latents - - # Se não houver conditioning_items, retorna os valores iniciais - init_latents, init_latent_coords = self.patchifier.patchify(latents=init_latents) - init_pixel_coords = latent_to_pixel_coords( - init_latent_coords, - self.vae, - causal_fix=self.transformer.config.causal_temporal_positioning, - ) - return init_latents, init_pixel_coords, None, 0 - - def _get_latent_spatial_position( - self, - latents: torch.Tensor, - conditioning_item: Optional[ConditioningItem], # Tornamos opcional - height: int, - width: int, - strip_latent_border, - ): - """ - [MODIFICADO] Se conditioning_item for None (caso de item latente), assume posição central. - """ - scale = self.vae_scale_factor - - # --- [INÍCIO DA CORREÇÃO] --- - # A verificação de None deve vir PRIMEIRO. - if conditioning_item is None: - # Caso de um item latente. Não há posicionamento espacial, - # então assumimos que ele preenche todo o quadro. - x_start, y_start = 0, 0 - w, h = width, height - else: - # Caso de um item de pixel, com possível posicionamento. - h, w = conditioning_item.media_item.shape[-2:] - assert (h <= height and w <= width), f"Conditioning item size {h}x{w} is larger than target size {height}x{width}" - assert h % scale == 0 and w % scale == 0 - x_start, y_start = conditioning_item.media_x, conditioning_item.media_y - x_start = (width - w) // 2 if x_start is None else x_start - y_start = (height - h) // 2 if y_start is None else y_start - # --- [FIM DA CORREÇÃO] --- - - x_end, y_end = x_start + w, y_start + h - assert (x_end <= width and y_end <= height), f"Conditioning item {x_start}:{x_end}x{y_start}:{y_end} is out of bounds for target size {width}x{height}" - - if strip_latent_border: - if x_start > 0: - x_start += scale - latents = latents[:, :, :, :, 1:] - if y_start > 0: - y_start += scale - latents = latents[:, :, :, 1:, :] - if x_end < width: - latents = latents[:, :, :, :, :-1] - if y_end < height: - latents = latents[:, :, :, :-1, :] - - return latents, x_start // scale, y_start // scale - - def _get_latent_spatial_position1( - self, - latents: torch.Tensor, - conditioning_item: Optional[ConditioningItem], - height: int, - width: int, - strip_latent_border, - ): - """ - [MODIFICADO] Se conditioning_item for None (caso de item latente), assume posição central. - """ - scale = self.vae_scale_factor - - if conditioning_item is None: - x_start, y_start = 0, 0 - w, h = width, height - else: - h, w = conditioning_item.media_item.shape[-2:] - assert (h <= height and w <= width), f"Conditioning item size {h}x{w} is larger than target size {height}x{width}" - assert h % scale == 0 and w % scale == 0 - x_start, y_start = conditioning_item.media_x, conditioning_item.media_y - x_start = (width - w) // 2 if x_start is None else x_start - y_start = (height - h) // 2 if y_start is None else y_start - - x_end, y_end = x_start + w, y_start + h - assert (x_end <= width and y_end <= height), f"Conditioning item {x_start}:{x_end}x{y_start}:{y_end} is out of bounds for target size {width}x{height}" - - if strip_latent_border: - if x_start > 0: - x_start += scale - latents = latents[:, :, :, :, 1:] - if y_start > 0: - y_start += scale - latents = latents[:, :, :, 1:, :] - if x_end < width: - latents = latents[:, :, :, :, :-1] - if y_end < height: - latents = latents[:, :, :, :-1, :] - - return latents, x_start // scale, y_start // scale - - # --- [FIM DA SEÇÃO CORRIGIDA] --- - - # ... 
(The rest of the LTXVideoPipeline class continues as before)
-
-    def prepare_conditioning1(
-        self,
-        conditioning_items: Optional[List[ConditioningItem]],
-        init_latents: torch.Tensor,
-        num_frames: int,
-        height: int,
-        width: int,
-        vae_per_channel_normalize: bool = False,
-        generator=None,
-    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, int]:
-        """
-        Prepare conditioning tokens based on the provided conditioning items.
-
-        This method encodes provided conditioning items (video frames or single frames) into latents
-        and integrates them with the initial latent tensor. It also calculates corresponding pixel
-        coordinates, a mask indicating the influence of conditioning latents, and the total number of
-        conditioning latents.
-
-        Args:
-            conditioning_items (Optional[List[ConditioningItem]]): A list of ConditioningItem objects.
-            init_latents (torch.Tensor): The initial latent tensor of shape (b, c, f_l, h_l, w_l), where
-                `f_l` is the number of latent frames, and `h_l` and `w_l` are latent spatial dimensions.
-            num_frames, height, width: The dimensions of the generated video.
-            vae_per_channel_normalize (bool, optional): Whether to normalize channels during VAE encoding.
-                Defaults to `False`.
-            generator: The random generator used for noising.
-
-        Returns:
-            Tuple[torch.Tensor, torch.Tensor, torch.Tensor, int]:
-                - `init_latents` (torch.Tensor): The updated latent tensor including conditioning latents,
-                  patchified into (b, n, c) shape.
-                - `init_pixel_coords` (torch.Tensor): The pixel coordinates corresponding to the updated
-                  latent tensor.
-                - `conditioning_mask` (torch.Tensor): A mask indicating the conditioning-strength of each
-                  latent token.
-                - `num_cond_latents` (int): The total number of latent tokens added from conditioning items.
-
-        Raises:
-            AssertionError: If input shapes, dimensions, or conditions for applying conditioning are invalid.
- """ - assert isinstance(self.vae, CausalVideoAutoencoder) - - if conditioning_items: - batch_size, _, num_latent_frames = init_latents.shape[:3] - - init_conditioning_mask = torch.zeros( - init_latents[:, 0, :, :, :].shape, - dtype=torch.float32, - device=init_latents.device, - ) - - extra_conditioning_latents = [] - extra_conditioning_pixel_coords = [] - extra_conditioning_mask = [] - extra_conditioning_num_latents = 0 # Number of extra conditioning latents added (should be removed before decoding) - - # Process each conditioning item - for conditioning_item in conditioning_items: - conditioning_item = self._resize_conditioning_item( - conditioning_item, height, width - ) - media_item = conditioning_item.media_item - media_frame_number = conditioning_item.media_frame_number - strength = conditioning_item.conditioning_strength - assert media_item.ndim == 5 # (b, c, f, h, w) - b, c, n_frames, h, w = media_item.shape - assert ( - height == h and width == w - ) or media_frame_number == 0, f"Dimensions do not match: {height}x{width} != {h}x{w} - allowed only when media_frame_number == 0" - assert n_frames % 8 == 1 - assert ( - media_frame_number >= 0 - and media_frame_number + n_frames <= num_frames - ) - - # Encode the provided conditioning media item - media_item_latents = vae_encode( - media_item.to(dtype=self.vae.dtype, device=self.vae.device), - self.vae, - vae_per_channel_normalize=vae_per_channel_normalize, - ).to(dtype=init_latents.dtype) - - # Handle the different conditioning cases - if media_frame_number == 0: - # Get the target spatial position of the latent conditioning item - media_item_latents, l_x, l_y = self._get_latent_spatial_position( - media_item_latents, - conditioning_item, - height, - width, - strip_latent_border=True, - ) - b, c_l, f_l, h_l, w_l = media_item_latents.shape - - # First frame or sequence - just update the initial noise latents and the mask - init_latents[:, :, :f_l, l_y : l_y + h_l, l_x : l_x + w_l] = ( - torch.lerp( - init_latents[:, :, :f_l, l_y : l_y + h_l, l_x : l_x + w_l], - media_item_latents, - strength, - ) - ) - init_conditioning_mask[ - :, :f_l, l_y : l_y + h_l, l_x : l_x + w_l - ] = strength - else: - # Non-first frame or sequence - if n_frames > 1: - # Handle non-first sequence. - # Encoded latents are either fully consumed, or the prefix is handled separately below. 
- ( - init_latents, - init_conditioning_mask, - media_item_latents, - ) = self._handle_non_first_conditioning_sequence( - init_latents, - init_conditioning_mask, - media_item_latents, - media_frame_number, - strength, - ) - - # Single frame or sequence-prefix latents - if media_item_latents is not None: - noise = randn_tensor( - media_item_latents.shape, - generator=generator, - device=media_item_latents.device, - dtype=media_item_latents.dtype, - ) - - media_item_latents = torch.lerp( - noise, media_item_latents, strength - ) - - # Patchify the extra conditioning latents and calculate their pixel coordinates - media_item_latents, latent_coords = self.patchifier.patchify( - latents=media_item_latents - ) - pixel_coords = latent_to_pixel_coords( - latent_coords, - self.vae, - causal_fix=self.transformer.config.causal_temporal_positioning, - ) - - # Update the frame numbers to match the target frame number - pixel_coords[:, 0] += media_frame_number - extra_conditioning_num_latents += media_item_latents.shape[1] - - conditioning_mask = torch.full( - media_item_latents.shape[:2], - strength, - dtype=torch.float32, - device=init_latents.device, - ) - - extra_conditioning_latents.append(media_item_latents) - extra_conditioning_pixel_coords.append(pixel_coords) - extra_conditioning_mask.append(conditioning_mask) - - # Patchify the updated latents and calculate their pixel coordinates - init_latents, init_latent_coords = self.patchifier.patchify( - latents=init_latents - ) - init_pixel_coords = latent_to_pixel_coords( - init_latent_coords, - self.vae, - causal_fix=self.transformer.config.causal_temporal_positioning, - ) - - if not conditioning_items: - return init_latents, init_pixel_coords, None, 0 - - init_conditioning_mask, _ = self.patchifier.patchify( - latents=init_conditioning_mask.unsqueeze(1) - ) - init_conditioning_mask = init_conditioning_mask.squeeze(-1) - - if extra_conditioning_latents: - # Stack the extra conditioning latents, pixel coordinates and mask - init_latents = torch.cat([*extra_conditioning_latents, init_latents], dim=1) - init_pixel_coords = torch.cat( - [*extra_conditioning_pixel_coords, init_pixel_coords], dim=2 - ) - init_conditioning_mask = torch.cat( - [*extra_conditioning_mask, init_conditioning_mask], dim=1 - ) - - if self.transformer.use_tpu_flash_attention: - # When flash attention is used, keep the original number of tokens by removing - # tokens from the end. - init_latents = init_latents[:, :-extra_conditioning_num_latents] - init_pixel_coords = init_pixel_coords[ - :, :, :-extra_conditioning_num_latents - ] - init_conditioning_mask = init_conditioning_mask[ - :, :-extra_conditioning_num_latents - ] - - return ( - init_latents, - init_pixel_coords, - init_conditioning_mask, - extra_conditioning_num_latents, - ) - - @staticmethod - def _resize_conditioning_item( - conditioning_item: ConditioningItem, - height: int, - width: int, - ): - if conditioning_item.media_x or conditioning_item.media_y: - raise ValueError( - "Provide media_item in the target size for spatial conditioning." - ) - new_conditioning_item = copy.copy(conditioning_item) - new_conditioning_item.media_item = LTXVideoPipeline.resize_tensor( - conditioning_item.media_item, height, width - ) - return new_conditioning_item - - def _get_latent_spatial_position( - self, - latents: torch.Tensor, - conditioning_item: ConditioningItem, - height: int, - width: int, - strip_latent_border, - ): - """ - Get the spatial position of the conditioning item in the latent space. 
-        If requested, strip the conditioning latent borders that do not align with target borders.
-        (border latents look different than other latents and might confuse the model)
-        """
-        scale = self.vae_scale_factor
-        h, w = conditioning_item.media_item.shape[-2:]
-        assert (
-            h <= height and w <= width
-        ), f"Conditioning item size {h}x{w} is larger than target size {height}x{width}"
-        assert h % scale == 0 and w % scale == 0
-
-        # Compute the start and end spatial positions of the media item
-        x_start, y_start = conditioning_item.media_x, conditioning_item.media_y
-        x_start = (width - w) // 2 if x_start is None else x_start
-        y_start = (height - h) // 2 if y_start is None else y_start
-        x_end, y_end = x_start + w, y_start + h
-        assert (
-            x_end <= width and y_end <= height
-        ), f"Conditioning item {x_start}:{x_end}x{y_start}:{y_end} is out of bounds for target size {width}x{height}"
-
-        if strip_latent_border:
-            # Strip one latent from left/right and/or top/bottom, update x, y accordingly
-            if x_start > 0:
-                x_start += scale
-                latents = latents[:, :, :, :, 1:]
-
-            if y_start > 0:
-                y_start += scale
-                latents = latents[:, :, :, 1:, :]
-
-            if x_end < width:
-                latents = latents[:, :, :, :, :-1]
-
-            if y_end < height:
-                latents = latents[:, :, :, :-1, :]
-
-        return latents, x_start // scale, y_start // scale
-
-    @staticmethod
-    def _handle_non_first_conditioning_sequence(
-        init_latents: torch.Tensor,
-        init_conditioning_mask: torch.Tensor,
-        latents: torch.Tensor,
-        media_frame_number: int,
-        strength: float,
-        num_prefix_latent_frames: int = 2,
-        prefix_latents_mode: str = "concat",
-        prefix_soft_conditioning_strength: float = 0.15,
-    ):
-        """
-        Special handling for a conditioning sequence that does not start on the first frame.
-        The special handling is required to allow a short encoded video to be used as middle
-        (or last) sequence in a longer video.
-        Args:
-            init_latents (torch.Tensor): The initial noise latents to be updated.
-            init_conditioning_mask (torch.Tensor): The initial conditioning mask to be updated.
-            latents (torch.Tensor): The encoded conditioning item.
-            media_frame_number (int): The target frame number of the first frame in the conditioning sequence.
-            strength (float): The conditioning strength for the conditioning latents.
-            num_prefix_latent_frames (int, optional): The length of the sequence prefix, to be handled
-                separately. Defaults to 2.
-            prefix_latents_mode (str, optional): Special treatment for prefix (boundary) latents.
-                - "drop": Drop the prefix latents.
-                - "soft": Use the prefix latents, but with soft-conditioning
-                - "concat": Add the prefix latents as extra tokens (like single frames)
-            prefix_soft_conditioning_strength (float, optional): The strength of the soft-conditioning for
-                the prefix latents, relevant if `prefix_latents_mode` is "soft". Defaults to 0.15.
- - """ - f_l = latents.shape[2] - f_l_p = num_prefix_latent_frames - assert f_l >= f_l_p - assert media_frame_number % 8 == 0 - if f_l > f_l_p: - # Insert the conditioning latents **excluding the prefix** into the sequence - f_l_start = media_frame_number // 8 + f_l_p - f_l_end = f_l_start + f_l - f_l_p - init_latents[:, :, f_l_start:f_l_end] = torch.lerp( - init_latents[:, :, f_l_start:f_l_end], - latents[:, :, f_l_p:], - strength, - ) - # Mark these latent frames as conditioning latents - init_conditioning_mask[:, f_l_start:f_l_end] = strength - - # Handle the prefix-latents - if prefix_latents_mode == "soft": - if f_l_p > 1: - # Drop the first (single-frame) latent and soft-condition the remaining prefix - f_l_start = media_frame_number // 8 + 1 - f_l_end = f_l_start + f_l_p - 1 - strength = min(prefix_soft_conditioning_strength, strength) - init_latents[:, :, f_l_start:f_l_end] = torch.lerp( - init_latents[:, :, f_l_start:f_l_end], - latents[:, :, 1:f_l_p], - strength, - ) - # Mark these latent frames as conditioning latents - init_conditioning_mask[:, f_l_start:f_l_end] = strength - latents = None # No more latents to handle - elif prefix_latents_mode == "drop": - # Drop the prefix latents - latents = None - elif prefix_latents_mode == "concat": - # Pass-on the prefix latents to be handled as extra conditioning frames - latents = latents[:, :, :f_l_p] - else: - raise ValueError(f"Invalid prefix_latents_mode: {prefix_latents_mode}") - return ( - init_latents, - init_conditioning_mask, - latents, - ) - - def trim_conditioning_sequence( - self, start_frame: int, sequence_num_frames: int, target_num_frames: int - ): - """ - Trim a conditioning sequence to the allowed number of frames. - - Args: - start_frame (int): The target frame number of the first frame in the sequence. - sequence_num_frames (int): The number of frames in the sequence. - target_num_frames (int): The target number of frames in the generated video. - - Returns: - int: updated sequence length - """ - scale_factor = self.video_scale_factor - num_frames = min(sequence_num_frames, target_num_frames - start_frame) - # Trim down to a multiple of temporal_scale_factor frames plus 1 - num_frames = (num_frames - 1) // scale_factor * scale_factor + 1 - return num_frames - - @staticmethod - def tone_map_latents( - latents: torch.Tensor, - compression: float, - ) -> torch.Tensor: - """ - Applies a non-linear tone-mapping function to latent values to reduce their dynamic range - in a perceptually smooth way using a sigmoid-based compression. - - This is useful for regularizing high-variance latents or for conditioning outputs - during generation, especially when controlling dynamic behavior with a `compression` factor. - - Parameters: - ---------- - latents : torch.Tensor - Input latent tensor with arbitrary shape. Expected to be roughly in [-1, 1] or [0, 1] range. - compression : float - Compression strength in the range [0, 1]. - - 0.0: No tone-mapping (identity transform) - - 1.0: Full compression effect - - Returns: - ------- - torch.Tensor - The tone-mapped latent tensor of the same shape as input. 
- """ - if not (0 <= compression <= 1): - raise ValueError("Compression must be in the range [0, 1]") - - # Remap [0-1] to [0-0.75] and apply sigmoid compression in one shot - scale_factor = compression * 0.75 - abs_latents = torch.abs(latents) - - # Sigmoid compression: sigmoid shifts large values toward 0.2, small values stay ~1.0 - # When scale_factor=0, sigmoid term vanishes, when scale_factor=0.75, full effect - sigmoid_term = torch.sigmoid(4.0 * scale_factor * (abs_latents - 1.0)) - scales = 1.0 - 0.8 * scale_factor * sigmoid_term - - filtered = latents * scales - return filtered - - -def adain_filter_latent( - latents: torch.Tensor, reference_latents: torch.Tensor, factor=1.0 -): - """ - Applies Adaptive Instance Normalization (AdaIN) to a latent tensor based on - statistics from a reference latent tensor. - - Args: - latent (torch.Tensor): Input latents to normalize - reference_latent (torch.Tensor): The reference latents providing style statistics. - factor (float): Blending factor between original and transformed latent. - Range: -10.0 to 10.0, Default: 1.0 - - Returns: - torch.Tensor: The transformed latent tensor - """ - result = latents.clone() - - for i in range(latents.size(0)): - for c in range(latents.size(1)): - r_sd, r_mean = torch.std_mean( - reference_latents[i, c], dim=None - ) # index by original dim order - i_sd, i_mean = torch.std_mean(result[i, c], dim=None) - - result[i, c] = ((result[i, c] - i_mean) / i_sd) * r_sd + r_mean - - result = torch.lerp(latents, result, factor) - return result - - -class LTXMultiScalePipeline: - def _upsample_latents( - self, latest_upsampler: LatentUpsampler, latents: torch.Tensor - ): - assert latents.device == latest_upsampler.device - - latents = un_normalize_latents( - latents, self.vae, vae_per_channel_normalize=True - ) - upsampled_latents = latest_upsampler(latents) - upsampled_latents = normalize_latents( - upsampled_latents, self.vae, vae_per_channel_normalize=True - ) - return upsampled_latents - - def __init__( - self, video_pipeline: LTXVideoPipeline, latent_upsampler: LatentUpsampler - ): - self.video_pipeline = video_pipeline - self.vae = video_pipeline.vae - self.latent_upsampler = latent_upsampler - - def __call__( - self, - downscale_factor: float, - first_pass: dict, - second_pass: dict, - *args: Any, - **kwargs: Any, - ) -> Any: - original_kwargs = kwargs.copy() - original_output_type = kwargs["output_type"] - original_width = kwargs["width"] - original_height = kwargs["height"] - - x_width = int(kwargs["width"] * downscale_factor) - downscaled_width = x_width - (x_width % self.video_pipeline.vae_scale_factor) - x_height = int(kwargs["height"] * downscale_factor) - downscaled_height = x_height - (x_height % self.video_pipeline.vae_scale_factor) - - kwargs["output_type"] = "latent" - kwargs["width"] = downscaled_width - kwargs["height"] = downscaled_height - kwargs.update(**first_pass) - result = self.video_pipeline(*args, **kwargs) - latents = result.images - - upsampled_latents = self._upsample_latents(self.latent_upsampler, latents) - upsampled_latents = adain_filter_latent( - latents=upsampled_latents, reference_latents=latents - ) - - kwargs = original_kwargs - - kwargs["latents"] = upsampled_latents - kwargs["output_type"] = original_output_type - kwargs["width"] = downscaled_width * 2 - kwargs["height"] = downscaled_height * 2 - kwargs.update(**second_pass) - - result = self.video_pipeline(*args, **kwargs) - if original_output_type != "latent": - num_frames = result.images.shape[2] - videos = 
rearrange(result.images, "b c f h w -> (b f) c h w") - - videos = F.interpolate( - videos, - size=(original_height, original_width), - mode="bilinear", - align_corners=False, - ) - videos = rearrange(videos, "(b f) c h w -> b c f h w", f=num_frames) - result.images = videos - - return result diff --git a/ltx_video_x/schedulers/__init__.py b/ltx_video_x/schedulers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/schedulers/rf.py b/ltx_video_x/schedulers/rf.py deleted file mode 100644 index c7d2ab3426645941efa71ec0c5d866d9ea9c90d4..0000000000000000000000000000000000000000 --- a/ltx_video_x/schedulers/rf.py +++ /dev/null @@ -1,386 +0,0 @@ -import math -from abc import ABC, abstractmethod -from dataclasses import dataclass -from typing import Callable, Optional, Tuple, Union -import json -import os -from pathlib import Path - -import torch -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.schedulers.scheduling_utils import SchedulerMixin -from diffusers.utils import BaseOutput -from torch import Tensor -from safetensors import safe_open - - -from ltx_video.utils.torch_utils import append_dims - -from ltx_video.utils.diffusers_config_mapping import ( - diffusers_and_ours_config_mapping, - make_hashable_key, -) - - -def linear_quadratic_schedule(num_steps, threshold_noise=0.025, linear_steps=None): - if num_steps == 1: - return torch.tensor([1.0]) - if linear_steps is None: - linear_steps = num_steps // 2 - linear_sigma_schedule = [ - i * threshold_noise / linear_steps for i in range(linear_steps) - ] - threshold_noise_step_diff = linear_steps - threshold_noise * num_steps - quadratic_steps = num_steps - linear_steps - quadratic_coef = threshold_noise_step_diff / (linear_steps * quadratic_steps**2) - linear_coef = threshold_noise / linear_steps - 2 * threshold_noise_step_diff / ( - quadratic_steps**2 - ) - const = quadratic_coef * (linear_steps**2) - quadratic_sigma_schedule = [ - quadratic_coef * (i**2) + linear_coef * i + const - for i in range(linear_steps, num_steps) - ] - sigma_schedule = linear_sigma_schedule + quadratic_sigma_schedule + [1.0] - sigma_schedule = [1.0 - x for x in sigma_schedule] - return torch.tensor(sigma_schedule[:-1]) - - -def simple_diffusion_resolution_dependent_timestep_shift( - samples_shape: torch.Size, - timesteps: Tensor, - n: int = 32 * 32, -) -> Tensor: - if len(samples_shape) == 3: - _, m, _ = samples_shape - elif len(samples_shape) in [4, 5]: - m = math.prod(samples_shape[2:]) - else: - raise ValueError( - "Samples must have shape (b, t, c), (b, c, h, w) or (b, c, f, h, w)" - ) - snr = (timesteps / (1 - timesteps)) ** 2 - shift_snr = torch.log(snr) + 2 * math.log(m / n) - shifted_timesteps = torch.sigmoid(0.5 * shift_snr) - - return shifted_timesteps - - -def time_shift(mu: float, sigma: float, t: Tensor): - return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma) - - -def get_normal_shift( - n_tokens: int, - min_tokens: int = 1024, - max_tokens: int = 4096, - min_shift: float = 0.95, - max_shift: float = 2.05, -) -> Callable[[float], float]: - m = (max_shift - min_shift) / (max_tokens - min_tokens) - b = min_shift - m * min_tokens - return m * n_tokens + b - - -def strech_shifts_to_terminal(shifts: Tensor, terminal=0.1): - """ - Stretch a function (given as sampled shifts) so that its final value matches the given terminal value - using the provided formula. 
-
-    Parameters:
-    - shifts (Tensor): The samples of the function to be stretched (PyTorch Tensor).
-    - terminal (float): The desired terminal value (value at the last sample).
-
-    Returns:
-    - Tensor: The stretched shifts such that the final value equals `terminal`.
-    """
-    if shifts.numel() == 0:
-        raise ValueError("The 'shifts' tensor must not be empty.")
-
-    # Ensure terminal value is valid
-    if terminal <= 0 or terminal >= 1:
-        raise ValueError("The terminal value must be between 0 and 1 (exclusive).")
-
-    # Transform the shifts using the given formula
-    one_minus_z = 1 - shifts
-    scale_factor = one_minus_z[-1] / (1 - terminal)
-    stretched_shifts = 1 - (one_minus_z / scale_factor)
-
-    return stretched_shifts
-
-
-def sd3_resolution_dependent_timestep_shift(
-    samples_shape: torch.Size,
-    timesteps: Tensor,
-    target_shift_terminal: Optional[float] = None,
-) -> Tensor:
-    """
-    Shifts the timestep schedule as a function of the generated resolution.
-
-    In the SD3 paper, the authors empirically show how to shift the timesteps based on the resolution of the target images.
-    For more details: https://arxiv.org/pdf/2403.03206
-
-    In Flux they later propose a more dynamic resolution dependent timestep shift, see:
-    https://github.com/black-forest-labs/flux/blob/87f6fff727a377ea1c378af692afb41ae84cbe04/src/flux/sampling.py#L66
-
-
-    Args:
-        samples_shape (torch.Size): The samples batch shape (batch_size, channels, height, width) or
-            (batch_size, channels, frame, height, width).
-        timesteps (Tensor): A batch of timesteps with shape (batch_size,).
-        target_shift_terminal (float): The target terminal value for the shifted timesteps.
-
-    Returns:
-        Tensor: The shifted timesteps.
-    """
-    if len(samples_shape) == 3:
-        _, m, _ = samples_shape
-    elif len(samples_shape) in [4, 5]:
-        m = math.prod(samples_shape[2:])
-    else:
-        raise ValueError(
-            "Samples must have shape (b, t, c), (b, c, h, w) or (b, c, f, h, w)"
-        )
-
-    shift = get_normal_shift(m)
-    time_shifts = time_shift(shift, 1, timesteps)
-    if target_shift_terminal is not None:  # Stretch the shifts to the target terminal
-        time_shifts = strech_shifts_to_terminal(time_shifts, target_shift_terminal)
-    return time_shifts
-
-
-class TimestepShifter(ABC):
-    @abstractmethod
-    def shift_timesteps(self, samples_shape: torch.Size, timesteps: Tensor) -> Tensor:
-        pass
-
-
-@dataclass
-class RectifiedFlowSchedulerOutput(BaseOutput):
-    """
-    Output class for the scheduler's step function output.
-
-    Args:
-        prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
-            Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
-            denoising loop.
-        pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
-            The predicted denoised sample (x_{0}) based on the model output from the current timestep.
-            `pred_original_sample` can be used to preview progress or for guidance.
- """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -class RectifiedFlowScheduler(SchedulerMixin, ConfigMixin, TimestepShifter): - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps=1000, - shifting: Optional[str] = None, - base_resolution: int = 32**2, - target_shift_terminal: Optional[float] = None, - sampler: Optional[str] = "Uniform", - shift: Optional[float] = None, - ): - super().__init__() - self.init_noise_sigma = 1.0 - self.num_inference_steps = None - self.sampler = sampler - self.shifting = shifting - self.base_resolution = base_resolution - self.target_shift_terminal = target_shift_terminal - self.timesteps = self.sigmas = self.get_initial_timesteps( - num_train_timesteps, shift=shift - ) - self.shift = shift - - def get_initial_timesteps( - self, num_timesteps: int, shift: Optional[float] = None - ) -> Tensor: - if self.sampler == "Uniform": - return torch.linspace(1, 1 / num_timesteps, num_timesteps) - elif self.sampler == "LinearQuadratic": - return linear_quadratic_schedule(num_timesteps) - elif self.sampler == "Constant": - assert ( - shift is not None - ), "Shift must be provided for constant time shift sampler." - return time_shift( - shift, 1, torch.linspace(1, 1 / num_timesteps, num_timesteps) - ) - - def shift_timesteps(self, samples_shape: torch.Size, timesteps: Tensor) -> Tensor: - if self.shifting == "SD3": - return sd3_resolution_dependent_timestep_shift( - samples_shape, timesteps, self.target_shift_terminal - ) - elif self.shifting == "SimpleDiffusion": - return simple_diffusion_resolution_dependent_timestep_shift( - samples_shape, timesteps, self.base_resolution - ) - return timesteps - - def set_timesteps( - self, - num_inference_steps: Optional[int] = None, - samples_shape: Optional[torch.Size] = None, - timesteps: Optional[Tensor] = None, - device: Union[str, torch.device] = None, - ): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - If `timesteps` are provided, they will be used instead of the scheduled timesteps. - - Args: - num_inference_steps (`int` *optional*): The number of diffusion steps used when generating samples. - samples_shape (`torch.Size` *optional*): The samples batch shape, used for shifting. - timesteps ('torch.Tensor' *optional*): Specific timesteps to use instead of scheduled timesteps. - device (`Union[str, torch.device]`, *optional*): The device to which the timesteps tensor will be moved. - """ - if timesteps is not None and num_inference_steps is not None: - raise ValueError( - "You cannot provide both `timesteps` and `num_inference_steps`." 
- ) - if timesteps is None: - num_inference_steps = min( - self.config.num_train_timesteps, num_inference_steps - ) - timesteps = self.get_initial_timesteps( - num_inference_steps, shift=self.shift - ).to(device) - timesteps = self.shift_timesteps(samples_shape, timesteps) - else: - timesteps = torch.Tensor(timesteps).to(device) - num_inference_steps = len(timesteps) - self.timesteps = timesteps - self.num_inference_steps = num_inference_steps - self.sigmas = self.timesteps - - @staticmethod - def from_pretrained(pretrained_model_path: Union[str, os.PathLike]): - pretrained_model_path = Path(pretrained_model_path) - if pretrained_model_path.is_file(): - comfy_single_file_state_dict = {} - with safe_open(pretrained_model_path, framework="pt", device="cpu") as f: - metadata = f.metadata() - for k in f.keys(): - comfy_single_file_state_dict[k] = f.get_tensor(k) - configs = json.loads(metadata["config"]) - config = configs["scheduler"] - del comfy_single_file_state_dict - - elif pretrained_model_path.is_dir(): - diffusers_noise_scheduler_config_path = ( - pretrained_model_path / "scheduler" / "scheduler_config.json" - ) - - with open(diffusers_noise_scheduler_config_path, "r") as f: - scheduler_config = json.load(f) - hashable_config = make_hashable_key(scheduler_config) - if hashable_config in diffusers_and_ours_config_mapping: - config = diffusers_and_ours_config_mapping[hashable_config] - return RectifiedFlowScheduler.from_config(config) - - def scale_model_input( - self, sample: torch.FloatTensor, timestep: Optional[int] = None - ) -> torch.FloatTensor: - # pylint: disable=unused-argument - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def step( - self, - model_output: torch.FloatTensor, - timestep: torch.FloatTensor, - sample: torch.FloatTensor, - return_dict: bool = True, - stochastic_sampling: Optional[bool] = False, - **kwargs, - ) -> Union[RectifiedFlowSchedulerOutput, Tuple]: - """ - Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion - process from the learned model outputs (most often the predicted noise). - z_{t_1} = z_t - Delta_t * v - The method finds the next timestep that is lower than the input timestep(s) and denoises the latents - to that level. The input timestep(s) are not required to be one of the predefined timesteps. - - Args: - model_output (`torch.FloatTensor`): - The direct output from learned diffusion model - the velocity, - timestep (`float`): - The current discrete timestep in the diffusion chain (global or per-token). - sample (`torch.FloatTensor`): - A current latent tokens to be de-noised. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~schedulers.scheduling_ddim.DDIMSchedulerOutput`] or `tuple`. - stochastic_sampling (`bool`, *optional*, defaults to `False`): - Whether to use stochastic sampling for the sampling process. - - Returns: - [`~schedulers.scheduling_utils.RectifiedFlowSchedulerOutput`] or `tuple`: - If return_dict is `True`, [`~schedulers.rf_scheduler.RectifiedFlowSchedulerOutput`] is returned, - otherwise a tuple is returned where the first element is the sample tensor. 
- """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - t_eps = 1e-6 # Small epsilon to avoid numerical issues in timestep values - - timesteps_padded = torch.cat( - [self.timesteps, torch.zeros(1, device=self.timesteps.device)] - ) - - # Find the next lower timestep(s) and compute the dt from the current timestep(s) - if timestep.ndim == 0: - # Global timestep case - lower_mask = timesteps_padded < timestep - t_eps - lower_timestep = timesteps_padded[lower_mask][0] # Closest lower timestep - dt = timestep - lower_timestep - - else: - # Per-token case - assert timestep.ndim == 2 - lower_mask = timesteps_padded[:, None, None] < timestep[None] - t_eps - lower_timestep = lower_mask * timesteps_padded[:, None, None] - lower_timestep, _ = lower_timestep.max(dim=0) - dt = (timestep - lower_timestep)[..., None] - - # Compute previous sample - if stochastic_sampling: - x0 = sample - timestep[..., None] * model_output - next_timestep = timestep[..., None] - dt - prev_sample = self.add_noise(x0, torch.randn_like(sample), next_timestep) - else: - prev_sample = sample - dt * model_output - - if not return_dict: - return (prev_sample,) - - return RectifiedFlowSchedulerOutput(prev_sample=prev_sample) - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - sigmas = timesteps - sigmas = append_dims(sigmas, original_samples.ndim) - alphas = 1 - sigmas - noisy_samples = alphas * original_samples + sigmas * noise - return noisy_samples diff --git a/ltx_video_x/utils/__init__.py b/ltx_video_x/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/ltx_video_x/utils/diffusers_config_mapping.py b/ltx_video_x/utils/diffusers_config_mapping.py deleted file mode 100644 index 53c0082d182617f6f84eab9c849f7ef0224becb8..0000000000000000000000000000000000000000 --- a/ltx_video_x/utils/diffusers_config_mapping.py +++ /dev/null @@ -1,174 +0,0 @@ -def make_hashable_key(dict_key): - def convert_value(value): - if isinstance(value, list): - return tuple(value) - elif isinstance(value, dict): - return tuple(sorted((k, convert_value(v)) for k, v in value.items())) - else: - return value - - return tuple(sorted((k, convert_value(v)) for k, v in dict_key.items())) - - -DIFFUSERS_SCHEDULER_CONFIG = { - "_class_name": "FlowMatchEulerDiscreteScheduler", - "_diffusers_version": "0.32.0.dev0", - "base_image_seq_len": 1024, - "base_shift": 0.95, - "invert_sigmas": False, - "max_image_seq_len": 4096, - "max_shift": 2.05, - "num_train_timesteps": 1000, - "shift": 1.0, - "shift_terminal": 0.1, - "use_beta_sigmas": False, - "use_dynamic_shifting": True, - "use_exponential_sigmas": False, - "use_karras_sigmas": False, -} -DIFFUSERS_TRANSFORMER_CONFIG = { - "_class_name": "LTXVideoTransformer3DModel", - "_diffusers_version": "0.32.0.dev0", - "activation_fn": "gelu-approximate", - "attention_bias": True, - "attention_head_dim": 64, - "attention_out_bias": True, - "caption_channels": 4096, - "cross_attention_dim": 2048, - "in_channels": 128, - "norm_elementwise_affine": False, - "norm_eps": 1e-06, - "num_attention_heads": 32, - "num_layers": 28, - "out_channels": 128, - "patch_size": 1, - "patch_size_t": 1, - "qk_norm": "rms_norm_across_heads", -} -DIFFUSERS_VAE_CONFIG = { - "_class_name": "AutoencoderKLLTXVideo", - "_diffusers_version": 
"0.32.0.dev0", - "block_out_channels": [128, 256, 512, 512], - "decoder_causal": False, - "encoder_causal": True, - "in_channels": 3, - "latent_channels": 128, - "layers_per_block": [4, 3, 3, 3, 4], - "out_channels": 3, - "patch_size": 4, - "patch_size_t": 1, - "resnet_norm_eps": 1e-06, - "scaling_factor": 1.0, - "spatio_temporal_scaling": [True, True, True, False], -} - -OURS_SCHEDULER_CONFIG = { - "_class_name": "RectifiedFlowScheduler", - "_diffusers_version": "0.25.1", - "num_train_timesteps": 1000, - "shifting": "SD3", - "base_resolution": None, - "target_shift_terminal": 0.1, -} - -OURS_TRANSFORMER_CONFIG = { - "_class_name": "Transformer3DModel", - "_diffusers_version": "0.25.1", - "_name_or_path": "PixArt-alpha/PixArt-XL-2-256x256", - "activation_fn": "gelu-approximate", - "attention_bias": True, - "attention_head_dim": 64, - "attention_type": "default", - "caption_channels": 4096, - "cross_attention_dim": 2048, - "double_self_attention": False, - "dropout": 0.0, - "in_channels": 128, - "norm_elementwise_affine": False, - "norm_eps": 1e-06, - "norm_num_groups": 32, - "num_attention_heads": 32, - "num_embeds_ada_norm": 1000, - "num_layers": 28, - "num_vector_embeds": None, - "only_cross_attention": False, - "out_channels": 128, - "project_to_2d_pos": True, - "upcast_attention": False, - "use_linear_projection": False, - "qk_norm": "rms_norm", - "standardization_norm": "rms_norm", - "positional_embedding_type": "rope", - "positional_embedding_theta": 10000.0, - "positional_embedding_max_pos": [20, 2048, 2048], - "timestep_scale_multiplier": 1000, -} -OURS_VAE_CONFIG = { - "_class_name": "CausalVideoAutoencoder", - "dims": 3, - "in_channels": 3, - "out_channels": 3, - "latent_channels": 128, - "blocks": [ - ["res_x", 4], - ["compress_all", 1], - ["res_x_y", 1], - ["res_x", 3], - ["compress_all", 1], - ["res_x_y", 1], - ["res_x", 3], - ["compress_all", 1], - ["res_x", 3], - ["res_x", 4], - ], - "scaling_factor": 1.0, - "norm_layer": "pixel_norm", - "patch_size": 4, - "latent_log_var": "uniform", - "use_quant_conv": False, - "causal_decoder": False, -} - - -diffusers_and_ours_config_mapping = { - make_hashable_key(DIFFUSERS_SCHEDULER_CONFIG): OURS_SCHEDULER_CONFIG, - make_hashable_key(DIFFUSERS_TRANSFORMER_CONFIG): OURS_TRANSFORMER_CONFIG, - make_hashable_key(DIFFUSERS_VAE_CONFIG): OURS_VAE_CONFIG, -} - - -TRANSFORMER_KEYS_RENAME_DICT = { - "proj_in": "patchify_proj", - "time_embed": "adaln_single", - "norm_q": "q_norm", - "norm_k": "k_norm", -} - - -VAE_KEYS_RENAME_DICT = { - "decoder.up_blocks.3.conv_in": "decoder.up_blocks.7", - "decoder.up_blocks.3.upsamplers.0": "decoder.up_blocks.8", - "decoder.up_blocks.3": "decoder.up_blocks.9", - "decoder.up_blocks.2.upsamplers.0": "decoder.up_blocks.5", - "decoder.up_blocks.2.conv_in": "decoder.up_blocks.4", - "decoder.up_blocks.2": "decoder.up_blocks.6", - "decoder.up_blocks.1.upsamplers.0": "decoder.up_blocks.2", - "decoder.up_blocks.1": "decoder.up_blocks.3", - "decoder.up_blocks.0": "decoder.up_blocks.1", - "decoder.mid_block": "decoder.up_blocks.0", - "encoder.down_blocks.3": "encoder.down_blocks.8", - "encoder.down_blocks.2.downsamplers.0": "encoder.down_blocks.7", - "encoder.down_blocks.2": "encoder.down_blocks.6", - "encoder.down_blocks.1.downsamplers.0": "encoder.down_blocks.4", - "encoder.down_blocks.1.conv_out": "encoder.down_blocks.5", - "encoder.down_blocks.1": "encoder.down_blocks.3", - "encoder.down_blocks.0.conv_out": "encoder.down_blocks.2", - "encoder.down_blocks.0.downsamplers.0": "encoder.down_blocks.1", - 
"encoder.down_blocks.0": "encoder.down_blocks.0", - "encoder.mid_block": "encoder.down_blocks.9", - "conv_shortcut.conv": "conv_shortcut", - "resnets": "res_blocks", - "norm3": "norm3.norm", - "latents_mean": "per_channel_statistics.mean-of-means", - "latents_std": "per_channel_statistics.std-of-means", -} diff --git a/ltx_video_x/utils/prompt_enhance_utils.py b/ltx_video_x/utils/prompt_enhance_utils.py deleted file mode 100644 index 9010517282925f8f3d2343829347f309e5c0e41a..0000000000000000000000000000000000000000 --- a/ltx_video_x/utils/prompt_enhance_utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import logging -from typing import Union, List, Optional - -import torch -from PIL import Image - -logger = logging.getLogger(__name__) # pylint: disable=invalid-name - -T2V_CINEMATIC_PROMPT = """You are an expert cinematic director with many award winning movies, When writing prompts based on the user input, focus on detailed, chronological descriptions of actions and scenes. -Include specific movements, appearances, camera angles, and environmental details - all in a single flowing paragraph. -Start directly with the action, and keep descriptions literal and precise. -Think like a cinematographer describing a shot list. -Do not change the user input intent, just enhance it. -Keep within 150 words. -For best results, build your prompts using this structure: -Start with main action in a single sentence -Add specific details about movements and gestures -Describe character/object appearances precisely -Include background and environment details -Specify camera angles and movements -Describe lighting and colors -Note any changes or sudden events -Do not exceed the 150 word limit! -Output the enhanced prompt only. -""" - -I2V_CINEMATIC_PROMPT = """You are an expert cinematic director with many award winning movies, When writing prompts based on the user input, focus on detailed, chronological descriptions of actions and scenes. -Include specific movements, appearances, camera angles, and environmental details - all in a single flowing paragraph. -Start directly with the action, and keep descriptions literal and precise. -Think like a cinematographer describing a shot list. -Keep within 150 words. -For best results, build your prompts using this structure: -Describe the image first and then add the user input. Image description should be in first priority! Align to the image caption if it contradicts the user text input. -Start with main action in a single sentence -Add specific details about movements and gestures -Describe character/object appearances precisely -Include background and environment details -Specify camera angles and movements -Describe lighting and colors -Note any changes or sudden events -Align to the image caption if it contradicts the user text input. -Do not exceed the 150 word limit! -Output the enhanced prompt only. 
-""" - - -def tensor_to_pil(tensor): - # Ensure tensor is in range [-1, 1] - assert tensor.min() >= -1 and tensor.max() <= 1 - - # Convert from [-1, 1] to [0, 1] - tensor = (tensor + 1) / 2 - - # Rearrange from [C, H, W] to [H, W, C] - tensor = tensor.permute(1, 2, 0) - - # Convert to numpy array and then to uint8 range [0, 255] - numpy_image = (tensor.cpu().numpy() * 255).astype("uint8") - - # Convert to PIL Image - return Image.fromarray(numpy_image) - - -def generate_cinematic_prompt( - image_caption_model, - image_caption_processor, - prompt_enhancer_model, - prompt_enhancer_tokenizer, - prompt: Union[str, List[str]], - conditioning_items: Optional[List] = None, - max_new_tokens: int = 256, -) -> List[str]: - prompts = [prompt] if isinstance(prompt, str) else prompt - - if conditioning_items is None: - prompts = _generate_t2v_prompt( - prompt_enhancer_model, - prompt_enhancer_tokenizer, - prompts, - max_new_tokens, - T2V_CINEMATIC_PROMPT, - ) - else: - if len(conditioning_items) > 1 or conditioning_items[0].media_frame_number != 0: - logger.warning( - "prompt enhancement does only support unconditional or first frame of conditioning items, returning original prompts" - ) - return prompts - - first_frame_conditioning_item = conditioning_items[0] - first_frames = _get_first_frames_from_conditioning_item( - first_frame_conditioning_item - ) - - assert len(first_frames) == len( - prompts - ), "Number of conditioning frames must match number of prompts" - - prompts = _generate_i2v_prompt( - image_caption_model, - image_caption_processor, - prompt_enhancer_model, - prompt_enhancer_tokenizer, - prompts, - first_frames, - max_new_tokens, - I2V_CINEMATIC_PROMPT, - ) - - return prompts - - -def _get_first_frames_from_conditioning_item(conditioning_item) -> List[Image.Image]: - frames_tensor = conditioning_item.media_item - return [ - tensor_to_pil(frames_tensor[i, :, 0, :, :]) - for i in range(frames_tensor.shape[0]) - ] - - -def _generate_t2v_prompt( - prompt_enhancer_model, - prompt_enhancer_tokenizer, - prompts: List[str], - max_new_tokens: int, - system_prompt: str, -) -> List[str]: - messages = [ - [ - {"role": "system", "content": system_prompt}, - {"role": "user", "content": f"user_prompt: {p}"}, - ] - for p in prompts - ] - - texts = [ - prompt_enhancer_tokenizer.apply_chat_template( - m, tokenize=False, add_generation_prompt=True - ) - for m in messages - ] - model_inputs = prompt_enhancer_tokenizer(texts, return_tensors="pt").to( - prompt_enhancer_model.device - ) - - return _generate_and_decode_prompts( - prompt_enhancer_model, prompt_enhancer_tokenizer, model_inputs, max_new_tokens - ) - - -def _generate_i2v_prompt( - image_caption_model, - image_caption_processor, - prompt_enhancer_model, - prompt_enhancer_tokenizer, - prompts: List[str], - first_frames: List[Image.Image], - max_new_tokens: int, - system_prompt: str, -) -> List[str]: - image_captions = _generate_image_captions( - image_caption_model, image_caption_processor, first_frames - ) - - messages = [ - [ - {"role": "system", "content": system_prompt}, - {"role": "user", "content": f"user_prompt: {p}\nimage_caption: {c}"}, - ] - for p, c in zip(prompts, image_captions) - ] - - texts = [ - prompt_enhancer_tokenizer.apply_chat_template( - m, tokenize=False, add_generation_prompt=True - ) - for m in messages - ] - model_inputs = prompt_enhancer_tokenizer(texts, return_tensors="pt").to( - prompt_enhancer_model.device - ) - - return _generate_and_decode_prompts( - prompt_enhancer_model, prompt_enhancer_tokenizer, model_inputs, 
max_new_tokens - ) - - -def _generate_image_captions( - image_caption_model, - image_caption_processor, - images: List[Image.Image], - system_prompt: str = "", -) -> List[str]: - image_caption_prompts = [system_prompt] * len(images) - inputs = image_caption_processor( - image_caption_prompts, images, return_tensors="pt" - ).to(image_caption_model.device) - - with torch.inference_mode(): - generated_ids = image_caption_model.generate( - input_ids=inputs["input_ids"], - pixel_values=inputs["pixel_values"], - max_new_tokens=1024, - do_sample=False, - num_beams=3, - ) - - return image_caption_processor.batch_decode(generated_ids, skip_special_tokens=True) - - -def _generate_and_decode_prompts( - prompt_enhancer_model, prompt_enhancer_tokenizer, model_inputs, max_new_tokens: int -) -> List[str]: - with torch.inference_mode(): - outputs = prompt_enhancer_model.generate( - **model_inputs, max_new_tokens=max_new_tokens - ) - generated_ids = [ - output_ids[len(input_ids) :] - for input_ids, output_ids in zip(model_inputs.input_ids, outputs) - ] - decoded_prompts = prompt_enhancer_tokenizer.batch_decode( - generated_ids, skip_special_tokens=True - ) - - return decoded_prompts diff --git a/ltx_video_x/utils/skip_layer_strategy.py b/ltx_video_x/utils/skip_layer_strategy.py deleted file mode 100644 index 30f9016e1cf2abbe62360775e914fa63876e4cf7..0000000000000000000000000000000000000000 --- a/ltx_video_x/utils/skip_layer_strategy.py +++ /dev/null @@ -1,8 +0,0 @@ -from enum import Enum, auto - - -class SkipLayerStrategy(Enum): - AttentionSkip = auto() - AttentionValues = auto() - Residual = auto() - TransformerBlock = auto() diff --git a/ltx_video_x/utils/torch_utils.py b/ltx_video_x/utils/torch_utils.py deleted file mode 100644 index 991b07c36269ef4dafb88a85834f2596647ba816..0000000000000000000000000000000000000000 --- a/ltx_video_x/utils/torch_utils.py +++ /dev/null @@ -1,25 +0,0 @@ -import torch -from torch import nn - - -def append_dims(x: torch.Tensor, target_dims: int) -> torch.Tensor: - """Appends dimensions to the end of a tensor until it has target_dims dimensions.""" - dims_to_append = target_dims - x.ndim - if dims_to_append < 0: - raise ValueError( - f"input has {x.ndim} dims but target_dims is {target_dims}, which is less" - ) - elif dims_to_append == 0: - return x - return x[(...,) + (None,) * dims_to_append] - - -class Identity(nn.Module): - """A placeholder identity operator that is argument-insensitive.""" - - def __init__(self, *args, **kwargs) -> None: # pylint: disable=unused-argument - super().__init__() - - # pylint: disable=unused-argument - def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: - return x diff --git a/mmaudio_x/LICENSE.txt b/mmaudio_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/mmaudio_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/mmaudio_x/README.md b/mmaudio_x/README.md deleted file mode 100644 index 964c76ea7d615f5287e9283e23df316dad8bfdd4..0000000000000000000000000000000000000000 --- a/mmaudio_x/README.md +++ /dev/null @@ -1,135 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. **Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). 
-            * `pool_image_paths`: The "image bank" to select from (list[str]).
-        * **Output:** `tuple[list[str], str]` (A tuple containing the list of selected image paths and a textual report.)
-    * `get_anticipatory_keyframe_prompt(...)`
-        * **Inputs:** Narrative and visual context for generating an image prompt.
-        * **Output:** `tuple[str, str]` (A tuple containing the prompt generated for the image model and a textual report.)
-    * `get_initial_motion_prompt(...)`
-        * **Inputs:** Narrative and visual context for the first video transition.
-        * **Output:** `tuple[str, str]` (A tuple containing the generated motion prompt and a textual report.)
-    * `get_transition_decision(...)`
-        * **Inputs:** Narrative and visual context for an intermediate video transition.
-        * **Output:** `tuple[dict, str]` (A tuple containing a dictionary `{"transition_type": "...", "motion_prompt": "..."}` and a textual report.)
-    * `generate_audio_prompts(...)`
-        * **Inputs:** Global narrative context.
-        * **Output:** `tuple[dict, str]` (A tuple containing a dictionary `{"music_prompt": "...", "sfx_prompt": "..."}` and a textual report.)
-
-### **flux_kontext_helpers.py (FluxPoolManager)**
-
-* **Purpose:** Specialist in high-quality image (keyframe) generation using the FluxKontext pipeline. Manages a pool of workers to optimize multi-GPU usage.
-* **Singleton Instance:** `flux_kontext_singleton`
-* **Constructor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)`
-    * Reads `configs/flux_config.yaml`.
-* **Public Method:**
-    * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)`
-        * **Inputs:**
-            * `prompt`: Text prompt that guides the generation (string).
-            * `reference_images`: List of `PIL.Image` objects used as visual reference.
-            * `width`, `height`: Output image dimensions (int).
-            * `seed`: Seed for reproducibility (int).
-            * `callback`: Optional callback function for monitoring progress.
-        * **Output:** `PIL.Image.Image` (The generated image object.)
-
-### **dreamo_helpers.py (DreamOAgent)**
-
-* **Purpose:** Specialist in high-quality image (keyframe) generation using the DreamO pipeline, with advanced editing and styling capabilities driven by reference images.
-* **Singleton Instance:** `dreamo_agent_singleton`
-* **Constructor:** `DreamOAgent(device_id: str = None)`
-    * Reads `configs/dreamo_config.yaml`.
-* **Public Method:**
-    * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)`
-        * **Inputs:**
-            * `prompt`: Text prompt that guides the generation (string).
-            * `reference_images`: List of `PIL.Image` objects used as visual reference. The internal logic assigns the first image as `style` and the remaining ones as `ip`.
-            * `width`, `height`: Output image dimensions (int).
-        * **Output:** `PIL.Image.Image` (The generated image object.)
-
-### **ltx_manager_helpers.py (LtxPoolManager)**
-
-* **Purpose:** Specialist in generating video fragments in latent space using the LTX-Video pipeline. Manages a pool of workers to optimize multi-GPU usage.
-* **Singleton Instance:** `ltx_manager_singleton`
-* **Constructor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)`
-    * Reads `ltx_global_config_file` and `ltx_model_config_file` to configure the pipeline.
-* **Public Method:**
-    * `generate_latent_fragment(**kwargs)`
-        * **Inputs:** Dictionary of keyword arguments (`kwargs`) containing all LTX pipeline parameters, including:
-            * `height`, `width`: Video dimensions (int).
-            * `video_total_frames`: Total number of frames to generate (int).
-            * `video_fps`: Frames per second (int).
-            * `motion_prompt`: Motion prompt (string).
-            * `conditioning_items_data`: List of `LatentConditioningItem` objects containing the conditioning latent tensors.
-            * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc.
-        * **Output:** `tuple[torch.Tensor, tuple]` (A tuple containing the generated latent tensor and the padding values used.)
-
-### **mmaudio_helper.py (MMAudioAgent)**
-
-* **Purpose:** Specialist in generating audio for a given video fragment.
-* **Singleton Instance:** `mmaudio_agent_singleton`
-* **Constructor:** `MMAudioAgent(workspace_dir: str, mmaudio_config_file: str, device_id: str = None)`
-    * Reads `configs/mmaudio_config.yaml`.
-* **Public Method:**
-    * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)`
-        * **Inputs:**
-            * `video_path`: Path to the silent video file (string).
-            * `prompt`: Text prompt that guides the audio generation (string).
-            * `negative_prompt`: Negative prompt for the audio (string).
-            * `duration_seconds`: Exact duration of the video (float).
-        * **Output:** `str` (The path to the new video file with the audio track merged in.)
-
----
-
-## 🔗 Original Projects and Attributions
-(The attributions and licenses section remains the same as defined previously.)
-
-### DreamO
-* **Original Repository:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO)
-...
-
-### LTX-Video
-* **Original Repository:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video)
-...
-
-### MMAudio
-* **Original Repository:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio)
-...
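As a quick illustration of how the helper APIs documented in the README above fit together, here is a minimal, hypothetical usage sketch. The method names and signatures follow the README; the import paths, file paths, and parameter values are assumptions for illustration and are not taken from the project's configs.

```python
# Hypothetical usage sketch of the helper singletons documented above.
# Import paths and all literal values are assumptions, not project defaults.
from PIL import Image

from flux_kontext_helpers import flux_kontext_singleton   # assumed import path
from mmaudio_helper import mmaudio_agent_singleton        # assumed import path

# 1) Generate a keyframe through the FluxKontext worker pool,
#    using the generate_image() signature documented in the README.
reference = Image.open("refs/character.png")
keyframe = flux_kontext_singleton.generate_image(
    prompt="A lighthouse at dusk, cinematic lighting",
    reference_images=[reference],
    width=1024,
    height=576,
    seed=42,
)
keyframe.save("keyframes/scene_01.png")

# 2) Add an audio track to an already-rendered silent fragment,
#    using the generate_audio_for_video() signature documented in the README.
final_path = mmaudio_agent_singleton.generate_audio_for_video(
    video_path="fragments/scene_01_silent.mp4",
    prompt="waves crashing, distant seagulls",
    negative_prompt="speech, music",
    duration_seconds=5.0,
)
print(f"Fragment with audio written to: {final_path}")
```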
\ No newline at end of file diff --git a/mmaudio_x/__init__.py b/mmaudio_x/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/data/__init__.py b/mmaudio_x/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/data/av_utils.py b/mmaudio_x/data/av_utils.py deleted file mode 100644 index 7d4945b9658b8208f039e72d78e1dac45ae5e12d..0000000000000000000000000000000000000000 --- a/mmaudio_x/data/av_utils.py +++ /dev/null @@ -1,136 +0,0 @@ -from dataclasses import dataclass -from fractions import Fraction -from pathlib import Path -from typing import Optional - -import av -import numpy as np -import torch -from av import AudioFrame - - -@dataclass -class VideoInfo: - duration_sec: float - fps: Fraction - clip_frames: torch.Tensor - sync_frames: torch.Tensor - all_frames: Optional[list[np.ndarray]] - - @property - def height(self): - return self.all_frames[0].shape[0] - - @property - def width(self): - return self.all_frames[0].shape[1] - - -def read_frames(video_path: Path, list_of_fps: list[float], start_sec: float, end_sec: float, - need_all_frames: bool) -> tuple[list[np.ndarray], list[np.ndarray], Fraction]: - output_frames = [[] for _ in list_of_fps] - next_frame_time_for_each_fps = [0.0 for _ in list_of_fps] - time_delta_for_each_fps = [1 / fps for fps in list_of_fps] - all_frames = [] - - # container = av.open(video_path) - with av.open(video_path) as container: - stream = container.streams.video[0] - fps = stream.guessed_rate - stream.thread_type = 'AUTO' - for packet in container.demux(stream): - for frame in packet.decode(): - frame_time = frame.time - if frame_time < start_sec: - continue - if frame_time > end_sec: - break - - frame_np = None - if need_all_frames: - frame_np = frame.to_ndarray(format='rgb24') - all_frames.append(frame_np) - - for i, _ in enumerate(list_of_fps): - this_time = frame_time - while this_time >= next_frame_time_for_each_fps[i]: - if frame_np is None: - frame_np = frame.to_ndarray(format='rgb24') - - output_frames[i].append(frame_np) - next_frame_time_for_each_fps[i] += time_delta_for_each_fps[i] - - output_frames = [np.stack(frames) for frames in output_frames] - return output_frames, all_frames, fps - - -def reencode_with_audio(video_info: VideoInfo, output_path: Path, audio: torch.Tensor, - sampling_rate: int): - container = av.open(output_path, 'w') - output_video_stream = container.add_stream('h264', video_info.fps) - output_video_stream.codec_context.bit_rate = 10 * 1e6 # 10 Mbps - output_video_stream.width = video_info.width - output_video_stream.height = video_info.height - output_video_stream.pix_fmt = 'yuv420p' - - output_audio_stream = container.add_stream('aac', sampling_rate) - - # encode video - for image in video_info.all_frames: - image = av.VideoFrame.from_ndarray(image) - packet = output_video_stream.encode(image) - container.mux(packet) - - for packet in output_video_stream.encode(): - container.mux(packet) - - # convert float tensor audio to numpy array - audio_np = audio.numpy().astype(np.float32) - audio_frame = AudioFrame.from_ndarray(audio_np, format='flt', layout='mono') - audio_frame.sample_rate = sampling_rate - - for packet in output_audio_stream.encode(audio_frame): - container.mux(packet) - - for packet in output_audio_stream.encode(): - container.mux(packet) - - container.close() - - -def remux_with_audio(video_path: Path, audio: 
torch.Tensor, output_path: Path, sampling_rate: int): - """ - NOTE: I don't think we can get the exact video duration right without re-encoding - so we are not using this but keeping it here for reference - """ - video = av.open(video_path) - output = av.open(output_path, 'w') - input_video_stream = video.streams.video[0] - output_video_stream = output.add_stream(template=input_video_stream) - output_audio_stream = output.add_stream('aac', sampling_rate) - - duration_sec = audio.shape[-1] / sampling_rate - - for packet in video.demux(input_video_stream): - # We need to skip the "flushing" packets that `demux` generates. - if packet.dts is None: - continue - # We need to assign the packet to the new stream. - packet.stream = output_video_stream - output.mux(packet) - - # convert float tensor audio to numpy array - audio_np = audio.numpy().astype(np.float32) - audio_frame = av.AudioFrame.from_ndarray(audio_np, format='flt', layout='mono') - audio_frame.sample_rate = sampling_rate - - for packet in output_audio_stream.encode(audio_frame): - output.mux(packet) - - for packet in output_audio_stream.encode(): - output.mux(packet) - - video.close() - output.close() - - output.close() diff --git a/mmaudio_x/eval_utils.py b/mmaudio_x/eval_utils.py deleted file mode 100644 index a5c9291f2687855b10b63b3f6e67e299c86cbbbe..0000000000000000000000000000000000000000 --- a/mmaudio_x/eval_utils.py +++ /dev/null @@ -1,217 +0,0 @@ -import dataclasses -import logging -from pathlib import Path -from typing import Optional - -import torch -from colorlog import ColoredFormatter -from torchvision.transforms import v2 - -from mmaudio.data.av_utils import VideoInfo, read_frames, reencode_with_audio -from mmaudio.model.flow_matching import FlowMatching -from mmaudio.model.networks import MMAudio -from mmaudio.model.sequence_config import (CONFIG_16K, CONFIG_44K, SequenceConfig) -from mmaudio.model.utils.features_utils import FeaturesUtils -from mmaudio.utils.download_utils import download_model_if_needed - -log = logging.getLogger() - - -@dataclasses.dataclass -class ModelConfig: - model_name: str - model_path: Path - vae_path: Path - bigvgan_16k_path: Optional[Path] - mode: str - synchformer_ckpt: Path = Path('./ext_weights/synchformer_state_dict.pth') - - @property - def seq_cfg(self) -> SequenceConfig: - if self.mode == '16k': - return CONFIG_16K - elif self.mode == '44k': - return CONFIG_44K - - def download_if_needed(self): - download_model_if_needed(self.model_path) - download_model_if_needed(self.vae_path) - if self.bigvgan_16k_path is not None: - download_model_if_needed(self.bigvgan_16k_path) - download_model_if_needed(self.synchformer_ckpt) - - -small_16k = ModelConfig(model_name='small_16k', - model_path=Path('./weights/mmaudio_small_16k.pth'), - vae_path=Path('./ext_weights/v1-16.pth'), - bigvgan_16k_path=Path('./ext_weights/best_netG.pt'), - mode='16k') -small_44k = ModelConfig(model_name='small_44k', - model_path=Path('./weights/mmaudio_small_44k.pth'), - vae_path=Path('./ext_weights/v1-44.pth'), - bigvgan_16k_path=None, - mode='44k') -medium_44k = ModelConfig(model_name='medium_44k', - model_path=Path('./weights/mmaudio_medium_44k.pth'), - vae_path=Path('./ext_weights/v1-44.pth'), - bigvgan_16k_path=None, - mode='44k') -large_44k = ModelConfig(model_name='large_44k', - model_path=Path('./weights/mmaudio_large_44k.pth'), - vae_path=Path('./ext_weights/v1-44.pth'), - bigvgan_16k_path=None, - mode='44k') -large_44k_v2 = ModelConfig(model_name='large_44k_v2', - 
model_path=Path('./weights/mmaudio_large_44k_v2.pth'), - vae_path=Path('./ext_weights/v1-44.pth'), - bigvgan_16k_path=None, - mode='44k') -all_model_cfg: dict[str, ModelConfig] = { - 'small_16k': small_16k, - 'small_44k': small_44k, - 'medium_44k': medium_44k, - 'large_44k': large_44k, - 'large_44k_v2': large_44k_v2, -} - - -def generate( - clip_video: Optional[torch.Tensor], - sync_video: Optional[torch.Tensor], - text: Optional[list[str]], - *, - negative_text: Optional[list[str]] = None, - feature_utils: FeaturesUtils, - net: MMAudio, - fm: FlowMatching, - rng: torch.Generator, - cfg_strength: float, - clip_batch_size_multiplier: int = 40, - sync_batch_size_multiplier: int = 40, -) -> torch.Tensor: - device = feature_utils.device - dtype = feature_utils.dtype - - bs = len(text) - if clip_video is not None: - clip_video = clip_video.to(device, dtype, non_blocking=True) - clip_features = feature_utils.encode_video_with_clip(clip_video, - batch_size=bs * - clip_batch_size_multiplier) - else: - clip_features = net.get_empty_clip_sequence(bs) - - if sync_video is not None: - sync_video = sync_video.to(device, dtype, non_blocking=True) - sync_features = feature_utils.encode_video_with_sync(sync_video, - batch_size=bs * - sync_batch_size_multiplier) - else: - sync_features = net.get_empty_sync_sequence(bs) - - if text is not None: - text_features = feature_utils.encode_text(text) - else: - text_features = net.get_empty_string_sequence(bs) - - if negative_text is not None: - assert len(negative_text) == bs - negative_text_features = feature_utils.encode_text(negative_text) - else: - negative_text_features = net.get_empty_string_sequence(bs) - - x0 = torch.randn(bs, - net.latent_seq_len, - net.latent_dim, - device=device, - dtype=dtype, - generator=rng) - preprocessed_conditions = net.preprocess_conditions(clip_features, sync_features, text_features) - empty_conditions = net.get_empty_conditions( - bs, negative_text_features=negative_text_features if negative_text is not None else None) - - cfg_ode_wrapper = lambda t, x: net.ode_wrapper(t, x, preprocessed_conditions, empty_conditions, - cfg_strength) - x1 = fm.to_data(cfg_ode_wrapper, x0) - x1 = net.unnormalize(x1) - spec = feature_utils.decode(x1) - audio = feature_utils.vocode(spec) - return audio - - -LOGFORMAT = " %(log_color)s%(levelname)-8s%(reset)s | %(log_color)s%(message)s%(reset)s" - - -def setup_eval_logging(log_level: int = logging.INFO): - logging.root.setLevel(log_level) - formatter = ColoredFormatter(LOGFORMAT) - stream = logging.StreamHandler() - stream.setLevel(log_level) - stream.setFormatter(formatter) - log = logging.getLogger() - log.setLevel(log_level) - log.addHandler(stream) - - -def load_video(video_path: Path, duration_sec: float, load_all_frames: bool = True) -> VideoInfo: - _CLIP_SIZE = 384 - _CLIP_FPS = 8.0 - - _SYNC_SIZE = 224 - _SYNC_FPS = 25.0 - - clip_transform = v2.Compose([ - v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC), - v2.ToImage(), - v2.ToDtype(torch.float32, scale=True), - ]) - - sync_transform = v2.Compose([ - v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC), - v2.CenterCrop(_SYNC_SIZE), - v2.ToImage(), - v2.ToDtype(torch.float32, scale=True), - v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), - ]) - - output_frames, all_frames, orig_fps = read_frames(video_path, - list_of_fps=[_CLIP_FPS, _SYNC_FPS], - start_sec=0, - end_sec=duration_sec, - need_all_frames=load_all_frames) - - clip_chunk, sync_chunk = output_frames - clip_chunk = 
torch.from_numpy(clip_chunk).permute(0, 3, 1, 2) - sync_chunk = torch.from_numpy(sync_chunk).permute(0, 3, 1, 2) - - clip_frames = clip_transform(clip_chunk) - sync_frames = sync_transform(sync_chunk) - - clip_length_sec = clip_frames.shape[0] / _CLIP_FPS - sync_length_sec = sync_frames.shape[0] / _SYNC_FPS - - if clip_length_sec < duration_sec: - log.warning(f'Clip video is too short: {clip_length_sec:.2f} < {duration_sec:.2f}') - log.warning(f'Truncating to {clip_length_sec:.2f} sec') - duration_sec = clip_length_sec - - if sync_length_sec < duration_sec: - log.warning(f'Sync video is too short: {sync_length_sec:.2f} < {duration_sec:.2f}') - log.warning(f'Truncating to {sync_length_sec:.2f} sec') - duration_sec = sync_length_sec - - clip_frames = clip_frames[:int(_CLIP_FPS * duration_sec)] - sync_frames = sync_frames[:int(_SYNC_FPS * duration_sec)] - - video_info = VideoInfo( - duration_sec=duration_sec, - fps=orig_fps, - clip_frames=clip_frames, - sync_frames=sync_frames, - all_frames=all_frames if load_all_frames else None, - ) - return video_info - - -def make_video(video_info: VideoInfo, output_path: Path, audio: torch.Tensor, sampling_rate: int): - reencode_with_audio(video_info, output_path, audio, sampling_rate) diff --git a/mmaudio_x/ext/__init__.py b/mmaudio_x/ext/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/mmaudio_x/ext/autoencoder/__init__.py b/mmaudio_x/ext/autoencoder/__init__.py deleted file mode 100644 index e5a876391c1e48970e93ff45f212f21f86d4d0c9..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/autoencoder/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .autoencoder import AutoEncoderModule diff --git a/mmaudio_x/ext/autoencoder/autoencoder.py b/mmaudio_x/ext/autoencoder/autoencoder.py deleted file mode 100644 index 5b444656112f9c4e5d9493c8fce40c118a2e31d5..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/autoencoder/autoencoder.py +++ /dev/null @@ -1,52 +0,0 @@ -from typing import Literal, Optional - -import torch -import torch.nn as nn - -from mmaudio.ext.autoencoder.vae import VAE, get_my_vae -from mmaudio.ext.bigvgan import BigVGAN -from mmaudio.ext.bigvgan_v2.bigvgan import BigVGAN as BigVGANv2 -from mmaudio.model.utils.distributions import DiagonalGaussianDistribution - - -class AutoEncoderModule(nn.Module): - - def __init__(self, - *, - vae_ckpt_path, - vocoder_ckpt_path: Optional[str] = None, - mode: Literal['16k', '44k'], - need_vae_encoder: bool = True): - super().__init__() - self.vae: VAE = get_my_vae(mode).eval() - vae_state_dict = torch.load(vae_ckpt_path, weights_only=True, map_location='cpu') - self.vae.load_state_dict(vae_state_dict, strict=False) - self.vae.remove_weight_norm() - - if mode == '16k': - assert vocoder_ckpt_path is not None - self.vocoder = BigVGAN(vocoder_ckpt_path).eval() - elif mode == '44k': - self.vocoder = BigVGANv2.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', - use_cuda_kernel=False) - self.vocoder.remove_weight_norm() - else: - raise ValueError(f'Unknown mode: {mode}') - - for param in self.parameters(): - param.requires_grad = False - - if not need_vae_encoder: - del self.vae.encoder - - @torch.inference_mode() - def encode(self, x: torch.Tensor) -> DiagonalGaussianDistribution: - return self.vae.encode(x) - - @torch.inference_mode() - def decode(self, z: torch.Tensor) -> torch.Tensor: - return self.vae.decode(z) - - @torch.inference_mode() - 
def vocode(self, spec: torch.Tensor) -> torch.Tensor: - return self.vocoder(spec) diff --git a/mmaudio_x/ext/autoencoder/edm2_utils.py b/mmaudio_x/ext/autoencoder/edm2_utils.py deleted file mode 100644 index a18ffba5cc42214fddf1300034be2eff2760025c..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/autoencoder/edm2_utils.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# This work is licensed under a Creative Commons -# Attribution-NonCommercial-ShareAlike 4.0 International License. -# You should have received a copy of the license along with this -# work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/ -"""Improved diffusion model architecture proposed in the paper -"Analyzing and Improving the Training Dynamics of Diffusion Models".""" - -import numpy as np -import torch - -#---------------------------------------------------------------------------- -# Variant of constant() that inherits dtype and device from the given -# reference tensor by default. - -_constant_cache = dict() - - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - - -def const_like(ref, value, shape=None, dtype=None, device=None, memory_format=None): - if dtype is None: - dtype = ref.dtype - if device is None: - device = ref.device - return constant(value, shape=shape, dtype=dtype, device=device, memory_format=memory_format) - - -#---------------------------------------------------------------------------- -# Normalize given tensor to unit magnitude with respect to the given -# dimensions. Default = all dimensions except the first. - - -def normalize(x, dim=None, eps=1e-4): - if dim is None: - dim = list(range(1, x.ndim)) - norm = torch.linalg.vector_norm(x, dim=dim, keepdim=True, dtype=torch.float32) - norm = torch.add(eps, norm, alpha=np.sqrt(norm.numel() / x.numel())) - return x / norm.to(x.dtype) - - -class Normalize(torch.nn.Module): - - def __init__(self, dim=None, eps=1e-4): - super().__init__() - self.dim = dim - self.eps = eps - - def forward(self, x): - return normalize(x, dim=self.dim, eps=self.eps) - - -#---------------------------------------------------------------------------- -# Upsample or downsample the given tensor with the given filter, -# or keep it as is. 
- - -def resample(x, f=[1, 1], mode='keep'): - if mode == 'keep': - return x - f = np.float32(f) - assert f.ndim == 1 and len(f) % 2 == 0 - pad = (len(f) - 1) // 2 - f = f / f.sum() - f = np.outer(f, f)[np.newaxis, np.newaxis, :, :] - f = const_like(x, f) - c = x.shape[1] - if mode == 'down': - return torch.nn.functional.conv2d(x, - f.tile([c, 1, 1, 1]), - groups=c, - stride=2, - padding=(pad, )) - assert mode == 'up' - return torch.nn.functional.conv_transpose2d(x, (f * 4).tile([c, 1, 1, 1]), - groups=c, - stride=2, - padding=(pad, )) - - -#---------------------------------------------------------------------------- -# Magnitude-preserving SiLU (Equation 81). - - -def mp_silu(x): - return torch.nn.functional.silu(x) / 0.596 - - -class MPSiLU(torch.nn.Module): - - def forward(self, x): - return mp_silu(x) - - -#---------------------------------------------------------------------------- -# Magnitude-preserving sum (Equation 88). - - -def mp_sum(a, b, t=0.5): - return a.lerp(b, t) / np.sqrt((1 - t)**2 + t**2) - - -#---------------------------------------------------------------------------- -# Magnitude-preserving concatenation (Equation 103). - - -def mp_cat(a, b, dim=1, t=0.5): - Na = a.shape[dim] - Nb = b.shape[dim] - C = np.sqrt((Na + Nb) / ((1 - t)**2 + t**2)) - wa = C / np.sqrt(Na) * (1 - t) - wb = C / np.sqrt(Nb) * t - return torch.cat([wa * a, wb * b], dim=dim) - - -#---------------------------------------------------------------------------- -# Magnitude-preserving convolution or fully-connected layer (Equation 47) -# with force weight normalization (Equation 66). - - -class MPConv1D(torch.nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size): - super().__init__() - self.out_channels = out_channels - self.weight = torch.nn.Parameter(torch.randn(out_channels, in_channels, kernel_size)) - - self.weight_norm_removed = False - - def forward(self, x, gain=1): - assert self.weight_norm_removed, 'call remove_weight_norm() before inference' - - w = self.weight * gain - if w.ndim == 2: - return x @ w.t() - assert w.ndim == 3 - return torch.nn.functional.conv1d(x, w, padding=(w.shape[-1] // 2, )) - - def remove_weight_norm(self): - w = self.weight.to(torch.float32) - w = normalize(w) # traditional weight normalization - w = w / np.sqrt(w[0].numel()) - w = w.to(self.weight.dtype) - self.weight.data.copy_(w) - - self.weight_norm_removed = True - return self diff --git a/mmaudio_x/ext/autoencoder/vae.py b/mmaudio_x/ext/autoencoder/vae.py deleted file mode 100644 index 204c2e01cf9fc89eb718f8aa266a1c6a7e443312..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/autoencoder/vae.py +++ /dev/null @@ -1,373 +0,0 @@ -import logging -from typing import Optional - -import torch -import torch.nn as nn - -from mmaudio.ext.autoencoder.edm2_utils import MPConv1D -from mmaudio.ext.autoencoder.vae_modules import (AttnBlock1D, Downsample1D, ResnetBlock1D, - Upsample1D, nonlinearity) -from mmaudio.model.utils.distributions import DiagonalGaussianDistribution - -log = logging.getLogger() - -DATA_MEAN_80D = [ - -1.6058, -1.3676, -1.2520, -1.2453, -1.2078, -1.2224, -1.2419, -1.2439, -1.2922, -1.2927, - -1.3170, -1.3543, -1.3401, -1.3836, -1.3907, -1.3912, -1.4313, -1.4152, -1.4527, -1.4728, - -1.4568, -1.5101, -1.5051, -1.5172, -1.5623, -1.5373, -1.5746, -1.5687, -1.6032, -1.6131, - -1.6081, -1.6331, -1.6489, -1.6489, -1.6700, -1.6738, -1.6953, -1.6969, -1.7048, -1.7280, - -1.7361, -1.7495, -1.7658, -1.7814, -1.7889, -1.8064, -1.8221, -1.8377, -1.8417, -1.8643, - -1.8857, -1.8929, 
-1.9173, -1.9379, -1.9531, -1.9673, -1.9824, -2.0042, -2.0215, -2.0436, - -2.0766, -2.1064, -2.1418, -2.1855, -2.2319, -2.2767, -2.3161, -2.3572, -2.3954, -2.4282, - -2.4659, -2.5072, -2.5552, -2.6074, -2.6584, -2.7107, -2.7634, -2.8266, -2.8981, -2.9673 -] - -DATA_STD_80D = [ - 1.0291, 1.0411, 1.0043, 0.9820, 0.9677, 0.9543, 0.9450, 0.9392, 0.9343, 0.9297, 0.9276, 0.9263, - 0.9242, 0.9254, 0.9232, 0.9281, 0.9263, 0.9315, 0.9274, 0.9247, 0.9277, 0.9199, 0.9188, 0.9194, - 0.9160, 0.9161, 0.9146, 0.9161, 0.9100, 0.9095, 0.9145, 0.9076, 0.9066, 0.9095, 0.9032, 0.9043, - 0.9038, 0.9011, 0.9019, 0.9010, 0.8984, 0.8983, 0.8986, 0.8961, 0.8962, 0.8978, 0.8962, 0.8973, - 0.8993, 0.8976, 0.8995, 0.9016, 0.8982, 0.8972, 0.8974, 0.8949, 0.8940, 0.8947, 0.8936, 0.8939, - 0.8951, 0.8956, 0.9017, 0.9167, 0.9436, 0.9690, 1.0003, 1.0225, 1.0381, 1.0491, 1.0545, 1.0604, - 1.0761, 1.0929, 1.1089, 1.1196, 1.1176, 1.1156, 1.1117, 1.1070 -] - -DATA_MEAN_128D = [ - -3.3462, -2.6723, -2.4893, -2.3143, -2.2664, -2.3317, -2.1802, -2.4006, -2.2357, -2.4597, - -2.3717, -2.4690, -2.5142, -2.4919, -2.6610, -2.5047, -2.7483, -2.5926, -2.7462, -2.7033, - -2.7386, -2.8112, -2.7502, -2.9594, -2.7473, -3.0035, -2.8891, -2.9922, -2.9856, -3.0157, - -3.1191, -2.9893, -3.1718, -3.0745, -3.1879, -3.2310, -3.1424, -3.2296, -3.2791, -3.2782, - -3.2756, -3.3134, -3.3509, -3.3750, -3.3951, -3.3698, -3.4505, -3.4509, -3.5089, -3.4647, - -3.5536, -3.5788, -3.5867, -3.6036, -3.6400, -3.6747, -3.7072, -3.7279, -3.7283, -3.7795, - -3.8259, -3.8447, -3.8663, -3.9182, -3.9605, -3.9861, -4.0105, -4.0373, -4.0762, -4.1121, - -4.1488, -4.1874, -4.2461, -4.3170, -4.3639, -4.4452, -4.5282, -4.6297, -4.7019, -4.7960, - -4.8700, -4.9507, -5.0303, -5.0866, -5.1634, -5.2342, -5.3242, -5.4053, -5.4927, -5.5712, - -5.6464, -5.7052, -5.7619, -5.8410, -5.9188, -6.0103, -6.0955, -6.1673, -6.2362, -6.3120, - -6.3926, -6.4797, -6.5565, -6.6511, -6.8130, -6.9961, -7.1275, -7.2457, -7.3576, -7.4663, - -7.6136, -7.7469, -7.8815, -8.0132, -8.1515, -8.3071, -8.4722, -8.7418, -9.3975, -9.6628, - -9.7671, -9.8863, -9.9992, -10.0860, -10.1709, -10.5418, -11.2795, -11.3861 -] - -DATA_STD_128D = [ - 2.3804, 2.4368, 2.3772, 2.3145, 2.2803, 2.2510, 2.2316, 2.2083, 2.1996, 2.1835, 2.1769, 2.1659, - 2.1631, 2.1618, 2.1540, 2.1606, 2.1571, 2.1567, 2.1612, 2.1579, 2.1679, 2.1683, 2.1634, 2.1557, - 2.1668, 2.1518, 2.1415, 2.1449, 2.1406, 2.1350, 2.1313, 2.1415, 2.1281, 2.1352, 2.1219, 2.1182, - 2.1327, 2.1195, 2.1137, 2.1080, 2.1179, 2.1036, 2.1087, 2.1036, 2.1015, 2.1068, 2.0975, 2.0991, - 2.0902, 2.1015, 2.0857, 2.0920, 2.0893, 2.0897, 2.0910, 2.0881, 2.0925, 2.0873, 2.0960, 2.0900, - 2.0957, 2.0958, 2.0978, 2.0936, 2.0886, 2.0905, 2.0845, 2.0855, 2.0796, 2.0840, 2.0813, 2.0817, - 2.0838, 2.0840, 2.0917, 2.1061, 2.1431, 2.1976, 2.2482, 2.3055, 2.3700, 2.4088, 2.4372, 2.4609, - 2.4731, 2.4847, 2.5072, 2.5451, 2.5772, 2.6147, 2.6529, 2.6596, 2.6645, 2.6726, 2.6803, 2.6812, - 2.6899, 2.6916, 2.6931, 2.6998, 2.7062, 2.7262, 2.7222, 2.7158, 2.7041, 2.7485, 2.7491, 2.7451, - 2.7485, 2.7233, 2.7297, 2.7233, 2.7145, 2.6958, 2.6788, 2.6439, 2.6007, 2.4786, 2.2469, 2.1877, - 2.1392, 2.0717, 2.0107, 1.9676, 1.9140, 1.7102, 0.9101, 0.7164 -] - - -class VAE(nn.Module): - - def __init__( - self, - *, - data_dim: int, - embed_dim: int, - hidden_dim: int, - ): - super().__init__() - - if data_dim == 80: - # self.data_mean = torch.tensor(DATA_MEAN_80D, dtype=torch.float32).cuda() - # self.data_std = torch.tensor(DATA_STD_80D, dtype=torch.float32).cuda() - 
self.register_buffer('data_mean', torch.tensor(DATA_MEAN_80D, dtype=torch.float32)) - self.register_buffer('data_std', torch.tensor(DATA_STD_80D, dtype=torch.float32)) - elif data_dim == 128: - # torch.tensor(DATA_MEAN_128D, dtype=torch.float32).cuda() - # self.data_std = torch.tensor(DATA_STD_128D, dtype=torch.float32).cuda() - self.register_buffer('data_mean', torch.tensor(DATA_MEAN_128D, dtype=torch.float32)) - self.register_buffer('data_std', torch.tensor(DATA_STD_128D, dtype=torch.float32)) - - self.data_mean = self.data_mean.view(1, -1, 1) - self.data_std = self.data_std.view(1, -1, 1) - - self.encoder = Encoder1D( - dim=hidden_dim, - ch_mult=(1, 2, 4), - num_res_blocks=2, - attn_layers=[3], - down_layers=[0], - in_dim=data_dim, - embed_dim=embed_dim, - ) - self.decoder = Decoder1D( - dim=hidden_dim, - ch_mult=(1, 2, 4), - num_res_blocks=2, - attn_layers=[3], - down_layers=[0], - in_dim=data_dim, - out_dim=data_dim, - embed_dim=embed_dim, - ) - - self.embed_dim = embed_dim - # self.quant_conv = nn.Conv1d(2 * embed_dim, 2 * embed_dim, 1) - # self.post_quant_conv = nn.Conv1d(embed_dim, embed_dim, 1) - - self.initialize_weights() - - def initialize_weights(self): - pass - - def encode(self, x: torch.Tensor, normalize: bool = True) -> DiagonalGaussianDistribution: - if normalize: - x = self.normalize(x) - moments = self.encoder(x) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z: torch.Tensor, unnormalize: bool = True) -> torch.Tensor: - dec = self.decoder(z) - if unnormalize: - dec = self.unnormalize(dec) - return dec - - def normalize(self, x: torch.Tensor) -> torch.Tensor: - return (x - self.data_mean) / self.data_std - - def unnormalize(self, x: torch.Tensor) -> torch.Tensor: - return x * self.data_std + self.data_mean - - def forward( - self, - x: torch.Tensor, - sample_posterior: bool = True, - rng: Optional[torch.Generator] = None, - normalize: bool = True, - unnormalize: bool = True, - ) -> tuple[torch.Tensor, DiagonalGaussianDistribution]: - - posterior = self.encode(x, normalize=normalize) - if sample_posterior: - z = posterior.sample(rng) - else: - z = posterior.mode() - dec = self.decode(z, unnormalize=unnormalize) - return dec, posterior - - def load_weights(self, src_dict) -> None: - self.load_state_dict(src_dict, strict=True) - - @property - def device(self) -> torch.device: - return next(self.parameters()).device - - def get_last_layer(self): - return self.decoder.conv_out.weight - - def remove_weight_norm(self): - for name, m in self.named_modules(): - if isinstance(m, MPConv1D): - m.remove_weight_norm() - log.debug(f"Removed weight norm from {name}") - return self - - -class Encoder1D(nn.Module): - - def __init__(self, - *, - dim: int, - ch_mult: tuple[int] = (1, 2, 4, 8), - num_res_blocks: int, - attn_layers: list[int] = [], - down_layers: list[int] = [], - resamp_with_conv: bool = True, - in_dim: int, - embed_dim: int, - double_z: bool = True, - kernel_size: int = 3, - clip_act: float = 256.0): - super().__init__() - self.dim = dim - self.num_layers = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.in_channels = in_dim - self.clip_act = clip_act - self.down_layers = down_layers - self.attn_layers = attn_layers - self.conv_in = MPConv1D(in_dim, self.dim, kernel_size=kernel_size) - - in_ch_mult = (1, ) + tuple(ch_mult) - self.in_ch_mult = in_ch_mult - # downsampling - self.down = nn.ModuleList() - for i_level in range(self.num_layers): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = dim * 
in_ch_mult[i_level] - block_out = dim * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - ResnetBlock1D(in_dim=block_in, - out_dim=block_out, - kernel_size=kernel_size, - use_norm=True)) - block_in = block_out - if i_level in attn_layers: - attn.append(AttnBlock1D(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level in down_layers: - down.downsample = Downsample1D(block_in, resamp_with_conv) - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock1D(in_dim=block_in, - out_dim=block_in, - kernel_size=kernel_size, - use_norm=True) - self.mid.attn_1 = AttnBlock1D(block_in) - self.mid.block_2 = ResnetBlock1D(in_dim=block_in, - out_dim=block_in, - kernel_size=kernel_size, - use_norm=True) - - # end - self.conv_out = MPConv1D(block_in, - 2 * embed_dim if double_z else embed_dim, - kernel_size=kernel_size) - - self.learnable_gain = nn.Parameter(torch.zeros([])) - - def forward(self, x): - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_layers): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1]) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - h = h.clamp(-self.clip_act, self.clip_act) - hs.append(h) - if i_level in self.down_layers: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h) - h = self.mid.attn_1(h) - h = self.mid.block_2(h) - h = h.clamp(-self.clip_act, self.clip_act) - - # end - h = nonlinearity(h) - h = self.conv_out(h, gain=(self.learnable_gain + 1)) - return h - - -class Decoder1D(nn.Module): - - def __init__(self, - *, - dim: int, - out_dim: int, - ch_mult: tuple[int] = (1, 2, 4, 8), - num_res_blocks: int, - attn_layers: list[int] = [], - down_layers: list[int] = [], - kernel_size: int = 3, - resamp_with_conv: bool = True, - in_dim: int, - embed_dim: int, - clip_act: float = 256.0): - super().__init__() - self.ch = dim - self.num_layers = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.in_channels = in_dim - self.clip_act = clip_act - self.down_layers = [i + 1 for i in down_layers] # each downlayer add one - - # compute in_ch_mult, block_in and curr_res at lowest res - block_in = dim * ch_mult[self.num_layers - 1] - - # z to block_in - self.conv_in = MPConv1D(embed_dim, block_in, kernel_size=kernel_size) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock1D(in_dim=block_in, out_dim=block_in, use_norm=True) - self.mid.attn_1 = AttnBlock1D(block_in) - self.mid.block_2 = ResnetBlock1D(in_dim=block_in, out_dim=block_in, use_norm=True) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_layers)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = dim * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - block.append(ResnetBlock1D(in_dim=block_in, out_dim=block_out, use_norm=True)) - block_in = block_out - if i_level in attn_layers: - attn.append(AttnBlock1D(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level in self.down_layers: - up.upsample = Upsample1D(block_in, resamp_with_conv) - self.up.insert(0, up) # prepend to get consistent order - - # end - self.conv_out = MPConv1D(block_in, out_dim, kernel_size=kernel_size) - self.learnable_gain = nn.Parameter(torch.zeros([])) - - def forward(self, z): - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h) - h = self.mid.attn_1(h) - h = 
self.mid.block_2(h) - h = h.clamp(-self.clip_act, self.clip_act) - - # upsampling - for i_level in reversed(range(self.num_layers)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](h) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - h = h.clamp(-self.clip_act, self.clip_act) - if i_level in self.down_layers: - h = self.up[i_level].upsample(h) - - h = nonlinearity(h) - h = self.conv_out(h, gain=(self.learnable_gain + 1)) - return h - - -def VAE_16k(**kwargs) -> VAE: - return VAE(data_dim=80, embed_dim=20, hidden_dim=384, **kwargs) - - -def VAE_44k(**kwargs) -> VAE: - return VAE(data_dim=128, embed_dim=40, hidden_dim=512, **kwargs) - - -def get_my_vae(name: str, **kwargs) -> VAE: - if name == '16k': - return VAE_16k(**kwargs) - if name == '44k': - return VAE_44k(**kwargs) - raise ValueError(f'Unknown model: {name}') - - -if __name__ == '__main__': - network = get_my_vae('standard') - - # print the number of parameters in terms of millions - num_params = sum(p.numel() for p in network.parameters()) / 1e6 - print(f'Number of parameters: {num_params:.2f}M') diff --git a/mmaudio_x/ext/autoencoder/vae_modules.py b/mmaudio_x/ext/autoencoder/vae_modules.py deleted file mode 100644 index c59ff41e86303e518688fd3f56ade08f4550f2aa..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/autoencoder/vae_modules.py +++ /dev/null @@ -1,117 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange - -from mmaudio.ext.autoencoder.edm2_utils import (MPConv1D, mp_silu, mp_sum, normalize) - - -def nonlinearity(x): - # swish - return mp_silu(x) - - -class ResnetBlock1D(nn.Module): - - def __init__(self, *, in_dim, out_dim=None, conv_shortcut=False, kernel_size=3, use_norm=True): - super().__init__() - self.in_dim = in_dim - out_dim = in_dim if out_dim is None else out_dim - self.out_dim = out_dim - self.use_conv_shortcut = conv_shortcut - self.use_norm = use_norm - - self.conv1 = MPConv1D(in_dim, out_dim, kernel_size=kernel_size) - self.conv2 = MPConv1D(out_dim, out_dim, kernel_size=kernel_size) - if self.in_dim != self.out_dim: - if self.use_conv_shortcut: - self.conv_shortcut = MPConv1D(in_dim, out_dim, kernel_size=kernel_size) - else: - self.nin_shortcut = MPConv1D(in_dim, out_dim, kernel_size=1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - - # pixel norm - if self.use_norm: - x = normalize(x, dim=1) - - h = x - h = nonlinearity(h) - h = self.conv1(h) - - h = nonlinearity(h) - h = self.conv2(h) - - if self.in_dim != self.out_dim: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return mp_sum(x, h, t=0.3) - - -class AttnBlock1D(nn.Module): - - def __init__(self, in_channels, num_heads=1): - super().__init__() - self.in_channels = in_channels - - self.num_heads = num_heads - self.qkv = MPConv1D(in_channels, in_channels * 3, kernel_size=1) - self.proj_out = MPConv1D(in_channels, in_channels, kernel_size=1) - - def forward(self, x): - h = x - y = self.qkv(h) - y = y.reshape(y.shape[0], self.num_heads, -1, 3, y.shape[-1]) - q, k, v = normalize(y, dim=2).unbind(3) - - q = rearrange(q, 'b h c l -> b h l c') - k = rearrange(k, 'b h c l -> b h l c') - v = rearrange(v, 'b h c l -> b h l c') - - h = F.scaled_dot_product_attention(q, k, v) - h = rearrange(h, 'b h l c -> b (h c) l') - - h = self.proj_out(h) - - return mp_sum(x, h, t=0.3) - - -class Upsample1D(nn.Module): - - def __init__(self, in_channels, with_conv): - super().__init__() - 
self.with_conv = with_conv - if self.with_conv: - self.conv = MPConv1D(in_channels, in_channels, kernel_size=3) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2.0, mode='nearest-exact') # support 3D tensor(B,C,T) - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample1D(nn.Module): - - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv1 = MPConv1D(in_channels, in_channels, kernel_size=1) - self.conv2 = MPConv1D(in_channels, in_channels, kernel_size=1) - - def forward(self, x): - - if self.with_conv: - x = self.conv1(x) - - x = F.avg_pool1d(x, kernel_size=2, stride=2) - - if self.with_conv: - x = self.conv2(x) - - return x diff --git a/mmaudio_x/ext/bigvgan/LICENSE b/mmaudio_x/ext/bigvgan/LICENSE deleted file mode 100644 index e9663595cc28938f88d6299acd3ba791542e4c0c..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2022 NVIDIA CORPORATION. - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/__init__.py b/mmaudio_x/ext/bigvgan/__init__.py deleted file mode 100644 index 00f13e9bf9ccb0b4ec37e1c70869f9a9a538871f..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .bigvgan import BigVGAN diff --git a/mmaudio_x/ext/bigvgan/activations.py b/mmaudio_x/ext/bigvgan/activations.py deleted file mode 100644 index 61f2808a5466b3cf4d041059700993af5527dd29..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/activations.py +++ /dev/null @@ -1,120 +0,0 @@ -# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license. -# LICENSE is in incl_licenses directory. - -import torch -from torch import nn, sin, pow -from torch.nn import Parameter - - -class Snake(nn.Module): - ''' - Implementation of a sine-based periodic activation function - Shape: - - Input: (B, C, T) - - Output: (B, C, T), same shape as the input - Parameters: - - alpha - trainable parameter - References: - - This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: - https://arxiv.org/abs/2006.08195 - Examples: - >>> a1 = snake(256) - >>> x = torch.randn(256) - >>> x = a1(x) - ''' - def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False): - ''' - Initialization. 
- INPUT: - - in_features: shape of the input - - alpha: trainable parameter - alpha is initialized to 1 by default, higher values = higher-frequency. - alpha will be trained along with the rest of your model. - ''' - super(Snake, self).__init__() - self.in_features = in_features - - # initialize alpha - self.alpha_logscale = alpha_logscale - if self.alpha_logscale: # log scale alphas initialized to zeros - self.alpha = Parameter(torch.zeros(in_features) * alpha) - else: # linear scale alphas initialized to ones - self.alpha = Parameter(torch.ones(in_features) * alpha) - - self.alpha.requires_grad = alpha_trainable - - self.no_div_by_zero = 0.000000001 - - def forward(self, x): - ''' - Forward pass of the function. - Applies the function to the input elementwise. - Snake ∶= x + 1/a * sin^2 (xa) - ''' - alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T] - if self.alpha_logscale: - alpha = torch.exp(alpha) - x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2) - - return x - - -class SnakeBeta(nn.Module): - ''' - A modified Snake function which uses separate parameters for the magnitude of the periodic components - Shape: - - Input: (B, C, T) - - Output: (B, C, T), same shape as the input - Parameters: - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - References: - - This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: - https://arxiv.org/abs/2006.08195 - Examples: - >>> a1 = snakebeta(256) - >>> x = torch.randn(256) - >>> x = a1(x) - ''' - def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False): - ''' - Initialization. - INPUT: - - in_features: shape of the input - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - alpha is initialized to 1 by default, higher values = higher-frequency. - beta is initialized to 1 by default, higher values = higher-magnitude. - alpha will be trained along with the rest of your model. - ''' - super(SnakeBeta, self).__init__() - self.in_features = in_features - - # initialize alpha - self.alpha_logscale = alpha_logscale - if self.alpha_logscale: # log scale alphas initialized to zeros - self.alpha = Parameter(torch.zeros(in_features) * alpha) - self.beta = Parameter(torch.zeros(in_features) * alpha) - else: # linear scale alphas initialized to ones - self.alpha = Parameter(torch.ones(in_features) * alpha) - self.beta = Parameter(torch.ones(in_features) * alpha) - - self.alpha.requires_grad = alpha_trainable - self.beta.requires_grad = alpha_trainable - - self.no_div_by_zero = 0.000000001 - - def forward(self, x): - ''' - Forward pass of the function. - Applies the function to the input elementwise. 
- SnakeBeta ∶= x + 1/b * sin^2 (xa) - ''' - alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T] - beta = self.beta.unsqueeze(0).unsqueeze(-1) - if self.alpha_logscale: - alpha = torch.exp(alpha) - beta = torch.exp(beta) - x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2) - - return x \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/alias_free_torch/__init__.py b/mmaudio_x/ext/bigvgan/alias_free_torch/__init__.py deleted file mode 100644 index a2318b63198250856809c0cb46210a4147b829bc..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/alias_free_torch/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -from .filter import * -from .resample import * -from .act import * \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/alias_free_torch/act.py b/mmaudio_x/ext/bigvgan/alias_free_torch/act.py deleted file mode 100644 index 028debd697dd60458aae75010057df038bd3518a..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/alias_free_torch/act.py +++ /dev/null @@ -1,28 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch.nn as nn -from .resample import UpSample1d, DownSample1d - - -class Activation1d(nn.Module): - def __init__(self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/alias_free_torch/filter.py b/mmaudio_x/ext/bigvgan/alias_free_torch/filter.py deleted file mode 100644 index 7ad6ea87c1f10ddd94c544037791d7a4634d5ae1..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/alias_free_torch/filter.py +++ /dev/null @@ -1,95 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -if 'sinc' in dir(torch): - sinc = torch.sinc -else: - # This code is adopted from adefossez's julius.core.sinc under the MIT License - # https://adefossez.github.io/julius/julius/core.html - # LICENSE is in incl_licenses directory. - def sinc(x: torch.Tensor): - """ - Implementation of sinc, i.e. sin(pi * x) / (pi * x) - __Warning__: Different to julius.sinc, the input is multiplied by `pi`! - """ - return torch.where(x == 0, - torch.tensor(1., device=x.device, dtype=x.dtype), - torch.sin(math.pi * x) / math.pi / x) - - -# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License -# https://adefossez.github.io/julius/julius/lowpass.html -# LICENSE is in incl_licenses directory. 
-def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size] - even = (kernel_size % 2 == 0) - half_size = kernel_size // 2 - - #For kaiser window - delta_f = 4 * half_width - A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 - if A > 50.: - beta = 0.1102 * (A - 8.7) - elif A >= 21.: - beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.) - else: - beta = 0. - window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) - - # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio - if even: - time = (torch.arange(-half_size, half_size) + 0.5) - else: - time = torch.arange(kernel_size) - half_size - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filter = filter_.view(1, 1, kernel_size) - - return filter - - -class LowPassFilter1d(nn.Module): - def __init__(self, - cutoff=0.5, - half_width=0.6, - stride: int = 1, - padding: bool = True, - padding_mode: str = 'replicate', - kernel_size: int = 12): - # kernel_size should be even number for stylegan3 setup, - # in this implementation, odd number is also possible. - super().__init__() - if cutoff < -0.: - raise ValueError("Minimum cutoff must be larger than zero.") - if cutoff > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.kernel_size = kernel_size - self.even = (kernel_size % 2 == 0) - self.pad_left = kernel_size // 2 - int(self.even) - self.pad_right = kernel_size // 2 - self.stride = stride - self.padding = padding - self.padding_mode = padding_mode - filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) - self.register_buffer("filter", filter) - - #input [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - if self.padding: - x = F.pad(x, (self.pad_left, self.pad_right), - mode=self.padding_mode) - out = F.conv1d(x, self.filter.expand(C, -1, -1), - stride=self.stride, groups=C) - - return out \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/alias_free_torch/resample.py b/mmaudio_x/ext/bigvgan/alias_free_torch/resample.py deleted file mode 100644 index 750e6c3402cc5ac939c4b9d075246562e0e1d1a7..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/alias_free_torch/resample.py +++ /dev/null @@ -1,49 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. 
- -import torch.nn as nn -from torch.nn import functional as F -from .filter import LowPassFilter1d -from .filter import kaiser_sinc_filter1d - - -class UpSample1d(nn.Module): - def __init__(self, ratio=2, kernel_size=None): - super().__init__() - self.ratio = ratio - self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size - self.stride = ratio - self.pad = self.kernel_size // ratio - 1 - self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2 - self.pad_right = self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2 - filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio, - half_width=0.6 / ratio, - kernel_size=self.kernel_size) - self.register_buffer("filter", filter) - - # x: [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - x = F.pad(x, (self.pad, self.pad), mode='replicate') - x = self.ratio * F.conv_transpose1d( - x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) - x = x[..., self.pad_left:-self.pad_right] - - return x - - -class DownSample1d(nn.Module): - def __init__(self, ratio=2, kernel_size=None): - super().__init__() - self.ratio = ratio - self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size - self.lowpass = LowPassFilter1d(cutoff=0.5 / ratio, - half_width=0.6 / ratio, - stride=ratio, - kernel_size=self.kernel_size) - - def forward(self, x): - xx = self.lowpass(x) - - return xx \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/bigvgan.py b/mmaudio_x/ext/bigvgan/bigvgan.py deleted file mode 100644 index 032ea1d03e96165571c9ae22d66e00911a605870..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/bigvgan.py +++ /dev/null @@ -1,32 +0,0 @@ -from pathlib import Path - -import torch -import torch.nn as nn -from omegaconf import OmegaConf - -from mmaudio.ext.bigvgan.models import BigVGANVocoder - -_bigvgan_vocoder_path = Path(__file__).parent / 'bigvgan_vocoder.yml' - - -class BigVGAN(nn.Module): - - def __init__(self, ckpt_path, config_path=_bigvgan_vocoder_path): - super().__init__() - vocoder_cfg = OmegaConf.load(config_path) - self.vocoder = BigVGANVocoder(vocoder_cfg).eval() - vocoder_ckpt = torch.load(ckpt_path, map_location='cpu', weights_only=True)['generator'] - self.vocoder.load_state_dict(vocoder_ckpt) - - self.weight_norm_removed = False - self.remove_weight_norm() - - @torch.inference_mode() - def forward(self, x): - assert self.weight_norm_removed, 'call remove_weight_norm() before inference' - return self.vocoder(x) - - def remove_weight_norm(self): - self.vocoder.remove_weight_norm() - self.weight_norm_removed = True - return self diff --git a/mmaudio_x/ext/bigvgan/bigvgan_vocoder.yml b/mmaudio_x/ext/bigvgan/bigvgan_vocoder.yml deleted file mode 100644 index d4db31ec45336e757d94d5099ed16cb3c906c24a..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/bigvgan_vocoder.yml +++ /dev/null @@ -1,63 +0,0 @@ -resblock: '1' -num_gpus: 0 -batch_size: 64 -num_mels: 80 -learning_rate: 0.0001 -adam_b1: 0.8 -adam_b2: 0.99 -lr_decay: 0.999 -seed: 1234 -upsample_rates: -- 4 -- 4 -- 2 -- 2 -- 2 -- 2 -upsample_kernel_sizes: -- 8 -- 8 -- 4 -- 4 -- 4 -- 4 -upsample_initial_channel: 1536 -resblock_kernel_sizes: -- 3 -- 7 -- 11 -resblock_dilation_sizes: -- - 1 - - 3 - - 5 -- - 1 - - 3 - - 5 -- - 1 - - 3 - - 5 -activation: snakebeta -snake_logscale: true -resolutions: -- - 1024 - - 120 - - 600 -- - 2048 - - 240 - - 1200 -- - 512 - - 50 - - 240 -mpd_reshapes: -- 2 -- 3 -- 5 -- 7 -- 11 -use_spectral_norm: false -discriminator_channel_mult: 1 
-num_workers: 4 -dist_config: - dist_backend: nccl - dist_url: tcp://localhost:54341 - world_size: 1 diff --git a/mmaudio_x/ext/bigvgan/env.py b/mmaudio_x/ext/bigvgan/env.py deleted file mode 100644 index b8be238d4db710c8c9a338d336baea0138f18d1f..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/env.py +++ /dev/null @@ -1,18 +0,0 @@ -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. - -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_1 b/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_1 deleted file mode 100644 index 5afae394d6b37da0e12ba6b290d2512687f421ac..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_1 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2020 Jungil Kong - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_2 b/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_2 deleted file mode 100644 index 322b758863c4219be68291ae3826218baa93cb4c..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_2 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2020 Edward Dixon - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_3 b/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_3 deleted file mode 100644 index 56ee3c8c4cc2b4b32e0975d17258f9ba515fdbcc..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_3 +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_4 b/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_4 deleted file mode 100644 index 48fd1a1ba8d81a94b6c7d1c2ff1a1f307cc5371d..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_4 +++ /dev/null @@ -1,29 +0,0 @@ -BSD 3-Clause License - -Copyright (c) 2019, Seungwon Park 박승원 -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -1. Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -3. Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
\ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_5 b/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_5 deleted file mode 100644 index 01ae5538e6b7c787bb4f5d6f2cd9903520d6e465..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/incl_licenses/LICENSE_5 +++ /dev/null @@ -1,16 +0,0 @@ -Copyright 2020 Alexandre Défossez - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and -associated documentation files (the "Software"), to deal in the Software without restriction, -including without limitation the rights to use, copy, modify, merge, publish, distribute, -sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or -substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT -NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan/models.py b/mmaudio_x/ext/bigvgan/models.py deleted file mode 100644 index 36938e659ebc0e4cb045f10e4893525907c2d1f7..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/models.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright (c) 2022 NVIDIA CORPORATION. -# Licensed under the MIT license. - -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. 
- -import torch -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn.utils.parametrizations import weight_norm -from torch.nn.utils.parametrize import remove_parametrizations - -from mmaudio.ext.bigvgan import activations -from mmaudio.ext.bigvgan.alias_free_torch import * -from mmaudio.ext.bigvgan.utils import get_padding, init_weights - -LRELU_SLOPE = 0.1 - - -class AMPBlock1(torch.nn.Module): - - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5), activation=None): - super(AMPBlock1, self).__init__() - self.h = h - - self.convs1 = nn.ModuleList([ - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - self.num_layers = len(self.convs1) + len(self.convs2) # total number of conv layers - - if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError( - "activation incorrectly specified. check the config file and look for 'activation'." 
- ) - - def forward(self, x): - acts1, acts2 = self.activations[::2], self.activations[1::2] - for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): - xt = a1(x) - xt = c1(xt) - xt = a2(xt) - xt = c2(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_parametrizations(l, 'weight') - for l in self.convs2: - remove_parametrizations(l, 'weight') - - -class AMPBlock2(torch.nn.Module): - - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3), activation=None): - super(AMPBlock2, self).__init__() - self.h = h - - self.convs = nn.ModuleList([ - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm( - Conv1d(channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - self.num_layers = len(self.convs) # total number of conv layers - - if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError( - "activation incorrectly specified. check the config file and look for 'activation'." - ) - - def forward(self, x): - for c, a in zip(self.convs, self.activations): - xt = a(x) - xt = c(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_parametrizations(l, 'weight') - - -class BigVGANVocoder(torch.nn.Module): - # this is our main BigVGAN model. Applies anti-aliased periodic activation for resblocks. - def __init__(self, h): - super().__init__() - self.h = h - - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - - # pre conv - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - - # define which AMPBlock to use. BigVGAN uses AMPBlock1 as default - resblock = AMPBlock1 if h.resblock == '1' else AMPBlock2 - - # transposed conv-based upsamplers. 
does not apply anti-aliasing - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - nn.ModuleList([ - weight_norm( - ConvTranspose1d(h.upsample_initial_channel // (2**i), - h.upsample_initial_channel // (2**(i + 1)), - k, - u, - padding=(k - u) // 2)) - ])) - - # residual blocks using anti-aliased multi-periodicity composition modules (AMP) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2**(i + 1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d, activation=h.activation)) - - # post conv - if h.activation == "snake": # periodic nonlinearity with snake function and anti-aliasing - activation_post = activations.Snake(ch, alpha_logscale=h.snake_logscale) - self.activation_post = Activation1d(activation=activation_post) - elif h.activation == "snakebeta": # periodic nonlinearity with snakebeta function and anti-aliasing - activation_post = activations.SnakeBeta(ch, alpha_logscale=h.snake_logscale) - self.activation_post = Activation1d(activation=activation_post) - else: - raise NotImplementedError( - "activation incorrectly specified. check the config file and look for 'activation'." - ) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - - # weight initialization - for i in range(len(self.ups)): - self.ups[i].apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - # pre conv - x = self.conv_pre(x) - - for i in range(self.num_upsamples): - # upsampling - for i_up in range(len(self.ups[i])): - x = self.ups[i][i_up](x) - # AMP blocks - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - - # post conv - x = self.activation_post(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - for l_i in l: - remove_parametrizations(l_i, 'weight') - for l in self.resblocks: - l.remove_weight_norm() - remove_parametrizations(self.conv_pre, 'weight') - remove_parametrizations(self.conv_post, 'weight') diff --git a/mmaudio_x/ext/bigvgan/utils.py b/mmaudio_x/ext/bigvgan/utils.py deleted file mode 100644 index aff7e653533d3390756c53a0215801b06cc924b5..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. 
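
Taken together, the v1 files deleted above (`models.py`, the `bigvgan.py` wrapper, and `bigvgan_vocoder.yml`) describe a vocoder that maps 80-bin mel spectrograms to waveforms with a 4·4·2·2·2·2 = 256× upsampling factor. A minimal usage sketch, assuming the vendored package resolves under the `mmaudio.ext.bigvgan` import path the files themselves use and that a generator checkpoint exists locally (both are illustrative placeholders, not paths from this repo):

```python
# Sketch only: run a mel spectrogram through the vendored BigVGAN-v1 wrapper.
# "bigvgan_generator.pt" is a hypothetical checkpoint; the wrapper loads its
# 'generator' state dict and strips weight norm before inference.
import torch
from mmaudio.ext.bigvgan.bigvgan import BigVGAN  # wrapper shown earlier in this diff

vocoder = BigVGAN(ckpt_path="bigvgan_generator.pt")

# bigvgan_vocoder.yml: num_mels=80, upsample_rates=[4, 4, 2, 2, 2, 2] -> 256x
mel = torch.randn(1, 80, 100)      # [B, num_mels, T]
wav = vocoder(mel)                 # [B, 1, T * 256] = [1, 1, 25600]
```

The 256× factor comes from the transposed-convolution upsamplers: each `ConvTranspose1d(stride=u, kernel_size=k, padding=(k - u) // 2)` multiplies the sequence length by exactly `u` whenever `k - u` is even, which holds for every (u, k) pair in the config.
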
- -import os - -import torch -from torch.nn.utils.parametrizations import weight_norm - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict diff --git a/mmaudio_x/ext/bigvgan_v2/LICENSE b/mmaudio_x/ext/bigvgan_v2/LICENSE deleted file mode 100644 index 4c78361c86d4f685117d60d6623e2197fcfed706..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2024 NVIDIA CORPORATION. - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/mmaudio_x/ext/bigvgan_v2/__init__.py b/mmaudio_x/ext/bigvgan_v2/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/ext/bigvgan_v2/activations.py b/mmaudio_x/ext/bigvgan_v2/activations.py deleted file mode 100644 index 4f08ddab5b55d6dcaf3e968af98889e0770c44f5..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/activations.py +++ /dev/null @@ -1,126 +0,0 @@ -# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license. -# LICENSE is in incl_licenses directory. - -import torch -from torch import nn, sin, pow -from torch.nn import Parameter - - -class Snake(nn.Module): - """ - Implementation of a sine-based periodic activation function - Shape: - - Input: (B, C, T) - - Output: (B, C, T), same shape as the input - Parameters: - - alpha - trainable parameter - References: - - This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: - https://arxiv.org/abs/2006.08195 - Examples: - >>> a1 = snake(256) - >>> x = torch.randn(256) - >>> x = a1(x) - """ - - def __init__( - self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False - ): - """ - Initialization. - INPUT: - - in_features: shape of the input - - alpha: trainable parameter - alpha is initialized to 1 by default, higher values = higher-frequency. - alpha will be trained along with the rest of your model. 
- """ - super(Snake, self).__init__() - self.in_features = in_features - - # Initialize alpha - self.alpha_logscale = alpha_logscale - if self.alpha_logscale: # Log scale alphas initialized to zeros - self.alpha = Parameter(torch.zeros(in_features) * alpha) - else: # Linear scale alphas initialized to ones - self.alpha = Parameter(torch.ones(in_features) * alpha) - - self.alpha.requires_grad = alpha_trainable - - self.no_div_by_zero = 0.000000001 - - def forward(self, x): - """ - Forward pass of the function. - Applies the function to the input elementwise. - Snake ∶= x + 1/a * sin^2 (xa) - """ - alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # Line up with x to [B, C, T] - if self.alpha_logscale: - alpha = torch.exp(alpha) - x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2) - - return x - - -class SnakeBeta(nn.Module): - """ - A modified Snake function which uses separate parameters for the magnitude of the periodic components - Shape: - - Input: (B, C, T) - - Output: (B, C, T), same shape as the input - Parameters: - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - References: - - This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: - https://arxiv.org/abs/2006.08195 - Examples: - >>> a1 = snakebeta(256) - >>> x = torch.randn(256) - >>> x = a1(x) - """ - - def __init__( - self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False - ): - """ - Initialization. - INPUT: - - in_features: shape of the input - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - alpha is initialized to 1 by default, higher values = higher-frequency. - beta is initialized to 1 by default, higher values = higher-magnitude. - alpha will be trained along with the rest of your model. - """ - super(SnakeBeta, self).__init__() - self.in_features = in_features - - # Initialize alpha - self.alpha_logscale = alpha_logscale - if self.alpha_logscale: # Log scale alphas initialized to zeros - self.alpha = Parameter(torch.zeros(in_features) * alpha) - self.beta = Parameter(torch.zeros(in_features) * alpha) - else: # Linear scale alphas initialized to ones - self.alpha = Parameter(torch.ones(in_features) * alpha) - self.beta = Parameter(torch.ones(in_features) * alpha) - - self.alpha.requires_grad = alpha_trainable - self.beta.requires_grad = alpha_trainable - - self.no_div_by_zero = 0.000000001 - - def forward(self, x): - """ - Forward pass of the function. - Applies the function to the input elementwise. 
- SnakeBeta ∶= x + 1/b * sin^2 (xa) - """ - alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # Line up with x to [B, C, T] - beta = self.beta.unsqueeze(0).unsqueeze(-1) - if self.alpha_logscale: - alpha = torch.exp(alpha) - beta = torch.exp(beta) - x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2) - - return x diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/__init__.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/activation1d.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/activation1d.py deleted file mode 100644 index fbc0fd8f28a37ad949fbdb9832f51b5b933c6ff2..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/activation1d.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) 2024 NVIDIA CORPORATION. -# Licensed under the MIT license. - -import torch -import torch.nn as nn -from alias_free_activation.torch.resample import UpSample1d, DownSample1d - -# load fused CUDA kernel: this enables importing anti_alias_activation_cuda -from alias_free_activation.cuda import load - -anti_alias_activation_cuda = load.load() - - -class FusedAntiAliasActivation(torch.autograd.Function): - """ - Assumes filter size 12, replication padding on upsampling/downsampling, and logscale alpha/beta parameters as inputs. - The hyperparameters are hard-coded in the kernel to maximize speed. - NOTE: The fused kenrel is incorrect for Activation1d with different hyperparameters. - """ - - @staticmethod - def forward(ctx, inputs, up_ftr, down_ftr, alpha, beta): - activation_results = anti_alias_activation_cuda.forward( - inputs, up_ftr, down_ftr, alpha, beta - ) - - return activation_results - - @staticmethod - def backward(ctx, output_grads): - raise NotImplementedError - return output_grads, None, None - - -class Activation1d(nn.Module): - def __init__( - self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12, - fused: bool = True, - ): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - self.fused = fused # Whether to use fused CUDA kernel or not - - def forward(self, x): - if not self.fused: - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - return x - else: - if self.act.__class__.__name__ == "Snake": - beta = self.act.alpha.data # Snake uses same params for alpha and beta - else: - beta = ( - self.act.beta.data - ) # Snakebeta uses different params for alpha and beta - alpha = self.act.alpha.data - if ( - not self.act.alpha_logscale - ): # Exp baked into cuda kernel, cancel it out with a log - alpha = torch.log(alpha) - beta = torch.log(beta) - - x = FusedAntiAliasActivation.apply( - x, self.upsample.filter, self.downsample.lowpass.filter, alpha, beta - ) - return x diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation.cpp b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation.cpp deleted file mode 100644 index c5651f77143bd678169eb11564a7cf7a7969a59e..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation.cpp +++ /dev/null @@ -1,23 +0,0 @@ -/* coding=utf-8 - * 
Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - #include - -extern "C" torch::Tensor fwd_cuda(torch::Tensor const &input, torch::Tensor const &up_filter, torch::Tensor const &down_filter, torch::Tensor const &alpha, torch::Tensor const &beta); - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &fwd_cuda, "Anti-Alias Activation forward (CUDA)"); -} \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation_cuda.cu b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation_cuda.cu deleted file mode 100644 index 8c442334869fe72d639ec203fa4fac07f96a0ee1..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation_cuda.cu +++ /dev/null @@ -1,246 +0,0 @@ -/* coding=utf-8 - * Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#include -#include -#include -#include -#include -#include -#include -#include "type_shim.h" -#include -#include -#include -#include -#include - -namespace -{ - // Hard-coded hyperparameters - // WARP_SIZE and WARP_BATCH must match the return values batches_per_warp and - constexpr int ELEMENTS_PER_LDG_STG = 1; //(WARP_ITERATIONS < 4) ? 
1 : 4; - constexpr int BUFFER_SIZE = 32; - constexpr int FILTER_SIZE = 12; - constexpr int HALF_FILTER_SIZE = 6; - constexpr int UPSAMPLE_REPLICATION_PAD = 5; // 5 on each side, matching torch impl - constexpr int DOWNSAMPLE_REPLICATION_PAD_LEFT = 5; // matching torch impl - constexpr int DOWNSAMPLE_REPLICATION_PAD_RIGHT = 6; // matching torch impl - - template - __global__ void anti_alias_activation_forward( - output_t *dst, - const input_t *src, - const input_t *up_ftr, - const input_t *down_ftr, - const input_t *alpha, - const input_t *beta, - int batch_size, - int channels, - int seq_len) - { - // Up and downsample filters - input_t up_filter[FILTER_SIZE]; - input_t down_filter[FILTER_SIZE]; - - // Load data from global memory including extra indices reserved for replication paddings - input_t elements[2 * FILTER_SIZE + 2 * BUFFER_SIZE + 2 * UPSAMPLE_REPLICATION_PAD] = {0}; - input_t intermediates[2 * FILTER_SIZE + 2 * BUFFER_SIZE + DOWNSAMPLE_REPLICATION_PAD_LEFT + DOWNSAMPLE_REPLICATION_PAD_RIGHT] = {0}; - - // Output stores downsampled output before writing to dst - output_t output[BUFFER_SIZE]; - - // blockDim/threadIdx = (128, 1, 1) - // gridDim/blockIdx = (seq_blocks, channels, batches) - int block_offset = (blockIdx.x * 128 * BUFFER_SIZE + seq_len * (blockIdx.y + gridDim.y * blockIdx.z)); - int local_offset = threadIdx.x * BUFFER_SIZE; - int seq_offset = blockIdx.x * 128 * BUFFER_SIZE + local_offset; - - // intermediate have double the seq_len - int intermediate_local_offset = threadIdx.x * BUFFER_SIZE * 2; - int intermediate_seq_offset = blockIdx.x * 128 * BUFFER_SIZE * 2 + intermediate_local_offset; - - // Get values needed for replication padding before moving pointer - const input_t *right_most_pntr = src + (seq_len * (blockIdx.y + gridDim.y * blockIdx.z)); - input_t seq_left_most_value = right_most_pntr[0]; - input_t seq_right_most_value = right_most_pntr[seq_len - 1]; - - // Move src and dst pointers - src += block_offset + local_offset; - dst += block_offset + local_offset; - - // Alpha and beta values for snake activatons. Applies exp by default - alpha = alpha + blockIdx.y; - input_t alpha_val = expf(alpha[0]); - beta = beta + blockIdx.y; - input_t beta_val = expf(beta[0]); - - #pragma unroll - for (int it = 0; it < FILTER_SIZE; it += 1) - { - up_filter[it] = up_ftr[it]; - down_filter[it] = down_ftr[it]; - } - - // Apply replication padding for upsampling, matching torch impl - #pragma unroll - for (int it = -HALF_FILTER_SIZE; it < BUFFER_SIZE + HALF_FILTER_SIZE; it += 1) - { - int element_index = seq_offset + it; // index for element - if ((element_index < 0) && (element_index >= -UPSAMPLE_REPLICATION_PAD)) - { - elements[2 * (HALF_FILTER_SIZE + it)] = 2 * seq_left_most_value; - } - if ((element_index >= seq_len) && (element_index < seq_len + UPSAMPLE_REPLICATION_PAD)) - { - elements[2 * (HALF_FILTER_SIZE + it)] = 2 * seq_right_most_value; - } - if ((element_index >= 0) && (element_index < seq_len)) - { - elements[2 * (HALF_FILTER_SIZE + it)] = 2 * src[it]; - } - } - - // Apply upsampling strided convolution and write to intermediates. 
It reserves DOWNSAMPLE_REPLICATION_PAD_LEFT for replication padding of the downsampilng conv later - #pragma unroll - for (int it = 0; it < (2 * BUFFER_SIZE + 2 * FILTER_SIZE); it += 1) - { - input_t acc = 0.0; - int element_index = intermediate_seq_offset + it; // index for intermediate - #pragma unroll - for (int f_idx = 0; f_idx < FILTER_SIZE; f_idx += 1) - { - if ((element_index + f_idx) >= 0) - { - acc += up_filter[f_idx] * elements[it + f_idx]; - } - } - intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] = acc; - } - - // Apply activation function. It reserves DOWNSAMPLE_REPLICATION_PAD_LEFT and DOWNSAMPLE_REPLICATION_PAD_RIGHT for replication padding of the downsampilng conv later - double no_div_by_zero = 0.000000001; - #pragma unroll - for (int it = 0; it < 2 * BUFFER_SIZE + 2 * FILTER_SIZE; it += 1) - { - intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] += (1.0 / (beta_val + no_div_by_zero)) * sinf(intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] * alpha_val) * sinf(intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] * alpha_val); - } - - // Apply replication padding before downsampling conv from intermediates - #pragma unroll - for (int it = 0; it < DOWNSAMPLE_REPLICATION_PAD_LEFT; it += 1) - { - intermediates[it] = intermediates[DOWNSAMPLE_REPLICATION_PAD_LEFT]; - } - #pragma unroll - for (int it = DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE; it < DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE + DOWNSAMPLE_REPLICATION_PAD_RIGHT; it += 1) - { - intermediates[it] = intermediates[DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE - 1]; - } - - // Apply downsample strided convolution (assuming stride=2) from intermediates - #pragma unroll - for (int it = 0; it < BUFFER_SIZE; it += 1) - { - input_t acc = 0.0; - #pragma unroll - for (int f_idx = 0; f_idx < FILTER_SIZE; f_idx += 1) - { - // Add constant DOWNSAMPLE_REPLICATION_PAD_RIGHT to match torch implementation - acc += down_filter[f_idx] * intermediates[it * 2 + f_idx + DOWNSAMPLE_REPLICATION_PAD_RIGHT]; - } - output[it] = acc; - } - - // Write output to dst - #pragma unroll - for (int it = 0; it < BUFFER_SIZE; it += ELEMENTS_PER_LDG_STG) - { - int element_index = seq_offset + it; - if (element_index < seq_len) - { - dst[it] = output[it]; - } - } - - } - - template - void dispatch_anti_alias_activation_forward( - output_t *dst, - const input_t *src, - const input_t *up_ftr, - const input_t *down_ftr, - const input_t *alpha, - const input_t *beta, - int batch_size, - int channels, - int seq_len) - { - if (seq_len == 0) - { - return; - } - else - { - // Use 128 threads per block to maximimize gpu utilization - constexpr int threads_per_block = 128; - constexpr int seq_len_per_block = 4096; - int blocks_per_seq_len = (seq_len + seq_len_per_block - 1) / seq_len_per_block; - dim3 blocks(blocks_per_seq_len, channels, batch_size); - dim3 threads(threads_per_block, 1, 1); - - anti_alias_activation_forward - <<>>(dst, src, up_ftr, down_ftr, alpha, beta, batch_size, channels, seq_len); - } - } -} - -extern "C" torch::Tensor fwd_cuda(torch::Tensor const &input, torch::Tensor const &up_filter, torch::Tensor const &down_filter, torch::Tensor const &alpha, torch::Tensor const &beta) -{ - // Input is a 3d tensor with dimensions [batches, channels, seq_len] - const int batches = input.size(0); - const int channels = input.size(1); - const int seq_len = input.size(2); - - // Output - auto act_options = input.options().requires_grad(false); - - torch::Tensor 
anti_alias_activation_results = - torch::empty({batches, channels, seq_len}, act_options); - - void *input_ptr = static_cast(input.data_ptr()); - void *up_filter_ptr = static_cast(up_filter.data_ptr()); - void *down_filter_ptr = static_cast(down_filter.data_ptr()); - void *alpha_ptr = static_cast(alpha.data_ptr()); - void *beta_ptr = static_cast(beta.data_ptr()); - void *anti_alias_activation_results_ptr = static_cast(anti_alias_activation_results.data_ptr()); - - DISPATCH_FLOAT_HALF_AND_BFLOAT( - input.scalar_type(), - "dispatch anti alias activation_forward", - dispatch_anti_alias_activation_forward( - reinterpret_cast(anti_alias_activation_results_ptr), - reinterpret_cast(input_ptr), - reinterpret_cast(up_filter_ptr), - reinterpret_cast(down_filter_ptr), - reinterpret_cast(alpha_ptr), - reinterpret_cast(beta_ptr), - batches, - channels, - seq_len);); - return anti_alias_activation_results; -} \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/compat.h b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/compat.h deleted file mode 100644 index 25818b2edf4cb0dc9130e62c7c4de8d16a01baa5..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/compat.h +++ /dev/null @@ -1,29 +0,0 @@ -/* coding=utf-8 - * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*This code is copied fron NVIDIA apex: - * https://github.com/NVIDIA/apex - * with minor changes. */ - -#ifndef TORCH_CHECK -#define TORCH_CHECK AT_CHECK -#endif - -#ifdef VERSION_GE_1_3 -#define DATA_PTR data_ptr -#else -#define DATA_PTR data -#endif diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/load.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/load.py deleted file mode 100644 index ca5d01de398249e75e9e2298958764acb436edba..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/load.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) 2024 NVIDIA CORPORATION. -# Licensed under the MIT license. - -import os -import pathlib -import subprocess - -from torch.utils import cpp_extension - -""" -Setting this param to a list has a problem of generating different compilation commands (with diferent order of architectures) and leading to recompilation of fused kernels. -Set it to empty stringo avoid recompilation and assign arch flags explicity in extra_cuda_cflags below -""" -os.environ["TORCH_CUDA_ARCH_LIST"] = "" - - -def load(): - # Check if cuda 11 is installed for compute capability 8.0 - cc_flag = [] - _, bare_metal_major, _ = _get_cuda_bare_metal_version(cpp_extension.CUDA_HOME) - if int(bare_metal_major) >= 11: - cc_flag.append("-gencode") - cc_flag.append("arch=compute_80,code=sm_80") - - # Build path - srcpath = pathlib.Path(__file__).parent.absolute() - buildpath = srcpath / "build" - _create_build_dir(buildpath) - - # Helper function to build the kernels. 
- def _cpp_extention_load_helper(name, sources, extra_cuda_flags): - return cpp_extension.load( - name=name, - sources=sources, - build_directory=buildpath, - extra_cflags=[ - "-O3", - ], - extra_cuda_cflags=[ - "-O3", - "-gencode", - "arch=compute_70,code=sm_70", - "--use_fast_math", - ] - + extra_cuda_flags - + cc_flag, - verbose=True, - ) - - extra_cuda_flags = [ - "-U__CUDA_NO_HALF_OPERATORS__", - "-U__CUDA_NO_HALF_CONVERSIONS__", - "--expt-relaxed-constexpr", - "--expt-extended-lambda", - ] - - sources = [ - srcpath / "anti_alias_activation.cpp", - srcpath / "anti_alias_activation_cuda.cu", - ] - anti_alias_activation_cuda = _cpp_extention_load_helper( - "anti_alias_activation_cuda", sources, extra_cuda_flags - ) - - return anti_alias_activation_cuda - - -def _get_cuda_bare_metal_version(cuda_dir): - raw_output = subprocess.check_output( - [cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True - ) - output = raw_output.split() - release_idx = output.index("release") + 1 - release = output[release_idx].split(".") - bare_metal_major = release[0] - bare_metal_minor = release[1][0] - - return raw_output, bare_metal_major, bare_metal_minor - - -def _create_build_dir(buildpath): - try: - os.mkdir(buildpath) - except OSError: - if not os.path.isdir(buildpath): - print(f"Creation of the build directory {buildpath} failed") diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/type_shim.h b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/type_shim.h deleted file mode 100644 index 5db7e8a397e982d4d30d16ab6060814b98b7ab83..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/cuda/type_shim.h +++ /dev/null @@ -1,92 +0,0 @@ -/* coding=utf-8 - * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#include -#include "compat.h" - -#define DISPATCH_FLOAT_HALF_AND_BFLOAT(TYPE, NAME, ...) \ - switch (TYPE) \ - { \ - case at::ScalarType::Float: \ - { \ - using scalar_t = float; \ - __VA_ARGS__; \ - break; \ - } \ - case at::ScalarType::Half: \ - { \ - using scalar_t = at::Half; \ - __VA_ARGS__; \ - break; \ - } \ - case at::ScalarType::BFloat16: \ - { \ - using scalar_t = at::BFloat16; \ - __VA_ARGS__; \ - break; \ - } \ - default: \ - AT_ERROR(#NAME, " not implemented for '", toString(TYPE), "'"); \ - } - -#define DISPATCH_FLOAT_HALF_AND_BFLOAT_INOUT_TYPES(TYPEIN, TYPEOUT, NAME, ...) 
\ - switch (TYPEIN) \ - { \ - case at::ScalarType::Float: \ - { \ - using scalar_t_in = float; \ - switch (TYPEOUT) \ - { \ - case at::ScalarType::Float: \ - { \ - using scalar_t_out = float; \ - __VA_ARGS__; \ - break; \ - } \ - case at::ScalarType::Half: \ - { \ - using scalar_t_out = at::Half; \ - __VA_ARGS__; \ - break; \ - } \ - case at::ScalarType::BFloat16: \ - { \ - using scalar_t_out = at::BFloat16; \ - __VA_ARGS__; \ - break; \ - } \ - default: \ - AT_ERROR(#NAME, " not implemented for '", toString(TYPEOUT), "'"); \ - } \ - break; \ - } \ - case at::ScalarType::Half: \ - { \ - using scalar_t_in = at::Half; \ - using scalar_t_out = at::Half; \ - __VA_ARGS__; \ - break; \ - } \ - case at::ScalarType::BFloat16: \ - { \ - using scalar_t_in = at::BFloat16; \ - using scalar_t_out = at::BFloat16; \ - __VA_ARGS__; \ - break; \ - } \ - default: \ - AT_ERROR(#NAME, " not implemented for '", toString(TYPEIN), "'"); \ - } diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/__init__.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/__init__.py deleted file mode 100644 index 8f756ed83f87f9839e457b240f60469bc187707d..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -from .filter import * -from .resample import * -from .act import * diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/act.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/act.py deleted file mode 100644 index 92445a8652d1998f80e2952224b18d0e1a89dc9f..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/act.py +++ /dev/null @@ -1,32 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch.nn as nn - -from mmaudio.ext.bigvgan_v2.alias_free_activation.torch.resample import (DownSample1d, UpSample1d) - - -class Activation1d(nn.Module): - - def __init__( - self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12, - ): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/filter.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/filter.py deleted file mode 100644 index 0fa35b0d5ddf8d6cb04cd9d47364ca033cebcd32..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/filter.py +++ /dev/null @@ -1,101 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -if "sinc" in dir(torch): - sinc = torch.sinc -else: - # This code is adopted from adefossez's julius.core.sinc under the MIT License - # https://adefossez.github.io/julius/julius/core.html - # LICENSE is in incl_licenses directory. - def sinc(x: torch.Tensor): - """ - Implementation of sinc, i.e. 
sin(pi * x) / (pi * x) - __Warning__: Different to julius.sinc, the input is multiplied by `pi`! - """ - return torch.where( - x == 0, - torch.tensor(1.0, device=x.device, dtype=x.dtype), - torch.sin(math.pi * x) / math.pi / x, - ) - - -# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License -# https://adefossez.github.io/julius/julius/lowpass.html -# LICENSE is in incl_licenses directory. -def kaiser_sinc_filter1d( - cutoff, half_width, kernel_size -): # return filter [1,1,kernel_size] - even = kernel_size % 2 == 0 - half_size = kernel_size // 2 - - # For kaiser window - delta_f = 4 * half_width - A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 - if A > 50.0: - beta = 0.1102 * (A - 8.7) - elif A >= 21.0: - beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21.0) - else: - beta = 0.0 - window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) - - # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio - if even: - time = torch.arange(-half_size, half_size) + 0.5 - else: - time = torch.arange(kernel_size) - half_size - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) - """ - Normalize filter to have sum = 1, otherwise we will have a small leakage of the constant component in the input signal. - """ - filter_ /= filter_.sum() - filter = filter_.view(1, 1, kernel_size) - - return filter - - -class LowPassFilter1d(nn.Module): - def __init__( - self, - cutoff=0.5, - half_width=0.6, - stride: int = 1, - padding: bool = True, - padding_mode: str = "replicate", - kernel_size: int = 12, - ): - """ - kernel_size should be even number for stylegan3 setup, in this implementation, odd number is also possible. - """ - super().__init__() - if cutoff < -0.0: - raise ValueError("Minimum cutoff must be larger than zero.") - if cutoff > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.kernel_size = kernel_size - self.even = kernel_size % 2 == 0 - self.pad_left = kernel_size // 2 - int(self.even) - self.pad_right = kernel_size // 2 - self.stride = stride - self.padding = padding - self.padding_mode = padding_mode - filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) - self.register_buffer("filter", filter) - - # Input [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - if self.padding: - x = F.pad(x, (self.pad_left, self.pad_right), mode=self.padding_mode) - out = F.conv1d(x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) - - return out diff --git a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/resample.py b/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/resample.py deleted file mode 100644 index 33faa1518c3bcf34b63cc44374905df83542f614..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/alias_free_activation/torch/resample.py +++ /dev/null @@ -1,54 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. 
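
The low-pass filter above and the `UpSample1d`/`DownSample1d` resamplers that follow it (identical to the v1 copies near the top of this diff) implement the alias-free-torch recipe: a kaiser-windowed sinc low-pass filter, a transposed convolution for 2× upsampling, and a strided convolution for 2× downsampling. A small sketch of the shapes involved, again assuming the `mmaudio.ext.bigvgan_v2` import path used inside these files:

```python
# Sketch only: the kaiser sinc filter is normalized to sum to 1, and the default
# ratio=2 resamplers change the time dimension by exactly 2x in each direction.
import torch
from mmaudio.ext.bigvgan_v2.alias_free_activation.torch.filter import kaiser_sinc_filter1d
from mmaudio.ext.bigvgan_v2.alias_free_activation.torch.resample import DownSample1d, UpSample1d

filt = kaiser_sinc_filter1d(cutoff=0.25, half_width=0.3, kernel_size=12)
print(filt.shape, float(filt.sum()))   # torch.Size([1, 1, 12]), ~1.0

x = torch.randn(1, 4, 100)             # [B, C, T]
y = UpSample1d(ratio=2)(x)             # [1, 4, 200]: band-limited interpolation
z = DownSample1d(ratio=2)(y)           # [1, 4, 100]: low-pass, then stride-2 decimation
```

This upsample → pointwise nonlinearity → downsample pattern is exactly what the torch `Activation1d` wraps around the Snake/SnakeBeta activations, and what the fused CUDA kernel reimplements with hard-coded filter size 12.
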
- -import torch.nn as nn -from torch.nn import functional as F - -from mmaudio.ext.bigvgan_v2.alias_free_activation.torch.filter import (LowPassFilter1d, - kaiser_sinc_filter1d) - - -class UpSample1d(nn.Module): - - def __init__(self, ratio=2, kernel_size=None): - super().__init__() - self.ratio = ratio - self.kernel_size = (int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size) - self.stride = ratio - self.pad = self.kernel_size // ratio - 1 - self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2 - self.pad_right = (self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2) - filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio, - half_width=0.6 / ratio, - kernel_size=self.kernel_size) - self.register_buffer("filter", filter) - - # x: [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - x = F.pad(x, (self.pad, self.pad), mode="replicate") - x = self.ratio * F.conv_transpose1d( - x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) - x = x[..., self.pad_left:-self.pad_right] - - return x - - -class DownSample1d(nn.Module): - - def __init__(self, ratio=2, kernel_size=None): - super().__init__() - self.ratio = ratio - self.kernel_size = (int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size) - self.lowpass = LowPassFilter1d( - cutoff=0.5 / ratio, - half_width=0.6 / ratio, - stride=ratio, - kernel_size=self.kernel_size, - ) - - def forward(self, x): - xx = self.lowpass(x) - - return xx diff --git a/mmaudio_x/ext/bigvgan_v2/bigvgan.py b/mmaudio_x/ext/bigvgan_v2/bigvgan.py deleted file mode 100644 index ff2b6c4c87e20d147130d0b608d2467557347caf..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/bigvgan.py +++ /dev/null @@ -1,439 +0,0 @@ -# Copyright (c) 2024 NVIDIA CORPORATION. -# Licensed under the MIT license. - -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. - -import json -import os -from pathlib import Path -from typing import Dict, Optional, Union - -import torch -import torch.nn as nn -from huggingface_hub import PyTorchModelHubMixin, hf_hub_download -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn.utils.parametrizations import weight_norm -from torch.nn.utils.parametrize import remove_parametrizations - -from mmaudio.ext.bigvgan_v2 import activations -from mmaudio.ext.bigvgan_v2.alias_free_activation.torch.act import \ - Activation1d as TorchActivation1d -from mmaudio.ext.bigvgan_v2.env import AttrDict -from mmaudio.ext.bigvgan_v2.utils import get_padding, init_weights - - -def load_hparams_from_json(path) -> AttrDict: - with open(path) as f: - data = f.read() - return AttrDict(json.loads(data)) - - -class AMPBlock1(torch.nn.Module): - """ - AMPBlock applies Snake / SnakeBeta activation functions with trainable parameters that control periodicity, defined for each layer. - AMPBlock1 has additional self.convs2 that contains additional Conv1d layers with a fixed dilation=1 followed by each layer in self.convs1 - - Args: - h (AttrDict): Hyperparameters. - channels (int): Number of convolution channels. - kernel_size (int): Size of the convolution kernel. Default is 3. - dilation (tuple): Dilation rates for the convolutions. Each dilation layer has two convolutions. Default is (1, 3, 5). - activation (str): Activation function type. Should be either 'snake' or 'snakebeta'. Default is None. 
- """ - - def __init__( - self, - h: AttrDict, - channels: int, - kernel_size: int = 3, - dilation: tuple = (1, 3, 5), - activation: str = None, - ): - super().__init__() - - self.h = h - - self.convs1 = nn.ModuleList([ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - stride=1, - dilation=d, - padding=get_padding(kernel_size, d), - )) for d in dilation - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - stride=1, - dilation=1, - padding=get_padding(kernel_size, 1), - )) for _ in range(len(dilation)) - ]) - self.convs2.apply(init_weights) - - self.num_layers = len(self.convs1) + len(self.convs2) # Total number of conv layers - - # Select which Activation1d, lazy-load cuda version to ensure backward compatibility - if self.h.get("use_cuda_kernel", False): - from alias_free_activation.cuda.activation1d import \ - Activation1d as CudaActivation1d - - Activation1d = CudaActivation1d - else: - Activation1d = TorchActivation1d - - # Activation functions - if activation == "snake": - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == "snakebeta": - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError( - "activation incorrectly specified. check the config file and look for 'activation'." - ) - - def forward(self, x): - acts1, acts2 = self.activations[::2], self.activations[1::2] - for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): - xt = a1(x) - xt = c1(xt) - xt = a2(xt) - xt = c2(xt) - x = xt + x - - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_parametrizations(l, 'weight') - for l in self.convs2: - remove_parametrizations(l, 'weight') - - -class AMPBlock2(torch.nn.Module): - """ - AMPBlock applies Snake / SnakeBeta activation functions with trainable parameters that control periodicity, defined for each layer. - Unlike AMPBlock1, AMPBlock2 does not contain extra Conv1d layers with fixed dilation=1 - - Args: - h (AttrDict): Hyperparameters. - channels (int): Number of convolution channels. - kernel_size (int): Size of the convolution kernel. Default is 3. - dilation (tuple): Dilation rates for the convolutions. Each dilation layer has two convolutions. Default is (1, 3, 5). - activation (str): Activation function type. Should be either 'snake' or 'snakebeta'. Default is None. 
- """ - - def __init__( - self, - h: AttrDict, - channels: int, - kernel_size: int = 3, - dilation: tuple = (1, 3, 5), - activation: str = None, - ): - super().__init__() - - self.h = h - - self.convs = nn.ModuleList([ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - stride=1, - dilation=d, - padding=get_padding(kernel_size, d), - )) for d in dilation - ]) - self.convs.apply(init_weights) - - self.num_layers = len(self.convs) # Total number of conv layers - - # Select which Activation1d, lazy-load cuda version to ensure backward compatibility - if self.h.get("use_cuda_kernel", False): - from alias_free_activation.cuda.activation1d import \ - Activation1d as CudaActivation1d - - Activation1d = CudaActivation1d - else: - Activation1d = TorchActivation1d - - # Activation functions - if activation == "snake": - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - elif activation == "snakebeta": - self.activations = nn.ModuleList([ - Activation1d( - activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) - for _ in range(self.num_layers) - ]) - else: - raise NotImplementedError( - "activation incorrectly specified. check the config file and look for 'activation'." - ) - - def forward(self, x): - for c, a in zip(self.convs, self.activations): - xt = a(x) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class BigVGAN( - torch.nn.Module, - PyTorchModelHubMixin, - library_name="bigvgan", - repo_url="https://github.com/NVIDIA/BigVGAN", - docs_url="https://github.com/NVIDIA/BigVGAN/blob/main/README.md", - pipeline_tag="audio-to-audio", - license="mit", - tags=["neural-vocoder", "audio-generation", "arxiv:2206.04658"], -): - """ - BigVGAN is a neural vocoder model that applies anti-aliased periodic activation for residual blocks (resblocks). - New in BigVGAN-v2: it can optionally use optimized CUDA kernels for AMP (anti-aliased multi-periodicity) blocks. - - Args: - h (AttrDict): Hyperparameters. - use_cuda_kernel (bool): If set to True, loads optimized CUDA kernels for AMP. This should be used for inference only, as training is not supported with CUDA kernels. - - Note: - - The `use_cuda_kernel` parameter should be used for inference only, as training with CUDA kernels is not supported. - - Ensure that the activation function is correctly specified in the hyperparameters (h.activation). - """ - - def __init__(self, h: AttrDict, use_cuda_kernel: bool = False): - super().__init__() - self.h = h - self.h["use_cuda_kernel"] = use_cuda_kernel - - # Select which Activation1d, lazy-load cuda version to ensure backward compatibility - if self.h.get("use_cuda_kernel", False): - from alias_free_activation.cuda.activation1d import \ - Activation1d as CudaActivation1d - - Activation1d = CudaActivation1d - else: - Activation1d = TorchActivation1d - - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - - # Pre-conv - self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) - - # Define which AMPBlock to use. BigVGAN uses AMPBlock1 as default - if h.resblock == "1": - resblock_class = AMPBlock1 - elif h.resblock == "2": - resblock_class = AMPBlock2 - else: - raise ValueError( - f"Incorrect resblock class specified in hyperparameters. Got {h.resblock}") - - # Transposed conv-based upsamplers. 
does not apply anti-aliasing - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - nn.ModuleList([ - weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2**i), - h.upsample_initial_channel // (2**(i + 1)), - k, - u, - padding=(k - u) // 2, - )) - ])) - - # Residual blocks using anti-aliased multi-periodicity composition modules (AMP) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2**(i + 1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock_class(h, ch, k, d, activation=h.activation)) - - # Post-conv - activation_post = (activations.Snake(ch, alpha_logscale=h.snake_logscale) - if h.activation == "snake" else - (activations.SnakeBeta(ch, alpha_logscale=h.snake_logscale) - if h.activation == "snakebeta" else None)) - if activation_post is None: - raise NotImplementedError( - "activation incorrectly specified. check the config file and look for 'activation'." - ) - - self.activation_post = Activation1d(activation=activation_post) - - # Whether to use bias for the final conv_post. Default to True for backward compatibility - self.use_bias_at_final = h.get("use_bias_at_final", True) - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3, bias=self.use_bias_at_final)) - - # Weight initialization - for i in range(len(self.ups)): - self.ups[i].apply(init_weights) - self.conv_post.apply(init_weights) - - # Final tanh activation. Defaults to True for backward compatibility - self.use_tanh_at_final = h.get("use_tanh_at_final", True) - - def forward(self, x): - # Pre-conv - x = self.conv_pre(x) - - for i in range(self.num_upsamples): - # Upsampling - for i_up in range(len(self.ups[i])): - x = self.ups[i][i_up](x) - # AMP blocks - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - - # Post-conv - x = self.activation_post(x) - x = self.conv_post(x) - # Final tanh activation - if self.use_tanh_at_final: - x = torch.tanh(x) - else: - x = torch.clamp(x, min=-1.0, max=1.0) # Bound the output to [-1, 1] - - return x - - def remove_weight_norm(self): - try: - print("Removing weight norm...") - for l in self.ups: - for l_i in l: - remove_parametrizations(l_i, 'weight') - for l in self.resblocks: - l.remove_weight_norm() - remove_parametrizations(self.conv_pre, 'weight') - remove_parametrizations(self.conv_post, 'weight') - except ValueError: - print("[INFO] Model already removed weight norm. 
Skipping!") - pass - - # Additional methods for huggingface_hub support - def _save_pretrained(self, save_directory: Path) -> None: - """Save weights and config.json from a Pytorch model to a local directory.""" - - model_path = save_directory / "bigvgan_generator.pt" - torch.save({"generator": self.state_dict()}, model_path) - - config_path = save_directory / "config.json" - with open(config_path, "w") as config_file: - json.dump(self.h, config_file, indent=4) - - @classmethod - def _from_pretrained( - cls, - *, - model_id: str, - revision: str, - cache_dir: str, - force_download: bool, - proxies: Optional[Dict], - resume_download: bool, - local_files_only: bool, - token: Union[str, bool, None], - map_location: str = "cpu", # Additional argument - strict: bool = False, # Additional argument - use_cuda_kernel: bool = False, - **model_kwargs, - ): - """Load Pytorch pretrained weights and return the loaded model.""" - - # Download and load hyperparameters (h) used by BigVGAN - if os.path.isdir(model_id): - print("Loading config.json from local directory") - config_file = os.path.join(model_id, "config.json") - else: - config_file = hf_hub_download( - repo_id=model_id, - filename="config.json", - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - ) - h = load_hparams_from_json(config_file) - - # instantiate BigVGAN using h - if use_cuda_kernel: - print( - f"[WARNING] You have specified use_cuda_kernel=True during BigVGAN.from_pretrained(). Only inference is supported (training is not implemented)!" - ) - print( - f"[WARNING] You need nvcc and ninja installed in your system that matches your PyTorch build is using to build the kernel. If not, the model will fail to initialize or generate incorrect waveform!" - ) - print( - f"[WARNING] For detail, see the official GitHub repository: https://github.com/NVIDIA/BigVGAN?tab=readme-ov-file#using-custom-cuda-kernel-for-synthesis" - ) - model = cls(h, use_cuda_kernel=use_cuda_kernel) - - # Download and load pretrained generator weight - if os.path.isdir(model_id): - print("Loading weights from local directory") - model_file = os.path.join(model_id, "bigvgan_generator.pt") - else: - print(f"Loading weights from {model_id}") - model_file = hf_hub_download( - repo_id=model_id, - filename="bigvgan_generator.pt", - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - ) - - checkpoint_dict = torch.load(model_file, map_location=map_location, weights_only=True) - - try: - model.load_state_dict(checkpoint_dict["generator"]) - except RuntimeError: - print( - f"[INFO] the pretrained checkpoint does not contain weight norm. Loading the checkpoint after removing weight norm!" - ) - model.remove_weight_norm() - model.load_state_dict(checkpoint_dict["generator"]) - - return model diff --git a/mmaudio_x/ext/bigvgan_v2/env.py b/mmaudio_x/ext/bigvgan_v2/env.py deleted file mode 100644 index b8be238d4db710c8c9a338d336baea0138f18d1f..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/env.py +++ /dev/null @@ -1,18 +0,0 @@ -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. 
- -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_1 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_1 deleted file mode 100644 index 5afae394d6b37da0e12ba6b290d2512687f421ac..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_1 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2020 Jungil Kong - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_2 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_2 deleted file mode 100644 index 322b758863c4219be68291ae3826218baa93cb4c..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_2 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2020 Edward Dixon - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. 
\ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_3 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_3 deleted file mode 100644 index 56ee3c8c4cc2b4b32e0975d17258f9ba515fdbcc..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_3 +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." 
- - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. 
- - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. 
We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_4 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_4 deleted file mode 100644 index 48fd1a1ba8d81a94b6c7d1c2ff1a1f307cc5371d..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_4 +++ /dev/null @@ -1,29 +0,0 @@ -BSD 3-Clause License - -Copyright (c) 2019, Seungwon Park 박승원 -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -1. Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -3. Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
\ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_5 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_5 deleted file mode 100644 index 01ae5538e6b7c787bb4f5d6f2cd9903520d6e465..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_5 +++ /dev/null @@ -1,16 +0,0 @@ -Copyright 2020 Alexandre Défossez - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and -associated documentation files (the "Software"), to deal in the Software without restriction, -including without limitation the rights to use, copy, modify, merge, publish, distribute, -sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or -substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT -NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_6 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_6 deleted file mode 100644 index 2569ec0b6c85f94f3cd071ba16e9028ccf156be2..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_6 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2023-present, Descript - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_7 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_7 deleted file mode 100644 index c37bdaf99c6921f5849425d546069e972f52d7fa..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_7 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2023 Charactr Inc. 
- -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_8 b/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_8 deleted file mode 100644 index ab3d7ffe795779f54e339078e4e752ad9019aae8..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/incl_licenses/LICENSE_8 +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2023 Amphion - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/mmaudio_x/ext/bigvgan_v2/utils.py b/mmaudio_x/ext/bigvgan_v2/utils.py deleted file mode 100644 index 3b1d41670fa1ee257b2ed22c61086ba7a32c7cb0..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/bigvgan_v2/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. 
- -import os - -import torch -from torch.nn.utils import weight_norm - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print(f"Loading '{filepath}'") - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict diff --git a/mmaudio_x/ext/mel_converter.py b/mmaudio_x/ext/mel_converter.py deleted file mode 100644 index 6fc589c9468e077fc580965db250fd502e229672..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/mel_converter.py +++ /dev/null @@ -1,82 +0,0 @@ -# Reference: # https://github.com/bytedance/Make-An-Audio-2 - -import torch -import torch.nn as nn -from librosa.filters import mel as librosa_mel_fn - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10): - return norm_fn(torch.clamp(x, min=clip_val) * C) - - -def spectral_normalize_torch(magnitudes, norm_fn): - output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn) - return output - - -class MelConverter(nn.Module): - - def __init__( - self, - *, - sampling_rate: float = 16_000, - n_fft: int = 1024, - num_mels: int = 80, - hop_size: int = 256, - win_size: int = 1024, - fmin: float = 0, - fmax: float = 8_000, - norm_fn=torch.log10, - ): - super().__init__() - self.sampling_rate = sampling_rate - self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.norm_fn = norm_fn - - mel = librosa_mel_fn(sr=self.sampling_rate, - n_fft=self.n_fft, - n_mels=self.num_mels, - fmin=self.fmin, - fmax=self.fmax) - mel_basis = torch.from_numpy(mel).float() - hann_window = torch.hann_window(self.win_size) - - self.register_buffer('mel_basis', mel_basis) - self.register_buffer('hann_window', hann_window) - - @property - def device(self): - return self.mel_basis.device - - def forward(self, waveform: torch.Tensor, center: bool = False) -> torch.Tensor: - waveform = waveform.clamp(min=-1., max=1.).to(self.device) - - waveform = torch.nn.functional.pad( - waveform.unsqueeze(1), - [int((self.n_fft - self.hop_size) / 2), - int((self.n_fft - self.hop_size) / 2)], - mode='reflect') - waveform = waveform.squeeze(1) - - spec = torch.stft(waveform, - self.n_fft, - hop_length=self.hop_size, - win_length=self.win_size, - window=self.hann_window, - center=center, - pad_mode='reflect', - normalized=False, - onesided=True, - return_complex=True) - - spec = torch.view_as_real(spec) - spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) - spec = torch.matmul(self.mel_basis, spec) - spec = spectral_normalize_torch(spec, self.norm_fn) - - return spec diff --git a/mmaudio_x/ext/rotary_embeddings.py b/mmaudio_x/ext/rotary_embeddings.py deleted file mode 100644 index 1ea9d56278cb68b7577ed13148227c30ed98fd02..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/rotary_embeddings.py +++ /dev/null @@ -1,35 +0,0 @@ -from typing import Union - -import torch -from einops import rearrange -from torch import Tensor - -# Ref: https://github.com/black-forest-labs/flux/blob/main/src/flux/math.py -# Ref: https://github.com/lucidrains/rotary-embedding-torch - - -def compute_rope_rotations(length: int, - dim: 
int, - theta: int, - *, - freq_scaling: float = 1.0, - device: Union[torch.device, str] = 'cpu') -> Tensor: - assert dim % 2 == 0 - - with torch.amp.autocast(device_type='cuda', enabled=False): - pos = torch.arange(length, dtype=torch.float32, device=device) - freqs = 1.0 / (theta**(torch.arange(0, dim, 2, dtype=torch.float32, device=device) / dim)) - freqs *= freq_scaling - - rot = torch.einsum('..., f -> ... f', pos, freqs) - rot = torch.stack([torch.cos(rot), -torch.sin(rot), torch.sin(rot), torch.cos(rot)], dim=-1) - rot = rearrange(rot, 'n d (i j) -> 1 n d i j', i=2, j=2) - return rot - - -def apply_rope(x: Tensor, rot: Tensor) -> tuple[Tensor, Tensor]: - with torch.amp.autocast(device_type='cuda', enabled=False): - _x = x.float() - _x = _x.view(*_x.shape[:-1], -1, 1, 2) - x_out = rot[..., 0] * _x[..., 0] + rot[..., 1] * _x[..., 1] - return x_out.reshape(*x.shape).to(dtype=x.dtype) diff --git a/mmaudio_x/ext/stft_converter.py b/mmaudio_x/ext/stft_converter.py deleted file mode 100644 index 62922067ef3b1d3b8727ec39e7d664ccb304d9fe..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/stft_converter.py +++ /dev/null @@ -1,183 +0,0 @@ -# Reference: # https://github.com/bytedance/Make-An-Audio-2 - -import torch -import torch.nn as nn -import torchaudio -from einops import rearrange -from librosa.filters import mel as librosa_mel_fn - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10): - return norm_fn(torch.clamp(x, min=clip_val) * C) - - -def spectral_normalize_torch(magnitudes, norm_fn): - output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn) - return output - - -class STFTConverter(nn.Module): - - def __init__( - self, - *, - sampling_rate: float = 16_000, - n_fft: int = 1024, - num_mels: int = 128, - hop_size: int = 256, - win_size: int = 1024, - fmin: float = 0, - fmax: float = 8_000, - norm_fn=torch.log, - ): - super().__init__() - self.sampling_rate = sampling_rate - self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.norm_fn = norm_fn - - mel = librosa_mel_fn(sr=self.sampling_rate, - n_fft=self.n_fft, - n_mels=self.num_mels, - fmin=self.fmin, - fmax=self.fmax) - mel_basis = torch.from_numpy(mel).float() - hann_window = torch.hann_window(self.win_size) - - self.register_buffer('mel_basis', mel_basis) - self.register_buffer('hann_window', hann_window) - - @property - def device(self): - return self.hann_window.device - - def forward(self, waveform: torch.Tensor) -> torch.Tensor: - # input: batch_size * length - bs = waveform.shape[0] - waveform = waveform.clamp(min=-1., max=1.) 
- - spec = torch.stft(waveform, - self.n_fft, - hop_length=self.hop_size, - win_length=self.win_size, - window=self.hann_window, - center=True, - pad_mode='reflect', - normalized=False, - onesided=True, - return_complex=True) - - spec = torch.view_as_real(spec) - # print('After stft', spec.shape, spec.min(), spec.max(), spec.mean()) - - power = spec.pow(2).sum(-1) - angle = torch.atan2(spec[..., 1], spec[..., 0]) - - print('power', power.shape, power.min(), power.max(), power.mean()) - print('angle', angle.shape, angle.min(), angle.max(), angle.mean()) - - # print('mel', self.mel_basis.shape, self.mel_basis.min(), self.mel_basis.max(), - # self.mel_basis.mean()) - - # spec = rearrange(spec, 'b f t c -> (b c) f t') - - # spec = self.mel_transform(spec) - - # spec = torch.matmul(self.mel_basis, spec) - - # print('After mel', spec.shape, spec.min(), spec.max(), spec.mean()) - - # spec = spectral_normalize_torch(spec, self.norm_fn) - - # print('After norm', spec.shape, spec.min(), spec.max(), spec.mean()) - - # compute magnitude - # magnitude = torch.sqrt((spec**2).sum(-1)) - # normalize by magnitude - # scaled_magnitude = torch.log10(magnitude.clamp(min=1e-5)) * 10 - # spec = spec / magnitude.unsqueeze(-1) * scaled_magnitude.unsqueeze(-1) - - # power = torch.log10(power.clamp(min=1e-5)) * 10 - power = torch.log10(power.clamp(min=1e-5)) - - print('After scaling', power.shape, power.min(), power.max(), power.mean()) - - spec = torch.stack([power, angle], dim=-1) - - # spec = rearrange(spec, '(b c) f t -> b c f t', b=bs) - spec = rearrange(spec, 'b f t c -> b c f t', b=bs) - - # spec[:, :, 400:] = 0 - - return spec - - def invert(self, spec: torch.Tensor, length: int) -> torch.Tensor: - bs = spec.shape[0] - - # spec = rearrange(spec, 'b c f t -> (b c) f t') - # print(spec.shape, self.mel_basis.shape) - # spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution - # spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec - - # spec = self.invmel_transform(spec) - - spec = rearrange(spec, 'b c f t -> b f t c', b=bs).contiguous() - - # spec[..., 0] = 10**(spec[..., 0] / 10) - - power = spec[..., 0] - power = 10**power - - # print('After unscaling', spec[..., 0].shape, spec[..., 0].min(), spec[..., 0].max(), - # spec[..., 0].mean()) - - unit_vector = torch.stack([ - torch.cos(spec[..., 1]), - torch.sin(spec[..., 1]), - ], dim=-1) - - spec = torch.sqrt(power) * unit_vector - - # spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous() - spec = torch.view_as_complex(spec) - - waveform = torch.istft( - spec, - self.n_fft, - length=length, - hop_length=self.hop_size, - win_length=self.win_size, - window=self.hann_window, - center=True, - normalized=False, - onesided=True, - return_complex=False, - ) - - return waveform - - -if __name__ == '__main__': - - converter = STFTConverter(sampling_rate=16000) - - signal = torchaudio.load('./output/ZZ6GRocWW38_000090.wav')[0] - # resample signal at 44100 Hz - # signal = torchaudio.transforms.Resample(16_000, 44_100)(signal) - - L = signal.shape[1] - print('Input signal', signal.shape) - spec = converter(signal) - - print('Final spec', spec.shape) - - signal_recon = converter.invert(spec, length=L) - print('Output signal', signal_recon.shape, signal_recon.min(), signal_recon.max(), - signal_recon.mean()) - - print('MSE', torch.nn.functional.mse_loss(signal, signal_recon)) - torchaudio.save('./output/ZZ6GRocWW38_000090_recon.wav', signal_recon, 16000) diff --git a/mmaudio_x/ext/stft_converter_mel.py b/mmaudio_x/ext/stft_converter_mel.py 
deleted file mode 100644 index f6b32d4cb9a23cd74f723e7d8307fd82fa1abba0..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/stft_converter_mel.py +++ /dev/null @@ -1,234 +0,0 @@ -# Reference: # https://github.com/bytedance/Make-An-Audio-2 - -import torch -import torch.nn as nn -import torchaudio -from einops import rearrange -from librosa.filters import mel as librosa_mel_fn - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10): - return norm_fn(torch.clamp(x, min=clip_val) * C) - - -def spectral_normalize_torch(magnitudes, norm_fn): - output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn) - return output - - -class STFTConverter(nn.Module): - - def __init__( - self, - *, - sampling_rate: float = 16_000, - n_fft: int = 1024, - num_mels: int = 128, - hop_size: int = 256, - win_size: int = 1024, - fmin: float = 0, - fmax: float = 8_000, - norm_fn=torch.log, - ): - super().__init__() - self.sampling_rate = sampling_rate - self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.norm_fn = norm_fn - - mel = librosa_mel_fn(sr=self.sampling_rate, - n_fft=self.n_fft, - n_mels=self.num_mels, - fmin=self.fmin, - fmax=self.fmax) - mel_basis = torch.from_numpy(mel).float() - hann_window = torch.hann_window(self.win_size) - - self.register_buffer('mel_basis', mel_basis) - self.register_buffer('hann_window', hann_window) - - @property - def device(self): - return self.hann_window.device - - def forward(self, waveform: torch.Tensor) -> torch.Tensor: - # input: batch_size * length - bs = waveform.shape[0] - waveform = waveform.clamp(min=-1., max=1.) - - spec = torch.stft(waveform, - self.n_fft, - hop_length=self.hop_size, - win_length=self.win_size, - window=self.hann_window, - center=True, - pad_mode='reflect', - normalized=False, - onesided=True, - return_complex=True) - - spec = torch.view_as_real(spec) - # print('After stft', spec.shape, spec.min(), spec.max(), spec.mean()) - - power = (spec.pow(2).sum(-1))**(0.5) - angle = torch.atan2(spec[..., 1], spec[..., 0]) - - print('power 1', power.shape, power.min(), power.max(), power.mean()) - print('angle 1', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2]) - - # print('mel', self.mel_basis.shape, self.mel_basis.min(), self.mel_basis.max(), - # self.mel_basis.mean()) - - # spec = self.mel_transform(spec) - - # power = torch.matmul(self.mel_basis, power) - - spec = rearrange(spec, 'b f t c -> (b c) f t') - spec = self.mel_basis.unsqueeze(0) @ spec - spec = rearrange(spec, '(b c) f t -> b f t c', b=bs) - - power = (spec.pow(2).sum(-1))**(0.5) - angle = torch.atan2(spec[..., 1], spec[..., 0]) - - print('power', power.shape, power.min(), power.max(), power.mean()) - print('angle', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2]) - - # print('After mel', spec.shape, spec.min(), spec.max(), spec.mean()) - - # spec = spectral_normalize_torch(spec, self.norm_fn) - - # print('After norm', spec.shape, spec.min(), spec.max(), spec.mean()) - - # compute magnitude - # magnitude = torch.sqrt((spec**2).sum(-1)) - # normalize by magnitude - # scaled_magnitude = torch.log10(magnitude.clamp(min=1e-5)) * 10 - # spec = spec / magnitude.unsqueeze(-1) * scaled_magnitude.unsqueeze(-1) - - # power = torch.log10(power.clamp(min=1e-5)) * 10 - power = torch.log10(power.clamp(min=1e-8)) - - print('After scaling', power.shape, power.min(), power.max(), power.mean()) - - # spec = torch.stack([power, 
angle], dim=-1) - - # spec = rearrange(spec, '(b c) f t -> b c f t', b=bs) - # spec = rearrange(spec, 'b f t c -> b c f t', b=bs) - - # spec[:, :, 400:] = 0 - - return power, angle - # return spec[..., 0], spec[..., 1] - - def invert(self, spec: torch.Tensor, length: int) -> torch.Tensor: - - power, angle = spec - - bs = power.shape[0] - - # spec = rearrange(spec, 'b c f t -> (b c) f t') - # print(spec.shape, self.mel_basis.shape) - # spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution - # spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec - - # spec = self.invmel_transform(spec) - - # spec = rearrange(spec, 'b c f t -> b f t c', b=bs).contiguous() - - # spec[..., 0] = 10**(spec[..., 0] / 10) - - # power = spec[..., 0] - power = 10**power - - # print('After unscaling', spec[..., 0].shape, spec[..., 0].min(), spec[..., 0].max(), - # spec[..., 0].mean()) - - unit_vector = torch.stack([ - torch.cos(angle), - torch.sin(angle), - ], dim=-1) - - spec = power.unsqueeze(-1) * unit_vector - - # power = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), power).solution - spec = rearrange(spec, 'b f t c -> (b c) f t') - spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec - # spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution - spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous() - - power = (spec.pow(2).sum(-1))**(0.5) - angle = torch.atan2(spec[..., 1], spec[..., 0]) - - print('power 2', power.shape, power.min(), power.max(), power.mean()) - print('angle 2', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2]) - - # spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous() - spec = torch.view_as_complex(spec) - - waveform = torch.istft( - spec, - self.n_fft, - length=length, - hop_length=self.hop_size, - win_length=self.win_size, - window=self.hann_window, - center=True, - normalized=False, - onesided=True, - return_complex=False, - ) - - return waveform - - -if __name__ == '__main__': - - converter = STFTConverter(sampling_rate=16000) - - signal = torchaudio.load('./output/ZZ6GRocWW38_000090.wav')[0] - # resample signal at 44100 Hz - # signal = torchaudio.transforms.Resample(16_000, 44_100)(signal) - - L = signal.shape[1] - print('Input signal', signal.shape) - spec = converter(signal) - - power, angle = spec - - # print(power.shape, angle.shape) - # print(power, power.min(), power.max(), power.mean()) - # power = power.clamp(-1, 1) - # angle = angle.clamp(-1, 1) - - import matplotlib.pyplot as plt - - # Visualize power - plt.figure() - plt.imshow(power[0].detach().numpy(), aspect='auto', origin='lower') - plt.colorbar() - plt.title('Power') - plt.xlabel('Time') - plt.ylabel('Frequency') - plt.savefig('./output/power.png') - - # Visualize angle - plt.figure() - plt.imshow(angle[0].detach().numpy(), aspect='auto', origin='lower') - plt.colorbar() - plt.title('Angle') - plt.xlabel('Time') - plt.ylabel('Frequency') - plt.savefig('./output/angle.png') - - # print('Final spec', spec.shape) - - signal_recon = converter.invert(spec, length=L) - print('Output signal', signal_recon.shape, signal_recon.min(), signal_recon.max(), - signal_recon.mean()) - - print('MSE', torch.nn.functional.mse_loss(signal, signal_recon)) - torchaudio.save('./output/ZZ6GRocWW38_000090_recon.wav', signal_recon, 16000) diff --git a/mmaudio_x/ext/synchformer/LICENSE b/mmaudio_x/ext/synchformer/LICENSE deleted file mode 100644 index 2f70bf24b6f45f458998bdf5746376c4832352ea..0000000000000000000000000000000000000000 --- 
a/mmaudio_x/ext/synchformer/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2024 Vladimir Iashin - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/mmaudio_x/ext/synchformer/__init__.py b/mmaudio_x/ext/synchformer/__init__.py deleted file mode 100644 index 3aa1c4b6464593722e557505d721f3ca5e05f4e8..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from mmaudio.ext.synchformer.synchformer import Synchformer diff --git a/mmaudio_x/ext/synchformer/divided_224_16x4.yaml b/mmaudio_x/ext/synchformer/divided_224_16x4.yaml deleted file mode 100644 index f9d20b76302a8af7928391643bd4b2d184e970aa..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/divided_224_16x4.yaml +++ /dev/null @@ -1,84 +0,0 @@ -TRAIN: - ENABLE: True - DATASET: Ssv2 - BATCH_SIZE: 32 - EVAL_PERIOD: 5 - CHECKPOINT_PERIOD: 5 - AUTO_RESUME: True - CHECKPOINT_EPOCH_RESET: True - CHECKPOINT_FILE_PATH: /checkpoint/fmetze/neurips_sota/40944587/checkpoints/checkpoint_epoch_00035.pyth -DATA: - NUM_FRAMES: 16 - SAMPLING_RATE: 4 - TRAIN_JITTER_SCALES: [256, 320] - TRAIN_CROP_SIZE: 224 - TEST_CROP_SIZE: 224 - INPUT_CHANNEL_NUM: [3] - MEAN: [0.5, 0.5, 0.5] - STD: [0.5, 0.5, 0.5] - PATH_TO_DATA_DIR: /private/home/mandelapatrick/slowfast/data/ssv2 - PATH_PREFIX: /datasets01/SomethingV2/092720/20bn-something-something-v2-frames - INV_UNIFORM_SAMPLE: True - RANDOM_FLIP: False - REVERSE_INPUT_CHANNEL: True - USE_RAND_AUGMENT: True - RE_PROB: 0.0 - USE_REPEATED_AUG: False - USE_RANDOM_RESIZE_CROPS: False - COLORJITTER: False - GRAYSCALE: False - GAUSSIAN: False -SOLVER: - BASE_LR: 1e-4 - LR_POLICY: steps_with_relative_lrs - LRS: [1, 0.1, 0.01] - STEPS: [0, 20, 30] - MAX_EPOCH: 35 - MOMENTUM: 0.9 - WEIGHT_DECAY: 5e-2 - WARMUP_EPOCHS: 0.0 - OPTIMIZING_METHOD: adamw - USE_MIXED_PRECISION: True - SMOOTHING: 0.2 -SLOWFAST: - ALPHA: 8 -VIT: - PATCH_SIZE: 16 - PATCH_SIZE_TEMP: 2 - CHANNELS: 3 - EMBED_DIM: 768 - DEPTH: 12 - NUM_HEADS: 12 - MLP_RATIO: 4 - QKV_BIAS: True - VIDEO_INPUT: True - TEMPORAL_RESOLUTION: 8 - USE_MLP: True - DROP: 0.0 - POS_DROPOUT: 0.0 - DROP_PATH: 0.2 - IM_PRETRAINED: True - HEAD_DROPOUT: 0.0 - HEAD_ACT: tanh - PRETRAINED_WEIGHTS: vit_1k - ATTN_LAYER: divided -MODEL: - NUM_CLASSES: 174 - ARCH: slow - MODEL_NAME: VisionTransformer - LOSS_FUNC: cross_entropy -TEST: - ENABLE: True - DATASET: Ssv2 - BATCH_SIZE: 64 - NUM_ENSEMBLE_VIEWS: 1 - NUM_SPATIAL_CROPS: 3 -DATA_LOADER: - NUM_WORKERS: 4 - PIN_MEMORY: True -NUM_GPUS: 8 -NUM_SHARDS: 4 -RNG_SEED: 
0 -OUTPUT_DIR: . -TENSORBOARD: - ENABLE: True diff --git a/mmaudio_x/ext/synchformer/motionformer.py b/mmaudio_x/ext/synchformer/motionformer.py deleted file mode 100644 index f02141e7cf3a3a133553b6a25341b4b68a483de4..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/motionformer.py +++ /dev/null @@ -1,400 +0,0 @@ -import logging -from pathlib import Path - -import einops -import torch -from omegaconf import OmegaConf -from timm.layers import trunc_normal_ -from torch import nn - -from mmaudio.ext.synchformer.utils import check_if_file_exists_else_download -from mmaudio.ext.synchformer.video_model_builder import VisionTransformer - -FILE2URL = { - # cfg - 'motionformer_224_16x4.yaml': - 'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/motionformer_224_16x4.yaml', - 'joint_224_16x4.yaml': - 'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/joint_224_16x4.yaml', - 'divided_224_16x4.yaml': - 'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/divided_224_16x4.yaml', - # ckpt - 'ssv2_motionformer_224_16x4.pyth': - 'https://dl.fbaipublicfiles.com/motionformer/ssv2_motionformer_224_16x4.pyth', - 'ssv2_joint_224_16x4.pyth': - 'https://dl.fbaipublicfiles.com/motionformer/ssv2_joint_224_16x4.pyth', - 'ssv2_divided_224_16x4.pyth': - 'https://dl.fbaipublicfiles.com/motionformer/ssv2_divided_224_16x4.pyth', -} - - -class MotionFormer(VisionTransformer): - ''' This class serves three purposes: - 1. Renames the class to MotionFormer. - 2. Downloads the cfg from the original repo and patches it if needed. - 3. Takes care of feature extraction by redefining .forward() - - if `extract_features=True` and `factorize_space_time=False`, - the output is of shape (B, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8 - - if `extract_features=True` and `factorize_space_time=True`, the output is of shape (B*S, D) - and spatial and temporal transformer encoder layers are used. - - if `extract_features=True` and `factorize_space_time=True` as well as `add_global_repr=True`, - the output is of shape (B, D); spatial and temporal transformer encoder layers - are used, and the global representation is extracted from segments (an extra pos emb - is added).
- ''' - - def __init__( - self, - extract_features: bool = False, - ckpt_path: str = None, - factorize_space_time: bool = None, - agg_space_module: str = None, - agg_time_module: str = None, - add_global_repr: bool = True, - agg_segments_module: str = None, - max_segments: int = None, - ): - self.extract_features = extract_features - self.ckpt_path = ckpt_path - self.factorize_space_time = factorize_space_time - - if self.ckpt_path is not None: - check_if_file_exists_else_download(self.ckpt_path, FILE2URL) - ckpt = torch.load(self.ckpt_path, map_location='cpu') - mformer_ckpt2cfg = { - 'ssv2_motionformer_224_16x4.pyth': 'motionformer_224_16x4.yaml', - 'ssv2_joint_224_16x4.pyth': 'joint_224_16x4.yaml', - 'ssv2_divided_224_16x4.pyth': 'divided_224_16x4.yaml', - } - # init from motionformer ckpt or from our Stage I ckpt - # depending on whether the feat extractor was pre-trained on AVCLIPMoCo or not, we need to - # load the state dict differently - was_pt_on_avclip = self.ckpt_path.endswith( - '.pt') # checks if it is a stage I ckpt (FIXME: a bit generic) - if self.ckpt_path.endswith(tuple(mformer_ckpt2cfg.keys())): - cfg_fname = mformer_ckpt2cfg[Path(self.ckpt_path).name] - elif was_pt_on_avclip: - # TODO: this is a hack, we should be able to get the cfg from the ckpt (earlier ckpt didn't have it) - s1_cfg = ckpt.get('args', None) # Stage I cfg - if s1_cfg is not None: - s1_vfeat_extractor_ckpt_path = s1_cfg.model.params.vfeat_extractor.params.ckpt_path - # if the stage I ckpt was initialized from a motionformer ckpt or train from scratch - if s1_vfeat_extractor_ckpt_path is not None: - cfg_fname = mformer_ckpt2cfg[Path(s1_vfeat_extractor_ckpt_path).name] - else: - cfg_fname = 'divided_224_16x4.yaml' - else: - cfg_fname = 'divided_224_16x4.yaml' - else: - raise ValueError(f'ckpt_path {self.ckpt_path} is not supported.') - else: - was_pt_on_avclip = False - cfg_fname = 'divided_224_16x4.yaml' - # logging.info(f'No ckpt_path provided, using {cfg_fname} config.') - - if cfg_fname in ['motionformer_224_16x4.yaml', 'divided_224_16x4.yaml']: - pos_emb_type = 'separate' - elif cfg_fname == 'joint_224_16x4.yaml': - pos_emb_type = 'joint' - - self.mformer_cfg_path = Path(__file__).absolute().parent / cfg_fname - - check_if_file_exists_else_download(self.mformer_cfg_path, FILE2URL) - mformer_cfg = OmegaConf.load(self.mformer_cfg_path) - logging.info(f'Loading MotionFormer config from {self.mformer_cfg_path.absolute()}') - - # patch the cfg (from the default cfg defined in the repo `Motionformer/slowfast/config/defaults.py`) - mformer_cfg.VIT.ATTN_DROPOUT = 0.0 - mformer_cfg.VIT.POS_EMBED = pos_emb_type - mformer_cfg.VIT.USE_ORIGINAL_TRAJ_ATTN_CODE = True - mformer_cfg.VIT.APPROX_ATTN_TYPE = 'none' # guessing - mformer_cfg.VIT.APPROX_ATTN_DIM = 64 # from ckpt['cfg'] - - # finally init VisionTransformer with the cfg - super().__init__(mformer_cfg) - - # load the ckpt now if ckpt is provided and not from AVCLIPMoCo-pretrained ckpt - if (self.ckpt_path is not None) and (not was_pt_on_avclip): - _ckpt_load_status = self.load_state_dict(ckpt['model_state'], strict=False) - if len(_ckpt_load_status.missing_keys) > 0 or len( - _ckpt_load_status.unexpected_keys) > 0: - logging.warning(f'Loading exact vfeat_extractor ckpt from {self.ckpt_path} failed.' 
\ - f'Missing keys: {_ckpt_load_status.missing_keys}, ' \ - f'Unexpected keys: {_ckpt_load_status.unexpected_keys}') - else: - logging.info(f'Loading vfeat_extractor ckpt from {self.ckpt_path} succeeded.') - - if self.extract_features: - assert isinstance(self.norm, - nn.LayerNorm), 'early x[:, 1:, :] may not be safe for per-tr weights' - # pre-logits are Sequential(nn.Linear(emb, emd), act) and `act` is tanh but see the logger - self.pre_logits = nn.Identity() - # we don't need the classification head (saving memory) - self.head = nn.Identity() - self.head_drop = nn.Identity() - # avoiding code duplication (used only if agg_*_module is TransformerEncoderLayer) - transf_enc_layer_kwargs = dict( - d_model=self.embed_dim, - nhead=self.num_heads, - activation=nn.GELU(), - batch_first=True, - dim_feedforward=self.mlp_ratio * self.embed_dim, - dropout=self.drop_rate, - layer_norm_eps=1e-6, - norm_first=True, - ) - # define adapters if needed - if self.factorize_space_time: - if agg_space_module == 'TransformerEncoderLayer': - self.spatial_attn_agg = SpatialTransformerEncoderLayer( - **transf_enc_layer_kwargs) - elif agg_space_module == 'AveragePooling': - self.spatial_attn_agg = AveragePooling(avg_pattern='BS D t h w -> BS D t', - then_permute_pattern='BS D t -> BS t D') - if agg_time_module == 'TransformerEncoderLayer': - self.temp_attn_agg = TemporalTransformerEncoderLayer(**transf_enc_layer_kwargs) - elif agg_time_module == 'AveragePooling': - self.temp_attn_agg = AveragePooling(avg_pattern='BS t D -> BS D') - elif 'Identity' in agg_time_module: - self.temp_attn_agg = nn.Identity() - # define a global aggregation layer (aggregarate over segments) - self.add_global_repr = add_global_repr - if add_global_repr: - if agg_segments_module == 'TransformerEncoderLayer': - # we can reuse the same layer as for temporal factorization (B, dim_to_agg, D) -> (B, D) - # we need to add pos emb (PE) because previously we added the same PE for each segment - pos_max_len = max_segments if max_segments is not None else 16 # 16 = 10sec//0.64sec + 1 - self.global_attn_agg = TemporalTransformerEncoderLayer( - add_pos_emb=True, - pos_emb_drop=mformer_cfg.VIT.POS_DROPOUT, - pos_max_len=pos_max_len, - **transf_enc_layer_kwargs) - elif agg_segments_module == 'AveragePooling': - self.global_attn_agg = AveragePooling(avg_pattern='B S D -> B D') - - if was_pt_on_avclip: - # we need to filter out the state_dict of the AVCLIP model (has both A and V extractors) - # and keep only the state_dict of the feat extractor - ckpt_weights = dict() - for k, v in ckpt['state_dict'].items(): - if k.startswith(('module.v_encoder.', 'v_encoder.')): - k = k.replace('module.', '').replace('v_encoder.', '') - ckpt_weights[k] = v - _load_status = self.load_state_dict(ckpt_weights, strict=False) - if len(_load_status.missing_keys) > 0 or len(_load_status.unexpected_keys) > 0: - logging.warning(f'Loading exact vfeat_extractor ckpt from {self.ckpt_path} failed. 
\n' \ - f'Missing keys ({len(_load_status.missing_keys)}): ' \ - f'{_load_status.missing_keys}, \n' \ - f'Unexpected keys ({len(_load_status.unexpected_keys)}): ' \ - f'{_load_status.unexpected_keys} \n' \ - f'temp_attn_agg are expected to be missing if ckpt was pt contrastively.') - else: - logging.info(f'Loading vfeat_extractor ckpt from {self.ckpt_path} succeeded.') - - # patch_embed is not used in MotionFormer, only patch_embed_3d, because cfg.VIT.PATCH_SIZE_TEMP > 1 - # but it used to calculate the number of patches, so we need to set keep it - self.patch_embed.requires_grad_(False) - - def forward(self, x): - ''' - x is of shape (B, S, C, T, H, W) where S is the number of segments. - ''' - # Batch, Segments, Channels, T=frames, Height, Width - B, S, C, T, H, W = x.shape - # Motionformer expects a tensor of shape (1, B, C, T, H, W). - # The first dimension (1) is a dummy dimension to make the input tensor and won't be used: - # see `video_model_builder.video_input`. - # x = x.unsqueeze(0) # (1, B, S, C, T, H, W) - - orig_shape = (B, S, C, T, H, W) - x = x.view(B * S, C, T, H, W) # flatten batch and segments - x = self.forward_segments(x, orig_shape=orig_shape) - # unpack the segments (using rest dimensions to support different shapes e.g. (BS, D) or (BS, t, D)) - x = x.view(B, S, *x.shape[1:]) - # x is now of shape (B*S, D) or (B*S, t, D) if `self.temp_attn_agg` is `Identity` - - return x # x is (B, S, ...) - - def forward_segments(self, x, orig_shape: tuple) -> torch.Tensor: - '''x is of shape (1, BS, C, T, H, W) where S is the number of segments.''' - x, x_mask = self.forward_features(x) - - assert self.extract_features - - # (BS, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8 - x = x[:, - 1:, :] # without the CLS token for efficiency (should be safe for LayerNorm and FC) - x = self.norm(x) - x = self.pre_logits(x) - if self.factorize_space_time: - x = self.restore_spatio_temp_dims(x, orig_shape) # (B*S, D, t, h, w) <- (B*S, t*h*w, D) - - x = self.spatial_attn_agg(x, x_mask) # (B*S, t, D) - x = self.temp_attn_agg( - x) # (B*S, D) or (BS, t, D) if `self.temp_attn_agg` is `Identity` - - return x - - def restore_spatio_temp_dims(self, feats: torch.Tensor, orig_shape: tuple) -> torch.Tensor: - ''' - feats are of shape (B*S, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8 - Our goal is to make them of shape (B*S, t, h, w, D) where h, w are the spatial dimensions. - From `self.patch_embed_3d`, it follows that we could reshape feats with: - `feats.transpose(1, 2).view(B*S, D, t, h, w)` - ''' - B, S, C, T, H, W = orig_shape - D = self.embed_dim - - # num patches in each dimension - t = T // self.patch_embed_3d.z_block_size - h = self.patch_embed_3d.height - w = self.patch_embed_3d.width - - feats = feats.permute(0, 2, 1) # (B*S, D, T) - feats = feats.view(B * S, D, t, h, w) # (B*S, D, t, h, w) - - return feats - - -class BaseEncoderLayer(nn.TransformerEncoderLayer): - ''' - This is a wrapper around nn.TransformerEncoderLayer that adds a CLS token - to the sequence and outputs the CLS token's representation. - This base class parents both SpatialEncoderLayer and TemporalEncoderLayer for the RGB stream - and the FrequencyEncoderLayer and TemporalEncoderLayer for the audio stream stream. - We also, optionally, add a positional embedding to the input sequence which - allows to reuse it for global aggregation (of segments) for both streams. 
- ''' - - def __init__(self, - add_pos_emb: bool = False, - pos_emb_drop: float = None, - pos_max_len: int = None, - *args_transformer_enc, - **kwargs_transformer_enc): - super().__init__(*args_transformer_enc, **kwargs_transformer_enc) - self.cls_token = nn.Parameter(torch.zeros(1, 1, self.self_attn.embed_dim)) - trunc_normal_(self.cls_token, std=.02) - - # add positional embedding - self.add_pos_emb = add_pos_emb - if add_pos_emb: - self.pos_max_len = 1 + pos_max_len # +1 (for CLS) - self.pos_emb = nn.Parameter(torch.zeros(1, self.pos_max_len, self.self_attn.embed_dim)) - self.pos_drop = nn.Dropout(pos_emb_drop) - trunc_normal_(self.pos_emb, std=.02) - - self.apply(self._init_weights) - - def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None): - ''' x is of shape (B, N, D); if provided x_mask is of shape (B, N)''' - batch_dim = x.shape[0] - - # add CLS token - cls_tokens = self.cls_token.expand(batch_dim, -1, -1) # expanding to match batch dimension - x = torch.cat((cls_tokens, x), dim=-2) # (batch_dim, 1+seq_len, D) - if x_mask is not None: - cls_mask = torch.ones((batch_dim, 1), dtype=torch.bool, - device=x_mask.device) # 1=keep; 0=mask - x_mask_w_cls = torch.cat((cls_mask, x_mask), dim=-1) # (batch_dim, 1+seq_len) - B, N = x_mask_w_cls.shape - # torch expects (N, N) or (B*num_heads, N, N) mask (sadness ahead); torch masks - x_mask_w_cls = x_mask_w_cls.reshape(B, 1, 1, N)\ - .expand(-1, self.self_attn.num_heads, N, -1)\ - .reshape(B * self.self_attn.num_heads, N, N) - assert x_mask_w_cls.dtype == x_mask_w_cls.bool().dtype, 'x_mask_w_cls.dtype != bool' - x_mask_w_cls = ~x_mask_w_cls # invert mask (1=mask) - else: - x_mask_w_cls = None - - # add positional embedding - if self.add_pos_emb: - seq_len = x.shape[ - 1] # (don't even think about moving it before the CLS token concatenation) - assert seq_len <= self.pos_max_len, f'Seq len ({seq_len}) > pos_max_len ({self.pos_max_len})' - x = x + self.pos_emb[:, :seq_len, :] - x = self.pos_drop(x) - - # apply encoder layer (calls nn.TransformerEncoderLayer.forward); - x = super().forward(src=x, src_mask=x_mask_w_cls) # (batch_dim, 1+seq_len, D) - - # CLS token is expected to hold spatial information for each frame - x = x[:, 0, :] # (batch_dim, D) - - return x - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'cls_token', 'pos_emb'} - - -class SpatialTransformerEncoderLayer(BaseEncoderLayer): - ''' Aggregates spatial dimensions by applying attention individually to each frame. ''' - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None) -> torch.Tensor: - ''' x is of shape (B*S, D, t, h, w) where S is the number of segments. - if specified x_mask (B*S, t, h, w), 0=masked, 1=kept - Returns a tensor of shape (B*S, t, D) pooling spatial information for each frame. 
''' - BS, D, t, h, w = x.shape - - # time as a batch dimension and flatten spatial dimensions as sequence - x = einops.rearrange(x, 'BS D t h w -> (BS t) (h w) D') - # similar to mask - if x_mask is not None: - x_mask = einops.rearrange(x_mask, 'BS t h w -> (BS t) (h w)') - - # apply encoder layer (BaseEncoderLayer.forward) - it will add CLS token and output its representation - x = super().forward(x=x, x_mask=x_mask) # (B*S*t, D) - - # reshape back to (B*S, t, D) - x = einops.rearrange(x, '(BS t) D -> BS t D', BS=BS, t=t) - - # (B*S, t, D) - return x - - -class TemporalTransformerEncoderLayer(BaseEncoderLayer): - ''' Aggregates temporal dimension with attention. Also used with pos emb as global aggregation - in both streams. ''' - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def forward(self, x): - ''' x is of shape (B*S, t, D) where S is the number of segments. - Returns a tensor of shape (B*S, D) pooling temporal information. ''' - BS, t, D = x.shape - - # apply encoder layer (BaseEncoderLayer.forward) - it will add CLS token and output its representation - x = super().forward(x) # (B*S, D) - - return x # (B*S, D) - - -class AveragePooling(nn.Module): - - def __init__(self, avg_pattern: str, then_permute_pattern: str = None) -> None: - ''' patterns are e.g. "bs t d -> bs d" ''' - super().__init__() - # TODO: need to register them as buffers (but fails because these are strings) - self.reduce_fn = 'mean' - self.avg_pattern = avg_pattern - self.then_permute_pattern = then_permute_pattern - - def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None) -> torch.Tensor: - x = einops.reduce(x, self.avg_pattern, self.reduce_fn) - if self.then_permute_pattern is not None: - x = einops.rearrange(x, self.then_permute_pattern) - return x diff --git a/mmaudio_x/ext/synchformer/synchformer.py b/mmaudio_x/ext/synchformer/synchformer.py deleted file mode 100644 index 80871f004d6f4c57f48594d90195f84f89d7cb0a..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/synchformer.py +++ /dev/null @@ -1,55 +0,0 @@ -import logging -from typing import Any, Mapping - -import torch -from torch import nn - -from mmaudio.ext.synchformer.motionformer import MotionFormer - - -class Synchformer(nn.Module): - - def __init__(self): - super().__init__() - - self.vfeat_extractor = MotionFormer(extract_features=True, - factorize_space_time=True, - agg_space_module='TransformerEncoderLayer', - agg_time_module='torch.nn.Identity', - add_global_repr=False) - - # self.vfeat_extractor = instantiate_from_config(vfeat_extractor) - # self.afeat_extractor = instantiate_from_config(afeat_extractor) - # # bridging the s3d latent dim (1024) into what is specified in the config - # # to match e.g. the transformer dim - # self.vproj = instantiate_from_config(vproj) - # self.aproj = instantiate_from_config(aproj) - # self.transformer = instantiate_from_config(transformer) - - def forward(self, vis): - B, S, Tv, C, H, W = vis.shape - vis = vis.permute(0, 1, 3, 2, 4, 5) # (B, S, C, Tv, H, W) - # feat extractors return a tuple of segment-level and global features (ignored for sync) - # (B, S, tv, D), e.g. 
(B, 7, 8, 768) - vis = self.vfeat_extractor(vis) - return vis - - def load_state_dict(self, sd: Mapping[str, Any], strict: bool = True): - # discard all entries except vfeat_extractor - sd = {k: v for k, v in sd.items() if k.startswith('vfeat_extractor')} - - return super().load_state_dict(sd, strict) - - -if __name__ == "__main__": - model = Synchformer().cuda().eval() - sd = torch.load('./ext_weights/synchformer_state_dict.pth', weights_only=True) - model.load_state_dict(sd) - - vid = torch.randn(2, 7, 16, 3, 224, 224).cuda() - features = model.extract_vfeats(vid, for_loop=False).detach().cpu() - print(features.shape) - - # extract and save the state dict only - # sd = torch.load('./ext_weights/sync_model_audioset.pt')['model'] - # torch.save(sd, './ext_weights/synchformer_state_dict.pth') diff --git a/mmaudio_x/ext/synchformer/utils.py b/mmaudio_x/ext/synchformer/utils.py deleted file mode 100644 index a797eb9c66f04b7c29934bfc384c935cdf441a62..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/utils.py +++ /dev/null @@ -1,92 +0,0 @@ -from hashlib import md5 -from pathlib import Path - -import requests -from tqdm import tqdm - -PARENT_LINK = 'https://a3s.fi/swift/v1/AUTH_a235c0f452d648828f745589cde1219a' -FNAME2LINK = { - # S3: Synchability: AudioSet (run 2) - '24-01-22T20-34-52.pt': - f'{PARENT_LINK}/sync/sync_models/24-01-22T20-34-52/24-01-22T20-34-52.pt', - 'cfg-24-01-22T20-34-52.yaml': - f'{PARENT_LINK}/sync/sync_models/24-01-22T20-34-52/cfg-24-01-22T20-34-52.yaml', - # S2: Synchformer: AudioSet (run 2) - '24-01-04T16-39-21.pt': - f'{PARENT_LINK}/sync/sync_models/24-01-04T16-39-21/24-01-04T16-39-21.pt', - 'cfg-24-01-04T16-39-21.yaml': - f'{PARENT_LINK}/sync/sync_models/24-01-04T16-39-21/cfg-24-01-04T16-39-21.yaml', - # S2: Synchformer: AudioSet (run 1) - '23-08-28T11-23-23.pt': - f'{PARENT_LINK}/sync/sync_models/23-08-28T11-23-23/23-08-28T11-23-23.pt', - 'cfg-23-08-28T11-23-23.yaml': - f'{PARENT_LINK}/sync/sync_models/23-08-28T11-23-23/cfg-23-08-28T11-23-23.yaml', - # S2: Synchformer: LRS3 (run 2) - '23-12-23T18-33-57.pt': - f'{PARENT_LINK}/sync/sync_models/23-12-23T18-33-57/23-12-23T18-33-57.pt', - 'cfg-23-12-23T18-33-57.yaml': - f'{PARENT_LINK}/sync/sync_models/23-12-23T18-33-57/cfg-23-12-23T18-33-57.yaml', - # S2: Synchformer: VGS (run 2) - '24-01-02T10-00-53.pt': - f'{PARENT_LINK}/sync/sync_models/24-01-02T10-00-53/24-01-02T10-00-53.pt', - 'cfg-24-01-02T10-00-53.yaml': - f'{PARENT_LINK}/sync/sync_models/24-01-02T10-00-53/cfg-24-01-02T10-00-53.yaml', - # SparseSync: ft VGGSound-Full - '22-09-21T21-00-52.pt': - f'{PARENT_LINK}/sync/sync_models/22-09-21T21-00-52/22-09-21T21-00-52.pt', - 'cfg-22-09-21T21-00-52.yaml': - f'{PARENT_LINK}/sync/sync_models/22-09-21T21-00-52/cfg-22-09-21T21-00-52.yaml', - # SparseSync: ft VGGSound-Sparse - '22-07-28T15-49-45.pt': - f'{PARENT_LINK}/sync/sync_models/22-07-28T15-49-45/22-07-28T15-49-45.pt', - 'cfg-22-07-28T15-49-45.yaml': - f'{PARENT_LINK}/sync/sync_models/22-07-28T15-49-45/cfg-22-07-28T15-49-45.yaml', - # SparseSync: only pt on LRS3 - '22-07-13T22-25-49.pt': - f'{PARENT_LINK}/sync/sync_models/22-07-13T22-25-49/22-07-13T22-25-49.pt', - 'cfg-22-07-13T22-25-49.yaml': - f'{PARENT_LINK}/sync/sync_models/22-07-13T22-25-49/cfg-22-07-13T22-25-49.yaml', - # SparseSync: feature extractors - 'ResNetAudio-22-08-04T09-51-04.pt': - f'{PARENT_LINK}/sync/ResNetAudio-22-08-04T09-51-04.pt', # 2s - 'ResNetAudio-22-08-03T23-14-49.pt': - f'{PARENT_LINK}/sync/ResNetAudio-22-08-03T23-14-49.pt', # 3s - 'ResNetAudio-22-08-03T23-14-28.pt': - 
f'{PARENT_LINK}/sync/ResNetAudio-22-08-03T23-14-28.pt', # 4s - 'ResNetAudio-22-06-24T08-10-33.pt': - f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T08-10-33.pt', # 5s - 'ResNetAudio-22-06-24T17-31-07.pt': - f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T17-31-07.pt', # 6s - 'ResNetAudio-22-06-24T23-57-11.pt': - f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T23-57-11.pt', # 7s - 'ResNetAudio-22-06-25T04-35-42.pt': - f'{PARENT_LINK}/sync/ResNetAudio-22-06-25T04-35-42.pt', # 8s -} - - -def check_if_file_exists_else_download(path, fname2link=FNAME2LINK, chunk_size=1024): - '''Checks if file exists, if not downloads it from the link to the path''' - path = Path(path) - if not path.exists(): - path.parent.mkdir(exist_ok=True, parents=True) - link = fname2link.get(path.name, None) - if link is None: - raise ValueError(f'Cant find the checkpoint file: {path}.', - f'Please download it manually and ensure the path exists.') - with requests.get(fname2link[path.name], stream=True) as r: - total_size = int(r.headers.get('content-length', 0)) - with tqdm(total=total_size, unit='B', unit_scale=True) as pbar: - with open(path, 'wb') as f: - for data in r.iter_content(chunk_size=chunk_size): - if data: - f.write(data) - pbar.update(chunk_size) - - -def get_md5sum(path): - hash_md5 = md5() - with open(path, 'rb') as f: - for chunk in iter(lambda: f.read(4096 * 8), b''): - hash_md5.update(chunk) - md5sum = hash_md5.hexdigest() - return md5sum diff --git a/mmaudio_x/ext/synchformer/video_model_builder.py b/mmaudio_x/ext/synchformer/video_model_builder.py deleted file mode 100644 index 3defae4d07806086fd654906fab3d9f64ba4544f..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/video_model_builder.py +++ /dev/null @@ -1,277 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
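# A small usage sketch for the helpers above: after fetching a checkpoint with
# check_if_file_exists_else_download, the file can be verified by hashing it in chunks,
# mirroring get_md5sum. The function name verify_md5, the path and the expected hash
# below are placeholders, not real values from this repository.
from hashlib import md5
from pathlib import Path

def verify_md5(path: str, expected_hex: str, chunk_size: int = 4096 * 8) -> bool:
    digest = md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex

ckpt = Path('./ext_weights/some_checkpoint.pt')      # placeholder path
if ckpt.exists():
    print(verify_md5(str(ckpt), '0' * 32))           # placeholder hash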
-# Copyright 2020 Ross Wightman -# Modified Model definition - -from collections import OrderedDict -from functools import partial - -import torch -import torch.nn as nn -from timm.layers import trunc_normal_ - -from mmaudio.ext.synchformer import vit_helper - - -class VisionTransformer(nn.Module): - """ Vision Transformer with support for patch or hybrid CNN input stage """ - - def __init__(self, cfg): - super().__init__() - self.img_size = cfg.DATA.TRAIN_CROP_SIZE - self.patch_size = cfg.VIT.PATCH_SIZE - self.in_chans = cfg.VIT.CHANNELS - if cfg.TRAIN.DATASET == "Epickitchens": - self.num_classes = [97, 300] - else: - self.num_classes = cfg.MODEL.NUM_CLASSES - self.embed_dim = cfg.VIT.EMBED_DIM - self.depth = cfg.VIT.DEPTH - self.num_heads = cfg.VIT.NUM_HEADS - self.mlp_ratio = cfg.VIT.MLP_RATIO - self.qkv_bias = cfg.VIT.QKV_BIAS - self.drop_rate = cfg.VIT.DROP - self.drop_path_rate = cfg.VIT.DROP_PATH - self.head_dropout = cfg.VIT.HEAD_DROPOUT - self.video_input = cfg.VIT.VIDEO_INPUT - self.temporal_resolution = cfg.VIT.TEMPORAL_RESOLUTION - self.use_mlp = cfg.VIT.USE_MLP - self.num_features = self.embed_dim - norm_layer = partial(nn.LayerNorm, eps=1e-6) - self.attn_drop_rate = cfg.VIT.ATTN_DROPOUT - self.head_act = cfg.VIT.HEAD_ACT - self.cfg = cfg - - # Patch Embedding - self.patch_embed = vit_helper.PatchEmbed(img_size=224, - patch_size=self.patch_size, - in_chans=self.in_chans, - embed_dim=self.embed_dim) - - # 3D Patch Embedding - self.patch_embed_3d = vit_helper.PatchEmbed3D(img_size=self.img_size, - temporal_resolution=self.temporal_resolution, - patch_size=self.patch_size, - in_chans=self.in_chans, - embed_dim=self.embed_dim, - z_block_size=self.cfg.VIT.PATCH_SIZE_TEMP) - self.patch_embed_3d.proj.weight.data = torch.zeros_like( - self.patch_embed_3d.proj.weight.data) - - # Number of patches - if self.video_input: - num_patches = self.patch_embed.num_patches * self.temporal_resolution - else: - num_patches = self.patch_embed.num_patches - self.num_patches = num_patches - - # CLS token - self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) - trunc_normal_(self.cls_token, std=.02) - - # Positional embedding - self.pos_embed = nn.Parameter( - torch.zeros(1, self.patch_embed.num_patches + 1, self.embed_dim)) - self.pos_drop = nn.Dropout(p=cfg.VIT.POS_DROPOUT) - trunc_normal_(self.pos_embed, std=.02) - - if self.cfg.VIT.POS_EMBED == "joint": - self.st_embed = nn.Parameter(torch.zeros(1, num_patches + 1, self.embed_dim)) - trunc_normal_(self.st_embed, std=.02) - elif self.cfg.VIT.POS_EMBED == "separate": - self.temp_embed = nn.Parameter(torch.zeros(1, self.temporal_resolution, self.embed_dim)) - - # Layer Blocks - dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate, self.depth)] - if self.cfg.VIT.ATTN_LAYER == "divided": - self.blocks = nn.ModuleList([ - vit_helper.DividedSpaceTimeBlock( - attn_type=cfg.VIT.ATTN_LAYER, - dim=self.embed_dim, - num_heads=self.num_heads, - mlp_ratio=self.mlp_ratio, - qkv_bias=self.qkv_bias, - drop=self.drop_rate, - attn_drop=self.attn_drop_rate, - drop_path=dpr[i], - norm_layer=norm_layer, - ) for i in range(self.depth) - ]) - else: - self.blocks = nn.ModuleList([ - vit_helper.Block(attn_type=cfg.VIT.ATTN_LAYER, - dim=self.embed_dim, - num_heads=self.num_heads, - mlp_ratio=self.mlp_ratio, - qkv_bias=self.qkv_bias, - drop=self.drop_rate, - attn_drop=self.attn_drop_rate, - drop_path=dpr[i], - norm_layer=norm_layer, - use_original_code=self.cfg.VIT.USE_ORIGINAL_TRAJ_ATTN_CODE) - for i in range(self.depth) - ]) - self.norm = 
norm_layer(self.embed_dim) - - # MLP head - if self.use_mlp: - hidden_dim = self.embed_dim - if self.head_act == 'tanh': - # logging.info("Using TanH activation in MLP") - act = nn.Tanh() - elif self.head_act == 'gelu': - # logging.info("Using GELU activation in MLP") - act = nn.GELU() - else: - # logging.info("Using ReLU activation in MLP") - act = nn.ReLU() - self.pre_logits = nn.Sequential( - OrderedDict([ - ('fc', nn.Linear(self.embed_dim, hidden_dim)), - ('act', act), - ])) - else: - self.pre_logits = nn.Identity() - - # Classifier Head - self.head_drop = nn.Dropout(p=self.head_dropout) - if isinstance(self.num_classes, (list, )) and len(self.num_classes) > 1: - for a, i in enumerate(range(len(self.num_classes))): - setattr(self, "head%d" % a, nn.Linear(self.embed_dim, self.num_classes[i])) - else: - self.head = nn.Linear(self.embed_dim, - self.num_classes) if self.num_classes > 0 else nn.Identity() - - # Initialize weights - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - if self.cfg.VIT.POS_EMBED == "joint": - return {'pos_embed', 'cls_token', 'st_embed'} - else: - return {'pos_embed', 'cls_token', 'temp_embed'} - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = (nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()) - - def forward_features(self, x): - # if self.video_input: - # x = x[0] - B = x.shape[0] - - # Tokenize input - # if self.cfg.VIT.PATCH_SIZE_TEMP > 1: - # for simplicity of mapping between content dimensions (input x) and token dims (after patching) - # we use the same trick as for AST (see modeling_ast.ASTModel.forward for the details): - - # apply patching on input - x = self.patch_embed_3d(x) - tok_mask = None - - # else: - # tok_mask = None - # # 2D tokenization - # if self.video_input: - # x = x.permute(0, 2, 1, 3, 4) - # (B, T, C, H, W) = x.shape - # x = x.reshape(B * T, C, H, W) - - # x = self.patch_embed(x) - - # if self.video_input: - # (B2, T2, D2) = x.shape - # x = x.reshape(B, T * T2, D2) - - # Append CLS token - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - # if tok_mask is not None: - # # prepend 1(=keep) to the mask to account for the CLS token as well - # tok_mask = torch.cat((torch.ones_like(tok_mask[:, [0]]), tok_mask), dim=1) - - # Interpolate positinoal embeddings - # if self.cfg.DATA.TRAIN_CROP_SIZE != 224: - # pos_embed = self.pos_embed - # N = pos_embed.shape[1] - 1 - # npatch = int((x.size(1) - 1) / self.temporal_resolution) - # class_emb = pos_embed[:, 0] - # pos_embed = pos_embed[:, 1:] - # dim = x.shape[-1] - # pos_embed = torch.nn.functional.interpolate( - # pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2), - # scale_factor=math.sqrt(npatch / N), - # mode='bicubic', - # ) - # pos_embed = pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) - # new_pos_embed = torch.cat((class_emb.unsqueeze(0), pos_embed), dim=1) - # else: - new_pos_embed = self.pos_embed - npatch = self.patch_embed.num_patches - - # Add positional embeddings to input - if self.video_input: - if self.cfg.VIT.POS_EMBED == "separate": - cls_embed = self.pos_embed[:, 
0, :].unsqueeze(1) - tile_pos_embed = new_pos_embed[:, 1:, :].repeat(1, self.temporal_resolution, 1) - tile_temporal_embed = self.temp_embed.repeat_interleave(npatch, 1) - total_pos_embed = tile_pos_embed + tile_temporal_embed - total_pos_embed = torch.cat([cls_embed, total_pos_embed], dim=1) - x = x + total_pos_embed - elif self.cfg.VIT.POS_EMBED == "joint": - x = x + self.st_embed - else: - # image input - x = x + new_pos_embed - - # Apply positional dropout - x = self.pos_drop(x) - - # Encoding using transformer layers - for i, blk in enumerate(self.blocks): - x = blk(x, - seq_len=npatch, - num_frames=self.temporal_resolution, - approx=self.cfg.VIT.APPROX_ATTN_TYPE, - num_landmarks=self.cfg.VIT.APPROX_ATTN_DIM, - tok_mask=tok_mask) - - ### v-iashin: I moved it to the forward pass - # x = self.norm(x)[:, 0] - # x = self.pre_logits(x) - ### - return x, tok_mask - - # def forward(self, x): - # x = self.forward_features(x) - # ### v-iashin: here. This should leave the same forward output as before - # x = self.norm(x)[:, 0] - # x = self.pre_logits(x) - # ### - # x = self.head_drop(x) - # if isinstance(self.num_classes, (list, )) and len(self.num_classes) > 1: - # output = [] - # for head in range(len(self.num_classes)): - # x_out = getattr(self, "head%d" % head)(x) - # if not self.training: - # x_out = torch.nn.functional.softmax(x_out, dim=-1) - # output.append(x_out) - # return output - # else: - # x = self.head(x) - # if not self.training: - # x = torch.nn.functional.softmax(x, dim=-1) - # return x diff --git a/mmaudio_x/ext/synchformer/vit_helper.py b/mmaudio_x/ext/synchformer/vit_helper.py deleted file mode 100644 index 6af730a135bf49240ec439c81c9ad0aa5c9a505e..0000000000000000000000000000000000000000 --- a/mmaudio_x/ext/synchformer/vit_helper.py +++ /dev/null @@ -1,399 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
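# A toy sketch of the "separate" positional-embedding path in forward_features above:
# the per-patch spatial embedding is tiled over time, the per-frame temporal embedding
# is repeated over patches, and the two are summed so every space-time token receives a
# combined embedding. Sizes are shrunk for readability (the real ones are 196 patches,
# 8 frames, 768 dims).
import torch

npatch, t_res, D = 4, 3, 8
pos_embed = torch.randn(1, npatch, D)                      # spatial: one entry per patch
temp_embed = torch.randn(1, t_res, D)                      # temporal: one entry per frame

tile_pos = pos_embed.repeat(1, t_res, 1)                   # (1, npatch * t_res, D)
tile_temp = temp_embed.repeat_interleave(npatch, dim=1)    # (1, npatch * t_res, D)
total = tile_pos + tile_temp                               # combined space-time embedding
print(total.shape)                                         # torch.Size([1, 12, 8])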
-# Copyright 2020 Ross Wightman -# Modified Model definition -"""Video models.""" - -import math - -import torch -import torch.nn as nn -from einops import rearrange, repeat -from timm.layers import to_2tuple -from torch import einsum -from torch.nn import functional as F - -default_cfgs = { - 'vit_1k': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth', - 'vit_1k_large': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_224-4ee7a4dc.pth', -} - - -def qkv_attn(q, k, v, tok_mask: torch.Tensor = None): - sim = einsum('b i d, b j d -> b i j', q, k) - # apply masking if provided, tok_mask is (B*S*H, N): 1s - keep; sim is (B*S*H, H, N, N) - if tok_mask is not None: - BSH, N = tok_mask.shape - sim = sim.masked_fill(tok_mask.view(BSH, 1, N) == 0, - float('-inf')) # 1 - broadcasts across N - attn = sim.softmax(dim=-1) - out = einsum('b i j, b j d -> b i d', attn, v) - return out - - -class DividedAttention(nn.Module): - - def __init__(self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = head_dim**-0.5 - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.proj = nn.Linear(dim, dim) - - # init to zeros - self.qkv.weight.data.fill_(0) - self.qkv.bias.data.fill_(0) - self.proj.weight.data.fill_(1) - self.proj.bias.data.fill_(0) - - self.attn_drop = nn.Dropout(attn_drop) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x, einops_from, einops_to, tok_mask: torch.Tensor = None, **einops_dims): - # num of heads variable - h = self.num_heads - - # project x to q, k, v vaalues - q, k, v = self.qkv(x).chunk(3, dim=-1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - if tok_mask is not None: - # replicate token mask across heads (b, n) -> (b, h, n) -> (b*h, n) -- same as qkv but w/o d - assert len(tok_mask.shape) == 2 - tok_mask = tok_mask.unsqueeze(1).expand(-1, h, -1).reshape(-1, tok_mask.shape[1]) - - # Scale q - q *= self.scale - - # Take out cls_q, cls_k, cls_v - (cls_q, q_), (cls_k, k_), (cls_v, v_) = map(lambda t: (t[:, 0:1], t[:, 1:]), (q, k, v)) - # the same for masking - if tok_mask is not None: - cls_mask, mask_ = tok_mask[:, 0:1], tok_mask[:, 1:] - else: - cls_mask, mask_ = None, None - - # let CLS token attend to key / values of all patches across time and space - cls_out = qkv_attn(cls_q, k, v, tok_mask=tok_mask) - - # rearrange across time or space - q_, k_, v_ = map(lambda t: rearrange(t, f'{einops_from} -> {einops_to}', **einops_dims), - (q_, k_, v_)) - - # expand CLS token keys and values across time or space and concat - r = q_.shape[0] // cls_k.shape[0] - cls_k, cls_v = map(lambda t: repeat(t, 'b () d -> (b r) () d', r=r), (cls_k, cls_v)) - - k_ = torch.cat((cls_k, k_), dim=1) - v_ = torch.cat((cls_v, v_), dim=1) - - # the same for masking (if provided) - if tok_mask is not None: - # since mask does not have the latent dim (d), we need to remove it from einops dims - mask_ = rearrange(mask_, f'{einops_from} -> {einops_to}'.replace(' d', ''), - **einops_dims) - cls_mask = repeat(cls_mask, 'b () -> (b r) ()', - r=r) # expand cls_mask across time or space - mask_ = torch.cat((cls_mask, mask_), dim=1) - - # attention - out = qkv_attn(q_, k_, v_, tok_mask=mask_) - - # merge back time or space - out = rearrange(out, f'{einops_to} -> {einops_from}', **einops_dims) - - # concat back the cls token - out = torch.cat((cls_out, 
out), dim=1) - - # merge back the heads - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - - ## to out - x = self.proj(out) - x = self.proj_drop(x) - return x - - -class DividedSpaceTimeBlock(nn.Module): - - def __init__(self, - dim=768, - num_heads=12, - attn_type='divided', - mlp_ratio=4., - qkv_bias=False, - drop=0., - attn_drop=0., - drop_path=0., - act_layer=nn.GELU, - norm_layer=nn.LayerNorm): - super().__init__() - - self.einops_from_space = 'b (f n) d' - self.einops_to_space = '(b f) n d' - self.einops_from_time = 'b (f n) d' - self.einops_to_time = '(b n) f d' - - self.norm1 = norm_layer(dim) - - self.attn = DividedAttention(dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - attn_drop=attn_drop, - proj_drop=drop) - - self.timeattn = DividedAttention(dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - attn_drop=attn_drop, - proj_drop=drop) - - # self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.drop_path = nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, - hidden_features=mlp_hidden_dim, - act_layer=act_layer, - drop=drop) - self.norm3 = norm_layer(dim) - - def forward(self, - x, - seq_len=196, - num_frames=8, - approx='none', - num_landmarks=128, - tok_mask: torch.Tensor = None): - time_output = self.timeattn(self.norm3(x), - self.einops_from_time, - self.einops_to_time, - n=seq_len, - tok_mask=tok_mask) - time_residual = x + time_output - - space_output = self.attn(self.norm1(time_residual), - self.einops_from_space, - self.einops_to_space, - f=num_frames, - tok_mask=tok_mask) - space_residual = time_residual + self.drop_path(space_output) - - x = space_residual - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class Mlp(nn.Module): - - def __init__(self, - in_features, - hidden_features=None, - out_features=None, - act_layer=nn.GELU, - drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = img_size if type(img_size) is tuple else to_2tuple(img_size) - patch_size = img_size if type(patch_size) is tuple else to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, C, H, W = x.shape - x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -class PatchEmbed3D(nn.Module): - """ Image to Patch Embedding """ - - def __init__(self, - img_size=224, - temporal_resolution=4, - in_chans=3, - patch_size=16, - z_block_size=2, - embed_dim=768, - flatten=True): - super().__init__() - self.height = (img_size // patch_size) - self.width = (img_size // patch_size) - ### v-iashin: these two are incorrect - # self.frames = (temporal_resolution // z_block_size) - # self.num_patches = self.height * self.width * self.frames - self.z_block_size = z_block_size - ### - self.proj = 
nn.Conv3d(in_chans, - embed_dim, - kernel_size=(z_block_size, patch_size, patch_size), - stride=(z_block_size, patch_size, patch_size)) - self.flatten = flatten - - def forward(self, x): - B, C, T, H, W = x.shape - x = self.proj(x) - if self.flatten: - x = x.flatten(2).transpose(1, 2) - return x - - -class HeadMLP(nn.Module): - - def __init__(self, n_input, n_classes, n_hidden=512, p=0.1): - super(HeadMLP, self).__init__() - self.n_input = n_input - self.n_classes = n_classes - self.n_hidden = n_hidden - if n_hidden is None: - # use linear classifier - self.block_forward = nn.Sequential(nn.Dropout(p=p), - nn.Linear(n_input, n_classes, bias=True)) - else: - # use simple MLP classifier - self.block_forward = nn.Sequential(nn.Dropout(p=p), - nn.Linear(n_input, n_hidden, bias=True), - nn.BatchNorm1d(n_hidden), nn.ReLU(inplace=True), - nn.Dropout(p=p), - nn.Linear(n_hidden, n_classes, bias=True)) - print(f"Dropout-NLP: {p}") - - def forward(self, x): - return self.block_forward(x) - - -def _conv_filter(state_dict, patch_size=16): - """ convert patch embedding weight from manual patchify + linear proj to conv""" - out_dict = {} - for k, v in state_dict.items(): - if 'patch_embed.proj.weight' in k: - v = v.reshape((v.shape[0], 3, patch_size, patch_size)) - out_dict[k] = v - return out_dict - - -def adapt_input_conv(in_chans, conv_weight, agg='sum'): - conv_type = conv_weight.dtype - conv_weight = conv_weight.float() - O, I, J, K = conv_weight.shape - if in_chans == 1: - if I > 3: - assert conv_weight.shape[1] % 3 == 0 - # For models with space2depth stems - conv_weight = conv_weight.reshape(O, I // 3, 3, J, K) - conv_weight = conv_weight.sum(dim=2, keepdim=False) - else: - if agg == 'sum': - print("Summing conv1 weights") - conv_weight = conv_weight.sum(dim=1, keepdim=True) - else: - print("Averaging conv1 weights") - conv_weight = conv_weight.mean(dim=1, keepdim=True) - elif in_chans != 3: - if I != 3: - raise NotImplementedError('Weight format not supported by conversion.') - else: - if agg == 'sum': - print("Summing conv1 weights") - repeat = int(math.ceil(in_chans / 3)) - conv_weight = conv_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :] - conv_weight *= (3 / float(in_chans)) - else: - print("Averaging conv1 weights") - conv_weight = conv_weight.mean(dim=1, keepdim=True) - conv_weight = conv_weight.repeat(1, in_chans, 1, 1) - conv_weight = conv_weight.to(conv_type) - return conv_weight - - -def load_pretrained(model, - cfg=None, - num_classes=1000, - in_chans=3, - filter_fn=None, - strict=True, - progress=False): - # Load state dict - assert (f"{cfg.VIT.PRETRAINED_WEIGHTS} not in [vit_1k, vit_1k_large]") - state_dict = torch.hub.load_state_dict_from_url(url=default_cfgs[cfg.VIT.PRETRAINED_WEIGHTS]) - - if filter_fn is not None: - state_dict = filter_fn(state_dict) - - input_convs = 'patch_embed.proj' - if input_convs is not None and in_chans != 3: - if isinstance(input_convs, str): - input_convs = (input_convs, ) - for input_conv_name in input_convs: - weight_name = input_conv_name + '.weight' - try: - state_dict[weight_name] = adapt_input_conv(in_chans, - state_dict[weight_name], - agg='avg') - print( - f'Converted input conv {input_conv_name} pretrained weights from 3 to {in_chans} channel(s)' - ) - except NotImplementedError as e: - del state_dict[weight_name] - strict = False - print( - f'Unable to convert pretrained {input_conv_name} weights, using random init for this layer.' 
- ) - - classifier_name = 'head' - label_offset = cfg.get('label_offset', 0) - pretrain_classes = 1000 - if num_classes != pretrain_classes: - # completely discard fully connected if model num_classes doesn't match pretrained weights - del state_dict[classifier_name + '.weight'] - del state_dict[classifier_name + '.bias'] - strict = False - elif label_offset > 0: - # special case for pretrained weights with an extra background class in pretrained weights - classifier_weight = state_dict[classifier_name + '.weight'] - state_dict[classifier_name + '.weight'] = classifier_weight[label_offset:] - classifier_bias = state_dict[classifier_name + '.bias'] - state_dict[classifier_name + '.bias'] = classifier_bias[label_offset:] - - loaded_state = state_dict - self_state = model.state_dict() - all_names = set(self_state.keys()) - saved_names = set([]) - for name, param in loaded_state.items(): - param = param - if 'module.' in name: - name = name.replace('module.', '') - if name in self_state.keys() and param.shape == self_state[name].shape: - saved_names.add(name) - self_state[name].copy_(param) - else: - print(f"didnt load: {name} of shape: {param.shape}") - print("Missing Keys:") - print(all_names - saved_names) diff --git a/mmaudio_x/model/__init__.py b/mmaudio_x/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/model/embeddings.py b/mmaudio_x/model/embeddings.py deleted file mode 100644 index d447a98f941f1231d1b1dac716db3047a6a8eb88..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/embeddings.py +++ /dev/null @@ -1,48 +0,0 @@ -import torch -import torch.nn as nn - -# https://github.com/facebookresearch/DiT - - -class TimestepEmbedder(nn.Module): - """ - Embeds scalar timesteps into vector representations. - """ - - def __init__(self, dim, frequency_embedding_size, max_period): - super().__init__() - self.mlp = nn.Sequential( - nn.Linear(frequency_embedding_size, dim), - nn.SiLU(), - nn.Linear(dim, dim), - ) - self.dim = dim - self.max_period = max_period - assert dim % 2 == 0, 'dim must be even.' - - with torch.autocast('cuda', enabled=False): - self.freqs = ( - 1.0 / (10000**(torch.arange(0, frequency_embedding_size, 2, dtype=torch.float32) / - frequency_embedding_size))) - freq_scale = 10000 / max_period - self.freqs = nn.Parameter(freq_scale * self.freqs) - - def timestep_embedding(self, t): - """ - Create sinusoidal timestep embeddings. - :param t: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an (N, D) Tensor of positional embeddings. 
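# A self-contained sketch of the sinusoidal mapping that TimestepEmbedder's
# timestep_embedding (above) describes, without the learned MLP or the max_period
# rescaling: each scalar timestep t is mapped to [cos(t * f_i), sin(t * f_i)] over a
# bank of geometrically spaced frequencies f_i. The function name sinusoidal_embedding
# is illustrative only.
import torch

def sinusoidal_embedding(t: torch.Tensor, dim: int = 256,
                         max_period: float = 10000.0) -> torch.Tensor:
    assert dim % 2 == 0, 'dim must be even'
    freqs = 1.0 / (max_period ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    args = t[:, None].float() * freqs[None]                          # (N, dim // 2)
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)     # (N, dim)

emb = sinusoidal_embedding(torch.tensor([0.0, 0.5, 1.0]))
print(emb.shape)                                                     # torch.Size([3, 256])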
- """ - # https://github.com/openai/glide-text2im/blob/main/glide_text2im/nn.py - - args = t[:, None].float() * self.freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - return embedding - - def forward(self, t): - t_freq = self.timestep_embedding(t).to(t.dtype) - t_emb = self.mlp(t_freq) - return t_emb diff --git a/mmaudio_x/model/flow_matching.py b/mmaudio_x/model/flow_matching.py deleted file mode 100644 index a04510ab888c0c3c3398360f97b8b7e3c55998ad..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/flow_matching.py +++ /dev/null @@ -1,88 +0,0 @@ -import logging -from typing import Callable, Iterable, Optional - -import torch -from torchdiffeq import odeint - -# from torchcfm.conditional_flow_matching import ExactOptimalTransportConditionalFlowMatcher - -log = logging.getLogger() - - -# Partially from https://github.com/gle-bellier/flow-matching -class FlowMatching: - - def __init__(self, min_sigma: float = 0.0, inference_mode='euler', num_steps: int = 25): - # inference_mode: 'euler' or 'adaptive' - # num_steps: number of steps in the euler inference mode - super().__init__() - self.min_sigma = min_sigma - self.inference_mode = inference_mode - self.num_steps = num_steps - - # self.fm = ExactOptimalTransportConditionalFlowMatcher(sigma=min_sigma) - - assert self.inference_mode in ['euler', 'adaptive'] - if self.inference_mode == 'adaptive' and num_steps > 0: - log.info('The number of steps is ignored in adaptive inference mode ') - - def get_conditional_flow(self, x0: torch.Tensor, x1: torch.Tensor, - t: torch.Tensor) -> torch.Tensor: - # which is psi_t(x), eq 22 in flow matching for generative models - t = t[:, None, None].expand_as(x0) - return (1 - (1 - self.min_sigma) * t) * x0 + t * x1 - - def loss(self, predicted_v: torch.Tensor, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor: - # return the mean error without reducing the batch dimension - reduce_dim = list(range(1, len(predicted_v.shape))) - target_v = x1 - (1 - self.min_sigma) * x0 - return (predicted_v - target_v).pow(2).mean(dim=reduce_dim) - - def get_x0_xt_c( - self, - x1: torch.Tensor, - t: torch.Tensor, - Cs: list[torch.Tensor], - generator: Optional[torch.Generator] = None - ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: - # x0 = torch.randn_like(x1, generator=generator) - x0 = torch.empty_like(x1).normal_(generator=generator) - - # find mini-batch optimal transport - # x0, x1, _, Cs = self.fm.ot_sampler.sample_plan_with_labels(x0, x1, None, Cs, replace=True) - - xt = self.get_conditional_flow(x0, x1, t) - return x0, x1, xt, Cs - - def to_prior(self, fn: Callable, x1: torch.Tensor) -> torch.Tensor: - return self.run_t0_to_t1(fn, x1, 1, 0) - - def to_data(self, fn: Callable, x0: torch.Tensor) -> torch.Tensor: - return self.run_t0_to_t1(fn, x0, 0, 1) - - def run_t0_to_t1(self, fn: Callable, x0: torch.Tensor, t0: float, t1: float) -> torch.Tensor: - # fn: a function that takes (t, x) and returns the direction x0->x1 - - if self.inference_mode == 'adaptive': - return odeint(fn, x0, torch.tensor([t0, t1], device=x0.device, dtype=x0.dtype)) - elif self.inference_mode == 'euler': - x = x0 - steps = torch.linspace(t0, t1 - self.min_sigma, self.num_steps + 1) - for ti, t in enumerate(steps[:-1]): - flow = fn(t, x) - next_t = steps[ti + 1] - dt = next_t - t - x = x + dt * flow - - # return odeint(fn, - # x0, - # torch.tensor([t0, t1], device=x0.device, dtype=x0.dtype), - # method='rk4', - # options=dict(step_size=(t1 - t0) / self.num_steps))[-1] - # return 
odeint(fn, - # x0, - # torch.tensor([t0, t1], device=x0.device, dtype=x0.dtype), - # method='euler', - # options=dict(step_size=(t1 - t0) / self.num_steps))[-1] - - return x diff --git a/mmaudio_x/model/low_level.py b/mmaudio_x/model/low_level.py deleted file mode 100644 index c8326a8bec99f1be08b92e76fda4b59e777b39d2..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/low_level.py +++ /dev/null @@ -1,95 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - - -class ChannelLastConv1d(nn.Conv1d): - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = x.permute(0, 2, 1) - x = super().forward(x) - x = x.permute(0, 2, 1) - return x - - -# https://github.com/Stability-AI/sd3-ref -class MLP(nn.Module): - - def __init__( - self, - dim: int, - hidden_dim: int, - multiple_of: int = 256, - ): - """ - Initialize the FeedForward module. - - Args: - dim (int): Input dimension. - hidden_dim (int): Hidden dimension of the feedforward layer. - multiple_of (int): Value to ensure hidden dimension is a multiple of this value. - - Attributes: - w1 (ColumnParallelLinear): Linear transformation for the first layer. - w2 (RowParallelLinear): Linear transformation for the second layer. - w3 (ColumnParallelLinear): Linear transformation for the third layer. - - """ - super().__init__() - hidden_dim = int(2 * hidden_dim / 3) - hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) - - self.w1 = nn.Linear(dim, hidden_dim, bias=False) - self.w2 = nn.Linear(hidden_dim, dim, bias=False) - self.w3 = nn.Linear(dim, hidden_dim, bias=False) - - def forward(self, x): - return self.w2(F.silu(self.w1(x)) * self.w3(x)) - - -class ConvMLP(nn.Module): - - def __init__( - self, - dim: int, - hidden_dim: int, - multiple_of: int = 256, - kernel_size: int = 3, - padding: int = 1, - ): - """ - Initialize the FeedForward module. - - Args: - dim (int): Input dimension. - hidden_dim (int): Hidden dimension of the feedforward layer. - multiple_of (int): Value to ensure hidden dimension is a multiple of this value. - - Attributes: - w1 (ColumnParallelLinear): Linear transformation for the first layer. - w2 (RowParallelLinear): Linear transformation for the second layer. - w3 (ColumnParallelLinear): Linear transformation for the third layer. 
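# A minimal rendering of the gated feed-forward pattern used by MLP above (and by
# ConvMLP with 1D convolutions in place of linear layers): a SiLU-activated branch
# gates a second linear branch before the output projection. The class name GatedMLP
# and the sizes are illustrative; the hidden-dimension rounding of the real classes is
# omitted.
import torch
from torch import nn
from torch.nn import functional as F

class GatedMLP(nn.Module):
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)   # gate branch
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)   # output projection
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)   # value branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

y = GatedMLP(dim=64, hidden_dim=256)(torch.randn(2, 10, 64))
print(y.shape)                                             # torch.Size([2, 10, 64])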
- - """ - super().__init__() - hidden_dim = int(2 * hidden_dim / 3) - hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) - - self.w1 = ChannelLastConv1d(dim, - hidden_dim, - bias=False, - kernel_size=kernel_size, - padding=padding) - self.w2 = ChannelLastConv1d(hidden_dim, - dim, - bias=False, - kernel_size=kernel_size, - padding=padding) - self.w3 = ChannelLastConv1d(dim, - hidden_dim, - bias=False, - kernel_size=kernel_size, - padding=padding) - - def forward(self, x): - return self.w2(F.silu(self.w1(x)) * self.w3(x)) diff --git a/mmaudio_x/model/networks.py b/mmaudio_x/model/networks.py deleted file mode 100644 index e60e309c89d92cec70e7e673a4e842cc6716fae9..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/networks.py +++ /dev/null @@ -1,471 +0,0 @@ -import logging -from dataclasses import dataclass -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmaudio.ext.rotary_embeddings import compute_rope_rotations -from mmaudio.model.embeddings import TimestepEmbedder -from mmaudio.model.low_level import MLP, ChannelLastConv1d, ConvMLP -from mmaudio.model.transformer_layers import (FinalBlock, JointBlock, MMDitSingleBlock) - -log = logging.getLogger() - - -@dataclass -class PreprocessedConditions: - clip_f: torch.Tensor - sync_f: torch.Tensor - text_f: torch.Tensor - clip_f_c: torch.Tensor - text_f_c: torch.Tensor - - -# Partially from https://github.com/facebookresearch/DiT -class MMAudio(nn.Module): - - def __init__(self, - *, - latent_dim: int, - clip_dim: int, - sync_dim: int, - text_dim: int, - hidden_dim: int, - depth: int, - fused_depth: int, - num_heads: int, - mlp_ratio: float = 4.0, - latent_seq_len: int, - clip_seq_len: int, - sync_seq_len: int, - text_seq_len: int = 77, - latent_mean: Optional[torch.Tensor] = None, - latent_std: Optional[torch.Tensor] = None, - empty_string_feat: Optional[torch.Tensor] = None, - v2: bool = False) -> None: - super().__init__() - - self.v2 = v2 - self.latent_dim = latent_dim - self._latent_seq_len = latent_seq_len - self._clip_seq_len = clip_seq_len - self._sync_seq_len = sync_seq_len - self._text_seq_len = text_seq_len - self.hidden_dim = hidden_dim - self.num_heads = num_heads - - if v2: - self.audio_input_proj = nn.Sequential( - ChannelLastConv1d(latent_dim, hidden_dim, kernel_size=7, padding=3), - nn.SiLU(), - ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=7, padding=3), - ) - - self.clip_input_proj = nn.Sequential( - nn.Linear(clip_dim, hidden_dim), - nn.SiLU(), - ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), - ) - - self.sync_input_proj = nn.Sequential( - ChannelLastConv1d(sync_dim, hidden_dim, kernel_size=7, padding=3), - nn.SiLU(), - ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), - ) - - self.text_input_proj = nn.Sequential( - nn.Linear(text_dim, hidden_dim), - nn.SiLU(), - MLP(hidden_dim, hidden_dim * 4), - ) - else: - self.audio_input_proj = nn.Sequential( - ChannelLastConv1d(latent_dim, hidden_dim, kernel_size=7, padding=3), - nn.SELU(), - ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=7, padding=3), - ) - - self.clip_input_proj = nn.Sequential( - nn.Linear(clip_dim, hidden_dim), - ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), - ) - - self.sync_input_proj = nn.Sequential( - ChannelLastConv1d(sync_dim, hidden_dim, kernel_size=7, padding=3), - nn.SELU(), - ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), - ) - - self.text_input_proj = nn.Sequential( - nn.Linear(text_dim, 
hidden_dim), - MLP(hidden_dim, hidden_dim * 4), - ) - - self.clip_cond_proj = nn.Linear(hidden_dim, hidden_dim) - self.text_cond_proj = nn.Linear(hidden_dim, hidden_dim) - self.global_cond_mlp = MLP(hidden_dim, hidden_dim * 4) - # each synchformer output segment has 8 feature frames - self.sync_pos_emb = nn.Parameter(torch.zeros((1, 1, 8, sync_dim))) - - self.final_layer = FinalBlock(hidden_dim, latent_dim) - - if v2: - self.t_embed = TimestepEmbedder(hidden_dim, - frequency_embedding_size=hidden_dim, - max_period=1) - else: - self.t_embed = TimestepEmbedder(hidden_dim, - frequency_embedding_size=256, - max_period=10000) - self.joint_blocks = nn.ModuleList([ - JointBlock(hidden_dim, - num_heads, - mlp_ratio=mlp_ratio, - pre_only=(i == depth - fused_depth - 1)) for i in range(depth - fused_depth) - ]) - - self.fused_blocks = nn.ModuleList([ - MMDitSingleBlock(hidden_dim, num_heads, mlp_ratio=mlp_ratio, kernel_size=3, padding=1) - for i in range(fused_depth) - ]) - - if latent_mean is None: - # these values are not meant to be used - # if you don't provide mean/std here, we should load them later from a checkpoint - assert latent_std is None - latent_mean = torch.ones(latent_dim).view(1, 1, -1).fill_(float('nan')) - latent_std = torch.ones(latent_dim).view(1, 1, -1).fill_(float('nan')) - else: - assert latent_std is not None - assert latent_mean.numel() == latent_dim, f'{latent_mean.numel()=} != {latent_dim=}' - if empty_string_feat is None: - empty_string_feat = torch.zeros((text_seq_len, text_dim)) - self.latent_mean = nn.Parameter(latent_mean.view(1, 1, -1), requires_grad=False) - self.latent_std = nn.Parameter(latent_std.view(1, 1, -1), requires_grad=False) - - self.empty_string_feat = nn.Parameter(empty_string_feat, requires_grad=False) - self.empty_clip_feat = nn.Parameter(torch.zeros(1, clip_dim), requires_grad=True) - self.empty_sync_feat = nn.Parameter(torch.zeros(1, sync_dim), requires_grad=True) - - self.initialize_weights() - self.initialize_rotations() - - def initialize_rotations(self): - base_freq = 1.0 - latent_rot = compute_rope_rotations(self._latent_seq_len, - self.hidden_dim // self.num_heads, - 10000, - freq_scaling=base_freq, - device=self.device) - clip_rot = compute_rope_rotations(self._clip_seq_len, - self.hidden_dim // self.num_heads, - 10000, - freq_scaling=base_freq * self._latent_seq_len / - self._clip_seq_len, - device=self.device) - - # self.latent_rot = latent_rot.to(self.device) - # self.clip_rot = clip_rot.to(self.device) - self.register_buffer('latent_rot', latent_rot) - self.register_buffer('clip_rot', clip_rot) - - def update_seq_lengths(self, latent_seq_len: int, clip_seq_len: int, sync_seq_len: int) -> None: - self._latent_seq_len = latent_seq_len - self._clip_seq_len = clip_seq_len - self._sync_seq_len = sync_seq_len - self.initialize_rotations() - - def initialize_weights(self): - - def _basic_init(module): - if isinstance(module, nn.Linear): - torch.nn.init.xavier_uniform_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - - self.apply(_basic_init) - - # Initialize timestep embedding MLP: - nn.init.normal_(self.t_embed.mlp[0].weight, std=0.02) - nn.init.normal_(self.t_embed.mlp[2].weight, std=0.02) - - # Zero-out adaLN modulation layers in DiT blocks: - for block in self.joint_blocks: - nn.init.constant_(block.latent_block.adaLN_modulation[-1].weight, 0) - nn.init.constant_(block.latent_block.adaLN_modulation[-1].bias, 0) - nn.init.constant_(block.clip_block.adaLN_modulation[-1].weight, 0) - 
nn.init.constant_(block.clip_block.adaLN_modulation[-1].bias, 0) - nn.init.constant_(block.text_block.adaLN_modulation[-1].weight, 0) - nn.init.constant_(block.text_block.adaLN_modulation[-1].bias, 0) - for block in self.fused_blocks: - nn.init.constant_(block.adaLN_modulation[-1].weight, 0) - nn.init.constant_(block.adaLN_modulation[-1].bias, 0) - - # Zero-out output layers: - nn.init.constant_(self.final_layer.adaLN_modulation[-1].weight, 0) - nn.init.constant_(self.final_layer.adaLN_modulation[-1].bias, 0) - nn.init.constant_(self.final_layer.conv.weight, 0) - nn.init.constant_(self.final_layer.conv.bias, 0) - - # empty string feat shall be initialized by a CLIP encoder - nn.init.constant_(self.sync_pos_emb, 0) - nn.init.constant_(self.empty_clip_feat, 0) - nn.init.constant_(self.empty_sync_feat, 0) - - def normalize(self, x: torch.Tensor) -> torch.Tensor: - # return (x - self.latent_mean) / self.latent_std - return x.sub_(self.latent_mean).div_(self.latent_std) - - def unnormalize(self, x: torch.Tensor) -> torch.Tensor: - # return x * self.latent_std + self.latent_mean - return x.mul_(self.latent_std).add_(self.latent_mean) - - def preprocess_conditions(self, clip_f: torch.Tensor, sync_f: torch.Tensor, - text_f: torch.Tensor) -> PreprocessedConditions: - """ - cache computations that do not depend on the latent/time step - i.e., the features are reused over steps during inference - """ - assert clip_f.shape[1] == self._clip_seq_len, f'{clip_f.shape=} {self._clip_seq_len=}' - assert sync_f.shape[1] == self._sync_seq_len, f'{sync_f.shape=} {self._sync_seq_len=}' - assert text_f.shape[1] == self._text_seq_len, f'{text_f.shape=} {self._text_seq_len=}' - - bs = clip_f.shape[0] - - # B * num_segments (24) * 8 * 768 - num_sync_segments = self._sync_seq_len // 8 - sync_f = sync_f.view(bs, num_sync_segments, 8, -1) + self.sync_pos_emb - sync_f = sync_f.flatten(1, 2) # (B, VN, D) - - # extend vf to match x - clip_f = self.clip_input_proj(clip_f) # (B, VN, D) - sync_f = self.sync_input_proj(sync_f) # (B, VN, D) - text_f = self.text_input_proj(text_f) # (B, VN, D) - - # upsample the sync features to match the audio - sync_f = sync_f.transpose(1, 2) # (B, D, VN) - sync_f = F.interpolate(sync_f, size=self._latent_seq_len, mode='nearest-exact') - sync_f = sync_f.transpose(1, 2) # (B, N, D) - - # get conditional features from the clip side - clip_f_c = self.clip_cond_proj(clip_f.mean(dim=1)) # (B, D) - text_f_c = self.text_cond_proj(text_f.mean(dim=1)) # (B, D) - - return PreprocessedConditions(clip_f=clip_f, - sync_f=sync_f, - text_f=text_f, - clip_f_c=clip_f_c, - text_f_c=text_f_c) - - def predict_flow(self, latent: torch.Tensor, t: torch.Tensor, - conditions: PreprocessedConditions) -> torch.Tensor: - """ - for non-cacheable computations - """ - assert latent.shape[1] == self._latent_seq_len, f'{latent.shape=} {self._latent_seq_len=}' - - clip_f = conditions.clip_f - sync_f = conditions.sync_f - text_f = conditions.text_f - clip_f_c = conditions.clip_f_c - text_f_c = conditions.text_f_c - - latent = self.audio_input_proj(latent) # (B, N, D) - global_c = self.global_cond_mlp(clip_f_c + text_f_c) # (B, D) - - global_c = self.t_embed(t).unsqueeze(1) + global_c.unsqueeze(1) # (B, D) - extended_c = global_c + sync_f - - for block in self.joint_blocks: - latent, clip_f, text_f = block(latent, clip_f, text_f, global_c, extended_c, - self.latent_rot, self.clip_rot) # (B, N, D) - - for block in self.fused_blocks: - latent = block(latent, extended_c, self.latent_rot) - - flow = self.final_layer(latent, 
global_c) # (B, N, out_dim), remove t - return flow - - def forward(self, latent: torch.Tensor, clip_f: torch.Tensor, sync_f: torch.Tensor, - text_f: torch.Tensor, t: torch.Tensor) -> torch.Tensor: - """ - latent: (B, N, C) - vf: (B, T, C_V) - t: (B,) - """ - conditions = self.preprocess_conditions(clip_f, sync_f, text_f) - flow = self.predict_flow(latent, t, conditions) - return flow - - def get_empty_string_sequence(self, bs: int) -> torch.Tensor: - return self.empty_string_feat.unsqueeze(0).expand(bs, -1, -1) - - def get_empty_clip_sequence(self, bs: int) -> torch.Tensor: - return self.empty_clip_feat.unsqueeze(0).expand(bs, self._clip_seq_len, -1) - - def get_empty_sync_sequence(self, bs: int) -> torch.Tensor: - return self.empty_sync_feat.unsqueeze(0).expand(bs, self._sync_seq_len, -1) - - def get_empty_conditions( - self, - bs: int, - *, - negative_text_features: Optional[torch.Tensor] = None) -> PreprocessedConditions: - if negative_text_features is not None: - empty_text = negative_text_features - else: - empty_text = self.get_empty_string_sequence(1) - - empty_clip = self.get_empty_clip_sequence(1) - empty_sync = self.get_empty_sync_sequence(1) - conditions = self.preprocess_conditions(empty_clip, empty_sync, empty_text) - conditions.clip_f = conditions.clip_f.expand(bs, -1, -1) - conditions.sync_f = conditions.sync_f.expand(bs, -1, -1) - conditions.clip_f_c = conditions.clip_f_c.expand(bs, -1) - if negative_text_features is None: - conditions.text_f = conditions.text_f.expand(bs, -1, -1) - conditions.text_f_c = conditions.text_f_c.expand(bs, -1) - - return conditions - - def ode_wrapper(self, t: torch.Tensor, latent: torch.Tensor, conditions: PreprocessedConditions, - empty_conditions: PreprocessedConditions, cfg_strength: float) -> torch.Tensor: - t = t * torch.ones(len(latent), device=latent.device, dtype=latent.dtype) - - if cfg_strength < 1.0: - return self.predict_flow(latent, t, conditions) - else: - return (cfg_strength * self.predict_flow(latent, t, conditions) + - (1 - cfg_strength) * self.predict_flow(latent, t, empty_conditions)) - - def load_weights(self, src_dict) -> None: - if 't_embed.freqs' in src_dict: - del src_dict['t_embed.freqs'] - if 'latent_rot' in src_dict: - del src_dict['latent_rot'] - if 'clip_rot' in src_dict: - del src_dict['clip_rot'] - - self.load_state_dict(src_dict, strict=False) - - @property - def device(self) -> torch.device: - return self.latent_mean.device - - @property - def latent_seq_len(self) -> int: - return self._latent_seq_len - - @property - def clip_seq_len(self) -> int: - return self._clip_seq_len - - @property - def sync_seq_len(self) -> int: - return self._sync_seq_len - - -def small_16k(**kwargs) -> MMAudio: - num_heads = 7 - return MMAudio(latent_dim=20, - clip_dim=1024, - sync_dim=768, - text_dim=1024, - hidden_dim=64 * num_heads, - depth=12, - fused_depth=8, - num_heads=num_heads, - latent_seq_len=250, - clip_seq_len=64, - sync_seq_len=192, - **kwargs) - - -def small_44k(**kwargs) -> MMAudio: - num_heads = 7 - return MMAudio(latent_dim=40, - clip_dim=1024, - sync_dim=768, - text_dim=1024, - hidden_dim=64 * num_heads, - depth=12, - fused_depth=8, - num_heads=num_heads, - latent_seq_len=345, - clip_seq_len=64, - sync_seq_len=192, - **kwargs) - - -def medium_44k(**kwargs) -> MMAudio: - num_heads = 14 - return MMAudio(latent_dim=40, - clip_dim=1024, - sync_dim=768, - text_dim=1024, - hidden_dim=64 * num_heads, - depth=12, - fused_depth=8, - num_heads=num_heads, - latent_seq_len=345, - clip_seq_len=64, - sync_seq_len=192, - 
**kwargs) - - -def large_44k(**kwargs) -> MMAudio: - num_heads = 14 - return MMAudio(latent_dim=40, - clip_dim=1024, - sync_dim=768, - text_dim=1024, - hidden_dim=64 * num_heads, - depth=21, - fused_depth=14, - num_heads=num_heads, - latent_seq_len=345, - clip_seq_len=64, - sync_seq_len=192, - **kwargs) - - -def large_44k_v2(**kwargs) -> MMAudio: - num_heads = 14 - return MMAudio(latent_dim=40, - clip_dim=1024, - sync_dim=768, - text_dim=1024, - hidden_dim=64 * num_heads, - depth=21, - fused_depth=14, - num_heads=num_heads, - latent_seq_len=345, - clip_seq_len=64, - sync_seq_len=192, - v2=True, - **kwargs) - - -def get_my_mmaudio(name: str, **kwargs) -> MMAudio: - if name == 'small_16k': - return small_16k(**kwargs) - if name == 'small_44k': - return small_44k(**kwargs) - if name == 'medium_44k': - return medium_44k(**kwargs) - if name == 'large_44k': - return large_44k(**kwargs) - if name == 'large_44k_v2': - return large_44k_v2(**kwargs) - - raise ValueError(f'Unknown model name: {name}') - - -if __name__ == '__main__': - network = get_my_mmaudio('small_16k') - - # print the number of parameters in terms of millions - num_params = sum(p.numel() for p in network.parameters()) / 1e6 - print(f'Number of parameters: {num_params:.2f}M') diff --git a/mmaudio_x/model/sequence_config.py b/mmaudio_x/model/sequence_config.py deleted file mode 100644 index 14269014dc401b4751d172466813a935fddda6c1..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/sequence_config.py +++ /dev/null @@ -1,58 +0,0 @@ -import dataclasses -import math - - -@dataclasses.dataclass -class SequenceConfig: - # general - duration: float - - # audio - sampling_rate: int - spectrogram_frame_rate: int - latent_downsample_rate: int = 2 - - # visual - clip_frame_rate: int = 8 - sync_frame_rate: int = 25 - sync_num_frames_per_segment: int = 16 - sync_step_size: int = 8 - sync_downsample_rate: int = 2 - - @property - def num_audio_frames(self) -> int: - # we need an integer number of latents - return self.latent_seq_len * self.spectrogram_frame_rate * self.latent_downsample_rate - - @property - def latent_seq_len(self) -> int: - return int( - math.ceil(self.duration * self.sampling_rate / self.spectrogram_frame_rate / - self.latent_downsample_rate)) - - @property - def clip_seq_len(self) -> int: - return int(self.duration * self.clip_frame_rate) - - @property - def sync_seq_len(self) -> int: - num_frames = self.duration * self.sync_frame_rate - num_segments = (num_frames - self.sync_num_frames_per_segment) // self.sync_step_size + 1 - return int(num_segments * self.sync_num_frames_per_segment / self.sync_downsample_rate) - - -CONFIG_16K = SequenceConfig(duration=8.0, sampling_rate=16000, spectrogram_frame_rate=256) -CONFIG_44K = SequenceConfig(duration=8.0, sampling_rate=44100, spectrogram_frame_rate=512) - -if __name__ == '__main__': - assert CONFIG_16K.latent_seq_len == 250 - assert CONFIG_16K.clip_seq_len == 64 - assert CONFIG_16K.sync_seq_len == 192 - assert CONFIG_16K.num_audio_frames == 128000 - - assert CONFIG_44K.latent_seq_len == 345 - assert CONFIG_44K.clip_seq_len == 64 - assert CONFIG_44K.sync_seq_len == 192 - assert CONFIG_44K.num_audio_frames == 353280 - - print('Passed') diff --git a/mmaudio_x/model/transformer_layers.py b/mmaudio_x/model/transformer_layers.py deleted file mode 100644 index 3ca02ec3b6c00b9c39624d97d55a211cdd2e427d..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/transformer_layers.py +++ /dev/null @@ -1,203 +0,0 @@ -from typing import Optional - -import torch -import torch.nn 
as nn -import torch.nn.functional as F -from einops import rearrange -from einops.layers.torch import Rearrange -from torch.nn.attention import SDPBackend, sdpa_kernel - -from mmaudio.ext.rotary_embeddings import apply_rope -from mmaudio.model.low_level import MLP, ChannelLastConv1d, ConvMLP - - -def modulate(x: torch.Tensor, shift: torch.Tensor, scale: torch.Tensor): - return x * (1 + scale) + shift - - -def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): - # training will crash without these contiguous calls and the CUDNN limitation - # I believe this is related to https://github.com/pytorch/pytorch/issues/133974 - # unresolved at the time of writing - q = q.contiguous() - k = k.contiguous() - v = v.contiguous() - out = F.scaled_dot_product_attention(q, k, v) - out = rearrange(out, 'b h n d -> b n (h d)').contiguous() - return out - - -class SelfAttention(nn.Module): - - def __init__(self, dim: int, nheads: int): - super().__init__() - self.dim = dim - self.nheads = nheads - - self.qkv = nn.Linear(dim, dim * 3, bias=True) - self.q_norm = nn.RMSNorm(dim // nheads) - self.k_norm = nn.RMSNorm(dim // nheads) - - self.split_into_heads = Rearrange('b n (h d j) -> b h n d j', - h=nheads, - d=dim // nheads, - j=3) - - def pre_attention( - self, x: torch.Tensor, - rot: Optional[torch.Tensor]) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - # x: batch_size * n_tokens * n_channels - qkv = self.qkv(x) - q, k, v = self.split_into_heads(qkv).chunk(3, dim=-1) - q = q.squeeze(-1) - k = k.squeeze(-1) - v = v.squeeze(-1) - q = self.q_norm(q) - k = self.k_norm(k) - - if rot is not None: - q = apply_rope(q, rot) - k = apply_rope(k, rot) - - return q, k, v - - def forward( - self, - x: torch.Tensor, # batch_size * n_tokens * n_channels - ) -> torch.Tensor: - q, v, k = self.pre_attention(x) - out = attention(q, k, v) - return out - - -class MMDitSingleBlock(nn.Module): - - def __init__(self, - dim: int, - nhead: int, - mlp_ratio: float = 4.0, - pre_only: bool = False, - kernel_size: int = 7, - padding: int = 3): - super().__init__() - self.norm1 = nn.LayerNorm(dim, elementwise_affine=False) - self.attn = SelfAttention(dim, nhead) - - self.pre_only = pre_only - if pre_only: - self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim, bias=True)) - else: - if kernel_size == 1: - self.linear1 = nn.Linear(dim, dim) - else: - self.linear1 = ChannelLastConv1d(dim, dim, kernel_size=kernel_size, padding=padding) - self.norm2 = nn.LayerNorm(dim, elementwise_affine=False) - - if kernel_size == 1: - self.ffn = MLP(dim, int(dim * mlp_ratio)) - else: - self.ffn = ConvMLP(dim, - int(dim * mlp_ratio), - kernel_size=kernel_size, - padding=padding) - - self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim, bias=True)) - - def pre_attention(self, x: torch.Tensor, c: torch.Tensor, rot: Optional[torch.Tensor]): - # x: BS * N * D - # cond: BS * D - modulation = self.adaLN_modulation(c) - if self.pre_only: - (shift_msa, scale_msa) = modulation.chunk(2, dim=-1) - gate_msa = shift_mlp = scale_mlp = gate_mlp = None - else: - (shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, - gate_mlp) = modulation.chunk(6, dim=-1) - - x = modulate(self.norm1(x), shift_msa, scale_msa) - q, k, v = self.attn.pre_attention(x, rot) - return (q, k, v), (gate_msa, shift_mlp, scale_mlp, gate_mlp) - - def post_attention(self, x: torch.Tensor, attn_out: torch.Tensor, c: tuple[torch.Tensor]): - if self.pre_only: - return x - - (gate_msa, shift_mlp, scale_mlp, gate_mlp) = c - x = x + 
self.linear1(attn_out) * gate_msa - r = modulate(self.norm2(x), shift_mlp, scale_mlp) - x = x + self.ffn(r) * gate_mlp - - return x - - def forward(self, x: torch.Tensor, cond: torch.Tensor, - rot: Optional[torch.Tensor]) -> torch.Tensor: - # x: BS * N * D - # cond: BS * D - x_qkv, x_conditions = self.pre_attention(x, cond, rot) - attn_out = attention(*x_qkv) - x = self.post_attention(x, attn_out, x_conditions) - - return x - - -class JointBlock(nn.Module): - - def __init__(self, dim: int, nhead: int, mlp_ratio: float = 4.0, pre_only: bool = False): - super().__init__() - self.pre_only = pre_only - self.latent_block = MMDitSingleBlock(dim, - nhead, - mlp_ratio, - pre_only=False, - kernel_size=3, - padding=1) - self.clip_block = MMDitSingleBlock(dim, - nhead, - mlp_ratio, - pre_only=pre_only, - kernel_size=3, - padding=1) - self.text_block = MMDitSingleBlock(dim, nhead, mlp_ratio, pre_only=pre_only, kernel_size=1) - - def forward(self, latent: torch.Tensor, clip_f: torch.Tensor, text_f: torch.Tensor, - global_c: torch.Tensor, extended_c: torch.Tensor, latent_rot: torch.Tensor, - clip_rot: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: - # latent: BS * N1 * D - # clip_f: BS * N2 * D - # c: BS * (1/N) * D - x_qkv, x_mod = self.latent_block.pre_attention(latent, extended_c, latent_rot) - c_qkv, c_mod = self.clip_block.pre_attention(clip_f, global_c, clip_rot) - t_qkv, t_mod = self.text_block.pre_attention(text_f, global_c, rot=None) - - latent_len = latent.shape[1] - clip_len = clip_f.shape[1] - text_len = text_f.shape[1] - - joint_qkv = [torch.cat([x_qkv[i], c_qkv[i], t_qkv[i]], dim=2) for i in range(3)] - - attn_out = attention(*joint_qkv) - x_attn_out = attn_out[:, :latent_len] - c_attn_out = attn_out[:, latent_len:latent_len + clip_len] - t_attn_out = attn_out[:, latent_len + clip_len:] - - latent = self.latent_block.post_attention(latent, x_attn_out, x_mod) - if not self.pre_only: - clip_f = self.clip_block.post_attention(clip_f, c_attn_out, c_mod) - text_f = self.text_block.post_attention(text_f, t_attn_out, t_mod) - - return latent, clip_f, text_f - - -class FinalBlock(nn.Module): - - def __init__(self, dim, out_dim): - super().__init__() - self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim, bias=True)) - self.norm = nn.LayerNorm(dim, elementwise_affine=False) - self.conv = ChannelLastConv1d(dim, out_dim, kernel_size=7, padding=3) - - def forward(self, latent, c): - shift, scale = self.adaLN_modulation(c).chunk(2, dim=-1) - latent = modulate(self.norm(latent), shift, scale) - latent = self.conv(latent) - return latent diff --git a/mmaudio_x/model/utils/__init__.py b/mmaudio_x/model/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/model/utils/distributions.py b/mmaudio_x/model/utils/distributions.py deleted file mode 100644 index 1d526a5b0b3dd2ae556d806a3397e1cf43c07fb9..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/utils/distributions.py +++ /dev/null @@ -1,46 +0,0 @@ -from typing import Optional - -import numpy as np -import torch - - -class DiagonalGaussianDistribution: - - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = 
torch.zeros_like(self.mean).to(device=self.parameters.device) - - def sample(self, rng: Optional[torch.Generator] = None): - # x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - - r = torch.empty_like(self.mean).normal_(generator=rng) - x = self.mean + self.std * r - - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - - return 0.5 * torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar - else: - return 0.5 * (torch.pow(self.mean - other.mean, 2) / other.var + - self.var / other.var - 1.0 - self.logvar + other.logvar) - - def nll(self, sample, dims=[1, 2, 3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum(logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean diff --git a/mmaudio_x/model/utils/features_utils.py b/mmaudio_x/model/utils/features_utils.py deleted file mode 100644 index 8b5ebcf685d98d9f024ce29df239e93312418bae..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/utils/features_utils.py +++ /dev/null @@ -1,164 +0,0 @@ -from typing import Literal, Optional - -import open_clip -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange -from open_clip import create_model_from_pretrained -from torchvision.transforms import Normalize - -from mmaudio.ext.autoencoder import AutoEncoderModule -from mmaudio.ext.mel_converter import MelConverter -from mmaudio.ext.synchformer import Synchformer -from mmaudio.model.utils.distributions import DiagonalGaussianDistribution - - -def patch_clip(clip_model): - # a hack to make it output last hidden states - # https://github.com/mlfoundations/open_clip/blob/fc5a37b72d705f760ebbc7915b84729816ed471f/src/open_clip/model.py#L269 - def new_encode_text(self, text, normalize: bool = False): - cast_dtype = self.transformer.get_cast_dtype() - - x = self.token_embedding(text).to(cast_dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.to(cast_dtype) - x = self.transformer(x, attn_mask=self.attn_mask) - x = self.ln_final(x) # [batch_size, n_ctx, transformer.width] - return F.normalize(x, dim=-1) if normalize else x - - clip_model.encode_text = new_encode_text.__get__(clip_model) - return clip_model - - -class FeaturesUtils(nn.Module): - - def __init__( - self, - *, - tod_vae_ckpt: Optional[str] = None, - bigvgan_vocoder_ckpt: Optional[str] = None, - synchformer_ckpt: Optional[str] = None, - enable_conditions: bool = True, - mode=Literal['16k', '44k'], - need_vae_encoder: bool = True, - ): - super().__init__() - - if enable_conditions: - self.clip_model = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14-384', - return_transform=False) - self.clip_preprocess = Normalize(mean=[0.48145466, 0.4578275, 0.40821073], - std=[0.26862954, 0.26130258, 0.27577711]) - self.clip_model = patch_clip(self.clip_model) - - self.synchformer = Synchformer() - self.synchformer.load_state_dict( - torch.load(synchformer_ckpt, weights_only=True, map_location='cpu')) - - self.tokenizer = open_clip.get_tokenizer('ViT-H-14-378-quickgelu') # same as 'ViT-H-14' - else: - self.clip_model = None - self.synchformer = None - self.tokenizer = None - - if tod_vae_ckpt is not None: - self.tod = AutoEncoderModule(vae_ckpt_path=tod_vae_ckpt, - vocoder_ckpt_path=bigvgan_vocoder_ckpt, - mode=mode, - need_vae_encoder=need_vae_encoder) - else: - self.tod = None - 
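For orientation on `DiagonalGaussianDistribution` above: the encoder head emits twice as many channels as the latent, split into mean and log-variance, and `sample()` draws through the reparameterization trick (mean plus std times Gaussian noise), optionally with a seeded generator for reproducibility. A small usage sketch under assumed shapes:

```python
import torch

# Assumed shapes for illustration: batch of 2, 20 latent channels, 250 frames.
B, C, T = 2, 20, 250
params = torch.randn(B, 2 * C, T)            # first half mean, second half logvar

dist = DiagonalGaussianDistribution(params)  # class defined above
rng = torch.Generator().manual_seed(42)
z = dist.sample(rng=rng)                     # reparameterized draw, shape (B, C, T)
z_det = dist.mode()                          # deterministic alternative: the mean

assert z.shape == (B, C, T)
assert torch.equal(z_det, dist.mean)
```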
self.mel_converter = MelConverter() - - def compile(self): - if self.clip_model is not None: - self.clip_model.encode_image = torch.compile(self.clip_model.encode_image) - self.clip_model.encode_text = torch.compile(self.clip_model.encode_text) - if self.synchformer is not None: - self.synchformer = torch.compile(self.synchformer) - self.decode = torch.compile(self.decode) - self.vocode = torch.compile(self.vocode) - - def train(self, mode: bool) -> None: - return super().train(False) - - @torch.inference_mode() - def encode_video_with_clip(self, x: torch.Tensor, batch_size: int = -1) -> torch.Tensor: - assert self.clip_model is not None, 'CLIP is not loaded' - # x: (B, T, C, H, W) H/W: 384 - b, t, c, h, w = x.shape - assert c == 3 and h == 384 and w == 384 - x = self.clip_preprocess(x) - x = rearrange(x, 'b t c h w -> (b t) c h w') - outputs = [] - if batch_size < 0: - batch_size = b * t - for i in range(0, b * t, batch_size): - outputs.append(self.clip_model.encode_image(x[i:i + batch_size], normalize=True)) - x = torch.cat(outputs, dim=0) - # x = self.clip_model.encode_image(x, normalize=True) - x = rearrange(x, '(b t) d -> b t d', b=b) - return x - - @torch.inference_mode() - def encode_video_with_sync(self, x: torch.Tensor, batch_size: int = -1) -> torch.Tensor: - assert self.synchformer is not None, 'Synchformer is not loaded' - # x: (B, T, C, H, W) H/W: 384 - - b, t, c, h, w = x.shape - assert c == 3 and h == 224 and w == 224 - - # partition the video - segment_size = 16 - step_size = 8 - num_segments = (t - segment_size) // step_size + 1 - segments = [] - for i in range(num_segments): - segments.append(x[:, i * step_size:i * step_size + segment_size]) - x = torch.stack(segments, dim=1) # (B, S, T, C, H, W) - - outputs = [] - if batch_size < 0: - batch_size = b - x = rearrange(x, 'b s t c h w -> (b s) 1 t c h w') - for i in range(0, b * num_segments, batch_size): - outputs.append(self.synchformer(x[i:i + batch_size])) - x = torch.cat(outputs, dim=0) - x = rearrange(x, '(b s) 1 t d -> b (s t) d', b=b) - return x - - @torch.inference_mode() - def encode_text(self, text: list[str]) -> torch.Tensor: - assert self.clip_model is not None, 'CLIP is not loaded' - assert self.tokenizer is not None, 'Tokenizer is not loaded' - # x: (B, L) - tokens = self.tokenizer(text).to(self.device) - return self.clip_model.encode_text(tokens, normalize=True) - - @torch.inference_mode() - def encode_audio(self, x) -> DiagonalGaussianDistribution: - assert self.tod is not None, 'VAE is not loaded' - # x: (B * L) - mel = self.mel_converter(x) - dist = self.tod.encode(mel) - - return dist - - @torch.inference_mode() - def vocode(self, mel: torch.Tensor) -> torch.Tensor: - assert self.tod is not None, 'VAE is not loaded' - return self.tod.vocode(mel) - - @torch.inference_mode() - def decode(self, z: torch.Tensor) -> torch.Tensor: - assert self.tod is not None, 'VAE is not loaded' - return self.tod.decode(z.transpose(1, 2)) - - @property - def device(self): - return next(self.parameters()).device - - @property - def dtype(self): - return next(self.parameters()).dtype diff --git a/mmaudio_x/model/utils/parameter_groups.py b/mmaudio_x/model/utils/parameter_groups.py deleted file mode 100644 index 89c3993083f470dfc6b18a5c90f908ea37bde12b..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/utils/parameter_groups.py +++ /dev/null @@ -1,72 +0,0 @@ -import logging - -log = logging.getLogger() - - -def get_parameter_groups(model, cfg, print_log=False): - """ - Assign different weight decays and learning 
rates to different parameters. - Returns a parameter group which can be passed to the optimizer. - """ - weight_decay = cfg.weight_decay - # embed_weight_decay = cfg.embed_weight_decay - # backbone_lr_ratio = cfg.backbone_lr_ratio - base_lr = cfg.learning_rate - - backbone_params = [] - embed_params = [] - other_params = [] - - # embedding_names = ['summary_pos', 'query_init', 'query_emb', 'obj_pe'] - # embedding_names = [e + '.weight' for e in embedding_names] - - # inspired by detectron2 - memo = set() - for name, param in model.named_parameters(): - if not param.requires_grad: - continue - # Avoid duplicating parameters - if param in memo: - continue - memo.add(param) - - if name.startswith('module'): - name = name[7:] - - inserted = False - # if name.startswith('pixel_encoder.'): - # backbone_params.append(param) - # inserted = True - # if print_log: - # log.info(f'{name} counted as a backbone parameter.') - # else: - # for e in embedding_names: - # if name.endswith(e): - # embed_params.append(param) - # inserted = True - # if print_log: - # log.info(f'{name} counted as an embedding parameter.') - # break - - # if not inserted: - other_params.append(param) - - parameter_groups = [ - # { - # 'params': backbone_params, - # 'lr': base_lr * backbone_lr_ratio, - # 'weight_decay': weight_decay - # }, - # { - # 'params': embed_params, - # 'lr': base_lr, - # 'weight_decay': embed_weight_decay - # }, - { - 'params': other_params, - 'lr': base_lr, - 'weight_decay': weight_decay - }, - ] - - return parameter_groups diff --git a/mmaudio_x/model/utils/sample_utils.py b/mmaudio_x/model/utils/sample_utils.py deleted file mode 100644 index d44cf278e0b464bc6ac7e240fcab4a23895caa2f..0000000000000000000000000000000000000000 --- a/mmaudio_x/model/utils/sample_utils.py +++ /dev/null @@ -1,12 +0,0 @@ -from typing import Optional - -import torch - - -def log_normal_sample(x: torch.Tensor, - generator: Optional[torch.Generator] = None, - m: float = 0.0, - s: float = 1.0) -> torch.Tensor: - bs = x.shape[0] - s = torch.randn(bs, device=x.device, generator=generator) * s + m - return torch.sigmoid(s) diff --git a/mmaudio_x/utils/__init__.py b/mmaudio_x/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/mmaudio_x/utils/dist_utils.py b/mmaudio_x/utils/dist_utils.py deleted file mode 100644 index 354229b5d94bd03d104a07c7f16a06df9b519bdd..0000000000000000000000000000000000000000 --- a/mmaudio_x/utils/dist_utils.py +++ /dev/null @@ -1,17 +0,0 @@ -import os -from logging import Logger - -from mmaudio.utils.logger import TensorboardLogger - -local_rank = int(os.environ['LOCAL_RANK']) if 'LOCAL_RANK' in os.environ else 0 -world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1 - - -def info_if_rank_zero(logger: Logger, msg: str): - if local_rank == 0: - logger.info(msg) - - -def string_if_rank_zero(logger: TensorboardLogger, tag: str, msg: str): - if local_rank == 0: - logger.log_string(tag, msg) diff --git a/mmaudio_x/utils/download_utils.py b/mmaudio_x/utils/download_utils.py deleted file mode 100644 index 1d193efdb6dd7811d866dcdfbdfc471a5a2f0592..0000000000000000000000000000000000000000 --- a/mmaudio_x/utils/download_utils.py +++ /dev/null @@ -1,84 +0,0 @@ -import hashlib -import logging -from pathlib import Path - -import requests -from tqdm import tqdm - -log = logging.getLogger() - -links = [ - { - 'name': 'mmaudio_small_16k.pth', - 'url': 
'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_small_16k.pth', - 'md5': 'af93cde404179f58e3919ac085b8033b', - }, - { - 'name': 'mmaudio_small_44k.pth', - 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_small_44k.pth', - 'md5': 'babd74c884783d13701ea2820a5f5b6d', - }, - { - 'name': 'mmaudio_medium_44k.pth', - 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_medium_44k.pth', - 'md5': '5a56b6665e45a1e65ada534defa903d0', - }, - { - 'name': 'mmaudio_large_44k.pth', - 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_large_44k.pth', - 'md5': 'fed96c325a6785b85ce75ae1aafd2673' - }, - { - 'name': 'mmaudio_large_44k_v2.pth', - 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_large_44k_v2.pth', - 'md5': '01ad4464f049b2d7efdaa4c1a59b8dfe' - }, - { - 'name': 'v1-16.pth', - 'url': 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/v1-16.pth', - 'md5': '69f56803f59a549a1a507c93859fd4d7' - }, - { - 'name': 'best_netG.pt', - 'url': 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/best_netG.pt', - 'md5': 'eeaf372a38a9c31c362120aba2dde292' - }, - { - 'name': 'v1-44.pth', - 'url': 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/v1-44.pth', - 'md5': 'fab020275fa44c6589820ce025191600' - }, - { - 'name': 'synchformer_state_dict.pth', - 'url': - 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/synchformer_state_dict.pth', - 'md5': '5b2f5594b0730f70e41e549b7c94390c' - }, -] - - -def download_model_if_needed(model_path: Path): - base_name = model_path.name - - for link in links: - if link['name'] == base_name: - target_link = link - break - else: - raise ValueError(f'No link found for {base_name}') - - model_path.parent.mkdir(parents=True, exist_ok=True) - if not model_path.exists() or hashlib.md5(open(model_path, - 'rb').read()).hexdigest() != target_link['md5']: - log.info(f'Downloading {base_name} to {model_path}...') - r = requests.get(target_link['url'], stream=True) - total_size = int(r.headers.get('content-length', 0)) - block_size = 1024 - t = tqdm(total=total_size, unit='iB', unit_scale=True) - with open(model_path, 'wb') as f: - for data in r.iter_content(block_size): - t.update(len(data)) - f.write(data) - t.close() - if total_size != 0 and t.n != total_size: - raise RuntimeError('Error while downloading %s' % base_name) diff --git a/models_x/.gitkeep b/models_x/.gitkeep deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/models_x/LICENSE.txt b/models_x/LICENSE.txt deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/models_x/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/models_x/README.md b/models_x/README.md deleted file mode 100644 index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000 --- a/models_x/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. **Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). 
- * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. 
-* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. -* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - - -### https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... \ No newline at end of file diff --git a/models_x/dit/attention.py b/models_x/dit/attention.py deleted file mode 100644 index ac0cadbcd62e7d40700108d2857cb587f794fcee..0000000000000000000000000000000000000000 --- a/models_x/dit/attention.py +++ /dev/null @@ -1,46 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
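To make the `LtxPoolManager` and `MMAudioAgent` entry points documented above concrete, here is a hedged usage sketch. Only the method names, argument names, and return shapes come from the README; the import paths and every concrete value below are illustrative assumptions:

```python
# Hedged sketch only: import paths and parameter values are assumptions.
from helpers.ltx_manager_helpers import ltx_manager_singleton
from helpers.mmaudio_helper import mmaudio_agent_singleton

latents, padding = ltx_manager_singleton.generate_latent_fragment(
    height=512,
    width=768,
    video_total_frames=121,
    video_fps=24,
    motion_prompt="slow dolly-in towards the character",
    conditioning_items_data=[],        # LatentConditioningItem objects in real use
    guidance_scale=3.0,
    stg_scale=1.0,
    num_inference_steps=30,
)
# `latents` is a torch.Tensor in LTX latent space; `padding` records the crop values.

final_video = mmaudio_agent_singleton.generate_audio_for_video(
    video_path="fragment_0001.mp4",
    prompt="rain on a tin roof, distant thunder",
    negative_prompt="music",
    duration_seconds=5.0,
)
print(final_video)  # path to the muxed video with the generated audio track
```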
- -import torch -import torch.nn.functional as F - -from flash_attn import flash_attn_varlen_func - -from torch import nn - -class TorchAttention(nn.Module): - def tflops(self, args, kwargs, output) -> float: - assert len(args) == 0 or len(args) > 2, "query, key should both provided by args / kwargs" - q = kwargs.get("query") or args[0] - k = kwargs.get("key") or args[1] - b, h, sq, d = q.shape - b, h, sk, d = k.shape - return b * h * (4 * d * (sq / 1e6) * (sk / 1e6)) - - def forward(self, *args, **kwargs): - return F.scaled_dot_product_attention(*args, **kwargs) - - -class FlashAttentionVarlen(nn.Module): - def tflops(self, args, kwargs, output) -> float: - cu_seqlens_q = kwargs["cu_seqlens_q"] - cu_seqlens_k = kwargs["cu_seqlens_k"] - _, h, d = output.shape - seqlens_q = (cu_seqlens_q[1:] - cu_seqlens_q[:-1]) / 1e6 - seqlens_k = (cu_seqlens_k[1:] - cu_seqlens_k[:-1]) / 1e6 - return h * (4 * d * (seqlens_q * seqlens_k).sum()) - - def forward(self, *args, **kwargs): - kwargs["deterministic"] = torch.are_deterministic_algorithms_enabled() - return flash_attn_varlen_func(*args, **kwargs) \ No newline at end of file diff --git a/models_x/dit/blocks/__init__.py b/models_x/dit/blocks/__init__.py deleted file mode 100644 index 3195b400a407b871a6c19b67cf25239c5c3f196d..0000000000000000000000000000000000000000 --- a/models_x/dit/blocks/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from .mmdit_window_block import MMWindowTransformerBlock - -dit_blocks = { - "mmdit_window": MMWindowTransformerBlock, -} - - -def get_block(block_type: str): - if block_type in dit_blocks: - return dit_blocks[block_type] - raise NotImplementedError(f"{block_type} is not supported") diff --git a/models_x/dit/blocks/mmdit_window_block.py b/models_x/dit/blocks/mmdit_window_block.py deleted file mode 100644 index eacaa093658f62fb483086215cfb6ac72a2dc9fd..0000000000000000000000000000000000000000 --- a/models_x/dit/blocks/mmdit_window_block.py +++ /dev/null @@ -1,233 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
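The `tflops` estimators above count attention cost as roughly 4 * b * h * sq * sk * d floating-point operations (two matrix products, Q·Kᵀ and attention times V, at two FLOPs per multiply-add), with the two divisions by 1e6 folding the result into TFLOPs. A quick sanity check of that accounting with made-up shapes:

```python
# Illustrative arithmetic only; the shapes below are made up.
b, h, sq, sk, d = 2, 16, 4096, 4096, 128

flops = 4 * b * h * sq * sk * d                       # Q @ K^T and attn @ V, 2 FLOPs per MAC each
tflops = b * h * (4 * d * (sq / 1e6) * (sk / 1e6))    # same quantity as TorchAttention.tflops

assert abs(flops / 1e12 - tflops) < 1e-6
print(f"{tflops:.3f} TFLOPs per attention call")
```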
- -from typing import Tuple, Union -import torch -from einops import rearrange -from torch import nn -from torch.nn import functional as F -from torch.nn.modules.utils import _triple - -from common.distributed.ops import ( - gather_heads, - gather_heads_scatter_seq, - gather_seq_scatter_heads_qkv, - scatter_heads, -) - -from ..attention import TorchAttention -from ..mlp import get_mlp -from ..mm import MMArg, MMModule -from ..modulation import ada_layer_type -from ..normalization import norm_layer_type -from ..rope import RotaryEmbedding3d - - -class MMWindowAttention(nn.Module): - def __init__( - self, - vid_dim: int, - txt_dim: int, - heads: int, - head_dim: int, - qk_bias: bool, - qk_rope: bool, - qk_norm: norm_layer_type, - qk_norm_eps: float, - window: Union[int, Tuple[int, int, int]], - window_method: str, - shared_qkv: bool, - ): - super().__init__() - dim = MMArg(vid_dim, txt_dim) - inner_dim = heads * head_dim - qkv_dim = inner_dim * 3 - - self.window = _triple(window) - self.window_method = window_method - assert all(map(lambda v: isinstance(v, int) and v >= 0, self.window)) - - self.head_dim = head_dim - self.proj_qkv = MMModule(nn.Linear, dim, qkv_dim, bias=qk_bias, shared_weights=shared_qkv) - self.proj_out = MMModule(nn.Linear, inner_dim, dim, shared_weights=shared_qkv) - self.norm_q = MMModule(qk_norm, dim=head_dim, eps=qk_norm_eps, elementwise_affine=True) - self.norm_k = MMModule(qk_norm, dim=head_dim, eps=qk_norm_eps, elementwise_affine=True) - self.rope = RotaryEmbedding3d(dim=head_dim // 2) if qk_rope else None - self.attn = TorchAttention() - - def forward( - self, - vid: torch.FloatTensor, # b T H W c - txt: torch.FloatTensor, # b L c - txt_mask: torch.BoolTensor, # b L - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - # Project q, k, v. - vid_qkv, txt_qkv = self.proj_qkv(vid, txt) - vid_qkv = gather_seq_scatter_heads_qkv(vid_qkv, seq_dim=2) - _, T, H, W, _ = vid_qkv.shape - _, L, _ = txt.shape - - if self.window_method == "win": - nt, nh, nw = self.window - tt, hh, ww = T // nt, H // nh, W // nw - elif self.window_method == "win_by_size": - tt, hh, ww = self.window - tt, hh, ww = ( - tt if tt > 0 else T, - hh if hh > 0 else H, - ww if ww > 0 else W, - ) - nt, nh, nw = T // tt, H // hh, W // ww - else: - raise NotImplementedError - - vid_qkv = rearrange(vid_qkv, "b T H W (o h d) -> o b h (T H W) d", o=3, d=self.head_dim) - txt_qkv = rearrange(txt_qkv, "b L (o h d) -> o b h L d", o=3, d=self.head_dim) - txt_qkv = scatter_heads(txt_qkv, dim=2) - - vid_q, vid_k, vid_v = vid_qkv.unbind() - txt_q, txt_k, txt_v = txt_qkv.unbind() - - vid_q, txt_q = self.norm_q(vid_q, txt_q) - vid_k, txt_k = self.norm_k(vid_k, txt_k) - - if self.rope: - vid_q, vid_k = self.rope(vid_q, vid_k, (T, H, W)) - - def vid_window(v): - return rearrange( - v, - "b h (nt tt nh hh nw ww) d -> b h (nt nh nw) (tt hh ww) d", - hh=hh, - ww=ww, - tt=tt, - nh=nh, - nw=nw, - nt=nt, - ) - - def txt_window(t): - return rearrange(t, "b h L d -> b h 1 L d").expand(-1, -1, nt * nh * nw, -1, -1) - - # Process video attention. 
- vid_msk = F.pad(txt_mask, (tt * hh * ww, 0), value=True) - vid_msk = rearrange(vid_msk, "b l -> b 1 1 1 l").expand(-1, 1, 1, tt * hh * ww, -1) - vid_out = self.attn( - vid_window(vid_q), - torch.cat([vid_window(vid_k), txt_window(txt_k)], dim=-2), - torch.cat([vid_window(vid_v), txt_window(txt_v)], dim=-2), - vid_msk, - ) - vid_out = rearrange( - vid_out, - "b h (nt nh nw) (tt hh ww) d -> b (nt tt) (nh hh) (nw ww) (h d)", - hh=hh, - ww=ww, - tt=tt, - nh=nh, - nw=nw, - ) - vid_out = gather_heads_scatter_seq(vid_out, head_dim=4, seq_dim=2) - - # Process text attention. - txt_msk = F.pad(txt_mask, (T * H * W, 0), value=True) - txt_msk = rearrange(txt_msk, "b l -> b 1 1 l").expand(-1, 1, L, -1) - txt_out = self.attn( - txt_q, - torch.cat([vid_k, txt_k], dim=-2), - torch.cat([vid_v, txt_v], dim=-2), - txt_msk, - ) - txt_out = rearrange(txt_out, "b h L d -> b L (h d)") - txt_out = gather_heads(txt_out, dim=2) - - # Project output. - vid_out, txt_out = self.proj_out(vid_out, txt_out) - return vid_out, txt_out - - -class MMWindowTransformerBlock(nn.Module): - def __init__( - self, - *, - vid_dim: int, - txt_dim: int, - emb_dim: int, - heads: int, - head_dim: int, - expand_ratio: int, - norm: norm_layer_type, - norm_eps: float, - ada: ada_layer_type, - qk_bias: bool, - qk_rope: bool, - qk_norm: norm_layer_type, - window: Union[int, Tuple[int, int, int]], - window_method: str, - shared_qkv: bool, - shared_mlp: bool, - mlp_type: str, - **kwargs, - ): - super().__init__() - dim = MMArg(vid_dim, txt_dim) - self.attn_norm = MMModule(norm, dim=dim, eps=norm_eps, elementwise_affine=False) - self.attn = MMWindowAttention( - vid_dim=vid_dim, - txt_dim=txt_dim, - heads=heads, - head_dim=head_dim, - qk_bias=qk_bias, - qk_rope=qk_rope, - qk_norm=qk_norm, - qk_norm_eps=norm_eps, - window=window, - window_method=window_method, - shared_qkv=shared_qkv, - ) - self.mlp_norm = MMModule(norm, dim=dim, eps=norm_eps, elementwise_affine=False) - self.mlp = MMModule( - get_mlp(mlp_type), - dim=dim, - expand_ratio=expand_ratio, - shared_weights=shared_mlp, - ) - self.ada = MMModule(ada, dim=dim, emb_dim=emb_dim, layers=["attn", "mlp"]) - - def forward( - self, - vid: torch.FloatTensor, - txt: torch.FloatTensor, - txt_mask: torch.BoolTensor, - emb: torch.FloatTensor, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - vid_attn, txt_attn = self.attn_norm(vid, txt) - vid_attn, txt_attn = self.ada(vid_attn, txt_attn, emb=emb, layer="attn", mode="in") - vid_attn, txt_attn = self.attn(vid_attn, txt_attn, txt_mask=txt_mask) - vid_attn, txt_attn = self.ada(vid_attn, txt_attn, emb=emb, layer="attn", mode="out") - vid_attn, txt_attn = (vid_attn + vid), (txt_attn + txt) - - vid_mlp, txt_mlp = self.mlp_norm(vid_attn, txt_attn) - vid_mlp, txt_mlp = self.ada(vid_mlp, txt_mlp, emb=emb, layer="mlp", mode="in") - vid_mlp, txt_mlp = self.mlp(vid_mlp, txt_mlp) - vid_mlp, txt_mlp = self.ada(vid_mlp, txt_mlp, emb=emb, layer="mlp", mode="out") - vid_mlp, txt_mlp = (vid_mlp + vid_attn), (txt_mlp + txt_attn) - - return vid_mlp, txt_mlp diff --git a/models_x/dit/embedding.py b/models_x/dit/embedding.py deleted file mode 100644 index e972244f5767c9f34e5e77bb180ae720ce88b89c..0000000000000000000000000000000000000000 --- a/models_x/dit/embedding.py +++ /dev/null @@ -1,62 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Optional, Union -import torch -from diffusers.models.embeddings import get_timestep_embedding -from torch import nn - - -def emb_add(emb1: torch.Tensor, emb2: Optional[torch.Tensor]): - return emb1 if emb2 is None else emb1 + emb2 - - -class TimeEmbedding(nn.Module): - def __init__( - self, - sinusoidal_dim: int, - hidden_dim: int, - output_dim: int, - ): - super().__init__() - self.sinusoidal_dim = sinusoidal_dim - self.proj_in = nn.Linear(sinusoidal_dim, hidden_dim) - self.proj_hid = nn.Linear(hidden_dim, hidden_dim) - self.proj_out = nn.Linear(hidden_dim, output_dim) - self.act = nn.SiLU() - - def forward( - self, - timestep: Union[int, float, torch.IntTensor, torch.FloatTensor], - device: torch.device, - dtype: torch.dtype, - ) -> torch.FloatTensor: - if not torch.is_tensor(timestep): - timestep = torch.tensor([timestep], device=device, dtype=dtype) - if timestep.ndim == 0: - timestep = timestep[None] - - emb = get_timestep_embedding( - timesteps=timestep, - embedding_dim=self.sinusoidal_dim, - flip_sin_to_cos=False, - downscale_freq_shift=0, - ) - emb = emb.to(dtype) - emb = self.proj_in(emb) - emb = self.act(emb) - emb = self.proj_hid(emb) - emb = self.act(emb) - emb = self.proj_out(emb) - return emb diff --git a/models_x/dit/mlp.py b/models_x/dit/mlp.py deleted file mode 100644 index 2d05cb021f3e3c6ac05c0e7ae1aa8a6d29475b87..0000000000000000000000000000000000000000 --- a/models_x/dit/mlp.py +++ /dev/null @@ -1,62 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
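TimeEmbedding above is a sinusoidal timestep encoder followed by a SiLU MLP. The sketch below reproduces the same flow with diffusers' get_timestep_embedding and an explicit nn.Sequential; the dimensions are illustrative, with output_dim chosen as 6 * dim to satisfy the AdaSingle requirement emb_dim == 6 * dim used elsewhere in this code.

```python
import torch
from diffusers.models.embeddings import get_timestep_embedding
from torch import nn

sinusoidal_dim, hidden_dim, output_dim = 256, 1024, 6 * 1024  # illustrative sizes

# Sinusoidal features for a batch of timesteps, with the same flags as above.
t = torch.tensor([0, 250, 999])
sin = get_timestep_embedding(
    timesteps=t, embedding_dim=sinusoidal_dim,
    flip_sin_to_cos=False, downscale_freq_shift=0,
)

# The same proj_in -> SiLU -> proj_hid -> SiLU -> proj_out stack as TimeEmbedding.
mlp = nn.Sequential(
    nn.Linear(sinusoidal_dim, hidden_dim), nn.SiLU(),
    nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
    nn.Linear(hidden_dim, output_dim),
)
emb = mlp(sin)
print(emb.shape)  # torch.Size([3, 6144])
```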
- -from typing import Optional -import torch -import torch.nn.functional as F -from torch import nn - - -def get_mlp(mlp_type: Optional[str] = "normal"): - if mlp_type == "normal": - return MLP - elif mlp_type == "swiglu": - return SwiGLUMLP - - -class MLP(nn.Module): - def __init__( - self, - dim: int, - expand_ratio: int, - ): - super().__init__() - self.proj_in = nn.Linear(dim, dim * expand_ratio) - self.act = nn.GELU("tanh") - self.proj_out = nn.Linear(dim * expand_ratio, dim) - - def forward(self, x: torch.FloatTensor) -> torch.FloatTensor: - x = self.proj_in(x) - x = self.act(x) - x = self.proj_out(x) - return x - - -class SwiGLUMLP(nn.Module): - def __init__( - self, - dim: int, - expand_ratio: int, - multiple_of: int = 256, - ): - super().__init__() - hidden_dim = int(2 * dim * expand_ratio / 3) - hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) - self.proj_in_gate = nn.Linear(dim, hidden_dim, bias=False) - self.proj_out = nn.Linear(hidden_dim, dim, bias=False) - self.proj_in = nn.Linear(dim, hidden_dim, bias=False) - - def forward(self, x: torch.FloatTensor) -> torch.FloatTensor: - x = self.proj_out(F.silu(self.proj_in_gate(x)) * self.proj_in(x)) - return x diff --git a/models_x/dit/mm.py b/models_x/dit/mm.py deleted file mode 100644 index 49be1f5915a61d8ea27f3e3718f35e5c9af662e7..0000000000000000000000000000000000000000 --- a/models_x/dit/mm.py +++ /dev/null @@ -1,67 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
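SwiGLUMLP above shrinks the nominal expansion by a factor of 2/3 (keeping parameters roughly comparable to the GELU MLP at the same expand_ratio, since it carries three weight matrices instead of two) and rounds the hidden width up to a multiple of multiple_of. A quick check of that sizing rule with illustrative dimensions:

```python
dim, expand_ratio, multiple_of = 1024, 4, 256  # illustrative values

hidden = int(2 * dim * expand_ratio / 3)                             # 2/3 of the nominal expansion
hidden = multiple_of * ((hidden + multiple_of - 1) // multiple_of)   # round up to multiple_of
print(hidden)  # 2816

# Parameter counts (ignoring biases):
swiglu_params = 3 * dim * hidden               # proj_in_gate + proj_in + proj_out
gelu_params = 2 * dim * (dim * expand_ratio)   # proj_in + proj_out
print(swiglu_params, gelu_params)  # 8650752 8388608
```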
- -from dataclasses import dataclass -from typing import Any, Callable, Dict, List, Tuple -import torch -from torch import nn - - -@dataclass -class MMArg: - vid: Any - txt: Any - - -def get_args(key: str, args: List[Any]) -> List[Any]: - return [getattr(v, key) if isinstance(v, MMArg) else v for v in args] - - -def get_kwargs(key: str, kwargs: Dict[str, Any]) -> Dict[str, Any]: - return {k: getattr(v, key) if isinstance(v, MMArg) else v for k, v in kwargs.items()} - - -class MMModule(nn.Module): - def __init__( - self, - module: Callable[..., nn.Module], - *args, - shared_weights: bool = False, - **kwargs, - ): - super().__init__() - self.shared_weights = shared_weights - if self.shared_weights: - assert get_args("vid", args) == get_args("txt", args) - assert get_kwargs("vid", kwargs) == get_kwargs("txt", kwargs) - self.all = module(*get_args("vid", args), **get_kwargs("vid", kwargs)) - else: - self.vid = module(*get_args("vid", args), **get_kwargs("vid", kwargs)) - self.txt = module(*get_args("txt", args), **get_kwargs("txt", kwargs)) - - def forward( - self, - vid: torch.FloatTensor, - txt: torch.FloatTensor, - *args, - **kwargs, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - vid_module = self.vid if not self.shared_weights else self.all - txt_module = self.txt if not self.shared_weights else self.all - vid = vid_module(vid, *get_args("vid", args), **get_kwargs("vid", kwargs)) - txt = txt_module(txt, *get_args("txt", args), **get_kwargs("txt", kwargs)) - return vid, txt diff --git a/models_x/dit/modulation.py b/models_x/dit/modulation.py deleted file mode 100644 index cd3b41f6c457396ac65403d88edc3d5ad3382262..0000000000000000000000000000000000000000 --- a/models_x/dit/modulation.py +++ /dev/null @@ -1,97 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Callable, List, Optional -import torch -from einops import rearrange -from torch import nn - -from common.cache import Cache -from common.distributed.ops import slice_inputs - -# (dim: int, emb_dim: int) -ada_layer_type = Callable[[int, int], nn.Module] - - -def get_ada_layer(ada_layer: str) -> ada_layer_type: - if ada_layer == "single": - return AdaSingle - raise NotImplementedError(f"{ada_layer} is not supported") - - -def expand_dims(x: torch.Tensor, dim: int, ndim: int): - """ - Expand tensor "x" to "ndim" by adding empty dims at "dim". - Example: x is (b d), target ndim is 5, add dim at 1, return (b 1 1 1 d). 
- """ - shape = x.shape - shape = shape[:dim] + (1,) * (ndim - len(shape)) + shape[dim:] - return x.reshape(shape) - - -class AdaSingle(nn.Module): - def __init__( - self, - dim: int, - emb_dim: int, - layers: List[str], - ): - assert emb_dim == 6 * dim, "AdaSingle requires emb_dim == 6 * dim" - super().__init__() - self.dim = dim - self.emb_dim = emb_dim - self.layers = layers - for l in layers: - self.register_parameter(f"{l}_shift", nn.Parameter(torch.randn(dim) / dim**0.5)) - self.register_parameter(f"{l}_scale", nn.Parameter(torch.randn(dim) / dim**0.5 + 1)) - self.register_parameter(f"{l}_gate", nn.Parameter(torch.randn(dim) / dim**0.5)) - - def forward( - self, - hid: torch.FloatTensor, # b ... c - emb: torch.FloatTensor, # b d - layer: str, - mode: str, - cache: Cache = Cache(disable=True), - branch_tag: str = "", - hid_len: Optional[torch.LongTensor] = None, # b - ) -> torch.FloatTensor: - idx = self.layers.index(layer) - emb = rearrange(emb, "b (d l g) -> b d l g", l=len(self.layers), g=3)[..., idx, :] - emb = expand_dims(emb, 1, hid.ndim + 1) - - if hid_len is not None: - emb = cache( - f"emb_repeat_{idx}_{branch_tag}", - lambda: slice_inputs( - torch.cat([e.repeat(l, *([1] * e.ndim)) for e, l in zip(emb, hid_len)]), - dim=0, - ), - ) - - shiftA, scaleA, gateA = emb.unbind(-1) - shiftB, scaleB, gateB = ( - getattr(self, f"{layer}_shift"), - getattr(self, f"{layer}_scale"), - getattr(self, f"{layer}_gate"), - ) - - if mode == "in": - return hid.mul_(scaleA + scaleB).add_(shiftA + shiftB) - if mode == "out": - return hid.mul_(gateA + gateB) - raise NotImplementedError - - def extra_repr(self) -> str: - return f"dim={self.dim}, emb_dim={self.emb_dim}, layers={self.layers}" \ No newline at end of file diff --git a/models_x/dit/na.py b/models_x/dit/na.py deleted file mode 100644 index 0dbd546c4705b3b9c7c19a9823f9d113a0447616..0000000000000000000000000000000000000000 --- a/models_x/dit/na.py +++ /dev/null @@ -1,241 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from itertools import chain -from typing import Callable, Dict, List, Tuple -import einops -import torch - - -def flatten( - hid: List[torch.FloatTensor], # List of (*** c) -) -> Tuple[ - torch.FloatTensor, # (L c) - torch.LongTensor, # (b n) -]: - assert len(hid) > 0 - shape = torch.stack([torch.tensor(x.shape[:-1], device=hid[0].device) for x in hid]) - hid = torch.cat([x.flatten(0, -2) for x in hid]) - return hid, shape - - -def unflatten( - hid: torch.FloatTensor, # (L c) or (L ... c) - hid_shape: torch.LongTensor, # (b n) -) -> List[torch.Tensor]: # List of (*** c) or (*** ... c) - hid_len = hid_shape.prod(-1) - hid = hid.split(hid_len.tolist()) - hid = [x.unflatten(0, s.tolist()) for x, s in zip(hid, hid_shape)] - return hid - - -def concat( - vid: torch.FloatTensor, # (VL ... c) - txt: torch.FloatTensor, # (TL ... 
c) - vid_len: torch.LongTensor, # (b) - txt_len: torch.LongTensor, # (b) -) -> torch.FloatTensor: # (L ... c) - vid = torch.split(vid, vid_len.tolist()) - txt = torch.split(txt, txt_len.tolist()) - return torch.cat(list(chain(*zip(vid, txt)))) - - -def concat_idx( - vid_len: torch.LongTensor, # (b) - txt_len: torch.LongTensor, # (b) -) -> Tuple[ - Callable, - Callable, -]: - device = vid_len.device - vid_idx = torch.arange(vid_len.sum(), device=device) - txt_idx = torch.arange(len(vid_idx), len(vid_idx) + txt_len.sum(), device=device) - tgt_idx = concat(vid_idx, txt_idx, vid_len, txt_len) - src_idx = torch.argsort(tgt_idx) - return ( - lambda vid, txt: torch.index_select(torch.cat([vid, txt]), 0, tgt_idx), - lambda all: torch.index_select(all, 0, src_idx).split([len(vid_idx), len(txt_idx)]), - ) - - -def unconcat( - all: torch.FloatTensor, # (L ... c) - vid_len: torch.LongTensor, # (b) - txt_len: torch.LongTensor, # (b) -) -> Tuple[ - torch.FloatTensor, # (VL ... c) - torch.FloatTensor, # (TL ... c) -]: - interleave_len = list(chain(*zip(vid_len.tolist(), txt_len.tolist()))) - all = all.split(interleave_len) - vid = torch.cat(all[0::2]) - txt = torch.cat(all[1::2]) - return vid, txt - - -def repeat_concat( - vid: torch.FloatTensor, # (VL ... c) - txt: torch.FloatTensor, # (TL ... c) - vid_len: torch.LongTensor, # (n*b) - txt_len: torch.LongTensor, # (b) - txt_repeat: List, # (n) -) -> torch.FloatTensor: # (L ... c) - vid = torch.split(vid, vid_len.tolist()) - txt = torch.split(txt, txt_len.tolist()) - txt = [[x] * n for x, n in zip(txt, txt_repeat)] - txt = list(chain(*txt)) - return torch.cat(list(chain(*zip(vid, txt)))) - - -def repeat_concat_idx( - vid_len: torch.LongTensor, # (n*b) - txt_len: torch.LongTensor, # (b) - txt_repeat: torch.LongTensor, # (n) -) -> Tuple[ - Callable, - Callable, -]: - device = vid_len.device - vid_idx = torch.arange(vid_len.sum(), device=device) - txt_idx = torch.arange(len(vid_idx), len(vid_idx) + txt_len.sum(), device=device) - txt_repeat_list = txt_repeat.tolist() - tgt_idx = repeat_concat(vid_idx, txt_idx, vid_len, txt_len, txt_repeat) - src_idx = torch.argsort(tgt_idx) - txt_idx_len = len(tgt_idx) - len(vid_idx) - repeat_txt_len = (txt_len * txt_repeat).tolist() - - def unconcat_coalesce(all): - """ - Un-concat vid & txt, and coalesce the repeated txt. - e.g. vid [0 1 2 3 4 5 6 7 8] -> 3 splits -> [0 1 2] [3 4 5] [6 7 8] - txt [9 10] - repeat_concat ==> [0 1 2 9 10 3 4 5 9 10 6 7 8 9 10] - 1. argsort re-index ==> [0 1 2 3 4 5 6 7 8 9 9 9 10 10 10] - split ==> vid_out [0 1 2 3 4 5 6 7 8] txt_out [9 9 9 10 10 10] - 2. reshape & mean for each sample to coalesce the repeated txt. - """ - vid_out, txt_out = all[src_idx].split([len(vid_idx), txt_idx_len]) - txt_out_coalesced = [] - for txt, repeat_time in zip(txt_out.split(repeat_txt_len), txt_repeat_list): - txt = txt.reshape(-1, repeat_time, *txt.shape[1:]).mean(1) - txt_out_coalesced.append(txt) - return vid_out, torch.cat(txt_out_coalesced) - - # Note: Backward of torch.index_select is non-deterministic when existing repeated index, - # the difference may cumulative like torch.repeat_interleave, so we use vanilla index here. 
- return ( - lambda vid, txt: torch.cat([vid, txt])[tgt_idx], - lambda all: unconcat_coalesce(all), - ) - - -def rearrange( - hid: torch.FloatTensor, # (L c) - hid_shape: torch.LongTensor, # (b n) - pattern: str, - **kwargs: Dict[str, int], -) -> Tuple[ - torch.FloatTensor, - torch.LongTensor, -]: - return flatten([einops.rearrange(h, pattern, **kwargs) for h in unflatten(hid, hid_shape)]) - - -def rearrange_idx( - hid_shape: torch.LongTensor, # (b n) - pattern: str, - **kwargs: Dict[str, int], -) -> Tuple[Callable, Callable, torch.LongTensor]: - hid_idx = torch.arange(hid_shape.prod(-1).sum(), device=hid_shape.device).unsqueeze(-1) - tgt_idx, tgt_shape = rearrange(hid_idx, hid_shape, pattern, **kwargs) - tgt_idx = tgt_idx.squeeze(-1) - src_idx = torch.argsort(tgt_idx) - return ( - lambda hid: torch.index_select(hid, 0, tgt_idx), - lambda hid: torch.index_select(hid, 0, src_idx), - tgt_shape, - ) - - -def repeat( - hid: torch.FloatTensor, # (L c) - hid_shape: torch.LongTensor, # (b n) - pattern: str, - **kwargs: Dict[str, torch.LongTensor], # (b) -) -> Tuple[ - torch.FloatTensor, - torch.LongTensor, -]: - hid = unflatten(hid, hid_shape) - kwargs = [{k: v[i].item() for k, v in kwargs.items()} for i in range(len(hid))] - return flatten([einops.repeat(h, pattern, **a) for h, a in zip(hid, kwargs)]) - - -def pack( - samples: List[torch.Tensor], # List of (h w c). -) -> Tuple[ - List[torch.Tensor], # groups [(b1 h1 w1 c1), (b2 h2 w2 c2)] - List[List[int]], # reversal indices. -]: - batches = {} - indices = {} - for i, sample in enumerate(samples): - shape = sample.shape - batches[shape] = batches.get(shape, []) - indices[shape] = indices.get(shape, []) - batches[shape].append(sample) - indices[shape].append(i) - - batches = list(map(torch.stack, batches.values())) - indices = list(indices.values()) - return batches, indices - - -def unpack( - batches: List[torch.Tensor], - indices: List[List[int]], -) -> List[torch.Tensor]: - samples = [None] * (max(chain(*indices)) + 1) - for batch, index in zip(batches, indices): - for sample, i in zip(batch.unbind(), index): - samples[i] = sample - return samples - - -def window( - hid: torch.FloatTensor, # (L c) - hid_shape: torch.LongTensor, # (b n) - window_fn: Callable[[torch.Tensor], List[torch.Tensor]], -): - hid = unflatten(hid, hid_shape) - hid = list(map(window_fn, hid)) - hid_windows = torch.tensor(list(map(len, hid)), device=hid_shape.device) - hid, hid_shape = flatten(list(chain(*hid))) - return hid, hid_shape, hid_windows - - -def window_idx( - hid_shape: torch.LongTensor, # (b n) - window_fn: Callable[[torch.Tensor], List[torch.Tensor]], -): - hid_idx = torch.arange(hid_shape.prod(-1).sum(), device=hid_shape.device).unsqueeze(-1) - tgt_idx, tgt_shape, tgt_windows = window(hid_idx, hid_shape, window_fn) - tgt_idx = tgt_idx.squeeze(-1) - src_idx = torch.argsort(tgt_idx) - return ( - lambda hid: torch.index_select(hid, 0, tgt_idx), - lambda hid: torch.index_select(hid, 0, src_idx), - tgt_shape, - tgt_windows, - ) diff --git a/models_x/dit/nablocks/__init__.py b/models_x/dit/nablocks/__init__.py deleted file mode 100644 index afa206db157786d9e4cf830bec09bd3a390bd9a8..0000000000000000000000000000000000000000 --- a/models_x/dit/nablocks/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
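The flatten/unflatten pair above is the backbone of the native-resolution ("na") layout: samples with different (T, H, W) are packed into a single (L, c) token sequence plus a (b, n) shape tensor, and the other helpers (concat, repeat, rearrange, window) all operate on that flat form. A self-contained round-trip sketch mirroring those two functions:

```python
from typing import List, Tuple
import torch

def flatten(samples: List[torch.Tensor]) -> Tuple[torch.Tensor, torch.Tensor]:
    # Record each sample's leading shape (everything but channels), then
    # concatenate all tokens into one (L, c) sequence.
    shape = torch.stack([torch.tensor(x.shape[:-1]) for x in samples])
    hid = torch.cat([x.flatten(0, -2) for x in samples])
    return hid, shape

def unflatten(hid: torch.Tensor, hid_shape: torch.Tensor) -> List[torch.Tensor]:
    # Split by per-sample token counts and restore each sample's shape.
    lens = hid_shape.prod(-1).tolist()
    return [x.unflatten(0, s.tolist()) for x, s in zip(hid.split(lens), hid_shape)]

# Two videos with different (T, H, W) but the same channel dim.
a = torch.randn(4, 8, 8, 16)
b = torch.randn(2, 12, 6, 16)
hid, hid_shape = flatten([a, b])
print(hid.shape, hid_shape.tolist())  # torch.Size([400, 16]) [[4, 8, 8], [2, 12, 6]]
out_a, out_b = unflatten(hid, hid_shape)
assert torch.equal(out_a, a) and torch.equal(out_b, b)
```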
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from .mmsr_block import NaMMSRTransformerBlock - -nadit_blocks = { - "mmdit_sr": NaMMSRTransformerBlock, -} - - -def get_nablock(block_type: str): - if block_type in nadit_blocks: - return nadit_blocks[block_type] - raise NotImplementedError(f"{block_type} is not supported") diff --git a/models_x/dit/nablocks/mmsr_block.py b/models_x/dit/nablocks/mmsr_block.py deleted file mode 100644 index b75652efc070188268bb84b35352b543e1a3746b..0000000000000000000000000000000000000000 --- a/models_x/dit/nablocks/mmsr_block.py +++ /dev/null @@ -1,248 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Tuple, Union -import torch -from einops import rearrange -from torch.nn import functional as F - -# from ..cache import Cache -from common.cache import Cache -from common.distributed.ops import gather_heads_scatter_seq, gather_seq_scatter_heads_qkv - -from .. 
import na -from ..attention import FlashAttentionVarlen -from ..blocks.mmdit_window_block import MMWindowAttention, MMWindowTransformerBlock -from ..mm import MMArg -from ..modulation import ada_layer_type -from ..normalization import norm_layer_type -from ..rope import NaRotaryEmbedding3d -from ..window import get_window_op - - -class NaSwinAttention(MMWindowAttention): - def __init__( - self, - vid_dim: int, - txt_dim: int, - heads: int, - head_dim: int, - qk_bias: bool, - qk_rope: bool, - qk_norm: norm_layer_type, - qk_norm_eps: float, - window: Union[int, Tuple[int, int, int]], - window_method: str, - shared_qkv: bool, - **kwargs, - ): - super().__init__( - vid_dim=vid_dim, - txt_dim=txt_dim, - heads=heads, - head_dim=head_dim, - qk_bias=qk_bias, - qk_rope=qk_rope, - qk_norm=qk_norm, - qk_norm_eps=qk_norm_eps, - window=window, - window_method=window_method, - shared_qkv=shared_qkv, - ) - self.rope = NaRotaryEmbedding3d(dim=head_dim // 2) if qk_rope else None - self.attn = FlashAttentionVarlen() - self.window_op = get_window_op(window_method) - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - - vid_qkv, txt_qkv = self.proj_qkv(vid, txt) - vid_qkv = gather_seq_scatter_heads_qkv( - vid_qkv, - seq_dim=0, - qkv_shape=vid_shape, - cache=cache.namespace("vid"), - ) - txt_qkv = gather_seq_scatter_heads_qkv( - txt_qkv, - seq_dim=0, - qkv_shape=txt_shape, - cache=cache.namespace("txt"), - ) - - # re-org the input seq for window attn - cache_win = cache.namespace(f"{self.window_method}_{self.window}_sd3") - - def make_window(x: torch.Tensor): - t, h, w, _ = x.shape - window_slices = self.window_op((t, h, w), self.window) - return [x[st, sh, sw] for (st, sh, sw) in window_slices] - - window_partition, window_reverse, window_shape, window_count = cache_win( - "win_transform", - lambda: na.window_idx(vid_shape, make_window), - ) - vid_qkv_win = window_partition(vid_qkv) - - vid_qkv_win = rearrange(vid_qkv_win, "l (o h d) -> l o h d", o=3, d=self.head_dim) - txt_qkv = rearrange(txt_qkv, "l (o h d) -> l o h d", o=3, d=self.head_dim) - - vid_q, vid_k, vid_v = vid_qkv_win.unbind(1) - txt_q, txt_k, txt_v = txt_qkv.unbind(1) - - vid_q, txt_q = self.norm_q(vid_q, txt_q) - vid_k, txt_k = self.norm_k(vid_k, txt_k) - - txt_len = cache("txt_len", lambda: txt_shape.prod(-1)) - - vid_len_win = cache_win("vid_len", lambda: window_shape.prod(-1)) - txt_len_win = cache_win("txt_len", lambda: txt_len.repeat_interleave(window_count)) - all_len_win = cache_win("all_len", lambda: vid_len_win + txt_len_win) - concat_win, unconcat_win = cache_win( - "mm_pnp", lambda: na.repeat_concat_idx(vid_len_win, txt_len, window_count) - ) - - # window rope - if self.rope: - vid_q, vid_k = self.rope(vid_q, vid_k, window_shape, cache_win) - - out = self.attn( - q=concat_win(vid_q, txt_q).bfloat16(), - k=concat_win(vid_k, txt_k).bfloat16(), - v=concat_win(vid_v, txt_v).bfloat16(), - cu_seqlens_q=cache_win( - "vid_seqlens_q", lambda: F.pad(all_len_win.cumsum(0), (1, 0)).int() - ), - cu_seqlens_k=cache_win( - "vid_seqlens_k", lambda: F.pad(all_len_win.cumsum(0), (1, 0)).int() - ), - max_seqlen_q=cache_win("vid_max_seqlen_q", lambda: all_len_win.max().item()), - max_seqlen_k=cache_win("vid_max_seqlen_k", lambda: all_len_win.max().item()), - ).type_as(vid_q) - - # text pooling - vid_out, txt_out = unconcat_win(out) - - vid_out = rearrange(vid_out, "l h 
d -> l (h d)") - txt_out = rearrange(txt_out, "l h d -> l (h d)") - vid_out = window_reverse(vid_out) - - vid_out = gather_heads_scatter_seq(vid_out, head_dim=1, seq_dim=0) - txt_out = gather_heads_scatter_seq(txt_out, head_dim=1, seq_dim=0) - - vid_out, txt_out = self.proj_out(vid_out, txt_out) - - return vid_out, txt_out - - -class NaMMSRTransformerBlock(MMWindowTransformerBlock): - def __init__( - self, - *, - vid_dim: int, - txt_dim: int, - emb_dim: int, - heads: int, - head_dim: int, - expand_ratio: int, - norm: norm_layer_type, - norm_eps: float, - ada: ada_layer_type, - qk_bias: bool, - qk_rope: bool, - qk_norm: norm_layer_type, - shared_qkv: bool, - shared_mlp: bool, - mlp_type: str, - **kwargs, - ): - super().__init__( - vid_dim=vid_dim, - txt_dim=txt_dim, - emb_dim=emb_dim, - heads=heads, - head_dim=head_dim, - expand_ratio=expand_ratio, - norm=norm, - norm_eps=norm_eps, - ada=ada, - qk_bias=qk_bias, - qk_rope=qk_rope, - qk_norm=qk_norm, - shared_qkv=shared_qkv, - shared_mlp=shared_mlp, - mlp_type=mlp_type, - **kwargs, - ) - - self.attn = NaSwinAttention( - vid_dim=vid_dim, - txt_dim=txt_dim, - heads=heads, - head_dim=head_dim, - qk_bias=qk_bias, - qk_rope=qk_rope, - qk_norm=qk_norm, - qk_norm_eps=norm_eps, - shared_qkv=shared_qkv, - **kwargs, - ) - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - emb: torch.FloatTensor, - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - torch.LongTensor, - torch.LongTensor, - ]: - hid_len = MMArg( - cache("vid_len", lambda: vid_shape.prod(-1)), - cache("txt_len", lambda: txt_shape.prod(-1)), - ) - ada_kwargs = { - "emb": emb, - "hid_len": hid_len, - "cache": cache, - "branch_tag": MMArg("vid", "txt"), - } - - vid_attn, txt_attn = self.attn_norm(vid, txt) - vid_attn, txt_attn = self.ada(vid_attn, txt_attn, layer="attn", mode="in", **ada_kwargs) - vid_attn, txt_attn = self.attn(vid_attn, txt_attn, vid_shape, txt_shape, cache) - vid_attn, txt_attn = self.ada(vid_attn, txt_attn, layer="attn", mode="out", **ada_kwargs) - vid_attn, txt_attn = (vid_attn + vid), (txt_attn + txt) - - vid_mlp, txt_mlp = self.mlp_norm(vid_attn, txt_attn) - vid_mlp, txt_mlp = self.ada(vid_mlp, txt_mlp, layer="mlp", mode="in", **ada_kwargs) - vid_mlp, txt_mlp = self.mlp(vid_mlp, txt_mlp) - vid_mlp, txt_mlp = self.ada(vid_mlp, txt_mlp, layer="mlp", mode="out", **ada_kwargs) - vid_mlp, txt_mlp = (vid_mlp + vid_attn), (txt_mlp + txt_attn) - - return vid_mlp, txt_mlp, vid_shape, txt_shape diff --git a/models_x/dit/nadit.py b/models_x/dit/nadit.py deleted file mode 100644 index 7e778236db6a70f49a364db6e84bf7539c0b58ac..0000000000000000000000000000000000000000 --- a/models_x/dit/nadit.py +++ /dev/null @@ -1,350 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
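NaSwinAttention above avoids recomputing anything that depends only on the sequence shapes: window indices, cu_seqlens and RoPE tables are memoized through the Cache object that the transformer threads through every block. The snippet below is a minimal stand-in for that pattern (keyed lazy evaluation shared across layers); it is not the project's common.cache.Cache API, just the idea.

```python
class LazyCache:
    """Compute-on-first-use store keyed by string; later calls reuse the value."""
    def __init__(self, disable: bool = False):
        self.disable = disable
        self._store = {}

    def __call__(self, key, fn):
        if self.disable:
            return fn()
        if key not in self._store:
            self._store[key] = fn()
        return self._store[key]

    def namespace(self, tag):
        # Prefix keys so e.g. "vid" and "txt" entries never collide.
        parent = self
        class _NS:
            def __call__(self, key, fn):
                return parent(f"{tag}/{key}", fn)
        return _NS()

cache = LazyCache()
calls = []
def expensive():
    calls.append(1)
    return 42

for _ in range(3):        # e.g. three transformer blocks sharing one cache
    cache("txt_len", expensive)
print(len(calls))         # 1 -- computed once, reused afterwards
```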
- -from dataclasses import dataclass -from typing import Optional, Tuple, Union, Callable -import torch -from torch import nn - -from common.cache import Cache -from common.distributed.ops import slice_inputs - -from . import na -from .embedding import TimeEmbedding -from .modulation import get_ada_layer -from .nablocks import get_nablock -from .normalization import get_norm_layer -from .patch import NaPatchIn, NaPatchOut - -# Fake func, no checkpointing is required for inference -def gradient_checkpointing(module: Union[Callable, nn.Module], *args, enabled: bool, **kwargs): - return module(*args, **kwargs) - -@dataclass -class NaDiTOutput: - vid_sample: torch.Tensor - - -class NaDiT(nn.Module): - """ - Native Resolution Diffusion Transformer (NaDiT) - """ - - gradient_checkpointing = False - - def __init__( - self, - vid_in_channels: int, - vid_out_channels: int, - vid_dim: int, - txt_in_dim: Optional[int], - txt_dim: Optional[int], - emb_dim: int, - heads: int, - head_dim: int, - expand_ratio: int, - norm: Optional[str], - norm_eps: float, - ada: str, - qk_bias: bool, - qk_rope: bool, - qk_norm: Optional[str], - patch_size: Union[int, Tuple[int, int, int]], - num_layers: int, - block_type: Union[str, Tuple[str]], - shared_qkv: bool = False, - shared_mlp: bool = False, - mlp_type: str = "normal", - window: Optional[Tuple] = None, - window_method: Optional[Tuple[str]] = None, - temporal_window_size: int = None, - temporal_shifted: bool = False, - **kwargs, - ): - ada = get_ada_layer(ada) - norm = get_norm_layer(norm) - qk_norm = get_norm_layer(qk_norm) - if isinstance(block_type, str): - block_type = [block_type] * num_layers - elif len(block_type) != num_layers: - raise ValueError("The ``block_type`` list should equal to ``num_layers``.") - super().__init__() - self.vid_in = NaPatchIn( - in_channels=vid_in_channels, - patch_size=patch_size, - dim=vid_dim, - ) - self.txt_in = ( - nn.Linear(txt_in_dim, txt_dim) - if txt_in_dim and txt_in_dim != txt_dim - else nn.Identity() - ) - self.emb_in = TimeEmbedding( - sinusoidal_dim=256, - hidden_dim=max(vid_dim, txt_dim), - output_dim=emb_dim, - ) - - if window is None or isinstance(window[0], int): - window = [window] * num_layers - if window_method is None or isinstance(window_method, str): - window_method = [window_method] * num_layers - if temporal_window_size is None or isinstance(temporal_window_size, int): - temporal_window_size = [temporal_window_size] * num_layers - if temporal_shifted is None or isinstance(temporal_shifted, bool): - temporal_shifted = [temporal_shifted] * num_layers - - self.blocks = nn.ModuleList( - [ - get_nablock(block_type[i])( - vid_dim=vid_dim, - txt_dim=txt_dim, - emb_dim=emb_dim, - heads=heads, - head_dim=head_dim, - expand_ratio=expand_ratio, - norm=norm, - norm_eps=norm_eps, - ada=ada, - qk_bias=qk_bias, - qk_rope=qk_rope, - qk_norm=qk_norm, - shared_qkv=shared_qkv, - shared_mlp=shared_mlp, - mlp_type=mlp_type, - window=window[i], - window_method=window_method[i], - temporal_window_size=temporal_window_size[i], - temporal_shifted=temporal_shifted[i], - **kwargs, - ) - for i in range(num_layers) - ] - ) - self.vid_out = NaPatchOut( - out_channels=vid_out_channels, - patch_size=patch_size, - dim=vid_dim, - ) - - self.need_txt_repeat = block_type[0] in [ - "mmdit_stwin", - "mmdit_stwin_spatial", - "mmdit_stwin_3d_spatial", - ] - - def set_gradient_checkpointing(self, enable: bool): - self.gradient_checkpointing = enable - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - 
vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - timestep: Union[int, float, torch.IntTensor, torch.FloatTensor], # b - disable_cache: bool = True, # for test - ): - # Text input. - if txt_shape.size(-1) == 1 and self.need_txt_repeat: - txt, txt_shape = na.repeat(txt, txt_shape, "l c -> t l c", t=vid_shape[:, 0]) - # slice vid after patching in when using sequence parallelism - txt = slice_inputs(txt, dim=0) - txt = self.txt_in(txt) - - # Video input. - # Sequence parallel slicing is done inside patching class. - vid, vid_shape = self.vid_in(vid, vid_shape) - - # Embedding input. - emb = self.emb_in(timestep, device=vid.device, dtype=vid.dtype) - - # Body - cache = Cache(disable=disable_cache) - for i, block in enumerate(self.blocks): - vid, txt, vid_shape, txt_shape = gradient_checkpointing( - enabled=(self.gradient_checkpointing and self.training), - module=block, - vid=vid, - txt=txt, - vid_shape=vid_shape, - txt_shape=txt_shape, - emb=emb, - cache=cache, - ) - - vid, vid_shape = self.vid_out(vid, vid_shape, cache) - return NaDiTOutput(vid_sample=vid) - - -class NaDiTUpscaler(nn.Module): - """ - Native Resolution Diffusion Transformer (NaDiT) - """ - - gradient_checkpointing = False - - def __init__( - self, - vid_in_channels: int, - vid_out_channels: int, - vid_dim: int, - txt_in_dim: Optional[int], - txt_dim: Optional[int], - emb_dim: int, - heads: int, - head_dim: int, - expand_ratio: int, - norm: Optional[str], - norm_eps: float, - ada: str, - qk_bias: bool, - qk_rope: bool, - qk_norm: Optional[str], - patch_size: Union[int, Tuple[int, int, int]], - num_layers: int, - block_type: Union[str, Tuple[str]], - shared_qkv: bool = False, - shared_mlp: bool = False, - mlp_type: str = "normal", - window: Optional[Tuple] = None, - window_method: Optional[Tuple[str]] = None, - temporal_window_size: int = None, - temporal_shifted: bool = False, - **kwargs, - ): - ada = get_ada_layer(ada) - norm = get_norm_layer(norm) - qk_norm = get_norm_layer(qk_norm) - if isinstance(block_type, str): - block_type = [block_type] * num_layers - elif len(block_type) != num_layers: - raise ValueError("The ``block_type`` list should equal to ``num_layers``.") - super().__init__() - self.vid_in = NaPatchIn( - in_channels=vid_in_channels, - patch_size=patch_size, - dim=vid_dim, - ) - self.txt_in = ( - nn.Linear(txt_in_dim, txt_dim) - if txt_in_dim and txt_in_dim != txt_dim - else nn.Identity() - ) - self.emb_in = TimeEmbedding( - sinusoidal_dim=256, - hidden_dim=max(vid_dim, txt_dim), - output_dim=emb_dim, - ) - - self.emb_scale = TimeEmbedding( - sinusoidal_dim=256, - hidden_dim=max(vid_dim, txt_dim), - output_dim=emb_dim, - ) - - if window is None or isinstance(window[0], int): - window = [window] * num_layers - if window_method is None or isinstance(window_method, str): - window_method = [window_method] * num_layers - if temporal_window_size is None or isinstance(temporal_window_size, int): - temporal_window_size = [temporal_window_size] * num_layers - if temporal_shifted is None or isinstance(temporal_shifted, bool): - temporal_shifted = [temporal_shifted] * num_layers - - self.blocks = nn.ModuleList( - [ - get_nablock(block_type[i])( - vid_dim=vid_dim, - txt_dim=txt_dim, - emb_dim=emb_dim, - heads=heads, - head_dim=head_dim, - expand_ratio=expand_ratio, - norm=norm, - norm_eps=norm_eps, - ada=ada, - qk_bias=qk_bias, - qk_rope=qk_rope, - qk_norm=qk_norm, - shared_qkv=shared_qkv, - shared_mlp=shared_mlp, - mlp_type=mlp_type, - window=window[i], - window_method=window_method[i], - 
temporal_window_size=temporal_window_size[i], - temporal_shifted=temporal_shifted[i], - **kwargs, - ) - for i in range(num_layers) - ] - ) - self.vid_out = NaPatchOut( - out_channels=vid_out_channels, - patch_size=patch_size, - dim=vid_dim, - ) - - self.need_txt_repeat = block_type[0] in [ - "mmdit_stwin", - "mmdit_stwin_spatial", - "mmdit_stwin_3d_spatial", - ] - - def set_gradient_checkpointing(self, enable: bool): - self.gradient_checkpointing = enable - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - timestep: Union[int, float, torch.IntTensor, torch.FloatTensor], # b - downscale: Union[int, float, torch.IntTensor, torch.FloatTensor], # b - disable_cache: bool = False, # for test - ): - - # Text input. - if txt_shape.size(-1) == 1 and self.need_txt_repeat: - txt, txt_shape = na.repeat(txt, txt_shape, "l c -> t l c", t=vid_shape[:, 0]) - # slice vid after patching in when using sequence parallelism - txt = slice_inputs(txt, dim=0) - txt = self.txt_in(txt) - - # Video input. - # Sequence parallel slicing is done inside patching class. - vid, vid_shape = self.vid_in(vid, vid_shape) - - # Embedding input. - emb = self.emb_in(timestep, device=vid.device, dtype=vid.dtype) - emb_scale = self.emb_scale(downscale, device=vid.device, dtype=vid.dtype) - emb = emb + emb_scale - - # Body - cache = Cache(disable=disable_cache) - for i, block in enumerate(self.blocks): - vid, txt, vid_shape, txt_shape = gradient_checkpointing( - enabled=(self.gradient_checkpointing and self.training), - module=block, - vid=vid, - txt=txt, - vid_shape=vid_shape, - txt_shape=txt_shape, - emb=emb, - cache=cache, - ) - - vid, vid_shape = self.vid_out(vid, vid_shape, cache) - return NaDiTOutput(vid_sample=vid) diff --git a/models_x/dit/normalization.py b/models_x/dit/normalization.py deleted file mode 100644 index 98827a9c71f9fd6e461937774d022b68844aee34..0000000000000000000000000000000000000000 --- a/models_x/dit/normalization.py +++ /dev/null @@ -1,63 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
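NaDiTUpscaler above differs from NaDiT mainly in its conditioning: a second TimeEmbedding (emb_scale) encodes the downscale factor and is summed with the timestep embedding before the blocks run. A rough sketch of that summed conditioning, with stand-in single-layer projections and illustrative dimensions in place of the full TimeEmbedding MLPs:

```python
import torch
from torch import nn
from diffusers.models.embeddings import get_timestep_embedding

def embed(values, dim=256):
    # Shared sinusoidal front-end for both conditioning signals.
    return get_timestep_embedding(values, dim, flip_sin_to_cos=False, downscale_freq_shift=0)

proj_t = nn.Linear(256, 512)     # stand-in for emb_in's MLP
proj_s = nn.Linear(256, 512)     # stand-in for emb_scale's MLP

timestep = torch.tensor([500.0])
downscale = torch.tensor([4.0])  # e.g. x4 super-resolution

emb = proj_t(embed(timestep)) + proj_s(embed(downscale))  # summed conditioning
print(emb.shape)  # torch.Size([1, 512])
```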
- -from typing import Callable, Optional -from diffusers.models.normalization import RMSNorm -from torch import nn - -# (dim: int, eps: float, elementwise_affine: bool) -norm_layer_type = Callable[[int, float, bool], nn.Module] - - -def get_norm_layer(norm_type: Optional[str]) -> norm_layer_type: - - def _norm_layer(dim: int, eps: float, elementwise_affine: bool): - if norm_type is None: - return nn.Identity() - - if norm_type == "layer": - return nn.LayerNorm( - normalized_shape=dim, - eps=eps, - elementwise_affine=elementwise_affine, - ) - - if norm_type == "rms": - return RMSNorm( - dim=dim, - eps=eps, - elementwise_affine=elementwise_affine, - ) - - if norm_type == "fusedln": - from apex.normalization import FusedLayerNorm - - return FusedLayerNorm( - normalized_shape=dim, - elementwise_affine=elementwise_affine, - eps=eps, - ) - - if norm_type == "fusedrms": - from apex.normalization import FusedRMSNorm - - return FusedRMSNorm( - normalized_shape=dim, - elementwise_affine=elementwise_affine, - eps=eps, - ) - - raise NotImplementedError(f"{norm_type} is not supported") - - return _norm_layer diff --git a/models_x/dit/patch.py b/models_x/dit/patch.py deleted file mode 100644 index d98158e34a94e0447ed82b92fbfa289bf1a2be1d..0000000000000000000000000000000000000000 --- a/models_x/dit/patch.py +++ /dev/null @@ -1,112 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Tuple, Union -import torch -from einops import rearrange -from torch import nn -from torch.nn.modules.utils import _triple - -from common.cache import Cache -from common.distributed.ops import gather_outputs, slice_inputs - -from . 
import na - - -class PatchIn(nn.Module): - def __init__( - self, - in_channels: int, - patch_size: Union[int, Tuple[int, int, int]], - dim: int, - ): - super().__init__() - t, h, w = _triple(patch_size) - self.patch_size = t, h, w - self.proj = nn.Linear(in_channels * t * h * w, dim) - - def forward( - self, - vid: torch.Tensor, - ) -> torch.Tensor: - t, h, w = self.patch_size - vid = rearrange(vid, "b c (T t) (H h) (W w) -> b T H W (t h w c)", t=t, h=h, w=w) - vid = self.proj(vid) - return vid - - -class PatchOut(nn.Module): - def __init__( - self, - out_channels: int, - patch_size: Union[int, Tuple[int, int, int]], - dim: int, - ): - super().__init__() - t, h, w = _triple(patch_size) - self.patch_size = t, h, w - self.proj = nn.Linear(dim, out_channels * t * h * w) - - def forward( - self, - vid: torch.Tensor, - ) -> torch.Tensor: - t, h, w = self.patch_size - vid = self.proj(vid) - vid = rearrange(vid, "b T H W (t h w c) -> b c (T t) (H h) (W w)", t=t, h=h, w=w) - return vid - - -class NaPatchIn(PatchIn): - def forward( - self, - vid: torch.Tensor, # l c - vid_shape: torch.LongTensor, - ) -> torch.Tensor: - t, h, w = self.patch_size - if not (t == h == w == 1): - vid, vid_shape = na.rearrange( - vid, vid_shape, "(T t) (H h) (W w) c -> T H W (t h w c)", t=t, h=h, w=w - ) - # slice vid after patching in when using sequence parallelism - vid = slice_inputs(vid, dim=0) - vid = self.proj(vid) - return vid, vid_shape - - -class NaPatchOut(PatchOut): - def forward( - self, - vid: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, - cache: Cache = Cache(disable=True), - ) -> Tuple[ - torch.FloatTensor, - torch.LongTensor, - ]: - t, h, w = self.patch_size - vid = self.proj(vid) - # gather vid before patching out when enabling sequence parallelism - vid = gather_outputs( - vid, - gather_dim=0, - padding_dim=0, - unpad_shape=vid_shape, - cache=cache.namespace("vid"), - ) - if not (t == h == w == 1): - vid, vid_shape = na.rearrange( - vid, vid_shape, "T H W (t h w c) -> (T t) (H h) (W w) c", t=t, h=h, w=w - ) - return vid, vid_shape diff --git a/models_x/dit/rope.py b/models_x/dit/rope.py deleted file mode 100644 index 32a4815a1b349001cb86ea6d752fb4f91f6e655e..0000000000000000000000000000000000000000 --- a/models_x/dit/rope.py +++ /dev/null @@ -1,101 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from functools import lru_cache -from typing import Tuple -import torch -from einops import rearrange -from rotary_embedding_torch import RotaryEmbedding, apply_rotary_emb -from torch import nn - -from common.cache import Cache - - -class RotaryEmbeddingBase(nn.Module): - def __init__(self, dim: int, rope_dim: int): - super().__init__() - self.rope = RotaryEmbedding( - dim=dim // rope_dim, - freqs_for="pixel", - max_freq=256, - ) - # 1. Set model.requires_grad_(True) after model creation will make - # the `requires_grad=False` for rope freqs no longer hold. - # 2. 
Even if we don't set requires_grad_(True) explicitly, - # FSDP is not memory efficient when handling fsdp_wrap - # with mixed requires_grad=True/False. - # With above consideration, it is easier just remove the freqs - # out of nn.Parameters when `learned_freq=False` - freqs = self.rope.freqs - del self.rope.freqs - self.rope.register_buffer("freqs", freqs.data) - - @lru_cache(maxsize=128) - def get_axial_freqs(self, *dims): - return self.rope.get_axial_freqs(*dims) - - -class RotaryEmbedding3d(RotaryEmbeddingBase): - def __init__(self, dim: int): - super().__init__(dim, rope_dim=3) - - def forward( - self, - q: torch.FloatTensor, # b h l d - k: torch.FloatTensor, # b h l d - size: Tuple[int, int, int], - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - T, H, W = size - freqs = self.get_axial_freqs(T, H, W) - q = rearrange(q, "b h (T H W) d -> b h T H W d", T=T, H=H, W=W) - k = rearrange(k, "b h (T H W) d -> b h T H W d", T=T, H=H, W=W) - q = apply_rotary_emb(freqs, q) - k = apply_rotary_emb(freqs, k) - q = rearrange(q, "b h T H W d -> b h (T H W) d") - k = rearrange(k, "b h T H W d -> b h (T H W) d") - return q, k - - -class NaRotaryEmbedding3d(RotaryEmbedding3d): - def forward( - self, - q: torch.FloatTensor, # L h d - k: torch.FloatTensor, # L h d - shape: torch.LongTensor, - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - freqs = cache("rope_freqs_3d", lambda: self.get_freqs(shape)) - q = rearrange(q, "L h d -> h L d") - k = rearrange(k, "L h d -> h L d") - q = apply_rotary_emb(freqs, q.float()).to(q.dtype) - k = apply_rotary_emb(freqs, k.float()).to(k.dtype) - q = rearrange(q, "h L d -> L h d") - k = rearrange(k, "h L d -> L h d") - return q, k - - def get_freqs( - self, - shape: torch.LongTensor, - ) -> torch.Tensor: - freq_list = [] - for f, h, w in shape.tolist(): - freqs = self.get_axial_freqs(f, h, w) - freq_list.append(freqs.view(-1, freqs.size(-1))) - return torch.cat(freq_list, dim=0) diff --git a/models_x/dit/window.py b/models_x/dit/window.py deleted file mode 100644 index b7475921ae283cf76d82bff7521233c133f54bfd..0000000000000000000000000000000000000000 --- a/models_x/dit/window.py +++ /dev/null @@ -1,83 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
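The rotary embedding above is axial: separate frequencies per (T, H, W) axis, computed once per shape and memoized with lru_cache. For native-resolution batches, NaRotaryEmbedding3d.get_freqs flattens and concatenates each sample's frequency grid so it lines up with the packed token sequence. A sketch of that concatenation using the same rotary_embedding_torch helpers as above (head_dim and shapes are illustrative):

```python
import torch
from rotary_embedding_torch import RotaryEmbedding

head_dim = 64
rope = RotaryEmbedding(dim=(head_dim // 2) // 3, freqs_for="pixel", max_freq=256)

shapes = [(4, 8, 8), (2, 12, 6)]  # per-sample (T, H, W), as carried by vid_shape
freq_list = []
for t, h, w in shapes:
    freqs = rope.get_axial_freqs(t, h, w)              # per-axis frequency grid
    freq_list.append(freqs.view(-1, freqs.size(-1)))   # flatten to (t*h*w, d)

freqs_all = torch.cat(freq_list, dim=0)
print(freqs_all.shape[0])  # 400 == 4*8*8 + 2*12*6, matching the packed tokens
```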
- -from math import ceil -from typing import Tuple -import math - -def get_window_op(name: str): - if name == "720pwin_by_size_bysize": - return make_720Pwindows_bysize - if name == "720pswin_by_size_bysize": - return make_shifted_720Pwindows_bysize - raise ValueError(f"Unknown windowing method: {name}") - - -# -------------------------------- Windowing -------------------------------- # -def make_720Pwindows_bysize(size: Tuple[int, int, int], num_windows: Tuple[int, int, int]): - t, h, w = size - resized_nt, resized_nh, resized_nw = num_windows - #cal windows under 720p - scale = math.sqrt((45 * 80) / (h * w)) - resized_h, resized_w = round(h * scale), round(w * scale) - wh, ww = ceil(resized_h / resized_nh), ceil(resized_w / resized_nw) # window size. - wt = ceil(min(t, 30) / resized_nt) # window size. - nt, nh, nw = ceil(t / wt), ceil(h / wh), ceil(w / ww) # window size. - return [ - ( - slice(it * wt, min((it + 1) * wt, t)), - slice(ih * wh, min((ih + 1) * wh, h)), - slice(iw * ww, min((iw + 1) * ww, w)), - ) - for iw in range(nw) - if min((iw + 1) * ww, w) > iw * ww - for ih in range(nh) - if min((ih + 1) * wh, h) > ih * wh - for it in range(nt) - if min((it + 1) * wt, t) > it * wt - ] - -def make_shifted_720Pwindows_bysize(size: Tuple[int, int, int], num_windows: Tuple[int, int, int]): - t, h, w = size - resized_nt, resized_nh, resized_nw = num_windows - #cal windows under 720p - scale = math.sqrt((45 * 80) / (h * w)) - resized_h, resized_w = round(h * scale), round(w * scale) - wh, ww = ceil(resized_h / resized_nh), ceil(resized_w / resized_nw) # window size. - wt = ceil(min(t, 30) / resized_nt) # window size. - - st, sh, sw = ( # shift size. - 0.5 if wt < t else 0, - 0.5 if wh < h else 0, - 0.5 if ww < w else 0, - ) - nt, nh, nw = ceil((t - st) / wt), ceil((h - sh) / wh), ceil((w - sw) / ww) # window size. - nt, nh, nw = ( # number of window. - nt + 1 if st > 0 else 1, - nh + 1 if sh > 0 else 1, - nw + 1 if sw > 0 else 1, - ) - return [ - ( - slice(max(int((it - st) * wt), 0), min(int((it - st + 1) * wt), t)), - slice(max(int((ih - sh) * wh), 0), min(int((ih - sh + 1) * wh), h)), - slice(max(int((iw - sw) * ww), 0), min(int((iw - sw + 1) * ww), w)), - ) - for iw in range(nw) - if min(int((iw - sw + 1) * ww), w) > max(int((iw - sw) * ww), 0) - for ih in range(nh) - if min(int((ih - sh + 1) * wh), h) > max(int((ih - sh) * wh), 0) - for it in range(nt) - if min(int((it - st + 1) * wt), t) > max(int((it - st) * wt), 0) - ] diff --git a/models_x/dit_v2/attention.py b/models_x/dit_v2/attention.py deleted file mode 100644 index 9201fe095778db21ebd3384d163b0ccac4b35664..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/attention.py +++ /dev/null @@ -1,46 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
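The two windowing helpers above size their windows on a 720p-equivalent grid: the spatial extent is rescaled to roughly 45x80, divided by the requested number of windows per axis, and the resulting window size is applied back on the real grid (the shifted variant then offsets by half a window, as in Swin attention). The core arithmetic for an illustrative latent grid:

```python
from math import ceil, sqrt

t, h, w = 30, 60, 106        # illustrative latent (T, H, W) grid
num_windows = (1, 3, 5)      # requested windows per axis (nt, nh, nw)

resized_nt, resized_nh, resized_nw = num_windows
scale = sqrt((45 * 80) / (h * w))                  # rescale to a 45x80 (720p-like) grid
resized_h, resized_w = round(h * scale), round(w * scale)

wh = ceil(resized_h / resized_nh)                  # window height on the resized grid
ww = ceil(resized_w / resized_nw)                  # window width on the resized grid
wt = ceil(min(t, 30) / resized_nt)                 # temporal window size (capped at 30)

nt, nh, nw = ceil(t / wt), ceil(h / wh), ceil(w / ww)  # actual number of windows
print((wt, wh, ww), (nt, nh, nw))  # (30, 15, 16) (1, 4, 7)
```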
- -import torch -import torch.nn.functional as F - -from flash_attn import flash_attn_varlen_func - -from torch import nn - -class TorchAttention(nn.Module): - def tflops(self, args, kwargs, output) -> float: - assert len(args) == 0 or len(args) > 2, "query, key should both provided by args / kwargs" - q = kwargs.get("query") or args[0] - k = kwargs.get("key") or args[1] - b, h, sq, d = q.shape - b, h, sk, d = k.shape - return b * h * (4 * d * (sq / 1e6) * (sk / 1e6)) - - def forward(self, *args, **kwargs): - return F.scaled_dot_product_attention(*args, **kwargs) - - -class FlashAttentionVarlen(nn.Module): - def tflops(self, args, kwargs, output) -> float: - cu_seqlens_q = kwargs["cu_seqlens_q"] - cu_seqlens_k = kwargs["cu_seqlens_k"] - _, h, d = output.shape - seqlens_q = (cu_seqlens_q[1:] - cu_seqlens_q[:-1]) / 1e6 - seqlens_k = (cu_seqlens_k[1:] - cu_seqlens_k[:-1]) / 1e6 - return h * (4 * d * (seqlens_q * seqlens_k).sum()) - - def forward(self, *args, **kwargs): - kwargs["deterministic"] = torch.are_deterministic_algorithms_enabled() - return flash_attn_varlen_func(*args, **kwargs) diff --git a/models_x/dit_v2/embedding.py b/models_x/dit_v2/embedding.py deleted file mode 100644 index e972244f5767c9f34e5e77bb180ae720ce88b89c..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/embedding.py +++ /dev/null @@ -1,62 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Optional, Union -import torch -from diffusers.models.embeddings import get_timestep_embedding -from torch import nn - - -def emb_add(emb1: torch.Tensor, emb2: Optional[torch.Tensor]): - return emb1 if emb2 is None else emb1 + emb2 - - -class TimeEmbedding(nn.Module): - def __init__( - self, - sinusoidal_dim: int, - hidden_dim: int, - output_dim: int, - ): - super().__init__() - self.sinusoidal_dim = sinusoidal_dim - self.proj_in = nn.Linear(sinusoidal_dim, hidden_dim) - self.proj_hid = nn.Linear(hidden_dim, hidden_dim) - self.proj_out = nn.Linear(hidden_dim, output_dim) - self.act = nn.SiLU() - - def forward( - self, - timestep: Union[int, float, torch.IntTensor, torch.FloatTensor], - device: torch.device, - dtype: torch.dtype, - ) -> torch.FloatTensor: - if not torch.is_tensor(timestep): - timestep = torch.tensor([timestep], device=device, dtype=dtype) - if timestep.ndim == 0: - timestep = timestep[None] - - emb = get_timestep_embedding( - timesteps=timestep, - embedding_dim=self.sinusoidal_dim, - flip_sin_to_cos=False, - downscale_freq_shift=0, - ) - emb = emb.to(dtype) - emb = self.proj_in(emb) - emb = self.act(emb) - emb = self.proj_hid(emb) - emb = self.act(emb) - emb = self.proj_out(emb) - return emb diff --git a/models_x/dit_v2/mlp.py b/models_x/dit_v2/mlp.py deleted file mode 100644 index 2d05cb021f3e3c6ac05c0e7ae1aa8a6d29475b87..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/mlp.py +++ /dev/null @@ -1,62 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. 
and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Optional -import torch -import torch.nn.functional as F -from torch import nn - - -def get_mlp(mlp_type: Optional[str] = "normal"): - if mlp_type == "normal": - return MLP - elif mlp_type == "swiglu": - return SwiGLUMLP - - -class MLP(nn.Module): - def __init__( - self, - dim: int, - expand_ratio: int, - ): - super().__init__() - self.proj_in = nn.Linear(dim, dim * expand_ratio) - self.act = nn.GELU("tanh") - self.proj_out = nn.Linear(dim * expand_ratio, dim) - - def forward(self, x: torch.FloatTensor) -> torch.FloatTensor: - x = self.proj_in(x) - x = self.act(x) - x = self.proj_out(x) - return x - - -class SwiGLUMLP(nn.Module): - def __init__( - self, - dim: int, - expand_ratio: int, - multiple_of: int = 256, - ): - super().__init__() - hidden_dim = int(2 * dim * expand_ratio / 3) - hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) - self.proj_in_gate = nn.Linear(dim, hidden_dim, bias=False) - self.proj_out = nn.Linear(hidden_dim, dim, bias=False) - self.proj_in = nn.Linear(dim, hidden_dim, bias=False) - - def forward(self, x: torch.FloatTensor) -> torch.FloatTensor: - x = self.proj_out(F.silu(self.proj_in_gate(x)) * self.proj_in(x)) - return x diff --git a/models_x/dit_v2/mm.py b/models_x/dit_v2/mm.py deleted file mode 100644 index 344f89a8fa22b9a5473b8d25f208085a630f0c85..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/mm.py +++ /dev/null @@ -1,74 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
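FlashAttentionVarlen above (dit_v2/attention.py) consumes packed, batch-free q/k/v; the batch structure lives entirely in cu_seqlens, the per-sample token counts cumulated with a leading zero, exactly as the attention blocks build them with F.pad(lengths.cumsum(0), (1, 0)). A small sketch of that bookkeeping with illustrative lengths:

```python
import torch
import torch.nn.functional as F

# Per-sample token counts for a packed multimodal sequence
# (video + text tokens per window/sample); values are illustrative.
all_len = torch.tensor([400, 144, 96])

cu_seqlens = F.pad(all_len.cumsum(0), (1, 0)).int()  # tensor([0, 400, 544, 640])
max_seqlen = int(all_len.max())                      # 400
print(cu_seqlens.tolist(), max_seqlen)

# flash_attn_varlen_func(q, k, v, cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
#                        max_seqlen_q=max_seqlen, max_seqlen_k=max_seqlen)
# would then attend within each [start, end) span independently.
```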
- -from dataclasses import dataclass -from typing import Any, Callable, Dict, List, Tuple -import torch -from torch import nn - - -@dataclass -class MMArg: - vid: Any - txt: Any - - -def get_args(key: str, args: List[Any]) -> List[Any]: - return [getattr(v, key) if isinstance(v, MMArg) else v for v in args] - - -def get_kwargs(key: str, kwargs: Dict[str, Any]) -> Dict[str, Any]: - return {k: getattr(v, key) if isinstance(v, MMArg) else v for k, v in kwargs.items()} - - -class MMModule(nn.Module): - def __init__( - self, - module: Callable[..., nn.Module], - *args, - shared_weights: bool = False, - vid_only: bool = False, - **kwargs, - ): - super().__init__() - self.shared_weights = shared_weights - self.vid_only = vid_only - if self.shared_weights: - assert get_args("vid", args) == get_args("txt", args) - assert get_kwargs("vid", kwargs) == get_kwargs("txt", kwargs) - self.all = module(*get_args("vid", args), **get_kwargs("vid", kwargs)) - else: - self.vid = module(*get_args("vid", args), **get_kwargs("vid", kwargs)) - self.txt = ( - module(*get_args("txt", args), **get_kwargs("txt", kwargs)) - if not vid_only - else None - ) - - def forward( - self, - vid: torch.FloatTensor, - txt: torch.FloatTensor, - *args, - **kwargs, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - vid_module = self.vid if not self.shared_weights else self.all - vid = vid_module(vid, *get_args("vid", args), **get_kwargs("vid", kwargs)) - if not self.vid_only: - txt_module = self.txt if not self.shared_weights else self.all - txt = txt_module(txt, *get_args("txt", args), **get_kwargs("txt", kwargs)) - return vid, txt diff --git a/models_x/dit_v2/modulation.py b/models_x/dit_v2/modulation.py deleted file mode 100644 index 9e14bb005ef2d0a2c7205f593c483e8862a42858..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/modulation.py +++ /dev/null @@ -1,102 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Callable, List, Optional -import torch -from einops import rearrange -from torch import nn - -from common.cache import Cache -from common.distributed.ops import slice_inputs - -# (dim: int, emb_dim: int) -ada_layer_type = Callable[[int, int], nn.Module] - - -def get_ada_layer(ada_layer: str) -> ada_layer_type: - if ada_layer == "single": - return AdaSingle - raise NotImplementedError(f"{ada_layer} is not supported") - - -def expand_dims(x: torch.Tensor, dim: int, ndim: int): - """ - Expand tensor "x" to "ndim" by adding empty dims at "dim". - Example: x is (b d), target ndim is 5, add dim at 1, return (b 1 1 1 d). 
- """ - shape = x.shape - shape = shape[:dim] + (1,) * (ndim - len(shape)) + shape[dim:] - return x.reshape(shape) - - -class AdaSingle(nn.Module): - def __init__( - self, - dim: int, - emb_dim: int, - layers: List[str], - modes: List[str] = ["in", "out"], - ): - assert emb_dim == 6 * dim, "AdaSingle requires emb_dim == 6 * dim" - super().__init__() - self.dim = dim - self.emb_dim = emb_dim - self.layers = layers - for l in layers: - if "in" in modes: - self.register_parameter(f"{l}_shift", nn.Parameter(torch.randn(dim) / dim**0.5)) - self.register_parameter( - f"{l}_scale", nn.Parameter(torch.randn(dim) / dim**0.5 + 1) - ) - if "out" in modes: - self.register_parameter(f"{l}_gate", nn.Parameter(torch.randn(dim) / dim**0.5)) - - def forward( - self, - hid: torch.FloatTensor, # b ... c - emb: torch.FloatTensor, # b d - layer: str, - mode: str, - cache: Cache = Cache(disable=True), - branch_tag: str = "", - hid_len: Optional[torch.LongTensor] = None, # b - ) -> torch.FloatTensor: - idx = self.layers.index(layer) - emb = rearrange(emb, "b (d l g) -> b d l g", l=len(self.layers), g=3)[..., idx, :] - emb = expand_dims(emb, 1, hid.ndim + 1) - - if hid_len is not None: - emb = cache( - f"emb_repeat_{idx}_{branch_tag}", - lambda: slice_inputs( - torch.cat([e.repeat(l, *([1] * e.ndim)) for e, l in zip(emb, hid_len)]), - dim=0, - ), - ) - - shiftA, scaleA, gateA = emb.unbind(-1) - shiftB, scaleB, gateB = ( - getattr(self, f"{layer}_shift", None), - getattr(self, f"{layer}_scale", None), - getattr(self, f"{layer}_gate", None), - ) - - if mode == "in": - return hid.mul_(scaleA + scaleB).add_(shiftA + shiftB) - if mode == "out": - return hid.mul_(gateA + gateB) - raise NotImplementedError - - def extra_repr(self) -> str: - return f"dim={self.dim}, emb_dim={self.emb_dim}, layers={self.layers}" \ No newline at end of file diff --git a/models_x/dit_v2/na.py b/models_x/dit_v2/na.py deleted file mode 100644 index 0dbd546c4705b3b9c7c19a9823f9d113a0447616..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/na.py +++ /dev/null @@ -1,241 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from itertools import chain -from typing import Callable, Dict, List, Tuple -import einops -import torch - - -def flatten( - hid: List[torch.FloatTensor], # List of (*** c) -) -> Tuple[ - torch.FloatTensor, # (L c) - torch.LongTensor, # (b n) -]: - assert len(hid) > 0 - shape = torch.stack([torch.tensor(x.shape[:-1], device=hid[0].device) for x in hid]) - hid = torch.cat([x.flatten(0, -2) for x in hid]) - return hid, shape - - -def unflatten( - hid: torch.FloatTensor, # (L c) or (L ... c) - hid_shape: torch.LongTensor, # (b n) -) -> List[torch.Tensor]: # List of (*** c) or (*** ... c) - hid_len = hid_shape.prod(-1) - hid = hid.split(hid_len.tolist()) - hid = [x.unflatten(0, s.tolist()) for x, s in zip(hid, hid_shape)] - return hid - - -def concat( - vid: torch.FloatTensor, # (VL ... 
c) - txt: torch.FloatTensor, # (TL ... c) - vid_len: torch.LongTensor, # (b) - txt_len: torch.LongTensor, # (b) -) -> torch.FloatTensor: # (L ... c) - vid = torch.split(vid, vid_len.tolist()) - txt = torch.split(txt, txt_len.tolist()) - return torch.cat(list(chain(*zip(vid, txt)))) - - -def concat_idx( - vid_len: torch.LongTensor, # (b) - txt_len: torch.LongTensor, # (b) -) -> Tuple[ - Callable, - Callable, -]: - device = vid_len.device - vid_idx = torch.arange(vid_len.sum(), device=device) - txt_idx = torch.arange(len(vid_idx), len(vid_idx) + txt_len.sum(), device=device) - tgt_idx = concat(vid_idx, txt_idx, vid_len, txt_len) - src_idx = torch.argsort(tgt_idx) - return ( - lambda vid, txt: torch.index_select(torch.cat([vid, txt]), 0, tgt_idx), - lambda all: torch.index_select(all, 0, src_idx).split([len(vid_idx), len(txt_idx)]), - ) - - -def unconcat( - all: torch.FloatTensor, # (L ... c) - vid_len: torch.LongTensor, # (b) - txt_len: torch.LongTensor, # (b) -) -> Tuple[ - torch.FloatTensor, # (VL ... c) - torch.FloatTensor, # (TL ... c) -]: - interleave_len = list(chain(*zip(vid_len.tolist(), txt_len.tolist()))) - all = all.split(interleave_len) - vid = torch.cat(all[0::2]) - txt = torch.cat(all[1::2]) - return vid, txt - - -def repeat_concat( - vid: torch.FloatTensor, # (VL ... c) - txt: torch.FloatTensor, # (TL ... c) - vid_len: torch.LongTensor, # (n*b) - txt_len: torch.LongTensor, # (b) - txt_repeat: List, # (n) -) -> torch.FloatTensor: # (L ... c) - vid = torch.split(vid, vid_len.tolist()) - txt = torch.split(txt, txt_len.tolist()) - txt = [[x] * n for x, n in zip(txt, txt_repeat)] - txt = list(chain(*txt)) - return torch.cat(list(chain(*zip(vid, txt)))) - - -def repeat_concat_idx( - vid_len: torch.LongTensor, # (n*b) - txt_len: torch.LongTensor, # (b) - txt_repeat: torch.LongTensor, # (n) -) -> Tuple[ - Callable, - Callable, -]: - device = vid_len.device - vid_idx = torch.arange(vid_len.sum(), device=device) - txt_idx = torch.arange(len(vid_idx), len(vid_idx) + txt_len.sum(), device=device) - txt_repeat_list = txt_repeat.tolist() - tgt_idx = repeat_concat(vid_idx, txt_idx, vid_len, txt_len, txt_repeat) - src_idx = torch.argsort(tgt_idx) - txt_idx_len = len(tgt_idx) - len(vid_idx) - repeat_txt_len = (txt_len * txt_repeat).tolist() - - def unconcat_coalesce(all): - """ - Un-concat vid & txt, and coalesce the repeated txt. - e.g. vid [0 1 2 3 4 5 6 7 8] -> 3 splits -> [0 1 2] [3 4 5] [6 7 8] - txt [9 10] - repeat_concat ==> [0 1 2 9 10 3 4 5 9 10 6 7 8 9 10] - 1. argsort re-index ==> [0 1 2 3 4 5 6 7 8 9 9 9 10 10 10] - split ==> vid_out [0 1 2 3 4 5 6 7 8] txt_out [9 9 9 10 10 10] - 2. reshape & mean for each sample to coalesce the repeated txt. - """ - vid_out, txt_out = all[src_idx].split([len(vid_idx), txt_idx_len]) - txt_out_coalesced = [] - for txt, repeat_time in zip(txt_out.split(repeat_txt_len), txt_repeat_list): - txt = txt.reshape(-1, repeat_time, *txt.shape[1:]).mean(1) - txt_out_coalesced.append(txt) - return vid_out, torch.cat(txt_out_coalesced) - - # Note: Backward of torch.index_select is non-deterministic when existing repeated index, - # the difference may cumulative like torch.repeat_interleave, so we use vanilla index here. 
- return ( - lambda vid, txt: torch.cat([vid, txt])[tgt_idx], - lambda all: unconcat_coalesce(all), - ) - - -def rearrange( - hid: torch.FloatTensor, # (L c) - hid_shape: torch.LongTensor, # (b n) - pattern: str, - **kwargs: Dict[str, int], -) -> Tuple[ - torch.FloatTensor, - torch.LongTensor, -]: - return flatten([einops.rearrange(h, pattern, **kwargs) for h in unflatten(hid, hid_shape)]) - - -def rearrange_idx( - hid_shape: torch.LongTensor, # (b n) - pattern: str, - **kwargs: Dict[str, int], -) -> Tuple[Callable, Callable, torch.LongTensor]: - hid_idx = torch.arange(hid_shape.prod(-1).sum(), device=hid_shape.device).unsqueeze(-1) - tgt_idx, tgt_shape = rearrange(hid_idx, hid_shape, pattern, **kwargs) - tgt_idx = tgt_idx.squeeze(-1) - src_idx = torch.argsort(tgt_idx) - return ( - lambda hid: torch.index_select(hid, 0, tgt_idx), - lambda hid: torch.index_select(hid, 0, src_idx), - tgt_shape, - ) - - -def repeat( - hid: torch.FloatTensor, # (L c) - hid_shape: torch.LongTensor, # (b n) - pattern: str, - **kwargs: Dict[str, torch.LongTensor], # (b) -) -> Tuple[ - torch.FloatTensor, - torch.LongTensor, -]: - hid = unflatten(hid, hid_shape) - kwargs = [{k: v[i].item() for k, v in kwargs.items()} for i in range(len(hid))] - return flatten([einops.repeat(h, pattern, **a) for h, a in zip(hid, kwargs)]) - - -def pack( - samples: List[torch.Tensor], # List of (h w c). -) -> Tuple[ - List[torch.Tensor], # groups [(b1 h1 w1 c1), (b2 h2 w2 c2)] - List[List[int]], # reversal indices. -]: - batches = {} - indices = {} - for i, sample in enumerate(samples): - shape = sample.shape - batches[shape] = batches.get(shape, []) - indices[shape] = indices.get(shape, []) - batches[shape].append(sample) - indices[shape].append(i) - - batches = list(map(torch.stack, batches.values())) - indices = list(indices.values()) - return batches, indices - - -def unpack( - batches: List[torch.Tensor], - indices: List[List[int]], -) -> List[torch.Tensor]: - samples = [None] * (max(chain(*indices)) + 1) - for batch, index in zip(batches, indices): - for sample, i in zip(batch.unbind(), index): - samples[i] = sample - return samples - - -def window( - hid: torch.FloatTensor, # (L c) - hid_shape: torch.LongTensor, # (b n) - window_fn: Callable[[torch.Tensor], List[torch.Tensor]], -): - hid = unflatten(hid, hid_shape) - hid = list(map(window_fn, hid)) - hid_windows = torch.tensor(list(map(len, hid)), device=hid_shape.device) - hid, hid_shape = flatten(list(chain(*hid))) - return hid, hid_shape, hid_windows - - -def window_idx( - hid_shape: torch.LongTensor, # (b n) - window_fn: Callable[[torch.Tensor], List[torch.Tensor]], -): - hid_idx = torch.arange(hid_shape.prod(-1).sum(), device=hid_shape.device).unsqueeze(-1) - tgt_idx, tgt_shape, tgt_windows = window(hid_idx, hid_shape, window_fn) - tgt_idx = tgt_idx.squeeze(-1) - src_idx = torch.argsort(tgt_idx) - return ( - lambda hid: torch.index_select(hid, 0, tgt_idx), - lambda hid: torch.index_select(hid, 0, src_idx), - tgt_shape, - tgt_windows, - ) diff --git a/models_x/dit_v2/nablocks/__init__.py b/models_x/dit_v2/nablocks/__init__.py deleted file mode 100644 index c1a9da26ef760575192042ea32b01bd9cd1a267d..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/nablocks/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
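Because the `na.py` helpers above define the token-packing convention used throughout the rest of the model, a self-contained round-trip sketch of `flatten`/`unflatten` (the shapes are arbitrary examples):

```python
import torch

def flatten(samples):
    # List of (... c) tensors -> one (L, c) tensor plus the per-sample shapes.
    shape = torch.stack([torch.tensor(x.shape[:-1]) for x in samples])
    return torch.cat([x.flatten(0, -2) for x in samples]), shape

def unflatten(hid, hid_shape):
    parts = hid.split(hid_shape.prod(-1).tolist())
    return [x.unflatten(0, s.tolist()) for x, s in zip(parts, hid_shape)]

c = 8
vid_samples = [torch.randn(2, 4, 4, c), torch.randn(1, 6, 6, c)]  # (T, H, W, c) per sample
hid, hid_shape = flatten(vid_samples)
print(hid.shape)   # torch.Size([68, 8])  (2*4*4 + 1*6*6 tokens)
print(hid_shape)   # tensor([[2, 4, 4], [1, 6, 6]])

restored = unflatten(hid, hid_shape)
assert all(torch.equal(a, b) for a, b in zip(restored, vid_samples))
```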
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from .mmsr_block import NaMMSRTransformerBlock - - -nadit_blocks = { - "mmdit_sr": NaMMSRTransformerBlock, -} - - -def get_nablock(block_type: str): - if block_type in nadit_blocks: - return nadit_blocks[block_type] - raise NotImplementedError(f"{block_type} is not supported") diff --git a/models_x/dit_v2/nablocks/attention/__init__.py b/models_x/dit_v2/nablocks/attention/__init__.py deleted file mode 100644 index a7561025245d888d26ade38f25668efb216cd907..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/nablocks/attention/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from .mmattn import NaMMAttention - -attns = { - "mm_full": NaMMAttention, -} - - -def get_attn(attn_type: str): - if attn_type in attns: - return attns[attn_type] - raise NotImplementedError(f"{attn_type} is not supported") diff --git a/models_x/dit_v2/nablocks/attention/mmattn.py b/models_x/dit_v2/nablocks/attention/mmattn.py deleted file mode 100644 index 4fea9cb9c6fa2f82dd1aba46d658a04a19a11305..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/nablocks/attention/mmattn.py +++ /dev/null @@ -1,266 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Optional, Tuple, Union -import torch -from einops import rearrange -from torch import nn -from torch.nn import functional as F -from torch.nn.modules.utils import _triple - -from common.cache import Cache -from common.distributed.ops import gather_heads_scatter_seq, gather_seq_scatter_heads_qkv - -from ... 
import na -from ...attention import FlashAttentionVarlen -from ...mm import MMArg, MMModule -from ...normalization import norm_layer_type -from ...rope import get_na_rope -from ...window import get_window_op -from itertools import chain - - -class NaMMAttention(nn.Module): - def __init__( - self, - vid_dim: int, - txt_dim: int, - heads: int, - head_dim: int, - qk_bias: bool, - qk_norm: norm_layer_type, - qk_norm_eps: float, - rope_type: Optional[str], - rope_dim: int, - shared_weights: bool, - **kwargs, - ): - super().__init__() - dim = MMArg(vid_dim, txt_dim) - inner_dim = heads * head_dim - qkv_dim = inner_dim * 3 - self.head_dim = head_dim - self.proj_qkv = MMModule( - nn.Linear, dim, qkv_dim, bias=qk_bias, shared_weights=shared_weights - ) - self.proj_out = MMModule(nn.Linear, inner_dim, dim, shared_weights=shared_weights) - self.norm_q = MMModule( - qk_norm, - dim=head_dim, - eps=qk_norm_eps, - elementwise_affine=True, - shared_weights=shared_weights, - ) - self.norm_k = MMModule( - qk_norm, - dim=head_dim, - eps=qk_norm_eps, - elementwise_affine=True, - shared_weights=shared_weights, - ) - - self.rope = get_na_rope(rope_type=rope_type, dim=rope_dim) - self.attn = FlashAttentionVarlen() - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - vid_qkv, txt_qkv = self.proj_qkv(vid, txt) - vid_qkv = gather_seq_scatter_heads_qkv( - vid_qkv, - seq_dim=0, - qkv_shape=vid_shape, - cache=cache.namespace("vid"), - ) - txt_qkv = gather_seq_scatter_heads_qkv( - txt_qkv, - seq_dim=0, - qkv_shape=txt_shape, - cache=cache.namespace("txt"), - ) - vid_qkv = rearrange(vid_qkv, "l (o h d) -> l o h d", o=3, d=self.head_dim) - txt_qkv = rearrange(txt_qkv, "l (o h d) -> l o h d", o=3, d=self.head_dim) - - vid_q, vid_k, vid_v = vid_qkv.unbind(1) - txt_q, txt_k, txt_v = txt_qkv.unbind(1) - - vid_q, txt_q = self.norm_q(vid_q, txt_q) - vid_k, txt_k = self.norm_k(vid_k, txt_k) - - if self.rope: - if self.rope.mm: - vid_q, vid_k, txt_q, txt_k = self.rope( - vid_q, vid_k, vid_shape, txt_q, txt_k, txt_shape, cache - ) - else: - vid_q, vid_k = self.rope(vid_q, vid_k, vid_shape, cache) - - vid_len = cache("vid_len", lambda: vid_shape.prod(-1)) - txt_len = cache("txt_len", lambda: txt_shape.prod(-1)) - all_len = cache("all_len", lambda: vid_len + txt_len) - - concat, unconcat = cache("mm_pnp", lambda: na.concat_idx(vid_len, txt_len)) - - attn = self.attn( - q=concat(vid_q, txt_q).bfloat16(), - k=concat(vid_k, txt_k).bfloat16(), - v=concat(vid_v, txt_v).bfloat16(), - cu_seqlens_q=cache("mm_seqlens", lambda: F.pad(all_len.cumsum(0), (1, 0)).int()), - cu_seqlens_k=cache("mm_seqlens", lambda: F.pad(all_len.cumsum(0), (1, 0)).int()), - max_seqlen_q=cache("mm_maxlen", lambda: all_len.max().item()), - max_seqlen_k=cache("mm_maxlen", lambda: all_len.max().item()), - ).type_as(vid_q) - - attn = rearrange(attn, "l h d -> l (h d)") - vid_out, txt_out = unconcat(attn) - vid_out = gather_heads_scatter_seq(vid_out, head_dim=1, seq_dim=0) - txt_out = gather_heads_scatter_seq(txt_out, head_dim=1, seq_dim=0) - - vid_out, txt_out = self.proj_out(vid_out, txt_out) - return vid_out, txt_out - - -class NaSwinAttention(NaMMAttention): - def __init__( - self, - *args, - window: Union[int, Tuple[int, int, int]], - window_method: str, - **kwargs, - ): - super().__init__(*args, **kwargs) - self.window = _triple(window) - self.window_method = window_method - 
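For the varlen attention call above, the `cu_seqlens` bookkeeping reduces to a padded cumulative sum over the per-sample (video + text) token counts; a tiny sketch with made-up lengths:

```python
import torch
import torch.nn.functional as F

vid_len = torch.tensor([32, 36])   # video tokens per sample after flattening (example values)
txt_len = torch.tensor([7, 5])     # text tokens per sample
all_len = vid_len + txt_len        # tensor([39, 41])

cu_seqlens = F.pad(all_len.cumsum(0), (1, 0)).int()   # tensor([ 0, 39, 80], dtype=torch.int32)
max_seqlen = all_len.max().item()                     # 41
print(cu_seqlens, max_seqlen)
```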
assert all(map(lambda v: isinstance(v, int) and v >= 0, self.window)) - - self.window_op = get_window_op(window_method) - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - - vid_qkv, txt_qkv = self.proj_qkv(vid, txt) - vid_qkv = gather_seq_scatter_heads_qkv( - vid_qkv, - seq_dim=0, - qkv_shape=vid_shape, - cache=cache.namespace("vid"), - ) - txt_qkv = gather_seq_scatter_heads_qkv( - txt_qkv, - seq_dim=0, - qkv_shape=txt_shape, - cache=cache.namespace("txt"), - ) - - # re-org the input seq for window attn - cache_win = cache.namespace(f"{self.window_method}_{self.window}_sd3") - - def make_window(x: torch.Tensor): - t, h, w, _ = x.shape - window_slices = self.window_op((t, h, w), self.window) - return [x[st, sh, sw] for (st, sh, sw) in window_slices] - - window_partition, window_reverse, window_shape, window_count = cache_win( - "win_transform", - lambda: na.window_idx(vid_shape, make_window), - ) - vid_qkv_win = window_partition(vid_qkv) - - vid_qkv_win = rearrange(vid_qkv_win, "l (o h d) -> l o h d", o=3, d=self.head_dim) - txt_qkv = rearrange(txt_qkv, "l (o h d) -> l o h d", o=3, d=self.head_dim) - - vid_q, vid_k, vid_v = vid_qkv_win.unbind(1) - txt_q, txt_k, txt_v = txt_qkv.unbind(1) - - vid_q, txt_q = self.norm_q(vid_q, txt_q) - vid_k, txt_k = self.norm_k(vid_k, txt_k) - - txt_len = cache("txt_len", lambda: txt_shape.prod(-1)) - - vid_len_win = cache_win("vid_len", lambda: window_shape.prod(-1)) - txt_len_win = cache_win("txt_len", lambda: txt_len.repeat_interleave(window_count)) - all_len_win = cache_win("all_len", lambda: vid_len_win + txt_len_win) - concat_win, unconcat_win = cache_win( - "mm_pnp", lambda: na.repeat_concat_idx(vid_len_win, txt_len, window_count) - ) - - # window rope - if self.rope: - if self.rope.mm: - # repeat text q and k for window mmrope - _, num_h, _ = txt_q.shape - txt_q_repeat = rearrange(txt_q, "l h d -> l (h d)") - txt_q_repeat = na.unflatten(txt_q_repeat, txt_shape) - txt_q_repeat = [[x] * n for x, n in zip(txt_q_repeat, window_count)] - txt_q_repeat = list(chain(*txt_q_repeat)) - txt_q_repeat, txt_shape_repeat = na.flatten(txt_q_repeat) - txt_q_repeat = rearrange(txt_q_repeat, "l (h d) -> l h d", h=num_h) - - txt_k_repeat = rearrange(txt_k, "l h d -> l (h d)") - txt_k_repeat = na.unflatten(txt_k_repeat, txt_shape) - txt_k_repeat = [[x] * n for x, n in zip(txt_k_repeat, window_count)] - txt_k_repeat = list(chain(*txt_k_repeat)) - txt_k_repeat, _ = na.flatten(txt_k_repeat) - txt_k_repeat = rearrange(txt_k_repeat, "l (h d) -> l h d", h=num_h) - - vid_q, vid_k, txt_q, txt_k = self.rope( - vid_q, vid_k, window_shape, txt_q_repeat, txt_k_repeat, txt_shape_repeat, cache_win - ) - else: - vid_q, vid_k = self.rope(vid_q, vid_k, window_shape, cache_win) - - out = self.attn( - q=concat_win(vid_q, txt_q).bfloat16(), - k=concat_win(vid_k, txt_k).bfloat16(), - v=concat_win(vid_v, txt_v).bfloat16(), - cu_seqlens_q=cache_win( - "vid_seqlens_q", lambda: F.pad(all_len_win.cumsum(0), (1, 0)).int() - ), - cu_seqlens_k=cache_win( - "vid_seqlens_k", lambda: F.pad(all_len_win.cumsum(0), (1, 0)).int() - ), - max_seqlen_q=cache_win("vid_max_seqlen_q", lambda: all_len_win.max().item()), - max_seqlen_k=cache_win("vid_max_seqlen_k", lambda: all_len_win.max().item()), - ).type_as(vid_q) - - # text pooling - vid_out, txt_out = unconcat_win(out) - - vid_out = rearrange(vid_out, "l h d -> l (h d)") 
- txt_out = rearrange(txt_out, "l h d -> l (h d)") - vid_out = window_reverse(vid_out) - - vid_out = gather_heads_scatter_seq(vid_out, head_dim=1, seq_dim=0) - txt_out = gather_heads_scatter_seq(txt_out, head_dim=1, seq_dim=0) - - vid_out, txt_out = self.proj_out(vid_out, txt_out) - - return vid_out, txt_out \ No newline at end of file diff --git a/models_x/dit_v2/nablocks/mmsr_block.py b/models_x/dit_v2/nablocks/mmsr_block.py deleted file mode 100644 index 407c5b3eac3d0e572a148283ac322cf50a77d8a4..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/nablocks/mmsr_block.py +++ /dev/null @@ -1,119 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Tuple -import torch -import torch.nn as nn - -# from ..cache import Cache -from common.cache import Cache - -from .attention.mmattn import NaSwinAttention -from ..mm import MMArg -from ..modulation import ada_layer_type -from ..normalization import norm_layer_type -from ..mm import MMArg, MMModule -from ..mlp import get_mlp - - -class NaMMSRTransformerBlock(nn.Module): - def __init__( - self, - *, - vid_dim: int, - txt_dim: int, - emb_dim: int, - heads: int, - head_dim: int, - expand_ratio: int, - norm: norm_layer_type, - norm_eps: float, - ada: ada_layer_type, - qk_bias: bool, - qk_norm: norm_layer_type, - mlp_type: str, - shared_weights: bool, - rope_type: str, - rope_dim: int, - is_last_layer: bool, - **kwargs, - ): - super().__init__() - dim = MMArg(vid_dim, txt_dim) - self.attn_norm = MMModule(norm, dim=dim, eps=norm_eps, elementwise_affine=False, shared_weights=shared_weights,) - - self.attn = NaSwinAttention( - vid_dim=vid_dim, - txt_dim=txt_dim, - heads=heads, - head_dim=head_dim, - qk_bias=qk_bias, - qk_norm=qk_norm, - qk_norm_eps=norm_eps, - rope_type=rope_type, - rope_dim=rope_dim, - shared_weights=shared_weights, - window=kwargs.pop("window", None), - window_method=kwargs.pop("window_method", None), - ) - - self.mlp_norm = MMModule(norm, dim=dim, eps=norm_eps, elementwise_affine=False, shared_weights=shared_weights, vid_only=is_last_layer) - self.mlp = MMModule( - get_mlp(mlp_type), - dim=dim, - expand_ratio=expand_ratio, - shared_weights=shared_weights, - vid_only=is_last_layer - ) - self.ada = MMModule(ada, dim=dim, emb_dim=emb_dim, layers=["attn", "mlp"], shared_weights=shared_weights, vid_only=is_last_layer) - self.is_last_layer = is_last_layer - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: torch.LongTensor, # b 1 - emb: torch.FloatTensor, - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - torch.LongTensor, - torch.LongTensor, - ]: - hid_len = MMArg( - cache("vid_len", lambda: vid_shape.prod(-1)), - cache("txt_len", lambda: txt_shape.prod(-1)), - ) - ada_kwargs = { - "emb": emb, - "hid_len": hid_len, - "cache": cache, - "branch_tag": MMArg("vid", "txt"), - } - - vid_attn, txt_attn = 
self.attn_norm(vid, txt) - vid_attn, txt_attn = self.ada(vid_attn, txt_attn, layer="attn", mode="in", **ada_kwargs) - vid_attn, txt_attn = self.attn(vid_attn, txt_attn, vid_shape, txt_shape, cache) - vid_attn, txt_attn = self.ada(vid_attn, txt_attn, layer="attn", mode="out", **ada_kwargs) - vid_attn, txt_attn = (vid_attn + vid), (txt_attn + txt) - - vid_mlp, txt_mlp = self.mlp_norm(vid_attn, txt_attn) - vid_mlp, txt_mlp = self.ada(vid_mlp, txt_mlp, layer="mlp", mode="in", **ada_kwargs) - vid_mlp, txt_mlp = self.mlp(vid_mlp, txt_mlp) - vid_mlp, txt_mlp = self.ada(vid_mlp, txt_mlp, layer="mlp", mode="out", **ada_kwargs) - vid_mlp, txt_mlp = (vid_mlp + vid_attn), (txt_mlp + txt_attn) - - return vid_mlp, txt_mlp, vid_shape, txt_shape diff --git a/models_x/dit_v2/nadit.py b/models_x/dit_v2/nadit.py deleted file mode 100644 index fe9d7f85fa38e330069d1888cdd996468c719144..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/nadit.py +++ /dev/null @@ -1,246 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union, Callable -import torch -from torch import nn - -from common.cache import Cache -from common.distributed.ops import slice_inputs - -from . 
import na -from .embedding import TimeEmbedding -from .modulation import get_ada_layer -from .nablocks import get_nablock -from .normalization import get_norm_layer -from .patch import get_na_patch_layers - -# Fake func, no checkpointing is required for inference -def gradient_checkpointing(module: Union[Callable, nn.Module], *args, enabled: bool, **kwargs): - return module(*args, **kwargs) - -@dataclass -class NaDiTOutput: - vid_sample: torch.Tensor - - -class NaDiT(nn.Module): - """ - Native Resolution Diffusion Transformer (NaDiT) - """ - - gradient_checkpointing = False - - def __init__( - self, - vid_in_channels: int, - vid_out_channels: int, - vid_dim: int, - txt_in_dim: Union[int, List[int]], - txt_dim: Optional[int], - emb_dim: int, - heads: int, - head_dim: int, - expand_ratio: int, - norm: Optional[str], - norm_eps: float, - ada: str, - qk_bias: bool, - qk_norm: Optional[str], - patch_size: Union[int, Tuple[int, int, int]], - num_layers: int, - block_type: Union[str, Tuple[str]], - mm_layers: Union[int, Tuple[bool]], - mlp_type: str = "normal", - patch_type: str = "v1", - rope_type: Optional[str] = "rope3d", - rope_dim: Optional[int] = None, - window: Optional[Tuple] = None, - window_method: Optional[Tuple[str]] = None, - msa_type: Optional[Tuple[str]] = None, - mca_type: Optional[Tuple[str]] = None, - txt_in_norm: Optional[str] = None, - txt_in_norm_scale_factor: int = 0.01, - txt_proj_type: Optional[str] = "linear", - vid_out_norm: Optional[str] = None, - **kwargs, - ): - ada = get_ada_layer(ada) - norm = get_norm_layer(norm) - qk_norm = get_norm_layer(qk_norm) - rope_dim = rope_dim if rope_dim is not None else head_dim // 2 - if isinstance(block_type, str): - block_type = [block_type] * num_layers - elif len(block_type) != num_layers: - raise ValueError("The ``block_type`` list should equal to ``num_layers``.") - super().__init__() - NaPatchIn, NaPatchOut = get_na_patch_layers(patch_type) - self.vid_in = NaPatchIn( - in_channels=vid_in_channels, - patch_size=patch_size, - dim=vid_dim, - ) - if not isinstance(txt_in_dim, int): - self.txt_in = nn.ModuleList([]) - for in_dim in txt_in_dim: - txt_norm_layer = get_norm_layer(txt_in_norm)(txt_dim, norm_eps, True) - if txt_proj_type == "linear": - txt_proj_layer = nn.Linear(in_dim, txt_dim) - else: - txt_proj_layer = nn.Sequential( - nn.Linear(in_dim, in_dim), nn.GELU("tanh"), nn.Linear(in_dim, txt_dim) - ) - torch.nn.init.constant_(txt_norm_layer.weight, txt_in_norm_scale_factor) - self.txt_in.append( - nn.Sequential( - txt_proj_layer, - txt_norm_layer, - ) - ) - else: - self.txt_in = ( - nn.Linear(txt_in_dim, txt_dim) - if txt_in_dim and txt_in_dim != txt_dim - else nn.Identity() - ) - self.emb_in = TimeEmbedding( - sinusoidal_dim=256, - hidden_dim=max(vid_dim, txt_dim), - output_dim=emb_dim, - ) - - if window is None or isinstance(window[0], int): - window = [window] * num_layers - if window_method is None or isinstance(window_method, str): - window_method = [window_method] * num_layers - - if msa_type is None or isinstance(msa_type, str): - msa_type = [msa_type] * num_layers - if mca_type is None or isinstance(mca_type, str): - mca_type = [mca_type] * num_layers - - self.blocks = nn.ModuleList( - [ - get_nablock(block_type[i])( - vid_dim=vid_dim, - txt_dim=txt_dim, - emb_dim=emb_dim, - heads=heads, - head_dim=head_dim, - expand_ratio=expand_ratio, - norm=norm, - norm_eps=norm_eps, - ada=ada, - qk_bias=qk_bias, - qk_norm=qk_norm, - shared_weights=not ( - (i < mm_layers) if isinstance(mm_layers, int) else mm_layers[i] - ), - 
mlp_type=mlp_type, - window=window[i], - window_method=window_method[i], - msa_type=msa_type[i], - mca_type=mca_type[i], - rope_type=rope_type, - rope_dim=rope_dim, - is_last_layer=(i == num_layers - 1), - **kwargs, - ) - for i in range(num_layers) - ] - ) - - self.vid_out_norm = None - if vid_out_norm is not None: - self.vid_out_norm = get_norm_layer(vid_out_norm)( - dim=vid_dim, - eps=norm_eps, - elementwise_affine=True, - ) - self.vid_out_ada = ada( - dim=vid_dim, - emb_dim=emb_dim, - layers=["out"], - modes=["in"], - ) - - self.vid_out = NaPatchOut( - out_channels=vid_out_channels, - patch_size=patch_size, - dim=vid_dim, - ) - - def set_gradient_checkpointing(self, enable: bool): - self.gradient_checkpointing = enable - - def forward( - self, - vid: torch.FloatTensor, # l c - txt: Union[torch.FloatTensor, List[torch.FloatTensor]], # l c - vid_shape: torch.LongTensor, # b 3 - txt_shape: Union[torch.LongTensor, List[torch.LongTensor]], # b 1 - timestep: Union[int, float, torch.IntTensor, torch.FloatTensor], # b - disable_cache: bool = False, # for test - ): - cache = Cache(disable=disable_cache) - - # slice vid after patching in when using sequence parallelism - if isinstance(txt, list): - assert isinstance(self.txt_in, nn.ModuleList) - txt = [ - na.unflatten(fc(i), s) for fc, i, s in zip(self.txt_in, txt, txt_shape) - ] # B L D - txt, txt_shape = na.flatten([torch.cat(t, dim=0) for t in zip(*txt)]) - txt = slice_inputs(txt, dim=0) - else: - txt = slice_inputs(txt, dim=0) - txt = self.txt_in(txt) - - # Video input. - # Sequence parallel slicing is done inside patching class. - vid, vid_shape = self.vid_in(vid, vid_shape, cache) - - # Embedding input. - emb = self.emb_in(timestep, device=vid.device, dtype=vid.dtype) - - # Body - for i, block in enumerate(self.blocks): - vid, txt, vid_shape, txt_shape = gradient_checkpointing( - enabled=(self.gradient_checkpointing and self.training), - module=block, - vid=vid, - txt=txt, - vid_shape=vid_shape, - txt_shape=txt_shape, - emb=emb, - cache=cache, - ) - - # Video output norm. - if self.vid_out_norm: - vid = self.vid_out_norm(vid) - vid = self.vid_out_ada( - vid, - emb=emb, - layer="out", - mode="in", - hid_len=cache("vid_len", lambda: vid_shape.prod(-1)), - cache=cache, - branch_tag="vid", - ) - - # Video output. - vid, vid_shape = self.vid_out(vid, vid_shape, cache) - return NaDiTOutput(vid_sample=vid) diff --git a/models_x/dit_v2/normalization.py b/models_x/dit_v2/normalization.py deleted file mode 100644 index 98827a9c71f9fd6e461937774d022b68844aee34..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/normalization.py +++ /dev/null @@ -1,63 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
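One detail of the `NaDiT` constructor above that is easy to misread: `mm_layers` may be either an int (the first N blocks keep separate vid/txt weights, later blocks share one set) or a per-block boolean sequence. A sketch of the resulting `shared_weights` flags, using made-up values:

```python
# shared_weights is derived exactly as in the NaDiT constructor above.
num_layers = 6
for mm_layers in (4, (True, True, False, True, False, False)):
    shared = [
        not ((i < mm_layers) if isinstance(mm_layers, int) else mm_layers[i])
        for i in range(num_layers)
    ]
    print(shared)
# prints [False, False, False, False, True, True]
# then   [False, False, True, False, True, True]
```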
- -from typing import Callable, Optional -from diffusers.models.normalization import RMSNorm -from torch import nn - -# (dim: int, eps: float, elementwise_affine: bool) -norm_layer_type = Callable[[int, float, bool], nn.Module] - - -def get_norm_layer(norm_type: Optional[str]) -> norm_layer_type: - - def _norm_layer(dim: int, eps: float, elementwise_affine: bool): - if norm_type is None: - return nn.Identity() - - if norm_type == "layer": - return nn.LayerNorm( - normalized_shape=dim, - eps=eps, - elementwise_affine=elementwise_affine, - ) - - if norm_type == "rms": - return RMSNorm( - dim=dim, - eps=eps, - elementwise_affine=elementwise_affine, - ) - - if norm_type == "fusedln": - from apex.normalization import FusedLayerNorm - - return FusedLayerNorm( - normalized_shape=dim, - elementwise_affine=elementwise_affine, - eps=eps, - ) - - if norm_type == "fusedrms": - from apex.normalization import FusedRMSNorm - - return FusedRMSNorm( - normalized_shape=dim, - elementwise_affine=elementwise_affine, - eps=eps, - ) - - raise NotImplementedError(f"{norm_type} is not supported") - - return _norm_layer diff --git a/models_x/dit_v2/patch/__init__.py b/models_x/dit_v2/patch/__init__.py deleted file mode 100644 index 4e3c9783163f1e671f2d946dfad39ca33b12843d..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/patch/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -def get_na_patch_layers(patch_type="v1"): - assert patch_type in ["v1"] - if patch_type == "v1": - from .patch_v1 import NaPatchIn, NaPatchOut - return NaPatchIn, NaPatchOut diff --git a/models_x/dit_v2/patch/patch_v1.py b/models_x/dit_v2/patch/patch_v1.py deleted file mode 100644 index 0231bc0905e70e1fc702fe088fb2d0dac30fcc71..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/patch/patch_v1.py +++ /dev/null @@ -1,127 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Tuple, Union -import torch -from einops import rearrange -from torch import nn -from torch.nn.modules.utils import _triple - -from common.cache import Cache -from common.distributed.ops import gather_outputs, slice_inputs - -from .. 
import na - - -class PatchIn(nn.Module): - def __init__( - self, - in_channels: int, - patch_size: Union[int, Tuple[int, int, int]], - dim: int, - ): - super().__init__() - t, h, w = _triple(patch_size) - self.patch_size = t, h, w - self.proj = nn.Linear(in_channels * t * h * w, dim) - - def forward( - self, - vid: torch.Tensor, - ) -> torch.Tensor: - t, h, w = self.patch_size - if t > 1: - assert vid.size(2) % t == 1 - vid = torch.cat([vid[:, :, :1]] * (t - 1) + [vid], dim=2) - vid = rearrange(vid, "b c (T t) (H h) (W w) -> b T H W (t h w c)", t=t, h=h, w=w) - vid = self.proj(vid) - return vid - - -class PatchOut(nn.Module): - def __init__( - self, - out_channels: int, - patch_size: Union[int, Tuple[int, int, int]], - dim: int, - ): - super().__init__() - t, h, w = _triple(patch_size) - self.patch_size = t, h, w - self.proj = nn.Linear(dim, out_channels * t * h * w) - - def forward( - self, - vid: torch.Tensor, - ) -> torch.Tensor: - t, h, w = self.patch_size - vid = self.proj(vid) - vid = rearrange(vid, "b T H W (t h w c) -> b c (T t) (H h) (W w)", t=t, h=h, w=w) - if t > 1: - vid = vid[:, :, (t - 1) :] - return vid - - -class NaPatchIn(PatchIn): - def forward( - self, - vid: torch.Tensor, # l c - vid_shape: torch.LongTensor, - cache: Cache = Cache(disable=True), # for test - ) -> torch.Tensor: - cache = cache.namespace("patch") - vid_shape_before_patchify = cache("vid_shape_before_patchify", lambda: vid_shape) - t, h, w = self.patch_size - if not (t == h == w == 1): - vid = na.unflatten(vid, vid_shape) - for i in range(len(vid)): - if t > 1 and vid_shape_before_patchify[i, 0] % t != 0: - vid[i] = torch.cat([vid[i][:1]] * (t - vid[i].size(0) % t) + [vid[i]], dim=0) - vid[i] = rearrange(vid[i], "(T t) (H h) (W w) c -> T H W (t h w c)", t=t, h=h, w=w) - vid, vid_shape = na.flatten(vid) - - # slice vid after patching in when using sequence parallelism - vid = slice_inputs(vid, dim=0) - vid = self.proj(vid) - return vid, vid_shape - - -class NaPatchOut(PatchOut): - def forward( - self, - vid: torch.FloatTensor, # l c - vid_shape: torch.LongTensor, - cache: Cache = Cache(disable=True), # for test - ) -> Tuple[ - torch.FloatTensor, - torch.LongTensor, - ]: - cache = cache.namespace("patch") - vid_shape_before_patchify = cache.get("vid_shape_before_patchify") - - t, h, w = self.patch_size - vid = self.proj(vid) - # gather vid before patching out when enabling sequence parallelism - vid = gather_outputs( - vid, gather_dim=0, padding_dim=0, unpad_shape=vid_shape, cache=cache.namespace("vid") - ) - if not (t == h == w == 1): - vid = na.unflatten(vid, vid_shape) - for i in range(len(vid)): - vid[i] = rearrange(vid[i], "T H W (t h w c) -> (T t) (H h) (W w) c", t=t, h=h, w=w) - if t > 1 and vid_shape_before_patchify[i, 0] % t != 0: - vid[i] = vid[i][(t - vid_shape_before_patchify[i, 0] % t) :] - vid, vid_shape = na.flatten(vid) - - return vid, vid_shape diff --git a/models_x/dit_v2/rope.py b/models_x/dit_v2/rope.py deleted file mode 100644 index ceb5458ba2829417a93124b9e06a86b74a523765..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/rope.py +++ /dev/null @@ -1,150 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
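The causal patchify in `PatchIn`/`NaPatchIn` above pads time by replicating the first frame instead of looking ahead; a shape walk-through with illustrative sizes (a 4x2x2 patch over a 5-frame clip):

```python
import torch
from einops import rearrange

b, c, T, H, W = 1, 4, 5, 8, 8      # T % t == 1, as PatchIn asserts
t, h, w = 4, 2, 2                  # temporal and spatial patch sizes

vid = torch.randn(b, c, T, H, W)
vid = torch.cat([vid[:, :, :1]] * (t - 1) + [vid], dim=2)   # replicate frame 0: T 5 -> 8
vid = rearrange(vid, "b c (T t) (H h) (W w) -> b T H W (t h w c)", t=t, h=h, w=w)
print(vid.shape)  # torch.Size([1, 2, 4, 4, 64])
```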
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from functools import lru_cache -from typing import Optional, Tuple -import torch -from einops import rearrange -from rotary_embedding_torch import RotaryEmbedding, apply_rotary_emb -from torch import nn - -from common.cache import Cache - - -class RotaryEmbeddingBase(nn.Module): - def __init__(self, dim: int, rope_dim: int): - super().__init__() - self.rope = RotaryEmbedding( - dim=dim // rope_dim, - freqs_for="pixel", - max_freq=256, - ) - # 1. Set model.requires_grad_(True) after model creation will make - # the `requires_grad=False` for rope freqs no longer hold. - # 2. Even if we don't set requires_grad_(True) explicitly, - # FSDP is not memory efficient when handling fsdp_wrap - # with mixed requires_grad=True/False. - # With above consideration, it is easier just remove the freqs - # out of nn.Parameters when `learned_freq=False` - freqs = self.rope.freqs - del self.rope.freqs - self.rope.register_buffer("freqs", freqs.data) - - @lru_cache(maxsize=128) - def get_axial_freqs(self, *dims): - return self.rope.get_axial_freqs(*dims) - - -class RotaryEmbedding3d(RotaryEmbeddingBase): - def __init__(self, dim: int): - super().__init__(dim, rope_dim=3) - self.mm = False - - def forward( - self, - q: torch.FloatTensor, # b h l d - k: torch.FloatTensor, # b h l d - size: Tuple[int, int, int], - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - ]: - T, H, W = size - freqs = self.get_axial_freqs(T, H, W) - q = rearrange(q, "b h (T H W) d -> b h T H W d", T=T, H=H, W=W) - k = rearrange(k, "b h (T H W) d -> b h T H W d", T=T, H=H, W=W) - q = apply_rotary_emb(freqs, q.float()).to(q.dtype) - k = apply_rotary_emb(freqs, k.float()).to(k.dtype) - q = rearrange(q, "b h T H W d -> b h (T H W) d") - k = rearrange(k, "b h T H W d -> b h (T H W) d") - return q, k - - -class MMRotaryEmbeddingBase(RotaryEmbeddingBase): - def __init__(self, dim: int, rope_dim: int): - super().__init__(dim, rope_dim) - self.rope = RotaryEmbedding( - dim=dim // rope_dim, - freqs_for="lang", - theta=10000, - ) - freqs = self.rope.freqs - del self.rope.freqs - self.rope.register_buffer("freqs", freqs.data) - self.mm = True - - -class NaMMRotaryEmbedding3d(MMRotaryEmbeddingBase): - def __init__(self, dim: int): - super().__init__(dim, rope_dim=3) - - def forward( - self, - vid_q: torch.FloatTensor, # L h d - vid_k: torch.FloatTensor, # L h d - vid_shape: torch.LongTensor, # B 3 - txt_q: torch.FloatTensor, # L h d - txt_k: torch.FloatTensor, # L h d - txt_shape: torch.LongTensor, # B 1 - cache: Cache, - ) -> Tuple[ - torch.FloatTensor, - torch.FloatTensor, - torch.FloatTensor, - torch.FloatTensor, - ]: - vid_freqs, txt_freqs = cache( - "mmrope_freqs_3d", - lambda: self.get_freqs(vid_shape, txt_shape), - ) - vid_q = rearrange(vid_q, "L h d -> h L d") - vid_k = rearrange(vid_k, "L h d -> h L d") - vid_q = apply_rotary_emb(vid_freqs, vid_q.float()).to(vid_q.dtype) - vid_k = apply_rotary_emb(vid_freqs, vid_k.float()).to(vid_k.dtype) - vid_q = rearrange(vid_q, "h L d -> L h d") - vid_k = rearrange(vid_k, "h L d -> L h d") - - txt_q = rearrange(txt_q, "L h d -> h L d") - txt_k = 
rearrange(txt_k, "L h d -> h L d") - txt_q = apply_rotary_emb(txt_freqs, txt_q.float()).to(txt_q.dtype) - txt_k = apply_rotary_emb(txt_freqs, txt_k.float()).to(txt_k.dtype) - txt_q = rearrange(txt_q, "h L d -> L h d") - txt_k = rearrange(txt_k, "h L d -> L h d") - return vid_q, vid_k, txt_q, txt_k - - def get_freqs( - self, - vid_shape: torch.LongTensor, - txt_shape: torch.LongTensor, - ) -> Tuple[ - torch.Tensor, - torch.Tensor, - ]: - vid_freqs = self.get_axial_freqs(1024, 128, 128) - txt_freqs = self.get_axial_freqs(1024) - vid_freq_list, txt_freq_list = [], [] - for (f, h, w), l in zip(vid_shape.tolist(), txt_shape[:, 0].tolist()): - vid_freq = vid_freqs[l : l + f, :h, :w].reshape(-1, vid_freqs.size(-1)) - txt_freq = txt_freqs[:l].repeat(1, 3).reshape(-1, vid_freqs.size(-1)) - vid_freq_list.append(vid_freq) - txt_freq_list.append(txt_freq) - return torch.cat(vid_freq_list, dim=0), torch.cat(txt_freq_list, dim=0) - - -def get_na_rope(rope_type: Optional[str], dim: int): - if rope_type is None: - return None - if rope_type == "mmrope3d": - return NaMMRotaryEmbedding3d(dim=dim) - raise NotImplementedError(f"{rope_type} is not supported.") diff --git a/models_x/dit_v2/window.py b/models_x/dit_v2/window.py deleted file mode 100644 index b7475921ae283cf76d82bff7521233c133f54bfd..0000000000000000000000000000000000000000 --- a/models_x/dit_v2/window.py +++ /dev/null @@ -1,83 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from math import ceil -from typing import Tuple -import math - -def get_window_op(name: str): - if name == "720pwin_by_size_bysize": - return make_720Pwindows_bysize - if name == "720pswin_by_size_bysize": - return make_shifted_720Pwindows_bysize - raise ValueError(f"Unknown windowing method: {name}") - - -# -------------------------------- Windowing -------------------------------- # -def make_720Pwindows_bysize(size: Tuple[int, int, int], num_windows: Tuple[int, int, int]): - t, h, w = size - resized_nt, resized_nh, resized_nw = num_windows - #cal windows under 720p - scale = math.sqrt((45 * 80) / (h * w)) - resized_h, resized_w = round(h * scale), round(w * scale) - wh, ww = ceil(resized_h / resized_nh), ceil(resized_w / resized_nw) # window size. - wt = ceil(min(t, 30) / resized_nt) # window size. - nt, nh, nw = ceil(t / wt), ceil(h / wh), ceil(w / ww) # window size. 
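One way to read `NaMMRotaryEmbedding3d.get_freqs` above: for a sample with `l` text tokens and an `(f, h, w)` video latent, text is assigned temporal rotary positions `[0, l)` and video frames positions `[l, l + f)`, so the two modalities do not overlap in rotary phase. A tiny index sketch with made-up sizes:

```python
f, h, w, l = 4, 2, 3, 5
txt_positions = list(range(l))                 # [0, 1, 2, 3, 4]
vid_frame_positions = list(range(l, l + f))    # [5, 6, 7, 8]
num_vid_tokens, num_txt_tokens = f * h * w, l  # 24 video tokens, 5 text tokens
print(txt_positions, vid_frame_positions, num_vid_tokens, num_txt_tokens)
```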
- return [ - ( - slice(it * wt, min((it + 1) * wt, t)), - slice(ih * wh, min((ih + 1) * wh, h)), - slice(iw * ww, min((iw + 1) * ww, w)), - ) - for iw in range(nw) - if min((iw + 1) * ww, w) > iw * ww - for ih in range(nh) - if min((ih + 1) * wh, h) > ih * wh - for it in range(nt) - if min((it + 1) * wt, t) > it * wt - ] - -def make_shifted_720Pwindows_bysize(size: Tuple[int, int, int], num_windows: Tuple[int, int, int]): - t, h, w = size - resized_nt, resized_nh, resized_nw = num_windows - #cal windows under 720p - scale = math.sqrt((45 * 80) / (h * w)) - resized_h, resized_w = round(h * scale), round(w * scale) - wh, ww = ceil(resized_h / resized_nh), ceil(resized_w / resized_nw) # window size. - wt = ceil(min(t, 30) / resized_nt) # window size. - - st, sh, sw = ( # shift size. - 0.5 if wt < t else 0, - 0.5 if wh < h else 0, - 0.5 if ww < w else 0, - ) - nt, nh, nw = ceil((t - st) / wt), ceil((h - sh) / wh), ceil((w - sw) / ww) # window size. - nt, nh, nw = ( # number of window. - nt + 1 if st > 0 else 1, - nh + 1 if sh > 0 else 1, - nw + 1 if sw > 0 else 1, - ) - return [ - ( - slice(max(int((it - st) * wt), 0), min(int((it - st + 1) * wt), t)), - slice(max(int((ih - sh) * wh), 0), min(int((ih - sh + 1) * wh), h)), - slice(max(int((iw - sw) * ww), 0), min(int((iw - sw + 1) * ww), w)), - ) - for iw in range(nw) - if min(int((iw - sw + 1) * ww), w) > max(int((iw - sw) * ww), 0) - for ih in range(nh) - if min(int((ih - sh + 1) * wh), h) > max(int((ih - sh) * wh), 0) - for it in range(nt) - if min(int((it - st + 1) * wt), t) > max(int((it - st) * wt), 0) - ] diff --git a/models_x/video_vae_v3/modules/attn_video_vae.py b/models_x/video_vae_v3/modules/attn_video_vae.py deleted file mode 100644 index edaf817452af1df8c85746f07d017e8802d989b0..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/attn_video_vae.py +++ /dev/null @@ -1,1345 +0,0 @@ -# Copyright (c) 2023 HuggingFace Team -# Copyright (c) 2025 ByteDance Ltd. and/or its affiliates. -# SPDX-License-Identifier: Apache License, Version 2.0 (the "License") -# -# This file has been modified by ByteDance Ltd. and/or its affiliates. on 1st June 2025 -# -# Original file was released under Apache License, Version 2.0 (the "License"), with the full license text -# available at http://www.apache.org/licenses/LICENSE-2.0. -# -# This modified file is released under the same license. 
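To make the 720p-normalized windowing above concrete, a worked example (the latent size and window counts are made-up values): window extents are computed on a grid rescaled to roughly 45x80 tokens and then tiled over the real grid, so larger inputs simply receive more windows of the same nominal size.

```python
from math import ceil, sqrt

t, h, w = 21, 60, 106                          # latent size (frames, height, width), example values
resized_nt, resized_nh, resized_nw = 3, 3, 3   # requested windows per axis

scale = sqrt((45 * 80) / (h * w))                            # rescale spatial grid to ~45x80
resized_h, resized_w = round(h * scale), round(w * scale)    # 45, 80
wh, ww = ceil(resized_h / resized_nh), ceil(resized_w / resized_nw)  # window size: 15, 27
wt = ceil(min(t, 30) / resized_nt)                                   # window size: 7
nt, nh, nw = ceil(t / wt), ceil(h / wh), ceil(w / ww)                # windows tiled: 3, 4, 4
print((wt, wh, ww), (nt, nh, nw))  # (7, 15, 27) (3, 4, 4)
```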
- - -from contextlib import nullcontext -from typing import Literal, Optional, Tuple, Union -import diffusers -import torch -import torch.nn as nn -import torch.nn.functional as F -from diffusers.models.attention_processor import Attention, SpatialNorm -from diffusers.models.autoencoders.vae import DecoderOutput, DiagonalGaussianDistribution -from diffusers.models.downsampling import Downsample2D -from diffusers.models.lora import LoRACompatibleConv -from diffusers.models.modeling_outputs import AutoencoderKLOutput -from diffusers.models.resnet import ResnetBlock2D -from diffusers.models.unets.unet_2d_blocks import DownEncoderBlock2D, UpDecoderBlock2D -from diffusers.models.upsampling import Upsample2D -from diffusers.utils import is_torch_version -from diffusers.utils.accelerate_utils import apply_forward_hook -from einops import rearrange - -from common.distributed.advanced import get_sequence_parallel_world_size -from common.logger import get_logger -from models.video_vae_v3.modules.causal_inflation_lib import ( - InflatedCausalConv3d, - causal_norm_wrapper, - init_causal_conv3d, - remove_head, -) -from models.video_vae_v3.modules.context_parallel_lib import ( - causal_conv_gather_outputs, - causal_conv_slice_inputs, -) -from models.video_vae_v3.modules.global_config import set_norm_limit -from models.video_vae_v3.modules.types import ( - CausalAutoencoderOutput, - CausalDecoderOutput, - CausalEncoderOutput, - MemoryState, - _inflation_mode_t, - _memory_device_t, - _receptive_field_t, -) - -logger = get_logger(__name__) # pylint: disable=invalid-name - - -class Upsample3D(Upsample2D): - """A 3D upsampling layer with an optional convolution.""" - - def __init__( - self, - *args, - inflation_mode: _inflation_mode_t = "tail", - temporal_up: bool = False, - spatial_up: bool = True, - slicing: bool = False, - **kwargs, - ): - super().__init__(*args, **kwargs) - conv = self.conv if self.name == "conv" else self.Conv2d_0 - - assert type(conv) is not nn.ConvTranspose2d - # Note: lora_layer is not passed into constructor in the original implementation. - # So we make a simplification. - conv = init_causal_conv3d( - self.channels, - self.out_channels, - 3, - padding=1, - inflation_mode=inflation_mode, - ) - - self.temporal_up = temporal_up - self.spatial_up = spatial_up - self.temporal_ratio = 2 if temporal_up else 1 - self.spatial_ratio = 2 if spatial_up else 1 - self.slicing = slicing - - assert not self.interpolate - # [Override] MAGViT v2 implementation - if not self.interpolate: - upscale_ratio = (self.spatial_ratio**2) * self.temporal_ratio - self.upscale_conv = nn.Conv3d( - self.channels, self.channels * upscale_ratio, kernel_size=1, padding=0 - ) - identity = ( - torch.eye(self.channels) - .repeat(upscale_ratio, 1) - .reshape_as(self.upscale_conv.weight) - ) - self.upscale_conv.weight.data.copy_(identity) - nn.init.zeros_(self.upscale_conv.bias) - - if self.name == "conv": - self.conv = conv - else: - self.Conv2d_0 = conv - - def forward( - self, - hidden_states: torch.FloatTensor, - output_size: Optional[int] = None, - memory_state: MemoryState = MemoryState.DISABLED, - **kwargs, - ) -> torch.FloatTensor: - assert hidden_states.shape[1] == self.channels - - if hasattr(self, "norm") and self.norm is not None: - # [Overridden] change to causal norm. 
- hidden_states = causal_norm_wrapper(self.norm, hidden_states) - - if self.use_conv_transpose: - return self.conv(hidden_states) - - if self.slicing: - split_size = hidden_states.size(2) // 2 - hidden_states = list( - hidden_states.split([split_size, hidden_states.size(2) - split_size], dim=2) - ) - else: - hidden_states = [hidden_states] - - for i in range(len(hidden_states)): - hidden_states[i] = self.upscale_conv(hidden_states[i]) - hidden_states[i] = rearrange( - hidden_states[i], - "b (x y z c) f h w -> b c (f z) (h x) (w y)", - x=self.spatial_ratio, - y=self.spatial_ratio, - z=self.temporal_ratio, - ) - - # [Overridden] For causal temporal conv - if self.temporal_up and memory_state != MemoryState.ACTIVE: - hidden_states[0] = remove_head(hidden_states[0]) - - if not self.slicing: - hidden_states = hidden_states[0] - - if self.use_conv: - if self.name == "conv": - hidden_states = self.conv(hidden_states, memory_state=memory_state) - else: - hidden_states = self.Conv2d_0(hidden_states, memory_state=memory_state) - - if not self.slicing: - return hidden_states - else: - return torch.cat(hidden_states, dim=2) - - -class Downsample3D(Downsample2D): - """A 3D downsampling layer with an optional convolution.""" - - def __init__( - self, - *args, - inflation_mode: _inflation_mode_t = "tail", - spatial_down: bool = False, - temporal_down: bool = False, - **kwargs, - ): - super().__init__(*args, **kwargs) - conv = self.conv - self.temporal_down = temporal_down - self.spatial_down = spatial_down - - self.temporal_ratio = 2 if temporal_down else 1 - self.spatial_ratio = 2 if spatial_down else 1 - - self.temporal_kernel = 3 if temporal_down else 1 - self.spatial_kernel = 3 if spatial_down else 1 - - if type(conv) in [nn.Conv2d, LoRACompatibleConv]: - # Note: lora_layer is not passed into constructor in the original implementation. - # So we make a simplification. - conv = init_causal_conv3d( - self.channels, - self.out_channels, - kernel_size=(self.temporal_kernel, self.spatial_kernel, self.spatial_kernel), - stride=(self.temporal_ratio, self.spatial_ratio, self.spatial_ratio), - padding=( - 1 if self.temporal_down else 0, - self.padding if self.spatial_down else 0, - self.padding if self.spatial_down else 0, - ), - inflation_mode=inflation_mode, - ) - elif type(conv) is nn.AvgPool2d: - assert self.channels == self.out_channels - conv = nn.AvgPool3d( - kernel_size=(self.temporal_ratio, self.spatial_ratio, self.spatial_ratio), - stride=(self.temporal_ratio, self.spatial_ratio, self.spatial_ratio), - ) - else: - raise NotImplementedError - - if self.name == "conv": - self.Conv2d_0 = conv - self.conv = conv - else: - self.conv = conv - - def forward( - self, - hidden_states: torch.FloatTensor, - memory_state: MemoryState = MemoryState.DISABLED, - **kwargs, - ) -> torch.FloatTensor: - - assert hidden_states.shape[1] == self.channels - - if hasattr(self, "norm") and self.norm is not None: - # [Overridden] change to causal norm. 
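The non-interpolating path in `Upsample3D` above is a learned depth-to-space: a 1x1x1 convolution (initialized to replicate channels) expands the channel count by `spatial_ratio**2 * temporal_ratio`, and the rearrange folds those channels back into space and time. A shape check with toy sizes (the causal `remove_head` trimming that follows in the module is skipped here):

```python
import torch
from einops import rearrange

b, c, f, h, w = 1, 8, 3, 4, 4
spatial_ratio, temporal_ratio = 2, 2
upscale_ratio = spatial_ratio**2 * temporal_ratio       # 8

x = torch.randn(b, c * upscale_ratio, f, h, w)          # what upscale_conv would output
y = rearrange(x, "b (x y z c) f h w -> b c (f z) (h x) (w y)",
              x=spatial_ratio, y=spatial_ratio, z=temporal_ratio)
print(y.shape)  # torch.Size([1, 8, 6, 8, 8])
```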
- hidden_states = causal_norm_wrapper(self.norm, hidden_states) - - if self.use_conv and self.padding == 0 and self.spatial_down: - pad = (0, 1, 0, 1) - hidden_states = F.pad(hidden_states, pad, mode="constant", value=0) - - assert hidden_states.shape[1] == self.channels - - hidden_states = self.conv(hidden_states, memory_state=memory_state) - - return hidden_states - - -class ResnetBlock3D(ResnetBlock2D): - def __init__( - self, - *args, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - slicing: bool = False, - **kwargs, - ): - super().__init__(*args, **kwargs) - self.conv1 = init_causal_conv3d( - self.in_channels, - self.out_channels, - kernel_size=(1, 3, 3) if time_receptive_field == "half" else (3, 3, 3), - stride=1, - padding=(0, 1, 1) if time_receptive_field == "half" else (1, 1, 1), - inflation_mode=inflation_mode, - ) - - self.conv2 = init_causal_conv3d( - self.out_channels, - self.conv2.out_channels, - kernel_size=3, - stride=1, - padding=1, - inflation_mode=inflation_mode, - ) - - if self.up: - assert type(self.upsample) is Upsample2D - self.upsample = Upsample3D( - self.in_channels, - use_conv=False, - inflation_mode=inflation_mode, - slicing=slicing, - ) - elif self.down: - assert type(self.downsample) is Downsample2D - self.downsample = Downsample3D( - self.in_channels, - use_conv=False, - padding=1, - name="op", - inflation_mode=inflation_mode, - ) - - if self.use_in_shortcut: - self.conv_shortcut = init_causal_conv3d( - self.in_channels, - self.conv_shortcut.out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=(self.conv_shortcut.bias is not None), - inflation_mode=inflation_mode, - ) - - def forward( - self, input_tensor, temb, memory_state: MemoryState = MemoryState.DISABLED, **kwargs - ): - hidden_states = input_tensor - - hidden_states = causal_norm_wrapper(self.norm1, hidden_states) - - hidden_states = self.nonlinearity(hidden_states) - - if self.upsample is not None: - # upsample_nearest_nhwc fails with large batch sizes. 
- # see https://github.com/huggingface/diffusers/issues/984 - if hidden_states.shape[0] >= 64: - input_tensor = input_tensor.contiguous() - hidden_states = hidden_states.contiguous() - input_tensor = self.upsample(input_tensor, memory_state=memory_state) - hidden_states = self.upsample(hidden_states, memory_state=memory_state) - elif self.downsample is not None: - input_tensor = self.downsample(input_tensor, memory_state=memory_state) - hidden_states = self.downsample(hidden_states, memory_state=memory_state) - - hidden_states = self.conv1(hidden_states, memory_state=memory_state) - - if self.time_emb_proj is not None: - if not self.skip_time_act: - temb = self.nonlinearity(temb) - temb = self.time_emb_proj(temb)[:, :, None, None] - - if temb is not None and self.time_embedding_norm == "default": - hidden_states = hidden_states + temb - - hidden_states = causal_norm_wrapper(self.norm2, hidden_states) - - if temb is not None and self.time_embedding_norm == "scale_shift": - scale, shift = torch.chunk(temb, 2, dim=1) - hidden_states = hidden_states * (1 + scale) + shift - - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states, memory_state=memory_state) - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor, memory_state=memory_state) - - output_tensor = (input_tensor + hidden_states) / self.output_scale_factor - - return output_tensor - - -class DownEncoderBlock3D(DownEncoderBlock2D): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor: float = 1.0, - add_downsample: bool = True, - downsample_padding: int = 1, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - temporal_down: bool = True, - spatial_down: bool = True, - ): - super().__init__( - in_channels=in_channels, - out_channels=out_channels, - dropout=dropout, - num_layers=num_layers, - resnet_eps=resnet_eps, - resnet_time_scale_shift=resnet_time_scale_shift, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_pre_norm=resnet_pre_norm, - output_scale_factor=output_scale_factor, - add_downsample=add_downsample, - downsample_padding=downsample_padding, - ) - resnets = [] - temporal_modules = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - # [Override] Replace module. - ResnetBlock3D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - ) - temporal_modules.append(nn.Identity()) - - self.resnets = nn.ModuleList(resnets) - self.temporal_modules = nn.ModuleList(temporal_modules) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - # [Override] Replace module. 
- Downsample3D( - out_channels, - use_conv=True, - out_channels=out_channels, - padding=downsample_padding, - name="op", - temporal_down=temporal_down, - spatial_down=spatial_down, - inflation_mode=inflation_mode, - ) - ] - ) - else: - self.downsamplers = None - - def forward( - self, - hidden_states: torch.FloatTensor, - memory_state: MemoryState = MemoryState.DISABLED, - **kwargs, - ) -> torch.FloatTensor: - for resnet, temporal in zip(self.resnets, self.temporal_modules): - hidden_states = resnet(hidden_states, temb=None, memory_state=memory_state) - hidden_states = temporal(hidden_states) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states, memory_state=memory_state) - - return hidden_states - - -class UpDecoderBlock3D(UpDecoderBlock2D): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", # default, spatial - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor: float = 1.0, - add_upsample: bool = True, - temb_channels: Optional[int] = None, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - temporal_up: bool = True, - spatial_up: bool = True, - slicing: bool = False, - ): - super().__init__( - in_channels=in_channels, - out_channels=out_channels, - dropout=dropout, - num_layers=num_layers, - resnet_eps=resnet_eps, - resnet_time_scale_shift=resnet_time_scale_shift, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_pre_norm=resnet_pre_norm, - output_scale_factor=output_scale_factor, - add_upsample=add_upsample, - temb_channels=temb_channels, - ) - resnets = [] - temporal_modules = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - # [Override] Replace module. 
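# ResnetBlock3D swaps both convs for causal 3-D convs; with
# time_receptive_field="half" the first conv keeps a (1, 3, 3) kernel and only
# the second conv looks across frames, roughly halving the block's temporal
# receptive field relative to "full".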
- ResnetBlock3D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - slicing=slicing, - ) - ) - - temporal_modules.append(nn.Identity()) - - self.resnets = nn.ModuleList(resnets) - self.temporal_modules = nn.ModuleList(temporal_modules) - - if add_upsample: - # [Override] Replace module & use learnable upsample - self.upsamplers = nn.ModuleList( - [ - Upsample3D( - out_channels, - use_conv=True, - out_channels=out_channels, - temporal_up=temporal_up, - spatial_up=spatial_up, - interpolate=False, - inflation_mode=inflation_mode, - slicing=slicing, - ) - ] - ) - else: - self.upsamplers = None - - def forward( - self, - hidden_states: torch.FloatTensor, - temb: Optional[torch.FloatTensor] = None, - memory_state: MemoryState = MemoryState.DISABLED, - ) -> torch.FloatTensor: - for resnet, temporal in zip(self.resnets, self.temporal_modules): - hidden_states = resnet(hidden_states, temb=None, memory_state=memory_state) - hidden_states = temporal(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, memory_state=memory_state) - - return hidden_states - - -class UNetMidBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", # default, spatial - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - add_attention: bool = True, - attention_head_dim: int = 1, - output_scale_factor: float = 1.0, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - ): - super().__init__() - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - self.add_attention = add_attention - - # there is always at least one resnet - resnets = [ - # [Override] Replace module. - ResnetBlock3D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - ] - attentions = [] - - if attention_head_dim is None: - logger.warn( - f"It is not recommend to pass `attention_head_dim=None`. " - f"Defaulting `attention_head_dim` to `in_channels`: {in_channels}." 
- ) - attention_head_dim = in_channels - - for _ in range(num_layers): - if self.add_attention: - attentions.append( - Attention( - in_channels, - heads=in_channels // attention_head_dim, - dim_head=attention_head_dim, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - norm_num_groups=( - resnet_groups if resnet_time_scale_shift == "default" else None - ), - spatial_norm_dim=( - temb_channels if resnet_time_scale_shift == "spatial" else None - ), - residual_connection=True, - bias=True, - upcast_softmax=True, - _from_deprecated_attn_block=True, - ) - ) - else: - attentions.append(None) - - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None, memory_state: MemoryState = MemoryState.DISABLED): - video_length, frame_height, frame_width = hidden_states.size()[-3:] - hidden_states = self.resnets[0](hidden_states, temb, memory_state=memory_state) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - if attn is not None: - hidden_states = rearrange(hidden_states, "b c f h w -> (b f) c h w") - hidden_states = attn(hidden_states, temb=temb) - hidden_states = rearrange( - hidden_states, "(b f) c h w -> b c f h w", f=video_length - ) - hidden_states = resnet(hidden_states, temb, memory_state=memory_state) - - return hidden_states - - -class Encoder3D(nn.Module): - r""" - [Override] override most logics to support extra condition input and causal conv - - The `Encoder` layer of a variational autoencoder that encodes - its input into a latent representation. - - Args: - in_channels (`int`, *optional*, defaults to 3): - The number of input channels. - out_channels (`int`, *optional*, defaults to 3): - The number of output channels. - down_block_types (`Tuple[str, ...]`, *optional*, defaults to `("DownEncoderBlock2D",)`): - The types of down blocks to use. - See `~diffusers.models.unet_2d_blocks.get_down_block` - for available options. - block_out_channels (`Tuple[int, ...]`, *optional*, defaults to `(64,)`): - The number of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): - The number of layers per block. - norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups for normalization. - act_fn (`str`, *optional*, defaults to `"silu"`): - The activation function to use. - See `~diffusers.models.activations.get_activation` for available options. - double_z (`bool`, *optional*, defaults to `True`): - Whether to double the number of output channels for the last block. - """ - - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str, ...] = ("DownEncoderBlock3D",), - block_out_channels: Tuple[int, ...] 
= (64,), - layers_per_block: int = 2, - norm_num_groups: int = 32, - act_fn: str = "silu", - double_z: bool = True, - mid_block_add_attention=True, - # [Override] add extra_cond_dim, temporal down num - temporal_down_num: int = 2, - extra_cond_dim: int = None, - gradient_checkpoint: bool = False, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - ): - super().__init__() - self.layers_per_block = layers_per_block - self.temporal_down_num = temporal_down_num - - self.conv_in = init_causal_conv3d( - in_channels, - block_out_channels[0], - kernel_size=3, - stride=1, - padding=1, - inflation_mode=inflation_mode, - ) - - self.mid_block = None - self.down_blocks = nn.ModuleList([]) - self.extra_cond_dim = extra_cond_dim - - self.conv_extra_cond = nn.ModuleList([]) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - # [Override] to support temporal down block design - is_temporal_down_block = i >= len(block_out_channels) - self.temporal_down_num - 1 - # Note: take the last ones - - assert down_block_type == "DownEncoderBlock3D" - - down_block = DownEncoderBlock3D( - num_layers=self.layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - add_downsample=not is_final_block, - resnet_eps=1e-6, - downsample_padding=0, - # Note: Don't know why set it as 0 - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - temporal_down=is_temporal_down_block, - spatial_down=True, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - self.down_blocks.append(down_block) - - def zero_module(module): - # Zero out the parameters of a module and return it. 
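# Zero-initialising these 1x1x1 projections means the extra-condition branch
# contributes nothing at the start of training and is learned gradually.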
- for p in module.parameters(): - p.detach().zero_() - return module - - self.conv_extra_cond.append( - zero_module( - nn.Conv3d(extra_cond_dim, output_channel, kernel_size=1, stride=1, padding=0) - ) - if self.extra_cond_dim is not None and self.extra_cond_dim > 0 - else None - ) - - # mid - self.mid_block = UNetMidBlock3D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default", - attention_head_dim=block_out_channels[-1], - resnet_groups=norm_num_groups, - temb_channels=None, - add_attention=mid_block_add_attention, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - # out - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[-1], num_groups=norm_num_groups, eps=1e-6 - ) - self.conv_act = nn.SiLU() - - conv_out_channels = 2 * out_channels if double_z else out_channels - self.conv_out = init_causal_conv3d( - block_out_channels[-1], conv_out_channels, 3, padding=1, inflation_mode=inflation_mode - ) - - self.gradient_checkpointing = gradient_checkpoint - - def forward( - self, - sample: torch.FloatTensor, - extra_cond=None, - memory_state: MemoryState = MemoryState.DISABLED, - ) -> torch.FloatTensor: - r"""The forward method of the `Encoder` class.""" - sample = self.conv_in(sample, memory_state=memory_state) - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - # down - # [Override] add extra block and extra cond - for down_block, extra_block in zip(self.down_blocks, self.conv_extra_cond): - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(down_block), sample, memory_state, use_reentrant=False - ) - if extra_block is not None: - sample = sample + F.interpolate(extra_block(extra_cond), size=sample.shape[2:]) - - # middle - sample = self.mid_block(sample, memory_state=memory_state) - - # sample = torch.utils.checkpoint.checkpoint( - # create_custom_forward(self.mid_block), sample, use_reentrant=False - # ) - - else: - # down - # [Override] add extra block and extra cond - for down_block, extra_block in zip(self.down_blocks, self.conv_extra_cond): - sample = down_block(sample, memory_state=memory_state) - if extra_block is not None: - sample = sample + F.interpolate(extra_block(extra_cond), size=sample.shape[2:]) - - # middle - sample = self.mid_block(sample, memory_state=memory_state) - - # post-process - sample = causal_norm_wrapper(self.conv_norm_out, sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample, memory_state=memory_state) - - return sample - - -class Decoder3D(nn.Module): - r""" - The `Decoder` layer of a variational autoencoder that - decodes its latent representation into an output sample. - - Args: - in_channels (`int`, *optional*, defaults to 3): - The number of input channels. - out_channels (`int`, *optional*, defaults to 3): - The number of output channels. - up_block_types (`Tuple[str, ...]`, *optional*, defaults to `("UpDecoderBlock2D",)`): - The types of up blocks to use. - See `~diffusers.models.unet_2d_blocks.get_up_block` for available options. - block_out_channels (`Tuple[int, ...]`, *optional*, defaults to `(64,)`): - The number of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): - The number of layers per block. - norm_num_groups (`int`, *optional*, defaults to 32): - The number of groups for normalization. 
- act_fn (`str`, *optional*, defaults to `"silu"`): - The activation function to use. - See `~diffusers.models.activations.get_activation` for available options. - norm_type (`str`, *optional*, defaults to `"group"`): - The normalization type to use. Can be either `"group"` or `"spatial"`. - """ - - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - up_block_types: Tuple[str, ...] = ("UpDecoderBlock3D",), - block_out_channels: Tuple[int, ...] = (64,), - layers_per_block: int = 2, - norm_num_groups: int = 32, - act_fn: str = "silu", - norm_type: str = "group", # group, spatial - mid_block_add_attention=True, - # [Override] add temporal up block - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - temporal_up_num: int = 2, - slicing_up_num: int = 0, - gradient_checkpoint: bool = False, - ): - super().__init__() - self.layers_per_block = layers_per_block - self.temporal_up_num = temporal_up_num - - self.conv_in = init_causal_conv3d( - in_channels, - block_out_channels[-1], - kernel_size=3, - stride=1, - padding=1, - inflation_mode=inflation_mode, - ) - - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - temb_channels = in_channels if norm_type == "spatial" else None - - # mid - self.mid_block = UNetMidBlock3D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default" if norm_type == "group" else norm_type, - attention_head_dim=block_out_channels[-1], - resnet_groups=norm_num_groups, - temb_channels=temb_channels, - add_attention=mid_block_add_attention, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - print(f"slicing_up_num: {slicing_up_num}") - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - is_temporal_up_block = i < self.temporal_up_num - is_slicing_up_block = i >= len(block_out_channels) - slicing_up_num - # Note: Keep symmetric - - assert up_block_type == "UpDecoderBlock3D" - up_block = UpDecoderBlock3D( - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - add_upsample=not is_final_block, - resnet_eps=1e-6, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - resnet_time_scale_shift=norm_type, - temb_channels=temb_channels, - temporal_up=is_temporal_up_block, - slicing=is_slicing_up_block, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - if norm_type == "spatial": - self.conv_norm_out = SpatialNorm(block_out_channels[0], temb_channels) - else: - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6 - ) - self.conv_act = nn.SiLU() - self.conv_out = init_causal_conv3d( - block_out_channels[0], out_channels, 3, padding=1, inflation_mode=inflation_mode - ) - - self.gradient_checkpointing = gradient_checkpoint - - # Note: Just copy from Decoder. 
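# With self.gradient_checkpointing set and the module in training mode, the
# forward below re-runs each up block inside torch.utils.checkpoint during the
# backward pass, trading extra compute for lower activation memory.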
- def forward( - self, - sample: torch.FloatTensor, - latent_embeds: Optional[torch.FloatTensor] = None, - memory_state: MemoryState = MemoryState.DISABLED, - ) -> torch.FloatTensor: - r"""The forward method of the `Decoder` class.""" - - sample = self.conv_in(sample, memory_state=memory_state) - - upscale_dtype = next(iter(self.up_blocks.parameters())).dtype - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - if is_torch_version(">=", "1.11.0"): - sample = self.mid_block(sample, latent_embeds, memory_state=memory_state) - sample = sample.to(upscale_dtype) - - # up - for up_block in self.up_blocks: - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(up_block), - sample, - latent_embeds, - memory_state, - use_reentrant=False, - ) - else: - # middle - sample = self.mid_block(sample, latent_embeds, memory_state=memory_state) - sample = sample.to(upscale_dtype) - - # up - for up_block in self.up_blocks: - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(up_block), sample, latent_embeds, memory_state - ) - else: - # middle - sample = self.mid_block(sample, latent_embeds, memory_state=memory_state) - sample = sample.to(upscale_dtype) - - # up - for up_block in self.up_blocks: - sample = up_block(sample, latent_embeds, memory_state=memory_state) - - # post-process - sample = causal_norm_wrapper(self.conv_norm_out, sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample, memory_state=memory_state) - - return sample - - -class AutoencoderKL(diffusers.AutoencoderKL): - """ - We simply inherit the model code from diffusers - """ - - def __init__(self, attention: bool = True, *args, **kwargs): - super().__init__(*args, **kwargs) - - # A hacky way to remove attention. - if not attention: - self.encoder.mid_block.attentions = torch.nn.ModuleList([None]) - self.decoder.mid_block.attentions = torch.nn.ModuleList([None]) - - def load_state_dict(self, state_dict, strict=True): - # Newer version of diffusers changed the model keys, - # causing incompatibility with old checkpoints. - # They provided a method for conversion. We call conversion before loading state_dict. 
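# The hook is looked up via getattr so that diffusers versions which do not
# expose _convert_deprecated_attention_blocks still load cleanly.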
- convert_deprecated_attention_blocks = getattr( - self, "_convert_deprecated_attention_blocks", None - ) - if callable(convert_deprecated_attention_blocks): - convert_deprecated_attention_blocks(state_dict) - return super().load_state_dict(state_dict, strict) - - -class VideoAutoencoderKL(diffusers.AutoencoderKL): - """ - We simply inherit the model code from diffusers - """ - - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str] = ("DownEncoderBlock3D",), - up_block_types: Tuple[str] = ("UpDecoderBlock3D",), - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - act_fn: str = "silu", - latent_channels: int = 4, - norm_num_groups: int = 32, - sample_size: int = 32, - scaling_factor: float = 0.18215, - force_upcast: float = True, - attention: bool = True, - temporal_scale_num: int = 2, - slicing_up_num: int = 0, - gradient_checkpoint: bool = False, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "full", - slicing_sample_min_size: int = 32, - use_quant_conv: bool = True, - use_post_quant_conv: bool = True, - *args, - **kwargs, - ): - extra_cond_dim = kwargs.pop("extra_cond_dim") if "extra_cond_dim" in kwargs else None - self.slicing_sample_min_size = slicing_sample_min_size - self.slicing_latent_min_size = slicing_sample_min_size // (2**temporal_scale_num) - - super().__init__( - in_channels=in_channels, - out_channels=out_channels, - # [Override] make sure it can be normally initialized - down_block_types=tuple( - [down_block_type.replace("3D", "2D") for down_block_type in down_block_types] - ), - up_block_types=tuple( - [up_block_type.replace("3D", "2D") for up_block_type in up_block_types] - ), - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - latent_channels=latent_channels, - norm_num_groups=norm_num_groups, - sample_size=sample_size, - scaling_factor=scaling_factor, - force_upcast=force_upcast, - *args, - **kwargs, - ) - - # pass init params to Encoder - self.encoder = Encoder3D( - in_channels=in_channels, - out_channels=latent_channels, - down_block_types=down_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - double_z=True, - extra_cond_dim=extra_cond_dim, - # [Override] add temporal_down_num parameter - temporal_down_num=temporal_scale_num, - gradient_checkpoint=gradient_checkpoint, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - # pass init params to Decoder - self.decoder = Decoder3D( - in_channels=latent_channels, - out_channels=out_channels, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - norm_num_groups=norm_num_groups, - act_fn=act_fn, - # [Override] add temporal_up_num parameter - temporal_up_num=temporal_scale_num, - slicing_up_num=slicing_up_num, - gradient_checkpoint=gradient_checkpoint, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - self.quant_conv = ( - init_causal_conv3d( - in_channels=2 * latent_channels, - out_channels=2 * latent_channels, - kernel_size=1, - inflation_mode=inflation_mode, - ) - if use_quant_conv - else None - ) - self.post_quant_conv = ( - init_causal_conv3d( - in_channels=latent_channels, - out_channels=latent_channels, - kernel_size=1, - inflation_mode=inflation_mode, - ) - if use_post_quant_conv - else None - ) - - # A hacky way to remove attention. 
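# Storing [None] keeps the ModuleList length unchanged; UNetMidBlock3D.forward
# already skips attention entries that are None.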
- if not attention: - self.encoder.mid_block.attentions = torch.nn.ModuleList([None]) - self.decoder.mid_block.attentions = torch.nn.ModuleList([None]) - - @apply_forward_hook - def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> AutoencoderKLOutput: - h = self.slicing_encode(x) - posterior = DiagonalGaussianDistribution(h) - - if not return_dict: - return (posterior,) - - return AutoencoderKLOutput(latent_dist=posterior) - - @apply_forward_hook - def decode( - self, z: torch.Tensor, return_dict: bool = True - ) -> Union[DecoderOutput, torch.Tensor]: - decoded = self.slicing_decode(z) - - if not return_dict: - return (decoded,) - - return DecoderOutput(sample=decoded) - - def _encode( - self, x: torch.Tensor, memory_state: MemoryState = MemoryState.DISABLED - ) -> torch.Tensor: - _x = x.to(self.device) - _x = causal_conv_slice_inputs(_x, self.slicing_sample_min_size, memory_state=memory_state) - h = self.encoder(_x, memory_state=memory_state) - if self.quant_conv is not None: - output = self.quant_conv(h, memory_state=memory_state) - else: - output = h - output = causal_conv_gather_outputs(output) - return output.to(x.device) - - def _decode( - self, z: torch.Tensor, memory_state: MemoryState = MemoryState.DISABLED - ) -> torch.Tensor: - _z = z.to(self.device) - _z = causal_conv_slice_inputs(_z, self.slicing_latent_min_size, memory_state=memory_state) - if self.post_quant_conv is not None: - _z = self.post_quant_conv(_z, memory_state=memory_state) - output = self.decoder(_z, memory_state=memory_state) - output = causal_conv_gather_outputs(output) - return output.to(z.device) - - def slicing_encode(self, x: torch.Tensor) -> torch.Tensor: - sp_size = get_sequence_parallel_world_size() - if self.use_slicing and (x.shape[2] - 1) > self.slicing_sample_min_size * sp_size: - x_slices = x[:, :, 1:].split(split_size=self.slicing_sample_min_size * sp_size, dim=2) - encoded_slices = [ - self._encode( - torch.cat((x[:, :, :1], x_slices[0]), dim=2), - memory_state=MemoryState.INITIALIZING, - ) - ] - for x_idx in range(1, len(x_slices)): - encoded_slices.append( - self._encode(x_slices[x_idx], memory_state=MemoryState.ACTIVE) - ) - return torch.cat(encoded_slices, dim=2) - else: - return self._encode(x) - - def slicing_decode(self, z: torch.Tensor) -> torch.Tensor: - sp_size = get_sequence_parallel_world_size() - if self.use_slicing and (z.shape[2] - 1) > self.slicing_latent_min_size * sp_size: - z_slices = z[:, :, 1:].split(split_size=self.slicing_latent_min_size * sp_size, dim=2) - decoded_slices = [ - self._decode( - torch.cat((z[:, :, :1], z_slices[0]), dim=2), - memory_state=MemoryState.INITIALIZING, - ) - ] - for z_idx in range(1, len(z_slices)): - decoded_slices.append( - self._decode(z_slices[z_idx], memory_state=MemoryState.ACTIVE) - ) - return torch.cat(decoded_slices, dim=2) - else: - return self._decode(z) - - def tiled_encode(self, x: torch.Tensor, **kwargs) -> torch.Tensor: - raise NotImplementedError - - def tiled_decode(self, z: torch.Tensor, **kwargs) -> torch.Tensor: - raise NotImplementedError - - def forward( - self, x: torch.FloatTensor, mode: Literal["encode", "decode", "all"] = "all", **kwargs - ): - # x: [b c t h w] - if mode == "encode": - h = self.encode(x) - return h.latent_dist - elif mode == "decode": - h = self.decode(x) - return h.sample - else: - h = self.encode(x) - h = self.decode(h.latent_dist.mode()) - return h.sample - - def load_state_dict(self, state_dict, strict=False): - # Newer version of diffusers changed the model keys, - # causing 
incompatibility with old checkpoints. - # They provided a method for conversion. - # We call conversion before loading state_dict. - convert_deprecated_attention_blocks = getattr( - self, "_convert_deprecated_attention_blocks", None - ) - if callable(convert_deprecated_attention_blocks): - convert_deprecated_attention_blocks(state_dict) - return super().load_state_dict(state_dict, strict) - - -class VideoAutoencoderKLWrapper(VideoAutoencoderKL): - def __init__( - self, - *args, - spatial_downsample_factor: int, - temporal_downsample_factor: int, - freeze_encoder: bool, - **kwargs, - ): - self.spatial_downsample_factor = spatial_downsample_factor - self.temporal_downsample_factor = temporal_downsample_factor - self.freeze_encoder = freeze_encoder - super().__init__(*args, **kwargs) - - def forward(self, x: torch.FloatTensor) -> CausalAutoencoderOutput: - with torch.no_grad() if self.freeze_encoder else nullcontext(): - z, p = self.encode(x) - x = self.decode(z).sample - return CausalAutoencoderOutput(x, z, p) - - def encode(self, x: torch.FloatTensor) -> CausalEncoderOutput: - if x.ndim == 4: - x = x.unsqueeze(2) - p = super().encode(x).latent_dist - z = p.sample().squeeze(2) - return CausalEncoderOutput(z, p) - - def decode(self, z: torch.FloatTensor) -> CausalDecoderOutput: - if z.ndim == 4: - z = z.unsqueeze(2) - x = super().decode(z).sample.squeeze(2) - return CausalDecoderOutput(x) - - def preprocess(self, x: torch.Tensor): - # x should in [B, C, T, H, W], [B, C, H, W] - assert x.ndim == 4 or x.size(2) % 4 == 1 - return x - - def postprocess(self, x: torch.Tensor): - # x should in [B, C, T, H, W], [B, C, H, W] - return x - - def set_causal_slicing( - self, - *, - split_size: Optional[int], - memory_device: _memory_device_t, - ): - assert ( - split_size is None or memory_device is not None - ), "if split_size is set, memory_device must not be None." - if split_size is not None: - self.enable_slicing() - self.slicing_sample_min_size = split_size - self.slicing_latent_min_size = split_size // self.temporal_downsample_factor - else: - self.disable_slicing() - for module in self.modules(): - if isinstance(module, InflatedCausalConv3d): - module.set_memory_device(memory_device) - - def set_memory_limit(self, conv_max_mem: Optional[float], norm_max_mem: Optional[float]): - set_norm_limit(norm_max_mem) - for m in self.modules(): - if isinstance(m, InflatedCausalConv3d): - m.set_memory_limit(conv_max_mem if conv_max_mem is not None else float("inf")) diff --git a/models_x/video_vae_v3/modules/causal_inflation_lib.py b/models_x/video_vae_v3/modules/causal_inflation_lib.py deleted file mode 100644 index fdd3cbe2512b119d76729c4103325ac22e0b12fe..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/causal_inflation_lib.py +++ /dev/null @@ -1,460 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
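# A minimal, self-contained sketch of the "tail" weight inflation implemented
# below (see inflate_weight / InflatedCausalConv3d). Shapes are illustrative
# assumptions, and zero left-padding stands in for the module's first-frame
# replication; with tail-inflated weights the taps on the padded frames are
# zero, so the causal 3-D conv reproduces the 2-D conv frame by frame.
import torch
import torch.nn.functional as F

demo_conv2d = torch.nn.Conv2d(4, 8, kernel_size=3, padding=1)
demo_w3d = torch.zeros(8, 4, 3, 3, 3)
demo_w3d[:, :, -1] = demo_conv2d.weight.data      # "tail": 2-D weights go into the last temporal tap
demo_x = torch.randn(1, 4, 5, 16, 16)             # [B, C, T, H, W]
demo_xp = F.pad(demo_x, (1, 1, 1, 1, 2, 0))       # spatial pad 1, causal temporal pad (2, 0)
demo_y3d = F.conv3d(demo_xp, demo_w3d, demo_conv2d.bias.data)
demo_y2d = torch.stack([demo_conv2d(demo_x[:, :, t]) for t in range(5)], dim=2)
assert torch.allclose(demo_y3d, demo_y2d, atol=1e-5)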
- -import math -from contextlib import contextmanager -from typing import List, Optional, Union -import torch -import torch.distributed as dist -import torch.nn.functional as F -from diffusers.models.normalization import RMSNorm -from einops import rearrange -from torch import Tensor, nn -from torch.nn import Conv3d - -from common.distributed.advanced import ( - get_next_sequence_parallel_rank, - get_prev_sequence_parallel_rank, - get_sequence_parallel_group, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, -) -from common.logger import get_logger -from models.video_vae_v3.modules.context_parallel_lib import cache_send_recv, get_cache_size -from models.video_vae_v3.modules.global_config import get_norm_limit -from models.video_vae_v3.modules.types import MemoryState, _inflation_mode_t, _memory_device_t - -logger = get_logger(__name__) - - -@contextmanager -def ignore_padding(model): - orig_padding = model.padding - model.padding = (0, 0, 0) - try: - yield - finally: - model.padding = orig_padding - - -class InflatedCausalConv3d(Conv3d): - def __init__( - self, - *args, - inflation_mode: _inflation_mode_t, - memory_device: _memory_device_t = "same", - **kwargs, - ): - self.inflation_mode = inflation_mode - self.memory = None - super().__init__(*args, **kwargs) - self.temporal_padding = self.padding[0] - self.memory_device = memory_device - self.padding = (0, *self.padding[1:]) # Remove temporal pad to keep causal. - self.memory_limit = float("inf") - - def set_memory_limit(self, value: float): - self.memory_limit = value - - def set_memory_device(self, memory_device: _memory_device_t): - self.memory_device = memory_device - - def memory_limit_conv( - self, - x, - *, - split_dim=3, - padding=(0, 0, 0, 0, 0, 0), - prev_cache=None, - ): - # Compatible with no limit. - if math.isinf(self.memory_limit): - if prev_cache is not None: - x = torch.cat([prev_cache, x], dim=split_dim - 1) - return super().forward(x) - - # Compute tensor shape after concat & padding. - shape = torch.tensor(x.size()) - if prev_cache is not None: - shape[split_dim - 1] += prev_cache.size(split_dim - 1) - shape[-3:] += torch.tensor(padding).view(3, 2).sum(-1).flip(0) - memory_occupy = shape.prod() * x.element_size() / 1024**3 # GiB - logger.debug( - f"x:{(shape, x.dtype)} {memory_occupy:.3f}GiB " - f"prev_cache:{prev_cache.shape if prev_cache is not None else None}" - ) - if memory_occupy < self.memory_limit or split_dim == x.ndim: - if prev_cache is not None: - x = torch.cat([prev_cache, x], dim=split_dim - 1) - x = F.pad(x, padding, value=0.0) - with ignore_padding(self): - return super().forward(x) - - logger.debug( - f"Exceed memory limit {memory_occupy} > {self.memory_limit}, split dim {split_dim}" - ) - - # Split input (& prev_cache). - num_splits = math.ceil(memory_occupy / self.memory_limit) - size_per_split = x.size(split_dim) // num_splits - split_sizes = [size_per_split] * (num_splits - 1) - split_sizes += [x.size(split_dim) - sum(split_sizes)] - - x = list(x.split(split_sizes, dim=split_dim)) - logger.debug(f"Conv inputs: {[inp.size() for inp in x]} {x[0].dtype}") - if prev_cache is not None: - prev_cache = list(prev_cache.split(split_sizes, dim=split_dim)) - - # Loop Fwd. - cache = None - for idx in range(len(x)): - # Concat prev cache from last dim - if prev_cache is not None: - x[idx] = torch.cat([prev_cache[idx], x[idx]], dim=split_dim - 1) - - # Get padding pattern. 
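# F.pad orders (left, right) pairs from the last dimension backwards, so the
# pair belonging to `split_dim` sits at offsets lpad_dim / rpad_dim; the first
# slice keeps the left pad and the last slice keeps the right pad on that axis.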
- lpad_dim = (x[idx].ndim - split_dim - 1) * 2 - rpad_dim = lpad_dim + 1 - padding = list(padding) - padding[lpad_dim] = self.padding[split_dim - 2] if idx == 0 else 0 - padding[rpad_dim] = self.padding[split_dim - 2] if idx == len(x) - 1 else 0 - pad_len = padding[lpad_dim] + padding[rpad_dim] - padding = tuple(padding) - - # Prepare cache for next slice (this dim). - next_cache = None - cache_len = cache.size(split_dim) if cache is not None else 0 - next_catch_size = get_cache_size( - conv_module=self, - input_len=x[idx].size(split_dim) + cache_len, - pad_len=pad_len, - dim=split_dim - 2, - ) - if next_catch_size != 0: - assert next_catch_size <= x[idx].size(split_dim) - next_cache = ( - x[idx].transpose(0, split_dim)[-next_catch_size:].transpose(0, split_dim) - ) - - # Recursive. - x[idx] = self.memory_limit_conv( - x[idx], - split_dim=split_dim + 1, - padding=padding, - prev_cache=cache, - ) - - # Update cache. - cache = next_cache - - logger.debug(f"Conv outputs, concat(dim={split_dim}): {[d.size() for d in x]}") - return torch.cat(x, split_dim) - - def forward( - self, - input: Union[Tensor, List[Tensor]], - memory_state: MemoryState = MemoryState.UNSET, - ) -> Tensor: - assert memory_state != MemoryState.UNSET - if memory_state != MemoryState.ACTIVE: - self.memory = None - if ( - math.isinf(self.memory_limit) - and torch.is_tensor(input) - and get_sequence_parallel_group() is None - ): - return self.basic_forward(input, memory_state) - return self.slicing_forward(input, memory_state) - - def basic_forward(self, input: Tensor, memory_state: MemoryState = MemoryState.UNSET): - mem_size = self.stride[0] - self.kernel_size[0] - if (self.memory is not None) and (memory_state == MemoryState.ACTIVE): - input = extend_head(input, memory=self.memory, times=-1) - else: - input = extend_head(input, times=self.temporal_padding * 2) - memory = ( - input[:, :, mem_size:].detach() - if (mem_size != 0 and memory_state != MemoryState.DISABLED) - else None - ) - if ( - memory_state != MemoryState.DISABLED - and not self.training - and (self.memory_device is not None) - ): - self.memory = memory - if self.memory_device == "cpu" and self.memory is not None: - self.memory = self.memory.to("cpu") - return super().forward(input) - - def slicing_forward( - self, - input: Union[Tensor, List[Tensor]], - memory_state: MemoryState = MemoryState.UNSET, - ) -> Tensor: - squeeze_out = False - if torch.is_tensor(input): - input = [input] - squeeze_out = True - - cache_size = self.kernel_size[0] - self.stride[0] - cache = cache_send_recv( - input, cache_size=cache_size, memory=self.memory, times=self.temporal_padding * 2 - ) - - # For slice=4 and sp=2, and 17 frames in total - # sp0 sp1 - # slice 0: [`0 0` 0 1 2 {3 4}] [{3 4} 5 6 (7 8)] extend=`0 0` cache={3 4} memory=(7 8) - # slice 1: [(7 8) 9 10 {11 12}] [{11 12} 13 14 15 16] - sp_rank = get_sequence_parallel_rank() - sp_size = get_sequence_parallel_world_size() - sp_group = get_sequence_parallel_group() - send_dst = get_next_sequence_parallel_rank() - recv_src = get_prev_sequence_parallel_rank() - if ( - memory_state in [MemoryState.INITIALIZING, MemoryState.ACTIVE] # use_slicing - and not self.training - and (self.memory_device is not None) - and sp_rank in [0, sp_size - 1] - and cache_size != 0 - ): - if cache_size > input[-1].size(2) and cache is not None and len(input) == 1: - input[0] = torch.cat([cache, input[0]], dim=2) - cache = None - assert cache_size <= input[-1].size(2) - if sp_size == 1: - self.memory = input[-1][:, :, 
-cache_size:].detach().contiguous() - else: - if sp_rank == sp_size - 1: - dist.send( - input[-1][:, :, -cache_size:].detach().contiguous(), - send_dst, - group=sp_group, - ) - if sp_rank == 0: - shape = list(input[0].size()) - shape[2] = cache_size - self.memory = torch.empty( - *shape, device=input[0].device, dtype=input[0].dtype - ).contiguous() - dist.recv(self.memory, recv_src, group=sp_group) - if self.memory_device == "cpu" and self.memory is not None: - self.memory = self.memory.to("cpu") - - padding = tuple(x for x in reversed(self.padding) for _ in range(2)) - for i in range(len(input)): - # Prepare cache for next input slice. - next_cache = None - cache_size = 0 - if i < len(input) - 1: - cache_len = cache.size(2) if cache is not None else 0 - cache_size = get_cache_size(self, input[i].size(2) + cache_len, pad_len=0) - if cache_size != 0: - if cache_size > input[i].size(2) and cache is not None: - input[i] = torch.cat([cache, input[i]], dim=2) - cache = None - assert cache_size <= input[i].size(2), f"{cache_size} > {input[i].size(2)}" - next_cache = input[i][:, :, -cache_size:] - - # Conv forward for this input slice. - input[i] = self.memory_limit_conv( - input[i], - padding=padding, - prev_cache=cache, - ) - - # Update cache. - cache = next_cache - - return input[0] if squeeze_out else input - - def tflops(self, args, kwargs, output) -> float: - if torch.is_tensor(output): - output_numel = output.numel() - elif isinstance(output, list): - output_numel = sum(o.numel() for o in output) - else: - raise NotImplementedError - return (2 * math.prod(self.kernel_size) * self.in_channels * (output_numel / 1e6)) / 1e6 - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - if self.inflation_mode != "none": - state_dict = modify_state_dict( - self, - state_dict, - prefix, - inflate_weight_fn=inflate_weight, - inflate_bias_fn=inflate_bias, - ) - super()._load_from_state_dict( - state_dict, - prefix, - local_metadata, - (strict and self.inflation_mode == "none"), - missing_keys, - unexpected_keys, - error_msgs, - ) - - -def init_causal_conv3d( - *args, - inflation_mode: _inflation_mode_t, - **kwargs, -): - """ - Initialize a Causal-3D convolution layer. - Parameters: - inflation_mode: Listed as below. It's compatible with all the 3D-VAE checkpoints we have. - - none: No inflation will be conducted. - The loading logic of state dict will fall back to default. - - tail / replicate: Refer to the definition of `InflatedCausalConv3d`. 
- """ - return InflatedCausalConv3d(*args, inflation_mode=inflation_mode, **kwargs) - - -def causal_norm_wrapper(norm_layer: nn.Module, x: torch.Tensor) -> torch.Tensor: - input_dtype = x.dtype - if isinstance(norm_layer, (nn.LayerNorm, RMSNorm)): - if x.ndim == 4: - x = rearrange(x, "b c h w -> b h w c") - x = norm_layer(x) - x = rearrange(x, "b h w c -> b c h w") - return x.to(input_dtype) - if x.ndim == 5: - x = rearrange(x, "b c t h w -> b t h w c") - x = norm_layer(x) - x = rearrange(x, "b t h w c -> b c t h w") - return x.to(input_dtype) - if isinstance(norm_layer, (nn.GroupNorm, nn.BatchNorm2d, nn.SyncBatchNorm)): - if x.ndim <= 4: - return norm_layer(x).to(input_dtype) - if x.ndim == 5: - t = x.size(2) - x = rearrange(x, "b c t h w -> (b t) c h w") - memory_occupy = x.numel() * x.element_size() / 1024**3 - if isinstance(norm_layer, nn.GroupNorm) and memory_occupy > get_norm_limit(): - num_chunks = min(4 if x.element_size() == 2 else 2, norm_layer.num_groups) - logger.debug(f"large tensor {x.shape}, norm in {num_chunks} chunks") - assert norm_layer.num_groups % num_chunks == 0 - num_groups_per_chunk = norm_layer.num_groups // num_chunks - - x = list(x.chunk(num_chunks, dim=1)) - weights = norm_layer.weight.chunk(num_chunks, dim=0) - biases = norm_layer.bias.chunk(num_chunks, dim=0) - for i, (w, b) in enumerate(zip(weights, biases)): - x[i] = F.group_norm(x[i], num_groups_per_chunk, w, b, norm_layer.eps) - x[i] = x[i].to(input_dtype) - x = torch.cat(x, dim=1) - else: - x = norm_layer(x) - x = rearrange(x, "(b t) c h w -> b c t h w", t=t) - return x.to(input_dtype) - raise NotImplementedError - - -def remove_head(tensor: Tensor, times: int = 1) -> Tensor: - """ - Remove duplicated first frame features in the up-sampling process. - """ - sp_rank = get_sequence_parallel_rank() - if times == 0 or sp_rank > 0: - return tensor - return torch.cat(tensors=(tensor[:, :, :1], tensor[:, :, times + 1 :]), dim=2) - - -def extend_head(tensor: Tensor, times: int = 2, memory: Optional[Tensor] = None) -> Tensor: - """ - When memory is None: - - Duplicate first frame features in the down-sampling process. - When memory is not None: - - Concatenate memory features with the input features to keep temporal consistency. - """ - if memory is not None: - return torch.cat((memory.to(tensor), tensor), dim=2) - assert times >= 0, "Invalid input for function 'extend_head'!" - if times == 0: - return tensor - else: - tile_repeat = [1] * tensor.ndim - tile_repeat[2] = times - return torch.cat(tensors=(torch.tile(tensor[:, :, :1], tile_repeat), tensor), dim=2) - - -def inflate_weight(weight_2d: torch.Tensor, weight_3d: torch.Tensor, inflation_mode: str): - """ - Inflate a 2D convolution weight matrix to a 3D one. - Parameters: - weight_2d: The weight matrix of 2D conv to be inflated. - weight_3d: The weight matrix of 3D conv to be initialized. - inflation_mode: the mode of inflation - """ - assert inflation_mode in ["tail", "replicate"] - assert weight_3d.shape[:2] == weight_2d.shape[:2] - with torch.no_grad(): - if inflation_mode == "replicate": - depth = weight_3d.size(2) - weight_3d.copy_(weight_2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth) - else: - weight_3d.fill_(0.0) - weight_3d[:, :, -1].copy_(weight_2d) - return weight_3d - - -def inflate_bias(bias_2d: torch.Tensor, bias_3d: torch.Tensor, inflation_mode: str): - """ - Inflate a 2D convolution bias tensor to a 3D one - Parameters: - bias_2d: The bias tensor of 2D conv to be inflated. - bias_3d: The bias tensor of 3D conv to be initialized. 
- inflation_mode: Placeholder to align `inflate_weight`. - """ - assert bias_3d.shape == bias_2d.shape - with torch.no_grad(): - bias_3d.copy_(bias_2d) - return bias_3d - - -def modify_state_dict(layer, state_dict, prefix, inflate_weight_fn, inflate_bias_fn): - """ - the main function to inflated 2D parameters to 3D. - """ - weight_name = prefix + "weight" - bias_name = prefix + "bias" - if weight_name in state_dict: - weight_2d = state_dict[weight_name] - if weight_2d.dim() == 4: - # Assuming the 2D weights are 4D tensors (out_channels, in_channels, h, w) - weight_3d = inflate_weight_fn( - weight_2d=weight_2d, - weight_3d=layer.weight, - inflation_mode=layer.inflation_mode, - ) - state_dict[weight_name] = weight_3d - else: - return state_dict - # It's a 3d state dict, should not do inflation on both bias and weight. - if bias_name in state_dict: - bias_2d = state_dict[bias_name] - if bias_2d.dim() == 1: - # Assuming the 2D biases are 1D tensors (out_channels,) - bias_3d = inflate_bias_fn( - bias_2d=bias_2d, - bias_3d=layer.bias, - inflation_mode=layer.inflation_mode, - ) - state_dict[bias_name] = bias_3d - return state_dict diff --git a/models_x/video_vae_v3/modules/context_parallel_lib.py b/models_x/video_vae_v3/modules/context_parallel_lib.py deleted file mode 100644 index 55cfe481ee2ade7434166bfda0b83589b423137c..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/context_parallel_lib.py +++ /dev/null @@ -1,164 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import List -import torch -import torch.distributed as dist -import torch.nn.functional as F -from torch import Tensor - -from common.distributed import get_device -from common.distributed.advanced import ( - get_next_sequence_parallel_rank, - get_prev_sequence_parallel_rank, - get_sequence_parallel_group, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, -) -from common.distributed.ops import Gather -from common.logger import get_logger -from models.video_vae_v3.modules.types import MemoryState - -logger = get_logger(__name__) - - -def causal_conv_slice_inputs(x, split_size, memory_state): - sp_size = get_sequence_parallel_world_size() - sp_group = get_sequence_parallel_group() - sp_rank = get_sequence_parallel_rank() - if sp_group is None: - return x - - assert memory_state != MemoryState.UNSET - leave_out = 1 if memory_state != MemoryState.ACTIVE else 0 - - # Should have at least sp_size slices. 
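# `leave_out` accounts for the extra leading frame that only the first
# (INITIALIZING) clip carries, so the frame count is cut into `split_size`-frame
# slices (plus a remainder) that are then grouped across sequence-parallel ranks.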
- num_slices = (x.size(2) - leave_out) // split_size - assert num_slices >= sp_size, f"{num_slices} < {sp_size}" - - split_sizes = [split_size + leave_out] + [split_size] * (num_slices - 1) - split_sizes += [x.size(2) - sum(split_sizes)] - assert sum(split_sizes) == x.size(2) - - split_sizes = torch.tensor(split_sizes) - slices_per_rank = len(split_sizes) // sp_size - split_sizes = split_sizes.split( - [slices_per_rank] * (sp_size - 1) + [len(split_sizes) - slices_per_rank * (sp_size - 1)] - ) - split_sizes = list(map(lambda s: s.sum().item(), split_sizes)) - logger.debug(f"split_sizes: {split_sizes}") - return x.split(split_sizes, dim=2)[sp_rank] - - -def causal_conv_gather_outputs(x): - sp_group = get_sequence_parallel_group() - sp_size = get_sequence_parallel_world_size() - if sp_group is None: - return x - - # Communicate shapes. - unpad_lens = torch.empty((sp_size,), device=get_device(), dtype=torch.long) - local_unpad_len = torch.tensor([x.size(2)], device=get_device(), dtype=torch.long) - torch.distributed.all_gather_into_tensor(unpad_lens, local_unpad_len, group=sp_group) - - # Padding to max_len for gather. - max_len = unpad_lens.max() - x_pad = F.pad(x, (0, 0, 0, 0, 0, max_len - x.size(2))).contiguous() - - # Gather outputs. - x_pad = Gather.apply(sp_group, x_pad, 2, True) - - # Remove padding. - x_pad_lists = list(x_pad.chunk(sp_size, dim=2)) - for i, (x_pad, unpad_len) in enumerate(zip(x_pad_lists, unpad_lens)): - x_pad_lists[i] = x_pad[:, :, :unpad_len] - - return torch.cat(x_pad_lists, dim=2) - - -def get_output_len(conv_module, input_len, pad_len, dim=0): - dilated_kernerl_size = conv_module.dilation[dim] * (conv_module.kernel_size[dim] - 1) + 1 - output_len = (input_len + pad_len - dilated_kernerl_size) // conv_module.stride[dim] + 1 - return output_len - - -def get_cache_size(conv_module, input_len, pad_len, dim=0): - dilated_kernerl_size = conv_module.dilation[dim] * (conv_module.kernel_size[dim] - 1) + 1 - output_len = (input_len + pad_len - dilated_kernerl_size) // conv_module.stride[dim] + 1 - remain_len = ( - input_len + pad_len - ((output_len - 1) * conv_module.stride[dim] + dilated_kernerl_size) - ) - overlap_len = dilated_kernerl_size - conv_module.stride[dim] - cache_len = overlap_len + remain_len # >= 0 - logger.debug( - f"I:{input_len}, " - f"P:{pad_len}, " - f"K:{conv_module.kernel_size[dim]}, " - f"S:{conv_module.stride[dim]}, " - f"O:{output_len}, " - f"Cache:{cache_len}" - ) - assert output_len > 0 - return cache_len - - -def cache_send_recv(tensor: List[Tensor], cache_size, times, memory=None): - sp_group = get_sequence_parallel_group() - sp_rank = get_sequence_parallel_rank() - sp_size = get_sequence_parallel_world_size() - send_dst = get_next_sequence_parallel_rank() - recv_src = get_prev_sequence_parallel_rank() - recv_buffer = None - recv_req = None - - logger.debug( - f"[sp{sp_rank}] cur_tensors:{[(t.size(), t.dtype) for t in tensor]}, times: {times}" - ) - if sp_rank == 0 or sp_group is None: - if memory is not None: - recv_buffer = memory.to(tensor[0]) - elif times > 0: - tile_repeat = [1] * tensor[0].ndim - tile_repeat[2] = times - recv_buffer = torch.tile(tensor[0][:, :, :1], tile_repeat) - - if cache_size != 0 and sp_group is not None: - if sp_rank > 0: - shape = list(tensor[0].size()) - shape[2] = cache_size - recv_buffer = torch.empty( - *shape, device=tensor[0].device, dtype=tensor[0].dtype - ).contiguous() - recv_req = dist.irecv(recv_buffer, recv_src, group=sp_group) - if sp_rank < sp_size - 1: - if cache_size > tensor[-1].size(2) and 
len(tensor) == 1: - logger.debug(f"[sp{sp_rank}] force concat before send {tensor[-1].size()}") - if recv_req is not None: - recv_req.wait() - tensor[0] = torch.cat([recv_buffer, tensor[0]], dim=2) - recv_buffer = None - assert cache_size <= tensor[-1].size( - 2 - ), f"Not enough value to cache, got {tensor[-1].size()}, cache_size={cache_size}" - dist.isend( - tensor[-1][:, :, -cache_size:].detach().contiguous(), send_dst, group=sp_group - ) - if recv_req is not None: - recv_req.wait() - - logger.debug( - f"[sp{sp_rank}] recv_src:{recv_src}, " - f"recv_buffer:{recv_buffer.size() if recv_buffer is not None else None}" - ) - return recv_buffer diff --git a/models_x/video_vae_v3/modules/global_config.py b/models_x/video_vae_v3/modules/global_config.py deleted file mode 100644 index 863117570a8aadde38b8eae8f1aa16480cd9f7ca..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/global_config.py +++ /dev/null @@ -1,28 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from typing import Optional - -_NORM_LIMIT = float("inf") - - -def get_norm_limit(): - return _NORM_LIMIT - - -def set_norm_limit(value: Optional[float] = None): - global _NORM_LIMIT - if value is None: - value = float("inf") - _NORM_LIMIT = value diff --git a/models_x/video_vae_v3/modules/inflated_layers.py b/models_x/video_vae_v3/modules/inflated_layers.py deleted file mode 100644 index 8dfa4841a4ba3e4f758831396497b246614c7bb5..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/inflated_layers.py +++ /dev/null @@ -1,106 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
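# A quick, self-contained check of the assumption behind the memory-capped
# GroupNorm path above (causal_norm_wrapper together with set_norm_limit):
# GroupNorm statistics never cross group boundaries, so normalising channel
# chunks with proportionally fewer groups matches one full call. Sizes below
# are arbitrary.
import torch
import torch.nn.functional as F

demo_norm = torch.nn.GroupNorm(num_groups=8, num_channels=32, eps=1e-6)
demo_x = torch.randn(2, 32, 4, 4)
demo_ref = demo_norm(demo_x)
demo_out = torch.cat(
    [
        F.group_norm(c, 4, w, b, demo_norm.eps)   # 2 chunks x 4 groups == 8 groups
        for c, w, b in zip(
            demo_x.chunk(2, dim=1), demo_norm.weight.chunk(2), demo_norm.bias.chunk(2)
        )
    ],
    dim=1,
)
assert torch.allclose(demo_ref, demo_out, atol=1e-6)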
- -from functools import partial -from typing import Literal, Optional -from torch import Tensor -from torch.nn import Conv3d - -from models.video_vae_v3.modules.inflated_lib import ( - MemoryState, - extend_head, - inflate_bias, - inflate_weight, - modify_state_dict, -) - -_inflation_mode_t = Literal["none", "tail", "replicate"] -_memory_device_t = Optional[Literal["cpu", "same"]] - - -class InflatedCausalConv3d(Conv3d): - def __init__( - self, - *args, - inflation_mode: _inflation_mode_t, - memory_device: _memory_device_t = "same", - **kwargs, - ): - self.inflation_mode = inflation_mode - self.memory = None - super().__init__(*args, **kwargs) - self.temporal_padding = self.padding[0] - self.memory_device = memory_device - self.padding = (0, *self.padding[1:]) # Remove temporal pad to keep causal. - - def set_memory_device(self, memory_device: _memory_device_t): - self.memory_device = memory_device - - def forward(self, input: Tensor, memory_state: MemoryState = MemoryState.DISABLED) -> Tensor: - mem_size = self.stride[0] - self.kernel_size[0] - if (self.memory is not None) and (memory_state == MemoryState.ACTIVE): - input = extend_head(input, memory=self.memory) - else: - input = extend_head(input, times=self.temporal_padding * 2) - memory = ( - input[:, :, mem_size:].detach() - if (mem_size != 0 and memory_state != MemoryState.DISABLED) - else None - ) - if ( - memory_state != MemoryState.DISABLED - and not self.training - and (self.memory_device is not None) - ): - self.memory = memory - if self.memory_device == "cpu" and self.memory is not None: - self.memory = self.memory.to("cpu") - return super().forward(input) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - if self.inflation_mode != "none": - state_dict = modify_state_dict( - self, - state_dict, - prefix, - inflate_weight_fn=partial(inflate_weight, position="tail"), - inflate_bias_fn=partial(inflate_bias, position="tail"), - ) - super()._load_from_state_dict( - state_dict, - prefix, - local_metadata, - (strict and self.inflation_mode == "none"), - missing_keys, - unexpected_keys, - error_msgs, - ) - - -def init_causal_conv3d( - *args, - inflation_mode: _inflation_mode_t, - **kwargs, -): - """ - Initialize a Causal-3D convolution layer. - Parameters: - inflation_mode: Listed as below. It's compatible with all the 3D-VAE checkpoints we have. - - none: No inflation will be conducted. - The loading logic of state dict will fall back to default. - - tail / replicate: Refer to the definition of `InflatedCausalConv3d`. - """ - return InflatedCausalConv3d(*args, inflation_mode=inflation_mode, **kwargs) diff --git a/models_x/video_vae_v3/modules/inflated_lib.py b/models_x/video_vae_v3/modules/inflated_lib.py deleted file mode 100644 index cbdaf3138bb5994c4702185426f854a4660cc6a4..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/inflated_lib.py +++ /dev/null @@ -1,156 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# // See the License for the specific language governing permissions and -# // limitations under the License. - -from enum import Enum -from typing import Optional -import numpy as np -import torch -from diffusers.models.normalization import RMSNorm -from einops import rearrange -from torch import Tensor, nn - -from common.logger import get_logger - -logger = get_logger(__name__) - - -class MemoryState(Enum): - """ - State[Disabled]: No memory bank will be enabled. - State[Initializing]: The model is handling the first clip, - need to reset / initialize the memory bank. - State[Active]: There has been some data in the memory bank. - """ - - DISABLED = 0 - INITIALIZING = 1 - ACTIVE = 2 - - -def causal_norm_wrapper(norm_layer: nn.Module, x: torch.Tensor) -> torch.Tensor: - if isinstance(norm_layer, (nn.LayerNorm, RMSNorm)): - if x.ndim == 4: - x = rearrange(x, "b c h w -> b h w c") - x = norm_layer(x) - x = rearrange(x, "b h w c -> b c h w") - return x - if x.ndim == 5: - x = rearrange(x, "b c t h w -> b t h w c") - x = norm_layer(x) - x = rearrange(x, "b t h w c -> b c t h w") - return x - if isinstance(norm_layer, (nn.GroupNorm, nn.BatchNorm2d, nn.SyncBatchNorm)): - if x.ndim <= 4: - return norm_layer(x) - if x.ndim == 5: - t = x.size(2) - x = rearrange(x, "b c t h w -> (b t) c h w") - x = norm_layer(x) - x = rearrange(x, "(b t) c h w -> b c t h w", t=t) - return x - raise NotImplementedError - - -def remove_head(tensor: Tensor, times: int = 1) -> Tensor: - """ - Remove duplicated first frame features in the up-sampling process. - """ - if times == 0: - return tensor - return torch.cat(tensors=(tensor[:, :, :1], tensor[:, :, times + 1 :]), dim=2) - - -def extend_head( - tensor: Tensor, times: Optional[int] = 2, memory: Optional[Tensor] = None -) -> Tensor: - """ - When memory is None: - - Duplicate first frame features in the down-sampling process. - When memory is not None: - - Concatenate memory features with the input features to keep temporal consistency. - """ - if times == 0: - return tensor - if memory is not None: - return torch.cat((memory.to(tensor), tensor), dim=2) - else: - tile_repeat = np.ones(tensor.ndim).astype(int) - tile_repeat[2] = times - return torch.cat(tensors=(torch.tile(tensor[:, :, :1], list(tile_repeat)), tensor), dim=2) - - -def inflate_weight(weight_2d: torch.Tensor, weight_3d: torch.Tensor, inflation_mode: str): - """ - Inflate a 2D convolution weight matrix to a 3D one. - Parameters: - weight_2d: The weight matrix of 2D conv to be inflated. - weight_3d: The weight matrix of 3D conv to be initialized. - inflation_mode: the mode of inflation - """ - assert inflation_mode in ["constant", "replicate"] - assert weight_3d.shape[:2] == weight_2d.shape[:2] - with torch.no_grad(): - if inflation_mode == "replicate": - depth = weight_3d.size(2) - weight_3d.copy_(weight_2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth) - else: - weight_3d.fill_(0.0) - weight_3d[:, :, -1].copy_(weight_2d) - return weight_3d - - -def inflate_bias(bias_2d: torch.Tensor, bias_3d: torch.Tensor, inflation_mode: str): - """ - Inflate a 2D convolution bias tensor to a 3D one - Parameters: - bias_2d: The bias tensor of 2D conv to be inflated. - bias_3d: The bias tensor of 3D conv to be initialized. - inflation_mode: Placeholder to align `inflate_weight`. 
- """ - assert bias_3d.shape == bias_2d.shape - with torch.no_grad(): - bias_3d.copy_(bias_2d) - return bias_3d - - -def modify_state_dict(layer, state_dict, prefix, inflate_weight_fn, inflate_bias_fn): - """ - the main function to inflated 2D parameters to 3D. - """ - weight_name = prefix + "weight" - bias_name = prefix + "bias" - if weight_name in state_dict: - weight_2d = state_dict[weight_name] - if weight_2d.dim() == 4: - # Assuming the 2D weights are 4D tensors (out_channels, in_channels, h, w) - weight_3d = inflate_weight_fn( - weight_2d=weight_2d, - weight_3d=layer.weight, - inflation_mode=layer.inflation_mode, - ) - state_dict[weight_name] = weight_3d - else: - return state_dict - # It's a 3d state dict, should not do inflation on both bias and weight. - if bias_name in state_dict: - bias_2d = state_dict[bias_name] - if bias_2d.dim() == 1: - # Assuming the 2D biases are 1D tensors (out_channels,) - bias_3d = inflate_bias_fn( - bias_2d=bias_2d, - bias_3d=layer.bias, - inflation_mode=layer.inflation_mode, - ) - state_dict[bias_name] = bias_3d - return state_dict diff --git a/models_x/video_vae_v3/modules/types.py b/models_x/video_vae_v3/modules/types.py deleted file mode 100644 index 5a030d2d284f9535f2a84c1f9befcd3f82d8d9ff..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/types.py +++ /dev/null @@ -1,76 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -from enum import Enum -from typing import Dict, Literal, NamedTuple, Optional -import torch - -_receptive_field_t = Literal["half", "full"] -_inflation_mode_t = Literal["none", "tail", "replicate"] -_memory_device_t = Optional[Literal["cpu", "same"]] -_gradient_checkpointing_t = Optional[Literal["half", "full"]] -_selective_checkpointing_t = Optional[Literal["coarse", "fine"]] - -class DiagonalGaussianDistribution: - def __init__(self, mean: torch.Tensor, logvar: torch.Tensor): - self.mean = mean - self.logvar = torch.clamp(logvar, -30.0, 20.0) - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - - def mode(self) -> torch.Tensor: - return self.mean - - def sample(self) -> torch.FloatTensor: - return self.mean + self.std * torch.randn_like(self.mean) - - def kl(self) -> torch.Tensor: - return 0.5 * torch.sum( - self.mean**2 + self.var - 1.0 - self.logvar, - dim=list(range(1, self.mean.ndim)), - ) - -class MemoryState(Enum): - """ - State[Disabled]: No memory bank will be enabled. - State[Initializing]: The model is handling the first clip, need to reset the memory bank. - State[Active]: There has been some data in the memory bank. - State[Unset]: Error state, indicating users didn't pass correct memory state in. 
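`DiagonalGaussianDistribution` above is the standard VAE posterior: samples are drawn with the reparameterisation trick, and the KL term against a unit Gaussian has the closed form `0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2)` over all non-batch dimensions. A short worked check of those two lines, mirroring the class:

```python
# Worked check of DiagonalGaussianDistribution: reparameterised sampling plus
# the closed-form KL against a standard normal, summed over non-batch dims.
import torch

mean = torch.randn(2, 16, 4, 4)                       # e.g. latent mean (B, C, H, W)
logvar = torch.randn(2, 16, 4, 4).clamp(-30.0, 20.0)  # clamped exactly as in the class
std = torch.exp(0.5 * logvar)

sample = mean + std * torch.randn_like(mean)          # differentiable w.r.t. mean/logvar
kl = 0.5 * torch.sum(mean**2 + logvar.exp() - 1.0 - logvar,
                     dim=list(range(1, mean.ndim)))
print(sample.shape, kl.shape)                         # torch.Size([2, 16, 4, 4]) torch.Size([2])
```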
- """ - - DISABLED = 0 - INITIALIZING = 1 - ACTIVE = 2 - UNSET = 3 - - -class QuantizerOutput(NamedTuple): - latent: torch.Tensor - extra_loss: torch.Tensor - statistics: Dict[str, torch.Tensor] - - -class CausalAutoencoderOutput(NamedTuple): - sample: torch.Tensor - latent: torch.Tensor - posterior: Optional[DiagonalGaussianDistribution] - - -class CausalEncoderOutput(NamedTuple): - latent: torch.Tensor - posterior: Optional[DiagonalGaussianDistribution] - - -class CausalDecoderOutput(NamedTuple): - sample: torch.Tensor diff --git a/models_x/video_vae_v3/modules/video_vae.py b/models_x/video_vae_v3/modules/video_vae.py deleted file mode 100644 index 1b169431c637ba273de7e6a2340c64206746ef28..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/modules/video_vae.py +++ /dev/null @@ -1,955 +0,0 @@ -# Copyright (c) 2023 HuggingFace Team -# Copyright (c) 2025 ByteDance Ltd. and/or its affiliates. -# SPDX-License-Identifier: Apache License, Version 2.0 (the "License") -# -# This file has been modified by ByteDance Ltd. and/or its affiliates. on 1st June 2025 -# -# Original file was released under Apache License, Version 2.0 (the "License"), with the full license text -# available at http://www.apache.org/licenses/LICENSE-2.0. -# -# This modified file is released under the same license. - -from contextlib import nullcontext -from typing import Optional, Tuple, Literal, Callable, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from diffusers.models.autoencoders.vae import DiagonalGaussianDistribution -from einops import rearrange - -from common.distributed.advanced import get_sequence_parallel_world_size -from common.logger import get_logger -from models.video_vae_v3.modules.causal_inflation_lib import ( - InflatedCausalConv3d, - causal_norm_wrapper, - init_causal_conv3d, - remove_head, -) -from models.video_vae_v3.modules.context_parallel_lib import ( - causal_conv_gather_outputs, - causal_conv_slice_inputs, -) -from models.video_vae_v3.modules.global_config import set_norm_limit -from models.video_vae_v3.modules.types import ( - CausalAutoencoderOutput, - CausalDecoderOutput, - CausalEncoderOutput, - MemoryState, - _inflation_mode_t, - _memory_device_t, - _receptive_field_t, - _selective_checkpointing_t, -) - -logger = get_logger(__name__) # pylint: disable=invalid-name - -# Fake func, no checkpointing is required for inference -def gradient_checkpointing(module: Union[Callable, nn.Module], *args, enabled: bool, **kwargs): - return module(*args, **kwargs) - -class ResnetBlock2D(nn.Module): - r""" - A Resnet block. - - Parameters: - in_channels (`int`): The number of channels in the input. - out_channels (`int`, *optional*, default to be `None`): - The number of output channels for the first conv2d layer. - If None, same as `in_channels`. - dropout (`float`, *optional*, defaults to `0.0`): The dropout probability to use. 
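The `gradient_checkpointing` helper above is deliberately a pass-through, since no activation checkpointing is needed at inference time. For context, the same call-site contract could dispatch to PyTorch's standard checkpointing API during training; the sketch below shows that pattern and is an assumption, not this repository's training code.

```python
# Sketch only: how the enabled flag could switch on real activation
# checkpointing with torch.utils.checkpoint (not the repo's training code).
import torch
from torch.utils.checkpoint import checkpoint

def gradient_checkpointing(module, *args, enabled: bool, **kwargs):
    if enabled:
        # Recompute `module`'s activations in the backward pass to save memory.
        return checkpoint(module, *args, use_reentrant=False, **kwargs)
    return module(*args, **kwargs)

layer = torch.nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)
y = gradient_checkpointing(layer, x, enabled=True)   # same output, lower peak memory
y.sum().backward()
```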
- """ - - def __init__( - self, *, in_channels: int, out_channels: Optional[int] = None, dropout: float = 0.0 - ): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - - self.nonlinearity = nn.SiLU() - - self.norm1 = torch.nn.GroupNorm( - num_groups=32, num_channels=in_channels, eps=1e-6, affine=True - ) - - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - - self.norm2 = torch.nn.GroupNorm( - num_groups=32, num_channels=out_channels, eps=1e-6, affine=True - ) - - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - - self.use_in_shortcut = self.in_channels != out_channels - - self.conv_shortcut = None - if self.use_in_shortcut: - self.conv_shortcut = nn.Conv2d( - in_channels, out_channels, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, input_tensor: torch.Tensor) -> torch.Tensor: - hidden = input_tensor - - hidden = self.norm1(hidden) - hidden = self.nonlinearity(hidden) - hidden = self.conv1(hidden) - - hidden = self.norm2(hidden) - hidden = self.nonlinearity(hidden) - hidden = self.dropout(hidden) - hidden = self.conv2(hidden) - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = input_tensor + hidden - - return output_tensor - -class Upsample3D(nn.Module): - """A 3D upsampling layer.""" - - def __init__( - self, - channels: int, - inflation_mode: _inflation_mode_t = "tail", - temporal_up: bool = False, - spatial_up: bool = True, - slicing: bool = False, - ): - super().__init__() - self.channels = channels - self.conv = init_causal_conv3d( - self.channels, self.channels, kernel_size=3, padding=1, inflation_mode=inflation_mode - ) - - self.temporal_up = temporal_up - self.spatial_up = spatial_up - self.temporal_ratio = 2 if temporal_up else 1 - self.spatial_ratio = 2 if spatial_up else 1 - self.slicing = slicing - - upscale_ratio = (self.spatial_ratio**2) * self.temporal_ratio - self.upscale_conv = nn.Conv3d( - self.channels, self.channels * upscale_ratio, kernel_size=1, padding=0 - ) - identity = ( - torch.eye(self.channels).repeat(upscale_ratio, 1).reshape_as(self.upscale_conv.weight) - ) - - self.upscale_conv.weight.data.copy_(identity) - nn.init.zeros_(self.upscale_conv.bias) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.FloatTensor, - memory_state: MemoryState, - ) -> torch.FloatTensor: - return gradient_checkpointing( - self.custom_forward, - hidden_states, - memory_state, - enabled=self.training and self.gradient_checkpointing, - ) - - def custom_forward( - self, - hidden_states: torch.FloatTensor, - memory_state: MemoryState, - ) -> torch.FloatTensor: - assert hidden_states.shape[1] == self.channels - - if self.slicing: - split_size = hidden_states.size(2) // 2 - hidden_states = list( - hidden_states.split([split_size, hidden_states.size(2) - split_size], dim=2) - ) - else: - hidden_states = [hidden_states] - - for i in range(len(hidden_states)): - hidden_states[i] = self.upscale_conv(hidden_states[i]) - hidden_states[i] = rearrange( - hidden_states[i], - "b (x y z c) f h w -> b c (f z) (h x) (w y)", - x=self.spatial_ratio, - y=self.spatial_ratio, - z=self.temporal_ratio, - ) - - # [Overridden] For causal temporal conv - if self.temporal_up and memory_state != MemoryState.ACTIVE: - hidden_states[0] = remove_head(hidden_states[0]) - - if self.slicing: 
- hidden_states = self.conv(hidden_states, memory_state=memory_state) - return torch.cat(hidden_states, dim=2) - else: - return self.conv(hidden_states[0], memory_state=memory_state) - - -class Downsample3D(nn.Module): - """A 3D downsampling layer.""" - - def __init__( - self, - channels: int, - inflation_mode: _inflation_mode_t = "tail", - temporal_down: bool = False, - spatial_down: bool = True, - ): - super().__init__() - self.channels = channels - self.temporal_down = temporal_down - self.spatial_down = spatial_down - - self.temporal_ratio = 2 if temporal_down else 1 - self.spatial_ratio = 2 if spatial_down else 1 - - self.temporal_kernel = 3 if temporal_down else 1 - self.spatial_kernel = 3 if spatial_down else 1 - - self.conv = init_causal_conv3d( - self.channels, - self.channels, - kernel_size=(self.temporal_kernel, self.spatial_kernel, self.spatial_kernel), - stride=(self.temporal_ratio, self.spatial_ratio, self.spatial_ratio), - padding=((1 if self.temporal_down else 0), 0, 0), - inflation_mode=inflation_mode, - ) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.FloatTensor, - memory_state: MemoryState, - ) -> torch.FloatTensor: - return gradient_checkpointing( - self.custom_forward, - hidden_states, - memory_state, - enabled=self.training and self.gradient_checkpointing, - ) - - def custom_forward( - self, - hidden_states: torch.FloatTensor, - memory_state: MemoryState, - ) -> torch.FloatTensor: - - assert hidden_states.shape[1] == self.channels - - if self.spatial_down: - hidden_states = F.pad(hidden_states, (0, 1, 0, 1), mode="constant", value=0) - - hidden_states = self.conv(hidden_states, memory_state=memory_state) - return hidden_states - - -class ResnetBlock3D(ResnetBlock2D): - def __init__( - self, - *args, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - **kwargs, - ): - super().__init__(*args, **kwargs) - self.conv1 = init_causal_conv3d( - self.in_channels, - self.out_channels, - kernel_size=3, - stride=1, - padding=1, - inflation_mode=inflation_mode, - ) - - self.conv2 = init_causal_conv3d( - self.out_channels, - self.out_channels, - kernel_size=(1, 3, 3) if time_receptive_field == "half" else (3, 3, 3), - stride=1, - padding=(0, 1, 1) if time_receptive_field == "half" else (1, 1, 1), - inflation_mode=inflation_mode, - ) - - if self.use_in_shortcut: - self.conv_shortcut = init_causal_conv3d( - self.in_channels, - self.out_channels, - kernel_size=1, - stride=1, - padding=0, - bias=(self.conv_shortcut.bias is not None), - inflation_mode=inflation_mode, - ) - self.gradient_checkpointing = False - - def forward(self, input_tensor: torch.Tensor, memory_state: MemoryState = MemoryState.UNSET): - return gradient_checkpointing( - self.custom_forward, - input_tensor, - memory_state, - enabled=self.training and self.gradient_checkpointing, - ) - - def custom_forward( - self, input_tensor: torch.Tensor, memory_state: MemoryState = MemoryState.UNSET - ): - assert memory_state != MemoryState.UNSET - hidden_states = input_tensor - - hidden_states = causal_norm_wrapper(self.norm1, hidden_states) - hidden_states = self.nonlinearity(hidden_states) - hidden_states = self.conv1(hidden_states, memory_state=memory_state) - - hidden_states = causal_norm_wrapper(self.norm2, hidden_states) - hidden_states = self.nonlinearity(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states, memory_state=memory_state) - - if self.conv_shortcut is not None: - 
input_tensor = self.conv_shortcut(input_tensor, memory_state=memory_state) - - output_tensor = input_tensor + hidden_states - - return output_tensor - - -class DownEncoderBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - add_downsample: bool = True, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - temporal_down: bool = True, - spatial_down: bool = True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=out_channels, - dropout=dropout, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - self.downsamplers = None - if add_downsample: - # Todo: Refactor this line before V5 Image VAE Training. - self.downsamplers = nn.ModuleList( - [ - Downsample3D( - channels=out_channels, - inflation_mode=inflation_mode, - temporal_down=temporal_down, - spatial_down=spatial_down, - ) - ] - ) - - def forward( - self, hidden_states: torch.FloatTensor, memory_state: MemoryState - ) -> torch.FloatTensor: - for resnet in self.resnets: - hidden_states = resnet(hidden_states, memory_state=memory_state) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states, memory_state=memory_state) - - return hidden_states - - -class UpDecoderBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - add_upsample: bool = True, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - temporal_up: bool = True, - spatial_up: bool = True, - slicing: bool = False, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock3D( - in_channels=input_channels, - out_channels=out_channels, - dropout=dropout, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - self.upsamplers = None - # Todo: Refactor this line before V5 Image VAE Training. 
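The `Upsample3D` layer defined earlier increases resolution by widening the channels with a 1x1x1 `upscale_conv` and then folding the channel groups back into space and time with `einops.rearrange`, i.e. a 3D pixel shuffle. Because `upscale_conv` is initialised to the identity, the layer starts out as exact nearest-neighbour upsampling; the sketch below demonstrates that equivalence, using a channel repeat in place of the identity-initialised conv.

```python
# What Upsample3D's identity-initialised upscale_conv + rearrange amounts to at
# initialisation: a 3D "pixel shuffle" that starts as nearest-neighbour repeat.
import torch
from einops import rearrange

b, c, f, h, w = 1, 4, 2, 3, 3
spatial, temporal = 2, 2
ratio = spatial * spatial * temporal

x = torch.randn(b, c, f, h, w)
widened = x.repeat(1, ratio, 1, 1, 1)   # identity 1x1x1 conv == stack `ratio` copies of the channels
up = rearrange(widened, "b (x y z c) f h w -> b c (f z) (h x) (w y)",
               x=spatial, y=spatial, z=temporal)

ref = (x.repeat_interleave(temporal, dim=2)
        .repeat_interleave(spatial, dim=3)
        .repeat_interleave(spatial, dim=4))
print(up.shape, torch.equal(up, ref))   # torch.Size([1, 4, 4, 6, 6]) True
```

On the first clip of a temporally upsampled stage, `remove_head` then drops the extra copy of the first frame that this duplication would otherwise leave behind.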
- if add_upsample: - self.upsamplers = nn.ModuleList( - [ - Upsample3D( - channels=out_channels, - inflation_mode=inflation_mode, - temporal_up=temporal_up, - spatial_up=spatial_up, - slicing=slicing, - ) - ] - ) - - def forward( - self, hidden_states: torch.FloatTensor, memory_state: MemoryState - ) -> torch.FloatTensor: - for resnet in self.resnets: - hidden_states = resnet(hidden_states, memory_state=memory_state) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, memory_state=memory_state) - - return hidden_states - - -class UNetMidBlock3D(nn.Module): - def __init__( - self, - channels: int, - dropout: float = 0.0, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - ): - super().__init__() - self.resnets = nn.ModuleList( - [ - ResnetBlock3D( - in_channels=channels, - out_channels=channels, - dropout=dropout, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ), - ResnetBlock3D( - in_channels=channels, - out_channels=channels, - dropout=dropout, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ), - ] - ) - - def forward(self, hidden_states: torch.Tensor, memory_state: MemoryState): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, memory_state) - return hidden_states - - -class Encoder3D(nn.Module): - r""" - The `Encoder` layer of a variational autoencoder that encodes - its input into a latent representation. - """ - - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - block_out_channels: Tuple[int, ...] = (64,), - layers_per_block: int = 2, - double_z: bool = True, - temporal_down_num: int = 2, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - selective_checkpointing: Tuple[_selective_checkpointing_t] = ("none",), - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.temporal_down_num = temporal_down_num - - self.conv_in = init_causal_conv3d( - in_channels, - block_out_channels[0], - kernel_size=3, - stride=1, - padding=1, - inflation_mode=inflation_mode, - ) - - self.down_blocks = nn.ModuleList([]) - - # down - output_channel = block_out_channels[0] - for i in range(len(block_out_channels)): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - is_temporal_down_block = i >= len(block_out_channels) - self.temporal_down_num - 1 - # Note: take the last one - - down_block = DownEncoderBlock3D( - num_layers=self.layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - add_downsample=not is_final_block, - temporal_down=is_temporal_down_block, - spatial_down=True, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlock3D( - channels=block_out_channels[-1], - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - # out - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[-1], num_groups=32, eps=1e-6 - ) - self.conv_act = nn.SiLU() - - conv_out_channels = 2 * out_channels if double_z else out_channels - self.conv_out = init_causal_conv3d( - block_out_channels[-1], conv_out_channels, 3, padding=1, inflation_mode=inflation_mode - ) - - assert len(selective_checkpointing) == len(self.down_blocks) - self.set_gradient_checkpointing(selective_checkpointing) - - def 
set_gradient_checkpointing(self, checkpointing_types): - gradient_checkpointing = [] - for down_block, sac_type in zip(self.down_blocks, checkpointing_types): - if sac_type == "coarse": - gradient_checkpointing.append(True) - elif sac_type == "fine": - for n, m in down_block.named_modules(): - if hasattr(m, "gradient_checkpointing"): - m.gradient_checkpointing = True - logger.debug(f"set gradient_checkpointing: {n}") - gradient_checkpointing.append(False) - else: - gradient_checkpointing.append(False) - self.gradient_checkpointing = gradient_checkpointing - logger.info(f"[Encoder3D] gradient_checkpointing: {checkpointing_types}") - - def forward(self, sample: torch.FloatTensor, memory_state: MemoryState) -> torch.FloatTensor: - r"""The forward method of the `Encoder` class.""" - sample = self.conv_in(sample, memory_state=memory_state) - # down - for down_block, sac in zip(self.down_blocks, self.gradient_checkpointing): - sample = gradient_checkpointing( - down_block, - sample, - memory_state=memory_state, - enabled=self.training and sac, - ) - - # middle - sample = self.mid_block(sample, memory_state=memory_state) - - # post-process - sample = causal_norm_wrapper(self.conv_norm_out, sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample, memory_state=memory_state) - - return sample - - -class Decoder3D(nn.Module): - r""" - The `Decoder` layer of a variational autoencoder that - decodes its latent representation into an output sample. - """ - - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - block_out_channels: Tuple[int, ...] = (64,), - layers_per_block: int = 2, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - temporal_up_num: int = 2, - slicing_up_num: int = 0, - selective_checkpointing: Tuple[_selective_checkpointing_t] = ("none",), - ): - super().__init__() - self.layers_per_block = layers_per_block - self.temporal_up_num = temporal_up_num - - self.conv_in = init_causal_conv3d( - in_channels, - block_out_channels[-1], - kernel_size=3, - stride=1, - padding=1, - inflation_mode=inflation_mode, - ) - - self.up_blocks = nn.ModuleList([]) - - # mid - self.mid_block = UNetMidBlock3D( - channels=block_out_channels[-1], - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i in range(len(reversed_block_out_channels)): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - is_temporal_up_block = i < self.temporal_up_num - is_slicing_up_block = i >= len(block_out_channels) - slicing_up_num - # Note: Keep symmetric - - up_block = UpDecoderBlock3D( - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - add_upsample=not is_final_block, - temporal_up=is_temporal_up_block, - slicing=is_slicing_up_block, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - self.up_blocks.append(up_block) - - # out - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[0], num_groups=32, eps=1e-6 - ) - self.conv_act = nn.SiLU() - self.conv_out = init_causal_conv3d( - block_out_channels[0], out_channels, 3, padding=1, inflation_mode=inflation_mode - ) - - assert len(selective_checkpointing) == len(self.up_blocks) - self.set_gradient_checkpointing(selective_checkpointing) - - def 
set_gradient_checkpointing(self, checkpointing_types): - gradient_checkpointing = [] - for up_block, sac_type in zip(self.up_blocks, checkpointing_types): - if sac_type == "coarse": - gradient_checkpointing.append(True) - elif sac_type == "fine": - for n, m in up_block.named_modules(): - if hasattr(m, "gradient_checkpointing"): - m.gradient_checkpointing = True - logger.debug(f"set gradient_checkpointing: {n}") - gradient_checkpointing.append(False) - else: - gradient_checkpointing.append(False) - self.gradient_checkpointing = gradient_checkpointing - logger.info(f"[Decoder3D] gradient_checkpointing: {checkpointing_types}") - - def forward(self, sample: torch.FloatTensor, memory_state: MemoryState) -> torch.FloatTensor: - r"""The forward method of the `Decoder` class.""" - - sample = self.conv_in(sample, memory_state=memory_state) - - # middle - sample = self.mid_block(sample, memory_state=memory_state) - - # up - for up_block, sac in zip(self.up_blocks, self.gradient_checkpointing): - sample = gradient_checkpointing( - up_block, - sample, - memory_state=memory_state, - enabled=self.training and sac, - ) - - # post-process - sample = causal_norm_wrapper(self.conv_norm_out, sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample, memory_state=memory_state) - - return sample - - -class VideoAutoencoderKL(nn.Module): - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - latent_channels: int = 4, - use_quant_conv: bool = True, - use_post_quant_conv: bool = True, - enc_selective_checkpointing: Tuple[_selective_checkpointing_t] = ("none",), - dec_selective_checkpointing: Tuple[_selective_checkpointing_t] = ("none",), - temporal_scale_num: int = 3, - slicing_up_num: int = 0, - inflation_mode: _inflation_mode_t = "tail", - time_receptive_field: _receptive_field_t = "half", - slicing_sample_min_size: int = None, - spatial_downsample_factor: int = 16, - temporal_downsample_factor: int = 8, - freeze_encoder: bool = False, - ): - super().__init__() - self.spatial_downsample_factor = spatial_downsample_factor - self.temporal_downsample_factor = temporal_downsample_factor - self.freeze_encoder = freeze_encoder - if slicing_sample_min_size is None: - slicing_sample_min_size = temporal_downsample_factor - self.slicing_sample_min_size = slicing_sample_min_size - self.slicing_latent_min_size = slicing_sample_min_size // (2**temporal_scale_num) - - # pass init params to Encoder - self.encoder = Encoder3D( - in_channels=in_channels, - out_channels=latent_channels, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - double_z=True, - temporal_down_num=temporal_scale_num, - selective_checkpointing=enc_selective_checkpointing, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - # pass init params to Decoder - self.decoder = Decoder3D( - in_channels=latent_channels, - out_channels=out_channels, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - # [Override] add temporal_up_num parameter - temporal_up_num=temporal_scale_num, - slicing_up_num=slicing_up_num, - selective_checkpointing=dec_selective_checkpointing, - inflation_mode=inflation_mode, - time_receptive_field=time_receptive_field, - ) - - self.quant_conv = ( - init_causal_conv3d( - in_channels=2 * latent_channels, - out_channels=2 * latent_channels, - kernel_size=1, - inflation_mode=inflation_mode, - ) - if use_quant_conv - else None - ) - self.post_quant_conv = 
( - init_causal_conv3d( - in_channels=latent_channels, - out_channels=latent_channels, - kernel_size=1, - inflation_mode=inflation_mode, - ) - if use_post_quant_conv - else None - ) - - self.use_slicing = False - - def enable_slicing(self): - self.use_slicing = True - - def disable_slicing(self): - self.use_slicing = False - - def encode(self, x: torch.FloatTensor) -> CausalEncoderOutput: - if x.ndim == 4: - x = x.unsqueeze(2) - h = self.slicing_encode(x) - p = DiagonalGaussianDistribution(h) - z = p.sample() - return CausalEncoderOutput(z, p) - - def decode(self, z: torch.FloatTensor) -> CausalDecoderOutput: - if z.ndim == 4: - z = z.unsqueeze(2) - x = self.slicing_decode(z) - return CausalDecoderOutput(x) - - def _encode(self, x: torch.Tensor, memory_state: MemoryState) -> torch.Tensor: - x = causal_conv_slice_inputs(x, self.slicing_sample_min_size, memory_state=memory_state) - h = self.encoder(x, memory_state=memory_state) - h = self.quant_conv(h, memory_state=memory_state) if self.quant_conv is not None else h - h = causal_conv_gather_outputs(h) - return h - - def _decode(self, z: torch.Tensor, memory_state: MemoryState) -> torch.Tensor: - z = causal_conv_slice_inputs(z, self.slicing_latent_min_size, memory_state=memory_state) - z = ( - self.post_quant_conv(z, memory_state=memory_state) - if self.post_quant_conv is not None - else z - ) - x = self.decoder(z, memory_state=memory_state) - x = causal_conv_gather_outputs(x) - return x - - def slicing_encode(self, x: torch.Tensor) -> torch.Tensor: - sp_size = get_sequence_parallel_world_size() - if self.use_slicing and (x.shape[2] - 1) > self.slicing_sample_min_size * sp_size: - x_slices = x[:, :, 1:].split(split_size=self.slicing_sample_min_size * sp_size, dim=2) - encoded_slices = [ - self._encode( - torch.cat((x[:, :, :1], x_slices[0]), dim=2), - memory_state=MemoryState.INITIALIZING, - ) - ] - for x_idx in range(1, len(x_slices)): - encoded_slices.append( - self._encode(x_slices[x_idx], memory_state=MemoryState.ACTIVE) - ) - return torch.cat(encoded_slices, dim=2) - else: - return self._encode(x, memory_state=MemoryState.DISABLED) - - def slicing_decode(self, z: torch.Tensor) -> torch.Tensor: - sp_size = get_sequence_parallel_world_size() - if self.use_slicing and (z.shape[2] - 1) > self.slicing_latent_min_size * sp_size: - z_slices = z[:, :, 1:].split(split_size=self.slicing_latent_min_size * sp_size, dim=2) - decoded_slices = [ - self._decode( - torch.cat((z[:, :, :1], z_slices[0]), dim=2), - memory_state=MemoryState.INITIALIZING, - ) - ] - for z_idx in range(1, len(z_slices)): - decoded_slices.append( - self._decode(z_slices[z_idx], memory_state=MemoryState.ACTIVE) - ) - return torch.cat(decoded_slices, dim=2) - else: - return self._decode(z, memory_state=MemoryState.DISABLED) - - def forward(self, x: torch.FloatTensor) -> CausalAutoencoderOutput: - with torch.no_grad() if self.freeze_encoder else nullcontext(): - z, p = self.encode(x) - x = self.decode(z).sample - return CausalAutoencoderOutput(x, z, p) - - def preprocess(self, x: torch.Tensor): - # x should in [B, C, T, H, W], [B, C, H, W] - assert x.ndim == 4 or x.size(2) % self.temporal_downsample_factor == 1 - return x - - def postprocess(self, x: torch.Tensor): - # x should in [B, C, T, H, W], [B, C, H, W] - return x - - def set_causal_slicing( - self, - *, - split_size: Optional[int], - memory_device: _memory_device_t, - ): - assert ( - split_size is None or memory_device is not None - ), "if split_size is set, memory_device must not be None." 
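`slicing_encode` above only splits when the clip, minus its leading frame, is longer than `slicing_sample_min_size * sp_size`. The leading frame always travels with the first slice so the causal memory banks are initialised from it (`MemoryState.INITIALIZING`); every later slice reuses the cached tail frames with `MemoryState.ACTIVE`. An index-level sketch of that partitioning, with illustrative values:

```python
# Index-level sketch of slicing_encode (illustrative values: sp_size=1,
# slicing_sample_min_size=16, a 33-frame clip): frame 0 rides with the first
# slice; later slices run with MemoryState.ACTIVE.
T, min_size = 33, 16
frames = list(range(T))
slices = [frames[1:][i:i + min_size] for i in range(0, T - 1, min_size)]
slices[0] = frames[:1] + slices[0]
print([(s[0], s[-1], len(s)) for s in slices])
# [(0, 16, 17), (17, 32, 16)]
```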
- if split_size is not None: - self.enable_slicing() - self.slicing_sample_min_size = split_size - self.slicing_latent_min_size = split_size // self.temporal_downsample_factor - else: - self.disable_slicing() - for module in self.modules(): - if isinstance(module, InflatedCausalConv3d): - module.set_memory_device(memory_device) - - def set_memory_limit(self, conv_max_mem: Optional[float], norm_max_mem: Optional[float]): - set_norm_limit(norm_max_mem) - for m in self.modules(): - if isinstance(m, InflatedCausalConv3d): - m.set_memory_limit(conv_max_mem if conv_max_mem is not None else float("inf")) - - -class VideoAutoencoderKLWrapper(VideoAutoencoderKL): - def __init__( - self, *args, spatial_downsample_factor: int, temporal_downsample_factor: int, **kwargs - ): - self.spatial_downsample_factor = spatial_downsample_factor - self.temporal_downsample_factor = temporal_downsample_factor - super().__init__(*args, **kwargs) - - def forward(self, x) -> CausalAutoencoderOutput: - z, _, p = self.encode(x) - x, _ = self.decode(z) - return CausalAutoencoderOutput(x, z, None, p) - - def encode(self, x) -> CausalEncoderOutput: - if x.ndim == 4: - x = x.unsqueeze(2) - p = super().encode(x).latent_dist - z = p.sample().squeeze(2) - return CausalEncoderOutput(z, None, p) - - def decode(self, z) -> CausalDecoderOutput: - if z.ndim == 4: - z = z.unsqueeze(2) - x = super().decode(z).sample.squeeze(2) - return CausalDecoderOutput(x, None) - - def preprocess(self, x): - # x should in [B, C, T, H, W], [B, C, H, W] - assert x.ndim == 4 or x.size(2) % 4 == 1 - return x - - def postprocess(self, x): - # x should in [B, C, T, H, W], [B, C, H, W] - return x - - def set_causal_slicing( - self, - *, - split_size: Optional[int], - memory_device: Optional[Literal["cpu", "same"]], - ): - assert ( - split_size is None or memory_device is not None - ), "if split_size is set, memory_device must not be None." 
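Putting the factors together: assuming the values used by the `s8_c16_t4` config below (spatial factor 8, temporal factor 4, 16 latent channels), the wrapper's `preprocess` assert `T % 4 == 1` corresponds to the causal layout in which the leading frame maps to its own latent frame and every further group of 4 frames maps to one more (each of the two causal temporal downsamples turns `2k + 1` frames into `k + 1`). A small helper, under those assumptions, to predict latent shapes:

```python
# Shape bookkeeping, assuming the s8_c16_t4 config values (spatial /8,
# temporal /4, 16 latent channels) and the causal "first frame + groups of 4" layout.
def latent_shape(T: int, H: int, W: int,
                 spatial_factor: int = 8, temporal_factor: int = 4,
                 latent_channels: int = 16) -> tuple[int, int, int, int]:
    assert T % temporal_factor == 1, "expects T = temporal_factor * k + 1"
    return (latent_channels,
            1 + (T - 1) // temporal_factor,
            H // spatial_factor,
            W // spatial_factor)

print(latent_shape(33, 256, 256))   # (16, 9, 32, 32)
```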
- if split_size is not None: - self.enable_slicing() - else: - self.disable_slicing() - self.slicing_sample_min_size = split_size - if split_size is not None: - self.slicing_latent_min_size = split_size // self.temporal_downsample_factor - for module in self.modules(): - if isinstance(module, InflatedCausalConv3d): - module.set_memory_device(memory_device) \ No newline at end of file diff --git a/models_x/video_vae_v3/s8_c16_t4_inflation_sd3.yaml b/models_x/video_vae_v3/s8_c16_t4_inflation_sd3.yaml deleted file mode 100644 index 58309522b791171f9d39f78ea1eaf57bab2a28fe..0000000000000000000000000000000000000000 --- a/models_x/video_vae_v3/s8_c16_t4_inflation_sd3.yaml +++ /dev/null @@ -1,33 +0,0 @@ -__object__: - path: models.video_vae_v3.modules.attn_video_vae - name: VideoAutoencoderKLWrapper - args: as_params - -act_fn: silu -block_out_channels: - - 128 - - 256 - - 512 - - 512 -down_block_types: - - DownEncoderBlock3D - - DownEncoderBlock3D - - DownEncoderBlock3D - - DownEncoderBlock3D -in_channels: 3 -latent_channels: 16 -layers_per_block: 2 -norm_num_groups: 32 -out_channels: 3 -slicing_sample_min_size: 4 -temporal_scale_num: 2 -inflation_mode: pad -up_block_types: - - UpDecoderBlock3D - - UpDecoderBlock3D - - UpDecoderBlock3D - - UpDecoderBlock3D -spatial_downsample_factor: 8 -temporal_downsample_factor: 4 -use_quant_conv: False -use_post_quant_conv: False diff --git a/projects_x/README.md b/projects_x/README.md deleted file mode 100644 index 371a83bc5640a438d5f59aecdae146af15991ec1..0000000000000000000000000000000000000000 --- a/projects_x/README.md +++ /dev/null @@ -1,138 +0,0 @@ -# 🛠️ helpers/ - Ferramentas de IA de Terceiros Adaptadas para ADUC-SDR - -Esta pasta contém implementações adaptadas de modelos e utilitários de IA de terceiros, que servem como "especialistas" ou "ferramentas" de baixo nível para a arquitetura ADUC-SDR. - -**IMPORTANTE:** O conteúdo desta pasta é de autoria de seus respectivos idealizadores e desenvolvedores originais. Esta pasta **NÃO FAZ PARTE** do projeto principal ADUC-SDR em termos de sua arquitetura inovadora. Ela serve como um repositório para as **dependências diretas e modificadas** que os `DeformesXDEngines` (os estágios do "foguete" ADUC-SDR) invocam para realizar tarefas específicas (geração de imagem, vídeo, áudio). - -As modificações realizadas nos arquivos aqui presentes visam principalmente: -1. **Adaptação de Interfaces:** Padronizar as interfaces para que se encaixem no fluxo de orquestração do ADUC-SDR. -2. **Gerenciamento de Recursos:** Integrar lógicas de carregamento/descarregamento de modelos (GPU management) e configurações via arquivos YAML. -3. **Otimização de Fluxo:** Ajustar as pipelines para aceitar formatos de entrada mais eficientes (ex: tensores pré-codificados em vez de caminhos de mídia, pulando etapas de codificação/decodificação redundantes). - ---- - -## 📄 Licenciamento - -O conteúdo original dos projetos listados abaixo é licenciado sob a **Licença Apache 2.0**, ou outra licença especificada pelos autores originais. Todas as modificações e o uso desses arquivos dentro da estrutura `helpers/` do projeto ADUC-SDR estão em conformidade com os termos da **Licença Apache 2.0**. - -As licenças originais dos projetos podem ser encontradas nas suas respectivas fontes ou nos subdiretórios `incl_licenses/` dentro de cada módulo adaptado. - ---- - -## 🛠️ API dos Helpers e Guia de Uso - -Esta seção detalha como cada helper (agente especialista) deve ser utilizado dentro do ecossistema ADUC-SDR. 
Todos os agentes são instanciados como **singletons** no `hardware_manager.py` para garantir o gerenciamento centralizado de recursos de GPU. - -### **gemini_helpers.py (GeminiAgent)** - -* **Propósito:** Atua como o "Oráculo de Síntese Adaptativo", responsável por todas as tarefas de processamento de linguagem natural, como criação de storyboards, geração de prompts, e tomada de decisões narrativas. -* **Singleton Instance:** `gemini_agent_singleton` -* **Construtor:** `GeminiAgent()` - * Lê `configs/gemini_config.yaml` para obter o nome do modelo, parâmetros de inferência e caminhos de templates de prompt. A chave da API é lida da variável de ambiente `GEMINI_API_KEY`. -* **Métodos Públicos:** - * `generate_storyboard(prompt: str, num_keyframes: int, ref_image_paths: list[str])` - * **Inputs:** - * `prompt`: A ideia geral do filme (string). - * `num_keyframes`: O número de cenas a serem geradas (int). - * `ref_image_paths`: Lista de caminhos para as imagens de referência (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de strings do storyboard e um relatório textual da operação). - * `select_keyframes_from_pool(storyboard: list, base_image_paths: list[str], pool_image_paths: list[str])` - * **Inputs:** - * `storyboard`: A lista de strings do storyboard gerado. - * `base_image_paths`: Imagens de referência base (list[str]). - * `pool_image_paths`: O "banco de imagens" de onde selecionar (list[str]). - * **Output:** `tuple[list[str], str]` (Uma tupla contendo a lista de caminhos de imagens selecionadas e um relatório textual). - * `get_anticipatory_keyframe_prompt(...)` - * **Inputs:** Contexto narrativo e visual para gerar um prompt de imagem. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt gerado para o modelo de imagem e um relatório textual). - * `get_initial_motion_prompt(...)` - * **Inputs:** Contexto narrativo e visual para a primeira transição de vídeo. - * **Output:** `tuple[str, str]` (Uma tupla contendo o prompt de movimento gerado e um relatório textual). - * `get_transition_decision(...)` - * **Inputs:** Contexto narrativo e visual para uma transição de vídeo intermediária. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"transition_type": "...", "motion_prompt": "..."}` e um relatório textual). - * `generate_audio_prompts(...)` - * **Inputs:** Contexto narrativo global. - * **Output:** `tuple[dict, str]` (Uma tupla contendo um dicionário `{"music_prompt": "...", "sfx_prompt": "..."}` e um relatório textual). - -### **flux_kontext_helpers.py (FluxPoolManager)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline FluxKontext. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `flux_kontext_singleton` -* **Construtor:** `FluxPoolManager(device_ids: list[str], flux_config_file: str)` - * Lê `configs/flux_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int, seed: int = 42, callback: callable = None)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. - * `width`, `height`: Dimensões da imagem de saída (int). - * `seed`: Semente para reprodutibilidade (int). - * `callback`: Função de callback opcional para monitorar o progresso. - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). 
- -### **dreamo_helpers.py (DreamOAgent)** - -* **Propósito:** Especialista em geração de imagens de alta qualidade (keyframes) usando a pipeline DreamO, com capacidades avançadas de edição e estilo a partir de referências. -* **Singleton Instance:** `dreamo_agent_singleton` -* **Construtor:** `DreamOAgent(device_id: str = None)` - * Lê `configs/dreamo_config.yaml`. -* **Método Público:** - * `generate_image(prompt: str, reference_images: list[Image.Image], width: int, height: int)` - * **Inputs:** - * `prompt`: Prompt textual para guiar a geração (string). - * `reference_images`: Lista de objetos `PIL.Image` como referência visual. A lógica interna atribui a primeira imagem como `style` e as demais como `ip`. - * `width`, `height`: Dimensões da imagem de saída (int). - * **Output:** `PIL.Image.Image` (O objeto da imagem gerada). - -### **ltx_manager_helpers.py (LtxPoolManager)** - -* **Propósito:** Especialista na geração de fragmentos de vídeo no espaço latente usando a pipeline LTX-Video. Gerencia um pool de workers para otimizar o uso de múltiplas GPUs. -* **Singleton Instance:** `ltx_manager_singleton` -* **Construtor:** `LtxPoolManager(device_ids: list[str], ltx_model_config_file: str, ltx_global_config_file: str)` - * Lê o `ltx_global_config_file` e o `ltx_model_config_file` para configurar a pipeline. -* **Método Público:** - * `generate_latent_fragment(**kwargs)` - * **Inputs:** Dicionário de keyword arguments (`kwargs`) contendo todos os parâmetros da pipeline LTX, incluindo: - * `height`, `width`: Dimensões do vídeo (int). - * `video_total_frames`: Número total de frames a serem gerados (int). - * `video_fps`: Frames por segundo (int). - * `motion_prompt`: Prompt de movimento (string). - * `conditioning_items_data`: Lista de objetos `LatentConditioningItem` contendo os tensores latentes de condição. - * `guidance_scale`, `stg_scale`, `num_inference_steps`, etc. - * **Output:** `tuple[torch.Tensor, tuple]` (Uma tupla contendo o tensor latente gerado e os valores de padding utilizados). - -### **mmaudio_helper.py (MMAudioAgent)** - -* **Propósito:** Especialista em geração de áudio para um determinado fragmento de vídeo. -* **Singleton Instance:** `mmaudio_agent_singleton` -* **Construtor:** `MMAudioAgent(workspace_dir: str, device_id: str = None, mmaudio_config_file: str)` - * Lê `configs/mmaudio_config.yaml`. -* **Método Público:** - * `generate_audio_for_video(video_path: str, prompt: str, negative_prompt: str, duration_seconds: float)` - * **Inputs:** - * `video_path`: Caminho para o arquivo de vídeo silencioso (string). - * `prompt`: Prompt textual para guiar a geração de áudio (string). - * `negative_prompt`: Prompt negativo para áudio (string). - * `duration_seconds`: Duração exata do vídeo (float). - * **Output:** `str` (O caminho para o novo arquivo de vídeo com a faixa de áudio integrada). - - -### https://huggingface.co/spaces/ByteDance-Seed/SeedVR2-3B/tree/main - ---- - -## 🔗 Projetos Originais e Atribuições -(A seção de atribuições e licenças permanece a mesma que definimos anteriormente) - -### DreamO -* **Repositório Original:** [https://github.com/bytedance/DreamO](https://github.com/bytedance/DreamO) -... - -### LTX-Video -* **Repositório Original:** [https://github.com/Lightricks/LTX-Video](https://github.com/Lightricks/LTX-Video) -... - -### MMAudio -* **Repositório Original:** [https://github.com/hkchengrex/MMAudio](https://github.com/hkchengrex/MMAudio) -... 
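To make the guide above more concrete, here is a hedged sketch of how the documented singletons could be chained for a single scene. The import paths and all orchestration glue are assumptions; only the method names and signatures come from the sections above, and the file path and prompts are placeholders, so treat this as an illustration rather than the project's actual pipeline.

```python
# Illustrative only -- import paths and glue code are assumptions; the method
# signatures follow the API guide above.
from flux_kontext_helpers import flux_kontext_singleton      # assumed module paths
from ltx_manager_helpers import ltx_manager_singleton
from mmaudio_helper import mmaudio_agent_singleton

def render_scene(keyframe_prompt, motion_prompt, reference_images):
    # 1) High-quality keyframe from the Flux pool (returns a PIL.Image.Image).
    keyframe = flux_kontext_singleton.generate_image(
        prompt=keyframe_prompt, reference_images=reference_images,
        width=1024, height=576, seed=42)

    # 2) Latent video fragment from the LTX pool (returns (latent_tensor, padding)).
    latent, padding = ltx_manager_singleton.generate_latent_fragment(
        height=576, width=1024, video_total_frames=48, video_fps=24,
        motion_prompt=motion_prompt, conditioning_items_data=[])

    # 3) After decoding `latent` to a silent .mp4, attach an audio track.
    video_with_audio = mmaudio_agent_singleton.generate_audio_for_video(
        video_path="scene_silent.mp4", prompt="soft rain, distant thunder",
        negative_prompt="speech", duration_seconds=2.0)
    return keyframe, latent, video_with_audio
```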
\ No newline at end of file diff --git a/projects_x/inference_seedvr2_3b.py b/projects_x/inference_seedvr2_3b.py deleted file mode 100644 index 298cb217451bcf939b5bb0134aca63348c2a5639..0000000000000000000000000000000000000000 --- a/projects_x/inference_seedvr2_3b.py +++ /dev/null @@ -1,322 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import os -import torch -import mediapy -from einops import rearrange -from omegaconf import OmegaConf -print(os.getcwd()) -import datetime -from tqdm import tqdm -import gc - - -from data.image.transforms.divisible_crop import DivisibleCrop -from data.image.transforms.na_resize import NaResize -from data.video.transforms.rearrange import Rearrange -if os.path.exists("./projects/video_diffusion_sr/color_fix.py"): - from projects.video_diffusion_sr.color_fix import wavelet_reconstruction - use_colorfix=True -else: - use_colorfix = False - print('Note!!!!!! Color fix is not avaliable!') -from torchvision.transforms import Compose, Lambda, Normalize -from torchvision.io.video import read_video -import argparse - - -from common.distributed import ( - get_device, - init_torch, -) - -from common.distributed.advanced import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, - init_sequence_parallel, -) - -from projects.video_diffusion_sr.infer import VideoDiffusionInfer -from common.config import load_config -from common.distributed.ops import sync_data -from common.seed import set_seed -from common.partition import partition_by_groups, partition_by_size - - -def configure_sequence_parallel(sp_size): - if sp_size > 1: - init_sequence_parallel(sp_size) - -def configure_runner(sp_size): - config_path = os.path.join('./configs_3b', 'main.yaml') - config = load_config(config_path) - runner = VideoDiffusionInfer(config) - OmegaConf.set_readonly(runner.config, False) - - init_torch(cudnn_benchmark=False, timeout=datetime.timedelta(seconds=3600)) - configure_sequence_parallel(sp_size) - runner.configure_dit_model(device="cuda", checkpoint='./ckpts/seedvr2_ema_3b.pth') - runner.configure_vae_model() - # Set memory limit. 
- if hasattr(runner.vae, "set_memory_limit"): - runner.vae.set_memory_limit(**runner.config.vae.memory_limit) - return runner - -def generation_step(runner, text_embeds_dict, cond_latents): - def _move_to_cuda(x): - return [i.to(get_device()) for i in x] - - noises = [torch.randn_like(latent) for latent in cond_latents] - aug_noises = [torch.randn_like(latent) for latent in cond_latents] - print(f"Generating with noise shape: {noises[0].size()}.") - noises, aug_noises, cond_latents = sync_data((noises, aug_noises, cond_latents), 0) - noises, aug_noises, cond_latents = list( - map(lambda x: _move_to_cuda(x), (noises, aug_noises, cond_latents)) - ) - cond_noise_scale = 0.0 - - def _add_noise(x, aug_noise): - t = ( - torch.tensor([1000.0], device=get_device()) - * cond_noise_scale - ) - shape = torch.tensor(x.shape[1:], device=get_device())[None] - t = runner.timestep_transform(t, shape) - print( - f"Timestep shifting from" - f" {1000.0 * cond_noise_scale} to {t}." - ) - x = runner.schedule.forward(x, aug_noise, t) - return x - - conditions = [ - runner.get_condition( - noise, - task="sr", - latent_blur=_add_noise(latent_blur, aug_noise), - ) - for noise, aug_noise, latent_blur in zip(noises, aug_noises, cond_latents) - ] - - with torch.no_grad(), torch.autocast("cuda", torch.bfloat16, enabled=True): - video_tensors = runner.inference( - noises=noises, - conditions=conditions, - dit_offload=True, - **text_embeds_dict, - ) - - samples = [ - ( - rearrange(video[:, None], "c t h w -> t c h w") - if video.ndim == 3 - else rearrange(video, "c t h w -> t c h w") - ) - for video in video_tensors - ] - del video_tensors - - return samples - -def generation_loop(runner, video_path='./test_videos', output_dir='./results', batch_size=1, cfg_scale=1.0, cfg_rescale=0.0, sample_steps=1, seed=666, res_h=1280, res_w=720, sp_size=1): - - def _build_pos_and_neg_prompt(): - # read positive prompt - positive_text = "Cinematic, High Contrast, highly detailed, taken using a Canon EOS R camera, \ - hyper detailed photo - realistic maximum detail, 32k, Color Grading, ultra HD, extreme meticulous detailing, \ - skin pore detailing, hyper sharpness, perfect without deformations." - # read negative prompt - negative_text = "painting, oil painting, illustration, drawing, art, sketch, oil painting, cartoon, \ - CG Style, 3D render, unreal engine, blurring, dirty, messy, worst quality, low quality, frames, watermark, \ - signature, jpeg artifacts, deformed, lowres, over-smooth" - return positive_text, negative_text - - def _build_test_prompts(video_path): - positive_text, negative_text = _build_pos_and_neg_prompt() - original_videos = [] - prompts = {} - video_list = os.listdir(video_path) - for f in video_list: - if f.endswith(".mp4"): - original_videos.append(f) - prompts[f] = positive_text - print(f"Total prompts to be generated: {len(original_videos)}") - return original_videos, prompts, negative_text - - def _extract_text_embeds(): - # Text encoder forward. 
- positive_prompts_embeds = [] - for texts_pos in tqdm(original_videos_local): - text_pos_embeds = torch.load('pos_emb.pt') - text_neg_embeds = torch.load('neg_emb.pt') - - positive_prompts_embeds.append( - {"texts_pos": [text_pos_embeds], "texts_neg": [text_neg_embeds]} - ) - gc.collect() - torch.cuda.empty_cache() - return positive_prompts_embeds - - def cut_videos(videos, sp_size): - t = videos.size(1) - if t <= 4 * sp_size: - print(f"Cut input video size: {videos.size()}") - padding = [videos[:, -1].unsqueeze(1)] * (4 * sp_size - t + 1) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - return videos - if (t - 1) % (4 * sp_size) == 0: - return videos - else: - padding = [videos[:, -1].unsqueeze(1)] * ( - 4 * sp_size - ((t - 1) % (4 * sp_size)) - ) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - assert (videos.size(1) - 1) % (4 * sp_size) == 0 - return videos - - # classifier-free guidance - runner.config.diffusion.cfg.scale = cfg_scale - runner.config.diffusion.cfg.rescale = cfg_rescale - # sampling steps - runner.config.diffusion.timesteps.sampling.steps = sample_steps - runner.configure_diffusion() - - # set random seed - set_seed(seed, same_across_ranks=True) - os.makedirs(output_dir, exist_ok=True) - tgt_path = output_dir - - # get test prompts - original_videos, _, _ = _build_test_prompts(video_path) - - # divide the prompts into different groups - original_videos_group = partition_by_groups( - original_videos, - get_data_parallel_world_size() // get_sequence_parallel_world_size(), - ) - # store prompt mapping - original_videos_local = original_videos_group[ - get_data_parallel_rank() // get_sequence_parallel_world_size() - ] - original_videos_local = partition_by_size(original_videos_local, batch_size) - - # pre-extract the text embeddings - positive_prompts_embeds = _extract_text_embeds() - - video_transform = Compose( - [ - NaResize( - resolution=( - res_h * res_w - ) - ** 0.5, - mode="area", - # Upsample image, model only trained for high res. 
- downsample_only=False, - ), - Lambda(lambda x: torch.clamp(x, 0.0, 1.0)), - DivisibleCrop((16, 16)), - Normalize(0.5, 0.5), - Rearrange("t c h w -> c t h w"), - ] - ) - - # generation loop - for videos, text_embeds in tqdm(zip(original_videos_local, positive_prompts_embeds)): - # read condition latents - cond_latents = [] - for video in videos: - video = ( - read_video( - os.path.join(video_path, video), output_format="TCHW" - )[0] - / 255.0 - ) - print(f"Read video size: {video.size()}") - cond_latents.append(video_transform(video.to(get_device()))) - - ori_lengths = [video.size(1) for video in cond_latents] - input_videos = cond_latents - cond_latents = [cut_videos(video, sp_size) for video in cond_latents] - - runner.dit.to("cpu") - print(f"Encoding videos: {list(map(lambda x: x.size(), cond_latents))}") - runner.vae.to(get_device()) - cond_latents = runner.vae_encode(cond_latents) - runner.vae.to("cpu") - runner.dit.to(get_device()) - - for i, emb in enumerate(text_embeds["texts_pos"]): - text_embeds["texts_pos"][i] = emb.to(get_device()) - for i, emb in enumerate(text_embeds["texts_neg"]): - text_embeds["texts_neg"][i] = emb.to(get_device()) - - samples = generation_step(runner, text_embeds, cond_latents=cond_latents) - runner.dit.to("cpu") - del cond_latents - - # dump samples to the output directory - if get_sequence_parallel_rank() == 0: - for path, input, sample, ori_length in zip( - videos, input_videos, samples, ori_lengths - ): - if ori_length < sample.shape[0]: - sample = sample[:ori_length] - filename = os.path.join(tgt_path, os.path.basename(path)) - # color fix - input = ( - rearrange(input[:, None], "c t h w -> t c h w") - if input.ndim == 3 - else rearrange(input, "c t h w -> t c h w") - ) - if use_colorfix: - sample = wavelet_reconstruction( - sample.to("cpu"), input[: sample.size(0)].to("cpu") - ) - else: - sample = sample.to("cpu") - sample = ( - rearrange(sample[:, None], "t c h w -> t h w c") - if sample.ndim == 3 - else rearrange(sample, "t c h w -> t h w c") - ) - sample = sample.clip(-1, 1).mul_(0.5).add_(0.5).mul_(255).round() - sample = sample.to(torch.uint8).numpy() - - if sample.shape[0] == 1: - mediapy.write_image(filename, sample.squeeze(0)) - else: - mediapy.write_video( - filename, sample, fps=24 - ) - gc.collect() - torch.cuda.empty_cache() - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--video_path", type=str, default="./test_videos") - parser.add_argument("--output_dir", type=str, default="./results") - parser.add_argument("--seed", type=int, default=666) - parser.add_argument("--res_h", type=int, default=720) - parser.add_argument("--res_w", type=int, default=1280) - parser.add_argument("--sp_size", type=int, default=1) - args = parser.parse_args() - - runner = configure_runner(args.sp_size) - generation_loop(runner, **vars(args)) diff --git a/projects_x/inference_seedvr2_7b.py b/projects_x/inference_seedvr2_7b.py deleted file mode 100644 index c4b73c25ce91bc0691a34e87d157edde488272cd..0000000000000000000000000000000000000000 --- a/projects_x/inference_seedvr2_7b.py +++ /dev/null @@ -1,321 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
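`cut_videos` above right-pads short or misaligned clips with copies of their last frame until the frame count satisfies `(t - 1) % (4 * sp_size) == 0`, i.e. the `4k + 1` layout the causal VAE expects, scaled by the sequence-parallel world size. A pure-Python check of the resulting lengths:

```python
# Frame-count arithmetic behind cut_videos(): clips are right-padded with
# copies of their last frame until the length is 4*sp_size*k + 1.
def padded_length(t: int, sp_size: int = 1) -> int:
    unit = 4 * sp_size
    if t <= unit:
        return unit + 1
    if (t - 1) % unit == 0:
        return t
    return t + unit - (t - 1) % unit

for t in (3, 17, 20, 33):
    print(t, "->", padded_length(t))   # 3 -> 5, 17 -> 17, 20 -> 21, 33 -> 33
```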
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import os -import torch -import mediapy -from einops import rearrange -from omegaconf import OmegaConf -print(os.getcwd()) -import datetime -from tqdm import tqdm -from models.dit import na -import gc - -from data.image.transforms.divisible_crop import DivisibleCrop -from data.image.transforms.na_resize import NaResize -from data.video.transforms.rearrange import Rearrange -if os.path.exists("./projects/video_diffusion_sr/color_fix.py"): - from projects.video_diffusion_sr.color_fix import wavelet_reconstruction - use_colorfix=True -else: - use_colorfix = False - print('Note!!!!!! Color fix is not avaliable!') -from torchvision.transforms import Compose, Lambda, Normalize -from torchvision.io.video import read_video - - -from common.distributed import ( - get_device, - init_torch, -) - -from common.distributed.advanced import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, - init_sequence_parallel, -) - -from projects.video_diffusion_sr.infer import VideoDiffusionInfer -from common.config import load_config -from common.distributed.ops import sync_data -from common.seed import set_seed -from common.partition import partition_by_groups, partition_by_size -import argparse - -def configure_sequence_parallel(sp_size): - if sp_size > 1: - init_sequence_parallel(sp_size) - -def configure_runner(sp_size): - config_path = os.path.join('./configs_7b', 'main.yaml') - config = load_config(config_path) - runner = VideoDiffusionInfer(config) - OmegaConf.set_readonly(runner.config, False) - - init_torch(cudnn_benchmark=False, timeout=datetime.timedelta(seconds=3600)) - configure_sequence_parallel(sp_size) - runner.configure_dit_model(device="cuda", checkpoint='./ckpts/seedvr2_ema_7b.pth') - runner.configure_vae_model() - # Set memory limit. - if hasattr(runner.vae, "set_memory_limit"): - runner.vae.set_memory_limit(**runner.config.vae.memory_limit) - return runner - -def generation_step(runner, text_embeds_dict, cond_latents): - def _move_to_cuda(x): - return [i.to(get_device()) for i in x] - - noises = [torch.randn_like(latent) for latent in cond_latents] - aug_noises = [torch.randn_like(latent) for latent in cond_latents] - print(f"Generating with noise shape: {noises[0].size()}.") - noises, aug_noises, cond_latents = sync_data((noises, aug_noises, cond_latents), 0) - noises, aug_noises, cond_latents = list( - map(lambda x: _move_to_cuda(x), (noises, aug_noises, cond_latents)) - ) - cond_noise_scale = 0.0 - - def _add_noise(x, aug_noise): - t = ( - torch.tensor([1000.0], device=get_device()) - * cond_noise_scale - ) - shape = torch.tensor(x.shape[1:], device=get_device())[None] - t = runner.timestep_transform(t, shape) - print( - f"Timestep shifting from" - f" {1000.0 * cond_noise_scale} to {t}." 
- ) - x = runner.schedule.forward(x, aug_noise, t) - return x - - conditions = [ - runner.get_condition( - noise, - task="sr", - latent_blur=_add_noise(latent_blur, aug_noise), - ) - for noise, aug_noise, latent_blur in zip(noises, aug_noises, cond_latents) - ] - - with torch.no_grad(), torch.autocast("cuda", torch.bfloat16, enabled=True): - video_tensors = runner.inference( - noises=noises, - conditions=conditions, - dit_offload=True, - **text_embeds_dict, - ) - - samples = [ - ( - rearrange(video[:, None], "c t h w -> t c h w") - if video.ndim == 3 - else rearrange(video, "c t h w -> t c h w") - ) - for video in video_tensors - ] - del video_tensors - - return samples - -def generation_loop(runner, video_path='./test_videos', output_dir='./results', batch_size=1, cfg_scale=1.0, cfg_rescale=0.0, sample_steps=1, seed=666, res_h=1280, res_w=720, sp_size=1): - - def _build_pos_and_neg_prompt(): - # read positive prompt - positive_text = "Cinematic, High Contrast, highly detailed, taken using a Canon EOS R camera, \ - hyper detailed photo - realistic maximum detail, 32k, Color Grading, ultra HD, extreme meticulous detailing, \ - skin pore detailing, hyper sharpness, perfect without deformations." - # read negative prompt - negative_text = "painting, oil painting, illustration, drawing, art, sketch, oil painting, cartoon, \ - CG Style, 3D render, unreal engine, blurring, dirty, messy, worst quality, low quality, frames, watermark, \ - signature, jpeg artifacts, deformed, lowres, over-smooth" - return positive_text, negative_text - - def _build_test_prompts(video_path): - positive_text, negative_text = _build_pos_and_neg_prompt() - original_videos = [] - prompts = {} - video_list = os.listdir(video_path) - for f in video_list: - if f.endswith(".mp4"): - original_videos.append(f) - prompts[f] = positive_text - print(f"Total prompts to be generated: {len(original_videos)}") - return original_videos, prompts, negative_text - - def _extract_text_embeds(): - # Text encoder forward. 
- positive_prompts_embeds = [] - for texts_pos in tqdm(original_videos_local): - text_pos_embeds = torch.load('pos_emb.pt') - text_neg_embeds = torch.load('neg_emb.pt') - - positive_prompts_embeds.append( - {"texts_pos": [text_pos_embeds], "texts_neg": [text_neg_embeds]} - ) - gc.collect() - torch.cuda.empty_cache() - return positive_prompts_embeds - - def cut_videos(videos, sp_size): - t = videos.size(1) - if t <= 4 * sp_size: - print(f"Cut input video size: {videos.size()}") - padding = [videos[:, -1].unsqueeze(1)] * (4 * sp_size - t + 1) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - return videos - if (t - 1) % (4 * sp_size) == 0: - return videos - else: - padding = [videos[:, -1].unsqueeze(1)] * ( - 4 * sp_size - ((t - 1) % (4 * sp_size)) - ) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - assert (videos.size(1) - 1) % (4 * sp_size) == 0 - return videos - - # classifier-free guidance - runner.config.diffusion.cfg.scale = cfg_scale - runner.config.diffusion.cfg.rescale = cfg_rescale - # sampling steps - runner.config.diffusion.timesteps.sampling.steps = sample_steps - runner.configure_diffusion() - - # set random seed - set_seed(seed, same_across_ranks=True) - os.makedirs(output_dir, exist_ok=True) - tgt_path = output_dir - - # get test prompts - original_videos, _, _ = _build_test_prompts(video_path) - - # divide the prompts into different groups - original_videos_group = partition_by_groups( - original_videos, - get_data_parallel_world_size() // get_sequence_parallel_world_size(), - ) - # store prompt mapping - original_videos_local = original_videos_group[ - get_data_parallel_rank() // get_sequence_parallel_world_size() - ] - original_videos_local = partition_by_size(original_videos_local, batch_size) - - # pre-extract the text embeddings - positive_prompts_embeds = _extract_text_embeds() - - video_transform = Compose( - [ - NaResize( - resolution=( - res_h * res_w - ) - ** 0.5, - mode="area", - # Upsample image, model only trained for high res. 
- downsample_only=False, - ), - Lambda(lambda x: torch.clamp(x, 0.0, 1.0)), - DivisibleCrop((16, 16)), - Normalize(0.5, 0.5), - Rearrange("t c h w -> c t h w"), - ] - ) - - # generation loop - for videos, text_embeds in tqdm(zip(original_videos_local, positive_prompts_embeds)): - # read condition latents - cond_latents = [] - for video in videos: - video = ( - read_video( - os.path.join(video_path, video), output_format="TCHW" - )[0] - / 255.0 - ) - print(f"Read video size: {video.size()}") - cond_latents.append(video_transform(video.to(get_device()))) - - ori_lengths = [video.size(1) for video in cond_latents] - input_videos = cond_latents - cond_latents = [cut_videos(video, sp_size) for video in cond_latents] - - runner.dit.to("cpu") - print(f"Encoding videos: {list(map(lambda x: x.size(), cond_latents))}") - runner.vae.to(get_device()) - cond_latents = runner.vae_encode(cond_latents) - runner.vae.to("cpu") - runner.dit.to(get_device()) - - for i, emb in enumerate(text_embeds["texts_pos"]): - text_embeds["texts_pos"][i] = emb.to(get_device()) - for i, emb in enumerate(text_embeds["texts_neg"]): - text_embeds["texts_neg"][i] = emb.to(get_device()) - - samples = generation_step(runner, text_embeds, cond_latents=cond_latents) - runner.dit.to("cpu") - del cond_latents - - # dump samples to the output directory - if get_sequence_parallel_rank() == 0: - for path, input, sample, ori_length in zip( - videos, input_videos, samples, ori_lengths - ): - if ori_length < sample.shape[0]: - sample = sample[:ori_length] - filename = os.path.join(tgt_path, os.path.basename(path)) - # color fix - input = ( - rearrange(input[:, None], "c t h w -> t c h w") - if input.ndim == 3 - else rearrange(input, "c t h w -> t c h w") - ) - if use_colorfix: - sample = wavelet_reconstruction( - sample.to("cpu"), input[: sample.size(0)].to("cpu") - ) - else: - sample = sample.to("cpu") - sample = ( - rearrange(sample[:, None], "t c h w -> t h w c") - if sample.ndim == 3 - else rearrange(sample, "t c h w -> t h w c") - ) - sample = sample.clip(-1, 1).mul_(0.5).add_(0.5).mul_(255).round() - sample = sample.to(torch.uint8).numpy() - - if sample.shape[0] == 1: - mediapy.write_image(filename, sample.squeeze(0)) - else: - mediapy.write_video( - filename, sample, fps=24 - ) - gc.collect() - torch.cuda.empty_cache() - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--video_path", type=str, default="./test_videos") - parser.add_argument("--output_dir", type=str, default="./results") - parser.add_argument("--seed", type=int, default=666) - parser.add_argument("--res_h", type=int, default=720) - parser.add_argument("--res_w", type=int, default=1280) - parser.add_argument("--sp_size", type=int, default=1) - args = parser.parse_args() - - runner = configure_runner(args.sp_size) - generation_loop(runner, **vars(args)) diff --git a/projects_x/inference_seedvr_3b.py b/projects_x/inference_seedvr_3b.py deleted file mode 100644 index 469a97d8dac0769d208be21943b5f7215b249380..0000000000000000000000000000000000000000 --- a/projects_x/inference_seedvr_3b.py +++ /dev/null @@ -1,323 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import os -import torch -import mediapy -from einops import rearrange -from omegaconf import OmegaConf -print(os.getcwd()) -import datetime -from tqdm import tqdm -import gc - -from data.image.transforms.divisible_crop import DivisibleCrop -from data.image.transforms.na_resize import NaResize -from data.video.transforms.rearrange import Rearrange -if os.path.exists("./projects/video_diffusion_sr/color_fix.py"): - from projects.video_diffusion_sr.color_fix import wavelet_reconstruction - use_colorfix=True -else: - use_colorfix = False - print('Note!!!!!! Color fix is not avaliable!') -from torchvision.transforms import Compose, Lambda, Normalize -from torchvision.io.video import read_video -import argparse - -from common.distributed import ( - get_device, - init_torch, -) - -from common.distributed.advanced import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, - init_sequence_parallel, -) - -from projects.video_diffusion_sr.infer import VideoDiffusionInfer -from common.config import load_config -from common.distributed.ops import sync_data -from common.seed import set_seed -from common.partition import partition_by_groups, partition_by_size - - -def configure_sequence_parallel(sp_size): - if sp_size > 1: - init_sequence_parallel(sp_size) - -def configure_runner(sp_size): - config_path = os.path.join('./configs_3b', 'main.yaml') - config = load_config(config_path) - runner = VideoDiffusionInfer(config) - OmegaConf.set_readonly(runner.config, False) - - init_torch(cudnn_benchmark=False, timeout=datetime.timedelta(seconds=3600)) - configure_sequence_parallel(sp_size) - runner.configure_dit_model(device="cuda", checkpoint='./ckpts/seedvr_ema_3b.pth') - runner.configure_vae_model() - # Set memory limit. - if hasattr(runner.vae, "set_memory_limit"): - runner.vae.set_memory_limit(**runner.config.vae.memory_limit) - return runner - -def generation_step(runner, text_embeds_dict, cond_latents): - def _move_to_cuda(x): - return [i.to(get_device()) for i in x] - - noises = [torch.randn_like(latent) for latent in cond_latents] - aug_noises = [torch.randn_like(latent) for latent in cond_latents] - print(f"Generating with noise shape: {noises[0].size()}.") - noises, aug_noises, cond_latents = sync_data((noises, aug_noises, cond_latents), 0) - noises, aug_noises, cond_latents = list( - map(lambda x: _move_to_cuda(x), (noises, aug_noises, cond_latents)) - ) - cond_noise_scale = 0.1 - - def _add_noise(x, aug_noise): - t = ( - torch.tensor([1000.0], device=get_device()) - * cond_noise_scale - ) - shape = torch.tensor(x.shape[1:], device=get_device())[None] - t = runner.timestep_transform(t, shape) - print( - f"Timestep shifting from" - f" {1000.0 * cond_noise_scale} to {t}." 
- ) - x = runner.schedule.forward(x, aug_noise, t) - return x - - conditions = [ - runner.get_condition( - noise, - task="sr", - latent_blur=_add_noise(latent_blur, aug_noise), - ) - for noise, aug_noise, latent_blur in zip(noises, aug_noises, cond_latents) - ] - - with torch.no_grad(), torch.autocast("cuda", torch.bfloat16, enabled=True): - video_tensors = runner.inference( - noises=noises, - conditions=conditions, - dit_offload=True, - **text_embeds_dict, - ) - - samples = [ - ( - rearrange(video[:, None], "c t h w -> t c h w") - if video.ndim == 3 - else rearrange(video, "c t h w -> t c h w") - ) - for video in video_tensors - ] - del video_tensors - - return samples - -def generation_loop(runner, video_path='./test_videos', output_dir='./results', batch_size=1, cfg_scale=6.5, cfg_rescale=0.0, sample_steps=50, seed=666, res_h=1280, res_w=720, sp_size=1): - - def _build_pos_and_neg_prompt(): - # read positive prompt - positive_text = "Cinematic, High Contrast, highly detailed, taken using a Canon EOS R camera, \ - hyper detailed photo - realistic maximum detail, 32k, Color Grading, ultra HD, extreme meticulous detailing, \ - skin pore detailing, hyper sharpness, perfect without deformations." - # read negative prompt - negative_text = "painting, oil painting, illustration, drawing, art, sketch, oil painting, cartoon, \ - CG Style, 3D render, unreal engine, blurring, dirty, messy, worst quality, low quality, frames, watermark, \ - signature, jpeg artifacts, deformed, lowres, over-smooth" - return positive_text, negative_text - - def _build_test_prompts(video_path): - positive_text, negative_text = _build_pos_and_neg_prompt() - original_videos = [] - prompts = {} - video_list = os.listdir(video_path) - for f in video_list: - if f.endswith(".mp4"): - original_videos.append(f) - prompts[f] = positive_text - print(f"Total prompts to be generated: {len(original_videos)}") - return original_videos, prompts, negative_text - - def _extract_text_embeds(): - # Text encoder forward. 
- positive_prompts_embeds = [] - for texts_pos in tqdm(original_videos_local): - text_pos_embeds = torch.load('pos_emb.pt') - text_neg_embeds = torch.load('neg_emb.pt') - - positive_prompts_embeds.append( - {"texts_pos": [text_pos_embeds], "texts_neg": [text_neg_embeds]} - ) - gc.collect() - torch.cuda.empty_cache() - return positive_prompts_embeds - - def cut_videos(videos, sp_size): - t = videos.size(1) - if t <= 4 * sp_size: - print(f"Cut input video size: {videos.size()}") - padding = [videos[:, -1].unsqueeze(1)] * (4 * sp_size - t + 1) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - return videos - if (t - 1) % (4 * sp_size) == 0: - return videos - else: - padding = [videos[:, -1].unsqueeze(1)] * ( - 4 * sp_size - ((t - 1) % (4 * sp_size)) - ) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - assert (videos.size(1) - 1) % (4 * sp_size) == 0 - return videos - - # classifier-free guidance - runner.config.diffusion.cfg.scale = cfg_scale - runner.config.diffusion.cfg.rescale = cfg_rescale - # sampling steps - runner.config.diffusion.timesteps.sampling.steps = sample_steps - runner.configure_diffusion() - - # set random seed - set_seed(seed, same_across_ranks=True) - os.makedirs(output_dir, exist_ok=True) - tgt_path = output_dir - - # get test prompts - original_videos, _, _ = _build_test_prompts(video_path) - - # divide the prompts into different groups - original_videos_group = partition_by_groups( - original_videos, - get_data_parallel_world_size() // get_sequence_parallel_world_size(), - ) - # store prompt mapping - original_videos_local = original_videos_group[ - get_data_parallel_rank() // get_sequence_parallel_world_size() - ] - original_videos_local = partition_by_size(original_videos_local, batch_size) - - # pre-extract the text embeddings - positive_prompts_embeds = _extract_text_embeds() - - video_transform = Compose( - [ - NaResize( - resolution=( - res_h * res_w - ) - ** 0.5, - mode="area", - # Upsample image, model only trained for high res. 
- downsample_only=False, - ), - Lambda(lambda x: torch.clamp(x, 0.0, 1.0)), - DivisibleCrop((16, 16)), - Normalize(0.5, 0.5), - Rearrange("t c h w -> c t h w"), - ] - ) - - # generation loop - for videos, text_embeds in tqdm(zip(original_videos_local, positive_prompts_embeds)): - # read condition latents - cond_latents = [] - for video in videos: - video = ( - read_video( - os.path.join(video_path, video), output_format="TCHW" - )[0] - / 255.0 - ) - print(f"Read video size: {video.size()}") - cond_latents.append(video_transform(video.to(get_device()))) - - ori_lengths = [video.size(1) for video in cond_latents] - input_videos = cond_latents - cond_latents = [cut_videos(video, sp_size) for video in cond_latents] - - runner.dit.to("cpu") - print(f"Encoding videos: {list(map(lambda x: x.size(), cond_latents))}") - runner.vae.to(get_device()) - cond_latents = runner.vae_encode(cond_latents) - runner.vae.to("cpu") - runner.dit.to(get_device()) - - for i, emb in enumerate(text_embeds["texts_pos"]): - text_embeds["texts_pos"][i] = emb.to(get_device()) - for i, emb in enumerate(text_embeds["texts_neg"]): - text_embeds["texts_neg"][i] = emb.to(get_device()) - - samples = generation_step(runner, text_embeds, cond_latents=cond_latents) - runner.dit.to("cpu") - del cond_latents - - # dump samples to the output directory - if get_sequence_parallel_rank() == 0: - for path, input, sample, ori_length in zip( - videos, input_videos, samples, ori_lengths - ): - if ori_length < sample.shape[0]: - sample = sample[:ori_length] - filename = os.path.join(tgt_path, os.path.basename(path)) - # color fix - input = ( - rearrange(input[:, None], "c t h w -> t c h w") - if input.ndim == 3 - else rearrange(input, "c t h w -> t c h w") - ) - if use_colorfix: - sample = wavelet_reconstruction( - sample.to("cpu"), input[: sample.size(0)].to("cpu") - ) - else: - sample = sample.to("cpu") - sample = ( - rearrange(sample[:, None], "t c h w -> t h w c") - if sample.ndim == 3 - else rearrange(sample, "t c h w -> t h w c") - ) - sample = sample.clip(-1, 1).mul_(0.5).add_(0.5).mul_(255).round() - sample = sample.to(torch.uint8).numpy() - - if sample.shape[0] == 1: - mediapy.write_image(filename, sample.squeeze(0)) - else: - mediapy.write_video( - filename, sample, fps=24 - ) - gc.collect() - torch.cuda.empty_cache() - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--video_path", type=str, default="./test_videos") - parser.add_argument("--output_dir", type=str, default="./results") - parser.add_argument("--cfg_scale", type=float, default=6.5) - parser.add_argument("--sample_steps", type=int, default=50) - parser.add_argument("--seed", type=int, default=666) - parser.add_argument("--res_h", type=int, default=720) - parser.add_argument("--res_w", type=int, default=1280) - parser.add_argument("--sp_size", type=int, default=1) - args = parser.parse_args() - - runner = configure_runner(args.sp_size) - generation_loop(runner, **vars(args)) diff --git a/projects_x/inference_seedvr_7b.py b/projects_x/inference_seedvr_7b.py deleted file mode 100644 index 1408c9ca6f1b40ff522611f59937c968e4f15b44..0000000000000000000000000000000000000000 --- a/projects_x/inference_seedvr_7b.py +++ /dev/null @@ -1,324 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. 
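
The `cut_videos` helper in the scripts above pads a clip with copies of its last frame until the frame count satisfies the sequence-parallel constraint (t - 1) % (4 * sp_size) == 0, with very short clips padded up to 4 * sp_size + 1 frames. A self-contained sketch of that rule (function name is illustrative, not part of the repo):

# Standalone restatement of the frame-count rule enforced by cut_videos:
# pad with copies of the last frame until (t - 1) is a multiple of 4 * sp_size.
def padded_length(t, sp_size=1):
    base = 4 * sp_size
    if t <= base:
        return base + 1                      # short clip: pad up to base + 1 frames
    if (t - 1) % base == 0:
        return t                             # already valid
    return t + (base - (t - 1) % base)       # pad the remainder

print(padded_length(30))      # 33 -> (33 - 1) % 4 == 0
print(padded_length(3, 2))    # 9  -> short clips become 4 * sp_size + 1 frames
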
-# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import os -import torch -import mediapy -from einops import rearrange -from omegaconf import OmegaConf -print(os.getcwd()) -import datetime -from tqdm import tqdm -from models.dit import na -import gc - -from data.image.transforms.divisible_crop import DivisibleCrop -from data.image.transforms.na_resize import NaResize -from data.video.transforms.rearrange import Rearrange -if os.path.exists("./projects/video_diffusion_sr/color_fix.py"): - from projects.video_diffusion_sr.color_fix import wavelet_reconstruction - use_colorfix=True -else: - use_colorfix = False - print('Note!!!!!! Color fix is not avaliable!') -from torchvision.transforms import Compose, Lambda, Normalize -from torchvision.io.video import read_video -import argparse - - -from common.distributed import ( - get_device, - init_torch, -) - -from common.distributed.advanced import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_sequence_parallel_rank, - get_sequence_parallel_world_size, - init_sequence_parallel, -) - -from projects.video_diffusion_sr.infer import VideoDiffusionInfer -from common.config import load_config -from common.distributed.ops import sync_data -from common.seed import set_seed -from common.partition import partition_by_groups, partition_by_size - - -def configure_sequence_parallel(sp_size): - if sp_size > 1: - init_sequence_parallel(sp_size) - -def configure_runner(sp_size): - config_path = os.path.join('./configs_7b', 'main.yaml') - config = load_config(config_path) - runner = VideoDiffusionInfer(config) - OmegaConf.set_readonly(runner.config, False) - - init_torch(cudnn_benchmark=False, timeout=datetime.timedelta(seconds=3600)) - configure_sequence_parallel(sp_size) - runner.configure_dit_model(device="cuda", checkpoint='./ckpts/seedvr_ema_7b.pth') - runner.configure_vae_model() - # Set memory limit. - if hasattr(runner.vae, "set_memory_limit"): - runner.vae.set_memory_limit(**runner.config.vae.memory_limit) - return runner - -def generation_step(runner, text_embeds_dict, cond_latents): - def _move_to_cuda(x): - return [i.to(get_device()) for i in x] - - noises = [torch.randn_like(latent) for latent in cond_latents] - aug_noises = [torch.randn_like(latent) for latent in cond_latents] - print(f"Generating with noise shape: {noises[0].size()}.") - noises, aug_noises, cond_latents = sync_data((noises, aug_noises, cond_latents), 0) - noises, aug_noises, cond_latents = list( - map(lambda x: _move_to_cuda(x), (noises, aug_noises, cond_latents)) - ) - cond_noise_scale = 0.1 - - def _add_noise(x, aug_noise): - t = ( - torch.tensor([1000.0], device=get_device()) - * cond_noise_scale - ) - shape = torch.tensor(x.shape[1:], device=get_device())[None] - t = runner.timestep_transform(t, shape) - print( - f"Timestep shifting from" - f" {1000.0 * cond_noise_scale} to {t}." 
- ) - x = runner.schedule.forward(x, aug_noise, t) - return x - - conditions = [ - runner.get_condition( - noise, - task="sr", - latent_blur=_add_noise(latent_blur, aug_noise), - ) - for noise, aug_noise, latent_blur in zip(noises, aug_noises, cond_latents) - ] - - with torch.no_grad(), torch.autocast("cuda", torch.bfloat16, enabled=True): - video_tensors = runner.inference( - noises=noises, - conditions=conditions, - dit_offload=True, - **text_embeds_dict, - ) - - samples = [ - ( - rearrange(video[:, None], "c t h w -> t c h w") - if video.ndim == 3 - else rearrange(video, "c t h w -> t c h w") - ) - for video in video_tensors - ] - del video_tensors - - return samples - -def generation_loop(runner, video_path='./test_videos', output_dir='./results', batch_size=1, cfg_scale=6.5, cfg_rescale=0.0, sample_steps=50, seed=666, res_h=1280, res_w=720, sp_size=1): - - def _build_pos_and_neg_prompt(): - # read positive prompt - positive_text = "Cinematic, High Contrast, highly detailed, taken using a Canon EOS R camera, \ - hyper detailed photo - realistic maximum detail, 32k, Color Grading, ultra HD, extreme meticulous detailing, \ - skin pore detailing, hyper sharpness, perfect without deformations." - # read negative prompt - negative_text = "painting, oil painting, illustration, drawing, art, sketch, oil painting, cartoon, \ - CG Style, 3D render, unreal engine, blurring, dirty, messy, worst quality, low quality, frames, watermark, \ - signature, jpeg artifacts, deformed, lowres, over-smooth" - return positive_text, negative_text - - def _build_test_prompts(video_path): - positive_text, negative_text = _build_pos_and_neg_prompt() - original_videos = [] - prompts = {} - video_list = os.listdir(video_path) - for f in video_list: - if f.endswith(".mp4"): - original_videos.append(f) - prompts[f] = positive_text - print(f"Total prompts to be generated: {len(original_videos)}") - return original_videos, prompts, negative_text - - def _extract_text_embeds(): - # Text encoder forward. 
- positive_prompts_embeds = [] - for texts_pos in tqdm(original_videos_local): - text_pos_embeds = torch.load('pos_emb.pt') - text_neg_embeds = torch.load('neg_emb.pt') - - positive_prompts_embeds.append( - {"texts_pos": [text_pos_embeds], "texts_neg": [text_neg_embeds]} - ) - gc.collect() - torch.cuda.empty_cache() - return positive_prompts_embeds - - def cut_videos(videos, sp_size): - t = videos.size(1) - if t <= 4 * sp_size: - print(f"Cut input video size: {videos.size()}") - padding = [videos[:, -1].unsqueeze(1)] * (4 * sp_size - t + 1) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - return videos - if (t - 1) % (4 * sp_size) == 0: - return videos - else: - padding = [videos[:, -1].unsqueeze(1)] * ( - 4 * sp_size - ((t - 1) % (4 * sp_size)) - ) - padding = torch.cat(padding, dim=1) - videos = torch.cat([videos, padding], dim=1) - assert (videos.size(1) - 1) % (4 * sp_size) == 0 - return videos - - # classifier-free guidance - runner.config.diffusion.cfg.scale = cfg_scale - runner.config.diffusion.cfg.rescale = cfg_rescale - # sampling steps - runner.config.diffusion.timesteps.sampling.steps = sample_steps - runner.configure_diffusion() - - # set random seed - set_seed(seed, same_across_ranks=True) - os.makedirs(output_dir, exist_ok=True) - tgt_path = output_dir - - # get test prompts - original_videos, _, _ = _build_test_prompts(video_path) - - # divide the prompts into different groups - original_videos_group = partition_by_groups( - original_videos, - get_data_parallel_world_size() // get_sequence_parallel_world_size(), - ) - # store prompt mapping - original_videos_local = original_videos_group[ - get_data_parallel_rank() // get_sequence_parallel_world_size() - ] - original_videos_local = partition_by_size(original_videos_local, batch_size) - - # pre-extract the text embeddings - positive_prompts_embeds = _extract_text_embeds() - - video_transform = Compose( - [ - NaResize( - resolution=( - res_h * res_w - ) - ** 0.5, - mode="area", - # Upsample image, model only trained for high res. 
- downsample_only=False, - ), - Lambda(lambda x: torch.clamp(x, 0.0, 1.0)), - DivisibleCrop((16, 16)), - Normalize(0.5, 0.5), - Rearrange("t c h w -> c t h w"), - ] - ) - - # generation loop - for videos, text_embeds in tqdm(zip(original_videos_local, positive_prompts_embeds)): - # read condition latents - cond_latents = [] - for video in videos: - video = ( - read_video( - os.path.join(video_path, video), output_format="TCHW" - )[0] - / 255.0 - ) - print(f"Read video size: {video.size()}") - cond_latents.append(video_transform(video.to(get_device()))) - - ori_lengths = [video.size(1) for video in cond_latents] - input_videos = cond_latents - cond_latents = [cut_videos(video, sp_size) for video in cond_latents] - - runner.dit.to("cpu") - print(f"Encoding videos: {list(map(lambda x: x.size(), cond_latents))}") - runner.vae.to(get_device()) - cond_latents = runner.vae_encode(cond_latents) - runner.vae.to("cpu") - runner.dit.to(get_device()) - - for i, emb in enumerate(text_embeds["texts_pos"]): - text_embeds["texts_pos"][i] = emb.to(get_device()) - for i, emb in enumerate(text_embeds["texts_neg"]): - text_embeds["texts_neg"][i] = emb.to(get_device()) - - samples = generation_step(runner, text_embeds, cond_latents=cond_latents) - runner.dit.to("cpu") - del cond_latents - - # dump samples to the output directory - if get_sequence_parallel_rank() == 0: - for path, input, sample, ori_length in zip( - videos, input_videos, samples, ori_lengths - ): - if ori_length < sample.shape[0]: - sample = sample[:ori_length] - filename = os.path.join(tgt_path, os.path.basename(path)) - # color fix - input = ( - rearrange(input[:, None], "c t h w -> t c h w") - if input.ndim == 3 - else rearrange(input, "c t h w -> t c h w") - ) - if use_colorfix: - sample = wavelet_reconstruction( - sample.to("cpu"), input[: sample.size(0)].to("cpu") - ) - else: - sample = sample.to("cpu") - sample = ( - rearrange(sample[:, None], "t c h w -> t h w c") - if sample.ndim == 3 - else rearrange(sample, "t c h w -> t h w c") - ) - sample = sample.clip(-1, 1).mul_(0.5).add_(0.5).mul_(255).round() - sample = sample.to(torch.uint8).numpy() - - if sample.shape[0] == 1: - mediapy.write_image(filename, sample.squeeze(0)) - else: - mediapy.write_video( - filename, sample, fps=24 - ) - gc.collect() - torch.cuda.empty_cache() - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--video_path", type=str, default="./test_videos") - parser.add_argument("--output_dir", type=str, default="./results") - parser.add_argument("--cfg_scale", type=float, default=6.5) - parser.add_argument("--sample_steps", type=int, default=50) - parser.add_argument("--seed", type=int, default=666) - parser.add_argument("--res_h", type=int, default=720) - parser.add_argument("--res_w", type=int, default=1280) - parser.add_argument("--sp_size", type=int, default=1) - args = parser.parse_args() - - runner = configure_runner(args.sp_size) - generation_loop(runner, **vars(args)) diff --git a/projects_x/utils.py b/projects_x/utils.py deleted file mode 100644 index f2090852bf8371aa758c2c443ba3fc112055d67f..0000000000000000000000000000000000000000 --- a/projects_x/utils.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
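
The three inference scripts above expose the same `configure_runner` / `generation_loop` entry points that their `__main__` blocks wire to argparse. A minimal single-process sketch for the 7B variant, assuming the repository root is on `PYTHONPATH`, `init_torch` can initialize a single-rank distributed environment, the hard-coded `./configs_7b/main.yaml` and `./ckpts/seedvr_ema_7b.pth` files exist, and the pre-extracted `pos_emb.pt` / `neg_emb.pt` text embeddings are present in the working directory (argument values mirror the script's defaults):

from projects_x.inference_seedvr_7b import configure_runner, generation_loop

# sp_size=1: single rank, no sequence parallelism.
runner = configure_runner(sp_size=1)
generation_loop(
    runner,
    video_path="./test_videos",   # folder of input .mp4 clips
    output_dir="./results",
    batch_size=1,
    cfg_scale=6.5,
    cfg_rescale=0.0,
    sample_steps=50,
    seed=666,
    res_h=720,
    res_w=1280,
    sp_size=1,
)
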
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -import re - -import cv2 -import numpy as np -import torch -from torchvision.utils import make_grid - - -# from basicsr -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. - - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. - """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - return _totensor(imgs, bgr2rgb, float32) - - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. - - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. - """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. 
- img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1: - result = result[0] - return result - - -def resize_numpy_image_area(image, area=512 * 512): - h, w = image.shape[:2] - k = math.sqrt(area / (h * w)) - h = int(h * k) - (int(h * k) % 16) - w = int(w * k) - (int(w * k) % 16) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - return image - -def resize_numpy_image_long(image, long_edge=768): - h, w = image.shape[:2] - if max(h, w) <= long_edge: - return image - k = long_edge / max(h, w) - h = int(h * k) - w = int(w * k) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - return image - - -# reference: https://github.com/huggingface/diffusers/pull/9295/files -def convert_flux_lora_to_diffusers(old_state_dict): - new_state_dict = {} - orig_keys = list(old_state_dict.keys()) - - def handle_qkv(sds_sd, ait_sd, sds_key, ait_keys, dims=None): - down_weight = sds_sd.pop(sds_key) - up_weight = sds_sd.pop(sds_key.replace(".down.weight", ".up.weight")) - - # calculate dims if not provided - num_splits = len(ait_keys) - if dims is None: - dims = [up_weight.shape[0] // num_splits] * num_splits - else: - assert sum(dims) == up_weight.shape[0] - - # make ai-toolkit weight - ait_down_keys = [k + ".lora_A.weight" for k in ait_keys] - ait_up_keys = [k + ".lora_B.weight" for k in ait_keys] - - # down_weight is copied to each split - ait_sd.update({k: down_weight for k in ait_down_keys}) - - # up_weight is split to each split - ait_sd.update({k: v for k, v in zip(ait_up_keys, torch.split(up_weight, dims, dim=0))}) # noqa: C416 - - for old_key in orig_keys: - # Handle double_blocks - if 'double_blocks' in old_key: - block_num = re.search(r"double_blocks_(\d+)", old_key).group(1) - new_key = f"transformer.transformer_blocks.{block_num}" - - if "proj_lora1" in old_key: - new_key += ".attn.to_out.0" - elif "proj_lora2" in old_key: - new_key += ".attn.to_add_out" - elif "qkv_lora2" in old_key and "up" not in old_key: - handle_qkv( - old_state_dict, - new_state_dict, - old_key, - [ - f"transformer.transformer_blocks.{block_num}.attn.add_q_proj", - f"transformer.transformer_blocks.{block_num}.attn.add_k_proj", - f"transformer.transformer_blocks.{block_num}.attn.add_v_proj", - ], - ) - # continue - elif "qkv_lora1" in old_key and "up" not in old_key: - handle_qkv( - old_state_dict, - new_state_dict, - old_key, - [ - f"transformer.transformer_blocks.{block_num}.attn.to_q", - f"transformer.transformer_blocks.{block_num}.attn.to_k", - f"transformer.transformer_blocks.{block_num}.attn.to_v", - ], - ) - # continue - - if "down" in old_key: - new_key += ".lora_A.weight" - elif "up" in old_key: - new_key += ".lora_B.weight" - - # Handle single_blocks - elif 'single_blocks' in old_key: - block_num = re.search(r"single_blocks_(\d+)", old_key).group(1) - new_key = f"transformer.single_transformer_blocks.{block_num}" - - if "proj_lora" in old_key: - new_key += ".proj_out" - elif "qkv_lora" in old_key and "up" not in old_key: - handle_qkv( - old_state_dict, - new_state_dict, - old_key, - [ - f"transformer.single_transformer_blocks.{block_num}.attn.to_q", - f"transformer.single_transformer_blocks.{block_num}.attn.to_k", - f"transformer.single_transformer_blocks.{block_num}.attn.to_v", - ], - ) - - if "down" in old_key: - new_key += ".lora_A.weight" - elif "up" in old_key: - new_key += ".lora_B.weight" - - else: - # Handle other potential key patterns here - new_key = old_key - - # Since we already handle qkv above. 
- if "qkv" not in old_key and 'embedding' not in old_key: - new_state_dict[new_key] = old_state_dict.pop(old_key) - - # if len(old_state_dict) > 0: - # raise ValueError(f"`old_state_dict` should be at this point but has: {list(old_state_dict.keys())}.") - - return new_state_dict diff --git a/projects_x/video_diffusion_sr/color_fix.py b/projects_x/video_diffusion_sr/color_fix.py deleted file mode 100644 index efe804519873717eee01468439c416325eb8e192..0000000000000000000000000000000000000000 --- a/projects_x/video_diffusion_sr/color_fix.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -from PIL import Image -from torch import Tensor -from torch.nn import functional as F - -from torchvision.transforms import ToTensor, ToPILImage - -def adain_color_fix(target: Image, source: Image): - # Convert images to tensors - to_tensor = ToTensor() - target_tensor = to_tensor(target).unsqueeze(0) - source_tensor = to_tensor(source).unsqueeze(0) - - # Apply adaptive instance normalization - result_tensor = adaptive_instance_normalization(target_tensor, source_tensor) - - # Convert tensor back to image - to_image = ToPILImage() - result_image = to_image(result_tensor.squeeze(0).clamp_(0.0, 1.0)) - - return result_image - -def wavelet_color_fix(target: Image, source: Image): - # Convert images to tensors - to_tensor = ToTensor() - target_tensor = to_tensor(target).unsqueeze(0) - source_tensor = to_tensor(source).unsqueeze(0) - - # Apply wavelet reconstruction - result_tensor = wavelet_reconstruction(target_tensor, source_tensor) - - # Convert tensor back to image - to_image = ToPILImage() - result_image = to_image(result_tensor.squeeze(0).clamp_(0.0, 1.0)) - - return result_image - -def calc_mean_std(feat: Tensor, eps=1e-5): - """Calculate mean and std for adaptive_instance_normalization. - Args: - feat (Tensor): 4D tensor. - eps (float): A small value added to the variance to avoid - divide-by-zero. Default: 1e-5. - """ - size = feat.size() - assert len(size) == 4, 'The input feature should be 4D tensor.' - b, c = size[:2] - feat_var = feat.view(b, c, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(b, c, 1, 1) - feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) - return feat_mean, feat_std - -def adaptive_instance_normalization(content_feat:Tensor, style_feat:Tensor): - """Adaptive instance normalization. - Adjust the reference features to have the similar color and illuminations - as those in the degradate features. - Args: - content_feat (Tensor): The reference feature. - style_feat (Tensor): The degradate features. - """ - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) - return normalized_feat * style_std.expand(size) + style_mean.expand(size) - -def wavelet_blur(image: Tensor, radius: int): - """ - Apply wavelet blur to the input tensor. 
- """ - # input shape: (1, 3, H, W) - # convolution kernel - kernel_vals = [ - [0.0625, 0.125, 0.0625], - [0.125, 0.25, 0.125], - [0.0625, 0.125, 0.0625], - ] - kernel = torch.tensor(kernel_vals, dtype=image.dtype, device=image.device) - # add channel dimensions to the kernel to make it a 4D tensor - kernel = kernel[None, None] - # repeat the kernel across all input channels - kernel = kernel.repeat(3, 1, 1, 1) - image = F.pad(image, (radius, radius, radius, radius), mode='replicate') - # apply convolution - output = F.conv2d(image, kernel, groups=3, dilation=radius) - return output - -def wavelet_decomposition(image: Tensor, levels=5): - """ - Apply wavelet decomposition to the input tensor. - This function only returns the low frequency & the high frequency. - """ - high_freq = torch.zeros_like(image) - for i in range(levels): - radius = 2 ** i - low_freq = wavelet_blur(image, radius) - high_freq += (image - low_freq) - image = low_freq - - return high_freq, low_freq - -def wavelet_reconstruction(content_feat:Tensor, style_feat:Tensor): - """ - Apply wavelet decomposition, so that the content will have the same color as the style. - """ - # calculate the wavelet decomposition of the content feature - content_high_freq, content_low_freq = wavelet_decomposition(content_feat) - del content_low_freq - # calculate the wavelet decomposition of the style feature - style_high_freq, style_low_freq = wavelet_decomposition(style_feat) - del style_high_freq - # reconstruct the content feature with the style's high frequency - return content_high_freq + style_low_freq \ No newline at end of file diff --git a/projects_x/video_diffusion_sr/infer.py b/projects_x/video_diffusion_sr/infer.py deleted file mode 100644 index 54bb5fba186f884dd52aed61672b6c675046e42f..0000000000000000000000000000000000000000 --- a/projects_x/video_diffusion_sr/infer.py +++ /dev/null @@ -1,342 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. 
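
The color-fix helpers above are what the inference scripts use per sample (when a color_fix module is available) to keep the restored output's colors consistent with the degraded input; they can also be applied to individual frames on their own. A minimal sketch, assuming the module is importable under the deleted `projects_x` layout and that both frames exist on disk at the same resolution (paths are illustrative):

from PIL import Image
from projects_x.video_diffusion_sr.color_fix import adain_color_fix, wavelet_color_fix

restored = Image.open("restored_frame.png").convert("RGB")   # model output
degraded = Image.open("input_frame.png").convert("RGB")      # original low-quality frame

# Keep the restored high frequencies, take the low-frequency color from the input.
fixed = wavelet_color_fix(target=restored, source=degraded)
fixed.save("restored_frame_wavelet_fix.png")

# Alternative: match per-channel mean/std of the input instead (AdaIN).
fixed_adain = adain_color_fix(target=restored, source=degraded)
fixed_adain.save("restored_frame_adain_fix.png")
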
- -from typing import List, Optional, Tuple, Union -import torch -from einops import rearrange -from omegaconf import DictConfig, ListConfig -from torch import Tensor - -from common.config import create_object -from common.decorators import log_on_entry, log_runtime -from common.diffusion import ( - classifier_free_guidance_dispatcher, - create_sampler_from_config, - create_sampling_timesteps_from_config, - create_schedule_from_config, -) -from common.distributed import ( - get_device, - get_global_rank, -) - -from common.distributed.meta_init_utils import ( - meta_non_persistent_buffer_init_fn, -) -# from common.fs import download - -from models.dit_v2 import na - -class VideoDiffusionInfer(): - def __init__(self, config: DictConfig): - self.config = config - self.device = "cuda" - - def get_condition(self, latent: Tensor, latent_blur: Tensor, task: str) -> Tensor: - t, h, w, c = latent.shape - cond = torch.zeros([t, h, w, c + 1], device=latent.device, dtype=latent.dtype) - if task == "t2v" or t == 1: - # t2i or t2v generation. - if task == "sr": - cond[:, ..., :-1] = latent_blur[:] - cond[:, ..., -1:] = 1.0 - return cond - if task == "i2v": - # i2v generation. - cond[:1, ..., :-1] = latent[:1] - cond[:1, ..., -1:] = 1.0 - return cond - if task == "v2v": - # v2v frame extension. - cond[:2, ..., :-1] = latent[:2] - cond[:2, ..., -1:] = 1.0 - return cond - if task == "sr": - # sr generation. - cond[:, ..., :-1] = latent_blur[:] - cond[:, ..., -1:] = 1.0 - return cond - raise NotImplementedError - - @log_on_entry - @log_runtime - def configure_dit_model(self, device="cuda", checkpoint=None): - # Load dit checkpoint. - # For fast init & resume, - # when training from scratch, rank0 init DiT on cpu, then sync to other ranks with FSDP. - # otherwise, all ranks init DiT on meta device, then load_state_dict with assign=True. - - # Create dit model. - with torch.device(self.device): - self.dit = create_object(self.config.dit.model) - self.dit.set_gradient_checkpointing(self.config.dit.gradient_checkpoint) - - if checkpoint: - state = torch.load(checkpoint, map_location=self.device, mmap=True) - loading_info = self.dit.load_state_dict(state, strict=True, assign=True) - print(f"Loading pretrained ckpt from {checkpoint}") - print(f"Loading info: {loading_info}") - self.dit = meta_non_persistent_buffer_init_fn(self.dit) - - # Print model size. - num_params = sum(p.numel() for p in self.dit.parameters() if p.requires_grad) - print(f"DiT trainable parameters: {num_params:,}") - - @log_on_entry - @log_runtime - def configure_vae_model(self): - # Create vae model. - dtype = getattr(torch, self.config.vae.dtype) - self.vae = create_object(self.config.vae.model) - self.vae.requires_grad_(False).eval() - self.vae.to(device=get_device(), dtype=dtype) - - # Load vae checkpoint. - state = torch.load( - self.config.vae.checkpoint, map_location=get_device(), mmap=True - ) - self.vae.load_state_dict(state) - - # Set causal slicing. 
- if hasattr(self.vae, "set_causal_slicing") and hasattr(self.config.vae, "slicing"): - self.vae.set_causal_slicing(**self.config.vae.slicing) - - # ------------------------------ Diffusion ------------------------------ # - - def configure_diffusion(self): - self.schedule = create_schedule_from_config( - config=self.config.diffusion.schedule, - device=get_device(), - ) - self.sampling_timesteps = create_sampling_timesteps_from_config( - config=self.config.diffusion.timesteps.sampling, - schedule=self.schedule, - device=get_device(), - ) - self.sampler = create_sampler_from_config( - config=self.config.diffusion.sampler, - schedule=self.schedule, - timesteps=self.sampling_timesteps, - ) - - # -------------------------------- Helper ------------------------------- # - - @torch.no_grad() - def vae_encode(self, samples: List[Tensor]) -> List[Tensor]: - use_sample = self.config.vae.get("use_sample", True) - latents = [] - if len(samples) > 0: - device = get_device() - dtype = getattr(torch, self.config.vae.dtype) - scale = self.config.vae.scaling_factor - shift = self.config.vae.get("shifting_factor", 0.0) - - if isinstance(scale, ListConfig): - scale = torch.tensor(scale, device=device, dtype=dtype) - if isinstance(shift, ListConfig): - shift = torch.tensor(shift, device=device, dtype=dtype) - - # Group samples of the same shape to batches if enabled. - if self.config.vae.grouping: - batches, indices = na.pack(samples) - else: - batches = [sample.unsqueeze(0) for sample in samples] - - # Vae process by each group. - for sample in batches: - sample = sample.to(device, dtype) - if hasattr(self.vae, "preprocess"): - sample = self.vae.preprocess(sample) - if use_sample: - latent = self.vae.encode(sample).latent - else: - # Deterministic vae encode, only used for i2v inference (optionally) - latent = self.vae.encode(sample).posterior.mode().squeeze(2) - latent = latent.unsqueeze(2) if latent.ndim == 4 else latent - latent = rearrange(latent, "b c ... -> b ... c") - latent = (latent - shift) * scale - latents.append(latent) - - # Ungroup back to individual latent with the original order. - if self.config.vae.grouping: - latents = na.unpack(latents, indices) - else: - latents = [latent.squeeze(0) for latent in latents] - - return latents - - @torch.no_grad() - def vae_decode(self, latents: List[Tensor]) -> List[Tensor]: - samples = [] - if len(latents) > 0: - device = get_device() - dtype = getattr(torch, self.config.vae.dtype) - scale = self.config.vae.scaling_factor - shift = self.config.vae.get("shifting_factor", 0.0) - - if isinstance(scale, ListConfig): - scale = torch.tensor(scale, device=device, dtype=dtype) - if isinstance(shift, ListConfig): - shift = torch.tensor(shift, device=device, dtype=dtype) - - # Group latents of the same shape to batches if enabled. - if self.config.vae.grouping: - latents, indices = na.pack(latents) - else: - latents = [latent.unsqueeze(0) for latent in latents] - - # Vae process by each group. - for latent in latents: - latent = latent.to(device, dtype) - latent = latent / scale + shift - latent = rearrange(latent, "b ... c -> b c ...") - latent = latent.squeeze(2) - sample = self.vae.decode(latent).sample - if hasattr(self.vae, "postprocess"): - sample = self.vae.postprocess(sample) - samples.append(sample) - - # Ungroup back to individual sample with the original order. 
- if self.config.vae.grouping: - samples = na.unpack(samples, indices) - else: - samples = [sample.squeeze(0) for sample in samples] - - return samples - - def timestep_transform(self, timesteps: Tensor, latents_shapes: Tensor): - # Skip if not needed. - if not self.config.diffusion.timesteps.get("transform", False): - return timesteps - - # Compute resolution. - vt = self.config.vae.model.get("temporal_downsample_factor", 4) - vs = self.config.vae.model.get("spatial_downsample_factor", 8) - frames = (latents_shapes[:, 0] - 1) * vt + 1 - heights = latents_shapes[:, 1] * vs - widths = latents_shapes[:, 2] * vs - - # Compute shift factor. - def get_lin_function(x1, y1, x2, y2): - m = (y2 - y1) / (x2 - x1) - b = y1 - m * x1 - return lambda x: m * x + b - - img_shift_fn = get_lin_function(x1=256 * 256, y1=1.0, x2=1024 * 1024, y2=3.2) - vid_shift_fn = get_lin_function(x1=256 * 256 * 37, y1=1.0, x2=1280 * 720 * 145, y2=5.0) - shift = torch.where( - frames > 1, - vid_shift_fn(heights * widths * frames), - img_shift_fn(heights * widths), - ) - - # Shift timesteps. - timesteps = timesteps / self.schedule.T - timesteps = shift * timesteps / (1 + (shift - 1) * timesteps) - timesteps = timesteps * self.schedule.T - return timesteps - - @torch.no_grad() - def inference( - self, - noises: List[Tensor], - conditions: List[Tensor], - texts_pos: Union[List[str], List[Tensor], List[Tuple[Tensor]]], - texts_neg: Union[List[str], List[Tensor], List[Tuple[Tensor]]], - cfg_scale: Optional[float] = None, - dit_offload: bool = False, - ) -> List[Tensor]: - assert len(noises) == len(conditions) == len(texts_pos) == len(texts_neg) - batch_size = len(noises) - - # Return if empty. - if batch_size == 0: - return [] - - # Set cfg scale - if cfg_scale is None: - cfg_scale = self.config.diffusion.cfg.scale - - # Text embeddings. - assert type(texts_pos[0]) is type(texts_neg[0]) - if isinstance(texts_pos[0], str): - text_pos_embeds, text_pos_shapes = self.text_encode(texts_pos) - text_neg_embeds, text_neg_shapes = self.text_encode(texts_neg) - elif isinstance(texts_pos[0], tuple): - text_pos_embeds, text_pos_shapes = [], [] - text_neg_embeds, text_neg_shapes = [], [] - for pos in zip(*texts_pos): - emb, shape = na.flatten(pos) - text_pos_embeds.append(emb) - text_pos_shapes.append(shape) - for neg in zip(*texts_neg): - emb, shape = na.flatten(neg) - text_neg_embeds.append(emb) - text_neg_shapes.append(shape) - else: - text_pos_embeds, text_pos_shapes = na.flatten(texts_pos) - text_neg_embeds, text_neg_shapes = na.flatten(texts_neg) - - # Flatten. - latents, latents_shapes = na.flatten(noises) - latents_cond, _ = na.flatten(conditions) - - # Enter eval mode. - was_training = self.dit.training - self.dit.eval() - - # Sampling. - latents = self.sampler.sample( - x=latents, - f=lambda args: classifier_free_guidance_dispatcher( - pos=lambda: self.dit( - vid=torch.cat([args.x_t, latents_cond], dim=-1), - txt=text_pos_embeds, - vid_shape=latents_shapes, - txt_shape=text_pos_shapes, - timestep=args.t.repeat(batch_size), - ).vid_sample, - neg=lambda: self.dit( - vid=torch.cat([args.x_t, latents_cond], dim=-1), - txt=text_neg_embeds, - vid_shape=latents_shapes, - txt_shape=text_neg_shapes, - timestep=args.t.repeat(batch_size), - ).vid_sample, - scale=( - cfg_scale - if (args.i + 1) / len(self.sampler.timesteps) - <= self.config.diffusion.cfg.get("partial", 1) - else 1.0 - ), - rescale=self.config.diffusion.cfg.rescale, - ), - ) - - # Exit eval mode. - self.dit.train(was_training) - - # Unflatten. 
- latents = na.unflatten(latents, latents_shapes) - - if dit_offload: - self.dit.to("cpu") - - # Vae decode. - self.vae.to(get_device()) - samples = self.vae_decode(latents) - - if dit_offload: - self.dit.to(get_device()) - return samples \ No newline at end of file diff --git a/projects_x/video_diffusion_sr/utils.py b/projects_x/video_diffusion_sr/utils.py deleted file mode 100644 index ae48d2662d7ed8630579cb52b97fc0e256335a5a..0000000000000000000000000000000000000000 --- a/projects_x/video_diffusion_sr/utils.py +++ /dev/null @@ -1,368 +0,0 @@ -# // Copyright (c) 2025 Bytedance Ltd. and/or its affiliates -# // -# // Licensed under the Apache License, Version 2.0 (the "License"); -# // you may not use this file except in compliance with the License. -# // You may obtain a copy of the License at -# // -# // http://www.apache.org/licenses/LICENSE-2.0 -# // -# // Unless required by applicable law or agreed to in writing, software -# // distributed under the License is distributed on an "AS IS" BASIS, -# // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# // See the License for the specific language governing permissions and -# // limitations under the License. - -import os -import random -import threading -from abc import ABC -from concurrent.futures import ThreadPoolExecutor, as_completed -from dataclasses import dataclass -from functools import partial -from itertools import chain -from typing import Any, Dict, List, Optional, Tuple, Union -import pyarrow as pa -import pyarrow.parquet as pq -from omegaconf import DictConfig - -from common.distributed import get_global_rank, get_world_size -from common.fs import copy, exists, listdir, mkdir, remove -from common.partition import partition_by_groups -from common.persistence.utils import get_local_path -from data.common.parquet_sampler import ( - IdentityParquetSampler, - ParquetSampler, - create_parquet_sampler, -) -from data.common.utils import filter_parquets, get_parquet_metadata - - -# Function to save a Parquet file and copy it to a target path -def save_and_copy( - pa_table, - local_path: str, - target_path: str, - row_group_size: int, - executor: ThreadPoolExecutor, - do_async: bool = False, - futures: List[Tuple[threading.Thread, str]] = [], -): - # Function to handle completion of the future - def _make_on_complete(local_path): - def _on_complete(future): - target_path = future.result() - remove(local_path) - # del future - print(f"Target path saved: {target_path}") - - return _on_complete - - # Function to write Parquet table and copy it - def _fn(pa_table, local_path, target_path, row_group_size): - pq.write_table( - pa_table, - local_path, - row_group_size=row_group_size, - ) - mkdir(os.path.dirname(target_path)) - copy(local_path, target_path) - return target_path - - # Submit the task to the executor - future = executor.submit(_fn, pa_table, local_path, target_path, row_group_size) - future.add_done_callback(_make_on_complete(local_path)) - futures.append(future) - - # If not asynchronous, wait for all futures to complete - if not do_async: - for future in as_completed(futures): - try: - future.result() - except Exception as exc: - print(f"Generated an exception: {exc}") - executor.shutdown(wait=True) - - -@dataclass -class FileListOutput: - existing_files: List[str] - source_files: List[Any] - target_files: List[str] - - -@dataclass -class PersistedParquet: - path: str - - # Method to save the Parquet file - def save( - self, - row_group_size: int, - executor: ThreadPoolExecutor, - pa_table: 
Optional[pa.Table] = None, - data_dict: Optional[Dict[str, List[Union[str, bytes]]]] = None, - is_last_file=False, - futures: List[threading.Thread] = [], - ): - assert (pa_table is None) != (data_dict is None) - local_path = get_local_path(self.path) - if not pa_table: - schema_dict = self.generate_schema_from_dict(data_dict) - pa_table = pa.Table.from_pydict(data_dict, schema=schema_dict) - save_and_copy( - pa_table, - local_path=local_path, - target_path=self.path, - row_group_size=row_group_size, - executor=executor, - do_async=not is_last_file, - futures=futures, - ) - - # Method to generate schema from a dictionary - def generate_schema_from_dict( - self, - data_dict: Dict[str, List[Union[str, bytes]]], - ): - schema_dict = {} - for key, value in data_dict.items(): - if isinstance(value[0], str): - schema_dict[key] = pa.string() - elif isinstance(value[0], bytes): - schema_dict[key] = pa.binary() - else: - raise ValueError(f"Unsupported data type for key '{key}': {type(value)}") - return pa.schema(schema_dict) - - -# Base class for managing Parquet files -class ParquetManager(ABC): - """ - Base class for the DumpingManager and RepackingManager. - """ - - def __init__( - self, - task: Optional[DictConfig] = None, - target_dir: str = ".", - ): - self.task = task - self.target_dir = target_dir.rstrip("/") - self.executor = ThreadPoolExecutor(max_workers=4) - self.futures = [] - - # Method to get list of Parquet files from source path - def get_parquet_files( - self, - source_path: str, - parquet_sampler: ParquetSampler = IdentityParquetSampler(), - path_mode: str = "dir", - ): - - # Helper function to flatten nested lists - def _flatten(paths): - if isinstance(paths, list): - if any(isinstance(i, list) for i in paths): - return list(chain(*paths)) - else: - return paths - else: - return [paths] - - file_paths = _flatten(source_path) - if path_mode == "dir": - file_paths = map(listdir, file_paths) - if isinstance(parquet_sampler.size, float): - file_paths = map(filter_parquets, file_paths) - file_paths = map(parquet_sampler, file_paths) - file_paths = list(chain(*file_paths)) - else: - file_paths = chain(*file_paths) - file_paths = parquet_sampler(filter_parquets(file_paths)) - - return file_paths - - # Method to save a Parquet file - def save_parquet( - self, - *, - file_name: str, - row_group_size: int, - pa_table: Optional[pa.Table] = None, - data_dict: Optional[Dict[str, List[Union[str, bytes]]]] = None, - override: bool = True, - is_last_file: bool = False, - ): - - persist = self._get_parquet(file_name) - if override or not exists(persist.path): - persist.save( - pa_table=pa_table, - data_dict=data_dict, - executor=self.executor, - row_group_size=row_group_size, - is_last_file=is_last_file, - futures=self.futures, - ) - - # Method to get a PersistedParquet object - def _get_parquet(self, file_name: str) -> PersistedParquet: - return PersistedParquet(file_name) - - -# Class to manage dumping of Parquet files -class DumpingManager(ParquetManager): - """ - Dumping manager handles parquet saving and resuming. 
- """ - - def __init__( - self, - task: DictConfig, - target_dir: str, - ): - super().__init__(task=task, target_dir=target_dir) - - # Method to generate saving path - def generate_saving_path(self, file_path: str, rsplit: int): - part_list = file_path.rsplit("/", rsplit) - result_folder = "/".join( - [self.target_dir] + [f"epoch_{self.task.epoch}"] + part_list[-rsplit:-1] - ) - result_file = "/".join([result_folder, part_list[-1]]) - return result_folder, result_file - - # Method to configure task paths - def configure_task_path(self, source_path: str, rsplit: int, path_mode: str = "dir"): - - file_paths = self.get_parquet_files( - source_path=source_path, - path_mode=path_mode, - ) - - # Shuffle file paths - random.Random(0).shuffle(file_paths) - - # Partition the file paths based on task configuration - full_source_files = partition_by_groups(file_paths, self.task.total_count)[self.task.index] - full_source_files = partition_by_groups(full_source_files, get_world_size())[ - get_global_rank() - ] - - if not full_source_files: - return FileListOutput([], [], []) - - generate_saving_path = partial(self.generate_saving_path, rsplit=rsplit) - full_paths = map(generate_saving_path, full_source_files) - full_target_folders, full_target_files = map(list, zip(*full_paths)) - full_target_folders = set(full_target_folders) - - existing_file_paths = map( - lambda folder: listdir(folder) if exists(folder) else [], full_target_folders - ) - existing_file_paths = chain(*existing_file_paths) - self.existing_files = list( - filter( - lambda path: path.endswith(".parquet") and path in full_target_files, - existing_file_paths, - ) - ) - - filtered_pairs = list( - filter( - lambda pair: pair[1] not in self.existing_files, - zip(full_source_files, full_target_files), - ) - ) - if filtered_pairs: - filtered_source_files, filtered_target_files = map(list, zip(*filtered_pairs)) - else: - filtered_source_files, filtered_target_files = [], [] - - # Skip existing file paths if specified - skip_exists = self.task.skip_exists - self.source_files = filtered_source_files if skip_exists else full_source_files - self.target_files = filtered_target_files if skip_exists else full_target_files - - return FileListOutput(self.existing_files, self.source_files, self.target_files) - - -class RepackingManager(ParquetManager): - """ - Repacking manager handles parquet spliting and saving. 
- """ - - def __init__( - self, - task: DictConfig, - target_dir: str, - repackaging: DictConfig, - ): - super().__init__(task=task, target_dir=target_dir) - self.repackaging = repackaging - - # Configure the task paths for repacking - def configure_task_path( - self, - source_path: str, - parquet_sampler: Optional[DictConfig] = None, - path_mode: str = "dir", - ): - - parquet_sampler = create_parquet_sampler(config=parquet_sampler) - file_paths = self.get_parquet_files( - source_path=source_path, - parquet_sampler=parquet_sampler, - path_mode=path_mode, - ) - - random.Random(0).shuffle(file_paths) - target_dir = self.target_dir - size = abs(parquet_sampler.size) - - if self.task: - # Partition the file paths based on task configuration - file_paths = partition_by_groups(file_paths, self.task.total_count)[self.task.index] - target_dir = os.path.join(target_dir, f"{self.task.total_count}_{self.task.index}") - - if size > 1: - size = len( - partition_by_groups(range(size), self.task.total_count)[self.task.index] - ) - - # Get metadata for each Parquet file - metadatas = get_parquet_metadata(file_paths, self.repackaging.num_processes) - - # Create a list of (file_path, row) tuples for each row in the files - target_items = [ - (file_path, row) - for file_path, metadata in zip(file_paths, metadatas) - for row in range(metadata.num_rows) - ] - - # Shuffle the target items - random.Random(0).shuffle(target_items) - - if size > 1: - target_items = target_items[:size] - - # Partition the items into groups for each target file - items_per_file = partition_by_groups(target_items, self.repackaging.num_files) - - # Generate target file paths - target_files = [ - os.path.join(target_dir, f"{str(i).zfill(5)}.parquet") - for i in range(self.repackaging.num_files) - ] - - existing_file_paths = listdir(target_dir) if exists(target_dir) else [] - self.existing_files = list( - filter( - lambda path: path.endswith(".parquet"), - existing_file_paths, - ) - ) - self.source_files = items_per_file - self.target_files = target_files - - return FileListOutput(self.existing_files, self.source_files, self.target_files)
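
For reference, the resolution-dependent timestep shift used by `VideoDiffusionInfer.timestep_transform` (in `infer.py` above) reduces to a small piece of arithmetic. A self-contained sketch with the anchor points copied from that method; the 1280x720, 145-frame example is illustrative:

# Linear interpolation between two (pixel count, shift) anchor points,
# as in VideoDiffusionInfer.timestep_transform.
def get_lin_function(x1, y1, x2, y2):
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return lambda x: m * x + b

img_shift_fn = get_lin_function(x1=256 * 256, y1=1.0, x2=1024 * 1024, y2=3.2)
vid_shift_fn = get_lin_function(x1=256 * 256 * 37, y1=1.0, x2=1280 * 720 * 145, y2=5.0)

# A 145-frame 1280x720 video sits exactly on the upper anchor, so shift == 5.0.
shift = vid_shift_fn(1280 * 720 * 145)

# A normalized timestep t in [0, 1] is warped towards 1 as the shift grows.
t = 0.5
t_shifted = shift * t / (1 + (shift - 1) * t)
print(shift, t_shifted)  # 5.0 0.8333...
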