# FBX Handler

## Load file:

```python
from pathlib import Path

# Path to the file to load.
input_file = Path('/path/to/file.fbx')
# Load the file into the container class.
container = FBXContainer(input_file)
```
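
`FBXContainer` takes a `pathlib.Path`, so loading several takes is just a loop over files. A minimal sketch, assuming every FBX file sits in one folder (the folder path is illustrative):

```python
from pathlib import Path

# Hypothetical batch loading: one FBXContainer per FBX file in a folder.
for fbx_path in sorted(Path('/path/to/folder').glob('*.fbx')):
    container = FBXContainer(fbx_path)
    # ...run the preprocessing / inference steps below on each container.
```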

## Preprocess data:

```python
# Initialize world transforms.
container.init_world_transforms(r=...)
# Extract raw translation data for training.
train_raw_data = container.extract_training_translations()
# Extract raw translation data for inference.
test_raw_data = container.extract_inf_translations()
```
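
The testing workflow below scales translations with `scale_translations` before prediction. How that scaling is implemented is specific to this package; purely as a hypothetical illustration of such a normalization step:

```python
import numpy as np

def scale_translations_example(t):
    """Hypothetical min-max scaling of translations to [0, 1] per axis.

    Illustration only; the package's own scale_translations may normalize
    translations differently.
    """
    t = np.asarray(t, dtype=np.float32)
    t_min = t.min(axis=0, keepdims=True)
    t_max = t.max(axis=0, keepdims=True)
    return (t - t_min) / np.maximum(t_max - t_min, 1e-8)
```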

## Training workflow:

```python
# Load the file.
container = FBXContainer(input_file)
# Get np.arrays with all valid translation values, split for training.
actors_train, markers_train, t_train, _, _ = container.get_split_transforms(mode='train')
# Convert to dataset...
...
```
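
How the split arrays become a dataset depends on the training framework you use; as a rough sketch in plain NumPy (the stacking itself is illustrative, the variable names reuse those above):

```python
import numpy as np

# Hypothetical assembly of model inputs/targets from the split transforms above.
X = np.asarray(t_train, dtype=np.float32)   # translation features
y_actors = np.asarray(actors_train)         # actor labels
y_markers = np.asarray(markers_train)       # marker labels
assert len(X) == len(y_actors) == len(y_markers)
```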

## Testing workflow:

```python
# Load the file.
container = FBXContainer(input_file)
# Get the split original data (no transforms applied).
actors_test, markers_test, t_test, r_test, s_test = container.get_split_transforms(mode='test')
# Predict the new actor and marker classes.
actors_pred, markers_pred = Labeler(scale_translations(t_test))
# Merge the new labels with their original translations.
merged = merge_tdc(actors_pred, markers_pred, t_test, r_test, s_test)
# Convert the full cloud into a dict structured for easy keyframing.
new_dict = array_to_dict(merged)
# Replace the old translation keyframes with the new values.
container.replace_keyframes_for_all_actors(new_dict)
# Export the file.
container.export_fbx(Path('/path/to/outputfile.fbx'))
```
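
If you need to relabel many files, the testing steps can be folded into a small helper. This is only a sketch that reuses the calls shown above; `Labeler`, `scale_translations`, `merge_tdc`, and `array_to_dict` are assumed to be importable from wherever they live in your setup:

```python
from pathlib import Path

def relabel_fbx(input_file: Path, output_file: Path) -> None:
    """Sketch of an end-to-end inference pass over a single FBX file."""
    container = FBXContainer(input_file)
    # Split original data (no transforms applied).
    actors, markers, t, r, s = container.get_split_transforms(mode='test')
    # Predict labels from scaled translations.
    actors_pred, markers_pred = Labeler(scale_translations(t))
    # Merge predictions with the original transforms and write them back.
    merged = merge_tdc(actors_pred, markers_pred, t, r, s)
    container.replace_keyframes_for_all_actors(array_to_dict(merged))
    container.export_fbx(output_file)
```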