model.create_training
traindata_aligned
def traindata_aligned(config: dict,
                      sessions: List[str] | None = None,
                      test_fraction: float = 0.1,
                      read_from_variable: str = "position_processed",
                      split_mode: Literal["mode_1", "mode_2"] = "mode_2",
                      keypoints_to_include: List[str] | None = None,
                      keypoints_to_exclude: List[str] | None = None) -> None
Creates the training dataset for aligned data and saves NumPy arrays with the train/test split to the project folder.
Parameters
- config (dict): Configuration parameters dictionary.
- sessions (List[str], optional): List of session names. If None, all sessions will be used. Defaults to None.
- test_fraction (float, optional): Fraction of data to use as test data. Defaults to 0.1.
- read_from_variable (str, optional): Variable name to read from the processed data. Defaults to "position_processed".
- split_mode (Literal["mode_1", "mode_2"], optional): Mode for splitting data into train/test sets (see the sketch below). Defaults to "mode_2".
  - mode_1: Original mode; takes the initial test_fraction portion of the combined data for testing and the rest for training.
  - mode_2: Takes random continuous chunks from each session, proportional to test_fraction, for testing and uses the remaining parts for training.
Returns
None
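The two split modes differ only in how frames are assigned to the test set. Below is a minimal standalone NumPy sketch (not VAME code; the session names, sizes, and seed are illustrative) of the behavior described above: mode_1 reserves the first test_fraction of the combined frames, while mode_2 cuts one random continuous chunk out of each session.

```python
import numpy as np

rng = np.random.default_rng(0)
test_fraction = 0.1

# Illustrative per-session data: (num_features, num_frames) arrays.
sessions = {name: rng.normal(size=(4, n)) for name, n in [("s1", 1000), ("s2", 800)]}

# mode_1: concatenate all sessions, then take the first test_fraction of frames.
combined = np.concatenate(list(sessions.values()), axis=1)
n_test = int(combined.shape[1] * test_fraction)
test_m1, train_m1 = combined[:, :n_test], combined[:, n_test:]

# mode_2: cut one random continuous chunk (proportional to test_fraction)
# out of each session; the remainder of every session goes to training.
test_parts, train_parts = [], []
for data in sessions.values():
    n_frames = data.shape[1]
    chunk = int(n_frames * test_fraction)
    start = rng.integers(0, n_frames - chunk + 1)
    test_parts.append(data[:, start:start + chunk])
    train_parts.append(np.concatenate([data[:, :start], data[:, start + chunk:]], axis=1))
test_m2 = np.concatenate(test_parts, axis=1)
train_m2 = np.concatenate(train_parts, axis=1)

print(test_m1.shape, train_m1.shape)  # (4, 180) (4, 1620)
print(test_m2.shape, train_m2.shape)  # (4, 180) (4, 1620)
```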
create_trainset
@save_state(model=CreateTrainsetFunctionSchema)
def create_trainset(config: dict,
                    test_fraction: float = 0.1,
                    read_from_variable: str = "position_processed",
                    split_mode: Literal["mode_1", "mode_2"] = "mode_2",
                    keypoints_to_include: List[str] | None = None,
                    keypoints_to_exclude: List[str] | None = None,
                    save_logs: bool = True) -> None
Creates training and test datasets for the VAME model. Fills in the values in the "create_trainset" key of the states.json file. Creates the training dataset for VAME at:
- project_name/
  - data/
    - train/
      - test_seq.npy
      - train_seq.npy
      - metadata.json
The produced test_seq.npy contains the combined data with shape (num_features, num_video_frames * test_fraction), and train_seq.npy contains the combined data with shape (num_features, num_video_frames * (1 - test_fraction)). The metadata.json file records feature provenance (which keypoints and coordinates correspond to each feature in the numpy arrays), along with detailed split information for full reproducibility.
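A minimal sketch of inspecting the produced files, assuming the directory layout shown above; the project_name path is a placeholder for the actual project folder, and the exact metadata.json keys depend on the VAME version.

```python
import json
from pathlib import Path

import numpy as np

train_dir = Path("project_name") / "data" / "train"  # placeholder project path

test_seq = np.load(train_dir / "test_seq.npy")
train_seq = np.load(train_dir / "train_seq.npy")
print(test_seq.shape)   # (num_features, n_test_frames)
print(train_seq.shape)  # (num_features, n_train_frames)

with open(train_dir / "metadata.json") as f:
    metadata = json.load(f)
# Feature provenance and split details; exact keys depend on the VAME version.
print(sorted(metadata.keys()))
```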
Parameters
- config (dict): Configuration parameters dictionary.
- test_fraction (float, optional): Fraction of data to use as test data. Defaults to 0.1.
- read_from_variable (str, optional): Variable name to read from the processed data. Defaults to "position_processed".
- split_mode (Literal["mode_1", "mode_2"], optional): Mode for splitting data into train/test sets. Defaults to "mode_2".
  - mode_1: Original mode; takes the initial test_fraction portion of the combined data for testing and the rest for training.
  - mode_2: Takes random continuous chunks from each session, proportional to test_fraction, for testing and uses the remaining parts for training.
- save_logs (bool, optional): Whether to save logs. Defaults to True.
Returns
None
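A usage sketch, assuming the module is importable as vame.model.create_training and that config is the project's configuration dictionary created during project setup; the excluded keypoint name is hypothetical.

```python
from vame.model.create_training import create_trainset

# `config` is the project configuration dictionary produced during project setup.
create_trainset(
    config=config,
    test_fraction=0.1,
    read_from_variable="position_processed",
    split_mode="mode_2",
    keypoints_to_exclude=["tail_tip"],  # hypothetical keypoint name
    save_logs=True,
)
```

With the defaults shown, roughly 10% of each session's frames end up in test_seq.npy and the remainder in train_seq.npy under project_name/data/train/.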