
vame.util.align_egocentrical

Variational Animal Motion Embedding 0.1 Toolbox © K. Luxem & J. Kürsch & P. Bauer, Department of Cellular Neuroscience Leibniz Institute for Neurobiology, Magdeburg, Germany

https://github.com/LINCellularNeuroscience/VAME Licensed under GNU General Public License v3.0

align_mouse

def align_mouse(
    path_to_file: str,
    filename: str,
    video_format: str,
    crop_size: Tuple[int, int],
    pose_list: List[np.ndarray],
    pose_ref_index: Tuple[int, int],
    confidence: float,
    pose_flip_ref: Tuple[int, int],
    bg: np.ndarray,
    frame_count: int,
    use_video: bool = True,
    tqdm_stream: TqdmToLogger = None
) -> Tuple[List[np.ndarray], List[List[np.ndarray]], np.ndarray]

Align the mouse in the video frames.

Arguments:

  • path_to_file str - Path to the file directory.
  • filename str - Name of the video file without the format.
  • video_format str - Format of the video file.
  • crop_size Tuple[int, int] - Size to crop the video frames.
  • pose_list List[np.ndarray] - List of pose coordinates.
  • pose_ref_index Tuple[int, int] - Pose reference indices.
  • confidence float - Pose confidence threshold.
  • pose_flip_ref Tuple[int, int] - Reference indices for flipping.
  • bg np.ndarray - Background image.
  • frame_count int - Number of frames to align.
  • use_video bool, optional - Whether to crop the aligned frames from the video as well, or align the DLC points only. Defaults to True.

Returns:

Tuple[List[np.ndarray], List[List[np.ndarray]], np.ndarray]: List of aligned images, list of aligned DLC points, and time series data.
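
A minimal usage sketch for calling align_mouse directly (in the normal workflow it is invoked by alignment). The path, filename, number of keypoints, and background array below are placeholders, and each pose_list entry is assumed to hold per-frame x, y, and likelihood values:

import numpy as np
from vame.util.align_egocentrical import align_mouse

frame_count = 100
# Placeholder pose data: one (frame_count, 3) array of x, y, likelihood per keypoint.
pose_list = [np.zeros((frame_count, 3)) for _ in range(8)]
# Placeholder background image, e.g. estimated from the video.
bg = np.zeros((480, 640))

frames, points, time_series = align_mouse(
    path_to_file='/path/to/project/videos',  # placeholder path
    filename='video-1',                      # placeholder filename
    video_format='.mp4',
    crop_size=(300, 300),
    pose_list=pose_list,
    pose_ref_index=(5, 6),
    confidence=0.9,
    pose_flip_ref=(5, 6),
    bg=bg,
    frame_count=frame_count,
    use_video=False,
)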

play_aligned_video

def play_aligned_video(
    a: List[np.ndarray],
    n: List[List[np.ndarray]],
    frame_count: int
) -> None

Play the aligned video.

Arguments:

  • a List[np.ndarray] - List of aligned images.
  • n List[List[np.ndarray]] - List of aligned DLC points.
  • frame_count int - Number of frames in the video.
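
A short sketch for visually checking the result, assuming frames and points were produced by a prior align_mouse call with use_video=True (playback opens an OpenCV window):

from vame.util.align_egocentrical import play_aligned_video

# frames: list of aligned images, points: list of aligned DLC points (see align_mouse above).
play_aligned_video(frames, points, frame_count=len(frames))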

alignment

def alignment(
    path_to_file: str,
    filename: str,
    pose_ref_index: List[int],
    video_format: str,
    crop_size: Tuple[int, int],
    confidence: float,
    pose_estimation_filetype: PoseEstimationFiletype,
    path_to_pose_nwb_series_data: str = None,
    use_video: bool = False,
    check_video: bool = False,
    tqdm_stream: TqdmToLogger = None
) -> Tuple[np.ndarray, List[np.ndarray]]

Perform alignment of egocentric data.

Arguments:

  • path_to_file str - Path to the file directory.
  • filename str - Name of the video file without the format.
  • pose_ref_index List[int] - Pose reference indices.
  • video_format str - Format of the video file.
  • crop_size Tuple[int, int] - Size to crop the video frames.
  • confidence float - Pose confidence threshold.
  • pose_estimation_filetype PoseEstimationFiletype - Filetype of the pose estimation data (e.g. CSV or NWB).
  • path_to_pose_nwb_series_data str, optional - Path to the pose series data inside the NWB file. Defaults to None.
  • use_video bool, optional - Whether to use video for alignment. Defaults to False.
  • check_video bool, optional - Whether to check the aligned video. Defaults to False.

Returns:

Tuple[np.ndarray, List[np.ndarray]]: Aligned time series data and list of aligned frames.
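
A minimal sketch of calling alignment directly; the paths are placeholders, and the import location and member name of PoseEstimationFiletype are assumptions (in the normal workflow this function is called by egocentric_alignment):

from vame.util.align_egocentrical import alignment
from vame.schemas.project import PoseEstimationFiletype  # assumed import path

time_series, frames = alignment(
    path_to_file='/path/to/project/videos',  # placeholder path
    filename='video-1',                      # placeholder filename
    pose_ref_index=[5, 6],
    video_format='.mp4',
    crop_size=(300, 300),
    confidence=0.9,
    pose_estimation_filetype=PoseEstimationFiletype.csv,  # assumed enum member
    use_video=False,
    check_video=False,
)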

egocentric_alignment

@save_state(model=EgocentricAlignmentFunctionSchema)
def egocentric_alignment(
    config: str,
    pose_ref_index: list = [5, 6],
    crop_size: tuple = (300, 300),
    use_video: bool = False,
    video_format: str = '.mp4',
    check_video: bool = False,
    save_logs: bool = False
) -> None

Aligns egocentric data for VAME training.

Arguments:

  • config str - Path to the project config file.
  • pose_ref_index list, optional - Pose reference indices used for alignment. Defaults to [5, 6].
  • crop_size tuple, optional - Size to crop the video. Defaults to (300, 300).
  • use_video bool, optional - Whether to use the video for the alignment. Defaults to False.
  • video_format str, optional - Video format, can be .mp4 or .avi. Defaults to '.mp4'.
  • check_video bool, optional - Whether to check the aligned video. Defaults to False.

Raises:

  • ValueError - If the config.yaml indicates that the data is not egocentric.
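
A typical call from a VAME workflow, assuming egocentric_alignment is exposed at the package level; the config path is a placeholder and pose_ref_index should point to the two keypoints (e.g. snout and tail base) that define the egocentric axis:

import vame

config = '/path/to/your/project/config.yaml'  # placeholder path
vame.egocentric_alignment(
    config,
    pose_ref_index=[5, 6],
    crop_size=(300, 300),
    use_video=False,
    check_video=False,
)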