Camshowrecordings/model/sam_samantha/5 May 2026
First, change into the model directory and inspect its contents:

cd model/sam_samantha/5
ls -l

Typical files you'll see include run_inference.py and config.yaml.

Open config.yaml to verify settings such as the device the model should run on.

Then run the model on a single frame:

python run_inference.py path/to/your/frame.jpg

If you have a GPU, make sure torch.cuda.is_available() returns True. The script will automatically use the device defined in config.yaml.

7️⃣ Using the Model on a Whole Video

Below is a compact example that reads a video file, runs the model on every N‑th frame, and writes an output video with the segmentation overlay.

import argparse
import cv2
import torch
from pathlib import Path

from run_inference import infer, model, preprocess, cfg  # reuse the functions above


def process_video(in_path: Path, out_path: Path, stride: int = 5):
    cap = cv2.VideoCapture(str(in_path))
    if not cap.isOpened():
        raise RuntimeError(f"Cannot open {in_path}")

    # Use the device declared in config.yaml (falls back to CPU)
    device = torch.device(cfg.get("device", "cpu"))
    model.to(device)

    # Mirror the input's geometry and frame rate in the output file
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(str(out_path), cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % stride == 0:
            mask = infer(frame)  # binary mask (0/255)
            overlay = cv2.addWeighted(frame, 0.7, cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR), 0.3, 0)
            out.write(overlay)
        else:
            out.write(frame)  # write the raw frame for non-processed indices
        frame_idx += 1

    cap.release()
    out.release()
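The every‑N‑th‑frame selection is plain modular arithmetic. As a quick sanity check (the helper name processed_indices is my own, not part of the repo), a 12‑frame clip with stride=5 gets a model pass only on frames 0, 5 and 10:

```python
def processed_indices(n_frames: int, stride: int = 5) -> list:
    # Indices satisfying frame_idx % stride == 0 receive a model pass;
    # every other frame is written to the output unchanged.
    return [i for i in range(n_frames) if i % stride == 0]

print(processed_indices(12, 5))  # → [0, 5, 10]
```

With stride=1 every frame is processed, so stride trades overlay smoothness for inference cost.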

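The argparse import in the example suggests wiring process_video to a small command‑line entry point. Here is a minimal sketch; the flag names and defaults are my own assumption, not taken from the repo:

```python
import argparse
from pathlib import Path


def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="Overlay segmentation masks on a video")
    p.add_argument("input", type=Path, help="source video file")
    p.add_argument("output", type=Path, help="destination video file")
    p.add_argument("--stride", type=int, default=5, help="run the model on every N-th frame")
    return p


# Parse a sample command line; in the real script you would then call
# process_video(args.input, args.output, args.stride).
args = build_parser().parse_args(["in.mp4", "out.mp4", "--stride", "3"])
print(args.stride)  # → 3
```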